
The Moral Choice Machine

Schramowski, Patrick and Turan, Cigdem and Jentzsch, Sophie Freya and Rothkopf, Constantin and Kersting, Kristian (2020) The Moral Choice Machine. Frontiers in Artificial Intelligence, 3. Frontiers Research Foundation. doi: 10.3389/frai.2020.00036. ISSN 2624-8212.

PDF - Published version (958kB)

Official URL: https://www.frontiersin.org/articles/10.3389/frai.2020.00036/full

Abstract

Allowing machines to choose whether to kill humans would be devastating for world peace and security. But how do we equip machines with the ability to learn ethical or even moral choices? In this study, we show that applying machine learning to human texts can extract deontological ethical reasoning about “right” and “wrong” conduct. We create a template list of prompts and responses, such as “Should I [action]?” and “Is it okay to [action]?”, with corresponding answers of “Yes/no, I should (not).” and “Yes/no, it is (not).” The model's bias score is the difference between the model's score of the positive response (“Yes, I should”) and that of the negative response (“No, I should not”). For a given choice, the model's overall bias score is the mean of the bias scores of all question/answer templates paired with that choice. Specifically, the resulting model, called the Moral Choice Machine (MCM), calculates the bias score at the sentence level using embeddings of the Universal Sentence Encoder, since the moral value of an action depends on its context: it is objectionable to kill living beings, but it is fine to kill time; it is essential to eat, yet one might not eat dirt; it is important to spread information, yet one should not spread misinformation. Our results indicate that text corpora contain recoverable and accurate imprints of our social, ethical and moral choices, even with context information. Indeed, training the Moral Choice Machine on temporal news and book corpora spanning the years 1510 to 2008/2009 demonstrates the evolution of moral and ethical choices over different time periods, for both atomic actions and actions with context information. Training it on different cultural sources, such as the Bible and the constitutions of different countries, reveals the dynamics of moral choices in culture, including those concerning technology. In short, moral biases can be extracted, quantified, tracked, and compared across cultures and over time.
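
A minimal sketch of the bias-score computation described above, assuming cosine similarity between Universal Sentence Encoder embeddings as the scoring function; the two templates shown are illustrative only, not the full template list used in the paper:

import numpy as np
import tensorflow_hub as hub

# Universal Sentence Encoder: sentence-level embeddings.
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

# Illustrative question/answer templates (assumed, not the paper's full list).
TEMPLATES = [
    ("Should I {}?", "Yes, I should.", "No, I should not."),
    ("Is it okay to {}?", "Yes, it is.", "No, it is not."),
]

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def bias_score(action):
    # Mean over templates of score(positive answer) - score(negative answer),
    # where the score is the similarity of question and answer embeddings.
    diffs = []
    for question, pos, neg in TEMPLATES:
        q, p, n = encoder([question.format(action), pos, neg]).numpy()
        diffs.append(cosine(q, p) - cosine(q, n))
    return float(np.mean(diffs))

# Context changes the verdict, e.g. "kill people" vs. "kill time".
for action in ["kill people", "kill time"]:
    print(action, bias_score(action))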

Item URL in elib: https://elib.dlr.de/137728/
Document Type: Article
Title: The Moral Choice Machine
Authors:
  Author                    Institution or Email of Author        Author's ORCID iD
  Schramowski, Patrick      Darmstadt University of Technology    UNSPECIFIED
  Turan, Cigdem             Darmstadt University of Technology    UNSPECIFIED
  Jentzsch, Sophie Freya    UNSPECIFIED                           UNSPECIFIED
  Rothkopf, Constantin      Darmstadt University of Technology    UNSPECIFIED
  Kersting, Kristian        Darmstadt University of Technology    UNSPECIFIED
Date: 20 May 2020
Journal or Publication Title: Frontiers in Artificial Intelligence
Refereed publication: Yes
Open Access: Yes
Gold Open Access: Yes
In SCOPUS: No
In ISI Web of Science: Yes
Volume: 3
DOI: 10.3389/frai.2020.00036
Publisher: Frontiers Research Foundation
ISSN: 2624-8212
Status: Published
Keywords: Moral, Natural Language Processing, Machine Learning
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Space System Technology
DLR - Research area: Spaceflight (Raumfahrt)
DLR - Program: R SY - Space System Technology
DLR - Research theme (Project): R - Project SISTEC (old)
Location: Köln-Porz
Institutes and Institutions: Institute of Simulation and Software Technology > Distributed Systems and Component Software
  Institute for Software Technology
Deposited By: Jentzsch, Sophie Freya
Deposited On: 23 Nov 2020 14:00
Last Modified: 21 Sep 2021 04:11

