If X is good by definition, then the question whether X is good is meaningless.
Act only according to that maxim whereby you can, at the same time, will that it should become a universal law without contradiction.
Example: Killing an animal is wrong if the animal has (sufficient) sentience and/or sapience.
Conditions for moral status:
If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.
If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.
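One way to make the two non-discrimination conditions concrete is a small sketch in which moral-status comparison is a predicate that, by construction, depends only on functionality and conscious experience and ignores substrate and origin. All names and field values here are illustrative, not from the source:

```python
from dataclasses import dataclass

@dataclass
class Being:
    functionality: str   # what the being can do
    experience: str      # its conscious experience
    substrate: str       # e.g. "carbon" vs. "silicon" -- ignored below
    origin: str          # e.g. "born" vs. "uploaded"  -- ignored below

def same_moral_status(a: Being, b: Being) -> bool:
    """Both principles at once: moral status may depend only on
    functionality and conscious experience, never on substrate or origin."""
    return (a.functionality, a.experience) == (b.functionality, b.experience)

human = Being("general intelligence", "human-like", "carbon", "born")
upload = Being("general intelligence", "human-like", "silicon", "uploaded")
print(same_moral_status(human, upload))   # → True
```

The point of the sketch is structural: the predicate cannot discriminate on substrate or ontogeny because those fields never enter the comparison.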
Uploading: A hypothetical future technology enables a human to be transferred from her original implementation in an organic brain onto a digital computer.
Suppose an upload could be sentient; then its subjective rate of time could differ from that of biological humans.
In cases where the duration of an experience is of basic normative significance, it is the experience's subjective duration that counts.
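The principle can be illustrated with trivial arithmetic. In this minimal sketch, a hypothetical speedup factor (the numbers are assumptions for illustration) converts objective wall-clock time into the subjective duration that, on this view, is what counts normatively:

```python
def subjective_duration(objective_seconds: float, speedup: float) -> float:
    """Subjective time experienced by a mind running `speedup` times
    faster than a biological baseline (speedup > 1 means more
    subjective time passes per objective second)."""
    return objective_seconds * speedup

# A hypothetical upload running 1000x faster than a biological brain
# experiences ~1000 subjective seconds per objective second:
print(subjective_duration(1.0, 1000.0))   # → 1000.0
```

So an hour of suffering inflicted on such an upload would, by the principle above, weigh like roughly a thousand hours, not one.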
AI inherits ethical requirements: when machines take over cognitive work with social dimensions from humans, they inherit the ethical requirements that governed that work.
[To] constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.
What if Archimedes of Syracuse had been able to create a long‐lasting AI with a fixed version of the moral code of Ancient Greece?
Deep Blue was not programmed in terms of individual chess moves.
Watson was not programmed in terms of individual trivia questions.
Such AIs are programmed in terms of optimizing a non-local criterion (winning at chess, answering trivia questions).
Good behavior as a non-local extrapolation of (future) consequences of generic behavior.
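The Deep Blue/Watson point can be sketched with a toy game. In this illustrative example (the game and all names are my own, not from the source), the agent is programmed only with the winning criterion; the individual moves emerge from searching over consequences, just as Deep Blue was never told specific chess moves:

```python
from functools import lru_cache

# Toy game: players alternately take 1-3 stones from a pile; whoever
# takes the last stone wins. The code below encodes only the *criterion*
# (taking the last stone) -- optimal moves fall out of the search.

@lru_cache(maxsize=None)
def best_move(stones: int) -> tuple[int, bool]:
    """Return (move, can_win) for the player about to move."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True        # taking the last stone wins outright
        if take < stones and not best_move(stones - take)[1]:
            return take, True        # leaves the opponent in a losing position
    return 1, False                  # every move loses against optimal play

print(best_move(10))   # → (2, True): taking 2 leaves a losing pile of 8
```

Nothing in the program lists good moves for any particular pile size; "take 2 from 10" is a non-local consequence of optimizing the win criterion, which is the sense in which good behavior is an extrapolation rather than a lookup.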
Require the AI to think like a human engineer who is concerned about ethics.
How do you build an AI which, when it executes, becomes more ethical than you?