Ethics & AI

Wouter Beek



  1. The moral status of AI
  2. Ethical requirements for AI
  3. Superintelligence

Part I: The moral status of AI

What is Ethics?

  • Theoretical Ethics
  • Practical Ethics
  • Societal Norms

Theoretical Ethics: Categorical Imperative

Act only according to that maxim whereby you can, at the same time, will that it should become a universal law without contradiction.

Absolute and unconditional requirements that must be obeyed in all circumstances.

Let's apply this...



Act only according to that maxim whereby you can, at the same time, will that it should become a universal law without contradiction.

Practical Ethics: To whom do these laws apply?

Example: Abortion is wrong if an embryo has (sufficient) sentience and/or sapience.

Conditions for moral status:

  • Sentience: The capacity for phenomenal experience or qualia.
  • Sapience: A set of capabilities associated with higher intelligence (e.g., self-awareness, being a reason-responsive agent).

Principles regarding moral status attribution

  • Substrate Non-Discrimination
  • Ontogeny Non-Discrimination
  • Subjective Rate-of-Time

Principle of Substrate Non-Discrimination

If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.

Principle of Ontogeny Non-Discrimination

If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.

Principle of Subjective Rate-of-Time

Uploading: A hypothetical future technology that would transfer a human mind from its original implementation in an organic brain onto a digital computer.

If such an upload were sentient and ran on faster (or slower) hardware, its subjective rate of time would differ from a human's.

In cases where the duration of an experience is of basic normative significance, it is the experience's subjective duration that counts.
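The principle can be illustrated with a toy calculation (the 1000× speedup is a hypothetical figure for illustration, not a claim about real uploads):

```python
def subjective_duration(objective_seconds: float, speedup: float) -> float:
    """Subjective time experienced by a mind running `speedup` times
    faster than an organic brain (speedup is a hypothetical figure)."""
    return objective_seconds * speedup

# One objective hour, as experienced by an upload running 1000x faster:
print(subjective_duration(3600, 1000))  # 3600000 subjective seconds
```

On the principle above, one objective hour of pain for such an upload would then carry the normative weight of roughly 1000 hours (about 42 days) of pain for a biological human.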

Societal norms

Societal norms are not founded in ethics; they are implicitly conditioned on empirical contingencies that technology can overturn.


  1. Moral agents have reproductive freedom.
  2. Society must step in to provide the basic needs of children in cases where their parents are unable or refusing to do so.
  3. An AI may reproduce itself with exponential speed.
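Taken together, these norms clash: an AI with reproductive freedom (1) that copies itself exponentially (3) would quickly outstrip any society's capacity to provide for the resulting "children" (2). A toy sketch, assuming a simple doubling model:

```python
def ai_population(doubling_steps: int) -> int:
    """Copies after `doubling_steps` rounds of self-replication,
    starting from a single AI (assumed simple doubling)."""
    return 2 ** doubling_steps

print(ai_population(10))  # 1024 copies
print(ai_population(30))  # over a billion copies
```

Any fixed per-step provisioning capacity grows at best linearly, so it is overwhelmed after a handful of doublings.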

Part II: Ethical requirements for AI

Why does AI have ethical requirements?

AI has ethical requirements through inheritance.

Machines take over from humans cognitive work that has social dimensions.

How would you specify ethical requirements for AI?

3 Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with Law 1.
  3. A robot must protect its own existence as long as such protection does not conflict with Laws 1 & 2.
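To see how static such laws are, note that they amount to a fixed priority check over predicates on actions (here `harms_human`, `disobeys_order`, and `endangers_self` are hypothetical placeholders; deciding them is the actual hard problem):

```python
def permitted(action, harms_human, disobeys_order, endangers_self):
    """Crude encoding of Asimov's Three Laws as a fixed priority order."""
    if harms_human(action):       # Law 1 overrides everything
        return False
    if disobeys_order(action):    # Law 2, subordinate to Law 1
        return False
    if endangers_self(action):    # Law 3, subordinate to Laws 1 and 2
        return False
    return True

# Toy predicate: any action mentioning "human" counts as harmful.
print(permitted("push human", lambda a: "human" in a,
                lambda a: False, lambda a: False))  # False
```

The encoding already breaks down when every available action disobeys some order (e.g., an order that conflicts with Law 1 forces inaction, which this check also forbids): one small illustration of why static rule lists fail.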

Static laws

What if Archimedes of Syracuse had been able to create a long‐lasting AI with a fixed version of the moral code of Ancient Greece?

AI behavior is specified non-locally

Deep Blue was not programmed in terms of individual chess moves.

Watson was not programmed in terms of individual trivia questions.

AIs are programmed in terms of the optimization of a non-local criterion (winning chess, answering trivia questions).

Thus... good behavior, too, ought to be specified non-locally: as an extrapolation of the (future) consequences of generic behavior, not as an enumeration of individual actions.
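A minimal sketch of non-local specification (the toy state, actions, and criterion are all assumptions): the program names an evaluation criterion, and behavior falls out of optimizing it rather than from a table of hand-written moves.

```python
def best_action(state, actions, evaluate):
    """Choose the action whose successor state scores highest
    under the (non-local) criterion `evaluate`."""
    return max(actions, key=lambda name: evaluate(actions[name](state)))

# Toy domain: states are numbers, the criterion rewards closeness to 10.
actions = {"inc": lambda s: s + 1, "dec": lambda s: s - 1, "double": lambda s: s * 2}
print(best_action(6, actions, evaluate=lambda s: -abs(s - 10)))  # "double"
```

Deep Blue's position-evaluation function plays the role of `evaluate`: no individual chess move appears anywhere in the program text.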

Ethical requirements specification for AIs

Require the AI to think like a human engineer concerned about ethics, not just a simple product of ethical engineering.

An AI programmed by Archimedes, with no more moral expertise than Archimedes had, should still be able to recognize our own civilization's ethics as moral progress over those of Ancient Greece.

Part III: Superintelligence


  • Weak superintelligence: [Quantitative] A human-like mind running at much greater processing speed.
  • Strong superintelligence: [Qualitative] A mind that is not just faster, but qualitatively smarter than any human.

Analogies for superintelligence

  • Differences between humans
  • Differences between past and present civilizations
  • Differences between humans and other animals

Imagine running a dog mind at very high speed. Would a thousand years of doggy living add up to any human insight?

"Intelligence" shifts as AI progresses

Good-story bias

Intuitions about which future scenarios are plausible and realistic may be shaped by fiction and the media, and therefore overestimate the probability of scenarios that make for a good story.

When was the last time you saw a movie about humanity suddenly going extinct (without warning and without being replaced by some other civilization)?