With the advent of autonomous machines, such as autonomous vehicles, robots, and even weapons, comes a need to embed some form of morality into these machines. By definition, autonomous systems must make choices of their own accord, to go left or right, to kill or not to kill, and we want these choices to reflect our own values and norms. One way of achieving this is for developers to translate explicit normative rules into code. Another way, arguably more democratic, is to crowdsource morality: for instance, by asking the public to “vote” on moral dilemmas (e.g. the well-known trolley problem), or by letting autonomous systems learn from our actual behavior (e.g. from observing how we drive). Interestingly, such forms of crowdsourcing could result in autonomous systems whose behavior aligns with local values and norms rather than with some desired universal morality. The downside, however, is that such systems, especially those that mimic our behavior, would not be able to make “better” decisions than we humans can.