AI technology has the potential to dwarf the impact of other revolutionary technologies such as the printing press, electricity, antibiotics, and the internet. But AI is developing so quickly that there has been little time to reflect on the nature, scope, and (dis)value of that impact. This has given rise to a host of pressing moral questions that we are only beginning to consider (let alone answer). In this class, we'll consider some of those questions. Among them are these: How might AI transform the world for unimaginable good? How might it pose an existential threat, and what can we do to mitigate that threat? Should governments attempt to regulate the development of AI, and if so, how? Can AI systems make moral judgments? If so, what moral judgments should we program them to make? And how do we avoid programming our own biases and moral failings into them? Could AI become conscious, and if so, what (if any) moral obligations would this impose on humans? For instance, can AI have rights, interests, or welfare? Could I merge with a superintelligent AI, becoming superintelligent myself? What would that even mean, and would it be morally OK for me to do it? Could I befriend, fall in love with, or even have sex with an AI? If so, should I? Will AI lead to mass unemployment, and if so, what should be done for those who are left jobless? How might AI be used by militaries, governments, employers, and others with interests in surveillance, and what (if any) moral obligations does this impose on those who control the technology? Finally, how can AI be used to capture our attention and engagement, and what obligations (if any) do we have to resist such attempts?