Ethics and AI: Should there be limits to the development of AI? How can we develop AI responsibly? Below are three possible questions in this area. They're not all equally good questions; start by asking, for each of them, whether there might be a more productive question to pose about the issue. (So, what's the best question to ask about the ethics of self-driving cars? About AI and jobs? About AI and laws?)
What about a self-driving car that must choose between the life of its passenger and the life of a pedestrian? How does that choice get made?
There's also the issue of who gets the benefits of AI. The people whose jobs are being replaced by robots are disproportionately lower-income. How can we make sure that everyone benefits from developments in AI?
What laws apply to AI? What happens when a robot commits a crime? Who gets punished?
Bias and AI: Machine learning algorithms have enabled innovation in medicine, business, and science. However, because they use existing data to develop an "understanding" of the world, they inherit the biases present in that data, and their results have been used to discriminate against groups of individuals. Research this issue and discuss how AI researchers might overcome this problem.
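To see the mechanism concretely before you research it, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not data from any real system: the hiring scenario, the feature names, and the coefficients are invented to show how a model trained on biased historical decisions reproduces that bias.

```python
# Minimal illustration: a model trained on biased historical labels
# reproduces the bias. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One "qualification" feature, identically distributed in both groups.
group = rng.integers(0, 2, size=n)   # 0 = group A, 1 = group B
skill = rng.normal(0, 1, size=n)     # same skill distribution for both

# Historical labels: past decision-makers penalized group B, so the
# recorded outcomes encode bias, not just skill.
logits = 1.5 * skill - 1.0 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train on the biased labels, with group membership as an input feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill, differing only in group.
test = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(test)[:, 1])  # group B scores noticeably lower
```

Running this prints two probabilities for applicants who differ only in group membership; the gap comes entirely from the biased labels. Detecting and correcting exactly this kind of inherited gap is the problem the question asks you to research.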
Throughout this course you have seen that technology has both benefits and risks. Imagine yourself working in the field of AI or robotics. What are you interested in working on? What are its benefits? How will you minimize its risks?