Spring 2026
James Brusseau
jamesbrusseau@gmail.com
jbrusseau@pace.edu
UniTrento webpage
Objectives
1. Develop the core principles of AI ethics and explore their application to today's technology.
2. Consider the primary debates in the philosophy and ethics of artificial intelligence.
3. Learn how to respond to ethics committees, and how to produce AI ethics evaluations, sometimes referred to as AI ethics audits.
Content
An autonomous vehicle must decide whether to veer left or right. If it goes left, it hits an elderly woman; if it goes right, it hits a young boy. Which way should the car go? Why? These are the kinds of dilemmas we will explore across the range of today's AI applications, from AI in healthcare, to hiring tools, to the recommendation algorithms of social media.
To respond to these dilemmas, we will develop the core principles employed in today's AI ethics: Autonomy, Human Dignity, Privacy, Fairness, Equity, Social Wellbeing, Explainability, Safety, and Performance.
Teaching method
The teaching method is classroom discussion of case studies, supplemented by lectures from the professor. There are no required texts and no homework, but attendance at seminar sessions is required because the course's main ideas will be developed collaboratively through the seminar discussions. AI ethics will be learned by doing AI ethics.
Assessment
Students will give a PowerPoint or poster presentation: an AI ethics evaluation of an AI application. The application may be a tool the student is developing in their own work, or a publicly known AI application (ChatGPT, for example, or smart glasses, or Tesla's Autopilot). The presentation will last 15–20 minutes, plus 5–10 minutes of questions.
Students will be graded on their ability to locate the ethical dilemmas that arise around AI technology, and their ability to discuss the dilemmas knowledgeably. There are no right or wrong answers in ethics, but there are better and worse understandings of the human values that guide and justify decisions.
Because the main ideas will be developed through classroom discussion, attendance at no fewer than 80% of seminar sessions is required in order to give the final presentation.
Bibliography
The bibliography consists of the seminar sessions themselves and the slide decks published after each session.
Schedule
Wednesday, May 7, 4.30 p.m. - 6.30 p.m.
Autonomy (Course Introduction)
Thursday, May 8, 4.30 p.m. - 6.30 p.m.
Dignity
Friday, May 9, 4.30 p.m. - 6.30 p.m.
Privacy
Wednesday, May 14, 4.30 p.m. - 6.30 p.m.
Fairness
Thursday, May 15, 4.30 p.m. - 6.30 p.m.
Equity/Solidarity
Friday, May 16, 4.30 p.m. - 6.30 p.m.
Social Wellbeing
Wednesday, May 21, 4.30 p.m. - 6.30 p.m.
Explainability + Safety
Thursday, May 22, 4.30 p.m. - 6.30 p.m.
Performance + AI Audits + History of AI/Tech Ethics
Wednesday, May 28, 4.30 p.m. - 6.30 p.m.
Student Presentations
Thursday, May 29, 4.30 p.m. - 6.30 p.m.
Student Presentations
Cases
Presentation Schedule
Autonomy/Freedom
Dignity
Privacy
Fairness
Equity/Solidarity
Social Wellbeing/Sustainability
Performance
Safety
Explainability/Accountability
Compared with other frameworks, our principles lean toward human freedom/libertarianism and are more streamlined; the differences are small.
Ethics Guidelines for Trustworthy AI
AI High Level Expert Group, European Commission
https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment
European Commission for the Efficiency of Justice
https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c
Ethical and Societal Implications of Data and AI
Nuffield Foundation
https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Sheffield-Foundat.pdf
The Five Principles Key to Any Ethical Framework for AI
New Statesman, Luciano Floridi and Lord Clement-Jones
https://tech.newstatesman.com/policy/ai-ethics-framework
Postscript on the Societies of Control
Gilles Deleuze, October 1997
/Library: Deleuze, Foucault, Discipline, Control.pdf
A Declaration of the Independence of Cyberspace
John Perry Barlow
https://www.eff.org/cyberspace-independence