The Psychology of AI Decision-Making: Truth, Trust, Bias
Everyday decisions such as hiring, content moderation, risk scoring, and credibility assessments are increasingly shaped by AI systems. What is most concerning is not just their reach, but how quickly that influence becomes routine and goes unquestioned.
In this book talk, Chris introduces key ideas from “The Psychology of AI Decision Making” and argues that AI governance is not just a technical issue but a human one. The values baked into these systems reflect how people define truth, trust, and fairness, and they carry the same biases, blind spots, and socioeconomic constraints as the people who build them, even when the technology appears neutral.
Using plain language and real-world examples, Chris compares human judgment with algorithmic decision-making to help students recognize the power dynamics inside so-called “objective” systems. This session is for students and supporters who are curious, cautious, or already feeling AI’s impact, and who want clearer tools for asking better questions and pushing back.
About Chris:
Chris is the author of “The Psychology of AI Decision Making” and brings more than 45 years of IT experience, with honors degrees in Computer Science and Psychology. A Fellow of the British Computer Society and graduate member of the British Psychological Society, Chris has led teams across multiple industries, including building large-scale QA operations for Electronic Arts and Microsoft, and advising leaders and investors on strategy and governance.
Now focused on mentoring and education, Chris supports TEALS, a Microsoft Philanthropies program expanding equitable computer science education, and mentors young people through IT projects. Chris speaks internationally and is committed to helping organizations understand the human impacts of AI and automation on work, wellbeing, and society.
