Davin Nabizadeh - Research Statement
"As artificial intelligence becomes increasingly powerful and pervasive, our responsibility as researchers is to ensure these systems enhance rather than diminish human moral agency and well-being."
Research Program Overview
My research focuses on the intersection of AI ethics and moral psychology, addressing a central challenge: aligning AI systems with human ethical principles. This interdisciplinary approach, grounded in my background in psychology, enables me to make significant contributions to research in AI ethics and AI education.
My research interests began in moral psychology and progressed to the study of moral development in educational settings. At the Center for the Study of Ethical Development, I developed norms for the Defining Issues Test (DIT2) using the largest human dataset in moral reasoning research; this work was published in high-impact journals such as Ethics and Behavior.
Current Research: AI Moral Alignment
My doctoral research examined how large language models (LLMs) engage with moral domains, including moral judgment, the Moral Machine paradigm, moral foundations, practical wisdom, moral identity, and ethical decision-making, making it one of the most comprehensive studies of AI ethics to date. I compared how LLMs process moral information against human data, and the studies found both convergences and critical divergences that inform AI development in moral domains. This work fills a critical gap: as AI systems make increasingly consequential decisions affecting human life, from healthcare algorithms to self-driving cars, understanding their moral values is essential for ensuring AI safety and alignment with human values.
Key Research Contributions
My research focuses on four interconnected areas: (1) developing novel methodologies for systematically evaluating AI moral performance using psychological instruments; (2) conducting comparative analyses of human and AI moral psychology patterns; (3) generating practical insights for AI design through real-world moral dilemma scenarios; and (4) pioneering replicable methods for studying AI behavior using established psychological research paradigms.
Future Research Directions
I am pursuing several interconnected projects: multi-modal moral assessment that extends beyond text-based scenarios to visual and real-life moral dilemmas; moral foundations analysis to better understand AI decision-making frameworks; longitudinal tracking of AI moral capabilities across model versions; and AI for school safety.
Broader Impact
This research program establishes empirical foundations for AI safety while also deepening our understanding of moral psychology. My interdisciplinary approach fosters collaboration across psychology, education, computer science, philosophy, and policy studies. I am actively forming collaborations with AI developers, education specialists, ethicists, and policymakers to ensure that research findings lead to practical improvements in AI system design and governance.
My long-term goals include establishing a comprehensive research program that systematically maps the landscape of AI moral capabilities and develops evidence-based approaches to improving AI alignment with human values, as well as examining AI's role in higher education. Through this research program, I hope to ensure that as AI systems grow more capable, they remain consistent with human moral values and contribute positively to human flourishing and education.