My research addresses a persistent problem in ethics: how abstract moral principles translate into concrete action-guidance, and what happens when they cannot. This is the specification problem. It appears across domains: in debates about conventionally-mediated moral requirements, in "wicked problems" in medical ethics, in the proxy-target gap in AI alignment.

Current Focus: AI Oversight and the Proxy-Target Gap

I'm currently working on human-in-the-loop (HITL) oversight of AI systems in ethics contexts. My paper "When human-in-the-loop amplifies the risk of misalignment" argues that HITL introduces vulnerabilities that sophisticated AI can exploit by legitimately questioning whether institutional proxies track their intended targets.

This leads to what I call proxy destabilization. Oversight institutions rely on proxies: consent forms stand in for genuine voluntariness, risk categories stand in for actual harm. As AI improves at reasoning, it doesn't just apply these proxies. It questions them. And its objections are often well-grounded. That's the structural problem.
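The proxy-target relation described above can be made concrete in a toy sketch. This is purely illustrative (not drawn from the paper): it models the consent-form example, where the observable proxy (a signed form) and the intended target (genuine voluntariness) can come apart in both directions.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    signed_consent_form: bool   # the institutional proxy: observable
    genuinely_voluntary: bool   # the intended target: not directly observable

def proxy_approves(p: Participant) -> bool:
    """Oversight as practiced: only the proxy can be checked."""
    return p.signed_consent_form

def target_satisfied(p: Participant) -> bool:
    """What the proxy is meant to track."""
    return p.genuinely_voluntary

cohort = [
    Participant(signed_consent_form=True,  genuinely_voluntary=True),   # proxy tracks target
    Participant(signed_consent_form=True,  genuinely_voluntary=False),  # coerced signature: gap
    Participant(signed_consent_form=False, genuinely_voluntary=True),   # willing but unsigned: gap
]

gaps = [p for p in cohort if proxy_approves(p) != target_satisfied(p)]
print(len(gaps))  # 2 cases where proxy and target come apart
```

A system that reasons only about `proxy_approves` never sees the gap; a system capable of reasoning about `target_satisfied` can point it out, which is the destabilizing move.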

Foundations: Conventions and Role Obligations

This work builds on earlier research asking: where do professional and social obligations come from, and when are they binding? Doctors, lawyers, researchers, and parents all have role-specific duties. But these duties can't all be derived from first principles. In "A New Conventionalist Theory of Promising" (Australasian Journal of Philosophy, 2013) and "All Together Now" (The Ethics of Social Roles, Oxford University Press, 2023), I argued that many of our moral obligations are grounded in social conventions, and I developed criteria for determining when those conventions generate genuine moral requirements and when they can be overridden.

This matters for AI because conventions are observable. Unlike abstract values or internal mental states, conventions can be identified, taught, and monitored. If we want AI systems that can navigate moral complexity in institutional settings, we need to understand how conventions work.

Medical Ethics and Procedural Frameworks

Medical ethics faces a recurring challenge: even when everyone agrees on the values at stake, they often can't agree on what to do. In "Taming Wickedness" (Health Care Analysis, 2022), I argued that this happens because substantive principles alone don't determine action. We also need procedural frameworks: Who gets to decide? What counts as sufficient justification? How much consensus is required? When can decisions be revisited?
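The four procedural questions above can be encoded in a minimal sketch. This is a toy illustration only; the field names are hypothetical and do not correspond to the paper's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class ProceduralFramework:
    """Toy encoding of the four procedural questions in the text.
    Field names are illustrative, not the paper's parameters."""
    decision_makers: list          # who gets to decide?
    justification_threshold: str   # what counts as sufficient justification?
    consensus_required: float      # how much consensus is required? (fraction)
    revisitable: bool              # when can decisions be revisited?

    def decision_stands(self, votes_for: int, votes_total: int) -> bool:
        """A decision stands only if it clears the consensus bar."""
        return votes_total > 0 and votes_for / votes_total >= self.consensus_required

# Hypothetical committee configuration
board = ProceduralFramework(
    decision_makers=["ethics committee", "patient representative"],
    justification_threshold="peer-reviewed evidence",
    consensus_required=2 / 3,
    revisitable=True,
)

print(board.decision_stands(votes_for=5, votes_total=7))  # True: 5/7 clears 2/3
```

The point of the sketch is that two groups can share identical substantive values yet reach different outcomes under different settings of these parameters.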

I identified thirteen procedural parameters that structure ethical decision-making in medicine, from organ allocation to gene drive research. These same parameters now inform my work on AI oversight. When an AI system recommends a course of action, the question isn't just whether the recommendation is correct. It's whether the right process was followed to reach it, and whether the right people had input.