I am a PhD student at the College of Information and Computer Sciences, UMass Amherst, where I am advised by David Jensen. My research spans the areas of causal inference, probabilistic machine learning, and reinforcement learning. More recently, I have also worked on approaches for mechanistic interpretability in LLMs.

I aim to create tools for analyzing and evaluating the behavior of complex AI systems, with a focus on problems in blame and responsibility attribution, explainability, and alignment with human norms. Unlike most applications of causal inference, which involve objective experimentation and interaction with the external world, these problems are traditionally grounded in subjective human judgments. Such judgments involve norms that can be highly counterintuitive and pose a significant challenge to purely statistical approaches to causal inference. By developing formal approaches for modeling norms and inference algorithms that respect them, I hope to support open, scientific evaluation and auditing of AI systems, and the development of AI systems that better align with human norms.

For a complete list of my publications, see my Google Scholar profile.

Research


Automated Discovery of Functional Actual Causes in Complex Environments
Caleb Chuck*, Sankaran Vaidyanathan*, Stephen Giguere, Amy Zhang, David Jensen, Scott Niekum
In preparation | arXiv

Classical definitions of actual causation often declare a large number of events and entities in an environment to be causes, even when many of them rarely influence the outcome. This is an issue of normality: distinguishing normal from rare events when identifying potential causes. By exploiting context-specific independencies in the environment, we can prune away events that do not affect the outcome in the observed context and identify a restricted, focused set of actual causes. We extend the formal definition of actual causation to account for these independencies and show how to automatically infer actual causes under the extended definition.
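
To give a flavor of the pruning idea (a minimal sketch only, not the paper's algorithm): in a toy structural model, a candidate event is kept as a possible actual cause only if varying it alone, with the rest of the observed context held fixed, can change the outcome; events that cannot are pruned as irrelevant in that context. The variable names and the outcome function below are illustrative assumptions.

```python
# Toy illustration of context-specific pruning of candidate causes.
# The model, variable names, and domains are hypothetical examples,
# not taken from the paper.

def outcome(match_lit, oxygen, wind):
    # Fire starts only if the match is lit and oxygen is present;
    # wind is irrelevant in this toy model.
    return match_lit and oxygen

observed = {"match_lit": True, "oxygen": True, "wind": False}
domains = {"match_lit": [True, False],
           "oxygen": [True, False],
           "wind": [True, False]}

def influences_in_context(var):
    """True if varying `var` alone, holding the rest of the observed
    context fixed, can change the outcome; otherwise `var` is pruned."""
    base = outcome(**observed)
    for value in domains[var]:
        intervened = dict(observed, **{var: value})
        if outcome(**intervened) != base:
            return True
    return False

candidates = [v for v in observed if influences_in_context(v)]
print(candidates)  # ['match_lit', 'oxygen'] -- 'wind' is pruned
```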