I work on AI safety and alignment.

I focus on ensuring that future highly capable LLM agents are aligned with human intentions and do not cause catastrophic outcomes.

Previously, I was:


Please consider giving me anonymous feedback! You can use this Google Form.



Highlighted Research