What Do I Even Do? How My Research Interests Are Taking Shape at the End of PhD Year One

April 2026

The first question I get when I tell someone I'm in a PhD program is "So what do you work on?" My answer is always "the intersection of neuroscience and AI," which is technically true but not a very useful explanation. This post is my attempt to do better.

I joined the EECS PhD program at Berkeley with a clear research direction: NeuroAI. How can we build AI systems inspired by the brain? How can AI help us better understand the brain? The synergy between the two felt super exciting. What I didn't have was any grip on the details.

Then Berkeley happened. In my undergrad, I had mostly theorized about what the brain might be doing. Berkeley gave me access to study what the brain is actually doing, measured in BOLD activity, voxel by voxel, via fMRI. In the Gallant Lab, I came across voxelwise encoding models and immediately fell in love. The setup is beautiful: take a naturalistic stimulus (have a participant watch a movie, listen to a story, or play a video game), build rich feature spaces that capture it from multiple angles, then use ridge regression to map those features onto brain responses, voxel by voxel. The result is a map of what information different parts of the brain are representing, in real time, during real experience.

I spent months playing with these feature spaces: figuring out how to design them well, what happens when you involve LLMs and sparse autoencoders, and how to make the spaces rich yet interpretable. That's where something shifted in how I think about all of this. I started the year approaching NeuroAI from one direction: what can the brain teach us about building better AI? I ended it coming from the other: what does real brain data actually tell us, and how do we build the tools to extract meaning from it? Somewhere in between, I stopped thinking of those as two separate research directions.
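If you're curious what that setup looks like in practice, here's a minimal sketch of a voxelwise encoding model on simulated data. Everything in it is a toy stand-in (the dimensions, the single shared ridge penalty, the random "feature space"), not any lab's actual pipeline, but it captures the core idea: regress stimulus features onto each voxel's response, then score predictions on held-out data.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy data: 300 timepoints, 50 stimulus features, 1000 voxels.
rng = np.random.default_rng(0)
n_time, n_feat, n_vox = 300, 50, 1000
X = rng.standard_normal((n_time, n_feat))               # stimulus feature space
true_w = rng.standard_normal((n_feat, n_vox))           # ground-truth weights
Y = X @ true_w + rng.standard_normal((n_time, n_vox))   # simulated voxel responses

# Fit a ridge model per voxel; sklearn handles all voxels at once
# when Y has one column per voxel.
X_train, X_test = X[:200], X[200:]
Y_train, Y_test = Y[:200], Y[200:]
model = Ridge(alpha=10.0).fit(X_train, Y_train)

# Per-voxel accuracy: correlation between predicted and held-out responses.
pred = model.predict(X_test)
r = np.array([np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_vox)])
print(f"mean voxel correlation: {r.mean():.2f}")
```

A real analysis adds quite a bit on top of this, e.g. feature delays to account for the hemodynamic response and per-voxel regularization, but the map of "which voxels are well predicted by which feature space" falls out of exactly this kind of regression.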
Working at this intersection has pushed me to develop expertise on both sides: understanding AI architectures and what makes their representations meaningful, and learning to process and interpret brain data. The question that feels most alive to me now lives at the boundary of both: can we make AI more brain-like by aligning brain and AI representations? I'll be honest: I'm all over the place. But I think that's what the end of year one is supposed to feel like. The fog isn't gone, it's just taken a shape I can start to work with. If any of this resonates or you're thinking about something adjacent, I'd love to chat!