hey, i'm nishkal hundia
cs + math @ UMD, trying to understand how ai works under the hood.
right now, i'm working on mechanistic interpretability research in the CLIP Lab with Dr. Sarah Wiegreffe. i'm evaluating how effective representation-based steering approaches are for long-form generation. i'm also exploring unfaithful reasoning in large language models, work supported by an Open Philanthropy grant.
i recently had a paper on MoELens accepted to the ICLR 2025 Sparse LLM Workshop, where we developed techniques to interpret routing patterns in Mixture of Experts models.
i lead the ai/ml club at UMD, which i grew from zero to 800+ members. we run workshops, speaker events, and semester-long projects to help students get hands-on with machine learning research and applications.
previous quests
previously, i worked with the RISE Lab on tropical cyclone modeling and storm data imputation. we even published a paper from that project!
i also worked with the Center for Disaster Resilience at UMD on writing and optimizing simulation logic for flood insurance programs.
and i built DenseTeX, a model that converts images of math into LaTeX (mostly works). i also served as a teaching assistant for discrete mathematics.
i'm always down to chat about sparse models, interpretability, llms, weird math problems, or why Jet Lag: The Game is absolute peak content.