Of Neurons and Nanomaterials

Sankalp coding for his quantum 2D material segmentation project.

Author: Sankalp Pandey | Major: Computer Science, Computer Engineering | Semester: Spring 2025

Sankalp Pandey is an Honors College student in the College of Engineering at the University of Arkansas, double majoring in Computer Science and Computer Engineering. Since February 2024, he has been conducting research under the mentorship of Dr. Khoa Luu in the Department of Electrical Engineering and Computer Science. His work during the Spring 2025 semester was supported by the Honors College Research Grant, which covered the period from Fall 2024 to Spring 2025. Sankalp plans to continue his research in quantum deep learning for his thesis defense in Spring 2026 and pursue graduate studies to further his research career.

Over the past year, I have had the incredible opportunity to work in two cutting-edge fields: computational neuroscience and quantum materials science. My work involved experimenting with state-of-the-art methods to better understand brain activity, while also pioneering a new approach for automating image analysis of quantum 2D materials using deep learning. These efforts aim to deepen our knowledge of how the brain processes information and accelerate the discovery of quantum materials, which is an essential step for advancing quantum hardware engineering.

I began by recreating benchmarks from previous studies to get acquainted with neuroimaging. First, I reproduced the results from LaBraM, a large-scale unified foundation model for EEG, by training on just one dataset to evaluate its performance when data is severely limited. This process highlighted a common challenge: EEG models often overfit to specific training contexts and fail to generalize to unseen data. I also applied a leaner version of the NeuroClips framework to reconstruct video stimuli from fMRI signals. This was particularly insightful for understanding how brain activity can be decoded into other modalities, like video, especially given the mismatch between fMRI’s low temporal resolution and video’s high frame rate. Both projects helped me build foundational knowledge before tackling more complex challenges.
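To make that temporal mismatch concrete: fMRI is sampled roughly once every repetition time (TR) of one to two seconds, while video stimuli run at around 30 frames per second. Below is a minimal sketch of one common workaround, averaging the video-frame features that fall inside each TR window so the two streams share a timeline. The TR, frame rate, and array shapes here are illustrative assumptions, not the exact NeuroClips settings.

```python
import numpy as np

# Illustrative assumptions (not the exact NeuroClips settings):
# a 30 fps video clip and fMRI sampled once every 2-second TR.
FPS = 30
TR_SECONDS = 2.0
FRAMES_PER_TR = int(FPS * TR_SECONDS)  # 60 video frames per fMRI volume

def align_frames_to_tr(frame_features: np.ndarray, frames_per_tr: int) -> np.ndarray:
    """Average per-frame features inside each TR window so the video
    feature sequence has one entry per fMRI volume.

    frame_features: (num_frames, feature_dim) array of per-frame embeddings.
    Returns an array of shape (num_trs, feature_dim).
    """
    num_frames, feature_dim = frame_features.shape
    num_trs = num_frames // frames_per_tr            # drop any trailing partial window
    trimmed = frame_features[: num_trs * frames_per_tr]
    windows = trimmed.reshape(num_trs, frames_per_tr, feature_dim)
    return windows.mean(axis=1)

# Example: 10 seconds of video (300 frames) with 512-dimensional frame embeddings.
frame_feats = np.random.randn(300, 512).astype(np.float32)
tr_feats = align_frames_to_tr(frame_feats, FRAMES_PER_TR)
print(tr_feats.shape)  # (5, 512) -> one feature vector per fMRI volume
```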

One of my main projects during the grant was participating in the ongoing Algonauts 2025 Challenge, which involves using an encoding model to predict brain responses to multimodal movie data, including visual, audio, and text information. I engineered visual embeddings with Frozen in Time, audio embeddings with Wav2Vec2, and context embeddings by generating captions for video frames with BLIP-2. These embeddings lay the groundwork for training a powerful encoding model that can process rich, dynamic data and shed light on how the brain interprets complex stimuli.
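As a rough illustration of this embedding-extraction step, the sketch below pulls a clip-level audio embedding with Wav2Vec2 and generates a frame caption with BLIP-2 via the Hugging Face transformers library. The specific checkpoints ("facebook/wav2vec2-base-960h" and "Salesforce/blip2-opt-2.7b") are common public ones chosen for the example rather than the exact ones in my pipeline, and the Frozen in Time visual encoder, which lives in its own repository, is omitted here.

```python
import torch
from PIL import Image
from transformers import (
    Wav2Vec2Processor, Wav2Vec2Model,
    Blip2Processor, Blip2ForConditionalGeneration,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# --- Audio embeddings with Wav2Vec2 (checkpoint chosen for illustration) ---
w2v_processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
w2v_model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h").to(device).eval()

def audio_embedding(waveform, sampling_rate=16_000):
    """Mean-pool Wav2Vec2 hidden states into a single clip-level vector."""
    inputs = w2v_processor(waveform, sampling_rate=sampling_rate, return_tensors="pt").to(device)
    with torch.no_grad():
        hidden = w2v_model(**inputs).last_hidden_state  # (1, time, 768)
    return hidden.mean(dim=1).squeeze(0).cpu()          # (768,)

# --- Context embeddings: caption a video frame with BLIP-2 ---
blip_processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
blip_model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b").to(device).eval()

def caption_frame(frame: Image.Image) -> str:
    """Generate a short natural-language caption for one video frame."""
    inputs = blip_processor(images=frame, return_tensors="pt").to(device)
    with torch.no_grad():
        output_ids = blip_model.generate(**inputs, max_new_tokens=30)
    return blip_processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
```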

Alongside the neuroscience work, I pursued a second project: applying deep learning to quantum materials. Specifically, I worked on identifying quantum 2D materials, or “quantum flakes,” whose quality directly affects qubit performance and is a critical factor in quantum hardware development. Traditional computer vision methods struggle to estimate flake thickness due to issues like data scarcity and domain shift. To address this, we proposed a physics-informed adaptation learning approach to bridge the performance gap between models trained on synthetic data and their deployment on real-world samples. I supported this effort by producing baseline segmentation results with Mask R-CNN to identify where conventional models fall short. I am extremely proud that this work, co-authored with my mentors, has been submitted to the NeurIPS 2025 main track.
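For readers curious what such a baseline looks like in practice, below is a minimal sketch that adapts torchvision’s pre-trained Mask R-CNN to flake segmentation. Treating every flake as a single foreground class (two classes total, counting background) is a simplifying assumption for illustration, not the exact configuration from the submitted paper.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_flake_maskrcnn(num_classes: int = 2):
    """Mask R-CNN with a ResNet-50 FPN backbone, re-headed for flake segmentation.

    num_classes counts the background, so 2 = background + "flake"
    (a simplifying assumption for this sketch).
    """
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box classification head for the new class count.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask prediction head to match.
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model

# Inference on a single image (dummy data standing in for a normalized RGB micrograph).
model = build_flake_maskrcnn().eval()
image = torch.rand(3, 512, 512)
with torch.no_grad():
    prediction = model([image])[0]    # dict with "boxes", "labels", "scores", "masks"
print(prediction["masks"].shape)      # (num_detections, 1, 512, 512)
```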

Throughout this time, the guidance I received was invaluable. My advisor, Dr. Luu, provided weekly supervision, helping me plan experiments and refine methodologies. I was also fortunate to receive daily guidance from two PhD labmates, Xuan Bac Nguyen and Hoang-Quan Nguyen, who supervised my work, offered insights on theory and methods, and were there whenever I faced challenges. The Honors College Research Grant played a crucial role by funding important resources, including Google Cloud storage for handling large EEG and fMRI datasets, and Google Colab Pro access for powerful GPU-accelerated model training.

This grant period has laid a strong foundation for my future work. Next, I plan to develop the final encoding model for the Algonauts 2025 Challenge using the embeddings I’ve engineered. I’m also eager to keep advancing in quantum deep learning for my thesis. This journey has been enriching, and I’m excited to keep learning and contributing further to these fields.