
Headshot of Sankalp Pandey
Author: Sankalp Pandey | Major: Computer Science, Computer Engineering | Semester: Fall 2024
Hello! My name is Sankalp Pandey, and I am a Computer Science and Computer Engineering major at the University of Arkansas. Over the past year, I have worked under the mentorship of Dr. Khoa Luu in the Computer Vision and Image Understanding (CVIU) Lab. My research uses deep learning to explore how electroencephalogram (EEG) data can help us understand visual perception.
The inspiration for my project came from my lab’s success in predicting neural responses from functional magnetic resonance imaging (fMRI) data in the Algonauts Project. Recognizing fMRI’s limitations in cost and accessibility, I turned to EEG as a more practical alternative: its portability and high temporal resolution make it well suited to studying the brain’s dynamic responses. Dr. Luu encouraged me to pursue this interdisciplinary topic, which blends AI, neuroscience, and signal processing.
This semester, my work centered on developing methods to interpret brain activity from EEG data. I began by designing a pilot experiment to classify which visual stimulus a participant was observing, focusing on the MNIST dataset of handwritten digits. EEG signals were recorded while participants viewed the digits, and I tested different approaches to classifying the observed digit. Although the results were challenging to optimize, this initial step offered valuable insight into the complexity of EEG-based classification tasks.

In parallel, I deepened my understanding by reproducing results from LaBraM, a large-scale foundation model for EEG tasks. This involved fine-tuning the model on several EEG datasets to uncover generalizable patterns in brain activity.

Building on these efforts, I also explored how brain activity, represented through fMRI data, can be mapped to visual inputs. Using paired image and fMRI data from the COCO dataset, I developed a model to reconstruct images from predicted brain activity. The workflow encodes an image with a Vision Transformer (ViT-16) to create representations that mimic brain responses, predicts the corresponding fMRI signals, and reconstructs the image with a decoder.

A key challenge was processing the EEG data itself, which is often obscured by noise and artifacts from external sources or muscle movements. I applied preprocessing techniques such as bandpass filtering and Independent Component Analysis (ICA) to clean the signals and make them suitable for machine learning models.
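To make the preprocessing concrete, here is a minimal sketch of bandpass filtering plus ICA-based artifact removal. This is not the lab's actual pipeline; it uses SciPy and scikit-learn on synthetic data, and the choice of band (1–40 Hz), filter order, and which component to zero out are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def bandpass(signal, low_hz, high_hz, fs, order=4):
    """Zero-phase Butterworth band-pass filter, applied along the time axis."""
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, signal, axis=-1)

# Synthetic stand-in for a recording: 8 channels, 10 s at 250 Hz.
fs = 250
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 10 * fs))

# 1) Keep the 1-40 Hz band, where most task-relevant EEG activity lives.
filtered = bandpass(eeg, 1.0, 40.0, fs)

# 2) Decompose the channels into independent components; in practice,
#    components dominated by eye blinks or muscle noise are identified
#    (by inspection or automated criteria) and zeroed before inverting.
ica = FastICA(n_components=8, random_state=0)
sources = ica.fit_transform(filtered.T)    # (samples, components)
sources[:, 0] = 0.0                        # e.g. drop one flagged component
cleaned = ica.inverse_transform(sources).T # back to (channels, samples)
```

In a real pipeline the flagged components would be chosen by examining their topographies and time courses rather than dropped by index.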
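The digit-classification pilot can be sketched in a similarly simplified form. The snippet below is only a baseline illustration, not the models I actually tested: it flattens each EEG trial into a feature vector and fits a regularized logistic regression, using random synthetic trials in place of recorded data, so the trial counts and shapes are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 200 trials x 8 channels x 100 time samples,
# each trial labelled with the digit (0-9) shown during recording.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8, 100))
y = rng.integers(0, 10, size=200)

# Flatten each trial into a single feature vector.
X_flat = X.reshape(len(X), -1)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_flat, y, test_size=0.25, random_state=0
)

# Standardize features, then fit a multinomial logistic regression.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # with random labels, accuracy hovers near chance (~0.1)
```

Even a weak baseline like this is useful for sanity-checking the data pipeline before moving to deeper models.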
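The image-to-fMRI prediction step can also be sketched at a high level. A common baseline encoding model is a ridge regression from image features to voxel responses; the version below substitutes random vectors for the ViT embeddings and fMRI responses, so the dimensions (768 features, 1000 voxels) and the ridge penalty are assumptions, and the real workflow would additionally feed the predicted responses to a decoder for reconstruction.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-ins: 300 images with 768-dim embeddings (the real
# pipeline would extract these with a pretrained Vision Transformer),
# paired with 1000-voxel fMRI responses.
rng = np.random.default_rng(0)
vit_features = rng.standard_normal((300, 768))
fmri = rng.standard_normal((300, 1000))

# Fit one linear map from image features to all voxels at once;
# the L2 penalty keeps the many-feature regression well conditioned.
encoder = Ridge(alpha=10.0)
encoder.fit(vit_features[:250], fmri[:250])

# Predicted voxel responses for held-out images: shape (50, 1000).
pred = encoder.predict(vit_features[250:])
```

With real paired data, these predicted responses would be scored against measured fMRI (e.g., per-voxel correlation) and then passed to the image decoder.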
Dr. Luu has been an exceptional mentor throughout my research, offering guidance while encouraging me to explore creative solutions independently. My labmates provided valuable insights, helping me refine methods and rethink my approaches to overcome challenges. This collaborative environment has been a cornerstone of my progress.
This research has been an incredible stepping stone in my journey. As it continues, I plan to refine my methods further and contribute to a more general understanding of EEG data and how it relates to what we see. I look forward to presenting my findings at my Honors thesis defense and preparing a submission for NeurIPS 2025. Beyond this work, I am eager to pursue a Master’s in Computer Science focused on computational neuroscience and artificial intelligence. This research has strengthened my passion for discovering new things!