👋🏼 Hello there, I’m Raja!

👨🏻‍💻 I’m currently a NeuroAI PhD researcher at CNRS–ANITI (Toulouse). My work centers on multimodal representation learning and neuroscience-inspired AI, driven by a curiosity to build intelligent systems that can tackle real-world, large-scale problems.

🔬 I’m passionate about understanding how the brain inspires efficient and generalizable representations, and how these insights can be leveraged to improve the adaptability of modern AI systems.

📚 Before starting my PhD, I completed my Master’s thesis at IIT Bombay, working at the intersection of Natural Language Processing (NLP), Machine Learning (ML), and Computational Social Science. My research explored mental disorder prediction from social media posts using novel representation learning frameworks.

💡 Broadly, my interests lie in bridging cognitive neuroscience and artificial intelligence, with the long-term goal of developing models that learn and reason in a more human-like, multimodal, and interpretable way.

Research Experience

Deep‑learning Implementations of the Global Workspace Theory

Guide: Prof. Rufin VanRullen, VanRullen Lab

  • Developing a transformer-based semi-supervised framework for multimodal learning grounded in the principles of the Global Workspace Theory (GWT).
  • Designing Global Workspace-inspired mechanisms for VLMs to enable flexible, cross-modal communication and integration within a shared representational space.

Multimodal Mixup Contrastive Learning for Multimodal Classification | Research Project, Monash University & IIT Bombay

Guide: Prof. Kshitij Jadhav & Dr. Deval Mehta

  • Developed a novel multimodal contrastive loss incorporating mixup training to improve representation learning for complex real‑world multimodal data relations, beating SOTA on four diverse public multimodal classification benchmarks.
  • Designed and implemented a multimodal learning framework incorporating unimodal prediction modules, a fusion module, and a new Mixup‑based contrastive loss for continuous representation updating.
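The core idea of a mixup-based contrastive loss can be sketched in a few lines. The snippet below is a minimal, hypothetical NumPy illustration, assuming mixup is applied in embedding space and the InfoNCE target is softened to credit both mixed partners; the actual loss used in the project may differ in form and details.

```python
import numpy as np

def info_nce_with_mixup(img, txt, lam=0.7, temp=0.1, rng=None):
    """Illustrative mixup-style contrastive loss (hypothetical form).

    img, txt: (N, D) L2-normalised embeddings from two modalities.
    Each image embedding is mixed with a randomly paired partner,
    and the InfoNCE target distributes mass lam / (1 - lam) across
    the two corresponding text embeddings.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = img.shape[0]
    perm = rng.permutation(n)
    # mixup in embedding space, then re-normalise
    mixed = lam * img + (1 - lam) * img[perm]
    mixed /= np.linalg.norm(mixed, axis=1, keepdims=True)
    # (N, N) similarity logits against all text embeddings
    logits = mixed @ txt.T / temp
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # soft targets: weight lam on own pair, (1 - lam) on the partner's
    idx = np.arange(n)
    loss = -(lam * log_p[idx, idx] + (1 - lam) * log_p[idx, perm]).mean()
    return loss
```

With `lam = 1.0` the mixing term vanishes and this reduces to a plain InfoNCE loss over aligned pairs, which is one way to see it as a strict generalisation of standard multimodal contrastive training.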

Mental Disorder Identification through Linguistic Markers | Master’s Thesis, IIT Bombay

Guide: Prof. Pushpak Bhattacharyya, CFILT Lab

  • Proposed a unique method to convert social media text into time series data for post‑level analysis of mental disorders
  • Developed a novel framework for mental disorder identification using foundational deep learning models, which surpasses BERT‑based approaches by 5% in F1 score on three conditions: depression, self‑harm, and anorexia
  • Explored semantic overlaps among these disorders, underscoring the value of cross‑domain data in mental health research
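The post-level time-series idea can be illustrated with a toy example. This is a hypothetical sketch, not the thesis method: it assumes each timestamped post is reduced to one scalar linguistic marker (here, a simple first-person-pronoun rate), yielding a per-user series ordered in time.

```python
from datetime import datetime

# Hypothetical marker: rate of first-person pronouns per post.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def posts_to_series(posts):
    """Convert [(iso_timestamp, text), ...] into a chronologically
    ordered list of (datetime, score) pairs — a per-user time series
    of one linguistic marker."""
    series = []
    for ts, text in sorted(posts, key=lambda p: p[0]):
        tokens = text.lower().split()
        score = sum(t.strip(".,!?") in FIRST_PERSON for t in tokens) / max(len(tokens), 1)
        series.append((datetime.fromisoformat(ts), score))
    return series
```

In practice the scalar feature would come from a learned representation rather than a hand-crafted count, but the shape of the data — one value per post, indexed by time — is the same.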

Cognitively Inspired Hallucination Detection | Research Project, UT Austin & IIT Bombay

Guide: Prof. Abhijit Mishra & Prof. Pushpak Bhattacharyya

  • Curated an eye‑tracking dataset of 500 instances for the task of hallucination detection and developed a BERT‑based framework
  • Proposed a novel attention bias framework inspired by human behavior for detecting hallucinated texts

Professional Experience

I previously worked as an AI Student Researcher at Assert AI.
There, I deployed customized YOLOv4 models for surveillance tasks on Nvidia Jetson series GPU accelerators, and generated tailored datasets and trained YOLOv4 models for diverse object detection and classification scenarios.