The Sciences

Memory Formation Unveiled: An Interview with Sajikumar Sreedharan

“Our goal is to correct or rewire neural network activity so that memory can be preserved with minimal damage, especially during conditions such as aging, Alzheimer’s disease, and mental health disorders.”

Dipin Damodharan

Image credit: Sajikumar Sreedharan

In an enlightening conversation with EdPublica, Sajikumar Sreedharan, Associate Professor at the NUS Yong Loo Lin School of Medicine, Singapore, shares his research insights on memory formation and the transition from short-term to long-term memory. His areas of research include aging and neurodegeneration, the neural basis of long-term memory (LTM), and synaptic tagging and capture (STC) as an elementary mechanism for storing LTM in neural networks. He also explores metaplasticity as a compensatory mechanism for improving memory in neural networks. With a career spanning over two decades, Prof. Sreedharan discusses his key findings, innovative methodologies, and the significance of receiving the “Investigator” award from the International Association for the Study of Neurons and Brain Diseases. Join us as he reflects on his journey and the collaborative spirit that drives his research.


Edited Excerpts:

Your research has been recognized for significantly advancing our understanding of memory formation. Could you elaborate on your key findings related to the transition from short-term to long-term memory?

I have been working in the field of learning and memory since 2000. My first mentor in neuroscience was Prof. T. Ramakrishna, the founder and first head of the Life Sciences Department at the University of Calicut, Kerala, India. He was a great motivator, and we often had insightful discussions about learning and memory in the evenings. I had the chance to work with him for my master’s dissertation, which was my first real research experience. Prof. Ramakrishna encouraged me to expand my knowledge further, and he connected me with Dr. Shobi Valeri, a senior researcher in Delhi at the time.

Dr. Shobi soon left for Germany to pursue his Ph.D. and recommended me to DRDO (Defence Research and Development Organisation). Dr. Shobi is now a senior scientist at the National Institute of Nutrition in Hyderabad. I worked at DRDO for a year before moving to Magdeburg, Germany, where I began my Ph.D. under Prof. Julietta Frey. She is well-known in the field of learning and memory, particularly for her research on the cellular mechanisms involved in forming associative memory.

In Prof. Frey’s lab, I discovered how different pieces of information can link together to form long-term memories. This work later inspired the development of many computational models of memory. After completing my Ph.D., I did my postdoctoral studies with Prof. Martin Korte in Braunschweig. There, I discovered how activating neurons before learning could enhance memory formation in the future, a process known as metaplasticity—an exciting and emerging area of neuroscience.

“Using animal models, we have uncovered the role of specific brain regions, like CA2 and CA1, in forming social and spatial memories—both of which are significantly affected by aging, neurodegenerative diseases, and mental health conditions.”


Since 2012, I have been working at the National University of Singapore, where I have focused more on aging, neurodegeneration, and mental health. Using animal models, we have uncovered the role of specific brain regions, like CA2 and CA1, in forming social and spatial memories—both of which are significantly affected by aging, neurodegenerative diseases, and mental health conditions.

How do you approach the study of molecular mechanisms in memory, and what methodologies do you find most effective?

In my lab, we approach research questions by examining them from different angles—molecular, cellular, behavioural, and system-level. We choose the most appropriate method depending on the specific question we’re investigating. I can’t say that one method is better than the others because each plays an important role in confirming our findings.

Recently, we’ve been using optogenetic and chemogenetic tools, which allow us to target and stimulate specific neurons. These methods are particularly helpful because they ensure precision in how we activate or deactivate brain cells.


Congratulations on receiving the “Investigator” award from the International Association for the Study of Neurons and Brain Diseases. What does this recognition mean to you personally and professionally?

Thank you for your kind words. As a researcher, I feel proud and happy that my work is being recognized internationally. Professionally, this recognition is a significant motivation to continue pursuing my research.

This achievement is not just mine alone—I owe it to all my Ph.D. students, postdocs, and research technicians who have worked with me over the past 20 years. This award is for them as well.

How do you feel your work contributes to the broader scientific community, especially concerning memory impairments related to aging and mental health?

I am the Research Director of the Healthy Longevity Translational Research Programme at the School of Medicine, National University of Singapore, where we have more than 36 scientists working on various aspects of healthy aging. One of our key areas is brain health. Living a long life is not meaningful without a healthy brain.

Image by Moondance from Pixabay

I am one of the principal investigators studying how neural networks are impaired during aging and neurodegeneration. My wife, Dr. Sheeja Navakkode, is also a neuroscientist, focusing on Alzheimer’s disease using animal models. Neural networks undergo tremendous changes during aging and in various mental health conditions. Our goal is to correct or rewire neural network activity so that memory can be preserved with minimal damage, especially during conditions such as aging, Alzheimer’s disease, and mental health disorders.

(Read the full interview in the upcoming December 2024 issue of EdPublica magazine.)

Dipin is the Co-founder and Editor-in-Chief of EdPublica. A journalist and editor with over 15 years of experience leading and co-founding both print and digital media outlets, he has written extensively on education, politics, and culture. His work has appeared in global publications such as The Huffington Post, The Himalayan Times, DailyO, Education Insider, and others.


The Sciences

Most Earthquake Energy Is Spent Heating Up Rocks, Not Shaking the Ground: New MIT Study Finds

How do earthquakes spend their energy? MIT’s latest research shows heat—not ground motion—is the main outcome of a quake, reshaping how scientists understand seismic risks

A scanning electron microscope image reveals the slick, glassy zone where laboratory-induced seismic slip melted the rock through intense friction. The central “flow” pattern marks the area rapidly transformed into glass as the fault moved. Credit: Courtesy of the researchers

When an earthquake strikes, we experience its violent shaking on the surface. But new research from MIT shows that most of a quake’s energy actually goes into something entirely different — heat.

Using miniature “lab quakes” designed to mimic real seismic slips deep underground, geologists at MIT have, for the first time, mapped the full energy budget of an earthquake. Their study reveals that only about 10 percent of a quake’s energy translates into ground shaking, while less than 1 percent goes into fracturing rock. The vast majority — nearly 80 percent — is released as heat at the fault, sometimes creating sudden spikes hot enough to melt surrounding rock.
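To get a feel for those proportions, here is a back-of-the-envelope sketch in Python. The 10 percent / 1 percent / 80 percent split comes from the study as reported above; the total quake energy, the heated rock mass, and the specific heat are illustrative assumptions, not values from the paper.

```python
# Rough energy budget for a hypothetical quake, using the approximate
# fractions reported in the MIT study. The absolute numbers (total
# energy, heated rock mass, specific heat) are illustrative assumptions.

SHAKING_FRACTION = 0.10    # ~10% radiated as ground shaking
FRACTURE_FRACTION = 0.01   # <1% spent fracturing rock
HEAT_FRACTION = 0.80       # ~80% dissipated as heat at the fault

total_energy_j = 1.0e12    # assumed total quake energy, in joules

heat_j = HEAT_FRACTION * total_energy_j

# Crude temperature rise if that heat is dumped into a thin fault zone:
# delta_T = E / (m * c_p), with a granite-like specific heat.
specific_heat_j_per_kg_k = 790.0   # typical for granite
fault_zone_mass_kg = 1.0e6         # assumed mass of rock heated at the fault

delta_t_k = heat_j / (fault_zone_mass_kg * specific_heat_j_per_kg_k)

print(f"shaking:  {SHAKING_FRACTION * total_energy_j:.2e} J")
print(f"fracture: {FRACTURE_FRACTION * total_energy_j:.2e} J")
print(f"heat:     {heat_j:.2e} J -> ~{delta_t_k:.0f} K rise in the fault zone")
```

With these assumed numbers, the fault zone warms by roughly a thousand kelvin, which illustrates how frictional heat spikes of the magnitude the team measured can arise even though only a small slice of the energy ever reaches the surface.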

“These results show that what happens deep underground is far more dynamic than what we feel on the surface,” said Daniel Ortega-Arroyo, a graduate researcher in MIT’s Department of Earth, Atmospheric and Planetary Sciences, in a media statement. “A rock’s deformation history — essentially its memory of past seismic shifts — dictates how much energy ends up in shaking, breaking, or heating. That history plays a big role in determining how destructive a quake can be.”

The team’s findings, published in AGU Advances, suggest that understanding a fault zone’s “thermal footprint” might be just as important as recording surface tremors. Laboratory-created earthquakes, though simplified models of natural ones, provide a rare window into processes that are otherwise impossible to observe deep within Earth’s crust.

MIT researchers created the “microshakes” by applying immense pressures to samples of granite mixed with magnetic particles that acted as ultra-sensitive heat gauges. By stacking the results of countless tiny quakes, they tracked exactly how the energy was distributed among shaking, fracturing, and heating. Some events saw fault zones heat up to over 1,200 degrees Celsius in mere microseconds, momentarily liquefying parts of the rock before cooling again.

“We could never reproduce the full complexity of Earth, so we simplify,” explained co-author Matěj Peč, MIT associate professor of geophysics. “By isolating the physics in the lab, we can begin to understand the mechanisms that govern real earthquakes — and apply this knowledge to better models and risk assessments.”

The work also provides a fresh perspective on why some regions remain vulnerable long after previous seismic activity. Past quakes, by altering the structure and material properties of rocks, may influence how future ones unfold. If researchers can estimate how much heat was generated in past quakes, they might be able to assess how much stress still lingers underground — a factor that could refine earthquake forecasting.

The study was conducted by Ortega-Arroyo and Peč, along with colleagues from MIT, Harvard University, and Utrecht University.


The Sciences

Giant Human Antibody Found to Act Like a Brace Against Bacterial Toxins

This synergistic bracing action gives IgM a unique advantage in neutralizing bacterial toxins that are exposed to mechanical forces inside the body

Published

on

Illustration depicting a giant human antibody (IgM) mechanically bracing a spiky bacterial toxin protein, inspired by recent research on antibodies acting as mechanical stabilizers against bacterial toxins rather than just chemical blockers. Image credit: EdPublica

Our immune system’s largest antibody, IgM, has revealed a hidden superpower — it doesn’t just latch onto harmful microbes, it can also act like a brace, mechanically stabilizing bacterial toxins and stopping them from wreaking havoc inside our bodies.

A team of scientists from the S.N. Bose National Centre for Basic Sciences (SNBNCBS) in Kolkata, India, an autonomous institute under the Department of Science and Technology (DST), made this discovery in a recent study. The team reports that IgM can mechanically stiffen bacterial proteins, preventing them from unfolding or losing shape under physical stress.

“This changes the way we think about antibodies,” the researchers said in a media statement. “Traditionally, antibodies are seen as chemical keys that unlock and disable pathogens. But we show they can also serve as mechanical engineers, altering the physical properties of proteins to protect human cells.”

Unlocking a new antibody role

Our immune system produces many different antibodies, each with a distinct function. IgM, the largest and one of the very first antibodies generated when our body detects an infection, has long been recognized for its front-line defense role. But until now, little was known about its ability to physically stabilize dangerous bacterial proteins.

The SNBNCBS study focused on Protein L, a molecule produced by Finegoldia magna. This bacterium is generally harmless but can become pathogenic in certain situations. Protein L acts as a “superantigen,” binding to parts of antibodies in unusual ways and interfering with immune responses.

Image credit: PIB

Using single-molecule force spectroscopy — a high-precision method that applies minuscule forces to individual molecules — the researchers discovered that when IgM binds Protein L, the bacterial protein becomes more resistant to mechanical stress. In effect, IgM braces the molecule, preventing it from unfolding under physiological forces, such as those exerted by blood flow or immune cell pressure.

Why size matters

The stabilizing effect depended on IgM concentration: more IgM meant stronger resistance. Simulations showed that this is because IgM’s large structure carries multiple binding sites, allowing it to clamp onto Protein L at several locations simultaneously. Smaller antibodies lack this kind of stabilizing network.
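The intuition can be made concrete with a deliberately simplified toy model, sketched below in Python. This is not the study’s analysis: it uses a Bell-type picture of forced unfolding in which each bound IgM arm is assumed to add a fixed energy penalty to the unfolding barrier, and every parameter is an illustrative assumption.

```python
import math

# Toy model of multivalent "bracing": each bound IgM arm adds an assumed
# energy penalty to unfolding, lowering the unfolding rate under force
# (a Bell-type picture). All parameters are illustrative assumptions,
# not values from the SNBNCBS study.

KT = 4.11e-21          # thermal energy at ~298 K, joules
BASE_RATE = 1.0        # intrinsic unfolding rate, 1/s (assumed)
DX = 0.5e-9            # distance to the unfolding transition state, m (assumed)
BRACE_PENALTY = 3.0    # extra barrier per bound IgM arm, in units of kT (assumed)

def unfolding_rate(force_pn: float, n_braces: int) -> float:
    """Bell-model unfolding rate under force, stiffened by n bound arms."""
    force_n = force_pn * 1e-12  # piconewtons -> newtons
    return BASE_RATE * math.exp(force_n * DX / KT - BRACE_PENALTY * n_braces)

for n in range(4):  # 0..3 IgM arms clamped onto the protein
    rate = unfolding_rate(force_pn=20.0, n_braces=n)
    print(f"{n} braces: unfolding rate ~ {rate:.3g} /s")
```

In this toy picture the unfolding rate drops exponentially with each additional clamp, which qualitatively captures the idea that a multivalent antibody can stabilize a protein far more strongly than a single-site binder.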

“This synergistic bracing action gives IgM a unique advantage in neutralizing bacterial toxins that are exposed to mechanical forces inside the body,” the researchers explained.

The finding highlights an overlooked dimension of how our immune system works — antibodies don’t merely bind chemically but can also act as mechanical modulators, physically disarming toxins.

Such insights could open a new frontier in drug development, where future therapies may involve engineering antibodies to stiffen harmful proteins, effectively locking them in a harmless state.

The study suggests that by harnessing this natural bracing mechanism, scientists may be able to design innovative treatments that go beyond traditional antibody functions.


The Sciences

Researchers Unveil Breakthrough in Efficient Machine Learning with Symmetric Data

Published

on


MIT researchers have developed the first mathematically proven method for training machine learning models that can efficiently interpret symmetric data—an advance that could significantly enhance the accuracy and speed of AI systems in fields ranging from drug discovery to climate analysis.

In traditional drug discovery, for example, a human looking at a rotated image of a molecule can easily recognize it as the same compound. However, standard machine learning models may misclassify the rotated image as a completely new molecule, highlighting a blind spot in current AI approaches. This shortcoming stems from the concept of symmetry, where an object’s fundamental properties remain unchanged even when it undergoes transformations like rotation.
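One standard way to make a model respect such a symmetry is to average its output over the transformations in question, known as group averaging. The Python sketch below illustrates the concept only; it is not the method from the MIT paper. A toy, non-invariant score changes when a 2D point cloud (a stand-in for a molecule) is rotated, while its rotation-averaged version does not.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=2)  # toy readout weights (illustrative assumption)

def model(points: np.ndarray) -> float:
    # A deliberately non-invariant score: a ReLU readout of raw coordinates.
    return float(np.maximum(points @ W, 0.0).sum())

def rotate(points: np.ndarray, theta: float) -> np.ndarray:
    # Rotate each 2D point by angle theta.
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T

def symmetrized(points: np.ndarray, n: int = 360) -> float:
    # Group averaging: average the score over many rotations of the input.
    thetas = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return float(np.mean([model(rotate(points, t)) for t in thetas]))

molecule = rng.normal(size=(5, 2))        # stand-in for a 2D "molecule"
rotated = rotate(molecule, np.pi / 3.0)   # same shape, different pose

print(model(molecule), model(rotated))              # differ: plain score is fooled
print(symmetrized(molecule), symmetrized(rotated))  # agree: invariant score
```

Brute-force averaging like this grows expensive as the symmetry group gets larger, which is precisely the trade-off between data and computation that the MIT team set out to tame.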

“If a drug discovery model doesn’t understand symmetry, it could make inaccurate predictions about molecular properties,” the researchers explained. While some empirical techniques have shown promise, there was previously no provably efficient way to train models that rigorously account for symmetry—until now.

“These symmetries are important because they are some sort of information that nature is telling us about the data, and we should take it into account in our machine-learning models. We’ve now shown that it is possible to do machine-learning with symmetric data in an efficient way,” said Behrooz Tahmasebi, MIT graduate student and co-lead author of the new study, in a media statement.

The research, recently presented at the International Conference on Machine Learning, is co-authored by fellow MIT graduate student Ashkan Soleymani (co-lead author), Stefanie Jegelka (associate professor of EECS, IDSS member, and CSAIL member), and Patrick Jaillet (Dugald C. Jackson Professor of Electrical Engineering and Computer Science and principal investigator at LIDS).

Rethinking how AI sees the world

Symmetric data appears across numerous scientific disciplines. For instance, a model capable of recognizing an object irrespective of its position in an image demonstrates such symmetry. Without built-in mechanisms to process these patterns, machine learning models can make more mistakes and require massive datasets for training. Conversely, models that leverage symmetry can work faster and with fewer data points.

“Graph neural networks are fast and efficient, and they take care of symmetry quite well, but nobody really knows what these models are learning or why they work. Understanding GNNs is a main motivation of our work, so we started with a theoretical evaluation of what happens when data are symmetric,” Tahmasebi noted.

The MIT researchers explored the trade-off between how much data a model needs and the computational effort required. Their resulting algorithm brings symmetry to the fore, allowing models to learn from fewer examples without spending excessive computing resources.

Blending algebra and geometry

The team combined strategies from both algebra and geometry, reformulating the problem so the machine learning model could efficiently process the inherent symmetries in the data. This innovative blend results in an optimization problem that is computationally tractable and requires fewer training samples.

“Most of the theory and applications were focusing on either algebra or geometry. Here we just combined them,” explained Tahmasebi.

By demonstrating that symmetry-aware training can be both accurate and efficient, the breakthrough paves the way for the next generation of neural network architectures, which promise to be more precise and less resource-intensive than conventional models.

“Once we know that better, we can design more interpretable, more robust, and more efficient neural network architectures,” added Soleymani.

This foundational advance is expected to influence future research in diverse applications, including materials science, astronomy, and climate modeling, wherever symmetry in data is a key feature.

