Space & Physics

The Story of the World’s Most Underrated Quantum Maestro

As the world celebrates the 131st birth anniversary of S.N. Bose, EdPublica explores the theoretical physicist’s unparalleled contributions to the field of quantum mechanics

Karthik Vinod


KNOW THE SCIENTIST

It’s 1924, and Satyendra Nath Bose, better known as S.N. Bose, was a young physicist teaching in Dhaka, then part of British India. Struck by an epiphany, he was desperate to publish his solution to a logical inconsistency in Planck’s radiation law. He had his eyes on the British Philosophical Magazine, since word would spread from there to the leading physicists of the time, most of whom were in Europe. But the paper was rejected without explanation.

But he wasn’t going to give up just yet. Unrelenting, he sent off another sealed envelope with his draft, this time with a cover letter, to Europe. One can imagine Bose, months later, breathing a sigh of relief when he finally received a positive response, from none other than the great man of physics himself: Albert Einstein.

In some ways, Bose and Einstein were similar. Neither held a PhD when he wrote the treatise that brought him into the limelight. And just as Einstein introduced E=mc², derived from special relativity, with little fanfare, Bose initially couldn’t find a publisher for the groundbreaking work that invented quantum statistics: a novel derivation of Planck’s radiation law from the first principles of quantum theory.

Satyendra Nath Bose at Kolkata in 1915. Credit: Wikimedia Commons

This was a well-known problem that had plagued physicists since Max Planck, the father of quantum physics himself. Einstein had struggled with it time and again, without ever resolving it. But Bose did, and almost nonchalantly, with a simple derivation from first principles grounded in quantum theory. For those who know some quantum theory, I’m referring to Bose’s profound recognition that the Maxwell-Boltzmann distribution, which holds true for ideal gases, fails for quantum particles. A technical treatment of the problem reveals that photons with the same energy and polarization are fundamentally indistinguishable from each other, so that swapping two of them does not produce a new state to be counted.
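The counting difference at the heart of Bose’s insight can be illustrated with a toy calculation (a sketch, not Bose’s actual derivation): classical statistics treats particles as distinguishable, so each arrangement counts separately, while Bose’s statistics counts only the distinct ways of filling the available energy cells.

```python
from math import comb

def boltzmann_microstates(n, g):
    # Distinguishable particles: each of n particles may sit in any of g cells
    return g ** n

def bose_microstates(n, g):
    # Indistinguishable particles: "stars and bars" count of distinct occupancies
    return comb(n + g - 1, n)

print(boltzmann_microstates(3, 4))  # 64 classical arrangements
print(bose_microstates(3, 4))       # only 20 distinct quantum occupancies
```

The drastically smaller count for indistinguishable particles is what changes the statistics, and with it the derived radiation law.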

Fascinatingly, last July marked 100 years since Einstein submitted Bose’s paper, “Planck’s Law and the Quantum Hypothesis,” on his behalf to Zeitschrift für Physik.

Fascinated and moved by what he read, Einstein was magnanimous enough to translate Bose’s paper into German and have it published in the journal Zeitschrift für Physik the same year. It was the beginning of a brief but productive collaboration between the two theoretical physicists, one that would open the doors to the quantum world much wider.

With the benefit of hindsight, Bose’s work was nothing short of revolutionary for its time. However, Oskar Klein, a Nobel Committee member and Swedish theoretical physicist of repute, deemed it a mere advance in applied science rather than a major conceptual breakthrough. Hindsight shows how times have changed: the 2001 Nobel Prize in Physics went to Eric Allin Cornell, Carl Wieman, and Wolfgang Ketterle for creating the first Bose-Einstein condensates, a state of matter Einstein predicted on the basis of Bose’s new statistics. These condensates form when atoms are cooled to near absolute zero, settling into the quantum ground state. Even there the atoms retain some residual energy, the zero-point energy, and the condensate marks a macroscopic phase transition, a distinct state of matter in its own right.

Such were the changing times that Bose’s work gradually received the attention it deserved. Bose himself was content without a Nobel, saying, “I have got all the recognition I deserve.” A modest man and a gentleman, he fits the image of a scientist in service to the scientific discipline itself.

He was awarded the Padma Vibhushan, India’s second-highest civilian award, in 1954. Institutes have been named in his honour, yet he has little to no presence in public discourse.

But what’s more upsetting is that Bose remains a bit of a stranger in India, where he was born and lived. He studied physics at Presidency College, Calcutta, an institution that nurtured other great Indian physicists, among them Jagadish Chandra Bose and Meghnad Saha.

BOSE INSIDE

Among physicists of his generation and beyond, Bose lives on in the scientific lexicon itself. Paul Dirac, the British physicist, coined the name ‘boson’ (‘Bose-on’) in Bose’s honor. Bosons are quantum particles, photons among them, with integer spin, a classification that arose only because of Bose’s invention of quantum statistics. In fact, the media-famous ‘god particle’, the Higgs boson, carries a bit of Bose as much as it does of Peter Higgs, who shared the 2013 Nobel Prize in Physics with François Englert for producing the hypothesis.

Health

Researchers Develop AI Method That Makes Computer Vision Models More Explainable

A new technique developed by MIT researchers could help make artificial intelligence systems more accurate and transparent in high-stakes fields such as health care and autonomous driving by improving how computer vision models explain their decisions.


MIT researchers have developed a new explainable AI method that improves the accuracy and transparency of computer vision models, helping users trust AI predictions in healthcare and autonomous driving.
Image credit: Tara/Pexels

Researchers at MIT have developed a new approach to make computer vision models more transparent, offering a potential boost to trust and accountability in safety-critical applications such as medical diagnosis and autonomous driving.

In a media statement, the researchers said the method improves on a widely used explainability technique known as concept bottleneck modeling, which enables AI systems to show the human-understandable concepts behind a prediction. The new approach is designed to produce clearer explanations while also improving prediction accuracy.

Why explainable AI matters

In areas such as health care, users often need more than just a model’s output. They want to understand why a system arrived at a particular conclusion before deciding whether to rely on it. Concept bottleneck models attempt to address that need by forcing an AI system to make predictions through a set of intermediate concepts that humans can interpret.

For example, when analysing a medical image for melanoma, a clinician might define concepts such as “clustered brown dots” or “variegated pigmentation.” The model would first identify those concepts and then use them to arrive at its final prediction.
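The idea can be sketched in a few lines. Everything here is hypothetical for illustration: the concept names, scores, and weights are made up, and the real model learns its final layer from data rather than using hand-set values.

```python
# Hypothetical concept scores for one skin-lesion image (1.0 = clearly present)
concepts = {
    "clustered brown dots": 0.9,
    "variegated pigmentation": 0.8,
    "regular border": 0.1,
}

# Hypothetical weights a final linear layer might assign to each concept;
# a regular border argues against melanoma, hence the negative weight
weights = {
    "clustered brown dots": 1.5,
    "variegated pigmentation": 1.2,
    "regular border": -2.0,
}

# The final prediction is forced to flow through the interpretable concepts
score = sum(weights[c] * v for c, v in concepts.items())
prediction = "melanoma" if score > 0 else "benign"
```

Because the output depends only on the named concept scores, a clinician can inspect exactly which concepts drove the decision.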

But the researchers said pre-defined concepts can sometimes be too broad, irrelevant or incomplete for a specific task, limiting both the quality of explanations and the model’s performance. To overcome that, the MIT team developed a method that extracts concepts the model has already learned during training and then compels it to use those concepts when making decisions.

The approach relies on two specialised machine-learning models. One extracts the most relevant internal features learned by the target model, while the other translates them into plain-language concepts that humans can understand. This makes it possible to convert a pretrained computer vision model into one capable of explaining its reasoning through interpretable concepts.

“In a sense, we want to be able to read the minds of these computer vision models. A concept bottleneck model is one way for users to tell what the model is thinking and why it made a certain prediction. Because our method uses better concepts, it can lead to higher accuracy and ultimately improve the accountability of black-box AI models,” Antonio De Santis, lead author of the study, said in a media statement.

De Santis is a graduate student at Polytechnic University of Milan and carried out the research while serving as a visiting graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). The paper was co-authored by Schrasing Tong, Marco Brambilla of Polytechnic University of Milan, and Lalana Kagal of CSAIL. The research will be presented at the International Conference on Learning Representations.

Concept bottleneck models have gained attention as a way to improve AI explainability by introducing an intermediate reasoning step between an input image and the final output. In one example, a bird-classification model might identify concepts such as “yellow legs” and “blue wings” before predicting a barn swallow.

However, the researchers noted that these concepts are often generated in advance by humans or large language models, which may not always match the needs of the task. Even when a model is given a fixed concept set, it can still rely on hidden information not visible to users, a challenge known as information leakage.

“These models are trained to maximize performance, so the model might secretly use concepts we are unaware of,” De Santis said in a media statement.

The team’s solution was to tap into the knowledge the model had already acquired from large volumes of training data. Using a sparse autoencoder, the method isolates the most relevant learned features and reconstructs them into a small number of concepts. A multimodal large language model then describes each concept in simple language and labels the training images by marking which concepts are present or absent.

The annotated dataset is then used to train a concept bottleneck module, which is inserted into the target model. This forces the model to make predictions using only the extracted concepts.
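The extraction step can be sketched in miniature. This is a heavily simplified toy, not the paper’s architecture: the real method trains a sparse autoencoder on the model’s internal features and uses a multimodal language model to name each latent unit, whereas here the weights, feature values, and concept labels are all invented for illustration.

```python
# Toy "sparse encoder": keep only the top-k activations, each of which
# has been given a plain-language concept label.

def relu(x):
    return [max(0.0, v) for v in x]

def encode(features, W, k=2):
    # Linear map followed by ReLU, then zero out all but the k largest activations
    acts = relu([sum(w * f for w, f in zip(row, features)) for row in W])
    threshold = sorted(acts, reverse=True)[k - 1]
    return [a if a >= threshold and a > 0 else 0.0 for a in acts]

# Hypothetical labels a language model might assign to each latent unit
concept_names = ["yellow legs", "blue wings", "curved beak", "spotted chest"]

features = [0.7, 0.1, 0.9]                 # internal features of one image
W = [[1.0, 0.0, 0.2],                      # invented encoder weights
     [0.0, 1.0, 0.0],
     [0.5, 0.5, 0.5],
     [0.0, 0.0, 1.0]]

acts = encode(features, W, k=2)
present = [name for name, a in zip(concept_names, acts) if a > 0]
```

The sparsity constraint (only `k` surviving activations) is what forces each image to be described by a short, inspectable list of concepts.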

The researchers said one of the biggest challenges was ensuring that the automatically identified concepts were both accurate and understandable to humans. To reduce the risk of hidden reasoning, the model is limited to just five concepts for each prediction, encouraging it to focus only on the most relevant information and making the explanation easier to follow.

When tested against state-of-the-art concept bottleneck models on tasks including bird species classification and skin lesion identification, the new method delivered the highest accuracy while also producing more precise explanations, according to the researchers. It also generated concepts that were more relevant to the images in the dataset.

Still, the team acknowledged that the broader challenge of balancing accuracy and interpretability remains unresolved.

“We’ve shown that extracting concepts from the original model can outperform other CBMs, but there is still a tradeoff between interpretability and accuracy that needs to be addressed. Black-box models that are not interpretable still outperform ours,” De Santis said in a media statement.

Looking ahead, the researchers plan to explore ways to further reduce information leakage, possibly by adding additional concept bottleneck modules. They also aim to scale up the method by using a larger multimodal language model to annotate a larger training dataset, which could improve performance further.

This latest work adds to growing efforts to make AI systems not only more powerful, but also more understandable in domains where trust can be as important as accuracy.


Space & Physics

Researchers Develop Stretchable Material That Can Instantly Switch How It Conducts Heat

MIT engineers have developed a stretchable material that can rapidly switch how it conducts heat, enabling adaptive cooling applications.


Experiments show that a fibre made from a widely used polymer can reversibly change how it conducts heat when stretched. Image credit: Courtesy of the researchers/MIT


Engineers at the Massachusetts Institute of Technology have developed a new polymer material that can rapidly and reversibly switch how it conducts heat—simply by being stretched.

The research shows that a commonly used soft polymer, known as an olefin block copolymer (OBC), can more than double its thermal conductivity when stretched, shifting from heat-handling behaviour similar to plastic to levels closer to marble. When the material relaxes back to its original form, its heat-conducting ability drops again, returning to its plastic-like state.
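Fourier’s law makes the practical effect concrete. The conductivity values below are illustrative assumptions for a soft polymer and a more-than-doubled stretched state, not the paper’s measured numbers.

```python
# Steady-state heat flux through a thin slab: q = k * dT / L (Fourier's law)

def heat_flux(k, dT, L):
    """Heat flux in W/m^2 for conductivity k (W/m.K), temperature
    difference dT (K), and slab thickness L (m)."""
    return k * dT / L

k_relaxed = 0.2    # W/m.K, a typical value for a soft polymer (assumed)
k_stretched = 0.5  # W/m.K, assuming a more-than-2x increase under strain

dT, L = 10.0, 1e-3  # 10 K across a 1 mm film
q_relaxed = heat_flux(k_relaxed, dT, L)
q_stretched = heat_flux(k_stretched, dT, L)
print(q_stretched / q_relaxed)  # roughly 2.5x more heat carried when stretched
```

Since flux scales linearly with conductivity, even a doubling of k doubles how fast a garment or device sheds heat at the same temperature difference.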

The transition happens extremely fast—within just 0.22 seconds—making it the fastest thermal switching ever observed in a material, according to the researchers.

The findings open up possibilities for adaptive materials that respond to temperature changes in real time, with potential applications ranging from cooling fabrics and wearable technology to electronics, buildings, and infrastructure.

A new direction for adaptive materials

“We need materials that are inexpensive, widely available, and able to adapt quickly to changing environmental temperatures,” said Svetlana Boriskina, principal research scientist in MIT’s Department of Mechanical Engineering, in a media statement. She explained that the discovery of rapid thermal switching in this polymer creates new opportunities to design materials that actively manage heat rather than passively resisting it.

The research team initially began studying the material while searching for more sustainable alternatives to spandex, a petroleum-based elastic fabric that is difficult to recycle. During mechanical testing, the researchers noticed unexpected changes in how the polymer handled heat as it was stretched and released.

“What caught our attention was that the material’s thermal conductivity increased when stretched and decreased again when relaxed, even after thousands of cycles,” said Duo Xu, a co-author of the study, in a media statement. He added that the effect was fully reversible and occurred while the material remained largely amorphous, which contradicted existing assumptions in polymer science.

The discovery shows that a material’s heat conduction can be actively controlled in real time, allowing it to respond dynamically to temperature changes.

How stretching unlocks heat flow

At the microscopic level, most polymers consist of tangled chains of carbon atoms that block heat flow. The MIT team found that stretching the olefin block copolymer temporarily straightens these tangled chains and aligns small crystalline regions, creating clearer pathways for heat to travel through the material.

Unlike earlier work on polyethylene—where similar alignment permanently increased thermal conductivity—the new material does not crystallise under strain. Instead, its internal structure switches back and forth between straightened and tangled states, allowing repeated and reversible thermal switching.

“This gives the material the ability to toggle its heat conduction thousands of times without degrading,” Xu said.

From smart clothing to cooler electronics

The researchers say the material could be engineered into fibres for clothing that normally retain heat but instantly dissipate excess warmth when stretched. Similar concepts could be applied to electronics, laptops, and buildings, where materials could respond dynamically to overheating without external cooling systems.

“The difference in heat dissipation is similar to the tactile difference between touching plastic and touching marble,” Boriskina said in a media statement, highlighting how noticeable the effect can be.

The team is now working on optimising the polymer’s internal structure and exploring related materials that could produce even larger thermal shifts.

“If we can further enhance this effect, the industrial and societal impact could be substantial,” Boriskina said.

Researchers say advances in stretchable material heat conduction could significantly influence future designs of smart textiles, electronics cooling, and energy-efficient buildings.

The study has been published in the journal Advanced Materials. The authors include researchers from MIT and the Southern University of Science and Technology in China.



Space & Physics

Physicists Capture ‘Wakes’ Left by Quarks in the Universe’s First Liquid

Scientists at CERN’s Large Hadron Collider have observed, for the first time, fluid-like wakes created by quarks moving through quark–gluon plasma, offering direct evidence that the universe’s earliest matter behaved like a liquid rather than a cloud of free particles.


Image credit: Jose-Luis Olivares, MIT

Physicists working at CERN (the European Organization for Nuclear Research) have reported the first direct experimental evidence that quark–gluon plasma, the primordial matter that filled the universe moments after the Big Bang, behaves like a true liquid.

Using heavy-ion collisions at the Large Hadron Collider, researchers recreated the extreme conditions of the early universe and observed that quarks moving through this plasma generate wake-like patterns, similar to ripples trailing a duck across water.

The study, led by physicists from the Massachusetts Institute of Technology, shows that the quark–gluon plasma responds collectively, flowing and splashing rather than scattering randomly.

“It has been a long debate in our field, on whether the plasma should respond to a quark,” said Yen-Jie Lee in a media statement. “Now we see the plasma is incredibly dense, such that it is able to slow down a quark, and produces splashes and swirls like a liquid. So quark-gluon plasma really is a primordial soup.”

Quark–gluon plasma is believed to be the first liquid to have existed in the universe and the hottest ever observed, reaching temperatures of several trillion degrees Celsius. It is also considered a near-perfect liquid, flowing with almost no resistance.

To isolate the wake produced by a single quark, the team developed a new experimental technique. Instead of tracking pairs of quarks and antiquarks—whose effects can overlap—they identified rare collision events that produced a single quark traveling in the opposite direction of a Z boson. Because a Z boson interacts weakly with its surroundings, it acts as a clean marker, allowing scientists to attribute any observed plasma ripples solely to the quark.
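The selection logic can be sketched as a toy filter. Everything here is illustrative: the event records, angles, and tolerance are invented, and the real CMS analysis involves full detector reconstruction rather than a two-field dictionary.

```python
import math

def is_back_to_back(phi_z, phi_recoil, tol=math.pi / 4):
    """True if the recoiling activity lies within `tol` of exactly
    opposite the Z boson in azimuthal angle (phi, in radians)."""
    # Wrap the angular difference into (-pi, pi], then compare to pi
    dphi = abs((phi_z - phi_recoil + math.pi) % (2 * math.pi) - math.pi)
    return abs(dphi - math.pi) < tol

# Toy events: azimuthal angle of the Z boson and of the recoiling activity
events = [
    {"phi_z": 0.1, "phi_recoil": 3.2},  # roughly opposite: candidate quark wake
    {"phi_z": 1.0, "phi_recoil": 1.4},  # same side: rejected
]

tagged = [e for e in events if is_back_to_back(e["phi_z"], e["phi_recoil"])]
```

Because the Z boson passes through the plasma essentially undisturbed, any collective flow seen opposite a tagged Z can be attributed to the single recoiling quark.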

“We have figured out a new technique that allows us to see the effects of a single quark in the QGP, through a different pair of particles,” Lee said.

Analysing data from around 13 billion heavy-ion collisions, the researchers identified roughly 2,000 Z-boson events. In these cases, they consistently observed fluid-like swirls in the plasma opposite to the Z boson’s direction—clear signatures of quark-induced wakes.

The results align with theoretical predictions made by MIT physicist Krishna Rajagopal, whose hybrid model suggested that quarks should drag plasma along as they move through it.

“This is something that many of us have argued must be there for a good many years, and that many experiments have looked for,” Rajagopal said.

“We’ve gained the first direct evidence that the quark indeed drags more plasma with it as it travels,” Lee added. “This will enable us to study the properties and behavior of this exotic fluid in unprecedented detail.”

The research was carried out by members of the CMS Collaboration using the Compact Muon Solenoid detector at CERN. The open-access study has been published in the journal Physics Letters B.

