
Space & Physics

Why does superconductivity matter?

Dr. Saurabh Basu

A high-temperature (liquid nitrogen cooled) superconductor levitating above a permanent magnet (TU Dresden). Credit: Henry Mühlpfordt/Wikimedia Commons

Superconductivity was discovered by H. Kamerlingh Onnes on April 8, 1911, while he was studying the resistance of solid mercury (Hg) at cryogenic temperatures; helium had been liquefied only a few years earlier. At T = 4.2 K, the resistance of Hg disappeared abruptly. This marked a transition to a phase never seen before: a resistanceless, strongly diamagnetic, new state of matter. Kamerlingh Onnes reported the result to the KNAW (the Royal Netherlands Academy of Arts and Sciences), and it was he who named the zero-resistance state 'superconductivity'.

Another discovery in the same series of experiments went unnoticed at the time: the λ transition of helium (He) at 2.2 K, below which He becomes a superfluid. However, we shall skip that discussion for now. A couple of years later, superconductivity was found in lead (Pb) at 7 K, and much later, in 1941, niobium nitride (NbN) was found to superconduct below 16 K. The burning question in those days was: what would the conductivity or resistivity of metals be at very low temperatures?

The question arose from Lord Kelvin's suggestion that the resistivity of a metal first decreases with falling temperature but then climbs to infinity at absolute zero, because the electrons' mobility vanishes at 0 K, yielding zero conductivity and hence infinite resistivity. Kamerlingh Onnes and his assistant Jacob Clay studied the resistance of gold (Au) and platinum (Pt) down to T = 14 K and found a linear decrease in resistance down to that point; lower temperatures could not be reached owing to the unavailability of liquid He, which finally became available in 1908.

Heike Kamerlingh Onnes (right), the discoverer of superconductivity. Paul Ehrenfest, Hendrik Lorentz, Niels Bohr stand to his left.

The experiment with Au and Pt was indeed repeated after 1908. For both metals, the resistivity levelled off at a small, constant residual value below about 4.2 K; neither becomes a superconductor. Lord Kelvin's notion of infinite resistivity at very low temperatures was therefore incorrect. For mercury, Onnes found that at 3 K (below the transition) the normalised resistance is about 10⁻⁷, while above 4.2 K the resistivity reappears. The transition is remarkably sharp: the resistance falls to zero within a temperature window of about 10⁻⁴ K.

Perfect conductors, superconductors, and magnets

All superconductors are normal metals above the transition temperature. If we ask where in the periodic table most superconductors are located, the answer throws up some surprises. The good metals are rarely superconducting: Ag, Au, Cu, Cs, etc., have transition temperatures of the order of ~0.1 K at most, while the bad metals, such as niobium alloys, copper oxides, and MgB2, have considerably higher transition temperatures. Thus, bad metals are, in general, good superconductors.

An important quantity in this regard is the mean free path of the electrons. Above Tc, the mean free path in the bad metals (good superconductors) is only a few Å, because the electrons are strongly coupled to phonons, whereas in the good metals (bad superconductors) it is usually a few hundred Å.

In the periodic table, elements such as Al, Bi, Cd, Ga, etc., become superconductors, while the 3d transition elements Cr, Mn, and Fe do not superconduct and in fact form good magnets. The orbital overlap is large in superconductors, while in the magnets the electrons are more localised. For all of them, whether superconductors or magnets, there is a large density of states at the Fermi level: a lot of electronic states are necessary for the electrons in these systems to condense into a superconducting (or even a magnetic) state. The nature of the electronic wavefunction then decides which order develops: wavefunctions with a large spatial extent favour superconductivity, while short-ranged ones favour magnetism.
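The mean-free-path figures quoted above can be checked with a back-of-the-envelope Drude-model estimate. The sketch below uses standard textbook constants and the usual room-temperature values for copper's conductivity, carrier density, and Fermi velocity, and recovers the few-hundred-Å scale for a good metal:

```python
# Rough Drude-model estimate of the electronic mean free path in a good
# metal (copper at room temperature). All inputs are standard textbook
# values; the point is only the order of magnitude.

ELECTRON_MASS = 9.109e-31      # kg
ELECTRON_CHARGE = 1.602e-19    # C

def mean_free_path(conductivity, carrier_density, fermi_velocity):
    """l = v_F * tau, with the Drude relaxation time tau = m*sigma/(n*e^2)."""
    tau = ELECTRON_MASS * conductivity / (carrier_density * ELECTRON_CHARGE**2)
    return fermi_velocity * tau

# Copper: sigma ~ 5.96e7 S/m, n ~ 8.5e28 m^-3, v_F ~ 1.57e6 m/s
l_cu = mean_free_path(5.96e7, 8.5e28, 1.57e6)
print(f"Copper mean free path ~ {l_cu * 1e10:.0f} angstrom")
```

The result comes out at a few hundred Å, consistent with the good-metal (bad-superconductor) case described above.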

Meissner effect

The near-complete expulsion of a magnetic field from a superconducting specimen is called the Meissner effect. In the presence of a magnetic field, screening current loops are generated at the periphery of the specimen so as to block the entry of the external field. If a magnetic field did exist within a superconductor then, by Ampère's law, there would be a normal current within the sample; but there is no normal current inside the specimen, so there can be no magnetic field either. For this reason, superconductors are known as perfect diamagnets, with a very large diamagnetic susceptibility. Even the best-known non-superconducting diamagnets have magnetic susceptibilities only of the order of 10⁻⁵. Diamagnetism can therefore be regarded as a defining property of superconductors, distinct from zero electrical resistance.

A typical experiment demonstrating the Meissner effect runs as follows: take a superconducting sample (T < Tc), sprinkle iron filings around it, and switch on the magnetic field. The iron filings line up in concentric circles around the specimen, tracing the flux lines that have been expelled from the sample.
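The expulsion can be made quantitative with the London equation, under which an applied field decays exponentially inside the superconductor over the London penetration depth λ_L = √(m/(μ₀ n_s e²)). A minimal sketch, assuming an illustrative superfluid carrier density of n_s ~ 10²⁹ m⁻³ (not the value for any particular material):

```python
import math

MU0 = 4 * math.pi * 1e-7       # vacuum permeability, T*m/A
ELECTRON_MASS = 9.109e-31      # kg
ELECTRON_CHARGE = 1.602e-19    # C

def london_penetration_depth(n_s):
    """lambda_L = sqrt(m / (mu0 * n_s * e^2)),
    where n_s is the superfluid carrier density in m^-3."""
    return math.sqrt(ELECTRON_MASS / (MU0 * n_s * ELECTRON_CHARGE**2))

def field_inside(B0, depth, lam):
    """B(x) = B0 * exp(-x/lambda_L): the field decays exponentially
    inside the superconductor instead of penetrating the bulk."""
    return B0 * math.exp(-depth / lam)

lam = london_penetration_depth(1e29)   # assumed n_s ~ 1e29 m^-3
print(f"lambda_L ~ {lam * 1e9:.0f} nm")
print(f"Fraction of field surviving at 5 lambda_L: {field_inside(1.0, 5 * lam, lam):.4f}")
```

The field is essentially gone a few tens of nanometres below the surface, which is why the bulk of the specimen behaves as a perfect diamagnet.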

Distinction between perfect conductors and superconductors

The distinction between a perfect conductor and a superconductor is brought out by the field-cooled (FC) and zero-field-cooled (ZFC) protocols, as shown below in Fig. 1.

[Fig. 1: field-cooled (FC) and zero-field-cooled (ZFC) behaviour of a perfect conductor and a superconductor]

In the zero-field-cooled case, the temperature is lowered from T > Tc to T < Tc in the absence of an external magnetic field, for both the perfect conductor and the superconductor (left panels of Fig. 1). Then a magnetic field is applied, which is expelled from the superconductor owing to the Meissner effect, and finally the field is withdrawn. If, however, the cooling is done in the presence of an external field, then after the field is withdrawn the flux lines remain trapped in a perfect conductor, while the superconductor is left with no memory of the applied field, just as in the zero-field-cooled case. So superconductors have no memory, while perfect conductors do.

Microscopic considerations: BCS theory

The first microscopic theory of superconductivity was proposed by Bardeen, Cooper, and Schrieffer (BCS) in 1957 and earned them the Nobel Prize in 1972. The underlying assumption is that an attractive interaction between electrons is possible, mediated by phonons. Under certain conditions, two electrons in the vicinity of the filled Fermi sea, within an energy range ℏω_D of it (set by the phonons, i.e., the lattice), form a bound pair. The involvement of the lattice is confirmed by the isotope effect experiment, which shows that the transition temperature depends on the ionic mass (Tc ∝ M⁻¹ᐟ²); since the Debye frequency depends on the ionic mass, the lattice must be involved. A small calculation shows that an attractive interaction is possible in this narrow range of energy. The attraction makes the system unstable, and a long-range order develops via symmetry breaking.

In his book, Schrieffer, one of the discoverers, described an analogy with a dance floor full of couples: each couple dances together, completely oblivious to every other couple in the room. While dancing, the couples drift from one end of the room to the other, yet never collide with each other. This is the picture of nearly dissipationless transport in a superconductor.

The BCS theory explained most features of the superconductors known at the time: (i) the discontinuity of the specific heat at the transition temperature Tc; (ii) the involvement of the lattice via the isotope effect; (iii) estimates of Tc and the energy gap, both confirmed by tunnelling experiments across metal-superconductor (M-S) and metal-insulator-superconductor (MIS) junctions, for which Giaever was awarded the Nobel Prize in 1973; (iv) the Meissner effect, which can be explained within a linear-response treatment; and (v) the temperature dependence of the energy gap, whose gradual vanishing at Tc confirms a second-order phase transition.

Most features of conventional superconductors can be explained using BCS theory. Another salient feature of the theory is that it is non-perturbative: there is no small parameter in the problem. The calculation is variational, minimising the energy with respect to free parameters of a trial state known as the BCS wavefunction.
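The weak-coupling BCS estimates of Tc and the energy gap mentioned above can be sketched numerically. The Debye temperature and coupling constant below are illustrative assumptions, not values for any specific material:

```python
import math

KB = 1.381e-23      # Boltzmann constant, J/K
HBAR = 1.055e-34    # reduced Planck constant, J*s
EV = 1.602e-19      # J per eV

def bcs_tc(omega_debye, coupling):
    """Weak-coupling BCS estimate: kB*Tc = 1.13 * hbar*omega_D * exp(-1/(N0*V)),
    where omega_D is the Debye frequency and N0*V the dimensionless coupling."""
    return 1.13 * HBAR * omega_debye / KB * math.exp(-1.0 / coupling)

def gap_at_zero(tc):
    """Zero-temperature gap, Delta(0) = 1.764 * kB * Tc (a BCS universal ratio)."""
    return 1.764 * KB * tc

# Illustrative inputs: a Debye temperature of 300 K and a coupling N0*V = 0.25.
omega_d = KB * 300.0 / HBAR
tc = bcs_tc(omega_d, 0.25)
print(f"Tc ~ {tc:.1f} K, Delta(0) ~ {gap_at_zero(tc) / EV * 1e3:.2f} meV")

# Isotope effect: omega_D scales as M^(-1/2), so a 10% heavier isotope
# lowers Tc by about 5%, in line with the isotope experiments.
tc_heavy = bcs_tc(omega_d / math.sqrt(1.10), 0.25)
print(f"Tc for a 10% heavier isotope ~ {tc_heavy:.1f} K")
```

Note how Tc depends exponentially on the coupling: this sensitivity is one reason there is no small parameter to expand in, and why the BCS treatment has to be non-perturbative.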

Unconventional Superconductors: High-Tc Cuprates

This is a class of superconductors in which two-dimensional copper oxide planes play the main role: superconductivity occurs in these planes. Doping the planes with mobile carriers makes the system unstable towards superconducting correlations. At zero doping, the system is an antiferromagnetic insulator (see Fig. 2). With about 15% to 20% doping with foreign elements such as strontium (Sr) (for example, in La2−xSrxCuO4), the system turns superconducting. Two things are surprising here: (i) the proximity of the insulating state to the superconducting state; and (ii) that a system initially in the superconducting state, as the temperature is raised, does not go into an ordinary metallic state but instead shows several unfamiliar features very unlike the known Fermi-liquid characteristics. It is called a strange metal.

[Fig. 2: schematic phase diagram of the doped cuprates]

In fact, there are signatures of pre-formed pairs in this 'so-called' metallic state, known as the pseudogap phase. Since the starting point from which one should build a theory is missing, a complete understanding of the mechanism behind the phenomenon is still lacking. It remains a theoretical riddle.
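The doping dependence of Tc in the hole-doped cuprates is often summarised by the empirical Presland-Tallon parabola, Tc(p) = Tc,max[1 − 82.6(p − 0.16)²]. A small sketch (taking Tc,max ≈ 38 K, roughly the textbook value for La2−xSrxCuO4) reproduces the superconducting "dome" between about 5% and 27% doping:

```python
def cuprate_tc(p, tc_max):
    """Empirical Presland-Tallon parabola for hole-doped cuprates:
    Tc(p) = tc_max * (1 - 82.6 * (p - 0.16)**2), which is positive
    only for roughly 0.05 < p < 0.27; outside that range the
    material is not superconducting."""
    tc = tc_max * (1.0 - 82.6 * (p - 0.16) ** 2)
    return max(tc, 0.0)

# tc_max ~ 38 K is an assumed, textbook-level value for La(2-x)Sr(x)CuO4.
for p in (0.05, 0.10, 0.16, 0.20, 0.27):
    print(f"doping p = {p:.2f}: Tc ~ {cuprate_tc(p, 38.0):.1f} K")
```

The optimal doping near p ≈ 0.16 matches the 15% to 20% range quoted above; the formula is descriptive, not a theory of the mechanism.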

Dr. Saurabh Basu is a Professor at the Department of Physics, Indian Institute of Technology (IIT) Guwahati. He works in the area of correlated electron systems, with a main focus on bosonic superfluidity in (optical) lattices.


Inside India’s Semiconductor Push: ‘This Is a 100-Year Bet’

This is not an industry that rewards speed alone; it demands persistence, coordination, and long-term commitment. In semiconductors, success is not measured in years, but built over generations.

Dipin Damodharan

Swaroop Ganguly and Udayan Ganguly

In a conversation with Education Publica Editor Dipin Damodharan, leading semiconductor researchers Swaroop Ganguly and Udayan Ganguly delve into the science, strategy, and systemic challenges shaping India’s chip ambitions. Both are professors in the Department of Electrical Engineering at the Indian Institute of Technology Bombay. Swaroop Ganguly currently leads SemiX—the institute’s semiconductor initiative that brings together expertise across disciplines to advance India’s capabilities in the sector. Udayan Ganguly previously headed SemiX. India’s semiconductor journey, they argue, is only just beginning. The foundations—policy, infrastructure, talent, and partnerships—are being put in place, but the real challenge lies ahead. This is not an industry that rewards speed alone; it demands persistence, coordination, and long-term commitment. In semiconductors, success is not measured in years, but built over generations. Edited excerpts:

India formally launched the semiconductor mission in 2021. Five years on, where does the country stand today?

Swaroop Ganguly:

The India Semiconductor Mission really began taking shape around 2021, but for a couple of years it was largely policy without visible industry participation. The turning point came around 2023 with the approval of the Micron packaging facility. That was important not just as a project, but as a signal—that global companies were willing to invest in India.

Following that, we saw a series of announcements, particularly in packaging and assembly. Now, packaging is not the highest value-add segment in the semiconductor value chain, but it is still a very important step. It generates employment, it helps build supporting capabilities, and it allows the ecosystem to start forming.


But the real centrepiece—the crown of the semiconductor ecosystem—is the fabrication facility, or fab. That is where silicon wafers are actually processed into chips. We now have at least one major fab announcement, and that is a very significant milestone.

At the same time, we should be careful not to judge progress too quickly. This is not an industry where outcomes can be evaluated in five years. The correct time horizon is at least 10 to 15 years.


Why did India take so long to enter this space, especially given its strength in technology?

Swaroop Ganguly:

It’s not entirely accurate to say India never tried. There were attempts in the past. In fact, in the 1980s, India had a silicon fabrication facility in Chandigarh that was not very far behind global standards at that time.

Unfortunately, that facility was destroyed in a fire, and that event set India back significantly—by decades, in fact. But the loss was not just infrastructure. It was also talent. Many of the people who were working there moved abroad and went on to become leaders in global semiconductor companies.

When you lose something like that, you don’t just lose a facility—you lose the continuity of knowledge, mentorship, and ecosystem-building. That has long-term consequences.

After that, the global semiconductor industry moved very fast, and re-entering it became increasingly difficult. It required a level of policy support and industrial coordination that did not exist at the time. That is what has changed with the India Semiconductor Mission.


How should we interpret the progress under India Semiconductor Mission 1.0 (ISM 1.0)? Has it delivered what was expected?

Swaroop Ganguly:

I think it would be a mistake to look at ISM 1.0 as something that should have delivered results within five years. This industry demands a long-term, patient approach.

ISM 1.0 has led to the approval of multiple manufacturing-related units, most of them in packaging. That is actually a sensible place to begin. Countries like Taiwan and South Korea also started their semiconductor journeys with packaging before moving up the value chain.

There has also been progress in specialty areas such as compound semiconductors, which are used in applications like power electronics, renewable energy, and communications.

So overall, I would say the direction is correct. But the success of ISM should be evaluated over a much longer period—10 to 15 years at least.

So India Semiconductor Mission (ISM) 2.0 is not a reset, but an expansion?

Swaroop Ganguly:

Exactly. ISM 2.0 should be seen as an expansion of scope.

In ISM 1.0, the focus was largely on attracting manufacturing—fabs and packaging units. Now, the thinking is evolving towards building a more complete ecosystem.

That means looking at materials, chemicals, gases, equipment, and all the ancillary industries that support semiconductor manufacturing. At the same time, there is increasing emphasis on research, innovation, education, and training.

This is important because semiconductors are not a one-time investment. As we often say, this is not a bandwagon you jump onto—it’s a treadmill.

What do you mean by that analogy?

Swaroop Ganguly:

The treadmill analogy simply means that once you enter this industry, you have to keep moving. If you stop, you fall off.

Udayan Ganguly:

Yes, and the reason is very simple. The industry evolves continuously. Every couple of years, chips become more powerful, more efficient, more densely packed.

If you don’t keep up with that pace of innovation, your products become uncompetitive. Unlike many other industries, you cannot just build a plant and continue producing the same thing for decades.


For a layperson, what does this “semiconductor moment” actually mean for India?

Udayan Ganguly:

Think about everything you do today—medicine, education, transportation, entertainment. All of it runs on semiconductors.

Now imagine that every time you engage in any of these activities, you are effectively paying someone else for that underlying technology.

You go to a doctor—you are paying a semiconductor fee.

You drive a car—you are paying a semiconductor fee.

You watch a movie—you are paying a semiconductor fee.

So the question is: can a country continue to grow while constantly paying for the technological backbone of its economy?

So this is fundamentally about control over technology?

Udayan Ganguly:

Absolutely.

If India does not control semiconductors to some extent, we are basically fighting a losing battle. This is not just about manufacturing chips—it is about controlling the substrate on which modern society operates.

And this is not a short-term project. This is a 100-year bet. Even building meaningful capability will take at least 30 years.

What are the biggest challenges India faces in this journey?

Udayan Ganguly:

There are three core challenges: technology, talent, and governance.

On technology, the reality is that only a handful of companies globally have access to cutting-edge capabilities. These are not technologies that can simply be purchased at cost.

So India will have to start with slightly older technologies, which is perfectly fine. That is how most countries begin.

On talent, it is not just about having engineers—it is about having deep know-how. The ability to solve problems, innovate, and adapt.

And on governance, this is not a free-market industry. It requires sustained policy support and coordination. Without that, it cannot take off.


What role do startups and academia play in this ecosystem?

Swaroop Ganguly:

They are central to innovation.

India has had design centres of global semiconductor companies for decades. But what we have not had is a large number of products that are designed, owned, and commercialised by Indian companies.

That is where startups and academia come in.

Innovation typically emerges from these spaces—either from academic research translating into startups, or from experienced professionals building new companies.

Can startups play a role in manufacturing as well?

Swaroop Ganguly:

Manufacturing is much more capital-intensive, so it is difficult for startups to enter that space in the conventional sense.

However, there are opportunities in specialised areas—materials, processes, equipment components—where startups can contribute.

Academia also plays a critical role, particularly in advancing research that can feed into industry.

Is there a missing link in India’s semiconductor ecosystem today?

Udayan Ganguly:

Yes—R&D infrastructure.

Globally, there are dedicated semiconductor research centres where new ideas can be tested at scale without disrupting commercial manufacturing.

These centres act as a bridge between academia and industry.

India needs similar facilities. Without them, it becomes difficult to translate research into real-world applications.

What about talent—are we producing enough skilled people?

Udayan Ganguly:

We have strong core capability, but we need to scale significantly.

To meet the demands of a domestic semiconductor ecosystem, we probably need to increase our talent pool by at least ten times.

And this is no longer just about selecting the best candidates. It is about building a pipeline—training, education, and capacity-building across institutions.


Is semiconductor engineering limited to electronics?

Swaroop Ganguly:

Not at all. That is a common misconception.

Semiconductor manufacturing is highly interdisciplinary. It involves physics, chemistry, materials science, and mechanical engineering.

For example, consider a thermal processing step in fabrication. A wafer can be heated from room temperature to over 1000°C in a matter of seconds and then cooled rapidly. That involves complex thermal and mechanical engineering.

So the opportunities extend far beyond traditional electronics.

Who are the key stakeholders in building this ecosystem?

Swaroop Ganguly:

It essentially comes down to three groups: academia, industry, and government.

These three must work together very closely. Without that collaboration, the ecosystem cannot develop.

Government provides policy and support. Industry drives manufacturing and commercialisation. Academia contributes research, talent, and innovation.


Does India need to increase its R&D spending?

Swaroop Ganguly:

Spending is already increasing, which is a positive sign.

But equally important is how that money is used. There are global models where competing companies collaborate on early-stage research, pooling resources and working with academia.

Such models can significantly improve the effectiveness of R&D investment.

Finally, are you optimistic about India’s semiconductor journey?

Udayan Ganguly:

Yes, broadly.

The policy direction is strong, and the incentives are competitive. But this is not something that will succeed automatically.

It requires sustained effort over decades.

Swaroop Ganguly:

Exactly. The direction is right, but the time horizon is long. This is not a sprint—it is a marathon.



JWST study reveals how rare exoplanet pair formed

MIT study uses JWST to decode a rare exoplanet system, revealing how mini-Neptunes form beyond the frost line.

Image credit: Jose-Luis Olivares, MIT

Astronomers have uncovered fresh clues about how distant worlds form, thanks to a new JWST mini-Neptune study that examines a rare planetary system 190 light years away. Using NASA’s powerful space telescope, researchers analysed the atmosphere of a small gas planet orbiting unusually close to its star — and found evidence that challenges long-held assumptions about where such planets originate.

In a discovery that’s quietly reshaping how astronomers think about planet formation, scientists have uncovered new clues behind one of the Milky Way’s strangest planetary pairings — a hot Jupiter and a mini-Neptune orbiting the same star.

The finding by scientists from MIT, based on observations from NASA’s James Webb Space Telescope (JWST), suggests that these two unlikely neighbours didn’t form where they are today. Instead, they likely began life much farther out in their star system and gradually migrated inward — staying together against the odds. The study, which appeared in The Astrophysical Journal Letters, reveals new measurements of the mini-Neptune’s atmosphere.

JWST mini-Neptune study: A rare planetary pairing

The system, located about 190 light years from Earth, has puzzled astronomers since its discovery in 2020. Hot Jupiters — massive gas giants that orbit very close to their stars — are usually “lonely,” with no nearby planetary companions.

But this one breaks the rule.

“This is the first time we’ve observed the atmosphere of a planet that is inside the orbit of a hot Jupiter. This measurement tells us this mini-Neptune indeed formed beyond the frost line,” says Saugata Barat, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research and the lead author of the study.

“This was a one-of-a-kind system,” Chelsea X. Huang, a faculty member at the University of Southern Queensland, said in a media statement, explaining how such massive planets typically scatter away anything inside their orbit.

Yet in this case, a smaller mini-Neptune somehow survives closer to the star, orbiting every four days, while the hot Jupiter circles every eight.

Back in 2020, Chelsea Huang — then a Torres Postdoctoral Fellow at MIT — spotted something unusual: a mini-Neptune orbiting its star alongside an unexpected companion, a hot Jupiter.

JWST captures a crucial clue

To understand how this system formed, researchers from MIT and international institutions turned to JWST, focusing on the inner planet, TOI-1130b.

What they found was telling.

The mini-Neptune’s atmosphere is unusually “heavy,” rich in water vapour, carbon dioxide, sulfur dioxide, and traces of methane — a composition that shouldn’t exist if the planet formed close to its star.

JWST mini-Neptune study: Rethinking planet formation

That “frost line” — the region in a young star system where temperatures are low enough for ice to form — appears to be central to the story.

Scientists now believe both planets likely formed in this colder, outer region, where icy materials helped build dense atmospheres. Over time, they migrated inward together, maintaining their unusual orbital arrangement.

The findings challenge earlier assumptions that mini-Neptunes forming close to stars should have lighter atmospheres dominated by hydrogen and helium.

A system that shouldn’t exist — but does

Even observing the system was no easy task. The two planets are in what astronomers call a “mean motion resonance,” subtly tugging at each other’s orbits and making their movements harder to predict.

“It was a challenging prediction, and we had to be spot-on,” Barat said, referring to the effort required to time JWST’s observations precisely.

JWST mini-Neptune study: Why this matters

Mini-Neptunes are among the most common planets in the galaxy, yet none exist in our own solar system — making them both familiar and mysterious.

The study offers the clearest evidence yet that such planets can form far from their stars and migrate inward, carrying their atmospheres with them.

“This system represents one of the rarest architectures that astronomers have ever found,” Barat said in a media statement.

And in a universe full of planets, that rarity might just hold the key to understanding how many of them — including worlds very different from our own — come to be.



Researchers Develop Ultra-Efficient Chip for Post-Quantum Security in Medical Devices

The breakthrough addresses a critical vulnerability in next-generation healthcare technology as quantum computing advances threaten current encryption standards.


Breakthrough Enables Strong Encryption on Tiny, Power-Constrained Devices

Researchers at the Massachusetts Institute of Technology have developed a highly energy-efficient microchip capable of running advanced post-quantum cryptography (PQC) on small, power-limited devices such as pacemakers, insulin pumps, and ingestible sensors. The breakthrough addresses a critical vulnerability in next-generation healthcare technology as quantum computing advances threaten current encryption standards.

The chip, roughly the size of a needle tip, integrates robust security features designed to protect sensitive patient data while maintaining extremely low power consumption. This makes it suitable for wireless biomedical devices that have historically lacked strong encryption due to energy constraints.

Why Post-Quantum Cryptography Matters

As quantum computers evolve, traditional encryption methods are expected to become obsolete. Governments and regulatory bodies, including the National Institute of Standards and Technology (NIST), are already preparing to transition toward PQC algorithms to safeguard digital infrastructure.

However, PQC techniques are computationally intensive, often increasing energy usage by 100 to 1,000 times — which, until now, has made them impractical for small, battery-powered devices.

This new chip bridges that gap by enabling advanced encryption without significantly increasing energy demand.

Key Innovations Behind the Chip

Multi-Layered Security Design

The chip incorporates multiple PQC algorithms to ensure long-term resilience, even if one encryption method becomes vulnerable in the future.

Built-in Random Number Generator

A highly efficient on-chip random number generator strengthens encryption by producing secure cryptographic keys internally, eliminating reliance on external components.

Protection Against Physical Attacks

The design includes safeguards against “power side-channel attacks,” where hackers attempt to extract data by analyzing power consumption patterns.
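The chip's countermeasures are hardware-level, but the guiding principle — never letting secret data influence externally observable behaviour — also appears in software cryptography. As a loose software analogue (timing rather than power analysis, and not a description of how the MIT chip itself works), compare a naive byte-by-byte check, which leaks where the first mismatch occurs, with Python's constant-time comparison:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns as soon as the first mismatching byte is found,
    # so an attacker measuring response time can recover a secret
    # one byte at a time.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest takes time independent of where the inputs differ,
    # denying the attacker that side channel.
    return hmac.compare_digest(a, b)

secret = b"correct-mac-value"
print(naive_equal(secret, b"correct-mac-value"))
print(constant_time_equal(secret, b"wrong-mac-value!"))
```

Power side-channel defences in hardware follow the same logic: computation is arranged so that the observable signal (power draw rather than running time) does not depend on the secret being processed.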

Early Fault Detection

The chip can detect voltage irregularities and abort compromised operations early, preventing energy waste and potential security breaches.

Major Gains in Energy Efficiency

The researchers report that the chip achieves 20 to 60 times greater energy efficiency compared to existing PQC implementations, while also occupying a smaller physical footprint.

This efficiency breakthrough is crucial for expanding secure computing to edge devices—systems that operate outside traditional data centers, often with strict power limitations.

