Why does superconductivity matter?

Dr. Saurabh Basu


A high-temperature (liquid-nitrogen-cooled) superconductor levitating above a permanent magnet (TU Dresden). Credit: Henry Mühlpfordt/Wikimedia Commons

Superconductivity was discovered by H. Kamerlingh Onnes on April 8, 1911, while he was studying the resistance of solid mercury (Hg) at cryogenic temperatures; helium had been liquefied for the first time only a few years earlier. At T = 4.2 K, the resistance of Hg disappeared abruptly. This marked a transition to a phase never seen before: a resistanceless, strongly diamagnetic, new state of matter. Kamerlingh Onnes sent two reports to the KNAW (the Royal Netherlands Academy of Arts and Sciences), in which he preferred to call the zero-resistance state ‘superconductivity’.

Another discovery went unnoticed in the same experiment: the λ transition of helium (He) at 2.2 K, below which He becomes a superfluid. However, we shall skip that discussion for now. A couple of years later, superconductivity was found in lead (Pb) at 7 K. Much later, in 1941, niobium nitride (NbN) was found to superconduct below 16 K. The burning question in those days was: what would the conductivity or resistivity of metals be at very low temperatures?

The reason behind the question was Lord Kelvin’s suggestion that the resistivity of a metal, after initially decreasing with falling temperature, should climb to infinity at zero kelvin, because the electrons’ mobility becomes zero at 0 K, yielding zero conductivity and hence infinite resistivity. Kamerlingh Onnes and his assistant Jacob Clay studied the resistance of gold (Au) and platinum (Pt) down to T = 14 K and found a linear decrease in resistance down to that temperature; lower temperatures could not be accessed until liquid He eventually became available in 1908.

Heike Kamerlingh Onnes (right), the discoverer of superconductivity. Paul Ehrenfest, Hendrik Lorentz, and Niels Bohr stand to his left.

The experiments on Au and Pt were indeed repeated after 1908. For Pt, the resistivity became constant below about 4.2 K, and Au likewise settled to a small residual value rather than diverging. Thus, Lord Kelvin’s notion of infinite resistivity at very low temperatures was incorrect. For mercury, Onnes found that at 3 K (below the transition) the normalised resistance is about 10⁻⁷, while above 4.2 K the resistivity reappears. The transition is remarkably sharp: the resistance falls abruptly to zero within a temperature window of 10⁻⁴ K.
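The modern way to summarise this low-temperature behaviour is Matthiessen’s rule, a standard textbook result quoted here for context (it is not spelled out in the article): scattering from impurities and defects contributes a temperature-independent residual resistivity, so an ordinary metal flattens out rather than diverging as T → 0,

```latex
\rho(T) = \rho_0 + \rho_{\mathrm{ph}}(T), \qquad \rho_{\mathrm{ph}}(T) \to 0 \ \text{as} \ T \to 0,
```

where ρ₀ is set by sample purity. A superconductor does something qualitatively different: ρ drops to zero abruptly at Tc.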

Perfect conductors, superconductors, and magnets

All superconductors are normal metals above the transition temperature. If we ask where in the periodic table most of the superconductors are located, the answer throws up some surprises. The good metals are rarely superconducting: examples such as Ag, Au, Cu, and Cs have transition temperatures of the order of ~0.1 K, while the bad metals, such as niobium alloys, copper oxides, and MgB₂, have relatively high transition temperatures. Thus, bad metals are, in general, good superconductors.

An important quantity in this regard is the mean free path of the electrons above Tc. For the bad metals (the good superconductors) it is of the order of a few Å, since the electrons are strongly coupled to phonons, while for the good metals (the bad superconductors) it is usually a few hundred Å.

The orbital overlap is large in a superconductor, whereas materials with small orbital overlap often become good magnets. In the periodic table, elements such as Al, Bi, Cd, and Ga become good superconductors, while the 3d transition elements Cr, Mn, and Fe are bad superconductors and in fact form good magnets. For all of them, whether superconductors or magnets, there is a large density of states at the Fermi level: a lot of electronic states are necessary for the electrons to condense into a superconducting (or even a magnetic) state. The nature of the electronic wavefunction then determines which order develops: the wavefunctions have a large spatial extent in superconductors, while they are short-ranged in magnets.
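As a back-of-the-envelope check on the mean-free-path numbers quoted above, here is a minimal Drude-model sketch; the handbook values for copper below are standard, but the choice of copper as the example is ours, not the article’s:

```python
# Drude-model estimate of the electron mean free path in a good metal:
# l = v_F * tau, with relaxation time tau = m * sigma / (n * e^2).

m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge, C
n = 8.5e28        # conduction-electron density of Cu, m^-3
sigma = 5.96e7    # room-temperature conductivity of Cu, S/m
v_F = 1.57e6      # Fermi velocity of Cu, m/s

tau = m_e * sigma / (n * e**2)  # relaxation time, ~2.5e-14 s
mfp = v_F * tau                 # mean free path, metres

print(f"tau = {tau:.2e} s, mean free path = {mfp * 1e10:.0f} angstrom")
```

The result, a few hundred ångströms, is exactly the scale quoted above for good metals.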

Meissner effect

The near-complete expulsion of the magnetic field from a superconducting specimen is called the Meissner effect. In the presence of a magnetic field, current loops are generated at the periphery of the specimen so as to block the entry of the external field. If a magnetic field were present within a superconductor, then, by Ampère’s law, there would be a normal current within the sample; since there is no normal current inside the specimen, there can be no magnetic field either. For this reason, superconductors are known as perfect diamagnets, with susceptibility χ = −1, whereas even the best-known non-superconducting diamagnets have susceptibilities only of the order of 10⁻⁵. The diamagnetic property can thus be considered a defining property of superconductors, distinct from zero electrical resistance.
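How completely the field is expelled can be stated compactly with the London equation, a standard textbook result quoted here for orientation (m, e, and n_s denote the electron mass, the electron charge, and the density of superconducting electrons):

```latex
\nabla^{2}\mathbf{B} = \frac{\mathbf{B}}{\lambda_{L}^{2}}, \qquad
\lambda_{L} = \sqrt{\frac{m}{\mu_{0}\, n_{s} e^{2}}}, \qquad
B(x) = B_{0}\, e^{-x/\lambda_{L}}
```

The field thus survives only in a thin surface layer of thickness λ_L, typically a few tens of nanometres, and is zero in the bulk.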

A typical experiment demonstrating the Meissner effect can be thought of as follows: take a superconducting sample (T < Tc), sprinkle iron filings around it, and switch on the magnetic field. The iron filings line up in concentric circles around the specimen, showing that the flux lines have been expelled from the sample and crowd around its exterior.

Distinction between perfect conductors and superconductors

The distinction between a perfect conductor and a superconductor is brought out by comparing the field-cooled (FC) and zero-field-cooled (ZFC) protocols, as shown below in Fig. 1.

In zero-field cooling, the temperature is lowered from T > Tc to T < Tc in the absence of an external magnetic field, for both the perfect conductor and the superconductor starting in their metallic states (left panels in Fig. 1). A magnetic field is then applied, which gets expelled owing to the Meissner effect, and the field is finally withdrawn. If, however, the cooling is done in the presence of an external field, then after the field is withdrawn the flux lines remain trapped in a perfect conductor, whereas the superconductor is left with no memory of the applied field, just as in the zero-field-cooled case. So, superconductors have no memory, while perfect conductors have memory.

Microscopic considerations: BCS theory

The first microscopic theory of superconductivity was proposed by Bardeen, Cooper, and Schrieffer (BCS) in 1957, and it earned them a Nobel Prize in 1972. The underlying assumption was that an attractive interaction between electrons is possible, mediated via phonons. Under certain conditions, electrons then form bound pairs: two electrons in the vicinity of the filled Fermi sea, within an energy range ħω_D set by the phonons (the lattice), experience a net attraction. The involvement of the phonons, i.e. of the underlying lattice, is confirmed by the isotope-effect experiment, which shows that the transition temperature varies as the inverse square root of the ionic mass, exactly as the Debye frequency does. A small calculation shows that an attractive interaction is indeed possible in this narrow range of energy. The attraction renders the system unstable, and long-range order develops via symmetry breaking.

In a book by one of the discoverers, Schrieffer, he described an analogy with a dance floor full of couples: each couple dances across the room, completely oblivious to every other couple present, the couples drifting from one end of the room to the other without colliding. This captures the dissipationless transport in a superconductor.

The BCS theory explained most features of the superconductors known at the time: (i) the discontinuity of the specific heat at the transition temperature Tc; (ii) the involvement of the lattice, via the isotope effect; (iii) estimates of Tc and the energy gap, values that were confirmed by tunnelling experiments across metal-superconductor (M-S) or metal-insulator-superconductor (MIS) junctions, for which Giaever was awarded the Nobel Prize in 1973; (iv) the Meissner effect, which can be explained within a linear-response treatment; and (v) the temperature dependence of the energy gap, whose gradual vanishing at Tc confirms a second-order phase transition.

Another salient feature of the theory is that it is non-perturbative; there is no small parameter in the problem. The calculations were done with a variational theory, in which the energy is minimised with respect to the free parameters of a trial state known as the BCS wavefunction.
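As a rough numerical illustration of these ideas, here is a minimal sketch of the standard weak-coupling BCS estimates; the Debye frequency and the dimensionless coupling N(0)V below are assumed, order-of-magnitude values, not fitted to any particular material:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
k_B = 1.380649e-23      # Boltzmann constant, J/K

omega_D = 3.5e13  # Debye frequency, rad/s (assumed; ~267 K in temperature units)
N0V = 0.25        # N(0)V: density of states times pairing interaction (assumed)

# Zero-temperature gap in the weak-coupling limit:
# Delta(0) = 2 * hbar * omega_D * exp(-1 / N(0)V)
gap = 2 * hbar * omega_D * math.exp(-1.0 / N0V)

# Universal BCS ratio 2*Delta(0) = 3.53 * k_B * Tc relates the gap to Tc.
Tc = 2 * gap / (3.53 * k_B)

# Isotope effect: omega_D scales as M^(-1/2), so Tc inherits Tc ~ M^(-1/2).
print(f"Delta(0) = {gap / k_B:.1f} K (in temperature units), Tc = {Tc:.1f} K")
```

With these inputs the sketch gives a gap of about 10 K and a Tc of about 5.5 K, the right ballpark for elemental superconductors such as mercury or lead.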

Unconventional Superconductors: High-Tc Cuprates

This is a class of superconductors in which two-dimensional copper oxide planes play the main role, and superconductivity occurs in these planes. Doping the planes with mobile carriers makes the system unstable towards superconducting correlations. At zero doping, the system is an antiferromagnetic insulator (see Fig. 2). With about 15% to 20% doping by foreign elements, such as strontium (Sr) in La₂₋ₓSrₓCuO₄, the system turns superconducting. Two things are surprising in this regard: (i) the proximity of the insulating state to the superconducting state; and (ii) when a system initially in the superconducting state is warmed up, instead of going into a metallic state it shows several unfamiliar features that are very unlike the known Fermi-liquid characteristics. It is called a strange metal.

In fact, there are signatures of pre-formed pairs in this so-called metallic state, known as the pseudogap phase. Since the normal state from which one should build a theory is itself not understood, a complete picture of the mechanism behind the phenomenon is still missing. It remains a theoretical riddle.

Dr. Saurabh Basu is a Professor at the Department of Physics, Indian Institute of Technology (IIT) Guwahati. He works in the area of correlated electron systems, with a main focus on bosonic superfluidity in (optical) lattices.

MIT unveils an ultra-efficient 5G receiver that may supercharge future smart devices

A key innovation lies in the chip’s clever use of a phenomenon called the Miller effect, which allows small capacitors to perform like larger ones


Image credit: Mohamed Hassan from Pixabay

A team of MIT researchers has developed a groundbreaking wireless receiver that could transform the future of Internet of Things (IoT) devices by dramatically improving energy efficiency and resilience to signal interference.

Designed for use in compact, battery-powered smart gadgets—like health monitors, environmental sensors, and industrial trackers—the new chip consumes less than a milliwatt of power and is roughly 30 times more resistant to certain types of interference than conventional receivers.

“This receiver could help expand the capabilities of IoT gadgets,” said Soroush Araei, an electrical engineering graduate student at MIT and lead author of the study, in a media statement. “Devices could become smaller, last longer on a battery, and work more reliably in crowded wireless environments like factory floors or smart cities.”

The chip, recently unveiled at the IEEE Radio Frequency Integrated Circuits Symposium, stands out for its novel use of passive filtering and ultra-small capacitors controlled by tiny switches. These switches require far less power than those typically found in existing IoT receivers.

A key innovation lies in the chip’s clever use of a phenomenon called the Miller effect, which allows small capacitors to perform like larger ones. This means the receiver achieves necessary filtering without relying on bulky components, keeping the circuit size under 0.05 square millimeters.
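For orientation, the textbook Miller relation behind this trick is sketched below; the capacitor and gain values are illustrative assumptions, not figures from the MIT chip. A capacitor C bridging the input and output of an inverting amplifier with voltage gain −A looks, from the input, like a capacitor of size C(1 + A):

```python
def miller_input_capacitance(C: float, A: float) -> float:
    """Effective input capacitance of a capacitor C bridging an
    inverting amplifier of voltage gain -A (textbook Miller effect)."""
    return C * (1.0 + A)

C = 50e-15  # a 50 fF on-chip capacitor (assumed value)
A = 19.0    # inverting gain magnitude (assumed value)

# A tiny 50 fF capacitor behaves like a 1000 fF (1 pF) one at the input.
print(f"{miller_input_capacitance(C, A) * 1e15:.0f} fF effective")
```

This is how a physically tiny capacitor can do the filtering job of one twenty times its size.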

Credit: Courtesy of the researchers/MIT News

Traditional IoT receivers rely on fixed-frequency filters to block interference, but next-generation 5G-compatible devices need to operate across wider frequency ranges. The MIT design meets this demand using an innovative on-chip switch-capacitor network that blocks unwanted harmonic interference early in the signal chain—before it gets amplified and digitized.

Another critical breakthrough is a technique called bootstrap clocking, which ensures the miniature switches operate correctly even at a low power supply of just 0.6 volts. This helps maintain reliability without adding complex circuitry or draining battery life.

The chip’s minimalist design—using fewer and smaller components—also reduces signal leakage and manufacturing costs, making it well-suited for mass production.

Looking ahead, the MIT team is exploring ways to run the receiver without any dedicated power source—possibly by harvesting ambient energy from nearby Wi-Fi or Bluetooth signals.

The research was conducted by Araei alongside Mohammad Barzgari, Haibo Yang, and senior author Professor Negar Reiskarimian of MIT’s Microsystems Technology Laboratories.

Ahmedabad Plane Crash: The Science Behind Aircraft Take-Off – Understanding the Physics of Flight

Take-off is one of the most critical phases of flight, relying on the precise orchestration of aerodynamics, propulsion, and control systems. Here’s how it works:


On June 12, 2025, a tragic aviation accident struck Ahmedabad, India, when a passenger aircraft, Air India flight AI-171, crashed during take-off at Sardar Vallabhbhai Patel International Airport. According to preliminary reports, the incident resulted in over 200 confirmed casualties, including both passengers and crew members, and left several others critically injured. The aviation community and the scientific world now turn their eyes not just toward the cause but also toward understanding the complex science behind what should have been a routine take-off.

How Do Aircraft Take Off?

Take-off is one of the most critical phases of flight, relying on the precise orchestration of aerodynamics, propulsion, and control systems. Here’s how it works:

1. Lift and Thrust

To leave the ground, an aircraft must generate lift, a force that counters gravity. This is achieved through the unique shape of the wing, called an airfoil, which creates a pressure difference — higher pressure under the wing and lower pressure above — according to Bernoulli’s Principle and Newton’s Third Law.

Simultaneously, engines provide thrust, propelling the aircraft forward. Most commercial jets use turbofan engines, in which a turbine-driven fan accelerates a large mass of air rearward to generate thrust.

2. Critical Speeds

Before takeoff, pilots calculate critical speeds (a rough lift-off estimate follows the list):

  • V1 (Decision Speed): The highest speed at which a takeoff can still be safely aborted.
  • Vr (Rotation Speed): The speed at which the pilot begins to lift the nose.
  • V2 (Takeoff Safety Speed): The speed needed to climb safely even if one engine fails.
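As a back-of-the-envelope illustration of where such speeds come from, the sketch below finds the speed at which lift, L = ½ρv²SC_L, balances the aircraft’s weight; every number is an assumed, generic value for a mid-size airliner, not data from the Ahmedabad flight:

```python
import math

rho = 1.225       # sea-level air density, kg/m^3
S = 360.0         # wing area, m^2 (assumed)
C_L = 1.6         # lift coefficient with take-off flaps (assumed)
mass = 220_000.0  # take-off mass, kg (assumed)
g = 9.81          # gravitational acceleration, m/s^2

# Speed at which lift 0.5 * rho * v^2 * S * C_L equals weight m * g
v = math.sqrt(2 * mass * g / (rho * S * C_L))

print(f"lift-off speed = {v:.0f} m/s = {v * 3.6:.0f} km/h")
```

The result, roughly 80 m/s (about 280 km/h), matches the order of magnitude of real rotation speeds for large jets.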

If anything disrupts this process — like bird strikes, engine failure, or runway obstructions — the results can be catastrophic.

Environmental and Mechanical Challenges

Factors like wind shear, runway surface condition, mechanical integrity, or pilot error can interfere with safe take-off. Investigators will be analyzing these very aspects in the Ahmedabad case.

The Bigger Picture

Take-off accounts for a small fraction of total flight time but is disproportionately associated with accidents — approximately 14% of all aviation accidents occur during take-off or initial climb.

MIT claims breakthrough in simulating physics of squishy, elastic materials

In a series of experiments, the new solver demonstrated its ability to simulate a diverse array of elastic behaviors, ranging from bouncing geometric shapes to soft, squishy characters

Published

on

Image credit: Courtesy of researchers

Researchers at MIT claim to have unveiled a novel physics-based simulation method that significantly improves stability and accuracy when modeling elastic materials — a key development for industries spanning animation, engineering, and digital fabrication.

In a series of experiments, the new solver demonstrated its ability to simulate a diverse array of elastic behaviors, ranging from bouncing geometric shapes to soft, squishy characters. Crucially, it maintained important physical properties and remained stable over long periods of time — an area where many existing methods falter.

Other simulation techniques frequently struggled in tests: some became unstable and caused erratic behavior, while others introduced excessive damping that distorted the motion. In contrast, the new method preserved elasticity without compromising reliability.

“Because our method demonstrates more stability, it can give animators more reliability and confidence when simulating anything elastic, whether it’s something from the real world or even something completely imaginary,” Leticia Mattos Da Silva, a graduate student at MIT’s Department of Electrical Engineering and Computer Science, said in a media statement.

Their study, though not yet peer-reviewed or published, will be presented in August at the SIGGRAPH conference in Vancouver, Canada.

While the solver does not prioritize speed as aggressively as some tools, it avoids the accuracy and robustness trade-offs often associated with faster methods. It also sidesteps the complexity of nonlinear solvers, which are commonly used in physics-based approaches but are often sensitive and prone to failure.

Looking ahead, the research team aims to reduce computational costs and broaden the solver’s applications. One promising direction is in engineering and fabrication, where accurate elastic simulations could enhance the design of real-world products such as garments, medical devices, and toys.

“We were able to revive an old class of integrators in our work. My guess is there are other examples where researchers can revisit a problem to find a hidden convexity structure that could offer a lot of advantages,” Mattos Da Silva added.

The study opens new possibilities not only for digital content creation but also for practical design fields that rely on predictive simulations of flexible materials.
