
Space & Physics

Why does superconductivity matter?

Dr. Saurabh Basu


A high-temperature (liquid nitrogen cooled) superconductor levitating above a permanent magnet (TU Dresden). Credit: Henry Mühlpfordt/Wikimedia Commons

Superconductivity was discovered by H. Kamerlingh Onnes on April 8, 1911, while he was studying the resistance of solid mercury (Hg) at cryogenic temperatures. Liquid helium had only recently become available at the time. At T = 4.2 K, the resistance of Hg disappeared abruptly. This marked a transition to a phase never seen before: a resistanceless, strongly diamagnetic, new state of matter. Kamerlingh Onnes reported the discovery in two communications to the KNAW (the Royal Netherlands Academy of Arts and Sciences), where he chose to call the zero-resistance state ‘superconductivity’.

There was another discovery in the same experiment that went unnoticed: the transition of helium (He) at 2.2 K, the so-called λ transition, below which He becomes a superfluid. However, we shall skip that discussion for now. A couple of years later, superconductivity was found in lead (Pb) at 7 K. Much later, in 1941, niobium nitride (NbN) was found to superconduct below 16 K. The burning question in those days was: what would the conductivity or resistivity of metals be at very low temperatures?

The question arose from Lord Kelvin’s suggestion that the resistivity of metals initially decreases with falling temperature but finally climbs to infinity at zero kelvin, because the electrons’ mobility becomes zero at 0 K, yielding zero conductivity and hence infinite resistivity. Kamerlingh Onnes and his assistant Jacob Clay studied the resistance of gold (Au) and platinum (Pt) down to T = 14 K. They saw a linear decrease in resistance down to 14 K; lower temperatures, however, could not be reached owing to the unavailability of liquid helium, which was first produced only in 1908.

Heike Kamerlingh Onnes (right), the discoverer of superconductivity, with Paul Ehrenfest, Hendrik Lorentz, and Niels Bohr to his left.

In fact, the experiments on Au and Pt were repeated after 1908. For Pt, the resistivity became constant below 4.2 K, and Au, too, settled at a small residual value at the lowest temperatures rather than diverging. Thus, Lord Kelvin’s notion of infinite resistivity at very low temperatures was incorrect. For Hg, Onnes found that at 3 K (below the transition) the normalised resistance is about 10⁻⁷, while above 4.2 K the resistivity reappears. The transition is remarkably sharp: the resistance falls abruptly to zero within a temperature window of about 10⁻⁴ K.

Perfect conductors, superconductors, and magnets

All superconductors are normal metals above the transition temperature. If we ask where in the periodic table most of the superconductors are located, the answer throws up some surprises. The good metals are rarely superconducting: examples are Ag, Au, Cu, Cs, etc., with transition temperatures at most of the order of ∼0.1 K, while the bad metals, such as niobium alloys, copper oxides, and MgB₂, have relatively high transition temperatures. Thus, bad metals are, in general, good superconductors. An important quantity in this regard is the mean free path of the electrons above Tc. For the good metals (the bad superconductors), it is usually a few hundred ångströms (Å), whereas for the bad metals (the good superconductors) it is only a few ångströms, since the electrons are strongly coupled to phonons.

In the periodic table, simple metals such as Al, Bi, Cd, Ga, etc., become good superconductors, while 3d transition elements such as Cr, Mn, and Fe are bad superconductors and in fact form good magnets. For all of them, whether superconductors or magnets, there is a large density of states at the Fermi level: a lot of electronic states are needed for the electrons in these systems to condense into a superconducting (or even a magnetic) state. The nature of the electronic wavefunction then determines whether superconducting or magnetic order develops: the wavefunctions have a large spatial extent (large orbital overlap) in superconductors, while they are short-ranged (small overlap) in magnets.

Meissner effect

The near-complete expulsion of a magnetic field from a superconducting specimen is called the Meissner effect. In the presence of an applied field, current loops are generated at the periphery of the specimen so as to block the entry of the external field into its interior. Conversely, if a magnetic field existed within a superconductor, then, by Ampère’s law, there would have to be a normal current within the sample; since there is no normal current inside the specimen, there can be no magnetic field either. For this reason, superconductors are known as perfect diamagnets, with the largest possible diamagnetic susceptibility (χ = −1 in SI units). Even the best-known non-superconducting diamagnets have susceptibilities only of the order of 10⁻⁵. Perfect diamagnetism can therefore be regarded as a defining property of superconductors, distinct from zero electrical resistance.
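The expulsion can be made quantitative. As a minimal sketch (the article does not derive this), the standard phenomenological London equation, obtained by assuming a dissipationless supercurrent of density $n_s$, reads

$$\nabla^{2}\mathbf{B}=\frac{\mathbf{B}}{\lambda_{L}^{2}},\qquad \lambda_{L}=\sqrt{\frac{m}{\mu_{0}\,n_{s}\,e^{2}}},$$

so an applied field decays inside the sample as $B(x)=B(0)\,e^{-x/\lambda_{L}}$ and survives only within the penetration depth $\lambda_{L}$, typically a few tens of nanometres. The bulk stays field-free, which is precisely the Meissner effect.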

A typical experiment demonstrating the Meissner effect can be thought of as follows: take a superconducting sample (T &lt; Tc), sprinkle iron filings around it, and switch on the magnetic field. The iron filings line up in concentric loops around the specimen, tracing the flux lines that have been expelled from the sample.

Distinction between perfect conductors and superconductors

The distinction between a perfect conductor and a superconductor is brought out by the field-cooled (FC) and zero-field-cooled (ZFC) protocols, as shown below in Fig. 1.

In the zero-field-cooled case, the temperature is lowered from T &gt; Tc to T &lt; Tc in the absence of an external magnetic field, for both the perfect conductor and the superconductor in their metallic states (left panels of Fig. 1). A magnetic field is then applied, which is excluded from the specimen (for the superconductor, by the Meissner effect), and finally the field is withdrawn. If, however, the cooling is done in the presence of an external field, then after the field is withdrawn the flux lines remain trapped in a perfect conductor, while the superconductor is left with no memory of the applied field, just as in the zero-field-cooled case. In short, superconductors have no memory of the field history, while perfect conductors do.

Microscopic considerations: BCS theory

The first microscopic theory of superconductivity was proposed by Bardeen, Cooper, and Schrieffer (BCS) in 1957, and it earned them the Nobel Prize in 1972. The underlying assumption was that an attractive interaction between electrons is possible, mediated by phonons. Two electrons in the vicinity of the filled Fermi sea, within an energy range ℏωD of it (set by the phonons, that is, by the lattice), can then form a bound pair. The involvement of the lattice is confirmed by the isotope-effect experiment, which shows that the transition temperature depends on the ionic mass as Tc ∝ M⁻¹ᐟ²; since the Debye frequency depends on the ionic mass, the lattice must be involved. A small calculation shows that an attractive interaction is indeed possible in a narrow range of energy. This attraction renders the system unstable, and long-range order develops via symmetry breaking. In his book, Schrieffer described an analogy with a dance floor full of couples: each couple dances with its own partner, completely oblivious to the other couples in the room, and the couples drift from one end of the room to the other without ever colliding. This captures the nearly dissipationless transport in a superconductor.

The BCS theory explained most features of the superconductors known at the time: (i) the discontinuity of the specific heat at the transition temperature Tc; (ii) the involvement of the lattice via the isotope effect; (iii) estimates of Tc and the energy gap, values confirmed by tunnelling experiments across metal-superconductor (M-S) and metal-insulator-superconductor (M-I-S) junctions, for which Giaever was awarded the Nobel Prize in 1973; (iv) the Meissner effect, which can be explained within a linear-response treatment; and (v) the temperature dependence of the energy gap, which vanishes gradually at Tc, confirming a second-order phase transition. Most features of conventional superconductors can be explained using BCS theory. Another salient feature of the theory is that it is non-perturbative: there is no small parameter in the problem. The calculations were done variationally, minimising the energy with respect to the free parameters of a trial wavefunction, the celebrated BCS wavefunction.
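As a minimal sketch of the numbers involved (standard weak-coupling BCS results, not quoted above): with N(0) the density of states at the Fermi level and V the strength of the phonon-mediated attraction, the theory predicts

$$k_{B}T_{c}\approx 1.13\,\hbar\omega_{D}\,e^{-1/N(0)V},\qquad \Delta(0)\approx 1.76\,k_{B}T_{c}.$$

The essential singularity as V → 0 is why no perturbation expansion in the interaction could have produced this result, and since ωD ∝ M⁻¹ᐟ², the first relation also encodes the isotope effect described above.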

Unconventional Superconductors: High-Tc Cuprates

This is a class of superconductors in which the two-dimensional copper oxide planes play the main role, and superconductivity occurs in these planes. Doping the planes with mobile carriers makes the system unstable towards superconducting correlations. At zero doping, the system is an antiferromagnetic insulator (see Fig. 2). With about 15% to 20% doping by foreign elements such as strontium (Sr) (for example, in La₂₋ₓSrₓCuO₄), the system turns superconducting. Two things are surprising in this regard: (i) the proximity of the insulating state to the superconducting state; and (ii) the fact that when the temperature is raised on a system initially in the superconducting state, instead of entering a metallic state it shows several unfamiliar features that are very unlike the known Fermi-liquid characteristics. It is called a strange metal.

In fact, there are signatures of pre-formed pairs in this ‘metallic’ state, known as the pseudogap phase. Since the starting point from which one should build a theory is missing, a complete understanding of the mechanism behind the phenomenon has not been achieved. It remains a theoretical riddle.

Dr. Saurabh Basu is a Professor at the Department of Physics, Indian Institute of Technology (IIT) Guwahati. He works in the area of correlated electron systems, with a main focus on bosonic superfluidity in (optical) lattices.

Space & Physics

MIT Engineers Develop Energy-Efficient Hopping Robot for Disaster Search Missions

The hopping mechanism allows the robot to jump nearly 20 centimeters—four times its height—at speeds up to 30 centimeters per second


Credit: Melanie Gonick, MIT

MIT researchers have unveiled an insect-scale robot capable of hopping across treacherous terrain—offering a new mobility solution for disaster response scenarios like collapsed buildings after earthquakes.

Unlike traditional crawling robots that struggle with tall obstacles or aerial robots that quickly drain power, this thumb-sized machine combines both approaches. By using a spring-loaded leg and four flapping-wing modules, the robot can leap over debris and uneven ground while using 60 percent less energy than a flying robot.

“Being able to put batteries, circuits, and sensors on board has become much more feasible with a hopping robot than a flying one. Our hope is that one day this robot could go out of the lab and be useful in real-world scenarios,” says Yi-Hsuan (Nemo) Hsiao, an MIT graduate student and co-lead author of a new paper published today in Science Advances.

The hopping mechanism allows the robot to jump nearly 20 centimeters—four times its height—at speeds up to 30 centimeters per second. It easily navigates ice, wet surfaces, and even dynamic environments, including hopping onto a hovering drone without damage.

The team, co-led by researchers from MIT and the City University of Hong Kong, engineered the robot with an elastic compression-spring leg and soft actuator-powered wings. These wings not only stabilize the robot mid-air but also compensate for energy lost during impact with the ground.

“If you have an ideal spring, your robot can just hop along without losing any energy. But since our spring is not quite ideal, we use the flapping modules to compensate for the small amount of energy it loses when it makes contact with the ground,” Hsiao explains.

Its robust control system determines orientation and takeoff velocity based on real-time sensing data. The robot’s agility and light weight allow it to survive harsh impacts and perform acrobatic flips.

“We have been using the same robot for this entire series of experiments, and we never needed to stop and fix it,” Hsiao adds.

The robot has already shown promise on various surfaces—grass, ice, soil, wet glass—and can adapt its jump depending on the terrain. According to Hsiao, “The robot doesn’t really care about the angle of the surface it is landing on. As long as it doesn’t slip when it strikes the ground, it will be fine.”

Future developments aim to enhance autonomy by equipping the robot with onboard batteries and sensors, potentially enabling it to assist in search-and-rescue missions beyond the lab.


Space & Physics

Sunita Williams aged less in space due to time dilation

Astronauts Sunita Williams and Butch Wilmore returned from the ISS last month having aged slightly less than the rest of us did over their nine months in orbit – thanks to strange physics that we typically don’t notice in daily life.


Photographed shortly after splashdown. Butch Wilmore (left) and Sunita Williams (right) with Alexandr Gorbunov and Nick Hague (middle) in their SpaceX Dragon capsule | Credit: NASA/Keegan Barber

On March 18th, astronauts Sunita Williams and Butch Wilmore returned from the International Space Station (ISS) after their unscheduled nine-month stay in orbit. Much concern has been expressed about Williams and Wilmore’s health after enduring the harsh conditions of outer space. Yet if anything, the duo came back having aged a touch less than the rest of us did in the interim – thanks to strange physics that we typically don’t encounter daily.

Williams and Wilmore lived in a weaker gravitational environment throughout their stay in space, at least compared with everyone else on earth. At that altitude, some 450 km above the surface, Einstein’s theory of relativity came into play – on balance, slowing down time for the astronauts.

When clocks run slow

In Einstein’s general theory of relativity, gravity is better explained as a distortion of an abstract continuum called space-time. This is quite distinct from Newton’s picture of gravity as an invisible attractive force emanating from masses themselves. In relativity, matter and energy warp both space and time. Imagine a thin fabric of material: mass and energy are akin to heavy objects producing depressions in it.

Although we don’t notice relativistic effects in everyday life, they are subtle but measurable. The difference in gravity’s strength between the earth’s surface and the ISS’s altitude produces a measurable time dilation. The stronger the gravity, the slower time flows. Since gravity is weaker up at the ISS’s orbit, gravitational time dilation alone would have made the astronauts age slightly faster than people on earth.
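In standard notation (a textbook formula, not given in the article), a clock at rest at a distance r from a mass M ticks at the rate

$$\frac{d\tau}{dt}=\sqrt{1-\frac{2GM}{rc^{2}}}$$

relative to a faraway clock: the deeper a clock sits in the gravitational well (smaller r), the slower it runs, so on this count alone the orbiting astronauts’ clocks should run faster than ours.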

Except that there is yet another source of time dilation, and that is velocity. The ISS zips through low-earth orbit at nearly 28,000 km/h – roughly 7.7 km/s. That’s faster than a typical intercontinental ballistic missile midway through its journey. Special relativity dictates that a fast-moving clock ticks slower relative to one at rest. The time dilation from the ISS hurtling at such tremendous speed outweighs the effect of earth’s gravity, so the net result is that time flowed slower for the astronauts than for us.

In effect, the duo aged less by approximately 0.0075 seconds. That is not a difference anyone would notice. With a good atomic clock, though, time dilation can be demonstrated as a subtle yet measurable effect. In fact, engineers exploit the effect in the global positioning system (GPS): the high-precision atomic clocks on board GPS satellites must be corrected for time dilation for the system to coordinate and ensure positional accuracy.
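As a back-of-the-envelope check on that figure (using the orbital speed quoted above and a mission length of roughly 2.4 × 10⁷ seconds):

$$\frac{\Delta\tau}{\tau}\bigg|_{\text{velocity}}\approx\frac{v^{2}}{2c^{2}}=\frac{(7.7\times10^{3}\ \text{m/s})^{2}}{2\,(3\times10^{8}\ \text{m/s})^{2}}\approx 3.3\times10^{-10},$$

which over 2.4 × 10⁷ s amounts to about 8 milliseconds. The opposing gravitational term (clocks higher up run faster) hands back roughly a millisecond, leaving a net deficit close to the quoted 0.0075 seconds.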


Space & Physics

Could dark energy be a trick played by time?

David Wiltshire, a cosmologist at New Zealand’s University of Canterbury, proposed an alternate model that gets rid of dark energy entirely. But in doing so, it sacrifices an assumption cosmologists had held sacred for decades.

Karthik Vinod


Credit: Jon Tyson / Unsplash

In 1929, American astronomer Edwin Hubble discovered that our universe expands in all directions. Powering this expansion was the Big Bang, the event that marked the birth of our current universe some 13.7 billion years ago. At the time, the finding came as a jolt to the astronomy community and the whole world. In 1998, there was an even bigger shake-up, when observations of type Ia supernovae in distant galaxies indicated the universe was expanding at an accelerating rate. But the source of the driving force has remained in the dark.

Dark energy was born from efforts to explain this accelerated expansion. It remains a placeholder name for an undetected contribution to the energy density that has a repulsive effect, counterbalancing gravity’s attraction at large distances. Consensus soon emerged in support of the dark energy model, and in 2011 the astronomers behind the type Ia supernovae studies went on to share the Nobel Prize in Physics.

More than two decades later, we are none the wiser about what dark energy is. Cosmologists have, however, deemed it a constant of nature, one that does not evolve with time. Hence the surprise when preliminary findings from the Dark Energy Spectroscopic Instrument (DESI) survey indicated that dark energy was not just variable but also weakening over time. The Lambda-Cold Dark Matter (ΛCDM) model, the standard model of cosmology, has never stood on shakier ground.

Fine-tuned to a Big Crunch ending

In cosmological models, the Greek letter lambda (Λ) serves as the placeholder for dark energy. It accounts for a major chunk, some 70%, of the universe’s energy density. But this figure holds only if it is a true cosmological constant. If dark energy is variable, then we inevitably end up fine-tuning the universe’s fate. A constant dark energy would yield a universe expanding forever.
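To see where Λ enters, recall the Friedmann equation for the expansion rate H of a universe with scale factor a (a textbook expression, not spelled out in the article):

$$H^{2}=\left(\frac{\dot a}{a}\right)^{2}=\frac{8\pi G}{3}\rho-\frac{kc^{2}}{a^{2}}+\frac{\Lambda c^{2}}{3}.$$

Matter dilutes as ρ ∝ a⁻³ while a constant Λ term does not, so a true cosmological constant eventually dominates and drives expansion forever; a weakening Λ, by contrast, can let gravity regain the upper hand.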

But going by DESI’s preliminary findings, if dark energy is weakening over time, then the universe is set to collapse on itself in the far future. This is the Big Crunch hypothesis. It was amidst the commotion surrounding DESI’s latest findings that the cosmology community took interest in a paper published in the December edition of the Monthly Notices of the Royal Astronomical Society.

In 2007, David Wiltshire, a cosmologist at New Zealand’s University of Canterbury and the paper’s co-author, had proposed an alternative model called timescape cosmology that gets rid of dark energy entirely. It requires sacrificing an assumption cosmologists have long held sacred in their models. Known as the cosmological principle, it may have much in common with Aristotle and Ptolemy’s outdated viewpoint that the earth sat at the center of the solar system: an assumption that is convenient, but possibly wrong.

A special place in the universe

The cosmological principle assumes that matter in the universe is, on average, distributed uniformly everywhere, and in every direction we look. Timescape cosmologists instead propose to adopt the kind of pragmatic approach that the Polish astronomer Nicolaus Copernicus took in the 16th century. In the Copernican model of the solar system, the earth occupied no special location. Likewise, timescape cosmology requires that the earth occupy no special location in the universe.

That said, the cosmological principle has a certain appeal among cosmologists: discard uniformity, and the theoretical calculations become far harder to handle. At the same time, cosmologists do concede that something has to give way, in light of astronomical observations suggesting the cosmological principle may be outright wrong.

An artistic illustration of all the major galaxy superclusters. Encircled regions indicate voids, barren of matter. Credit: Wikimedia

Inhabiting a time bubble

One of the hallmark phenomena of Einstein’s general theory of relativity is gravitational time dilation: time passes slower in a stronger gravitational field. Bizarre though it may seem, experiments have confirmed this subtle but measurable effect.

In 1959, two Harvard physicists, Robert Pound and Glen Rebka Jr., demonstrated this effect in their university building. Gamma rays were sent up a 22.5-metre tower, from a source near the ground to a detector on the roof, and their frequency was found to shift by just the amount relativity predicts: an oscillation, or a clock, closer to the earth runs slower than one higher up, where the earth’s gravity tugs more weakly.
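The predicted fractional shift is tiny (standard numbers, not quoted in the article): for a height h = 22.5 m in the earth’s surface gravity g,

$$\frac{\Delta\nu}{\nu}=\frac{gh}{c^{2}}=\frac{(9.8\ \text{m/s}^{2})(22.5\ \text{m})}{(3\times10^{8}\ \text{m/s})^{2}}\approx 2.5\times10^{-15},$$

a shift of a few parts in a thousand trillion, which Pound and Rebka nonetheless resolved.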

At cosmic scales, the universe looks clumpier in certain directions than in others. Galaxies bind together under gravity into the strands of a vast, interconnected cosmic web, with voids of cosmic proportions occupying the space in between. Time flows faster in these voids, since they’re subject to weaker gravity from the surrounding galaxies. Observers in the galaxies, meanwhile, have a skewed perception of time, since they live embedded inside a bubble of stronger gravity. Events outside their time bubble play out akin to a fast-forwarded YouTube video.

Not the end of dark energy

Distant galaxies appear to recede at an accelerating rate in the reference frame of our time bubble. That appearance, David Wiltshire argues, is a mere temporal illusion, an effect we falsely attribute to dark energy. So far, timescape cosmology has occupied only a niche interest in cosmology circles. There is far too little evidence yet to support the claim that dark energy effects truly arise from us inhabiting a time bubble.

Cosmologists took to social media to critique Wiltshire’s use of type Ia supernovae datasets in his analysis. That said, none of the critiques is conclusive either. As observations pile up in the future, there may come a definitive closure; until then, it’s a waiting game for more data and refined analysis. Equally, it is too early to abandon dark energy as a concept altogether. The Lambda-CDM model would be the first to undergo a major overhaul should DESI’s preliminary findings hold up in successive observational runs. Until then, we can only speculate about the universe’s fate.

