
Space & Physics

A New Milestone in Quantum Error Correction

This achievement moves quantum computing closer to becoming a transformative tool for science and technology

Image credit: Pixabay

Quantum computing promises to revolutionize fields like cryptography, drug discovery, and optimization, but it faces a major hurdle: qubits, the fundamental units of quantum computers, are incredibly fragile. They are highly sensitive to external disturbances, making today’s quantum computers too error-prone for practical use. To overcome this, researchers have turned to quantum error correction, a technique that aims to convert many imperfect physical qubits into a smaller number of more reliable logical qubits.

In the 1990s, researchers developed the theoretical foundations for quantum error correction, showing that multiple physical qubits could be combined to create a single, more stable logical qubit. These logical qubits would then perform calculations, essentially turning a system of faulty components into a functional quantum computer. Michael Newman, a researcher at Google Quantum AI, highlights that this approach is the only viable path toward building large-scale quantum computers.

However, the process of quantum error correction has its limits. If physical qubits have a high error rate, adding more qubits can make the situation worse rather than better. But if the error rate of physical qubits falls below a certain threshold, the balance shifts. Adding more qubits can significantly improve the error rate of the logical qubits.
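
To make the threshold idea concrete, here is a minimal sketch in Python (illustrative only, not the model from Google's paper): it uses a commonly quoted rule-of-thumb scaling for surface codes, in which the logical error rate behaves roughly like A(p/p_th)^((d+1)/2) for physical error rate p, threshold p_th, and code distance d (a larger d means more physical qubits). The constants below are assumptions chosen for readability.

```python
# Illustrative only: rule-of-thumb surface-code scaling
#   p_logical ~ A * (p / p_th) ** ((d + 1) / 2)
# A and p_th are assumed values, not figures from Google's paper.
A, p_th = 0.03, 0.01

def logical_error_rate(p, d):
    """Approximate logical error rate for physical error rate p and code distance d."""
    return A * (p / p_th) ** ((d + 1) / 2)

for p in (0.02, 0.005):  # one rate above, one below the assumed threshold
    rates = [round(logical_error_rate(p, d), 5) for d in (3, 5, 7)]
    trend = "worse" if rates[-1] > rates[0] else "better"
    print(f"physical error rate {p}: distances 3, 5, 7 -> {rates} ({trend} with more qubits)")
```

In this toy model, adding qubits makes the logical error rate worse when the physical error rate sits above the assumed threshold, and better when it sits below it.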

A Breakthrough in Error Correction

In a paper published in Nature last December, Michael Newman and his team at Google Quantum AI reported a major breakthrough in quantum error correction. They demonstrated that adding physical qubits to a system makes the error rate of a logical qubit drop sharply, showing that they have crossed the critical threshold where error correction becomes effective. The research marks a significant step forward, moving quantum computers closer to practical, large-scale applications.

The concept of error correction itself isn’t new — it is already used in classical computers. On traditional systems, information is stored as bits, which can be prone to errors. To prevent this, error-correcting codes replicate each bit, ensuring that errors can be corrected by a majority vote. However, in quantum systems, things are more complicated. Unlike classical bits, qubits can suffer from various types of errors, including decoherence and noise, and quantum computing operations themselves can introduce additional errors.
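
For a concrete picture of that classical trick, here is a minimal sketch in Python (illustrative only) of a three-bit repetition code that stores each bit in triplicate and decodes by majority vote; it is exactly this naive copying that cannot be carried over directly to qubits.

```python
# Minimal sketch of classical error correction by repetition:
# store each bit three times and decode by majority vote, so any
# single flipped copy is corrected automatically.
def encode(bit):
    return [bit, bit, bit]

def decode(copies):
    return 1 if sum(copies) >= 2 else 0

codeword = encode(1)
codeword[0] ^= 1              # simulate a single bit-flip error
assert decode(codeword) == 1  # the majority vote recovers the original bit
```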

Moreover, unlike classical bits, measuring a qubit’s state directly disturbs it, making it much harder to identify and correct errors without compromising the computation. This makes quantum error correction particularly challenging.

The Quantum Threshold

Quantum error correction relies on the principle of redundancy. To protect quantum information, multiple physical qubits are used to form a logical qubit. However, this redundancy is only beneficial if the error rate is low enough. If the error rate of physical qubits is too high, adding more qubits can make the error correction process counterproductive.

Google’s recent achievement demonstrates that once the error rate of physical qubits drops below a specific threshold, adding more qubits improves the system’s resilience. This breakthrough brings researchers closer to achieving large-scale quantum computing systems capable of solving complex problems that classical computers cannot.

Moving Forward

While significant progress has been made, quantum computing still faces many engineering challenges. Quantum systems require extremely controlled environments, such as ultra-low temperatures, and the smallest disturbances can lead to errors. Despite these hurdles, Google’s breakthrough in quantum error correction is a major step toward realizing the full potential of quantum computing.

By improving error correction and ensuring that more reliable logical qubits are created, researchers are steadily paving the way for practical quantum computers. This achievement moves quantum computing closer to becoming a transformative tool for science and technology.


Space & Physics

New double-slit experiment proves Einstein’s predictions were off the mark

Results from an idealized version of the Young double-slit experiment have upheld key predictions of quantum theory.

Two individual atoms suspended in a vacuum chamber are illuminated by a laser beam, serving as the two slits. Scattered light interference is captured by a highly sensitive camera shown as a screen. Credit: Courtesy of the researchers/MIT
  • MIT physicists perform the most idealized double-slit experiment to date, using individual atoms as slits.
  • Experiment confirms the quantum duality of light: light behaves as both a particle and a wave, but both behaviors can’t be observed simultaneously.
  • Findings disprove Albert Einstein’s century-old prediction regarding detecting a photon’s path alongside its wave nature.

In a study published in Physical Review Letters on July 22, researchers at MIT have realized the most idealized version yet of the famous double-slit experiment in quantum physics.

The double-slit experiment—first devised in 1801 by the British physicist Thomas Young—remains a perplexing aspect of reality. Light waves passing through two slits form interference patterns on a wall placed behind them. But this phenomenon is at odds with the fact that light also behaves as a particle. The contradiction has lent itself to a paradox that sits at the foundation of quantum mechanics, and nearly a century ago it sparked a historic scientific duel between physics heavyweights Albert Einstein and Niels Bohr. The study's findings have now settled the decades-old debate, showing Einstein's predictions were off the mark.

Einstein had suggested that by detecting the force exerted when a photon passes through a slit—a nudge akin to a bird brushing past a leaf—scientists could witness both light’s wave and particle properties at once. Bohr countered with the argument that observing a photon’s path would inevitably erase its wave-like interference pattern, a tenet since embraced by quantum theory.

The MIT team stripped the experiment to its purest quantum elements. Using arrays of ultracold atoms as their slits and weak light beams to ensure only one photon scattered per atom, they tuned the quantum states of each atom to control the information gained about a photon’s journey. Every increase in “which-path” information reduced the visibility of the light’s interference pattern, flawlessly matching quantum theory and further debunking Einstein’s proposal.
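
The tradeoff the team measured follows the textbook wave-particle duality bound from quantum optics (a standard result, not a formula quoted from the MIT paper): with which-path distinguishability D and fringe visibility V, quantum mechanics requires D^2 + V^2 <= 1. A quick sketch of what that bound implies:

```python
# Textbook duality bound D**2 + V**2 <= 1 (illustrative; not code from the study):
# the more which-path information D is extracted, the less fringe visibility V can remain.
import math

for D in (0.0, 0.5, 0.9, 1.0):
    V_max = math.sqrt(1 - D ** 2)  # maximum visibility allowed at this distinguishability
    print(f"which-path distinguishability {D:.1f} -> max interference visibility {V_max:.2f}")
```

Perfect path knowledge (D = 1) leaves no interference at all, while full visibility is only possible when no path information is collected, consistent with the tradeoff the MIT measurements traced out.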

“Einstein and Bohr would have never thought that this is possible, to perform such an experiment with single atoms and single photons,” study senior author and Nobel laureate, Wolfgang Ketterle, stated in a press release. “What we have done is an idealized Gedanken (thought) experiment.”

In a particularly stunning twist, Ketterle’s group also disproved the necessity of a physical “spring”—a fixture in Einstein’s original analogy—by holding their atomic lattice not with springs, but with light. When they briefly released the atoms, effectively making the slits “float” in space, the same quantum results persisted. “In many descriptions, the springs play a major role. But we show, no, the springs do not matter here; what matters is only the fuzziness of the atoms,” commented MIT researcher Vitaly Fedoseev in a media statement. “Therefore, one has to use a more profound description, which uses quantum correlations between photons and atoms.”

The paper arrives as the world celebrates 2025's International Year of Quantum Science and Technology — marking 100 years since the birth of quantum mechanics. Yoo Kyung Lee, a fellow co-author, noted in a media statement, “It’s a wonderful coincidence that we could help clarify this historic controversy in the same year we celebrate quantum physics.”


Space & Physics

Researchers Uncover New Way to Measure Hidden Quantum Interactions in Materials

Image credit: Pixabay

A team of MIT scientists has developed a theory-guided strategy to directly measure an elusive quantum property in semiconductors — the electron-phonon interaction — using an often-ignored effect in neutron scattering.

Their approach, published this week in Materials Today Physics, reinterprets an interference effect, typically considered a nuisance in experiments, as a valuable signal. This enables researchers to probe electron-phonon interactions — a key factor influencing a material’s thermal, electrical, and optical behaviour — which until now have been extremely difficult to measure directly.

“Rather than discovering new spectroscopy techniques by pure accident, we can use theory to justify and inform the design of our experiments and our physical equipment,” said Mingda Li, senior author and associate professor at MIT, in a media statement.

By engineering the interference between nuclear and magnetic interactions during neutron scattering, the team demonstrated that the resulting signal is directly proportional to the electron-phonon coupling strength.

“Being able to directly measure the electron-phonon interaction opens the door to many new possibilities,” said MIT graduate student Artittaya Boonkird.

While the current setup produced a weak signal, the findings lay the groundwork for next-generation experiments at more powerful facilities like Oak Ridge National Laboratory’s proposed Second Target Station. The team sees this as a shift in materials science — using theoretical insights to unlock previously “invisible” properties for a range of advanced technologies, from quantum computing to medical devices.


Space & Physics

Dormant Black Holes Revealed in Dusty Galaxies Through Star-Shredding Events

Image credit: NRAO/AUI/NSF/NASA

In a major discovery, astronomers at MIT, Columbia University, and other institutions have used NASA’s James Webb Space Telescope (JWST) to uncover hidden black holes in dusty galaxies that violently “wake up” only when an unsuspecting star wanders too close.

The new study, published in Astrophysical Journal Letters, marks the first time JWST has captured clear signatures of tidal disruption events (TDEs) — catastrophic episodes where a star is torn apart by a galaxy’s central black hole, emitting a dramatic burst of energy.

“These are the first JWST observations of tidal disruption events, and they look nothing like what we’ve ever seen before,” said lead author Megan Masterson, a graduate student at MIT’s Kavli Institute for Astrophysics and Space Research. “We’ve learned these are indeed powered by black hole accretion, and they don’t look like environments around normal active black holes.”

Until now, nearly all TDEs detected since the 1990s were found in relatively dust-free galaxies using X-ray or optical telescopes. However, researchers suspected many more events remained hidden behind thick clouds of galactic dust. JWST’s powerful infrared vision has finally confirmed their hunch.

By analyzing four galaxies previously flagged as likely TDE candidates, the team detected distinct infrared fingerprints of black hole accretion — the process of material spiraling into a black hole, producing intense radiation. These signatures, invisible to optical telescopes, revealed that all four events stemmed not from persistently active black holes but from dormant ones, roused only when a passing star came too close.

“There’s nothing else in the universe that can excite this gas to these energies, except for black hole accretion,” Masterson noted.

Among the four signals studied was the closest TDE ever detected, located 130 million light-years away. Another showed an initial optical flash that scientists had earlier suspected to be a supernova. JWST’s readings helped clarify the true cause.

“These four signals were as close as we could get to a sure thing,” said Masterson. “But the JWST data helped us say definitively these are bona fide TDEs.”

To determine whether the central black holes were inherently active or momentarily triggered by a star’s disruption, the team also mapped the dust patterns around them. Unlike the thick, donut-shaped clouds typical of active galaxies, these dusty environments appeared markedly different — further confirming the black holes were usually dormant.

“Together, these observations say the only thing these flares could be are TDEs,” Masterson said in a media statement.

The findings not only validate JWST’s unprecedented ability to study hidden cosmic phenomena but also open new pathways for understanding black holes that lurk quietly in dusty galactic centers — until they strike.

With future observations planned using JWST, NEOWISE, and other infrared tools, the team hopes to catalog many more such events. These cosmic feeding frenzies, they say, could unlock key clues about black hole mass, spin, and the very nature of their environments.

“The actual process of a black hole gobbling down all that stellar material takes a long time,” Masterson added. “And hopefully we can start to probe how long that process takes and what that environment looks like. No one knows because we just started discovering and studying these events.”

