Space & Physics
In search of red aurorae in ancient Japan
Ryuho Kataoka, a Japanese auroral scientist, played a seminal role in searching for evidence of super-geomagnetic storms in the past using historical methods
Auroras seen on Earth are the final act of a complex chain of events that begins with violent, dynamic processes deep within the sun’s interior.
However, studying the depths of the sun is no easy task, even for scientists. The best they can do is observe its surface using space-based telescopes. One problem scientists are trying to solve is how a super-geomagnetic storm on Earth comes into being. These geomagnetic storms find their roots in sunspots, the dark, acne-like blemishes on the sun’s surface where magnetic energy builds up. As the sun approaches the peak of its 11-year solar cycle, these sunspots, numbering in the hundreds, occasionally release that stored magnetic energy into space in the form of coronal mass ejections (CMEs): enormous eruptions of hot, magnetized gas.
If Earth lies in the path of an oncoming CME, the energy released as the CME’s magnetic field interacts with Earth’s own can trigger intense geomagnetic storms and aurorae.
This phenomenon, at once astrophysical and electromagnetic in nature, can have serious repercussions for our modern technological society.
Super-geomagnetic storms, a particularly severe form of geomagnetic storm, can induce power surges in our infrastructure, causing outages that could plunge whole regions into darkness and doing irreversible damage to the grid. The last recorded super-geomagnetic storm occurred more than 150 years ago. Known as the Carrington event, the 1859 storm knocked out telegraph lines across North America and Europe. The risk of another Carrington-class event had been estimated at about 1 in 500 years, which sounds low, but that figure rests on limited data, and the ramifications would be extremely dangerous were such a storm ever to happen again.
Over the past decade, however, scientists have learned that such super-geomagnetic storms are far more common than they had figured. And it was not science alone that got them there: the humanities made a valuable contribution, specifically ancient Japanese and Chinese historical records, which have shaped our modern understanding of super-geomagnetic storms.
Ryuho Kataoka, a Japanese space physicist, played a seminal role in searching for evidence of past super-geomagnetic storms using historical methods. He is presently an associate professor of physics, holding positions at Japan’s National Institute of Polar Research and The Graduate University for Advanced Studies.
“There is no modern digital dataset to identify extreme space weather events, particularly super-geomagnetic storms,” said Professor Kataoka. “If you have good enough data, we can input them into supercomputers to do physics-based simulation.”
However, systematic sunspot records extend back only to the late 18th century, when sunspots began to be actively cataloged. In an effort to fill the data gap, Professor Kataoka decided to take the helm of a new but promising interdisciplinary field combining the arts with space physics. “The data is limited by at least 50 years,” said Professor Kataoka. “So we decided to search for these red vapor events in Japanese history, and see the occurrence patterns … and if we are lucky enough, we can see detailed features in these lights, pictures or drawings.” Until the summer of 2015, Kataoka himself wasn’t aware of just how vast ancient Japanese and Chinese historical records really were.
For the past seven years, he has been hunting for a very specific red aurora in documents spanning more than 1,400 years. “Usually, auroras are known for their green colors – but during the geomagnetic storm, the situation is very different,” he said. “Red is of course unusual, but we can only see red during a powerful geomagnetic storm, especially in lower latitudes. From a scientific perspective, it’s a very reasonable way to search for red signs in historical documents.”
Much of the historical literature behind these red aurora studies was explored over the last decade by the AURORA-4D collaboration. “The project title included ‘4D’ because we wanted to access records dating back 400 years, to the Edo period,” said Professor Kataoka.
“From the paintings, we can identify the latitude of the aurora, and calculate the magnitude or amplitude of the geomagnetic storm.” Edo-period paintings have clearly influenced Professor Kataoka’s line of research: a copy of the fan-shaped red aurora painting from the manuscript Seikai (which translates to ‘stars’) hangs by the window behind his office desk at the National Institute of Polar Research.
The painting fascinated Professor Kataoka, since it depicted an aurora that appeared during a super-geomagnetic storm over Kyoto in 1770. It also puzzled him at first: he wondered whether the radial pattern in the painting was real or a mere artistic flourish to make the display look fierier. “That painting was special because this was the most detailed painting preserved in Japan,” remarked Professor Kataoka. “I took two years to study this, thinking this appearance was silly as an aurora scientist. But when I calculated the field pattern from Kyoto towards the North, it was actually correct!”
Fan-shaped red aurora painting from the ‘Seikai’, dated 17th September, 1770; Picture Courtesy: Matsusaka City, Mie Prefecture.
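Kataoka’s geometric check can be illustrated with a short calculation. The sketch below is a minimal illustration of the idea, not the analysis from his paper: it assumes Earth’s field is a pure dipole and takes roughly 25 degrees as an illustrative geomagnetic latitude for Kyoto (both assumptions are ours). Because auroral rays run along magnetic field lines, nearly parallel rays seen from the ground appear to converge toward the magnetic zenith, a vanishing point tilted well away from the point directly overhead at such low latitudes, which is consistent with a fan-like appearance.

```python
import math

# Minimal sketch (not Kataoka's actual analysis): why field-aligned auroral
# rays can look like a fan from low latitudes.
# Assumption: a pure dipole field and ~25 deg geomagnetic latitude for Kyoto.
geomag_lat_deg = 25.0

# Dipole dip (inclination) angle: tan(I) = 2 * tan(geomagnetic latitude)
dip_deg = math.degrees(math.atan(2.0 * math.tan(math.radians(geomag_lat_deg))))

# Parallel, field-aligned rays converge (in perspective) toward the magnetic
# zenith, which sits at an elevation equal to the dip angle, i.e. well off
# the true zenith at low latitudes.
offset_from_zenith = 90.0 - dip_deg

print(f"dip angle at {geomag_lat_deg:.0f} deg geomagnetic latitude: {dip_deg:.1f} deg")
print(f"magnetic zenith tilted {offset_from_zenith:.1f} deg from the point overhead")
```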
The possibility of examining and verifying historical accounts with science is also a useful incentive for the scholars of Japanese literature and the scientists taking part in the research.
“This is important because, if we scientists look at the real National Treasure with our eyes, we really know these sightings recorded were real,” said Professor Kataoka. “The internet is really bad for a survey because it can easily be very fake,” he said, laughing. It is not only the way science has been used to examine art – Japanese “national treasures”, no less – that is appealing; the historical accounts themselves have contributed directly to scientific research.
“From our studies, we can say that Carrington-class events are more frequent than we previously expected,” said Professor Kataoka, with evident pride. “This Carrington event is not a 1 in 200-year event, but as frequent as 1 in 100 years.” Given that electricity is the lifeblood of the 21st century, these heightened odds conjure a rather dystopian vision of a future ravaged by a super-geomagnetic storm.
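What do the revised odds mean in practice? A rough way to see it, assuming storm arrivals behave like a simple Poisson process (an idealization on our part, not a claim from the research), is to compare the chance of at least one Carrington-class storm over the next few decades under the once-in-500, once-in-200 and once-in-100-year estimates quoted above.

```python
import math

# Rough illustration (our idealization, not from the research): treat
# Carrington-class storms as a Poisson process and ask how likely at least
# one event is within a given horizon.
def prob_at_least_one(mean_interval_years: float, horizon_years: float) -> float:
    rate = 1.0 / mean_interval_years              # expected events per year
    return 1.0 - math.exp(-rate * horizon_years)

for interval in (500, 200, 100):                  # once-in-N-year estimates quoted in the article
    for horizon in (10, 50):
        p = prob_at_least_one(interval, horizon)
        print(f"1-in-{interval} years, next {horizon} years: {p:.0%}")
```

Under the older once-in-500-year figure, the chance over a decade is about 2 percent; under the once-in-100-year figure it rises to roughly 10 percent, and to nearly 40 percent over 50 years.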
Professor Kataoka’s work has drawn attention within the space physics community. Jonathon Eastwood, Professor of Physics at Imperial College London, told EdPublica: “The idea to use historical information and art like this is very inventive because these events are so rare and so don’t exist as information in the standard scientific record.”
A geomagnetic storm poses no direct physical harm to people, but the threat to global power supplies and electronics is increasingly recognized by governments. The UK, for instance, identified “space weather” as a natural hazard in its 2011 National Risk Register. In the years that followed, the government set up a space weather division within the Met Office, the UK’s foremost weather forecasting authority, to monitor and track coronal mass ejections. However, these forecasts, which often supplement American predictions from the National Oceanic and Atmospheric Administration (NOAA), have in the past failed to specify where on Earth a magnetic storm would hit, or whether a coronal mass ejection would actually strike the Earth at all.
The former occurred during the evacuation for Hurricane Irma in 2017, when amateur (ham) radio operators experienced a radio blackout as a solar storm disrupted communications across the Caribbean. The latter occurred when a batch of SpaceX’s Starlink communication satellites, launched into a mild geomagnetic storm, was lost shortly after reaching orbit, costing SpaceX over $40 million.
Professor Kataoka said he wishes space physicists from other countries would take part in similar interdisciplinary collaborations and explore their own cultures’ historical records for red aurora sightings. The greatest limitation of the AURORA-4D collaboration, he said, was the lack of historical records from other parts of the world. China apparently boasts an even longer history of aurora records than Japan, stretching back to well before the Common Era. “Being Japanese, I’m not familiar with British, Finnish or Vietnamese cultures,” said Professor Kataoka. “But every country has literature researchers and scientists who can easily collaborate and perform interdisciplinary research.” And in doing so, it is not just science that benefits; ancient art, too, gains longevity for its beauty and relevance.
Space & Physics
A New Milestone in Quantum Error Correction
This achievement moves quantum computing closer to becoming a transformative tool for science and technology
Quantum computing promises to revolutionize fields like cryptography, drug discovery, and optimization, but it faces a major hurdle: qubits, the fundamental units of quantum computers, are incredibly fragile. They are highly sensitive to external disturbances, making today’s quantum computers too error-prone for practical use. To overcome this, researchers have turned to quantum error correction, a technique that aims to convert many imperfect physical qubits into a smaller number of more reliable logical qubits.
In the 1990s, researchers developed the theoretical foundations for quantum error correction, showing that multiple physical qubits could be combined to create a single, more stable logical qubit. These logical qubits would then perform calculations, essentially turning a system of faulty components into a functional quantum computer. Michael Newman, a researcher at Google Quantum AI, highlights that this approach is the only viable path toward building large-scale quantum computers.
However, the process of quantum error correction has its limits. If physical qubits have a high error rate, adding more qubits can make the situation worse rather than better. But if the error rate of physical qubits falls below a certain threshold, the balance shifts. Adding more qubits can significantly improve the error rate of the logical qubits.
A Breakthrough in Error Correction
In a paper published in Nature last December, Michael Newman and his team at Google Quantum AI reported a major breakthrough in quantum error correction. They demonstrated that adding physical qubits to a system makes the error rate of a logical qubit drop sharply. This finding shows that they have crossed the critical threshold where error correction becomes effective. The research marks a significant step forward, moving quantum computers closer to practical, large-scale applications.
The concept of error correction itself isn’t new — it is already used in classical computers. On traditional systems, information is stored as bits, which can be prone to errors. To prevent this, error-correcting codes replicate each bit, ensuring that errors can be corrected by a majority vote. However, in quantum systems, things are more complicated. Unlike classical bits, qubits can suffer from various types of errors, including decoherence and noise, and quantum computing operations themselves can introduce additional errors.
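For reference, the classical idea described above can be written out in a few lines. The sketch below is a minimal illustration of a three-bit repetition code: each bit is stored as three copies, and a majority vote corrects any single flipped copy, so the logical error rate drops from p to roughly 3p² when copies fail independently.

```python
import random

# Minimal sketch of classical error correction: a 3-bit repetition code.
def encode(bit: int) -> list[int]:
    return [bit, bit, bit]                     # store three copies of the bit

def corrupt(codeword: list[int], flip_prob: float) -> list[int]:
    # Each copy flips independently with probability flip_prob.
    return [b ^ (random.random() < flip_prob) for b in codeword]

def decode(codeword: list[int]) -> int:
    return int(sum(codeword) >= 2)             # majority vote

random.seed(0)
flip_prob, trials = 0.05, 100_000
failures = sum(decode(corrupt(encode(0), flip_prob)) != 0 for _ in range(trials))
print(f"physical error rate: {flip_prob}")
print(f"logical error rate : {failures / trials:.4f}")   # close to 3*p^2 ≈ 0.0075
```

The quantum version is harder for the reasons described next: qubits suffer several kinds of errors at once, and they cannot simply be copied or read out without disturbing the computation.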
Moreover, unlike classical bits, measuring a qubit’s state directly disturbs it, making it much harder to identify and correct errors without compromising the computation. This makes quantum error correction particularly challenging.
The Quantum Threshold
Quantum error correction relies on the principle of redundancy. To protect quantum information, multiple physical qubits are used to form a logical qubit. However, this redundancy is only beneficial if the error rate is low enough. If the error rate of physical qubits is too high, adding more qubits can make the error correction process counterproductive.
Google’s recent achievement demonstrates that once the error rate of physical qubits drops below a specific threshold, adding more qubits improves the system’s resilience. This breakthrough brings researchers closer to achieving large-scale quantum computing systems capable of solving complex problems that classical computers cannot.
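The shape of this trade-off is often summarized with a textbook-style approximation for surface codes. The sketch below uses assumed numbers and is not Google’s data or analysis: the logical error rate scales roughly as a power of the ratio between the physical error rate and the threshold, with the exponent growing with the code distance d. Below threshold that ratio is smaller than one, so each increase in d multiplies the error rate down; above threshold the same growth works against you.

```python
# Illustrative threshold scaling for a surface code (assumed numbers, not
# Google's data):  p_logical ≈ A * (p_physical / p_threshold) ** ((d + 1) // 2)
A = 0.1             # assumed prefactor
p_threshold = 0.01  # assumed threshold, order of magnitude only

def logical_error(p_physical: float, distance: int) -> float:
    return A * (p_physical / p_threshold) ** ((distance + 1) // 2)

for p_physical in (0.002, 0.015):          # one case below, one above threshold
    rates = [logical_error(p_physical, d) for d in (3, 5, 7)]
    print(f"p_physical={p_physical}: d=3,5,7 -> " + ", ".join(f"{r:.1e}" for r in rates))
```

With a physical error rate one-fifth of the assumed threshold, going from distance 3 to distance 7 cuts the logical error rate by a factor of 25 in this toy model; with a physical rate above threshold, the same added qubits make things worse.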
Moving Forward
While significant progress has been made, quantum computing still faces many engineering challenges. Quantum systems require extremely controlled environments, such as ultra-low temperatures, and the smallest disturbances can lead to errors. Despite these hurdles, Google’s breakthrough in quantum error correction is a major step toward realizing the full potential of quantum computing.
By improving error correction and ensuring that more reliable logical qubits are created, researchers are steadily paving the way for practical quantum computers. This achievement moves quantum computing closer to becoming a transformative tool for science and technology.
Space & Physics
Study Shows Single Qubit Can Outperform Classical Computers in Real-World Communication Tasks
This new research, however, offers compelling evidence of quantum systems’ power in a real-world scenario
A new study from the S. N. Bose National Centre for Basic Sciences in West Bengal, India, carried out in collaboration with international teams, has revealed that even the simplest quantum system, a single qubit, can surpass its classical counterpart in certain communication tasks. This discovery reshapes our understanding of quantum advantage and hints at a future where quantum technologies could solve problems that classical computers, even with ample resources, cannot.
Quantum systems have long been seen as the next frontier in computing, with the potential to revolutionize technology. However, proving their superiority over classical systems has been a challenge, as experiments are complex, and limitations often arise that suggest quantum advantage might not be as accessible as once thought. This new research, however, offers compelling evidence of quantum systems’ power in a real-world scenario.
Professor Manik Banik and his team at the S. N. Bose Centre, alongside researchers from the Henan Key Laboratory of Quantum Information and Cryptography, Laboratoire d’Information Quantique, Université libre de Bruxelles, and ICFO—the Barcelona Institute of Science and Technology, have demonstrated that a single qubit can outperform a classical bit in a communication task, even when no extra resources, like shared randomness, are available. The theoretical study, published in Quantum, was accompanied by an experimental demonstration featured as an Editors’ Suggestion in Physical Review Letters.
The key to this breakthrough lies in the way quantum and classical systems handle communication. Classical communication often relies on shared resources, such as pre-agreed random numbers, to function efficiently. Without these shared resources, the task becomes more challenging. In contrast, the researchers found that a qubit does not require such help and can still outperform a classical bit under the same conditions.
The team’s innovative approach involved developing a photonic quantum processor and a novel tool called a variational triangular polarimeter. This device enabled them to measure light polarization with high precision using generalized quantum measurements known as positive operator-valued measures (POVMs). Such measurements play a crucial role in understanding the behavior of quantum systems, particularly under realistic conditions that include noise.
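A POVM generalizes the textbook projective measurement: it is a set of positive operators that sum to the identity, each assigned to one outcome, with outcome probabilities given by Tr(E_k ρ). As a generic illustration of the concept (not the team’s variational triangular polarimeter or their actual protocol), the sketch below builds the standard three-outcome “trine” POVM on a polarization qubit and computes its statistics for a horizontally polarized photon.

```python
import numpy as np

# Generic three-outcome POVM on a polarization qubit (a textbook "trine"
# example for illustration only; not the experiment's polarimeter).
# POVM elements E_k are positive, sum to the identity, and give outcome
# probabilities p_k = Tr(E_k @ rho).
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
trine = [np.array([np.cos(a / 2), np.sin(a / 2)]) for a in angles]
povm = [(2 / 3) * np.outer(v, v) for v in trine]

assert np.allclose(sum(povm), np.eye(2))       # completeness: elements sum to identity

rho = np.outer([1.0, 0.0], [1.0, 0.0])         # horizontally polarized photon |H><H|
probs = [float(np.trace(E @ rho)) for E in povm]
print("outcome probabilities:", [round(p, 3) for p in probs])   # about [0.667, 0.167, 0.167]
```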
“This result is particularly exciting because it demonstrates a tangible quantum advantage in a realistic communication scenario,” said Professor Banik. “For a long time, quantum advantage was mostly theoretical. Now, we’ve shown that even a single qubit can outperform classical systems, opening up new possibilities for quantum communication and computing.”
This research represents more than an academic milestone; it brings us a step closer to a future in which quantum technologies could drastically alter how we process and communicate information. As quantum systems continue to develop, this breakthrough makes the advantage of quantum over classical computing not only more fascinating but also more attainable. The study also signals that quantum systems may eventually solve problems that classical computers struggle with, even when resources are limited.
With this discovery, the potential for quantum communication and computation is moving from theoretical to practical applications, making the future of quantum technologies look even more promising.
Space & Physics
IIT Kanpur Unveils World’s First BCI-Based Robotic Hand Exoskeleton for Stroke Rehabilitation
The BCI-based robotic hand exoskeleton utilizes a unique closed-loop control system to actively engage the patient’s brain during therapy
The Indian Institute of Technology Kanpur (IITK) has unveiled the world’s first Brain-Computer Interface (BCI)-based Robotic Hand Exoskeleton, a groundbreaking innovation set to revolutionize stroke rehabilitation. This technology promises to accelerate recovery and improve patient outcomes by redefining post-stroke therapy. Developed over 15 years of rigorous research led by Prof. Ashish Dutta from IIT Kanpur’s Department of Mechanical Engineering, the project was supported by India’s Department of Science and Technology (DST), UK India Education and Research Initiative (UKIERI), and the Indian Council of Medical Research (ICMR).
The BCI-based robotic hand exoskeleton utilizes a unique closed-loop control system to actively engage the patient’s brain during therapy. It integrates three key components: a Brain-Computer Interface that captures EEG signals from the motor cortex to detect the patient’s intent to move, a robotic hand exoskeleton that assists with therapeutic hand movements, and software that synchronizes brain signals with the exoskeleton for real-time feedback. This coordination helps foster continuous brain engagement, leading to faster and more effective recovery.
“Stroke recovery is a long and often uncertain process. Our device bridges the gap between physical therapy, brain engagement, and visual feedback, creating a closed-loop control system that activates brain plasticity, which is the brain’s ability to change its structure and function in response to stimuli,” said Prof. Ashish Dutta. “This is especially significant for patients whose recovery has plateaued, as it offers renewed hope for further improvement and regaining mobility. With promising results in both India and the UK, we are optimistic that this device will make a significant impact in the field of neurorehabilitation.”
Traditional stroke recovery often faces challenges, especially when motor impairments stem from damage to the motor cortex. Conventional physiotherapy methods may fall short due to limited brain involvement. The new device addresses this gap by linking brain activity with physical movement. During therapy, patients are guided on-screen to perform hand movements, such as opening or closing their fist, while EEG signals from the brain and EMG signals from the muscles are used to activate the robotic exoskeleton in an assist-as-required mode. This synchronization ensures the brain, muscles, and visual engagement work together, improving recovery outcomes.
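To make the closed loop concrete, here is a deliberately simplified sketch of “assist-as-required” logic, with made-up signal names and thresholds (it is illustrative only, not IIT Kanpur’s actual controller or signal-processing pipeline): the exoskeleton adds force only when the brain signals show intent to move but the muscle signals fall short of what the movement needs.

```python
# Simplified "assist-as-required" loop with made-up thresholds and signal
# names (illustrative only; not IIT Kanpur's actual controller).
EEG_INTENT_THRESHOLD = 0.6   # hypothetical confidence that the patient intends to move
EMG_EFFORT_THRESHOLD = 0.4   # hypothetical fraction of the force the movement needs

def assist_level(eeg_intent: float, emg_effort: float) -> float:
    """How much help (0..1) the exoskeleton adds in this control cycle."""
    if eeg_intent < EEG_INTENT_THRESHOLD:
        return 0.0                      # no detected intent: do not move the hand for the patient
    if emg_effort >= EMG_EFFORT_THRESHOLD:
        return 0.0                      # muscles are managing on their own: stay passive
    # Otherwise fill only the gap between measured effort and what the task needs.
    return 1.0 - emg_effort / EMG_EFFORT_THRESHOLD

for eeg, emg in [(0.8, 0.1), (0.8, 0.5), (0.3, 0.1)]:
    print(f"intent={eeg:.1f}, effort={emg:.1f} -> assist={assist_level(eeg, emg):.2f}")
```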
Pilot clinical trials, conducted in collaboration with Regency Hospital in India and the University of Ulster in the UK, have yielded impressive results. Remarkably, eight patients—four in India and four in the UK—who had reached a recovery plateau one or two years post-stroke achieved full recovery through the BCI-based robotic therapy. The device’s active engagement of the brain during therapy has proven to lead to faster and more comprehensive recovery compared to traditional physiotherapy.
While stroke recovery is typically most effective within the first six to twelve months, this innovative device has demonstrated its ability to facilitate recovery even beyond this critical period. With large-scale clinical trials underway at Apollo Hospitals in India, the device is expected to be commercially available within three to five years, offering new hope for stroke patients worldwide.