Space & Physics

New antenna design could help detect faint cosmological signals

This could revolutionise our ability to detect the faint signals of Cosmological Recombination Radiation (CRR)

Scientists at the Raman Research Institute (RRI) in Bangalore, India, have developed a novel antenna design that could revolutionise our ability to detect the faint signals of Cosmological Recombination Radiation (CRR).

These signals, which are crucial for understanding the thermal and ionization history of the Universe, have so far remained undetected because of their faintness. The newly designed antenna can measure signals in the 2.5 to 4 gigahertz (GHz) frequency range, which is optimal for detecting CRR, a signal approximately one billion times fainter than the Cosmic Microwave Background (CMB).

The Universe is approximately 13.8 billion years old, and in its earliest stages it was extremely hot and dense. During this time, the Universe was composed of a plasma of free electrons, protons, and light nuclei such as helium and lithium. The radiation that coexisted with this matter is detected today as the CMB, which holds vital information about early cosmological and astrophysical processes.

One such process, known as the Epoch of Recombination, marks the transition from a fully ionized primordial plasma to mostly neutral hydrogen and helium atoms. This transition emitted photons, creating the Cosmological Recombination Radiation (CRR), which distorts the underlying CMB spectrum. Detecting these faint CRR signals would provide a wealth of information about the Universe's early ionization and thermal history, and could even offer the first experimental measurement of the primordial helium abundance, fixed before any helium was synthesized in the cores of stars.

However, detecting CRR is a significant challenge because these signals are extremely weak—about nine orders of magnitude fainter than the CMB. To address this, scientists need highly sensitive instruments that can isolate these signals from the vast cosmic noise surrounding them.
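
To put that faintness in perspective, the CMB corresponds to a blackbody temperature of about 2.725 K, so a signal roughly nine orders of magnitude weaker amounts to temperature variations of only a few nanokelvin:

$T_{\mathrm{CRR}} \sim 10^{-9} \times T_{\mathrm{CMB}} \approx 10^{-9} \times 2.725\ \mathrm{K} \approx 3\ \mathrm{nK}$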

To this end, researchers from RRI, including Mayuri Rao and Keerthipriya Sathish, along with Debdeep Sarkar from the Indian Institute of Science (IISc), have developed an innovative ground-based broadband antenna designed to detect signals as faint as one part in 10,000. Their design is capable of making sky measurements in the 2.5 to 4 GHz range, the frequency band most suitable for CRR detection.

According to Keerthipriya Sathish, the lead author of the study, “For the sky measurements we plan to perform, this broadband antenna offers the highest sensitivity compared to other antennas designed for the same bandwidth. The antenna’s frequency-independent performance across a wide range and its smooth frequency response are features that set it apart from conventional designs.”

The proposed antenna is a dual-polarized dipole antenna with a unique four-arm structure shaped like a fantail. This design ensures that the antenna maintains the same radiation pattern across its entire operational bandwidth, with a mere 1% variation in its characteristics. This is crucial for distinguishing spectral distortions from galactic foregrounds. The antenna’s custom design allows it to “stare” at the same patch of sky throughout its full operational range of 1.5 GHz (from 2.5 to 4 GHz), which is key to separating the CRR signals from other cosmic noise.
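
For scale, the 1.5 GHz operating band centred at 3.25 GHz corresponds to a fractional bandwidth of roughly

$\Delta\nu / \nu_{c} = 1.5\ \mathrm{GHz} / 3.25\ \mathrm{GHz} \approx 46\%$,

which is why keeping the radiation pattern essentially unchanged across the whole band is the design's distinguishing feature.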

The antenna is compact and lightweight, weighing just 150 grams, with a square shape measuring 14 cm by 14 cm. It is made using a low-loss dielectric flat substrate on which the antenna is etched in copper, while the bottom features an aluminum ground plate. Between these plates lies a radio-transparent foam layer that houses the antenna’s connectors and receiver base.

With a sensitivity of around 30 millikelvin (mK) across the 2.5-4 GHz frequency range, the antenna is capable of detecting tiny temperature variations in the sky. Even before being scaled to a full array, this antenna design is expected to provide valuable first scientific results when integrated with a custom receiver. One of the anticipated experiments is to study an excess of radiation reported at 3.3 GHz, which has been speculated to result from exotic phenomena, including dark matter annihilation. These early tests will help refine the antenna's performance and guide future design improvements aimed at achieving the sensitivity required for CRR detection.
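
As context for why wide bandwidth and long observations matter, the ideal radiometer equation relates the smallest detectable temperature difference to the system temperature, the bandwidth averaged over, and the integration time. The sketch below only illustrates that scaling; the system temperature and channel width are assumed placeholder values, not parameters reported for this instrument, and real sensitivity is ultimately limited by calibration and systematics as well.

```python
import math

def radiometer_dT(T_sys_K: float, bandwidth_Hz: float, t_int_s: float) -> float:
    """Ideal radiometer equation: dT = T_sys / sqrt(bandwidth * integration time)."""
    return T_sys_K / math.sqrt(bandwidth_Hz * t_int_s)

# Placeholder numbers chosen only to show the scaling (not instrument parameters):
T_sys = 100.0      # assumed system temperature in kelvin
channel = 10e6     # assumed 10 MHz spectral channel
for hours in (1, 24, 720):
    dT = radiometer_dT(T_sys, channel, hours * 3600)
    print(f"{hours:4d} h of integration -> {dT * 1e3:.3f} mK per channel")
```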

The researchers plan to deploy an array of these antennas in radio-quiet areas, where radio frequency interference is minimal or absent. The antenna’s design is straightforward and can be easily fabricated using methods similar to those employed in Printed Circuit Board (PCB) manufacturing, ensuring high machining accuracy and consistency for scaling up to multiple-element arrays. The antenna is portable, making it easy to deploy in remote locations for scientific observations.

The team is already looking ahead, planning further improvements to achieve even greater sensitivity, with a long-term goal of detecting CRR signals at sensitivities as low as one part per billion. With this innovative antenna design, the team hopes to make significant strides toward uncovering the secrets of the early Universe and its formation.

Space & Physics

A New Milestone in Quantum Error Correction

This achievement moves quantum computing closer to becoming a transformative tool for science and technology

Quantum computing promises to revolutionize fields like cryptography, drug discovery, and optimization, but it faces a major hurdle: qubits, the fundamental units of quantum computers, are incredibly fragile. They are highly sensitive to external disturbances, making today’s quantum computers too error-prone for practical use. To overcome this, researchers have turned to quantum error correction, a technique that aims to convert many imperfect physical qubits into a smaller number of more reliable logical qubits.

In the 1990s, researchers developed the theoretical foundations for quantum error correction, showing that multiple physical qubits could be combined to create a single, more stable logical qubit. These logical qubits would then perform calculations, essentially turning a system of faulty components into a functional quantum computer. Michael Newman, a researcher at Google Quantum AI, highlights that this approach is the only viable path toward building large-scale quantum computers.

However, the process of quantum error correction has its limits. If physical qubits have a high error rate, adding more qubits can make the situation worse rather than better. But if the error rate of physical qubits falls below a certain threshold, the balance shifts. Adding more qubits can significantly improve the error rate of the logical qubits.
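
A common back-of-the-envelope model of this balance (an illustration, not the paper's own analysis) treats the logical error rate as scaling like the ratio of the physical error rate p to the threshold p_th, raised to a power that grows with the code distance d, i.e. with how many physical qubits are spent on each logical qubit. The toy numbers below show both regimes:

```python
# Toy model of the error-correction threshold: below p_th, increasing the code
# distance d (adding physical qubits) suppresses the logical error rate; above
# p_th, it makes things worse. Numbers are illustrative, not measured values.
def logical_error_rate(p: float, p_th: float, d: int, A: float = 0.05) -> float:
    """Heuristic surface-code-style scaling: p_L ~ A * (p / p_th) ** ((d + 1) // 2)."""
    return A * (p / p_th) ** ((d + 1) // 2)

p_th = 1e-2                      # illustrative threshold
for p in (5e-3, 2e-2):           # one physical error rate below threshold, one above
    rates = [logical_error_rate(p, p_th, d) for d in (3, 5, 7)]
    trend = "improves" if rates[-1] < rates[0] else "worsens"
    print(f"p = {p}: d = 3, 5, 7 -> " + ", ".join(f"{r:.1e}" for r in rates) + f"  ({trend})")
```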

A Breakthrough in Error Correction

In a paper published in Nature last December, Michael Newman and his team at Google Quantum AI reported a major breakthrough in quantum error correction. They demonstrated that adding physical qubits to a system makes the error rate of a logical qubit drop sharply, showing that they have crossed the critical threshold where error correction becomes effective. The research marks a significant step forward, moving quantum computers closer to practical, large-scale applications.

The concept of error correction itself isn’t new — it is already used in classical computers. On traditional systems, information is stored as bits, which can be prone to errors. To prevent this, error-correcting codes replicate each bit, ensuring that errors can be corrected by a majority vote. However, in quantum systems, things are more complicated. Unlike classical bits, qubits can suffer from various types of errors, including decoherence and noise, and quantum computing operations themselves can introduce additional errors.
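
A minimal classical example of this idea is the three-bit repetition code, which stores each bit three times and recovers it by majority vote, so any single flipped copy is corrected:

```python
def encode(bit: int) -> list[int]:
    """Repetition code: store the same bit three times."""
    return [bit, bit, bit]

def decode(copies: list[int]) -> int:
    """Majority vote corrects any single flipped copy."""
    return 1 if sum(copies) >= 2 else 0

codeword = encode(1)
codeword[0] ^= 1               # a noisy channel flips one of the copies
assert decode(codeword) == 1   # majority vote still recovers the original bit
```

It is exactly this copy-and-inspect strategy that quantum mechanics forbids, which is why quantum error correction has to detect errors indirectly.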

Moreover, unlike classical bits, measuring a qubit’s state directly disturbs it, making it much harder to identify and correct errors without compromising the computation. This makes quantum error correction particularly challenging.

The Quantum Threshold

Quantum error correction relies on the principle of redundancy. To protect quantum information, multiple physical qubits are used to form a logical qubit. However, this redundancy is only beneficial if the error rate is low enough. If the error rate of physical qubits is too high, adding more qubits can make the error correction process counterproductive.

Google’s recent achievement demonstrates that once the error rate of physical qubits drops below a specific threshold, adding more qubits improves the system’s resilience. This breakthrough brings researchers closer to achieving large-scale quantum computing systems capable of solving complex problems that classical computers cannot.

Moving Forward

While significant progress has been made, quantum computing still faces many engineering challenges. Quantum systems require extremely controlled environments, such as ultra-low temperatures, and the smallest disturbances can lead to errors. Despite these hurdles, Google’s breakthrough in quantum error correction is a major step toward realizing the full potential of quantum computing.

By improving error correction and ensuring that more reliable logical qubits are created, researchers are steadily paving the way for practical quantum computers. This achievement moves quantum computing closer to becoming a transformative tool for science and technology.

Space & Physics

Study Shows Single Qubit Can Outperform Classical Computers in Real-World Communication Tasks

This new research, however, offers compelling evidence of quantum systems’ power in a real-world scenario

A new study from the S. N. Bose National Centre for Basic Sciences in West Bengal, India, in collaboration with international teams, has revealed that even the simplest quantum system, a single qubit, can surpass its classical counterpart in certain communication tasks. This discovery reshapes our understanding of quantum computing and hints at a future where quantum technologies could solve problems that classical computers, even with ample resources, cannot.

Quantum systems have long been seen as the next frontier in computing, with the potential to revolutionize technology. However, proving their superiority over classical systems has been a challenge, as experiments are complex, and limitations often arise that suggest quantum advantage might not be as accessible as once thought. This new research, however, offers compelling evidence of quantum systems’ power in a real-world scenario.

Professor Manik Banik and his team at the S. N. Bose Centre, alongside researchers from the Henan Key Laboratory of Quantum Information and Cryptography, the Laboratoire d'Information Quantique at the Université libre de Bruxelles, and ICFO, a member of the Barcelona Institute of Science and Technology, have demonstrated that a single qubit can outperform a classical bit in a communication task, even when no extra resources, such as shared randomness, are available. The theoretical study, published in Quantum, was accompanied by an experimental demonstration featured as an Editors' Suggestion in Physical Review Letters.

The key to this breakthrough lies in the way quantum and classical systems handle communication. Classical communication often relies on shared resources, such as pre-agreed random numbers, to function efficiently. Without these shared resources, the task becomes more challenging. In contrast, the researchers found that a qubit does not require such help and can still outperform a classical bit under the same conditions.

The team's innovative approach involved developing a photonic quantum processor and a novel tool called a variational triangular polarimeter. This device enabled them to measure light polarization with high precision using a technique known as Positive Operator-Valued Measures (POVMs). These measurements play a crucial role in understanding the behavior of quantum systems, particularly under realistic conditions that include noise.
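
The article does not spell out the specific measurement used, but a three-outcome POVM on a single qubit, the kind of "triangular" structure the polarimeter's name suggests, can be illustrated with the standard trine POVM, whose three elements correspond to states 120 degrees apart and sum to the identity. The construction below is a generic textbook example, not the team's instrument:

```python
import numpy as np

def trine_povm() -> list[np.ndarray]:
    """Three POVM elements E_k = (2/3)|psi_k><psi_k| with states 120 degrees apart."""
    elements = []
    for k in range(3):
        theta = 2 * np.pi * k / 3                       # Bloch-sphere angle of the k-th state
        psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        elements.append((2 / 3) * np.outer(psi, psi))
    return elements

E = trine_povm()
assert np.allclose(sum(E), np.eye(2))        # completeness: the elements sum to the identity

rho = np.array([[1.0, 0.0], [0.0, 0.0]])     # a qubit prepared in the |0> state
probs = [float(np.trace(Ek @ rho)) for Ek in E]
print([round(p, 3) for p in probs])          # three outcome probabilities, summing to 1
```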

“This result is particularly exciting because it demonstrates a tangible quantum advantage in a realistic communication scenario,” said Professor Banik. “For a long time, quantum advantage was mostly theoretical. Now, we’ve shown that even a single qubit can outperform classical systems, opening up new possibilities for quantum communication and computing.”

This research represents more than just an academic milestone; it brings us a step closer to a future where quantum technologies could drastically alter how we process and communicate information. As quantum systems continue to develop, this breakthrough makes crossing the divide between quantum and classical computing not only more fascinating but also more attainable. The study also signals that quantum systems may eventually be able to solve problems that classical computers struggle with, even when resources are limited.

With this discovery, the potential for quantum communication and computation is moving from theoretical to practical applications, making the future of quantum technologies look even more promising.

Space & Physics

IIT Kanpur Unveils World’s First BCI-Based Robotic Hand Exoskeleton for Stroke Rehabilitation

The BCI-based robotic hand exoskeleton utilizes a unique closed-loop control system to actively engage the patient’s brain during therapy

The Indian Institute of Technology Kanpur (IITK) has unveiled the world’s first Brain-Computer Interface (BCI)-based Robotic Hand Exoskeleton, a groundbreaking innovation set to revolutionize stroke rehabilitation. This technology promises to accelerate recovery and improve patient outcomes by redefining post-stroke therapy. Developed over 15 years of rigorous research led by Prof. Ashish Dutta from IIT Kanpur’s Department of Mechanical Engineering, the project was supported by India’s Department of Science and Technology (DST), UK India Education and Research Initiative (UKIERI), and the Indian Council of Medical Research (ICMR).

The BCI-based robotic hand exoskeleton utilizes a unique closed-loop control system to actively engage the patient’s brain during therapy. It integrates three key components: a Brain-Computer Interface that captures EEG signals from the motor cortex to detect the patient’s intent to move, a robotic hand exoskeleton that assists with therapeutic hand movements, and software that synchronizes brain signals with the exoskeleton for real-time feedback. This coordination helps foster continuous brain engagement, leading to faster and more effective recovery.

“Stroke recovery is a long and often uncertain process. Our device bridges the gap between physical therapy, brain engagement, and visual feedback, creating a closed-loop control system that activates brain plasticity, which is the brain’s ability to change its structure and function in response to stimuli,” said Prof. Ashish Dutta. “This is especially significant for patients whose recovery has plateaued, as it offers renewed hope for further improvement and regaining mobility. With promising results in both India and the UK, we are optimistic that this device will make a significant impact in the field of neurorehabilitation.”

Traditional stroke recovery often faces challenges, especially when motor impairments stem from damage to the motor cortex. Conventional physiotherapy methods may fall short due to limited brain involvement. The new device addresses this gap by linking brain activity with physical movement. During therapy, patients are guided on-screen to perform hand movements, such as opening or closing their fist, while EEG signals from the brain and EMG signals from the muscles are used to activate the robotic exoskeleton in an assist-as-required mode. This synchronization ensures the brain, muscles, and visual engagement work together, improving recovery outcomes.
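
The closed-loop idea described above can be sketched as a simple control cycle: read EEG to infer movement intent, read EMG to gauge the patient's own effort, and drive the exoskeleton only as much as needed while updating the on-screen feedback. The sketch below is a schematic of that loop; the function names and thresholds are hypothetical placeholders, not IIT Kanpur's implementation.

```python
# Schematic of a closed-loop, assist-as-required therapy cycle.
# Every function name and threshold here is a hypothetical placeholder.
INTENT_THRESHOLD = 0.6   # decoder confidence required before assisting
EFFORT_TARGET = 1.0      # desired total (voluntary + assisted) effort

def rehabilitation_cycle(eeg, emg, exoskeleton, display):
    while display.session_active():
        cue = display.show_cued_movement()       # e.g. "open fist" or "close fist"
        intent = eeg.decode_intent(cue)          # motor-cortex EEG -> intent score in [0, 1]
        effort = emg.measure_effort()            # muscle EMG -> voluntary effort in [0, 1]
        if intent >= INTENT_THRESHOLD:
            # Assist only as much as the patient cannot manage on their own.
            assistance = max(0.0, EFFORT_TARGET - effort)
            exoskeleton.drive(cue, assistance)
        display.update_feedback(intent, effort)  # visual feedback closes the loop
```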

Pilot clinical trials, conducted in collaboration with Regency Hospital in India and the University of Ulster in the UK, have yielded impressive results. Remarkably, eight patients—four in India and four in the UK—who had reached a recovery plateau one or two years post-stroke achieved full recovery through the BCI-based robotic therapy. The device’s active engagement of the brain during therapy has proven to lead to faster and more comprehensive recovery compared to traditional physiotherapy.

While stroke recovery is typically most effective within the first six to twelve months, this innovative device has demonstrated its ability to facilitate recovery even beyond this critical period. With large-scale clinical trials underway at Apollo Hospitals in India, the device is expected to be commercially available within three to five years, offering new hope for stroke patients worldwide.
