
Space & Physics

Fusion Energy: The quest for unlimited power

The potential benefits of fusion energy are enormous. It could provide a nearly limitless supply of energy, reduce our reliance on fossil fuels, and help combat climate change

Dr Biju Dharmapalan


The Sun in X-rays
Image credit: NASA Goddard Laboratory for Atmospheres and Yohkoh Legacy Data Archive

Imagine a world with a virtually unlimited source of clean energy that could power our cities, industries, and homes without the harmful emissions and environmental impacts of fossil fuels. This isn’t science fiction—it’s the promise of fusion energy. But what exactly is fusion energy, and how close are we to making it a reality?

Nuclear fusion involves combining light elements, such as hydrogen, to form heavier elements, releasing a significant burst of energy in the process. This process, which powers the heat and light of the Sun and other stars, is praised for its potential as a sustainable, low-carbon energy source.

This contrasts with nuclear fission, the process used in today's nuclear power plants, in which heavy atomic nuclei are split into lighter ones. Fission, however, comes with longstanding concerns over radioactive waste and safety.

The road to practical fusion energy is steep. The foremost obstacle is achieving and maintaining the extremely high temperatures and pressures fusion requires. These conditions, comparable to those at the Sun's core, are needed to overcome the electrostatic repulsion between positively charged atomic nuclei. For decades, scientists have experimented with different ways of reaching them. The two primary approaches are magnetic confinement and inertial confinement.

Magnetic confinement, as seen in the tokamak design, employs powerful magnetic fields to contain hot plasma within a doughnut-shaped chamber. Inertial confinement, on the other hand, involves compressing a small pellet of fusion fuel with intense laser beams to achieve the conditions for fusion. Both methods have seen significant advances, but only inertial confinement has so far crossed the scientific break-even point, where the fusion energy released exceeds the energy delivered to the fuel. Recent breakthroughs have nonetheless brought practical fusion power closer than ever.

The primary fuels for nuclear fusion are deuterium and tritium, isotopes of hydrogen, the universe's most abundant element. Isotopes are members of a family of elements that share the same number of protons but differ in their number of neutrons. Every hydrogen isotope has one proton; deuterium adds one neutron and tritium two, making both heavier than protium, the isotope of hydrogen with no neutrons. Deuterium can be extracted from seawater, while tritium can be bred from lithium. When deuterium and tritium fuse, they form a helium nucleus, which has two protons and two neutrons, and release an energetic neutron. These energetic neutrons could serve as the foundation for generating energy in future fusion power plants.
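The energy released in this reaction follows from Einstein's E = mc²: the helium nucleus and the neutron together weigh slightly less than the deuterium and tritium that formed them, and the missing mass emerges as energy. A short Python sketch, using standard reference values for the atomic masses, makes the arithmetic concrete:

```python
# Energy released by a single D-T fusion event, from the mass defect.
# Atomic masses in unified atomic mass units (u), standard reference values.
M_DEUTERIUM = 2.014102  # u
M_TRITIUM = 3.016049    # u
M_HELIUM4 = 4.002602    # u
M_NEUTRON = 1.008665    # u
U_TO_MEV = 931.494      # energy equivalent of 1 u, in MeV

def dt_energy_mev():
    """Return the energy (MeV) released when D and T fuse into He-4 + n."""
    mass_defect = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
    return mass_defect * U_TO_MEV

print(f"Energy released per fusion: {dt_energy_mev():.1f} MeV")  # ~17.6 MeV
```

About 17.6 MeV per reaction, roughly a million times the energy of a typical chemical bond, is why a few milligrams of fusion fuel can rival tonnes of fossil fuel.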

Power plants today generate electricity using fossil fuels, nuclear fission, or renewable sources like wind or water. Regardless of the energy source, these plants convert mechanical power, such as the rotation of a turbine, into electrical power. In a coal-fired steam station, coal combustion turns water into steam, which then drives turbine generators to produce electricity.

The tokamak is an experimental machine designed to harness fusion energy. Inside a tokamak, the energy produced through atomic fusion is absorbed as heat by the vessel’s walls. Similar to conventional power plants, a fusion power plant will use this heat to produce steam, which then generates electricity via turbines and generators.

At the core of a tokamak is a doughnut-shaped vacuum chamber. Under extreme heat and pressure inside this chamber, gaseous hydrogen fuel becomes plasma, creating an environment where hydrogen atoms can fuse and release energy. The plasma's charged particles are controlled and shaped by large magnetic coils surrounding the vessel, which allow physicists to confine the hot plasma away from the vessel walls. The term "tokamak" is derived from a Russian acronym for "toroidal chamber with magnetic coils."

Image courtesy: EUROfusion

Fusion energy scientists consider tokamaks to be the leading plasma confinement design for future fusion power plants. In a tokamak, magnetic field coils confine plasma particles, enabling the plasma to reach the conditions necessary for fusion. 

The international ITER project in France is the largest and most ambitious tokamak experiment to date. ITER aims to demonstrate the feasibility of fusion as a large-scale and carbon-free source of energy. It’s a collaboration involving 35 countries, including India, and is expected to produce first plasma in the coming years.

The primary objective of ITER is to investigate and demonstrate burning plasmas—plasmas where the energy from helium nuclei produced by fusion reactions is sufficient to maintain the plasma’s temperature, reducing or eliminating the need for external heating. ITER will also test the feasibility and integration of essential fusion reactor technologies, such as superconducting magnets, remote maintenance, and systems for exhausting power from the plasma. Additionally, it will validate tritium breeding module concepts that could enable tritium self-sufficiency in future reactors.

ITER made headlines just last year when it achieved a major milestone: the successful installation of its first-of-a-kind superconducting magnet system. This system is crucial for creating the powerful magnetic fields needed to contain the superheated plasma. This achievement brings us one step closer to achieving sustained fusion reactions.

An alternative method is inertial confinement fusion, in which a compact pellet of fusion fuel is compressed by high-powered lasers. The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in California is leading this research. On December 5, 2022, NIF directed its lasers to deliver 2.05 megajoules of energy onto a small cylinder containing a frozen pellet of deuterium and tritium. The pellet was compressed, generating temperatures and pressures high enough to ignite fusion in the hydrogen inside it. During an extremely brief ignition, the fusing nuclei released 3.15 megajoules of energy, roughly 50 percent more than the laser energy used to heat the pellet. This was a crucial step towards the practical realisation of fusion energy production.
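The figure of merit here is the target gain Q, the ratio of fusion energy out to laser energy delivered to the target. The NIF numbers above can be checked directly:

```python
# Target gain Q for the December 2022 NIF shot, using the figures
# reported above (laser energy in, fusion energy out).
laser_energy_mj = 2.05    # megajoules delivered to the target
fusion_energy_mj = 3.15   # megajoules released by fusion

gain = fusion_energy_mj / laser_energy_mj
surplus_percent = (gain - 1) * 100

print(f"Target gain Q = {gain:.2f}")              # ~1.54
print(f"Energy surplus: {surplus_percent:.0f}%")  # ~54%
```

Note that Q > 1 counts only the laser energy reaching the target; the facility drew far more electricity from the grid to power the lasers, so engineering break-even for a whole plant remains a separate, harder milestone.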

On October 3, 2023, the Joint European Torus (JET) project in Oxford sustained fusion for five seconds, setting a "ground-breaking record" of 69 megajoules of energy released. That energy was generated from only 0.2 milligrams of fuel. In addition, many private companies are making waves in the fusion energy scene.
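Those two numbers imply an astonishing energy density. A quick sketch comparing the JET figures with burning coal (the coal heating value below is an assumed typical figure, not from the JET results):

```python
# Energy per kilogram of fuel implied by the JET figures above,
# compared with burning coal (~24 MJ/kg, an assumed typical value).
jet_energy_j = 69e6     # 69 megajoules released
jet_fuel_kg = 0.2e-6    # 0.2 milligrams of D-T fuel consumed

fusion_j_per_kg = jet_energy_j / jet_fuel_kg
coal_j_per_kg = 24e6    # assumed typical coal heating value

advantage_millions = fusion_j_per_kg / coal_j_per_kg / 1e6
print(f"Fusion: {fusion_j_per_kg:.2e} J/kg")
print(f"Advantage over coal: about {advantage_millions:.0f} million times")
```

Gram for gram of fuel consumed, the fusion reaction released on the order of ten million times more energy than coal combustion.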

While these achievements are remarkable, there are still many technical hurdles to overcome. We need to improve the efficiency and durability of fusion reactors, develop materials that can withstand the extreme conditions inside them, and create systems for safely handling and breeding tritium.

Despite these challenges, the potential benefits of fusion energy are enormous. It could provide a nearly limitless supply of energy, reduce our reliance on fossil fuels, and help combat climate change. Imagine a world where energy is abundant, clean, and available to all—fusion energy could make this vision a reality. As we look to the future, the quest for fusion energy represents one of the greatest scientific and engineering challenges of our time. It’s a testament to human ingenuity and our unwavering determination to solve the world’s most pressing problems.

Dr Biju Dharmapalan is a science communicator and an adjunct faculty member at the National Institute of Advanced Studies, Bangalore. He was formerly associated with Vigyan Prasar.


Space & Physics

Magnetic Fields Found to Shape Star Formation Near Milky Way Disc

Scientists map magnetic fields in molecular clouds near the Milky Way, revealing their key role in slowing and shaping star formation.


Molecular clouds within a nebula in the Milky Way
Image credit: NASA/Unsplash


Scientists have uncovered new insights into how stars are formed by mapping the magnetic fields surrounding molecular clouds near the Milky Way’s disc, offering a deeper understanding of one of the universe’s most fundamental processes.

The study focuses on two small molecular clouds—L1604 and L121—revealing how magnetic fields influence the balance between gravity and internal pressure during star formation.

Magnetic Fields and Star Formation in Milky Way Clouds

For decades, astronomers have understood star formation as a balance between gravity pulling gas inward and internal pressure pushing outward. However, the new research highlights a third critical factor: magnetic fields.

In a media statement, the researchers explained that magnetic fields act as an invisible force shaping how molecular clouds evolve and collapse to form stars.

The study was conducted by scientists from the Aryabhatta Research Institute of Observational Sciences (ARIES), Uttarakhand, and Assam University, India, using advanced polarimetric techniques to detect otherwise invisible magnetic structures.

Observing Magnetic Fields with Polarimetry

To map these fields, the team used R-band polarimetry with the ARIES Imaging Polarimeter mounted on a 104-cm telescope in Nainital.

This technique measures how starlight becomes polarised as it passes through dust grains aligned by magnetic fields.
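The geometry behind this can be made concrete. Polarimeters record the Stokes parameters Q and U of incoming starlight; the degree and position angle of polarisation, which traces the projected magnetic field orientation, follow from the standard relations. A minimal sketch, with made-up measurement values for illustration:

```python
import math

# Degree and angle of linear polarisation from normalised Stokes
# parameters q = Q/I and u = U/I, the standard relations used in
# optical polarimetry. The input values below are invented.
def polarisation(q, u):
    """Return (degree of polarisation in %, position angle in degrees)."""
    degree = math.hypot(q, u) * 100.0
    angle = 0.5 * math.degrees(math.atan2(u, q))
    return degree, angle

p, theta = polarisation(0.015, 0.009)  # hypothetical star measurement
print(f"P = {p:.2f}%, position angle = {theta:.1f} deg")
```

Repeating this for thousands of background stars, each vector marking the local field direction, is what lets astronomers draw the "skeleton" of a cloud's magnetic field.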

In a media statement, the researchers said that by analysing thousands of such light signals, they were able to “see” the skeleton of magnetic fields surrounding the molecular clouds for the first time.

Two Molecular Clouds Reveal Contrasting Behaviour

The study examined two distinct clouds:

  • L1604, located about 816 parsecs away, is dense and massive, with strong potential for future star formation
  • L121, much closer at 124 parsecs, is less dense but exhibits a stronger and more organised magnetic field

In a media statement, the scientists noted that the orderly magnetic structure in L121 suggests it has not yet undergone intense gravitational collapse, unlike more active star-forming regions.

Star Formation Controlled by Energy Balance

By calculating magnetic field strength, the researchers found that both clouds are sub-critical, meaning magnetic forces are strong enough to resist gravitational collapse across most of their structure.

In a media statement, the team stated that magnetic energy dominates over both turbulence and gravity at the outer regions of the clouds.

However, deep within the dense cores, gravity may begin to take over, creating conditions suitable for star formation.
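"Sub-critical" has a precise meaning here: the cloud's mass-to-magnetic-flux ratio is below the critical value at which gravity would overwhelm magnetic support. A minimal sketch using a commonly quoted observational estimate of that ratio (the cloud values below are hypothetical, not taken from the study):

```python
# Mass-to-flux criticality parameter, using the commonly quoted
# observational estimate lambda ~ 7.6e-21 * N(H2) / B,
# with column density N in cm^-2 and field strength B in microgauss.
# lambda < 1: magnetically sub-critical (the field resists collapse);
# lambda > 1: super-critical (gravity can win).
def mass_to_flux_lambda(n_h2_cm2, b_microgauss):
    return 7.6e-21 * n_h2_cm2 / b_microgauss

# Hypothetical cloud values, for illustration only:
lam = mass_to_flux_lambda(5e21, 60.0)
state = "sub" if lam < 1 else "super"
print(f"lambda = {lam:.2f} -> {state}-critical")
```

A cloud like this hypothetical one would be magnetically supported overall, even if its densest cores eventually tip over the critical threshold and collapse into stars.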

The “Recipe” for Star Formation

The findings suggest that magnetic fields play a crucial role in regulating how quickly stars form.

In a media statement, researchers said that magnetism acts as an “invisible hand,” slowing down star formation and preventing galaxies from converting all their gas into stars at once.

The study positions L1604 and L121 as natural laboratories for understanding the interplay between gravity and magnetism.

Rather than being passive clouds, they represent dynamic systems where fundamental forces interact over millions of years to shape the birth of stars.

The findings offer a clearer picture of how galaxies like the Milky Way sustain star formation over long cosmic timescales, balancing collapse with control.


Space & Physics

Researchers Use AI to Enable Robots to ‘See’ Through Walls

MIT researchers develop AI-powered system using wireless signals to help robots see through walls and reconstruct hidden objects and indoor spaces.


MIT researchers use generative AI to reconstruct hidden 3D objects. Credit: Courtesy of the researchers/MIT News

Researchers at the Massachusetts Institute of Technology have developed a new artificial intelligence-powered system that allows robots to detect and reconstruct objects hidden behind walls and obstacles, marking a significant breakthrough in machine perception.

The system combines wireless signals with generative AI models to enable what researchers describe as a new form of “wireless vision,” potentially transforming robotics, logistics, and smart environments.

Seeing Through Walls with Wireless Signals

The research builds on over a decade of work using millimeter wave (mmWave) signals—similar to those used in Wi-Fi—which can pass through materials such as drywall, plastic, and cardboard and reflect off hidden objects.

Earlier approaches could only capture partial shapes due to limitations in how these signals reflect.

The new system overcomes this by combining wireless reflections with generative AI, enabling the reconstruction of complete object shapes even when they are not directly visible.

“What we’ve done now is develop generative AI models that help us understand wireless reflections. This opens up a lot of interesting new applications, but technically it is also a qualitative leap in capabilities, from being able to fill in gaps we were not able to see before to being able to interpret reflections and reconstruct entire scenes,” said Fadel Adib, in a media statement.

“We are using AI to finally unlock wireless vision.”

Improved Object Reconstruction

The system, called Wave-Former, first creates a partial image of a hidden object using reflected wireless signals. It then uses a trained AI model to fill in missing parts and refine the reconstruction.

In tests, Wave-Former successfully reconstructed around 70 everyday objects—including boxes, utensils, and fruits—with nearly 20% higher accuracy than existing methods.

The objects were placed behind or under materials such as wood, fabric, and plastic, demonstrating the system’s robustness in real-world conditions.

Reconstructing Entire Rooms

Beyond individual objects, the researchers developed a second system capable of reconstructing entire indoor environments.

Using a single stationary radar, the system tracks how wireless signals bounce off moving humans and surrounding objects. These reflections—often considered noise—are analysed by AI to map out the room layout.

The system, known as RISE, was tested using over 100 human movement patterns and achieved twice the accuracy of existing techniques in reconstructing indoor spaces.

Privacy-Preserving Alternative to Cameras

Unlike camera-based systems, this approach does not capture visual images, offering a privacy-preserving alternative for indoor monitoring and robotics.

Because it relies on wireless signals rather than cameras, it can detect presence and layout without revealing identifiable details.

Applications in Warehousing and Smart Homes

The researchers say the technology could have wide-ranging applications:

  • Warehouses: Robots could verify packed items before shipping, reducing errors and returns
  • Smart homes: Robots could better understand human location and movement
  • Human-robot interaction: Improved safety and efficiency in shared environments

The system could also pave the way for future “foundation models” trained specifically on wireless data, similar to how large AI models are trained for language and vision.


Health

Researchers Develop AI Method That Makes Computer Vision Models More Explainable

A new technique developed by MIT researchers could help make artificial intelligence systems more accurate and transparent in high-stakes fields such as health care and autonomous driving by improving how computer vision models explain their decisions.


MIT researchers have developed a new explainable AI method that improves the accuracy and transparency of computer vision models, helping users trust AI predictions in healthcare and autonomous driving.
Image credit: Tara/Pexels


Researchers at MIT have developed a new approach to make computer vision models more transparent, offering a potential boost to trust and accountability in safety-critical applications such as medical diagnosis and autonomous driving.

In a media statement, the researchers said the method improves on a widely used explainability technique known as concept bottleneck modeling, which enables AI systems to show the human-understandable concepts behind a prediction. The new approach is designed to produce clearer explanations while also improving prediction accuracy.

Why explainable AI matters

In areas such as health care, users often need more than just a model’s output. They want to understand why a system arrived at a particular conclusion before deciding whether to rely on it. Concept bottleneck models attempt to address that need by forcing an AI system to make predictions through a set of intermediate concepts that humans can interpret.

For example, when analysing a medical image for melanoma, a clinician might define concepts such as “clustered brown dots” or “variegated pigmentation.” The model would first identify those concepts and then use them to arrive at its final prediction.
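The structure of a concept bottleneck model is simple to sketch. Everything below (the feature values, weights, and concept scores) is invented for illustration; the point is only that the final prediction is computed from the human-readable concept scores alone, never directly from the raw features, which is what makes the explanation faithful:

```python
# Illustrative concept bottleneck: raw image features are first mapped
# to scores for named, human-readable concepts, and the classifier then
# sees ONLY those concept scores. All numbers here are invented.
CONCEPTS = ["clustered brown dots", "variegated pigmentation",
            "irregular border", "asymmetry", "diameter over 6mm"]

def concept_scores(features, concept_weights):
    """First stage: map raw features to one score per named concept."""
    return [sum(f * w for f, w in zip(features, row))
            for row in concept_weights]

def predict(scores, class_weights):
    """Second stage: the prediction uses only the concept scores."""
    return sum(s * w for s, w in zip(scores, class_weights))

# Toy numbers: 3 raw features -> 5 concepts -> 1 melanoma-risk score.
features = [0.8, 0.1, 0.5]
concept_w = [[0.9, 0.0, 0.1], [0.2, 0.7, 0.0], [0.0, 0.3, 0.8],
             [0.5, 0.5, 0.0], [0.1, 0.0, 0.9]]
scores = concept_scores(features, concept_w)
risk = predict(scores, [0.4, 0.6, 0.3, 0.2, 0.5])

for name, s in zip(CONCEPTS, scores):
    print(f"{name}: {s:.2f}")
print(f"risk score: {risk:.2f}")
```

Because the bottleneck forces every prediction through the concept layer, a clinician can inspect which concepts drove the score, which is exactly the transparency the MIT method aims to improve.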

But the researchers said pre-defined concepts can sometimes be too broad, irrelevant or incomplete for a specific task, limiting both the quality of explanations and the model’s performance. To overcome that, the MIT team developed a method that extracts concepts the model has already learned during training and then compels it to use those concepts when making decisions.

The approach relies on two specialised machine-learning models. One extracts the most relevant internal features learned by the target model, while the other translates them into plain-language concepts that humans can understand. This makes it possible to convert a pretrained computer vision model into one capable of explaining its reasoning through interpretable concepts.

“In a sense, we want to be able to read the minds of these computer vision models. A concept bottleneck model is one way for users to tell what the model is thinking and why it made a certain prediction. Because our method uses better concepts, it can lead to higher accuracy and ultimately improve the accountability of black-box AI models,” Antonio De Santis, lead author of the study, said in a media statement.

De Santis is a graduate student at Polytechnic University of Milan and carried out the research while serving as a visiting graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). The paper was co-authored by Schrasing Tong, Marco Brambilla of Polytechnic University of Milan, and Lalana Kagal of CSAIL. The research will be presented at the International Conference on Learning Representations.

Concept bottleneck models have gained attention as a way to improve AI explainability by introducing an intermediate reasoning step between an input image and the final output. In one example, a bird-classification model might identify concepts such as “yellow legs” and “blue wings” before predicting a barn swallow.

However, the researchers noted that these concepts are often generated in advance by humans or large language models, which may not always match the needs of the task. Even when a model is given a fixed concept set, it can still rely on hidden information not visible to users, a challenge known as information leakage.

“These models are trained to maximize performance, so the model might secretly use concepts we are unaware of,” De Santis said in a media statement.

The team’s solution was to tap into the knowledge the model had already acquired from large volumes of training data. Using a sparse autoencoder, the method isolates the most relevant learned features and reconstructs them into a small number of concepts. A multimodal large language model then describes each concept in simple language and labels the training images by marking which concepts are present or absent.

The annotated dataset is then used to train a concept bottleneck module, which is inserted into the target model. This forces the model to make predictions using only the extracted concepts.

The researchers said one of the biggest challenges was ensuring that the automatically identified concepts were both accurate and understandable to humans. To reduce the risk of hidden reasoning, the model is limited to just five concepts for each prediction, encouraging it to focus only on the most relevant information and making the explanation easier to follow.

When tested against state-of-the-art concept bottleneck models on tasks including bird species classification and skin lesion identification, the new method delivered the highest accuracy while also producing more precise explanations, according to the researchers. It also generated concepts that were more relevant to the images in the dataset.

Still, the team acknowledged that the broader challenge of balancing accuracy and interpretability remains unresolved.

“We’ve shown that extracting concepts from the original model can outperform other CBMs, but there is still a tradeoff between interpretability and accuracy that needs to be addressed. Black-box models that are not interpretable still outperform ours,” De Santis said in a media statement.

Looking ahead, the researchers plan to explore ways to further reduce information leakage, possibly by adding additional concept bottleneck modules. They also aim to scale up the method by using a larger multimodal language model to annotate a larger training dataset, which could improve performance further.

This latest work adds to growing efforts to make AI systems not only more powerful, but also more understandable in domains where trust can be as important as accuracy.

