
Space & Physics

Pioneers of modern Artificial Intelligence


Artificial Neural Network. Credit: Wikimedia Commons

The 2024 Nobel Prize in Physics was a trend breaker, with computer scientists being awarded the prestigious prize.

Geoffrey Hinton, one of this year’s laureates, had previously received the 2018 Turing Award, arguably the most prestigious prize in computer science.

John Hopfield, the other laureate, was, together with Hinton, among the early generation of scientists who set the foundations for machine learning in the 1980s, the technique used to train artificial intelligence. Their methods shaped modern AI models, which now take up the mantle from us, discovering patterns within reams of data that would otherwise take humans arguably forever to find.

Until the middle of the last century, computation was a task that required manual labor. Then Alan Turing, the British mathematician who rose to fame during World War II after helping break the Enigma code, conceived the theoretical basis for modern computers. When he tried to push further, he arrived at the question that opens his 1950 paper: “Can machines think?” Seemingly innocuous, but with radical consequences if it really took shape, the question, together with Turing’s conception of algorithms, laid the foundation of artificial intelligence.

Why the physics prize?

Artificial neural networks, in particular, form the basis of today’s popular OpenAI ChatGPT and numerous facial-recognition, image-recognition, and language-translation tools. But these machine learning models have broken through the ceiling of their original applications, spreading across disciplines from computer science to finance to physics.

Physics did form part of the bedrock of AI research, particularly condensed matter physics. Of special relevance is the spin glass, a phenomenon in condensed matter physics in which magnetic spins freeze into random, disordered orientations rather than settling into an orderly pattern. Ideas from spin-glass physics proved foundational to AI.

John Hopfield and Geoffrey Hinton are pioneers of artificial neural networks. Hopfield, an American, and Hinton, a Briton, came from different disciplines: Hopfield trained as a physicist, while Hinton was a cognitive psychologist. The burgeoning field of computer science needed interdisciplinary talent to attack a problem that no single physicist, logician, or mathematician could solve. To construct a machine that can think, the machine must learn to make sense of reality. Learning is key, and computer scientists drew inspiration from statistical and condensed matter physics, psychology, and neuroscience to come up with the neural network.

Inspired by the human brain, a neural network consists of artificial neurons, each holding a particular value. The network is first fed data as part of a training program, then tested further on unfamiliar data. The neurons’ values update upon each subsequent pass over more data, forming the crux of the learning process. The approach was first shown to work when John Hopfield constructed a simple neural network in 1982.

Hopfield network, with neurons forming a chain of connections. Credit: Wikimedia Commons

Neurons pair up with one another to form a long chain. Hopfield would then feed the network an image, training it by having the neurons pass information along. Neurons that fire together, wire together, responding to the particular patterns they were trained on. Known as the Hebbian postulate, this rule also forms the basis for learning in the human brain. When the Hopfield network proved able to identify even the most distorted version of the original image, AI took its baby steps. But training the network to learn robustly across a larger swathe of data required additional layers of neurons, which was no easy goal to achieve. An efficient method of learning was needed.
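The idea can be sketched in a few lines of code. Below is a minimal, illustrative Hopfield network (our own sketch, not the laureates’ code): a binary pattern is stored using the Hebbian rule, then recovered from a deliberately distorted copy.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: neurons that fire together, wire together."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)    # strengthen connections between co-active neurons
    np.fill_diagonal(W, 0)     # no neuron connects to itself
    return W / len(patterns)

def recall(W, state, sweeps=10):
    """Repeatedly update each neuron until the state settles on a stored pattern."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 8-neuron pattern, flip two of its bits, and let the network repair it.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1
noisy[3] *= -1
restored = recall(W, noisy)
print(np.array_equal(restored, pattern))  # True: the distorted input is repaired
```

Even this toy version shows the key behaviour the article describes: the network acts as an associative memory, pulling a corrupted input back toward the pattern it was trained on.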

Artificial neural network, with neurons forming connections. The information can go across in both directions (though not indicated in the representation). Credit: Wikimedia Commons

That’s when Geoffrey Hinton entered the picture, around the same timeframe, helping conceive backpropagation, a technique that is now mainstream and is key to the machine learning models we use today. Building on the Hopfield network, Hinton also co-developed the “Boltzmann machine” in the 1980s, and later extended it into multi-layered versions. Geoffrey Hinton was featured in Ed Publica‘s Know the Scientist column.
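Backpropagation itself fits in a short sketch. The toy example below (illustrative only; the network size, learning rate, and variable names are our own choices) trains a tiny two-layer network on the XOR problem by propagating the output error backwards through the layers to compute weight updates.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    return h, sigmoid(h @ W2 + b2)      # network output

_, out = forward(X)
initial_loss = float(((out - y) ** 2).mean())

for _ in range(5000):
    h, out = forward(X)
    # Backward pass: push the error from the output back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

_, out = forward(X)
final_loss = float(((out - y) ** 2).mean())
print(f"MSE before training: {initial_loss:.3f}, after: {final_loss:.3f}")
```

The same chain-rule bookkeeping, scaled up to billions of weights, is what trains today’s large AI models.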


Nobel Prize in Physics: Clarke, Devoret, and Martinis Honoured for Pioneering Quantum Discoveries

The 2025 Nobel Prize in Physics honours John Clarke, Michel H. Devoret, and John M. Martinis for revealing how entire electrical circuits can display quantum behaviour — a discovery that paved the way for modern quantum computing.


The 2025 Nobel Prize in Physics has been awarded to John Clarke, Michel H. Devoret, and John M. Martinis for their landmark discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit, an innovation that laid the foundation for today’s quantum computing revolution.

Announcing the prize, Olle Eriksson, Chair of the Nobel Committee for Physics, said, “It is wonderful to be able to celebrate the way that century-old quantum mechanics continually offers new surprises. It is also enormously useful, as quantum mechanics is the foundation of all digital technology.”

The Committee described their discovery as a “turning point in understanding how quantum mechanics manifests at the macroscopic scale,” bridging the gap between classical electronics and quantum physics.

John Clarke: The SQUID Pioneer

British-born John Clarke, Professor Emeritus at the University of California, Berkeley, is celebrated for his pioneering work on Superconducting Quantum Interference Devices (SQUIDs) — ultra-sensitive detectors of magnetic flux. His career has been marked by contributions that span superconductivity, quantum amplifiers, and precision measurements.

Clarke’s experiments in the early 1980s provided the first clear evidence of quantum behaviour in electrical circuits — showing that entire electrical systems, not just atoms or photons, can obey the strange laws of quantum mechanics.

A Fellow of the Royal Society, Clarke has been honoured with numerous awards including the Comstock Prize (1999) and the Hughes Medal (2004).

Michel H. Devoret: Architect of Quantum Circuits

French physicist Michel H. Devoret, now the Frederick W. Beinecke Professor Emeritus of Applied Physics at Yale University, has been one of the intellectual architects of quantronics — the study of quantum phenomena in electrical circuits.

After earning his PhD at the University of Paris-Sud and completing a postdoctoral fellowship under Clarke at Berkeley, Devoret helped establish the field of circuit quantum electrodynamics (cQED), which underpins the design of modern superconducting qubits.

His group’s innovations — from the single-electron pump to the fluxonium qubit — have set performance benchmarks in quantum coherence and control. Devoret is also a recipient of the Fritz London Memorial Prize (2014) and the John Stewart Bell Prize, and is a member of the French Academy of Sciences.

John M. Martinis: Building the Quantum Processor

American physicist John M. Martinis, who completed his PhD at UC Berkeley under Clarke’s supervision, translated these quantum principles into the hardware era. His experiments demonstrated energy level quantisation in Josephson junctions, one of the key results now honoured by the Nobel Committee.

Martinis later led Google’s Quantum AI lab, where his team in 2019 achieved the world’s first demonstration of quantum supremacy — showing a superconducting processor outperforming the fastest classical supercomputer on a specific task.

A former professor at UC Santa Barbara, Martinis continues to be a leading voice in quantum computing research and technology development.

A Legacy of Quantum Insight

The trio’s discovery, once seen as a niche curiosity in superconducting circuits, has become a cornerstone of the global quantum revolution. Their experiments proved that macroscopic electrical systems can display quantised energy states and tunnel between them, much like subatomic particles.

Their work, as the Nobel citation puts it, “opened a new window into the quantum behaviour of engineered systems, enabling technologies that are redefining computation, communication, and sensing.”



The Tiny Grip That Could Reshape Medicine: India’s Dual-Trap Optical Tweezer

Indian scientists build a new optical tweezer module, set to transform single-molecule research and medical innovation

Joe Jacob


Advanced optical tweezers manipulate single molecules with laser precision, enabling breakthroughs in biomedical and neuroscience research

In an inventive leap that could open up new frontiers in neuroscience, drug development, and medical research, scientists in India have designed their own version of a precision laboratory tool known as the dual-trap optical tweezers system. By creating a homegrown solution to manipulate and measure forces on single molecules, the team brings world-class technology within reach of Indian researchers—potentially igniting a wave of scientific discoveries.

Optical tweezers, a Nobel Prize-winning invention from 2018, use focused beams of light to grab and move microscopic objects with extraordinary accuracy. The technique has become indispensable for measuring tiny forces and exploring the mechanics of DNA, proteins, living cells, and engineered nanomaterials. Yet, decades after their invention, conventional optical tweezers systems sometimes fall short for today’s most challenging experiments.

Researchers at the Raman Research Institute (RRI), an autonomous institute backed by India’s Department of Science and Technology in Bengaluru, have now introduced a smart upgrade that addresses long-standing pitfalls of dual-trap tweezers. Traditional setups rely on measuring the light that passes through particles trapped in two separate beams—a method prone to signal “cross-talk.” This makes simultaneous, independent measurement difficult, diminishing both accuracy and versatility.

Comparison of conventional and newly developed dual-trap optical tweezer designs, highlighting how the Indian innovation eliminates signal interference for more precise measurements

The new system pioneers a confocal detection scheme. In a media statement, Md Arsalan Ashraf, a doctoral scholar at RRI, explained, “The unique optical trapping scheme utilizes laser light scattered back by the sample for detecting trapped particle position. This technique pushes past some of the long-standing constraints of dual-trap configurations and removes signal interference. The single-module design integrates effortlessly with standard microscopy frameworks,” he said.

The refinement doesn’t end there. The system ensures that detectors tracking tiny particles remain perfectly aligned, even when the optical traps themselves move. The result: two stable, reliable measurement channels, zero interference, and no need for complicated re-adjustment mid-experiment—a frequent headache with older systems.

Traditional dual-trap designs have required costly and complex add-ons, sometimes even hijacking the features of laboratory microscopes and making additional techniques, such as phase contrast or fluorescence imaging, hard to use. “This new single-module trapping and detection design makes high-precision force measurement studies of single molecules, probing of soft materials including biological samples, and micromanipulation of biological samples like cells much more convenient and cost-effective,” said Pramod A Pullarkat, lead principal investigator at RRI, in a statement.

By removing cross-talk and offering robust stability—whether traps are close together, displaced, or the environment changes—the RRI team’s approach is not only easier to use but far more adaptable. Its plug-and-play module fits onto standard microscopes without overhauling their basic structure.

From the intellectual property point of view, this design may be a game-changer. By cracking the persistent problem of signal interference with minimalist engineering, the new setup enhances measurement precision and reliability—essential advantages for researchers performing delicate biophysical experiments on everything from molecular motors to living cells.

With the essential building blocks in place, the RRI team is now exploring commercial avenues to produce and distribute their single-module, dual-trap optical tweezer system as an affordable add-on for existing microscopes. The innovation stands to put advanced single-molecule force spectroscopy, long limited to wealthier labs abroad, into the hands of scientists across India—and perhaps spark breakthroughs across the biomedical sciences.



New Magnetic Transistor Breakthrough May Revolutionize Electronics

A team of MIT physicists has created a magnetic transistor that could make future electronics smaller, faster, and more energy-efficient. By swapping silicon for a new magnetic semiconductor, they’ve opened the door to game-changing advancements in computing.


Illustration of an advanced microchip with visualized magnetic fields, representing MIT's breakthrough in magnetic semiconductor transistors for next-generation electronics.

For decades, silicon has been the undisputed workhorse in transistors—the microscopic switches responsible for processing information in every phone, computer, and high-tech device. But silicon’s physical limits have long frustrated scientists seeking ever-smaller, more efficient electronics.

Now, MIT researchers have unveiled a major advance: they’ve replaced silicon with a magnetic semiconductor, introducing magnetism into transistors in a way that promises tighter, smarter, and more energy-saving circuits. This new ingredient, chromium sulfur bromide, makes it possible to control electricity flow with far greater efficiency and could even allow each transistor to “remember” information, simplifying circuit design for future chips.

The material’s purity is central to the result. “This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10,” the MIT team said in a media statement. Their work, detailed in Physical Review Letters, outlines how the material’s stability and clean switching between magnetic states unlock a new degree of control.

Chung-Tao Chou, MIT graduate student and co-lead author, explains in a media statement, “People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research.”

The device’s game-changing aspect is its ability to combine the roles of memory cell and transistor, allowing electronics to read and store information faster and more reliably. “Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” said Luqiao Liu, MIT associate professor, in a media statement.

Moving forward, the team is looking to scale up their clean manufacturing process, hoping to create arrays of these magnetic transistors for broader commercial and scientific use. If successful, the innovation could usher in a new era of spintronic devices, where magnetism becomes as central to electronics as silicon is today.

