Space & Physics

Pioneers of modern Artificial Intelligence

Artificial Neural Network. Credit: Wikimedia Commons

The 2024 Nobel Prize in Physics broke with tradition: the prestigious prize went to computer scientists.

Geoffrey Hinton, one of this year’s laureates, had previously received the 2018 Turing Award, arguably the most prestigious prize in computer science.

John Hopfield, the other laureate, and Hinton were among the early generation of researchers who, in the 1980s, laid the foundations of machine learning, the set of techniques used to train artificial intelligence. Those techniques shaped modern AI models, which take up the mantle from us, discovering patterns in reams of data that would otherwise take humans practically forever to find.

Until the middle of the last century, computation was a task that required manual labor. Then Alan Turing, the British mathematician who rose to fame during World War II after helping break the Enigma code, conceived the theoretical basis for modern computers. When he pushed further, he arrived at the question “Can machines think?”, which opens his 1950 paper “Computing Machinery and Intelligence”. Seemingly innocuous, the question carried radical consequences: through his conception of algorithms, Turing laid the foundation of artificial intelligence.

Why the physics prize?

Artificial neural networks, in particular, form the basis of today’s popular chatbots such as OpenAI’s ChatGPT, as well as numerous facial recognition, image processing and language translation applications. These machine learning models have also broken through the ceiling of their home discipline, finding applications in fields from computer science to finance to physics.

Physics, particularly condensed matter physics, formed part of the bedrock of AI research. Especially relevant is the spin glass, a state of matter in which atomic spins freeze into random orientations rather than lining up in an orderly pattern. The statistical physics of such disordered spin systems proved foundational for AI: the energy function of Hopfield’s network takes the same mathematical form as that of a spin system.

John Hopfield and Geoff Hinton are pioneers of artificial neural networks. The two came from very different disciplines: Hopfield, an American, trained as a physicist, while Hinton, from Britain, trained as a cognitive psychologist. The burgeoning field of computer science needed such interdisciplinary talent to attack a problem that no single physicist, logician or mathematician could solve. For a machine to think, it must first learn to make sense of reality. Learning is the key, and computer scientists drew on statistical and condensed matter physics, psychology and neuroscience to come up with the neural network.

Inspired by the human brain, a neural network consists of artificial neurons that each hold a value. The network is first fed data as part of a training program, before being exposed to unfamiliar data; the neurons’ values update on each subsequent pass over more data, and this updating forms the crux of the learning process. That the approach could actually work became clear when John Hopfield constructed a simple neural network in 1982.

Hopfield network, with neurons forming a chain of connections. Credit: Wikimedia Commons

Neurons pair up with one another to form a chain of connections. Hopfield then fed the network an image, training it by having the neurons pass information among themselves, updating one neuron at a time. Neurons that fire together, wire together: connections strengthen between neurons that respond to the same patterns the network was trained on. Known as the Hebbian postulate, this also forms a basis for learning in the human brain. AI took its baby steps when the Hopfield network was able to identify even a heavily distorted version of the original image. But training a network to learn robustly across swathes of additional data required extra layers of neurons, and that was no easy goal: an efficient method of learning was needed.
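To make the recall idea concrete, here is a minimal sketch of a Hopfield-style network in Python. It uses the Hebbian rule described above, with a made-up six-value pattern standing in for an image; the stored pattern, its distorted copy and the update schedule are illustrative assumptions, not the laureate’s original setup.

```python
import numpy as np

def train_hebbian(patterns):
    # Hebbian rule: strengthen the link between every pair of neurons
    # that are active together in a stored pattern.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)        # no neuron connects to itself
    return W / len(patterns)

def recall(W, state, steps=10):
    # Update one neuron at a time until the network settles on a stored pattern.
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one tiny 6-"pixel" pattern (+1/-1 values), then recover it from a distorted copy.
stored = np.array([[1, -1, 1, -1, 1, -1]])
W = train_hebbian(stored)
noisy = np.array([1, -1, -1, -1, 1, -1])   # one pixel flipped
print(recall(W, noisy))                     # settles back on the stored pattern
```

Even this toy version shows the key behaviour: a partially corrupted input is pulled back towards the memorised pattern.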

Artificial neural network, with neurons forming connections. Information can flow in both directions (though this is not indicated in the representation). Credit: Wikimedia Commons

That is when Geoff Hinton entered the picture, around the same timeframe, helping conceive backpropagation, a technique that is now mainstream and remains the key to the machine learning models we use today. Hinton also devised the “Boltzmann machine”, a neural network founded on the Hopfield network, and in the 2000s extended it into a multi-layered version. Geoff Hinton was featured in Ed Publica’s Know the Scientist column.
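For a sense of what backpropagation does, the sketch below trains a tiny two-layer network by propagating the output error backwards through each layer and nudging the weights downhill. The toy data, layer sizes, loss and learning rate are all assumptions chosen for illustration; this is not Hinton’s original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))                    # toy inputs (assumed data)
y = (X[:, :1] * X[:, 1:] > 0).astype(float)    # toy target: do the two inputs share a sign?

W1 = rng.normal(size=(2, 4))                   # input -> hidden weights
W2 = rng.normal(size=(4, 1))                   # hidden -> output weights
lr = 0.1

for _ in range(2000):
    # forward pass
    h = np.tanh(X @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))      # sigmoid output
    # backward pass: send the error back layer by layer (the chain rule)
    d_out = out - y                             # gradient at the output
    d_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)       # chain rule through tanh
    d_W1 = X.T @ d_h
    # gradient-descent weight updates
    W2 -= lr * d_W2
    W1 -= lr * d_W1

print(np.round(out, 2))                         # predictions move toward the 0/1 targets
```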

Space & Physics

Researchers Uncover New Way to Measure Hidden Quantum Interactions in Materials

Image credit: Pixabay

A team of MIT scientists has developed a theory-guided strategy to directly measure an elusive quantum property in semiconductors — the electron-phonon interaction — using an often-ignored effect in neutron scattering.

Their approach, published this week in Materials Today Physics, reinterprets an interference effect, typically considered a nuisance in experiments, as a valuable signal. This enables researchers to probe electron-phonon interactions — a key factor influencing a material’s thermal, electrical, and optical behaviour — which until now have been extremely difficult to measure directly.

“Rather than discovering new spectroscopy techniques by pure accident, we can use theory to justify and inform the design of our experiments and our physical equipment,” said Mingda Li, senior author and associate professor at MIT, in a media statement.

By engineering the interference between nuclear and magnetic interactions during neutron scattering, the team demonstrated that the resulting signal is directly proportional to the electron-phonon coupling strength.

“Being able to directly measure the electron-phonon interaction opens the door to many new possibilities,” said MIT graduate student Artittaya Boonkird.

While the current setup produced a weak signal, the findings lay the groundwork for next-generation experiments at more powerful facilities like Oak Ridge National Laboratory’s proposed Second Target Station. The team sees this as a shift in materials science — using theoretical insights to unlock previously “invisible” properties for a range of advanced technologies, from quantum computing to medical devices.

Space & Physics

Dormant Black Holes Revealed in Dusty Galaxies Through Star-Shredding Events

Image credit: NRAO/AUI/NSF/NASA

In a major discovery, astronomers at MIT, Columbia University, and other institutions have used NASA’s James Webb Space Telescope (JWST) to uncover hidden black holes in dusty galaxies that violently “wake up” only when an unsuspecting star wanders too close.

The new study, published in Astrophysical Journal Letters, marks the first time JWST has captured clear signatures of tidal disruption events (TDEs) — catastrophic episodes where a star is torn apart by a galaxy’s central black hole, emitting a dramatic burst of energy.

“These are the first JWST observations of tidal disruption events, and they look nothing like what we’ve ever seen before,” said lead author Megan Masterson, a graduate student at MIT’s Kavli Institute for Astrophysics and Space Research. “We’ve learned these are indeed powered by black hole accretion, and they don’t look like environments around normal active black holes.”

Until now, nearly all TDEs detected since the 1990s were found in relatively dust-free galaxies using X-ray or optical telescopes. However, researchers suspected many more events remained hidden behind thick clouds of galactic dust. JWST’s powerful infrared vision has finally confirmed their hunch.

By analyzing four galaxies previously flagged as likely TDE candidates, the team detected distinct infrared fingerprints of black hole accretion — the process of material spiraling into a black hole, producing intense radiation. These signatures, invisible to optical telescopes, revealed that all four events stemmed not from persistently active black holes but from dormant ones, roused only when a passing star came too close.

“There’s nothing else in the universe that can excite this gas to these energies, except for black hole accretion,” Masterson noted.

Among the four signals studied was the closest TDE ever detected, located 130 million light-years away. Another showed an initial optical flash that scientists had earlier suspected to be a supernova. JWST’s readings helped clarify the true cause.

“These four signals were as close as we could get to a sure thing,” said Masterson. “But the JWST data helped us say definitively these are bona fide TDEs.”

To determine whether the central black holes were inherently active or momentarily triggered by a star’s disruption, the team also mapped the dust patterns around them. Unlike the thick, donut-shaped clouds typical of active galaxies, these dusty environments appeared markedly different — further confirming the black holes were usually dormant.

“Together, these observations say the only thing these flares could be are TDEs,” Masterson said in a media statement.

The findings not only validate JWST’s unprecedented ability to study hidden cosmic phenomena but also open new pathways for understanding black holes that lurk quietly in dusty galactic centers — until they strike.

With future observations planned using JWST, NEOWISE, and other infrared tools, the team hopes to catalog many more such events. These cosmic feeding frenzies, they say, could unlock key clues about black hole mass, spin, and the very nature of their environments.

“The actual process of a black hole gobbling down all that stellar material takes a long time,” Masterson added. “And hopefully we can start to probe how long that process takes and what that environment looks like. No one knows because we just started discovering and studying these events.”

Space & Physics

MIT unveils an ultra-efficient 5G receiver that may supercharge future smart devices

A key innovation lies in the chip’s clever use of a phenomenon called the Miller effect, which allows small capacitors to perform like larger ones

Image credit: Mohamed Hassan from Pixabay

A team of MIT researchers has developed a groundbreaking wireless receiver that could transform the future of Internet of Things (IoT) devices by dramatically improving energy efficiency and resilience to signal interference.

Designed for use in compact, battery-powered smart gadgets—like health monitors, environmental sensors, and industrial trackers—the new chip consumes less than a milliwatt of power and is roughly 30 times more resistant to certain types of interference than conventional receivers.

“This receiver could help expand the capabilities of IoT gadgets,” said Soroush Araei, an electrical engineering graduate student at MIT and lead author of the study, in a media statement. “Devices could become smaller, last longer on a battery, and work more reliably in crowded wireless environments like factory floors or smart cities.”

The chip, recently unveiled at the IEEE Radio Frequency Integrated Circuits Symposium, stands out for its novel use of passive filtering and ultra-small capacitors controlled by tiny switches. These switches require far less power than those typically found in existing IoT receivers.

A key innovation lies in the chip’s clever use of a phenomenon called the Miller effect, which allows small capacitors to perform like larger ones. This means the receiver achieves necessary filtering without relying on bulky components, keeping the circuit size under 0.05 square millimeters.
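As a rough, back-of-the-envelope illustration of that principle (with assumed numbers, not figures from the MIT chip), the textbook Miller-effect estimate multiplies a bridging capacitance by one plus the stage’s voltage gain:

```python
# Textbook Miller-effect estimate with illustrative, assumed values:
# a capacitance C bridging the input and output of an inverting gain stage
# appears, from the input side, multiplied by (1 + |A|), A being the voltage gain.
C_physical = 1e-12          # 1 pF on-chip capacitor (assumed value)
A = 29                      # assumed voltage gain across the capacitor
C_effective = C_physical * (1 + A)
print(f"effective capacitance: {C_effective * 1e12:.0f} pF")   # 30 pF from a 1 pF device
```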

Credit: Courtesy of the researchers/MIT News

Traditional IoT receivers rely on fixed-frequency filters to block interference, but next-generation 5G-compatible devices need to operate across wider frequency ranges. The MIT design meets this demand using an innovative on-chip switch-capacitor network that blocks unwanted harmonic interference early in the signal chain—before it gets amplified and digitized.

Another critical breakthrough is a technique called bootstrap clocking, which ensures the miniature switches operate correctly even at a low power supply of just 0.6 volts. This helps maintain reliability without adding complex circuitry or draining battery life.

The chip’s minimalist design—using fewer and smaller components—also reduces signal leakage and manufacturing costs, making it well-suited for mass production.

Looking ahead, the MIT team is exploring ways to run the receiver without any dedicated power source—possibly by harvesting ambient energy from nearby Wi-Fi or Bluetooth signals.

The research was conducted by Araei alongside Mohammad Barzgari, Haibo Yang, and senior author Professor Negar Reiskarimian of MIT’s Microsystems Technology Laboratories.
