Space & Physics
Pioneers of modern Artificial Intelligence

The 2024 Nobel Prize in Physics broke with tradition, with computer scientists being awarded the prestigious prize.
Geoffrey Hinton, one of this year’s laureates, had previously received the 2018 Turing Award, arguably the most prestigious prize in computer science.
John Hopfield, the other laureate, and Hinton were among the early generation of researchers who, in the 1980s, set the foundations for machine learning, the technique used to train artificial intelligence. Their techniques shaped modern AI models, which now take up the mantle from us, discovering patterns within reams of data that would otherwise take humans arguably forever.
Until the middle of the last century, computation was a task that required manual labor. Then Alan Turing, the British mathematician who rose to fame during World War II after helping break the Enigma code, conceived the theoretical basis for modern computers. When he tried to push further, he arrived at a deceptively simple question – “Can machines think?” – which opens his 1950 paper on machine intelligence. Seemingly innocuous, but with radical consequences if it ever took shape, that question, together with Turing’s conception of algorithms, laid the foundation of artificial intelligence.
Why the physics prize?
Artificial neural networks, in particular, form the basis of OpenAI’s much-talked-about ChatGPT and of numerous facial recognition, image processing and language translation systems. But these machine learning models have broken through the ceiling of any single field: their applications now span disciplines from computer science to finance to physics.
Physics formed part of the bedrock of AI research, particularly condensed matter physics. Of special relevance is the spin glass – a condensed matter system in which atomic spins interact through competing, random couplings and, on cooling, freeze into a disordered yet stable arrangement. The mathematics used to describe spin glasses turned out to be foundational for AI.
John Hopfield and Geoff Hinton are pioneers of artificial neural networks. Hopfield, an American, and Hinton, from Britain, came from very different disciplines: Hopfield trained as a physicist, while Hinton was a cognitive psychologist. The burgeoning field of computer science needed interdisciplinary talent to attack a problem that no single physicist, logician or mathematician could solve. To construct a machine that can think, the machine must first learn to make sense of reality. Learning is key, and computer scientists took inspiration from statistical and condensed matter physics, psychology and neuroscience to come up with the neural network.
Inspired by the human brain, a neural network consists of artificial neurons, each holding a particular value. The network is first fed data as part of a training program before being tested on unfamiliar data; the values, and the strengths of the connections between neurons, update with each subsequent pass over more data, and this forms the crux of the learning process. The approach was first shown to work when John Hopfield constructed a simple neural network in 1982.

Hopfield network, with neurons forming connections with one another. Credit: Wikimedia Commons
In a Hopfield network, every neuron links up with the others, forming a dense web of connections. Hopfield would feed it an image, with the neurons updating their values one at a time, and train it by strengthening the connections between neurons that were active together: patterns of neurons that fire together, wire together, so that the network responds to the particular patterns it was trained on. Known as the Hebbian postulate, this rule also forms the basis of learning in the human brain. When the Hopfield network could identify even a heavily distorted version of the original image, AI took its baby steps. But training a network to learn robustly across a swathe of more data required additional layers of neurons, and that was not an easy goal to achieve. There was a need for an efficient method of learning.
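To make the mechanism concrete, here is a minimal sketch in Python – not Hopfield’s original formulation, and with purely illustrative sizes and values – of a Hopfield-style network that stores a single 25-pixel pattern with a Hebbian rule and then recovers it from a deliberately corrupted copy.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stored "image": 25 pixels, each either +1 or -1.
pattern = rng.choice([-1, 1], size=25)

# Hebbian learning: couple each pair of neurons by the product of their
# activities in the stored pattern ("fire together, wire together"),
# with no self-connections.
weights = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(weights, 0.0)

# Distort the image by flipping a handful of pixels.
probe = pattern.copy()
flipped = rng.choice(len(probe), size=6, replace=False)
probe[flipped] *= -1

# Recall: update neurons one at a time toward the state favored by
# their weighted inputs, sweeping until nothing changes.
state = probe.copy()
for _ in range(10):
    previous = state.copy()
    for i in range(len(state)):
        state[i] = 1 if weights[i] @ state >= 0 else -1
    if np.array_equal(state, previous):
        break

print("pixels wrong before recall:", int(np.sum(probe != pattern)))
print("pixels wrong after recall :", int(np.sum(state != pattern)))
```

Run as written, the corrupted pattern settles back onto the stored one within a few update sweeps – the associative-memory behavior described above.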

Artificial neural network, with neurons forming connections. The information can go across in both directions (though not indicated in the representation). Credit: Wikimedia Commons
That’s when Geoff Hinton entered the picture, around the same timeframe, helping to conceive backpropagation, a technique that is now mainstream and is the key to the machine learning models we use today. Hinton also built on the Hopfield network to develop the “Boltzmann machine”, and later its multi-layered versions. Geoff Hinton was featured in Ed Publica‘s Know the Scientist column.
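Backpropagation itself can be illustrated in a few lines. The sketch below is not Hinton’s original code; it is a minimal, assumed example in which a tiny two-layer network learns the XOR function by passing the output error backwards through its layers, with the layer sizes, learning rate and iteration count chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# The four XOR input pairs and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2 -> 4 -> 1 network, randomly initialized.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)
lr = 0.5  # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error from the output layer back to
    # the hidden layer (chain rule on a cross-entropy loss).
    out_err = out - y
    hid_err = (out_err @ W2.T) * h * (1.0 - h)

    # Nudge every weight and bias against its gradient.
    W2 -= lr * h.T @ out_err
    b2 -= lr * out_err.sum(axis=0)
    W1 -= lr * X.T @ hid_err
    b1 -= lr * hid_err.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The backward pass is just the chain rule applied layer by layer, which is what lets networks with many layers be trained efficiently.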
Space & Physics
MIT Engineers Develop Energy-Efficient Hopping Robot for Disaster Search Missions
The hopping mechanism allows the robot to jump nearly 20 centimeters—four times its height—at speeds up to 30 centimeters per second

MIT researchers have unveiled an insect-scale robot capable of hopping across treacherous terrain—offering a new mobility solution for disaster response scenarios like collapsed buildings after earthquakes.
Unlike traditional crawling robots that struggle with tall obstacles or aerial robots that quickly drain power, this thumb-sized machine combines both approaches. By using a spring-loaded leg and four flapping-wing modules, the robot can leap over debris and uneven ground while using 60 percent less energy than a flying robot.
“Being able to put batteries, circuits, and sensors on board has become much more feasible with a hopping robot than a flying one. Our hope is that one day this robot could go out of the lab and be useful in real-world scenarios,” says Yi-Hsuan (Nemo) Hsiao, an MIT graduate student and co-lead author of a new paper published today in Science Advances.
The hopping mechanism allows the robot to jump nearly 20 centimeters—four times its height—at speeds up to 30 centimeters per second. It easily navigates ice, wet surfaces, and even dynamic environments, including hopping onto a hovering drone without damage.
Co-led by researchers from MIT and the City University of Hong Kong, the team engineered the robot with an elastic compression-spring leg and soft actuator-powered wings. These wings not only stabilize the robot mid-air but also compensate for any energy lost during impact with the ground.
“If you have an ideal spring, your robot can just hop along without losing any energy. But since our spring is not quite ideal, we use the flapping modules to compensate for the small amount of energy it loses when it makes contact with the ground,” Hsiao explains.
Its robust control system determines orientation and takeoff velocity based on real-time sensing data. The robot’s agility and light weight allow it to survive harsh impacts and perform acrobatic flips.
“We have been using the same robot for this entire series of experiments, and we never needed to stop and fix it,” Hsiao adds.
The robot has already shown promise on various surfaces—grass, ice, soil, wet glass—and can adapt its jump depending on the terrain. According to Hsiao, “The robot doesn’t really care about the angle of the surface it is landing on. As long as it doesn’t slip when it strikes the ground, it will be fine.”
Future developments aim to enhance autonomy by equipping the robot with onboard batteries and sensors, potentially enabling it to assist in search-and-rescue missions beyond the lab.
Space & Physics
Sunita Williams aged less in space due to time dilation
Astronauts Sunita Williams and Butch Wilmore returned from the ISS last month having aged slightly less than the rest of us did over the past nine months – thanks to strange physics that we typically don’t notice in daily life.

On March 18th, astronauts Sunita Williams and Butch Wilmore returned from the International Space Station (ISS) after their unscheduled nine-month stay in orbit. Much concern has been expressed about Williams and Wilmore’s health after enduring the harsh conditions of outer space. Yet, if anything, the duo came back having aged slightly less than we did in the interim – thanks to strange physics that we typically don’t encounter daily.
Williams and Wilmore lived in a weaker gravitational environment throughout their stay in space, at least compared with everyone else on Earth. At that altitude, roughly 400 km above the surface, Einstein’s theory of relativity comes into play – and, on balance, it slowed down time for the astronauts.

When clocks run slow
In Einstein’s general theory of relativity, gravity is better explained as a distortion of an abstract continuum called space-time. This is quite distinct from Newton’s explanation of gravity as an invisible attractive force emanating from masses themselves. In relativity, matter and energy warp both space and time. Imagine a thin fabric of material: mass and energy are akin to heavy objects producing depressions in it.
Although we don’t notice relativistic effects in everyday life, they are subtle but measurable. The difference in gravity’s strength between orbit and the ground produces a tiny time dilation: the stronger the gravity, the slower time flows. People on Earth, sitting deeper in the planet’s gravitational well, therefore age slightly more slowly than those in orbit. From gravitational time dilation alone, then, astronauts spending time up in space should have aged faster than us.
Except there is another source of time dilation – velocity. The ISS zips through low-Earth orbit at about 28,000 km/h, or nearly 8 km/s. That’s faster than a typical intercontinental ballistic missile midway through its journey. Moving clocks tick more slowly relative to an observer at rest, quite apart from any gravitational effect. For the ISS, the slowdown from its tremendous speed outweighs the speed-up from Earth’s weaker gravity at that altitude, so time flowed slightly slower for the astronauts overall.
In effect, the duo aged less, by approximately 0.0075 seconds. That is no difference you would ever notice. But with a good atomic clock, time dilation can be demonstrated as a subtle yet measurable effect. In fact, engineers have to account for it in the global positioning system (GPS): the high-precision atomic clocks on board GPS satellites are corrected for both gravitational and velocity time dilation, alongside signal delays, to keep positional fixes accurate.
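For readers who want to check the arithmetic, the back-of-the-envelope sketch below, in Python, combines the two effects using the standard weak-field approximations. The altitude, speed and mission length are rough assumed figures, so it lands near, rather than exactly on, the 0.0075-second figure quoted above.

```python
# Rough estimate of the net time dilation for an ISS stay, using the
# weak-field approximations from special and general relativity.
# All figures (altitude, speed, mission length) are approximate.
C = 299_792_458.0          # speed of light, m/s
GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

altitude = 400e3           # ISS altitude, m (roughly)
v_iss = 7_660.0            # ISS orbital speed, m/s (roughly)
mission_seconds = 9 * 30 * 24 * 3600  # about nine months

# Special-relativistic term: moving clocks tick slower (fractional rate).
velocity_term = v_iss**2 / (2 * C**2)

# General-relativistic term: clocks higher in the gravity well tick faster.
r_iss = R_EARTH + altitude
gravity_term = (GM_EARTH / C**2) * (1 / R_EARTH - 1 / r_iss)

# Net fractional rate at which the astronauts' clock lags a ground clock.
net_fraction = velocity_term - gravity_term
lag = net_fraction * mission_seconds

print(f"velocity term : {velocity_term:.3e}")
print(f"gravity term  : {gravity_term:.3e}")
print(f"net lag over mission: {lag * 1000:.1f} milliseconds")
```

The velocity term wins, so the astronauts’ clocks lag those on the ground by a few milliseconds over the mission.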
Space & Physics
Could dark energy be a trick played by time?
David Wiltshire, a cosmologist at New Zealand’s University of Canterbury, proposed an alternative model that gets rid of dark energy entirely. But in doing so, it sacrifices an assumption cosmologists have held sacred for decades.

In 1929, the American astronomer Edwin Hubble discovered that our universe expands in all directions. Powering this expansion was the Big Bang, an event that marked the birth of our current universe some 13.7 billion years ago. Back then, the finding came as a jolt to the astronomy community and the wider world. In 1998 there was a further shake-up, when observations of type Ia supernovae in distant galaxies indicated the universe was expanding at an accelerating rate. But the source of the driving force has remained in the dark.
Dark energy was born from efforts to explain this accelerated expansion. It remains a placeholder name for an undetected contribution to the universe’s energy density, one that exerts a repulsive effect counterbalancing gravity’s attraction over long distances. Consensus emerged in support of this dark energy model, and in 2011 the astronomers behind the type Ia supernova studies went on to share the Nobel Prize in Physics.
More than two decades later, we are none the wiser about what dark energy is. Cosmologists have, however, deemed it a constant of nature, one that does not evolve with time. Hence the surprise when preliminary findings from the Dark Energy Spectroscopic Instrument (DESI) survey indicated that dark energy was not just variable but weakening over time. The Lambda-Cold Dark Matter (Lambda-CDM) model – the technical name for what is informally called the standard model of cosmology – has never stood on shakier ground.
Fine-tuned to a Big Crunch ending
In cosmological models, the Greek letter “Lambda” serves as a placeholder for dark energy. It accounts for a major chunk – some 70% – of the universe’s energy density. But the standard picture holds only if Lambda is a true cosmological constant. A constant dark energy would yield a universe expanding forever; if dark energy varies, the universe’s fate hinges on exactly how it evolves.
Going by DESI’s preliminary findings, if dark energy keeps weakening over time, the universe could be set to collapse on itself in the far future. This is the Big Crunch hypothesis. It was amid the commotion surrounding DESI’s latest findings that the cosmology community took interest in a paper published in the December edition of the Monthly Notices of the Royal Astronomical Society.
In 2007, David Wiltshire, a cosmologist at New Zealand’s University of Canterbury and the paper’s co-author, had proposed an alternative model called timescape cosmology that gets rid of dark energy entirely. It requires sacrificing an assumption cosmologists have long held sacred in their models. Known as the cosmological principle, it is, in Wiltshire’s telling, reminiscent of Aristotle and Ptolemy’s outdated viewpoint that the Earth sat at the center of the solar system.
A special place in the universe
The cosmological principle assumes that matter in the universe is, on average, distributed uniformly everywhere and in every direction we look. Timescape cosmologists propose instead to adopt the pragmatic approach that the Polish astronomer Nicolaus Copernicus took in the 16th century. In the Copernican model of the solar system, the Earth occupied no special location; likewise, timescape cosmology requires that the Earth occupy no special place in the universe.
That said, the cosmological principle has a certain appeal among cosmologists: discard uniformity, and the theoretical calculations become far harder to handle. At the same time, some cosmologists contend that something has to give, in light of astronomical observations suggesting the cosmological principle may be outright wrong.

Inhabiting a time bubble
One of the hallmark phenomena of Einstein’s general theory of relativity is gravitational time dilation: time passes more slowly deeper in a gravitational field. Bizarre as it may seem, experiments have confirmed this subtle but measurable effect.
In 1959, the Harvard physicists Robert Pound and Glen Rebka Jr. demonstrated the effect in their laboratory building, by sending gamma rays between the basement and the roof, about 22 meters apart, and measuring the minuscule shift in their frequency. The result is equivalent to clocks at different heights ticking at different rates: a clock stationed closer to Earth lags behind one atop the roof, because Earth’s gravity tugs more weakly at the higher clock.
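As a rough guide to the size of the effect, the weak-field formula below gives the fractional rate difference between two clocks separated by a height h near Earth’s surface; plugging in textbook values for g and the roughly 22-meter height Pound and Rebka used gives a shift of about two parts in a quadrillion.

```latex
% Fractional rate difference between clocks separated by height h
% in Earth's surface gravity g (weak-field approximation):
\[
  \frac{\Delta\nu}{\nu} \;\approx\; \frac{g h}{c^{2}}
  \;=\; \frac{(9.8\ \mathrm{m\,s^{-2}})\,(22.5\ \mathrm{m})}{(3\times 10^{8}\ \mathrm{m\,s^{-1}})^{2}}
  \;\approx\; 2.5\times 10^{-15}.
\]
```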
At cosmic scales, the universe looks clumpier in some directions than in others. Galaxies bind together under gravity into strands of a vast, interconnected cosmic web, with voids of cosmic proportions occupying the space in between. Time flows faster in these voids, since they are subject to weaker gravity than the surrounding galaxies. Observers in the galaxies, meanwhile, have a skewed perception of time, because they live embedded inside a bubble of stronger gravity: events outside their time bubble play out like a fast-forwarded YouTube video.
Not the end of dark energy
Distant galaxies appear to recede at an accelerating rate in the reference frame of our time bubble. That appearance, David Wiltshire argues, is a mere temporal illusion that we falsely attribute to dark energy. So far, timescape cosmology has attracted only niche interest in cosmology circles; there is far too little evidence yet to support the claim that dark energy effects truly arise from our inhabiting a time bubble.
Cosmologists have taken to social media to critique Wiltshire’s use of type Ia supernova datasets in his analysis, though none of the critiques is conclusive. As observations pile up, a definitive answer may emerge; until then, it is a waiting game for more data and refined analysis. Equally, it is too early to abandon dark energy as a concept altogether. The Lambda-CDM model would be the first to undergo a major overhaul should DESI’s preliminary findings hold up in successive observational runs. Until then, we can only speculate about the universe’s fate.