Space & Physics

Pioneers of modern Artificial Intelligence


Artificial Neural Network. Credit: Wikimedia Commons

The 2024 Nobel Prize in Physics broke with tradition, with the prestigious award going to scientists celebrated for their contributions to computer science.

Geoffrey Hinton, one of this year’s laureates, had already received the 2018 Turing Award, arguably the most prestigious prize in computer science.

John Hopfield, the other laureate, and Hinton were among the generation of researchers who, in the 1980s, laid the foundations of machine learning, the set of techniques used to train artificial intelligence. Their work shaped modern AI models, which now take up the mantle from us, discovering patterns in reams of data that would otherwise take humans practically forever to find.

Until the middle of the last century, computation was a task that required manual labor. Then Alan Turing, the British scientist who rose to fame during World War II after helping break the Enigma code, conceived the theoretical basis for modern computers. When he tried to push further, he arrived at a seemingly innocuous question, “Can machines think?”, one with radical consequences if the answer turned out to be yes. Through his conception of algorithms, Turing laid the foundation of artificial intelligence.

Why the physics prize?

Artificial neural networks form the basis of OpenAI’s hugely popular ChatGPT, as well as numerous facial recognition, image recognition and language translation systems. But these machine learning models have also broken the ceiling in terms of their applications, finding uses in disciplines from computer science to finance to physics.

Physics did form part of the bedrock of AI research, condensed matter physics in particular. Of special relevance is the spin glass, a magnetic system whose spins, instead of lining up in an orderly pattern when cooled, freeze into a random, disordered arrangement. The mathematics describing such systems proved foundational for artificial neural networks.

John Hopfield and Geoff Hinton are pioneers of artificial neural networks. Hopfield, an American, and Hinton, a Briton, came from different disciplines: Hopfield trained as a physicist, while Hinton was a cognitive psychologist. The burgeoning field of computer science needed interdisciplinary talent to attack a problem that no single physicist, logician or mathematician could solve. To construct a machine that can think, the machine has to learn to make sense of reality. Learning is key, and computer scientists drew inspiration from statistical and condensed matter physics, psychology and neuroscience to come up with the neural network.

Inspired by the human brain, a neural network consists of artificial neurons, each holding a particular value. The network is first fed data as part of a training program, before being tested on unfamiliar data. The neurons’ values update with each subsequent pass over more data, and this updating forms the crux of the learning process. The idea was shown to work when John Hopfield constructed a simple neural network in 1982.

Hopfield network, with neurons forming a chain of connections. Credit: Wikimedia Commons

Neurons pair up with one another to form a long chain of connections. Hopfield then fed the network an image, training it by having the neurons pass information along, one way at a time. Neurons that fire together wire together, responding to the particular patterns the network was trained on. Known as the Hebbian postulate, this principle also underlies learning in the human brain. It was when the Hopfield network was able to identify even heavily distorted versions of the original image that AI took its baby steps. But training a network to learn robustly across larger swathes of data required additional layers of neurons, and that was not an easy goal to achieve. An efficient method of learning was needed.
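A minimal sketch in Python makes the recipe concrete (this is an illustration of the general idea rather than Hopfield’s original 1982 implementation; the six-neuron pattern and the network size are arbitrary choices). A pattern of +1/-1 neuron states is stored by strengthening the connections between neurons that are active together, and a corrupted copy is recovered by letting the neurons update repeatedly until the network settles back into the stored memory.

```python
# Illustrative Hopfield-style network: Hebbian storage plus iterative recall.
import numpy as np

def train(patterns):
    """Build the weight matrix from +/-1 patterns using the Hebbian rule."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)       # neurons that fire together, wire together
    np.fill_diagonal(w, 0)        # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=10):
    """Update neurons one at a time until the state settles into a stored memory."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

# Store one 6-neuron pattern, then recover it from a corrupted copy.
stored = np.array([[1, -1, 1, -1, 1, -1]])
w = train(stored)
noisy = np.array([1, 1, 1, -1, 1, -1])   # one neuron flipped
print(recall(w, noisy))                  # settles back to the stored pattern
```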

Artificial neural network, with neurons forming connections. The information can go across in both directions (though not indicated in the representation). Credit: Wikimedia Commons

That is when Geoff Hinton entered the picture, helping to conceive backpropagation, a technique that is now mainstream and is the key to the machine learning models we use today. In the 2000s, Hinton went on to develop a multi-layered version of the “Boltzmann machine”, a neural network built on the Hopfield network. Geoff Hinton was featured in Ed Publica‘s Know the Scientist column.
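To see what backpropagation does in practice, here is a minimal sketch (an illustration only, not Hinton’s formulation; the XOR task, network size and learning rate are arbitrary choices). A tiny two-layer network makes a prediction in the forward pass; the backward pass then propagates the output error through the layers to work out each weight’s share of the blame, and every weight is nudged slightly in the direction that reduces the error.

```python
# Illustrative backpropagation on a tiny two-layer network learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# Network: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute hidden activations and the prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Nudge every weight and bias against its gradient.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))   # typically converges towards [[0], [1], [1], [0]]
```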

Space & Physics

MIT unveils an ultra-efficient 5G receiver that may supercharge future smart devices

A key innovation lies in the chip’s clever use of a phenomenon called the Miller effect, which allows small capacitors to perform like larger ones


Image credit: Mohamed Hassan from Pixabay

A team of MIT researchers has developed a groundbreaking wireless receiver that could transform the future of Internet of Things (IoT) devices by dramatically improving energy efficiency and resilience to signal interference.

Designed for use in compact, battery-powered smart gadgets—like health monitors, environmental sensors, and industrial trackers—the new chip consumes less than a milliwatt of power and is roughly 30 times more resistant to certain types of interference than conventional receivers.

“This receiver could help expand the capabilities of IoT gadgets,” said Soroush Araei, an electrical engineering graduate student at MIT and lead author of the study, in a media statement. “Devices could become smaller, last longer on a battery, and work more reliably in crowded wireless environments like factory floors or smart cities.”

The chip, recently unveiled at the IEEE Radio Frequency Integrated Circuits Symposium, stands out for its novel use of passive filtering and ultra-small capacitors controlled by tiny switches. These switches require far less power than those typically found in existing IoT receivers.

A key innovation lies in the chip’s clever use of a phenomenon called the Miller effect, which allows small capacitors to perform like larger ones. This means the receiver achieves necessary filtering without relying on bulky components, keeping the circuit size under 0.05 square millimeters.
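As a back-of-the-envelope illustration of the Miller effect (the figures here are hypothetical and not taken from the MIT design): a capacitor of value C connected across an amplifying stage with voltage gain A appears at the stage’s input as an effective capacitance of roughly C × (1 + A). With a gain of 30, a 10-femtofarad on-chip capacitor would behave like a 310-femtofarad one, letting tiny components do the filtering work that would otherwise demand far larger capacitors.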

Credit: Courtesy of the researchers/MIT News

Traditional IoT receivers rely on fixed-frequency filters to block interference, but next-generation 5G-compatible devices need to operate across wider frequency ranges. The MIT design meets this demand using an innovative on-chip switch-capacitor network that blocks unwanted harmonic interference early in the signal chain—before it gets amplified and digitized.

Another critical breakthrough is a technique called bootstrap clocking, which ensures the miniature switches operate correctly even at a low power supply of just 0.6 volts. This helps maintain reliability without adding complex circuitry or draining battery life.

The chip’s minimalist design—using fewer and smaller components—also reduces signal leakage and manufacturing costs, making it well-suited for mass production.

Looking ahead, the MIT team is exploring ways to run the receiver without any dedicated power source—possibly by harvesting ambient energy from nearby Wi-Fi or Bluetooth signals.

The research was conducted by Araei alongside Mohammad Barzgari, Haibo Yang, and senior author Professor Negar Reiskarimian of MIT’s Microsystems Technology Laboratories.


Society

Ahmedabad Plane Crash: The Science Behind Aircraft Take-Off – Understanding the Physics of Flight

Take-off is one of the most critical phases of flight, relying on the precise orchestration of aerodynamics, propulsion, and control systems. Here’s how it works:


On June 12, 2025, a tragic aviation accident struck Ahmedabad, India, when a passenger aircraft, Air India flight AI 171, crashed during take-off from Sardar Vallabhbhai Patel International Airport. According to preliminary reports, the incident resulted in over 200 confirmed casualties, including both passengers and crew members, with several others critically injured. The aviation community and the scientific world now turn their eyes not just toward the cause but also toward understanding the complex science behind what should have been a routine take-off.

How Do Aircraft Take Off?

Take-off is one of the most critical phases of flight, relying on the precise orchestration of aerodynamics, propulsion, and control systems. Here’s how it works:

1. Lift and Thrust

To leave the ground, an aircraft must generate lift, a force that counters gravity. This is achieved through the unique shape of the wing, called an airfoil, which creates a pressure difference — higher pressure under the wing and lower pressure above — according to Bernoulli’s Principle and Newton’s Third Law.
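For a rough sense of the magnitudes involved (the numbers below are illustrative and not connected to the Ahmedabad investigation), lift is commonly estimated as L = ½ ρ v² S C_L, where ρ is the air density, v the airspeed, S the wing area and C_L the lift coefficient. Taking ρ ≈ 1.225 kg/m³, v = 80 m/s (roughly 155 knots), S = 125 m² and C_L ≈ 1.5 gives L ≈ ½ × 1.225 × 80² × 125 × 1.5 ≈ 7.4 × 10⁵ newtons, enough to support an aircraft of about 75 tonnes at the moment of lift-off.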

Simultaneously, engines provide thrust, propelling the aircraft forward. Most commercial jets use turbofan engines, in which a turbine-driven fan accelerates a large mass of air rearwards to generate that thrust.

2. Critical Speeds

Before takeoff, pilots calculate critical speeds:

  • V1 (Decision Speed): The last moment a takeoff can be safely aborted.
  • Vr (Rotation Speed): The speed at which the pilot begins to lift the nose.
  • V2 (Takeoff Safety Speed): The speed needed to climb safely even if one engine fails.

If anything disrupts this process — like bird strikes, engine failure, or runway obstructions — the results can be catastrophic.

Environmental and Mechanical Challenges

Factors like wind shear, runway surface condition, mechanical integrity, or pilot error can interfere with safe take-off. Investigators will be analyzing these very aspects in the Ahmedabad case.

The Bigger Picture

Take-off accounts for a small fraction of total flight time but is disproportionately associated with accidents — approximately 14% of all aviation accidents occur during take-off or initial climb.


Space & Physics

MIT claims breakthrough in simulating physics of squishy, elastic materials

In a series of experiments, the new solver demonstrated its ability to simulate a diverse array of elastic behaviors, ranging from bouncing geometric shapes to soft, squishy characters


Image credit: Courtesy of researchers

Researchers at MIT claim to have unveiled a novel physics-based simulation method that significantly improves stability and accuracy when modeling elastic materials — a key development for industries spanning animation, engineering, and digital fabrication.

In a series of experiments, the new solver demonstrated its ability to simulate a diverse array of elastic behaviors, ranging from bouncing geometric shapes to soft, squishy characters. Crucially, it maintained important physical properties and remained stable over long periods of time — an area where many existing methods falter.

Other simulation techniques frequently struggled in tests: some became unstable and caused erratic behavior, while others introduced excessive damping that distorted the motion. In contrast, the new method preserved elasticity without compromising reliability.

“Because our method demonstrates more stability, it can give animators more reliability and confidence when simulating anything elastic, whether it’s something from the real world or even something completely imaginary,” Leticia Mattos Da Silva, a graduate student at MIT’s Department of Electrical Engineering and Computer Science, said in a media statement.

Their study, though not yet peer-reviewed or formally published, will be presented in August at the SIGGRAPH conference in Vancouver, Canada, and will appear in its proceedings.

While the solver does not prioritize speed as aggressively as some tools, it avoids the accuracy and robustness trade-offs often associated with faster methods. It also sidesteps the complexity of nonlinear solvers, which are commonly used in physics-based approaches but are often sensitive and prone to failure.

Looking ahead, the research team aims to reduce computational costs and broaden the solver’s applications. One promising direction is in engineering and fabrication, where accurate elastic simulations could enhance the design of real-world products such as garments, medical devices, and toys.

“We were able to revive an old class of integrators in our work. My guess is there are other examples where researchers can revisit a problem to find a hidden convexity structure that could offer a lot of advantages,” Mattos Da Silva added.

The study opens new possibilities not only for digital content creation but also for practical design fields that rely on predictive simulations of flexible materials.
