The Sciences
Artificial intelligence outstrips clinical tests in predicting the progression of Alzheimer’s disease
Dementia presents a substantial healthcare challenge globally, impacting more than 55 million individuals with an annual economic burden estimated at $820 billion.

Scientists from Cambridge have created an AI tool that can predict, in four out of five cases, whether individuals showing early signs of dementia will remain stable or progress to Alzheimer’s disease.
This innovative approach has the potential to decrease reliance on invasive and expensive diagnostic procedures, leading to better treatment outcomes during early stages when interventions like lifestyle adjustments or new medications may be most effective.
Worldwide, the prevalence of dementia is projected to nearly triple over the next five decades.
Alzheimer’s disease is the primary cause of dementia, responsible for 60-80% of cases. Early detection is critical because treatments are most likely to be effective during this stage. However, accurate early diagnosis and prognosis of dementia often require invasive or costly procedures such as positron emission tomography (PET) scans or lumbar punctures, which are not universally accessible in memory clinics. Consequently, up to one-third of patients may receive incorrect diagnoses, while others may be diagnosed too late for treatment to be beneficial.
Scientists from the Department of Psychology at the University of Cambridge have led a team in developing a machine learning model that predicts the progression of mild memory and cognitive issues to Alzheimer’s disease more accurately than current clinical tools. Their research, published in eClinicalMedicine, utilized non-invasive and cost-effective patient data — including cognitive assessments and structural MRI scans showing grey matter deterioration — from over 400 individuals in a US-based research cohort.
The model’s efficacy was then tested using real-world data from an additional 600 participants in the same US cohort, alongside longitudinal data from 900 individuals from memory clinics in the UK and Singapore. The algorithm successfully differentiated between individuals with stable mild cognitive impairment and those who progressed to Alzheimer’s disease within a three-year timeframe. It achieved an 82% accuracy in correctly identifying those who developed Alzheimer’s and an 81% accuracy in identifying those who did not, using only cognitive tests and MRI scans.
Compared to current clinical standards, which rely on markers like grey matter atrophy or cognitive scores, the algorithm demonstrated approximately three times greater accuracy in predicting Alzheimer’s progression. This significant improvement suggests the model could substantially reduce instances of misdiagnosis.
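To make the shape of the task concrete, the sketch below trains a simple classifier on synthetic “cognitive score” and “grey-matter volume” features to predict three-year progression, then reports sensitivity and specificity, the two figures quoted above. It is a minimal illustration using scikit-learn logistic regression on made-up data; the Cambridge team’s actual model, features, and training procedure are not reproduced here.

```python
# Illustrative sketch only: a toy progression classifier trained on
# synthetic features. This is NOT the Cambridge model; it only shows
# the shape of the prediction task described in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 400  # roughly the size of the training cohort described above

# Hypothetical features: lower cognitive scores and more grey-matter
# atrophy are made (artificially) more likely in progressing patients.
progressed = rng.integers(0, 2, size=n)                       # 1 = developed Alzheimer's within 3 years
cognitive = rng.normal(26 - 3 * progressed, 2.5, n)            # a made-up cognitive test score
grey_matter = rng.normal(0.62 - 0.05 * progressed, 0.04, n)    # a made-up normalised grey-matter volume
X = np.column_stack([cognitive, grey_matter])

X_tr, X_te, y_tr, y_te = train_test_split(X, progressed, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Sensitivity: fraction of true progressors caught; specificity: fraction
# of stable patients correctly identified as stable.
sensitivity = recall_score(y_te, pred)
specificity = recall_score(y_te, pred, pos_label=0)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```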
“We’ve created a tool which, despite using only data from cognitive tests and MRI scans, is much more sensitive than current approaches at predicting whether someone will progress from mild symptoms to Alzheimer’s – and if so, whether this progress will be fast or slow,” said senior author Professor Zoe Kourtzi from the Department of Psychology at the University of Cambridge.
“This has the potential to significantly improve patient wellbeing, showing us which people need closest care, while removing the anxiety for those patients we predict will remain stable. At a time of intense pressure on healthcare resources, this will also help remove the need for unnecessary invasive and costly diagnostic tests,” she added.
Math
Researchers Unveil Breakthrough in Efficient Machine Learning with Symmetric Data

MIT researchers have developed the first mathematically proven method for training machine learning models that can efficiently interpret symmetric data—an advance that could significantly enhance the accuracy and speed of AI systems in fields ranging from drug discovery to climate analysis.
In traditional drug discovery, for example, a human looking at a rotated image of a molecule can easily recognize it as the same compound. However, standard machine learning models may misclassify the rotated image as a completely new molecule, highlighting a blind spot in current AI approaches. This shortcoming stems from the concept of symmetry, where an object’s fundamental properties remain unchanged even when it undergoes transformations like rotation.
“If a drug discovery model doesn’t understand symmetry, it could make inaccurate predictions about molecular properties,” the researchers explained. While some empirical techniques have shown promise, there was previously no provably efficient way to train models that rigorously account for symmetry—until now.
“These symmetries are important because they are some sort of information that nature is telling us about the data, and we should take it into account in our machine-learning models. We’ve now shown that it is possible to do machine-learning with symmetric data in an efficient way,” said Behrooz Tahmasebi, MIT graduate student and co-lead author of the new study, in a media statement.
The research, recently presented at the International Conference on Machine Learning, is co-authored by fellow MIT graduate student Ashkan Soleymani (co-lead author), Stefanie Jegelka (associate professor of EECS, IDSS member, and CSAIL member), and Patrick Jaillet (Dugald C. Jackson Professor of Electrical Engineering and Computer Science and principal investigator at LIDS).
Rethinking how AI sees the world
Symmetric data appears across numerous scientific disciplines. For instance, a model capable of recognizing an object irrespective of its position in an image demonstrates such symmetry. Without built-in mechanisms to process these patterns, machine learning models can make more mistakes and require massive datasets for training. Conversely, models that leverage symmetry can work faster and with fewer data points.
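One standard way to build such symmetry into a model is to average its predictions over every transformation in the group, a trick known as symmetrization. The toy sketch below illustrates the idea for 90-degree image rotations using a deliberately simplistic, hypothetical “model”. It is not the provably efficient algorithm developed at MIT, but it makes the data-versus-compute trade-off visible: the invariant model needs fewer examples to generalize across rotations, at the cost of evaluating the base model four times.

```python
# Toy illustration of symmetrization: averaging a model's output over
# all 90-degree rotations makes the prediction rotation-invariant.
import numpy as np

def base_model(image):
    """A deliberately non-invariant score: mean of the top half of the image."""
    return image[: image.shape[0] // 2].mean()

def symmetrized_model(image):
    """Average the base model over the 4 rotations -> rotation-invariant."""
    return np.mean([base_model(np.rot90(image, k)) for k in range(4)])

rng = np.random.default_rng(1)
img = rng.random((8, 8))
rotated = np.rot90(img)

print(base_model(img), base_model(rotated))                # generally differ
print(symmetrized_model(img), symmetrized_model(rotated))  # identical
```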
“Graph neural networks are fast and efficient, and they take care of symmetry quite well, but nobody really knows what these models are learning or why they work. Understanding GNNs is a main motivation of our work, so we started with a theoretical evaluation of what happens when data are symmetric,” Tahmasebi noted.
The MIT researchers explored the trade-off between how much data a model needs and the computational effort required. Their resulting algorithm brings symmetry to the fore, allowing models to learn from fewer examples without spending excessive computing resources.
Blending algebra and geometry
The team combined strategies from both algebra and geometry, reformulating the problem so the machine learning model could efficiently process the inherent symmetries in the data. This innovative blend results in an optimization problem that is computationally tractable and requires fewer training samples.
“Most of the theory and applications were focusing on either algebra or geometry. Here we just combined them,” explained Tahmasebi.
By demonstrating that symmetry-aware training can be both accurate and efficient, the breakthrough paves the way for the next generation of neural network architectures, which promise to be more precise and less resource-intensive than conventional models.
“Once we know that better, we can design more interpretable, more robust, and more efficient neural network architectures,” added Soleymani.
This foundational advance is expected to influence future research in diverse applications, including materials science, astronomy, and climate modeling, wherever symmetry in data is a key feature.
Health
Researchers Develop Low-Cost Sensor for Real-Time Detection of Toxic Sulfur Dioxide Gas
Sulfur dioxide, a toxic air pollutant primarily released from vehicle exhaust and industrial processes, is notorious for triggering respiratory irritation, asthma attacks, and long-term lung damage.

In a significant breakthrough for environmental monitoring and public health, scientists from the Centre for Nano and Soft Matter Sciences (CeNS), Bengaluru, India, have developed an affordable and highly sensitive sensor capable of detecting sulfur dioxide (SO₂) gas at extremely low concentrations.
Monitoring sulfur dioxide in real time is essential, but existing technologies are often expensive, power-hungry, or ineffective at detecting the gas at trace levels.
To address this gap, the CeNS team, under the leadership of Dr. S. Angappane, has engineered a novel sensor by combining two metal oxides — nickel oxide (NiO) and neodymium nickelate (NdNiO₃). NiO serves as the receptor that captures SO₂ molecules, while NdNiO₃ acts as a transducer that converts the chemical interaction into an electrical signal. This innovative design enables the sensor to detect SO₂ at concentrations as low as 320 parts per billion (ppb), outperforming many commercial alternatives.
Speaking about the development, Dr. Angappane said in a media statement, “This sensor system not only advances the sensitivity benchmark but also brings real-time gas monitoring within reach for a wider range of users. It demonstrates how smart materials can provide practical solutions for real-world environmental challenges.”

The CeNS team has also built a portable prototype incorporating the sensor. It features a user-friendly threshold-triggered alert system with color-coded indicators: green for safe levels, yellow for warning, and red for danger. This visual approach ensures that even non-specialist users can understand and respond to pollution risks instantly. Its compact size and lightweight design make it ideal for deployment in industrial zones, urban neighborhoods, and enclosed environments requiring continuous air quality surveillance.
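The alert logic itself is simple to reason about. The sketch below shows one possible mapping from an SO₂ reading to the colour-coded levels described above; the cut-off values are placeholders chosen purely for illustration, since the prototype’s calibrated thresholds are not given in the announcement.

```python
# Illustrative threshold-triggered alert for an SO2 reading in ppb.
# The cut-off values below are hypothetical placeholders, not the
# calibrated thresholds used in the CeNS prototype.
SAFE_LIMIT_PPB = 200      # assumed "green" ceiling
WARNING_LIMIT_PPB = 500   # assumed "yellow" ceiling; above this is "red"

def so2_alert(reading_ppb: float) -> str:
    """Map a sensor reading to a colour-coded alert level."""
    if reading_ppb <= SAFE_LIMIT_PPB:
        return "green"   # safe
    if reading_ppb <= WARNING_LIMIT_PPB:
        return "yellow"  # warning
    return "red"         # danger

for reading in (150, 320, 750):  # 320 ppb is the reported detection limit
    print(reading, "ppb ->", so2_alert(reading))
```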
The sensor system was conceptualized and designed by Mr. Vishnu G Nath, with key contributions from Dr. Shalini Tomar, Mr. Nikhil N. Rao, Dr. Muhammed Safeer Naduvil Kovilakath, Dr. Neena S. John, Dr. Satadeep Bhattacharjee, and Prof. Seung-Cheol Lee. The research findings were recently published in the journal Small.
With this innovation, CeNS reinforces the role of advanced materials science in developing cost-effective technologies that protect both public health and the environment.
The Sciences
How a Human-Inspired Algorithm Is Revolutionizing Machine Repair Models in the Wake of Global Disruptions
A new multi-server machining model from India integrates emergency scenarios and behavioral uncertainties to optimize industrial resilience post-pandemic.

In the aftermath of the COVID-19 pandemic, industries worldwide grappled with a shared vulnerability: sudden breakdowns and disrupted repair services. Now, a new research study by Indian mathematicians C.K. Anjali and Sreekanth Kolledath, from Amrita Vishwa Vidyapeetham, Kochi, Kerala, offers a scientifically robust way to model and manage such disruptions.
Published in one of Elsevier’s peer-reviewed journals, the study introduces an innovative multi-server machining queuing model that simulates emergency vacations — sudden, unplanned leaves of absence taken by maintenance staff due to crises such as pandemics or natural disasters.
This innovative approach also accounts for “reneging”, when malfunctioning units exit the system before being serviced, and integrates retention strategies to keep these units within the repair cycle — a nod to the real-world pressures and adaptations faced by modern industrial systems.
“The disruptions caused by the COVID-19 pandemic made it clear how critical unexpected breakdowns and service interruptions can be in industrial systems,” co-author Sreekanth Kolledath said to EdPublica. “This inspired us to model such emergency scenarios more realistically and explore efficient optimization strategies.”
The power of teaching–learning-based optimization
What truly sets this study apart is its use of a relatively novel algorithm: Teaching–Learning-Based Optimization (TLBO) — a human-inspired metaheuristic. TLBO mimics the interactions in a classroom, where students improve by learning from both teachers and peers. This “educational” algorithm is benchmarked against more established methods like Particle Swarm Optimization (PSO) and Genetic Algorithms (GA).
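For readers unfamiliar with the method, the sketch below implements the two classroom-inspired phases of TLBO on a generic cost function: a teacher phase, where every learner moves toward the best solution found so far, and a learner phase, where learners improve by comparing themselves with random peers. It is a bare-bones illustration of the algorithm’s standard update rules, not the authors’ implementation, and the quadratic cost at the end is a stand-in for their repair-system cost model.

```python
# Minimal Teaching-Learning-Based Optimization (TLBO) sketch for
# minimizing a generic cost function. Illustrative only.
import numpy as np

def tlbo(cost, bounds, pop_size=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([cost(x) for x in pop])

    for _ in range(iters):
        # Teacher phase: move toward the best learner, away from the class mean.
        teacher = pop[np.argmin(fit)]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)                      # teaching factor: 1 or 2
        for i in range(pop_size):
            cand = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), lo, hi)
            f = cost(cand)
            if f < fit[i]:                           # greedy acceptance
                pop[i], fit[i] = cand, f

        # Learner phase: each learner interacts with a random peer.
        for i in range(pop_size):
            j = rng.choice([k for k in range(pop_size) if k != i])
            if fit[i] < fit[j]:
                step = rng.random(dim) * (pop[i] - pop[j])
            else:
                step = rng.random(dim) * (pop[j] - pop[i])
            cand = np.clip(pop[i] + step, lo, hi)
            f = cost(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f

    best = np.argmin(fit)
    return pop[best], fit[best]

# Example: minimize a simple quadratic surrogate for an operating cost.
x_best, f_best = tlbo(lambda x: ((x - 3.0) ** 2).sum(), bounds=[(-10, 10)] * 2)
print(x_best, f_best)
```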
The result? TLBO consistently outperformed its peers in optimizing the cost and efficiency of repair operations under complex conditions, showing robustness in handling dynamic workloads and service interruptions.
“This research helps bridge a gap in queuing theory by not only modelling realistic industrial disruptions but also applying an underused yet highly effective optimization technique,” explained lead researcher C.K. Anjali.
Modelling real-life complexities
The model simulates environments like CNC machining systems where multiple machines (K), standbys (S), and repairmen (R) operate under fluctuating conditions. Emergency vacations are modelled using probability distributions, while the likelihood of units leaving (reneging) and being retained is factored into performance and cost metrics.
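As a rough intuition pump, the Monte Carlo sketch below simulates a stripped-down version of such a system: K operating machines backed by S standbys, R repairmen who can be pulled away on emergency vacations, and waiting units that may renege unless retained. Every rate and probability in it (lam, mu, nu, theta, xi, p_retain) is a hypothetical placeholder, and the simplification ignores much of what the authors’ matrix-analytic model captures; it is only meant to show the moving parts, not to reproduce the paper’s results.

```python
# Simplified Monte Carlo sketch of a multi-server machine-repair system
# with emergency vacations, reneging and retention. All parameters are
# hypothetical; this is not the authors' matrix-analytic model.
import random

def simulate(K=8, S=3, R=2, horizon=10_000.0, seed=42,
             lam=0.1,        # failure rate per operating machine
             mu=0.5,         # repair rate per busy repairman
             nu=0.01,        # emergency-vacation rate per available repairman
             theta=0.05,     # return rate per vacationing repairman
             xi=0.05,        # reneging-attempt rate per waiting unit
             p_retain=0.6):  # probability a reneging unit is retained
    rng = random.Random(seed)
    t, failed, on_vacation, lost = 0.0, 0, 0, 0
    total_units = K + S
    area_waiting = 0.0                      # time-integral of queue length

    while t < horizon:
        operating = min(K, total_units - failed)
        in_repair = min(failed, R - on_vacation)
        waiting = failed - in_repair

        rates = {
            "failure":  lam * operating,
            "repair":   mu * in_repair,
            "vac_go":   nu * (R - on_vacation),
            "vac_back": theta * on_vacation,
            "renege":   xi * waiting,
        }
        total_rate = sum(rates.values())
        if total_rate == 0:
            break
        dt = rng.expovariate(total_rate)
        area_waiting += waiting * dt
        t += dt

        # Pick the next event in proportion to its rate.
        r, acc = rng.random() * total_rate, 0.0
        for event, rate in rates.items():
            acc += rate
            if r <= acc:
                break

        if event == "failure":
            failed += 1
        elif event == "repair":
            failed -= 1
        elif event == "vac_go":
            on_vacation += 1
        elif event == "vac_back":
            on_vacation -= 1
        elif event == "renege" and rng.random() > p_retain:
            failed -= 1                     # unit abandons the repair queue
            total_units -= 1
            lost += 1

    return area_waiting / t, lost

avg_waiting, lost_units = simulate()
print(f"average queue length: {avg_waiting:.2f}, units lost to reneging: {lost_units}")
```

Raising the assumed emergency-vacation rate or the reneging rate in this sketch produces the same qualitative behaviour the researchers report below: longer queues, more lost units, and higher effective cost.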

Using matrix-analytic methods, the researchers assessed system behaviour across parameters like waiting times, failure rates, and repair loads. Their simulations revealed:
- Increased emergency vacations lead to higher wait times and unit failures.
- Faster server startup (post-vacation) mitigates congestion.
- Higher reneging probability severely affects system throughput — but retention mechanisms help stabilize it.
- TLBO yielded the lowest total operational cost among the three algorithms across all test cases.
A blueprint for resilient manufacturing
Beyond academic impact, the implications of this research are practical and global. Industries like aerospace, healthcare, and smart manufacturing—where machine uptime is crucial—can integrate this model to simulate and prepare for emergency disruptions.
Moreover, by applying TLBO, organizations can fine-tune costs related to machine downtime, labour availability, and service logistics, helping build resilience in supply chains and production floors.
What’s next?
The researchers suggest future work could extend the model to cloud-based repair simulations, energy-aware systems, and AI-integrated predictive maintenance, further aligning with the Industry 5.0 vision.
“This research was made possible only due to the constant encouragement and support of Dr. U. Krishnakumar, our visionary Director at the Kochi Campus in Kerala, India,” adds Kolledath. “He is widely known for fostering a culture of quality research within the institution.”
As the world continues to adapt to increasingly unpredictable events, the fusion of human-inspired algorithms with real-world engineering models might just be the lesson industries need most.