Technology
Nuventure Connect Launches AI Innovation Lab for Smart Industrial Innovation
The company also unveiled NuWave, an IoT platform under development that integrates AI-driven automation, real-time analytics, and predictive maintenance tools

Kerala-based technology firm Nuventure Connect has launched an AI Innovation Lab aimed at accelerating the development of intelligent, sustainable industrial solutions. The launch coincides with the company’s 15th anniversary and signals a renewed focus on artificial intelligence (AI), Internet of Things (IoT), and data-driven automation.
The new lab will serve as a collaborative platform for enterprises, startups, and small-to-medium-sized businesses to design, test, and refine next-generation technologies. Nuventure, which specializes in deep-tech solutions and digital transformation, will provide technical expertise in AI, machine learning, and IoT to support co-development efforts.
“Our vision is to help businesses harness the full potential of AI and IoT to optimize operations and improve sustainability,” said Tinu Cleatus, Managing Director and CEO of Nuventure Connect.
The company also unveiled NuWave, an IoT platform under development that integrates AI-driven automation, real-time analytics, and predictive maintenance tools. The platform is targeted at industries seeking to cut energy consumption and preempt operational failures. Company representatives showcased solutions that, they said, could reduce industrial energy costs by up to 40%.
Nuventure is inviting organizations to partner with the lab through structured collaboration models. Participating firms will gain access to advanced IoT infrastructure, expert mentorship, and opportunities to co-create pilot projects.
The initiative places Nuventure among a growing number of regional tech firms contributing to global trends in sustainable and AI-led industrial innovation. By opening its lab to cross-sector partnerships, the company aims to help shape the next phase of digital transformation in manufacturing and beyond.
Health
PUPS – the AI tool that can predict exactly where proteins are located in human cells
Dubbed the Prediction of Unseen Proteins’ Subcellular Localization (or PUPS), the AI tool can account for the effects of protein mutations and cellular stress—key factors in disease progression.

Researchers from MIT, Harvard University, and the Broad Institute have unveiled a groundbreaking artificial intelligence tool that can accurately predict where proteins are located within any human cell, even if both the protein and cell line have never been studied before. The method – Prediction of Unseen Proteins’ Subcellular Localization (or PUPS) – marks a major advancement in biological research and could significantly streamline disease diagnosis and drug discovery.
Protein localization—the precise location of a protein within a cell—is key to understanding its function. Misplaced proteins are known to contribute to diseases like Alzheimer’s, cystic fibrosis, and cancer. However, identifying protein locations manually is expensive and slow, particularly given the vast number of proteins in a single cell.
The new technique leverages a protein language model and a sophisticated computer vision system. It produces a detailed image that highlights where the protein is likely to be located at the single-cell level, offering far more precise insights than many existing models, which average results across all cells of a given type.
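To make that idea concrete, the sketch below is a minimal, hypothetical illustration of the two-branch design described above: a stand-in protein language model embeds the amino-acid sequence, a small image encoder summarizes an unlabeled cell image, and a decoder fuses the two into a coarse per-cell localization map. The class names, layer sizes, and toy inputs are assumptions made for illustration; they do not come from the published PUPS model.

```python
# Minimal sketch (not the published PUPS code) of the two-branch idea:
# a protein language model embeds the amino-acid sequence, an image
# encoder embeds the unlabeled cell image, and a decoder combines the
# two to predict a per-pixel localization map for that single cell.
import torch
import torch.nn as nn

class ProteinSequenceEncoder(nn.Module):
    """Stand-in for a protein language model: embeds amino acids and pools."""
    def __init__(self, vocab_size=26, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.mix = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)

    def forward(self, tokens):                              # tokens: (batch, seq_len)
        return self.mix(self.embed(tokens)).mean(dim=1)     # (batch, dim)

class CellImageEncoder(nn.Module):
    """Small CNN that summarizes the morphology of one unlabeled cell image."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, img):                                  # img: (batch, 1, H, W)
        return self.conv(img)                                # (batch, dim, H/4, W/4)

class LocalizationDecoder(nn.Module):
    """Fuses protein and cell features into a per-pixel localization map."""
    def __init__(self, dim=128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(2 * dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, protein_vec, cell_feat):
        b, d, h, w = cell_feat.shape
        tiled = protein_vec.view(b, d, 1, 1).expand(b, d, h, w)
        return torch.sigmoid(self.head(torch.cat([tiled, cell_feat], dim=1)))

# Toy forward pass: one random "sequence" and one random cell image.
seq = torch.randint(0, 26, (1, 200))
img = torch.rand(1, 1, 64, 64)
heatmap = LocalizationDecoder()(ProteinSequenceEncoder()(seq), CellImageEncoder()(img))
print(heatmap.shape)   # torch.Size([1, 1, 16, 16]): a coarse localization map
```

The point of this structure is that the protein branch never needs to have seen a stained image of that particular protein; at prediction time only the sequence and an unlabeled image of the cell are required, which is what allows the real system to generalize to unseen proteins and cell lines.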
“You could do these protein-localization experiments on a computer without having to touch any lab bench, hopefully saving yourself months of effort. While you would still need to verify the prediction, this technique could act like an initial screening of what to test for experimentally,” said Yitong Tseo, a graduate student in MIT’s Computational and Systems Biology program and co-lead author of the study, in a media statement.
Tseo’s co-lead author, Xinyi Zhang, emphasized the model’s ability to generalize: “Most other methods usually require you to have a stain of the protein first, so you’ve already seen it in your training data. Our approach is unique in that it can generalize across proteins and cell lines at the same time,” she said in a media statement.
PUPS was validated through laboratory experiments and shown to outperform baseline AI methods in predicting protein locations with greater accuracy. The tool is also capable of accounting for the effects of protein mutations and cellular stress—key factors in disease progression.
Published in Nature Methods, the research was led by senior authors Fei Chen of Harvard and the Broad Institute, and Caroline Uhler, the Andrew and Erna Viterbi Professor at MIT. Future goals include enabling PUPS to analyze protein interactions and make predictions in live human tissue rather than cultured cells.
Space & Physics
MIT Engineers Develop Energy-Efficient Hopping Robot for Disaster Search Missions
The hopping mechanism allows the robot to jump nearly 20 centimeters—four times its height—at speeds up to 30 centimeters per second

MIT researchers have unveiled an insect-scale robot capable of hopping across treacherous terrain—offering a new mobility solution for disaster response scenarios like collapsed buildings after earthquakes.
Unlike traditional crawling robots that struggle with tall obstacles or aerial robots that quickly drain power, this thumb-sized machine combines both approaches. By using a spring-loaded leg and four flapping-wing modules, the robot can leap over debris and uneven ground while using 60 percent less energy than a flying robot.
“Being able to put batteries, circuits, and sensors on board has become much more feasible with a hopping robot than a flying one. Our hope is that one day this robot could go out of the lab and be useful in real-world scenarios,” says Yi-Hsuan (Nemo) Hsiao, an MIT graduate student and co-lead author of a new paper published today in Science Advances.
The hopping mechanism allows the robot to jump nearly 20 centimeters—four times its height—at speeds up to 30 centimeters per second. It easily navigates ice, wet surfaces, and even dynamic environments, including hopping onto a hovering drone without damage.
Co-led by researchers from MIT and the City University of Hong Kong, the team engineered the robot with an elastic compression-spring leg and soft actuator-powered wings. These wings not only stabilize the robot mid-air but also compensate for any energy lost during impact with the ground.
“If you have an ideal spring, your robot can just hop along without losing any energy. But since our spring is not quite ideal, we use the flapping modules to compensate for the small amount of energy it loses when it makes contact with the ground,” Hsiao explains.
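Hsiao’s point can be checked with some back-of-the-envelope arithmetic. The sketch below assumes a point mass of roughly one gram and a spring that returns 90 percent of its stored energy per bounce (both figures are assumptions for illustration, not measurements from the paper), then estimates the takeoff speed needed to reach the reported 20-centimeter apex and the small energy top-up the flapping modules would have to supply on each hop.

```python
# Back-of-the-envelope hop energetics (illustrative only; the mass and the
# spring's restitution here are assumed, not the paper's measured values).
import math

g = 9.81            # gravity, m/s^2
apex = 0.20         # ~20 cm jump height reported for the robot
mass = 1.0e-3       # assumed robot mass of ~1 gram (insect scale)
restitution = 0.9   # assumed fraction of energy the spring returns per bounce

# Vertical takeoff speed needed to reach the apex (point-mass ballistics).
v_takeoff = math.sqrt(2 * g * apex)             # ≈ 1.98 m/s

# Energy of one hop and the shortfall a non-ideal spring leaves behind,
# which the flapping modules must inject before the next takeoff.
hop_energy = mass * g * apex                    # ≈ 2.0 mJ per hop
energy_to_replace = (1 - restitution) * hop_energy

print(f"takeoff speed ≈ {v_takeoff:.2f} m/s")
print(f"energy per hop ≈ {hop_energy * 1e3:.2f} mJ, "
      f"flapping must restore ≈ {energy_to_replace * 1e3:.2f} mJ")
```

Under these assumed numbers, the wings only need to restore a fraction of a millijoule per hop; that is the sense in which the flapping modules compensate for spring losses rather than power the jump outright.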
Its robust control system determines orientation and takeoff velocity based on real-time sensing data. The robot’s agility and light weight allow it to survive harsh impacts and perform acrobatic flips.
“We have been using the same robot for this entire series of experiments, and we never needed to stop and fix it,” Hsiao adds.
The robot has already shown promise on various surfaces—grass, ice, soil, wet glass—and can adapt its jump depending on the terrain. According to Hsiao, “The robot doesn’t really care about the angle of the surface it is landing on. As long as it doesn’t slip when it strikes the ground, it will be fine.”
Future developments aim to enhance autonomy by equipping the robot with onboard batteries and sensors, potentially enabling it to assist in search-and-rescue missions beyond the lab.
Health
Could LLMs Revolutionize Drug and Material Design?
The researchers have developed an innovative system that augments an LLM with graph-based AI models designed specifically to handle molecular structures

A new method is changing the way we think about molecule design, bringing us closer to the possibility of using large language models (LLMs) to streamline the creation of new medicines and materials. Imagine asking, in plain language, for a molecule with specific properties, and receiving a comprehensive plan on how to synthesize it. This futuristic vision is now within reach, thanks to a collaboration between researchers from MIT and the MIT-IBM Watson AI Lab.
A new era in molecular discovery
Traditionally, discovering the right molecules for medicines and materials has been a slow and resource-intensive process. It often involves the use of vast computational power and months of painstaking work to explore the nearly infinite pool of potential molecular candidates. However, this new method, blending LLMs with other machine-learning models known as graph-based models, offers a promising solution to speed up this process.
These researchers have developed an innovative system that augments an LLM with graph-based AI models, designed specifically to handle molecular structures. The approach allows users to input natural language queries specifying the desired molecular properties, and in return, the system provides not only a molecular design but also a step-by-step synthesis plan.
LLMs and graph models
LLMs like ChatGPT have revolutionized the way we interact with text, but they face challenges when it comes to molecular design. Molecules are graph structures—composed of atoms and bonds—which makes them fundamentally different from text. LLMs typically process text as a sequence of words, but molecules do not follow a linear structure. This discrepancy has made it difficult for LLMs to understand and predict molecular configurations in the same way they handle sentences.
To bridge this gap, MIT’s researchers created Llamole—a system that uses LLMs to interpret user queries and then switches between different graph-based AI modules to generate molecular structures, explain their rationale, and devise a synthesis strategy. The system combines the power of text, graphs, and synthesis steps into a unified workflow.
As a result, this multimodal approach drastically improves performance. Llamole was able to generate molecules that were far better at meeting user specifications and more likely to have a viable synthesis plan, increasing the success rate from 5 percent to 35 percent.
Llamole’s success lies in its unique ability to seamlessly combine language processing with graph-based molecular modeling. For example, if a user requests a molecule with specific traits—such as one that can penetrate the blood-brain barrier and inhibit HIV—the LLM interprets the plain-language request and switches to a graph module to generate the appropriate molecular structure.
This switch occurs through the use of a new type of trigger token, allowing the LLM to activate specific modules as needed. The process unfolds in stages: the LLM first predicts the molecular structure, then uses a graph neural network to encode the structure, and finally, a retrosynthetic module predicts the necessary steps to synthesize the molecule. The seamless flow between these stages ensures that the LLM maintains an understanding of what each module does, further enhancing its predictive accuracy.
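That description amounts to a simple control loop, sketched hypothetically below: a language model emits ordinary text interleaved with trigger tokens, and each trigger hands control to a placeholder graph module for structure generation, graph encoding, or retrosynthesis. The token names, function names, and placeholder outputs are invented for illustration and are not Llamole’s actual interface.

```python
# Hypothetical sketch of the staged, trigger-token control flow described
# above. Names, tokens, and outputs are illustrative, not Llamole's API.
from dataclasses import dataclass

@dataclass
class Molecule:
    smiles: str   # molecular graph serialized as a SMILES string

def llm_generate(prompt: str) -> list[str]:
    """Stand-in for the language model: returns text chunks and trigger tokens."""
    return ["Designing a candidate molecule ...", "<design>",
            "Planning how to synthesize it ...", "<retro>"]

def graph_design_module(context: str) -> Molecule:
    """Stand-in for the graph generator that proposes a molecular structure."""
    return Molecule(smiles="CC(=O)OC1=CC=CC=C1C(=O)O")     # placeholder: aspirin

def graph_encode_module(mol: Molecule, context: str) -> list[float]:
    """Stand-in for the graph neural network that embeds the proposed structure."""
    return [float(len(mol.smiles))]                          # toy embedding

def retrosynthesis_module(mol: Molecule, embedding: list[float]) -> list[str]:
    """Stand-in for the retrosynthetic planner that returns synthesis steps."""
    return [f"step 1: start from commercial precursors of {mol.smiles}",
            "step 2: couple, purify, and characterize the product"]

def run(query: str) -> None:
    context, molecule, embedding = query, None, None
    for chunk in llm_generate(query):
        if chunk == "<design>":          # trigger: hand off to the graph generator
            molecule = graph_design_module(context)
            embedding = graph_encode_module(molecule, context)
        elif chunk == "<retro>":         # trigger: hand off to retrosynthesis
            for step in retrosynthesis_module(molecule, embedding):
                print(step)
        else:                            # ordinary text keeps building the context
            context += "\n" + chunk
            print(chunk)

run("Design a molecule that crosses the blood-brain barrier and inhibits HIV.")
```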
“The beauty of this is that everything the LLM generates before activating a particular module gets fed into that module itself. The module is learning to operate in a way that is consistent with what came before,” says Michael Sun, an MIT graduate student and co-author of the study.
Simplicity meets precision
One of the most striking aspects of this new method is its ability to generate simpler, more cost-effective molecular structures. In tests, Llamole outperformed other LLM-based methods and achieved a notable 35 percent success rate in retrosynthetic planning, up from a mere 5 percent with traditional approaches. “On their own, LLMs struggle to figure out how to synthesize molecules because it requires a lot of multistep planning. Our method can generate better molecular structures that are also easier to synthesize,” says Gang Liu, the study’s lead author.
By designing molecules with simpler structures and more accessible building blocks, Llamole could significantly reduce the time and cost involved in developing new compounds.
The road ahead
Though Llamole’s current capabilities are impressive, there is still work to be done. The researchers built two custom datasets to train Llamole, but these datasets focus on only 10 molecular properties. Moving forward, they hope to expand Llamole’s capabilities to design molecules based on a broader range of properties and improve the system’s retrosynthetic planning success rate.
In the long run, the team envisions Llamole serving as a foundation for broader applications beyond molecular design. “Llamole demonstrates the feasibility of using large language models as an interface to complex data beyond textual description, and we anticipate them to be a foundation that interacts with other AI algorithms to solve any graph problems,” says Jie Chen, a senior researcher at MIT-IBM Watson AI Lab.
With further refinements, Llamole could revolutionize fields from pharmaceuticals to material science, offering a glimpse into the future of AI-driven innovation in molecular discovery.