Technology
Nuventure Connect Launches AI Innovation Lab for Smart Industrial Solutions
The company also unveiled NuWave, an IoT platform under development that integrates AI-driven automation, real-time analytics, and predictive maintenance tools

Kerala-based technology firm Nuventure Connect has launched an AI Innovation Lab aimed at accelerating the development of intelligent, sustainable industrial solutions. The launch coincides with the company’s 15th anniversary and signals a renewed focus on artificial intelligence (AI), Internet of Things (IoT), and data-driven automation.
The new lab will serve as a collaborative platform for enterprises, startups, and small-to-medium-sized businesses to design, test, and refine next-generation technologies. Nuventure, which specializes in deep-tech solutions and digital transformation, will provide technical expertise in AI, machine learning, and IoT to support co-development efforts.
“Our vision is to help businesses harness the full potential of AI and IoT to optimize operations and improve sustainability,” said Tinu Cleatus, Managing Director and CEO of Nuventure Connect.
The company also unveiled NuWave, an IoT platform under development that integrates AI-driven automation, real-time analytics, and predictive maintenance tools. The platform is targeted at industries seeking to cut energy consumption and preempt operational failures. Company representatives showcased solutions that, they said, could reduce industrial energy costs by up to 40%.
Nuventure is inviting organizations to partner with the lab through structured collaboration models. Participating firms will gain access to advanced IoT infrastructure, expert mentorship, and opportunities to co-create pilot projects.
The initiative places Nuventure among a growing number of regional tech firms contributing to global trends in sustainable and AI-led industrial innovation. By opening its lab to cross-sector partnerships, the company aims to help shape the next phase of digital transformation in manufacturing and beyond.
Math
Researchers Unveil Breakthrough in Efficient Machine Learning with Symmetric Data

MIT researchers have developed the first mathematically proven method for training machine learning models that can efficiently interpret symmetric data—an advance that could significantly enhance the accuracy and speed of AI systems in fields ranging from drug discovery to climate analysis.
In traditional drug discovery, for example, a human looking at a rotated image of a molecule can easily recognize it as the same compound. However, standard machine learning models may misclassify the rotated image as a completely new molecule, highlighting a blind spot in current AI approaches. This shortcoming stems from the concept of symmetry, where an object’s fundamental properties remain unchanged even when it undergoes transformations like rotation.
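The rotation example can be made concrete with a toy invariant: the pairwise distances between a molecule's atoms do not change under rotation, so a model that reads distances rather than raw coordinates sees a rotated molecule as the same compound. The following Python sketch is illustrative only and is not the method from the study:

```python
import math

def rotate(points, theta):
    """Rotate 2D points about the origin by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def pairwise_distances(points):
    """Sorted pairwise distances: a descriptor unchanged by any rotation."""
    dists = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            dists.append(round(math.hypot(dx, dy), 9))
    return sorted(dists)

# A toy "molecule" as 2D atom positions.
molecule = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8)]
rotated = rotate(molecule, math.radians(60))

# Raw coordinates differ, but the invariant descriptor matches.
print(molecule == rotated)                                          # False
print(pairwise_distances(molecule) == pairwise_distances(rotated))  # True
```

A model fed raw coordinates must learn this equivalence from data; a model fed an invariant descriptor gets it for free.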
“If a drug discovery model doesn’t understand symmetry, it could make inaccurate predictions about molecular properties,” the researchers explained. While some empirical techniques have shown promise, there was previously no provably efficient way to train models that rigorously account for symmetry—until now.
“These symmetries are important because they are some sort of information that nature is telling us about the data, and we should take it into account in our machine-learning models. We’ve now shown that it is possible to do machine-learning with symmetric data in an efficient way,” said Behrooz Tahmasebi, MIT graduate student and co-lead author of the new study, in a media statement.
The research, recently presented at the International Conference on Machine Learning, is co-authored by fellow MIT graduate student Ashkan Soleymani (co-lead author), Stefanie Jegelka (associate professor of EECS, IDSS member, and CSAIL member), and Patrick Jaillet (Dugald C. Jackson Professor of Electrical Engineering and Computer Science and principal investigator at LIDS).
Rethinking how AI sees the world
Symmetric data appears across numerous scientific disciplines. For instance, a model capable of recognizing an object irrespective of its position in an image demonstrates such symmetry. Without built-in mechanisms to process these patterns, machine learning models can make more mistakes and require massive datasets for training. Conversely, models that leverage symmetry can work faster and with fewer data points.
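One classical way to build symmetry into a model, group averaging, illustrates the idea: averaging a model's output over every transformation in a symmetry group yields an output that is identical for any transformed copy of the input. The sketch below uses the four-element rotation group as a minimal example; the MIT algorithm itself is more sophisticated, and nothing here should be read as its implementation:

```python
def rot90(points):
    """Rotate 2D points 90 degrees counterclockwise about the origin."""
    return [(-y, x) for x, y in points]

def score(points):
    """A toy, non-invariant model output."""
    return sum(max(x, y) for x, y in points)

def symmetrized_score(points):
    """Average the toy model over the four-element rotation group C4.

    Group averaging makes the output invariant: rotating the input
    only reorders the terms of the average.
    """
    total, p = 0.0, points
    for _ in range(4):
        total += score(p)
        p = rot90(p)
    return total / 4

shape = [(1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
turned = rot90(shape)

print(score(shape) == score(turned))                        # False: raw model is not invariant
print(symmetrized_score(shape) == symmetrized_score(turned))  # True: averaged model is
```

The catch, and the motivation for the research described here, is computational cost: naive averaging multiplies the work by the size of the group, which for continuous symmetries like arbitrary rotations is infinite.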
“Graph neural networks are fast and efficient, and they take care of symmetry quite well, but nobody really knows what these models are learning or why they work. Understanding GNNs is a main motivation of our work, so we started with a theoretical evaluation of what happens when data are symmetric,” Tahmasebi noted.
The MIT researchers explored the trade-off between how much data a model needs and the computational effort required. Their resulting algorithm brings symmetry to the fore, allowing models to learn from fewer examples without spending excessive computing resources.
Blending algebra and geometry
The team combined strategies from both algebra and geometry, reformulating the problem so the machine learning model could efficiently process the inherent symmetries in the data. This innovative blend results in an optimization problem that is computationally tractable and requires fewer training samples.
“Most of the theory and applications were focusing on either algebra or geometry. Here we just combined them,” explained Tahmasebi.
By demonstrating that symmetry-aware training can be both accurate and efficient, the breakthrough paves the way for the next generation of neural network architectures, which promise to be more precise and less resource-intensive than conventional models.
“Once we know that better, we can design more interpretable, more robust, and more efficient neural network architectures,” added Soleymani.
This foundational advance is expected to influence future research in diverse applications, including materials science, astronomy, and climate modeling, wherever symmetry in data is a key feature.
Books
Humour, Humanity, and the Machine: A New Book Explores Our Comic Relationship with Technology
MIT scholar Benjamin Mangrum examines how comedy helps us cope with, critique, and embrace computing.

In a world increasingly shaped by algorithms, automation, and artificial intelligence, one unexpected tool continues to shape how we process technological change: comedy.
That’s the central argument of a thought-provoking new book by MIT literature professor Benjamin Mangrum, titled The Comedy of Computation: Or, How I Learned to Stop Worrying and Love Obsolescence, published this month by Stanford University Press. Drawing on literature, film, television, and theater, Mangrum explores how humor has helped society make sense of machines, and of the humans who build and depend on them.
“Comedy makes computing feel less impersonal, less threatening,” Mangrum writes. “It allows us to bring something strange into our lives in a way that’s familiar, even pleasurable.”
From romantic plots to digital tensions
One of the book’s core insights is that romantic comedies, perhaps surprisingly, have been among the richest cultural spaces for grappling with our collective unease about technology. Mangrum traces this back to classic narrative structures, where characters who begin as obstacles eventually become partners in resolution. He suggests that computing often follows a similar arc in cultural storytelling.
“In many romantic comedies,” Mangrum explains, “there’s a figure or force that seems to stand in the way of connection. Over time, that figure is transformed and folded into the couple’s union. In tech narratives, computing sometimes plays this same role: beginning as a disruption, then becoming an ally.”
This structure, he notes, is centuries old, prevalent in Shakespearean comedies and classical drama, but it has found renewed relevance in the digital age.
Satirizing silicon dreams
In the book, Mangrum also explores what he calls the “Great Tech-Industrial Joke,” a mode of cultural humor aimed squarely at the inflated promises of the technology industry. Many of today’s comedies, from satirical shows like Silicon Valley to viral social media content, lampoon the gap between utopian tech rhetoric and underwhelming or problematic outcomes.
“Tech companies often announce revolutionary goals,” Mangrum observes, “but what we get is just slightly faster email. It’s a funny setup, but also a sharp critique.”
This dissonance, he argues, is precisely what makes tech such fertile ground for comedy. We live with machines that are both indispensable and, at times, disappointing. Humor helps bridge that contradiction.
The ethics of authenticity
Another recurring theme in The Comedy of Computation is the modern ideal of authenticity, and how computing complicates it. From social media filters to AI-generated content, questions about what’s “real” are everywhere, and comedy frequently calls out the performance.
“Comedy has always mocked pretension,” Mangrum says. “In today’s context, that often means jokes about curated digital lives or artificial intelligence mimicking human quirks.”
Messy futures, meaningful laughter
Ultimately, Mangrum doesn’t claim that comedy solves the challenges of computing-but he argues that it gives us a way to live with them.
“There’s this really complicated, messy picture,” he notes. “Comedy doesn’t always resolve it, but it helps us experience it, and sometimes, laugh through it.”
As we move deeper into an era of smart machines, digital identities, and algorithmic decision-making, Mangrum’s book reminds us that a well-placed joke might still be one of our most human responses.
(With inputs from MIT News)
Technology
Researchers Develop Breakthrough Imaging Tech That Lets Robots See Inside Boxes
The system, called mmNorm, uses millimeter wave (mmWave) signals — commonly used in Wi-Fi networks — to reconstruct high-resolution 3D shapes of objects that are fully occluded from view

A team of researchers at the Massachusetts Institute of Technology (MIT) has developed a new imaging technology that allows robots to see hidden objects — such as tools buried under packing material or items tucked inside drawers — with unprecedented accuracy.
The system, called mmNorm, uses millimeter wave (mmWave) signals — commonly used in Wi-Fi networks — to reconstruct high-resolution 3D shapes of objects that are fully occluded from view. The method achieved 96% reconstruction accuracy on complex-shaped items such as mugs, power tools, and silverware — outperforming existing methods by a wide margin.
In a media statement, Fadel Adib, associate professor in MIT’s Department of Electrical Engineering and Computer Science and senior author of the study, said:
“We’ve been interested in this problem for quite a while, but past methods weren’t getting us where we needed to go. We needed a very different way of using these signals than what’s been done for over 50 years.”
Seeing What’s Invisible
mmNorm works by capturing and interpreting the way mmWave signals bounce off surfaces — even when those surfaces are hidden from view. Unlike traditional radar, which struggles to image small items with detail, mmNorm estimates the direction of each object’s surface using signal strength and geometry, then reconstructs a full 3D model from that data.
“Some antennas might have a very strong vote, some might have a very weak vote… and we combine all votes to produce one surface normal that is agreed upon by all antenna locations,” explained Laura Dodds, lead author and MIT research assistant, in a media statement.
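The voting idea can be sketched as a weighted average of candidate surface normals, with each antenna's weight standing in for its signal strength. This is an illustrative assumption about the mechanism, not the published mmNorm algorithm:

```python
import math

def combine_normal_votes(votes):
    """Combine per-antenna surface-normal votes into one consensus normal.

    Each vote is (weight, (nx, ny, nz)). The weight models signal
    strength: stronger reflections count more. Illustrative sketch only.
    """
    sx = sum(w * n[0] for w, n in votes)
    sy = sum(w * n[1] for w, n in votes)
    sz = sum(w * n[2] for w, n in votes)
    norm = math.sqrt(sx * sx + sy * sy + sz * sz)
    if norm == 0:
        raise ValueError("votes cancel out; no consensus direction")
    # Normalize so the result is a unit vector.
    return (sx / norm, sy / norm, sz / norm)

# Three antennas: two strong votes near +z, one weak outlier toward +x.
votes = [
    (0.9, (0.0, 0.0, 1.0)),
    (0.8, (0.1, 0.0, 0.995)),
    (0.1, (1.0, 0.0, 0.0)),
]
nx, ny, nz = combine_normal_votes(votes)
print(round(nz, 2))  # consensus points mostly along +z; weak outlier barely shifts it
```

Strong votes dominate the consensus, which matches the intuition in the quote: weak votes are heard but carry little weight.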
The researchers built a prototype by mounting a radar unit on a robotic arm, which moved around the object while collecting signal data. This allowed the system to determine not just what an object was, but its exact shape and orientation — crucial information for robots tasked with delicate or complex manipulation.
Real-World Applications
The implications of mmNorm are wide-ranging. In manufacturing or warehouses, it could enable robots to retrieve the right tool or part from cluttered drawers or sealed containers. In homes or assisted living facilities, it could allow service robots to interact more naturally and safely with everyday items. In security and defense, the technology could enhance the detection of concealed objects in scanners.
“Our qualitative results really speak for themselves. The amount of improvement you see makes it easier to develop new applications using these reconstructions,” said Tara Boroushaki, co-author and research assistant.
The system also successfully distinguished between multiple objects made of different materials — including wood, glass, plastic, and metal — although its performance drops when objects are behind thick walls or metallic barriers.
The research team aims to improve mmNorm’s resolution, enhance its ability to handle less reflective materials, and make it more robust for use through thicker obstructions.
“This work really represents a paradigm shift in how we think about these signals and the 3D reconstruction process,” Dodds added. “We’re excited to see how the insights we’ve gained here can have a broad impact.”
The study, co-authored by Fadel Adib, Laura Dodds, Tara Boroushaki, and former MIT postdoc Kaichen Zhou, was presented at the 2025 Annual International Conference on Mobile Systems, Applications and Services (ACM MobiSys 2025).