Technology

Nuventure Connect Launches AI Innovation Lab for Smart Industrial Innovation

The company also unveiled NuWave, an IoT platform under development that integrates AI-driven automation, real-time analytics, and predictive maintenance tools

Tinu Cleatus, CEO, Nuventure Connect

Kerala-based technology firm Nuventure Connect has launched an AI Innovation Lab aimed at accelerating the development of intelligent, sustainable industrial solutions. The launch coincides with the company’s 15th anniversary and signals a renewed focus on artificial intelligence (AI), Internet of Things (IoT), and data-driven automation.

The new lab will serve as a collaborative platform for enterprises, startups, and small-to-medium-sized businesses to design, test, and refine next-generation technologies. Nuventure, which specializes in deep-tech solutions and digital transformation, will provide technical expertise in AI, machine learning, and IoT to support co-development efforts.

“Our vision is to help businesses harness the full potential of AI and IoT to optimize operations and improve sustainability,” said Tinu Cleatus, Managing Director and CEO of Nuventure Connect.

The company also unveiled NuWave, an IoT platform under development that integrates AI-driven automation, real-time analytics, and predictive maintenance tools. The platform is targeted at industries seeking to cut energy consumption and preempt operational failures. Company representatives showcased solutions that, they said, could reduce industrial energy costs by up to 40%.

Nuventure is inviting organizations to partner with the lab through structured collaboration models. Participating firms will gain access to advanced IoT infrastructure, expert mentorship, and opportunities to co-create pilot projects.

The initiative places Nuventure among a growing number of regional tech firms contributing to global trends in sustainable and AI-led industrial innovation. By opening its lab to cross-sector partnerships, the company aims to help shape the next phase of digital transformation in manufacturing and beyond.

Technology

Researchers Crack Open the ‘Black Box’ of Protein AI Models

The approach could accelerate drug target identification, vaccine research, and new biological discoveries.

For years, artificial intelligence models that predict protein structures and functions have been critical tools in drug discovery, vaccine development, and therapeutic antibody design. But while these protein language models (PLMs), often built on large language models (LLMs), deliver impressively accurate predictions, researchers have been unable to see how the models arrive at those decisions — until now.

In a study published this week in the Proceedings of the National Academy of Sciences (PNAS), a team of MIT researchers unveiled a novel method to interpret the inner workings of these black-box models. By shedding light on the features that influence predictions, the approach could accelerate drug target identification, vaccine research, and new biological discoveries.

Cracking the protein ‘black box’

“Protein language models have been widely used for many biological applications, but there’s always been a missing piece: explainability,” said Bonnie Berger, Simons Professor of Mathematics and head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). In a media statement, she explained, “Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations. Additionally, identifying features that protein language models track has the potential to reveal novel biological insights.”

The study was led by MIT graduate student Onkar Gujral, with contributions from Mihir Bafna, also a graduate student, and Eric Alm, professor of biological engineering at MIT.

From AlphaFold to explainability

Protein modeling took off in 2018, when Berger and then-graduate student Tristan Bepler introduced the first protein language model. Much as ChatGPT processes words, these models analyze amino acid sequences to predict protein structure and function. Their innovations paved the way for powerful systems such as AlphaFold, ESM2, and OmegaFold, transforming the fields of bioinformatics and molecular biology.
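
For readers who want a concrete picture, the sketch below shows how a protein language model of this family is queried in practice. It is a minimal illustration, assuming the open-source Hugging Face transformers library and Meta’s publicly released ESM2 checkpoint; it is not code from the study itself.

import torch
from transformers import AutoTokenizer, EsmModel

# Smallest publicly released ESM2 checkpoint (chosen here for illustration).
MODEL = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = EsmModel.from_pretrained(MODEL)

# A protein is "read" as a string of amino-acid letters,
# much as an LLM reads a sentence of words.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One embedding vector per residue (plus start/end tokens); downstream
# tools predict structure and function from these representations.
embeddings = outputs.last_hidden_state  # shape: (1, len(sequence) + 2, 320)
print(embeddings.shape)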

Yet, despite their predictive power, researchers remained in the dark about why a model reached certain conclusions. “We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger noted.

The sparse autoencoder approach

To address this challenge, the MIT team employed a technique called a sparse autoencoder, an algorithm that has also been used to interpret LLMs. Sparse autoencoders expand the representation of a protein across thousands of neural nodes, making it easier to distinguish which specific features influence the prediction.

“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” explained Gujral in a media statement. “Before the sparse representations are created, the networks pack information so tightly together that it’s hard to interpret the neurons.”
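
As an illustration of the idea, and not the paper’s exact architecture, the PyTorch sketch below expands a dense embedding into a much wider code and applies an L1 penalty so that only a few units stay active at a time; the layer sizes and penalty weight are assumed values chosen for readability.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Expands a dense protein embedding into a wider, sparse code."""
    def __init__(self, d_model=320, d_code=4096):  # illustrative sizes
        super().__init__()
        self.encoder = nn.Linear(d_model, d_code)
        self.decoder = nn.Linear(d_code, d_model)

    def forward(self, x):
        code = torch.relu(self.encoder(x))  # over-complete representation
        return self.decoder(code), code

def sae_loss(x, x_hat, code, l1_weight=1e-3):
    # Reconstruction error keeps the code faithful to the embedding;
    # the L1 term drives most units to zero, so the few that do light
    # up can be matched to interpretable features.
    return ((x - x_hat) ** 2).mean() + l1_weight * code.abs().mean()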

By analyzing these expanded representations with help from the AI assistant Claude, the researchers could link specific nodes to biological features such as protein families, molecular functions, or location within a cell. For instance, one node was found to correspond to signaling proteins involved in transmembrane ion transport.

Implications for drug discovery and biology

This new transparency could be transformational for drug design and vaccine development, allowing scientists to select the most reliable models for specific biomedical tasks. Moreover, the study suggests that as AI models become more powerful, they could reveal previously undiscovered biological patterns.

“Understanding what features protein models encode means researchers can fine-tune inputs, select optimal models, and potentially even uncover new biological insights from the models themselves,” Gujral said. “At some point, when these models get more powerful, you could learn more biology than you already know just from opening up the models.”

Math

Researchers Unveil Breakthrough in Efficient Machine Learning with Symmetric Data

MIT researchers have developed the first mathematically proven method for training machine learning models that can efficiently interpret symmetric data—an advance that could significantly enhance the accuracy and speed of AI systems in fields ranging from drug discovery to climate analysis.

In traditional drug discovery, for example, a human looking at a rotated image of a molecule can easily recognize it as the same compound. However, standard machine learning models may misclassify the rotated image as a completely new molecule, highlighting a blind spot in current AI approaches. This shortcoming stems from the concept of symmetry, where an object’s fundamental properties remain unchanged even when it undergoes transformations like rotation.

“If a drug discovery model doesn’t understand symmetry, it could make inaccurate predictions about molecular properties,” the researchers explained. While some empirical techniques have shown promise, there was previously no provably efficient way to train models that rigorously account for symmetry—until now.

“These symmetries are important because they are some sort of information that nature is telling us about the data, and we should take it into account in our machine-learning models. We’ve now shown that it is possible to do machine-learning with symmetric data in an efficient way,” said Behrooz Tahmasebi, MIT graduate student and co-lead author of the new study, in a media statement.

The research, recently presented at the International Conference on Machine Learning, is co-authored by fellow MIT graduate student Ashkan Soleymani (co-lead author), Stefanie Jegelka (associate professor of EECS, IDSS member, and CSAIL member), and Patrick Jaillet (Dugald C. Jackson Professor of Electrical Engineering and Computer Science and principal investigator at LIDS).

Rethinking how AI sees the world

Symmetric data appears across numerous scientific disciplines. For instance, a model capable of recognizing an object irrespective of its position in an image demonstrates such symmetry. Without built-in mechanisms to process these patterns, machine learning models can make more mistakes and require massive datasets for training. Conversely, models that leverage symmetry can work faster and with fewer data points.

“Graph neural networks are fast and efficient, and they take care of symmetry quite well, but nobody really knows what these models are learning or why they work. Understanding GNNs is a main motivation of our work, so we started with a theoretical evaluation of what happens when data are symmetric,” Tahmasebi noted.

The MIT researchers explored the trade-off between how much data a model needs and the computational effort required. Their resulting algorithm brings symmetry to the fore, allowing models to learn from fewer examples without spending excessive computing resources.
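
One textbook way to build a symmetry into a model, shown here purely as an illustration rather than as the team’s new algorithm, is group averaging: predictions are averaged over every transformation in a symmetry group, making the output invariant by construction. The sketch assumes PyTorch and the four-element group of 90-degree image rotations.

import torch
import torch.nn as nn

class RotationAveraged(nn.Module):
    """Wraps any image classifier so its output is exactly invariant
    to 90-degree rotations, by averaging over the 4-element group."""
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):  # x: (batch, channels, height, width)
        outputs = [self.backbone(torch.rot90(x, k, dims=(2, 3)))
                   for k in range(4)]
        return torch.stack(outputs).mean(dim=0)

Even this toy wrapper makes the trade-off visible: exact invariance costs four forward passes per example, whereas an unconstrained model would instead need far more training data to learn the same symmetry on its own.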

Blending algebra and geometry

The team combined strategies from both algebra and geometry, reformulating the problem so the machine learning model could efficiently process the inherent symmetries in the data. This innovative blend results in an optimization problem that is computationally tractable and requires fewer training samples.

“Most of the theory and applications were focusing on either algebra or geometry. Here we just combined them,” explained Tahmasebi.

By demonstrating that symmetry-aware training can be both accurate and efficient, the breakthrough paves the way for the next generation of neural network architectures, which promise to be more precise and less resource-intensive than conventional models.

“Once we know that better, we can design more interpretable, more robust, and more efficient neural network architectures,” added Soleymani.

This foundational advance is expected to influence future research in diverse applications, including materials science, astronomy, and climate modeling, wherever symmetry in data is a key feature.

Books

Humour, Humanity, and the Machine: A New Book Explores Our Comic Relationship with Technology

MIT scholar Benjamin Mangrum examines how comedy helps us cope with, critique, and embrace computing.

Credit: Courtesy of Stanford University Press; Allegra Boverman

In a world increasingly shaped by algorithms, automation, and artificial intelligence, one unexpected tool continues to shape how we process technological change: comedy.

That’s the central argument of a thought-provoking new book by MIT literature professor Benjamin Mangrum, titled The Comedy of Computation: Or, How I Learned to Stop Worrying and Love Obsolescence, published this month by Stanford University Press. Drawing on literature, film, television, and theater, Mangrum explores how humor has helped society make sense of machines, and of the humans who build and depend on them.

“Comedy makes computing feel less impersonal, less threatening,” Mangrum writes. “It allows us to bring something strange into our lives in a way that’s familiar, even pleasurable.”

From romantic plots to digital tensions

One of the book’s core insights is that romantic comedies, perhaps surprisingly, have been among the richest cultural spaces for grappling with our collective unease about technology. Mangrum traces this back to classic narrative structures, where characters who begin as obstacles eventually become partners in resolution. He suggests that computing often follows a similar arc in cultural storytelling.

“In many romantic comedies,” Mangrum explains, “there’s a figure or force that seems to stand in the way of connection. Over time, that figure is transformed and folded into the couple’s union. In tech narratives, computing sometimes plays this same role: beginning as a disruption, then becoming an ally.”

This structure, he notes, is centuries old, prevalent in Shakespearean comedies and classical drama, but it has found renewed relevance in the digital age.

Satirizing silicon dreams

In the book, Mangrum also explores what he calls the “Great Tech-Industrial Joke”: a mode of cultural humor aimed squarely at the inflated promises of the technology industry. Many of today’s comedies, from satirical shows like Silicon Valley to viral social media content, lampoon the gap between utopian tech rhetoric and underwhelming or problematic outcomes.

“Tech companies often announce revolutionary goals,” Mangrum observes, “but what we get is just slightly faster email. It’s a funny setup, but also a sharp critique.”

This dissonance, he argues, is precisely what makes tech such fertile ground for comedy. We live with machines that are both indispensable and, at times, disappointing. Humor helps bridge that contradiction.

The ethics of authenticity

Another recurring theme in The Comedy of Computation is the modern ideal of authenticity, and how computing complicates it. From social media filters to AI-generated content, questions about what’s “real” are everywhere, and comedy frequently calls out the performance.

“Comedy has always mocked pretension,” Mangrum says. “In today’s context, that often means jokes about curated digital lives or artificial intelligence mimicking human quirks.”

Messy futures, meaningful laughter

Ultimately, Mangrum doesn’t claim that comedy solves the challenges of computing, but he argues that it gives us a way to live with them.

“There’s this really complicated, messy picture,” he notes. “Comedy doesn’t always resolve it, but it helps us experience it, and sometimes, laugh through it.”

As we move deeper into an era of smart machines, digital identities, and algorithmic decision-making, Mangrum’s book reminds us that a well-placed joke might still be one of our most human responses.

(With inputs from MIT News)
