
Know The Scientist

Paul Dirac – the mystifying genius with a doleful past

Meet the enigmatic British physicist who combined quantum mechanics with Einstein’s theory of special relativity.

Karthik Vinod


Credit: Jijin M.K. / EdPublica

The Nobel-winning contribution

Quantum mechanics presented a radical new picture of the world, one that defied intuition. In this theory, we can no longer predict reality with certainty. That was in direct conflict with the deterministic tenets of classical physics, on which Newtonian mechanics and Einstein’s theories of relativity were built.

It didn’t develop overnight, though. The theory went through years of iteration as key principles were added one by one. And given its intuition-defying predictions, it didn’t enjoy an overnight reception either. One of its famous detractors was Albert Einstein, who declared at the 1927 Solvay Conference that “God does not play dice.” To which Niels Bohr, the other great physicist of the era, somewhat humorously replied, “Einstein, don’t tell God what to do!”

Despite the successes quantum theory enjoyed in the 1920s, its proponents knew the theory was incompatible with Einstein’s special theory of relativity. Einstein’s theory postulates that nothing in the universe can cross the speed-of-light barrier: any object with mass would need unlimited energy to reach the speed of light. The postulate holds as firmly as ever and is a fundamental tenet of modern physics. But quantum physics, which until then lacked strong mathematical rigor, didn’t incorporate it. At least, not until Paul Dirac arrived in 1928 with his eponymous equation, the Dirac equation.
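Special relativity makes the energy argument precise: the total energy of a body of mass $m$ moving at speed $v$ is

$$E = \frac{mc^2}{\sqrt{1 - v^2/c^2}},$$

which grows without bound as $v$ approaches $c$. No finite amount of energy can push a massive object all the way to light speed.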

Dirac began as a skeptic of quantum theory, sharing Einstein’s sentiments. His biographer, Graham Farmelo, recounts that as a young PhD student Dirac derided Heisenberg’s groundbreaking paper on the new quantum mechanics, scribbling “Nature can’t be this complicated” at the top of it and brushing it aside.

But not for long. In what Dirac fondly recalled as some of his most profound work, he proved that Heisenberg’s and Schrödinger’s mathematical descriptions of quantum theory essentially predicted the same physical phenomena. He became a convert at last, championing the theory he had once dismissed.

But Dirac’s magnum opus came when he developed the relativistic wave equation that finally stitched quantum mechanics together with Einstein’s special theory of relativity.
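The equation itself is strikingly compact. In modern notation (and natural units, with $\hbar = c = 1$, a convention Dirac’s 1928 paper did not use), it reads

$$(i\gamma^\mu \partial_\mu - m)\,\psi = 0,$$

where $\psi$ is the electron’s four-component wavefunction and the $\gamma^\mu$ are the 4×4 matrices Dirac introduced to make the equation consistent with relativity.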

Dirac was obsessed with the mathematical rigor he knew quantum theory so desperately needed. Until then, experiments had typically led the way, with theoretical models constructed afterwards to account for them. The Dirac equation, however, could predict novel phenomena hitherto undreamt of. It accounted for puzzling phenomena attributed to quantum mechanics until then, including intrinsic spin and magnetism, and it predicted antimatter (particles that have the same mass as their matter counterparts, but opposite electric charge). For these achievements, he was awarded the 1933 Nobel Prize in Physics alongside Erwin Schrödinger; at 31, he was the youngest theoretician to have won the physics prize at the time.

The reticent genius

Richard Feynman (right) chatting with his ‘hero’ Paul Dirac (left) on the sidelines of a conference in July 1962. Credit: Marek Holzman / Wikimedia

Despite these achievements, though, Paul Dirac was not one to bask in the limelight. In Britain, where he grew up, he is ranked among the greatest physicists since Newton, yet he remains scarcely known to the public. Nonetheless, his influence on the physicists who followed him was massive. Richard Feynman called him his ‘hero’, having fallen in love with the mathematical rigor and clarity of his work.

“He made a new breakthrough, a new method of doing physics,” said Feynman. “He had the courage to simply guess at the form of an equation, the equation we now call the Dirac equation, and to try to interpret it afterwards.”

But to his immediate colleagues and well-wishers alike, he was about as shy and reticent as one could possibly be. At times he could come across as cold, almost clinically detached. In a sense, he fit the stereotype of the scientist inept at socializing but formidable in intellect.

However, there could be a more genuine explanation. It is quite plausible that his outward manner was shaped by severe childhood trauma. At least, that is what one can construe from Graham Farmelo, whose acclaimed biography of Dirac was released in 2009. Farmelo sheds light on Dirac’s difficult childhood relationship with his father in great detail. The young Dirac grew up in an abusive and neglectful environment, with scarcely a chance to develop his own identity free of constant intrusion from a father who wanted him engrossed in his education. It is pure speculation, but physics may have given him respite from having to deal with the outside world.

Niels Bohr once remarked, ‘Paul Dirac must be the strangest fellow to ever visit my institute.’ It wasn’t meant as condescension, but rather to highlight Dirac’s eccentric nature.

In some respects, Dirac was as enigmatic and mysterious a man as the quantum theory he helped conceive.

Karthik is a science writer, and co-founder of EdPublica. He writes and edits the science page. He’s also a freelance journalist, with words in The Hindu, a prominent national newspaper in India.

Know The Scientist

The ‘Godfather of AI’ has a warning for us

The speed with which large language models such as ChatGPT have come to the fore has re-invigorated serious discussion about AI ethics and safety among scientists and humanities scholars alike.

Karthik Vinod


Credit: Jijin M.K. / EdPublica

The quest to develop artificial intelligence (AI) in the 20th century drew entrants from various fields, mostly mathematicians and physicists.

Geoff Hinton, famously known today as the ‘godfather of AI’, dabbled in cognitive psychology as a young undergraduate at Cambridge. Allured by the nascent field of AI in the 1970s, Hinton did a PhD at Edinburgh, where he helped revive the idea of artificial neural networks (ANNs). ANNs mimic the neuronal connections in animal brains, and they have since become a staple of mainstream AI research. Hinton, a British-born Canadian, later moved to the University of Toronto, where he is currently a professor of computer science.

In 2018, Hinton’s contributions to computer science and AI were finally recognised: he was awarded a share of the coveted Turing Award, popularly known as the ‘Nobel Prize of Computing’. His 1986 work on ‘backpropagation’ provided the blueprint for how machines learn, also earning him popular recognition as one of the ‘fathers of deep learning’.
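For the curious, here is a minimal sketch of the idea behind backpropagation: run inputs forward through the network, measure the error, then send that error backwards via the chain rule to nudge every weight. The toy two-layer network below is our own illustrative example, not code from Hinton’s paper; it learns XOR, a problem a single-layer network cannot solve.

```python
# Minimal backpropagation sketch: an illustrative two-layer network
# learning XOR. A toy example, not code from Hinton's 1986 paper.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the
    # network (chain rule) to get a gradient for every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent step on all weights and biases.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The same backward bookkeeping of errors, scaled up to billions of weights, is what trains today’s large language models.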

The last two years have seen artificial intelligence become commonplace in public discourse on technology. Leading the charge was OpenAI’s ChatGPT, as large language models (LLMs) found use in a whole host of settings across the globe. OpenAI, Google, Microsoft and the like are engaged in upping the ante.

But this sudden spurt has alarmed many and is re-invigorating a serious discussion about AI ethics and safety. Last year, Elon Musk was among the signatories of a letter requesting a temporary halt to AI research, fearing the ever-increasing odds that sentient AI may be on the horizon. But some sociologists believe this risk is overplayed by billionaires so that the real-world problems posed by AI get swept under the carpet. For example, jobs will be lost, and there is no solution in sight for compensating those who lose their work.

Computer scientists like Hinton, however, have come to the fore to make their views explicitly clear. In fact, Hinton ended his decade-long association with Google last year to speak freely about what he saw as a competition between technology companies to climb upon each other’s advances. He, like many computer scientists, believes humanity is at a ‘turning point’ with AI, especially with LLMs like ChatGPT at the fore.

“It’s [LLMs] very exciting,” said Hinton in a Science article. “It’s very nice to see all this work coming to fruition. But it’s also scary.” 

One research study suggests these LLMs are anything but ‘stochastic parrots’ that merely output what they have been instructed to. This doesn’t mean AI is anywhere close to being sentient today. However, Hinton and other computer scientists fear humanity may unwittingly run the real risk of creating one. In fact, Hinton was one of several signatories of an open letter urging policymakers to consider the existential risk of AI.

A sentient AI, or artificial general intelligence (AGI) as it is technically called, is defined differently by different researchers. None exists today, and it is safe to say nobody knows what one would look like. In popular lore, though, it is simply Skynet from the Terminator movies becoming ‘self-aware’. Hinton is of the opinion that AI has already surpassed biological intelligence in some ways. Still, it must be borne in mind that today’s AI is no more sentient than it is a stochastic parrot. Hinton doesn’t claim more powerful AI would make all humans redundant, but it could take on many routine tasks humans now do, and in time replace them in those roles. Navigating these risks demands perspectives that are transdisciplinary.


Know The Scientist

The astrophysicist who featured in TIME’s most influential personality list

Priyamvada Natarajan’s contributions to astronomy have shed light on two major research interests in contemporary astrophysics: the origins of supermassive black holes, and mapping dark matter in the universe.

Karthik Vinod


Credit: Jijin M.K. / EdPublica

For Priyamvada Natarajan, the earliest exposure to scientific research arose from a childhood passion for making star maps. Her love for maps never abated, and it shaped her career as a theoretical astrophysicist. In the media, she is the famous ‘cosmic cartographer’, who featured in TIME magazine’s list of the 100 most influential people this year.

“I realise what an honour and privilege this is,” said Natarajan to The Hindu. “It sends a message that people working in science can be seen as influential, and that is very gratifying.”

The Indian-American’s claim to fame arises from her pathbreaking research into dark matter and supermassive black holes.

She devised a mathematical technique to chart out dark matter clumps across the universe. Though dark matter is invisible and elusive to astronomers, it is thought to make up roughly 85% of the universe’s matter. Dark matter clumps act as ‘scaffolding’, in Natarajan’s words, over which galaxies form. When light from background galaxies passes through the gravitational influence of dark matter clumps, it bends, much as it would through a glass lens. Natarajan exploited this effect, called gravitational lensing, to map dark matter clumps across the universe.
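The strength of the bending is set by the intervening mass, which is what makes lensing a scale for weighing invisible matter. As a textbook illustration (real cluster-lensing analyses like Natarajan’s are far more elaborate), general relativity gives the deflection angle of light grazing a point mass $M$ at closest distance $b$ as

$$\hat{\alpha} = \frac{4GM}{c^2 b}.$$

Measure how distorted the background galaxies appear, and you can infer how much mass, dark matter included, lies in the way.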

Simulation of dark matter clumps and gas forming galaxies. Credit: Illustris Collaboration

Natarajan reflected on her passion for mapping in a TEDx talk at Yale University, where she is professor of physics and astronomy. Though an ‘armchair’ cartographer, in her own description, she has also helped resolve another major puzzle in astronomy: nailing down the origins of supermassive black holes.

Black holes generally form from dying stars, which collapse under their own weight due to gravity. These black holes then swallow gas from their environment and grow in mass. However, there also exist supermassive black holes, millions of times heavier than any star or stellar-sized black hole, whose formation the dying-star-collapse theory cannot explain. One example is Sagittarius A* at the center of the Milky Way, a whopping four million times more massive than our Sun.

First direct image of Sagittarius A* at the Milky Way center. Credit: EHT

The origins of these behemoths remained in the dark until Natarajan and her collaborators shed some light on them. In their theory, massive clumps of gas in the early universe collapse under their own weight to directly form a ‘seed’ supermassive black hole, which then grows, much like its stellar-mass counterparts, by swallowing gas from its environment. In 2023, astronomers found compelling evidence validating her theory: they reported a supermassive black hole powering the ancient quasar UHZ1 at an epoch when no black hole born of a dying star could have grown so massive.

These observations came nearly two decades after Natarajan’s first paper on the idea in 2005. In a 2018 interview with Quanta, she said she would be content with her contributions to astrophysics even if her theory were never verified within her lifetime; it would be enough that her ideas resonated with astronomers strongly enough to send them searching for her black holes. “I’m trying to tell myself that even that would be a supercool outcome,” she said in that interview. “But finding [the supermassive black hole ‘seed’] would be just so awesome.”

Beyond science, Natarajan is a sought-after public speaker with serious pursuits in the humanities. At Yale University, she is the director of the Franke Program in Science and the Humanities, which fosters links between the two disciplines. Her connection with the humanities began at MIT, where she took degrees in physics and mathematics before a three-year hiatus from science to explore her interest in the philosophy of science. She soon returned to astronomy, enrolling as a PhD student at Cambridge, where she worked under the noted astronomer Martin Rees on black holes in the early universe, the work that seeded her success in later years.


Know The Scientist

Peter Higgs’ odd tryst with the ‘God particle’

The nonagenarian Higgs, who passed away last week in Edinburgh, had expressed displeasure that he alone received the fanfare for the Higgs boson.

Karthik Vinod


Credit: Jijin M.K. / EdPublica

History perhaps would have it no other way for Peter Higgs, with his name seemingly linked forever to the ‘God particle’.

The nonagenarian Higgs, who passed away last week in Edinburgh, had once expressed displeasure that he alone received the fanfare for the Higgs boson. By his own admission, some six other theoretical physicists had come up with essentially the same idea at around the same time. Higgs even proposed renaming the mechanism behind the boson the “ABEGHHK’tH mechanism”, stringing together the initials of every theorist involved. But the name Higgs boson stuck in parlance after Benjamin Lee, a physicist who in the 1970s was better acquainted with Higgs’ work than with the others’, used Higgs’ name as ‘shorthand’ for the particle.

The Nobel Committee didn’t overlook this fact when Higgs was awarded the 2013 Nobel Prize in Physics with François Englert, whose own collaborator, Robert Brout, had died two years before the prize, which is never awarded posthumously. Higgs and Englert were also among the lucky few who lived to see their groundbreaking work become the crowning jewel of their careers.

Higgs, who drew most of the limelight because his 1964 paper explicitly proposed a new quantum particle, showed a steely resolve in defending his work from the onslaught of critique, as is rather common when radical scientific progress is in the making.

For a long time, though, particle physicists could not reach a consensus that the Higgs boson was the particle worth funding the LHC to identify. Nor did the theory find resonance among leading physicists of Higgs’ time; even Stephen Hawking, as The Guardian notes, once publicly stated that the particle would never be discovered. Higgs retorted publicly in kind, saying that Hawking’s celebrity status had helped him escape accountability for his misguided statements.

When the Higgs boson was at long last discovered in 2012, Higgs teared up during the announcement in Geneva. It was arguably a crowning moment even more than the Nobel Prize, for a career well spent in the service of science.

The media hype that followed the Higgs boson’s discovery also helped the particle’s longevity in popular memory. A slew of news stories popularized it as the ‘God particle’, when it is anything but that. The Higgs boson is a prediction arising within quantum field theory (QFT), which describes three of the fundamental forces of nature: electromagnetism and the strong and weak nuclear forces.

In the moments after the Big Bang, these fundamental forces are thought to have existed in unison, and matter as we know it was massless. Only around a trillionth of a second after the colossal Big Bang did particles become imbued with mass, through the Higgs mechanism. According to QFT, quantum particles arise from, and interact through, characteristic energy fields, and the Higgs boson emerges from one such field, the Higgs field. The boson isn’t itself the carrier of mass; rather, massless particles acquire mass by interacting with the Higgs field, and the boson is that field’s telltale excitation. In QFT, the Higgs is just another particle.
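To make the picture slightly more concrete, here is a schematic line from the Standard Model (a detail the article above doesn’t spell out): a particle $\psi$ couples to the Higgs field with strength $y$, and once the field settles into its vacuum value $v \approx 246$ GeV, that coupling reads as a mass:

$$\mathcal{L} \supset -\,y\,\bar{\psi}\,\psi\,H \;\;\longrightarrow\;\; m_\psi = \frac{y\,v}{\sqrt{2}}.$$

Particles that couple strongly to the field end up heavy; those that do not couple at all, like the photon, stay massless.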

According to Business Insider, the term ‘God particle’ derives from Nobel Prize-winning physicist Leon Lederman’s popular book of the early 1990s. The planned title, ‘The Goddamn Particle’, was changed to ‘The God Particle’ at the insistence of his publisher. Lederman’s logic was to convey how the Higgs boson had evaded the particle detectors of his era. But Higgs was no fan of the name. In a 2013 interview, he said, “The name itself is a sham … it was a joke, you know.”

But the term ‘God particle’ has stuck in popular lore ever since, with no renaming on the horizon.

