For the longest time, Yuval Noah Harari did not have a smartphone. Just two years ago, he was forced to buy one because “so many services now require a smartphone,” he says, during an interview at the Taj Mahal Palace Hotel in Mumbai on a glorious Sunday morning. But he uses the device sparingly. “I try to use it instead of being used by it,” he says with an air of wisdom. “Too much information isn’t a good thing. This is part of my information diet.”

The need for an ‘information diet’ captures the essence of his latest book, Nexus: A Brief History of Information Networks from the Stone Age to AI (Fern Press). Exploring the breadth of history, Harari writes about how clay tablets, stories, the printing press, and computers have all helped humans communicate with one another. He explores how communication networks have helped us build societies, exercise power, disseminate information, and shape democracies and autocracies. Using history as the basis, he imagines — and dreads — what could happen in the age of AI (artificial intelligence).

In a sweeping saga, Harari takes the reader across the world and across centuries — from ancient Mesopotamia to the Qin dynasty to Nazi Germany — and liberally sprinkles the book with gripping anecdotes. However, in packing them into neat episodes, he sometimes strips the stories — whether of the fall of the Roman republic or of the dangers posed by the printing press — of their nuance, and makes causal connections rather recklessly.

He first spends several pages simply examining what information means. His biggest concern is the “naive view of information”, which posits that more information will lead to the truth. “Many proponents of this naive view sit in Silicon Valley and firmly believe that the faster spread of information through technology will be better for the world,” he tells me.

He disagrees, contending over 400-odd pages that the same technology enables us to spread lies and doesn’t necessarily make us wiser. “The point is not the quantity of information but the quality of it. We need institutions with scientific mechanisms to tell us the difference between reliable and unreliable information. But Silicon Valley is destroying these institutions,” he says.

AI as an agent

In the midst of this information overload, Harari worries that we are plunging happily into the age of AI. With no holds barred, he writes, “AI has the potential to escape our control and enslave or annihilate us.” When asked about this alarmist perspective, he doubles down in his soft-spoken, yet assertive manner: “When I say AI may escape our control, I mean it. Because AI is not a tool, it is an agent.”

He elaborates: “A book or a printing press cannot escape our control. A printing press cannot invent a new book, but AI can. AI is capable of writing texts and creating images and videos. The defining characteristic of AI is that it can learn on its own and change, which is why it is difficult to control.”

Harari cites two examples of the dangers of AI during the interview, and dozens more in the book. He recounts how, in March 2016, AlphaGo, an AI system developed by Google DeepMind, beat Lee Sedol, the Go world champion, in a five-game match in Seoul. It is not the defeat itself, but the manner in which it happened, that he found terrifying.


“AlphaGo invented new ways and strategies of playing Go, which were different from the way millions have played Go in the past. This is just a game. But the same superhuman intelligence can start changing much more important things than just games,” he says. Harari cites finance and religion as two spheres that can change significantly because of AI, and asks: what happens when we give AI the power to make more financial decisions? What happens when AI begins to analyse religious texts and throw up new interpretations?

It is an apocalyptic view of the world, but Harari doesn’t agree that he is a scaremonger. “I don’t make predictions; I give warnings,” he clarifies. “What happens will ultimately depend on the decisions we make.”

He explains that very few people understand AI and they are mostly based in two countries: the U.S. and China. “We have a handful of people representing humanity and making some of the most important decisions in history. This is why I wrote Nexus, to give more people in more countries a better understanding of what’s happening, so they can join the debate. Then there is a good chance that we will make good decisions.”

Gift of storytelling

Harari’s greatest gift is his storytelling. Nexus is a bold, sensationalist, and propulsive read. After all, he is a popular historian, a tag he enjoys because it has allowed him to reach a wider audience. “I see myself as a bridge between academic historians who write for a very narrow professional circle and the wider public. I base my work on their findings. Popular history is important, but it has to stay loyal to the scientific values of evidence, truth, and research,” he says.

When he wrote Sapiens in 2011, Harari had one research assistant; now, he has four. Every time he wants to learn more about a subject, he says he simply instructs them to do the research. The work on Nexus started in 2019 and he took two years to write the book.

What helps him write weighty tomes is discipline. Harari wakes up at 7 a.m., meditates for an hour, eats breakfast, and then usually writes or does research until noon, when he takes a break. He then sits down for another three to four hours of uninterrupted work, followed by another hour of meditation and exercise, either yoga or a walk.

“I don’t work during the evenings and weekends. When I work, I work for long stretches and when I relax, I relax for long stretches. Otherwise, I don’t do either well,” he says.

radhika.s@thehindu.co.in
