
Author and historian Yuval Noah Harari discusses the battle against fake news, the challenges facing democracy worldwide, and the biggest threat facing humanity in the next 100 years

Yuval Noah Harari. YouTube/Talks At Google

  • Yuval Noah Harari is a world-renowned historian and professor at the Hebrew University of Jerusalem. He is the author of "Sapiens: A Brief History of Humankind," "Homo Deus: A Brief History of Tomorrow," and, most recently, "21 Lessons for the 21st Century."
  • Harari sat down with Mathias Döpfner, the CEO of Business Insider's parent company Axel Springer, for a wide-ranging discussion on politics, the state of democracy, and the threats facing humanity today.

Axel Springer CEO Mathias Döpfner: You are something of a pop star among historians. You are constantly being asked about the future. Does your life as an oracle annoy you sometimes?


YUVAL NOAH HARARI: As a historian, I like to talk more about the past, but most people want to hear about the future. I get a bit annoyed when people see me as a kind of guru-prophet — as if I know what will happen in 50 years or what we should do. It is obvious that no one has any idea what the world will look like in 2050. This is probably the first time in human history that we have absolutely no idea about even the most basic stuff. It is equally obvious that no one can predict political developments. If you lived a thousand years ago, you couldn't tell whether there was going to be a war, an invasion, a revolution, or anything else. But today we cannot even imagine the basic structure of the job market in 2050, the basic structure of the family, or what our own bodies will be like. We don't know what life expectancy will be. Things like that. I try to think through different options. In doing so, I always emphasize that we still have plenty of room to maneuver to prevent the worst-case scenarios and to bring about the best-case scenarios.

Your topics are often political. Do you have political ambitions? Not in a party-political sense, but in a broader sense?

In the broadest sense, yes. In the sense of trying to shape the public conversation. But I don't think that I can give people answers. I don't have the political ambition to tell people to do this or don't do that. But I think we need to change the public conversation and focus it on things other than what is currently attracting the most attention. Humanity today is facing three major problems. These should be the top three items on the agenda of every country, of every election campaigner: nuclear war, climate change, and disruptive technologies, especially the rise of AI and biotechnology. But if I look at an election or referendum anywhere in the world, like Brexit or the election in the US or whatever, people are just not talking about these things. This worries me. I see my role as a public intellectual as one of trying to steer the conversation in that direction.

These days it is fashionable to speak very negatively about politicians. Who does his or her job best when it comes to dealing with the topics that are particularly relevant from your perspective?

I'll have to think about that. I don't really know. I have never done an assessment of different political figures, and I just don't have the expertise to do such an assessment.

So, there is nobody who particularly impresses you?

No, I think the standard of governance we see in much of the world today is the best we've ever had.

Does that also apply to Donald Trump?

Sure, there are exceptions, but if you look at the United States government, what it is capable of, what it can provide for people, it's a lot better than it was a century or two ago. I think part of the problem, especially in the West, is that people are unaware of how lucky they are. They have little gratitude for what they and past generations have achieved. And that's why they're willing to risk so carelessly what they have achieved. These people feel that the system is completely broken, and that all politicians are liars. That we have to destroy everything and start from scratch. This is just terrible. They do not know how much they can lose, how far they can fall.

Can you assign your views to an ideology? Are you more left or more right?

I think most of the ideological divisions that we have inherited from the 20th century are not very relevant to the big issues of the 21st century. What is the difference between the Republican Party and the Democratic Party in the US when it comes to AI? There is no difference. They just don't talk about it. However, when it comes to climate change, there is one big difference: In some countries, such as the United States, the denial of climate change seems to be the monopoly of right-wing nationalists. This is surprising from a historical perspective as conservatives cared more about the preservation of the environment in the past. In the first half of the 20th century, it was actually more of a right-wing-nationalist concern to worry about the environment, the forests, and the animals.

The explanation for this is that we are facing an immense crisis. It is a global crisis. It should be obvious to everyone that there is absolutely no national solution to climate change. You can't solve climate change by having this or that national policy. Only global cooperation can work. That's why I think that nationalists don't have a solution. They just tend to deny the problem. If you recognize the problem, you also have to realize that you cannot think in purely nationalist terms: "My country first" does not work. If we stick to the slogan "My country first," then we have no solution to climate change.

So you are confident, then, that nationalism will recede simply because it is not working?

No. People still vote for many things that do not work. People do not always do the wise thing or the best thing for their own interests. This is also true on a personal level. People often make terrible decisions about their personal lives. And it is also true on the collective level.

What about free, open, and democratic societies? Will they prevail?

I don't know. What I know is that, from a long-term historical perspective, free democratic societies are a very rare phenomenon. During most of world history, people did not live in open democratic societies. This is mainly a phenomenon of the last century or two, in a few corners of the world. You have all this talk about Ancient Greek democracy and European civilization, as if for thousands of years European civilization was built on the foundations of freedom, democracy, and human rights. This is nonsense. In ancient times, there were a few places where, for a few decades or a century or two, as in ancient Athens, there was a sort of democracy for 10% of the population, for the elite. And then for centuries there were regimes, oligarchies, and empires and dictatorships and so on. Then you have a century where you can find a few places with liberal democratic societies. So, there is absolutely no guarantee that this model is going to prevail. I hope it does, because looking at the broad spectrum of history, I think it's the best political model that humans have ever created.

China is the most successful country and system at the moment. With the help of artificial intelligence, and provided that its regulatory system continues to be geared toward strengthening the current regime, which allows a lot more than is possible in America and Europe, China could in all likelihood become the leader in artificial intelligence within a few years. Whoever controls AI will first dominate the world economy and then gain supremacy on a political level. Do you think that China will become the dominant player worldwide? And that the country will implement its undemocratic, totalitarian system in more and more democracies?

Yes, there is quite a high likelihood of that happening. Like many other scholars and also politicians today, I think that those countries who lead the world in AI are likely to lead the world in all economic and political terms. It could be a rerun of the industrial revolution of the 19th century when you had a few countries, first Britain then Germany and France and the US and Japan, who were pioneers in industrialization. These few countries conquered, dominated, and exploited the world. This is very likely to happen again on an even larger scale with AI and biotechnology in the 21st century.

It may not be the same countries again. It may be different countries this time, such as China. The gap between those who control AI and biotechnology and those who are left behind is likely to be far greater than the gap between those who developed steam engines in the 19th century and those who didn't. However, I do not believe that Chinese supremacy is inevitable. The research and development of AI can go in all possible directions, and there is also a huge question mark hanging over the Chinese system. It has never really faced a major crisis since the beginning of the reforms in 1978-79.

A country like the US impresses me as a historian: A system that has worked for 200 years and has managed to survive and adapt to very extreme crises. The current Chinese system that we have today is only 30 years old. And that was 30 very good years in economic and political terms. But what will happen when China faces its first major crisis, either because economic development is slowing down or because there is an ecological crisis that they cannot solve? Then there will be a crisis that destroys the expectations of 1.5 billion people. And nobody, including the Chinese leadership, knows what will happen in such a scenario.

I have a friend who launched a company that mainly deals with in-vitro fertilization. He is trying to decouple sexuality from reproduction. He has a lot of experience in different markets and says the best market for his company is China, because in China you can do whatever you want. Parents can choose whether a baby will be a boy, tall, blond, blue-eyed. And one day you will even be able to manipulate the EQ and the IQ. This, in combination with China having the world's largest population, could increase the likelihood that it will create a kind of superior human class.

I visit Europe, North America, and China quite often. Questions that people in the West respond to with concern and fear, Chinese people often respond to with enthusiasm: "Wow, we can do that." The cultural and historical background to this is the Chinese trauma of being left behind. The trauma is not simply: "After we were technologically left behind, the British and Americans and Japanese and Russians and everybody conquered and exploited us, and China suffered for over a century, a terrible century." No, the Chinese trauma is rather that of being left behind in the industrial revolution because of their own mistakes. They keep telling themselves that the tragedy of that time was ultimately their own fault: they didn't realize what was happening. "We were complacent," they say. Now the Chinese have only one thing in mind: that this must not happen a second time. "We will lead the next big revolution — whatever it takes. We will lead it."

But then there is another underlying current in China that many outsiders also do not know about. They have another national trauma, the trauma of going too fast too soon. This is the trauma of the 1950s, 1960s. This is, of course, hushed up in China. You cannot talk about it, but many Chinese, including their rulers, remember those years. On the other hand, when you speak with Americans, their view of technology, as I have often said, is very naïve. They have never really had any problems with it. They have a wonderful two centuries of technological development behind them. Everything was perfect. But the Chinese have experienced both sides of the coin. They know what it means to be left behind. And they also know what it means to rush forward too quickly.

Looking at the battle of the giants in China — Tencent, Baidu, Huawei, and Alibaba — and the big four in America — Google, Apple, Facebook and Amazon — which side would you put your money on?

The thing that concerns me the most is the possibility of an AI arms race, a global arms race, particularly between the US and China, because I think we've seen the beginning of just such an arms race over the past two years. An AI arms race all but guarantees the worst outcome. Instead of picking a side in the arms race to support, my greatest hope is that we can avoid the scenario of an arms race altogether, because whoever wins the arms race, humanity will lose. In an arms-race situation, no one is willing to forgo dangerous developments, because they fear that if they don't pursue them, the other side certainly will. With nuclear weapons, we had an arms race that was, in one sense, actually less dangerous than the AI arms race. There are two big differences between the nuclear arms race and the AI arms race. One big difference is that a nuclear arms race is very difficult to keep secret. If the North Koreans develop nuclear weapons, everyone knows it. When it comes to AI, even if you sign an agreement to restrict certain kinds of development, you can never really verify what the other side is doing. So you need a much higher degree of trust. The other difference is that, in a nuclear arms race, there is ultimately only one thing you can do with the weapons, and that is the endgame, the big war. AI, by contrast, can be used at any time. When you start to develop dangerous AI, the effects will be felt in society immediately; it is not like a doomsday scenario, where nothing happens for 50 years and then suddenly we have the big AI war and everybody dies. No. When AI starts to develop in a dangerous direction, the effects will be felt immediately.

Immediately?

Yes. I'm very concerned about the direction in which things are going at the moment. And I'm just as concerned about the fact that people on both sides, in China and in the West, are increasingly thinking, "We're in an arms race, so what do we have to do to win the arms race?"

But let's assume that China will be successful with its concept of totalitarian surveillance-state capitalism in the foreseeable future. What would be the right strategy for the West to adopt to deal with that? Should we just let China do as it pleases, as Henry Kissinger proposes? Or should we set boundaries along the lines of the trade policy of Donald Trump?

I don't have a single, magical solution. The more you give the Chinese the feeling you are out to get them, and that they are in an arms race, the worse things will be, especially because they have a national trauma of losing the previous race, the race to industrialize, to the West, and suffered tremendously because of that. They're not willing to risk that again. We have to be very careful not to push them. I certainly don't understand what Trump is doing with these trade disputes and tariffs on the economic side of things. But on a deeper level, it is precisely what is driving the world toward this scenario of a China-versus-the-West arms race, a Cold War, or however you want to describe it. And because we don't just have the AI problem, but also climate change and so on, that's an extremely dangerous development. We also have to bear in mind that Chinese companies, which are subject to the Chinese government and very different value systems, can dominate certain industries or certain markets. That is certainly a concern. But I would be very careful about framing the whole issue as, "Now we have an arms race, and we have to do everything we can to win it."

That sounds a bit resigned. Do you have any advice for Europe concerning democracy, AI, and biotechnology in terms of how to catch up and even compete with the big players from America and China? Or is it a lost cause?

No, I don't think it's a lost cause. It's a question of where you invest your resources. The EU is still an economic giant and a scientific giant, and if it puts its mind to it, if the EU tells itself, "OK, this is what we need to do — we need to be a leader in these fields," it has all the resources. If we were to have this conversation in Latin America or Africa, it would be a different story. They are really going through a repetition of the industrial revolution, of being left behind once again. And many of those countries have probably already missed the boat. They don't have what it takes. But that is not Europe's situation. Europe has a lot of problems in this respect: partly complacency, partly just confusion, not knowing what it wants for itself or what its own identity is. But Europe certainly has the resources to compete with the US and with China on these new fronts of AI and biotechnology.

In your most recent book, "21 Lessons for the 21st Century," you talked a lot about the role of fake news and thus indirectly about the role of the media. What would you advise newspaper publishers to do to be successful in the digital world?

My husband should be here to answer this one instead of me. He's the publishing genius. I only know how to write books. If you want good advice on how to be successful in the book business or in publishing, you have to invite him, not me.

So he was the one who wrote the chapter on fake news?

[laughs] No.

I found the chapter on fake news particularly fascinating. You write about what the phenomenon is doing to society. You believe it could change democracy.

Fake news is old news. Fake news has always been with us, long before Facebook and Twitter.

But digitization has accelerated its distribution and made it more global.

Yes, that's right. Information and propaganda were always there, but what we have now, which is new, is the ability to hack human beings, to get inside their brain, to get to know them better than they know themselves. To get to know their weaknesses and then tailor the fake news to those weaknesses. Fake news used to work like carpet bombing — the same news to everybody. Now it's precision-guided munitions. They can tell, for example, that one person is already biased against immigrants, so they will show that person a fake-news story about a gang of immigrants raping local women. Because the person already has this weakness, it will be very easy for that person to believe the story.


It's a battle for attention. The person's neighbor has a different bias. She has a bias against anybody who opposes immigration. She thinks anybody who opposes immigration must be a neo-Nazi or a complete racist. So they show her a different fake-news story about a gang of neo-Nazis killing immigrants and she will click on it and believe it. If these two people meet in the elevator, they can't have a discussion because their minds are poisoned. We now have a society in which people are spending hours every day poisoning themselves, feeding their hatred, their anger. And it is all because of the battle for attention. What the attention specialists have realized over the last couple of decades is that, if you want to grab somebody's attention, the best buttons to press are hatred and fear and greed. So all you need to do is find out what this person already fears and then you can feed him or her more.

If the best recipe against fake news and its effects is quality journalism and responsible, independent publishing, wouldn't it be better for the consumption of news and information to be paid for with money rather than data?

Certainly, but this is a model that should be adopted by the entire industry, including through regulation. If a single publisher decides to do it, it will lose the battle for attention. The problem with the present model is that it offers exciting news in exchange for attention, and it's a terrible deal because the truth doesn't play any part in it. The consumer gets excitement, not the truth. And the peddlers of fake news sell the users' attention to companies, governments, or political parties for large sums of money. Ideally, we should shift to a different model of high-quality news that doesn't exploit the attention of the user, but costs a lot of money. People pay a lot of money to get good food and high-quality cars. Why not pay good money for high-quality news? It's a very strange thing.

What role do you think a social-media platform like Facebook plays in our society?

It is a new type of media company, and it has a very important role to play. I don't think that social media is going to disappear any time soon. I think it would be good, both for Facebook and for society in general, to recognize the immense power of this platform.

Facebook has more than 2 billion customers today. Should it be allowed to decide who gets what information? What is right, what is wrong, what is good, what is bad? Should Facebook take on this responsibility, or should it be reduced to a neutral technological platform that is only there to connect people?

What the platforms do is connect people, and that was precisely their initial vision. But we have now realized that things are not that simple. The medium is never neutral. That should be obvious to anyone who works in this field professionally or who knows the history of communication. There is no such thing as a completely neutral medium. So, we need to get beyond this naïve stage of believing that it is and that it could ever be just a connecting platform. We need to come to grips with the fact that we ourselves need to take on a certain degree of responsibility for making sure we are properly informed. Facebook itself is also now beginning to address the issue of its greater responsibility.

But would that not put them in the role of a publisher, directly replacing smaller publishers, thus giving themselves a kind of global publishing monopoly?

Not necessarily. If the main task of social media remains just organizing social interactions, then publishing books and news and so forth still fulfills another function altogether.

However, it would certainly be a really dangerous step towards monopoly if, in addition to social media, they also took on the main responsibility for all kinds of publications in other arenas. In my opinion, we already have a monopoly problem in the social media today, and it is different here than in other industries. The whole idea of social media almost necessitates a kind of monopoly, because users want to be in the same place that everybody else is. If Facebook as a social network split up into Facebook 1 and Facebook 2, then I wouldn't want to be in Facebook 1 if my friends were in Facebook 2.

The author of the book "The Four," Scott Galloway, advocates smashing Google, Apple, Facebook, and Amazon, as they are actually too big and will constantly be seeking to increase their power and become even bigger. Do you think that's the right way to go in our thinking? Or should we aim for more transparency and treat them more like utility companies, like the Bell System, for example, which used to have a telephone monopoly? It wasn't broken up, but it had to share its patents, and that led to the biggest wave of innovation the United States has ever seen, in the process indirectly leading to the emergence of today's Silicon Valley.

I think that breaking them up could be a good idea in the sense that, when you have very different service areas, it is not a good idea to have all of them controlled by a single corporation. So, you could break them up along those lines. But when it comes to a particular service like social media, or like search in the case of Google, there is something inherent in the service that makes it very difficult to split it up.

Let's look at search again. If you take the whole power of the Google search engine, it is also an amazing, constantly improving tool for good, because everybody is searching on Google and all the data comes together and is analyzed in one place. This makes it easier to do better searches next time. So, breaking Google up into two search engines would have a destructive effect. You also have to take into consideration that there is competition on the global level and, if you break up Google into 10 companies, none of them will be a very good search engine. What you would have instead — and this is something we already talked about — is one giant company in China with a much better search engine. All you would be doing is simply handing the search-engine industry to the Chinese on a plate, meaning you wouldn't really have accomplished much in terms of fighting monopoly.

In your book you describe how Google has essentially taken the advertising business away from publishers while at the same time taking their content without paying for it, and that Google has thereby destroyed the publishing industry. You go on to predict that, in the next step, Google will destroy the entire advertising industry. Can you elaborate on that?

It's very simple. Let's say that Mercedes Benz wants to sell you a car. So Mercedes pays for advertising. But ultimately Google wants to reach a point where they have so much information about the world and so much information about you that you can ask Google anything. If I want to buy a car, I'll just ask Google, "Hey, Google, what car should I buy?" And Google will take into account everything that it knows, not just about all the different cars in the world — fuel consumption, safety, EU regulations, whether child labor was used in a third-world country — but also everything that it knows about me. My preferences, my political opinions, my views on climate change, my views on Middle Eastern politics. Everything. And then Google will say, "The perfect car for you is X." And if they do a good enough job I will just buy X. Advertising will just be pointless. And because I know that my judgment is constantly being manipulated by advertisers, I won't trust my own judgment. I'll be happy to let Google take the decision out of my hands and there will no longer be any need for advertising.

In the end, Google, aided by AI, will know us better than we know ourselves. Your book presents a drastic example. The gist of it was, "I wasted a lot of time in my life dating girls because I did not know I was gay."

[laughs] I didn't date a lot of girls, so no worries.

"[...] and if I had asked Google they would have told me earlier. I could have saved a lot of time." Can you imagine that one day a search engine with artificial intelligence will indeed intrude into our private lives to this degree?

Technologically, it's actually very simple. All you have to do in this case is track eye movements. You don't even have to be aware of it as the user. You watch a YouTube video, and the computer simply tracks your eye movements. You see an image of a sexy guy and a sexy girl in swimsuits walking on a beach somewhere. Where do your eyes go and where do they linger? Often, it's something that you don't even control. The computer can very easily detect what's happening with your eyes at that moment. If not today, then maybe in two years or five. Again, it's a question of attention or where your attention goes. We are very close to the point, and maybe already at the point, where an entity like Google or the Chinese government or the secret police can know the sexual identity of teenagers long before the teenagers realize it about themselves and the consequences can go in all kinds of directions. If you live in Iran there is one consequence. If you live in the United States, then maybe Coca-Cola knows, which brings us back to advertising. If Coca-Cola wants to sell me Coke, they should use the advertisement with the shirtless guy and not the advertisement with the girl in the bikini. Let's assume that Coca-Cola knows that and Pepsi doesn't, so Coke shows me an ad with a shirtless guy while Pepsi shows me an ad with a girl in a bikini. The next day I go to the supermarket and I buy Coke and not Pepsi. I don't even know why. I don't need to know why. Only they need to know why.

In your book you point to the ethical conflicts that AI can bring about in the context of self-driving automobiles. For example, the decision that an algorithm has to make in the case of a collision: "Do I collide with the wall and endanger my passenger, or do I collide with two people walking on the street?" You propose that Tesla should bring two different cars to the market, the Tesla Egoist and the Tesla Altruist. Which one would you rather buy, the Altruist or the Egoist?

The research on that has already been done. Everyone says that people should buy the Tesla Altruist, but when people are asked about themselves, most people say that they would buy the Egoist.

That's honest, at least.

That's why we need government regulation on it. If you leave it to the free market where the customer is always right, then you will get some very scary, dystopian results.

In view of these ethical problems, and various regulatory questions, when do you think that autonomous mobility, fully automated driving, will be realistic?

It depends on the country. In North Korea, tomorrow. One of the first countries in the world to simply ban human drivers and only allow self-driving cars could be a country like North Korea, where you just need the approval of one person. I presume there will be a lot of corporations, or at least one, that will rush in and look at North Korea as the greatest laboratory in the world. They can start there and do all the experiments. When they're done, 500 North Koreans will be dead, but that's OK — that's nothing in North Korea. Then we can launch the safe model in Berlin.

Elon Musk was asked at a conference when self-driving mobility will be approved. He replied that that wasn't the question. The question is when human driving will be prohibited.

I concur with that.

He also said that 100 years ago no one could have imagined using an elevator without a lift attendant. Do you think that both — human driving and automated driving — will be allowed simultaneously? Or would that create even more conflicts than a system in which there are only self-driving cars?

There are quite serious potential problems. But here again, the starting point for the discussion should be the fact that currently 1.25 million people are killed each year in traffic accidents. That's twice the number of people killed through wars, crime, and terrorism put together. We will never reach perfection in the switch to self-driving cars; we just need to be better than humans. If the switch to self-driving vehicles means that the number of people being killed in traffic accidents goes down to half a million a year, we will have saved 750,000 lives a year.
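A quick check of the arithmetic behind that estimate, using only the two round figures just cited:

$$1.25\ \text{million} - 0.5\ \text{million} = 0.75\ \text{million} = 750{,}000\ \text{lives saved per year}$$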

One of the most heated topics these days in Germany and Europe is the issue of migration. Writing about it, you said it helps to see immigration as a deal with three basic conditions. Firstly, the host country takes in immigrants. Secondly, in return, the immigrants must accept the central norms and values of the host country, even if that means giving up some of their own traditional norms and values. Thirdly, if immigrants become sufficiently integrated, over time they will become members of the host country with equal rights, who fully belong to it; "they" will become "we." Do you believe that these three basic conditions could be implemented in Germany and, if so, would this settle the heated debate and calm down political radicalization? Or do they remain a historian's punchline?

Theoretically you can do anything. All identities that exist in the world today were formed in a historical process in which people from different backgrounds were amalgamated into a new identity. A thousand years ago, even 200 years ago, there were no Germans in the sense that we have today. You had Bavarians and Saxons and Prussians and so forth. And quite often the animosity, hatred, and violence, let's say between Catholics and Protestants — just think about the Thirty Years' War — was far, far worse than almost anything you see today. And the descendants of the people who butchered each other by the millions don't care nowadays whether you're a Catholic or a Protestant, a Bavarian or a Prussian. There might be a few jokes about it today, but it is no longer a major issue in German politics. Certainly not like in 1618. So, you can integrate people.

Many people think about this topic in biological terms, as if different human groups are like different animal species. But that's absolute nonsense. You cannot integrate chimpanzees and gorillas to form a single species. Biology just doesn't allow it. Gorillas and chimpanzees cannot have children together. But with humans, it's possible. On the other side of the coin, I think it should be clear again from a historical perspective that, if the majority of the local population is against immigration, it is usually a mistake on the part of the government to try to force large-scale immigration, because it just won't work. The government needs the cooperation of the population to make immigration work. If people refuse, that's called democracy. Even if you think they are making an ethical mistake.

Maybe the worst thing about immigration, in the debate now in Germany and in Europe as a whole, is that people are turning it into a kind of struggle between good and evil. As if anyone who opposes immigration is a racist Nazi, and anybody who is in favor of immigration is a lunatic or traitor. And I think that, in the end, it's a debate between two legitimate views. Decisions about it should be taken through the normal democratic process. This is what you have democracy for.

But in Germany there are certain additional sensitivities driven by the historic guilt and trauma that the Third Reich and the Holocaust created. So in a way it is understandable that Germany treats such matters in a more emotional way.

Yes, it's understandable, but it's not necessarily constructive.

Israel integrated a couple of million Russians very successfully, but the big difference was that they all shared a desire to become members of Israeli society and also shared its values. And a large number of them even shared the most widespread religion in Israel. So to what degree do culture, religion, and tradition set limits for integration?

They set a great many limits for integration. Israel today is a country of immigrants. Nearly every citizen, or at least their parents, immigrated to Israel from someplace else. If you are looking for an example in history of a country made up of immigrants, then that country is Israel. Take all these mad scenarios that are discussed in Germany, where people imagine Muslim immigrants taking over the country, or taking over Europe, turning it into a caliphate. Well, Israel is an example of precisely that. The entire country was built on exactly that: a hostile takeover. And the Russian Jews were generally accepted with open arms, because Israel is in a struggle for survival. The immigrants were seen as additional soldiers. Not necessarily literal soldiers, although they were also literally soldiers in the army, but soldiers in a broader sense, in the struggle for Israel's survival. Now we have immigration of non-Jews, for example from Africa, and the reaction of the Israeli population is completely different.

Let's now look at the third point: If immigrants integrate appropriately, do they become, over time, equal and full members of the host country, do "they" become "we"? And how do we even measure whether integration has been successful? What are the criteria? And what happens if these criteria are not fulfilled?

One of the key points is the difference in timescales. Let's say I'm a second-generation immigrant and I was born here. My parents came from Syria or Turkey or somewhere else, and I'm not fully integrated into German society. When people criticize this, then this criticism takes place on a timescale that relates to my own personal life: "You're not German — go back to Turkey!" But I was born here, maybe I don't even speak Turkish. I've never visited Turkey, all my friends are here, and I've lived here all my life. So when people tell me to "Go back to Turkey!" that doesn't make any sense, because I was never there in the first place. What do you want from me? I'm German.

But, to go back to timescales: from a broader perspective, 30 years is a very short time for one culture to fully absorb people from another country, from another culture, with a different religion, and to fully integrate them. The difficult question is therefore, what timescale are we thinking about? And part of this issue is the question of accounting. How do we count violations? Let's say you have a million immigrants, and out of these million immigrants, 100 join terrorist organizations and commit attacks against the host country. Is this success or failure? The other 999,900 people were perfectly peaceful. And we also have to ask this question the other way around. If an immigrant walks down the street a thousand times and is not harassed, and then, one time, somebody shouts a racist insult, is the conclusion that the Germans accept him or reject him? How can you gauge that correctly? In most of the debates about immigration, what we see is both sides using different units of measure for their accounting. And if people cannot agree on the accounting method, as it were, then they might be looking at exactly the same thing but arriving at completely different conclusions.

We spoke before about Israel being an absolute land of immigration. Are you actually a Zionist?

Am I a Zionist? Yes, I think I am. I'm a practicing Zionist. I'm a Jew who lives in Israel, and that for me is the essence of Zionism.

What motivated you to write so much in your book about what you consider the overestimated role played by Israel, and the overrated contribution of Jewish people to world history, ranging from art to science?

Well, I think that people all over the world have an inflated, often vastly inflated, idea of the role their culture, their people, and their country have played in world history. And this is extremely dangerous, given what we talked about earlier. I took Judaism as an example simply because I'm most familiar with it, since I live in Israel. And also because I think that this kind of self-criticism is like puncturing a hot-air balloon with a needle, a balloon that has the following written on it: "We are so important. We are the most important." That's a balloon you should burst right away. I can't do it for the Chinese. I can't do it for the Indians or for the Iranians. If I come to the Iranians and say, "Look, Iran isn't as important as you say," they will hardly appreciate it coming from me. So when I wrote this chapter saying, you know, Judaism isn't as important as you think, my hope was that people in Iran or in China or in Germany would read it not just as criticism of Judaism, but as a kind of guide.

Modesty, self-criticism ...

Yes, in the sense of, "Let's reflect on our own role."

That would be a wonderful achievement. I don't know if everybody would really interpret it that way. Your enemies might say: "Oh, look at that. Jews even admit themselves that they ... "

No. I would really like to emphasize that I have never said that Judaism is bad in any way. All I'm saying is that it hasn't been as important as Jews tend to think it has been.

Religion and God play a big role in your book. How would you describe yourself? As an atheist?

Yes, absolutely. At least, what I want to say is that I make a distinction between two kinds of gods. You have the God who is just a name for the cosmic mystery. There are many things we don't know about the universe, about its origins, the laws of nature, what consciousness is, all these kinds of things. If you want to call this kind of mystery God, I have no objection to that kind of God. The God I don't like is the God who has all these very concrete ideas about human sexuality and human dress codes, and food taboos, and politics, and so forth. And I don't believe that the cosmic mystery cares about human sexuality or about human politics.

Otherwise it's not the God we want to pray to.

No!

On the other hand, 50% of scientists worldwide say that science has basically shown them that God exists. And the other 50% say that science has proved that God doesn't exist.

Again, this is a problem of semantics. Many scientists — not all — who speak about God have something very different in mind than the Islamic State does or than orthodox Jews in Israel do. If I remember correctly, Einstein's reaction to quantum physics was, "God doesn't play dice." I don't think that Einstein had in mind the God who hates homosexuals, or the God who says that women shouldn't vote, or things like that. Which is why I don't like to use that word. People take it out of context and use it to support a very different kind of God.

Are we, the Western democracies, too tolerant with countries that do not respect the rights of homosexuals, that even kill them because they are homosexuals, or that do not respect the rights of women, discriminating against them just because they are women? There are so many examples, and at the same time so many compromises that politicians and business people are willing to make with these systems. How do you see that?

You know that we do not control the world.

No, but we can decide how we act.

You can decide —

Who you make trade deals with, who you make political deals with, for example. Germany made a deal with Turkey concerning the refugee crisis, only to soon find out that it was being used as a kind of blackmail to reach very different goals.

You have to decide on a case-by-case basis. If you say you are just going to cease all trade relations with any country that doesn't give women equal rights, or that has laws that discriminate against gay people, then it's not going to be workable.

But when the issue is child labor, for example, everyone agrees that it's not acceptable.

First of all, you have to distinguish between what you do in your own country, or in your own system of countries, like the EU, and how you deal with countries that are on the other side of the planet. On the one hand, you have to recognize the ethical responsibility you have towards people who live in other countries. On the other hand, you also need to recognize the limitations to what you can, or even what you should do.

We have a lot of examples of people coming from the other side of the planet and trying to change countries they are not familiar with. It often ends terribly. Take the American invasions of Iraq and Afghanistan, for example. When they happened, they were presented, in part, as an attempt to improve the position of women in Iraq and Afghanistan. I think, 10 years later, it is clear that the results don't bear this out. As far as I know, the position of women in Iraq today is in many ways much, much worse than under Saddam Hussein. If the main aim of the American invasion of Iraq was to improve the situation of women in Iraq, it didn't work.

So, to reiterate, I think we have to decide on a case-by-case basis. Sometimes the most intelligent thing to do is to get people to change their views by exposing them to different ideas and different cultures, not by using violence. There is a saying that nobody likes armed missionaries, even if the missionaries are armed with trade agreements and things like that.

Let's move from world politics back to your childhood. What was your dream job?

My dream job? I guess I'm working in my dream job. I always liked history.

When did you start making that dream come true?

When I was a child, the general message I got was, being a historian is not a job.

Was it not serious enough?

Yes.

What did your mother want you to become, a lawyer or a doctor?

She didn't express any particular preference.

If time machines existed, which time would you like to travel to?

Just to visit, or to actually live there?

Well, first of all just a visit.

Well, as you know, I'm a global historian. I like almost any time period, so any period would be interesting for me. But I guess the most interesting thing for me would be to see life before the agricultural revolution. We have so many theories about how people lived as hunter-gatherers, but we actually have very little evidence. I would really like the opportunity to go as an observer and just see what life was really like in the Stone Age.

Considering the future — let's say the next 100 years — what is, in your opinion, the biggest threat facing us, and what are the greatest opportunities for society?

The biggest threats are the three big problems I mentioned at the beginning: nuclear war, climate change, and technological disruption. And of the three, technological disruption is the most complicated to deal with, because we don't know what to do about it. With nuclear war and climate change, it's easy — at least we know what we have to do: stop them, prevent them. But the rise of AI and biotechnology brings immense beneficial opportunities, so we are not going to stop research into AI and biotech. The best-case scenario is that we will have much better healthcare and much more leisure time, and we will be able to explore and develop ourselves as human beings in ways that have never been possible before in history. The flip side, the worst-case scenario, is that we might have digital dictatorships, with all power concentrated in the hands of a tiny elite that monitors everybody all the time.

Humanity might actually be divided into a caste of superhumans and an underclass of useless people: people who have no economic value and no political power. I think that one of the big lessons we learned in the 20th century is that technology is not deterministic. The same electricity served the Third Reich, communist East Germany, and liberal, democratic, united Germany. The electricity doesn't care what you do with it. The same applies to AI and biotech. We can use them to build paradise or hell. It's up to us.

Do you believe in Ray Kurzweil's theory of singularity?

In a variation of it, yes. But I don't believe in his time frame. I don't think it's coming so soon.

The year 2045 is what he says.

Something like that, yes. I think that's a bit too early. But yes, I do think we are approaching it. My definition of singularity is the moment when our imagination fails, the moment beyond which our imagination can no longer take us. That is a singularity in the sense that you can't look beyond it. What happened before the Big Bang? The world before the Big Bang has no meaning, because there was no time. And then there is a singularity in the future, which we just can't have any meaningful idea about, because we don't know what will happen after that moment, because our own imagination, our own mind, will itself be able to develop differently and change.

If I say to you, "Let's imagine we have a spaceship that travels at the speed of light," you can imagine flying into the future and building colonies on Mars, for example. In a scenario like that, you can imagine what might happen afterwards. However, if I asked you what would happen once we had the technology to start reengineering our brains and minds, including our imaginations, and if you really understood what that means, then you would realize that it's not actually possible to imagine what would happen after that. This is because, after that point, your imagination itself can also be changed. So, no matter what you imagine, it will never include the entire spectrum of possibilities. What happens after that point in time is something you are unable to imagine. And if you can imagine what happens after it, all that means is that what you are imagining isn't real.

Do you think that, sooner or later, there will be a certain form of eternal life for human beings?

I wouldn't say "eternal" but indefinite. "Eternal" means that, no matter what happens, you'll never die, and that's a very radical demand.

But longevity is going to develop almost exponentially, and then ...

Yes, once you reach 150 there is no limit, because to reach 150 you really need to carry out some kind of reengineering. I mean, the body we have at present, even if everything goes perfectly, can survive to 100, 120 years of age. But, beyond that, you really need to find ways to reengineer the human body.

It's about cell reproduction, isn't it? We could genetically manipulate it.

Yes, either to rejuvenate part of the body or to replace it with bionic parts. Whatever it is you try to do to reach 150, as soon as you have managed that, the same technology could extend human life up to 200, 400, 1,000, or as many years as you want.

Is that something you would aim for? Would you like to become that old?

I think it would be extremely frightening, because then people would not be willing to take any risks in life. The levels of anxiety would be immense, and so would the levels of anger and hate among those who don't have the money or the ability to access this type of technology.


Could it also lead to a lack of ambition, a situation where a person might say, "I have 300 years; why today, why tomorrow?"

Again, it could lead in all kinds of directions that we can't really imagine. But the real question, and I would say that 99% of all people would answer it with a yes, is whether we just want to live longer.

Most people I ask say no.

Because they don't understand the question.


I want to become as old as possible, I can assure you.

Yes, but you should ask people the following question: Would you like another 10 years in good health?

And they would naturally say yes ...

Everybody would say yes, and at the end of the 10 years we would ask them again: Would you like another 10 years in good health? And, once again, everyone would say yes, and that is precisely what it's about. Let's say you tell people, OK, I have this pill, and if you take this pill, you'll live to be a million years old. Nobody really knows what that means. But if you simply asked them again and again whether they would like another 10 years in good health, then they would always say yes.


But would that also be your personal answer: to live as long as possible, in good health and mentally fit?

Yes.

Jeff Bezos is seriously worried about the situation on Earth 100 years from now, in terms of energy resources, consumption, and other aspects. He believes that planet Earth will no longer be enough for us. Do you share that concern and the conclusions Bezos draws from it? Or are you rather with his fiercest competitor, Jack Ma, who recently said, when I asked him that question, "Let Jeff take care of orbit; I'll take care of Earth"?

As far as that's concerned, I'm more with Jack Ma. I think the great danger with these fantasies about colonizing other planets is that they are a kind of running away, a form of escapism, a way to avoid having to take care of the problems on Earth. "Oh, we don't need to worry so much about climate change, or this and that; we have other options." And that's a very dangerous turn of mind, especially when one thinks about the explosive potential of class differences between the people who are stuck on Earth and the people who can dream about colonizing other planets.


Apart from that, we also need to differentiate between resources and colonization. Exploiting other planets, asteroids, and so on for minerals, building factories on Mars, and then shipping the final products to Earth makes a lot of sense, and I think we will see such things actually take place. They make a lot of economic and even ecological sense.

In terms of colonization, the fact is that humans have adapted to life on Earth over billions of years of evolution. It will be tremendously difficult to maintain human life, or any organic life, outside planet Earth, unless we turn ourselves into cyborgs and renounce the organic body. For our organic bodies, it is nothing more than a pipe dream. It's going to be extremely difficult. We are used to this gravity, to this distance from the sun, to this cosmic radiation, to so many things. It is simply far too complex. I think there will be space colonization, but not by Homo sapiens.

So we should concentrate fully on staying here under good conditions for as long as possible?

Yes, I think that's what we should focus on — saving this planet before we get carried away with colonizing other planets.


What topic occupies you most at the moment?

The rise of AI and its connection with biotechnology. I don't keep some hidden ace up my sleeve: "Oh, this is really important stuff, but I won't say anything about it!" What I really worry about is that, even if everything works out fine, even if there is no climate catastrophe and no dystopian digital dictatorship, the future is going to be really banal. I worry that all the immense power and the amazing technology we have will mainly be used for really banal purposes. Which is better than using it for evil purposes, I guess.

And I think about our life today, and imagine, what if I had this time machine and could go back to the Stone Age to see how people lived back then, and I could bring somebody back here and show him how we live today. And I would tell him about the immense power we have, and all the things we can do with it: We can fly to the moon, talk to someone on the other side of the planet, and all these things. And yet what do we actually do with it all? And that's such a huge disappointment.

That reminds me of something that Henry Kissinger said a few years ago in Berlin. He said that modern technology enables people to connect with one another at any time, to communicate, to gather, but do we really know why we gather and what we should connect for?


Yes, exactly! When you look at all these conversations, they are usually so empty and meaningless. The thing that really excites me more than anything else is research into human consciousness and the human mind. And that's something we have made very little progress in over thousands and thousands of years. And I guess this is the reason why we don't really do much with all these immense powers that we have.

On the other hand, would you not agree that, over the last few thousand years, the world has become a better place?

Not necessarily, and certainly not for nonhuman entities. You might maintain that things are getting better, but only for humans. And that's only been the case in the last 150 years or so, since the mid-19th century. In the early 19th century, technology was already developing enormously, and yet you still don't see much improvement in the lives of ordinary individuals at that point, so this improvement is not a very robust thing. Yes, there have been improvements, but they are much, much smaller than you would have expected. I mean, the increase in human power has been immense; the increase in human quality of life much less so. There has been a certain improvement, but there is a huge mismatch between the increase in our power and the much smaller increase in our quality of life and our satisfaction.

Of course, I don't think we should ignore the good things. We should be very grateful for, and mindful of, what we have achieved. But, on the other hand, it's a bit of a disappointment. I often feel it's like having a super-powerful car: you press the gas pedal right down to the floor, but the car hardly moves. It moves a little, but as if you were in first gear. There's something stuck between all that power and what we actually do with it, something that still doesn't connect.


So we could have done things better?

Yes.

Read the original article on Axel Springer. Copyright 2018.