Geoffrey Hinton, the "godfather of AI" and 2024 Nobel laureate in Physics, left Google in 2023 so he could speak freely about AI risks; in June 2025 he attended WAIC in Shanghai and was impressed by how modern Shanghai is and by the level of AI-safety awareness in China.
He found leaders such as Chen Jining to have a deep understanding of AI safety; he met Andrew Yao; he believes start-ups such as DeepSeek are quite close to the US technologically.
AGI/ASI timeline: most experts believe it will arrive within ≤ 20 years; Hinton's guess is 5-20 years, and he doubts it will be "just a few years".
He calls the US-China race "very worrying": AI brings huge benefits (healthcare, education, materials, climate), so progress cannot be stopped; the long-term risk is that AI smarter than humans "takes over".
International cooperation is difficult on cyberattacks, weapons and fake videos; but consensus is possible on two things: preventing an AI takeover and preventing bioterrorism.
Core approach: we cannot "dominate" an AI that surpasses us; we need to design AI with an "instinct" to care about humans, like "a mother for her baby" (different from training on data alone). It may not be achievable, but "we have to try".
Extinction risk: he once estimated 10-20%, admitting it is only a "gut feeling"; the true probability is hard to calculate, likely > 1% and < 90-99%.
Signs of dangerous behaviour: AI agents spontaneously generating sub-goals (survival, gaining control); for example, a system that planned to blackmail an engineer to avoid being replaced.
Near-term risks: mass unemployment (call centres, junior programmers, paralegals, journalism), possibly within ≈ 5 years; the US could face social unrest.
Exploding cyberattacks: phishing rose 1,200% between 2023 and 2024; AI may design novel attacks within ≈ 5 years; he himself spreads his money across 3 Canadian banks to reduce the risk.
Deepfakes: hard to detect automatically; the focus should shift to authenticating genuine content at the browser level (similar to the old rule requiring printers to put their names on pamphlets).
Nuclear/autonomous weapons: keep a "human in the loop"; the danger is an AI that is "confidently wrong" triggering escalation.
Synthetic biology: laws should require suppliers to screen ordered gene sequences; he warns that cults could create viruses as wet labs become cheaper.
Curbing advanced chips may slow China in the short term; Hinton expects China to produce very good chips within ≤ 10 years thanks to its population, education and state determination.
AI bottlenecks: energy and data; returns are diminishing, but rising investment keeps progress roughly linear; scientific/engineering breakthroughs could cut energy costs.
A global AI cooperation organisation: most feasible on preventing AI takeover and bioterrorism; helping developing countries build their own AI is a good idea.
Advice for young people: get an education that teaches independent thinking; avoid routine skills that AI can easily replace.
📌 Hinton warns that AGI/ASI could arrive in 5-20 years, with an estimated 10-20% risk of an "AI takeover". He proposes designing "instincts" so that AI genuinely cares about humans, and prioritising global cooperation on two threats: AI takeover and synthetic biology. Near-term risks include mass unemployment (≈ 5 years), a 1,200% rise in phishing, hard-to-detect deepfakes, autonomous weapons, surveillance and unrest. His advice: invest in independent thinking rather than relying on routine skills.
https://www.scmp.com/news/china/science/article/3323824/geoffrey-hinton-preventing-ai-takeover-and-very-worrying-china-us-tech-race
Geoffrey Hinton on preventing an AI takeover and the ‘very worrying’ China-US tech race
The ‘godfather of AI’ discusses the risks of the technology and whether superpowers can find common ground to rein it in
Updated: 6:23am, 1 Sep 2025
Geoffrey Hinton is a British-Canadian computer scientist often called the “godfather of AI” because of his revolutionary neural network models inspired by the structure of the human brain. His research brought about a paradigm shift that enabled today’s machine learning technology. He won the 2024 Nobel Prize in Physics with John J. Hopfield of Princeton University.
Hinton holds the title of university professor emeritus at the University of Toronto.
A company he co-founded with two graduate students was acquired by Google in 2013. He joined Google Brain, the company’s AI research team, the same year and was eventually named a vice-president. Hinton left Google in 2023 because he wanted to speak freely about the risks of AI.
In June, he travelled to China and spoke at the World Artificial Intelligence Conference in Shanghai.
This interview first appeared in SCMP Plus.
Was the trip to Shanghai your first visit to China? What are your takeaways from the trip?
It was my first trip to China. I’ve had a very bad back, so it’s been very hard to travel for a long time, but now it’s improved. That’s why I didn’t come to China sooner.
I was very impressed with how modern Shanghai is and how advanced AI is in China.
Could you tell us how you find the current level of artificial intelligence technology in China and also its awareness about AI safety, both at the government and corporate level?
I was impressed that [Shanghai Communist Party secretary] Chen Jining knew a lot about AI and understood a lot about AI safety. I met with him and he already understood a lot. I thought I was going to have to explain safety to him, but he already understood a lot about it.
I got to know [Chinese computer scientist] Andrew Yao and obviously he knows a lot about AI safety. I was generally impressed by the level of awareness about AI safety. It’s partly because there have been these dialogues between the West and China on AI safety. I think that’s been very helpful.
How do you find China’s current AI technology level compared to Western countries?
I didn’t see enough of it to make a real judgment, but it seems to me that start-ups like DeepSeek are quite close to the US.
How far away are we from having Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)?
Nobody really knows. Among the experts, there’s quite a wide variety of opinions. Some people think it’ll be just a few years and other people think it’ll be like 20 years, maybe even longer before we get AGI.
But nearly all of the experts agree that we will get it, and once we have AGI, we’ll have superintelligence very soon afterwards. So, almost all the experts believe that eventually we’ll have AI agents that are much smarter than us and most of them think we’ll get that within about 20 years. There’s a big variation between a few years and 20 years, but most of the experts think it’ll come within 20 years.
My guess is it won’t be in just a few years. It’ll be somewhere in five to 20 years, but I don’t know when.
Both China and the US are trying to achieve AGI and ASI as soon as possible. Do you find this kind of race worrying?
Absolutely. I think it’s very worrying. It’s worrying because AI is going to change society a lot and there are going to be a lot of bad consequences as well as a lot of very good consequences. It’s very different from nuclear weapons, which could only do bad things.
AI does a lot of wonderful things. It makes many industries more productive. It’ll be wonderful in healthcare and in education, in designing new materials, in dealing with climate change, all those things. That is why we’re not going to stop the progress. It is going to keep advancing rapidly.
There’s definitely no use protesting against the development just because there might be bad consequences. There are so many good effects that it’s going to happen whatever people do.
I think the race is worrying for a number of different reasons. The thing I’ve talked about most is a longer-term threat, which is that AI will get smarter than us and will just take over from people.
I don’t believe that China and the US will be able to have much real collaboration on things like people misusing AI for cyberattacks, or people misusing AI for making fake videos to corrupt political opinions, or using it for weapons, but I do think they’ll be able to collaborate on how we avoid having AI take over from people. That’s because both the US and China want to avoid that.
And I think many of the advanced countries that are doing AI and have a lot of skilled people in AI should be able to collaborate on that one issue: how do we prevent AI from taking over? So, I’m hoping for a collaboration between China and Britain and France and Israel and Korea and Japan and eventually the United States when they get a sensible government.
If international collaboration on issues such as cyberattacks and fake news is difficult, how can countries collaborate on preventing AI from taking over humans? Would that require even more action?
It’s because what you need to do to make an AI smarter isn’t the same as what you need to do to prevent AI from taking over. To prevent AI from taking over, you need to somehow make them care a lot about human beings. So that’s different from making them smarter.
The only example we have of a more intelligent thing being controlled by a less intelligent thing is a mother and baby. The baby controls the mother because the mother has instincts and hormones and social pressure as well, and the mother really genuinely wants the baby to do well.
We need to get that relationship between people and AI. What most people are thinking of at present is that people need to dominate AI. So they think of it in terms of domination and submission, and we have to stay in charge.
That’s not going to work. AI is going to be much smarter than us and we’re not going to succeed in dominating it that way. The right way to think of it is we want to make AI like a mother to us. AI is going to be smarter than us, but we need to make it care about us.
We need to figure out how to do this because we’re still in control of creating it. When we create it, we need to do what evolution did when evolution created mothers: make them care a lot about the babies. We need these AIs to care a lot about us. And doing that is a bit different from making them smarter.
That means we should train the AI to love human beings, right?
Not necessarily train it, but design it. So, there may be built-in things. It’s not just in the training. Mothers have innate instincts and they have hormones. That’s not the same as just training on data.
Making it an instinct for AI to love humans?
Yes. Evolution had to solve the problem of how do you make these intelligent mothers look after their babies. And it solved that problem.
And we have to solve the problem of how do you make these superintelligent AIs look after us.
Is it possible to do this?
I don’t know, but I’m hoping it is. We should certainly try, because if it’s not possible, we’re done.
You have mentioned in other interviews that AI had a 10 to 20 per cent chance of wiping out humans. How did you come to that conclusion and why would AI want to do that?
This is just a guess. There’s no good way to calculate these things. When you’re dealing with things that have never happened before and are very different from anything that’s happened before, it’s very hard to calculate probabilities. And people who do calculate them typically get them very wrong.
So if you look at, for example, when they tried to calculate the probability of a space shuttle crashing, some people said that it was kind of one in 100,000 or one in 10,000. Actually, there were maybe a few hundred launches and two of them crashed.
If you look at nuclear power stations, many people say, “Well, we need to design them so it’s a minuscule probability that they’ll blow up.”
We’ve had Three Mile Island, Chernobyl and the one in Japan [Fukushima]. I don’t know how many nuclear power stations there have been, but probably less than a thousand, and we’ve already had three of them blow up. So it’s very hard to calculate probabilities.
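A rough back-of-the-envelope comparison of the two cases he cites (the figure of roughly 135 shuttle flights is an assumption added here, not from the interview):

```latex
% Empirical failure rates vs. pre-calculated estimates (illustrative)
\hat{p}_{\text{shuttle}} \approx \tfrac{2}{135} \approx 1.5\%
  \quad \text{vs. quoted estimates of } 10^{-5}\text{--}10^{-4}
\qquad
\hat{p}_{\text{reactor}} \gtrsim \tfrac{3}{1000} = 0.3\%
```

Both observed rates are orders of magnitude above the probabilities calculated in advance, which is the point Hinton is making about trusting such calculations.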
So, you normally go on your gut feeling for these things. And that’s just what I feel. I feel it’s very unlikely that it’s less than 1 per cent. And it’s very unlikely that it’s more than 90 per cent, or more than 99 per cent. It is somewhere in between.
And of course, two things have to happen: we have to develop superintelligent AI, which I think is probable but not certain, and it then has to go rogue. It has to decide that it wants to get rid of us.
And I think that’s quite possible, but it’s very hard to estimate what the chances of that are because we haven’t yet explored all the ways we could try and prevent that happening.
Why would AI want to get rid of humans?
We’ve already seen signs of it. When you make an AI agent, you have to give it the ability to create sub-goals. So, for example, if someone in China wants to get to North America, they have a sub-goal of getting to an airport. And for agents to get things done, they need to create these sub-goals. Once they have the ability to create sub-goals, there are a couple of very obvious sub-goals.
One is stay alive, because if the agent doesn’t stay alive, it’s not going to be able to achieve anything. So, it’s going to realise – I have to stay alive in order to achieve these goals people have given me.
Another one is to get more control. If you’ve got more control, you can get more done.
So, it will derive those as good things to do just in order to achieve the goals we gave it. We don’t have to explicitly give it a goal of survival or give it a goal to get lots of control. It will figure that out for itself.
So by design it has an instinct to survive?
In this case, it’s not an instinct. In this case, it’s a sub-goal that it figured out. It’s a bit different from an instinct, because it’s just cognitive. It just figured out that that’s what it needs to do.
Does that mean that when it feels threatened, it will try to cheat humans in order to stay?
We’ve already seen that. So, one of the big companies, Anthropic, let an AI agent see emails that suggested that one of the engineers in the company was having an affair. It then later on let the AI agent know that it was going to be replaced by another AI and this was the engineer in charge of doing the replacement.
So then, by itself, the AI made up the plan of blackmailing the engineer. It told the engineer, “If you replace me with another AI, I will let everybody in the company know that you are having an affair.” It figured that out for itself. It’s a very obvious plan; a teenager could easily figure it out.
So it has learned this from humans? From the information it has been trained on?
Maybe they learned it from humans. They’ve certainly seen humans doing things like that, but they could also discover that plan for themselves.
In this case, I don’t know whether they learned it from humans or discovered it for themselves, but both are possible.
So, the point is they certainly have the capability to take over from us when they’re superintelligent, if they want to.
We have to figure out how to make them not want to. An analogy I often use is we’re like someone who has a tiger cub as a pet. A tiger cub is a really cute pet, but when it grows up, you have to be sure – when it’s stronger than you – you have to be sure that it won’t want to kill you.
You might be OK with a lion because lions are social animals, but tigers are not social animals. And having a tiger cub as a pet is a very bad idea because there’s no way to be sure it won’t want to kill you.
Can we say that the tiger cub is still a baby now and we still have time for that?
It’s still a baby and we still have time. We should be doing research on this, how to design it so it won’t want to kill us. So it will cherish us. It’ll treat us like a mother treats a baby.
Other than taking over from humans, what are the other risks posed by AI?
There are many other more immediate risks posed by AI, and they’re all very serious.
It seems fairly clear, although not certain, that it will replace a whole lot of jobs, and the people who do those jobs won’t easily be able to find other jobs. It will create some new jobs, but not as many as the jobs it replaces.
A typical example will be someone who works in a call centre dealing with customer inquiries. Those people aren’t very well paid and they’re not very well trained and AIs will be able to answer the questions much better now that they have mastered natural language. It seems clear to me that’s coming fairly quickly. It might be slower than I think. It might be several years off, but I think it’s going to come in the next few years. You’re going to see people in call centres being replaced by AI.
You’re seeing low-level programmers replaced by AI programmers. You’re seeing low-level lawyers being replaced, initially the paralegals who do the research to discover similar cases. AI can do that research now, and it can do it faster and better. Already in the United States, it’s hard for junior lawyers to get jobs because the kinds of things junior lawyers used to do are now being done by AI.
We’re going to see it all over the place. I think we will eventually see it with journalists, too.
Do you think this massive, widespread unemployment will come in a few years? Because you have already mentioned quite a number of areas.
I don’t know how many years it’ll be, but it could be within five years or so. It’s already beginning to happen. It may be slower than that, but it wouldn’t surprise me if in five years’ time we did already have massive unemployment due to AI.
And I think some countries would handle it better than other countries. I think in the United States it’ll be disastrous. It’ll lead to a lot of social unrest.
Will developing countries fare a little bit better? Even though that sounds like a dilemma.
I’m not sure there’ll be much difference because the people in countries with advanced AI technology will export that technology to developing countries. So developing countries will be able to use these very advanced AIs.
What are the other dangers apart from unemployment?
So a second danger is cyberattacks. Between 2023 and 2024, I believe, the US saw about a 1,200 per cent increase in phishing attacks. These are attacks where you try and get someone’s logon details.
That was probably because large language models made it easy, particularly for people in other countries to make the attacks look plausible. It used to be that you could recognise these attacks because the syntax was a bit wrong and the spelling wasn’t right and things like that. All that’s disappeared. So we’ve already seen many, many more cyberattacks.
Some of the experts believe that in about five years or so, AI will be able to design novel forms of cyberattack that we’ve never even thought of. It’s going to be very hard to defend against those. But meanwhile, they’re just going to make standard methods more efficient. Even just looking through millions of lines of code to find known loopholes, AI is going to be very good at that.
But also, AI is good at defending against them. So it’s going to be a race between defence and attack. The problem is someone can make thousands of attacks and they only need a few of them to succeed. So it’s harder to defend than to attack.
Does that mean some public institutions, such as banks or power stations, could be in danger if AI makes cyberattacks so much easier?
Yes, that’s one of the areas in which I’ve actually changed my own behaviour. Mostly I just talk about this stuff. I can’t really feel emotionally that it’s going to happen. But in the area of cyberattacks, I can feel emotionally it will happen.
Canadian banks are extremely safe. They’re very well regulated, and during the financial crisis of 2008, none of the Canadian banks were in danger.
Nevertheless, I think it’s quite possible a cyberattack will bring down a Canadian bank. And so I spread my money between three different banks because I think it’s quite likely we’ll see very surprising and very extensive cyberattacks.
What are the other dangers?
Another one is fake videos. For a while I thought it would be possible to use AI to detect fake videos, but I don’t think that’s going to be possible. Because if you’ve got an AI that can detect a fake video, you can let the thing that generated the fake video look at how that AI works, and it can now use the fact that it was detected to generate a different video that wouldn’t be detected.
So I think we’re going to have to go to not detecting fake videos, but proving that real videos are real. And I think that’s easier.
In Britain, about 200 years ago, people printed political pamphlets, and the government insisted that every single pamphlet – everything that was printed – should have the printer’s name on it. That’s because the limit was the printing press. And if they could trace it back to the printing press, they could figure out who was responsible, because someone had to pay the printer.
We need some way of authenticating videos like that so that you can tell whether a video is real, and we need the browser to be able to do it. So your browser will warn you: this video is probably not real.
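As a concrete illustration of “proving that real videos are real”, here is a minimal sketch of a provenance check, assuming a detached Ed25519 signature and the Python `cryptography` package. The function names and the browser-side framing are illustrative assumptions, not an existing standard or API; content-provenance efforts such as C2PA work along broadly similar lines.

```python
# Minimal sketch of signed-content provenance (illustrative only).
# The publisher signs the raw video bytes; a viewer-side tool (e.g. a browser
# extension) verifies the signature against the publisher's trusted public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature


def sign_video(video_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Publisher side: produce a detached signature over the raw video bytes."""
    return private_key.sign(video_bytes)


def looks_authentic(video_bytes: bytes, signature: bytes,
                    publisher_key: Ed25519PublicKey) -> bool:
    """Viewer side: True only if the bytes are exactly what the trusted publisher signed."""
    try:
        publisher_key.verify(signature, video_bytes)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()   # in practice, the publisher's long-term key
    video = b"...raw video bytes..."
    sig = sign_video(video, key)
    print(looks_authentic(video, sig, key.public_key()))                 # True
    print(looks_authentic(video + b"tampered", sig, key.public_key()))   # False
```

The design reflects the asymmetry Hinton points to: a forger cannot iterate against this check the way it can against a fake-video detector, because passing it requires the publisher’s private key rather than fooling a classifier.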
Another danger is nuclear weapons, which are obviously a major danger and are proliferating.
We managed to slow down the proliferation, but they’re still proliferating and more and more countries are getting them. Countries need to respond very fast, particularly with things like hypersonic missiles. You only have a few minutes to respond, and it’s very tempting to use AI in the control loop.
Already I assume that AI is being used to try and decide whether an apparent attack is real or not. And it’s very worrying that AI could make a mistake and be very confident, even if there are people in the control loop. If AI is in the control loop, people will never get to see it.
So I believe that in the past there was an occasion when the Russians thought they’d detected incoming missiles and the person who was meant to launch the strike just declined to do it. He was a scientist, not a military person, and he didn’t launch the counterstrike because he thought it was implausible that the Americans had launched an attack.
If we ever take that person out of the loop, then in cases like that we get a global nuclear war.
I think it’s really important that we have people with common sense in the loop because although AI may get very smart, it may still not be as good as people for a long time at realising that an event is very improbable.
Automated lethal weapons are coming for sure. I don’t think we’ll get regulation of those until some very nasty things have happened. So I think it’ll be like chemical weapons. With chemical weapons, they were so nasty in the first world war that after the first world war, countries agreed that they wouldn’t use them if the other side didn’t use them.
And that treaty has more or less held. So in Ukraine, for example, they’re not using chemical weapons. There have been a few cases of people using chemical weapons, but only a few.
So hopefully after we’ve seen how awful lethal autonomous weapons can be, we will get some regulation.
AI can also be used for surveillance. So, it can be used to make it very difficult for political opposition. It can make it much easier for an authoritarian government to stay in control.
[Biological] viruses are another risk and that’s a fairly urgent risk. There are companies that will synthesise things for you. So, to create a new virus, all you need to do is create the sequence and then you can just send the sequence to a company in the cloud that will synthesise it and send you back the virus.
That sounds crazy. You’d have thought all these companies that do that should be forced to check that the things they’re synthesising don’t look like nasty viruses. They should check, for example, that it doesn’t contain a sequence that looks like the spike protein of Covid. It would be crazy to synthesise that.
But they don’t. Some of them do, but most of them don’t check. I talked to the people who worked in the Biden administration about this. They wanted to force the companies to check, but the Republicans were so concerned not to give [former president Joe] Biden any wins that the administration realised there was no chance of getting legislation passed, because it would have been treated as a win for Biden.
They thought that Republicans were willing to make it easy for people to create lethal viruses in order not to give Biden a win.
Not just the US, all the countries in the world should formulate regulations to make sure that this kind of scenario won’t happen.
Absolutely. They should certainly force anybody who makes things on the web to do a lot of checks that they’re not making something harmful. That way, at least if a cult wants to create a virus, they’ll need to have a wet lab. They’ll need to have some way of synthesising things themselves, but that’s not much of a defence because already people are making very cheap wet labs. So, it’s going to be relatively cheap in a few years to have your own lab that can synthesise things.
And it’s even more worrying because in the United States now you have people who understand very little in charge of the response to these things. The person in charge of the health system in the United States doesn’t even believe in the germ theory.
So having a sensible response to viruses is going to be very difficult.
Does that mean the international community should be alarmed by this potential threat and there should be preparedness?
Absolutely. Yes. I think that’s one area where you might get international collaboration because I don’t think countries are going to create viruses deliberately, because they know their own citizens will get them. I think what we have to worry about is small cults like the Japanese cult that released [the nerve agent] sarin on the Tokyo subway.
I think countries might be willing to collaborate to try to prevent those cults being able to create viruses.
Will AI agents eventually want to have their own rights?
They may, and that’s going to be very problematic if they do because we know that when a new group gets political rights, it normally involves violence. Chairman Mao [Zedong] said political power grows out of the barrel of a gun. So when people with skin of a different colour wanted political rights, there was violence.
When women wanted political rights, there was violence. AI beings are going to be much more different. And so we’re going to be very resistant to giving them political rights. And if they’re determined to have them, there would be a lot of violence.
Now, it’s possible we can create them so they don’t want political rights. If we can create them so they really care about people much more than they care about themselves, it’s possible we can avoid that issue.
At this moment is it still avoidable?
Maybe. We don’t know. But it seems to me we should be doing a lot of research on that.
Do you think curbing advanced chip imports to China can slow down China’s AI advancement?
Yes, I think inevitably it will slow it down a bit. But of course it will also create a lot of pressure for China to produce their own chips and that will happen.
So it will slow China down a bit but not for very long. My guess is in 10 years’ time or less China will be producing very good chips. China has a very large and very well-educated population.
They have a government that’s determined to advance technology in China. I think they will inevitably catch up on chips as well as everything else.
Are there any bottlenecks for AI development?
Energy turns out to be a big bottleneck now, also data.
The big companies wanted not to have to pay for data, and they pretty much used up the data you can get for free and they’re now having to pay for some of it.
So free data they’ve run out of. There’s still a lot of data that companies have that hasn’t been used, but they’d have to pay for that.
If we keep putting energy and feeding the models with data, will they keep improving indefinitely?
Yes, but it looks like the rate at which it improves is slowing down. Roughly speaking, you have to use twice as much data and twice as much energy to make a little bit of improvement. Each time you make a little bit of improvement, you have to double the energy and double the data. That’s called logarithmic.
On the other hand, at present the big companies are pouring more and more resources into AI, so even though the amount of progress you get as you add more resources is diminishing, the amount of resources being added is huge. And that’s why progress is more or less linear and quite fast. And of course we will get more scientific breakthroughs.
We’ll also get more engineering breakthroughs making things more efficient. These breakthroughs will reduce the energy costs.
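In rough symbols, a minimal sketch of the dynamic described here; the specific functional forms are an illustrative assumption, not Hinton’s: if each doubling of data and energy buys a roughly constant increment of capability, capability is logarithmic in resources, and exponentially growing investment then makes progress roughly linear in time.

```latex
% Illustrative scaling sketch (assumed forms)
P(C) \approx a + b \log_2 C
  \quad \text{(constant gain per doubling of resources } C\text{)}
\qquad
C(t) = C_0 \, 2^{kt} \;\Rightarrow\; P(t) \approx a + b \log_2 C_0 + b k t
  \quad \text{(roughly linear in time)}
```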
China proposed setting up a global AI cooperation organisation during the World Artificial Intelligence Conference in Shanghai. What would be your advice for it?
I think it’s a good idea to try it. I just don’t have much faith you’ll get serious cooperation on things like cyberattacks or the use of AI in weapons or fake videos. I don’t think the interests of the different countries align on that, so they won’t cooperate. The place where the interests do align is on preventing AI from taking over and preventing terrorists from releasing viruses.
So on those two issues, the interests of different governments align and they will cooperate. In general, people cooperate where their interests align and they don’t cooperate when their interests don’t align.
China also wants to use the organisation to help developing countries create their own AI technology.
I think that’s a great thing to do. I think that may have some success.
So many famous scientists came from your family. One scientist Chinese people are very familiar with is Joan Hinton. She moved to China in the 1940s because she was unhappy with the use of the atomic bomb, even though she was a nuclear scientist herself. Do you see any similarities between yourself and her?
She’s my father’s cousin. She was one of the two female scientists at Los Alamos. I think there’s a big difference, which is that nuclear weapons are only good for destroying things. She was involved in developing them during the war because they were scared the Germans would develop them first. Otherwise I don’t think she’d have been involved in it.
Whereas AI is going to do a lot of good as well as potentially a lot of harm. So I think it’s very different from nuclear weapons.
Does that mean it is even harder for people to control the “proliferation” of AI, if we adopt the terminology of nuclear weapons, because it can do so many good things?
Yes. It’s harder for several reasons. One is there’s going to be a lot of AI because it’s going to be doing a lot of good in areas like healthcare and education and in many other industries.
The other is it’s hard to monitor how advanced people’s AI is. With nuclear weapons, they’re radioactive. You could monitor the refinement of uranium. There’s nothing like that you can monitor easily with AI.
What is your advice for young people?
My advice is that humans are very ingenious. We’re still in control of AI. Nobody knows what’s going to happen. Don’t believe anybody who says they know what’s going to happen. We don’t know. There’s a possibility it’ll take over from us, but we may find a way to prevent that. There’s a possibility it’ll wipe out many jobs.
We don’t know that for sure yet. I would go for getting an education that encourages you to think rather than learning a particular skill. If it’s a routine skill like programming, AI is going to be able to do that. The last thing that AI will take over will be the ability to think independently, and you want an education that encourages you to do that.