Shadowing Practice: The new AI model that's alarming Washington | The Economist - Learn to Speak English with YouTube
Mythos is a new AI model trained by Anthropic.
The reason why it's causing a fuss is that Anthropic says it is an extraordinarily competent cybersecurity engineer.
It can hack things really, really well.
To give you an example, Opus 4.6, the previous top-tier model from Anthropic, the one that the public can use, is quite good at finding weaknesses in technology.
It found a vulnerability in a version of Firefox that has since been fixed.
But if you ask it to then use that vulnerability to hack a computer, it falls down.
Anthropic tried hundreds of times.
They got a working exploit on the very first step just twice.
Mythos: 181 successful exploits of that, and then 27 went further and actually built a working attack chain that affected the Windows registry, a real deep-level attack.
That's a step change, right?
It's not just a couple of percent higher on a benchmark.
And it raises the risk that anyone using Mythos becomes a top-tier hacker, even if they don't have any tech capability themselves.
Okay, Zannie, you've been in New York and Washington.
How much is this Mythos moment causing alarm there?
A lot.
I mean, it's interesting.
I got here basically a week ago, just after Mythos was announced by Anthropic.
And I've been talking to a bunch of government officials.
I've been talking to tech leaders.
I've been talking to business leaders in bigger gatherings and one-on-one.
And I've kind of watched over the past week as the alarm has grown.
There is real alarm in the hitherto very hands-off Trump administration.
The Secretary of the Treasury, Scott Bessent, and the head of the Fed, Jay Powell, summoned the banks in for an emergency meeting last week.
And I've really sensed that even in other parts of the administration, people are now going, oh my God, this is a real wake-up moment, as they realize the potential risks involved in a model like this.
And it's been alarming and interesting to watch as pretty much everyone I've spoken to, as the days go on, has mentioned Mythos within about the first five minutes of our conversation.
This is being perceived as dangerous by Anthropic itself.
What's been Dario's response to that?
So they have released it behind closed doors.
This is actually not the first time Dario has done this.
In 2019, OpenAI had made GPT-2, right?
This large language model.
And we were all very excited because it could produce good text.
And they decided not to release it for six months after they trained it because they were afraid of what it might do on the internet.
That decision was probably wrong in hindsight; upticks in spam and fake reviews weren't perhaps worth holding it away from the public.
This time, I think there's really something there.
They have kept it available only to themselves and 11 handpicked partners, massive companies like Apple and Microsoft and the Linux Foundation, which makes the open source operating system.
And the idea is, the hope is that these companies will be able to use Mythos to fix their products before any capability like Mythos makes it to the public at large.
Now, Zannie, you've met Dario recently, and he doesn't mind the PR. I mean, this has got him a lot of publicity. Is this for real, in your view? And is it just about cybersecurity, or might it go broader than that? So, I've met Dario a number of times over the years, and he has long been, of these, you know, AI gods, the most publicly focused on safety. And he has always said, you know, this is going to be very dangerous.
We need to have government regulation.
We need to be very careful.
And as Alex said, he held back an earlier model.
He's worried about bioweapons.
He's been absolutely consistent. And so there are a lot of people, particularly his competitors, who are saying, oh God, crying wolf again.
This is just marketing.
And the other models are quite close behind, and they're just making a big splash about this because it's good for them.
But my sense is that this time really is different.
And I have talked to senior people in companies that actually now have Mythos, and they are all corroborating the idea that this is actually extremely powerful.
So I do think Dario Amodei is definitely one who's focused on safety.
Of course, this is good for him.
And so I'm sure there's no small amount of self-interest about it, but I think it's actually serious and for real.
I think that leads us to the next question, which is, okay, there's a danger here.
Bad things can happen.
There are the beginnings of a response to it.
I want to look now at how systematic these sorts of protections can be.
And Zannie, I want to start with you.
I remember David Sacks, who, when he was in the administration, was a booster for a kind of laissez-faire approach.
Let them cook, let them carry on, he used to say. Do you think the administration is ready to intervene? Well, I think the short answer is yes, because I think they have really been freaked out by the power of Mythos. But you're right, it's a very big shift from an administration that came in basically pooh-poohing the Biden administration, which was very focused on trying to create a kind of regulatory framework. And in comes the Trump administration: absolutely not. They're accelerationists.
They want to go as fast as possible, unfettered competition.
And so the question is now, how do you do it?
And I think that it's not just what they want to do, it's how little time there is.
Mythos exists.
And so my sense is that we are going to have kind of informal actions very fast, which will involve continuing this approach that the most powerful models are first released only to a small set of trusted companies.
That kind of limited release, I think, will be rolled out, because it also suits the companies.
The government is going to get involved, and I think will basically say, we need to see these models.
We need to know what they're capable of, and we're going to have a say in how far things are commercialized.
And then the question is, what happens thereafter?
And probably, and this is what people are talking about, it will evolve into some kind of industry-led certification approach, which will be the big model builders getting together with the government and saying, okay, this kind of model is all right for release.
The kind of trade-offs in this are huge.
Bring in the heavy hand of government, and America falls behind.
You don't get the benefits of this extraordinary technology.
That's bad.
Go too slow and you have an AI disaster, an AI accident.
And as you know, Ed, I have always thought that the kind of race dynamic between America and China and between these companies meant that we wouldn't get any change in the Trump administration's approach until there had been an AI accident.
And I think Mythos may actually be the wake-up call before an AI accident, but it was pretty close.
So Zannie, that leaves me feeling pretty alarmed.
I mean, in other words, we've got, you know, a small space in which to begin to get talking and take this issue seriously, begin to get governments involved.
But it is a very short time.
You should be alarmed.
I mean, you know, you absolutely should be alarmed.
This is an extraordinarily alarming moment.
However, I'm going to, you know, with my perennial optimism, offer you one reason to perhaps be a bit hopeful.
Firstly, I do think this has been a wake-up call for this administration, which had been so extremely hands-off.
The question is whether they are kind of competent enough to work out how to deal with it, with Anthropic and others.
And remember, just a few weeks ago, they were furiously having a row with Anthropic and deeming Anthropic, this company, a supply chain risk that couldn't work for the Pentagon.
So it's going to demand some cooperation.
But the other is, there is this meeting coming up, the summit between President Xi and President Trump in May.
And I will wager, I'm completely speculating here, but I will wager that this subject will be discussed.
Because actually, even though America is focused on being ahead in AI, I think there is a recognition that there are some things that it is in no country's interest to have.
It is in no country's interest to have the capabilities of taking down critical infrastructure in the hands of some crackpot somewhere.
And so I think we will get the beginnings of some conversation about how you can have coordination or standards, because that is essential for any approach to be lasting.
I think it'll be done in an environment of massive mistrust.
I'm not putting a huge amount of weight on it going anywhere.
But I do think when you have a moment like this, people start thinking differently.
About this lesson
You are practicing English with "The new AI model that's alarming Washington | The Economist" using the shadowing technique.
What is the shadowing technique?
Shadowing is a scientifically supported language-learning technique, originally developed to train professional simultaneous interpreters. The method is simple but powerful: you listen to native English audio and immediately repeat it out loud, following the speaker like a shadow with a 1-2 second delay. Research shows significant improvement in pronunciation accuracy, intonation, rhythm, sound linking, listening, and fluency.