
The future of cybersecurity

An expert in cybersecurity surveys a rapidly evolving world where technology is racing ahead of our ability to manage it, posing risks to our national security.
Image: Today’s cybersecurity attack surface is everywhere. | iStock/guirong hao

With TikTok in the hands of 170 million Americans, cybersecurity expert Amy Zegart says it’s time to talk about consequences. Foreign access to all that data on so many Americans is a national security threat, she asserts.

For those as concerned as she is, Zegart has good news and bad. The government has gotten better at fighting cyberthreats, but artificial intelligence is making things very complicated, very fast. The U.S. needs to adapt quickly to keep pace, Zegart tells host Russ Altman on this episode of Stanford Engineering's The Future of Everything podcast.


Transcript

[00:00:00] Amy Zegart: What we really need to pursue much more seriously is developing independent capacity. Developing the talent, developing the compute. I know Stanford's been really pushing this idea of a national AI research resource. That's fancy talk for compute power, so that independent researchers can ask hard questions, uh, and do the kind of analysis that needs to be done.

[00:00:27] I think we need to be investing much more in that. Compute is a strategic national asset like oil. And the government should be investing orders of magnitude more and making that available.

[00:00:44] Russ Altman: This is Stanford Engineering's The Future of Everything podcast, and I'm your host, Russ Altman. If you're enjoying the show or if it's helping you in any way, please consider sharing it with friends, family, and colleagues. Personal recommendations are one of the best ways to spread the news about the podcast.

[00:00:59] Today, Amy Zegart from Stanford University will tell us about cybersecurity and AI. How have things changed in the last three or four years since she was last a guest on The Future of Everything? It's the future of cybersecurity. 

[00:01:13] Before we get started, a quick reminder that if you're enjoying this show, please consider sharing it with friends and family. Personal recommendations really do work in growing the podcast audience and improving the podcast.

[00:01:31] Cybersecurity is a huge issue for the United States. Computers are one of the battlefields where the next big conflicts are and will be waged. Specifically, we focus on four countries that are the source of many cyber attacks: Russia, China, North Korea, and Iran. Intelligence is complicated, and it's gotten more complicated in the last few years with the rise of AI.

[00:01:55] Now, disinformation can be generated and spread more quickly, and it looks more realistic. Amy Zegart is a senior fellow at the Hoover Institution, the Freeman Spogli Institute for International Studies, and the Institute for Human-Centered Artificial Intelligence at Stanford University. She's an expert in cybersecurity. She's written several books and she's an advisor to the nation.

[00:02:17] She's going to tell us that AI has accelerated the work of both the good guys trying to combat cyber warfare and cyber attacks, as well as, unfortunately, the bad guys who are making the attacks. However, things are looking better and there's reason to be optimistic. Amy, you're a return visitor to The Future of Everything.

[00:02:37] It's a great honor for me. You can decide if it's an honor for you or not. Last time we discussed cybersecurity, and you said something very memorable, which is that the cybersecurity world is moving at light speed and the government is moving at government speed. Now in the interim, since you were last on the show, there's this thing called AI that has just exploded, ChatGPT and many other technologies. Has that changed the situation? Is the government able to respond faster? Are the cyber attacks able to come faster? Both? Neither? Where are we these days with respect to the relative strengths of the government and of the cyber attackers?

[00:03:18] Amy Zegart: Well, Russ, thanks for starting me off with such a softball question. It's really an honor to be back with you too. It's a complicated question. There's good news and bad news. Let me start with the good news. Government has matured. So part of the challenge, as you know, of dealing with cyber is do we have capacity in the government to understand and coordinate and work with the private sector?

[00:03:40] That's gotten a lot better. So we have the creation of the National Cyber Director. That office has matured. There's a State Department ambassador at large for cyber. That's a really important component of it. Um, secondly, there are SEC regulations now. 

[00:03:55] Russ Altman: Wow. 

[00:03:55] Amy Zegart: So there's an incentive for corporate boards to pay much more attention to cybersecurity. And when the incentives are aligned, of course, companies pay more money for cybersecurity, do a lot more investing in cybersecurity. So those SEC rules have kicked in. And I think that's important to bear in mind. 

[00:04:12] Russ Altman: Can you, just to take, just to dive down, why would the SEC, why does the SEC even regulate that? I'm a little bit surprised. I think of them as regulating, you know, disclosures about the company and the financials and all that kind of stuff, monopolies, where does cybersecurity come into their kind of purview?

[00:04:28] Amy Zegart: I think there's a sense that it's a question of governance. The SEC regulations aren't dictating specific cybersecurity actions, but they are incentivizing processes. So corporate directors are held responsible for oversight of cybersecurity. And that's then catalyzed a whole bunch of other things, you know, regular reporting, in private and public companies as well. 

[00:04:53] Russ Altman: And it's not the case that the companies would have been independently motivated not to be attacked, not to have ransomware? I'm just surprised that we needed to tell them this. 

[00:05:02] Amy Zegart: Well, many companies are incentivized. When you think about financial services, for example, they've invested an enormous amount of money in their cybersecurity, and they know they need to. But many companies think that cybersecurity is really for the big guys, that cybersecurity is for the sexy industries. Remember Home Depot? When Home Depot got hacked? 

[00:05:20] Russ Altman: Yes.

[00:05:20] Amy Zegart: Right? Their defense was, we just sell hammers. Why are we a victim of cyber attacks? So, when you think about, 

[00:05:27] Russ Altman: Now the bad guys know about my wheelbarrow. 

[00:05:30] Amy Zegart: Exactly. So when you think about the cyber attack surface, it's everywhere. So you really want to incentivize all companies to spend more money and pay more attention to cybersecurity. I think that's, 

[00:05:41] Russ Altman: Sorry, I interrupted you on that because I was just surprised. Keep going. Yeah. 

[00:05:44] Amy Zegart: So that's the plus side of the ledger. The negative side of the ledger, and you asked about AI, is AI makes everything more complicated and faster. So is AI being used to automate cybersecurity on the defensive side? Yes. Is AI used to automate offensive attacks against cybersecurity defenses? Yes.

[00:06:04] So it sort of washes out. It's good news and bad news on that side. But let me add one other, uh, depressing wrinkle to the story, which is, of course, what are the attackers thinking? What are their capabilities? And what are their intentions? And there, the picture is bad. So I want to read to you the intelligence threat assessment, the annual intelligence threat assessment that was just delivered last month by the Director of National Intelligence to Congress. And this is about China. This is what the intelligence community’s assessment said.

[00:06:38] If Beijing believed that a major conflict with the United States were imminent, it would consider aggressive cyber operations against U.S. critical infrastructure, and by the way, that means just about everything, and military assets. It goes on to say these operations would be designed to induce societal panic.

[00:07:02] What they're saying is, it's not just if a war breaks out, if China believes conflict is imminent, they're already in our systems. And they have an incentive and an interest in attacking. So the bad guys are out there. Just because they haven't attacked us yet, doesn't mean they won't. So when you think about nation states with real cyber capabilities, China, Russia, Iran, North Korea, they're out there and they have not been defeated on the cyber landscape.

[00:07:30] Russ Altman: I'm glad you mentioned our favorite four countries, because I wanted to ask you, I'm suspecting that their interests are not precisely aligned and that they each have a different piece of the game in mind. So you just described how China might be thinking of it as a defensive slash offensive weapon for a potential conflict with the U.S. Is that the same kind of thinking that Russia is doing? Or are they just, they seem more random? That's why I ask, because they seem more random in their attacks. 

[00:08:04] Amy Zegart: There are different flavors of their interests, right? So Russia is really focused on the disinformation space. Think about foreign election interference. The Russians really like that space, uh, fomenting, you know, exacerbating polarization in the United States. By the way, the Chinese are trying to do that too, the Iranians too, but Russia is the A team. They're really good at that. The North Koreans want to steal, right? So they're trying to steal everything they can in cyberspace. When you think about ransomware, think primarily of the North Koreans, the Russians too. 

[00:08:35] Russ Altman: Okay.

[00:08:35] Amy Zegart: But they're really in it for the money. The Iranians have sort of a vengeful attitude toward cyber attacks. So you'll see cyber attacks against, um, particular targets like casinos, right? And they're very vengeful there, like specifically targeting, uh, you know, assets that are owned by people who say things that they don't like.

[00:08:58] Now that's a broad simplification of the differences between these four countries, but they're all very capable cyber adversaries. Uh, and those capabilities are growing over time. So it's a cat and mouse defense offense game. 

[00:09:13] Russ Altman: And can you update us on the issue of Russia? What have we learned from the Russian-Ukrainian conflict? Has Russia been successful in making cyber attacks part of their campaign, or have the Ukrainians successfully fended them off? And are we learning anything by watching this conflict, anything that could be useful to us in combating the Russians? 

[00:09:37] Amy Zegart: So yes and yes.

[00:09:38] Russ Altman: Okay, good. 

[00:09:39] Amy Zegart: Yes. Um, I think the narrative of the Russian cyber attacks against Ukraine at the start of the war turns out to be not quite right. You'll remember at the start of the war, the big question was, how come they didn't do more in cyberspace? They actually did do a lot in cyberspace, we now know, right?

[00:09:58] They attacked Viasat, so this was a pretty significant cyber attack. And they tried to do a lot more. Why didn't they succeed? That's the more important question. Part of the answer, we now know, is that United States Cyber Command was helping the Ukrainians weeks before the Russian attack.

[00:10:18] And so this was about moving critical digital assets to safer, uh, configurations. This was about fending off attacks. This was about working side by side with the Ukrainians to try to mitigate the risk of a cyber sort of first strike that would really take them out. And so we know a lot more now that's been revealed publicly about U.S. Cyber Command's role there. 

[00:10:40] The other thing I think that we learned is that cyber attacks in an ongoing conflict are actually really hard to pull off, right? So it's one thing to initiate an attack at the start of a conflict where you're planning to go in. 

[00:10:53] Russ Altman: The surprise. 

[00:10:54] Amy Zegart: But then sustaining that level of activity and adapting to the threat landscape turns out to be much harder.

[00:11:03] Russ Altman: Really interesting. Okay. So, um, all right. So you gave a great answer to my initial question. And the answer is that the cyber guys have gotten faster. The government is more on board and presumably is getting faster. Um, what about domestic threats? I know that you've spent a lot of time thinking about, uh, global threats. Um, I think you also think about internal, domestic threats. Is that true? 

[00:11:29] Amy Zegart: Not so much. I worry about things coming from outside the United States, not inside the United States. 

[00:11:35] Russ Altman: Okay. So, misinformation, I want to talk about misinformation. Um, you already alluded to it, especially as messing up elections. Uh, and generally it's an attempt to create chaos. Is that how I should think about it? 

[00:11:52] Amy Zegart: So it depends. Let's just talk a little bit about terminology. Misinformation is information that is wrong, but people are mistakenly sending it around, right? So this is your crazy uncle sending things that he thinks are true, but are actually not true.

[00:12:08] So disinformation is a deliberate effort by somebody to spread something that they know to be false. We have both problems. People spreading things that are patently false, but they don't know they're false and they go viral. And then nefarious actors, domestic and foreign, that are knowingly spreading information that is false. 

[00:12:28] And in that category, right, there are a variety of motives, but from a foreign adversary perspective, it is to fray the bonds of democratic societies that bring us together. Anything that gets you and me to fight with each other about what's true or what's not, or what we believe in and, um, what our values are in conflict benefits these adversaries.

[00:12:50] Russ Altman: Yeah. And I've heard that. So, you know, as you may be aware, there are a lot of splits in the U.S. right now, politically, and it makes me wonder, when you say things like what you just said, whether some of that is manufactured not from deeply held beliefs of the two sides, but from external sources. And that would actually, in a funny way, be a hopeful thing, because it might imply that we actually have a chance to resolve some of these divides if we could get the noise from the external perturbers out, but I don't know if that's too Pollyanna.

[00:13:25] I'm sure it is too Pollyannaish. So how should I think about it when I look at the political divides in the U.S.? Do we have a sense for how much of that is created from external sources and how much is, in some sense, real? Or is that not even a question that makes sense? 

[00:13:39] Amy Zegart: I think it's a great question. I don't think we know the answer. It's so hard to unravel what are the roots of, uh, the particular information that's being spread. Now there are researchers that are trying to do this, but that they're caught in the political maw as well, right? Seen as either censoring or favoring one side or the other. It's a really hard thing to unwind.

[00:14:00] And of course our adversaries are getting better at it and hiding their tracks. So you think back to, you know, Russia's early efforts to interfere with a presidential election. And, uh, we may have talked about this in the last time I came on. You know, you could easily trace it back after the fact to Facebook groups that were created by Prigozhin in St. Petersburg in an office with trolls that, you know, came to work nine to five and they masqueraded as Americans trying to get followers and getting Americans to be pitted against each other. Not just online, but in real life, right? On the streets of Texas, protesting against each other, all fomented by the Kremlin.

[00:14:40] Now it's a lot harder. And, you know, you think about TikTok, for example. The Chinese don't need the Russian playbook, which is to utilize American platforms against ourselves. They have their own platform, directly in the hands of forty percent of Americans, which is why there's been this whole brouhaha about banning TikTok or forcing a sale of TikTok. It is a legitimate national security concern. 

[00:15:04] Russ Altman: Great. Okay. Now there's something juicy and meaty we can talk about. So there are these big tech companies and uh, TikTok is a great example because I have graduate students who are ready to hit the streets in protest if, uh, if TikTok is shut down, because it evidently has become a huge part of their life. Um, so talk to me about your perception of the real risks of TikTok, uh, versus kind of manufactured political rhetoric that you think is not so beefy. 

[00:15:35] Amy Zegart: So I will say, and I mentioned this to General Nakasone when he was the head of Cyber Command and the National Security Agency, that, um, we were talking about our kids, I said, you know, I have a college daughter, and she runs her team's TikTok page, right?

[00:15:47] Russ Altman: Right, exactly. There you go. 

[00:15:49] Amy Zegart: And you know, I had to send a note to the parents saying, hey, if you want to know what your kids are up to on TikTok, uh, you can take a look, but I have to tell you, it's a national security threat. 

[00:16:00] Russ Altman: Oh my god. Oh, what a great message that would have been to get.

[00:16:03] Amy Zegart: So it was a bit of an awkward position to be in. So yes, I hear what you're saying, particularly young people, they get their news from TikTok. TikTok is a very big part of their life. So why do we care so much? 

[00:16:15] Russ Altman: They'll ask me, what's email? What's Instagram? Facebook is for my grandmother. It's all about TikTok. And they also say, sorry to interrupt you, and these are my technical students, my computer science students, they say that we don't know how their algorithm works, but it is light years better than all the other algorithms in terms of putting things in front of me that I find interesting.

[00:16:37] Amy Zegart: Yeah. And by the way, that's one of the reasons China may not want to actually divest TikTok to an American buyer because they don't want that IP, that algorithm to be in foreign hands. 

[00:16:49] Russ Altman: Yes, it seems to be magically good. 

[00:16:50] Amy Zegart: So this dilemma, the U.S. is trying to force a sale. China doesn't want to sell. It's going to be very interesting to see what happens.

[00:16:57] But back to your question about what's the real national security concern about TikTok? There are several. Number one, access to data. So TikTok executives have been saying, no, no, no, no, no. There's a firewall between American data and Chinese access to the data. That is not true. Right? We know empirically that is not true.

[00:17:17] So data on a hundred and seventy million Americans can be accessed by the Chinese owners of this company. And we know the national security law in China mandates that companies turn over data when asked. 

[00:17:31] Russ Altman: Is this data generated within the app or is this even other stuff on your phone that can be grabbed? 

[00:17:37] Amy Zegart: That I don't know.

[00:17:39] Russ Altman: Okay, but it's at least what I'm watching on TikTok, what I'm typing into TikTok, maybe where I am, if there's geolocation. I don't know if there's geolocation on TikTok. 

[00:17:48] Amy Zegart: So let me put it to you this way, Russ. Imagine the U.S. government had the ability to reach into the phones of forty percent of citizens in China. And know with the algorithm what gets their attention, what they like and what they don't. How much would we pay to have that kind of access to the populace of a foreign country?

[00:18:10] Russ Altman: And that's what they have. 

[00:18:11] Amy Zegart: That's what they have, right? That's what our kids and your students have. So that's issue number one is access to the data.

[00:18:19] Number two is influence. So it doesn't have to be a heavy-handed, "the Chinese Communist Party is great" kind of TikTok algorithm, right? Where the viewers, or my daughter, are suddenly watching this. It can be things at the margin, right? You probably saw the story claiming that Osama bin Laden really had legitimate grievances when he masterminded the nine eleven attack.

[00:18:42] Just horrifying things. But on TikTok, this messaging went viral. Now, I don't know whether this was put there in a deliberate effort by a foreign adversary, but you can see how, if a government wanted to influence opinion on issues, uh, it would be pretty easy to do. 

[00:19:03] Russ Altman: Yeah, and your point is a really good one, that these can be nudges and not bludgeons.

[00:19:07] Amy Zegart: Right. 

[00:19:08] Russ Altman: And actually nudges are probably more insidious and difficult to find and are way less detectable. 

[00:19:15] Amy Zegart: That's a better way of putting it than what I just said. Yes. 

[00:19:17] Russ Altman: This is The Future of Everything. I'm Russ Altman. We'll have more with Amy Zegart next.

[00:19:36] Welcome back to The Future of Everything. I'm Russ Altman, your host, and I'm speaking with Amy Zegart from Stanford University. 

[00:19:42] In the last segment, Amy described some of the changes that have happened in cybersecurity threat assessment and response since we last spoke with her. In this segment, we're going to talk about the role of corporations. They have more compute power, not only more than academics have; in many cases, they have more than the government has to use in building AI tools. This has created new challenges for regulation and for collaboration among industry, government, and academics. 

[00:20:11] Amy, to start off this segment, I wanted to ask you about the role of corporations. We talked about it a tiny bit when we were talking about the SEC regulations, but there are these big tech companies that are really holding and controlling AI. What is the role of the corporation in national security these days? 

[00:20:28] Amy Zegart: Well Russ, the role of the corporation is totally different than it was when we were growing up. So it used to be that innovations were invented in the government and then they became commercialized, right? You think about the internet started that way, or GPS satellites. And now the script has flipped. So now innovations, and we see this with large language models, are being invented outside of the government, and the government has to figure out not only what to do about that in the private sector, but how to bring those capabilities into the government itself.

[00:20:57] That's a new world for them. We're in a world right now where a handful of companies really dwarf the capacity of the government or universities to, uh, to compete and understand these large language models. 

[00:21:12] Russ Altman: And there is no way that me and my friends could have built ChatGPT. We don't have a tenth of the compute power we would need.

[00:21:21] Amy Zegart: And I think Russ, many people don't know that, right? So the orders of magnitude of compute power, right? So how much more compute power does OpenAI have than Stanford, for example? I mean, you know, it's, you know, you probably don't have, 

[00:21:37] Russ Altman: It's ten to a hundred, it's a hundred X. It's a hundred X. 

[00:21:40] Amy Zegart: So I was trying to find a specific number because I'm writing about this now. And I saw an announcement that Princeton is very excited that they're buying three hundred NVIDIA chips by the end of the year. Meta is expected to have three hundred and fifty thousand, right? So, 

[00:21:59] Russ Altman: So it's a thousand X. I was off by an order of magnitude. You're right. 
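The rough arithmetic behind that exchange can be written out in a couple of lines. The chip counts are simply the figures cited in the conversation (Princeton reportedly buying around three hundred NVIDIA chips, Meta expected to have around three hundred fifty thousand), not authoritative inventory numbers:

```python
# Back-of-envelope sketch of the academia-vs-industry compute gap,
# using the rough chip counts mentioned in the conversation.
princeton_gpus = 300        # Princeton's reported NVIDIA chip purchase
meta_gpus = 350_000         # Meta's expected chip count

ratio = meta_gpus / princeton_gpus
print(f"Meta's count is roughly {ratio:,.0f}x Princeton's")  # roughly 1,167x
```

Three hundred versus three hundred fifty thousand works out to a factor of about 1,167, which is where the "thousand X" in the exchange comes from.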

[00:22:03] Amy Zegart: And so what that means is that these companies are not just at the forefront of innovating, they're grading their own homework, right? Because how do you know what's safe? How do you know what potential risks there are? What kinds of questions do you need to ask? They're deciding those things largely by themselves, and that's never a good setup. I would say that, but I also want to emphasize that we don't want to impede innovation.

[00:22:26] So the question is, how do you strike the right balance? How do you mitigate harms from these AI models while making sure that we're allowing the private sector leaders in this space, and they are all American at this moment, to continue innovating? You know, there are so many positive benefits to these models as well. So we don't want to impede that either. 

[00:22:47] Russ Altman: So what is the approach? I mean, is it now requiring government to have some kind of super awkward type of conversations that it's not used to having where it's kind of, it has to be a little bit more humble, perhaps, and come to the companies and say, hey, could we work together?

[00:23:04] I mean, first of all, I presume that they're a part of the national security infrastructure. We would want to use many of those technologies to combat, uh, threats. On the other hand, um, they're not under the control of the government and they might say things like, who's going to pay for that? Or, you know, how much should the license that the government pays us, how much should that cost? And so, how far are we at figuring out how this new dance should be danced? 

[00:23:30] Amy Zegart: Well, you know, we just talked about cyber and how long it took to sort of get the maturation of cyber organization in the U.S. government. We're on day one for AI. So we are not far along at all. Yes, there are awkward conversations. You can see them on television when Sam Altman goes to testify before Congress. You know, many members of Congress are learning how to spell AI. So we have a, we have an expertise challenge there too. So at one point I counted the number of members of Congress that had engineering degrees. I think it was in 2020.

[00:24:00] You will be unsurprised to know more than half the Senate had law degrees. And there were, I think, three engineers. Which is actually more than I thought there would be. So that's part of the challenge too, is, you know, ordinary folks don't know enough to be able to ask the right oversight questions. 

[00:24:16] Russ Altman: So here's overly optimistic Russ's next question. Did we learn so many lessons from cyber that AI is going to be easier, based on the cyber experience, or are we basically starting from scratch? 

[00:24:28] Amy Zegart: I'm trying to be optimistic. I'm trying to get your optimism, Russ. Yes, we have learned some lessons. You can see early efforts at conversation, on the private sector side especially. They know they've got this powerful technology and they're concerned about the risks. I don't want to overstate it. 

[00:24:45] Russ Altman: They want to be patriots to some degree, I would guess, slash hope. 

[00:24:49] Amy Zegart: And they know that what they have has great promise and also great peril and trying to harness the upside while mitigating the downside is in everybody's interest.

[00:24:58] So those conversations are happening. We don't have the Edward Snowden problem. You remember, in 2013, a former NSA contractor revealed highly classified programs, and it sowed deep distrust between companies here in the Valley and the government. We don't have that right now. 

[00:25:15] Thanks to Xi Jinping and Vladimir Putin, there's a joint concern about authoritarians in the world and the bad they can do. So that's good. We have learned from that. But, you know, there is this question of what we do, given that there's just this capacity differential, in talent and compute and algorithms, between a handful of companies and everybody else.

[00:25:38] And I think we have three options. One is regulation. I'm concerned about that because you can really, you know, uh, throw the baby out with the bath water. Two is do nothing, let them grade their own homework. That concerns me too. What happens if, uh, people violate OpenAI's rules in the presidential election? They, they go to OpenAI jail, right? I mean, it's voluntary compliance. 

[00:25:59] Russ Altman: No chat for you. 

[00:26:01] Amy Zegart: So that's the world we're in right now. And then the third option, and I think this is what we really need to pursue much more seriously, is developing independent capacity. Developing the talent, developing the compute.

[00:26:15] I know Stanford's been really pushing this idea of a national AI research resource. That's fancy talk for compute power, so that independent researchers can ask hard questions, uh, and do the kind of analysis that needs to be done. I think we need to be investing much more in that. Compute is a strategic national asset, like oil, and the government should be investing orders of magnitude more and making that available.

[00:26:44] Russ Altman: You know, as a biomedical researcher, I'm very aware of this, because whenever, for example, the drug industry gets out ahead of NIH researchers, the NIH has a history, and I've seen this several times in my career, of making huge investments to try to level the playing field so that academics can, if not really compete with the pharmaceutical industry, at least do things at the same scale. 

[00:27:05] And I'm somewhat surprised to not have seen a government-scale AI resource that's as big as Facebook's or OpenAI's or Anthropic's. It's just surprising, because the government can definitely afford it. Yes, it's expensive, but you know, the government has a bigger budget than even Facebook, and so it's just surprising, and maybe we'll see this.

[00:27:27] Um, let me just ask you, you mentioned about regulation. I'm sure you have opinions about the Europeans. As you know, the Europeans have been very aggressive at um, kind of AI and data protections. What's your take on that in terms of the security implications? Is that a model that the U.S. should seriously look at? Or do you have concerns about how they've approached it? 

[00:27:47] Amy Zegart: I understand where they're coming from, and the Europeans share our values. And so the impetus, I understand. I think their heart's in the right place, but I don't think it's a coincidence that the leading AI companies in the world are not coming out of Europe.

[00:28:01] They're coming out of the United States. Our, you know, wild west approach to innovation and hands-off regulation is both a feature and a bug, right? It is what has fostered the innovation explosion we've had for a long time. But it also means that we have a harder time mitigating the harms. So of what the Europeans have done, I think two things are most promising.

[00:28:26] One is starting an international conversation about norms. That's really important. Ultimately, I think that we also need a serious bilateral conversation about AI guardrails between the U.S. and China. Things like AI and nuclear command and control, AI and financial system security, things where we have mutual interests, we need to have that.

[00:28:48] But the multilateral approach is important, too, for building norms about what's acceptable and what isn't. The second thing the Europeans are doing is the UK has really taken the lead on their AI Safety Institute. Independent capacity building to really understand what the risks of this technology could be.

[00:29:06] Now it's a beginning, it's not an end. The United States is behind. We're behind in our organization. We're behind in our funding compared to the Brits. And as you probably know, now there's conversation about how can we join our efforts together. And I think that's very promising. 

[00:29:21] Russ Altman: Great. So to finish up. How is AI going to help intelligence? Like, I know that the bad guys might use AI as well, but what are some of the ways, like, kind of tangible ways that we should be excited about AI helping increase our security and safety? 

[00:29:36] Amy Zegart: So, I do think there is some good news about the potential of AI. It can help intelligence in a number of ways.

[00:29:43] First, you have to understand that intelligence isn't really about secrets; it's about insight. So the question is, how can analysts sitting inside the Central Intelligence Agency better develop insight about what's going on around the world? Well, what can AI do? AI can do pattern recognition at scales and speeds that humans cannot.

[00:30:02] So you think about an analyst. There was actually an experiment done several years ago by the National Geospatial-Intelligence Agency. They had a human team and an AI team identifying surface-to-air missile sites over a huge swath of territory. The humans and the algorithms had the same level of accuracy, ninety percent, but the AI did it eighty times faster.

[00:30:28] What does that mean? Now you're freeing up the human analysts to do things that only humans can do well, like thinking about intention. What does the adversary intend to do with those surface-to-air missiles? 

[00:30:40] Russ Altman: Why are they pointing in that direction? 

[00:30:42] Amy Zegart: Why are they pointing in that direction? Yeah. So, so incredible efficiency gains, right?

[00:30:46] Pattern recognition. That's thing one. Thing two is AI can help find needles in haystacks, much better than humans can by poring over images or poring over data. And the third thing AI can do is derive insight from the haystacks themselves. So think about, in your world, AI accelerating scientific discovery, the new antibiotic at MIT. That's finding insight from mounds and mounds of data, all the haystacks, connections that humans didn't even know existed. AI can help with that. 
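The "needles in haystacks" idea can be sketched with a toy example. A simple z-score outlier check stands in here for the far more capable models agencies actually use; the function name, data, and threshold are illustrative assumptions, not anything described in the episode:

```python
import statistics

def find_needles(values, threshold=3.0):
    """Return indices of values sitting more than `threshold`
    standard deviations from the mean: a crude, illustrative
    stand-in for anomaly detection at scale."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:  # all values identical: no outliers possible
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# 999 routine sensor readings plus one anomaly buried at the end
haystack = [10.0] * 999 + [500.0]
print(find_needles(haystack))  # flags only the anomalous index: [999]
```

The point mirrors the episode's: the machine scans the whole haystack in one fast pass, freeing the human analyst to ask why the needle is there.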

[00:31:23] Russ Altman: Thanks to Amy Zegart, that was The Future of Cybersecurity. Thanks for tuning into this episode. With over 250 episodes in our archive, you have instant access to an extensive array of fascinating discussions on the future of pretty much everything. Please remember to hit follow in the app that you're listening to now.

[00:31:41] This will guarantee that you never miss out on the future of anything. You can connect with me on X, formerly Twitter, @rbaltman. You can connect with Stanford Engineering @stanfordeng.