AI may be the most consequential technology advance of our lifetime. Rapid advances are creating new opportunities, challenges, and questions that require the public and private sectors to come together to ensure that this technology serves the public good. In this special episode, recorded as part of an event hosted by Microsoft in Washington D.C., I share how AI is leading to new breakthroughs in research, healthcare, and productivity, the guardrails required to ensure accountability and transparency, and a five-point blueprint to help create AI Policy, Law, and Regulation.
Governing AI: A Blueprint for the Future
Watch the video: https://www.linkedin.com/events/governingai-ablueprintforaipoli7066550853603639296/about/
Read the executive summary: https://blogs.microsoft.com/on-the-issues/2023/05/25/how-do-we-best-govern-ai/
Download the full report: https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW14Gtw
Join the discussion: https://www.linkedin.com/events/governingai-ablueprintforaipoli7066550853603639296/comments/
Brad Smith:
I'm Brad Smith and this is Tools and Weapons. On this podcast, I'm sharing conversations with leaders who are at the intersection of the promise and the peril of the digital age. We'll explore technology's role in the world as we look for new solutions for society's biggest challenges.
Rapid advances in artificial intelligence are creating breakthroughs in research, healthcare, agriculture, and environmental sustainability. AI is also improving productivity and making information and knowledge accessible to more people, regardless of where they live or what language they speak. But we can't just look at the benefits of AI as a powerful tool. We must also address the risks of it being used as a weapon. We need to develop AI technology in a responsible way, and we need to ensure that it remains under human control. And to do that, we need to ensure that the development and deployment of AI is subject to the rule of law.
In this episode, taken from a talk I gave in Washington DC, I share how AI is transforming lives, the guardrails needed for AI, and a five-point blueprint to bring regulators and technologists together to govern AI effectively. AI: A Blueprint for the Future, up next on Tools and Weapons.
Thank you so much. We have such a wonderful group of people here today, some folks I've known for a long time, some more recently. And we're in an extraordinary place to talk about what I think is one of the most important questions, not just here in Washington DC but really around the world in 2023: how should the world govern AI?
We've long been enormously excited about Planet Word, and in a way I think it's almost poetic to think about the picture of this room and what it shows to all of us. In many ways, it's not a coincidence at all that we're here, because we're here to talk about a large language model in a building that is dedicated to understanding the role of words and language. And in a way, it's even more than that, because sometimes, despite all the advances in technology, there is still nothing more powerful than human intuition.
It was the intuition of Microsoft CEO Satya Nadella a few years ago to think that just possibly the biggest breakthrough in the world could come from the ability to use language in new ways. As he reasoned, humanity invented language so we could understand the world, so we could explain the world to ourselves, so we could talk about what we were seeing in the world with other people, so we could take what we had learned and literally pass it down from generation to generation. That, at its core, is the story of human civilization. And what we have the opportunity to do today and in the future, if we act wisely, is to harness the power of computing in a large language model to enable humanity, we hope and believe, to put the power of language to good use in new ways.
But there's a fundamental question we all need to address. The question is a straightforward one: how do we best govern AI? That is the question of the day, of the year. It is quite possibly one of the most important questions of the 21st century. It makes sense, I think, to put it in historical perspective, to think about the role of technologies over time.
The truth is, for most of humanity's history, life didn't change very much. It didn't get much better from century to century. In fact, until the invention of the printing press, global GDP typically increased by only four-tenths of 1% every year. In so many ways, the printing press was the invention that first changed the arc of history for humanity. And in many ways, we see the role of the printing press and the role of AI as, if we do our work well, something that can come together. It was after the invention of the printing press that the pace of change and the growth of prosperity started to accelerate. At first slowly: every century, global GDP would basically double. And yet by the 20th century, it grew twentyfold. Inventions like the steam engine, electricity, the combustion engine, the automobile, the airplane, the telephone, computing, and the internet have created the modern world in which we live today. And in some ways, what excites us most, the reason we spend so much time and money working on this technology in the first place, is what we believe it can do for people.
Let me just share a few examples. The first involves the ability to detect diabetic retinopathy. There are 400 million people in the world who have diabetes, and one-third of them may develop this disorder, which can result in blindness. But the challenge is that there are only 200,000 ophthalmologists in the world today. Most people with diabetes will never meet an ophthalmologist. And yet with the power of artificial intelligence that we've helped to develop and are deploying with a partner, Iris, it is possible to take the camera in a phone, look at someone's eye, and detect diabetic retinopathy when it is at an early stage. Not as good as an ophthalmologist, but 97% as good. For most of the people who will have the opportunity to benefit from this, it is the only source of ophthalmological care they will ever receive. That will save people's ability to see.
A second example, I think, is also compelling: the use of artificial intelligence to prepare for severe storms and come to the rescue of the people affected by them. One thing that's important for people in the United States to appreciate is that most of the world does not have the up-to-date maps or even the ground-based weather radar that people in the developed world have. And yet what we're finding is that with the use of satellites and AI, it is possible on a global basis to better predict where severe storms are going to hit land. And with the ability to map buildings in advance and then do the same thing immediately afterwards, organizations like the American Red Cross, with whom we're working today in a place like Guam, literally as we sit in this room, can use the power of AI to identify where there has been damage and where there are people who need to be rescued. This too is saving lives.
And we're putting the same approach to work in a third location: Ukraine, every single day. Early in the war, we at Microsoft were engaged not only in protecting Ukraine from cyberattacks but in documenting war crimes. We were able to work with a company called Planet and its satellite imagery and use AI in real time to map every school, every hospital, every water tower, and know every day whether the Russians were attacking them. In fact, we've documented, in real time, damage to more than 3,000 schools, and we provide this information to the Secretary-General of the United Nations so there is a source of information to address and seek to hold accountable the individuals who are committing war crimes.
But it's not just issues of war and peace, life or death, the ability to see or the loss of one's vision that we're talking about. There are a lot of things we might call more pedestrian, things that will be part of our everyday life, but I think they can make a difference in a positive way. Imagine an everyday scenario for people in Washington DC. You've written a memo on something like the CHIPS Act, and somebody says, "Can you come present this at a meeting?" And you say, "I wish I could. If only I had a few PowerPoint slides, I'd love to do that. I don't have the time." Well, now you do. Because what I'm showing you is the new Copilot that will be in Microsoft 365. If you have 29 seconds, AI can help you turn your memo into a PowerPoint presentation. That is what it did in the time it took me to explain this to you.
But the key thing to remember is that this is not about putting your work on autopilot. It is not about checking your brain at the door when you arrive in the office or get up in the morning. It's about using it to make yourself better. So don't just take the slides that first come to you. It's probably a good idea to read them before you present them.
But you can also work with the computer to make them better. In other words, look at that first slide. You don't have to know how the features work. You just have to say, "Add an image of Congress." After all, it's the Congress that passed this law in the first place. What it does is find an image on the internet, pull it in, and then, on the right, let you choose the picture that will work best. And you might look at other slides and do something else as well. You might say, "There are a lot of words. That's sort of a boring presentation." So, as you see in the next illustration, somebody can go up and say, "Pick one of these slides and change the way the slide looks." In this case, you would say, "Yep. Make these bullets more concise and add an image of a microprocessor." And so the Copilot goes to work. It looks for the image on the internet, you don't have to go scanning for it yourself, and then it will incorporate it into your slide. It will edit your bullet points. And again, read before you speak. Take a look, choose the image that works best in your presentation, and suddenly your life, or your day at least, is a little bit smoother than it was before.
It reminds me in many ways of what was probably, interestingly enough, the most consequential decision I ever made. I was only 27 years old. It was in 1986, and I got a job offer to work in this building at 1201 Pennsylvania Avenue, just a few blocks away from here, at the law firm of Covington & Burling. It was the firm where I wanted to work, but I said, "I'm not going to accept this offer unless you will give me this personal computer so I can have it on my desk." And they said, "Well, why do you want that?" I said, "Because I want to use it to write." And they said, "No, no, no. You write on a legal pad, and then we have secretaries who have computers, and they'll turn your legal pad and what you write into the printed word." And I said, "No, no, I actually know how that works, but I think I can write faster and write better if I can write myself on a computer."
It took a decision by the firm's management committee to let me have a computer, and thankfully, thank you, thank you, they said yes. Because I arrived at work in September of 1986, and you know what? I did write faster, and I did write better. And when you look at the future, that is fundamentally what we believe is the future of productivity software. With the software we create, the power of AI, and the use of a copilot, you can create a PowerPoint presentation better. You can act faster. Every day I use Bing, and I find I do research better. There are days when I have a question and I go, "I wonder if I should ask people to go spend some time looking for this." Then I realize, "You know what? They have better things to do, and I can find the answer most of the time right away."
And what I've learned is that the faster you can find an answer to a question, the sooner you can ask another question. And fundamentally, what life turns on is not finding the right answers but asking the right questions, and asking more of them. And what I've described is important, I believe, not just for the everyday lives of people like all of us, but for the future of much of the world. Because something else of enormous consequence for the world is changing: demographics. Just as COVID arrived in March of 2020, the working-age population of the 38 OECD countries hit its peak. This is the group of people between the ages of 20 and 64. For the rest of this century, the working-age population in these countries is going to get smaller. But the retired, older population won't necessarily shrink at the same rate. Fewer people are going to need to produce more output to support the economy as a whole. That requires more productivity. We live in a world where productivity gains have been hard to come by, but with the use of AI, there is a new tool at our disposal.
Now, despite all of that, I want to say this is not the typical story of a tech person who comes to Washington and says, "Wow, just buy our stuff and the world will be better." We can't afford to look to the future that way. I think we should look at the headlines from the past and learn from them. A decade ago, we were all enthusiastic about the impact of social media. We looked at the Arab Spring, and we believed that social media was going to be the greatest tool to advance the future of democracy. And yet just a few years later, we found that this technology had been turned into a weapon aimed at the health of democracy itself.
We're 10 years older; we need to be 10 years wiser. And above all else, we need to take to heart one fundamental principle: we need to be clear-eyed, and we need to be responsible as we create this technology. Fundamentally, we need to ask ourselves a question that Carol Ann Browne and I posed in the book we wrote in 2019, where we entitled one chapter "Don't Ask What Computers Can Do, Ask What They Should Do." That is the conversation of the year and the decade as we look ahead.
Now, I feel good about where we are at Microsoft today, and yet I know that we have even more work in our future. But we've been working for six years to get ready for this day. We adopted principles, and we've implemented them in a corporate standard. We're now in version two of our Responsible AI Standard. It sets forth engineering requirements and practices. We've implemented these across the company in training for engineers, in tooling for engineers, in testing for the systems that we create. We've built in an oversight mechanism, just as we have for privacy, cybersecurity, and certain other enterprise risks, to monitor the use and development of the technology, to report on concerns, to audit the work, to ensure that we're in compliance with our own standard. And yet I will be the first to stand here today and say that's not enough. It's not enough for any single company to assume that the state of the art in 2023 will always be sufficient for the future, and it's not enough to just feel good about what you do yourself. We need to adopt another fundamental tenet, and it is this: the development of AI technology must be subject to the rule of law.
Think about the country in which we meet, the world in which many of us live. No person should be above the law. No government should be above the law. No company should be above the law. No technology should be above the law. The development of AI must be subject to the rule of law. And that is really what we're focused on today: ensuring that AI technology remains controlled by humans, in no small measure because its development and deployment remain subject to the rule of law. How do we do that? We're publishing a white paper today with a five-point blueprint. Our goal is to ensure that AI serves humanity and is controlled by humans, with an approach to policy, law, and regulation.
So what are the five points? Well, the first is to build on another aspect of what I would call common sense. The best way to move quickly, and we should move quickly, is to build on good things that already exist. One of the great things about these issues is that Microsoft isn't the only institution in the United States or the world that has been focused on them. It turns out the United States Congress has been as well. As a result, because of a law passed in 2020, just four months ago NIST, the National Institute of Standards and Technology, finalized and published its Artificial Intelligence Risk Management Framework. It's an extraordinary website. Use Bing. Heck, use Google. Just find it and look at it. It's worth the time. They have terrific explainer videos. At its heart, it creates a new foundation, a new framework, for how institutions that create and deploy AI should govern it, should measure and manage it, should mitigate problems. At its core, it defines what I believe is a new intellectual discipline for artificial intelligence, just as NIST has done over many years for cybersecurity, an area where Microsoft has deep experience, including working with NIST itself.
As new technology develops, new disciplines need to emerge, and it's not just about the technology itself. I remember that in my early years at Microsoft, in 1996, the European Community, as it was then called, adopted the first data protection directive. At the time, there was no such thing as a privacy lawyer. And yet today you can go to the annual meeting of the International Association of Privacy Professionals and find that it has more than 30,000 members. A new profession was born. In part because of work like this, a cybersecurity profession was born as well. And I believe that if we move quickly to adopt and implement this framework, we can give birth to what the world needs: a new profession dedicated to responsible artificial intelligence. Already at Microsoft, we have more than 350 people working in this new discipline, and this framework can help it spread faster.
That's why one of the commitments we're sharing, one that we developed in response to the meeting at the White House a few weeks ago, is that we will implement this framework across all of our services. And we'll even go beyond it, with additional strengthening of our red-teaming and safety work.
But if we want the government to go fast, and I think we should, and if the government can go fast and wisely at the same time, and I believe it can, the way to do it would be to adopt an executive order in which the federal government says it will only procure certain AI services from organizations that attest that they are applying this framework. That would send a signal to the market that this is the future we all need to embrace. And as we do so, we as a company are committed not just to applying this framework ourselves but to working with our customers so they can deploy new AI solutions in a responsible way. So that's the first of our five thoughts.
The second suggestion we'd like to offer is that we in effect do for AI what we've done for many other technologies over time: that we create and require the use of what we're calling AI safety brakes, especially for systems that control critical infrastructure. I often find with new technology that it's helpful to ground ourselves in some understanding of how technology has evolved in the past. People understandably are often fearful about the dangers that new technology presents, and that is a healthy thing if we embrace it and then act to address it.
One person who met this head-on was Elisha Otis. He invented the elevator safety brake. Think for a moment about the role of the elevator. We couldn't have modern cities without tall buildings. We couldn't have tall buildings without elevators. And yet in the 1850s, people looked and said, "What are you asking us to do? Walk into a metal box so a cable can hoist us into the sky? No, thank you. That seems very dangerous." It wasn't until Elisha Otis went to the New York World's Fair in 1854 and put on a demonstration. He went up on a platform, the cable raised him into the sky, and then he took out a cutter and cut the cable. The elevator did not fall, because he had invented and deployed a safety brake.
What was created for elevators was created for the school buses on which we put our children, for the high-speed trains on which we travel, for the electrical circuits where we have circuit breakers in our homes and our buildings, fundamentally for the safety architecture of the modern world. And there is an approach, as we describe in our white paper, to do the same thing for AI, so that especially when it is deployed to run the electrical grid, or at least manage its distribution, or to manage the water supply or the flow of traffic in our cities, we have not one but two layers of safety brakes to ensure that AI remains properly under human control.
To do that well, we need to think about a third thing. It's really the most substantive part, if you will, of what we are publishing today in our white paper. We need to develop a new multi-tiered regulatory framework for highly capable AI models. How do we do that? Well, the first thing we think it's helpful to do is to ground it in an understanding of the technology architecture for AI itself.
So you see on this next slide the technology stack for AI foundation models. It starts with the applications at the top, like ChatGPT or Bing or GitHub Copilot or Microsoft 365. They call through application programming interfaces, or APIs, to the models like GPT-4 that we're all talking about. Those models are built with the help of machine learning acceleration software, and fundamentally they depend, at the bottom, on the most advanced AI supercomputing data centers the world has ever seen, first to train the models and then to deploy them so they can be used.
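To make that layering concrete, here is a minimal Python sketch of what the API boundary between an application and a hosted model looks like. The endpoint, key name, model name, and response shape are hypothetical placeholders, not any real provider's interface; the point is only that the application never touches the layers beneath the API.

```python
import os

import requests

# Hypothetical endpoint and credential; real providers differ in URL,
# authentication, and payload shape.
API_URL = "https://ai-provider.example/v1/generate"
API_KEY = os.environ["AI_PROVIDER_KEY"]  # never hard-code credentials


def draft_slide_titles(memo: str) -> str:
    """Application layer: cross the API boundary with a prompt, get text back."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-foundation-model",  # the model layer (a GPT-4-class model)
            "prompt": f"Turn this memo into five slide titles:\n{memo}",
            "max_tokens": 200,
        },
        timeout=30,
    )
    response.raise_for_status()
    # Everything below the API (the model weights, the machine learning
    # acceleration software, the supercomputing data center) is invisible
    # to the caller.
    return response.json()["text"]
```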
One of the fascinating things I find about 2023 is that we're all talking about AI, we're all talking about ChatGPT, we're all talking about GPT-4, and nobody asks where this thing was made. It was made by extraordinary engineers in California, but it was really made in Iowa. It was literally made next to cornfields west of Des Moines, in an advanced AI supercomputing data center that Microsoft built exclusively to enable OpenAI to train what has become GPT-4. In fact, we were very open about it; you'll see information that was published in 2020. But this is the bedrock, if you will, for the power of AI as we look to the future.
So what we need to do is build on this understanding of the architecture of the technology to build an architecture for regulation. And in particular, we believe we need to think about the application of law and regulation at three layers.
The first, as you see on the next slide, is for the applications themselves. This is where there's going to be an extraordinary level of innovation: new applications, and existing applications that will use AI in new ways. But the truth is, in many instances we don't need new laws. We have the laws in place. What we need to ensure is that those laws are applied properly to the new technology being created. Think about the world in which we live. It is unlawful for a bank to discriminate on the basis of race in deciding who should get a mortgage. If a bank relies on AI to make that decision for it, I cannot imagine a day when an attorney will have a good moment in court if she or he stands before a judge and says, "Your Honor, a machine made us do it." That is not a defense to a legal obligation. But what we're going to need to do is help customers like banks understand how to ensure that they are using AI in a manner that meets their legal responsibilities. We're going to have to help courts and judges learn. We're going to need regulatory agencies with AI specialists so they can analyze the software being used, whether it's in a new drug, a new airplane, a new car, or anything else in everyday life.
And then there's a second layer, the layer we're all quite rightly talking about: the regulatory architecture for pre-trained AI models. There's a certain class of powerful models, and we'll have to decide exactly what the definition is, but as Sam Altman said before the Senate Judiciary subcommittee last week, "We do need new law in this space." We would benefit from a new agency here in the United States. We should have licensing in place so that before such a model is deployed, the agency is informed of the testing. There are safety protocol requirements the model needs to meet, there needs to be measurement, and ultimately, like so much else in life, whether it's an airplane or an automobile or, frankly, oftentimes new foods that are created, AI will require a license. And we need to make sure that licensing can move forward quickly so that we don't slow the pace of innovation, but we can do that and ensure responsible conduct at the same time.
Once one has a license, one's obligations don't end. There should be ongoing monitoring and reporting. There needs to be disclosure to the government about issues that arise. And these most powerful models, we would say, need to be deployed in a safe place: in a data center that has itself been authorized and licensed for precisely this kind of use.
Which brings us to the third layer. For the development of these models, for their deployment, and especially for the running of applications that use AI to control critical infrastructure, we should impose obligations on the companies that get a license: obligations to protect security, physical security, cybersecurity, national security. We will need a new generation of export controls, or at least an evolution of the export controls we have, to ensure that these models are not stolen or used in ways that would violate the country's export control requirements.
And just as the application operator running critical infrastructure, say, the power company, needs the ability to slow the model down or turn it off if something is going awry, there should be a second ability to do the same thing in the data center itself. That is how we will ensure that humanity remains in control of the technology.
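As one way to picture those two layers in software terms, here is a small, hypothetical Python sketch of the safety-brake idea: one brake owned by the application operator and an independent one in the data center, either of which halts the model's control loop. Every name here is illustrative, not any real product's interface.

```python
import threading


class SafetyBrake:
    """A kill switch that a human operator or an automated monitor can trip."""

    def __init__(self, owner: str):
        self.owner = owner
        self._tripped = threading.Event()

    def trip(self, reason: str) -> None:
        print(f"[{self.owner}] safety brake engaged: {reason}")
        self._tripped.set()

    @property
    def engaged(self) -> bool:
        return self._tripped.is_set()


def apply_to_grid(action: str) -> None:
    """Hypothetical actuator call, e.g., adjusting load on a grid segment."""
    print(f"applying: {action}")


def run_control_loop(model_step, operator_brake, datacenter_brake) -> None:
    """The model acts only while BOTH independent brakes are disengaged."""
    tick = 0
    while not (operator_brake.engaged or datacenter_brake.engaged):
        apply_to_grid(model_step(tick))
        tick += 1
    print("AI halted; control returned to human operators.")


# Usage sketch: either layer can stop the system on its own.
operator = SafetyBrake("grid-operator")   # first layer: the application operator
datacenter = SafetyBrake("data-center")   # second layer: the licensed data center


def model_step(tick: int) -> str:
    # Simulate a data-center monitor detecting an anomaly on the fourth step.
    if tick == 3:
        datacenter.trip("anomalous output detected")
    return f"set feeder {tick % 4} to nominal load"


run_control_loop(model_step, operator, datacenter)
```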
Now, there will be some who will say, "Okay, now we've figured out how you fit into this, Microsoft." The truth is we fit in at every layer. But this isn't just about big companies like Microsoft. It would be, I think, a huge mistake to conclude such a thing. The reality is that models will be created by many people in many ways, and applications will be developed by startups. Any company can build and operate its own data center, and in the world today a lot of critical infrastructure providers do. And then we have a number of big companies that operate these data centers at scale. In effect, for all of us, we should take a concept developed in the world of banking, KYC, or know your customer, and in the world of AI make it KY3C. The model developer needs to know your cloud: where is your model being developed, and where is it being deployed? The people who have the customer relationships need to know your customer, especially to protect something like export control compliance. And all of us need to know your content, and I'll say a little bit more about that. But the good news is that here, too, there are existing concepts we can borrow and build upon and thereby move more quickly.
There are two final aspects of our five-point plan. Number four is about transparency. This is critical. That's why today we are committing to you all, to the public, and to the White House that we will publish an annual transparency report and take other steps to make data about these models more transparent. But we need to think about more than that. The truth is that the use of AI, the study of AI, can at times be more expensive and require more computational capacity than has often been available to many people in the past. That is why we as a company want to stand up today and strongly endorse another proposal here in Washington DC: to create a National AI Research Resource. This would need to be funded by Congress. It would create a national computing capacity for the great academics in the nation's colleges and universities who are doing basic research in many fields, including the study of AI itself, so that they can stay at the forefront of what we need: American leadership in global advances in science and technology.
In fact, we would offer one suggestion to add to what has been proposed. What we really need is an international AI research resource. There is every opportunity to bring together our like-minded democratic allies and friends so that together we can invest in and rely upon this kind of resource and make it available as a public asset for the public good.
I might note, for anybody here from Congress this morning: first of all, we're really counting on you to ensure that we're solvent a week from now, but it's just possible that when you do so, there might be a little less money available to spend than people were thinking. This is where necessity meets invention. The world would be a better place, the United States would be a better place, and we will perhaps be able to do more, if we do it with others and share both the burden of bringing the money and the benefits created by enabling people to work together.
And what we do for academics and the public, we need to do in additional ways for the nonprofit sector. That's why we're also committing today that we will continue to build upon an announcement we made last week, to bring AI technology in new and less expensive ways to the world's nonprofits. That too remains vital.
Finally, as we think about transparency for AI-generated content, we're going to have to address the issues around deepfakes. We're going to have to address, in particular, what we worry about most: foreign cyber influence operations, the kinds of activities already being carried out by the Russian government, the Chinese, the Iranians. We need to take steps so that the public knows when it is getting content that is generated by AI. And we need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.
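One technical building block behind that kind of protection, offered here as a rough sketch rather than the white paper's specific proposal or any particular standard, is cryptographically binding provenance metadata to a piece of content so that later alteration is detectable. The key, field names, and "generator" label below are hypothetical; real provenance systems use full public-key infrastructure and richer metadata.

```python
import hashlib
import hmac
import json

# A real system would use per-publisher PKI certificates, not a shared secret.
SIGNING_KEY = b"hypothetical-publisher-key"


def sign_content(content: bytes, generator: str) -> dict:
    """Bind provenance metadata (who or what made this) to the content's hash."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g., "ai-model" vs. "human-authored"
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_content(content: bytes, record: dict) -> bool:
    """Detect alteration: recompute the hash and check the signature."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["sha256"] == hashlib.sha256(content).hexdigest()
    )
```

With something like this attached to published media, a reader's tools could flag both content labeled as AI-generated and legitimate content that has been altered after signing.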
Which brings me to my final point. We are going to need new forms of innovation, new forms of collaboration, and new public-private partnerships, so that we can use AI as an effective tool to address all of the new challenges and issues that AI itself will be a part of. Some of these are broad societal challenges, things like the future of work and education. In other instances, they will be very specific problems, like the use of AI to deceive others. We're committed to that. You will see us take new steps in the months ahead to use AI to protect democracy, to provide access to new skills for new jobs, and to address the climate and sustainability needs of this planet. We will do more, and I believe that many across our industry will as well.
At the end of the day, whenever we look to a future with new technology that can seem so different, I always find myself looking back and thinking a little bit about what we've learned from the past. There's a saying attributed to Mark Twain that history doesn't necessarily repeat itself, but it often rhymes. To me, there was an extraordinary day, quite possibly the most extraordinary day in the history of the American Republic. It was a sunny afternoon on August 22nd, 1787, in Philadelphia, Pennsylvania. Those in this room will probably recognize the year and say, "Oh, that's when the Constitutional Convention was meeting, when the founders were forging the document we all have today." But that afternoon, they took the afternoon off. They stopped their meetings, because almost all of the delegates walked down to the banks of the Delaware River.
They did it for a particular reason. There was a self-educated inventor, almost a tinkerer, if you will, named John Fitch, and on that day he had the first working steamboat the world had ever seen. And the founders of our Constitution immediately saw what this meant. Here was a country with rivers that were very hard to navigate if you had to go against the current: the Delaware, the Hudson, the Ohio, the Mississippi. Here was a solution. If people could harness the power of steam and invent a steamboat, they would be able to navigate rivers in new ways.
There are two things, in my view, that make this day even more remarkable. The first is that a week later, the founders who had this experience went back into the Convention in Independence Hall, and they voted unanimously to add to the Constitution a clause that would give Congress the authority to issue patents under a patent law, something that previously had been available only to the states and that was holding back the power and pace of innovation.
But there's something else that is even more remarkable when you look back at it. Poor John Fitch: his steamboat stopped working. This was before the scientific method had taken hold in engineering; he was a tinkerer more than a scientist. And once his steam engine broke, he never got it to work again. It would take 20 full years before Robert Fulton, in 1807, got a working steamboat going up the Hudson River, and the country changed. But the framework was in place for the future. And when you think about it, we had legislators at the time, we had judges who became members of courts, we had presidents who were elected, and this was before we had a steamboat. It was before the railroad. It was before electricity. It was before all the inventions that have made our world what it is today. And yet our Constitution has endured. The law has evolved. People have learned how to take the timeless values and principles that we hold and apply them to each generation of new technology. That is what we have done over and over and over again.
When I look back and see that, and when I look at where technology is going, I will say this: as a country, as a world, we have adapted before. We have gotten the best out of technology. We haven't always been perfect, and we've dealt with enormous challenges, and yet the rule of law and a commitment to democracy have kept technology in its proper place. We've done it before. We can do it again. That is what we need to go do. Thank you very much.
You've been listening to Tools and Weapons with me, Brad Smith. If you enjoyed today's show, please follow us wherever you like to listen. Our executive producers are Carol Ann Browne and Aaron Thiese. This episode of Tools and Weapons was produced by Corina Hernandez and Jordan Rothlein. This podcast is edited and mixed by Jennie Cataldo, with production support by Sam Kirkpatrick at Run Studios. Original music by Angular Wave Research. Tools and Weapons is a production of Microsoft, made in partnership with Listen.