Tools and Weapons with Brad Smith

Carol Ann Browne: Turning the tables on Brad Smith

Episode Summary

Brad and Carol Ann discovered that riding in an autonomous vehicle as it learns to navigate the streets of London can be a bit nerve-wracking. But these hands-on experiences are crucial to understanding the impact that AI's sudden surge has on everyday life at the intersection of technology and society. In this episode, Brad's co-author, chief of staff, and long-time colleague Carol Ann Browne puts him in the hot seat to discuss the tech issues and trends that they are witnessing together in real time. They discuss the war in Ukraine, the global economy, climate change, and why responsibility must be at the center of AI development.

Episode Notes

Brad and Carol Ann discovered that riding in an autonomous vehicle as it learns to navigate the streets of London can be a bit nerve-wracking. But these hands-on experiences are crucial to understanding the impact that AI's sudden surge has on everyday life at the intersection of technology and society. In this episode, Brad's co-author, chief of staff, and long-time colleague Carol Ann Browne puts him in the hot seat to discuss the tech issues and trends that they are witnessing together in real time. They discuss the war in Ukraine, the global economy, climate change, and why responsibility must be at the center of AI development.

Click here for the episode transcript. 

You can read more about the initiatives discussed in the episode through the links below:

(04:28) Olena Zelenska Foundation 

(07:28) Planet

(07:32) The Clooney Foundation for Justice

(11:52) Climeworks

(11:57) Microsoft Climate Innovation Fund

(15:31) OpenAI

(21:51) Responsible AI at Microsoft

(28:11) Wayve

(29:58) SEEDS

Episode Transcription

Brad Smith: I'm Brad Smith, and this is Tools and Weapons. On this podcast, I'm sharing conversations with leaders who are at the intersection of the promise and the peril of the digital age. We'll explore technology's role in the world as we look for new solutions for society's biggest challenges. When the annals of technology history are written, people will look back at this year, 2023, and really see it as an inflection point.

Carol Ann Browne: That's Brad Smith, vice chair and president of Microsoft, and yes, the host of this podcast and my boss. I'm Carol Ann Browne, Brad's Chief of Staff, co-author, and colleague since 2010. Today I turn the tables on Brad for an in-depth look at how he's thinking about the themes and trends that are shaping 2023. In this episode, we explore how technology is helping experts battle cyber attacks and document war crimes in Ukraine. We highlight some of the incredible sustainability breakthroughs we're seeing from our partners, and of course we discuss why 2023 is such an inflection point for AI and how it will affect people everywhere. My conversation with Brad Smith, up next on Tools and Weapons.

Carol Ann Browne: Hi everyone, it's Carol Ann Browne here at Microsoft. I'm Brad's Chief of Staff, and today I've decided to turn the tables on Brad. Good morning, Brad.

Brad Smith: Good morning, Carol Ann, and of course you're a lot more than my Chief of Staff. You have a hand in everything we do. You and I have co-written a book together. You have this extraordinary perspective yourself on the world. It's great to be here with you to do this.

Carol Ann Browne: Well, thank you. That's very generous. We've been working together on this very fun podcast, and I thought today I could ask you to talk a bit about how you're seeing 2023, what we're seeing unfold in the trends and themes of the coming year.

Brad Smith: Well, it's a fascinating time when you really step back, because there are really four big things, I think, happening simultaneously. We're seeing this ongoing war in Ukraine, the biggest war in Europe in 70 years. We're seeing this very unusual economic time, the biggest period of inflation in more than 40 years. We're seeing this fascinating intersection with climate change and the response to it. We're seeing the sudden explosion of AI. And all of this is happening at the same time.

Carol Ann Browne: Let's start with the war in Ukraine. This time last year we saw the buildup of the military, and we're approaching the one-year mark. What are you seeing?

Brad Smith: First of all, I think you're right to point to this time last year. It was extraordinary. Remember, you and I were at the Munich Security Conference just days before the war began, and it was so interesting because the US was sharing its intelligence, and people were really debating whether there would be a war or not. In a way, that set in motion everything we've seen since. Certainly a longer war than anyone anticipated, I think it's fair to say, although a lot of wars last longer than what people expect. And here we are a year later, and the expectation is that this war is going to continue to unfold over a long period of time.

Brad Smith: The sharing of intelligence has been critical throughout, but what is also so important for those of us who work in the tech sector is that we are using technology to detect and identify what's going on. We are using technology to defend against the cyber dimensions of what's going on. And we increasingly need to use technology to fight what's going on, not only in Ukraine but especially the cyber influence operations that are often being waged by the Russian military, in some ways against their own people in Russia, against the Ukrainian people, and against people in Europe and around the world, including the United States. So it has become a very multifaceted hybrid war, unlike any war we have seen before.

Carol Ann Browne: And the ways that we're supporting Ukraine, from protecting them against cyber attacks and what we have called evacuating their data to the cloud, to helping Zelenska's foundation raise money to purchase laptops for students who no longer have access to schools, speak to the role of technology. The headlines show the war escalating in many ways. What are you seeing now?

Brad Smith: What I would highlight, Carol Ann, is perhaps two things on the cyber front, and it's almost two distinct cyber fronts. One is the use of what we might call more conventional cyber tactics by the Russian government and the Russian military: destructive attacks on the one hand, and cyber espionage tactics on the other. In terms of destructive attacks, we're continuing to see those waged principally against targets in Ukraine. But as we've noted publicly in recent months, we have also seen them used against targets in Poland. And that's disconcerting because that is an expansion beyond Ukrainian territory.

Brad Smith: I will say, so far, defense is defeating offense. Microsoft, other tech companies, the US and other governments, and certainly the Ukrainian government are, I think, so far succeeding in countering that quite effectively. And that is very important. But we should also be on guard. If we see Russian tactics expand outside Ukraine, it will be, I think, more of what we've seen in Poland, namely destructive attacks dressed up as ransomware, so that there is some level of plausible deniability. And that's why we've been so focused on warning and working with critical infrastructure and other customers across Europe, for example.

Brad Smith: On the cyber influence front, I think we have to expect that the Russians will continue to do what they're frankly very sophisticated and good at doing, especially when they have enough time for sophisticated operational planning. And that is the launching of these campaigns that are designed to plant false narratives, as they did, for example, with the false narrative about the purported Pentagon-backed bio weapons labs in Ukraine, something that is clearly not true, and we've shown how they have sought to spread that. And we're seeing that in Russia today, to try to sustain support among the domestic population. We're seeing it against the Ukrainian population. We continue to see it in very episodic but important ways in a number of other countries as well. So we are working hard as a company and with other companies and with governments to combat all of that.

Carol Ann Browne: Another way technology is being put to use is to document war crimes, and we have an interesting partnership with a company called Planet and also with human rights organizations, including the Clooney Foundation for Justice. Can you talk a bit about that?

Brad Smith: This, to me, is one of the really fascinating dimensions of what has emerged because there are important aspects that are new and they speak to many broad opportunities for the future. Specifically, obviously, what you're referring to is the use of satellite imagery from Planet, which is this wonderful company in San Francisco that is launching more and more satellites, that is getting more and more data. But specifically, they get the real-time images every day as their satellites pass over Ukraine. And our AI for Good Lab at Microsoft has developed the AI technologies. They intake that data every day. They're able in real time now to identify if there have been further attacks and damage at schools, at hospitals, on water towers in Ukraine, and then we're able to make that data available to others who put it to important use.
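To make the pipeline Brad describes a little more concrete, here is a minimal sketch of a daily damage-detection pass over known facilities; the data access, scoring model, facility list, and threshold are all hypothetical stand-ins, not the actual AI for Good Lab or Planet system.

```python
# Illustrative sketch only -- not the actual AI for Good Lab / Planet pipeline.
# The tile access, scoring function, and threshold are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Facility:
    name: str   # e.g., a school, hospital, or water tower
    lat: float
    lon: float

def damage_score(baseline_tile, current_tile) -> float:
    """Hypothetical model call: likelihood (0..1) of new structural damage."""
    # A real system would run a trained vision model over the two image tiles.
    diff = sum(abs(b - c) for b, c in zip(baseline_tile, current_tile))
    return min(diff / max(len(baseline_tile), 1), 1.0)

def daily_pass(facilities, get_tile, baselines, threshold=0.7):
    """Compare today's imagery against a baseline and flag likely new damage."""
    flagged = []
    for f in facilities:
        score = damage_score(baselines[f.name], get_tile(f.lat, f.lon))
        if score >= threshold:
            flagged.append((f.name, score))
    return flagged

# Toy usage with fake pixel vectors standing in for satellite image tiles.
schools = [Facility("school-17", 50.45, 30.52)]
baseline = {"school-17": [0.1, 0.1, 0.1]}
print(daily_pass(schools, lambda lat, lon: [0.9, 0.8, 0.9], baseline))
```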

Brad Smith: And then there's the other side that you just mentioned, the partnership on the human rights side. Sometimes people ask me, is it really going to make a difference to document this kind of war crime? And I believe the answer is yes. In some ways it matters in real time, because it sends a message that can hopefully act to some degree as a deterrent. But more importantly, it matters for the future, for future generations, and for our own people to know that these kinds of war crimes do not need to go unprosecuted, because the data is being built up.

Brad Smith: And that's why we're taking data like this and other data and working with the United Nations itself, with the prosecutor for the International Criminal Court, and with Amal and George Clooney and the Clooney Foundation, where Amal is playing a critical role as a human rights lawyer. And that too, I think, points to the future, because the best way to protect human rights, in some ways, is to document when they're being violated. And we have such a greater ability today with data that comes from everybody's phone, data that comes literally from satellite imagery, and then the ability to harness the power of AI to go through that data and use it far more effectively than would've been the case even five or 10 years ago.

Carol Ann Browne: A significant result of the war has been the energy crisis that's being felt around the world. In some ways it feels like this green energy transition that we've been working towards has been thrown in reverse, but there actually might be a silver lining here. Do you want to talk a bit about that?

Brad Smith: Yeah. It's so interesting, and I think you're right to put it that way. Because in the short run, what we've seen is an increase in energy prices. This has disrupted the energy economy, if you will, especially across all of Europe, as the supply of Russian gas has been impacted. But when you and I and others were at COP27 in Egypt, what I felt we were watching was the rewiring of the European energy economy in front of our very eyes. It takes time to do that. But unquestionably, what this war has done is accelerate efforts across Europe: certainly more wind and solar, a resurgence of focus on nuclear power. All of this is happening as we're seeing more advances in the development of nuclear fusion. And then there's this great focus on green hydrogen.

Brad Smith: The other thing that I think is important to really think about, especially across the Atlantic, is that while climate tech leaders are spread around the world, a lot of the great ones are in Western Europe. The construction of wind turbines is one example, as are these investments in green hydrogen. And just as digital technology from the United States is critical to the future of Europe, I believe that climate technology from Europe is critical to the future of the United States. Somehow, we need to find a way to build on these respective strengths in a new aspect of the transatlantic partnership.

Carol Ann Browne: And we saw Europe's strength in climate technology last year when we visited a company called Climeworks, a partner of Microsoft's that we invest in through our Climate Innovation Fund. We took a tour of a plant to see their carbon capture capabilities. Tell us about what you saw and the future of that technology.

Brad Smith: Well, that was one of my favorite days last year, that Saturday morning in Zurich. We went out, as you say, and we toured this plant. Climeworks, to me, is doing something that's amazing. They're building this new technology that literally takes in air, captures the carbon, puts it ultimately into a liquefied form, and then can deposit it deep underground, where it can remain for many millennia. That is what the future needs. One of the interesting things that we get to do at Microsoft is help spur these new technologies by participating literally on two sides of the market. We invest to provide them with capital through our Climate Innovation Fund, and that's what we're doing with Climeworks, and then we purchase some of their output as part of our own carbon removal commitments. And when you add both of those up, it's not necessarily the size of the money being spent, although it is significant; it's really helping to stimulate and send signals to the market more broadly.

Brad Smith: I am incredibly excited about what we're doing with Climeworks really for two reasons. One is, it's a good example of where we can stimulate this development in this decade in a way that will move this faster. But the thing I love even more is, what you see in Climeworks is the kind of technology and business that can actually help us ultimately, as this century progresses, reverse the impact of climate change. Because as this carbon is taken out of the air and deposited underneath, it first helps us curtail the impact of carbon emissions.

Brad Smith: But ultimately, if you think about the world in 2050, the world in 2050 should be at a point where we brought down carbon emissions, and then if a company like Climeworks can be taking a gigaton of carbon out of the atmosphere every year, humanity has an opportunity between 2050 and, say, 2100 to actually undo the damage. And that's extraordinary. I just think it's important for all of us who live on Planet Earth to recognize we have the collective power if we choose to use it, not just to slow the impact of climate change, but actually reverse it. We need that inspiration. I think we need that level of ambition around the world.

Carol Ann Browne: And it's become a motto at Microsoft: It can be done.

Brad Smith: Absolutely. That's what I love about all the work on climate. Let's just say it needs to be done.

Carol Ann Browne: Yep.

Brad Smith: It can be done. And I just think we live in a world where so often we feel the weight of these enormous challenges. There are days when people worry about the impact of technology, but when we put digital technology and this new climate technology to work together, it can be done. We can have this positive impact that I think exceeds the expectations that most people currently have.

Carol Ann Browne: Over the past couple of months we've had the opportunity to experiment with OpenAI models ourselves, personally, and it's been very exciting to see the potential of this technology. Will you talk a bit about where you see this technology going?

Brad Smith: Well, the first thing I would say is, I think when the annals of technology history are written, people will look back at this year, 2023, and really see it as an inflection point. And I've worked in the tech sector long enough to be able to look back and say, 1990 was the year of the graphical user interface. It exploded with Windows 3.0. It had been around, but that's when it exploded. 1995 was the year of the internet. All of a sudden the browser was being used. The internet had been around, but its popular use exploded. 2007 was when the iPhone came to market with the touch interface, and that changed the way everybody lives their life, carrying around this little computer in their pocket that's called a phone. 2023, I think, is doing the same thing for AI. All of a sudden, it makes it real for people. All of a sudden, people say, "Wow, I didn't know we could do that."

Brad Smith: In so many ways, I think initially with ChatGPT, but also with many other things that are coming, people are realizing that this is a new tool that they can use in new ways, and hopefully from our perspective, do things in a better way than they did before. Of course, there's this incredibly important set of issues. Is it good or bad for it to be used in this way or that way? We have an enormous new generation of ethical issues that we'll keep working on, building on everything that we've been doing. But the starting point is for us just to recognize that there are things that we do in our daily life that we can now do better and faster than we could before. And it's just the beginning.

Brad Smith: We're seeing this in our lives as individuals, and you used the right word: experimenting. We're going to see this in our daily lives as consumers. I think it's going to be transformative in companies, in nonprofits, in governments, especially as all of these organizations recognize that they can take the problems they're working on, take the data and data sets that they already have, or focus even more on creating new ones, and fine-tune these models to help their analysts, their engineers, their scientists, their researchers all discover new insights faster. So I think it is a moment in time where we should be excited. And then of course, we have to think about the broader societal impacts, the critical impacts as well.

Carol Ann Browne: Like we say, every technology can be a tool or a weapon.

Brad Smith: We should write a book called that.

Carol Ann Browne: The unintended consequences of AI are top of mind for many people. When we talk about this technology and we see the potential and the power of it, I've noticed that people pause and think, wow, this is going to impact me personally: my job, the education of my children, the ethics around it, privacy, bias, what have you.

Brad Smith: Well, the first thing I would reflect upon is something that you noted, because you and I last year, even before ChatGPT was available, were taking some examples from the technology and sharing it with some people in important positions in governments around the world so they could start to think about what this would mean. One of the things that I think we both noticed was, people very quickly went to this space, what does this mean for me? What does this mean for my job? What does this mean for my kids?

Carol Ann Browne: Yep.

Brad Smith: Maybe they're working or they're going to be working in the future; what is it going to mean for their job? So the first thing I think we should reflect upon is that this swath of technology will probably impact an even larger group of people, and frankly, a larger group of people with more education, college degrees, graduate and professional degrees. And the key thing for everyone, I believe, in a personal way, is to think about how do I use this to do my job better? It should mean that the things that are, frankly, more routine, that may be laborious and even feel like drudgery, can be handled by the computer, and that frees people's minds. Instead of having to spend so much time chasing the right answer, people can ask, what's the right question? What's the next question? That should expand the opportunity for creativity.

Brad Smith: The truth is, these AI models, as powerful as they are, really can only provide answers that are based on data that exists already. They aren't going to tell people where should creativity go in the future. That's the spark that the human mind adds. So one of the things we need is an ongoing conversation where we all talk together and think together, how do we combine this with what is special about human creativity so we can bring these together?

Carol Ann Browne: It's not a time that we can just move fast and break things. And I've been so impressed personally at the dedication and focus on this at Microsoft. Can you talk a bit about how we're approaching our responsibility in this space?

Brad Smith: I think it's a great aspect, and it's important of course at Microsoft, but really across the tech sector. What's interesting about AI and ethics is that if you look back at the period from, say, 2017 or 2018 to now, we saw the emergence of ethical principles. We were one of the earlier adopters with our six principles, and these have proliferated, which is a good thing. And there's a lot of consistency around the world in terms of the ethical principles for AI. But of course the next step is a recognition that principles are not actually impactful unless you can figure out how to implement them and put them into practice. And so, there's so much work that is going into this.

Brad Smith: I think you see in this effort that we start with our principles. We've then created a policy, a corporate standard; we're now in version two of what we call our responsible AI standard. That puts so-called meat on the bones. It describes how, when it comes to the development of our products and services with artificial intelligence, these principles need to be applied. Then you need to figure out how to train people. You need to create engineering tools so that, ultimately, the principles and the policy can be implemented. So we've embarked on that. We then need to measure how we're doing against the policy and the principles. We need testing systems to test what are, in effect, models and code, before and as they're shipped, so that we're confident in them. We need auditing and compliance and ultimately governance systems. And that is something that business has basically been developing and improving for the last two decades, starting with financial controls and the automation of those; it's now being applied here.

Brad Smith: Ultimately, I think it’s not unrealistic to imagine a future when we even create AI models that themselves will help us, in effect, supervise the other AI models and their implementation to ensure that, in fact, these ethical principles are being respected. But when you just think about all of that, it shows how sophisticated, complicated, and important this whole endeavor is. This will not work if we sit down in 20 years and say, well, we had the right principles, but we never figured out how to put them into practice. So this next stage, which we’re well underway in addressing, I just think remains a critical part of this evolutionary process.

Carol Ann Browne: Yeah, absolutely. One thing about the sophistication, the complexity of AI: many people don't understand exactly what goes into creating an AI model. There's a training stack, there's what's called an inferencing stack, there's fine-tuning the model. Can you walk us through how this isn't just writing a string of code, and what actually goes into creating this technology?

Brad Smith: That, to me, is one of the things that's really exciting about having the opportunity to work with this space every day. And it's part of the conversation I really look forward to having with more people because we need to make this knowledge and information more accessible. I think people suddenly read about something like ChatGPT, they use it, they don't really think about, well, where did this come from? Oh, OpenAI, but how did it get created? It was made in America. As I like to say, it was made in Iowa. It literally was made in Iowa because the first step is actually to create a supercomputer data center, which Microsoft built. It requires an enormous investment and technological innovation to create the hardware layer, on which this wonderful company and these extraordinary people at OpenAI then are able to pull data from the internet, crawl the internet, and train the model.

Brad Smith: That is sort of this first technology stack. Then you have a model that can be used, and then it can be accessed by creating an API so that then even other applications can be built on top. It then needs to be deployed, the inferencing stack or the deployment stack, and of course that happens in our data centers around the world, but this is no simple task. The use of these models is far more computationally intensive than say, something like a search engine or many other uses of data centers and the like. It requires this extraordinary effort, that we've been engaged in across Microsoft for many months, to deploy the technology so that it literally can be used around the world. And then in these data centers, people can access the model and they can build on top of it.

Brad Smith: To some degree, people can build simple applications or they can open up the features in the model. But in other instances, there's this third dimension, and that's the work that organizations will do so that they take their data set and are able to fine-tune the model, so that the model can evolve and help them use their own data in a more effective way. The interesting thing about this is that that particular change to the model is not something that an individual, say, government or company or nonprofit necessarily wants to share with everyone else. So you have this third layer, and it's very different from most software use, say like an operating system, where it's critical to avoid fragmenting the operating system.
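To make the layers Brad walks through a little more tangible, here is a minimal sketch of how an application builder might sit on top of them; the client class, method names, and parameters are hypothetical, not any particular vendor's API.

```python
# Illustrative sketch of the layers described above; the client class, method
# names, and parameters are hypothetical, not any specific vendor's API.

class HostedModelClient:
    """Stands in for an API exposed on top of a pre-trained base model."""

    def __init__(self, base_model: str):
        self.base_model = base_model

    def complete(self, prompt: str) -> str:
        # Layer 2: inference against the deployed model in a data center.
        return f"[completion from {self.base_model} for: {prompt!r}]"

    def fine_tune(self, examples):
        # Layer 3: an organization adapts the model with its own data.
        # The resulting variant typically stays private to that organization.
        return HostedModelClient(base_model=f"{self.base_model}+custom")

# Layer 1 (the training stack and supercomputing infrastructure) happens long
# before this point; application builders start from the hosted base model.
client = HostedModelClient("base-llm")
tuned = client.fine_tune([("example question", "example answer")])
print(tuned.complete("Summarize this incident report."))
```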

Brad Smith: In this new world, you end up with many different permutations. So there's this whole aspect of technology development that actually will have important implications for technology deployment, for economic activity, for ensuring that all these principles ethically are complied with. We're just on the first minute of the first hour of the first day of even helping people learn about and think about what this all means and how it needs to move forward.

Carol Ann Browne: Last year we rode in an autonomous car in London and watched an AI model learn in real time. It was like sitting in the car with my teenage driver, hitting curbs, maybe speeding a bit.

Brad Smith: I think your kids and my kids may have done that. I know that I had one child that definitely hit a curb. This car did not, but you are so right. That was such a fun event with Wayve. I was sitting in the back seat with their founder and you were sitting in the front seat. At one level, when I got in the car, I thought, oh, this driver has the easiest job in the world, and then I realized, no, he actually has the hardest job in the world because, basically, as with machine learning in general, he would have to intercede every time the car was making a mistake. So he had to know when to do that, interrupt the flow, and then the machine learning system learns from each of those mistakes.

Brad Smith: This was the first time they had driven that car in a part of England that has speed bumps. So every time one would come up, you'd have to sort of help the car figure out how to slow down and navigate and then speed up again. And that's what you realize is at the heart of all machine learning: it's really learning from this experience. The other thing that I'll just always remember was, we were coming up to a crosswalk, and literally there was the proverbial scenario that everybody worries about with an autonomous vehicle. There was a mother pushing-

Carol Ann Browne: A baby carriage.

Brad Smith: Literally a baby. I'm like, "Oh my gosh, was this a setup?" And I'm like, "We better do well here. He's going to have to be ready to jump in." And he didn't need to. The AI system already knew that was a moment to stop. But what it really speaks to, in part, I think, is the opportunity that we have here. There's so many interesting scenarios, and one of the things that you and I get to do over the course of about a year is just sort of see this in different places, and that's where you see the power and the potential for something like AI to do some real good for the world.

Carol Ann Browne: You were in India last year as well, and you experienced a project called SEEDS. Talk about that.

Brad Smith: This was another one of those days you look back on. I was there, and Trevor Noah was there as Microsoft's chief questions officer. That's the phrase that you developed, which was wonderful, because he gets to work with us and we get to work with him on these things. And we're at this exhibit in New Delhi with this extraordinary nonprofit SEEDS. I think it stands for Sustainable Environment and Ecological Development Society. What we're doing with them is, again, using satellite imagery. The satellite imagery captures the rooftops of every home in India, and our AI for Good Lab, in partnership with SEEDS, has built an AI system that is able to draw inferences based on what it identifies as the rooftops, and use this to adapt to climate change and protect against natural disasters.

Brad Smith: So for example, if the system and the satellite see a house where the roof is made of thatch, the inference is that the walls are probably made of mud, and that house is going to be especially susceptible to a storm, to wind and rain. So when predictions of those kinds of storms come in, SEEDS can work with local communities and evacuate the people in those homes sooner than would otherwise be the case. And in a similar way, if a roof is made of metal, tin, aluminum and the like, it's able to draw an inference that the walls are probably made of, say, concrete. And that house is more susceptible to heat in a heat wave.
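The rules Brad describes amount to a simple mapping from what the imagery shows to the hazard to prioritize. Here is a minimal sketch of that kind of rule; the categories and mapping are illustrative, not the actual SEEDS or AI for Good Lab model.

```python
# A minimal sketch of the kind of inference rule described above; the categories
# and mapping are illustrative, not SEEDS' or Microsoft's actual model.

def infer_risk(roof_type: str) -> dict:
    if roof_type == "thatch":
        # Thatch roof -> walls likely mud -> prioritize evacuation before storms.
        return {"likely_walls": "mud", "priority_hazard": "storm, wind, and rain"}
    if roof_type in ("metal", "tin", "aluminum"):
        # Metal roof -> walls likely concrete -> prioritize heat-wave outreach.
        return {"likely_walls": "concrete", "priority_hazard": "heat wave"}
    return {"likely_walls": "unknown", "priority_hazard": "unknown"}

for roof in ["thatch", "tin", "tile"]:
    print(roof, infer_risk(roof))
```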

Brad Smith: What I was especially moved by was meeting these students, these kids, 15-year-olds, 16-year-olds, who go out and take this data to the owners of those homes. They show them how you can take, say, a burlap bag that has been used for the shipment of vegetables, and cover the metal roof and bring down the heat level and better protect the people who live inside. This, to me, is in some ways the promise of the future. That is how you take something like AI and enable an organization to do far more than it could do without it. So I think it's right. We should always worry about the problems and address them. Do not leave them sitting in a corner or look the other way, but let's also draw on the inspiration. Let's recognize what we can do together when we truly put this new technology, AI in the cloud, into action to solve the real-world problems that we all think about every day when we get up. That's the opportunity that 2023 brings, especially as we go through this new AI inflection moment.

Carol Ann Browne: I think that's a great note to end on. It's going to be a big year.

Brad Smith: It's going to be a big year, big problems, even bigger opportunities and solutions. I'm excited. That's why we continue to get up, and frankly, work so hard and keep doing this every day.

Carol Ann Browne: Thanks for your time. It's been so fun turning the tables on you.

Brad Smith: Anytime, Carol Ann, any day.

Brad Smith: You've been listening to Tools and Weapons with me, Brad Smith. If you enjoyed today's show, please follow us wherever you like to listen. Our executive producers are Carol Ann Browne and Aaron Thiese. This episode of Tools and Weapons was produced by Corina Hernandez and Jordan Rothlein. This podcast is edited and mixed by Jennie Cataldo, with production support by Sam Kirkpatrick at Run Studios. Original music by Angular Wave Research. Tools and Weapons is a production of Microsoft, made in partnership with Listen.