Microsoft’s Chief Technology Officer, Kevin Scott, believes that for AI to benefit everyone, humans must be at the center of its development. His philosophy was shaped by his rural Virginia roots, where he belonged to a hardworking community that used creativity, perseverance, and curiosity to support each other and tackle practical challenges. In this episode, we talk about how a culture grounded in human values can lead to safer products, how AI can increase access to critical services like education and medicine, and what Chopin’s G Minor Ballade can teach us about AI and human connection.
Click here to find out more about Kevin's book, Reprogramming the American Dream: From Rural America to Silicon Valley - Making AI Serve Us All.
Brad Smith: I am Brad Smith, and this is Tools and Weapons. On this podcast, I'm sharing conversations with leaders who are at the intersection of the promise and the peril of the digital age. We'll explore technology's role in the world as we look for new solutions for society's biggest challenges.
Kevin Scott: You can look at AI as a potential way to transform really hard zero-sum problems into non-zero-sum ones. And I think that lens is one that we ought to use more, because that's the true benefit and power of technology: helping us have more leverage, more abundance, more of everything for everyone.
Brad Smith: That's Kevin Scott, Microsoft's Chief Technology Officer, who leads the company's AI strategy. His vision to make AI a tool available to people everywhere has roots in his rural Virginia upbringing. There he saw people with limitless creativity and grit find ingenious solutions to everyday problems. Kevin explains his guiding philosophy for creating technology, forged when he was at LinkedIn, and how he applies it today to build technology in a responsible way. And he shares the goosebumps he gets when he listens to classical music and what that reveals about AI and our future. My conversation with Kevin Scott, up next on Tools and Weapons.
Brad Smith: Kevin, you are working on an extraordinary range of issues dealing with artificial intelligence. You're really at the forefront of the field. But before we turn to that, I want to ask you a little bit about what has shaped your view of the world, because I see you bring that every day to AI. Kevin, I know your view of the world starts with where you grew up, the metropolis – town of 4,000 people, Gladys, Virginia. Tell us a little bit about growing up there.
Kevin Scott: Yeah, so I grew up in rural Central Virginia in this town called Gladys, where I think there are more cows than there are human beings, and there are many towns like this throughout the country.
Brad Smith: I do want to interject. I grew up in Wisconsin. That was true of the entire state for me. So yes, I appreciate that.
Kevin Scott: Yeah, and look, I think these are wonderful parts of the United States. My dad, my grandfather, and my great-grandfather were all construction workers. Neither my mom nor my dad went to college, and we didn't have a ton when I was growing up, but what I had were these hardworking, very creative people surrounding me. So my entire family had their job that they were doing where they worked with their hands, and then when they came home at night, they had these things that they would do, like more things with their hands. My dad made furniture and my grandfather restored antiques. They fiddled around with rebuilding cars. And so there was just always this flurry of activity around me. And so the two things that I really understood from very early on are that the people around me were incredibly industrious, they had this sort of versatility and grit, they just sort of solved problems, and that everybody was committed to their community.
Kevin Scott: So you were just sort of acutely aware of who everyone else was, and when someone needed a meal because they were sick or old church ladies needed firewood, you just sort of went and solved those problems. And I think a lot of that informs how I think about our obligation to the world right now. It's not just an obligation to each other inside of Microsoft or an obligation to the tech industry that we occupy. It's sort of an obligation to everyone and just recognizing that there are ingenious, hardworking people everywhere, and if you equip them with tools, particularly the tools of technology, they're going to go do interesting, great things in their communities and at broader scale.
Brad Smith: We are working in an industry that fundamentally was invented and built by engineers, software engineers, computer scientists, data scientists. But we're talking here about a future that needs to be built by people with a breadth of perspective that reflects many disciplines, I'll say a bigger role for the liberal arts and people who have training or interest in the liberal arts. I am sort of one of the voices of the liberal arts in the senior ranks at Microsoft. And what I have always enjoyed among many things about you is so are you. You have this extraordinary range of interests, and maybe it comes from that ability to see people tinker with so many things. What do you do when you're not thinking about AI? I think that's worth people learning a little bit about.
Kevin Scott: Look, I think the thing that my parents and my family gave me that maybe is the most defining thing about me, other than my inventive vocabulary, which you've also been exposed to, is just this almost pathological curiosity. I want to understand everything and how everything works. And it used to drive my mother crazy, because at three or four years old, I would disassemble all the appliances in the house and they'd be in various states of disarray. I tried to figure out how they worked, and sometimes I could get them back together and working again and sometimes not. And so I just am constantly tinkering with things myself. My wife calls me a serial hobbyist because I will switch from one thing to another and go super deep. And so it's been photography, or sometimes it's an obsession with cooking. I think Nathan Myhrvold, one of the previous Microsoft CTOs, is also serially obsessed with learning things.
Kevin Scott: But it also, I think to your point, includes – I went to a liberal arts undergraduate school, where I was a computer science major and I was minoring in English. And when I got to the end of my four years, I had a computer science advisor who was like, "You must go get a PhD in computer science." And I had an English advisor who was saying, "You must go get a PhD in English literature." And I didn't really know which one I was going to choose because I was equally interested in both. And it ended up being a pragmatic choice – and this is a sad statement about society more than anything else – I was broke, and I decided that my economic prospects were going to be better with a career in computer science, although I had no idea why. It was just this vague sense I had. It turns out, I'm guessing, that it was right, because I was more likely to become Kevin Scott, CTO of Microsoft – although that's a very unlikely thing in and of itself – than I was to become James Patterson or Stephen King or JK Rowling.
Brad Smith: But a couple of months ago, you and I were at a dinner together in Washington, DC with about a dozen very reputable, esteemed journalists, and we were talking about AI, as one does when one is with journalists, fielding a lot of questions. And I was sitting across the table and suddenly I heard you describing how you had been listening to classical music. Can you remember that story and share that?
Kevin Scott: Yeah. One of the things that I believe is – and so the context here is we were having this conversation about what AI may or may not subsume about the human experience. And one of the things that I really do believe is that part of what we want from each other is connection. And that the purpose of art is not just expression, but it is a desire to be connected to one another, like to the composer, to the performer, to the audience that you're sitting in, to other people who may not be in the physical audience, who are experiencing the same thing as you are. How they're experiencing it differently and how they're experiencing it similarly. And the anecdote that I was sharing is I'm a huge classical piano fan, and my very favorite composition is Chopin's G Minor Ballade, which is this unbelievably intense, complex piece of music. And even though it's my favorite, I have favorite performances of it, it's been performed thousands and thousands of times by thousands of different performers.
Kevin Scott: And so even though they all approach the notes on the page, which are the same for all of them, and they are playing on instruments that are slightly different instrument to instrument, but they all have 88 keys, they all are a piano, is sort of a piano. What they get out of that instrument and that score is very different from performance to performance. And there's some performances where I will listen to it, there's a piece of the composition where I forget exactly, I think it's bar 96, where you have this triple fortissimo that just releases all of the tension that you've been building up. And there are certain performances of this where I sort of get this goosebumps every time. And I've got to believe that is something very deeply human. I just can't imagine how an AI produces that because when I have it, I'm just in such an emotional state and it is about what that performer must be thinking and feeling when he's producing this thing that is producing this reaction in me.
Brad Smith: And I think that gets to the core of something important. For all of our enthusiasm about AI and for all of the potential we think it can bring, it's never going to replace what fundamentally makes us special as human beings. It can add to it, and augment, hopefully, what we do and what we achieve. And you see that through that experience. And then let me ask you about one other thing that you commented a bit about at that dinner. You still make things with your hands. Tell us what you've been making recently.
Kevin Scott: I've been doing a bunch of woodworking and a bunch of, funny enough, leather working so I make bags and a bunch of complicated leather goods.
Brad Smith: Like handbags and the like?
Kevin Scott: Yeah, so the briefcase that I carry around, I made myself, I make my own luggage. It's really weird, man, when I say it out loud.
Brad Smith: No, you and I were about to walk into a meeting at the Rand Corporation earlier this year and you said, "Oh yeah, I made this bag" that you had, I think on your back or in your hand at that point.
Kevin Scott: And I do it because I really enjoy understanding how the world is put together, so it helps me not take things for granted. We casually consume so much of the world, and I think we take for granted the people who have, with their hands, made these things that are integral parts of our daily life. And being able to understand how things come together, and trying to replicate some of these things myself, poorly as I might, helps me be more grateful for all of the things that I have.
Kevin Scott: And it's also a good release. You know this very well – our jobs are doing things that are so complex that your individual contribution to a thing is very diffuse, because you're one among a very large number of people trying to make a thing happen. And it usually takes a long time to get a thing accomplished. And so being able to make a thing like a briefcase or a little decorative wooden box or the wallet that I carry, which I made, is a way to do something by yourself – or in my case, with one other person that I work with – and over a short time horizon, where you can sort of say, "Oh, here, I did this thing." And so it also serves that purpose. I honestly get anxious if I don't have that creative outlet, like a way to go spend a few hours, do something with my hands, and then see what it is that I've made, because that's not the way my job as CTO works.
Brad Smith: One of the things that I think is interesting about life is we all look at the world through our own eyes by definition, and they reflect our own unique experience. And here in 2023, one question that people are sometimes asking is, "How did Microsoft and OpenAI come together?" And a big part of the answer is through you, through Kevin Scott. But what I love – I've heard our mutual boss, Satya Nadella, Microsoft CEO describe it as, "Sam Altman asked me if we would provide more support. So I asked Kevin Scott if he'd go over and spend some time and see if there was a future." You were not excited about spending that day there when you first got that, I'll just say "request," in quotes, from Satya. What were you thinking when you embarked on that day to sit down and understand what OpenAI was working on?
Kevin Scott: Yeah, I think the thing that people even here inside of Microsoft don't remember is we had a partnership with OpenAI prior to that conversation that Satya and Sam had. So they had gotten some credits from Microsoft to do some of their first training runs on top of Azure. And the thing that I was concerned about at the beginning is like, okay, are we just going to be a source of funding for compute versus having some deeper partnership? And I was a little bit skeptical even that we were on a path to making super fast progress. I've been working on machine learning things for a whole bunch of years. My first job in the industry was doing some machine learning work in advertising and search systems, and I just didn't really fully appreciate that we had gotten ourselves onto a path where things were going to sort of scale up with the amount of data and compute that we could feed to AI systems in ways that were going to make the AI models much, much more compelling.
Kevin Scott: And so when I had that first meeting, I wasn't the only one who was skeptical. And you can see the skepticism even persisting today. That period of time between when GPT-4 existed and when the world knew it existed, there were all sorts of people making all sorts of claims about this will never work or this is impossible, where we inside of the company and at OpenAI could sort of see that the things people were using as evidence of impossibility were solved problems now. And so that conversation that I had with Sam and his team at OpenAI was eye-opening in a whole bunch of ways about how they had really solved some very interesting problems. Sam has been talking quite eloquently lately about not just being able to scale up the capability of AI with more compute and data, but having that scale-up be predictable. And as soon as I had seen those two things, I knew that we were going to be able to do something very interesting with OpenAI.
Brad Smith: In a way that perhaps most people wouldn't necessarily know about or appreciate, do you think it helped you see what OpenAI could build having grown up surrounded by people who were constantly solving the day's problems by building all sorts of solutions?
Kevin Scott: Yeah, I think so. I mean, this is maybe the most important thing about the OpenAI and Microsoft partnership is we're sort of both aligned on this notion that we're building AI to put into the hands of other people so they can create with it. And so this idea of building very powerful AI and the only people who get to decide how it functions, what it gets used for, how it's steered is a handful of people in big tech companies is not nearly as interesting to me as this idea that we're building a platform for other people to create on top of.
Kevin Scott: Because I just know, and I'm sure this is your experience as well, you grow up in one of these communities that's very different from Silicon Valley or the Pacific Northwest or New York City or Beijing or pick your place where a lot of technical innovation is happening and people have different problems. They think about the world differently, they have different urgencies around things that they care about, and you want to equip everyone with really amazing tools so that they can do their best work and so they can solve problems that you couldn't even imagine. And to me, that's the exciting thing, just watching what a creative person does who has a different point of view for me makes something amazing that I find surprising and would never have built myself.
Brad Smith: And I of course appreciate personally, because I've worked with you for a number of years, that this is not something new for you. It's a real passion. A passion that has not only led to what you're doing at Microsoft, but you've literally written a book about this, “Reprogramming The American Dream,” and it's fundamentally focused on, as the subtitle says, “Making AI Serve Us All.” Your book didn't come out at the best moment in time. It arrived about the same time as the pandemic, and it's hard to do a book tour during a pandemic. But you really focus in the book on how to use AI as a tool so that we can solve the world's problems without thinking about everything as a sort of zero-sum game over scarce resources. It's really the story that you just shared here, but can you share a little bit with us about that broader perspective that you have been nurturing for several years?
Kevin Scott: I think the history of human innovation is trying to use technology and tools to turn zero-sum problems into non-zero-sum ones. And for those people who have never heard of this, there's this idea in game theory, which is a branch of mathematics that describes certain types of games, where a zero-sum game is one where there's a winner and a loser. And zero-sum problems in general, even if they're not binary games with one winner and one loser, typically have a finite number of resources in the outcome of the game. You have to apportion those finite resources to the players of the game. And I think we can sort of see in our lives that the most contentious things that we face as human beings are typically zero-sum. They're ones where we've got some restricted pool of resources and we have to figure out how to allocate them to a bunch of people who may all be equally deserving of a share of those resources. I sort of think about this in terms of college admissions.
Kevin Scott: I've got a 14-year old right now who's sort of obsessed with where it is she's going to go to school. And I just look at how many really bright kids don't get into the schools that they want because college admissions are sort of a zero-sum thing. Education tends to be a zero-sum thing because we have more educational need than we have capacity to teach. Medicine can be a zero-sum game because we have more need for medicines and therapies and healthcare resources than we have capacity.
Kevin Scott: And so the story of humanity – and this is a thing that I was super influenced by as an undergraduate student by an author named Robert Wright, who wrote a book called “Nonzero” – is that whenever we are able to take a piece of technology and turn one of these zero-sum things into a non-zero-sum one, where we take scarcity and constraints and create abundance and release the constraints, we are able to do really amazing things. And I think that if you look at the future that we're facing, even some of the things that I just mentioned like healthcare and education, you can look at AI as a potential way to transform really hard zero-sum problems into non-zero-sum ones. And I think that lens is one that we ought to use more, because that's the true benefit and power of technology: helping us have more leverage, more abundance, more of everything for everyone.
Brad Smith: And one of the things I find interesting about what you just said is I think right now around the world, there are people, including people I'm sure who are listening to this, who are going, "Uh-huh, that sounds good, but I feel like I've heard that before. I feel like I've heard you tech guys” – usually it's guys – “get too excited all wrapped up in yourselves and you create more problems than you actually solve." And then they say that's what happened with social media. And without trying to make a statement about any other company, I want to ask you this. You're not just the CTO of Microsoft. You came to Microsoft having been the Chief Technology Officer of LinkedIn. And I would argue that across the social network landscape, LinkedIn has always stood out. It's why we wanted to acquire the company as doing what you just said, creating a social network that created more opportunity and actually created benefits in very important ways. That in part was your vision. How did you bring that vision into reality at LinkedIn?
Kevin Scott: Well, it wasn't my vision originally. It was Reid Hoffman's vision, and then a thing that a whole bunch of people opted into. And I think this is a really interesting thing from a career perspective: mission matters, it matters a lot. LinkedIn's mission was to connect the world's professionals to help them be more productive and successful. And we lived that mission every day. So in the things that we were doing, whether it was building a piece of infrastructure or building a new feature or a new product, we asked ourselves whether or not that thing was going to be true to that mission. Was it going to help someone be more productive? Was it going to help someone be more successful? Was it going to help connect people around the world? I think just really building a culture at a company where you can be mission-oriented and you're always called back to that is really interesting, and it results in you having a team of people who are very passionate about a shared vision for what they're putting their labor towards.
Kevin Scott: And it helps you deal with some hard questions, because there are many things that can pull you off mission. Many, many things that seem like good ideas, that seem attractive, where an incentive exists to pull you in a direction. And if you have the culture inside of a company to always question whether those pulls that may bring you off mission are really orthogonal, versus things that make the focus of the mission stronger, I think you end up doing reasonably good things over time. It's one of the things I'm excited about at Microsoft, and one of the reasons why I have always admired Satya and the senior leadership team, and why I wanted to be the CTO of the company when Satya offered to let me do that, which just seemed like a crazy thing at the time. The mission of Microsoft as a platform company is empowering every individual and organization on the planet to achieve more. I think one of the reasons why the LinkedIn acquisition has actually been successful is that the Microsoft mission I just articulated is so close to the LinkedIn mission.
Brad Smith: And I'll grant you, it was Reid Hoffman's vision. I'll say you played an indispensable role in operationalizing it and turning it into reality, and you are doing it again here at Microsoft around AI and this whole focus that we have about building what we call responsible AI, ethical AI. You and I co-chair the Responsible AI Council. What are the challenges that you're confronting anew, today's challenges, in doing for Microsoft and AI what you did at LinkedIn on a smaller scale?
Kevin Scott: I believe I said this in my book: we as technologists don't get to do whatever it is we want. We have to be serving some interest of society. Society has to trust that we are acting responsibly and in their interest over time. Otherwise we lose the permission that we have to do what we want. And to believe that we don't need the permission of the world to do what we're doing is a very weird and naive and incorrect assumption, I think, that people have.
Kevin Scott: And so part of doing responsible AI, I think, is doing what is necessary to safely and responsibly deploy a technology that has, in my opinion, overwhelmingly more benefits than risks and harms. But it is confronting the possibility of those risks and harms head on, not being Pollyannaish or dismissive of them, and trying to figure out how you not just build the infrastructure that helps you deploy things in a safe way, but build a culture where, by the time things get to the safeguarding mechanisms that you're building – either processes or technology – the teams who built things have already considered most of what your safeguards are trying to enforce on the products they are deploying. So in that sense, I think the responsible AI program that you and I are running is partially about building safeguards, but it's also partially about building culture, and maybe that culture is the more important part of what it's doing.
Brad Smith: It was striking earlier this year because there were some skeptics, and especially in the journalistic community, which has the job of being skeptical, and I think that serves us all when they are, but they looked at Microsoft, they looked at Google and they said, "You all are going so fast, you're clearly throwing caution to the wind." When you think about that type of criticism or concern, how do you think we're doing in building the culture we need to ensure that AI is developed in a responsible way?
Kevin Scott: So in my opinion, and it's maybe not for me to judge because obviously I have a whole bunch of biases. Ultimately society will have to judge what we're doing. But I think we're doing a very good job. We're trying to listen very carefully to a very broad range of stakeholders, whether it's the work that you and I have been doing with the governments of a bunch of very important countries who are rightfully concerned about what the impacts of this technology, both positive and negative, are going to be on their citizens. And it is academics, it's journalists, but it's also the consumers of the products and the people who are going to build on top of the platform. So one of the things that we chose to do is we actually want to deploy the technology. It's very hard to figure out what it is that a platform should do or what it is that a product needs to do to benefit users unless you actually launch the product and get the feedback and sort of see how the systems behave in the real world.
Kevin Scott: And so part of our job with responsible AI at Microsoft is just being very clear-eyed about where the red lines are, like things that we will never do because it is just blindingly clear and everybody with a functioning brain can agree that X, Y, or Z is dangerous and we must prevent it from happening. And then there are like all of these things that are sort of fuzzy where the only way that we're going to figure out where we want to draw the line either for individuals or organizations or society, is to have individuals, organizations, and society exposed to the technology enough so that they can form a really good opinion about what it is we should do.
Brad Smith: Well, I know we're out of time, but I'd love to sit down and take stock of this conversation in, say, a year or two, and we'll compare where we are then with where we are now. But what I would say is this: I actually think that all of these disparate things that you've described during this conversation come together. And as we look at where this extraordinary new technology is going, that's actually a really good thing. I, at least, sleep better at night knowing that the Chief Technology Officer of Microsoft spends some time creating his own luggage, appreciates classical piano, has this breadth of interests, developed a responsible social network at LinkedIn, and grew up in Gladys, Virginia. So Kevin, thanks for the opportunity I have to work with you. Thank you for sharing some of this. The future of technology depends, in some ways and in many different places, on people who can always remember what is most important about keeping us grounded and making us human. You embody all of that.
Kevin Scott: Oh, thank you so much. You're too kind. Thanks for having me on today.
Brad Smith: See you in our next meeting.
Kevin Scott: Yeah, awesome. Thank you. Bye.
Brad Smith: See you. Bye.
Brad Smith: You've been listening to Tools and Weapons with me, Brad Smith. If you enjoyed today's show, please follow us wherever you like to listen. Our executive producers are Carol Ann Browne and Aaron Thiese. This episode of Tools and Weapons was produced by Corina Hernandez and Jordan Rothlein. This podcast is edited and mixed by Jennie Cataldo with production support by Sam Kirkpatrick at Run Studios. Original music by Angular Wave Research. Tools and Weapons is a production of Microsoft made in partnership with Listen.