Ready to go? Great. Hey, good afternoon. My name is Stephen Rodriguez, and I'm a fellow at New America's International Security Program. Today I'm joined by Scott Hartley, author of the new book The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World. It's interesting — Scott and I have known each other for a little while now, and when he first approached me about this, I immediately thought of my freshman year in college, when I was deciding what to major in. I loved history and I loved political science, but I quickly thought, you know, I want to be sure I get a job coming out of college that doesn't involve academia or writing papers no one cares to read. So I decided to meet the professional world halfway and major in business. But when I read this book and talked to Scott about it, it made sense, because my career, like many of yours, has taken different pivots and turns, and I realized now what I didn't know then: that in many ways the liberal arts, or even business, have massive applicability to the technology world, even to innovation in general.

So with that, I want to turn it over to Scott so he can tell you more about his book. I know Scott from our time in New York. He, like myself, has spent time in the venture capital world. He's also worked for Google and Facebook — pretty much a dream resume — and, importantly for this book, he spent some time as a Presidential Innovation Fellow, so he had the fortune, or misfortune, of getting some good experience in government, learning about and driving innovation in large enterprises. So Scott, maybe to start, tell us a little bit about yourself and why you had the idea to spend so much time writing this book.

Yeah — well, first of all, thank you to New America and Stephen for having me here today, and to all of you for spending your lunch with us.
My impetus for writing the book really came out of observation. I grew up in the Bay Area, in sort of the boom and the bust of Silicon Valley, but I always carried this interest in public policy and the fuzzier side of things. And yet I found my way into Google, and then into Facebook, and then onto Sand Hill Road, where I was working at a venture capital firm. In venture capital, your job is effectively to meet with entrepreneurs on a day-to-day basis, track where you think innovation may be going, and then work with the other partners in your firm to place investments in the companies you think have promise. And the observation I had was at odds with the narrative coming out of the media — that basically Silicon Valley was this walled garden, this monolith of techies creating innovation, and there were no other contributors to that world. I think if you go back to the 1990s, when the groundwork and infrastructure were being laid for the web and for the technology we have today, it may have been truer that it was pioneered by techies. But today, as Marc Andreessen has said — and as has been propagated throughout the media — software is eating the world. I like to flip that and say, really, software is feeding the world. It's become about the application layer — about how we apply that technology meaningfully — and it's no longer the case that you have to be the techie in order to participate in Silicon Valley. So I was sitting on Sand Hill Road, taking five or so meetings a day with different entrepreneurs, and I would say at least half of those entrepreneurs were people coming out of all these different lives, from fashion to finance to media to defense.
They were coming out of different academic backgrounds, applying what they knew from sociology or anthropology or economics, often partnering with a techie to bring the new tools to bear on old problems — things they understood deeply. And the thesis of the book is that as code has become more commoditized, the comparative advantage in how we apply the tech meaningfully often comes from people coming out of these other backgrounds, these other experiences, who have the passion and the interest to apply the technology to what they know.

The terms fuzzy and techie actually date back to the 1960s and '70s on the Stanford campus. It was this lighthearted moniker — hey, are you more of a fuzzy, or are you more of a techie? Really just a jocular set of terms: fuzzies referred to people who studied the arts, the humanities, or the social sciences, and techies, more self-explanatory, were people who came out of the engineering world or computer science. And the book is not about the opposition of these two — it's not that I'm a fuzzy and you're a techie, one or the other. Because really, if you look within any of these programs — within the social sciences, for example — you've got statistical software you have to master these days. You're often working with big data and data sets. You're engaging with independent and dependent variables, and if you're doing deterrence, you're working with game theory and things like that. So the fuzzy subjects are not uniformly fuzzy. And then you go to the techie side and look at mechanical engineering these days, and you've got the advent of design thinking, which is basically user psychology.
There's a lot about user experience design, and about know-your-customer experience interviews, which are kind of sociological or even anthropological in how they work. So you start peeling back these terms and you realize we're actually all a bit of both, and it's about the confluence of these two things.

The secondary part of the book refers to how the liberal arts will rule the digital world. This takes a concept of the liberal arts that I think has to some degree been thrown under the bus in Silicon Valley. For example, Marc Andreessen has said that those with soft skills will work in shoe stores. I have nothing against shoe stores, but I don't think that's true. I don't think that English majors will necessarily be baristas. Or Vinod Khosla, one of the founders of Sun Microsystems, has said that basically the liberal arts have no value in the future economy. But first of all, if we look at the classic definition of what the liberal arts are, they incorporate mathematics, they incorporate logic, they incorporate the natural sciences. So if we look at some of the most emergent fields in the venture capital world — CRISPR and gene sequencing, for example — these are things that come out of the natural sciences, out of the study of biology without direct vocational application, but with the exploration, the curiosity, the passion tugging on the mind. Those are the premises of the liberal arts that I mean when I say these are the things that will rule the digital world. So that's the rationale behind why I wrote the book, and the overarching thesis.

Great. So if you listen to the podcasts or check the websites or watch any of the major news networks, you would feel that we're in a world that, if it's not consumed by software, is consumed by AI and automation.
These are big economic messages today, where people, including the administration, are talking about the role of automation in taking jobs away or bringing jobs here. So, piggybacking off the comments you just made: how or why should a world that's consumed with artificial intelligence and automated processes even care about the liberal arts and things related to anthropology or history or political science?

Yeah. So, looking kind of empirically across the Valley — and when I say Silicon Valley, I don't mean the geographic location, I mean this technological layer writ large, because as we're seeing at 1776 down the street here in DC, and in places like Lexington, Kentucky; Chattanooga; Denver, Colorado; and a bunch of places in between, the access to information and the democratization of a lot of these tools, not to mention the application layer of these technologies, has meant a much broader spread of where technology exists. But the reason I think this still matters: in 2013, Oxford came out with a study that said 47% of US jobs were at high risk of machine automation. This was the rise of the robots, and Martin Ford's book, and thinking about the reality that so many jobs were at risk. Then in January of this year, the McKinsey Global Institute came out with a follow-on study at a more granular level. And they said, wait a minute — let's look at 800 occupations. Let's look at what those occupations are comprised of, because all of our jobs consist of many, many tasks.
And if we divvy up occupations by task, and then attempt to match tasks against what machines can currently do and what we project them to be able to do down the road, they found that about 5% of jobs could be fully automated. That's still a non-trivial number — 5% has massive implications for all sorts of social reasons, and for questions of basic income that are commonly brought to the forefront in the media — but it's not 47%. What they also found was that for 60% of jobs, about 30% of the tasks within those jobs were things that would change, generally over an 8-to-20-plus-year time frame. So I think the reality we're living in is much less about a coming wave of full, wholesale automation — AI taking over jobs — and much more about, if you flip the letters, going from artificial intelligence, AI, to intelligence augmentation, IA. That's something to really think about. In the automotive world, we look at self-driving cars and we think, okay, over what period of time are all of our vehicles just going to be humming around the roads by themselves? But we've been undergoing this process for a long time: from manual to automatic transmission, to park assist, to anti-lock brakes. We're starting to see the benefits of lane guidance, and on a freeway, in a particular area with no potholes and good visibility, we'll start seeing autonomous vehicles more and more. But it's not going to happen overnight. And if you look at that progression, it's much more a serial progression than it is all or none. I think the same is true in our workforce. Just as we have driver assist in the car, we're much more likely, I think, to have desktop assist in the office than we are to have robots taking our jobs whole hog. And so one of the interesting things in the book is to actually unpack this idea and ask: where are the tasks within our jobs that can be taken away?
Generally, a best practice that we have can become a machine practice. What I mean by that is: if you have a best practice, it's generally something you've done before — you know the process. It can be scripted. If it can be scripted, it can be programmed. And if it can be programmed, obviously, there's a machine that can do it. So if you look within any job and ask, okay, what are the best practices, the scripted tasks? Those are generally the simple things, the highly routine things, and those can be moved off to machines. But what that does is free up the human in that role to focus on the complex tasks. And on the complex tasks — one of the people I interview and talk with is David Deming, who's up at the Harvard Graduate School of Education. David Deming talks about social skills and soft skills as being this dark matter in the educational world, this dark matter in the employment world. It's something we can't really quantify. We know it's important, but how do we put our finger on it? It's like dark matter in the universe: we know it's out there, but we can't quite say what it is. And what he argues is that in a world where all the simple tasks are scripted and eroded by machines, and what's left are the complex tasks, we actually specialize more — you may be good at one thing, I'm good at something else — and we start task-trading more frequently. In that process of task trading, we encounter friction; there's a transaction cost associated with it. And what reduces that transaction cost, what reduces the friction, is actually soft skills, social skills — things you learn through tugging on the mind, being collaborative, being empathetic to another's position.
And I think there's a really interesting second-order story, maybe, to this whole wave of AI and automation. In the legal space, for example, Dana Remus and Frank Levy ran a study that I talk about in the book, where they said: let's look at the legal profession and figure out which tasks are scripted and what we can take away. They found that 13% of legal tasks could be scripted and taken away. But that doesn't mean that 13% of lawyers disappear. It means that within each job there's a small subset of tasks — like reading a 500-page contract to check for capital letters or no capital letters — that we can obviously outsource to machines. And really, I think this gives some new scale advantages: the same way that Amazon Web Services, AWS, has empowered smaller startups to have the same scale efficiencies as big companies, with automation and AI we'll start seeing a small law firm able to compete with a big law firm, because they've got the same tools as if they had 50 associates. So those are some of the reasons why, increasingly, I think this training in the liberal arts — training in collaboration, in communication, in empathy, some of these soft skills, this dark matter — becomes really important in a machine-led world.

I can definitely tell you I would personally pay a lot of money for whatever dark matter helped me read government contracting language faster. That would be a valuable skill. It's interesting you mention AI and automation, because, pivoting toward the field I'm in now, international security, I've often thought about unmanned systems, right? The Predator or the Reaper or Global Hawk, or these kind of terminator-like unmanned tanks that are going to go out and wipe everyone out.
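The "reading a 500-page contract for capital letters" task mentioned above is a concrete instance of a scripted task becoming a machine practice. A minimal sketch in Python — the sample contract text and the all-caps rule are illustrative assumptions, not drawn from the Remus and Levy study:

```python
# Flag clauses written entirely in capital letters (often disclaimers) --
# the kind of routine, scriptable check a machine can absorb.

def find_all_caps_clauses(text):
    """Return (line_number, line) pairs where every letter is uppercase."""
    flagged = []
    for number, line in enumerate(text.splitlines(), start=1):
        letters = [ch for ch in line if ch.isalpha()]
        # Flag lines that contain letters, all of which are uppercase.
        if letters and all(ch.isupper() for ch in letters):
            flagged.append((number, line.strip()))
    return flagged

# A hypothetical three-clause contract fragment.
contract = (
    "This Agreement is made between the parties.\n"
    'THE SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND.\n'
    "Each party shall keep the terms confidential."
)

for number, clause in find_all_caps_clauses(contract):
    print(number, clause)  # only the all-caps clause is flagged
```

The point of the sketch is the shape of the argument, not the rule itself: once a review step can be stated this mechanically, it can be handed to a machine, leaving the lawyer the complex, judgment-heavy tasks.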
And I had a conversation with someone recently, and they reminded me that for every one or two Predators — these unmanned planes we use for combat and non-combat missions, primarily overseas — I think they said up to 80 people are required to keep those things in the air. So maybe by having a Predator in the air, certain individuals no longer have the job they did, but now there's this whole new set of job skills — which, I might add, the Air Force is massively undermanned in filling right now — to keep these unmanned systems going. So maybe the warfare isn't really unmanned; maybe it's almost over-manned — it's just that no one's on the plane itself. So thinking about international security, and even Washington, DC: who in the government today, whether individuals or agencies, gets this paradigm, in your opinion? You spent time as a PIF, a Presidential Innovation Fellow. Have you run into people here in Washington who seem to understand this?

Yes — we were talking before, and I think it's an interesting concept. Todd Park, working with President Obama — he was President Obama's second CTO — brought to fruition the Presidential Innovation Fellows program, and the attempt was to bring technologists from outside Washington into the Beltway: outside perspective, tech perspective, some ability to bring design and product innovation to different agencies. And, much like the White House Fellows program, fellows come in and are kind of attached to the CTO within a particular agency, and work to make that agency a little more efficient, or think about outside tools it could apply — focused on data visualization, or on digitizing physical records at the National Archives, for example.
So that was an example, I think, of importing techies. And we were chatting earlier about the flip side — whether it's exporting fuzzies or, I think more accurately, exporting problem sets: a depth of understanding of particular problems from places like Washington, where we have a finger on the pulse of coming legislation and regulation, and of data that's walled up in government agencies and that can be made open and accessible through application programming interfaces, APIs, so developers can pipe that data into new tools. Those are ways I think we can start, quote unquote, exporting the fuzzy the way we have imported the techie. A really great example of this was the former Secretary of Defense, Ash Carter, bringing the defense world out to Silicon Valley. Through DARPA and many other programs, we've always tried to bring technologists into Washington; I thought his attempt to bring DOD and defense out to Silicon Valley was interesting. Through that process he created DIUx — the Defense Innovation Unit Experimental, a sort of incubator out at Moffett Field in Silicon Valley — and they've started creating all sorts of programs, really exporting problem sets, exporting an understanding of the particular needs the defense and security world has. One of the outgrowths of that is a partnership with Steve Blank, who's an entrepreneurship professor and a pioneer of the lean startup method — he was actually the professor of Eric Ries, who wrote the book The Lean Startup. Steve Blank is really the pioneer of this build-measure-learn mentality. And working with two former Army colonels, Steve Blank started a program called Hacking for Defense. There's now a second one called Hacking for Diplomacy.
They're basically courses that have now rolled out to, I think, around 13 different colleges — you mentioned Texas; Texas A&M; JMU here in Virginia has this program as well. But basically what this does is take problem sets from particular agencies or particular teams within the military — for example, Navy divers needing better information about biometric data — and pair each problem with a mixed team: computer scientists, electrical engineers, people from techie fields, alongside PhDs in political science and people studying international relations. These composite teams work together for a 10-week quarter on whatever problem they're assigned. And the innovations in these short sprints have been really amazing — by exporting the problems and crowdsourcing, if you will, different perspectives on how we can fix them. I think that really gets to the heart of the book: not just bringing techies into Washington, but taking some of the things we really understand here and exporting them as well.

Coming regulation is another example. If you're an innovator, somebody trying to build a company, having information about where the world is changing matters enormously. So much of the venture capital seat is not just about the problem and the solution you have, but the timing: why now, why is it important today? Because if you're right at the wrong time, you're wrong, right? And so I think one of the big things Washington could help with is helping entrepreneurs understand the timing of particular things.
For example, there's coming regulation in trucking: later this year an electronic logging device becomes mandatory, so for the roughly 3 million trucks on the road, suddenly you've got to have logging information that's not just paper notes kept in a spiral notebook — when you're sleeping, when you're driving, safety regulation on when you can drive and how many hours a day. Now there's this mandate that you need an electronic logging device, and there's a company out in Silicon Valley called KeepTruckin. It was founded by a Pakistani American from Texas who studied political science at the London School of Economics, and whose family back in Pakistan knew the trucking industry. He said, I'm going to leave my cushy job at Khosla Ventures, working for Vinod Khosla — case in point, here's a liberal arts guy who worked for Vinod and then went and started a company that's doing very well. So Shoaib left Khosla Ventures and founded KeepTruckin. What they've created is an IoT device — an internet-of-things device that attaches to the engine and provides real-time information about when the truck is running and what the RPMs are on the engine, so whether the truck is loaded or not loaded. And they're starting to cultivate all this data into real-time shipping information across the US: which trucking lanes are highly utilized? Which ones are deadheads, where a truck drives one way loaded and drives home unloaded, with no shipment? So those are the kinds of things where information on changing legislation, changing regulation can be a really big driver of innovation.

It's interesting, because I've had a number of friends here in DC who've gone out to work for a technology firm or a venture capital firm, and inevitably they've gone into what we call GR positions — government relations positions — where they're essentially the in-house lobbying person for that firm.
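Since the transcript describes what the engine-mounted device reports rather than how, here is a toy sketch of summarizing that kind of telemetry — not KeepTruckin's actual system; the RPM threshold, the one-reading-per-second shape, and the field names are all illustrative assumptions:

```python
# Summarize timestamped RPM readings from a hypothetical engine-mounted
# IoT device: was the engine running, and at what average RPM?

IDLE_RPM_THRESHOLD = 500  # assumed cutoff: below this, treat engine as off

def summarize_telemetry(samples):
    """samples: list of (timestamp_seconds, rpm) tuples."""
    running = [rpm for _, rpm in samples if rpm >= IDLE_RPM_THRESHOLD]
    return {
        "engine_running": bool(running),
        "samples_running": len(running),
        "average_rpm": sum(running) / len(running) if running else 0.0,
    }

# One second of idle, then three seconds under load.
samples = [(0, 0), (1, 900), (2, 1500), (3, 1200)]
print(summarize_telemetry(samples))
# {'engine_running': True, 'samples_running': 3, 'average_rpm': 1200.0}
```

Aggregating summaries like this across a fleet, along with location, is the kind of raw material from which utilization and deadhead-lane statistics could be built.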
Whether it's Uber or, as you mentioned, Andreessen Horowitz — places like this. And that's kind of bothered me, only because, to your point, I thought: well, shoot, there's got to be a lot more value in someone who's spent time here in DC — not just reading the tea leaves on Capitol Hill, but really understanding how government works. There's got to be real business value beyond being a congressional advisor or a lobbyist — real value on the business side that these men and women can bring to these firms. And I think you've touched on some of those people in this book.

Yeah. I mean, it's not just the applicability of the subject matter — it's the applicability of the methodology. One example — and this, again, is one of the empirical observations that cut against the grain of this narrative in the media and in the Valley about tech being this monolithic place of techies: look at Sheryl Sandberg, economics major. Susan Wojcicki, who runs YouTube — history and literature major. Steve Case here in DC, who founded AOL — political science major from Williams College. Alex Karp, who runs Palantir, the big data company — he has a PhD in neoclassical social theory. Peter Thiel, who loves to hate on the liberal arts — philosophy degree, law degree. So you go down the list and there are a lot more people than you would expect with these seemingly irrelevant degrees. Pinterest, if anyone likes Pinterest — Ben Silbermann was a political science major at Yale. Thumbtack — political science again. And there are all these examples. Stewart Butterfield is the founder of Slack, a corporate communications platform that's trying to become an alternative to email.
It's a little more efficient — you can tag people's names, kind of like Twitter, and have attribution of subjects and people. Stewart Butterfield was actually the creator of Flickr, the photo-sharing app, back in the day. But before that, he was a philosopher — he did both his undergrad and his grad studies in philosophy. And in the process of creating Slack — we often look at these companies and say, man, if only I'd had the foresight to build that five years ago. But in the lean startup methodology, you realize people don't have the foresight; they just start doing something, and they iterate their way toward a truer and truer version of what works. Slack started as a gaming company called Tiny Speck. In the process of building the game, they built an internal communications tool to communicate among the engineers. Over time they realized maybe that tool had more value than the gaming company, and they started iterating toward it — and that became Slack. Stewart Butterfield attributes that process to the methodology of philosophy. Philosophical inquiry — think about sitting at a Harkness table, a round table, debating ideas, trying not to judge people based on their positions, but iteratively getting toward truth, or the closest approximation you can get. That's in many ways the same as a product development process: how do you get closer and closer to product-market fit, whatever that is? You've got to iterate your way toward it. So there are so many examples of these methodologies coming into play within the product development process at a place like Google, at all these companies.

That reminds me of a conversation I had one time.
I met with a very, very senior executive at a household-name technology firm that shall remain nameless for this conversation. And this person, proclaiming the wonders of their company, said: well, we only hire people who know how to code. Okay, great. I said, well, I learned to code in the '90s — I learned Pascal via VHS tape, growing up in Europe. Does that count? And this person said, well, no, it's got to be a current language. I said, great — do you know how to code? Well, no. And I was like, that's exactly your point. I scratched my head, saying: look, you're a key driver of value, and presumably of revenue, for this major technology firm — and you're a fuzzy.

What's interesting is that the tools for learning the new techie skills have become more democratized too. If you look back to the '90s, and well before that, the syntax you had to master to be a techie was close to the metal — highly complex syntax. We've moved farther and farther away, more and more abstractions up from that, toward natural language. We're not there yet, but I think the ultimate level would be plain English, like we have with Alexa or Siri: if those things actually worked well, we'd be able to command access to our data. But the big bottleneck is the ability to ask the question, not the ability to have the data. Voltaire — if you go all the way back — has a great quote; to paraphrase: judge a man or a woman by their questions, not by their answers. Increasingly, if we want an answer, we'll ask a machine; if we want a question, we're going to have to ask a smart human. So those are some of the things I think about as these tools have become more and more democratized — back to your point — even when they're building these tools, like Codecademy, for example.
I don't know if everyone knows Codecademy, but they've got 25 million-plus people learning to code through these online dashboards, where you follow directions and put code into a developer environment. In the process of building Codecademy, Zach Sims, who founded the company — a dropout from Columbia, also a political science major; I have to give a couple of shout-outs, I'm a political science guy myself — was looking at hiring top coders, top people out of programs at Caltech and MIT and all these schools, people coming out of CS programs. And he said: here are the languages I need to build Codecademy. And none of them had the requisite coding language skills. They had great theoretical grounding in C++ and the things that taught them the building blocks, but they still had to go to General Assembly, still had to upskill in some of the latest languages — learn Ruby on Rails, go to a coding workshop at night. So this concept that we can graduate with any slip of paper, whether it's a STEM slip of paper or a political science slip of paper, and have that be carte blanche to relevance in the future economy — I think those days are numbered. It's much more about keeping our education in beta, keeping our education a work in progress. And that's something else the book tries to myth-bust: there's been this narrative that if you're a techie, if you study STEM, you have this carte blanche, this key to relevance in the future world. But it's a moving target — on a monthly or yearly basis the coding languages and the tools change. So really, it's about the ability to be a smart questioner, not just to have the answers.

Yeah, that reminds me of an old anecdote I heard: going to undergrad, getting a bachelor's degree, teaches you how to learn.
And getting a postgraduate degree teaches you what to learn. I like that idea of education in beta, of continually learning. And I'd be interested in your thoughts on this too: you have these new, oddly enough primarily technology-related, trade schools like General Assembly or Codecademy — or, in a different vein, Khan Academy — designed to quickly and relatively easily help people learn specific subject matter without having to go spend $50,000, $100,000, $200,000 for a master's or a master's-plus-level education. That's become a personal interest of mine, especially in international security, where a lot of times my fear is that people don't ask the hard questions because the answers are going to be really ugly. I reverted to my true form and went to Georgetown to get my master's in foreign service, and one of the first things they taught us was that policymaking is about choosing the least bad option. A lot of times, to even get to those options, you have to ask very hard questions — even the questions you know your boss or your peers aren't necessarily going to know how to deal with, or have a great response to. But in asking these questions, maybe you're going back to the Socratic method.

Yeah. And I think the narrative in the media has been there for good reason. It's been a sort of triple threat: the 2008 financial crisis and rising unemployment; the rising cost of student debt; and this coming wave of automation, with its fears of technological transformation and job loss. That triple threat has sharpened these questions of: what is the purpose of education? Is it all about vocational relevance? There is a near-term million-job gap in STEM, so there is a very real need for technical literacy.
The book's not against technical literacy. I think it's myth-busting this idea that they're mutually exclusive, that if you study philosophy, you know nothing about coding. Well, if you read James Joyce, you should probably also learn a little JavaScript. It's about blending these two sides. And it goes back to 1959, and well before that. Charles Percy Snow, C.P. Snow, gave what was called the Rede Lecture at Cambridge University in the UK. And it was dubbed the "two cultures" lecture because he talked about this opposition of two cultures, the sciences and the humanities. And he basically said, if we have people learning the laws of thermodynamics, they should also be reading Shakespeare, and vice versa. So it's not a new idea to blend these two. But I think that with the advent of big data and AI and all these buzzwords that we see on a day-to-day basis, there's been this notion that these are magic new secret sauces that are going to change our world. And somehow with enough data, the answers are going to appear. And with AI, our jobs are going to disappear. In actuality, going back to Plato and Sir Francis Bacon, information is not the same thing as knowledge, and the transition from information to knowledge requires human input. And to go back to defense, there are a couple of examples in the book. You'd think in this world of big data, if you go up to Newport, Rhode Island, to the Naval War College, why do we still have war gaming? If we have all the data and we've got all the signals intelligence, why the heck do we do war gaming? Well, of course, there's a human component. You've got to have force-on-force adversarial games. You've got to see what happens and what doesn't happen and why, thinking about red teaming and all these different ideas. And there's a phenomenological, experiential component to that. That is the reason why, even in this big data world, we still do war gaming.
Or in the South China Sea, you've got all this signals intelligence on ships. But you hear about snafus where an oil rig has moved to different waters, and there's a bunch of ships surrounding it, and there's this moment of crisis where you have to think, is this an exercise? Is this an attack? Is this something bigger? And it's the context, in addition to the code; it's the human perspective. Like you said, to keep a drone in the air takes 80 engineers or so. That's the sort of reality behind the curtain of these buzzwords of AI and machine learning. And so those are some of the examples. Yeah. So last question, and please queue up your own questions with the remaining time we have. It's interesting you mention war gaming. I got my start in war gaming, primarily for the intelligence community. And the one war game I missed, because I was literally at another war game, was the infamous Millennium Challenge war game in 2002, where they had a Marine general named Paul Van Riper. And they were gaming out a scenario in the Persian Gulf. It was a naval scenario. And General Van Riper was the commander of the red team, the opposing force going against the various blue teams commanded by various senior military officers. And they had their own staffs, and they're executing this war game, basically saying, what if we did these certain operations? What would the enemy do? And to your point, who knows, maybe a big data approach would have figured this out. But if you had just done a Monte Carlo simulation, maybe it would have said, hey, these massive US Navy ships crush the opposing force 10 times out of 10. Well, General Van Riper said, why don't I get a whole bunch of small fast-attack boats, little boats and dinghies, and swarm this carrier battle group sitting in the Persian Gulf? And they ended up sinking the entire fleet.
And the only reason we know about it is because the US Navy said, well, we can't have that outcome. And so they refloated the sunk US Navy fleet. General Van Riper said this was completely ridiculous, speaking of asking hard questions, and walked out of the war game, and then this whole scenario promptly got leaked to the Wall Street Journal. Of course, leave it to the media to ask the hard questions for us. So I guess the final question, and then I'll turn it over to you, the audience: I've not written a book, but when I sit down to write an article or a paper, I often start with a question. Like, what question am I trying to ask? The question that you presumably asked yourself when you were looking to write this book, how did that change? What was the initial question you asked yourself, and is that different from what ended up in the final product? That's a great question. Of course, yeah. And so my lens on the book is my reality coming from Silicon Valley and Sand Hill Road and that world of startups, and seeing, like I talked about, Tiny Speck iterating its way over to become Slack. The same is true with any production, like a book. So I think the original thesis, if I think back to one of my earliest drafts, which was a few years back at this point, was that you don't have to be technical to succeed in Silicon Valley. That was the original. The title was always the same, because I loved the framing. Yeah, it's a good title. And so I knew the title from the moment that I thought of it. But the subtitle was originally "why you can be non-technical in the tech world and still succeed." And then as we got into that, we said, well, what embodies that? Well, that's really more like the liberal arts. And then other great books have come out, like Cathy O'Neil's Weapons of Math Destruction. It's a fantastic read if you haven't read it.
But it's about being an AI realist, being a realist about big data, and saying, the data is one thing, but how we collect it is another thing, where it comes from. So if you think about predictive policing, can we deploy our police forces in more optimized ways? Well, probably technology can help. But then look at the source of the data that informs where we send the police. Is there some bias in the reporting of that data? Is that data based on crime data that's reported? Well, is all crime data in there? No, because it's reported crime data. So is there bias in when and where and how certain types of crimes are reported? Some are chronically under-reported. So if you start running algorithms that extrapolate and propagate that, you can get to really binary outcomes. And so it's about asking the questions of bias behind big data, realizing the fallibility of all these tools. If you code something into ones and zeros and call it an algorithm, it doesn't become any more inherently objective than a human sitting in a room. The people creating these things are engineers in Silicon Valley, or wherever they might be, with very real biases and questions and human fallibility. So I think it's about taking a step back and recognizing those truths behind the buzzwords. Yeah. So I'm not sure if we have a microphone. Great, thank you. So if you have a question, please raise your hand, introduce yourself, and please limit your question to the form of a question. Right up here first, please. Thank you. I'm Kirtzig, retired from the US Department of Agriculture. You talked about education, about jobs. Our leader across the street over here keeps on talking about jobs, bringing jobs back, and also economic growth as we head into a financial crisis beyond imagination, apparently, if we don't get the economy moving, as I understand it. You talked a little bit about education.
Maybe I missed some of it, but are we educating in such a way that we're meeting what the next generation needs, maybe more in techies than fuzzies? I mean, fuzzy, as you said, comes out in different places, but techie is very important. And the second part of that is, how does what you said apply to the globe, to the rest of the world, where jobs are very difficult to find, where there's huge unemployment, whether it's in Egypt or Iran or China, or in France, where there's also unemployment, though you don't see it as much? Can you speak to that a little bit? Sure. Thank you for your question. So with the status quo, obviously, we've got to recognize the changing landscape, the changing world around us, where technical literacy is hugely important. And again, back to this point that these are not mutually exclusive things, I think there is a way to have them both. We have to ask ourselves, how can we teach this in a way that engages new technology but doesn't lose the old framing and the gravitas and the context and all these things? And so one way is to take old subjects and apply them through the lens of these modern technologies. So in the case of ethics or philosophy, we can read Kant, we can read John Stuart Mill, but what if we read them and then apply them to the modern context of self-driving cars? So we have a car pulling into an intersection, and we have various moments where there are questions about ethics and how, if you were building the machine learning, the car decides to go left or go right. Philippa Foot and the trolley problem that people talk about: if you have a trolley on a track and you've got to choose the left track or the right track, and there's imminent death on both tracks, how do you make that choice?
These are kind of unanswerable philosophy questions that could be paired with readings of different ethical paradigms, so you could be thinking through some of the classical texts, but in a modern way. Similarly, using this sort of Socratic method to teach things. Then, looking at K through 12, I think one of the interesting studies that I tried to bring into the book was about grappling with messy problems, messy problems meaning, in this era of Google where we can Google anything and find the answer in moments, right? What's the point of learning if you can just Google anything? Well, of course you can't Google everything, and so you have these messy questions. One example of that was a school, I can't recall where, I think it was in Kentucky, where the teacher asked, what if we had square ears? And this was a question she posed to a classroom of fifth graders. They had to use all these tools, they had to use iPads and Google and watch YouTube videos and all these different things, but there was no right answer. And so they had to learn about acoustics, they had to learn about physiology and biology, they had to actually inquire against sources and say, I trust that source, I don't trust this other one, this video looks a little sketchy. And so they had to grapple with the same challenges that we have on a daily basis when we've got a red news feed or a blue news feed in our Facebook, depending on what our friends share. And so we have to grapple with sources and things like that. So I think to the extent that we can teach these things while engaging the new tools, not being a Luddite and saying, out with any new tool, technical literacy is not relevant, because of course it is relevant. And so we've got to engage these things meaningfully, but I like that idea of messy questions and how we can teach through some of that. Yeah, that certainly reminds me of research.
One of the challenges you have is learning what question you should even be asking to begin with. And I think Google starts with the premise that you actually know the question you're supposed to be asking, right? I have a 20-month-old and a three-week-old at home right now. And with both of them, you see their personalities come out early on, and the one thing I really hope for them is that they would always keep asking those questions, just not at eight o'clock when I'm trying to put them down for bed. I think we had a question in the back, right there in the blue blazer? Thanks a lot. Jeff Alexander with the Research Triangle Institute. So I'm really interested in an area I study, which is interdisciplinarity, right? So there is this zone right in the middle of fuzzy and techie, and I'm wondering if you could talk a little bit about that. I mean, there are now majors that are just designed to be inherently interdisciplinary, right? You have people majoring in, say, computational sociology or science, technology, and society, or things like that. Do you see those interdisciplinary majors also playing kind of an important role in how all of this plays out in the workplace? Definitely. So yeah, I talk a little bit about STEAM education, obviously STEM with an A for arts mixed in. I love these interdisciplinary majors. I feel like I see more ads in The Economist and other places for different programs, here at Georgetown for example, in applied intelligence, using data science but applied to a particular field. There's one that I talk about in the book called Symbolic Systems, if only because there have been some incredible graduates of that program, like Reid Hoffman, who founded LinkedIn, Marissa Mayer, Mike Krieger, who founded Instagram, and Scott Forstall, who effectively invented iOS.
And so you look within a lot of these tech companies, and a lot of these people are Symbolic Systems majors. What that major comprises is logic, philosophy, math, computer science, and psychology. So pretty much the hardest major you could possibly name. It's the hardest, yeah. I looked at the major and then I quickly ran away from it. Sounds like a terrible idea. But it's an incredible cross-section, because you're forced to take philosophy, you're forced to grapple with computer science and math and logic, and there's this natural intersection of all these things. I do think it's created these incredibly creative people that have been behind the scenes of a lot of these tech companies. So as these new tools change, there are probably a lot of these intersection points, and they've always existed. If we look at architecture, for example, architecture is aesthetics and mathematics in some ways, right? So there always have been these things that have sat at the crossroads. I was listening to a podcast on the train here from New York this morning about being able to identify music based on the beats per minute, and it's inherently a math challenge to identify music. And so you think, okay, music is a fuzzy subject, but not really; behind the scenes it can be heavily mathematical as well. So I don't think that there's one right answer, but I love that you're exploring that. Over there in the corner. Yes, thanks. Rob Colorena, AIOC investment group. The question has two elements. One is, what were your findings with respect to larger families and siblings? Was there any competitive nature among siblings going into either technology or liberal arts? And the second thing is, was there any influence of study abroad type experiences on these paths? Interesting. I don't really cover study abroad per se.
I mean, I think that we obviously live in an increasingly globalized world where, back to the question of education as well, there's this notion that you just learn the techie skill, go through STEM, and you've got this carte blanche. If you're highly creative, and let's say you're in a place like systems engineering or you're building the infrastructure as a techie, those jobs will always exist, I think. But then there's this notion that you code websites and that's going to be your bread and butter forever. There are quickly rising places like Lagos, Nigeria, where Andela is a company operating between New York and Lagos, and what they're doing is training people coming out of great universities in Nigeria, giving them all the skills to do, say, Ruby on Rails or front-end development, back-end development, bringing together whole teams that then take on outsourced projects, and IBM and Google and Microsoft and all these big companies are hiring dev teams in Nigeria. And so you look at rote coding skills becoming the new blue-collar jobs. The same way that we had Wipro and Infosys and Tata Consultancy Services in the business process outsourcing to India in the 90s and the 2000s, I think we'll see some of that in the same way for rote coding skills, and we're already seeing it. My website for the book, for example, I had coded in Ukraine for about $1,000 in under a week. So there are examples of this all over the place. So understanding the world in a global context, study abroad, is hugely, hugely important. And then to your other question about sibling rivalry, I guess. I don't know, we had a funny interaction as we were chatting just before this about not knowing too much and staying humble.
So in the process of writing this book, I came out of the tech background in Silicon Valley, and I knew nothing about writing books, and I just iterated my way down this path and stumbled my way into having this book land in Stephen's lap. And at the same time, my sister, who's a French and creative literature major who went to the Iowa Writers' Workshop, is working on manuscripts. And in the process of me writing the book, she's helped launch a fintech startup in Los Angeles. And so, the not knowing too much, I don't know if there's sibling rivalry there, but I don't know if that answers your question at all. I think that gets to the many studies that have been done showing that people can have very strong opinions about certain things, which then decline precipitously when they're actually exposed to the subject of their opinion, whatever that might be. I don't know if that applies to sibling rivalry, but in pretty much every other case, I'm pretty sure it does. Right there in the tan blazer. Thank you very much. Fascinating conversation. Massimo Calderizzi from Time Magazine. I wonder if you reached any conclusions in the book about the comparative advantage of firms that combine the fuzzy and the techie, or whether, in a society that increasingly places monetary value above other values, whether it's in healthcare or information, you found some inherent advantage for, say, technical skills that needed to be pushed back against. So there's one example of blending the fuzzy and the techie that I think is really poignant: a company called Stitch Fix.
I don't know if anyone here is familiar with Stitch Fix, but they're effectively Netflix for subscription fashion. They take an item of clothing, and they employ a bunch of stylists who classify that piece of fabric according to 100 or 150 characteristics, and then they pump that through a machine learning algorithm. Based on connecting to your Pinterest board or your set of preferences, they try to predict, like Netflix does with movies, what you might like fashion-wise, and then they send you those items, you keep some, you send some back, and they iteratively get better and better. So they've raised about $50 million and are doing hundreds of millions of dollars in revenue on this hybrid model. They're not an only-machine-learning shop. What they do is pass the machine learning output, what they call their M algorithm, to a set of humans, what they call their H algorithm. The interesting thing is they've got about 70 data scientists that power the M algorithm, and about 4,000 stylists that power the H algorithm. What those humans do is the last-mile delivery: they take and contextualize all the information. So they have a subset of maybe 10 items of clothing that they think you'll like, but then they know a little bit about the demography, they know a little bit about you from conversations, they know maybe where you are in the country. If you say that you're fashion-forward but you're in Lexington, Kentucky, is that different than if you're fashion-forward in midtown Manhattan? They contextualize some of these notions. And so they've done an incredible job, I think, of bridging the fuzzy and the techie, in the sense that Katrina Lake, the founder of the company, is a fuzzy.
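The M-algorithm/H-algorithm hand-off described here, a machine-generated shortlist refined by a human pass, can be sketched in a few lines. To be clear, everything in this sketch (the item names, the feature scoring, the climate rule standing in for a stylist's judgment) is invented for illustration and is not Stitch Fix's actual system.

```python
# Toy sketch of a hybrid "M algorithm" / "H algorithm" pipeline:
# the machine ranks candidates, then a human-style rule contextualizes them.
# All names, features, and rules below are hypothetical.

def machine_shortlist(items, user_prefs, k=3):
    """'M algorithm': rank items by a simple match score against user preferences."""
    def score(features):
        # Sum up how well each preferred attribute is satisfied.
        return sum(min(features.get(f, 0), w) for f, w in user_prefs.items())
    return sorted(items, key=lambda item: score(item["features"]), reverse=True)[:k]

def stylist_pass(shortlist, context):
    """'H algorithm': contextualize the machine's picks with human-style knowledge."""
    # e.g. a stylist might drop heavy garments for a client in a hot climate.
    if context.get("climate") == "hot":
        return [i for i in shortlist if not i["features"].get("heavy")]
    return shortlist

items = [
    {"name": "wool coat",   "features": {"formal": 1, "heavy": 1}},
    {"name": "linen shirt", "features": {"casual": 1}},
    {"name": "blazer",      "features": {"formal": 1}},
]
prefs = {"formal": 1}

picks = stylist_pass(machine_shortlist(items, prefs), {"climate": "hot"})
print([p["name"] for p in picks])  # → ['blazer', 'linen shirt']
```

The design point the sketch tries to capture is that neither stage replaces the other: the machine narrows a large catalog cheaply, and the human stage applies context the feature vectors never encoded.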
She came out of economics and a business and social commerce background, a fashion background, and she partnered with a guy named Eric Colson, who ran Netflix's algorithms program and had built sort of the backbone of Netflix before going to Stitch Fix. To me that's a really great example, both on the job front, the 60 or 70 data scientists and the 4,000 humans that power the stylist engine, and also of the magic of bringing these two together. And Eric is a huge proponent of both the M and the H algorithm. And your second question was more about the pushback. The larger point of the question is whether you think it's a natural thing that will happen, that liberal arts and technical skills will meld because the market favors that, or whether the market favors, at the moment, because of the structure of the market and the valuation of profit over other values, the technical gaining an advantage over less monetizable skills, if you see what I'm saying. So I think that's the myth that I seek to bust through the book, the idea that the high-performing companies are these tech monoliths, because they're not. If you look at Snapchat, a recent IPO, what was the reason Snapchat won the sort of Gen Z demographic? Why didn't those people go to Instagram or Facebook? One could argue that the major epiphany they had was that for people who grew up with digital abundance, people who had every photo they'd ever taken stored in Dropbox, on their Google phone and Google cloud, they didn't know digital scarcity. And so it was actually a sociological insight, understanding that what could create demand was creating scarcity on the platform, and the way to create scarcity was to make things disappear.
And there's a guy named Nathan Jurgenson, based in Brooklyn, a PhD sociologist who wrote all about what he called digital dualism, this idea that things that are online can be real and things that are offline can be fake. We have this idea that our real world is real and our online world is somehow ethereal and not tangible. And he said, wait a minute, when you look at the stage dressing of Instagram posts and of your brunch table, that's creating artifice in the real world, and that's fake. And if you have this moment of a selfie that's on Snapchat, that can be very authentic and very real. So I actually think you look at Snapchat, and what's made them super effective and high-growth and really resonate with this demographic was this fuzzy insight. And similarly, I think with Google Glass versus Snap Spectacles, there is this idea of indoor glasses and transparency, versus outdoor sunglasses and an inherent assumption of non-transparency. To have a recording device on a pair of sunglasses makes a lot more sense to me than a recording device on clear glasses, where you expect to have informal conversation indoors with transparency. These are very small nuances, but I think those are actually the reasons why products succeed or fail. So behind the scenes of these tech companies, the product decisions that really make things work and find that product-market fit, I think, often have this soft skill, often have this fuzzy in the room as well. I think that's one of the interesting points that I took away after reading the book: from an international security perspective, indeed from a defense perspective, a lot of these questions that Scott's talking about are actually what we refer to as concepts of employment. So whether you have a technology or an idea or a strategy, it's not as much about having the capability to do something.
It's more about how you would choose to use that capability, or in some cases, like nuclear weapons, choose not to use that capability. And I think you get to a lot of those questions even at an enterprise level, to your point, at a corporate level, in terms of using these two concepts together; I think you get that by bringing in the fuzzy and the techie. Coincidentally enough, I think if Stitch Fix were to look at my outfit right now, they would characterize it as wonky yet approachable. Any other questions? Great, and we'll make this the last one. I'll give them that feedback. Yeah, thank you. I'm Phil Loomis from Right Work Labs, and I'm going to a talk, actually an after-hours happy hour, next week by NAVA, and NAVA does these to recruit. And I'm trying to tell them about what you're talking about, with some success. The success I've had is telling them a story about Wirecutter, which is a magazine like Consumer Reports that just got sold to The New York Times, and about the founder, who took WordPress and made basically a model of his company, 50 people. And it worked, and he loved it, and it let him not do as much management. And you can imagine how something like this could be used, for example, in your police example. You could imagine how you would have a model of a city and the police in it, and who's doing what, and who knows what, and all that sort of thing. And it would be cheap because you're using WordPress. He, by the way, had a master WordPress person on board. So here's my question. Like I say, it's really slow going with them, and you can figure out the reasons why. This is so foreign to these people who think of themselves as leaders in DC and civic tech. And again, let me add one more thing. This is something where, if we're going to do police departments, we're talking lots of people, so mid- and low-level tech skills, but lots and lots of people.
With these guys at NAVA being leaders. But right now, this is completely beyond what they can see. So my question is, next week when I go and have a beer with these guys, what should I say? Well, I'll give you an example. So maybe I'll paraphrase your question: they understand this domain of civic tech, and they're looking at applying WordPress to it, or? No, what NAVA does is make things like consumer tax software, and they work for agencies like the VA. So the VA's got zillions of things going on, and how can you use software to reduce the complexity and keep track of all this data? And, you know, this type of software you keep, and you keep building it. And this other type of software that we're talking about here is a next-generation type of software. It's a different type of software, a different type of problem. And again, what you're trying to capture with your software in this case is all of these complex human interactions, and you're also presumably creating symbolic worlds which represent these very complex human circumstances so that, you know, as a junior cop, you can know what's going on in the seven blocks that you're assigned to. To them, this is an enormous leap. And it is a big leap, but again, I think it's very much a next-generation application, and doable. There's a company in this space, actually, that I feature a little bit in the book. It's called OpenGov. I don't know if you're familiar with OpenGov, but it's a fascinating story of Zach Bookman, who worked for General H.R. McMaster in Afghanistan, studied public policy and law, and was really fascinated by transparency.
And so, to your point of knowing what's happening on a block-to-block basis, he was flying in Chinook helicopters across dusty parts of Afghanistan, looking at transparency issues, and had the realization, in a shipping container rather than in a garage like is normally the case in Silicon Valley, that we needed to focus on transparency here in the States. And so he is a fuzzy, but he partnered with a techie, Joe Lonsdale, who co-founded Palantir, and they've built a platform that now covers about 1,400 cities across the U.S., with all the municipal data for those cities, for expenditures and revenues, figuring out on a block-by-block basis where more parking tickets are given out, or what the timing of waste management services going through the city is. So they've tried to visualize that: they've partnered with 1,400 cities to take the raw data those cities have, and rather than using WordPress or something else, they've built the infrastructure and the pipes so that the people who understand the problems and understand the data can partner with OpenGov and get the dashboard and the display of that data, to then provide that transparency to that junior cop. I don't know if that's an answer to your question, but I think that's a great example of how somebody who was really passionate about transparency and passionate about this problem was able to turn that into the comparative advantage of creating this company, which has raised tens of millions of dollars, employs hundreds of people, and now has 1,400 cities with better transparency than they've ever had before, because it's not in boxes of Excel printouts, it's actually in dashboards like Google Analytics, where you can click on graphs and see in real time how your city is functioning and where it can improve. I think you see why this book was a finalist for the Bracken Bower Prize, is that right? Yeah, the Bracken Bower Prize.
From the Financial Times and McKinsey, I think. And it was also voted one of the Financial Times books of the month for April. So I think it's really exciting, Scott, and I almost wish that we had done this discussion over a bottle of something other than water. It's really one of the more philosophical discussions I've had in a long time. Please join me in thanking Scott for coming here today. Thank you.