Good afternoon, and thank you for being here. Instead of Lawrence reading a meaningless bio, I thought I'd tell you a little about myself as I get started. I began my career as a developer at IBM when I graduated from university. I was studying engineering at the time, and software seemed to be the place to go, so I ended up getting a job as a software developer back in 1999. So I'm old, unlike you guys, and tired most of the time. Throughout my journey at IBM I've had the opportunity to work in many different areas with a lot of different teams. For 16 months, all I did was travel around, work with open source developers, introduce technology, code with them, and participate in a couple of projects myself. That was a lot of fun, and it's fun to be back here; I don't get to do as many community-based events as I would like these days.

So I thought we'd do something a little lighter after two sessions of coding. I know some of you had a Kubernetes session this morning, and then an AI neural network synthesis session from 11 to around one o'clock. Let's ask an interesting question. There's been a lot of talk about AI and robotic automation replacing human beings in their jobs, so the question I'm putting on the board is: will AI replace developers? Normally, at this point, I'd bring up a photo of the Terminator and Skynet and say, you know, they're coming, we've got to defend ourselves. But instead I thought I'd show you this: US patent number 9,280,157. Anyone know what this is? Any guesses? No, that's not an IBM patent. A cage for developers? Pretty close, pretty close. This is a patent filed by Amazon. What you're seeing here is a device designed by Amazon to keep human workers safe in an AI- and robotics-enabled environment. Now, they were very quick to say in the news articles that they never implemented it; they never put a human being inside a cage.

For fun, I was downstairs looking at one of these and thinking, man, this is interesting. There's one downstairs; you'll recognize it: the little claw machine. In the claw machine, the merchandise sits inside the machine, the robot sits inside the machine, the joystick is on the outside, the human being gets what they want, and then we walk freely around everywhere. Contrast that with this patent: the merchandise is outside the cage, the claw and the robots are outside the cage, and the joystick is on the inside. And by the way, based on what I read, I don't think the human operator even gets to drive it. It sits on a robotic platform that talks to the other robots so they don't run into each other. When you want to go somewhere, you give the instruction, "I want to go there," but it's not under your control. I'm bringing this up because I think it's interesting to see how pervasive we expect automation and AI to be in the very near future.

Technology has always taken jobs away from human beings; that's nothing new. Farming, right? In 1871, I think over 900,000 of the working population in England and Wales were agricultural workers. By 2014, that number was around 40,000. Ninety-five percent of agricultural jobs gone because of automation: irrigation, farm machinery. The US Postal Service cut 25% of its workforce in the last 10 years because of e-billing and mobile connectivity. And robotics is doing the same to manufacturing jobs today.
I think McKinsey, just last year or the year before, released a report saying about 800 million jobs worldwide will be gone in the next 10 years or so. PricewaterhouseCoopers released something similar: 38% of all jobs that exist in the US today will be gone by 2030 because of technology. Truckers: this one is the most dire. Driverless cars are coming; everybody knows that. AI and automation integrated into one. They figure about 3.1 million of the 3.8 million trucking jobs in the US today will be gone in the foreseeable future. That's over 80%.

Rail workers. Go back as little as maybe 50 years, and you had about 1.3 to 1.4 million people in the US working on the railroads to ship things around. Today, with all the automation and signaling systems in place, only about 187,000 of those employees are left. They used to ship about 655 billion ton-miles a year (a ton-mile being one ton shipped one mile). Now they're shipping 1.85 trillion. Nearly triple the capacity with roughly 85% fewer workers.

This one's interesting. You know what these people are? They're computers. Did you know there used to be a profession called "computer"? I think some of you are nodding. The movie, Hidden Figures, that's right, is about human beings hired by NASA to compute trajectories. They're all gone. One hundred percent of that job is gone. Computers have replaced computers.

So that begs the question, right? Stephen Hawking thought it's going to happen. And over on the other side is a study conducted at the US Department of Energy's Oak Ridge National Laboratory. Three researchers there conducted an academic study and figured that by the year 2040, code generated by artificial intelligence will be good enough to replace code written by humans. Everybody's looking at me now: "Why are you here? I hate you." I'll get to some of that.

So let's look at where programming languages came from, where programming actually began. I'll give you an abridged history of programming, and if I miss any key points, please don't be offended; I don't mean to leave out any major technology. I put this together at two in the morning last night.

Around the year 1800, Jacquard, who was in the business of weaving fabric, created a loom driven by punch cards, which let you program it to weave a particular pattern into cloth. The first programmable machine, if you will. Then Ada Lovelace, the first real programmer: the first person to write a general-purpose algorithm. It was a thought experiment; it was never compiled and executed. But she is widely accepted as the world's first computer programmer. Moving forward: Alan Turing, 1936, and the Turing machine, conceptually one of the very first general-purpose computing devices. John Backus, in the 1950s, gave us Fortran: the first high-level language, written to replace punch cards, moving away from talking in machine code toward something more human-like. Just a few years later, Grace Hopper and COBOL: a very verbose, business-like programming language that runs on mainframes and, if you can believe it, is still widely in use today. Then there's a little break in the timeline.
I'm jumping forward a little because a whole bunch of other things happened after that. BASIC got invented; C, C++, and Unix; and the one that I liked: 1994, Rasmus Lerdorf, PHP. I don't know if anyone here has done any PHP coding, but I had the pleasure of meeting Rasmus a few times during my open source days, and I had many beers with him. A really, really smart guy, so I like that data point in particular. And then coming all the way forward from the C and C++ work: TensorFlow in 2015, Keras in 2015, Caffe2 in 2017. These are all the languages and frameworks, and as you can see, new ones keep popping up.

But the question is, why do we have them in the first place? I think there are a few obvious reasons. One, computers are not really smart enough to understand human language; the ambiguity and the nuance are too complex. So what do we have to do? We invent a simplified dialect, basically small words and short sentences, so that a computer can understand the instructions we give it and do the job we want it to do. Second, why do we have so many of them? Our needs keep changing in terms of what we want computers to do. Each language and each framework was created to solve a very specific set of problems, again with small words and short sentences the computer knows how to process. They're invented by necessity, and the developer is there to help a regular business convey its requirements to a machine.

But then this happened, last year, and this is what makes it even more interesting now: Google Duplex. I'm sure all of you have heard about it. Around May of last year, Sundar Pichai, Google's CEO, did a demonstration at the Google I/O conference where he showcased Google Duplex talking to human beings who run shops. Two recordings were played. In the first, the Google AI booked a hair appointment for a lady; in the second, a dinner reservation. Think back to Turing, whom I mentioned earlier. The whole audience looked at that and went crazy; they all clapped. They thought, hey, the Turing test is done; the imitation game is here. It was seamless. It understood the nuances of the language, and it was given a task and went ahead and completed it.

Maybe a little lesser known is the other project. I'm from IBM; I do read all our own press. IBM's Project Debater was released a month after, in June last year. For those of you who may not be familiar, the project was an offshoot from right after we won Jeopardy! against Ken Jennings; immediately after that, this project was kicked off. I would still call it a narrow AI. Its job is to engage in a debate with a human being and try to win. During the live event it did two debates, in a standard international debate format. Not that I know exactly what that is, but apparently it goes something like this: the AI comes up and makes an argument, then the human being comes up and makes a counterargument. So two perspectives are presented.
The key part, though, is that the AI is supposed to be listening the whole time, and it has to form a rebuttal and then a summation. During this process, the audience is the judge. The evaluation criterion is which of the two competitors presented more compelling reasons for them to change their mind. It's not about which side they agree with; it's about how far they shift from their previous position. Even if they haven't fully changed their mind, the winner is whoever shifted opinions the most. The result: we won one and we lost one. Then, about a month ago, we lost again, except this time it was against a world champion debater, at a live webcast event in San Francisco. So we did lose, but we competed against the best in the world. One piece of feedback we got was that the AI was really able to use facts to build an argument; what was missing was the empathy. It wasn't connecting with the audience.

So why all of this? Because this is about natural language becoming the programming interface, if you would. Even in the Debater example, the opponent's argument is in fact the input to a program that needs to generate a rebuttal. An outcome is being asked for at a very high level. Basically, I'm asking the program: listen to what I have to say and argue against it.

And then there's Bayou, out of Rice University. Anyone familiar with this particular piece of research? This is where the beginning of the end is: this is AI that writes code. It's a deep learning project at Rice that uses neural sketch learning. They looked at the code patterns in tens of thousands of Java files and functions, and used deep learning to recognize the inputs, the outputs, and the design patterns that recur. What it can do now is let a human programmer give a high-level description of what needs to get done; it creates a sketch and then generates the code necessary to complete the task. Apparently it presents two or three options back to the user and lets the developer decide which one to use (I'll sketch what that interaction looks like in a moment). That's the onset of where all of this is going.

A little bit scary. But are we done for? I hope not; I don't think so. Here's a study conducted at the University of Oxford on the likelihood of jobs being replaced by a computer. The dark blue bar, that's us. The lower end of the spectrum is less likely; the higher end, more likely. It turns out what we do is not that simple. What you guys do nowadays (I say "we," but I haven't done any coding in ten years; I sound like a fraud sitting here saying "we") is difficult. It takes creativity, and it takes a much higher-level thought process. If you look at the jobs being replaced, the developer bar is virtually a single pixel on this graph compared to some of the others. If you're a sales rep, which I happen to be, look at that red bar. Look at me. Maybe I should move back to coding.
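Back to Bayou for a second: to make that interaction concrete, here is a minimal sketch of the workflow. Evidence and intent go in, ranked candidate sketches come out, and the developer picks one. This is not Bayou's real interface (the actual system is trained on Java API usage); the names, types, and canned candidates below are purely illustrative.

```python
# Hypothetical illustration of a Bayou-style workflow; NOT Bayou's real API.
# The developer supplies "evidence" (hints about intent); the synthesizer
# returns ranked candidate sketches for the developer to choose from.
from dataclasses import dataclass

@dataclass
class Evidence:
    api_hints: list       # calls the generated code should probably use
    description: str      # natural-language statement of intent

@dataclass
class Sketch:
    code: str
    confidence: float

def synthesize(evidence: Evidence) -> list:
    """Stand-in for the trained model. The real system samples sketches from
    a neural network trained on tens of thousands of source files; here we
    return canned candidates purely to show the shape of the interaction."""
    candidates = [
        Sketch("with open(path) as f:\n    for line in f:\n        handle(line)", 0.91),
        Sketch("lines = open(path).read().splitlines()", 0.74),
    ]
    return sorted(candidates, key=lambda s: s.confidence, reverse=True)

if __name__ == "__main__":
    ev = Evidence(api_hints=["open", "readline"],
                  description="read a file line by line")
    for i, sketch in enumerate(synthesize(ev), start=1):
        print(f"--- candidate {i} (confidence {sketch.confidence:.2f}) ---")
        print(sketch.code)
```

The real system learns its candidates from real code rather than returning canned ones, but the developer experience is the same: describe what you need, review a few options, pick one, refine.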
So maybe this isn't the time for another career change just yet. It turns out it's not that easy, and 2040 is still a long way off.

Second, new jobs are coming. Here's an interesting fact: something like 90% of all the jobs in human history have already been taken over by technology. The fact that AI will start writing code? It will happen. But everything else like it has already happened. Technology has been taking jobs over for the last 140 years, and it will happen again. Here's the interesting part, though: technology also creates new jobs, and as humans, we've always shifted our skills and our day-to-day practice to continue. Think about it: 90% of the jobs of the past are already gone, and yet we're all sitting here working our collective butts off to keep the global economy running. So the fact that some of the jobs we know today are shifting away is no big deal. Technology always creates new jobs.

Now, how many of you would consider yourself a web developer, or a mobile or web application developer, or a web designer? Anybody? Imagine time travel for a second. You fly back to 1991 and somebody asks you what you do for a living. You tell them, "I'm a web designer; I build mobile applications." They'll look at you and say, "Well, what kind of web do you weave?" The World Wide Web wasn't public until 1991, so the whole classification, the web designers, the web application developers, the graphic designers who do all that work, none of those jobs existed then. In fact, Dell did a recent study of its own and concluded that 85% of the jobs that will exist in 2030 haven't been invented yet. That's the part where I think we can take a step back and breathe.

Because the fact that you're here means this. My Italian is terrible, but "ancora imparo" means "still, I am learning." Michelangelo said that when he was 87 years old. You're here because you're interested in new technology, always sharpening new skills and transforming. So one piece of advice: don't ever stop. The fact that we're in the Lifelong Learning Center, or Institute, is just so fitting; that's what we're here for. Keep evolving, because you have to anyway. There's nothing new about that.

Second, it's going to be about a partnership between human beings and machines. Humans are good at certain things. Common sense, though that's probably a silly term, because common sense usually isn't that common once we're outside this room; I find most programmers are pretty good at common sense, but some of the other folks out there, I'm not so sure. Compassion. Dreaming. Abstraction and generalization; those two are critical if you think about what we do. What are machines good at? Learning new languages: the truth is, in the very near future the computer will speak 38 languages seamlessly while we struggle to speak one. For me anyway; I struggle to speak one, my French is non-existent, and my wife makes fun of my Chinese. Natural language is a big one. Pattern recognition: unlimited capability for recognizing patterns and remembering things. So there's a bit of a marriage that's going to happen here, and I think we've all heard this before. How do you match the two together so that we continue to be relevant in this new AI-based economy? It's going to be about this partnership, so let me move forward.
Here's a study I found on KDnuggets last night around 3 a.m., reading away while preparing for this talk. The article is called "Your AI skills are worth less than you think." Again, one of those challenging titles; that's how I came across it. The author had worked at Google, on a team that tinkered with TensorFlow and so on, and he tested two models: one he deemed better and one he deemed worse. He trained both, and that's the accuracy comparison between the two. What he came to realize is that deep coding skill, the ability to build a better model, matters much less than good data. So the fact that code can be generated is not a big problem; you need good data to create good machine learning models. In fact, he noticed that if you train the worse model with something like 30,000 or 40,000 data points against the better model with fewer, the worse model trained on more data is going to perform better.

So one thing you can do in this space is the piece I mentioned before about abstraction and generalization. We as humans curate the datasets that make machine learning happen, and that curation is not going to change. At least, I'm not seeing algorithms that automate curation in any meaningful way. There's some work being done in that space, but I think we're going to continue to play a big role. We will be the teachers of these AI capabilities, making the generalization judgment calls to help the AI, because you need good data to create good AI. And some of those decisions are not so easily programmed, because the truth is there's no context when deep learning happens. It's a bunch of ones and zeros and pattern recognition; feed it the wrong set of ones and zeros, and it comes up with the wrong patterns. I think the new role of the developer becomes one where you are the collaborator, the orchestrator, the conductor, and the supervisor, and more and more, you can start to trust generated capabilities.

So we ship different products, and this is one of them: IBM Cloud Private for Data. There are a few interesting things in here. One is that it's open. We use open source technology. AI Fairness 360, which Anup talked about earlier, is something we open-sourced and offer for AI fairness. We spend a lot of time working with the different frameworks, TensorFlow, Caffe, PyTorch, and all of them work within this platform. But the idea is really to start creating trust and transparency; again, that's a key focus. A lot of world leaders, Elon Musk and Stephen Hawking and various others, have spoken out to say that AI is dangerous. Sure, AI can be dangerous; anything misused can be dangerous. But I think that makes it all the more critical to monitor, to make sure you're not unnecessarily biasing your decisions, and to make sure the transparency is there.
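To give a concrete taste of what that monitoring can look like, here's a minimal sketch using the open-source AI Fairness 360 toolkit I just mentioned. The toy hiring table and its column names are invented purely for illustration; in real use you'd compute these metrics on your actual training data and model outputs.

```python
# Minimal bias check with AI Fairness 360 (pip install aif360).
# Toy data: "sex" is the protected attribute (1 = privileged group),
# "hired" is the favorable outcome we check for disparate impact.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.6, 0.7, 0.9, 0.8, 0.6, 0.7],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact is the ratio of favorable-outcome rates (1.0 = parity);
# statistical parity difference is the gap in those rates (0.0 = parity).
print("disparate impact:             ", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```

In this toy table the unprivileged group is hired at a third of the privileged rate, so the disparate impact comes out around 0.33, exactly the kind of signal you want surfaced before a model ships.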
And that's why we invest so much in helping developers who create this code to monitor it, and to build in trust, compliance, and those sorts of capabilities. I'm running short on time already, so in short: monitor, collaborate with your teammates, and create AI that you can trust. Ethical AI, if you would, free of biases, or at least of unnecessary biases. How many data scientists are in the room? Okay, a note on terminology for you: in machine learning, "bias" usually refers to underfitting, where you don't have enough data, or too simple a model, to fit well, as opposed to overfitting. But bias in the everyday sense is not a good thing, so you try to make sure your model is free of it, and ensure explainability and so on.

How many more slides do I have? Okay, I'll do this one and then jump to the conclusion, in the interest of time. One piece of advice that kept coming up in all the reading I did to prepare for this talk is to start looking at how to use AI as a lever for better outcomes. Not necessarily to sell AI, but to use AI to help sell whatever it is that you're selling. That's another key aspect we should be looking at. There's a very, very small market for creating new AI algorithms, and a much bigger market for applying AI in different areas to actually make money. If your plan is to build a better image recognition algorithm: that boat sailed long ago, and there's no point. But if your goal is to use existing capabilities to enhance something, to take that visual recognition capability, tweak it, and retrain it to recognize radiology images, that's maybe something different. Use AI to enhance what you're doing today rather than selling the technology itself.

Part of that is leveraging the tools available to you so you can get to market quicker. Since you're not interested in reinventing image recognition, don't write it from scratch; use the tools that are available. I think Anup showed you earlier today that NeuNetS is a new capability we provide where, for images and speech, you provide a dataset and it does the model generation and all the hyperparameter tuning for you, so you can have a very accurate deep learning model without writing any code. You can then take it and tweak it even further; that's up to you. But use the tools, use them as a lever, and don't make it a research project.

And my other advice is to keep an eye on what's developing. TensorFlow came out, but early TensorFlow was really hard to use, so Keras was written to make it usable. If you're not watching and adopting, if you're still writing raw TensorFlow, you're going to fall behind. So don't stop watching.
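To show what I mean by Keras making TensorFlow usable, here's a complete, trainable classifier in roughly a dozen lines. This is a minimal sketch assuming TensorFlow 2.x, where Keras ships in the box as tf.keras; the random data is there only so the script runs end to end.

```python
# A complete Keras model: define, compile, train. No sessions, no
# placeholders, no hand-wired training loop.
import numpy as np
import tensorflow as tf

# 1,000 fake samples, 20 features, binary labels (illustration only).
x = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, validation_split=0.2)
```

The equivalent in the original graph-based TensorFlow API meant explicit placeholders, sessions, and a hand-written training loop; that gap is what watching the ecosystem buys you.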
Now, here's some of the other stuff AI can do. I learned something new the other day: this "x" thing, as in IBM x Spotify, means collab. Collaboration. It's the cool thing to do these days; fashion brands do it, Supreme T-shirts x Louis Vuitton purses or something. So IBM did a collaboration with Spotify to help write music. You think AI is writing programs? Well, AI is actually writing music.

This is something you can go have a listen to when you have a moment. We used AI to help a music producer compose a song by studying musical composition and matching it against sentiment analysis on a social dataset, so the AI was able to suggest what type of composition evokes what kind of emotion in the listener. They did the same thing for lyrics with 20,000 songs and the Billboard Hot 100: what kinds of words in what combinations create what kind of emotion. Then the music producer, Alex Da Kid (I don't know who that is), did essentially what the programmer does with Bayou: he sent in a high-level requirement. He sent in an emotion to evoke, and the AI came back with sample compositions and sample words, a collaboration between AI and human being.

Now, it's pretty easy to extend that and ask: what does this mean for the evolving job market? How is it changing the way we work? Maybe when the year 2040 comes along, you no longer have music producers or DJs as we know them. Maybe the new job of a music producer is creating a genetic algorithm that generates music; maybe you create two or three or four of these AI bots and get them to work together, generating music 24/7. Maybe that's what it means to be a musician 25 years from now. Think about the whole concept of DJs as musicians. Go back 40 or 50 years: a DJ was just some guy sitting at a radio station spinning records. A DJ today, David Guetta and all these guys, is a superstar. There's nothing that says a programmer who is no longer programming line by line won't be out there creating the neural network algorithms that generate music, and you may be the next music superstar. Who knows?

So with that, I'll stop, because I'm already over time; Lawrence is looking at me saying get the heck off the stage. I hope this wasn't too boring and it lightened up your afternoon a little. Thank you so much. We have time for maybe one question.

Audience: Thank you, very interesting. I guess my thinking around this is narrow AI versus general AI. "Will AI replace programmers?" is really a question of whether it's general intelligence or narrow intelligence, because narrow AI is essentially an optimizer, right?

Speaker: I think it's still a narrow AI use case, right.

Audience: So I guess whether AI will replace programmers is really a question of creativity and intention.

Speaker: Exactly; you're right on. It's the collaboration that yields a more powerful team. It's no different than today, where you use an IDE to improve your productivity. Tomorrow, that IDE will have AI elements in it that let you be the person who orchestrates, composes, and directs what happens, as opposed to the person who writes the individual lines of code. So the short answer is: it's a narrow AI that helps, instead of replaces.

Audience: Then what is IBM doing in the realm of general AI? Maybe it's beyond your scope, but how do you think about AI becoming a general AI that is more capable, essentially enhancing the capabilities of AI to a quantum level?

Speaker: You brought up quantum; that's an interesting one as well.
We are actually working on quantum computing capabilities, and who knows what that will bring. IBM Research has been a driving force, if you would, behind much of the AI technology and various advancements. In terms of things I have seen coming to market, our focus is still narrow AI. From a general AI perspective, what I can see is more research around the hardware necessary to advance it toward a viable use case. My personal belief is that, today, we haven't been all that successful at creating a computer with general AI, and part of that is compute-related. The research I've seen is around things like neurosynaptic chipsets, for example, which increase compute density to a point where maybe we can start tackling that problem. There's a whole set of research under a DARPA initiative; I think the latest chip has 4,096 cores on a single die running at a fraction of the power. The idea is to create something that mimics human synapses to allow high-density compute, and that compute may one day enable general AI. It will probably come sooner than we think. I've talked about Stephen Hawking way too much today (he passed away last year), but one of his positions was that, at a certain level, there's really no fundamental difference between a biological and an electronic computer in terms of the ability to mimic human thought processes. So I think it will come. I don't think it's there yet, but the research we're doing right now is primarily around hardware, from what I can see. Then again, I don't know what they do all the time; they close the door and don't let me in.

Thank you so much. Thank you. Apologies for running over.