Hello everyone and welcome back to day two of theCUBE's live coverage of FORWARD VI here at the MGM in Las Vegas. I'm your host, Rebecca Knight, along with my co-host, Lisa Martin. We have two guests for this segment. We are welcoming Ed Challis. He is the head of AI strategy and GM of communications mining at UiPath. Welcome, Ed. It's great to be here. And David Barber. He is a professor and the director of the UCL Centre for Artificial Intelligence in London, of course, and a distinguished scientist at UiPath. Thank you so much. Great, thanks for having me. Two Brits in our session. So you're fresh from the keynote stage. You gave a great presentation. And I want to start by talking about the theme of this conference, AI at work. It is something that every conversation here is revolving around. Every business leader, all the way down to rank-and-file workers, is trying to figure out: how do we make AI work at work? How do we make AI work for everyone? What does that mean to you? How are you approaching that question? I'll start with you, David. Well, obviously I'm more on the research side. So my interest is to try to take the cutting-edge research which is coming out of academia and actually translate that into product. So for me, it's a very exciting time, because I think the technology underlying this is changing very, very quickly. Just keeping up to date with that is a hugely interesting challenge in itself. But figuring out how to then translate that into interesting products for the workplace is also very, very interesting. So I'm really keen to work on that intersection, and I think it's an amazing time, because the inspiring thing is that these tools, which maybe only 10 years ago were very, very academic, are now impacting people's lives, and we've not seen that before, right?
So my role as a scientist has really changed a lot within the last couple of years, because it's gone from very, very theoretical academic stuff to, actually, wow, people are using this stuff in their daily life. So that's a great reason to be at UiPath and work where I am. And Ed, you joined last year after the acquisition of Re:infer, where you were co-founder and CEO. Talk a little bit about your role as the leader of AI strategy and how that's evolved since the acquisition. Yeah, so I mean, obviously it's been an incredibly busy year in AI. I think it was last November when ChatGPT really came out. My whole life, ever since I got into AI when I was 20, every year the heat, the level of conversation, the amount of press just increases and increases. This year has obviously been bigger than ever. But it's underpinned by this fundamental advance in the technology with generative AI and foundation models. And so the really big question is: how can we take this technology, which has this amazing promise, and make it real? And there are these big questions around trust, security, and verification that, if we can't solve and get right, mean this technology will end up being just a consumer, personal-productivity thing. And so that's really where all of our attention is focused. Trust is a topic that, I mean, you pointed it out in your keynote, the problems with hallucination and bias, information security, data leaks. What do you see as the biggest obstacles to gaining trust, and how are you approaching this? Yes, so I think one of the first hesitancies people have, rightfully so, is an information-security hesitancy. That's actually the easiest problem to solve here. That's a solved problem. If you use a secure model from a reputable provider, you can get the constraints around it that you need, the governance, the protections you need.
The much more interesting questions, I think, and the much harder challenges are: how do you embed this in processes, in automations, in a way where you have justified belief that it's going to do exactly what you need it to do and that it's going to be a net positive? Yeah, I think it is a very challenging problem. The hallucination thing is a really difficult fundamental research problem that I don't think has been fully solved by anybody yet. It's not just UiPath that's got these challenges; it's a universal issue. And that stems from the fundamental way these machines work, right? They're built like human brains work. Like I was saying in the keynote, in some ways that's a good thing, but it's also a challenge, because humans are also fallible, right? In a similar way, it's hard to know if a human is actually going to do the right thing. How can you guarantee that? Well, you can't, right? We get tired, we say things we don't mean. In a similar way, it's hard to pinpoint exactly what it is in these systems that needs adjusting. So it probably needs a little bit more work on the research side to get to the next level. In the meantime, there are mitigations that UiPath and others can put in there which make it more trustable. Things like having the human in the loop, to make sure that any generation is always checked by a human. Things like fine-tuning. So for example, you say: look, here's information, these are the directives of our company, and the machine should never say anything against these directives. And you make sure that the machine ingests that information, is trained on that information, so it's very unlikely, though not impossible, that it will say something against those directives. But it is a challenge.
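The two mitigations David describes here, hard directives the model must never violate and a human check on every generation, compose naturally and can be sketched in a few lines. This is purely an illustrative sketch: the directive rules, function names, and the simulated reviewer are all hypothetical, not UiPath's actual implementation.

```python
import re

# Hypothetical company directives, expressed as symbolic rules the
# model's output must never violate. Illustrative only.
DIRECTIVES = [
    (re.compile(r"\bguarantee(d)?\b", re.IGNORECASE),
     "must not promise guarantees"),
    (re.compile(r"\bshare customer data\b", re.IGNORECASE),
     "must not share customer data"),
]

def directive_violations(text: str) -> list[str]:
    """Symbolic rule layer: which directives does this generation break?"""
    return [why for pattern, why in DIRECTIVES if pattern.search(text)]

def release(text: str, human_approves) -> bool:
    """A generation is released only if it passes the rule layer
    AND a human reviewer (the human-in-the-loop step) signs off."""
    return not directive_violations(text) and human_approves(text)

# Simulated reviewer: a real human would read the text; here we auto-approve
# so the rule layer alone decides.
print(release("We guarantee a 20% return.", lambda t: True))        # False
print(release("Here is a summary of your invoice.", lambda t: True))  # True
```

The point of the sketch is only that the symbolic check and the human review stack: a generation must clear both gates before it is acted on.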
But from my side as a researcher, that's actually one of the very interesting things: that marriage of human-like cognition with the more rule-based "you cannot do this, you must do that." How do you make that marriage work? And if you can do that, then wow, what a system you're going to have. You're going to have a system which has all the amazing capabilities that human cognition has, married with the calculation and reasoning capabilities that machines have. And that's gold, if you can do that. And we're working to make that happen. Ed, one of the things that was very noticeable this morning when I walked into the keynote was that it was standing room only; it was absolutely packed. As the head of AI strategy, we talked about the challenges, but where are your customer conversations these days? On the IT side, on the business side, how are customers coming to you saying, help us build an AI strategy? We've got to be able to maximize the value of our data. Because clearly people are really, really curious about this. Yeah, for sure. But I mean, at the risk of repeating myself, in the conversations I have, those information-security questions and those trust questions are probably the two biggest pieces. I would say 90% of companies are trying to just get those pieces in place. And I think there's also a lot of misconception around this technology. Because of the phenomenal success of ChatGPT, people sometimes believe that it's only a chat technology. This is much, much bigger than chat. This technology is really perfect for automation. Its ability to synthesize data and manipulate data just makes the scope of what can be automated much, much greater.
And we have these amazing tools, like human-in-the-loop verification, which can ultimately provide the governance that you need. In any really sensitive process, like when a bank makes a big payment, one person types it in and another person checks it. That's a really foolproof way of keeping that safe. And we can use those same methodologies with this technology to get more governance, more control, but also to improve the experience and efficiency of these processes. I think another interesting thing is that, like Ed was saying, these technologies are so new, right? But I think we're going to see a state of the world where we start to learn to use these systems much more. Now they're a little bit new; we don't really know how they work, we don't know how to work with them. But as time goes on, we're going to learn to work with them, how to get the best out of them, and to understand their strengths and their weaknesses. And we're only a year or two into this process. I think we're going to build much, much deeper relationships with these machines in the future. To what do you attribute the significant acceleration of AI? I mean, as you both have said, AI is not new itself, and yet these technologies are pretty new, and the significant breakthroughs that we've seen in language understanding, math, and code have all happened in a relatively short period of time. What were the breakthroughs underpinning this acceleration? Well, there's a couple. So there's data, right? The availability of large amounts of data is a huge factor. That's never been the case in human history before; we've never been able to access that level of data. And compute, like I was saying in the keynote.
If you do some back-of-the-envelope calculations, it's stunning the amount of compute that we have now, even compared to 20 years ago. If it takes a month to train a large language model now, it would have taken five million years 20 years ago. And 20 years ago, computers weren't slow either; they were pretty darn fast even then. So that's a massive change. They're not actually yet at the level of humans, in terms of the number of processing units and the processing capability, in some sense. So that's also very interesting. There's a long way to go still to get to that human level, but we're probably not that far off the stage where they're going to be as capable cognitively as humans, just in terms of the processing power. So I think that's very interesting. And there have also been some algorithmic improvements as well. The transformer is probably one of the most well-known ones that's happened within the last 10 years. So there have been those three: data, compute, and some algorithmic improvements as well. Ed, comment on the cultural change that organizations have to go through to adopt AI. Obviously you talked at length about some of the major challenges that organizations across industries are facing, but there's got to be a cultural mindset shift, and that's difficult for organizations to do, especially at scale and speed. Yeah, definitely. I think almost the constraining factor now is our imagination and our ambition. We have this amazing technology, which can do so much. We need to experiment with it, play with it, and use it. And as David was saying at the beginning, we want to take the best of this technology and the best of what humans are good at. Ultimately we are still the only things on earth that have desires and agency at this scale, and it's up to us to think: what do we want? How could it be better? And then to work backward from there.
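David's keynote figure, a month of training today versus five million years on hardware from 20 years ago, can be sanity-checked with a little arithmetic: that ratio implies compute roughly doubling every nine months over those two decades. The doubling-rate framing is my gloss, not something said on stage.

```python
import math

# Sanity-check the keynote figure: one month of LLM training today
# versus five million years on hardware from 20 years ago.
speedup = 5_000_000 * 12            # five million years, expressed in months
doublings = math.log2(speedup)      # number of 2x compute steps implied
months_per_doubling = 20 * 12 / doublings

print(f"speedup   ~ {speedup:.1e}")     # ~ 6.0e+07
print(f"doublings ~ {doublings:.1f}")   # ~ 25.8
print(f"one doubling every ~{months_per_doubling:.1f} months")  # ~ 9.3
```

A 6 x 10^7 speedup over 20 years works out to a doubling roughly every 9 months, which is in the same ballpark as historical hardware-plus-scale growth trends, so the stage figure is internally plausible.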
And I think it's kind of an amazingly uplifting message. It can also be a little bit daunting in a way. It's like, well, what do I want? You know. You need a nice team for that, maybe. I want to ask about the talent question, because there is this perception, a reality, actually, that there is limited AI talent in the world, and there's a sense that the big techs, the Googles, the OpenAIs, and the Teslas, are sucking all that talent up. How does a company like UiPath compete? Well, first of all, is that true? I would ask you that. And then also, how do you compete? And, bringing it back to this point about culture, how does that play into attracting and retaining talent? So, from my side, from the research side, it's true, right? Big tech are huge players; they offer huge salaries. It's very exciting to go work there, because you have interesting problems and colleagues to work with. But actually, UiPath has some advantages as well, which I think stem from the culture that we have. It's a great place to work; it's a very friendly company, and I think people really enjoy working at UiPath. And the mission that we have, being very goal-focused around accelerating human achievement, means we're clear on what we want to do, and I think that's a very uplifting message to have. People really like that idea. So, actually, in that sense, we're very fortunate. We have a great research team at UiPath, and everybody I know who works there is super happy with it. So I think that's something we can offer people that you're not necessarily going to easily find at some of the big tech giants. Yeah, I mean, I totally agree. If you're a geek or a nerd, you've kind of always dreamed of automating things and building stuff that does stuff.
And so I think that's a very motivating thing. But coming back to it, most businesses don't need deep learning researchers. Most businesses need business analysts who can imagine a better way for something to be. And I think that's really the super important skill, and the culture, that we want to enable in every team. Talk a little bit about where you see AI going. I mean, you mentioned ChatGPT, almost a year old, and it just ignited this wave of interest and curiosity globally. Where are we going from here, compared to where we are today? I don't know, do you want to take that? I'll take that. Wow, that's a great question, isn't it? So, I mean, it's very hard to really nail that down, but there are some, I think, fairly clear things that are going to happen. The amount of training data is still increasing. People are trying to find ways to get more and more training data, and there are sneaky ways to do that. The scale of these systems is still increasing, right? The number of parameters in these models is getting bigger and bigger. Still not, as I said before, at the level the human brain is at, but it's getting closer to that limit. So, what does it mean? I think the next stage will be this unification between the cognitive, human-like capabilities which we, as humans, have, and the more rational, logical reasoning capabilities that we're more familiar with machines having. And actually, if you go back to the beginnings of AI, the deep history, the field was split in two. People were thinking back in the 1950s: okay, we want to build an AI system, right? How are we going to do it? Well, one naive idea is, let's build it like a human brain, right? That's kind of an obvious thing. That's the approach which has been very successful and which we are using now with things like ChatGPT.
But the other approach is a very logical one: let's try to make something that can play chess well, a very rule-based, game-playing kind of thing, logical reasoning, and test it that way. That's called the symbolic reasoning approach. So the field was completely split into these two factions, and they never really talked much to each other, right? But I think now we're at the stage, particularly when we're talking about things like trust, where you've got rules that you want to impose upon these systems, where you need to start thinking again about how to integrate these more symbolic reasoning systems with the cognitive, human-like capabilities that the current machines have. And if you can do that, that's really the next step; that's one of the major aims that we have. There are other natural things you can think of, like multimodality, as people call it: the different modes, vision, text, but also video, and maybe even motion at some point, right? So all these kinds of things will come together. You'll be able to talk to the machine, maybe physically interact with it; it will look at your gestures, the way that you're moving, and all these modalities will feed into the system, and you'll be able to have these incredible conversations with the machine. Incredible engagement. And what's your vision for UiPath's strategy, if you had a crystal ball here?
I mean, my personal dream has always been that you could create automations in a very, very natural way, almost like when you onboard an employee, or you hire someone out of school, and you show them what you do and you explain it: a combination of explaining something, explaining the forces or considerations at play, and showing them. So, software development by showing, by creating those paths. So I think we're going to have, as David was saying, a very rich, natural interplay between people and this technology, and also ways of putting the governance and control around it, because ultimately we can have something that offers better guarantees than people, right? How often do we make mistakes and mess something up? We want something that is faster and always correct, and we've got an opportunity to create that. Exciting, exciting times. Very. A really fun conversation about the future of AI and the enterprise. Thank you so much, Ed and David. Amazing. Thank you very much. For Lisa Martin, I'm Rebecca Knight. Stay tuned for more of theCUBE's live coverage of FORWARD VI. We'll be back right after this.