Hello everyone and welcome to this webinar on artificial intelligence as a shared service. I'm presenting with Tejas Sengavi, so Tejas, maybe you want to introduce yourself. Sure. Hey everyone, my name is Tejas Sengavi. I'm a product lead on the Einstein platform team at Salesforce. The Einstein platform is essentially the shared AI service at Salesforce. I've been here for about four years, and in that time I've worked on various AI applications across sales, service, marketing, and IT. Prior to Salesforce, I built AI platforms at other startups as well. Thank you. And on my side, I'm Gary Brandeleer. I'm a Senior Director and Product Manager on Field Service. Field Service is a Salesforce solution that lets you manage your technicians and maximize their productivity. It has been one of the fastest-growing products at Salesforce. Before that, I had always worked in the field service area, including at Johnson Controls. So we are combining AI expertise and field service expertise in one single call. Today we wanted to speak about AI. When you look at the stats today, adopting AI is critical to staying competitive. When we ask customers, nearly all of them will tell you that they definitely want to use AI to compete. Nevertheless, the reality is that seven out of ten companies will tell you that AI doesn't have the impact it should have on their company. So we are going to see why, and what you can do to actually have more impact with AI. Let's start with the roadblocks, with Tejas. Okay. So let's see why a lot of these customers are not able to get value from AI. It's not easy; there are several roadblocks along the way. The biggest one is talent. There is huge demand among employers for people who can make sense of all the data they have, and as a result you need a large number of people skilled in this, like data scientists.
And as you can see from some of these statistics, there is a huge gap there. That is, in fact, one of the reasons we have invested in building these capabilities, so our end customers don't have to. The other piece is that most people, when they think about AI, think about the data science aspect of it, the machine learning. As you can see in this particular example, which is based on published research, of all the things that need to happen to put an AI solution in production, the ML code itself is a very small fraction. There are all these other things around configuration management, getting the data into a lake, and verifying that data. And there is everything on the other side once the model is built, from serving infrastructure to monitoring how these models are performing in production. So there's a whole lot of plumbing that goes into productionizing AI, which a lot of companies and teams tend to underestimate when they're embarking on these projects. Lastly, just taking the output of AI and putting it in front of users is not sufficient. What we have seen in our experience building AI products is that humans inherently don't trust AI. Just because you put a recommendation or a prediction in front of a user who has been doing that job for years, they're not going to trust it. What that means is that in many cases you're going to have to explain how you came to that prediction or recommendation, and give them some reasons for what in the data leads you to those conclusions, before the humans trust it. And again, if the end users don't trust the output of the AI, they're not going to act on it, and you're not going to get the benefit. So let's see how we can work through these roadblocks and what a roadmap for success with AI could look like. As you can imagine, it all starts with data. We have an internal joke that there is no data science without data.
That sounds funny, but often teams want to build AI applications because it's cool, or maybe because their boss asked for it; in many cases, unless you have a sufficient amount of historic data to train your models, you're likely not going to be able to deliver business value. That's the availability piece. The second biggest challenge usually happens with data integration, because the platform where you're doing the machine learning is probably not your system of record. That data is probably stored in other systems, sometimes many other systems, and in different formats, everything from relational databases to log lines. Just getting that data into a single location, in a format that's usable for machine learning, is itself a huge challenge. And then lastly, data quality is a big aspect. In fact, several studies show that a majority of the time spent by data scientists is cleaning up the data; just because data is available doesn't mean it's ready to be leveraged by machine learning algorithms. Often you have missing data. Sometimes you have all sorts of biases in the data, because of how it was loaded or how your end users are actually entering it and working with it. So data quality issues are a big piece of any AI project; you definitely want to stay ahead of them and have a plan for these challenges. The next biggest challenge, as we said, is that humans don't trust AI. Consider this example: most of us use some sort of map application when navigating from point A to point B. When Google tells me it's going to take 20 minutes to get to work, initially I may not trust it. But what helps is that over time, like in the second example here, Google shows me all the other alternatives it's considering; it shows me how there is traffic in certain areas and why it's going to cause delays.
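To make the data-quality point above concrete, here is a minimal sketch of the kind of audit a data science team might run before modeling. The record structure and field names (a fictional work-order history) are purely hypothetical; the point is simply measuring how much data is missing per field before it can bias a model:

```python
# Minimal data-quality audit sketch. Records and field names are
# hypothetical examples of field-service work-order history.

def missing_rate(records, fields):
    """Return the fraction of records missing each field (None or empty string)."""
    rates = {}
    for f in fields:
        missing = sum(1 for r in records if r.get(f) in (None, ""))
        rates[f] = missing / len(records)
    return rates

records = [
    {"product": "HVAC-200", "symptom": "no cooling", "part_used": "compressor"},
    {"product": "HVAC-200", "symptom": "",            "part_used": "fan"},
    {"product": "HVAC-350", "symptom": "leak",        "part_used": None},
    {"product": "HVAC-350", "symptom": "leak",        "part_used": "valve"},
]

rates = missing_rate(records, ["product", "symptom", "part_used"])
print(rates)  # {'product': 0.0, 'symptom': 0.25, 'part_used': 0.25}
```

A report like this is also a cheap way to decide, before any modeling, whether a pilot customer's data is usable at all.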
And over time, as I keep using it, having this transparency into how Google is making these decisions builds trust. We've seen very similar experiences when we put these AI applications in front of enterprise users. People want to know how these things were built, and why certain decisions are being made the way they're being made. So as a PM, you definitely want to think about what transparency you're going to provide to your users so they can trust the output of your AI application. The next piece to keep in mind is that building an AI application is a team sport. When you're building a normal technical application, you are already working with developers, designers, UX researchers, and so on. On top of those teams, you also need to be able to work with your data scientists and your machine learning engineers. So it's important to have the right set of skills on your team when you're trying to build an AI application. But at the same time, you also want to focus on what your MVP is going to be. In terms of the MVP, the minimum viable product, this is where, especially for a data science application, you first want to make sure you're able to get access to some customer's data. As you can see in this example, you want to validate the end-user experience, as well as whether you can deliver value with your application, with the least amount of engineering effort possible. The way to do it, in my experience, has been to sign up some pilot or even pre-pilot customers who are essentially willing to work with you as development partners, and have your data science team get access to their data and explore it to figure out if it meets all the necessary requirements. Do you have enough of the data?
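One simple way to provide the kind of transparency discussed above is to surface the top contributing factors alongside each prediction. This is only a sketch under strong assumptions (a linear scoring model with made-up feature names and weights; real explainability tooling is far richer), but it shows the shape of the idea: decompose the score into per-feature contributions and show the largest ones as "reasons":

```python
# Sketch: a prediction with "reasons". The model here is a hypothetical
# linear scorer, so each feature's contribution is weight * value, and
# the largest contributions can be shown to the user as an explanation.

WEIGHTS = {"error_code_E4": 2.0, "unit_age_years": 0.3, "prior_visits": 0.8}

def predict_with_reasons(features, top_n=2):
    contributions = {f: WEIGHTS.get(f, 0.0) * v for f, v in features.items()}
    score = sum(contributions.values())
    # Sort features by how much they contributed to the score.
    reasons = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    return score, reasons

score, reasons = predict_with_reasons(
    {"error_code_E4": 1, "unit_age_years": 10, "prior_visits": 2}
)
print(round(score, 1), reasons)  # 6.6 ['unit_age_years', 'error_code_E4']
```

Even this crude form of explanation ("recommended because the unit is old and reported error E4") gives end users something concrete to agree or disagree with, which is what builds trust over time.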
Are you able to extract enough signal from the data to then provide insights that can actually deliver business value? It's a constant iteration loop. But in the first iteration, you want to be as lean as possible; you don't necessarily need all the UI features and functionality. In a lot of the pilots we've done, we often delivered the output of the AI in an Excel sheet and got customer feedback before investing all the effort in building the UI, for example. So again, start small, think MVP, and ask how you can get the most feedback from customers without spending your valuable engineering and data science resources. In terms of the team, again, what's the MVP for the team? At a very minimum, you need a data scientist, as we discussed, to be able to look at your customer data. You need a software engineer to be able to build some data pipelines; you're still going to have to extract the data into your system, and so on. Sometimes this can also be a data engineer. And then on the research side, which is a little bit related to the data scientist role, there might be specialized teams within the company who have, for example, published certain algorithms or solutions that you may want to leverage for your product. One very valuable lesson from my experience is that you may not always have all of these resources on your team, especially if your organization is new to AI. So it helps to reach out to other teams who may be having similar issues and trying to solve some of the same problems. Sometimes companies share these resources across multiple teams, and it often helps if you are able to align your goals with them.
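The "deliver the output in an Excel sheet" tactic above is about as lean as an AI MVP gets. A minimal sketch, with hypothetical column names and example predictions, is just dumping model output to CSV so pilot customers can open it in a spreadsheet and annotate what they agree with:

```python
# Sketch: ship pilot predictions as a CSV customers can open in Excel,
# instead of building a UI first. Column names and rows are hypothetical.
import csv
import io

predictions = [
    {"work_order": "WO-1001", "recommended_part": "compressor", "confidence": 0.87},
    {"work_order": "WO-1002", "recommended_part": "fan motor",  "confidence": 0.62},
]

def to_csv(rows):
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["work_order", "recommended_part", "confidence"]
    )
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(predictions))
```

The feedback loop is the point: a domain expert marking each row right or wrong in a spreadsheet is both validation of the model and, later, labeled data.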
Try to get shared resources wherever possible, but even better, over time, as these applications matured and went to market, we ended up creating a shared team, a platform team, that built all of these common capabilities as shared services. Now there are a lot of other teams that are also able to leverage these services. The value of those services grows exponentially, because now a lot more teams get value out of that effort. With that, let me pass it on to Gary. Thanks, Tejas. So let's speak about the use case. Customers don't care about your solution. They really, really care about their problems. And what's extremely important here, and it's quite often what goes wrong with AI, is that the use case is missing. As Tejas was saying, people want to do it because it's cool, or because their boss asked for it, but they don't have a use case in mind. What we applied here is the jobs-to-be-done framework. There is a very good theory, a very good framework, about this; we could have a separate webinar just on that, and there is a very good book from Tony Ulwick on it. But remember that a job to be done needs to be customer-centric, solution-agnostic (don't say which technology you're going to use in your job to be done), stable over time, and it needs to have a measurable outcome. That's important to keep in mind, and we applied it. Just to give an example, if you're unfamiliar with the jobs-to-be-done framework: a bad job statement would be "organize a conference call using different phone numbers to ensure employees talk to each other," while a good job statement would be "help remote workers engage with colleagues without in-person interaction."
The first job statement was bad because it included the solution: it would have led you to find new technologies to play with phone numbers, rather than inventing new ways of holding calls, for example. The second bad job statement is "help me plan a bonding event that my whole team will enjoy," while a good job statement would simply be "plan a team bonding event." The original statement included too much aspiration, a compound job, rather than getting directly to what exactly the job to be done is. So we applied that in my world, the field service world. My solution takes care of making sure technicians are as efficient as possible on site. Quite often there is one KPI that is measured, which is the first-time fix rate: when a technician comes to your place, does he fix the issue he was sent to fix on the first visit, or does he have to come back? So here we were thinking, okay, our job to be done is to improve the first-time fix rate for field service companies working on physical devices. That's our job to be done, and there are many, many ways to do it. Then we were thinking, okay, to make this job to be done successful, what do you need? You need the right person, with the right skills, at the right time, with the right parts, assigned to the right task. And we were thinking, okay, on this parts item, maybe we can use a little bit of AI to find out which parts you will need when you're going on site. So to solve this job to be done, with Tejas we were thinking, okay, what if we recommended parts on a work order? What if we told the technician up front, take these parts with you, and this is based on an AI prediction? So how did we make it happen? We found the right people.
First of all, I had this idea in mind, but we first needed to check whether it would really deliver customer value. So we discussed with a few customers, and they told us, definitely, if you manage to do that, it would be amazing for us. So we knew we were going in the right direction. The second thing is that we didn't know what kind of data we would have, what kind of customer data we would get. And honestly, I also didn't know exactly how to make it happen at Salesforce, because with AI, everybody wants to be part of it. So how could I convince people to work on my project? I started by saying, okay, you know what, let's find some customers that really want to do this, who would trust us enough to actually share their data, so that we could get very good quality data to start working on this project. The second thing is that we had to find the technologies. Again, I didn't want to reinvent the wheel myself. I wanted a very basic MVP that would allow us to do this. I also wanted to make sure I was prioritizing trust and security, which we have at Salesforce with the Einstein platform, and I knew I needed to work with that team, because that would ensure maximum trust for customers. And of course, you always need to ask: if I'm starting to invest in this, how fast can I deliver, at what cost, and how much value will I provide to my customers, but also to my product? The last piece is that, okay, we had the data now, and getting the data was not easy; it took a while. Now we needed to prioritize this. As we were saying, there are many AI projects at Salesforce, and in the world in general. So how do you get it to the right person, so that at some point in time it's prioritized, we're spending the right resources there, and we're aligning the priorities to make sure we get it done?
And that happened through my network and the fact that I met Tejas. He really liked the use case: the field service use case meeting Einstein, the AI piece; merging our two minds together, I would say, or at least working together on this, and making sure that we delivered it on time, using a technology that we knew was going to be reusable. So we were thinking of the job to be done as "recommend parts on a work order," but we were always thinking, can we go further and make sure this will be applicable to other jobs? And that's really what we mean by a shared service. We discovered an idea, which was: let's recommend anything on an object. That was an idea coming from Tejas. It met my use case, which was: let's recommend parts on a work order, and make it very specific to one job to be done. Then let's prove it and get the customer impact we wanted. Once we had proven that this was working, we could actually expand and now recommend anything on any object. So it's really a story of, at first, a very nice idea, but the job to be done, the use case, will determine whether it's successful or not, whether it's valid or not. And then you can build it as a shared service once it's proven. That's exactly the methodology we followed: we focused on one use case and one job to be done, we proved it, and then we expanded. And that's how we got Einstein Recommendation Builder at Salesforce. Now, in terms of three takeaways. First, technology is just a means to an end, so the end goal needs to be extremely clear. In our case, the job to be done was very, very clear: we had to maximize the first-time fix rate, and of the many, many things we could have done with AI, one was to recommend parts on the work order.
Second, always prioritize projects where you have the data. As I was saying, for me, data access was critical. We got very good quality data thanks to our customers, but it took us some time: time to convince people to get the data, to clean the data, and to ensure we were getting the right amount of data so that our data scientists were able to do what they needed to do on this project. And the last piece is: build your solution for one use case, and then find ways to make it a shared service. This is really important, because if you think about it, if we had done just part recommendation, we would have stopped there, and that would have also limited the value for our customers. What we did instead was say, okay, let's do part recommendation, but let's have the framework, the infrastructure, to recommend any object on any other object. And that's exactly what we did. So now customers can use Einstein Recommendation Builder to recommend whatever they want, including parts. That's where we delivered extra value for our customers, compared to the original job to be done we started with. So in general, don't let technology drive your vision; let customers drive it, and define which technology can help them. It's all about sharing ideas, brainstorming, discussing over and over with your customers, and making sure you get it done one way or another. Thanks for listening in, and see you later in another webinar, I hope. Thank you all. Feel free to reach out to us on social media, you can see the handles here, and we'd love to hear from you. Bye, everyone. Bye-bye.
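As a closing illustration of the "one use case first, then a shared service" idea discussed in the webinar: the generalization from "recommend parts on a work order" to "recommend anything on any object" can be sketched as a generic recommender that only ever sees (context, item) pairs. This is a deliberately minimal co-occurrence sketch with hypothetical names, not Einstein Recommendation Builder's actual implementation:

```python
# Generic "recommend anything on any object" sketch: the service only sees
# (context_key, item) pairs from history, so the same code can recommend
# parts for a product, products for an account, etc. Names are hypothetical.
from collections import Counter, defaultdict

class CooccurrenceRecommender:
    def __init__(self):
        # For each context key, count how often each item appeared with it.
        self.counts = defaultdict(Counter)

    def train(self, history):
        """history: iterable of (context_key, item) pairs, e.g. (product, part)."""
        for context, item in history:
            self.counts[context][item] += 1

    def recommend(self, context, top_n=3):
        """Return the most frequently co-occurring items for this context."""
        return [item for item, _ in self.counts[context].most_common(top_n)]

rec = CooccurrenceRecommender()
rec.train([
    ("HVAC-200", "compressor"), ("HVAC-200", "compressor"),
    ("HVAC-200", "fan"), ("HVAC-350", "valve"),
])
print(rec.recommend("HVAC-200"))  # ['compressor', 'fan']
```

Because nothing in the interface mentions parts or work orders, the same trained-per-tenant service can serve any object pair, which is exactly what makes it reusable as a shared service.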