Live from Las Vegas, Nevada, it's theCUBE. Covering IBM World of Watson 2016. Brought to you by IBM. Here's your host, Dave Vellante. Welcome back to IBM World of Watson, everybody. This is day two for us, day three for the event. This is theCUBE, the worldwide leader in live tech coverage. Joe Francis is here as the lead architect for mid-size events at IBM, and he's joined by Pietro Mesolini, who is the program director for cognitive marketing for IBM Research. Gentlemen, welcome to theCUBE. Good to see you. All right, so Joe, let me start with you. Set it up for us. You guys created a capability, sets of services within one of the apps that attendees use for World of Watson. It's kind of demonstrating a great Watson use case. Why don't you explain it? Okay, we started with a Bluemix environment that was pretty much blank, an empty cloud environment, and using Watson services, Cloudant, Bluemix, and Node.js, we were able to pull together the entire site that you see in the session expert in a matter of months, which is a real testament to how quickly you can implement scalable, effective solutions. So what was the objective? Take us back to the blank piece of paper. Somebody said, I need X. What was X, Pietro? What we really needed was to create an experience for people coming to World of Watson which could help them choose, in a personalized way, among 1,200 sessions. So how can Watson help in that direction? That was the challenge presented to us, and we were able to use Watson Conversation and some of the other Watson services available, but we had to build an ad hoc solution that could combine the way a user converses with Watson with the ability to recommend personalized sessions. What personalized means is really an art, because it depends on multiple factors, depending on what the user wants.
The key element we wanted to deliver is having the user interact with Watson and express their needs, so that we can do a better job of recommending sessions. And they expressed that need just in a sort of natural language format. Okay, so how did you train Watson? Was that where you started? Yes, it's a combination. The event owner did most of the primary Watson Conversation training, which is a testament to how easy it is. They didn't have to be a high-tech person to make all that happen in the background. But what we found was that as you do your searching, things like that, we needed a secondary Watson Conversation instance that we automatically populate from the session data, so that the natural language processing can pick up those key elements and feed them back to us in a way where we can selectively search and filter to provide the real answer the person is asking for. So the outcome was a set of Watson services embedded into the event website that I could use as an attendee to figure out the best matches among those 1,200 sessions. One of the key elements, back to your question, is how we trained it. Traditionally, a Watson system is trained from an existing bank of questions and answers that users typically ask. Now, when you look at this recommendation type of solution, there really is no such bank of questions, because people can come and ask about any type of activity. We cannot really pre-train Watson with all the possible questions a user can ask. People can come and say, OK, which are the popular sessions, or the cloud sessions on Tuesday morning, that are relevant for my industry? So as you can understand, the variety of questions the user can ask put us in front of a difficult challenge. That was: how can we train Watson when we don't have the questions? And as Joe pointed out, we fell back on search, using Watson to guide the user to the right search approach.
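The search-and-filter step described here can be sketched in a few lines of Node.js. This is a minimal illustration, not the actual event code: the entity names ("topic", "day") and session fields are assumptions, loosely modeled on the `entities` array a Watson Conversation response returns.

```javascript
// Hypothetical session catalog; the real data came from the event's
// session-management system.
const sessions = [
  { id: 'S1', title: 'Scaling Node.js on Bluemix', topic: 'cloud', day: 'Tuesday' },
  { id: 'S2', title: 'Watson Conversation Deep Dive', topic: 'watson', day: 'Tuesday' },
  { id: 'S3', title: 'Cloud Security Basics', topic: 'cloud', day: 'Wednesday' },
];

// Keep only the sessions matching every entity the conversation
// service extracted from the user's natural-language request.
function filterSessions(entities, catalog) {
  return catalog.filter(session =>
    entities.every(e => session[e.entity] === e.value)
  );
}

// Entities as they might come back for "cloud sessions on Tuesday".
const extracted = [
  { entity: 'topic', value: 'cloud' },
  { entity: 'day', value: 'Tuesday' },
];

const matches = filterSessions(extracted, sessions);
console.log(matches.map(s => s.id)); // [ 'S1' ]
```

The point of the sketch is the division of labor the speakers describe: the conversation service only has to surface the key elements, while ordinary filtering over the session data produces the actual answer.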
And use analytics to then deliver the best answer. So you said Bluemix, Node.js, and Watson APIs, basically, to put all this together. What did the team look like? Actually, how long did you have to do this? What was the time frame? I believe four or five months. OK. So less than six months. And what was the team? What were the skill sets that you had to put together? Were they full time on it? Yes, we had three UI designers, two in London and one on the events team, working in conjunction with a UX designer and the event owner. And they really spearheaded everything you see and touch in the system, that whole user interface. Then in the back end, we basically had the API system, which is all the Bluemix, which is pointing everywhere. We pull information from our external session management system, pull it all into the cloud, and feed that out through the APIs. That's the back end. Then the real big picture is larger still, because not only do you have those pieces, but you have all of the infrastructure pieces. So the team was really quite extended in that regard. OK, so what, we're talking about seven, eight, nine, 10 people? Yeah, about 10 people active full time, and then a good 10 more that were on the calls pretty regularly. A combination of front-end, back-end, and infrastructure-stack developers. The key element in a solution like this one, built in such a short period, is making sure that we divided the roles. One of the key roles was the business owner, the event owner, who owns the business logic that this app should include. Being able to decouple the people who take care of what Watson should say from those building the APIs and the recommendations and the UI elements, and making those three elements coexist, was a key component that made this project successful. How did you deal with the data quality issue, right? To give people confidence that what you were feeding back was a good match. How did you test that?
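The ingest step Joe describes, pulling records from an external session-management system into the cloud store the APIs serve, usually boils down to a normalization pass. A hedged sketch, where the raw field names (`sessionId`, `name`, `track`, `scheduledDay`) are invented for illustration and not the actual feed format:

```javascript
// Normalize one raw record from the external session-management
// system into the shape the recommendation APIs serve. All field
// names here are hypothetical.
function normalizeSession(raw) {
  return {
    id: raw.sessionId,
    title: raw.name.trim(),
    topic: (raw.track || 'general').toLowerCase(),
    day: raw.scheduledDay,
  };
}

// A made-up snippet of the external feed.
const feed = [
  { sessionId: 'S42', name: ' Cognitive Marketing 101 ', track: 'Watson', scheduledDay: 'Monday' },
];

const catalog = feed.map(normalizeSession);
console.log(catalog[0]);
// { id: 'S42', title: 'Cognitive Marketing 101', topic: 'watson', day: 'Monday' }
```

Keeping this transformation in one place is what lets the secondary Conversation instance be repopulated automatically whenever the session data changes.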
OK, first of all, data quality was one of the harder parts: making sure that you are answering the right questions and that the quality of your responses is correct. Because if it's not correct, there's no confidence at all. Then, when you do start surfacing good results, we need to show why we give those results. So one of the biggest challenges was figuring out how to convert numbers and all that stuff into something that a human can look at and go, that's why you did it. Would you like to add? And on top of it, we have a machine learning algorithm which, every couple of hours, is retraining itself and trying to get better based on what we know is happening at the event: our predictions, measuring their accuracy, and then improving with additional recommendations. So the data source was relatively straightforward. I mean, the corpus was the event sessions, right? So that's pretty fixed. It was really the algorithm that you had to continue to iterate on. And then how did you test it at some scale where you had confidence? Did you sort of open it up to a bunch of internal IBMers? Or did you just watch what people were doing as it was live? We soft-launched, and we have extremely effective monitoring tools that tell us everything about the application. So that part was pretty easy, actually. In terms of what, performance you mean? Performance, alerts, if anything goes wrong, we know. And just complete performance monitoring in the end, from how long it takes for each request. But how did you test your algorithm in terms of the machine learning, the efficacy of the algorithm? There are two elements to it. First, when it comes to the user talking to Watson, we can measure and understand what they're actually asking. So we keep a log of everything they do. And we are able, over time, to improve the system by understanding whether Watson is getting better. So you had a feedback loop there. Exactly.
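The accuracy measurement feeding that retraining loop, comparing what the model recommended against what attendees actually picked, can be sketched as a simple precision score. The function and data here are illustrative, not the event's actual metric:

```javascript
// Fraction of recommended sessions the attendee actually picked.
// This is ordinary precision; the real system may have weighted or
// ranked its evaluation differently.
function precision(recommended, attended) {
  if (recommended.length === 0) return 0;
  const picked = new Set(attended);
  const hits = recommended.filter(id => picked.has(id)).length;
  return hits / recommended.length;
}

// Hypothetical example: 5 recommendations, 2 of them were attended.
const recommendedIds = ['S1', 'S2', 'S3', 'S4', 'S5'];
const attendedIds = ['S2', 'S5', 'S9'];
console.log(precision(recommendedIds, attendedIds)); // 0.4
```

A score like this, logged every couple of hours alongside the conversation logs, is the kind of signal a periodic retraining job can use to tell whether the recommendations are actually improving.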
And from the recommendation point of view, it was more at the level of: I predicted 10 sessions, how many of those sessions were actually picked? And that helps us build confidence in the model. We've got to wrap, but I'll give you the final word, Pietro. You had said you have some other things that you wanted to share. So one thing, where we are going with this and tapping into what you mentioned before, is: how do you make this cognitive system trustworthy to the user? How do I know that when I look at the recommendation, this recommendation is indeed valuable from my point of view? Watson, tell me why. I'm from IBM Research, and one of the activities we are doing now is studying how users can build trust in a digital environment. That's one of the areas that will be coming in the next few months. And then how can I see this? How can people go see it? Actually, the best way to do it: go on IBM.com, scroll down, and say chat with Watson. Chat with Watson? All right, great. Gentlemen, thanks very much for coming on theCUBE. Really appreciate it. Thank you. Congratulations on the success. All right, keep it right there, everybody. We're bringing you the cognitive signal from the noise here at World of Watson. We're right back. This is theCUBE.