Live from San Francisco, it's theCUBE, covering Google Cloud Next 2018. Brought to you by Google Cloud and its ecosystem partners. Hey, welcome back everyone. We're live in San Francisco for Google Cloud's conference, Next 18, hashtag GoogleNext18. I'm John Furrier with Dave Vellante. Our next guest is Faizan Buzdar, senior director at Box: collaborative file sharing in the cloud. No stranger to cloud, welcome to theCUBE. Thank you for having me. So you guys have a relationship with Google. First, talk about the relationship with Google. You have some breakouts you're doing on machine learning, which I want to dig into, but take a step back and take a minute to explain the relationship between Box and Google Cloud. So Box has partnered with Google for a few years now, and we have at least two key areas of collaboration. One is around the Google productivity suite. That was actually announced last year, but we demoed it for the first time in public today. If you look at a bunch of customers, about 60% of the Fortune 500 that chose Box as their secure content layer, these customers can now go into Box and say, create a new Google Doc, Google Sheet, or Google Slides, and it will fire up the Google editors. You get all of the benefit of the rich editing and collaboration, but your content is stored long-term in Box. It does not leave Box. So from a security and compliance standpoint, if you've chosen Box, you now get to use all of the power of the Google collaboration. It's Google Drive inside Box, but natively you guys have the control for that back end, so the user experience feels native. Yeah, and in this case it doesn't touch Google Drive. It basically never leaves Box. That's the key benefit if you're a Box customer. That's awesome. That's great for the user, great for you guys. Okay, so take a step back. Now, what's your role there? What do you do?
So I'm a senior director for product management, and I look after two areas. One is our best-of-breed integration strategy, such as the ones with G Suite or Gmail. And the second area is machine learning, especially as machine learning relates to specific business process problems in the enterprise. So how do you use data? You talked about the integration. How are you using data to solve some of those business process problems? Maybe give some examples and tie it back into Google Cloud. So for example, we announced a product called Box Skills last year at BoxWorks, and we're going to talk about it next month at BoxWorks too. The strategy there was: we will bring best-of-breed machine learning to apply to your content in Box, and we will take care of all of the piping. I keep hearing machine learning is the new electricity, but if you talk to CIOs, it's a weird kind of electricity for them, because it feels like they have to uproot all of their appliances and their factory and take them to where the electricity is. It doesn't feel like electricity came to their factory, right? So we looked at it and we said, hey, we have probably one of the biggest, most valuable repositories of enterprise content. How do we enable it so that companies can use machine learning without worrying about that piping? So Box Skills actually has two components to it. One is what we would call skills that are readily available out of the box. As an example, today we're in beta with Google Vision. And the way an admin turns that on is literally going into the admin panel and turning on two checkboxes, choosing which folders to apply it to, maybe applying it to all of the images in the enterprise. So if you're a marketing company, now all of your images start to show these tags, which were returned by Google machine learning. But to the end user, it's still Box.
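To make that image-tagging flow concrete, here is a minimal sketch of turning Google Vision-style label annotations into the kind of metadata tags an end user would see on their files. This is illustrative, not Box's actual implementation: the response shape loosely mirrors the Vision API's `labelAnnotations` output, and the confidence cutoff is an assumption, not a product default.

```python
# Illustrative sketch: convert Vision-style label annotations into metadata
# tags. The dict shape and the 0.7 cutoff are assumptions for illustration.

def labels_to_tags(label_annotations, min_score=0.7):
    """Keep labels the model is reasonably confident about, as lowercase tags."""
    return [
        ann["description"].lower()
        for ann in label_annotations
        if ann.get("score", 0.0) >= min_score
    ]

# A mocked Vision-style response for one marketing image.
mock_response = [
    {"description": "Skyscraper", "score": 0.96},
    {"description": "Billboard", "score": 0.88},
    {"description": "Dog", "score": 0.31},  # low confidence: dropped
]

tags = labels_to_tags(mock_response)
print(tags)  # → ['skyscraper', 'billboard']
```

In a real integration the annotations would come back from a Vision API call; the point here is only the shape of the step that turns raw model output into user-visible metadata.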
They're still looking at their images. It still has all of that permissioning. It's just that where we previously had a capability for humans to add metadata manually, now that metadata is being added by machine learning. In terms of adoption for the enterprise, we made it super simple. And then the framework also enables you to connect with any best-of-breed machine learning. We look at it along two axes: the number of users who would use it, and the amount of business value that it brings. There are some things which are horizontal, like basic Google Vision, basic Google video, basic Google audio. Everybody would like an audio transcript, maybe; everybody wants some data from their images. That's something a bunch of users will benefit from, but it might not be an immense change in business process. And then there is another example: say you're a ridesharing company and you have to scan 50,000 driving licenses in every city that you go into. Currently you have a process where people submit their photos and then people manually add that metadata. Now you apply Google Vision to it and you extract the metadata out of that. I actually love scenarios like this. Enterprises often ask me where they should start in terms of applying machine learning, and my candid advice is: don't start with curing cancer. Start with something where there is manual data being added, and it's being added at scale. Take those scenarios, such as this driving license example, and apply machine learning, so that where previously it would take a month for you to get the data entered for 50,000 driving licenses, now you can do it in 50 minutes. And what's the quality impact? Presumably the machines are going to get it right more often, but do you have any data that you can share with regard to that?
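The driving-license scenario boils down to replacing manual keying with extraction from OCR output. Here is a hedged sketch of that step: raw text (as it might come back from a text-detection call) is parsed into the structured fields a human would otherwise enter. The field labels ("DL NO", "NAME", "EXP") and formats are hypothetical; real licenses vary by region, so a production system would need per-jurisdiction patterns.

```python
import re

# Illustrative sketch: pull structured fields out of OCR'd license text.
# The labels and formats below are assumptions for illustration only.

def extract_license_fields(ocr_text):
    fields = {}
    patterns = {
        "license_number": r"DL NO[:\s]+([A-Z0-9-]+)",
        "name": r"NAME[:\s]+([A-Z ]+?)(?:\n|$)",
        "expiry": r"EXP[:\s]+(\d{2}/\d{2}/\d{4})",
    }
    for field, pattern in patterns.items():
        m = re.search(pattern, ocr_text)
        if m:
            fields[field] = m.group(1).strip()
    return fields

sample = "CALIFORNIA\nDL NO: D1234567\nNAME: JANE DOE\nEXP: 01/31/2023"
print(extract_license_fields(sample))
# → {'license_number': 'D1234567', 'name': 'JANE DOE', 'expiry': '01/31/2023'}
```

Run over 50,000 images, this is exactly the month-to-50-minutes swap described above: the model does the data entry and humans only handle the exceptions.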
So that's actually such an awesome question, and I'll connect it to my previous advice to enterprises, which is: that's why I love these processes, because these processes have exception handling built into them already. Humans have at minimum a 5% error rate, sometimes a 30% error rate. When we look at captioned videos on TV from, like, 10 years ago, we can clearly see errors in what humans had transcribed, right? So most of these manual processes at scale already have these stages built in: data entry, data validation, and exception handling. The reason I love replacing the data entry portion is that machine learning is never 100%, but to the validation process, it still looks like the same thing. You still saved all of your money, and not just money, you saved the time to market. And that's also what Box does, right? Because if you use Box in combination with Google Cloud, and this is one of the things that I didn't talk about before, we looked at all of these machine learning providers and we came up with standard JSON formats for how to represent machine learning output. As an example, you could imagine that getting machine learning applied to audio is a different problem than getting machine learning applied to video, which is a different problem than getting machine learning applied to images. So we actually created these visual cards, which are developer components, and you can just put data in that JSON format. We will take care of the end user interactivity. As an example, if it's a video and you have topics, when you click on a topic, you see a timeline, which you don't for images, because there is no timeline. You matched the JSON configuration to the user experience expectation. Exactly. So now if you're an enterprise and you're trying to turn that on, you can already see the content preview, and now you can also see the machine learning output, and it's also interactive.
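The standardized JSON format he describes can be pictured as a "card" of model output plus the timeline spans where each item appears. The sketch below is loosely modeled on Box Skills-style skill cards, but the exact field names here are assumptions for illustration and may differ from the real schema; what matters is the shape: topics, each with `(start, end)` second ranges that drive the click-to-jump interactivity.

```python
# Illustrative sketch of a standardized ML-output card for a video, loosely
# modeled on Box Skills-style "skill cards". Field names are assumptions.

def make_topic_card(topics):
    """topics: dict mapping topic text -> list of (start, end) second ranges."""
    return {
        "type": "skill_card",
        "skill_card_type": "keyword",
        "entries": [
            {
                "text": topic,
                "appears": [{"start": s, "end": e} for s, e in spans],
            }
            for topic, spans in topics.items()
        ],
    }

# A topic that appears twice in the video; clicking it in the UI would let
# the viewer jump between these two spans on the timeline.
card = make_topic_card({"BoxWorks": [(12.0, 15.5), (301.2, 304.0)]})
print(card["entries"][0]["text"])          # → BoxWorks
print(len(card["entries"][0]["appears"]))  # → 2
```

An image card would carry the same `entries` shape with no `appears` spans, which is exactly why the audio/video/image cases can share one format while rendering differently.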
So if you were recording this video and you wanted to know, when did he say BoxWorks? You click on that, you'll get a timeline, and you'll be able to jump through those portions in the timeline. That's awesome. I mean, you guys are doing some great work. What's next? Final question: what are you guys going to do next? You've got a lot to dig into. You've got the AI and machine learning stack at Google. You've got the Skills at Box to merge them together. What's next? So I think for us, the machine learning thing is just starting. You'll learn more at BoxWorks, but for us, I think the biggest thing is: how do we enable companies to experience machine learning faster? Which is why, when we look at this two-axis image-audio-video framing, we enable organizations to experience that quickly. And it actually is like a gateway: the guy who has to process insurance claims, or the car damage photos, or the drone photos, looks at that Google Vision output and says, oh, if I can get these tags, maybe I can get these specialized business processes, and now he's looking at AutoML, announced today, and that really, really drives adoption. Autonomous driving, machine learning, it's going to happen. Great stuff. Real quick question: when is BoxWorks? I don't think it's on our schedule. I think it's August 28th or 29th. So I'm going to go check. I don't think theCUBE is scheduled to be there, but I'm going to make a note and follow up, check with Jeff Frick on that, because I think we were talking about covering the event. It's going to be local, in the San Francisco area. In Moscone, yeah. Moscone, okay, great, great. Well, thanks for coming on. Machine learning, certainly the future. You've got autonomous driving, machine learning, all kinds of new stuff happening. Machine learning changing integrations, changing software, changing operations, and delivering better benefits and expectations for users. Box is doing a great job. Congratulations on the work you're doing.
Thanks for coming on. Thank you. More theCUBE coverage after this short break. We're going to wrap up day one. We've got a special guest, so stay with us: one more interview, and then we've got all day tomorrow. We'll be right back.