Hi, I'm Colleen. I'm from Google; I've been with Google for exactly one year and one day. Yesterday was my anniversary. I work on a team that does open source integrations with Google Cloud Platform. Today we're going to talk about integrating machine learning and Stackdriver with Cloud Foundry.

I'm Mikey Bolt. I work at Pivotal, where I work with ISV partners to integrate their services into Pivotal Cloud Foundry.

So we'll start with the big picture, working right to left. We have Google Cloud Platform, which has all these great services. Today we'll highlight Cloud Storage, the machine learning Vision API, and the Stackdriver debugger for live production debugging, and there are several other services as well. These are all exposed through the Google Cloud Platform Service Broker, so you can use standard Cloud Foundry service bindings to consume them directly from your applications. So we have great services, you're going to make some great apps, and we're going to have a lot of happy users.

I'll start by walking through some of the other machine learning APIs that are available.

The Translate API, which a lot of you are probably familiar with, takes either plain text or HTML documents and translates them into a variety of languages. A possible use case is translating your own website: upload your HTML pages, run them through the API, and out pops a globally accessible website.

The Natural Language API does entity detection as well as sentiment analysis. Something you could use this for at your own company is processing customer reviews, picking out the ones that are especially positive, and highlighting those phrases in your next marketing campaign.

The Speech API takes an audio file and transcribes it to plain text. It can also recognize which language the user is speaking. A cool application for this: say you have a phone system that routes customers to support, and you detect that a user is really struggling with the system. Maybe the reason is that they're trying to speak a language your system doesn't support. So you take a little audio sample, run it through the API, and route them to an agent who can assist them in their native tongue.

Our newest API is the Video Intelligence API; we'll show you a little demo of that in a bit. It takes a video file and analyzes it for content. So if you're taking a video of your kid's little league game or something, it'll tell you which sections are actually part of the game and which sections are just you turning to record your wife's reaction to that home run. A potential business use case is holding a competition for your users: let them upload videos of themselves interacting with your product, run those through the API to pick out exactly where they're showcasing the product, and collect those snippets to use later.

And the Vision API is the one that's integrated into our application today. The Vision API has a bunch of cool functionality, including text OCR, palette recognition, and detection of whether an image is likely to contain adult content or violence. One cool thing you could maybe do as a retailer, say a furniture retailer, is let your customers upload pictures of products they already own, then use the palette detection to find similar pieces of yours to suggest back to them.
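To make that concrete, here's a minimal sketch of calling the Vision API's label detection from Java with the google-cloud-vision client library. This isn't the demo app's actual code: the local file input and the environment-based credential setup are illustrative assumptions.

```java
import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.EntityAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

public class VisionLabelSketch {
  public static void main(String[] args) throws Exception {
    // Read a local image; the demo app reads images out of a GCS bucket instead.
    ByteString data = ByteString.readFrom(Files.newInputStream(Paths.get(args[0])));

    AnnotateImageRequest request = AnnotateImageRequest.newBuilder()
        .addFeatures(Feature.newBuilder().setType(Feature.Type.LABEL_DETECTION))
        .setImage(Image.newBuilder().setContent(data))
        .build();

    // The client picks up credentials from the environment
    // (e.g. GOOGLE_APPLICATION_CREDENTIALS, or a CF service binding).
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      BatchAnnotateImagesResponse response =
          client.batchAnnotateImages(Arrays.asList(request));
      // The first label returned is the "top" one the demo app displays.
      for (EntityAnnotation label : response.getResponses(0).getLabelAnnotationsList()) {
        System.out.printf("%s (score %.2f)%n", label.getDescription(), label.getScore());
      }
    }
  }
}
```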
So obviously, each of these APIs is really cool on its own, but combinations are where the true power shines through. For example, running an image through the Vision API to get the text out, and then running that text through the Translate API, is, I'm pretty sure, how the Google Translate app works. Or you could run customer voicemails through the Speech API and then through the Natural Language API to gather a general sentiment: is this customer happy, so we can just record their kudos, or does this person need a callback from somebody very patient and willing to help them out? Another idea I thought was pretty cool: trawl Twitter for tweets related to your product, run the text of each tweet through the Natural Language API to get a sentiment analysis, and run the image through the Vision API to make sure it's safe for work. That's an easy way to find positive, marketable material for your next marketing campaign.

As a kind of added bonus, we're going to showcase the Stackdriver debugger in this talk. Stackdriver does logging, metrics, debugging, and trace. The logging and metrics functionality is available to Cloud Foundry through the nozzle, which is open source right now and also available on PivNet if you're a Pivotal Cloud Foundry user. Today we're showcasing the debug functionality that's built into the Java buildpack.

We kind of touched on this already, but these services are all integrated through the service broker, along with a bunch of others. And I'll just plug right now, in case any of these other services interest you: in particular, I think BigQuery and Spanner are going to be talked about in this room in the next couple of hours. So stick around after this talk and get all the Google data service knowledge you need.

All right, here's a picture of the application we're going to demo now. What it does is scrape the /r/aww subreddit, which has pictures of cute puppies and kitties and hamsters in sweaters and things like that. It pulls them down off of Reddit, sticks them in Google Cloud Storage, and runs the machine learning Vision API on them. It saves the top label the Vision API returns and presents that to the user along with the picture. As Colleen mentioned, we'll also have a Stackdriver demo in there to show live production debugging on this application.

So if we look now, there's no smoke and mirrors here: we have nothing in our storage bucket yet. I was supposed to already be SSH'd into this VM, and I forgot about it until just now. (I like to throw Mikey some curveballs, just to keep him on his toes.) All right, here we are. We have the application already pushed to Cloud Foundry, and there are no services bound to it at this point, unless Colleen is throwing another curveball. Nope, all right.

So we're going to go ahead and create an instance of the Google Cloud Storage service. This is just a standard Cloud Foundry cf create-service, and we pass in the name of the bucket it will create. (Oh, can you zoom the text a little more? Is that good? More? All right.) Then we bind that service to our application using the standard cf bind-service, and along with the bind we include a role so that we can both create and read the images in the bucket.
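In CLI terms, the two steps look roughly like this. The service and plan names, app name, bucket name, and role value here are illustrative assumptions; check cf marketplace and the broker's docs for the exact identifiers your installation exposes.

```sh
# Create a Cloud Storage instance; the broker creates the named bucket.
cf create-service google-storage standard my-storage \
    -c '{"name": "my-awwvision-bucket"}'

# Bind it to the app with a role that allows both writing and reading objects.
cf bind-service awwvision my-storage \
    -c '{"role": "storage.objectAdmin"}'
```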
We'll also create an instance of the Stackdriver debugger and attach it to the app in the same way. And the Stackdriver debugger (this is a live demo) is integrated through the Java buildpack. I hear that support will soon be coming for Python and Go, so you'll be able to get all the Stackdriver goodness on more applications. So I'll restage the application to run it back through the Java buildpack with that Stackdriver service binding in place.

In the meantime, we wanted to show you the Video Intelligence API. You can actually try this for yourself: go to cloud.google.com/video-intelligence. This is the animals video, in keeping with our theme. The video is a minute and 38 seconds long, and it took about 15 seconds to process. We got out all of these labels, and the cool thing here is the shot labels: you'll see these change as the shots change, identifying exactly what's in each shot. And if you come to the API, you'll get the same information, with time offsets as well as a confidence score for what exactly is in the shot. Pretty cool stuff.
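As a rough sketch of what that call looks like from Java, here's shot-label detection with the google-cloud-video-intelligence client. This assumes the current v1 client rather than whatever version existed at the time of the talk, and the GCS input path is a made-up placeholder.

```java
import com.google.cloud.videointelligence.v1.AnnotateVideoRequest;
import com.google.cloud.videointelligence.v1.AnnotateVideoResponse;
import com.google.cloud.videointelligence.v1.Feature;
import com.google.cloud.videointelligence.v1.LabelAnnotation;
import com.google.cloud.videointelligence.v1.LabelSegment;
import com.google.cloud.videointelligence.v1.VideoAnnotationResults;
import com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient;
import com.google.cloud.videointelligence.v1.VideoSegment;

public class ShotLabelSketch {
  public static void main(String[] args) throws Exception {
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
      AnnotateVideoRequest request = AnnotateVideoRequest.newBuilder()
          .setInputUri("gs://my-bucket/animals.mp4")  // placeholder GCS path
          .addFeatures(Feature.LABEL_DETECTION)
          .build();

      // Annotation runs as a long-running operation; block until it completes.
      AnnotateVideoResponse response = client.annotateVideoAsync(request).get();
      VideoAnnotationResults results = response.getAnnotationResults(0);

      // Shot labels carry a time segment and a confidence for each appearance.
      for (LabelAnnotation label : results.getShotLabelAnnotationsList()) {
        for (LabelSegment segment : label.getSegmentsList()) {
          VideoSegment s = segment.getSegment();
          System.out.printf("%s: %ds-%ds (confidence %.2f)%n",
              label.getEntity().getDescription(),
              s.getStartTimeOffset().getSeconds(),
              s.getEndTimeOffset().getSeconds(),
              segment.getConfidence());
        }
      }
    }
  }
}
```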
I bet we're restaged by now. Almost. This one takes a few seconds to start up because it needs to run through the Java buildpack. Cool. So now we'll go to the live application, and we'll see that there are no images there yet; again, no smoke and mirrors.

We'll set up Stackdriver now, pulling the code from GitHub. Creating the service instance in Cloud Foundry is what lets us get to our application here in Stackdriver, and we can import the source code for the application directly from GitHub. This is an open source example that comes along with the GCP service broker repository, so it's in there if you want to look at the code for yourself. We'll put a breakpoint inside the endpoint that goes out to Reddit to do the image scraping. And that's it: a few easy steps, you get to see your source code, and you set your breakpoint wherever you like.

All right, so now we'll hit that endpoint to do the scrape from Reddit, and we can see the information pops right in here. We can browse the local variables from that breakpoint, so we can dig down and see the images coming out of the Reddit scrape. We also get a stack trace, so you can see exactly where your code was in your live production system. Another cool feature is that these snapshots are shareable: I can just copy this URL and paste it into another window, or send it to another developer who understands the code better than I do, and they'll get this same snapshot with the same local variables and stack trace available to them.

All right, so now we'll come back to the application and go to the main page, where there was nothing before. And now we see we have all the puppies and seals and whatever else is on Reddit right now, with the labels from the Vision API as well. We'll go ahead and take a look at the storage bucket, see the images in there, and pull one down to show some more details that the Vision API pulls out. You can try this for yourself as well; it's available on the GCP website, and you can just drag your image right in.

All right, so it runs the Vision API, and we see the labels it came up with for that image; the top one was "dog." It can also do a reverse image search, so you can see where else on the web this image is being used. These happen to all be Google results, but we did get a result this morning from Pinterest or Twitter, so you know it's not restricted to Google searches. We get the color palette that came out of the image. We get safe-search ratings for adult content, violence, and things like that, so you can make sure it's an appropriate image. And all this information is also available through the API in the JSON format we see here.

When I was playing around with this, remember I said the Vision API can do text recognition as well? I actually thought it'd be fun to run this particular image through the API just to be sure. So that's this guy, and indeed it does pick up the text: "sun" and "sailboat." It even takes a guess at a document layout for you, identifying that these words are in distinct paragraphs, along with the same information you were getting before.
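For completeness, here's what that text-detection call might look like in Java. Again a hedged sketch rather than the demo's actual code: it assumes the DOCUMENT_TEXT_DETECTION feature, which returns the recognized text plus the inferred layout hierarchy the demo showed.

```java
import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.Block;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.Page;
import com.google.cloud.vision.v1.TextAnnotation;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

public class OcrSketch {
  public static void main(String[] args) throws Exception {
    ByteString data = ByteString.readFrom(Files.newInputStream(Paths.get(args[0])));

    AnnotateImageRequest request = AnnotateImageRequest.newBuilder()
        .addFeatures(Feature.newBuilder().setType(Feature.Type.DOCUMENT_TEXT_DETECTION))
        .setImage(Image.newBuilder().setContent(data))
        .build();

    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      AnnotateImageResponse response =
          client.batchAnnotateImages(Arrays.asList(request)).getResponses(0);

      // fullTextAnnotation carries the recognized text plus the inferred
      // document layout: pages -> blocks -> paragraphs -> words.
      TextAnnotation text = response.getFullTextAnnotation();
      System.out.println("Text: " + text.getText());
      for (Page page : text.getPagesList()) {
        for (Block block : page.getBlocksList()) {
          System.out.println("Block with " + block.getParagraphsCount() + " paragraph(s)");
        }
      }
    }
  }
}
```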
Yeah, and as Mikey mentioned, this code is publicly available through the GCP service broker repo, so feel free to download it and play around with it yourself. And I think, other than that, that's all we had to show you. So if you have any questions, feel free to ask us now or after the fact. I'll also do one more plug: if you found this at all interesting and all of your friends went to the Kubo talk that's at the same time, we're doing this demo again tomorrow in the demo theater in the foundry, at 1:40, I think. Something like that. Cool, thank you. Any questions?

The question is: is it possible to add a non-public repository to the Stackdriver code import? Yes. There's local files, and then any of the other services you use. For on-prem, either the source code capture or local files is probably going to be the easiest way.

The next question was: is there an IAM role that gets you project view-only rights along with Stackdriver view rights? I believe so. The Stackdriver debugger has what's called a custom role. I didn't show assigning it here because it's defaulted in the service broker, but giving a user that custom role along with project viewer should be sufficient.

Are there any other questions? So the requests still go through, as we saw. It doesn't actually stop execution like a traditional debugger breakpoint would; it just captures all that information and pulls it into Stackdriver.

Sorry, is your question also about Stackdriver? Yeah, so that's the same as the first question: if you're on-prem, you can select your source through local files or through the source code capture.

If your application is not running on Google Cloud Platform, can you still use Stackdriver? I believe so. You just need a project created in Google Cloud. But because the Stackdriver integration is built into the buildpack and the service broker, both of which are cloud agnostic, you should be totally fine on any service provider.

Performance impact? That's a fantastic question that I honestly don't know the answer to. It's basically streaming data, so I want to say it shouldn't be too big of an impact. Ben, do you have an answer? Anybody else?

Awesome. Well, thank you for coming. Like I said, stick around if you're interested in any of the other Google data services, and we'll be here to talk to you. Thanks.

Thank you.