I'll try to repeat questions so the majority of the audience can hear them, so please feel free to ask questions as we go. Okay, cool. Yes? Oh, that was not a question. Okay, cool. And let's also give a hand to people who ask questions. This slide is supposed to show the events; you can't see it well due to the brightness, but imagine it does. Actually, the only important thing to know is that we run three big Cloud Next events, basically one per month: we did one in July, there was one in Tokyo, and the last one was in London just recently. So what I'm going to do is go through the announcements from those three big global Cloud Next events, the last of which was London. That basically covers everything. What do we actually announce at Cloud Next? Basically three things: products that go into alpha, products that go into beta, and products that go to general availability. I'm going to cover all three today, and I'm showing a few things that are in alpha; you can actually sign up for all of them. Some of you might also be part of other programs, or work for customers that already have access, so you might not even need to sign up. Now, personally, my specialty is DevOps and data, so those are my two topics. I'm not really an ML specialist, but I know a little bit about machine learning, and that's where today's announcements start. I've also tailored the selection to who we have here in the audience, so I'm not going to go through announcements that I really can't talk about in depth. Security, for instance; I know Michelle also mentioned that we didn't have anyone to talk about security. Cool. So let's start with number one: AutoML. There were a lot of ML announcements,
and AutoML is one of them. I chose AutoML because it's, I think, the most disruptive announcement if you're already working with machine learning, and it covers a lot of different types of problems. So, very briefly, what is AutoML? It's basically pre-trained machine learning models, and we use transfer learning to adapt them to your data. You upload your own data to GCP, we do hyperparameter tuning in the cloud using GPUs, and you get back a model that fits your specific problem. Think of it that way. Now, the Cloud Vision API, like all of our machine learning APIs, only has general knowledge: if you show it a cat, it says "cat." But if you train it on your data, it will give you your specific labels: that's your cat. So either the normal Cloud Vision model says "it's a cat," or, after you train it with pictures of your cat, it tells you exactly which cat it is. In an industry context, if you show it some machine part, it just says "part"; but trained with your parts, it will tell you exactly which part this is. All the GPU work and the hyperparameter search are done by us. We announced AutoML Vision, which I'll talk about, but we also announced very interesting AutoML models for natural language and translation. Sometimes you are in a profession that has a very particular language, and that's not something a generic language model can pick up. You might use words that are very uncommon; in medicine or in law, for instance, there is very specific, unique terminology.
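To make the "upload your data, get back a model" flow concrete, here is a minimal sketch of calling a trained AutoML Vision model over REST. This assumes the v1beta1 API that was current at the time; the project and model IDs are made up.

```python
import base64

# Hypothetical model path; AutoML gives you one like this after training.
MODEL = "projects/my-project/locations/us-central1/models/ICN12345"
URL = "https://automl.googleapis.com/v1beta1/" + MODEL + ":predict"

def build_predict_request(image_bytes: bytes) -> dict:
    # AutoML Vision expects the image base64-encoded inside the payload.
    return {
        "payload": {
            "image": {"imageBytes": base64.b64encode(image_bytes).decode("ascii")}
        }
    }

body = build_predict_request(b"raw bytes of your cat picture")
# To actually send it, attach an OAuth token, e.g. with requests:
# requests.post(URL, json=body, headers={"Authorization": "Bearer " + token})
```

The response lists candidate labels with confidence scores, which is what makes the human relabeling loop possible.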
And so a regular text-to-speech or speech-to-text model might not pick those terms up correctly, or give them the right semantic context. So let's say you're using Dialogflow, which is basically a framework for building assistants, and you're running a hotline where people can call in about medical issues. Then you want it to detect certain words and map them into the right semantic groups, and you can now train those custom terms. It also works with translation, so the two form a pair. Let's say you also want to translate those conversations: you can now teach the translation model the particular translations you use. I know, for instance, many enterprises have an internal translation service with a dictionary that says "for our company, this is the translation we use," so that everybody uses the same terms. I come from Europe, where that's very common for legal texts: within the EU, there is literally a dictionary that pins down the legal translation of each term. And that's the kind of thing you can now do with AutoML. What's also interesting is the feedback loop: if you play with it, it gives you not just one prediction but all of the labels with confidences, so you can have humans look at the errors and actually relabel them, which feeds back into training. A very interesting process. So that's AutoML; I think that was one pretty exciting announcement. I have a demo for the next section, but training AutoML live would just take up too much time here. So the second one is the Vision API features that went to general availability; they're now available to everyone.
What does the Vision API do? Image analysis: you send it an image and it gives you back labels from a completely pre-trained model, so it doesn't need to know anything about your particular use case up front. What was announced as a new feature is that it can also recognize products and handwriting. It now has handwriting detection and document detection, which is very interesting for companies who want to, for instance, process pages of handwriting. You can expect those features to flow back into AutoML at some point, so the integration between the Vision API and AutoML becomes deeper: you can train, for instance, an AutoML model on your products, and still have a very easy-to-use API in front of it. Not going too deep; let's do a little demo on that instead. Like I said, this one is a bit last-second today, sorry for that, but it might be interesting to quickly show you what you get. This is the "Try it" version of the Vision API. You can just go to the website, let me make it a little bit bigger for you, and upload an image, even without a GCP account. So I just drag in a picture; this is a well-known building here in the city. The API analyzes the picture, and now it gives me some general information: it recognizes it as a landmark, very interesting. But what's more interesting is the web entities it gives you from the internet. It actually knows which building in the city this is. It even finds the architects, DCA Architects, who built that building. And it even finds, with low confidence, or low relevance in this case, that it's a popular place to visit.
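The drag-and-drop demo page is just a frontend over the `images:annotate` endpoint. A minimal sketch of the same request, assuming a placeholder API key:

```python
import base64

# Placeholder key; the real demo page uses its own credentials.
URL = "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY"

def build_annotate_request(image_bytes: bytes) -> dict:
    # Ask for several feature types in a single call.
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "WEB_DETECTION"},         # web entities, e.g. the building
                {"type": "LANDMARK_DETECTION"},    # named landmarks
                {"type": "SAFE_SEARCH_DETECTION"}, # violence/medical flags
            ],
        }]
    }

body = build_annotate_request(b"raw image bytes go here")
```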
So that's already pretty interesting information, and it's coming straight from the web; it's essentially a Google web search inside the API that it runs for you. The Vision API also has some more interesting properties; I don't always think of it just as classification. For instance, it gives you crop hints: suggestions for where to crop your image. So let's say you're building an app and you want people to upload their pictures as profile images. You can use the Vision API to give you back the bounding box of the face, which might be exactly what you need. Or say users should upload a picture and you want to adapt the background of the app to match: it also gives you the dominant colors, so you could use those to adjust the theme of the app, for instance. That's fairly simple. And down in this box you can see that it also does safe search detection, which is very interesting if you have, for instance, uploads in a corporate environment. It tells you whether an image contains anything like violent or medical content. So say you're running a public support forum and users can upload images of the problems they have with your product. You may want to prevent anything unsafe from being uploaded, just because you care about your users and you don't want them to see something disturbing, or worse. Okay, that was the Vision API. The second demo, very quickly, is the Speech-to-Text API. I'm originally German, so I'm going to speak German to you, so you can see that it handles German as well. [speaks German] Okay, so you see, that's roughly what I was saying. It's still running,
that's why it's still showing; it's displaying the transcript in green. But now I'm also feeding it into the Translation API, which literally takes the transcript and translates it. So, yeah, very simple; there's nothing fancy behind it. But the fact that you can just use that right now in your app, through a plain API, is pretty cool. It's just a simple JSON request; you can use it today and just play around with it. And just this week, by the way, I was at a client who was building something like that themselves, in a really fancy way, and I said: don't waste the time, just call the API. There's no need for you to go that far. So play around with it. As I said, I'm not even typing on this page, it just runs, so you can test it yourself. Speech-to-Text supports all the languages that the Translation API supports, so that's quite a few languages now. Especially here in Singapore, where many people speak a second or even a third language, you'll also see a lot of regional languages, for instance from the Indian subcontinent. I think we recently added two languages there. Marathi, for instance; I know someone who was using it for Marathi, which was kind of cool. They didn't even expect it to be supported. Okay, cool. So let's go back. If you think I'm going on too long, okay, but I'm the first speaker, so I have that advantage. One thing I forgot to say: you can also use this for content moderation. I mentioned safe search on the side, but imagine safe search the other way around.
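The "simple JSON request" from the demo can be sketched like this: one call to Speech-to-Text for the German transcript, one to the Translation API for the translation. Endpoints follow the public v1/v2 REST APIs; the API key is a placeholder.

```python
import base64

SPEECH_URL = "https://speech.googleapis.com/v1/speech:recognize?key=YOUR_API_KEY"
TRANSLATE_URL = "https://translation.googleapis.com/language/translate/v2?key=YOUR_API_KEY"

def build_recognize_request(wav_bytes: bytes) -> dict:
    # Transcribe 16 kHz linear PCM audio, spoken in German.
    return {
        "config": {
            "encoding": "LINEAR16",
            "sampleRateHertz": 16000,
            "languageCode": "de-DE",
        },
        "audio": {"content": base64.b64encode(wav_bytes).decode("ascii")},
    }

def build_translate_request(text: str, target: str = "en") -> dict:
    # Feed the transcript straight into the Translation API.
    return {"q": text, "target": target}

speech_body = build_recognize_request(b"...wav bytes...")
translate_body = build_translate_request("Guten Tag, wie geht es Ihnen?")
```

Both bodies would be POSTed as JSON, exactly as in the demo.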
You only surface the pictures that are flagged for something like violence or explicit content, and hand exactly those to your content moderators. Okay. So, because I know there are many app developers here, let me also cover Cloud Firestore. What is Cloud Firestore? It's basically a mix of Cloud Datastore and the Firebase Realtime Database, combining the best parts of both into one. Cloud Firestore has a document-based data model, rather than the entity model that Datastore uses to store data. But at the same time, it provides the realtime updates that you get from Firebase, and the offline synchronization from Firebase, because that was always the appeal there. A lot of Firebase users love the realtime updates, but they also want a proper backing database for a regular application model. So this is basically the combination: now you can have just one database that supports realtime updates into applications. You have all of the native client libraries for the usual platforms, like iOS and Android, but at the same time you have strong consistency and the strengths of a server-side database. It's in beta right now, and yeah, check it out if you're building mobile applications; it might make your application architecture a lot simpler, because you can build everything on one database. Having said that, just to be clear, it's still a document-based database. So if you need something more relational, then you might still want something like Cloud Spanner. But if your use case is the classic app that runs on the web as well as on mobile, you can just use it; you don't have to change anything. Okay, number four: Kubeflow. I saw three data scientists in the audience; has anyone heard of Kubeflow? One, okay, two.
I actually know one of the engineers on Kubeflow, which is partly why I included it. So, what is Kubeflow? Kubeflow lets you deploy your machine learning models on Kubernetes, and that's actually not as simple as it sounds. Even with TensorFlow it's not trivial: you could just put the model on a persistent volume and run TensorFlow Serving in one container, but it's a little clunky, and you have to manage a lot of it yourself. Kubeflow takes that away from you: you just say "I have this model, run it on that many nodes," and it maintains the whole lifecycle. Not only the lifecycle of the model in terms of serving, but also training: you can say "train this model on so many nodes." It's based on the same approach we use internally for TFX; if you're interested in that, there's a paper on it. And it goes one level further: it also includes Jupyter, so it supports model development too. You can have Jupyter running on the Kubernetes cluster, do some data analysis, standard scikit-learn stuff, whatever you use, and then either deploy that directly into serving, even if it's a small scikit-learn model, or convert it to TensorFlow and use TensorFlow Serving. So it's just generally much smoother than the current process, where you have Jupyter running on your local laptop, you play around with a small data set, and then it takes a lot of time to convert it into a different framework, serve it, and give it more resources. And it's developed by the same people who also work on our cloud services.
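Once Kubeflow has TensorFlow Serving running in the cluster, prediction is a plain REST call against TF Serving's HTTP API. The in-cluster hostname and model name below are made up; the request shape under "instances" is TF Serving's standard one.

```python
# Hypothetical in-cluster address of a TF Serving deployment on port 8501.
URL = "http://tf-serving.kubeflow.svc:8501/v1/models/my_model:predict"

def build_predict_request(features: list) -> dict:
    # TF Serving's REST API takes a batch of inputs under "instances".
    return {"instances": [features]}

body = build_predict_request([1.0, 2.0, 3.0])
# requests.post(URL, json=body) would return a JSON body keyed by "predictions".
```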
So they have strong community ties. Speaking of communities, let's go directly to the Cloud Services Platform. The next three announcements are part of the Cloud Services Platform, so let me explain what that means. The simple story is this: we talk about serverless, functions, Lambda-style compute and so on, and about containers connected to the cloud. But as a developer, you shouldn't really have to care which runtime you use, as long as something sensible runs it behind the scenes. And that's basically what the Cloud Services Platform is: we took Kubernetes and the ecosystem around it and built a platform that lets you run a function, an application, or a service on it, and around that we have many more ecosystem components. I'm going to go deeper into the details now, but this is basically the way to think about the Cloud Services Platform: "I just want to run my services, and I don't care how they're run." Okay, so let's start. If you write functions and services, you need CI/CD, ideally a zero-configuration CI/CD service, and Cloud Build provides exactly that. When you update something, it automatically runs the whole CI/CD pipeline: build, deploy, run a canary, and then roll out to the different nodes; all of the procedures you'd expect. Cloud Build has a container focus, so it builds container images for you. That's one of the core features, and you can actually play
with it. And what's really nice is that you can use it locally or in the cloud, so it also breaks down a small barrier from local development. You know the classic developer who says he still builds applications locally, does something, and sees how it works. So you can use Cloud Build locally, but then scale the same pipeline into Kubernetes on-premise or into GKE. It also has integrations: we announced a partnership with GitHub, so you can use Cloud Build as the CI/CD module on GitHub. If you check code in, it can automatically run your builds. That takes away a lot of the pain you currently have. So, next: Managed Istio. What does Managed Istio address? It addresses traffic management, canarying, and security between services. In case you don't know Istio: it's an open source project under active development, and it's basically the traffic manager on top of Kubernetes. It allows you to manage connections: you can say "service A can talk to service B," and it lets you see all of the services and how they communicate, which service talks to which one, how often, and with what latency.
You sort of get a topology view; it really is a service platform. And look at who's developing it (it's in the top right corner here): Google, of course, but IBM and Red Hat are also super strong contributors. In fact, someone told me that last month Red Hat succeeded Google as the highest committer to the Kubernetes project, so it's really great to see the community up there. And Lyft contributed Envoy, the sidecar proxy at the heart of it: it basically wraps your service with a proxy, and that proxy is Envoy, which came from Lyft. Lyft built a lot of networking features, and Envoy is the component those network features were introduced in. I just mentioned Envoy; this is a little chart of how that works. So basically, this is your application. Sorry, this is your pod, and this is your application container. Let's say you were building a game: you have your API for that, and some kind of a database. And this is the Envoy proxy that automatically sits in front of your application. This way, you can now control exactly which service talks to which, you can do service discovery, and you can do L7-based routing, not only pod-level routing, for instance. And you get all of the service instrumentation: you immediately get all of the metrics into the monitoring service without changing anything in the application. You just add Istio, it sees all of the network traffic, and it can all be observed in Stackdriver or other tools. And Managed Istio is the part of the Cloud Services Platform where you can run this directly against the cloud: you enable it and look at everything in Stackdriver. And then there's GKE On-Prem. GKE On-Prem was a bit of a surprise; we didn't immediately anticipate such a big interest in GKE On-Prem.
Not sure if everyone has heard of it, but lots of customers come to us and ask about this. Very interesting. So GKE On-Prem means we run a GKE cluster on your premises. Why would you want to do that? Well, for instance, if you have data that can't just go to the cloud, but you want a service that is connected directly: say a service that runs natively on-premise, on a cluster that redacts all of the PII, and then pushes directly up to GCP. So you can now have that kind of workload directly in your own building. It's fully managed: basically, as long as you run VMware somewhere, you can install it, and we manage it end to end. And the two sides are essentially twin clusters, one in the cloud and one on-premise, so they can talk to each other, you get service discovery across both, you see everything in Stackdriver, all of those features, and then you can extend your application into the cloud. So for hybrid workloads, that's what GKE On-Prem gives you. It's all integrated in the GCP console, so even though you have your on-premise cluster, you see everything in the GCP console right next to your resources from GCP. You have all of those features available, even though it's actually running on-premise. You can also use Cloud Build and the cloud source repos: you can check in the code in the cloud repo and then deploy it on-premise. So you get the cloud-side features of the Cloud Services Platform extended on-premise. To recap, what did I show? AutoML, the Vision API going GA, Cloud Firestore, Kubeflow, and, as part of the Cloud Services Platform: Cloud Build, Managed Istio, and GKE On-Prem.
Those are the things that I thought were most interesting, maybe for some of you in the audience as well. There are a few more small things that I'll quickly mention. They were not really headline announcements, more like side announcements, but some of them I think are really cool. One is BigQuery ML. I'm not sure if any of you have seen it, but you can now do machine learning directly in BigQuery. Literally, you can write a statement that trains a model, and then use that model inside a SELECT statement, which is pretty cool. Also in BigQuery: GIS. You now have geographic functions directly in BigQuery. So let's say you store geo coordinates of your choosing: you can now have a WHERE clause that says "within this circumference around Singapore," and you get back all of the data that falls inside Singapore. Very cool. Then there's Cloud NAT, a managed NAT service for outbound connections. That sounds like super basic functionality, but it's actually kind of important and powerful that you can connect to the outside in a managed way. Right, so you run a workload on GCE, and now you want to connect to the internet, and you want something in the middle that you can control. That matters because some clients have providers who expose services to certain IPs only, or on certain port numbers, those kinds of things, so you can now translate that centrally. And then we introduced, as a minor feature, but anyone who manages these kinds of setups will like it: container-native load balancing for GKE. Does anyone have an idea what that could be? Nobody? Okay, I'll explain it. So, there is a problem. You know the Google global load balancer, right? The global load balancer?
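As an aside, before the load-balancing answer: the BigQuery ML and GIS statements just mentioned look roughly like this, written out as plain SQL strings. All dataset, table, and column names here are made up.

```python
# BigQuery ML: train a model with a CREATE MODEL statement...
train_sql = """
CREATE MODEL mydataset.churn_model
OPTIONS (model_type = 'logistic_reg') AS
SELECT churned AS label, tenure_months, plan
FROM mydataset.customers
"""

# ...then use it inside a SELECT via ML.PREDICT.
predict_sql = """
SELECT *
FROM ML.PREDICT(MODEL mydataset.churn_model,
  (SELECT tenure_months, plan FROM mydataset.new_customers))
"""

# BigQuery GIS: keep only rows within roughly 20 km of central Singapore.
gis_sql = """
SELECT *
FROM mydataset.events
WHERE ST_DWITHIN(ST_GEOGPOINT(lon, lat),
                 ST_GEOGPOINT(103.85, 1.29),  -- Singapore
                 20000)                        -- metres
"""
```

You could paste these into the BigQuery console or pass them to the client library.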
It's awesome, okay? And you can connect it to GKE, which is also awesome. So you have GKE running behind the global load balancer; that's correct. Now, one problem is that the load balancer does not know which pod your traffic should hit; it only knows the nodes. So traffic takes a detour, which isn't optimal for performance, because it has to hit a node first and then look up the pod. You can't have direct connections into pods. If that's not so important to you, if your workload is stateless, fine. But sometimes, for a game, or generally for latency-sensitive, HPC-style applications, you actually want to make sure that every user always hits the same pod, directly. And that's basically what this is: the Google global load balancer now knows the pod addresses, so traffic goes straight into the right pod, and that gives you much lower latency. And you don't have to change anything for that; you use the same application you'd run on-premise. Zero change: you just deploy it to GKE, and automatically the Google load balancer has the awareness of where the pods are. And that's it. And the last one on my list: I mentioned Stackdriver for Istio, to get all the metrics in, but we also introduced Stackdriver incident response, basically a direct incident-management integration in Stackdriver. Right now, alerting in Stackdriver is a regular alerting system, and it can send you a page. But you can't really manage the page: in Stackdriver error reporting, you can see an error, and you can click "yeah, I've seen this error," but there is no workflow around it.
So now you can say: if I get an error of this priority, notify this group, and someone in the group can then own the error, and you can see who that was; all of those kinds of incident-and-response management features, with escalation. Okay, that was it. If you want, there is a nice blog post that has all of the announcements in it; it's the one with all the Next '18 announcements, so you can check it out. I think I'm fine on time, so we should still have room for some questions. Cool, yeah. So that was a very quick run-through. Any questions regarding any of the features, or anything else? You can just make it an AMA. Yes? Sorry, I don't have a microphone for you here, so let me repeat the question. [Audience] Can you talk about the input types that are supported? For example, you talked about one of the big buildings in Singapore, but what about cars and those kinds of things? If there are accidents and damaged cars, can we scan those in an application? Sure, you can, absolutely; there are no limitations. With a custom model, the question is usually not whether the machine learning can do it, but whether you have the data, right? And as for how it figures out what's in a picture: these are basically backed by web search results. What you saw in the example is the Vision API's web detection, where you put in a picture and it matches it against, if you like, Google's knowledge graph.
So we figure out, okay, this is that building, for instance, and then we look at the knowledge graph and can traverse it; that's how it finds, for instance, the architect. We can try it if we have time, and I can take other questions first, but I'm fairly certain that if I put in a picture of your car, it will at least tell me the brand of the car, to start with. Yeah, okay, next one. [Audience] Since you were talking about BigQuery earlier, I would like to understand how the data is stored and what the storage is related to. Okay, that goes a bit beyond this talk, but I can give you the 30-second version, okay? If you want to read about it, there are two seminal papers you want to read: start with the Dremel paper, and then the follow-up work on Dremel's storage. Basically, Dremel is a column store, but the trick is that it doesn't store the columns in their logical format. What you're presented with is a logical type system, the table as you see it. But behind that, we optimize how the columns are physically stored across many thousands of disks. And the more often you access a column, the more we replicate it: we copy the hot shards over to as many disks as we can. That's also very important, because if everybody hits one column more, the system starts to distribute the physical data for that logical column more widely. So the main point is that the physical layout is decoupled from the logical layout.
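The row-versus-column layout just described can be illustrated in a few lines. This is only a toy model of the idea, not how Dremel actually encodes data.

```python
# Two records in a row-oriented layout: each record stored together.
rows = [("alice", 30, "SG"), ("bob", 25, "DE")]

# The same data in a column-oriented, Dremel-style logical layout:
# each column stored together, so a query that touches only one
# column reads only that column's data.
columns = {
    "name":    [r[0] for r in rows],
    "age":     [r[1] for r in rows],
    "country": [r[2] for r in rows],
}

# An aggregate over one column scans just that column.
avg_age = sum(columns["age"]) / len(columns["age"])
```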
So that would be the short answer on how the query storage works. Okay. And since you'd want to pull data out: one thing I can tell you is don't pull too much through JDBC. We execute the query inside BigQuery really fast, but you're still constrained by the network to the client. As long as you stay inside the cloud, it's really the fastest; don't run the heavy processing on your laptop against BigQuery, because then you have the network in between and you naturally don't get the same performance. Thank you. Yes? Yes, sir. [Audience] I like this alpha/beta concept. My question is: what is the difference between alpha, beta, and GA? Okay, so usually when you release a product to the public, you can call that GA. You see this even in many Java products; for instance JBoss: the latest JBoss version, say 7.0, at some point gets the GA label, which means "generally available," which means it's basically stable now. It doesn't mean it's perfect, but it's stable. A beta release is pre-stable: we think it works, but it's at your own risk, so we don't provide you any kind of guarantees or SLAs on it. And an alpha release means: good luck. We have good intentions, and it should become stable software, but it might still have bugs, and it might change interfaces without notice, those kinds of things. What's Google-specific is that most of our GA products come with an SLA.
That's not the case for every vendor's products. So basically, if we have a product that's really in GA, that usually means it comes with clear guarantees.