Can you guys hear me? OK. Actually, it's my first time speaking at DevFest. Is my channel on? OK. How about now? OK. Hello, everyone. Good afternoon. Welcome to Google Developer Space. My name is Dan Bo-ren. I'm a customer engineer on the Google Cloud team. If this is the first time you've heard my name, you probably think it's a very interesting name. I normally make some jokes about it at the beginning, especially on a Saturday afternoon. You've probably heard of my brother, the muscle man, the wrestler. And the people staying in Singapore have definitely heard of my sister's business on the East Coast. Anyone get it? Yeah, Jumbo Seafood. OK. So I've worked at Google for almost three years, but it feels like seven or eight years already. I've passed some of the certifications: cloud architect, data engineer, developer, and also G Suite. I hope that's enough to qualify me to speak here.

Today I'm going to talk about Anthos. How many of you have heard about it before? OK, almost one third. That's great. I have two t-shirts here. By the end of my session I'll ask you a few questions. Whoever raises their hand first and gets it right gets a t-shirt. But sorry to say, my human intelligence cannot predict what size the winner will be. You can take it anyway, or share it with a friend. OK, let's do it.

I think working in the cloud world, the technology world, is really challenging now, because we have to understand infrastructure, storage, compute, data analytics, machine learning, devices. Oh my god, so many things. And that's still not enough: now you also have to learn some Greek and read some Greek mythology. The terminology, I think most people know it: Kubernetes, Istio, Titan, Anthos. The book I put here, the one on the left, is actually one of my favorite books. My son is a teenager already, so I always have some trouble dealing with him. But now I feel like we have more and more of a common language.
So when I talk about Greek terminology, he also understands. But for people familiar with Google Cloud, these words have different meanings. Kubernetes is orchestration for containers. Istio is for service mesh. Titan is our security chip, and also the security keys I'm carrying for two-factor authentication. And then there is Anthos, which we'll talk about today: the operating platform for hybrid and multi-cloud. Don't ask me why we named the product this; I have no idea. Within Google we have a saying: stop using Greek gods to name our products. We hope there will be no more Greek gods from Google Cloud Platform.

Since we're talking about Greek mythology, I'll use the structure of a book. The prologue covers some fundamentals before I talk about Anthos. I believe everyone is familiar with containers, yes? Kubernetes? OK. Istio? Wow, OK. So I can probably go through the first part quickly. Then the main story: the introduction to Anthos, the technical building blocks, what the components are and what they do. We will also focus on GKE On-Prem, which is the key component of Anthos. Last but not least, I will share with you some of the technical requirements when you plan to start the journey, and some resources and documentation, OK?

Great, let's talk about Google's innovation around container technology. I'd like to start with this. For the people who stay in Singapore, we are no strangers to containers, right? We're famous for our trading port, very close to here: Tanjong Pagar. You'll see a lot of containers over there, right? And does anyone know The Interlace, the condo around here? Yeah, it was named the best design in 2018. I don't understand why, OK? But those are all containers too. We're talking about a different kind of container, though, and Google is also no stranger to containers.
So think about the services Google provides; we have to manage them at scale. Every week we launch that many containers on our platform, and actually this number is not accurate anymore; now it's 6 billion. And it turned out to be, as everyone knows, an open source project. Kubernetes came from a Google internal project called Borg, and we made it open source. Now the whole community, even our friends, can benefit from this technology.

And how many of you think container technology came from Docker? Could you raise your hand? Anyone think Docker started container technology? Yeah, OK, I see someone say no. That's great, because actually it started inside Google first. Think about the scale at which Google has to manage its services. Very early on, we started to think about how we can isolate different resources and control which resources each process can access. That's the reason we came up with something called cgroups, control groups, which define what resources a process can access: resource isolation. Then we had something called lmctfy. Anyone want to try this one? I'll give you a t-shirt. Anyone want to try? What does lmctfy mean? OK, yes, please. What's the answer? Super. OK, later you can pick one shirt from here. "Let me container that for you." So simple, right? Disappointed? That's a Google internal term; basically, this was the Google version of a container. Then Docker was released, and Google adopted it. The reason Google went with the format people were using rather than our internal standard is that we didn't want to fragment the industry; better to use something people had already accepted. The rest is history, as everyone knows: we made it open source, and everyone can benefit from it. OK, after that we'll talk about Google Kubernetes Engine and Cloud Service Mesh, which are the managed versions of these two. I will go through this area very quickly.
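To make the cgroups idea concrete, here is a minimal sketch of capping a process's memory using the cgroup v1 filesystem interface. This is illustrative only: it assumes a cgroup v1 hierarchy mounted at /sys/fs/cgroup, requires root, and the group name `demo` is made up; paths differ under cgroup v2.

```shell
# Create a memory cgroup (cgroup v1 layout)
sudo mkdir /sys/fs/cgroup/memory/demo

# Cap the group at 50 MiB of memory
echo $((50 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes

# Move the current shell into the group; child processes inherit the limit
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs
```

Container runtimes do essentially this, plus namespaces for isolation, automatically for every container they start.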
I believe most people know this already. In a nutshell, Google Kubernetes Engine is the managed version of Kubernetes, and Kubernetes, as I said earlier, originated from Google; we called it Borg. Basically, you have a master that manages your API traffic and configuration and provides a mechanism to orchestrate all your workloads. Then you have one or more nodes running the kubelet and kube-proxy. It's a turnkey, open source project. As I said earlier, anyone can run it: in your data center, on Google Cloud, on our friends' clouds. No problem; anyone can benefit from it.

Then you might say: hey, since it's open source, why do I need a managed service from Google? I'm not sure how many of you have ever managed Kubernetes in your own data center. Could you raise your hand? No? OK. Then you know the pain. The challenge of maintaining and operating Kubernetes is not an easy task. GKE is a turnkey solution, which means Google becomes your operations team. For the people familiar with Google SRE, Site Reliability Engineering, Google has a whole team to help you manage the Kubernetes service, and you only focus on deploying applications, managing traffic, and controlling security: the most important things for you and your organization. Another benefit is that you can integrate with Google's other cloud-native services, like CI/CD, security controls, monitoring, logging, telemetry. You get all of these services out of the box from the platform.

This is what the very high-level architecture looks like when you run GKE on Google Cloud Platform. The master is a managed service; it's transparent to you, you don't need to worry about it, Google manages it. And most importantly, we don't charge you for it, OK? When I say we don't charge you, I mean our friends will charge you for theirs. Then the nodes: one or more servers where your workloads run. And we provide the option of "auto everything." Later I'll explain what auto everything means.
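As a rough sketch of the "Google as your ops team" workflow, creating a cluster and deploying to it looks something like this. The cluster name, zone, and image are placeholders, and the flags reflect the gcloud CLI of this era; they may change over time.

```shell
# Create a managed cluster; Google runs the control plane for you
gcloud container clusters create demo-cluster \
  --zone asia-southeast1-a \
  --num-nodes 3 \
  --enable-autorepair \
  --enable-autoupgrade

# Fetch credentials, then use plain kubectl as on any Kubernetes cluster
gcloud container clusters get-credentials demo-cluster --zone asia-southeast1-a
kubectl create deployment web --image=gcr.io/my-project/web:v1
kubectl expose deployment web --port 80 --target-port 8080
```

Everything below the `get-credentials` line is standard Kubernetes, which is the point: the managed part is the control plane, not your workflow.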
So basically, if you're familiar with Google Cloud, you use the gcloud command-line SDK to provision the cluster, then you use kubectl to deploy and manage all your applications. When I say auto everything, I mean we provide fully managed auto-repair, auto-upgrade, and autoscaling. Later I'll explain what kind of autoscaling we provide. We also have a beta feature called node auto-provisioning. Actually, when we say autoscaling, we have different levels: node level and pod level, vertical and horizontal, and then the application level. So you have different levels of autoscaling.

Some of the highlight features: private clusters, for enterprise customers. By default your cluster has public IPs, which makes enterprise customers nervous. So you now have the option to disable the public IPs and keep your workloads within your private network. Another one is a Google innovation around Kubernetes networking. You can call it container-native load balancing, or network endpoint groups. With a traditional load balancer, when traffic comes in, it is sent to the VM, and iptables at the VM level routes it to the pod. With container-native load balancing, we get rid of the iptables hop: your traffic goes from the load balancer directly to the containers. So you remove one hop from your traffic path. This is really great technology, and it's GA already.

OK, then let's talk about Cloud Service Mesh, which is the managed version of Istio. Istio, I believe everyone has heard about it, or some of you. It's a service mesh: an open platform for you to manage all your service traffic, security, and policy configuration. And it supports not only container-based applications but also VM-based ones, so Istio is a great combination with Kubernetes. This is the typical high-level architecture diagram, with the different Istio components decoupled. The Envoy proxy that Istio uses originally came from Lyft.
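For reference, the container-native load balancing described above is switched on per Service with a single annotation. A minimal sketch, with the service name and ports made up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    cloud.google.com/neg: '{"ingress": true}'   # send LB traffic straight to pod IPs
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

With this annotation, an HTTP(S) load balancer created through Ingress targets the pod endpoints directly instead of going through node IPs and iptables.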
And Google is one of the main contributors to this open source project. Different people use different terminology, but basically it decouples your service from your service control. It's called a sidecar; some people call it the proxy, and the proxy is Envoy. For the people who are developers, you probably understand the pain: hey, when I write my program, I don't want to care about security, service control, configuration. That's not my job; my job is coding. And I hope I can also enjoy writing documentation. This way, you can decouple your service from your service controls.

Istio also has three major components: Pilot, Mixer, and Citadel. Pilot takes care of traffic control. Mixer takes care of telemetry, logging, and monitoring. And last but not least, Citadel takes care of your security: your authentication and authorization.

Then again, if you manage Istio yourself, that takes a lot of effort and a lot of operational overhead. That's the reason Google provides our managed service, Cloud Service Mesh. We provide a managed component for Pilot called Traffic Director: a global control plane that directs traffic to VM-based and container-based workloads and autoscales based on the traffic. Then we provide a managed CA, Google identity, certificate management, and IAP. And last but not least, for Mixer, we didn't replace it; we just provide a managed service for your telemetry, logging, and monitoring. For the people familiar with Google Cloud, it's called Stackdriver. It gives you logging, monitoring, tracing, even debugging. Talking about debugging: that's an amazing capability. I like Stackdriver. You can debug your code in the cloud, in the production environment. There was nothing like that back when I was a programmer, 12 years ago. That capability is really amazing for developers. OK, here's a screenshot of how the service mesh looks. You can monitor all the traffic.
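To illustrate the traffic-control side, which is Pilot's job, here is a sketch of an Istio VirtualService doing a weighted split between two versions of a service. The names are hypothetical, and it assumes a DestinationRule elsewhere defines the v1 and v2 subsets.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90          # 90% of traffic stays on the stable version
    - destination:
        host: reviews
        subset: v2
      weight: 10          # 10% canaries to the new version
```

Because the sidecars enforce this, the application code never sees the canary logic; shifting traffic is just editing the weights.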
Most importantly, we build in Google's SRE practices. For the people familiar with Google SRE, we allow you to define error budgets and service level objectives, so you can define SLOs for your services.

OK, I think that's enough foundation on Kubernetes and Istio; now let's talk about Anthos. For Anthos I'll cover two parts: an introduction, the different technical building blocks, what they do, and what the value is for the customer; and then we'll zoom into GKE On-Prem, which is the key component for hybrid cloud. We call Anthos the operating system for hybrid and multi-cloud environments. We also used to say Google was the first, and the only, provider of a platform for hybrid and multi-cloud. That changed three days ago: our friends also came out with something similar. Anyone heard about it? It's called Azure Arc. It's built on the same standards, based on open source technology. When something is made in open source, someone will follow.

Think about Anthos like Linux for cloud platforms: across different platforms, different environments, all based on open source technology. Anthos allows you to manage your workloads and applications in your data center, on Google Cloud Platform, and even on our friends' platforms, as long as they're based on the same standard. We provide a control plane with a uniform monitoring UI for you to look at all your running applications, and you can deploy applications to different environments. We also provide a marketplace. For the people using open source stacks, it's very painful: you have to sign different contracts, and when you have a problem you have to talk to different vendors. Google tries to unify your open source stack, whether it's on-prem or in the cloud. Last but not least, you can still continue to leverage Google's cloud-native services.
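The error-budget idea mentioned above is just arithmetic: the budget is whatever downtime the SLO leaves over. For example, a 99.9% availability SLO over a rolling 30-day window:

```shell
# Error budget for a 99.9% availability SLO over a 30-day window
total_minutes=$((30 * 24 * 60))                      # 43200 minutes in the window
budget=$(awk "BEGIN { printf \"%.1f\", $total_minutes * 0.001 }")
echo "error budget: $budget minutes of downtime"     # prints: error budget: 43.2 minutes of downtime
```

Once the 43.2 minutes are spent, the SRE playbook says you stop risky launches until the window rolls forward.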
As I mentioned earlier, I'll talk about this more in later slides. This is what I call the technical building blocks of the Anthos platform. Of course, it's based on the open source technologies Kubernetes and Istio. You can leverage all the Google services: telemetry, logging, monitoring, configuration management, the marketplace, and other Google services. Then we also provide serverless: for the people who have heard about Knative, we provide a managed version called Cloud Run. Cloud Run is Google's managed version of Knative. Then we provide CI/CD using Cloud Build, and other components like Container Registry and Cloud Source Repositories. And last but not least, you can also do AI and machine learning on the Kubernetes and Istio platform. We call it Kubeflow, and it allows you to train your machine learning models, whether on-prem or in the cloud. So this is the high-level view of the different technical building blocks of Anthos.

Then let's go a little deeper into the technical details. By the way, some people may feel disappointed when we talk about Anthos, because they always ask: hey, where is the product Anthos? Where is it? Sometimes I feel like I'm going to disappoint you, because there is no single product called Anthos. It's a software platform stack: different open source technologies and different components put under one umbrella. Having said that, you can see the different technical building blocks within the Anthos stack. Of course, the critical piece is Kubernetes: Kubernetes in the cloud, Kubernetes on-prem. You can expect a consistent experience, whether on-prem or in the cloud, whether on Google Cloud or another cloud. Below that level is the container. We are still using Docker, but now we allow people to use different container runtimes via OCI, which is a more open container standard. Then we have Stackdriver for your logging and monitoring.
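As a sketch of the Cloud Run piece mentioned above: deploying a container serverlessly is a single command. The service name, image, and region are placeholders, and the flags reflect the gcloud CLI of this era.

```shell
# Deploy a container image as an autoscaled, request-driven service
gcloud run deploy hello \
  --image gcr.io/my-project/hello \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```

Because Cloud Run implements the Knative serving API, the same container can instead be deployed to a GKE or Anthos cluster that has Knative installed, which is how the serverless layer stays portable.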
And we have configuration and policy management, service management, and GKE Connect; we'll talk about those more later. And don't forget, we also provide the CI/CD tooling: Cloud Build, which is a managed equivalent of something like Jenkins. We also have privately hosted Git to manage your source code and your container images. And we have the marketplace for you to deploy your open source software stack, whether on-prem or in the cloud.

Let's go through the components one by one and see what they do. The first one is GKE On-Prem. Basically, we bring the Google-managed Kubernetes service to the customer's data center, allowing you to deploy applications no matter where they are. You can even bring your own hardware: by the way, Anthos is a 100% software stack, so you can reuse your existing hardware and protect your investment. We provide a managed service for Kubernetes in your data center, autoscaling capabilities for your on-prem workloads, and auto-upgrades with software that is validated and patched by Google. So you can expect Google to provide a managed service for your GKE clusters in your own data center.

The second one is configuration management. If you have experience managing configuration, security controls, and policy across multiple clusters, you can imagine how painful it is to manage them across multiple clusters and multiple environments. Anthos provides a single place, called Config Management, that allows you to define all your configuration in one single source, in YAML files. Later I'll talk about it more. It automatically syncs to your cloud environments and on-prem environments, keeping a single source of truth in a Git repository.

And the third one, as I mentioned earlier, is Google's managed service for Istio, which allows you to build a microservice-based architecture.
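For the CI/CD tooling mentioned above, a Cloud Build pipeline is just a YAML file in your repository. A minimal sketch, with the deployment and cluster names made up:

```yaml
# cloudbuild.yaml: build an image, push it, roll it out to GKE
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/web:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/web', 'web=gcr.io/$PROJECT_ID/web:$SHORT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=asia-southeast1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=demo-cluster'
images:
- 'gcr.io/$PROJECT_ID/web:$SHORT_SHA'
```

A push to the hosted Git repository can trigger this build, so commit, image, and rollout are tied together by the `$SHORT_SHA` tag.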
It allows you to control your traffic, authentication, and authorization on a single platform, and you can monitor all the traffic among your different microservices, whether they are container-based or VM-based workloads. And one more new feature I didn't put on the slide is called Workload Identity. If you have an external application calling services hosted on containers, you know about service account keys: you have to download the key, rotate it, archive it. It's so painful. I have so many customers who forgot to remove the key when they committed their code to GitHub. Then someone can use your service account to provision VMs, even do Bitcoin mining under your account. That would be a disaster. With Workload Identity, you don't need to worry about downloading any keys anymore; it's federated authentication. Even if your workload is running on our friends' cloud, like AWS, you can use the AWS EC2 instance identity to authenticate the workload's access to services in Google Cloud. So this is Workload Identity. Yes? Any questions?

[Audience question.] You mean the service mesh between your on-prem environment and the cloud? OK, so currently it is still a separate deployment per environment. Later I'll talk about GKE Connect; it's more like a central UI to control all your configuration, your policies, your workload deployments. But how the services on-prem and in the cloud interact with each other depends first on your network design: you have to build connectivity between your on-prem environment and the cloud environment so they can communicate with each other. And through the load balancer, the Traffic Director I mentioned earlier, you can divert traffic between different environments. I hope that answers your question; otherwise we can talk about it offline later. So GKE Hub and Connect allow you to register clusters across different environments.
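On GKE, the Workload Identity setup described above boils down to linking a Kubernetes service account to a Google service account. A sketch, with all names as placeholders:

```shell
# Allow the Kubernetes SA to impersonate the Google SA
gcloud iam service-accounts add-iam-policy-binding \
  app-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[default/app-ksa]"

# Tell GKE which Google SA the Kubernetes SA maps to
kubectl annotate serviceaccount app-ksa \
  iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com
```

Pods running as `app-ksa` then receive short-lived Google credentials automatically, so there is no key file to download, rotate, or accidentally commit.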
When you use GKE On-Prem, whenever you create a cluster, your cluster is automatically registered through GKE Connect. That gives you a single place to manage all the clusters across different environments. So again, these are the two heroes behind the scenes of the Anthos platform: GKE, which we talked about earlier, and GKE On-Prem, the turnkey solution that puts managed Kubernetes into the customer's data center. Google provides a managed service for you, and you can also integrate with Google's CI/CD tooling and other telemetry capabilities like Stackdriver.

I'll talk in a little more detail about GKE On-Prem. For the people familiar with Kubernetes: you know kubectl, and you will continue using it. We have a new command called gkectl, which creates the admin control plane in your data center and also your user control planes. After that, you can use kubectl to deploy applications, alongside Google's gcloud command line. One more thing I want to mention: so far, we support F5 for local traffic management. You can expect more load balancers to be supported, but currently it's only F5. For virtualization, we support VMware vSphere 6.5; on the roadmap there is more, like KVM support, in the coming years.

OK, so this way, from the cloud environment, you gain single-pane visibility across your on-prem and cloud environments. You can also view the clusters and deploy applications to your on-prem environments. When we say there is one platform managing your on-prem environment and also the cloud, you may have concerns: hey, do I need a public IP to let you connect to the cloud? Actually, you don't necessarily need a public IP to connect to Google Cloud Platform. Basically, it goes through a NAT gateway and firewall, and we build a secure tunnel between your environment and Google Cloud.
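Registering an external cluster with GKE Connect, as described above, looked roughly like this at the time. The membership name, kubeconfig context, and key file are placeholders, and the command shape may have changed since.

```shell
# Register an on-prem (or other-cloud) cluster with the GKE Hub
gcloud container hub memberships register my-onprem-cluster \
  --context=my-onprem-context \
  --service-account-key-file=connect-sa-key.json
```

This installs the Connect agent in the cluster, and it is the agent that opens the outbound, authenticated tunnel back to Google Cloud, which is why no inbound public IP is required.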
And at the physical layer, you have different options: you can build a Dedicated Interconnect between your data center and a Google Cloud data center, or you can use Partner Interconnect, or use VPN. This is how GKE Connect looks. Basically, the way we get visibility into the applications running in your environment is all through GKE Connect: an agent running in your environment that collects all the telemetry data and metadata, and through which we can send commands and deploy applications. Since we're talking about telemetry: you also have the option to install the Stackdriver agent in your data center, so you can centrally manage all your telemetry information.

Again, on configuration and policy management: this is an example showing Git as the single source for managing all your policies and configurations. You can define policy and configuration at the organization level, cluster level, workload level, or user level, and everything you define is just a YAML file. Every time you change it, it is automatically synced to multiple environments. That simplifies your operational overhead for policy control and configuration management.

OK, with that, we still have two minutes; I'm on time. So, the technical requirements: actually, I've covered them more or less already. Let me talk about some resources in case you want a deeper understanding of the Anthos platform. First, so far we support vSphere 6.5, and more virtualization platforms are on the roadmap. For the load balancer, as I mentioned earlier, we support F5 now, and you can expect us to support more, like Cisco or Palo Alto. Here are some of the resources and documentation; it always comes back to cloud.google.com.
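The Git repository for Anthos Config Management follows a fixed directory convention, roughly like this sketch (the `payments` namespace is an invented example):

```
config-root/
├── system/            # operator and repo metadata
├── clusterregistry/   # cluster and cluster-selector definitions
├── cluster/           # cluster-scoped objects, e.g. ClusterRoles, quotas
└── namespaces/
    └── payments/
        ├── namespace.yaml     # declares the payments namespace
        └── rolebinding.yaml   # RBAC applied wherever this syncs
```

Every registered cluster runs a sync agent that watches this repository and applies changes, so a Git commit is the one way configuration reaches all environments, and `git revert` is your rollback.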
You can find more information there: the technical overview and the Anthos components. For the people who want to try Anthos: it's not something simple like GKE, where you can quickly enable it in the cloud and try it. You have to talk to a Google representative, and they will reach out to you to qualify your use case and your environment. If you match the use cases, we can think about giving you a three-month trial license. So far it's more for enterprise customers, especially highly regulated ones like financial services or government, or customers who have their own Kubernetes environments and want to build orchestration across multiple environments.

With that, I come to the end of my session. I think that gentleman got one shirt already. I probably have one question for you, because I only have two shirts; should there be two winners, I can go one floor up to get one more. I want to ask: can anyone share with me two or three of the values of Anthos? You want to try? [Audience: It's a consistent platform for managing the public cloud as well as the on-prem environment.] Yeah, consistent. [Audience: It's a software stack; it can run on anything on-prem.] Yeah, a software stack; you don't need to refresh your hardware. Perfect, thank you. That's a good summary of my session as well. So: it's open source; it gives consistent user experiences, whether on-prem or in the cloud; and it's a 100% software stack, so you don't need to refresh your hardware. And the beauty, to me, is the open source technology. You never have to worry about vendor lock-in; you can move out anytime, should you not feel the advantage or value from it. With that, thank you very much. I enjoyed the session. I will reach out to you guys with the shirts. Thank you.