Hello, and welcome to this lecture. This is part of a masterclass on cloud computing for architects, designers, developers, and system operations (SysOps) staff. Our main focus is Google Cloud Platform, but along the way we will compare it with the other cloud services. In this lecture we will talk about what cloud computing is — a core understanding of cloud infrastructure, mapped to real-world examples. As per the Wikipedia definition, cloud computing is an information technology paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, typically over the internet. Rapid provisioning is the important part, and this is the basis of cloud computing. There are some key ideas in this definition: resource sharing, on-demand availability, a pay-as-you-go model, availability all the time, and, most of the time, access over the internet. To use any cloud resources, you need a tool set with which you can provision those resources rapidly — and that is the power of the public cloud. We will look at different tool sets in the examples ahead. Before getting into the computing side of cloud, let's understand what "cloud" means through a real-world example, and here are some examples I use to explain what cloud is and how it maps. Consider a person who wants to travel from point A to point B. A and B could be 10 miles apart, 100 miles, or 1,000 miles — those are different use cases we can derive — but the core use case is that the person wants to travel from A to B. There are options.
One: the person can own a car. Two: use public transport like buses or trains, or planes where available. Let's take the first example, where the customer purchases a car to travel from point A to point B. What does that involve? He has to pay the full cost of the car up front, whether in cash or through financing; he has to maintain the car — oil changes, tires, and everything else; and he has to adjust its performance to his needs. And, importantly, he either drives it himself or hires a private chauffeur. All of that has to be taken care of by that person. Another option: suppose you don't want to pay up front, or your need is minimal — just one particular day, week, or month — and you don't want the maintenance or the liabilities of owning a car. What can you do? You can go to Enterprise, Hertz, or any other rental company, get a car, and either drive it yourself or use a private driver. (In some countries you can't self-drive a rental car; the driver comes along with the car.) With a rental, you are committed to the rental period: if you hire a car for one day or one week, you pay the full amount for that day or week. There are still other options, though, like Uber or Lyft, where a car comes to you at point A and drops you at point B. Or you can use buses, trains, or planes.
None of these last options requires the vehicle to be exclusively yours for a period. If you rent a car, you are locked in to pay for the committed number of days: hire it for a day, you pay for that day; hire it for a month, you pay for that month. With Uber or Lyft, or a bus, train, or plane, you are sharing resources: the cars are already moving around, and because you have a need, you request that resource — a car with a driver — and it picks you up at A and drops you at B. You are not driving the car, not paying any upfront fee, not paying for maintenance, not thinking about tuning its performance. And the capacity is elastic: if you are traveling from A to B with your family and four bags of luggage, you can ask for a bigger car, like a van, and you don't have to worry about whether it is available. If instead you had purchased a five-seater car, you couldn't fit six or seven people in it — you are locked to the resources you own. That limitation doesn't exist with Uber, Lyft, buses, trains, or airplanes: you ask for resources when you need them, and they are available. There is a usability-versus-suitability trade-off between owning a car and requesting one on demand. Think of Uber, Lyft, buses, trains, and airplanes as cloud services: available to you at the point in time you need them, and you pay only for that point in time. With a bus, train, or airplane, you don't even have the service exclusively to yourself from point A to point B.
You are sharing the bus with other travelers — that is what sharing means. Let's consider another example: constructing a house. You need to design it first, understand how to construct it, hire multiple people, construct it, paint it, and then take care of the maintenance. For all that, you would need to maintain the software to design the house and all the tools required to construct and paint it. Think about it in that context: if you want to do everything exclusively yourself, you are owning all of those skill sets as well as the tools needed for construction and maintenance. If instead you ask a construction company to build the house, they have their own tools, their own people, and their own tie-ups with the firms that do the actual construction and painting. You don't own anything in that case; the provider of the construction service owns all the tooling. Take maintenance as an example: after five years you decide to repaint your house. If you do it yourself, you first need to learn how to paint, then buy all the required tools — and that purchase is for one particular use. You could keep the tools forever, but there is no guarantee they will still be useful four or five years later. That is the whole concept of the cloud in the computing world. It is a relatively new concept, because over the last 20 or 30 years there was no public cloud as such for enterprises to use.
Nowadays, enterprises have the option to use cloud services without setting up their own data centers or signing contracts with data center providers. That is the concept of cloud: ultimately, sharing resources, having them available all the time, provisioning them rapidly, and, most importantly, elasticity. We will talk about the benefits of the cloud in detail, but that is what the cloud is. So, why cloud? Cloud solves the pain areas of enterprises. Think about the pain of setting up a data center. Enterprises are expanding fast; they need tools and a data center to store data, process it, and make it available to their consumers. For that, they either have to set up a data center themselves or tie up with a data center provider. Owning your own data center is a rigid and lengthy process — like owning a car. It requires a predefined set of hardware requirements, yet businesses today are not rigid the way they were 30 or 40 years ago: they continuously change with the times, and their resource requirements are rarely stable. The second problem is inefficiency. Owning a data center means maintaining and enhancing your IT processes and keeping your resources up to date — all on your own. You need your own staff for hardware maintenance, operating system expertise, networking, cabling — all of it. And you have to keep up with technology enhancements as servers get phased out.
Because hardware typically reaches end of life after about five years, you have to purchase new hardware and recycle the old units with the vendor. These are rigid, heavyweight processes, and everything is up front: the data center premises, the server purchases, the cabling, the rack and stack — all paid up front and managed on your own. The typical pain areas sound like these statements from CIO organizations: "My IT processes don't cater for both on-premises and cloud-based infrastructure." "Technology is moving fast and we are not able to keep up." And keep in mind that keeping up with technology is not only about hardware; it is about software as well, because the hardware you purchase has to run software, and software carries license costs. Suppose a company purchases software for CRM use and after two years realizes it is not optimal for their use cases and they need a different product. The earlier license fees are lost, and they have to procure a license from another vendor, install it, and keep upgrading it. All of that adds to the pain of maintaining something rigid and bulky. It all boils down to: "I'm not sure I really want to own and manage the resources in all my data centers." Think of core businesses as an example: if a company's business is trading stocks in the share market, it doesn't want to take on all these burdens and then limit its sales to the available capacity.
Available capacity means the number of machines and resources they have. And if they have an idea they want to launch, they don't want to wait four or five months for software and hardware procurement before it is available to use. So what are the solutions? The benefits a cloud provider gives them are, first, flexibility: users can scale services to fit their needs, customize applications, and access their cloud services from anywhere on the internet. That is the ubiquitous access we mentioned. If an enterprise wants to launch a service into the market as an offering, it should be able to do so without being constrained by IT infrastructure — that is the flexibility it gets. Second, efficiency: enterprise users can get applications to market quickly, because the cloud takes the core infrastructure procurement process off your hands; you just click, and the resources are available for you to go to market. Third, value: cloud services give enterprises a competitive advantage by providing the most innovative technology available — the latest software versions and the latest hardware — without you having to worry about whether they are up to date. These are the top benefits of the cloud. So we started with what the cloud is, and we have now covered the definition and some of its benefits.
We will get into more detail on those benefits and on cloud infrastructure in subsequent lectures, but in a nutshell, that is the cloud. Thank you.

Hello, and welcome to this lecture. This lecture is part of the Google Cloud Platform certification training series for cloud architects, cloud designers, cloud developers, and system operations. Are you excited? We will talk about the platform overview, and I think you should be excited to understand Google Cloud Platform through it. So let's go ahead and get started. This is the first of two lectures on the platform overview. In this lecture, we will talk about Google data center and POP locations, the network backbone, regions and zones, the services offered, and GCP interactions. Why talk about this? Because Google Cloud Platform is a public cloud offering through which you can provision all your infrastructure resources using Google's tools. Before doing that, you should understand where Google's data centers are, how the geography is divided into regions and zones, and what the network backbone looks like. So let's go ahead and understand some of that in these two lectures. Let's start with Google data centers. Google data centers are somewhat different from other data centers in the world right now, and here are some of the highlights that set them apart. The first is renewable energy: Google has signed long-term contracts with renewable energy companies to power its data centers from renewable sources, and it has the stated objective of eventually moving its data center operations entirely onto renewable energy. The second is the efficiency of their data centers.
Google has considerable experience building data centers for its existing applications — YouTube, Google Search, Gmail, Google Drive — so it has deep experience in data center operations. Using their machine learning experience, they learned that raising the internal temperature to around 70 degrees Fahrenheit causes no problems: hardware performance does not drop, and the life of the hardware remains good. And when the data center runs at 70 Fahrenheit or more, you no longer need heavy air conditioning inside it; outside air can be used to cool the hardware, and that is what they use. They also build custom servers — they don't just procure servers from the market. They are not hardware manufacturers, but they purchase CPUs, RAM, and whatever else is required to build a server, and because their research tells them exactly what kind of servers their data centers need, they build their own custom servers, their own racks and the way they organize the machines, and even their own network infrastructure. That is how they extract efficiency from both the servers and the network. Next, data security — one of the most prominent aspects enterprises look for. When data is stored, where is it stored, and what about encryption — if someone gets access to that data, can they read or decrypt it? Encryption of data at rest and data in transit is critically important for any enterprise, and that is what Google provides. There is physical data center security as well: there are some locations or data centers
whose physical locations people don't even know; Google sometimes chooses not to disclose a data center's location to anyone. There are a number of data centers across the world, and this is the map as of today, January or February 2019. The blue locations are the ones that currently exist, and this footprint is continuously being enhanced — Google keeps adding data center locations across the globe, with additional data centers planned for Zurich, Osaka, and Jakarta in 2019. Google has divided the whole geography into multiple regions — 18 regions at this point — and the services are accessible from 25 countries through 100-plus points of presence; we'll come back to that. But this is the overall map of their data centers. Now let's talk about points of presence. A point of presence (POP) is a location at which you can connect to Google's fiber-optic network. On this map, the blue links are Google's own network — you can think of it as the network used for their existing services like YouTube and Google Search. At the same time, Google has also made shared investments in submarine cables, so some cables are partially owned while the blue ones are fully owned. Looking at this network, you get a sense of the scale of their fiber-optic footprint: say services are installed in one particular location, such as the US.
If those services are accessed from Asia or Europe, the traffic travels over Google's own fiber-optic network to the nearest point of presence. The connector dots on the map are the POP locations through which any customer can connect to Google Cloud Platform. Some of those POP locations also support CDN — this is the part of the network where caching of customer data happens. The CDN concept is this: if you have images or other static content that is normally fetched from your back-end service on each request, and that content doesn't change frequently, you can have it cached close to your customers instead. The map shows the locations with CDN support. So if your servers are in the US but a user in India or elsewhere in Asia accesses them, the request does not go all the way to the back end every time to fetch, say, a profile photo — it is served from the location nearest to the user. That is how a CDN works. Google has CDN support at around 80-plus locations across the globe, and through partners such as Equinix and other companies that provide CDN locations, the combined footprint spans some 500 to 600-plus sites. Next, regions and zones. As I said, Google has divided the complete geography into multiple regions. A region is an independent geographic area that consists of zones. The main purpose of a region is to host your application near your users.
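The caching idea described above can be sketched in a few lines of Python. This is a toy edge cache to illustrate the concept only — the origin function, paths, and TTL value are made up for illustration and have nothing to do with Google's actual CDN implementation:

```python
import time

def make_edge_cache(fetch_from_origin, ttl_seconds=60):
    """Toy CDN edge cache: serve static content from memory while it is
    fresh, and fall back to the origin only on a miss or expiry."""
    cache = {}  # path -> (content, fetched_at)

    def get(path):
        entry = cache.get(path)
        if entry is not None:
            content, fetched_at = entry
            if time.time() - fetched_at < ttl_seconds:
                return content, "HIT"   # served from the edge, no origin trip
        content = fetch_from_origin(path)  # slow path: go to the back end
        cache[path] = (content, time.time())
        return content, "MISS"

    return get

# Hypothetical origin: pretend every fetch is an expensive back-end call.
calls = []
def origin(path):
    calls.append(path)
    return f"<bytes of {path}>"

get = make_edge_cache(origin, ttl_seconds=60)
print(get("/profile.jpg"))  # first request must go to the origin (MISS)
print(get("/profile.jpg"))  # repeat request is served from the cache (HIT)
print(len(calls))           # the origin was contacted only once
```

The point of the sketch is the second request: the user is served without another round trip to the distant back end, which is exactly the latency win the POP/CDN locations provide.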
The purpose is latency and availability. Let's talk about that. A region is an independent unit of the world's geography, and if you host services in a region, they should be redundant enough that if one location goes down you can switch to another — while your traffic stays within that region, that geographical area. Zones are independent physical locations; you can roughly map them to data centers. Each region has two or more zones, and that is where services are hosted. If connectivity to zone A goes out, zone B is still available to serve your customers — that is the point of zones. So zones are independent physical locations within a region, and the region is a boundary within which your services are load-balanced and high availability is maintained. Here is the list of regions and zones as of January 2019: asia-east1, asia-east2, and so on are regions, each with zones a, b, c; likewise, us-central1 has multiple physical locations a, b, c, and f. When we drill down into actual Google Cloud Platform services and resources — a virtual machine, a database instance, an IP address, all the network and compute components — each resource is either global (accessible and available across all regions), regional (tied to one particular region), or zonal (like an actual virtual machine, which sits inside one physical location — reachable from anywhere, but present in one particular zone).
As examples, a virtual machine or a VM disk is a zonal resource. A static IP address is specific to a region — a regional resource, not a global one. The network, on the other hand — a complete virtual private cloud network in GCP spans multiple regions, so it is a global resource. Disk images are global resources, because an image is served as a service and you can access any disk image from anywhere. Snapshots you store are also global resources. Firewalls define rules around particular networks and locations. So every resource is identified as either a global, regional, or zonal resource. Now, GCP services at a high level: they range from infrastructure as a service (IaaS) to software as a service (SaaS). What do those mean? Infrastructure as a service is the traditional data center model: you have servers, machines, CPUs, network interfaces, and everything else, and you manage the software and platform on top. A virtual machine — essentially a computer in the cloud — is infrastructure as a service. A disk you provision to store your backups is an infrastructure resource, and so is a virtual machine with, say, 32 vCPUs and 128 GB of RAM. You provision those resources and you control their usage — the disk and everything else. As you move from the left of this spectrum to the right, your operational overhead gets reduced.
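Before moving on, the global/regional/zonal classification discussed a moment ago can be captured in a small sketch. The scope table below is a simplification based only on the examples in this lecture (real GCP has more resource types and some nuances, such as global versus regional static IPs for load balancing), and the zone-naming helper reflects the `us-central1-a` pattern from the region list:

```python
# Scope of a few common GCP resource types, as described in this lecture.
# Simplified for illustration; not an exhaustive or authoritative GCP table.
RESOURCE_SCOPE = {
    "image": "global",
    "snapshot": "global",
    "vpc_network": "global",
    "static_ip": "regional",
    "vm_instance": "zonal",
    "persistent_disk": "zonal",
}

def region_of_zone(zone):
    """A zone name like 'us-central1-a' is its region plus a letter suffix."""
    return zone.rsplit("-", 1)[0]

print(RESOURCE_SCOPE["vm_instance"])    # -> zonal
print(region_of_zone("us-central1-a"))  # -> us-central1
print(region_of_zone("asia-east1-b"))   # -> asia-east1
```

The useful habit this encodes: whenever you pick a resource type, ask which scope it lives at, because that determines where it survives a zone or region outage.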
With platform as a service, you get a ready-made platform that you provision and simply deploy your applications onto. You don't manage the actual hardware; you manage the instance or the cluster that runs on top of it. Google takes care of the hardware underneath the cluster, while you manage the cluster's performance and provision its resources. Moving further toward software as a service, you move into an increasingly serverless environment: you don't care about clusters at all, or about provisioning cluster resources for your applications to run. You just push your code or application to the platform and the platform runs it. You are not worried about the cluster size, how many virtual machines are in use, or how many CPUs — you simply use the resources and handle application-level concerns like access permissions. So from left to right, you move from doing everything yourself toward a no-operations model. On the left you manage many resources, which is where IT operations comes in. System operations (SysOps) sits between infrastructure and platform as a service: IT ops applies mostly when you own your own data center, while SysOps means the resources are already available and you are just connecting them and managing operations around them. From there onward, up to no-ops, is the public cloud environment. With DevOps, you utilize an existing cluster, managing and provisioning resources as needed. And no-ops means something like App Engine: you just push code, the application starts running, and you barely have to think about it.
You don't even know which resources are running inside it; you are simply billed based on traffic, or on the consumption of the underlying hardware resources by your services. So, in a nutshell, we are looking at different kinds of resources. If you look at a typical IT organization, what does it use? Compute resources — typically a virtual machine or a physical server. That compute resource does the manipulation and calculation, accesses a database, and serves your customers. The second resource is storage. Compute and storage are the two resources the business fundamentally needs: data, as and when the business wants it, processed and served to the customer, with the serving handled by compute services. But to make either the data or the compute resource available for a user, you need a connection, and you need control over that connection. That is where Google's networking services come into play, with multiple options to choose from. On top of these three core services, you need identity and access management, so that resources are accessible to the people who should have access and to no one else. Then there are the big data services — another strength of Google Cloud Platform: they are very innovative in developing data solutions and big data work, and they are really good at machine learning; we will see more about how you can use it. These are the core services which enterprises would otherwise run in their own data centers, installing the hardware and the software themselves.
Besides that, there are management tools available for anyone to use — the Stackdriver offerings, such as monitoring of IT and application resources, logging, error reporting, and trace — as well as developer tools. With all that infrastructure available, how will you use the APIs inside your applications? That is where the developer tools come into play, helping any enterprise build services or use Google services inside their applications. Google has also added API management offerings. Google Cloud Endpoints was the only API management tool they had earlier — we'll talk about what API management is and how it benefits customers. Then, a couple of years ago, Google acquired a company called Apigee, which was very prominent in API management; using Apigee you can do API management, monetization, analytics, and more. This isn't a marketing pitch, but we will look at what it is. Finally, if you are moving to the public cloud for the first time, or you keep some services in your own data center and want to connect to Google for elasticity, there are data transfer services — the Cloud Storage Transfer Service and the BigQuery Data Transfer Service. I had not seen these two years back, but they are available now for customers to use. That's it for this lecture. We will get into more detail on resource hierarchies, projects, quotas, infrastructure services, the different types of accounts, and pricing in the next lecture. Thank you.

Providing feedback. Feedback is very important for courses on udemy.com or any other platform.
It is how the instructor adjusts the course content based on student responses, and it is guidance for future students as well. More importantly, it is precise information for every stakeholder on Udemy — Udemy also uses feedback to enable or disable a course on udemy.com if it is a free course like this one. For this Google Cloud Platform concepts course, the idea is to give you a high-level understanding of the compute services, database services, and storage services, and of how you can do networking on Google Cloud Platform to protect your resources. Looking back at the syllabus: it starts with about an hour of introduction, then basic cloud services — background on compute, database, storage, and networking services — then how to connect to Google Cloud Platform through the console, the shell, and the SDK, including how to install the SDK. Then we get into the compute service: a compute service overview, load balancers, and a demo of how to connect to Linux and Windows machines.
Next is your storage, what are different storage services which are available for you to use it and demo on cloud SQL and cloud spanner, cloud networking how you can connect virtual private network which is like public data center or global data center how you can connect it and one example on bastion host and then how you can do a monitoring, what are development tools available what are different big data solutions on Google cloud platform and AI solutions on Google cloud platform so this is like it will give you high level concept around Google cloud platform and it will give you basic understanding around the cloud as well so it is six and half hours of content that you have available so while providing the feedback you should look at what is the expectation about this particular course so when you look at or provide the feedback you should know what is that it will be covered and if you look at I already mentioned that what you will learn overall Google cloud concept prerequisite you need to be like IT guy and everyone who wants to understand Google cloud platform concepts and this is not a demo major demo cases we have gone through some of the demos definitely how you can access on launch virtual machine and like that but this is not a kind of detail course for your certification this is just a concept high level concept so do provide rating accordingly there are other things which I got a feedback from some of the student some students said that it is very fast course and some students said that it is very slow course so think about it right so we are addressing a global student and some of the Eastern European or Latin American students who is not up to like 100% with the English language understanding they will need to have a slow kind of you know the speed for the course and whenever we design our courses we design it in such a way that it is understandable for all the levels of English and that is our objective so whenever you are providing a rating 
please consider all those parameter and provide your rating thank you very much Google platform overview part 2 in this particular lecture we will look at resource and its hierarchies projects quota and limits infrastructure services GCP accounts there are different type of accounts what are those and pricing so in nutshell what is resource? resource is any component as an example virtual machine is a resource disk attached to that particular virtual machine is a resource or the network component in which the virtual machine is hosted is a resource your firewalls are a resource your database is your resource right all of those are resources in GCP and those resources are organized in hierarchy manner and we will look at that what is you know the hierarchy resource hierarchy typically if you look at any organization the company has got multiple departments and individual department will need will have some products like product 1 and product 2 here but it could be anything right whether it is a unit of that team or whether it is unit of the department or directly department requires you know the IT resources so it could be anything you can mix and match any hierarchy you can create it this is a part of so organization and folders are a part of G Suite we will talk about that in subsequent lectures but G Suite maintains organization and folders and project and onwards you can control that in Google Cloud platform so you already have organization built here in G Suite and you do it it is not mandatory though if you want to build it organization you can do it there but you can have your individual accounts for a different project different departments and they are maintaining their own resources but there are some restrictions and we will talk about that how you can share the network and like that ok but typically your resources are resources are allocated to a project and ultimately using project you can provision the resources that is the container for your cloud 
resources. Now, resource hierarchies: you want to give access to different teams or different applications, and that is built out of Identity and Access Management (IAM). At the same time, you want billing, which rolls up from bottom to top: you can define whether billing happens at the project level, or have one billing account that covers, say, four or five different departments together. So billing rolls up from the bottom, while identity and access management flows from top to bottom. If a particular person has access to a project, that person has access to all the resources inside the project, though there are fine-grained access controls you can enable for that person. The way it works is: IAM is top to bottom, and billing is bottom to top, from wherever you define it. There are policies in IAM, and we will talk about them later, but a policy is a set of rules applied to members over resources. Resources inherit policies from their parents, so all these resources inherit properties from the parent, which is the project. A resource's effective policy is the union of the parent's policy and the resource's own policy: if we are talking about one person having access to App Engine, the effective permissions are the union of what the project grants plus what roles have been given to that person on the resource. A less restrictive parent policy overrides a more restrictive resource policy: as an example, if a person has a create-resources role at the project level, a more restrictive policy on the App Engine resource will not take that away. That's how the IAM hierarchy works. For billing, the organization is the full container of all the resources inside it, and billing typically happens at the project level, not at the individual product level.
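The policy-union behaviour just described can be put in code. This is a toy model of the inheritance rule, not the real Cloud IAM API; all of the names and permission strings are invented for the sketch:

```python
# Toy model of GCP-style IAM policy inheritance (illustrative only).
class Node:
    """A node in the resource hierarchy: org, folder, project, or resource."""
    def __init__(self, name, parent=None, permissions=None):
        self.name = name
        self.parent = parent
        self.permissions = set(permissions or [])

    def effective_permissions(self):
        # Effective policy = UNION of this node's policy and every ancestor's.
        # A child cannot take away a permission granted higher up.
        perms = set(self.permissions)
        if self.parent is not None:
            perms |= self.parent.effective_permissions()
        return perms

org = Node("example-org", permissions={"resourcemanager.projects.list"})
project = Node("demo-project", parent=org,
               permissions={"compute.instances.create"})
app_engine = Node("appengine-app", parent=project)  # grants nothing itself

# The leaf still carries everything granted above it:
print(sorted(app_engine.effective_permissions()))
# ['compute.instances.create', 'resourcemanager.projects.list']
```

The point of the sketch is the one-way flow: restricting `app_engine` itself cannot remove what `demo-project` or `example-org` already granted.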
You can also have one billing account created to aggregate the payments for multiple projects. On the IAM role hierarchy: as we said, it is top to bottom; there is nothing much to add, but just to elaborate on what you can do. An Organization Admin has full access over all the resources and can view all the projects; a Folder Admin can create and manage folders; a Folder Viewer can only view the folders; a Project Creator can create a project; and then there are resource-level roles for individual resources. We also saw that some resources are zonal, regional, or global: typically, instances and disks are zonal, external IP addresses are regional, and images and networks are global. Billing for all of those happens at the project level, and so does reporting. GCP Resource Manager manages all the resources depicted in this hierarchy: it is a centrally managed service that tracks all your projects, manages IAM across your organization, manages the organization and organization policies, creates and manages Cloud IAM policies, controls Cloud Console and IAM access, and manages Cloud folders.

Next, service accounts. You have two types of accounts in Google Cloud Platform, and we'll talk about service accounts in detail in a later section, but at a high level, a service account is the account used for application-to-application access. As an example, say a virtual machine wants to access GCP resources such as Cloud Storage, to read images from Cloud Storage or write images to it. We will have a service account created that is used by the application inside the virtual machine, and that account will have access to Cloud Storage. This is fine-grained, and you can define exactly what that particular service account can and cannot do.
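As a toy sketch of the service-account idea (this is not the real google-auth flow; the class, the token scheme, and the scope names are all invented for the example):

```python
import secrets

class ServiceAccount:
    """Invented for illustration: an identity that mints short-lived tokens
    scoped to what the account, not any human user, is allowed to do."""
    def __init__(self, name, scopes):
        self.name = name
        self.scopes = set(scopes)
        self._tokens = {}

    def mint_token(self):
        token = secrets.token_hex(8)
        self._tokens[token] = self.scopes
        return token

    def authorize(self, token, scope):
        # The resource checks the token's scopes, not which employee is
        # logged in, so staff joining or leaving does not affect the app.
        return scope in self._tokens.get(token, set())

vm_app = ServiceAccount("vm-app@demo-project",
                        scopes={"storage.read", "storage.write"})
tok = vm_app.mint_token()
print(vm_app.authorize(tok, "storage.read"))    # True
print(vm_app.authorize(tok, "compute.delete"))  # False
```

The design point the sketch mirrors: access rides on the application's identity and token, which is exactly why employee turnover does not force re-permissioning.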
Definitely, this isolates user-level permissions from application access: if an employee leaves the organization, or moves from one department to another within the organization, you don't want to go and switch all those permissions over to a new employee; that is additional work. Instead, if you use a service account to handle all application-to-application communication, you don't care whether an employee joins or leaves the organization. Service accounts are based on secure tokens, and in a nutshell there are three types: service accounts you create yourself, built-in service accounts for virtual machines and App Engine, and service accounts used by Google APIs internally. For the accounts used by the APIs you don't have any visibility; they are simply created in the platform for its use. We will talk about that in the demo section, where you open or enable API access and see how you get that access through a service account.

Next, the GCP project. As we already understand, a project is the container for all your resources, and the project takes care of billing for all the resources within it. In a nutshell, a project tracks resources and their quota usage, carries the billing for those resources, manages permissions and credentials, and lets you enable and disable APIs and services within the project. A project has three identity attributes: a project name, a project number, and a project ID. You can interface with a project using the Cloud Console or the Resource Manager API.

Now, cloud resources and quotas. You want some level of quota or limit on your cloud resources, because cloud resources are effectively infinite in nature, and you don't want surprises in your monthly cloud bill.
Say your revenue is $100 and you get a $120 cloud bill, because anyone could go and use as much as they want. You need to make sure your IT spend is controlled, and controlled not just by intention: someone, or something, should be monitoring your usage. That's where quotas and limits come in. Quotas effectively control the budget by limiting your resource utilization at different levels, and you can increase a quota: if the organization is big and you need more resources, you can request an increase. There are also some limits enforced in GCP that you cannot increase, and that is the difference between a quota and a limit. Think of limits as platform limitations, while a quota is a constraint put forward as a recommendation by Google; you can still request a quota increase, and Google will look at the use case and raise the quota for you.

On project quotas: resources are subject to project quotas, such as how many resources you can create per project and how quickly you can make API requests in a project (rate limits). Some quotas apply per region or per zone as well, and as I said, you can increase them. As an example, you might have 5 networks per project and a certain number of CPUs per region, but these numbers may have changed by the time you see this particular slide deck or training.

Next, GCP infrastructure management services. If you look at any cloud offering, your own on-premises data center, or data center services procured from any other provider, you will need to organize your resources, you will need the network, and you will need some way of interfacing with those resources. That's where the infrastructure management services come into play. You have Resource Manager, which organizes your cloud resources into organizations, folders, and projects.
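Before moving on, the quota check just described can be sketched in a few lines. The quota numbers and metric names here are illustrative, not real GCP defaults:

```python
# Toy project-quota checker: illustrative numbers, not real GCP defaults.
QUOTAS = {
    "networks_per_project": 5,
    "cpus_per_region": 24,
}

usage = {
    "networks_per_project": 4,
    "cpus_per_region": 20,
}

def can_provision(metric, amount):
    """Return True if provisioning `amount` more stays within the quota."""
    return usage[metric] + amount <= QUOTAS[metric]

print(can_provision("networks_per_project", 1))  # True: 4 + 1 <= 5
print(can_provision("cpus_per_region", 8))       # False: 20 + 8 > 24
```

A limit, by contrast, would be a hard-coded ceiling you could not raise; a quota is this kind of ceiling plus a request process for increasing it.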
Ultimately, under the projects sit the individual resources. You have IAM, which gives you fine-grained access control across all the resources, and you can also set up auditing and logging on IAM so that you can see who is accessing which services. Then there are the services provided under the Stackdriver umbrella for monitoring, logging, tracing, debugging, and error reporting. For CI/CD, or simply to make deployments easy, you have Deployment Manager: you set up a template and it creates your infrastructure for you. Then you have scaling services such as autoscaling and Pub/Sub. Autoscaling is what actually makes services elastic: if the CPU usage of a virtual machine goes above a certain threshold, say 80% or 60%, you can configure a rule to spin up more of those virtual machines, and your service SLA is not hampered. These are the infrastructure services available on Google Cloud Platform, and any other cloud platform provides equivalents.

Interactions: how do you interact with Google Cloud Platform? There are typically three ways. One is the UI: you just go to the GCP Console, log in, and start provisioning resources or viewing them. The second option is the command-line interface, or CLI: you can install it on your computer, or go to Google Cloud Platform and spin up Cloud Shell, and interface with the cloud from there. The third option is the REST endpoints and client libraries: every activity you do with the CLI or the console, you can do with the REST APIs, using a simple curl command or Postman to call them, and you are done; you can provision and monitor resources using the REST API as well. That's all on these different aspects of Google Cloud Platform. Please let me know if you have any questions; otherwise, we will move on.
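As a tiny illustration of the third interaction style, here is how such a REST request is shaped. The URL follows the public Compute Engine v1 API, but the project and zone names are made up, nothing is actually sent, and a real call would need an OAuth bearer token:

```python
# Build (but do not send) a REST request to list Compute Engine instances.
BASE = "https://compute.googleapis.com/compute/v1"

def list_instances_url(project, zone):
    # Resource path: projects/{project}/zones/{zone}/instances
    return f"{BASE}/projects/{project}/zones/{zone}/instances"

url = list_instances_url("demo-project", "us-central1-a")
print(url)
# Roughly equivalent CLI:  gcloud compute instances list --zones us-central1-a
# Roughly equivalent curl: curl -H "Authorization: Bearer $TOKEN" "<url above>"
```

The same operation is one click in the console, one `gcloud` command in the CLI, and one authenticated GET against this URL over REST.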
The next lecture covers the different certifications available on Google Cloud Platform; let's understand what they are so that we know what to focus on in this particular training. Okay, thank you.

Hello, welcome to this lecture. In this lecture we will get into more detail on the certifications offered for Google Cloud Platform. Typically there are these three certifications: Cloud Engineer, Cloud Developer, and Cloud Architect. Let's go to the website and understand what they are. Here I am just typing "Google Cloud certification", and the first link that pops up has the details of the certifications. Of these, Cloud Architect and Cloud Data Engineer are what you can think of as the existing, or older, certifications: they originally had multiple certifications and normalized them into these two. On top of that, they have created the new certifications, Cloud Engineer and Cloud Developer. Cloud Developer is not yet launched, while Cloud Engineer is launched; it was in beta mode earlier, and I think Cloud Developer is now waiting for its production version. So in a nutshell, Cloud Architect and Cloud Data Engineer are the existing certifications, and Cloud Engineer and Cloud Developer are the newly created ones. There are also their collaboration certifications, which they have had for a long time; you are probably aware of those. This is where you can go and check the certification details: if you look at the details you will understand what the certification is, and you can see the outline, which is the detailed syllabus for the certification. We will get into the details of that, but you can also check yourself against sample questions: click on the readiness check, which launches an exam of somewhere around 20 questions. These are free questions given by Google Cloud
itself. If you are confident that you have a good understanding, you can just check there whether you are ready for the exam or not, or you can practice the questions from this training and assess yourself. Going back to the slide deck: Cloud Engineer and Cloud Developer are the new exams, and Cloud Architect is the old one. I am not including the Data Engineer certification here because it is very specific to the data-related services, like big data and AI, and that is not the focus of this particular course, which is about understanding the core cloud services. All of these certifications require understanding the purpose of each and every service, and how to provision, use, and monitor them. Cloud Engineer and Cloud Developer are the stakeholders who manage applications on the cloud. Cloud Architect is more about overseeing: understanding the core essence of each service, planning, how security is maintained and managed, how to set those services up, and how to comply with regulatory requirements. Cloud Developer is more about deploying applications and taking advantage of the cloud's different APIs and services: where to deploy, which of the multiple deployment options to choose, and the best way to deploy an application onto the cloud.

For Cloud Engineer, the expectations are: set up and configure the cloud, manage it day to day, manage security on cloud applications, monitor cloud resources and applications, and understand the purpose of each and every service. Traditionally, this is where the database administrator plus system administrator roles come into play; now it is called cloud system operations, or SysOps. Somehow Google does not have a system operations exam as such; they have Cloud Engineer, and the expectation is that the engineer has a good understanding of the Cloud Console and can provision, monitor, and scale the different services: hands-on, doing these activities day in and day out, deploying solutions, taking advantage of the cloud environment, and accessing its APIs, with the focus more on the applications deployed. Cloud Architect, by contrast, is more about planning and designing solutions: which service to choose out of all of them, how to manage security and compliance, how to analyze and optimize technical and business processes, how to manage and implement the cloud architecture, and how to ensure the solution's operational reliability. What that means is: how do you make sure that if the load goes high, autoscaling takes care of it, and how do you plan to provision the service near wherever your subscriber base, your consumers, actually are. Our focus in this particular training course is the Cloud Engineer: understanding the individual services, managing them, and setting up security around them, including monitoring. We are not focusing on Developer, or even Architect, in this particular journey.

So how do we learn this? Looking at the Cloud Engineer certification, it targets the person doing day-in, day-out activities on Google Cloud Platform. To understand the whole platform, we'll start with setup: how you set up the initial account on Google Cloud. I will take you through projects, IAM roles, and APIs (how you enable and disable APIs), and how you monitor your services using Stackdriver logging, with some understanding of those pieces; we need an account to do all of this. Then, since all cloud resources are paid for through a billing account, we'll see how you configure billing accounts, budgets, alerts, and bill export (how you export the bill for analytics), and then get a detailed understanding of the SDK, the CLIs, and the console.

The second aspect, which is section 3 of the curriculum, is planning and configuring Google Cloud services. This is where we get into each and every service, understand it, understand its benefits, the pluses and minuses of each, so we can decide which one to use, and then deploy the services. When we say deploy the services, there are three groups of services that any enterprise will use: first the compute services, second the data services, and third the network, and that's where you put your security in place: you create VPCs, firewall rules, load balancers, subnets, all of that. Once you configure everything and deploy the applications, you need to make sure the applications are up and running whenever the customer or the business requires, and that's where you configure autoscaling, traffic management, disk and network monitoring, logging, errors and alerts; you put audit measures in place, and you make sure access permissions are set properly. That is our focus in this particular training.

We will get into the details of the curriculum, the syllabus for the exam, but here are some of the thoughts. First we'll do setup and IAM management, where we configure the basic setup. Then compute services: we'll get into an understanding of the compute services available for us to use, a virtual machine in the cloud, a computer in the cloud if that's what we need, and we'll get into planning, configuring, deploying, and managing the compute services like App Engine or even GKE. We'll then understand the data services, and after that the networking services. What I have tried to do is group things together, so that syllabus topics spread across multiple sections are understood within their own section. For each category, compute, data, and network, we will get into planning, configuring, deploying, and ultimately managing the services, and all of these have different options to use from the cloud. We'll cover the basic concepts of each service, launch it using the console as well as the CLI as a demo, and then go through some examples so that you can try it out on your own. That's it for the introduction; we will get into the exam outline in the next chapter. Thank you.

Hello, welcome to this section, the next section in the series on the Google Cloud Platform certifications, and this one is Cloud Engineer. Are you excited? In the last section we saw the Cloud Console, the CLIs, and the SDK; we went through projects, how you can attach a billing account, and how you can enable and disable APIs; and we saw a little bit about service accounts and the different types of accounts. By now you might have gone through Google Cloud Console account creation: create your own account, get the $300 credit, and browse through the different solutions. Now let's go ahead and get into an understanding of the individual services. In this section, in the majority of cases, we will not get into the Cloud Console apart from the pricing calculator; this is more theory, where we learn more and more concepts about the different components and services offered by Google Cloud Platform. The name of this section is "Planning and Configuring a Cloud Solution", and in it we will cover the pricing calculator, and planning and configuring compute resources, data storage options, and network resources. So what do we learn in this section? This section is all about knowing what each particular service is.
As an example, take the compute services: what are they, what is a compute service in nature, why do we need it, what different options are available, and why do those different options even exist today? We will go over the trade-offs, why you would use one service over another, and then I will walk you through the console and the different services, right up to the click that launches them; we will not get into the details of individual services yet. For the database services, we will understand what a database or storage service is, why we even use it, and where we use it in the real software world; then we will look at why there are different options, like Cloud SQL, Hadoop, and so on, why we use one service over another, and I will take you through the console for all of those as well. Then we will talk about the networking services: what the components of a network are, why we use them, why networks even exist in today's world, and their importance in a cloud environment where we are going to create, in effect, a global data center inside your private network. How can you provision your resources so that others cannot access them? How do you protect your resources? If you have created a database service, or you are storing some sensitive information, you need to make sure that no one has access to it other than your organization, and even within your organization you need to protect those resources, giving access only to certain people and not to everyone. So this section is about knowing, in nature, what each of these services is about. Let's go ahead and get into it. The first point here is the pricing calculator: using the pricing calculator, you can see how you can get discounts and understand what your bill is going to be over a year. We will get into the details of the pricing calculator in the next chapter. Thanks.

Let's continue with compute service basics, as part of planning and configuring a cloud solution. We saw
there are three main categories of core services: one is the compute platform, the second is networking, and the third is database and storage. In this particular lecture we are going to cover the basics of the compute platform. What do we mean by the compute platform, and what are its core services? In the compute category we have the virtual machine, the container engine, App Engine, and Cloud Functions. If you look at why these particular services exist today, they are broadly categorized as follows. A plain, simple machine, a computer in a data center or in the cloud: that is your virtual machine; you can install anything and everything you want on it, and that is the virtual machine in the cloud. Then there is the container platform: when you have a microservices architecture or containerized applications and you want to deploy your application onto the cloud without installing or managing the infrastructure and hardware yourself, that is managed by the platform itself, and that's where App Engine and the container engine come into play. Then there is serverless architecture, where you want to pay only while your code is running; we have multiple options here from different cloud vendors: Lambda in AWS, Functions in Azure, and Cloud Functions in GCP, and these are the purely serverless environments. If you look at the way you manage the compute services, whether you are managing the hardware, the complete infrastructure, or it's a zero-operations platform, this pyramid shows it: at the bottom you are managing the operating system and all the other infrastructure components, and as you go up you are not really managing anything; you are just deploying your code. Starting with the virtual machine: there you choose the operating system the way you want it, or you can bring your own image and install it on the virtual machine.
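The trade-offs among the four options just listed can be summarized as a toy decision helper. The rules are a simplification of the pyramid described above, not official guidance:

```python
# Toy decision helper for picking a GCP compute option.
# A simplification of the trade-offs in the text, not official guidance.
def pick_compute(needs_os_control, containerized, event_driven):
    if needs_os_control:
        return "Compute Engine"        # full VM: install anything you want
    if event_driven:
        return "Cloud Functions"       # pay only while the code runs
    if containerized:
        return "Kubernetes Engine"     # you manage nodes, not hardware
    return "App Engine"                # just push code; platform runs it

print(pick_compute(True, False, False))   # Compute Engine
print(pick_compute(False, True, False))   # Kubernetes Engine
print(pick_compute(False, False, True))   # Cloud Functions
print(pick_compute(False, False, False))  # App Engine
```

Reading top to bottom, the function mirrors the pyramid: the more control you need over the machine, the closer to the bottom (Compute Engine) you land; the less you want to manage, the closer to the top (Cloud Functions).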
With a container, you just deploy your container; you manage the number of nodes and how traffic splitting is done. On the app platform, you do not manage the cluster; the cluster is managed by Google itself. You just push your code and it is executed; load balancing, traffic splitting, and autoscaling are baked into the platform, and you do not have any control on top of that. Cloud Functions is purely pushing code: your code is not executing continuously, so you are charged only for the time that particular code executes. You are executing one particular program block in the cloud, and you are charged for that time.

Before we dive into compute service basics, let's understand what we mean by a compute service, with an example from an earlier era. In 1999 I worked on a project developing a website, plus what you can think of as an application, that did airline reservations for its customers. The company was based in Mumbai, and originally they would take bookings for airline reservations over a phone call. In 1999, the dot-com era as you can well imagine, they developed a website that would take the customer's airline reservation request; the request was stored in a database, and the associates would open the request, check what was required, from which day and from point A to point B, then call the customer and verify what they needed. They had a Galileo interface, which you can think of as a centralized server somewhere, a centralized reservation system for airlines at that time. They would look at all the possible combinations and options and, over the call, satisfy the customer's need. The record was stored on that server, but at the time a physical copy had to be issued as the ticket, so they would courier the ticket to the customer, while the information was recorded in the system, and that's how accounting happened. That was the start of their automation. Galileo had a host interface over a TCP connection, which the associates were using at the time.

In the following year, 2000, it was felt that data in the data center was not safe: security was not mature at the time and there was a chance that data could be stolen from the data center. There were small data center providers in many locations around the world, and we were using one of the data centers in Mumbai. So it was decided to have the database server locally, in the headquarters itself, instead of sitting in the data center. What we did was lay a leased line from the data center to the headquarters. When you configure a leased line, you need to make sure the firewalls are configured properly and the routes are configured so that traffic can reach the machine sitting inside the headquarters, and that machine should not accept any requests other than from the data center machine and from the machines used by the backup staff. Besides this implementation, another thing that was done was programming the system so that, instead of an associate entering the request, every customer request would automatically query the Galileo database for current availability, which was shown to the customer in near real time. It could take somewhere around 10 to 15 seconds at the time, as far as I remember; I forget the actual numbers, but the customer could see what options they had, and we said, wow, this is an amazing thing for the customer to have. At that time Expedia, and I think Travelocity, were prominent in this business in the US, but we were able to make that query work. Then we had another server: Sabre came up with an XML interface, which you can think of as the next milestone achieved, and we were even able to wire a payment interface into it.

So this was the way it was established: you needed something in the data center to take your requests as a web server; you had a database running in your headquarters; you had a leased line. Sometimes, if there was a cable problem, your services were simply down; it was not fault tolerant. Internet speed was not there, and the website took a huge amount of time to load images, because we had designed nice graphics on the customer home page and in some of the interactions, so we had to cut down the image loading, since all the images were stored on that machine. Every time we had to change the configuration we had to rush to the data center; if we had to upgrade hardware, we had to install it properly and then replace the hardware box in the data center. There was a whole lot of trouble, but ultimately it was a really good achievement for us that everything just worked fine. Hardware procurement, establishing those connections because we feared data would be stolen from the data center, too much network configuration: the firewall lived inside each operating system and the routers, and there was a VPN connection between Sabre and our web server for the queries. Looking at the current ease of use, this was all painful to establish just to start a business on the web.

Coming back to compute services: a compute service means this is a computer acting as a web server, and that's where you have
a purchase server installed this is computer again this is computer taking the request this is again a computer and we have installed servers in 2001 but again this is as a physical it is computer we have a database installed on it on the linux machine so all of these are like computing service for us as of today right and this was sitting inside the data center for us as our servers and these are third party servers but this was our computer in the data center right the same way you have your computer in database computer or the database servers in sitting inside the data center that is what the compute services so you have flexibility to configure routers firewall rules you can install any software and you can make that that available for you to work right and these are some examples or the servers which you buy it from the market and install it in the data center if you want this is like you know big data appliance this contains operating system as well as the big data software solutions from Oracle but there are variety hundreds and thousands of varieties of way you can procure those hardware and install that in database coming back to the cloud right so you can easily map right so computer engineers the box the physical box and they are running virtual machines on to that physical box so you may have in one particular physical box like this one multiple virtual machines running inside it and that is your computer engine and that virtual machine is because in traditional way you have only one operating system running in this particular box but now you have the the hardware is virtualized and you have multiple virtual machines based on the requirement the way you configure it running in this particular box right and that is the virtual machine so whenever you think about the requirement you want to deploy some custom code you want to make use of operating system the computing engine is the starting point to go right so when do you use it probably some of those 
use cases you will not understand unless we hit to the next services but let's go ahead and get it started right there are some applications currently those are containerized I will talk about that in some time but the application you can't containerized or you have OS level dependency or OS level changes or you want to host your existing application without rewriting it for containers or app engine or you want to have full control of your hardware in looking at the network looking at the compute resources right and that's where you use compute service or compute engine so some of the features of the compute engine is it is a virtual machine with network attached and ultra high performance local storage options you can have premedible virtual machines we will talk about that but premedible virtual machines are something which you go ahead and just build from the available virtual machines and if that machine is available you can get up to 80% discount on that particular virtual machine it all depends on you know how much machines are available for any customer to use it and based on that you get discount but you do not have full control on those premedible virtual machines and we will talk about that later in the next section but the only thing to remember is you have premedible option which is not there in the other services to use it as a computing service and you can get up to 80% discount but there is no guarantee that you will continue to use premedible virtual machines as a 100% because they can take your virtual machine any time customizable load balancing and atto scaling across homogeneous virtual machines you can have direct access to GPUs that you can use it to accelerate any or specific type of workload you have support for most of the flavors of Linux and windows operating system that's the compute engine that's the compute engine kubernetes engine so you know it was not long back actually last 4-5 years the microservices plus containers all those are 
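A quick back-of-the-envelope sketch of that preemptible discount. The hourly rate below is an assumed placeholder, not a published price; only the "up to 80% off" figure comes from the discussion above:

```python
# Rough cost comparison: standard vs. preemptible VM pricing.
# The on-demand rate and the flat 80% discount below are illustrative
# assumptions for this sketch, not actual published GCP prices.

def monthly_cost(hourly_rate: float, hours: float, discount: float = 0.0) -> float:
    """Cost of running a VM for some hours, with an optional fractional discount."""
    return hourly_rate * hours * (1.0 - discount)

on_demand_rate = 0.10   # assumed $/hour for a standard VM
hours = 730             # roughly one month

standard = monthly_cost(on_demand_rate, hours)
preemptible = monthly_cost(on_demand_rate, hours, discount=0.80)

print(f"standard:    ${standard:.2f}")     # standard:    $73.00
print(f"preemptible: ${preemptible:.2f}")  # preemptible: $14.60
```

The catch, as noted above, is that a preemptible machine can be reclaimed at any time, so the workload must tolerate interruption to actually capture that saving.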
Kubernetes Engine. It was not long back, really just the last four or five years, that microservices plus containers came to dominate the technology space, because of the ease with which you can deploy and manage your code. There is a specific requirement for CI/CD, and that's where Kubernetes popped up as a container orchestration tool. Kubernetes was originally developed by Google, and it has since been adopted by Azure as well as AWS, which provide the service too. Kubernetes Engine in GCP is a managed cluster: you can deploy your microservices or containers into Kubernetes without managing the hardware. So when do you use Kubernetes Engine? When you want a secure and scalable way to manage containers in production and you don't have a dependency on a specific operating system. On the other hand, your applications must be containerized; that is the basic requirement when you are planning to use Kubernetes Engine. We'll talk about what Kubernetes Engine is in the detail section, but it is a container orchestration tool for you to deploy your containers and manage your cluster inside the cloud environment. Some of the features of Kubernetes Engine: seamless compute resource provisioning across the globe (you can have a cluster actually serving your customers globally), auto scaling and highly available clusters, and container management and orchestration. It is a logical infrastructure: it provides a platform so that you can deploy just your container and not worry about the underlying hardware. It is an easy mechanism for building loosely coupled distributed systems, and one of the advantages of containers is that the same application runs on your laptop, on premises, or in the cloud; that last one is not a benefit of Kubernetes Engine specifically, it is a benefit of containers.

App Engine. This is again another compute option. There are multiple versions of App Engine, but the motivation behind App Engine is that you, as a developer, just write code and push it to App Engine; the remaining things are taken care of for you. You can think of it as a zero-operations platform. It has got two different types of App Engine environments, but the main motivation is the same: it's a zero-operations platform where you can build reliable and scalable serving apps or components without doing it all yourself. "Without doing it" means without provisioning infrastructure or a platform first: you just push or deploy the code, and the code starts running. It minimizes operational overhead, and it trades infrastructure control for developer velocity, so you can't control the infrastructure most of the time; that is the motivation. You focus on writing the code and never want to touch the server. In the context of a cloud engineer, you are managing someone else's code, and that is what you need to understand: App Engine is the platform service from Google Cloud Platform wherein you manage applications written by someone else. In the demo in the next section we will actually get into details: I will give you the application code itself (you do not need to understand what is inside the code), along with some thoughts on how to modify the code and how to push that particular application, but as a cloud engineer you do not need to worry about how to write the app. Some of the features of App Engine: a stack with smart defaults and deep customizability, meaning the defaults are created for you, but you can get into customization and tune it the way you want to run your app. It supports almost all the popular languages, Java, Python, PHP, Go, Ruby, Node.js, and so on, or you can bring your own runtime environment in flexible mode. There is an integrated SDK, managed services, and a local environment for development and deployment, and this is where it really shines.
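To make the "just write code and push it" idea concrete, here is a minimal sketch of the kind of app the App Engine Python standard environment can serve. App Engine serves Python apps through the WSGI interface; the greeting text and the way we exercise the app are made up for illustration:

```python
# Minimal WSGI application: the kind of code the App Engine Python
# standard environment serves. The greeting is made up for illustration;
# a real project would add an app.yaml alongside this file.

def app(environ, start_response):
    """WSGI entry point; App Engine (or any WSGI server) calls this."""
    body = b"Hello from App Engine!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app the way a WSGI server would, without a real server:
status_holder = []
def _start_response(status, headers):
    status_holder.append(status)

response = b"".join(app({}, _start_response))
print(status_holder[0], response.decode())  # 200 OK Hello from App Engine!
```

Locally you could serve this with the stdlib's `wsgiref.simple_server` and then push it with `gcloud app deploy`, matching the "test on your dev machine before deploying" workflow described here.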
You can run your app: you get the SDK and the local development environment, you can execute it, and you can see it working on your dev machine without even deploying it to App Engine. App versioning with zero-downtime upgrades is another benefit: you can create multiple versions and split the traffic between them, and when you gain confidence in the new version you can move all the traffic onto it. There is automatic high availability and built-in auto scaling, so you do not have to worry about how to manage availability and scaling.

Cloud Functions. This is a 100% serverless environment, and it is event-driven. When do you use it? Typically a Cloud Function gets executed based on your event triggers. As an example of an event trigger: you store something on the server and you want to process it. Say you push a video file onto the server and you want to convert that video file into multiple formats; you can trigger that with a Cloud Function. It is event-driven, you don't have to manage any environment, you do not control the CPUs it is using; everything is managed for you. The main benefits: you do not pay to run an app while no one is using it, there are absolutely no operations on top of it, and it is simple to implement, with multiple endpoints and event-based business logic execution. You can think of it this way: it is not a continuous execution environment like your virtual machine, your Kubernetes Engine, or your App Engine. This is event-based: when something happens, that's when you want to run the code, and you want to pay only for that particular point in time; you don't want to pay for even 10 minutes or 1 hour of a machine. That's the power of Cloud Functions: you pay in 100-millisecond slots. As an example, if your code executes in 80 milliseconds, you will be charged for 100 milliseconds; if your code executes in 170 milliseconds, you will be charged for 200 milliseconds, and so on. Only while your code is running are you charged for the compute resources; otherwise you are not charged. Some of the features: it's a true serverless environment with a serverless economy, which means you are charged only when the code is running and not charged when the code is not running at all. You can connect and extend cloud services, it is mobile-ready (you just add code and it executes when it gets an event), and it has open-source support. You can use it for IoT, you can develop APIs and microservices, and you can do data processing and ETL jobs based on events.

So, the different compute services. We saw the virtual machine: you use it when you have an OS dependency, want full control of the hardware, or need OS-level changes. When do you use the container engine? When your application is fully containerized; if it is not containerized, you can't use the container engine. You don't have any OS dependency, and it is scalable under production load, which means you get full control of how you scale, including auto scaling; but the main thing is that your application must be containerized, whereas a virtual machine you can use for any purpose: it is just a machine, a computer in the cloud. App Engine shines when you want to be a developer in the cloud environment without any knowledge of the platform and without managing any resources such as hardware or the operating system; it is scalable, and you get traffic splitting and all those features out of the box. Cloud Functions are purely event-based and purely serverless. Now I just want to get onto the dashboard, the Google console; I just logged in.
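The 100 ms billing increments described above are easy to sketch; this reproduces the lecture's 80 ms and 170 ms examples:

```python
import math

BILLING_INCREMENT_MS = 100  # Cloud Functions bills in 100 ms slots

def billable_ms(execution_ms: float) -> int:
    """Round an execution time up to the next 100 ms billing slot."""
    return math.ceil(execution_ms / BILLING_INCREMENT_MS) * BILLING_INCREMENT_MS

# The two examples from the lecture:
print(billable_ms(80))   # 100
print(billable_ms(170))  # 200
```

The rounding only applies per invocation; when nothing triggers the function, no slots are billed at all, which is the "serverless economy" point above.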
I just wanted to show you where the compute services are. If you go down here, these are the compute services: you can go to App Engine, Compute Engine, Kubernetes Engine, or Cloud Functions. Let me show you some parts of the virtual machine flow. The first task here, because I don't have any Compute Engine instance running right now, is to just go ahead and create one. You can choose the region and zone where you want to host your machine; you can choose CPUs and memory, or customize them the way you want; you can even ask for extended memory (we will see this later); if you have a graphics workload, you can attach GPUs; and you can choose an operating system, and so on. Back in Compute Engine, you can create an instance template and use that template to launch multiple instances. Under disks, you can create a disk, either magnetic disks or SSDs, whichever you want, and you can manage snapshots. We will get into the details of all of these in the next section, but this is where you go to launch Compute Engine instances, and this is where you launch App Engine. For Kubernetes, it says you need a cluster: Kubernetes is a container orchestration cluster, and you need to specify how many nodes you want to run, the minimum number of nodes versus the maximum, in case you start consuming more and more resources. You can enable the API here; it was not enabled for me, so I'm just enabling it. So that is Compute Engine. We will get into the details of each and every compute service shortly, but this is how you can browse. That's it for this particular chapter. Let's go ahead and understand the different data services that are available for us to use. Thank you.
Now let's go ahead and understand the data and storage services on Google Cloud Platform. In this particular chapter we are going to look at a high-level overview of the data and storage services on Google Cloud Platform. Data and storage is one of the three core service areas offered by any cloud platform; whether it is providing MySQL or MongoDB, Spark, or Cassandra, all of those are database and storage services, you can think of it that way. Data and storage services are broadly divided into three different categories. The first is data and storage proper: plain, simple SQL and NoSQL databases, and object storage, such as images. The second is big data services, which contains the Hadoop, Spark, and Apache Beam implementations inside Google Cloud Platform. The third area is artificial intelligence, wherein you have machine learning, speech, natural language APIs, and translation. There is essentially no real management required for the speech, natural language, or translation APIs; we will get a glance at machine learning in this section. So these are the different services offered out of Google Cloud Platform as managed services from Google. Cloud SQL: this is the MySQL and PostgreSQL implementation, a managed service out of GCP. Cloud Spanner: Google designed their own relational database management system, and we will talk about that later in this section as an overview, but it is a relational database, globally available. Then we have Bigtable, which you can think of as the HBase implementation in Google Cloud Platform. Cloud Datastore: you can map this to a MongoDB kind of implementation; you can store document data in it. Cloud Storage is object storage: you can store images and static content in Cloud Storage, or even backups and data for archival purposes. In big data services we have BigQuery, which is the EDW, the enterprise data warehouse, in the cloud. We have Dataflow, which you can treat as an ETL engine for exchanging data: it is the Apache Beam implementation of ETL, and it supports streaming as well as batch integration. Dataproc: this is Spark and Hadoop clusters in the cloud. Datalab: this is a visualization tool which you can use on BigQuery or other data backends. Cloud Pub/Sub is an integration service, a typical messaging service. It is treated as a big data service because it is highly scalable, with no management of hardware or backends: you just create a publisher and a subscriber and start using it. It is grouped under big data, but you can treat it as the integration framework, the platform service required for you to do any integration work as part of messaging. So why do we need managed data and storage services? Because once you have a virtual machine in the cloud, you can just install MySQL, PostgreSQL, or any database software you want on Google Cloud Platform; so why do you need the managed data and storage services? The critical reasons, you can think of one or two: you don't want to manage the operating system and its backbone, you don't want to manage the storage attached to the operating system; everything should be managed by the cloud, and that is why these are offered as managed services. On top of that, you don't have to manage the software installation, and you don't have to manage the security around those databases. These services are there for anyone to use. What does that mean for a cloud engineer? If you look at the Cloud Engineer certification, you do not need to understand the bits and pieces of each and every component of those services: how to write SQL queries, how to push files into Hadoop, or how to run Spark jobs. You need to understand what those components are.
You will manage those components on the platform: you are not writing any queries, you are not writing any data files, you are not manipulating any data; you are just managing the infrastructure components inside the cloud. So let's get into it one by one. The first one is Cloud SQL. This is fully managed MySQL and PostgreSQL, a relational database management system. If you have any requirement for MySQL as a database backend, for example for your website, you can just go to Cloud SQL and launch the service. Let me go to the console and show you a little bit: you go to Cloud SQL, create an instance, and you have two options, MySQL and PostgreSQL; you can just launch those instances and start using them. That's Cloud SQL: you don't have to manage the operating system behind it, everything is managed for you by Google. So why do we use it? For all our general database requirements: as an ERP, CRM, or e-commerce application backend, a geospatial data application backend, websites, blogs, CMS, BI applications, web frameworks, and structured-data OLTP workloads. Typically, if you already have something implemented in MySQL or PostgreSQL, you can just go ahead and move it over, with some constraints; we'll talk about those constraints in the detail section. Cloud Spanner is the next database. It is again an RDBMS, but Google built this particular database from the ground up, and it is only available on Google Cloud Platform. The biggest advantage of Cloud Spanner is that it is horizontally scalable, strongly consistent, and globally available, and each of those keywords has got a meaning; we'll talk about them in its individual section. But you can think of Cloud Spanner as a relational database management system, a ground-up implementation that is highly available at global scale. Why do we use it? For RDBMS requirements such as financial, global supply chain, or ETL data, where you want to scale horizontally with consistency in mind, and where you have mission-critical applications spread across the globe and want a database which can take queries from all regions, all locations, in global scope. How do you launch Spanner? You can go here, into Spanner; this will take some time to enable the API. If you go to create an instance, it will ask some typical questions: the name, the instance ID, and whether you want this particular instance to be regional or multi-regional, which will drive the ultimate availability of your nodes. If it is regional, you specify which region you want it in; if it is multi-regional, you specify which part of the geography you want it in. Cloud Spanner is charged based on nodes: as you increase the number of nodes, your cost increases accordingly. The storage cost depends on GBs per month, and the node cost is hourly; there is pricing guidance for this, and you will actually find such guidance everywhere in Google Cloud Platform wherever you have a doubt. That is how you can go and create it; I am not going to create the instance just yet, but that is Cloud Spanner. The next one is Bigtable. Bigtable you can think of as a columnar DB, the HBase implementation on Google Cloud Platform. Why do we use it? It has got native time-series support, so you can use it for IoT, finance, or ad tech. As a NoSQL database you can have personalization, recommendation, and monitoring jobs configured to use Bigtable, as well as geospatial datasets and graphs. Low-latency read/write access is the very important aspect: when you need low-latency read/write access with high-throughput analytics, your first choice is Bigtable.
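Going back to Spanner's pricing model for a moment: it bills per node-hour plus per GB-month of storage, as described above. A rough estimator, where both rates are made-up placeholders rather than real prices:

```python
# Spanner-style cost sketch: nodes are billed hourly, storage per
# GB-month. Both rates below are illustrative assumptions, not
# published GCP prices; check the console's pricing guidance.
HOURS_PER_MONTH = 730  # approximate

def spanner_monthly_cost(nodes: int, storage_gb: float,
                         node_rate_per_hour: float = 0.90,
                         storage_rate_per_gb_month: float = 0.30) -> float:
    """Monthly cost = node-hours + storage GB-months (illustrative rates)."""
    node_cost = nodes * node_rate_per_hour * HOURS_PER_MONTH
    storage_cost = storage_gb * storage_rate_per_gb_month
    return node_cost + storage_cost

# 3 nodes holding 500 GB of data, using the assumed rates above:
print(round(spanner_monthly_cost(3, 500), 2))  # 2121.0
```

The point is simply that node count dominates the bill, which is why you size the node count to your throughput needs rather than your data size.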
That's because Bigtable is a columnar DB, and the way you pull data out of it is very fast compared to SQL. Let's browse to where Bigtable is. Here it is; you can just go ahead and create an instance. It will ask you for a name and which instance type you want: production means it will create at least 3 nodes. For storage it is SSD versus HDD, and there is a cost estimation given all the time whenever you configure these services: for example, 4,000 GB on SSD at $680 per month, or, if you want low-cost storage, 4,000 GB on HDD at around $104. You can mix and match all those numbers; you don't have to launch it, you can just browse the options you have, and unless and until you launch it, you will not be charged. The next one is Cloud Storage. Cloud Storage is scalable, durable, multi-regional object storage. What do we mean by object storage? Block or binary storage is typically what you install your operating system onto; object storage is when you put one particular file, one particular image, or a video file into the cloud, and the way you manage it is that you either store that image there or retrieve the whole image back. That's what object storage is. It is highly scalable and durable, and we will talk later about exactly what we mean by scalability and durability, what multi-regional means, and what the different storage classes are. But keep in mind that Cloud Storage is object storage: that's where you store a file for archival, or images, or whatever, and you will be charged based on the type of storage class you are using plus the size of that storage, on a monthly basis. Importantly, Cloud Storage cannot be used as an operating system backend: if you have, say, a virtual machine and you want to install the operating system onto Cloud Storage, you can't do that. It is not block or binary storage; it is purely object storage. Why do we use it? For storage for your custom data analytics or pipelines; archive, backup, and disaster recovery; and storing and streaming multimedia data: images, pictures, videos, objects, blobs, or unstructured data. You can go here, and this is under Storage: you just create a bucket, and a bucket you can think of as a folder in which you store your objects. You give the bucket a name and specify the storage class, whether it is Multi-Regional, Regional, Nearline, or Coldline, and based on that, the charging mechanism changes and the way you retrieve the data changes. Here are some example numbers: the storage cost is 2.6 cents per GB per month, and the retrieval cost is free for some multi-regional classes; if you change the class, you will have a retrieval cost as well. If you create a bucket, you are not charged for the bucket itself (a bucket is just a container, a folder; you can think of it as a directory structure), but once you store data in it, you get charged per GB. That's Cloud Storage. The next one is BigQuery, a managed data warehouse in the cloud; that's how you can treat it. With BigQuery you typically just take the data in whatever format you have, store it, map it, and start writing your SQL queries. You can think of it as infinitely scalable: you don't have to worry about provisioning or managing instances in the backend, or the processors that do the analytics and run the queries. Everything is managed for you; it is a fully managed service from Google Cloud Platform. From the cloud engineer side you are not managing any instances; you are just monitoring the usage and the charges: how much you are getting charged, what the queries are, and, if some queries are failing, how you troubleshoot them with BigQuery.
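Sticking with the Cloud Storage pricing just mentioned: the multi-regional rate below matches the 2.6 cents/GB/month quoted from the console, while the other class rates are made-up placeholders for illustration:

```python
# Monthly Cloud Storage cost sketch: price per GB varies by class.
# Only the multi-regional rate matches the figure quoted above; the
# other rates are illustrative placeholders, not published prices.
RATES_PER_GB_MONTH = {
    "multi-regional": 0.026,
    "regional": 0.020,   # assumed
    "nearline": 0.010,   # assumed
    "coldline": 0.007,   # assumed
}

def storage_monthly_cost(size_gb: float, storage_class: str) -> float:
    return size_gb * RATES_PER_GB_MONTH[storage_class]

print(round(storage_monthly_cost(500, "multi-regional"), 2))  # 13.0
```

The colder classes trade a lower per-GB price for retrieval costs, which is why archival data goes to Nearline or Coldline and frequently served content stays multi-regional.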
So with BigQuery, all of that you manage, and nothing else of the platform: not an instance, not even hardware. That is the data warehouse in the cloud: you can do analytics and dashboarding, it's a no-operations platform, as I said, a sink for all your BI reports, and it's the EDW, the enterprise data warehouse, in the cloud. If you go here, you will see that BigQuery is not listed under the storage options as a normal database and storage service; it is under big data. Welcome to the Google Cloud console: you can actually run queries here. I don't have any dataset to run queries against right now, but you can just load a dataset and run queries. If you are querying an existing dataset that is in the terabytes, please check and make sure you are not scanning too much or firing too many queries, because this can get very expensive; when we look at the details we'll talk about that, but make sure you fire queries against a limited dataset so you are not charged too much. That's BigQuery, a data warehouse in the cloud. The next one is Datastore. You can think of this as a document database: if you have structured data in the form of documents, or JSON, you can store it here. You can map the real-world examples of MongoDB to Datastore, because that is another document DB; the use cases map between the two. Typically it is used for hierarchical data and as a durable key-value data store. As an example, you can think of storing a user profile, a product catalog, or game state; you can store document data as well as semi-structured application data. Typically this is used for small, focused use cases, like a user profile: if there are users of your web application, you store multiple attributes of each user's data, retrieve them once the user logs in, and you want a very low-latency data pull. That's where you use Datastore, or MongoDB in other implementations.
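To see why the "terabytes" caution above matters: BigQuery's on-demand model bills by the bytes each query scans. Assuming an illustrative rate of $5 per TB scanned (a placeholder for the arithmetic; check current pricing), repeated queries add up fast:

```python
ASSUMED_RATE_PER_TB = 5.00  # illustrative on-demand rate per TB scanned

def query_cost(tb_scanned: float, queries_per_day: int, days: int = 30) -> float:
    """Monthly cost of repeatedly scanning a table of a given size."""
    return tb_scanned * ASSUMED_RATE_PER_TB * queries_per_day * days

# A dashboard refreshing hourly against a full 2 TB table, for a month:
print(round(query_cost(2, queries_per_day=24), 2))  # 7200.0
```

That is the motivation for limiting the columns and rows a query touches: scanning less data directly scales the bill down.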
With Datastore, then, you just create a document, and that particular document has all the attributes you require to present the user profile. Either you do that, or you store, say, game state: game state is a set of multiple attributes of that particular game, having different structures in it, in JSON format. If you don't know the JSON format, you don't have to worry about it too much right now, but you can think of Datastore as document storage. In the console you can see Cloud Datastore as a storage option, and an entity you can think of as mapping to a single document or record; let me give you an example here. The database is getting created, and you will not be charged for just the database; unless and until you store data inside it and start performing queries on top of it, there is no charge. I think this will take some time, so we'll continue the discussion in the detail section. Persistent Disk. A persistent disk is block storage, usually attached to virtual machines; a virtual machine has got two options, local storage and persistent disk. You can use it for taking snapshots as data backups (it is a rapid, durable backup) and for running your virtual machine, and you can have a persistent disk shared across multiple virtual machines. It is the block storage for Google Compute Engine and Google Kubernetes Engine, but you need to make sure you understand the context: a persistent disk is network-attached storage for virtual machines, so it is like NAS in the traditional world, which you can attach to any virtual machine and later detach from it. Memorystore. This is the caching solution we have in Google Cloud Platform for fast retrieval. It is mainly a Redis-based implementation; you can go from 30 GB up to 300 GB of data in memory, with multi-Gbps network throughput and sub-millisecond data access latency.
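An entity like the user profile described earlier is easy to picture as a small JSON document. This sketch uses a plain dict and the stdlib `json` module; the kind, key, and field names are made up for illustration, not the actual Datastore client API:

```python
import json

# A hypothetical Datastore-style entity: a user profile stored as a
# small document with a kind ("User") and a key, plus its attributes.
user_entity = {
    "kind": "User",
    "key": "user-42",
    "properties": {
        "display_name": "Asha",
        "email": "asha@example.com",
        "last_login": "2018-03-01T10:00:00Z",
        "preferences": {"theme": "dark", "newsletter": True},
    },
}

doc = json.dumps(user_entity)      # what gets stored
restored = json.loads(doc)         # what a login-time lookup returns
print(restored["properties"]["display_name"])  # Asha
```

The whole profile comes back in one low-latency lookup keyed by the user, which is exactly the "retrieve it when the user logs in" pattern above.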
Cloud Dataproc. This is Spark or Hadoop clusters inside the cloud. Why use this implementation versus your own Spark and Hadoop on virtual machines? Again, the benefits are the same: if you install a Hadoop or Spark cluster on virtual machines in Google Cloud Platform, or on premises, you have to maintain the hardware, the operating system, the clusters themselves, their sizing, and the node capacity. In Cloud Dataproc you don't have to manage any of that; everything is managed by Google Cloud Platform and available for you to use. You just launch the cluster and use it, and if you don't need it anymore, you can simply delete the cluster. So what is available out of Dataproc? Spark, which you can use for analytics, ETL, and batch, and Hadoop for batch processing. More importantly, if you don't know Hadoop and Spark: Hadoop you can think of as an enterprise data warehouse; it's part of big data, and we have Hadoop implementations in multiple enterprises as enterprise data warehouses you can query; in a nutshell, it is a distributed file system. Spark, on the other hand, is more toward streaming, near-real-time processing, an analytics engine. You can launch either Hadoop or Spark out of Dataproc. If I go here into big data, I can go to Dataproc and Clusters; first, enable the APIs. With the APIs enabled, you can click Create Cluster, and there you give the name, define whether it is global or regional in nature, specify the zones, and pick the cluster mode: single node (one master with the worker inside it), standard (one master plus worker nodes), or high availability, which is usually preferred for production loads. You also pick the machine types you want to use for those nodes, the storage, and the worker node machine type and storage. That is what you configure, and you can just click Create, and it will create the cluster for you. Moving on: Dataflow. Dataflow is a stream-or-batch data ETL transformation tool. This is the Apache Beam implementation in the cloud, and it carries the same benefits as we had for Dataproc. Use cases: fraud detection in financial services; IoT analytics in manufacturing, healthcare, and logistics; personalized user experience in gaming; and clickstream, point-of-sale, or segmentation analysis in retail. We will talk about the architecture in more detail in the next section, but Dataflow is an ETL engine: you can either use the streaming facility of the ETL engine to ingest the data, or you can do the batch mode of data transformation. You can go here and launch the service under big data; let me try Dataflow. You can use different API interfaces to execute the jobs; how you do that we will cover in the detail section, but this is where you go to try Dataflow. Cloud Datalab. Datalab is a tool for data exploration, analysis, visualization, and machine learning. What that means is you can take data from a backend such as BigQuery and visualize the data in graphic format. The typical use case for Datalab is data visualization; it is integrated with the GCP databases as a backend, so you don't have to add an extra plugin in the middle; it supports the IPython/Jupyter notebook format; and it is used in machine learning. For Datalab, though, we do not have a service you can use directly. Let me go here: if you look at the Google console, you do not have any service called Datalab. You go here: Storage, Stackdriver... you don't have anything here for Datalab. The way you access Datalab is to go to the Google Cloud Platform Datalab page and click on Launch Cloud Datalab.
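Dataflow pipelines, described a little earlier, are chains of transforms over a collection of records. As a language-level sketch (plain Python, not the actual Apache Beam SDK), a tiny batch "pipeline" that filters and aggregates clickstream-style events might look like:

```python
# A miniature batch "pipeline" in plain Python, just to illustrate the
# transform-chain idea behind Dataflow/Apache Beam. This is NOT the
# Beam SDK; the event fields are made up for illustration.
events = [
    {"user": "u1", "action": "view",     "amount": 0},
    {"user": "u2", "action": "purchase", "amount": 30},
    {"user": "u1", "action": "purchase", "amount": 12},
]

def run_pipeline(records):
    purchases = filter(lambda e: e["action"] == "purchase", records)  # filter transform
    keyed = map(lambda e: (e["user"], e["amount"]), purchases)        # map transform
    totals = {}                                                       # group-and-sum transform
    for user, amount in keyed:
        totals[user] = totals.get(user, 0) + amount
    return totals

print(run_pipeline(events))  # {'u2': 30, 'u1': 12}
```

In real Beam the same filter/map/group stages run either over a bounded batch or an unbounded stream without changing the pipeline's shape, which is Dataflow's stream-or-batch selling point.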
data lab if you go here you will understand that data lab is not the straightforward service offering what you need to do is you need to create a data lab instance and once you create the data lab instance you will be able to browse and write your queries into data lab so that's the data lab it is visualization tool for for database on the GCP as a back end the next service is cloud pubsub pubsub is past memory caching that's what they say but it is it is an integration tool so you have any client coming in and talking to any other service it's an integration or messaging framework which can be used so that you isolate any failures in any other application the real time use cases or the real use cases for cloud pubsub is it's real time messaging it has got at least once delivery exactly once processing it decouples the system different systems and it's a serverless and highly scalable service typically pubsub if you start creating the publisher and messenger publisher and subscriber you don't need to worry about provisioning the hardware resources or software components for it this is a free managed service from google and you don't need to manage any infrastructure for it you just need to go ahead and use it so let's just go ahead and look at and this is one of the exam topic if you understand what are different use cases for different the database and storage service and when you will use what and this is typically a summary this is no more the cloud spanner so we have app engine memcache so app engine can directly have memcache and you can use that memcache for application level caching the use cases for that is you want to store the game state or user sessions in relation we have two different options one is cloud SQL and second one is the cloud spanner in cloud SQL you can launch mySQL as well as postgres SQL instances it is good for web frameworks and they use cases such as CMS or e-commerce cloud spanner this is fully grown or in-house grown by google and 
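The Pub/Sub flow just described can be sketched with the gcloud CLI. This is an illustrative sketch, not from the lecture: the topic and subscription names are made up, and you need an authenticated project with the Pub/Sub API enabled for it to run.

```shell
# create a topic and a pull subscription attached to it
gcloud pubsub topics create demo-topic
gcloud pubsub subscriptions create demo-sub --topic=demo-topic

# publish a message; Pub/Sub guarantees at-least-once delivery,
# so subscribers should process messages idempotently
gcloud pubsub topics publish demo-topic --message="order-created"

# pull and acknowledge the message from the subscription
gcloud pubsub subscriptions pull demo-sub --auto-ack --limit=1
```

Because the publisher only knows the topic and the consumer only knows the subscription, either side can fail or scale independently; that is the decoupling mentioned above.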
Cloud Spanner is a horizontally scalable, globally available database service. It is still an RDBMS, so you can use it for relational purposes that need a highly scalable, highly available service: user metadata, ad tech, fintech, martech, or any other use case where a highly scalable relational database is required. In NoSQL we have two services. One is Cloud Datastore: hierarchical data for mobile and web use cases; you can map it to MongoDB as we discussed earlier, and you can store user profiles or game state in Datastore. Cloud Bigtable, though, you can think of as the HBase implementation in Google Cloud: high read and write rates, typically ad tech, financial, or IoT workloads, and it is mainly a wide-column database. For object storage, where you want to keep static or binary data (static files that usually don't change, or images and videos), you use Cloud Storage. BigQuery is the EDW: a data warehouse in the cloud without management of the platform.

And here is the decision tree, which tells you when to use which service, because there is so much choice. Start with: is your data structured? If the data is not structured and you do not need a mobile SDK, you just use Cloud Storage to store your files, images, or videos. If the data requires a mobile SDK, that is where you use Firebase. If your data is structured, you have all the structured options: Cloud Spanner and Cloud SQL for relational databases, and Cloud Datastore for a non-relational but structured database that you use from a web framework or other real-time work. If you have a data-warehousing requirement, you use BigQuery; if you need a low-latency solution for warehousing or analytics, you use a wide-column database like Bigtable. There are multiple other options available too; they don't really matter for the Cloud Engineer exam, but you may want to go and look at why you would use each of those databases.

Out of all the topics in this section there is a specific call-out for Cloud Storage classes. We will look at these in detail in the Cloud Storage section, but in a nutshell there are four storage classes: multi-regional, regional, Nearline, and Coldline. The common scenario for multi-regional is content storage and delivery and business continuity, that is, the highest availability for frequently accessed data. The regional storage class stores data for analytics and compute within a region; this is important, and we will talk about why you would use regional versus multi-regional. Nearline and Coldline are used for archival, for infrequently accessed data. There are additional considerations (the storage cost and the access fees for each class), but we will cover those in more detail in the next section on Cloud Storage. That is the database and storage service overview. If you have any questions on database and storage services, please let me know; otherwise you can move to the next section. Thank you.

Planning and configuring a cloud solution: networking services. This is one of the three core services on Google Cloud Platform. Like the other core services, the computing platform and database and storage, the networking service is very important, and very important for this certification as well, because you manage all the cloud resources, database or compute, and you connect them with each other, or even protect those
resources using the networking services. You can create a VPC, a global virtual data center on the public cloud environment, and you get load balancers, firewalls, routers, subnetworks, a content delivery network, and so on as part of the networking service. It is broadly divided into three categories. First, the global load balancer; we will get into the different load balancers we can use in the next chapter. Second, VPC and subnets, where you create a virtual private network on the public cloud; we will also talk about firewalls, VPNs, and the interconnections between them. Then we have the supplementary services, DNS and CDN: DNS is the Domain Name Service and CDN is the content delivery network, and we will talk about what they are and why they matter for your business. In a nutshell, from Google we have these services (these icons, I would say) to look at: Cloud VPC (Virtual Private Cloud), Cloud Interconnect to connect your premises with Google Cloud, Cloud Load Balancer, Cloud CDN, and Cloud DNS.

So how are these services grouped together and used in an application? In the application tier you use the load balancer, and those load balancers support HTTP(S), TCP, SSL, and UDP traffic. You use the CDN for content caching, and DNS for your domain name registration, like google.com as a domain name. The second grouping is the virtual network, the VPC, which you can think of purely as your own data center, a global data center on the public cloud, so you can protect your own resources: whether it is a computer or virtual machine in the cloud or database instances in the cloud, you create them within the network so that no one can access them without your permission. Access control: you definitely put access-control rules in place to govern access to your network. And then hybrid cloud: if you have a

hybrid cloud environment, meaning your own data center plus GCP as an elastic scaling service for your internet-facing application, that is where you have hybrid connections. This is Interconnect, and it is how you connect your premises, your data center, with Google Cloud Platform. So this is purely the overview of the network services. As part of your job as a cloud engineer, you need to make sure you understand each and every component; this is very much emphasized for your role, because you are managing the network in the cloud. If you look at other certifications, like Developer, they are not that bothered about the network (they care about the load balancers, CDN, or DNS), but everything else is your duty to manage in the cloud, so it is important for passing the exam.

When you look at the network, as we saw already, Google has made a large investment in submarine cables. The green lines you see are shared fiber-optic cables, but the blue lines are Google's own network, and somewhere I heard that this network carries more than 40% of global internet traffic; look at YouTube alone and how much video traffic it carries around the globe. Those blue lines are Google's own backbone, and whenever you create your private network using Google's data centers, this backbone is available for you to use, and it is a fiber-optic network, so it is very fast. And what innovation has Google made over the past several years? In traditional TCP it was assumed that if there are packet losses, it is because of congestion in the network. With BBR, Google came up with its own idea for getting more bandwidth out of the network.
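As a side note, BBR was also released in the mainline Linux kernel (4.9 and later), so on a reasonably recent Linux VM of your own you can check for it and switch to it. A sketch, assuming root access and a kernel built with the tcp_bbr module:

```shell
# list the congestion-control algorithms this kernel offers
sysctl net.ipv4.tcp_available_congestion_control

# load the BBR module and make it the default algorithm (requires root)
sudo modprobe tcp_bbr
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr

# verify the change took effect
sysctl net.ipv4.tcp_congestion_control
```

This only changes the sender-side algorithm on that machine; Google's gains come from running it across their whole backbone.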
Because this whole network is Google's own, and Google controls what data is passed and how, it knows when there is no real network congestion, rather than just assuming it must throttle the bandwidth down so that the data transfers successfully. As per Google's analysis it can be up to 2,700 times faster than traditional loss-based congestion control, and that is something that differentiates Google from other cloud providers like AWS, Azure, or Alibaba. I am not going into the details of what BBR is; you can go and look it up, and I will try to make this slide available for you, but in a nutshell it stands for Bottleneck Bandwidth and Round-trip propagation time, and it is the new congestion-control algorithm Google developed so that it can transfer data very fast.

Now let's look at the individual services in the Google Cloud network that you need to learn, understand, and provision. In this chapter we will just go through high-level thoughts; we won't deep-dive into the individual services. So, at a high level: Cloud VPC. Cloud VPC is the managed networking functionality for Google Cloud Platform and its services. It is where you can create your own (I won't say public, it is actually private) network inside Google Cloud Platform. Google has data centers spread across the globe, and you can create one network that connects your resources deployed in Europe, Asia, and the US, and isolates those GCP components from everyone else. That is the Virtual Private Cloud on Google Cloud Platform. Using a VPC you can build a private global data-center-like environment without managing hardware or building it from scratch, and you can configure subnets, firewalls, routes, and peering. That is your traditional way of doing networking.
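That traditional configuration can be sketched with the gcloud CLI. All the names and address ranges below are placeholders of my own, not values from the lecture:

```shell
# custom-mode VPC: you define the subnets yourself
gcloud compute networks create demo-vpc --subnet-mode=custom

# a regional subnet inside that VPC
gcloud compute networks subnets create demo-subnet \
    --network=demo-vpc --region=us-west1 --range=10.10.0.0/24

# a firewall rule allowing SSH into instances on this network
gcloud compute firewall-rules create demo-allow-ssh \
    --network=demo-vpc --allow=tcp:22 --source-ranges=0.0.0.0/0
```

The same three objects (network, subnet, firewall rule) are what the console wizard creates for you behind the scenes.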
You can monitor using flow logs; this is very important, and we will get into the details of VPC in the next section, but VPC flow logs show you how traffic is being routed between your components and who is accessing them. The VPC is global, shareable, and expandable by design, so you don't have to manage any resources for it; everything is managed for you. And it is an SDN, so there is no bottleneck: a physical router carrying a considerable amount of traffic might fail or become a bottleneck, but in Google's network this is not physical equipment, it is a software-defined network, so it scales with your requirements and your traffic. That is the USP of creating a VPC here.

Let me get into the console and show you how you can create a network. You go to Networking, then VPC network (sorry, wrong account; let me switch and show you where to go), and this is where you create the network. You can see a default network is already created for you and available to your resources, with routes and subnetworks created, but you can go ahead and create your own network. I will talk later about what the default network is versus a custom network, but this is where you create your virtual private (I would say) data center.

Next, Cloud Interconnect. If you have your own data center, or you work for an organization that has one and wants to use some services from Google Cloud Platform, you need to connect your data

center or its services with Google Cloud Platform, and that is where Cloud Interconnect comes in: it lets you connect your data center to the cloud. You have different options for different use cases, but that is what Interconnect is: connecting your premises with Google Cloud Platform. The benefits and features of Cloud Interconnect: it is a low-latency, highly available service; it has dedicated and partner connection options; it supports RFC 1918 address spaces; and you get a direct pipe from your private data center to Google. The way to create a Cloud Interconnect, again, is to go to Networking, then Interconnect, and get started.

The next one is external peering, which is the peering you use to connect your network directly with Google's edge. External peering and Interconnect each have pluses and minuses; we will cover the details of why you would use one versus the other in the next section, because otherwise you will not grasp the core essence of the two. In external peering you also have direct versus partner peering options, and you can ride a VPN over the peering connection for lower egress fees. That is external peering.

The next option is Cloud VPN. You can map VPN to the traditional networking concept, like the VPN connectivity we used to set up with hardware tokens: two different facilities connected over a VPN. The advantage of Cloud VPN is that it comes with an SLA, and that SLA is maintained for the VPN connection. You can connect site-to-site, and it supports Cloud Router; we will talk about what Cloud Router is in the next section, so just keep that in mind for now.
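A hedged sketch of that site-to-site setup with gcloud, using the HA VPN variant: every name, IP address, ASN, and secret below is a placeholder, and a real setup needs the matching configuration on the on-premises device:

```shell
# HA VPN gateway and a Cloud Router inside an existing VPC
gcloud compute vpn-gateways create demo-vpn-gw \
    --network=demo-vpc --region=us-west1
gcloud compute routers create demo-router \
    --network=demo-vpc --region=us-west1 --asn=65001

# describe the on-premises peer gateway by its public IP
gcloud compute external-vpn-gateways create on-prem-gw \
    --interfaces=0=203.0.113.10

# build an IPsec tunnel between the two gateways
gcloud compute vpn-tunnels create demo-tunnel \
    --vpn-gateway=demo-vpn-gw --region=us-west1 \
    --peer-external-gateway=on-prem-gw \
    --peer-external-gateway-interface=0 --interface=0 \
    --router=demo-router --ike-version=2 \
    --shared-secret="replace-with-a-real-secret"
```

The Cloud Router is what lets the two sides exchange routes dynamically over BGP, which is why the tunnel references it.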
You can also secure your traffic: with direct peering you may not have an option to secure it, but with Cloud VPN you can encrypt the traffic while it is in transit. You can go to the console, create a new VPN connection, and provide the different parameters; that is how you create a VPN connection.

Now, this is the decision tree for choosing one networking option over another, purely in the context of connecting your on-premises environment with Google Cloud Platform. Let's walk through it. Do you need access to private compute resources on GCP? If you do not need private access, you just want access to the services and want to reduce egress fees, that is where you use peering and not the interconnect route: you do not need a private interconnection between your own network and GCP. Do you need a connection to G Suite? If you meet Google's peering requirements (there are specific requirements, such as being present in a Google PoP location), you can do direct peering; otherwise you go via partner or carrier peering. If you need to exchange network addresses between GCP and on-premises, you take the interconnect route. Do you encrypt sensitive information at the application level? If you do, you can choose direct or partner interconnect; otherwise you need Cloud VPN, because Cloud VPN provides the encryption and security mechanism that direct and partner interconnect do not. In other words, if application-level encryption is in place you don't need to worry about encryption in transit and can use those interconnect services; otherwise use Cloud VPN. So, can you meet one of Google's conditions? If yes, you can use direct or partner interconnect; if you do not qualify, then definitely you go with a partner. And do you need a 10 Gbps pipe or more? That is where you use dedicated interconnect; if you still qualify for Google's requirements but only need, say, a 5 Gbps pipe, you go with partner interconnect. These are the basic requirements in the decision tree that you map to what you want to do. And when I say interconnection, I mean connecting your premises, your data center, with Google Cloud Platform; this decision tree applies only then. If you have no local infrastructure of your own and everything is in Google, you do not need to worry about interconnection at all.

Cloud Load Balancer. This is more something the developer, or rather your application, uses: you have one machine facing outward toward your DNS, you have multiple services in the back end, and you want a load balancer configured in front. That is where Google's cloud load balancer (they say global load balancer) comes into play. It is again SDN-driven, so it is not a physical box that may fail under your traffic; it is software-driven, it scales according to your traffic, and you don't need to worry about the infrastructure behind it, because it is a managed service from Google Cloud Platform. It supports a single anycast IP (we will talk about what that means in the detail section), it supports almost all the protocols in the market, it is a scalable software-defined network as I said, and it does intelligent routing. In the next chapter I will talk about the different load-balancing options we have available and what we mean by intelligent routing.
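The global HTTP(S) load balancer has several moving parts that we cover in the next chapter, but the simplest variant, a regional network load balancer over a target pool, can be sketched like this; the instance names are placeholders for VMs you already have running:

```shell
# group two existing VMs into a target pool
gcloud compute target-pools create demo-pool --region=us-west1
gcloud compute target-pools add-instances demo-pool \
    --instances=web-1,web-2 --instances-zone=us-west1-a

# a forwarding rule gives the pool an external IP on port 80
gcloud compute forwarding-rules create demo-lb \
    --region=us-west1 --ports=80 --target-pool=demo-pool
```

Traffic hitting the forwarding rule's IP is spread across the pool; no appliance is provisioned, which is the SDN point made above.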
and I think we probably all know what the Domain Name Service means: it takes your name and maps it to your IP address, and there are many operators and providers in the market who do this. It is the same thing here: it gives you a name for your back-end IP, or vice versa. It is reliable, resilient, low-latency DNS serving from Google's worldwide network, and the important part is that it has a 100% uptime SLA. It supports private zones, it supports DNS forwarding, and it is cost-effective. We will talk about this in the next section; even though Google lists Cloud DNS as part of this section, we are going to cover it in detail in the next one.

What is Cloud CDN? It is a content delivery network. Think of your back-end services: a virtual machine or a Cloud Storage bucket holds images you want to deliver to the customer's handset, tablet, or web browser, but usage is very high, and you don't want to flood the storage bucket or the virtual machines in the back end. What you want is to cache that content near the customer, and that is where the Cloud CDN concept comes in: a low-latency, low-cost content delivery network using Google's global network. There are PoP (point of presence) locations that support CDN, and those are the caching locations we use. What can you do with CDN? You can do logging; it supports virtual machines and Cloud Storage as back ends; you can issue invalidation requests, with some restrictions we will talk about; and it supports anycast IP. That's it for the overview section on networking; if you have any questions on networking, just let me know. I am going to cover a little more about the load balancer, how it works and what its purpose is, in the next chapter within this same section. Thank you.

In this section let's understand the GCP interfaces that are available for you to use, meaning how you can interact with Google Cloud Platform and with which tools. The first one is the Google Cloud console: the web UI (or the mobile-app UI) through which you interact with Google Cloud Platform; we will get into the details in this section. The second is the command-line interface, where you use the gcloud, bq (BigQuery), or gsutil commands; within the CLI there are two utilities you can use, the Cloud SDK and Cloud Shell. The third is the API libraries: REST endpoints you can call from a programming interface. You can include them in Python, Java, or any other language, use Postman to hit the REST endpoint URL, or use the simple curl utility to talk to Google Cloud Platform. This is the API route, used primarily inside your programming language. So, dividing the whole thing into a list: first the console; the console also has a mobile app, which is another interface with its own pluses and minuses; then Cloud Shell; then the SDK command-line interface, which gets installed on your laptop or computer (we need to go through the SDK installation steps, and we will do that in the next section); and then the Cloud APIs. The Cloud APIs are not really a command-line interface; you can hit them from the command line with the curl utility, but that is not their main purpose. So let's go ahead and understand the cloud console in the next chapter. Thanks.

Cloud console. This is the interface you can use to work with Google Cloud Platform in UI form, and here I have just logged into Google Cloud Platform with my trial account. This is your home page, and you get a complete summary of what is available. You can see the project information (what a project is, and all of that, we are going to see anyway) and the resources information: currently I have three versions on App Engine, one Cloud Storage bucket, and one BigQuery dataset. You will see traces if you have configured them, and then there are quick-start and getting-started guides: you can use the API Explorer to play around with and test APIs, deploy pre-built solutions like Cloud Launcher, do monitoring, or take a VM quick start and just launch a VM or a cloud function. Here are the API responses for whichever APIs we have requested, and here are some dashboards; currently my App Engine is not running, so there are no requests or responses. The status page says all services are normal; you can switch to the status dashboard and see if there are any disruptions, and you can see there were some issues here around the 20th and the 22nd to 23rd. You can access and manage billing from the console, you can do error reporting, you can see any news out of Google Cloud Platform, and this points to the documentation. If you click here you can go to any service you want; as an example, you can go to Compute Engine resources and launch a VM here, or you can go to Billing, manage billing, and set up alerts and all of that configuration from here. We are going into the details of all this in
subsequent sections, but using the Google Cloud Platform console you can launch and decommission all those services; it is very user-friendly, and you access it through the UI. So that is the Google Cloud console. Here are some of its features: it is a web admin UI; you can do resource management from the UI; DevOps on the go (it has a mobile app, which we will see in a separate chapter); you can do secure administration from the console; and you get data insights, as we saw with the error reporting and latency you can get out of the console. There is a list of all this in the Google documentation as well, which you can go through. That's it for the Google console; I don't think there are many exam questions out of it, so this is for your information. Thank you.

Google Cloud Shell is the shell you can use from within the Google console. It is another interface, and it is a CLI, a command-line interface. Let me go back and show you where you launch the shell: this is my console, and you just click this button to activate the shell. It is getting provisioned... and now the shell is launched. You have all the utilities required to interact with Google: the gcloud command, bq for BigQuery, and gsutil for Cloud Storage buckets. Everything is installed in your console, and this is a very good way to get started if you do not want to install the Cloud SDK; it is the first thing to try if you want to experiment with Google Cloud Platform commands. You can open it in a new window in full-screen mode, you can customize the terminal (different colors, fonts, and so on), and you can add additional sessions. The good part is that if you install or run a local application here as a sandbox, you can see a web preview from here when you click this button.
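Once the shell is up, you can confirm the pre-installed tooling mentioned above with a few read-only commands:

```shell
gcloud --version      # Cloud SDK components and their versions
gcloud config list    # active account and default project
bq version            # BigQuery command-line tool
gsutil version        # Cloud Storage command-line tool
```

None of these change anything in your project, so they are safe to run in any Cloud Shell session.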
I don't have anything running right now, but if I click it you would get the response from here; you will see this when we have the App Engine demo. So that's Google Cloud Shell. Some of its features: you can think of it as an admin machine (you can't really use it to host your own application deployment); you have full power and access from anywhere; it is secure and fully authenticated; and it comes pre-installed and up to date, so all the libraries you use, including the gcloud command, are current. It is developer-ready, and it has a 5 GB persistent disk, so you can store local files; you can already see some of my folders and directories lying there from my demos. It is web-browser access: using the browser you can run commands without installing anything, it comes with 5 GB of persistent storage, it is secure (until you log in you cannot access it), and you have all the developer tools required to deploy applications to Google Cloud Platform from Cloud Shell. If you look at what it provides: a shell interpreter (bash, sh), Linux utilities, the ability to install libraries if you want, gcloud, the App Engine SDK with its local sandbox, gsutil, text editors (Emacs, Vim, and Nano), and build tools like Gradle. All of these are already available in the shell for you to use; think of it as a command-line interface to go. One more thing here: if you come back, you can open the file explorer, which has a Windows-Explorer-like feel. All your directories are listed, and you can even create your own files, modify them, and save them. So if you don't want to play with, or don't know how to use, vim or nano, you can use this browser editor as

well, though I don't prefer it much, because I just like Linux command-line utilities for editing files. That's it for Cloud Shell, guys; if you have any questions on Cloud Shell, let me know, otherwise you can move to the next lecture. Thanks.

We already saw the Cloud SDK: it is the CLI, the command-line interface and utility you install on your laptop or computer. Let's go ahead and download and install it. I went to the Cloud SDK quick-start/download page, and it has detected my operating system. I already have a project; you just need to download it in single-user mode. You get an option for whether you want to install the beta commands, and I said yes. The installation is done now (I paused the video during the install to save time), so let me click Next and Finish. Now let me open cmd and type gcloud. The first thing you need to do to get going with the gcloud command is initialize it, so let me initialize my interface. "You must log in to continue; would you like to log in?" I say yes; it asks for my user ID and password to log in, and I click Allow, so it is logged in as my user. These are the projects available to me; I select one as the default project and press Enter. Now it asks whether I want to set up a default zone for Compute; I say yes and choose us-west1-a. Everything is set up. Now if I type gcloud I see the options, and if I run gcloud compute instances list, it gives me the list of instances. So that is how you install and set up the gcloud command. Along with gcloud you also get gsutil and bq for BigQuery (the path was not set up in my case, but they are there). Those are the utilities you get by default with the Cloud SDK.
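In summary, the first-run flow I just walked through looks roughly like this; the zone is just the one I picked, and yours may differ:

```shell
# one-time setup: log in, pick a default project and zone
gcloud init

# sanity checks after initialization
gcloud auth list                           # which account is active
gcloud config set compute/zone us-west1-a  # default zone for compute commands
gcloud compute instances list              # VMs in the default project

# companion tools installed alongside gcloud
gsutil ls    # list Cloud Storage buckets
bq ls        # list BigQuery datasets
```

Setting a default zone means you can drop the --zone flag from most compute commands afterwards.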
That is it for the Cloud SDK, guys; if you have any questions you can ask me, otherwise move to the next lecture. Thanks.

Cloud APIs. Think of these as what you use from within your programs: REST endpoints you can call. Let me go to the console and show you how to explore the different APIs. From the cloud console home page you can go to "Explore and enable APIs", and you will see which APIs were executed over a given duration; you can change the duration or download those transactions. Let me show you one example. I went into the Compute Engine API, where you get usage information for your Compute Engine API calls, but you can also try it in the API Explorer. The API Explorer is a sandbox where you can try different APIs; I'll pick the Compute Engine instances.list method. It needs a project ID and a zone. Let me grab my project ID, and for the zone I will just give an underscore; it asks for permissions, then says "unknown zone". Let me try "1a"; still "unknown zone". Let me check where my virtual machine is actually running: us-east1-b. Copy, paste, and here is the outcome: the instance resource with kind compute#instance, the name instance-1, and all the details such as the IP addresses. So the API Explorer is a really good tool to explore these APIs: you can query them, get the information out, and see the sample outcome, which is exactly what you would get in your programming interface from this API request. That is Cloud APIs, guys; if you have any questions on Cloud APIs, let me know. It is a programming interface, that is what you need to remember, and if you want to try it, you can use the API Explorer.
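The same instances.list call can be made outside the Explorer with curl against the REST endpoint. PROJECT_ID is a placeholder for your own project, and the short-lived token comes from your gcloud login:

```shell
# grab a short-lived OAuth access token from the local gcloud login
TOKEN=$(gcloud auth print-access-token)

# call the Compute Engine REST endpoint directly
curl -H "Authorization: Bearer ${TOKEN}" \
  "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-east1-b/instances"
```

The JSON that comes back is the same instance list the Explorer showed, which is why the client libraries for Python, Java, and so on are just wrappers over these endpoints.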
One more reminder: if there is any exam question here, it will be around this — if you want to try the REST endpoints, which option do you use? And API Explorer is the option you need to select. That is it for the interfaces, guys. We are going to get into Cloud SDK installation in the next section anyway, but if you have any questions on the theory part, let me know; otherwise you can move to the next section. Thanks.

Hello, welcome to this lecture. This lecture is a part of the masterclass on cloud computing for architects, designers, developers, and CIS operations. Are you excited? You should be, because we are going to take a deep dive into Compute Engine, and this is the first lecture in deploying and implementing cloud solutions. As we already know, Compute Engine is infrastructure as a service from Google Cloud Platform — you can think of it as a virtual machine, or a computer in the cloud. The computing service is one of the three core services — just to remind you: networking, database and storage, and the computing platform — and within the computing platform, the virtual machine is one of the services; the other options available are code, container, and serverless. So, as I said, the virtual machine or computing service is infrastructure as a service, where you manage all the components and have full access down to the hardware. Before we dive into Compute Engine, let's go ahead and take a look at what the syllabus says for understanding the compute service. So here you go. What are we going to see? Deploying and implementing Compute Engine resources: launching instances, creating an instance group using an instance template, generating and uploading SSH keys, configuring a VM for Stackdriver monitoring and logging, assessing compute quotas and requesting increases, and installing the Stackdriver agent for monitoring and logging. The other theoretical part we need is managing VMs — like SSH and RDP — and this you can think of as more of a demo. To understand this demo, you need to have
an understanding of what we mean by snapshots, images, instance groups — all of that I am going to cover in the theory lectures in this section. Going back to Compute Engine: why do we use Compute Engine? There are workloads which we can't easily containerize; you may want to bring existing VM images onto the cloud; the application may need OS-level changes; you may want to host an application without rewriting it; and you may want full control of the infrastructure, with direct access to the network and other compute resources. That's the power of Compute Engine. So you want to spin up Compute Engine — let's go ahead and take a look at the components of Compute Engine. Disks: disks are required for you to install the operating system, to process data, or to store local data in VMs. Images: the images from which you launch a Compute Engine instance. Networks: we saw that networks are global in nature; you need to attach a network so that you can isolate a particular virtual machine from access by someone else, while at the same time making sure that you yourself have access to the virtual machine over the network. You also configure firewalls, and firewall rules can be attached to the virtual machine as well — we'll see how to do that. Addresses: IP addresses, internal and external, versus DNS. Regions and zones: we saw what a region is — the whole geography is divided into regions, and within one particular region you have multiple zones, so that if one zone goes down, another zone can take the customer requests. A zone is a physical location, and we'll see how you select a particular zone. Then instances and how you manage them, load balancers, auto scaling, instance templates, and instance groups — that's what we are going to see. Now, some of the features of Compute Engine in the context of Google Cloud Platform. It supports sub-minute billing. What do we mean by that? You'll be charged — if you are
spinning up an instance for, say, 30 seconds, you'll be charged for a minimum of one minute; but after that, if you use that particular instance for one minute and two seconds, you'll be charged for 62 seconds only. That's sub-minute billing. Globally scoped images: the images it supports are globally scoped. It supports CDN — content delivery network — and we are going to have a demo about it. Disk performance: you can have local disk or RAM disk for higher performance for your VMs. It supports high network performance, because you have Google's fiber-optic network backing the virtual machines. Live migration: when there is maintenance happening on the physical server — the physical hardware — your virtual machine gets live-migrated to another physical host, seamlessly to you; we are going to talk about and look at that. It supports auto restart: if some problem happens and there is an unscheduled stop, it will try to restart the virtual machine. Machine right-sizing: this is a really good use case of Compute Engine — you can configure the number of CPUs and the GBs of RAM exactly the way you want; this is, I think, only available in Google — if you go to AWS, you have a predefined set of machines which you must use. Automatic discounts: you have multiple kinds of discounts available to you — committed use discounts, sustained use discounts, preemptible virtual machine instances — we'll talk about those in the discounts section. It supports Linux and Windows, it has predefined as well as custom machine type options, and it supports global load balancing and auto scaling. And that's it — these are the features of Compute Engine, and they will matter going ahead when we configure those virtual machines in the demo. Let's look at some of the performance attributes for Compute Engine. Typically, when you want to look at how your Compute Engine instance is performing, what are the
parameters you need to consider? The first one is the processor family, and you can think of this as a given: if you are launching, say, one instance in one particular Asian zone and it has only one or two processor families, you don't have much choice — but it is one factor in compute performance. The processor family defines whether the processor runs at 2 GHz, 2.2 GHz, or 2.5 GHz. Some of the processor families here: Haswell, Broadwell, Ivy Bridge, and Sandy Bridge processors. The next one — which you control and define — is the vCPU. What do we mean by vCPU, as opposed to an actual physical CPU? You can generalize vCPU as a virtual CPU, and if you want to map it to capacity: two vCPUs correspond to one core in the physical hardware. Let's look at network throughput — what is the network throughput, and how does it vary? Typically, the network throughput depends on the number of vCPUs in your virtual machine: per vCPU you get 2 Gbps of network bandwidth. So if you spin up an instance with four vCPUs, you get 8 Gbps of network bandwidth attached to it, and you can go up to 16 Gbps with eight-core machines — but you cannot go beyond that; even if you increase the number of cores past eight, the network throughput will not go beyond 16 Gbps. The last aspect here is disk I/O — how performant the disk I/O is — and that's where you need to think about how you can increase disk I/O. For disk I/O there are different parameters to look at: whether it is SSD or magnetic disk — that is one differentiating factor — and what the size of that particular disk is. So there are these two parameters, and we are going to look at them. We said it supports two operating systems, Linux and Windows. To spin up and connect to a Linux machine or a Windows machine, what are the parameters you look at? Typically the license cost, if
there is any — like Red Hat Linux or any other license — that is included in your per-minute or per-second cost. Then the connection — the way you connect. If you want to SSH directly from the console, you do not require any key: you just log in, click on SSH, and you get connected. You can SSH from Cloud Shell, and you can SSH via Cloud SDK from your desktop; with a third-party client, that is where you need a key, otherwise you will not be able to connect. Typically, if you are connecting from outside the console, you will need to open tcp:22 — that is the SSH port — in the firewall; if you have not opened it, you will not be able to connect, and your connection will be refused. So for Linux, you will have a key to connect from outside, and we are going to see that in the demo. For Windows, the license is also included, so you do not have to register your instance — everything is included. You will have RDP support — like SSH in the Linux case — using any RDP client or PowerShell. You need to set a password, and this is very important: even if you want to connect from the console, you need to have a password to connect to the instance. The firewall rule should allow tcp:3389, and this is compulsory for your Windows machines. Machine types: let's look at the four or five machine types in GCP — there is also one custom type, and with custom you have the flexibility to configure your own. First, the standard machine type: it is a balance of CPU plus memory — per vCPU you get 3.75 GB of RAM, and that is your standard machine. It is balanced, so if you don't know how you are going to use it, you just spin up a standard machine: you pick the CPUs you want and the memory is configured for you, and you can go from one CPU to 64 or 96 based on your need. The second one is shared-core, and this is very small, for when you have a very light load. If you look here, if
you take 0.2 — or 20% — of a CPU in f1-micro, you will have 0.6 GB, which is 600 MB of RAM, and you can attach a disk the way you want — that is how the f1 machine is represented. Then you have the g1 machine, in which you have 60% of a CPU — not even one full vCPU — and 1.7 GB of RAM. Micro bursting: what that means is, if the physical hardware has spare capacity and your application or programs need additional capacity at some point while running, the instance can go beyond its 0.2 CPU share to, say, 0.3 or 0.4 — we don't know exactly how much — but f1 allows micro bursting for certain loads. The next one is high-CPU machines, where you have higher CPU relative to memory. The standard type has 3.75 GB of RAM per vCPU, but in the high-CPU case you get only 0.9 GB per vCPU — because you do not need that much memory — and you go in multiples of that. In this kind of machine the CPU count starts from two and onwards; in standard you can start from one, but in this machine you start from two. There are some restrictions — you cannot go below 1 GB on these machines — and that is, I think, why the minimums exist. High-memory is the vice versa of high-CPU: per vCPU you have 6.5 GB of memory, for when you are processing a lot of data in RAM. In this case also you have CPUs from 2 up to 64 or 96, based on your need and on what is available. Typically the machines are represented like this if you use the predefined ones, but there is a third machine type — custom — where you can define your own number of CPUs and your own RAM; you can mix and match between all these configurations, whatever is available. And there is actually a button there, and you can extend the
memory as well, and you can go up to 624 GB — I'll show you how to do that. I just wanted to show you one more: ultra machines. They recently came out with this particular machine type; in these ultra machines you have 40 vCPUs and you can go up to something like 961 GB of memory. These ultra machines are special requests, you can think of, and you can use them — they must be very expensive. If you look at the number of disks, on any of these machines — including ultra machines — you can go up to 16 persistent disk attachments and up to 64 TB of total disk space attached to any machine. Let's go ahead and look at this in the console. Let me go here. If you want to spin up a virtual machine, just go to Home and then Compute Engine, and the first step is to click on Create — you don't have to actually launch the machine just to explore. You can provide the instance name the way you want, and the region — this is where you specify which location you want to launch in. I'm just selecting the default region and default zone here, not changing anything, and you can see your monthly estimate for whatever configuration you have. This is where you can go ahead and change and define your configuration: either micro or small — those are the F and G machines — then the standard machines, and if you go down, high-memory, high-CPU, and ultra machines. This, I think, is the new machine they have — a mega-memory machine, somewhere around 1.4 TB of memory with 96 vCPUs — but those are the ultra machines. I'm just keeping whatever I have. You can even customize it — that's the third option — so you can define, say: I need 4 CPUs and then I need, say, 24 GB; and based on your calculation — 6.5 GB per CPU — that's how you get 26 GB. But you can just click on this toggle, and now you can go as high as 624 GB; I captured that particular number from this screen itself. As I said, you can even
choose or configure the CPU platform. Let me go back to the basic flow, the first one: the CPU platform is automatic, and this is where you can pin the CPU platform, which determines the clock speed I mentioned. So the speed you get depends on the CPU platform you allow. Let me check the vCPUs — okay, I don't see any configuration here showing how the network bandwidth is impacted, but that's what the documentation says. Let me look at the other aspects. If you want to attach GPUs, you can, and you can choose which model you want — I'm just saying none here right now. Then, if you want to deploy a container — we'll talk about this in the next section. Then the service account: there is a default service account which I can just use, plus App Engine ones — I have these three service accounts — and I'm just keeping the default. When you create a virtual machine, once you enable the API, this default service account is there for you. Then firewalls: you can add firewall tags — I don't want to set anything here — and if you want to allow HTTP traffic you can do that; you can also add firewall rules if you want, from the details page, and that's where you manage them. You can give some tags and labels. A startup script: if you want one particular script to run when the instance starts, that script will be executed — if you want to, say, start an Apache or Tomcat instance in the back end, you can put the install commands in the startup script here; we'll see this in the demo. Okay, metadata: you can assign metadata to the instance. Preemptibility: you can turn it on and off — I'll talk about that later. Automatic restart: if for some reason — a hardware failure or anything else — your machine stops, you can set automatic restart on or off. And host maintenance — what do you want to do? Do you want to live-migrate your virtual
machine? Live migrate means you migrate your instance from one particular physical host to another instead of stopping it — somehow, magically, Google does it. You may see a performance bottleneck if you are hitting, say, 80% CPU, but if you are using 10, 15, or 20% of your CPU and memory, you will not feel the downtime as such. That was the Management tab. If I go to the other tabs, like Security: if you want to encrypt your data, you can; if you want Secure Boot, or you want to use vTPM or Integrity Monitoring, you can configure it here — we'll talk about all of this in the security section. The disks: as I said, you can have disks, and on those disks you can have the operating system installed or a database installed. You can go to Networking and attach the network here; I just have one particular network, so I don't need to worry about that right now. Sole tenancy: we'll talk about sole tenancy later, but just understand what we mean by it. Think of a physical box — a physical hardware machine. Sole tenancy means your virtual machines run on one particular piece of hardware and no one else is using that hardware — that particular server is dedicated to you only, and that's sole tenancy. Going back — okay, that's it as the basics. I will continue on Compute Engine — the next aspects, like GPUs and everything — in the next lecture. Thank you.

Let's look quickly at what I'm going to cover in this demo: load balancing, auto scaling, and high availability. This illustration is something you will see in coming lectures wherever it is applicable. Consider this particular representation, wherein we have a website, and that website is used by users across the US. I'm just considering the US use case, which you can ultimately replicate to different regions across the globe. You have a subscriber base which is located in the east.
You have a subscriber base — users — in the central US, and users in the west; and then Cloud DNS, which forwards your request to the load balancer — the HTTP load balancer you can see here. The request then gets routed to your implementation, and your implementation could be a container engine, App Engine, a cloud function, or perhaps Compute Engine; consider here only the Compute Engine part, because each individual implementation — containers, App Engine, cloud functions — works differently. So consider a plain, simple case here: assume that all the instances are Compute Engine instances, and let's understand how load balancing works. All the requests that come into Cloud DNS are forwarded to the load balancer, and the load balancer distributes those requests across multiple instances to service them. The load balancer makes the decision about where to forward those requests among the backend services. When users from the central US want to use the service or website, the request goes to DNS, DNS forwards it to the load balancer, and the load balancer magically makes a decision: it understands that this particular customer is coming from US central and should be served from the nearest location — the implementation based in US central — so the requests are forwarded to the Compute Engine instances in the central region. Consider the next case, where users from the east try to access the website or service: the request goes into Cloud DNS, then into the load balancer, and those requests are forwarded to the nearest region or zone for that particular user — in the case of the east, somewhere in the eastern US. In similar fashion, if you have customers accessing the service from the west, the request is forwarded to the west implementation of your services. So this is how the cloud load balancer makes decisions and forwards requests intelligently to the backend services.
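The "serve from the nearest location" decision described above can be sketched as picking the region with the lowest latency to the user. The latency numbers below are made-up illustrative values, not anything the load balancer actually exposes:

```python
def nearest_region(latency_ms: dict) -> str:
    """Pick the backend region with the lowest latency to this user."""
    return min(latency_ms, key=latency_ms.get)

# Hypothetical latencies for a user sitting in the central US:
central_user = {"us-central": 10, "us-east": 40, "us-west": 70}
print(nearest_region(central_user))   # us-central
```

The real load balancer's decision also factors in backend capacity and health, which the HA part of this demo covers next.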
But hold on — the cloud load balancer uses other parameters as input to make this decision. Let's look at this. Consider the rate of requests getting into the different services in different regions. You can see there is a considerable number of requests going from the west to the west Compute Engine instances. As the requests keep climbing, the server gets busier and busier, and that's where auto scaling plays an important role. But hold on, here is the catch: if the load balancer sees that a particular compute instance is somewhat full, it needs to reduce the traffic going to those particular instances, and it routes to the next available instance or region — you can see that some of the requests, the red dots, are getting into US central at the same time. In the meantime, auto scaling makes its decision: it decides it needs to create more instances, because there is a considerable amount of traffic, and it kicks in additional instances to serve that traffic in the US west zone. That's auto scaling. And then, magically, everything works, and the additional instances also start receiving the traffic from the US west users. So this is auto scaling, plus the cloud load balancer's decision about what to forward and where to forward the requests. By this time, you know how load balancing works and how auto scaling works. Now let's understand HA — high availability. Consider a case where something goes out: you are not able to reach central. I'm using the regions concept here for illustration, but you can think of the central region as one particular zone. Now the link to that zone is gone, or for some reason it is not available.
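The auto scaling decision just described — "there is considerable traffic, so kick in additional instances" — boils down to a calculation like this. The request rate, per-instance capacity, and the 80% target utilization are assumed numbers, purely for illustration:

```python
import math

def instances_needed(request_rate: float, capacity_per_instance: float,
                     target_utilization: float = 0.8) -> int:
    """Instances required to keep each one at or below the target utilization."""
    return max(1, math.ceil(request_rate / (capacity_per_instance * target_utilization)))

# Traffic in US west grows from 100 to 260 requests/sec;
# each instance can comfortably handle 50 requests/sec:
print(instances_needed(100, 50))   # 3
print(instances_needed(260, 50))   # 7 -- the autoscaler adds instances
```

A managed instance group's autoscaler works on the same idea: it watches a utilization signal against a target and resizes the group accordingly.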
What will happen is this: the requests — you can see the yellow dots which were going there — now the cloud load balancer, with its health check, understands that there is no connection to the central zone. It then makes a decision to forward those requests to instances which are available and near to those users, and in this particular case it is the east, because the west is somewhat far away for them. So it sends the requests to the east, and that's the health check at work. So you understand: the cloud load balancer decides where to forward a request; it has a health check which verifies that instances are healthy enough to receive requests; and it also checks whether an instance is too busy to service them, in which case it forwards them to another location nearest to the users. I hope you understand, at a high level, how load balancing, auto scaling, and HA work.

Let's go ahead and create some virtual machines to understand how we can actually do that. This is the Google console. Let me just get into the console straight away. Once you get into the console, you will see the dashboard. It shows you the project which is currently selected; you can see multiple projects, whatever you have created in this particular account. You can see whether there are any resources you have created — there are none right now, so you don't see any. There are multiple documents, or guides, you can say — you can use these guides to create virtual machines straight away. You can see the API traffic you are currently using, and there are different items here which we already saw in the console overview. Let me go straight to Google Compute Engine. This is what I have right now — I just cleaned up every resource in this particular project so we can create fresh. Let me go ahead and create one virtual machine.
You can see that it needs an instance name here; I can type any name I want — you can see it allows only lowercase letters, numbers, and hyphens. So let me put web-01 as the instance name. The zone: as we discussed earlier, all those data centers are divided into regions and ultimately into zones. If you look, there are multiple zones here where I can go ahead and create the instance; as we are talking right now, these are the zones which are available. In the first entry you can see asia-east1 — that's the region — and then there are multiple zones in east1, which are a, b, and c. Likewise, we have multiple zones; these are the regions and their corresponding zones currently available for this particular resource. Let me just pick any of these zones, and you can see the price — these are monthly estimates for the virtual machine usage. You can change to a different zone and you will see the price change as well. So you can very well see that the price for this particular resource — the instance, the virtual machine — depends on the zone, or in fact the region, in which you are creating it; based on the region, the pricing changes. It shows an effective hourly rate of, say, 3 cents or 3.4 cents. Let us try to find a very cost-effective option if we can. You can see that in Asia it is $28 per month; in Australia it is $34, which is somewhat expensive; in Europe, $27; Europe West, okay, $31; US Central, $24; East, $27; and West, $24 again. So Central or West — you can just go ahead and create there. You can also select the machine type you want to create, so let's go ahead and look at the available machine types.
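The region-by-region price comparison above is easy to script. The region IDs below are assumed names, and the figures are the rounded monthly estimates quoted in this demo — live console prices will differ:

```python
# Rounded monthly estimates from the demo (USD); region IDs are assumed names.
monthly_price_usd = {
    "asia-east1": 28, "australia-southeast1": 34, "europe-west1": 27,
    "us-central1": 24, "us-east1": 27, "us-west1": 24,
}

# min() over the dict keys, ranked by price, returns the first cheapest region.
cheapest = min(monthly_price_usd, key=monthly_price_usd.get)
print(cheapest, monthly_price_usd[cheapest])   # us-central1 24
```

Note there is a tie here — US Central and US West are both $24 — which is exactly why the lecture says "East or West, you can just go ahead and create"; either of the cheapest regions works.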
As we already discussed, there are multiple machine types. The first one is standard: per vCPU there is 3.75 GB of memory, and you can go up to 64 virtual CPUs and up to 240 GB of memory in the standard type. Then you have high-memory: in this machine type you have 6.5 GB per vCPU, and you do not have the option to select one virtual CPU — you need to select two or more — and you can go up to 64 vCPUs and 416 GB of RAM. Then you have the high-CPU machines, where per vCPU there is 0.9 GB of memory; again, you cannot select one virtual CPU here as of now — you need to select at least two to start those instances — and you can go up to 64 virtual CPUs with 57 GB of RAM. You can select any of those configurations or combinations based on your need: whether you need more memory to cache some data, or more processing power — the CPUs — to do, say, computational analysis. So based on your requirement you can go ahead and choose. Besides these, there are two others which are shared-CPU — the CPU itself is shared between instances — with memory options between 0.6 GB and 1.7 GB: one is f1-micro and the other is g1-small. For f1-micro, we saw that the instance can throttle up to a higher speed based on the requirement. Let me go ahead and choose, say, f1-micro — 0.6 GB of memory is already selected. Let me just go ahead and customize this one and see what I can get. If I start from one virtual CPU, it gives me 3.75 GB, which is the standard machine; and I cannot go below 1 GB, nor above 6.5 GB per CPU — 6.5 is the high-memory option, you could say. So it increases in steps, and you can select any combination of those.
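The per-vCPU sizing rules just described — 3.75 GB for standard, 6.5 GB for high-memory, 0.9 GB for high-CPU, with a two-vCPU minimum for the latter two — plus the per-vCPU network cap from the performance section can be sketched as a quick calculator; the family names here are shorthand labels, not the official machine type IDs:

```python
GB_PER_VCPU = {"standard": 3.75, "highmem": 6.5, "highcpu": 0.9}

def machine_memory_gb(family: str, vcpus: int) -> float:
    """Memory that comes with a predefined machine of this family and size."""
    if family in ("highmem", "highcpu") and vcpus < 2:
        raise ValueError(f"{family} machines start at 2 vCPUs")
    return GB_PER_VCPU[family] * vcpus

def egress_cap_gbps(vcpus: int) -> int:
    """Network bandwidth: 2 Gbps per vCPU, capped at 16 Gbps (8+ vCPUs)."""
    return min(2 * vcpus, 16)

print(machine_memory_gb("standard", 1))   # 3.75
print(machine_memory_gb("highmem", 4))    # 26.0 -- the 4-CPU example from the lecture
print(egress_cap_gbps(4))                 # 8
print(egress_cap_gbps(32))                # still 16
```

The 26.0 result is the same "4 CPUs at 6.5 GB each gives 26 GB" arithmetic used earlier when customizing a machine in the console.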
The best thing about this particular screen is that you can very well see the estimated monthly cost. Certainly, if you go ahead and terminate these instances and then use them again, or stop them for a certain time and use them again, there will be proration — but this is the high-level running cost for the machine, considering 730 hours per month. Then you have the option to select the CPU platform. You cannot, I think, select the CPU platform if you are on custom — yeah — but let me go ahead and choose one of these predefined machines and try to select it, because there are constraints under which you cannot use certain platforms. So, I think in US West — let me choose US Central and see what platforms are there. So it is not actually based on the CPUs — there are certainly constraints on selecting the CPUs — but it is locked to the zone: in a particular zone, certain CPU platforms are available, like Skylake, Broadwell, or Haswell, and those are available for US Central. For US West, let me see the options — only one, which is what we saw. Let me go ahead and select Central. If I select Central, with 7.5 GB of RAM and two vCPUs, and I want to select Ivy Bridge — the price for these platforms is, at least currently, showing the same for me. I can go ahead and add GPUs. GPUs aren't available in the particular zone I selected, so I need to change the zone. Let me choose US East — okay, not there in US East either. Now, in US West, you do have the option to choose GPUs, and I can choose two GPUs of this model — you can see the cost goes up tremendously. If I take out the GPU, it shows somewhere around $50, but if I choose any of the GPUs the cost goes very high — somewhere around $500 per GPU per month is what they are charging.
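The estimated monthly cost shown on this screen is essentially the hourly rate multiplied by an assumed 730 hours, and the sub-minute billing rule from earlier applies a one-minute minimum before per-second charging kicks in. A sketch, using the roughly 3.4-cent hourly rate quoted earlier — exact billing is more nuanced than this:

```python
def billed_seconds(runtime_seconds: int) -> int:
    """Sub-minute billing: a one-minute minimum, then per-second charging."""
    return max(60, runtime_seconds)

def monthly_estimate_usd(hourly_rate_usd: float, hours: int = 730) -> float:
    """The console's estimate assumes ~730 hours of continuous running."""
    return round(hourly_rate_usd * hours, 2)

print(billed_seconds(30))            # 30 s of runtime still bills 60 s
print(billed_seconds(62))            # 62 s bills exactly 62 s
print(monthly_estimate_usd(0.034))   # 24.82 -- close to the $24/month quoted
```

That is also why stopping instances when idle saves money: you pay for billed runtime, not for the full 730-hour month.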
So let me not select that right now. About the boot disk: this is where you can choose predefined images, if you want to use the public images. The options available are, as I said, Windows and Linux — whatever is in use in the market right now. You can choose either a Windows machine or Debian Linux, as an example, or you can choose an application image — like this one, Windows Server 2012 — or you can select from your own images. I don't have any images right now; I cleared everything. I don't have a snapshot, and I don't have an existing disk at all to choose the image from. For the boot disk, you have a choice between magnetic disk and SSD; the price changes based on what you choose, and in fact the speed associated with it changes too. Let me go ahead and just keep the magnetic disk and continue. So that's Debian Linux 8 which I have selected. It is asking me for the service account — I want to keep the same default; I don't want to do too many ifs and buts. I want to allow all HTTP and HTTPS traffic, and let me go ahead and create with all the defaults.
This will take, I think, a moment for the server to start. Yeah, the instance is ready now. There is an internal IP address, which you can use for an internal load balancer or any other internal communication, and an external IP — you can see this IP is assigned, and you can very well go ahead and ping that IP if the traffic is allowed. You have multiple options to connect to this particular instance. One is through SSH in the browser, the way we are doing it right now; or there are others — you can even connect with PuTTY or another third-party client, and there is a separate process for that which we'll cover in detail in a subsequent lecture. But this is the way: let me just go ahead and connect with SSH. See, it is connected. You have full access to the machine right now; if you want to do su, which gives root access, you can very well go ahead and do it. So that is how you can access it. We will see more features of this in a subsequent lecture. If you are ready, let's go ahead and jump to the next lecture. Thank you.

In this particular demo, I want to show you how to create a Windows virtual machine and how to connect to the instance. If I click on Create Instance, I can name it web-win-01. I want to create it in the west because it is somewhat cheaper, and I want to change the image to Windows. One thing to note here: if you select some of the Windows machines, it will show you that the required size for the persistent disk is 50 GB, while for some others which I tried it is 32 GB — which is even fine. So for some it says 50 GB is required, and at least 32 GB is required for any Windows instance — that is the takeaway here. I am just selecting whatever is required, but I want to use an SSD for this particular instance, because it was taking too long to create the instance, and I also want to change the instance size for the
demo purpose because it was taking too long time to create the instance with the small configurations so I want to allow everything here so SSH key actually it is you can say similar process but I am not creating any SSH key here or attaching it I want actually prem table so if I switch it on the price should go down drastically so before I switch it off it was 112 dollars and if I switch it on now it is showing me 78 dollars this price is there because I have relatively considered big machine if I select a smaller machine it is 41 dollars let me choose the big one because I want to create this video as much as fast I can do it so let me create it so the instance is being created let me pause the video for some moment so that will come back once the instance is up and running so the instance is ready so it has got external IP address internal IP address I can go ahead and create instance group out of this particular instance I just let me just go ahead and do RDP to the particular instance it is asking me password I have password already created but I think it should reject because this particular password which I had created was for the other instance so before you guys go ahead and connect to this particular instance you need to reset the password for this particular instance so set password for this instance so this is the user and this is my password I can use this password to connect it but keep this in this is in mind it takes actually some time for you guys to get this particular instance if the instance is very small you need to wait for certain you know at least some time for your password to get reflected in that particular instance because it takes some more time so the instance is coming up now or it is at least getting connected you have option to either disconnect minimize send out control this is this is Chrome's you can say extension for RDP which I am using it right now I have created this particular instance but this is not you know unfortunately 
which I created with the desktop so whenever you do RDP you will be able to connect this particular instance but you will not be able to see the full desktop here so I have created another instance which we are going to do RDP in a short while once the instance is ready I will pause the video for some time so the instance which I have created earlier was the server instance and it does not have the graphics inside it so in the meantime what I did is I created another instance Windows instance let me reset set the password for this one as well copy the password just need to actually save this password you need it while you are connecting it so close and do RDP to this instance it is connecting so it takes while for smaller instance to get that password reset whatever we did either you can actually wait for some time to get that reflected or you have another option actually just to go ahead and stop the instance and start again so that way you will have all the updates for password into the instance so the instance is coming up so the instance is started now I have resetted the password as well and I have the password for this particular instance let me do the RDP so I think there is some issue which this particular guy is guiding me on certificate and the instance here is 97 delete certificate so connect again password continue see it is now connecting to my Windows instance which I have created the earlier instance which I had created was only the server version so it has got no actually the UI as such as a desktop I have created another one with the UI choosing another image so let us go ahead and understand if this particular instance has got a internet connection right now or not once it pops up first time usually actually takes time it will not be too much for the second time onwards because for Windows it has to set up the profile and everything when you are logging in for the first time so this is internet explorer okay and this would be the risk so let me 
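The password reset done through the console has a gcloud equivalent. A sketch, assuming the instance name and zone from this demo and a made-up username:

```shell
# Generate (or reset) Windows login credentials for an instance.
# The command prints a username and a fresh password; on small machines,
# allow some time (or stop/start the instance) for the reset to propagate.
gcloud compute reset-windows-password web-win-01 \
    --zone=us-west1-a \
    --user=demo-admin
```

The returned password is exactly what you paste into the RDP client, as shown in the demo.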
Let me put in google.com and continue. I hope it is coming up; there is a warning message here, but see, we have google.com. Let me try to install Chrome here, because I do not want that warning message to keep coming up; let me see if it allows me to install it. You can also configure the Windows options here if you want to. It is taking some more time; the instance I have chosen is probably still setting things up, so let me pause the video here. Yeah, ultimately I can go ahead and browse the internet, which is good news. Also, by default Google gives you the Cloud SDK shell, so you can run gcloud commands inside the machine if you want to. So that is it for the Windows machine, guys; if you have any questions on this one let me know, otherwise continue to the next lecture or demo. We will continue our discussion of virtual machines in the next demo.

Hello, welcome to this lecture. This lecture, and the section it belongs to, is part of the masterclass on cloud computing for cloud architects, designers, developers, and systems operations. Our main focus is Google Cloud Platform and the certifications around it. Are you excited to get into a new section altogether? Yes, we are getting into data solutions. This is the next section after completing our computing services: database and storage services, one of the three core service areas available in any public cloud platform. It contains relational databases, and it contains NoSQL and big-data technologies like Hadoop, DynamoDB, MongoDB, Spark, and Cassandra. Database and storage services, together with AI, are divided into three high-level categories, and this division exists because we have new areas like artificial intelligence and big data, and those are offerings on
the public cloud platform. So database and storage is one of the three offerings we have in the space of data, big data, and AI. Database and storage contains SQL and NoSQL databases and storage; big data offerings include Hadoop, Spark, and Apache Beam; and then we have machine learning, which is a separate topic altogether, but we are going to look at machine learning as well. So what do we have in database and storage services? We have Cloud SQL, which is a relational database on cloud: you can launch MySQL as well as PostgreSQL instances using Cloud SQL on Google Cloud Platform. Cloud Spanner is Google's homegrown relational database management system, horizontally scalable across the globe. Bigtable you can think of as an HBase-like implementation on Google Cloud Platform. Datastore you can think of as a MongoDB-like implementation in Google Cloud; it is a document DB. And then we have Cloud Storage, which is object storage: you can store any object into it, and the object could be any static file, image, video, and the like. In the big data services we have BigQuery, Cloud Dataflow, Cloud Dataproc, Datalab, and Pub/Sub, and we are going to see all of these services. In fact, Cloud Pub/Sub should arguably not be part of the big data services; it has just been categorized there. It is an integration tier, or messaging platform, which we use for all our messaging needs in the public cloud.

Let's get into the details of what is expected of you as a cloud engineer on the data side. You need to deploy and implement cloud solutions around the data space; in short, the successful operation of those data services. When the exam guide says deploying and implementing data solutions, in a nutshell what it puts forward is initializing data systems with products like Cloud SQL, Datastore, BigQuery, Cloud Spanner, and Cloud Pub/Sub: you need to launch instances or initialize the service before you can use it. Then you load data, using the command line, the API, transfer jobs, import/export from Cloud Storage, or streaming data into Cloud Pub/Sub, wherever applicable. That is part of section three, deploying and implementing solutions. As part of section four, managing data solutions on the cloud, you need to: execute queries once the instance is ready and retrieve data from the data instances; estimate the cost of BigQuery queries; back up and restore data instances; review job status for Cloud Dataproc and BigQuery; move objects between Cloud Storage buckets (purely in the context of Cloud Storage); convert Cloud Storage buckets between storage classes; set up lifecycle management policies for Cloud Storage; and work with the management interface in the Cloud Console. Those are the detailed syllabus items, but if you summarize and categorize them, what we are looking at is this: for Cloud SQL, Datastore, Bigtable, Dataproc, and Spanner, you need to launch a new instance, set it up or configure it with different parameters, load data to and read data from those instances, and handle high-availability configuration, backups, replicas, and restore configuration; all of those are admin activities you have to manage in the cloud environment. For Cloud Storage, you are managing buckets, objects, etc.
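One of those syllabus items, lifecycle management policies for Cloud Storage, can be sketched with gsutil. The bucket name is a placeholder, and the two rules (move to Nearline after 30 days, delete after a year) are illustrative, not from the lecture:

```shell
# Write a lifecycle policy: objects older than 30 days move to Nearline,
# and objects older than 365 days are deleted.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
     "condition": {"age": 30}},
    {"action": {"type": "Delete"},
     "condition": {"age": 365}}
  ]
}
EOF

# Apply the policy to a bucket (placeholder name).
gsutil lifecycle set lifecycle.json gs://my-demo-bucket
```

This is the same storage-class conversion and lifecycle setup the exam objectives describe, done from the CLI instead of the console.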
You are managing storage classes and the pricing around them, and how you optimize your cost. For BigQuery and Dataproc: BigQuery is a managed service, so the question is how you schedule a job, execute it, and manage its lifecycle; Dataproc is a Hadoop and Spark implementation, so the question is how you create Hadoop and Spark jobs, run them, and maintain their lifecycle. Cloud Pub/Sub is a messaging platform, and we will get into a demo of it; we had a glance at it in the computing section as well, but we are going into the details of Cloud Pub/Sub, creating a topic and triggering messages. Dataflow is not explicitly in the syllabus, but we are going to cover it so that you understand it; if you get one or two questions on it, you should be able to handle them. That's it as a kickstarter for this particular section; if you have any questions, let me know. Let's get started with Cloud SQL. Thank you.

Cloud Spanner: it is a global RDBMS, or relational database management system, which Google has built from the ground up. Cloud Spanner is one of the database services available for us to use, and it is in the RDBMS category, so it is part of the database and storage services. We saw Cloud SQL, a managed relational database service using which you can launch MySQL and PostgreSQL instances; with Cloud Spanner you are not launching some other database engine: Cloud Spanner itself is the database. You cannot take Spanner onto your on-premises environment; you can only launch it from Google. Cloud Spanner grew out of Google's own RDBMS requirements, and it is a globally scalable database. So: Cloud Spanner is a fully managed relational database management system, built from the ground up by Google for its own RDBMS requirements. You can think of use cases like ad tech, financial services, global supply chain, and retail, where you have scale plus consistency requirements; that is where you use Spanner. It is a horizontally scalable database suited to mission-critical applications with high transaction volumes, and if someone has a global database requirement, that is where you would use Cloud Spanner rather than Cloud SQL.

Typically, Spanner gives us the best of both worlds. It has a schema, which a traditional RDBMS has but many NoSQL databases do not. It has SQL-like syntax: almost all the SQL syntax from relational databases is available in Cloud Spanner, which may not be available in some NoSQL databases. For consistency, you get strong consistency, which you can tune, in Spanner as in a relational database, whereas in NoSQL you typically get eventual consistency. In terms of availability, Spanner is highly available, horizontally scalable, and automatically replicated, whereas a traditional RDBMS like MySQL, PostgreSQL, or Oracle cannot really scale horizontally: it is confined to failover; you can scale vertically but not horizontally, and replication is something you configure yourself. NoSQL databases do provide high availability and horizontal scalability; as examples, DynamoDB, MongoDB, and Cassandra run as highly available clusters.

Here are some Cloud Spanner use cases, to help you understand why we use it. This is not mandatory for the Associate Cloud Engineer certification, but it is relevant for the developer and cloud architect exams. Financial trading: before Cloud Spanner, inconsistency could lead to potential monetary loss during reconciliation; with global reconciliation, replication is a trade-off, and the feasible inconsistency leads to an incomplete view of a customer. You can map this to the kind of database we are comparing against: say Oracle is installed somewhere in Europe; that instance is in Europe and not available in the US or Asia, whereas Cloud Spanner can span multiple regions, so you can have a strongly consistent database spanning the globe, maintain that consistency, and have a unified view of the customer. All of these points compare a database that resides in one particular region or location with a database that spans multiple regions. Global call centers: eventual consistency means out-of-date data, versus a real-time, up-to-date view. Supply chain management and manufacturing: cross-region inconsistency means a global view of the data must be shipped in batches; if you want to reconcile data between locations, you have to transform it into one system before you get a single view, whereas with Spanner you can have a globally consistent real-time view, enabling real-time decision-making. Telecom billing: processing capacity is limited by finite scale-up compute resources, while scale-out allows improved processing speed (I don't think this is a problem for many telecom providers, because I work in telecom, but some may face it). Logistics and transportation: regional reach with many systems glued together, versus global reach with lower latency and a consistent view. Gaming: each server and cluster is its own universe; you need different clusters in different regions to provide consistent and fast access to your gamers, versus a consistent global view delivering a unified experience. E-commerce: limited availability SLAs, or no SLA guarantee in practice, mean potentially missed sales, versus a guaranteed maximum of about five minutes of downtime on paper; that is the SLA they offer. You can think of all of this as the sales pitch for why Spanner exists today.

In a nutshell, what you want to see is that a typical database is regional or zonal in nature, if you map it to Google Cloud or any public cloud context, whereas Spanner is global in nature. These are the attributes; I am not going to get into which claims are valid and why, this is just for information, and I will share the slides so you can go through them. Global scale: it is exabyte-scalable, practically infinite the way you want it. Availability: the multi-regional configuration offers a 99.999% monthly SLA, with downtime of less than five minutes a year. Consistency: it is purpose-built for external, strong, global transactional consistency. You can think about why they can promise this: Google's private fiber network is the backbone for Spanner and all other Google services, and with the network speeds they achieve, they can say that even though this database is global in nature, it is still strongly consistent. In 2018 there were further enhancements: multi-regional configurations, admin and data logging, Stackdriver integrations, commit timestamps you can use, compliance requirements you can enforce (so that, say, European Union data resides in Europe), OpenCensus integration, import and export out of Spanner, and support for DML queries and query stats. All of these enhancements are from 2018, and that's
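The availability figures above translate directly into allowed downtime. A small back-of-the-envelope script comparing four nines and five nines (pure arithmetic, no GCP calls):

```shell
# Convert an availability SLA into allowed downtime per year and per 30-day month.
for sla in 99.99 99.999; do
  awk -v sla="$sla" 'BEGIN {
    frac = (100 - sla) / 100                      # unavailable fraction of time
    printf "%s%% -> %.1f min/year, %.1f min/month\n",
           sla, frac * 525600, frac * 43200       # 525600 min/year, 43200 min/30 days
  }'
done
# -> 99.99% -> 52.6 min/year, 4.3 min/month
# -> 99.999% -> 5.3 min/year, 0.4 min/month
```

So "five nines" is where the "less than five minutes of downtime" claim comes from: about 5.3 minutes per year.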
what's new compared to the earlier course I prepared; this was not there then. You can think of how this global Spanner looks. If you create a database, say DB1, then DB1 exists in zone one, in zone two, and in zone three as well; likewise DB2 and DB3 exist in all three zones. It can span multiple zones; that is the point here. If you drill down into DB1, table one exists in all three zones, and table two in all three zones. Now consider a case where an update goes to, say, zone two, for table one: it will apply the update to the database there, but at the same time it will hit the other two zones then and there, and only when they acknowledge do you get the confirmation back for the commit. That is how it is a strongly consistent database. And if zone two goes out for some reason, you can still get connected to zone one: the data goes to zone one, gets written to zone three, and you get the confirmation back.

So what is an instance? An instance is an allocation of resources; when you create an instance, you choose where your data is stored and how many nodes you use. Let's go ahead and create one instance in Cloud Spanner. I am in the Storage section, and inside it you have the Cloud Spanner option: "Cloud Spanner is a fully managed, mission-critical relational database". I am not going to read all of this; you can just go through it. What you need to give is the instance name, say gcp-train-cloud-spanner-instance, and you can configure regional versus multi-regional. If you say multi-regional, it gives you only a few options, such as the nam-eur-asia configuration and the nam3 and nam6 ones, which I think are US, and you get the details about each: for example, read-write replicas in us-east4 (Northern Virginia) and read replicas in us-east1 (South Carolina), with the stated availability. Whatever you choose, you get those details. If I choose Europe, you can see Belgium and the Netherlands: read-write replicas in europe-west1 and in europe-west4. If you click regional and select one particular region, you get the information as well: for regional asia-east1, three replicas in three separate zones within asia-east1, with reduced availability. If you choose any of the regional configurations you get four nines of availability; if you choose multi-regional, for example this one, you get five nines. Five nines of availability means the service is unavailable 0.001 percent of the time, which works out to around five and a quarter minutes in a particular year when the instance may not be available, and that is the SLA guarantee you get. I am going to select regional, choose one region, and define the nodes. The node count is the cost factor: if you increase the number of nodes, the hourly charge scales with it (the regional rate is on the order of 90 cents per node per hour), and storage is about 30 cents per GB per month. If I change the number of nodes, the processing cost changes, but the storage rate stays the same, because you are charged for the storage you actually use. What are the node guidelines? Each Cloud Spanner node in this configuration can provide up to 10,000 QPS, and that is how you
size the nodes. If you have only one node, you get 10,000 QPS of reads (queries per second) or 2,000 queries per second of writes; it is a mix, so you can mix and match those query loads. That is the performance when writing single rows of 1 KB each, against up to 2 terabytes of storage per node. You may get an exam question like: your database is suffering, it currently has two nodes, what is your recommendation; do you increase CPU, memory, or what? Your answer should be the nodes. For optimal performance, Google recommends provisioning enough nodes to keep overall CPU utilization under 75 percent, and a minimum of three nodes is recommended for production loads. There is nothing else here as such, so I can just go ahead and hit Create, and it will create a Spanner instance for me. There is no data yet, but the instance is created. I can create a database inside it: I will call it spanner-demo-db and continue; this name is unique within your GCP project. You can define your database schema, the tables and indexes, the way you want, editing as text and giving DDL statements to create it; I am just going to create an empty database. So now I have an instance, and inside it a database, and I can create multiple databases inside it. I can edit the instance: instead of one node I can put two, and the cost changes accordingly. You can import data, export data, or delete the instance. That is Cloud Spanner in a nutshell; this is how you launch the instance.
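The same instance and database can be created from the command line. A sketch using the names from this demo; the regional config and description are my assumptions:

```shell
# Create a regional Cloud Spanner instance with one node.
gcloud spanner instances create gcp-train-cloud-spanner-instance \
    --config=regional-us-east1 \
    --description="Training instance" \
    --nodes=1

# Create an empty database inside it.
gcloud spanner databases create spanner-demo-db \
    --instance=gcp-train-cloud-spanner-instance
```

Scaling later is just `gcloud spanner instances update ... --nodes=2`, which matches the edit-instance step shown in the console.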
When you create an instance, you choose where your data is stored and how many nodes you want. Nodes are the processing units: Spanner uses nodes to process the data, and each node provides 10,000 queries per second of reads or 2,000 queries per second of writes with single 1 KB rows (that is the assumption; if your row size is larger, you will probably not get this performance and it will be reduced), plus 2 terabytes of storage. The node count defines the number of nodes in the instance; for additional performance, you add nodes. Encryption: Spanner automatically encrypts all your data before it is written to disk, and data is automatically and transparently decrypted when it is read by an authorized user. There is also the concept of an interleaved table: you can have one table physically stored inside another, as in the example here. The details do not matter too much for this exam, and schema design for Spanner deserves a separate lecture, but it is not really required here. This is how you create the tables: a table has columns, columns have types, and you define a primary key. You can grab a CREATE statement from the internet, or just copy-paste this one and create the resources. The first is just a single table; the second is the Albums table, which sits interleaved inside the Singers table; and you can have a Songs table interleaved under the parent Albums. You can create indexes as well, so indexes are supported: you can create an index on Songs over multiple columns, a composite index. The pricing of Spanner in your project is based on: whether the configuration is regional or multi-regional and the number of nodes; the amount of storage that your tables and secondary indexes use; and the amount of network bandwidth used. You can think of this as translating to a per-node hourly rate plus storage and network charges.
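The interleaved-table idea can be sketched with the well-known Singers/Albums schema from the Spanner documentation, applied as a DDL update; the instance and database names follow this demo:

```shell
# Add a parent table and a child table physically interleaved inside it.
# Rows of Albums are stored next to their parent Singers row.
gcloud spanner databases ddl update spanner-demo-db \
    --instance=gcp-train-cloud-spanner-instance \
    --ddl='CREATE TABLE Singers (
             SingerId  INT64 NOT NULL,
             FirstName STRING(1024),
             LastName  STRING(1024)
           ) PRIMARY KEY (SingerId);
           CREATE TABLE Albums (
             SingerId   INT64 NOT NULL,
             AlbumId    INT64 NOT NULL,
             AlbumTitle STRING(MAX)
           ) PRIMARY KEY (SingerId, AlbumId),
             INTERLEAVE IN PARENT Singers ON DELETE CASCADE'
```

Note how the child table's primary key starts with the parent's key; that prefix is what makes interleaving possible.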
For regional configurations, the per-node hourly rate goes from about 90 cents, as we saw, up to around $1.126, depending on the location you choose; for multi-regional it can go from about three dollars up to nine. I will show that: if I go to Create Instance, choose multi-regional, and select a configuration, you get around three dollars per hour, while for Asia it is more, around nine dollars per hour; so it varies by region, roughly three, five, and nine dollars. For storage, per GB per month it is about 30 to 45 cents for regional and 50 to 90 cents for multi-regional; this multi-regional one is 70 cents, and if I change the region it can come down to 50 cents or go up to 90. You are also charged for network traffic: ingress is free, egress within the same region is free, between regions you are charged around one cent per GB, and intercontinental egress is charged at internet egress rates; look at the network charges in the VPC and networking section for details. Cloud Spanner and IAM: because it is built from the ground up on Google Cloud, it is well integrated with IAM, and there are different kinds of IAM permissions you can use. One group covers instance configurations: you can list and get instance configuration information. The second covers instance operations: you can create, delete, and otherwise operate on instances. Then there are database operations: creating, updating, and getting databases inside an instance. And there are permissions for sessions: whether you can connect to the instance or database at all. Typically, Admin is a person, Database Admin is again a person, Database Reader users can be machines, and Viewer can again be a person; the primitive project roles can
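Putting the rates together, a rough monthly bill for a small regional instance can be estimated like this. The $0.90/node-hour and $0.30/GB-month figures are the regional rates quoted above; the node count and storage amount are made up for illustration:

```shell
# Back-of-the-envelope monthly cost: nodes * rate * hours + storage * rate.
nodes=3 node_rate=0.90 hours=730        # roughly 730 hours in a month
storage_gb=100 storage_rate=0.30
awk -v n="$nodes" -v r="$node_rate" -v h="$hours" \
    -v g="$storage_gb" -v s="$storage_rate" 'BEGIN {
  printf "compute: $%.2f  storage: $%.2f  total: $%.2f/month\n",
         n*r*h, g*s, n*r*h + g*s
}'
# -> compute: $1971.00  storage: $30.00  total: $2001.00/month
```

Notice that node-hours dominate the bill, which is why the node count, not storage, is the main cost lever.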
also be used. That's it, guys. I could put together another demo where we create the database tables as well, but that's it for this particular lecture; if you have any questions, let me know, otherwise move on to the next lecture. Thanks.

Cloud networking. Cloud networking is a critical service available from every cloud provider in the market right now. We use cloud networking to isolate our cloud resources from other companies and from public access. Let's get into the details of what we have available from Google Cloud Platform. Cloud networking is one of the three core services available from Google Cloud Platform, or any cloud platform for that matter. In cloud networking you can create a virtual private cloud, you can have load balancers, and you can create firewalls, routes, subnetworks, and a CDN, just as you would in your own data center. At a high level, cloud networking divides into three areas. The first is the load balancer: that is where you take traffic from your customers and distribute it to backend services, and we saw this in detail in the computing section. The second is VPC: that is where you create the virtual private cloud, which you can think of as a private network inside the global cloud; you can spin up resources inside that private environment so that others cannot access them, and inside the VPC you create subnetworks and firewalls to isolate different environments. Besides that, there is hybrid connectivity: if you have a data center with applications running inside it and you want to connect it with Google Cloud Platform, that is where VPN or Interconnect comes in, and we are going to see that along with VPC. In this section we are also going to see the DNS and CDN services; these are core services too, but you can think of them as optional or additional services which you use based on your requirements. In a nutshell, we are going to see Cloud VPC, the network inside the cloud; Interconnect, meaning connecting your data center with Google Cloud Platform; load balancing, which we have already seen; CDN; and DNS.

Cloud VPC is managed networking functionality for Google Cloud Platform resources. Using Cloud VPC you can create a private network, which means you can provision your GCP resources, connect them with each other inside the VPC, and isolate them from one another by creating different VPCs or subnets. You can also define fine-grained networking policies between Google Cloud Platform, on-premises, and other cloud infrastructure. You can think of VPC as a comprehensive set of Google-managed capabilities, including granular IP range selection, routes, firewalls, and Cloud Router, just as you would configure them in your own premises or data center. Let's look at some VPC features. You can build a private, global, data-center-like network without managing hardware: no switches or routers to buy or build yourself. You can have subnets, firewalls, routes, and peering inside the VPC. You can monitor network connections using flow logs. It is global, shareable, and expandable by design: you do not have to provision any devices, and it is not a physical appliance that becomes a bottleneck when there is a problem or heavy bandwidth utilization. It is managed functionality, everything is managed by Google Cloud Platform, and it scales based on the requirement, so you do not have to worry about scaling; it is software-defined, not hardware. To reiterate: you can provision cloud resources, connect them with each other, and isolate them; you can create subnets and even isolate different environments like prod, dev, and test; that is what VPC provides. There are different types of VPCs: the default, which is created automatically when you have a project; auto mode VPC; and custom mode VPC; and we will get into the details of those. Some features of VPC in Google Cloud Platform: it is global in scope, not specific to a region, zone, or data center; it supports multi-tenancy; you can have private communication; you can define subnetworks, firewalls, and routes; you can have Cloud Router for BGP links; you can share the VPC; and access control is managed via IAM. In a nutshell, VPC is a comprehensive set of Google-managed networking capabilities, including granular IP address range selection, routes, and firewalls; it is a virtual private network in the cloud, and it supports Cloud Router (we will see what Cloud Router means in the next slides). A VPC is a virtual version of the traditional physical network that exists within and between your physical data centers: it is software-defined, not physical devices in your own data center. Each GCP project contains one or more VPC networks. A VPC is global in nature and allows VM instances and other resources in different regions to communicate with each other via internal private IP addresses; the concept of a VPC itself is not tied to a particular region or zone. A VPC does not have an IP range of its own; IP ranges are created within the VPC, on its subnetworks.
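The VPC modes and the "IP ranges live on subnets" idea map directly onto gcloud flags. A sketch of creating a custom mode VPC with one subnet and a firewall rule; every name and the CIDR range are placeholders:

```shell
# Custom mode: no subnets are created for you automatically.
gcloud compute networks create demo-vpc --subnet-mode=custom

# Add a subnet with an RFC 1918 range in one region; the range belongs
# to the subnet, not to the VPC itself.
gcloud compute networks subnets create demo-subnet-dev \
    --network=demo-vpc --region=us-east1 --range=10.10.0.0/20

# Firewall rules are attached to the VPC (here: allow SSH from anywhere;
# in practice you would restrict the source range).
gcloud compute firewall-rules create demo-allow-ssh \
    --network=demo-vpc --allow=tcp:22 --source-ranges=0.0.0.0/0
```

Swapping `--subnet-mode=custom` for `--subnet-mode=auto` gives the auto mode behavior described next, where one subnet per region is created for you.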
You can create different subnetworks as part of that network and attach IP ranges to the subnetworks. So if you look at a VPC, it is a firewalled network with no IP ranges of its own; it contains subnetworks, and the subnetworks have the IP ranges. You can have more than one subnetwork in a region for a given VPC. As an example, if you are deploying your services into one region, say us-east1, you can create multiple environments like dev, stage, performance testing or UAT, and production, each as a different subnetwork; that's the power of subnetworks.

Some thoughts on projects and VPCs. All the objects inside a VPC are associated with a project, because the VPC itself is contained within a project. Each project starts with a default VPC, which is created for you automatically; you don't have to create it. You can go up to five networks per project; that is the current quota we have for creating VPCs.

Let's go ahead and get into the details of the different types of VPCs that are available and their features. The first one is the default VPC. This VPC is created and already there when you have a project. Subnets are created by default, one per region; an internet gateway is created; and firewall rules are created so that all the resources in the subnets can communicate with each other. Let's browse this in the console. If you go to Networking and click on VPC network, this is the default VPC that is created for you. It contains subnetworks created per region, routes are defined for all those subnetworks so they can talk to each other, and firewall rules are also created.
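As a quick local sketch of that environments-as-subnets layout (plain Python with the stdlib `ipaddress` module; the region name, environment names, and 10.10.0.0/16 block are just hypothetical examples, not GCP defaults), you can carve one RFC 1918 block into per-environment subnet ranges and confirm they don't overlap:

```python
import ipaddress
from itertools import combinations

# Hypothetical address plan for one region (us-east1) of a VPC:
# the VPC itself has no range; each environment gets its own subnet.
region_block = ipaddress.ip_network("10.10.0.0/16")
envs = ["dev", "stage", "uat", "prod"]

# Take the first four /20 subnets out of the /16 block
plan = dict(zip(envs, region_block.subnets(new_prefix=20)))
for env, net in plan.items():
    print(f"us-east1/{env}: {net} ({net.num_addresses} addresses)")

# Subnet ranges inside a VPC must not overlap
assert not any(a.overlaps(b) for a, b in combinations(plan.values(), 2))
```

In a real project you would hand these ranges to subnet creation (console or CLI); the point here is only the address arithmetic behind the layout.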
For ingress traffic, firewall rules are created as well, and you can attach these firewall rules to a particular instance or apply them to whole subnetworks. Let's go back. So that's the default VPC, created by default for you by Google Cloud Platform, per project.

The second one is the auto mode VPC. If I go ahead and say Create VPC network, I can choose either custom or auto mode subnet creation. If I say auto mode, subnetworks and routes are created by default for you and you do not need to worry about them; if I click custom, that's where I need to add the subnetworks myself, wherever I want them. An auto mode VPC has a single subnet per region created by default, with a fixed /20 CIDR range per subnet which you can expand up to /16. The default network is an auto mode network, and the IP ranges are predefined, so you are not creating any IP ranges yourself. If you look at it, the subnet ranges are already created, firewall rules are already created, and the routes are created for you too. You do not need to create anything in the case of an auto mode VPC, and if you want to disable something, you can just go ahead and disable it.

Custom mode VPC: no default subnets are created. Let me just go ahead and make a custom VPC. I choose Custom, and I can proceed without any subnet configuration. I can set dynamic routing to regional or global (we will get into the details of that later, so I'm leaving it as is for now), and you can set a DNS policy as well; then just go ahead and create it. So the custom mode VPC is getting created, and it has no default subnetworks. Manually created subnetworks can use any valid RFC 1918 IP range, and the ranges do not have to be contiguous between the subnetworks, because you are defining your own subnetwork ranges; you have full control of the IP ranges. Going back here, if you look, this custom mode VPC does not have any subnetworks, no firewall rules, and no dynamic routing configured. I can just go ahead and add a subnetwork, call it us-west-01, pick the us-west1 region, and define the CIDR range (you can go up to a /16). You can turn private Google access on or off, and you can enable flow logs. Flow logs are very important when you want to audit who is accessing what data and you want to monitor the network; that's where flow logs are useful, and we will get into them later. I'll just go ahead and add this now, so it is creating my subnetwork inside the custom VPC. It's not created yet, I think... OK, it's ready now. So I have subnetworks created, but if you look at the firewall rules, none are created; for a custom network no firewall rules are created by default. If you look at the routes, there are two routes created: one default gateway to the internet, and one for the subnetwork that was created. So routes are created for the subnetworks you have defined, but not for subnetworks you have not created or for other regions or zones you might want to connect to. That's the custom VPC.

As a summary of VPC: per project you can create five networks by default, which is the quota you get from Google Cloud Platform. You can create, say, prod, dev, and stage networks, or create networks per department, and you can create different subnetworks to isolate different kinds of environments like dev, stage, and prod. Those networks are global in nature, not tied to one particular region.
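The CIDR arithmetic behind auto mode's /20-to-/16 expansion and custom mode's RFC 1918 requirement can be checked locally with Python's stdlib `ipaddress` module (a standalone illustration; 10.128.0.0/20 is just an example range, and this does not call any GCP API):

```python
import ipaddress

# Auto mode: each regional subnet starts as a fixed /20 (4,096 addresses)
subnet = ipaddress.ip_network("10.128.0.0/20")
print(subnet.num_addresses)        # 4096

# Expanding the subnet up to /16 grows it to 65,536 addresses
expanded = subnet.supernet(new_prefix=16)
print(expanded, expanded.num_addresses)   # 10.128.0.0/16 65536

# Custom mode: manually created subnets must use valid RFC 1918 space
for cidr in ("10.0.0.0/20", "172.16.0.0/20", "192.168.0.0/24", "8.8.8.0/24"):
    print(cidr, ipaddress.ip_network(cidr).is_private)   # last one is False
```

Note that a real subnet expansion is one-way (you can grow a range, not shrink it); the math above only shows what the prefix change means in address counts.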
Resources in any region within those networks can talk to each other over internal network communication; they do not have to use external IP addresses to connect to each other. But if you are creating resources in different networks altogether, that communication is external traffic, and internet egress charges apply to it; you need to keep this in mind. So if you create resources in different regions within the same network, the traffic stays on Google's internal network and is treated as internal to the network (there are some cross-region charges, but they are minimal). But if you are connecting two resources in different networks altogether, even in the same region, it is treated as internet egress traffic.

Cloud networking hybrid connectivity: we are going to get into the details of the connection between your own premises and your VPC. We already saw with VPC how you can create your private networking in a public cloud environment; now we are going to see how you can connect that private network with your own data center or office, if you have one. Hybrid connectivity, the connection from your own office to your cloud private network, is a part of the networking service, and networking is one of the core services available in a public cloud environment. We are going to connect your VPC with your own premises using VPN or interconnections, and we are going to get into the details of that right now. To connect your own premises with GCP you have multiple options, and which one you use depends on the use case you have; the summary that follows you can think of as taken straight from Google Cloud Platform's own material.
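Looping back to that internal-versus-egress billing rule for a moment, here is my own rough simplification of it in plain Python (a study aid only, ignoring details like VPC peering and Shared VPC; it is not an official pricing API):

```python
def classify_traffic(src_network, src_region, dst_network, dst_region):
    """Simplified model of the traffic classes described above:
    same VPC network -> internal (cross-region adds a small charge);
    different networks -> treated as internet egress."""
    if src_network == dst_network:
        return "internal" if src_region == dst_region else "internal-cross-region"
    return "internet-egress"

print(classify_traffic("prod-vpc", "us-east1", "prod-vpc", "us-east1"))  # internal
print(classify_traffic("prod-vpc", "us-east1", "prod-vpc", "eu-west1"))  # internal-cross-region
print(classify_traffic("prod-vpc", "us-east1", "dev-vpc", "us-east1"))   # internet-egress
```

The takeaway encoded here: the network boundary, not the region boundary, is what flips traffic from internal to egress.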
The first option is Google Cloud Interconnect. What is it? It is for when you want a considerable amount of data exchange between Google Cloud Platform and your own office or your own data center, not your end customers; that's where you use Cloud Interconnect. We are going to get into the details of Interconnect shortly, but I wanted to give you the high-level overview first. The other option you have, if you do not want a dedicated connection, is Cloud VPN, and you can think of this as a traditional IPsec VPN which you use over the public internet. No physical connection exists when you are using Cloud VPN; the traffic just goes via the public internet. Why use it? Because you do not actually have the use case for Cloud Interconnect, for a dedicated pipe from your GCP VPC to your own premises, and that's where you will start with Cloud VPN. It is low cost compared to Interconnect, and it has its own benefits and disadvantages as well, but that's the second service. The third one is peering, and this is not purely a part of Google Cloud Platform. You can think of peering as what is required when you want to connect to Google itself, like G Suite applications and all the other Google services, and you want to take advantage of reduced egress fees; that's where you use peering. You have multiple options there, direct peering and carrier peering, and we are going to see the differences between all of those shortly.

So, the first one: Cloud Interconnect. Cloud Interconnect lets you connect your own premises, your data center you can think of, with Google Cloud Platform. The additional advantage you get out of Cloud Interconnect is that you have a dedicated connection to GCP directly, over which you can exchange data and exchange network configurations.
If you have a subnetwork kind of implementation in your own data center, and your virtual machines or physical servers want to connect to GCP resources, or GCP wants to connect to your own servers, that's where Cloud Interconnect helps: it gives you a dedicated interconnection between your premises and Google Cloud Platform. The high-level features: it is a low-latency, highly available service that you use to connect your on-premises environment to Google Cloud Platform. You have dedicated and partner connection options: you may be in a location where you can connect to Google directly at a PoP (point of presence) location, but some companies do not have a presence near a Google PoP location, and that's where you go with the partner connection option. It supports RFC 1918, and what that means is you can have network exchanges between your on-premises environment and the cloud, so all of these resources can talk to each other; you can use private IP ranges the same way we have seen in VPC, with network connectivity between your on-premises office and your GCP VPC. It is a direct, private pipe, you can think of, from your data center to Google's premises.

Another option that we have is direct or carrier peering, and why do you use it? With peering you are not exchanging any network information with Google Cloud Platform; everything you need is, you can think of, inside Google itself, and you just want to take down your egress fees and have a high-speed connection, and that's where you use direct peering. The features: you can have a direct connection with Google, or go through a partner if you are not in a location where a direct connection is available; without a dedicated link you
can have a VPN configured, or go directly over the public internet, and you reduce the cost of egress fees; that's where you use direct peering.

The third option that we have is Cloud VPN, and this you can think of as a traditional VPN. Why do you use it? It comes with an SLA of 99.9% service availability. You can have site-to-site connections, and you can create multiple tunnels to your cloud or GCP environment from your own data center or offices. It supports Cloud Router, and we are going to get into the details of what that means, but in short: if you want to exchange network information from your own premises with GCP, so that GCP resources can discover your resources on premises, Cloud Router is what you use. And you get encrypted, secure traffic using Cloud VPN; that's why you use it.

So in total, what we are going to look at is Cloud Interconnect, direct and carrier peering, Cloud VPN, and Cloud Router. Cloud Router is just an additional service, you can think of, which you use to announce network changes if there are any, and we will get into the details.

This is a high-level decision tree, and based on it you choose one particular service or the other to connect your on-premises office or data center to Google Cloud Platform. At the very top: do you need direct access to your private computing resources on GCP? If the answer is no, you go to the peering options. That is the case where you are just accessing Google from on premises, and your cloud resources are not accessing your on-premises data; that is where it differs. Do you need to connect to G Suite? Yes. Can you meet the peering requirements? If you are present at a Google location you can do direct peering; if you are not, there are partners out there who can get you connected to Google with their own connections, and that's where you use carrier peering. The other side of the tree is where you want to exchange data between your on-premises network and Google Cloud Platform over interconnections, and that's where you take this route. Do you need to extend your data center to the cloud? Yes, that's the requirement. Do you encrypt sensitive information at the application level? If you do, you can just go ahead with Interconnect, because Interconnect does not provide encryption for the pipe. If you need encryption because your application does not do it, you should go for Cloud VPN. So if your application is encrypting the data, or you do not need any encryption for your data, you can take the Interconnect route. Can you meet Google at one of its PoP locations? If you are at a PoP location, you can choose the direct route; otherwise you just go talk to an interconnect partner and get connected to Google through them. Is your need 10 Gbps or more? That is where you have dedicated connections; if it is less than 10 Gbps, Google recommends going with a partner, because it is cheaper. Not going into too much detail here, but the high-level thought: if you need an SLA, you will choose Dedicated Interconnect or VPN; if you are not bothered about an SLA, you can choose direct peering or carrier peering. Also, more importantly: if you want to exchange network routes between your own data center and Google, or you want direct connectivity between the two, you will choose Dedicated Interconnect or Cloud VPN. If you do not need your on-premises resources talking to Google Cloud Platform resources, you choose direct peering or carrier peering.
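To make the branching concrete, here is the decision tree above encoded as a small Python function. This is my own paraphrase of the slide with deliberately simplified inputs (real sizing and PoP availability involve more factors), so treat it as a study aid, not official guidance:

```python
def connectivity_choice(private_access, app_encrypts, at_google_pop, gbps=0):
    """Pick a GCP hybrid-connectivity option, following the
    lecture's decision tree (simplified)."""
    if not private_access:
        # Only reaching Google services (e.g. G Suite), reducing egress fees
        return "direct peering" if at_google_pop else "carrier peering"
    if not app_encrypts:
        # Interconnect does not encrypt the pipe; Cloud VPN does
        return "cloud vpn"
    if at_google_pop and gbps >= 10:
        return "dedicated interconnect"
    return "partner interconnect"

print(connectivity_choice(private_access=False, app_encrypts=True, at_google_pop=True))
# direct peering
print(connectivity_choice(private_access=True, app_encrypts=False, at_google_pop=True, gbps=10))
# cloud vpn
print(connectivity_choice(private_access=True, app_encrypts=True, at_google_pop=True, gbps=10))
# dedicated interconnect
```

Reading the branches top to bottom mirrors the slide: peering when you only reach Google services, VPN when you need the pipe encrypted for you, dedicated versus partner interconnect based on PoP presence and bandwidth.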
Carrier peering or direct peering is also the choice if you are just accessing G Suite applications or the collaboration platform and you want to reduce the egress fee; you pick between them based on where you are located. That's it in a nutshell; we are going to get into the details of Cloud VPN in the next lecture. Thank you. You can actually go and explore the hybrid connectivity options yourself: go to the console, go inside Networking, and you have Hybrid Connectivity. That's where you see three options. VPN and Interconnect are the connection options, and Cloud Router is just for exposing, or announcing, your network changes. If I go here, you will not be able to see a peering option, because peering is arranged outside of the Google Cloud Platform console. You can go here and create a VPN connection if you want one, you can set up Interconnect connections if you have a data center, and you can create a Cloud Router out of this as well. We are going to see a demo for VPN. I will not be able to show a demo for Interconnect, because I do not have a data center to connect to, but I can show you the demo from one region to another region on Google Cloud Platform itself, and then we'll add a Cloud Router as well to announce the network changes; we are going to see that in the demo too. So this is all about hybrid connectivity, guys. If you have any questions, you can wait for the theory on the individual services, or you can ask me in the questions. Thank you.

In this particular demo we are going to see the bastion host: how you can actually go ahead and set up a bastion host. What I am going to do is use one of the virtual machines which is already there from an existing lab, and then I am going to delete the external IP address
for that particular virtual machine; then I am going to create another virtual machine in the same subnet and connect through it. I just wanted to show you how you can delete the external IP and then still access the machine. So I have these virtual machine instances; let me go ahead and delete the external IP address for this one, pvm1. Go to Edit; I also had to delete the network tag which was allowing all the traffic. I have deleted the external IP address, and now I save it. It is saved, and you can see that for pvm1 there is no external IP address anymore, so you can't actually connect to it from outside now. Without creating any other machine, I can just treat pvm2 as a bastion host: I go there and SSH to pvm1's internal IP address. It was that simple. You can do SSH to the hostname as well, pvm1; see, you are already in. So this is how you can set up a bastion host. I hope you understood this one; it was a very quick demo, and I just tried to reuse whatever I had created in our earlier demo. This actually concludes the virtual networking configuration. If you have any question on this, just let me know; otherwise you can move to the next section. In the next section I am going to cover Interconnect: if you have, say, your own infrastructure or your own data center and you want to connect it to Google Cloud Platform for additional services, how you can actually do that interconnect. In addition to that: VPN connections, Cloud DNS, and Cloud CDN. Thank you.

Stackdriver. Using Stackdriver you can do cloud monitoring, logging, trace, debug, and error reporting. We already saw the core services like compute, database, storage, and networking, but all of these services need provisioning, monitoring, auto scaling, load balancing, and so on. For all of that, we need some
infrastructure services that manage those resources in the cloud, and that's where resource monitoring comes into play, as the Stackdriver offering. Stackdriver Monitoring covers platform, system, and application metrics; you can do health checks and uptime checks, create multiple dashboards from which you monitor, and even configure alerts. In Stackdriver Logging you have platform, system, and application logs; you can do log search, view, and filtering, and build log-based metrics. You can debug in production using Stackdriver Debugger, taking conditional snapshots, and it has got IDE integration as well. Using Stackdriver Trace you can do latency monitoring per URL and latency sampling. And in Error Reporting you can do error notifications and error dashboards.

So how is Stackdriver used? Stackdriver is actually a multi-platform cloud monitoring service. You can monitor GCP projects, in terms of monitoring, debugging, error reporting, logging, and trace, but you can also do that with AWS projects; Stackdriver works with GCP as well as AWS. Stackdriver needs a host project to hold the account, and you enable a billing account around that host project. This host project lives in Google Cloud Platform, so even if all the monitored services are in AWS, you cannot use an AWS account for it; you need a GCP host project account to have this monitoring enabled. Stackdriver is very much integrated with all the services out of GCP. So what are the benefits of having that pre-integration? You can monitor multiple clouds, AWS and GCP, and those connectors are already available in Stackdriver; you can identify trends and prevent issues; you can reduce monitoring overhead; and you can improve the signal-to-noise ratio, which means you surface the real problems over the noise, so that you can fix a problem very quickly by catching it early. Let's go ahead and get into Stackdriver Monitoring.

Hello, welcome to this section on the big data services from Google Cloud Platform. The big data platform services are a USP, a unique selling point, for Google Cloud Platform, and among the oldest services from Google. Big data platform solutions help customers process huge amounts of data without managing the underlying infrastructure. For enterprises or customers, it really matters how easily and efficiently they can ingest data, whether it is stream, batch, or existing data; process the data; store the data; do data exploration using visualization; and report on the data. Google's big data solutions provide different services to satisfy all of these or similar requirements. This is Naji; I will take you through an overview of Google's big data solutions. The cloud certification does not cover the in-depth aspects of the big data solutions, only what is necessary, as it is one of the critical services from GCP. So let's go ahead and get started. Big data solutions as PaaS: Google offers its big data solutions as a platform service, and the different big data services are Google BigQuery, which is the EDW, or enterprise data warehouse, in GCP; Google Cloud Dataflow, for stream (and batch) processing; Google Cloud Dataproc, which is Hadoop and Spark clusters; and Google Cloud Datalab, a data exploration tool. Some other services which can be treated as consolidated into the big data offering are Google Cloud Pub/Sub and Google Genomics. Google BigQuery is a fully managed enterprise data warehouse for large-scale data analytics: a petabyte-scale, low-cost enterprise data warehouse for analytics.
BigQuery is serverless: there is no infrastructure to manage, and you don't need a database administrator, so you focus on analyzing the data to find meaningful insights using familiar SQL. Some of the product features of BigQuery: Flexible data ingestion, so you can load your data from Google Cloud Storage or Google Cloud Datastore, or stream it into BigQuery at hundreds of thousands of rows per second, to enable real-time analytics on your data. It's globally available: you have the option to store your BigQuery data in European locations while keeping the benefits of a fully managed service, now with the option of geographic data control, without low-level cluster maintenance headaches. Security and permissions: you have full control over who has access to the data stored in Google BigQuery, and sharing datasets does not impact your cost or performance. Cost controls: BigQuery provides cost control mechanisms that enable you to cap your daily cost at an amount that you choose. Highly available: transparent data replication across multiple geographies means your data is available and durable even in case of extreme failure modes. It's fully managed, and in addition to SQL queries, you can easily read and write data in BigQuery via Cloud Dataflow, Spark, and Hadoop. Connect with Google products: you can automatically export your data from Google Analytics Premium into BigQuery, visualize it using Data Studio, and analyze data stored in Google Cloud Storage. Automatic data transfer: the BigQuery Data Transfer Service automatically transfers your data from partner SaaS applications to Google BigQuery on a scheduled, managed basis. That is all for BigQuery; in a nutshell, what you need to understand is that BigQuery is the enterprise data warehouse.

Google Cloud Dataflow, or Apache Beam: it is stream and batch data processing. Google Cloud Dataflow offers a unified programming model and a managed service for executing a wide range of data processing patterns, including stream analytics, ETL, and batch computing.
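To give a feel for that "unified model" idea, here is a toy sketch in plain Python (no Apache Beam involved; just an illustration that one transform can serve both a finite batch and a record-at-a-time stream, which is the property Beam's model generalizes):

```python
def word_count(records):
    """One processing function, usable for batch and 'stream' input alike."""
    counts = {}
    for line in records:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

# Batch input: a finite, already-materialized dataset
batch = ["to be or not to be"]
print(word_count(batch))          # {'to': 2, 'be': 2, 'or': 1, 'not': 1}

# 'Stream' input: a generator yielding records as they arrive
def stream():
    yield "to be"
    yield "or not to be"

print(word_count(stream()))       # same counts, same code path
```

Real streaming adds concerns this toy ignores (windowing, late data, exactly-once state); the point is only that the processing logic itself does not fork into a batch version and a stream version.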
Cloud Dataflow frees you from operational tasks like capacity planning, resource management, and performance optimization. Some of the product features of Dataflow: Automatic resource management, so Cloud Dataflow automates the provisioning and management of processing resources to minimize latency and maximize utilization; no more spinning up instances by hand or reserving them. Dynamic work rebalancing: automated and optimized work partitioning dynamically rebalances lagging work, and there is no need to chase down hot keys or pre-process your input data. It's reliable, with consistent, exactly-once processing: built-in support for fault-tolerant execution that is consistent and correct regardless of data size, cluster size, processing pattern, or pipeline complexity. Horizontal autoscaling of worker resources for optimum throughput results in a better overall price-to-performance ratio. It supports a unified programming model: the Apache Beam SDK offers equally rich MapReduce-like operations, powerful data windowing, and fine-grained correctness control for streaming and batch data alike. And it's community-driven innovation: developers wishing to extend the Cloud Dataflow programming model can fork and/or contribute to Apache Beam; you can think of it as Google having open sourced this data processing model under the name Apache Beam. In a nutshell, what you can see is that data processing can take data from a stream or a batch of any kind; it runs it through the data processing logic, which lives in Dataflow, and then it can pass the results on to BigQuery, Cloud Machine Learning, or Bigtable, from which you can draw your machine learning analytics or see the insights from the data.

Cloud Dataproc: it's a faster, easier, more cost-effective way to run Spark and Hadoop, so you can actually think of Dataproc as your
Hadoop cluster or Spark cluster. Use Cloud Dataproc as a managed Spark or Hadoop service to easily process big data sets using the powerful open tools in the Apache big data ecosystem. Cloud Dataproc integrates with storage, compute, and monitoring services across Cloud Platform products, giving you a powerful and complete data processing platform. Some of the features of Dataproc: Automatic cluster management, so managing, deploying, logging, and monitoring are handled for you, letting you focus on your data and not on your cluster; your cluster is stable, scalable, and speedy, and you don't have to manage it. Resizable clusters: clusters can be created and scaled quickly, with a variety of virtual machine types, disk sizes, numbers of nodes, and networking options. It is integrated with other GCP services: you can actually plug it into Cloud Storage, BigQuery, Bigtable, Stackdriver Logging, and Stackdriver Monitoring, which gives you a complete and robust data platform. Image versioning allows you to switch between different versions of Apache Spark, Apache Hadoop, and other tools. It's highly available: you can run your cluster with multiple master nodes and set jobs to restart on failure, to ensure that your clusters and jobs are highly available. Developer tools: there are multiple ways to manage your cluster, including an easy-to-use web UI, the Google Cloud SDK, RESTful APIs, and SSH access. Initialization actions: you can run initialization actions to install or customize the settings and libraries you need when your cluster is created. Automatic or manual configuration: Cloud Dataproc automatically configures the hardware and software on your cluster for you, while also allowing you to control the configuration manually. And you have flexible virtual machines: clusters can use custom virtual machine types or preemptible virtual machines, so that you have the perfect size for your need.

Cloud Datalab. Cloud Datalab is an interactive
notebook-based tool to explore, collaborate on, analyze, and visualize data. It is integrated with BigQuery and Cloud Machine Learning to give you easy access to your key data processing services, so this is a complementary service for your data analysis and exploration. Some of the product features: like the other big data solutions, this is also integrated with other GCP services; Cloud Datalab simplifies data processing with BigQuery, Cloud Machine Learning, Cloud Storage, and Stackdriver Monitoring. Authentication, cloud computation, and source control are taken care of out of the box, so you don't have to worry about them. It has got multi-language support: Cloud Datalab currently supports Python, SQL, and JavaScript (for BigQuery user-defined functions). Notebook format: Cloud Datalab combines code, documentation, results, and visualizations together in an intuitive notebook format. It's pay-per-use pricing, so you only pay for the cloud resources while you use them. Interactive data visualization: use Google Charting for easy visualization. Machine learning: it supports TensorFlow-based deep machine learning models, and scaled training and prediction via specialized libraries for the Cloud Machine Learning Engine. IPython support: Datalab is based on Jupyter, formerly IPython, so you can use a large number of existing packages for statistics, machine learning, and so on. And it's open source.

Cloud Pub/Sub. Cloud Pub/Sub ingests event streams from anywhere, at any scale, for reliable real-time stream analytics. Pub/Sub is a serverless, large-scale, reliable real-time message processing service that allows you to send and receive messages between independent applications. You can leverage Cloud Pub/Sub's flexibility to decouple systems and components hosted on Cloud Platform or elsewhere on the internet, building on the same technology Google uses internally. Cloud Pub/Sub is designed to provide at-least-once delivery at low latency, with on-demand scaling to tens of millions of messages per second. Some of the product features: At-least-once delivery: synchronous cross-zone message replication and per-message receipt tracking ensure at-least-once delivery at any scale. Exactly-once processing: Cloud Dataflow supports reliable, expressive, exactly-once processing of Cloud Pub/Sub streams. No provisioning: Cloud Pub/Sub does not have shards or partitions; just set your quota, publish, and consume. It integrates with other GCP services, and its open APIs and client libraries in seven languages support cross-cloud and hybrid deployments. Global by default: you can publish from anywhere in the world and consume from anywhere, with consistent latency and no replication necessary. Compliance and security: Cloud Pub/Sub is a HIPAA-compliant service offering fine-grained access control and end-to-end encryption. You can actually visualize it in this particular picture: Cloud Pub/Sub is an ingestion engine taking data from any application, device, or data set, and it can pass that information on, ingesting the data into Cloud Dataflow. Likewise, there are many more use cases for Cloud Pub/Sub; I can probably create another video to detail Pub/Sub, because Pub/Sub is a much more important service, not only for big data but for other applications in the GCP domain.

That's it for this particular lecture. So you can think of big data as managed services: in the majority of cases you don't have to provision the resources, you just use them, and based on how much you use, you will be charged; it is pay as you go, whether on the amount or volume of data ingested or the data processed by those applications. The Cloud Architect exam will ask overview questions only on big data; you need to understand what is what to pass the exam. I will create a detailed course on big data, with demos, for your deeper understanding.
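Before moving on, the at-least-once delivery described for Pub/Sub above can be mimicked with a tiny in-memory model (plain Python, purely to illustrate the ack/redelivery idea; the real service is a replicated distributed system with ack deadlines, not a list):

```python
class MiniPubSub:
    """Toy topic/subscription with at-least-once redelivery semantics."""
    def __init__(self):
        self.unacked = []          # messages not yet acknowledged

    def publish(self, msg):
        self.unacked.append(msg)

    def pull(self):
        # Delivery does NOT remove the message; only an ack does,
        # so an un-acked message will be delivered again.
        return list(self.unacked)

    def ack(self, msg):
        self.unacked.remove(msg)

bus = MiniPubSub()
bus.publish("order-created")
first = bus.pull()      # subscriber receives the message...
second = bus.pull()     # ...and receives it AGAIN, because it never acked
print(first, second)    # ['order-created'] ['order-created']
bus.ack("order-created")
print(bus.pull())       # [] : acked messages are not redelivered
```

This is also why "exactly-once" needs an extra layer such as Dataflow on top: with at-least-once delivery, consumers must tolerate, or deduplicate, repeated messages.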
That's it for the big data overview section. If you have any questions, let me know; otherwise you can move to the next section. Thank you.

Hello, welcome to the section on artificial intelligence and machine learning services from Google Cloud Platform. The artificial intelligence platform service is another USP (unique selling point) for Google Cloud Platform besides big data, and it is a successful service area where Google has almost no competition. AI and machine learning platform solutions help customers apply AI to their own software and data without managing the underlying software or infrastructure. For enterprises and customers, what really matters is how easily and efficiently they can use machine learning APIs: speech recognition, image analysis, video analysis, text analysis like sentiment analysis, or language translation, which we know from translate.google.com. Machine learning, along with other services, provides solutions to satisfy all of these and similar requirements. This is Naji; I will take you through an overview of Google's machine learning solutions. The Google Cloud certification does not cover the in-depth aspects of the machine learning solutions, only what is necessary, like big data, as it is one of the critical services from GCP. So if you're ready, let's go ahead and get started.

AI and machine learning solutions are platform or software as a service: Google offers artificial intelligence and machine learning solutions as platform services. The different services under AI on GCP are Cloud Machine Learning (which is TensorFlow-based), the Cloud Video Intelligence API, the Vision API, the Cloud Speech API, the Cloud Natural Language API, the Cloud Translation API, and Cloud Job Discovery.

So let's understand what Cloud Machine Learning is. It is fully managed machine learning on any data, of any size; it is a TensorFlow implementation of machine learning. The Google Cloud Machine Learning Engine makes it easy for you to build sophisticated, large-scale machine learning models that cover a broad set of scenarios, from building sophisticated regression models to image classification.
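Cloud Machine Learning Engine itself trains TensorFlow models, but the core idea of "training a regression model" can be illustrated without any GCP or TensorFlow dependency. Here is a minimal, self-contained gradient-descent sketch of the kind of model you might prototype locally before scaling it out; the data, learning rate, and epoch count are made up for illustration.

```python
def fit_line(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by batch gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Noise-free sample data generated from y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # converges to approximately 2.0 and 1.0
```

The managed service's value is exactly that you keep this kind of model code and let GCP handle the provisioning, distribution across nodes, and hyperparameter tuning (HyperTune) instead of doing it yourself.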
Some of the product features: it is integrated with other Google services and designed to work with Cloud Dataflow for feature processing, Cloud Storage for data storage, and Cloud Datalab for model creation. You can discover and share samples: machine learning samples are tailored for industry use. HyperTune: build better-performing models faster by automatically tuning your hyperparameters with HyperTune, instead of spending many hours manually discovering values that will work for your model. It is a managed service, so you can focus on model development and production without worrying about the infrastructure; the managed service automates all the resource provisioning and monitoring. It is scalable: build your model with any data, of any size and type, on a managed distributed training infrastructure that supports CPUs and GPUs, and accelerate your model development by training across many nodes or running multiple experiments in parallel. Notebook and developer experience: you can create and analyze models using the familiar IPython notebook development experience, with integration into Cloud Datalab. Portable models: you can use the open-source TensorFlow SDK to train your models locally on sample data and use Google Cloud Platform for training at scale; models trained using the Google Machine Learning Engine can be downloaded for local execution or mobile integration. So that is Cloud Machine Learning; you can think of it as a TensorFlow implementation with additional SDKs and APIs for ease of use.

Cloud Video Intelligence API: you can search and discover media content, which here means video. The Cloud Video Intelligence API makes videos searchable and discoverable by extracting metadata with an easy-to-use REST API. You can now search every moment of every video file in your catalog and find every occurrence as well as its significance.
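As a sketch of what a call to the Video Intelligence REST endpoint looks like, the helper below builds the JSON body for `videos:annotate`; the bucket and file name are hypothetical, and the helper function itself is just illustrative. The feature names match the annotation types discussed here (labels, shot changes, explicit content).

```python
import json

def build_annotate_request(input_uri, features):
    """Body for POST https://videointelligence.googleapis.com/v1/videos:annotate

    `input_uri` is a Cloud Storage URI; `features` selects one or more
    annotation types to run on the video.
    """
    allowed = {"LABEL_DETECTION", "SHOT_CHANGE_DETECTION",
               "EXPLICIT_CONTENT_DETECTION"}
    unknown = set(features) - allowed
    if unknown:
        raise ValueError(f"unsupported features: {unknown}")
    return {"inputUri": input_uri, "features": list(features)}

# Hypothetical video stored in Cloud Storage.
req = build_annotate_request("gs://my-bucket/demo.mp4",
                             ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"])
print(json.dumps(req, indent=2))
```

The response comes back as a long-running operation whose result contains the label and shot annotations, which your program can then parse.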
It quickly annotates videos stored in Cloud Storage, and it helps you identify the key entities (nouns) in your video and when they occur within the video. Some of the product features: it gives you insight from your videos, as the API allows you to extract actionable insight from video files without requiring any machine learning or computer vision knowledge; the Cloud Video Intelligence API improves over time as new concepts are introduced and accuracy is improved. Label detection: it detects entities within the video, such as dog, flower, cat, and so on. Shot change detection: it can detect changes in scenes. Regionalization: you can specify the region where the processing will take place. It is integrated with the rest of the GCP platform: it can be accessed via the REST API or client libraries in seven languages to request one or more annotation types per video, and we will see how that actually happens in a small demo. You don't have to manage any infrastructure for this one. As an example, which I took from the website itself, you can give a video to the API and it will give you video labels, shots, shot changes, and shot labels in a REST response, and you can use that response in your programs. Let me pull up the demo video: when you go to Products, Machine Learning, Video Intelligence API, you can select any of the sample videos there. This is going to take time, because ultimately it has to complete analyzing the video; you can go ahead and try this on the website on your own, and you will get an outcome like this one.

Cloud Vision API: the Google Cloud Vision API enables developers to understand the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API. It quickly classifies your image into thousands of predefined categories, detects individual objects and faces within images, and finds and reads printed words contained within the image.
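The Vision API works the same way: you POST a request describing the image and the detections you want. This sketch builds the JSON body for `images:annotate`; the bucket and file name are hypothetical, and the helper is only illustrative.

```python
import json

def build_vision_request(image_uri, feature_types, max_results=10):
    """Body for POST https://vision.googleapis.com/v1/images:annotate

    Each entry in `feature_types` is a detection to run, e.g.
    LABEL_DETECTION, SAFE_SEARCH_DETECTION, WEB_DETECTION, TEXT_DETECTION.
    """
    return {
        "requests": [
            {
                "image": {"source": {"imageUri": image_uri}},
                "features": [
                    {"type": t, "maxResults": max_results}
                    for t in feature_types
                ],
            }
        ]
    }

# Hypothetical skyline photo stored in Cloud Storage.
req = build_vision_request(
    "gs://my-bucket/seattle.jpg",
    ["LABEL_DETECTION", "SAFE_SEARCH_DETECTION", "WEB_DETECTION"],
)
print(json.dumps(req, indent=2))
```

The JSON response mirrors the requested features: label annotations with confidence scores, safe-search likelihoods, web matches, and so on, exactly the kind of output walked through in the demo below.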
You can build metadata for your image catalog, moderate offensive content, or enable new marketing scenarios through image sentiment analysis. It analyzes images uploaded in the request, or it integrates with your image storage on Cloud Storage. Some of the product features of the Vision API: label detection, where you can detect a broad set of categories within an image, ranging from modes of transportation to animals; it can detect explicit content; landmark detection is also possible; optical character recognition, where it can detect and extract text within an image, with support for a broad range of languages along with automatic language identification; face detection, to detect multiple faces within an image along with the associated key facial attributes, like emotional state or wearing headwear; image attributes, where it detects general attributes of the image, such as dominant colors and appropriate crop hints; and web detection, to search the internet for similar images. It is integrated via a REST API.

That's the Vision API. I actually passed it one image from Seattle, and it has given me some of the information here. For labels, it says: city, skyline, metropolitan area, cityscape, metropolis, urban area, daytime. At the same time, it can give you the image properties, and whether the content is adult, spoof, medical, or violent. If I go to the website, let me bring in another image, and let us try to understand more about this particular image. It has given information like: mixed-use urban design neighborhood, 81 percent chance; real estate, 69 percent; condominium, 68 percent; city, 67 percent; shopping mall, 63 percent; plaza; trees; recreation. If I go and click Web, you can very well see that it is the Microsoft Visitor Center, and there are even stronger matches for the Microsoft Redmond campus (building 111, to be very precise) and Microsoft Corporation.
The text information it found here is "eel"; I don't know where that resides in the image. You have the color properties based on these particular crops. Safe search: adult, very unlikely; spoof, very unlikely; medical, very unlikely; violence, very unlikely. And this is the outcome in your JSON: you can see that it is "urban design neighborhood", and likewise you have multiple pieces of information, such as the safe-search values (very unlikely, very unlikely), so you will get all of that from the API as an outcome. That's the Vision API.

Google Speech API: speech-to-text conversion. The Google Cloud Speech API enables you to convert an audio file into text by applying neural network models in an easy-to-use API. The API recognizes over 100 languages and variants, to support a global user base. You can transcribe the text of users dictating into an application's microphone, or enable command-and-control through voice, among many other use cases. Some of the product features: it is powered by machine learning, and the returned results are in real time. It is context-aware: speech recognition can be tailored to your context by providing a separate set of word hints with each API call, which is especially useful for device and app control use cases. It works with apps across any device: the Speech API supports any device that can send a REST call or gRPC request, including phones, PCs, tablets, IoT devices, and whatnot. Automatic speech recognition: ASR, powered by deep learning neural networks, powers your applications, like voice search or speech transcription.
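The context-aware word hints mentioned above are passed as part of the recognition config. This sketch builds the JSON body for the `speech:recognize` REST call; the audio file name and hint phrases are made-up examples, and the encoding and sample rate are illustrative values for a 16-bit PCM WAV file.

```python
import json

def build_recognize_request(audio_uri, language="en-US", phrases=None):
    """Body for POST https://speech.googleapis.com/v1/speech:recognize"""
    config = {
        "encoding": "LINEAR16",      # 16-bit linear PCM
        "sampleRateHertz": 16000,
        "languageCode": language,
    }
    if phrases:
        # Word/phrase hints bias recognition toward expected vocabulary,
        # e.g. for a device command-and-control use case.
        config["speechContexts"] = [{"phrases": phrases}]
    return {"config": config, "audio": {"uri": audio_uri}}

# Hypothetical voice-command clip stored in Cloud Storage.
req = build_recognize_request("gs://my-bucket/command.wav",
                              phrases=["turn on", "thermostat"])
print(json.dumps(req, indent=2))
```

For live microphone input you would use the streaming recognition variant over gRPC instead, which returns interim results while the user is still speaking.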
It has a global vocabulary: it recognizes 110 languages and variants, with an extensive vocabulary. Streaming recognition: it returns recognition results while the user is still speaking. Word hints: speech recognition can be customized to a specific context by providing a set of words and phrases that are likely to be spoken, which is especially useful for adding custom words and names to the vocabulary, and even for voice-control use cases. It supports both real-time and prerecorded audio. Noise robustness: it can handle noisy audio from many environments without requiring additional noise cancellation. Inappropriate content filtering: it filters inappropriate content in the text results for some languages. Integrated API: audio files can be uploaded in the request or integrated with Google Cloud Storage. Here are some examples I tried earlier: I was speaking, and it was converting my speech to text in real time; I selected another language, and that was also converted. Let me go ahead and show you how it works: "Hello, it's me. We are learning Google Cloud Platform." Let me select one of the other languages, which I know, and this is how it works in real time. So that was the Speech API.

Cloud Natural Language API: you can derive insights from your unstructured text. The Google Cloud Natural Language API reveals the structure and meaning of text by offering powerful machine learning models in an easy-to-use REST API. You can use it to extract information about people, places, events, and much more mentioned in text documents, news articles, or blogs. Some of the features: content classification, where you can classify your documents into around 700+ general categories such as news, technology, and entertainment; and relationship graphs, where you can build relationship graphs of entities extracted from news or Wikipedia articles, or by using signals from state-of-the-art syntax analysis.
Syntax analysis: you can extract tokens and sentences, identify parts of speech (POS), and create dependency parse trees for each sentence. Entity recognition: identify entities and label them by type, such as person, organization, location, event, product, or media. Sentiment analysis: you can understand the overall sentiment expressed in a block of text. Multi-language support enables you to easily analyze text in multiple languages, including English, Spanish, Japanese, Chinese, French, German, Italian, Korean, and Portuguese. Integrated REST API: content can be uploaded in the request, or you can store it in Cloud Storage.

Here is one of the examples I put forward, and the output I got. You can see the first entity, "Google", is an organization; that's what it detected, along with the sentiment. The second one is "users"; the third one is the phone, "Android". There is a person name; "Mountain View" is the location; "consumer electronics" is an event; and there are consumer goods, which are the phones, and "keynote". Let us analyze the other content here: it has detected "Native Americans"; history; "war" as an event; "North America"; "George Washington" under people; "colonies", "shelters", and "leaders"; and "colonies" and "other colonies" as locations. These are the pieces of information you will get after analyzing this particular text, which I copied from this page, and that is how the Natural Language API will help you understand more about your text.

Google Cloud Translation API: it is fast, dynamic language translation. The Google Cloud Translation API provides a simple programmatic interface for translating an arbitrary string into any supported language.
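The entity analysis demo above can be driven through the Natural Language REST endpoint `documents:analyzeEntities`. This sketch builds the request body; the sample sentence is made up, and the same document shape also works for `documents:analyzeSentiment`.

```python
import json

def build_nl_request(text, encoding="UTF8"):
    """Body for POST https://language.googleapis.com/v1/documents:analyzeEntities

    The document can carry inline text (as here) or reference content
    stored in Cloud Storage instead.
    """
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": encoding,
    }

# Made-up sentence echoing the kind of text analyzed in the demo.
req = build_nl_request(
    "Google unveiled the new Android phone at a keynote in Mountain View."
)
print(json.dumps(req, indent=2))
```

The response lists each detected entity with its type (organization, location, event, consumer good, and so on) and a salience score, which is exactly the breakdown shown in the demo output.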
The Translation API is highly responsive, so websites and applications can integrate with it for fast, dynamic translation of your source text from the source language to the target language. Some of the product features of the Translation API: you can translate many languages, with somewhere around 100+ languages supported currently, and it can detect the language. It has programmatic access: with its RESTful API, you can use it in your programs for text translation. Continuous updates: behind the scenes, the Translation API is learning from log analysis and human translation examples of existing language pairs, so translations improve and new language pairs come online at no additional cost. The Translation API has an adjustable quota: you can increase it from 2 million characters per day to 50 million characters per day, and even that can be changed. It has affordable and easy pricing. I don't have to give you a complete example here, because many people have used translate.google.com; it is a similar context, just a REST interface to that.

That's it for AI and machine learning. The Cloud Architect exam asks only about the overview part of this particular section, and that's it. If you have any questions on machine learning or AI, please let me know; otherwise, you can move to the next section.