In this lecture I'm going to talk about cross-compiling and cgo. Go can cross-compile to any supported operating system and architecture. Cross-compiling means that I can compile, on my Linux system, a Go binary for a Mac or a Windows system. Architecture means I can do that for AMD64, the Intel/AMD 64-bit architecture, or for ARM64, which is now used by MacBooks and is also common on embedded devices and on servers. You supply the GOOS and GOARCH environment variables during `go build` to compile for another operating system or architecture, and `go tool dist list` shows you the supported combinations. So when you want to cross-compile, you can use that command to see all the operating systems and architectures you can cross-compile for.

When you are not cross-compiling, cgo is enabled; when cross-compiling, it is disabled. cgo allows you to call C code from Go. This is relevant even if you don't use the feature yourself, because standard Go packages like net can use cgo, for example for DNS resolution. cgo links your binary against the C library available on your operating system, and the binary will not work on an operating system with a different C library.

So this is how it works when cgo is enabled: `CGO_ENABLED=1 go build`. This is the default, so you don't really have to supply it. The result is a dynamically linked binary: your Go binary is linked against the system libc. That can be glibc (GNU libc) or musl, which is used by Alpine. The C library is installed on the operating system as an external file; it's not inside your Go binary, it lives in the library directory on a Linux system, and you need those files to run the binary. So if you now want to run your Go binary on another system, that system might not have these files, and your Go program will not work.
It will fail to execute, with an error saying that it cannot find the C library. That's why there is `CGO_ENABLED=0 go build`, which produces a statically linked binary. You are then no longer linking against the C library; instead of those C functions, Go uses pure-Go implementations, for example in the net package. The result is one single binary, and that binary is portable across all systems of the operating system it was built for: if it was built for Linux, it will work on any Linux system. So this just creates one binary that is not dynamically linked. You would still need to cross-compile if you want to run your binary on a Windows or Mac system, so to cover Linux, macOS, and Windows you would have three binaries, all statically linked, so that you don't need any external C libraries.

If you want your Go program to run on just a single machine, you can use cgo and link against the C libraries, because you have those available on your system. Enabling cgo when you're not cross-compiling may even lead to a smaller binary, because the C bindings for the DNS resolver and networking live in libc or glibc instead of in your binary. And it's not only the DNS resolver; other parts of the standard library use cgo too, and you could even use cgo yourself. The C libraries are already bundled with your operating system, so there's no need to include them again in every binary. That's actually how Linux works, and Unix as well: if you execute something like ls or cat or any other Linux command, those are dynamically linked, so that you don't need all these libraries again in every single binary; otherwise every binary would be pretty big, and it would cost a lot of disk capacity to have them statically linked into every single one. Also, if they were statically linked, you would have to recompile everything.
If there's a security bug in one of the libraries and it's dynamically linked, you just update the library, restart your binary, and you're running the fixed version. So there are definitely benefits to enabling cgo. It also leads to faster builds, and that is why cgo is enabled by default when you are not cross-compiling. Disabling cgo, which happens by default when cross-compiling or explicitly with the flag, is necessary when cross-compiling, and it's also necessary when the C library on the destination system is different. For example, if you compile on Ubuntu Linux but want to run on Alpine Linux, you wouldn't think you'd have to do anything special because it's still a Linux binary, but it will not work: Ubuntu uses GNU libc and Alpine uses musl libc.

So let's have a quick look at how cgo and cross-compiling work. I'm on macOS, but I'm going to start a Linux Docker container just so I can show you how the linking works. I run `docker run --rm -it alpine`: --rm makes sure the container is removed after I exit, -it gives me an interactive shell, and the container is Alpine Linux. Now I'm in my container; I already downloaded this image earlier, which is why it's cached instead of being downloaded. I'm going to install Go first with `apk add go` (apk is the package manager of Alpine; you can see I'm running Alpine 3.16). This installs Go along with all its dependencies, and once it completes, the go command is available inside the container. Then I create a test app at /app/main.go and paste in a ListenAndServe application, so just an HTTP test server. `go build main.go` is the standard command, and running `./main` starts the application.
So that works; main is the binary that was created. Now I'm going to use the tool ldd, which shows me how the binary is dynamically linked, and you can see that main is linked against ld-musl-x86_64. If you copied this main to another Linux system that doesn't have musl for x86_64, it would not work; it would say it cannot find this library. What you can do in that case (we are not cross-compiling yet, just creating a static binary) is build with the flag: `CGO_ENABLED=0 go build -o main-nocgo main.go`. Running `ldd main-nocgo` then reports that it is not a dynamic program. So what happened is that it's no longer dynamic; it's a static binary that can be copied to other Linux systems. It's not using cgo anymore, so it doesn't need to be linked against libc.

cgo is also disabled automatically when you cross-compile. So if I build again with `GOOS=darwin GOARCH=amd64 go build`, which targets Intel-based Macs (the operating system is darwin and the architecture is amd64), this cross-compiles. What happens when I execute it here? Linux doesn't know what to do with it, because it's a binary for a different operating system, and ldd also says it's not a valid dynamic program.

So the defaults are fine most of the time. The only use case I know of where you need to set CGO_ENABLED=0 yourself is when you want to compile a binary that stays compatible with a system that has a different C library. Otherwise, if you're building and running on the same system, it's beneficial to keep cgo enabled.
If you want to see a list of everything you can cross-compile for, use the command `go tool dist list`. You can see darwin/amd64, the one I just built, but also darwin/arm64, for example, which you need when building for the ARM64-based MacBooks. You have Linux with a lot of different architectures: amd64, but also 32-bit x86, 32-bit ARM, and 64-bit ARM, so there are many architectures you could build for, and you also have Windows, for example. So this was a brief introduction to cross-compiling and cgo. If you run into any problems cross-compiling, you can always send me a message directly or post in the Q&A, because there are a lot of cases where something could go wrong, but those cases are so specific that I can't really cover them all. What I just explained are the basics that you should know and can start from when cross-compiling for different operating systems and architectures.

In this demo I want to show you how to package your Go program in a Docker image. Once it is in a Docker image, you can launch it easily on any public cloud provider. Today many SaaS applications run from Docker images, so it's very typical to ship your Go program as a Docker image, probably running an HTTP server in most cases, and then launch it easily using a cloud provider or a VM running somewhere. So I'm going to show you how to build this test server using a Dockerfile. First, make sure you have Docker installed; you can just go to the Docker website and download it. I'm using Docker for Mac; there's also Docker for Windows, and you can also download Docker for Linux. The only thing you really need to build a Docker image is a Dockerfile, and a Dockerfile is actually pretty easy to write.
The first step is to declare what your base image is with FROM. There's a golang base image you can use for compiling; this is golang:1.18, and I'm using the Alpine Linux variant because it has a smaller footprint than an Ubuntu image, so it's a very good fit for building Go programs. Remember what I said in the previous lecture: if you are building for Linux, you need the same libc at runtime, so if you're building on Alpine you also need to be running on Alpine. So here I have my build container, `FROM golang:1.18-alpine AS go-builder`, and here I have the runtime container, the actual container that will run when I use docker run.

Let's start with the build container. I have my base image, which contains all the build tools. I have a WORKDIR, the directory everything will be built in, which is /app. I copy everything from my local folder into /app in the Docker container, because that's my working directory. I then add build tools, curl and git, which can be necessary to build your executable; most likely for this project you only need git. This container will not be shipped, it's only for building, so it doesn't really matter what you install, as long as you have a separate runtime container where the build tools are not installed. So we add these build tools; `apk add` is the package manager command in Alpine Linux. You then do a `go build` of the Go files into the executable server, and this will be dynamically linked against musl libc, because we are not cross-compiling and we are not disabling cgo. These cleanup lines are not really necessary; I just copy-pasted them from another project where we wanted to clean up after every command. Then we have the runtime container, which is going to be alpine:latest, so that the C library is the same.
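Putting the narration together, the Dockerfile could look roughly like this. The stage name go-builder and the binary name server follow the narration; the exact paths and the `--no-cache` flags are my assumptions, so treat this as a sketch rather than the lecture's literal file:

```dockerfile
# Build stage: golang:1.18-alpine has the Go toolchain; Alpine keeps it small.
FROM golang:1.18-alpine AS go-builder
WORKDIR /app
COPY . /app
# git (and sometimes curl) may be needed to fetch module dependencies.
RUN apk add --no-cache git
# cgo stays enabled, so the binary links against Alpine's musl libc.
RUN go build -o server .

# Runtime stage: a fresh Alpine image with the same libc as the builder.
FROM alpine:latest
WORKDIR /app
# Root certificates, needed if the server makes outgoing TLS/API calls.
RUN apk add --no-cache ca-certificates
COPY --from=go-builder /app/server /app/server
EXPOSE 8080
ENTRYPOINT ["/app/server"]
```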
So we want to make sure that the C library in the build image and in the runtime image is the same. Right now that is the case, but if you get an error while executing your app, you might have to pin this `latest` to a specific version; it depends on which Alpine Linux version the golang:1.18 image is based on. I also set the working directory to /app. I install ca-certificates, which is not strictly necessary if you only run a server; it's needed when you want to make outgoing API calls, because to validate the server you're connecting to you need the root certificates. Then you `COPY --from=go-builder` the /app/server binary into /app/server. So this Alpine image will only have the CA certificates installed and the binary at /app/server. We EXPOSE port 8080 in the container, and the ENTRYPOINT is /app/server, the process that starts when we start the container.

Let's try to build this. How do we build it? With docker build: we give it a tag, test-server, and then we specify the path to build, which is the current directory. You will see the commands listed in our Dockerfile being executed: it installs git and some dependencies, fetches our Go dependencies, and then copies the server binary into the runtime container. So what is the image size of this Docker container? Let's have a look: `docker images` filtered on test-server shows 12.8 megabytes. So it's pretty small. It will obviously become bigger once you start writing more complicated Go programs with more dependencies. How do we run this container? You can run it locally, or on a cloud provider within Kubernetes, or just with Docker on a VM. `docker run --rm -it -p ...`: --rm makes sure the container is removed when I exit; interactive mode isn't really necessary, it's just so I can hit Ctrl-C when I want to exit; and -p publishes a port.
The port exposed within the container is 8080, but I also want it bound on my local machine. I run the test server, it prints "starting server on port 8080", and `curl localhost:8080` shows the server is running. So that works; we have a container running. We can also exec into this container to look inside. First `docker ps` to get the container ID, then `docker exec -it <container-id> sh` to open a shell with interactive mode. We land in the working directory /app, which contains the server binary, 6.5 megabytes, and if I run ldd on the server, you can see it is dynamically linked against our C library.

What if you want the image even smaller? That's also possible. I'll exit this, and then I have a Dockerfile.scratch that I created, so I'll close this one. Dockerfile.scratch is very similar: the build stage is exactly the same, all of this is the same, but instead of an Alpine Linux runtime container, it uses scratch. scratch is a Docker-maintained base image with almost nothing in it. So FROM scratch, we copy in /app/server, and we are no longer installing the CA certificates, because we don't have apk anymore; if you wanted them, you would have to install the CA certificates in the build image and COPY the certificate path over from there. Because they aren't really necessary to run this server, I'll just leave it like that; it's something to take into account. Then the build: our previous command was `docker build -t test-server`, so I'll call this one test-server-scratch, and I specify the Dockerfile with -f because it's not the default name; without -f, Docker looks for a file called Dockerfile, and with -f you can specify any file. Dockerfile.scratch is the one I'm going to build, and it's almost the same: it does the same build.
It just copies the binary into a different runtime image. Let's first try to run it: `docker run test-server-scratch` prints "starting server on port 8080". Okay, that works. Now let's compare: `docker images` filtered for test-server shows we shaved off some more megabytes; test-server is 12.8 MB and test-server-scratch is 6.7 MB. So if you really want the minimum, test-server-scratch is also a good runtime image. It kind of depends on what you prefer. If you may still need to debug the runtime container while it is running, because something went wrong, then you often want Alpine Linux, and maybe also curl and bash installed; here I sometimes install those too, just so I can do some debugging inside the container, for example a curl to localhost on a specific endpoint to see if it's working. It's a little more difficult to debug with scratch, because there is almost nothing in it. If we run test-server-scratch again and try `docker exec` to open a shell, it says /bin/sh is not installed. There is no shell, because it's a truly minimal container image; there is really nothing in it, so you can't even enter it with a normal shell. You would have to install one first before you could do any debugging.

You can still make the image smaller if you want: there are flags you can pass to `go build` to strip debugging information and the like, to make your Go binary even smaller. If that really matters to you, have a look at which flags are available. Most of the time, I would say, just using Alpine Linux already helps a lot in keeping your Docker image small.
Once you have a Docker image, you can deploy it pretty much anywhere you want.

In this section I'm going to talk about the AWS SDK. AWS stands for Amazon Web Services and is Amazon's cloud offering. We are going to use Go to make API calls to AWS. This is how it looks when you make an AWS API call: AWS has multiple endpoints you can reach. For example, EC2 is their virtual machine offering, where you can launch virtual machines, and it has its own separate AWS endpoint. You hit that endpoint over HTTPS and add parameters to call different actions. Every call needs to be authenticated, so you also have to supply auth parameters. This makes it quite difficult to do these calls yourself. You could do them, for example, with curl or any other HTTP command-line utility, or even in Go with the http.Get we used earlier. As a response to your API call you get a reply, as you see at the bottom of the slide. You then have to decode everything into a struct, and every API call has a different schema, so you would need a schema for every call. There are thousands and thousands of API calls you could make, so it would be quite a lot of work to do this all yourself. That's why AWS has an SDK for Go that you can use. It has Go functions, like DescribeRegions, that you can just call, and the SDK takes care of authentication and also parses the output for you. So it's quite a no-brainer to use the SDK for endpoints like this.

What do we need to do? First, open an AWS account if you don't have one already. Then create an IAM user and download its credentials. Then we configure these credentials on our machine using the AWS command line utility or as environment variables. You can configure them straight in Visual Studio Code if you want, or use the AWS command line utility to configure them.
You can then use the AWS command line utility together with the code we're going to write. Let's start with configuring the AWS credentials of a user. If you don't have an AWS account yet, go to aws.amazon.com, choose "Create an AWS Account", and it will guide you through the steps. Once your account is created, you can log in by signing in with your root account or with any administrator account, and you will see a page like this one. We first need to create a user that we can use on our machine, so we go to IAM; just type IAM, "Manage access to AWS resources". Go to Users, Add user, and for the username you can use go-aws-sdk, for example. We need an access key: "programmatic access" enables an access key ID and secret access key for the AWS API, the command line utility, and the SDK, which is what we need. For permissions, make sure the user has enough permissions to create an EC2 instance, so you can give it EC2 full access or just administrator permissions. Click "Attach existing policies directly": these are policies already created by AWS. The easiest is to give it AdministratorAccess, but there is also, for example, EC2 full access that you could start with. I'm going to use AdministratorAccess, because we will have other API calls to make as well. Then we can add tags, which are optional, and review: we have the username go-aws-sdk, which is going to be an administrator, and we're going to get an access key. Make sure you always protect these access keys; they are secrets. Otherwise someone else could create instances for you, or take over your AWS account, especially when you create an administrator. This is the access key ID, which we can copy, and then we have the secret access key; if you click Show, you can see the secret access key is also just a string.
This we need to configure, and the easiest way, I think, is the AWS command line utility. If you go to aws.amazon.com/cli, you can download it: there's a Windows installer, a macOS installer, and a Linux installer. I'll show you how to configure the credentials with this CLI and how to configure them within Visual Studio Code, so you can pick either option. I would recommend the AWS command line utility, but if you cannot download it for any reason, you can still configure everything directly in Visual Studio Code.

With the AWS command line utility installed, you should be able to run aws as a command. To configure your credentials, run `aws configure`. It asks for your AWS access key ID and your secret access key, then you can specify a region, for example us-east-1 if you are in the US, and a default output format, which you can leave at None. Then it should be configured: `aws sts get-caller-identity` gives you output showing that you are authenticated.

There is another way to do it: editing the launch configuration. If you go back to the configurations, you can add environment variables there, namely the ones AWS needs, and when Visual Studio Code runs your program it adds these variables to the environment, so you are again authenticated to your AWS account. There are three environment variables you would typically configure: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, which are the key ID and secret key you were given, and AWS_DEFAULT_REGION, which can be us-east-1 or any other region you would like to launch in. These you can also configure within your launch configuration.
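A sketch of what such a `.vscode/launch.json` entry could look like (the structure follows VS Code's Go debug configuration; the placeholder values are obviously not real credentials and must be replaced with your own):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch with AWS credentials",
            "type": "go",
            "request": "launch",
            "program": "${workspaceFolder}",
            "env": {
                "AWS_ACCESS_KEY_ID": "<your-access-key-id>",
                "AWS_SECRET_ACCESS_KEY": "<your-secret-access-key>",
                "AWS_DEFAULT_REGION": "us-east-1"
            }
        }
    ]
}
```

Keep in mind this file is easy to commit by accident; the `aws configure` route keeps credentials out of your project directory.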
If you then launch your Go application with the SDK, it will be able to call the AWS endpoints using this login information. In the next lectures I'll point out which part of the code fails if your credentials are not configured properly.

This is going to be our first AWS SDK for Go demo. What you see on screen is what we are going to build first: the steps we need to take, the functions we need to call, to launch an EC2 instance on AWS. The first step is to load the config. This loads the credentials, the AWS access key ID and the secret key, whether they are in an environment variable or in a file. If you configured them with `aws configure`, they are in a file; and if you run this on an EC2 machine, it can even be an IAM role. This load-config step takes care of all of that. Then we can make the EC2 API calls. First an API call to create an SSH key pair, in case our SSH key doesn't exist yet; if it exists already, we skip that. Then we find the Ubuntu AMI: there is an API call to list the available images (an AMI is an image), and there is an Ubuntu image we can search for, which returns the AMI ID that we then use. Then we launch the EC2 instance using this Ubuntu image as the base image, and when everything goes right, we output our EC2 instance ID.

So let's get started on that. For this demo I'm using a new directory, which I created and opened in Visual Studio Code as a folder. I run `go mod init`, which will be important once we start using external modules. Let's start with a main function and then a function to create our EC2 instance. `func main` will be the entry point, and we need two variables: the instance ID, which is a string, and an error. These two we need.
Then we call our createEC2 function, which returns our instance ID and our error. We don't really know yet what to pass, so let me just write the function signature: string and error is what we return, and for now we return nothing. What do we do next? We need to check for errors, so let me add an if statement: if the error is not nil, we print "create EC2 error" plus the error message, and we also stop with an exit code of 1. If everything went fine, we print the instance ID. That's why I declared it a little higher up: the instance ID is assigned here but declared above, so it can still be used below (and we can remove the colon from :=, because it's no longer a new declaration). Save this, add a \n. Oh, I wrote return, but I'm not going to return anything, I'm just going to print it, and I need Printf here instead of Errorf. So we print the instance ID after createEC2 runs. We do want to pass one thing: the AWS region, so we can easily change it. I'm going to launch in us-east-1 and pass that.

Now I can initialize the AWS SDK. How do I do that? It's in the AWS SDK for Go documentation; let's have a look at it. We are going to use AWS SDK for Go version 2. There's the GitHub repository, and you will find the SDK documentation there, but also a getting-started guide. The reference documentation is right there as well: a developer guide with the getting started, and then the services. There's an example for S3, and there will also be an example for EC2 and how to call it. This is what you should read if you have no idea how to get started. So let's just look at this getting started and get going.
We already did `go mod init`, and next you run `go get` to fetch these dependencies; they are external dependencies. We are going to use the config package, which loads the SDK's default configuration from environment variables, the shared credentials file, or the configuration files. So this is what we need; I copy it and then change it a little. config is undeclared because we still need to import it, it's an external dependency, and the editor says it could not import config. Quick fix: go get, the command you saw in the getting-started guide. You can run it within Visual Studio Code or in the terminal. What happened? In go.mod you will now see that we require this dependency. Once we start adding more dependencies, more lines are added to the go.mod file automatically; the `go get` command adds external dependencies to go.mod. You will not find the internal (standard library) dependencies there, because they come with Go itself.

The config import now works, and now we get to contexts: we always need to provide a context to this LoadDefaultConfig call, and context is part of Go's standard library. context.TODO() means we are supplying an empty context, which is fine, but what we can also do is pass our own context. A context can be useful because it can carry values, but more importantly, you can cancel a context. These API calls can take some time to execute; if an API endpoint is not responding and just times out, that can take 30 to 60 seconds, and a context allows you to cancel this whole flow from another place in your program. Right now we're not going to use that, but we are going to pass the context through. So I define `ctx := context.Background()`.
Background returns a non-nil, empty context: it's never cancelled, has no values, and has no deadline. You could later change this to a context with a deadline, or one that carries values you want to pass between functions, or one you cancel later on. We will use Background and just pass it down to our functions. It is of type context.Context, so the parameter is `ctx context.Context`. Now we load the config with our region name, we can use this config, and on failure we return an empty string with an fmt.Errorf "unable to load SDK config" error.

What do we do next? We initialize an EC2 client: `ec2Client := ec2.NewFromConfig(cfg)`. This is again something we need to import now, the service/ec2 package, and I need to `go get` it; now it's downloading the ec2 service package. All these service packages are very similar: if you use another AWS service, you'll also call NewFromConfig and pass the config. Now we have an EC2 client that we can use to make API calls. ec2 happens to be a really big package with lots of API commands, so it's very difficult to find exactly what you need in that list, but a trick is to use the editor's autocomplete on the client to see what's available; it gives you a starting point.

First we need to create the key pair, so we have CreateKeyPair. We need to pass it our context and the parameters; the parameters are of type *ec2.CreateKeyPairInput, so we can supply input. So we have the context, then a reference to an ec2.CreateKeyPairInput, and there are also optional options; it returns a *ec2.CreateKeyPairOutput. Then we need to set some fields: a key name. Let's pass a key name and call it aws-go-demo, or aws-sdk-demo, or
go-aws-demo, that's maybe a better name. And what do we see: "cannot use", because the field wants a *string, a pointer to string. You might think you can just take a reference with &, but that doesn't work on a literal, which is why AWS provides helper functions: aws.String converts a string to a *string. We then need to `go get` the aws package, which I already have, so aws is imported; if this shows an error for you, you still need to run go get, or click the quick fix. What is aws.String? Go to definition: "String returns a pointer value for the string value passed in." So it takes a string, converts it to a pointer, and returns the *string. Just a helper function; there are similar ones for string slices, string maps, and so on, because these AWS input types almost always expect pointers.

We have the key name; what's next on my list? I need to find the Ubuntu AMI. On the EC2 client, if I just type "image", what do we have: CreateImage, DescribeImages... Let's look at what DescribeImages says: "Describes the specified images (AMIs) available to you or all of the images available to you." Sounds right. It takes a context and parameters, but this time of type *ec2.DescribeImagesInput. So we pass the context and an ec2.DescribeImagesInput, and what do we need in it? We need Filters, and we also need the Owners, because the image we are looking for is owned by Ubuntu; we still need to find this owner ID, which is described somewhere in the documentation on ubuntu.com. The Filters field is of type []types.Filter, and now it sometimes gets a bit tricky: the documentation says types.Filter, but what is types? That can sometimes be a bit annoying to find. If we go to definition, types.Filter lives in github.com/aws/aws-sdk-go-v2/service/ec2/types, so we add that to our imports, because
otherwise it's sometimes difficult to find. Save this — OK, now it's recognized. It's a slice of name-and-values structs, so it's a filter, and if you have a slice we need one more set of curly braces, because each element in the slice is its own struct. Name is "name": we are going to filter on the name, so we set the filter name, and then we have the filter values. It just happens to be a coincidence that it's Name and "name": the first Name refers to the name of the filter, and we're actually looking for the attribute called "name" within DescribeImages. Again aws.String, because it needs to be a pointer here as well, and a comma at the end. Oh, Values needs to be something else: it's a []string. And what is it going to be? We don't know yet. There are multiple ways of finding this: in the AWS web console there is a describe-images interface where you can type "ubuntu" to find the correct filters. I'm just going to give you the filter; once you know what it is, you can easily change it a bit depending on your use case. So you can see ubuntu images, hvm-ssd — hvm-ssd is just a type of VM — and then ubuntu-focal. You can see that you can easily change this: if there's a new version of Ubuntu, you change the name and the version, and that's how you'd find other versions. So again: if you log into your AWS account, go to EC2, then Images / AMIs, you can play around with that interface to find these; it's just some AWS knowledge you need in order to filter images. There's another parameter I want to add, because we are going to ask AWS to launch an HVM image: I want to make sure we filter only on HVM images. It could be a bit redundant, but we just want to be sure. So it's going to be virtualization-type, and the value is going to be hvm, just so that we for sure only get images returned
that are of type HVM. Then we need Owners. Owners is again a string slice, and this we need to find online: like I said, if I search for "ubuntu aws owner" you land on ubuntu.com cloud images, and there it's explained that you can find images using AWS commands, or with describe-images, but then you need the owner, and this should be the owner ID, 09972. There's also an image locator on that website that you can use, which is very similar, to find these AMIs. What we are looking for is an AMI ID like this, and our API call will return it for us. You also see the instance type, the version, the name, so we are basically filtering on those plus the owner; that's what we need to input here, and this will give us an AMI ID. What is the output? A DescribeImagesOutput and an error. Oh, we are not catching our errors: if CreateKeyPair errors, we print "create key pair error" and the error message, and if we have an error here — for example if nothing was returned — we say "describe images error". Then we have the image output, and let's do some more checking: imageOutput.Images is a slice, so if the length of this slice is 0 we also want to return that we didn't find anything, which means something is wrong with our filter. If it is not of zero length, we take the first element of the slice, which is going to be the most recent one, element zero, and that gives us the image ID. Let's keep this for now. We now need to launch our EC2 instance: on the EC2 client, "launch"... no, it's RunInstances. This API call is actually very old; it's been around for a very long time — you can see EC2-Classic and EC2-VPC mentioned — so it's definitely a very old API call. ec2Client.RunInstances takes the context and an ec2.RunInstancesInput. And what does it return? An instance and an error, I guess; let's
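The filter shapes described above can be sketched without the SDK. The local filter struct here only mirrors the shape of the SDK's types.Filter (the real one uses a *string Name and is built with aws.String), and the focal name pattern is an assumption based on Canonical's usual AMI naming — verify it against the image locator before relying on it:

```go
package main

import "fmt"

// filter mirrors the shape of the SDK's types.Filter: a filter name
// plus a slice of values.
type filter struct {
	Name   string
	Values []string
}

// ubuntuImageFilters builds the two filters from the demo: the AMI
// name pattern (hvm-ssd, ubuntu-focal — easy to adjust for newer
// releases) and the virtualization type, so only HVM images return.
func ubuntuImageFilters() []filter {
	return []filter{
		{Name: "name", Values: []string{"ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"}},
		{Name: "virtualization-type", Values: []string{"hvm"}},
	}
}

func main() {
	for _, f := range ubuntuImageFilters() {
		fmt.Printf("%s = %v\n", f.Name, f.Values)
	}
}
```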
have a look: a RunInstancesOutput and an error. So let's already capture our error, "run instances error", and then we have our instance ID: Instances, first element, then InstanceId. We need to do another check, and this is also going to be a pointer, because that's how AWS returns it. Every time you assume there is a first element, make sure you check it: if len of Instances is zero, we say "instances is of zero length". Then we need to provide our RunInstancesInput, and there's a lot of input we can supply. First of all our ImageId — our image ID is this image ID right here from our image output. Our key pair: it's just KeyName, so we can reuse the name itself for it to work. Then what else do we have? There's a MinCount and a MaxCount that you always have to supply, but they also need to be pointers, so we just use aws.Int32 as a helper function. KeyName, ImageId, and then we also need a type: InstanceType — what type of instance am I going to launch, a t2.micro, a t3.micro, or something else? We can again use the types package, where everything is declared, so if I just type t3 micro: types.InstanceTypeT3Micro, which gives you a very small instance, but it's within the free tier. If you just opened your AWS account you have a free tier, so you don't have to pay for it; just make sure you shut the instance down after you've tested that it works. So that's it, I think. If we run this multiple times, what's going to happen is that it will say my key pair already exists, so I want to handle that as well — but we don't have to handle it straight away; we'll test it first. What is happening here: I don't need any output, so let's remove this variable. We're launching in us-east-1. OK, I think we are ready to test this; maybe it will just crash and then we'll have to fix something, but we'll see. First of all, let me run this, and I
haven't configured any AWS credentials, just to show you what the output would be. go run main.go: we actually loaded a default config, but then "failed to retrieve credentials", "failed to refresh cached credentials", "no EC2 IMDS role found", "operation error". So it tries to fetch credentials somehow, but there are none. What if I pass some credentials: AWS_ACCESS_KEY_ID=abc and AWS_SECRET_ACCESS_KEY=xyz. Now I pass credentials, but they're not valid, so I get status code 401: there are credentials, but it's still an authentication failure because they are not correct. So I'm going to configure my real credentials on my machine and then run it again with go run. Either you run the AWS command line utility to configure them globally, or you configure them in Visual Studio Code itself — but then you have to use Run and Start Debugging or Run Without Debugging to have Visual Studio Code load those environment variables. I have configured my credentials now; I'm going to run it again. Oh, and it works: an instance ID is returned. Let's have a look at my AWS console. This is the AWS console; I want EC2, right here, click on it, and then you have Instances on the left that you can click. It opens with a dashboard, but you can click on Instances and refresh a few times, and now you can see it is initializing. This is my instance ID, here's my instance, and I have an Ubuntu AMI: here's Ubuntu, here's the AMI ID, ubuntu-focal, and then we have the key pair name, go-aws-demo. So you could actually log into the instance. If you want to try logging in using SSH, have a look at your security group, because your security group will be the default one, and what you want is to be able to log in from your IP address. Since we didn't supply any security group, you might have to edit this: if you click on the security group and then Edit inbound rules, you might have to add a
rule for SSH, and you can type your IP address here, or you can just open it to everywhere. If you type your IP address it's going to be a /32: if your IP address is 1.2.3.4, you need to add a /32, while 0.0.0.0/0 opens it to anywhere for SSH. If you want to open everything, you can just say All TCP. I'm not going to do that, because I'm not going to log in at this point, but just for your information: if you want to be able to log in using SSH, that's how you do it. So it's initializing; now let's just delete it, we don't need it — Terminate. Make sure you always terminate your instance so it doesn't keep running, otherwise you will start incurring costs if you run more than one t3.micro outside the free tier. If you are still within the free tier you will not incur charges for the t3.micro, but if you are running multiple ones you will get charged; it just depends on how many hours in a month you run them. I think you get something like 730 free hours; there's a page on AWS that describes in detail how this works. So this is already shutting down. Now, what do I want to explain next: what if I run it again, go run main.go? It's going to say "create key pair error": there's a duplicate, it already exists. How would you solve that? Well, we only need to create the key pair if it doesn't exist, and there's also ec2Client.DescribeKeyPairs. It takes a context and an ec2.DescribeKeyPairsInput and outputs the key pairs and an error, "describe key pairs error". What does the input want? KeyNames: by default it describes everything, but I just want my key name, and this is a string slice, I think; my key name is go-aws-demo. Then, if it exists: if len of KeyPairs — which is of type KeyPairInfo — is zero, then I want to create my key pair. And actually something else that I forgot: where's my key pair, where's my private key? I still should output my
private key; otherwise I can't log into my machine. So the new key pair — or just keyPair — is going to be here, and what I want is to output it. What does it have? keyPair.KeyMaterial is a *string, and that's my PEM-encoded key, and I should write it out to the current directory so that I have my key available. How do I do that? There's a function for that: os.WriteFile. "WriteFile writes data to the named file, creating it if necessary." I'm going to call the file go-aws-ec2.pem, and WriteFile takes a []byte, so I need to convert my key material, which is a string pointer, to a []byte. Something like this: I dereference it, so it's not a pointer anymore when I put this star sign before it; now it's a normal string being passed to []byte, so it becomes a byte slice. Then WriteFile takes permissions: if I give it 0600, it's only readable and writable by the owner, which is the best choice for something like a private key — nobody else can read it. What is the output of this? An error, so I say "write file error" if something goes wrong. Am I missing anything? I don't think so, so let's go run it again. OK, that works — but what if I delete my key file? Then I don't have my private key, so I should also delete the key pair in AWS, and then I'm sure another error will pop up that we still need to fix. Let's first terminate our instance, then go to Key Pairs; key pairs are here, and here we have go-aws-demo: Actions, Delete. So yeah: once you've created the key pair, if you didn't save your private key, there's no way to retrieve it; we have to delete it and recreate it, otherwise we won't be able to log into our instance. Although, just for completeness: there are other ways to log into an EC2 instance nowadays if you don't have an AWS key pair — there's something called AWS Session Manager that you could use to log
into an instance. So: clear, go run — uh-huh, another error, "describe key pairs": our key pair is not found, so we return an error. DescribeKeyPairs says an API error occurred, InvalidKeyPair.NotFound, because we were asking to describe this go-aws-demo key pair, but it's not there, so AWS gives us an error. How to solve that? There are ways to match this InvalidKeyPair.NotFound using a type in Go with the AWS SDK, but I was looking, and within aws-sdk-go-v2, if you check the types, there is actually no type for this error. There is another package that you could use; if you want to know more about it, you can search for "aws sdk go v2 error handling", which explains how you can handle errors. I'm going to do something that is a little more straightforward — maybe not the best way to do it, but I don't think we'll ever have issues doing this. What I'm going to do: if the error is not nil, we only return this error if the error string does not contain — because this error is basically just a string — so, with the strings package: if not strings.Contains, "InvalidKeyPair.NotFound". It's a little bit ugly, but I found an open issue on GitHub: the documentation is not completely up to date for the v2 Go SDK for AWS, and not every single error type is available. There is a solution for it, there is a package you can use, but that package might change over time, so this is code that should keep working when you want to try out this demo. If something better comes out in the future I will update my code on GitHub, but for now I think this should be fine. So let's have a look: go run main.go. Oops, another error — and this is how it always goes when you start coding against the AWS SDK: things happen that you just don't anticipate, you always have to check carefully how certain variables are returned, and there are also tests that
you can write. So now I'm checking for this strings.Contains, so we didn't hit that error — but then what happened on line 44? The len of keyPairs: keyPairs is just not initialized; it's a pointer, which is not always initialized, so here we have to check whether keyPairs equals nil or the length is zero. If it's not defined, we probably got an error here, and the error could have been InvalidKeyPair: if it was really an error we return it, and if it was not really an error, just not found, then we continue, and then we check: if keyPairs is nil, or keyPairs is defined but the length is still zero, then we're going to create this key pair. And this should work: go run main.go, and it creates another instance. So back to our instances: this is the third one we've created now, and we can terminate it, we don't need it. Successfully terminated, so we cleaned this up nicely, and that should be it. So: we have a main function that defines the context; we could have deadlines here in case it takes too long, so that we stop our program, for example, but we don't have anything, so we just supply an empty context. createEC2 will load the configuration — we need to make sure we have the environment variables — and from the config we create the EC2 client. We describe the key pairs; if the key pair doesn't exist we create it, and then we check whether we have the output file, and here we have the output file, which should be our private key. If you want to SSH to the machine you can either use PuTTY on Windows or the ssh command: it's going to be ssh -i with the go-aws .pem file, and the login is always ubuntu on this type of machine, plus the IP address, and then you should be able to log in. CreateKeyPair, WriteFile — that worked; then we describe the image, which outputs an AMI ID, and this AMI ID we use here; we specify the key name and instance type, and we're only going to
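The nil-versus-empty check that caused the panic above is worth seeing on its own. The keyPairInfo struct here is a local stand-in for the SDK's types.KeyPairInfo, and the useful Go detail — assumed nowhere else in the transcript — is that len() on a nil slice is simply 0, so one length check covers both "never initialized" and "empty":

```go
package main

import "fmt"

// keyPairInfo stands in for the SDK's types.KeyPairInfo.
type keyPairInfo struct{ KeyName string }

// keyPairExists is the guard from the demo: a nil slice has length 0,
// so len() safely covers the "output not initialized" case that
// caused the runtime panic in the lecture.
func keyPairExists(keyPairs []keyPairInfo, name string) bool {
	if len(keyPairs) == 0 { // nil or empty: nothing was described
		return false
	}
	for _, kp := range keyPairs {
		if kp.KeyName == name {
			return true
		}
	}
	return false
}

func main() {
	var none []keyPairInfo // nil, as when DescribeKeyPairs found nothing
	fmt.Println(keyPairExists(none, "go-aws-demo"))
	fmt.Println(keyPairExists([]keyPairInfo{{KeyName: "go-aws-demo"}}, "go-aws-demo"))
}
```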
launch one instance, and then we return the instance ID and output it. That should be it to launch an AWS EC2 instance. In this demo I want to show you how to upload a file to S3, but before we can upload a file to S3 we'll have to create a new S3 bucket, so let's get started on that. It's going to be very similar to the last demo; we just need to do a few things differently. This LoadDefaultConfig I'm going to wrap in a function: I'll call this function initS3Client, and I'm going to pass the region, which is a string. The reason I'm doing this is that we're going to split up our functions; we're going to have multiple functions that need the S3 client. If we'd had multiple functions needing the EC2 client in the previous lecture, I would have done it like that too. So the S3 client is going to be created from the config; we're going to return an error too, and what else — we're going to return the S3 client as well. The s3Client is going to be — well, this needs to be nil for sure on error, but otherwise s3.NewFromConfig, and NewFromConfig returns this S3 client. We also need to pass a context, so we have ctx context.Context and then the region. This we still need to import and go get, so now we have service/s3 imported. NewFromConfig — what does it return? An s3.Client, so that's OK, but we also need to return that we don't have an error. Then in main: s3Client — you'd need to declare it here, s3Client is of type *s3.Client — then s3Client, err = initS3Client with the context, and the region will be us-east-1. Then we add an if for that: if err is not nil, we print something, fmt.Printf "init s3 client error" and the explanation, and we need to define our error here as well. "s3Client is declared but not used" — so I think we are good for now; we just need to write
another function. Let's write our function createS3Bucket; we're also going to supply a context, and what else — the S3 client, and we can return an error. Then we can copy-paste this a little bit: here, or in here, we need an os.Exit(1), and here we also need an os.Exit(1). So we're going to create an S3 bucket, and we need the S3 client for that. Save this: createS3Bucket, and it will return the error, so we have no errors now; we just have to return nil here, and I think we are all green. We're all green. To create a new bucket: s3Client.CreateBucket — that's an easy one. Supply the context, supply the s3.CreateBucketInput, and then we need to give it the name: Bucket is the bucket name, aws.String; let's define bucketName for that. The bucket name needs to be unique across all of AWS, which means that if I create a bucket named aws-demo-bucket, then when you execute this lab it's going to say the bucket already exists — because I created it, and it needs to be unique. So I would say: add a random string here, something like this, and if everyone doing this demo adds a unique string, it will always be unique. So I save this, aws.String(bucketName) — and what happened here is that it imported the older AWS SDK. If you've used the older SDK previously, sometimes Visual Studio Code remembers it, but it's actually the wrong one, because that one is v1 and we need v2. It is possible that you'll need to implement something using a service or a feature that is not completely implemented in v2, and then you'd still have to use v1, but in general you should use v2 — and I hope v3 never comes out, otherwise I'd have to redo all these lectures. CreateBucket: there is a lot of explanation here, and why? Because, again, S3 has existed for a long time and has lots of features, and there is a possibility that
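The "add a random string" idea for globally unique bucket names can be sketched like this. The base name and suffix length are arbitrary choices of mine, not values from the lecture; the one real constraint reflected here is that S3 bucket names must be lowercase:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// randomSuffix returns n random characters drawn from lowercase
// letters and digits, the safe subset for S3 bucket names.
func randomSuffix(n int) (string, error) {
	const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
	out := make([]byte, n)
	for i := range out {
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			return "", err
		}
		out[i] = alphabet[idx.Int64()]
	}
	return string(out), nil
}

func main() {
	suffix, err := randomSuffix(8)
	if err != nil {
		panic(err)
	}
	// Hypothetical base name; yours must differ from everyone else's.
	fmt.Println("go-aws-demo-bucket-" + suffix)
}
```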
you will get an error depending on what region you are using. By default the bucket is created in the US East (N. Virginia) region; you can optionally specify a region in the request body. You might choose a region to optimize latency, minimize cost, and so on — for example, if you reside in Europe you will probably find it advantageous to create a bucket there, and then you may have to specify something extra. So let's just execute this and see how far we get. If I do go run main.go, it seems to have created the bucket; there's no error. How can I know? aws s3 ls on this test bucket — let's try that. OK, no error, which means it has been created. Good. Let's just see whether it would work if I created this in Europe, in eu-west-1 — I'm just going to give it another name — and if it doesn't work, I'll have to change something. OK, that seems to have worked as well... "no such bucket"? Oh, I already see what is happening here: I need to capture the error, and I don't really need the output. So the first time it actually worked, and the second time it didn't. I'm going to say "create bucket error", and let's see what we get back as an error when we run our Go program against eu-west-1 with this bucket: "create bucket error: the unspecified location constraint is incompatible for the region specific endpoint this request was sent to". So yeah, with S3 buckets that are not created in us-east-1 you might have some issues, and that's why I wanted to cover all the use cases here. If you're going to create a bucket in a region other than us-east-1, you will need to add something; let's fix that. What do we have here: CreateBucketInput has CreateBucketConfiguration, of type types.CreateBucketConfiguration — and you see it's now the S3 types package, not EC2 types anymore. And what do we have there: LocationConstraint — "specifies the Region where the bucket will be created; if you don't specify a Region, the bucket is created in US East". So we also want to specify
the LocationConstraint, which is of type types.BucketLocationConstraint — and what is that? A string, so we can put a region here, and then we just need to pass the region again here, region string, and add the region here as well. OK, maybe I should make the region a constant — why not: regionName, eu-west-1 — because we're using it in multiple places, and then I don't have to pass it anymore. regionName; remove this, remove this, and then regionName. That should work; let's now see whether we can create a bucket in a different region. Seems to have worked, and this doesn't give an error — all good. I'm not going to use this bucket, though; I'm going to use the first bucket in US East, so let me just set it back to us-east-1. So we created our bucket, but we only have to do this once; after that we can do a describe-bucket or list-buckets to check whether the bucket already exists. How do you do that: s3Client.ListBuckets — there is no DescribeBucket, so it will need to be ListBuckets. What does ListBuckets say? "Returns a list of all buckets owned by the authenticated sender." That should work. ListBuckets takes an s3.ListBucketsInput; what's going to be our input? There's no input — it just lists all the buckets. allBuckets and an error, and we will return an error: for example, these errors can still be hit, as in we can still get an error if you don't have permission for it, and that's what it says here — you will need the s3:ListAllMyBuckets permission to be able to do this. Then we need to ask: did we find the bucket? We'll start with found = false, and then we need to iterate over all these buckets. Let's have a look: range over allBuckets.Buckets — allBuckets is the variable, but inside allBuckets you have Buckets, which is of type []types.Bucket, so each element is a types.Bucket. Then we have bucket.Name; we can compare the name, which is a *string, so we need to make sure we compare the actual name, and that's why we
need this star — otherwise we would be comparing the addresses. If *bucket.Name equals bucketName, our static variable, then found is true; if not found, we're going to create the new bucket, and otherwise we just return nil. Oops — let's see that this works: save, go run, and that seems to have worked, and we still have our bucket. Let's upload something now. We are going to pass this S3 client to another function, uploadToS3Bucket, and you can say "upload to s3 bucket error" if we have an error. Let's make a new function, uploadToS3Bucket, and return nil if there's no error. OK, do we still have errors? No, no errors. What do we need to do to upload something to S3? We need our S3 client, and if you just type "upload" you see UploadPart: "uploads a part in a multipart upload". That's a really low-level function, so this is not something we are going to use; Amazon actually has other functions available for uploading data to S3 — higher-level functions where we have to do less. If you have a specific use case and you cannot use those high-level functions, then you'd have to use the low-level ones, but for our simple use case of uploading a simple file: github.com/aws/aws-sdk-go-v2/feature/s3/manager. So we're going to go get this manager package, and once it's imported we'll be able to use it: manager dot... it takes some time to import; if it takes a long time we can check whether there are errors, but it seems GitHub was just a bit slow, so now it is imported. And what does it say if I type "upload": NewUploader — "NewUploader creates a new Uploader instance to upload objects to S3." This is what we should be able to use: NewUploader, and we need to pass our client, s3Client, and this gives us a new uploader: uploader = manager.NewUploader. Then we can upload something to S3 with the high-level function in the uploader package. What do we have here: Upload, "uploads an object to S3", with a context and then an s3 Put... what was that... PutObjectInput
— and that should be it. This will give us an output; we probably don't need it — it returns things like the bytes uploaded and so on — so we just need an error: if there's an error, we say "upload error". What are we going to supply to the uploader? The Key, the Bucket, and the Body: the Key is just the file name, and the Body is an io.Reader, which we have used previously. Do we need anything else? You might — for example, if you want to make an object public you can add permissions — but we don't really need that now, so I'm going to keep it simple. Bucket is the bucket name, converted to a pointer; Prefix is going to be the name of the file. This can include a directory as well, so it can be directory/test.txt, something like that, and it will automatically create this "directory", because directories don't really exist in S3 — it's just a prefix. Hmm, this doesn't seem to work: unknown field Prefix. It wants Key: Key is the name — the directory part is called the prefix, but the field is Key — and then the Body. For the Body, we can just create a new reader: strings.NewReader, which implements the io.Reader interface, and it will just say "hello world". If you instead want to read from a file, you can also use ioutil.ReadFile to read a local file; we are using strings.NewReader, which gives us this body, and then it should upload the file. Let's have a look: we can output "upload complete", just so that we know the program executed. "upload complete" — let's hope it's now in our bucket... and there is our test.txt. Let's copy it locally to see if it worked — you can also use the AWS web console if you want to download the file with the browser — or we can just read it here: "hello world". On the console this command will only work on Linux or macOS; it just prints the file: "hello world". So our upload of our test.txt
with our text "hello world" worked. So in this demo I uploaded a file to S3, and I did it with a little bit of text. I'm sorry if I'm going a little quickly over this demo, but a lot of this code is just boilerplate: you can see that only this piece of code is new, and this piece is new, and then here there's a for loop — there's a lot of boilerplate involved in making these AWS calls. I can still show you one more thing: what if you want to upload a file from your local drive? Like I said, with ioutil we can use ReadFile, and the file name is test.txt; it returns bytes and an error — "test file error" if we cannot read the given file, "read file error" — and then we have testFile, which is a []byte. What do we do then? We remove this first, and then bytes.NewReader — or bytes.NewBuffer? there's a NewReader as well — it's NewReader: bytes.NewReader from testFile. This gives us a reader, and a reader implements the io.Reader interface, so this should also work. go run: "upload complete". Let's now copy it to test2.txt to see if the file matches — and yes, it is now "hello world" with three exclamation marks, just like I had here in test.txt. So a local file also works; and if you do an API call like an HTTP GET, take the full body, and upload that to S3, that also works. Where your source data comes from doesn't really matter: as long as you can provide an io.Reader — which can also stream the data, because it's a reader interface — you'll be able to upload a file to S3. If you want to delete your bucket afterwards there's also an s3Client.DeleteBucket; to do it manually you can use the AWS CLI's delete-bucket, and you can also do it in the web interface. aws s3 rm can remove the file, so if I do this, the file is gone. You will be charged for storage, GET requests, and PUT requests; if
you are in the free tier you have some free usage as long as you stay within the limits — and even if you do have to pay, it's really only a few cents as long as you keep your files very small. So that's it for this demo: we uploaded a file to S3, and we created the S3 bucket. Now that we've uploaded a file to S3, let's try to download it. Next to our uploadToS3Bucket we're going to have downloadFromS3, passing the S3 client, and most likely we'll get back the contents, which will be a []byte; we'll call it out, we'll say "download complete", and I'll print it. We just have to write this function — a slice of bytes is how we typically transfer file contents. downloadFromS3 has the same signature as uploadToS3Bucket, so I'm just going to copy-paste that function and call it downloadFromS3. What I'm going to return is also []byte; these are very small files, so I can just return the contents in one variable. I'll return empty bytes for now so that everything is green. Where we had manager.NewUploader, there's a NewDownloader: NewDownloader accepts the S3 client and returns a downloader, downloader = manager.NewDownloader, and then we should be able to initiate a download: Download takes a context and a WriterAt. This WriterAt will be a buffer; we just need to provide a variable that implements the io.WriterAt interface, and that will hold the actual contents of the file we want to download — the downloader will download the contents of this file from S3 into this buffer. Then an s3.GetObjectInput, and this returns n and err: the int64 returned is the size of the object downloaded, in bytes — I'm going to call it numBytes — and then the error; if we have an error we say "download error". Then we just need to define this buffer: buffer = manager.NewWriteAtBuffer. They have a function we can use that returns a manager.WriteAtBuffer, which implements WriterAt; we just
need to provide an initial buffer, which we can do by passing an empty []byte, and then we can return buffer.Bytes(): Bytes returns the slice of bytes written to the buffer, so it just returns everything in the buffer. Download will put everything into this buffer, we output the buffer, and if there's nothing to output we can just return nil. We still need to check whether numBytes matches the bytes in our buffer, just to be sure we downloaded it properly: if numBytes equals the length of buffer.Bytes() — and if we give this a variable, we can produce a nice error message with the correct values: numBytesReceived = len of the buffer bytes we received; if numBytes is not equal to numBytesReceived, we throw an error, "num bytes received doesn't match", and print numBytes and numBytesReceived. numBytes is an int64 and numBytesReceived is an int, so we need to make sure we're comparing the same things; we can just convert our int to int64 first so it matches numBytes. So what are we doing here: downloading, checking for errors, checking the byte counts, and then returning the actual bytes. And what are we downloading? That's what we still need to provide in the GetObjectInput: the Bucket, as an aws.String — we can just copy it from here, actually — and the Key. Save this, and "download complete" is printed here. OK, let's see whether this works: go run main.go. "upload complete" — that's this one; "download complete" — that's this one; and it's the "hello world", the "hello world" that we uploaded here from our test.txt. So if we now change it — "this is a change" — it should first upload this change and then download it. OK, that also seems to work, so our upload and download seem to be working. Also, when you upload something to S3 that already exists, it just overwrites it; that's why we're not seeing an error:
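What manager.NewWriteAtBuffer provides can be sketched with a few lines of plain Go. This writeAtBuffer is my own minimal stand-in, not the SDK type: an in-memory io.WriterAt that grows as chunks arrive at arbitrary offsets, which matters because the downloader may fetch parts of the object concurrently and out of order. The byte-count comparison at the end mirrors the int64 conversion from the demo:

```go
package main

import "fmt"

// writeAtBuffer is a tiny stand-in for manager.NewWriteAtBuffer: an
// in-memory WriterAt that grows to fit whatever offset is written.
type writeAtBuffer struct{ buf []byte }

func (w *writeAtBuffer) WriteAt(p []byte, off int64) (int, error) {
	end := int(off) + len(p)
	if end > len(w.buf) {
		grown := make([]byte, end)
		copy(grown, w.buf)
		w.buf = grown
	}
	copy(w.buf[off:], p)
	return len(p), nil
}

func (w *writeAtBuffer) Bytes() []byte { return w.buf }

func main() {
	w := &writeAtBuffer{}
	// Two "parts" arriving out of order, as a parallel download might.
	w.WriteAt([]byte("world"), 6)
	w.WriteAt([]byte("hello "), 0)

	numBytes := int64(11) // size a (hypothetical) download would report
	numBytesReceived := len(w.Bytes())
	if numBytes != int64(numBytesReceived) { // same conversion as the demo
		fmt.Println("num bytes received doesn't match:", numBytes, numBytesReceived)
		return
	}
	fmt.Println(string(w.Bytes()))
}
```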
it first overwrites the object and then downloads it again, and we see that we can download something from S3 and output it. That's it for this demo: we have a simple upload and a simple download function to write to and read from S3.

Now that we have completed our AWS S3 upload and download, let's also look at how we can write tests for the new code. I'm going to create a new file, main_test.go. What you could also do is move those functions into s3.go and have an s3_test.go; that's probably a little cleaner than what I'm doing here. The package is still main. Let's test createS3Bucket: func TestCreateS3Bucket(t *testing.T). Save, so that testing is imported, and then we call createS3Bucket with a context and an S3 client; context.Background() is a good one. If the error is not nil, we fail the test with the createS3Bucket error. Now, about that S3 client: we don't want to make real S3 calls in our tests, so we'll have to write a mock S3 client. What should the mock implement? ListBuckets and CreateBucket. How did we do that last time? We created an interface, and we'll do the same now: a new type, S3Client, which is an interface with those two functions. Then we change the function signature so createS3Bucket takes an S3Client. You could do the same for the other functions, but then we'd get into some trouble when passing it to uploadToS3Bucket, so let's just change it here, do the test, and then I'll explain the other functions. If I save this, everything still compiles. Then we create our mock client: type MockS3Client, a struct with ListBuckets and CreateBucket as methods, plus two fields for the outputs we want to return, a ListBucketsOutput of type s3.ListBucketsOutput and a CreateBucketOutput of type s3.CreateBucketOutput. In the methods we simply return m.ListBucketsOutput with a nil error, and likewise the CreateBucketOutput. Then in the test we pass our MockS3Client. What buckets should it return? Buckets of type []types.Bucket; types isn't imported yet, so let's copy it from the other file. It's a slice, so we need to add elements: one with Name "test-bucket", and a second, "test-bucket2", with the commas in the right places. Two elements will exercise the for loop in createS3Bucket that sets found. Then the CreateBucketOutput of type s3.CreateBucketOutput; it has a Location field, but we're not using it, so we can leave it empty. Let's set a breakpoint and have a look; first run it without debugging, OK, it runs, and now with the debugger: found starts as false, we step into the loop, and in buckets we have our two buckets, test-bucket and test-bucket2. This is how we can test our logic after an API call. Then we create the bucket, the error comes back nil, and the test succeeds. So this is how we can test API calls, and we can do this for every AWS API call that we make.
There's just one slight problem with uploadToS3Bucket: we pass in the S3 client and then initialize the uploader inside the function. Ideally we'd move that out of uploadToS3Bucket; then we don't have to pass the S3 client, we can pass the uploader instead, and if we pass the uploader we can define an interface for it. So: type S3Uploader, an interface with an Upload method, and likewise an S3Downloader interface with Download. We change the parameter to S3Uploader, remove the manager.NewUploader call from inside the function, and pass the uploader in; we can pass it directly because the value manager.NewUploader returns implements that one Upload function. The same for the downloader: outside the function we call manager.NewDownloader with the S3 client, the download function takes an S3Downloader, and then we can write tests for that too. Save this. One problem remains: we do a ReadFile with a hardcoded name, and even in a test we may still want to read a file, just from a different path, so let's pass the filename as a parameter. I take the hardcoded name out, add a filename parameter, and pass "test.txt" at the call site. For the mock we make type MockS3Uploader, a struct, implement the Upload method on it, and return m.UploadOutput and nil, since the second return value is an error. Save, and then we write another test function, TestUploadToS3Bucket; the test name doesn't have to exactly match the function name, but it's nice when it does. What do we need to pass? A context, an uploader, and a filename. So in our project we make a new folder, testdata, and in testdata a new file, test.txt, containing "this is a test file"; the test isn't really going to upload it, but it is going to read it. We call uploadToS3Bucket with context.Background(), our mock uploader, and the filename "testdata/test.txt". Let's name it mockUploader and define it right here. What goes into it? An UploadOutput of type manager.UploadOutput; it has fields like Location and UploadID, but since we're not checking them, we don't need to fill anything in. We just check that there are no errors: if err != nil, the test fails with the uploadToS3Bucket error; if there's no error, the test passes. We're not testing a lot of code here, but in bigger applications there could be a lot of other code around the upload, and you'd want to test that whole flow; right now it's a small function, but you can easily see how it could grow. Set a breakpoint: the filename is testdata/test.txt, we read the data, the file contains "this is a test file", those are the bytes we upload, and Upload returns nil. If we wanted, we could still write more tests on ListBuckets, CreateBucket, or Upload to check that the arguments passed are correct; for the upload, for example, we supply a Body in the input, and we could assert on that if we want more involved tests. So this is how you typically test AWS endpoints: you write a mock client or mock uploader for every service you use and pass it to your function, and as long as the mock implements the correct interfaces, the client inside the function will always be able to perform an upload, a download, a ListBuckets, or a CreateBucket, and you can mock each of these in your test file.
In this section I'll be covering Microsoft Azure: I will show you how to use the Azure Go SDK to execute API calls on Microsoft Azure. What do we need before we can use it? First, create an Azure account if you haven't already, and download the Azure command line utility. Then you can run az login, which authenticates you in the browser, and the credentials will be configured in the .azure directory in your home directory. Azure stores a token there, and when you use the Azure Go SDK it reads the credentials from that token file and can then use the Azure API.

In our first lecture we want to create a virtual machine on Azure to show how the SDK works, and to do that we have to go through a few steps. The first step is to create SSH keys. I have a separate lecture on creating SSH keys in Go, so we'll just use the code that already exists as a library in my GitHub repository; if you want the details, have a look at that lecture. Then we initialize the SDK and retrieve the token from our config, the token stored by the Azure command line utility, and then we can start creating the resources on Azure that are necessary for our VM. First we need a resource group, which is just a logical grouping, so that's easy to create. Then we create the public IP, the vnet, the subnet, and a network security group, and then a network interface that we attach to the newly created virtual machine. The virtual machine will then have a public IP address that we can reach and SSH into.
To be able to SSH on port 22 we also need that network security group: it can allow access to port 22 based on an IP address, or simply allow all access to port 22 so that we can test the newly created instance. That's what we're going to do in this first Azure Go SDK lecture.

The first step is installing the Azure CLI, if you haven't already. I just typed "azure cli" into Google, and the first microsoft.com link brings you to the right page to download it. There are instructions for Windows, Mac, and Linux: on Windows you can download the latest release or a specific version; on Mac the easiest is brew, so I installed the Azure CLI with brew. Once installed you should have the az command available, and the first thing to do is az login. Open a new terminal and type az login; it opens your browser, where you log in to your Azure account with your credentials. This is what it looks like: you can pick an account if you've already set one up, or use another account, and once you've entered your credentials it says you have logged into Microsoft Azure and you can close the window. So make sure you have opened an Azure account first, then log in, and the Azure command line utility will have saved the credentials in your home directory. Once your credentials are set up, you can use the az command to verify them: a command like az account list should output the list of accounts that are available. If that command works and you don't get an error, your credentials are set up correctly on your machine.
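In a terminal, the setup steps amount to something like this (macOS with Homebrew shown as an assumption; on Windows or Linux use the installer from the same Microsoft docs page):

```shell
brew install azure-cli   # macOS; other platforms: see the install instructions
az login                 # opens the browser; the token is stored under ~/.azure
az account list          # if this prints your subscription(s), credentials are set up
```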
Azure Go SDK. The first thing I did was a go mod init for azure-instances, and I already put a little bit of code in place, just three functions: generateKeys, getToken, and launchInstance. generateKeys is the only one with code in it; the other two are still empty. What does generateKeys do? It uses the ssh-demo code to generate a private and a public key, so whenever we launch our instance it creates mykey.pem and mykey.pub, writes them to the current directory, and returns the public key as a string. That's all we need to launch the instance, because we launch it with our public key, so we don't really have to worry about how these keys are generated; if you want to know how GenerateKeys works, there's a separate lecture on SSH you can have a look at. The next step is getToken: initializing the Azure Go SDK to get a token that we can make API calls with. That we still need to write, and then we can try launchInstance: first create a resource group, then all the other resources, and finally launch the virtual machine. So let's start with getToken, execute it, see whether we get an error, and if not, move on to launchInstance.

How do we initialize the token? Let's first have a look at the Azure SDK: google for the Azure Go SDK, and here's the Azure SDK for Go on GitHub. There's a lot of information: the library folders are grouped per service under the sdk directory, and you need Go 1.18 or later because they use generics. There are also older packages you can still use if something is not implemented in this newer SDK. Under sdk we have azcore, azidentity, and the resource manager, all packages we can use. Let's look at azidentity: the Azure Identity client module for Go provides Azure Active Directory token authentication, so we'll have to start with that. What do you need? An Azure subscription, which you should have, authentication, and Go 1.18. It also explains the credential types: you can get credentials from the environment, or from a managed identity if you're on an Azure host. We are not on an Azure host, so we want to use the command line utility; looking for the best fit, here we see NewAzureCLICredential. Now that we have the CLI installed, we can use this to retrieve a token from our stored credentials. We also need a go get of azidentity, so let's try that in getToken; it downloads. Let's handle the error, return it, and otherwise return the token. The signature says string here, but I'm not going to return a string; I'm going to return whatever this credential call returns. Now that it's downloaded, save: azidentity.NewAzureCLICredential returns, no, not the options type, it returns the AzureCLICredential. So what do we need to actually invoke the API? We can check what we have to return here, whether a string from a GetToken call or this whole credential value. For now I'll output an empty string; when we start creating our first resource in launchInstance, we'll see exactly what we need, and then it will be clearer. I'm just going to save this and already run it, to see if this new
AzureCLICredential actually works. go run: OK, that seems to work; mykey has been created and getToken executed with no error so far. If you get an error here, there's probably something wrong with your credentials; if you don't, they should be good. So we have generateKeys and getToken, which just returns an empty string for now because we don't know exactly what launchInstance needs. Let's do the first step of launchInstance, creating the resource group, to see what type we need. I'll tell you why I don't want to put the exact type here yet: AzureCLICredential seems to me a type that is not abstract enough; it should just be an Azure credential rather than specifically the CLI credential, and once we start using the resource-creation functions we'll see what type they expect.

So, the launchInstance function. Remember the diagram from the previous lecture: the first step is a resource group, a logical grouping within Azure. azidentity we have; let's look in sdk again. The resource group is part of the resource manager, and if you click on resourcemanager you'll see everything that the resource manager handles in Azure. There's also reference documentation where you can search more easily, and if I look for "resources", within resources we find armresources, which can create a resource group for us: this module provides operations for working with Azure resources, and here's its reference documentation, which I'll open too. Do we have examples here? There's NewClient, which we can use, and we also need the subscription ID. The subscription ID is another Azure concept: we have a token for an account, but an account can have multiple subscription IDs, so you still have to retrieve the subscription ID yourself, and we're going to pass it as an environment variable, because once you have multiple subscriptions you have to pick the one you want to launch your resources in. You could also pass it as a flag or hardcode it, but be careful with hardcoding that you don't commit it to a public GitHub repository; that's why you probably want an environment variable or a flag. I'll copy the sample to start with; I probably don't need the options right now. There's more sample code for a resource group, with an actual main.go: create resource group is the one we need, CreateOrUpdate, and we pass a context, a resource group name, and the location, the physical region where you're going to launch. They declare westus as a static variable here; I'll also declare the location as a constant. I'll do an os.Getenv of the Azure subscription ID as well, and that's how we're going to pass it. So we have a subscription ID, then I initialize the client for resource groups, and I try to create a new resource group based on the same sample code: client is NewClient with the subscription ID; I can convert with string, or, since my variable already has the same type as the parameter, just pass it directly. Then the credential, and then the options, and if you don't need to pass any options you can pass nil. We haven't done a go get of armresources yet, so I need to do that; it was under resources, I think this one, not 100 percent sure, no, it's this one. Save, armresources is imported, and NewClient is asking for a credential, an azcore.TokenCredential. That's probably what we want to return from getToken. Let's have a look: azcore.TokenCredential is an interface with a single GetToken method, and our AzureCLICredential also has this GetToken; you see there's exactly one function. So if we return the credential, and it satisfies this GetToken interface, we can pass it along nicely. We'll call it credential: the subscription ID is a string, the credential is an azcore.TokenCredential, and I change getToken's return type to TokenCredential as well. We still need to pass the subscription ID and the context: context.Background(), and the subscription ID from os.Getenv. So either we pass the subscription ID along with our go run, or, in Visual Studio Code, you can add a launch configuration, which has an "env" object where you can set the subscription ID; if you use Run within VS Code you can define it there. Otherwise, what I'm going to do is just export it: export the subscription ID that I want to use, and then run the command. export doesn't really work on Windows; something like set should work there to set an environment variable. The subscription ID needs to be set because we're going to read it, and if you're on Windows and don't want to set an environment variable, you can also pass it as a flag; we have other lectures where we used flags for the parameters we pass. For this subscription ID we can actually add a check: if its length is 0, no subscription ID was provided, and we exit. Then we pass the context, the token, and the subscription ID; actually, the subscription ID comes first and then the token.
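Putting that together, getToken might look roughly like this. This is a sketch against the azidentity and azcore packages as I understand them, and the AZURE_SUBSCRIPTION_ID variable name is my assumption (the lecture just calls it the subscription ID); it needs a prior az login to actually work.

```go
package main

import (
	"fmt"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

// getToken returns the credential as the azcore.TokenCredential interface,
// not the concrete *azidentity.AzureCLICredential, so callers stay
// decoupled from how we authenticated.
func getToken() (azcore.TokenCredential, error) {
	cred, err := azidentity.NewAzureCLICredential(nil)
	if err != nil {
		return nil, err
	}
	return cred, nil
}

func main() {
	subscriptionID := os.Getenv("AZURE_SUBSCRIPTION_ID") // name is an assumption
	if len(subscriptionID) == 0 {
		fmt.Println("no subscription ID provided")
		os.Exit(1)
	}
	cred, err := getToken()
	if err != nil {
		fmt.Println("getToken error:", err)
		os.Exit(1)
	}
	_ = cred // passed on to launchInstance in the lecture's code
	fmt.Println("token credential created")
}
```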
Let's double-check the order: first a context of type context.Context, then the subscription ID, which is a string, then the credential, and the public key, and that should be it. So newClient gives us the armresources client; if there's an error we return it, and then we can use this client to create a resource group. Back to the example: we need to create a NewResourceGroupsClient, and then we use CreateOrUpdate on that resource groups client; note they call it groups, not group. I'll copy it, the parameters are the same, and then we call resourceGroupsClient.CreateOrUpdate with the context, then the name, then the parameters, then the options, which can be nil. What should we name the resource group? We can name it go-demo. I actually prefer the call on one line, since it's not that long, so let's put the parameters in a resourceGroupParams variable, because parameters typically get longer, and then the call still fits on one line. And the location? We can use a constant: const location = "westus". Do we have everything now? Oh, we have one comma too many, and "to" is not declared; I save, and now it is declared, because to is also a package in azcore. This to package has helper functions, just like we had with the AWS SDK, to convert a value to a pointer: to.Ptr takes the provided value and returns a pointer to it, so we pass to.Ptr of our location variable, and this just passes a pointer. You'll see this to.Ptr used a lot, and it's actually the reason you need Go 1.18 for this SDK: to.Ptr is declared with [T any], so it takes any type and returns a pointer to it; this is implemented using generics. Now we have our resource group response; we'll probably need it a bit later, so for now I put an underscore, save, and try to test it. First I set my subscription ID. How do you find your subscription ID? The easiest way is to log in to your Azure portal, look for Subscriptions, and click on it; I have only one subscription here, and my subscription ID is this one, a UUID, that's what it looks like. So: export the subscription ID, and again, on Windows the equivalent of export is set, so you can use set instead; you can also have a bash installed on Windows, which is a very nice alternative. With this UUID exported, go run should create our resource group. Oh, "invalid resource group location": what happened here is a conflict, because I already have this resource group in westeurope and now I'm trying to recreate it in westus. I'd have to delete that resource first, or just give it a different name: I'll say go-azure-demo; you can use go-demo unless you've also used a name that's already taken. go run, and that worked. You can always verify whether resources have been created by checking the Azure portal: under Resource groups we see our go-azure-demo, and if you click on it, you see the resources within this resource group, and there's nothing there yet.
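The resource-group step collected into one function: a sketch based on the armresources sample, assuming a cred of type azcore.TokenCredential coming from getToken, meant to be dropped into the lecture's main package rather than run standalone.

```go
import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources"
)

// createResourceGroup sketches the first launchInstance step.
func createResourceGroup(ctx context.Context, subscriptionID string, cred azcore.TokenCredential) (*armresources.ResourceGroup, error) {
	const location = "westus"

	rgClient, err := armresources.NewResourceGroupsClient(subscriptionID, cred, nil)
	if err != nil {
		return nil, err
	}

	resourceGroupParams := armresources.ResourceGroup{
		Location: to.Ptr(location), // to.Ptr converts the string to *string
	}
	// A resource group is a logical structure, so this is a plain
	// CreateOrUpdate: no poller, the response comes back immediately.
	resp, err := rgClient.CreateOrUpdate(ctx, "go-azure-demo", resourceGroupParams, nil)
	if err != nil {
		return nil, err
	}
	return &resp.ResourceGroup, nil
}
```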
So now we're going to create the next resources, the ones necessary to launch our VM. After the resource group, the next step is to create our vnet. Let's have a look at the documentation again: that was armresources, and under the resource manager, in network, armnetwork, we should have the virtual networks. There are a lot of files here; in the samples there's a virtual network and a subnet, and since creating a subnet also needs a virtual network, I'll look at the example code that creates a virtual network and then a subnet. For "create virtual network" we need armnetwork, which is resourcemanager/network/armnetwork; I'll copy that already, and then I'm just going to copy and paste this code. This code is a little different from our resource group. The resource group, which we still have somewhere here, was just NewResourceGroupsClient and then CreateOrUpdate, and the resource group is created immediately, because it's a logical structure, not a resource like a network or a VM that has to be launched; you create it and you immediately get the response. But with vnets and with a lot of other resources you actually do BeginCreateOrUpdate: it's not CreateOrUpdate, it just starts the operation and gives you a poller response, and then you use PollUntilDone to wait until the resource has been created, because that can take a few seconds, sometimes a few minutes, and only when it's created do you have the response. That's a bit different from our resource group, and most resources are actually like this: you create the resource, you have to wait a little bit, and then
you get the response. So let's copy this code to create our virtual networks client and do the vnet creation. This was create-resource-group; now we do create-vnet, and this needs armnetwork, so: go get armnetwork. This client also takes the subscription ID, the credential, and the options, which are nil, and then we have BeginCreateOrUpdate with the context and the resource group name. The resource group response is here, so we can use resourceGroupResponse.Name, the name of the resource group; it's basically the same as the constant, but if you'd ever like to change the name, you only have to change it in one place. Then we need the virtual network name, go-demo, we'll call everything go-demo from now on, and then the parameter, of type armnetwork.VirtualNetwork; I save so that armnetwork is imported. The name field is a string pointer, but the call just needs a string, so I put a star in front of it to dereference it. Then the VirtualNetwork itself: the location we have, and then we pass the properties, and one of the properties is the address space. Every Azure resource has properties, and because this parameter is just a struct, it's actually very easy to see what's required: VirtualNetwork has a Location that you need and a Properties field of type VirtualNetworkPropertiesFormat, the struct type defined here with all these options. The main one we have to pass is the address space, because a vnet needs an address space: AddressSpace takes AddressPrefixes, a slice of string pointers, so we have our string, but because it expects pointers as elements we wrap it in to.Ptr, and then we can use 10.0.0.0/16 or 10.1.0.0/16; it doesn't really matter, it's just your preference what you'd like as your address space. Then, if there's no error, we poll until done: pollerResponse.PollUntilDone, we pass our context, and it gives us the response; this is our vnet response, which we can use later if we need the name instead of the hardcoded value. What's next? Creating the subnet, which works very similarly: we have a subnets client, we call BeginCreateOrUpdate on it, and then poll. I'll copy this and put a comment, because here we are creating the subnet; if there's an error we return the error. The poller variable is going to be a problem; can we reuse this pollerResponse? Probably not, because the types are different, so let's call this one vnetPollerResponse and the other subnetsPollerResponse, and then we have our subnetResponse. We need the resource group name, resourceGroupResponse.Name; the virtual network name, vnetResponse.Name; and a subnet name, go-demo. Then we create the subnet and get the subnet response. We can already try to run this: no compile errors, let's go run, and this is going to take some time, but it should run without issues now. At some point we will get errors when we execute it again and again, and then it's best to start checking whether something already exists, because ideally you want to be able to run this over and over even though some resources exist. Let's see what happens when we run it again now that the vnet and subnet already exist. That actually went fine, because there are no dependencies yet, but once we start creating dependencies, there's going to be an error.
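The begin-and-poll pattern for the vnet and subnet might be sketched like this. It's a fragment for the lecture's main package; the module version and the 10.0.0.0/24 subnet prefix are my assumptions (the lecture doesn't fix a subnet range), and error handling is kept minimal.

```go
import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/network/armnetwork"
)

// createVnetAndSubnet shows the pattern: BeginCreateOrUpdate starts the
// operation, PollUntilDone blocks until the resource actually exists.
func createVnetAndSubnet(ctx context.Context, subscriptionID, resourceGroupName string, cred azcore.TokenCredential) (*armnetwork.Subnet, error) {
	vnetClient, err := armnetwork.NewVirtualNetworksClient(subscriptionID, cred, nil)
	if err != nil {
		return nil, err
	}
	vnetPoller, err := vnetClient.BeginCreateOrUpdate(ctx, resourceGroupName, "go-demo",
		armnetwork.VirtualNetwork{
			Location: to.Ptr("westus"),
			Properties: &armnetwork.VirtualNetworkPropertiesFormat{
				AddressSpace: &armnetwork.AddressSpace{
					AddressPrefixes: []*string{to.Ptr("10.0.0.0/16")},
				},
			},
		}, nil)
	if err != nil {
		return nil, err
	}
	vnetResp, err := vnetPoller.PollUntilDone(ctx, nil)
	if err != nil {
		return nil, err
	}

	subnetsClient, err := armnetwork.NewSubnetsClient(subscriptionID, cred, nil)
	if err != nil {
		return nil, err
	}
	subnetsPoller, err := subnetsClient.BeginCreateOrUpdate(ctx, resourceGroupName, *vnetResp.Name, "go-demo",
		armnetwork.Subnet{
			Properties: &armnetwork.SubnetPropertiesFormat{
				AddressPrefix: to.Ptr("10.0.0.0/24"), // assumed range within the vnet
			},
		}, nil)
	if err != nil {
		return nil, err
	}
	subnetResp, err := subnetsPoller.PollUntilDone(ctx, nil)
	if err != nil {
		return nil, err
	}
	return &subnetResp.Subnet, nil
}
```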
We would first have to remove the dependencies before we could recreate this vnet. So let's continue a little bit; I'll keep executing this, and at some point we'll see that error and have to add some extra logic. What's next after the vnet and the subnet? We can create the public IP, and let's see whether we can write this one ourselves without an example: armnetwork should also support a client for public IP addresses, and indeed, there's NewPublicIPAddressesClient. You see there are actually a lot of resources here, interfaces, IPs, load balancer frontends, virtual hubs; the Azure documentation explains the differences between all these services, and you can also sometimes start from the UI, create a few resources, see which resources you end up needing, and then write that in Go using the Azure SDK. So NewPublicIPAddressesClient gives us our public IP addresses client, and you see the constructor parameters are always the same. Then we use this client to begin creating the resource, and that call also has very similar parameters; it's just the parameters struct of each resource that is often difficult to figure out, but it should match what you'd fill out when creating the resource through the portal. If you don't really know what to fill in, I would create the resource first in the portal, see what the parameters are, and then use those parameters in the SDK. So: the context, the resource group name from the resource group response, the public IP address name, go-demo, the parameters of type armnetwork.PublicIPAddress, and then the options, which can be nil. Then we have the poller response, publicIPPollerResponse, then the error, and then we do the PollUntilDone, also checking for errors, so I'll paste that in after the BeginCreateOrUpdate. Now we just need to fill out the parameters. What should they be? The location, which we can copy from the others, and then the properties, of type armnetwork.PublicIPAddressPropertiesFormat; you also see Visual Studio Code auto-completing it, which makes this easier. Which IP address do we want to associate? We don't really want to hardcode an IP address; we probably only want to define the allocation method, because we want a static IP: PublicIPAllocationMethod. What does it accept? An IPAllocationMethod, and, oh, this is a string, not a struct, and it's a pointer, so to.Ptr, with the allocation method as the string. Normally, when a field takes a string and there are only one or two valid options, there are going to be constants declared within the package so that you can't mistype them. If I type armnetwork and then "allocation", nothing; "IPAllocation", and here we have IPAllocationMethodDynamic, and we need IPAllocationMethodStatic, of type IPAllocationMethod. Sometimes it's a little difficult to look for; you can also have a look at the reference documentation for the type IPAllocationMethod to see what's available, but if you search a little and type part of the name, you can often find it like this. So we're creating this IP address, we want a static IP, we don't want a hardcoded one, so we just pass this; that gives the poller response, and then we wait until it's done.
What's next? I'm just going to add some comments so that I don't lose track: this was the public IP. Next, let's create a network security group. That's also going to be in armnetwork, so let's copy this as a starting point. If I type "network"... there's nothing, but "securitygroups" gives me NewSecurityGroupsClient — "creates a new instance of SecurityGroupsClient" — and that's the one we can use: securityGroupsClient. And on it, BeginCreateOrUpdate "creates or updates a network security group", so that's the one we need. Again, it's not always easy to find the correct resource; scrolling through the reference documentation to see what's available, and having a look in the GitHub repository, is often the way to go. This has a response type we can use; it's going to be the poller response, networkSecurityPollerResponse, then PollUntilDone. What does it need? The context, the resource group name, the network security group name ("go-demo"), and then the security group as a parameter. The location is going to be the same, and then we have the properties, of type SecurityGroupPropertiesFormat, and this time we have different contents here: a collection of security rules. Now, what we want to do is allow access to port 22. You can allow access to port 22 just from your own IP address, but to keep it easy we're just going to allow all access to port 22 — you can still change it later if you only want to allow your IP address. I'm only going to run this instance for a couple of minutes, so it's easiest to just allow all traffic. You see we have a lot of read-only fields here; SecurityRules is the one to go with. SecurityRules is a slice of pointers to SecurityRule, so we have our elements, and they need to be of type armnetwork.SecurityRule. Let's have a look at what goes in there: a name for the security rule, and then again properties, and those properties are the rule itself. So we have a name, and this is going to be a pointer, I guess — is it a pointer? Yes — so Name is going to be "allow-ssh". Then we have the properties, of type armnetwork.SecurityRulePropertiesFormat — and you see it added this ampersand, because it needs to be a pointer. And then what do we have? A description, a destination address prefix and destination address prefixes — so one is just a pointer to a string and one is a slice — a destination port range and ranges, and a priority, which is just a number. This actually reflects exactly what you see in the UI when you create a network security group: the content is going to be exactly the same. You always have a source and a destination, so let's start with the source. SourceAddressPrefix — we only need one, so I'm going to say to.Ptr — and what is our prefix going to be? Just all IP addresses: "*". SourcePortRange can really be anything, so I'm going to say "*" as well, because we don't really know what the source port will be. The destination, though, is going to be 22. DestinationAddressPrefix is going to be "*" again — we don't really know what it is, and it doesn't really matter here — but DestinationPortRange is going to be "22", so we're allowing access to port 22. Then we have a protocol — what's the protocol going to be? TCP. Is AzureFirewallNetworkRuleProtocolTCP correct? No, we're looking for... SecurityRuleProtocol? That's not it, I think. What do we have in armnetwork with "tcp"? OK, now I get a lot more: InboundSecurityRulesProtocolTCP? I think that's it — no, that's an "inbound" type; I just need "security rule"... ah, there it is: if I type "securityruletcp" it actually matches SecurityRuleProtocolTCP, and this is of type SecurityRuleProtocol, which is what I'm looking for. You also have Access — what access are we going to allow? Allow: SecurityRuleAccessAllow, that also matches. And then what else do we have? Direction and description. Maybe description first — you often only write these things once in your life and then just copy-paste them, that's how I do it: "allow SSH on port 22". Direction, also with to.Ptr, and the direction is going to be inbound: armnetwork "inbound"... hmm... SecurityRuleDirectionInbound — yes, that's it. So this rule is for inbound traffic, and then we still need the priority. The priority is an int32, and the value can be between 100 and 4096 — it only matters if you have multiple rules. I'm just going to say 1000, with to.Ptr. "Cannot use value of type int, needs int32" — so if I convert it with int32(...), then to.Ptr makes it an *int32 and it works. OK, so this is the security group, and then we just poll until done. Let me rerun it, because I actually didn't rerun it after the IP address — let's see if it's still working. That worked, so now we have an IP address allocated and a network security group. But if I run this again, will it still work? Because what happens is, we're going to recreate some of these elements... OK, that still works: we can still do a create-or-update of all these elements. I just want to see — at some point we'll get an error, but we're not there yet, so we can still run go run as much as we want. When you're developing, I find that quite important: you shouldn't always have to go in and remove all the resources; you should be able to run go run over and over, so you can keep testing if there's one resource that doesn't want to be created because you made a mistake in your properties, or you're just experimenting with the properties.
What's next? Before we can launch our virtual machine, we need a network interface, and this network interface is really going to tie everything together: our subnet, our network security group, and our IP address. The first step is to create a new client for the network interface. It's still in armnetwork: NewInterfacesClient, with the subscription ID, credential, and nil options, and we'll call it interfacesClient. And what can our interfaces client create? A new interface: interfacesClient.BeginCreateOrUpdate with the context, resource group name, network interface name ("go-demo"), the armnetwork.Interface parameters, and then the options. Then we have again this poller that I'm just going to copy-paste from the public IP poller: we have the networkInterfacePollerResponse and the error, and that gives no errors anymore, so that should work. But now we need all these parameters. Location — that's an easy one. Properties — now it comes — of type InterfacePropertiesFormat. What do we need? The network security group: typically, when you refer to another resource, you refer to it by ID, so we can say ID, and where is our network security group... here: we can put networkSecurityGroupResponse.ID. Does this work? "Undeclared name" — because we need the colon (:=) here. Now it works. We also need the subnet — do we have a subnet field here? I don't see one immediately, but it's most likely going to be in these IP configurations, because a network interface has IP configurations. If you don't know immediately, you can have a look at the example code; that might give you some clues. Let's see what's in here: a slice — so this is a slice declaration — and then we have the elements. What do we have as elements? A name and properties. The name can be "go-demo" — it needs to be a pointer to a string — and the properties need to be of type armnetwork.InterfaceIPConfigurationPropertiesFormat. What's in there? A lot. There's a private IP address, but here too we're not going to declare a fixed private IP address: we want a dynamic private IP address. We wanted a static public IP address, but the private IP address can actually be dynamic. So what do we need? The IP allocation method: armnetwork.IPAllocationMethod, and this time it's going to be Dynamic — very similar to what we did with Static, but now Dynamic, for the private IP address. Then the subnet — let's have a look — armnetwork.Subnet, and what goes in there is again an ID. Where did we create our subnet? Right here: we have subnetResponse, and it has the ID — every resource you create in Azure has an ID, and that's basically what we need to pass: subnetResponse. But I also need this colon, which I keep forgetting — every time you create a new variable you need :=, and because err was already defined, that's why you now need the colon. So we have all this. What's the last thing we're missing? The public IP address, which was right here: publicIPAddressResponse — a colon here too — and now we link everything together in this network interface: armnetwork.PublicIPAddress, and the ID will be publicIPAddressResponse.ID. Looks good to me — let's try to execute it. Do we have the PollUntilDone? Yes, we do. Next will be the virtual machine, because then we link our network interface to our VM. So now I have this network interface — what happens if I execute this a second time? Now we actually get the 400 Bad Request I was talking about. What does it say? There was an error with the PUT request for the virtual network: when you have this BeginCreateOrUpdate on the virtual network, it can't actually update it anymore, because now you have a network interface attached, so any updates are
forbidden. So ideally, when we do the creation of this virtual network, we want to check: does it already exist? If it doesn't exist, we create it; if it does exist, we skip it — otherwise we'll always get this error and have to remove our network interface every time we want to execute this again. So it's actually best to have some code to check for that, and that's what I'll do next: create some code to check whether our virtual network already exists, and if not, we'll create it; otherwise we won't.
In this lecture I want to check whether our virtual network already exists; if not, we're going to create it, and if it exists, we're going to skip it. So I'm going to write another function, findVnet. If we find our vnet, we skip creation, so I'll say: if not found, then create it. This code all belongs in here, but I can already see that we'll need our vnetResponse later on, because here we need vnetResponse.Name. vnetResponse is of type armnetwork.VirtualNetworksClientCreateOrUpdateResponse, so let's look at how we can extract something out of it. We could just get a name, but then we don't have all the other attributes. Here we have VirtualNetwork, and that's probably a better choice. So if we have a variable vnet of type armnetwork.VirtualNetwork, then we can say vnet = ..., and our vnet will be accessible out here, because the scope is different: the scope of this vnetResponse is only within this if, and outside the if we've declared vnet. So if we say vnet = vnetResponse.VirtualNetwork, our vnet will be accessible outside this if statement. For findVnet, we still need to figure out what we're going to pass in — well, probably our virtual networks client, because this client will need to check whether our vnet already exists — and we'll probably also have an error, so we're going to return found and an error. Let's write this function: findVnet takes the vnetClient, of type *armnetwork.VirtualNetworksClient, and returns a boolean and an error; return false, nil by default, meaning we couldn't find it. How do we find it? On the vnetClient we have this BeginCreateOrUpdate function, BeginDelete... or we have a Get. Get accepts a context, resource group name, virtual network name, and options. If the vnet exists, it returns this VirtualNetworksClientGetResponse; if it doesn't exist, it returns an error — and "if the operation fails it returns an *azcore.ResponseError type", and this azcore.ResponseError type we can use to see what kind of error response it is. So we need the resource group name, which is going to be a string, and the vnet name, which is also going to be a string, and we also need a context. We don't need to repeat "string" for the first parameter, because we already wrote it for the second — they'll automatically both be of type string. Context, resource group name, vnet name, and nil options gives us the vnet and the error. If there is an error, we want to check whether it's an error because the vnet doesn't exist, or something else. We also get the vnet, so if it exists we probably also want to return it — but we want to make sure the type matches. Here we have a VirtualNetworksClientGetResponse, and that's not something we want to return; we'd like to return the VirtualNetwork, of type armnetwork.VirtualNetwork — which is actually the same type as our vnet variable from before. So we can actually remove that and do: vnet, found, err := findVnet(...). This will be a vnet of type armnetwork.VirtualNetwork; if it's not found, we create one and put the resulting VirtualNetwork in this vnet; and if we actually found one, we don't execute the creation, and the vnet we found is in this vnet variable. Pass the context, and then what else? resourcegroupResponse.Name, and the vnet name, which is going to be "go-demo" — what is this? It needs to be a pointer, OK. So: find the vnet; if it has not been found, we create one and assign it to the vnet variable; if we found one, we can just use the vnet variable later on in our program — like here, for example.
We still need to return the vnet from findVnet. If we don't have an error, we found it, so found is going to be true. If we have an error, we need to check whether the error was "not found" or something else, and the way you can check this is with a helper function that tells us whether the error is this response error. So, if the error is not nil: errors.As "finds the first error in the error chain that matches the target, and, if one is found, sets the target to that error value and returns true". We pass the error as the first argument and then the target, and the target can be anything — and actually, the way this works is the same way we do a JSON marshal or unmarshal: we create a variable first. So we have our respErr of type *azcore.ResponseError, and what errors.As will do is, if one is found in the chain, set the target to that error value and return true. So if this is true, our respErr will have the error in it, and it will be of type *azcore.ResponseError. Here err is of type error, and if errors.As returns true — it's a boolean — then the target contains the error value of type *azcore.ResponseError. If you're confused by this, have a look at the error-handling lectures at the beginning of the course, where we had something similar going on: this is Azure's implementation of custom errors. So now that we have this respErr, if the error code is equal to "NotFound", we can return to our caller that it's not found — and I think there are even some other response codes we could use. StatusCode is an int — that's most likely the HTTP status code — so you could also just check for the 404; 404 or "NotFound" should amount to the same. Then we return the virtual network, which will be empty, but we also say we didn't find it, so the caller will run the code to create a vnet. If the error is not the not-found error, then we return the empty VirtualNetwork, false, and the error, because some other error happened — we still want to return an error in case something went wrong with the Get, so that we stop the program. That should be it, really: our vnet will only contain something if we end up here, returning true; in the other cases it's not going to contain anything, and we need to check the found variable. So let's have a look — we already created our vnet, so let's do a go run, and now it should not create a vnet, because it's already found. Will that work? Let's see. And when you test these flows in your program, you have to test them both ways, because now you see it actually works — but we didn't create the vnet, right? We skipped it. So let's try to run it again after removing the vnet, just to see if our whole program would still work. Not creating the vnet actually worked, but let's just delete everything again — delete, and then yes — and let's see if we can recreate everything nicely. So it's now doing this delete; it's running, it will take some time, but once these resources are removed I'm going to execute it again and see if it still works — then we'll hit the flow where we still need to create a vnet. Succeeded — go run — and now go run is creating all these new resources... oh, we got an error, a Get error. Maybe I made a mistake here: "RESPONSE 404: not found". Oh yes, I see — I made a mistake: we ended up right here, because the error code is actually "ResourceNotFound", and the response is 404 Not Found. So we can either check the status code, or, I think, we can check for the error code ResourceNotFound.
Let's try that and see if it works. So that's what I mean: it's always important to check whether both flows still work, because in our first flow there was no error — we just found the vnet and returned it — but now we're basically testing whether this code path actually works. And now it works; if you refresh here and have a look again in our resource group, we again have those four resources. So the next step is going to be to create our virtual machine, finally.
In this lecture I'm going to create the virtual machine. Let's have a look at examples, because it's a bit difficult if you don't even know which package to use. This was the resource manager network example we used earlier, in the Azure samples. Let's go back into the Azure SDK for Go: I'll click sdk/resourcemanager, and then we should have compute somewhere — resourcemanager, compute, armcompute. Here we have again lots of files. This is what you're going to need: go get the armcompute package. Then we have samples — there's a sample of a virtual machine that I'm going to use. In its main.go we have the createVM function — "create virtual machine" is a function they have — and you can see it actually takes a lot of parameters, and that's why I wanted to copy-paste it: these parameters could be difficult to figure out without an example. Let me just import the package first (go get armcompute) and then copy these parameters. All right — we have this armcompute.NewVirtualMachinesClient, and this poller and poller response are exactly the same as what we've been doing so far. I'm going to copy this and paste it right here as createVM. This can take a long time, so I'm going to print "creating vm", just so we know in the code that we started creating the VM, because it easily takes a few minutes. If I save now, we import armcompute. We need the VM client; the SSH key we already have in this code — we have the public key that we're passing. Then the location, the identity, and then we have the storage profile with the image reference — and here we need an image. What are we going to launch, a Windows server or a Linux server? I'm going to launch a Linux server, and we need the SSH key for Linux. This sample uses Ubuntu 18.04, but I don't really like 18.04, so I'm going to use another one — I think it was az vm image list --output table to find the images. I already know which one I'm going to launch, so I'll just change these variables a little to launch 20.04; I actually found those values in a GitHub issue on the Azure SDK where they said you can use them for 20.04. You can still use 18.04 if you want, or even launch a Windows server — it doesn't really matter at this point; you just want the latest Ubuntu LTS in general, which is 20.04, and if a newer one comes out, use that. What do we have then? We have the image, and we need a disk. The disk name can be "go-demo", and the rest I'm going to leave — the caching, the managed disk — these are all Azure options for how you want your VM, but I'm just going to create a smaller, 50 GB disk for our virtual machine. What else do we have? The hardware profile — what kind of instance do you want to launch? This determines how many CPUs and how much memory your instance gets. Instead of an F2, I'm going to launch a B1, which is very small — a development instance, so it's very inexpensive to launch. Computer name "go-demo", admin username "demo"; an admin password I don't need, I'm going to use an SSH key, so then we say, in the Linux configuration: disable password authentication, true. We don't need password authentication — we're going to use a public key, and we're going to write this public key to the demo user. So this is the path we'll write to in the VM, and this is our key, which we pass in all the way at the beginning. It's actually asking for a pointer to a string, so what I could do is convert it here — but instead I'll change the variable into a pointer right where it's created, so that all the way down at the bottom I don't need the to.Ptr anymore; I can just pass pubKey, because it's already a pointer. So we have our SSH configuration and our network profile — that's why we need our network interface: networkInterfaceResponse. I'll copy this, and it's going to be the ID — and always this colon that I keep forgetting. So we have the network interface we created; we've configured the login (no password, an SSH key), a Standard_B1 instance, an Ubuntu image, and a 50 GB disk — that's enough to launch our virtual machine. BeginCreateOrUpdate, resourcegroupResponse.Name, the name of the VM — I'll call it "go-demo" — and the parameters we're passing; if it goes wrong, we return the error, and then we have the VM response. I'm just going to output it when it's created: Printf "vm created", and then the ID — or the name; the ID will just be the full resource ID. That should work... I still have one error somewhere — ah, the subscription ID: I have a capital D here. Is it going to work? I think so, so let's try to run it, and then I'll open a second screen and SSH into this machine. Let's look up the IP address: we could have output it, because the network interface also has the public IP address, or, while the VM is creating, we can have a look in the portal.
In the portal, this network interface has a public IP. Copy it, open a new bash prompt, and we can use the ssh command. If you don't have the ssh command, you can download OpenSSH for Windows, or you can use PuTTY if you're more familiar with it — but if you're using PuTTY, you'll still need to convert mykey.pem with PuTTYgen. We also built an SSH client in one of the earlier lectures, so you could SSH into this with that client as well — I've looked at the ssh demo, which also includes an SSH client. "demo" is our login, then the IP address, and that should be it. If you don't get a prompt, you have to wait a little longer, because it can take a few minutes before the VM is up and running. So now it's still booting... and here we have our go-demo, up for one minute. This actually works: we've launched a VM, and I've logged into the VM we just created. And this is the output — the full ID of the VM that has been created. So we can just exit, and if you don't want to keep the VM, make sure to delete it: go back to the portal and delete it, and if you want to run the program again, you can just go build it again and run it, or do a go run. So we've created quite a few resources. If you were to continue working on this program, I would definitely recommend splitting createVM out into multiple functions — maybe one function for every client; that keeps the overview a little better than what I did. I was just focusing on explaining how to launch these resources, so I didn't want to split everything into functions. Let's try to remove this VM, and then we're finished with this demo. If I go back to the resource group, go-azure-demo, and refresh, I now have this virtual machine, and you can remove it — and you should probably also remove the network interface, the public IP address, and the disk, just to make sure you're not incurring any charges. The disk is going to cost you money, the virtual machine too, and also the public IP address: if it's not in use there's a very small fee to pay every month — if it is in use you don't have to pay, but if it's not in use you will. So make sure those are deleted for sure. The network security group incurs no charges, and neither does the virtual network, but to make everything clean you could actually remove the whole resource group if you're done testing — I'll just remove the whole resource group. So that's it for this lecture. It was quite a lot to get this VM launched, but at least you got some training in launching all these different resources. Now, if you want to launch another resource, like a database, just have a look at the reference documentation for the SDK, and it should be fairly straightforward — very similar to what we did to launch these other resources on Azure.
In the coming lectures I'm going to show you how to use Go with Kubernetes. If you're not familiar with Kubernetes yet, I would recommend first taking a Kubernetes course — I have a Kubernetes course on Udemy, for example, that you can take. What are we going to do in this first lecture? I'm going to show you how to use the Kubernetes client in Go. Kubernetes has an API — a well-documented API — that you can call using REST calls, so just over HTTP, using X.509 certificates for authentication. We have certificates configured in our kubeconfig configuration file, and using those certificates you can authenticate to this HTTP REST API — the Kubernetes API — and interact with it. You could do this with curl, or with the HTTP client in Go, but it would again be very involved to do all of this yourself. So there is a Kubernetes Go client available that does all the heavy lifting for us, and we just have to supply the config file and the API calls we want to make. The Kubernetes API will then reply with a JSON response: when we make an API call, the Kubernetes API responds with JSON. This Kubernetes API is the same for
any Kubernetes distribution: whether you run minikube, or on AWS using kops, or Google Kubernetes Engine, or Azure Kubernetes Service, or anything else, it should always be the same. There are differences, but mainly in the API versions of the Kubernetes API, so depending on which Kubernetes version you're running, you will need a different Go client. The version doesn't have to match exactly, and on their GitHub page they have a compatibility matrix showing which client version is compatible with which server version. To make these demos work you need a Kubernetes cluster, and the easiest way to have one running on your machine is minikube. So let's have a look at minikube. This is the Kubernetes minikube GitHub page — github.com/kubernetes/minikube — and here you can download minikube from the releases, or you could go to the documentation, which has an installation page and a Get Started page, and it explains what you need: "all you need is Docker (or similarly compatible) container or a virtual machine environment". What does that mean? You need either Docker installed — just Docker for Windows or Docker for Mac — or you need virtualization support: HyperKit, Hyper-V, Parallels, VirtualBox (which is free to download) — in case you don't have Docker installed, or it's not compatible with your system. I'm on macOS, but if you're on Windows, there are just a few commands you have to enter to download the installer and add it to your PATH. On Windows I also like Chocolatey — it's a package manager; you can just install it and then do choco install minikube, and it'll do everything for you. I'm on macOS, so I'll execute these commands, or you can use brew as well. Then you can start a cluster with minikube start. If you have any problem with minikube start, it'll most likely be an issue with one of these virtualization managers — for example, if you run VirtualBox, there's a VirtualBox page you can look at: in the case of VirtualBox you first need to download it, and then minikube start --driver=virtualbox runs minikube on VirtualBox. The easiest way, I find, is to use Docker: I have Docker installed, so minikube will run inside a Docker container on my laptop. So when I do the demo in the next lecture, I'll have my minikube cluster running, and the only thing I did was just minikube start.
In this demo I will try to deploy a new container, packaged in a Deployment, on our Kubernetes cluster that is running locally. So I started in a new directory, kubernetes-demo, I did go mod init, I have a main function, and the first thing I need to do is read about how the Go client for Kubernetes works. So let's google "kubernetes go client": github.com/kubernetes/client-go — this is it, and here it says what I need to do. First, it says to have a look at the compatibility matrix: depending on which Kubernetes version you're using, you want a different Kubernetes Go client, and to get it, you go get client-go together with the version number. They also say: "we recommend using the v0.x.y tags for Kubernetes releases" — for example v0.20.4 for Kubernetes 1.20.4. I'm on 1.24, so I should use v0.24, and then the minor version — I'm on 1.24.1, so I'm going to use v0.24.1: go get the client (this is what I copied from the main page) at v0.24.1. You can find your version by doing kubectl version, or just have a look at the output of minikube start, which also tells you what Kubernetes version you're on. Other versions will also work — if you have a newer version it most likely will work too; it's just that for best compatibility you should use one that matches. New API calls get introduced in later versions, so if you want to use those new API calls, then you need to use the matching client version. For the API calls that we are going to use, you could
even potentially use this older version; it just depends on whether the API calls for this Deployment are going to change. If they change, you will need a newer version for a newer Kubernetes cluster. And when we do this go get, we get an error: it requires this google.golang.org package and this other package, and when it arrives at github.com/golang/mock v1.4.3 it says it cannot read the data in the server response. It's possible that when you do the go get with a newer version you don't get this error — there is just something wrong currently in this mock repository. For some reason I cannot get the v1.4.3 version, but when I tried earlier I could get the v1.4.4 version, so this is an excellent opportunity to show you how to force a certain version. We can write replace, and then this package without a version: when we see github.com/golang/mock, replace it — still the same package. We could also substitute another package: for example, if you were to fork it to your own repository and make a fix, you could also do a replace, and then instead of golang it would be your GitHub account. But we're going to keep it on this golang/mock; we're just going to force it to a different version. So we say: I want to use v1.4.4 instead of v1.4.3 — and it actually works. So now we are using the v0.24.1 Kubernetes client and we're forcing it to use the mock package at v1.4.4. Just like with our other packages, we first need to initialize the client, and we need to make sure that we provide the correct config. So let's have another look at this client-go; they have an example somewhere of how to use it: "If your application runs in a pod in the cluster, please refer to our in-cluster example; otherwise please refer to the out-of-cluster example." So there are two ways of authenticating to Kubernetes. If you're within the cluster it's easier, because the API server is already there: you just have to contact it locally, and it knows that you are coming from within a pod. But
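The replace directive described above would look roughly like this in the go.mod file (a sketch; the module name is a placeholder, and the version numbers are the ones mentioned in the lecture):

```
// go.mod (fragment) — forcing golang/mock to v1.4.4 instead of v1.4.3
module kubernetes-demo

go 1.18

require k8s.io/client-go v0.24.1

replace github.com/golang/mock v1.4.3 => github.com/golang/mock v1.4.4
```

A replace without a version on the left side (`replace github.com/golang/mock => …`) would redirect every version of the module, which is what you'd use when pointing at your own fork.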
if you're external, then you need to do authentication — you need your certificates — so it needs a kubeconfig file. Let's have a look at this example main.go. Here we have some code that we can use to create the clientset, this client that we need, and with this client we can then do API calls: we can do CoreV1().Pods() to list the pods of my Kubernetes cluster, or create a Deployment. So I'm going to copy this clientset part, because this is how we're going to get our Kubernetes config. Actually, let me just copy everything; I will remove the flag parts, because we are not going to work with flags. It's always good to start from an example. We are going to call this our getClient function, and we are going to return whatever this NewForConfig returns — we don't know the type yet, because we still need to save the file and import some packages. Let me just put all of this in the function first; this is just the code that we copy-pasted. I'm not going to work with this flag, so I'm going to remove it, and I'm not going to check whether a home directory was supplied: I'm going straight to the home directory, and that's going to be the path where I look for this kubeconfig file. If you want to supply another kubeconfig, you can just hard-code it here, because the second parameter of BuildConfigFromFlags is just a path to your config file. The path to your config file should be your home directory — the tilde sign means my home directory — and .kube/config is then your Kubernetes configuration. On Windows it should also be in the home directory, in the .kube directory, where you should have the config file. If your config file is somewhere else, you could override it: you could just say my kubeconfig is, for example, a file called config in the directory where I am executing this Go program. So let's save this. It's not automatically adding all these imports, so I
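A sketch of the getClient function being built here, under the assumption that the kubeconfig lives in the standard ~/.kube/config location (the package paths are the usual client-go ones):

```go
package main

import (
	"os"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// getClient builds a clientset from the kubeconfig in the user's home
// directory — the out-of-cluster way of authenticating.
func getClient() (*kubernetes.Clientset, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return nil, err
	}
	// BuildConfigFromFlags takes (masterURL, kubeconfigPath); an empty
	// master URL means "use whatever the kubeconfig file says".
	config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(config)
}
```

To point at a different config file, you'd only change the second argument to BuildConfigFromFlags.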
will have to add them myself, or I can copy-paste them from the example. Let's just copy-paste everything that I found here and save; it'll remove the ones that I'm not using, and we might have to import some more. Let me see if I need a go get for this kubernetes package — it's already imported, because I did the go get of client-go, and that also includes all the subdirectories. Okay, so now that I did this go get, Visual Studio Code actually updated the go.mod file, so now I have the correct packages in my go.mod. What do I get here? A Kubernetes clientset, so what I'm going to return is this *kubernetes.Clientset and potentially an error. Then I can say var client is my Kubernetes clientset, and I'm going to do client and an error from getClient — but then I have to define the error as well. If the error is not nil, then I'm going to print the error and always exit with status 1. Save this. Okay, I need to remove this colon; client is not used, and here I still need to return either an error or nil: the clientset and nil, because the error is nil. The client is here, and I could already test this: fmt.Printf the client. I have no idea what is in this client — it's just functions that we can use — so I'll just try to print it, which will give us some fairly useless information I guess, but I want to see if this actually works. Clear the screen, go run: what does it do? "No configuration has been provided, try setting KUBERNETES_MASTER environment variable." So it's actually not working, because my Kubernetes cluster is not running: if I do kubectl get pods it says connection to localhost refused. So I still need to start my minikube cluster; it says that it detected the Docker driver, and now it's going to create the cluster on the Docker driver. So now it needs to run this Kubernetes cluster — sorry about the background noise it's going to make, this is all very heavy and my MacBook starts using its
fans. Well, actually I'm not on 1.24, I'm on 1.23, but 1.24 will come out very soon — I just have an older minikube version installed right now — and this v0.24 client that I'm running here will work with 1.23 as well. Let's try to run our program again. Okay, so now I have the client ready, so now we can write the actual deploy code; I'll just pause for a second, because this fan noise will stop once everything is booted up. Now that we have the client, we can actually connect to our Kubernetes cluster, so let's try to run a deployment. I have here an app.yaml file of kind Deployment. This is the apiVersion; it has a name, the hello-world deployment, and in the spec we are going to run one replica, so one instance of our Deployment. Then we have a selector, which is going to select the template with the app: hello-world label. We have here the template and the spec of our container: this spec will run the kubernetes-demo image from my Docker Hub repository, and it will expose port 3000, so this runs on port 3000. This label we will also see in the pod definition, so later on we can filter on the pods that have this label to see whether our pod is running. What do we need to do first? We need to do the deployment, so let's create a function deploy. What are we going to return? We'll see that later; we'll start with an error. We definitely need to pass the client, so we take the client, a *kubernetes.Clientset, and here let's start with calling deploy and passing this client. This we can remove, or we can just print "deploy finished", so that we know the deploy has finished. So how do we do this API call? Well, the apiVersion is apps/v1, and you will find this in the client as well: AppsV1 is an endpoint on this REST API, and within this client package we have the same division as the API endpoints — AppsV1().Deployments()
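A reconstruction of the app.yaml being described, under the assumptions stated in the narration (one replica, an app: hello-world label, container port 3000); the image name is a placeholder, since the exact Docker Hub repository isn't spelled out:

```
# app.yaml — sketch of the manifest described above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: <your-dockerhub-user>/kubernetes-demo  # placeholder
          ports:
            - containerPort: 3000
```

The labels under spec.template.metadata.labels are the ones that end up on the pods, which is what the later wait-for-pods filtering relies on.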
and here we have to pass a namespace, which is a string: either we pass the empty string, which will then use the default namespace, or we specify "default" explicitly, or we create a namespace. I'm going to say "default" — you could pass it as a parameter, or you could hard-code it, if you want to deploy to another namespace than default. Default is the one that is there by default in minikube. We can then do Apply, Create, Delete, Get, List, Patch, Update — let's start with Create, because that's the easiest one. Create needs a context, so let me pass a context here: I will create a new one with context.Background() and pass it along. So we have a context, and then we need a v1.Deployment. This v1.Deployment is actually all of this YAML, but it needs to be in the v1.Deployment type — and it's still YAML, so we need to parse it, and these Kubernetes packages actually have something to parse it. So let's not do it like that; let's pass a deployment, and we say var deployment is a new pointer to v1.Deployment. Then we need to add another argument, the CreateOptions — which doesn't seem to work, because what is v1 here? apps/v1. Let's have a look at the Create definition: metav1. Okay, so we can just copy-paste metav1 — these are the CreateOptions that we can pass: metav1.CreateOptions. And it still doesn't work... metav1.CreateOptions works, but it's not a pointer; so this is how we should do it. What does it return? A Deployment — call it deploymentResponse — and the error, and if the error is not nil we can return the deployment error, like this. Once we have deploymentResponse, what we actually want is to return these labels: we're going to assume that whenever we do a deploy we have these labels, and in the next function we want to filter on these labels when we get the pods, just to see if the pods are
running. So if we copy them from this template's labels, then we can use them later on. So if I say deploymentResponse, then we have the Spec — we have this here as well, the spec — then we have a Template, and then we have Labels, and then no error. But we need to change our function signature, because now we are returning a map: app will be the key and hello-world the value, and we just need to make sure that we also return something here, so the map will be nil. But we still have an empty Deployment object, and we cannot deploy an empty Deployment object — that will not work, it will give us an error. We still need to convert this YAML file into this Deployment struct. Now, you could use some YAML unmarshal package to unmarshal this into the spec; I actually tried it, and that doesn't work 100%. When I was looking into the documentation, I found that this Go Kubernetes client has its own functions to do the unmarshaling — some helper functions that can help you. It's not very well documented at this point in time; I just found it in a GitHub issue, and I'm going to copy-paste the code that I found in that issue and then try to explain it. So these are the two lines: scheme.Codecs.UniversalDeserializer() — they call it deserializing your YAML into the Kubernetes scheme — and this is the Decode function that we are then going to call. The only thing that we need to supply to this Decode function is the contents of our app.yaml, and it will magically make a Kubernetes object from it. That's all good, but I already saved it and it couldn't find this scheme package, so we'll need to help Visual Studio Code a little bit: it's going to be in this client-go Kubernetes package, under scheme — and now it found it. So appFile is of type []byte. This is actually a function call, so we could make this a bit shorter, because we're only going to use it
once: we could also just write scheme.Codecs.UniversalDeserializer().Decode, and this is our decode function. Decode attempts to deserialize the provided data, which is YAML — but I think it can also be something else; it can probably also be JSON. That's why it's really a decoder and not just an unmarshaler: it just accepts bytes, and YAML and JSON are both supported. It says the object is not guaranteed to be populated; from this runtime.Object we still need to figure out whether it's a Deployment object. Let's start with reading our file: appFile and an error from ReadFile — it's ioutil.ReadFile — and ReadFile expects a file name; our file name is app.yaml, with a readFileError in case there's an error. So you read the app.yaml file, pass it to the Decode function, and the Decode function gives us a runtime.Object. This object is going to be of type v1.Deployment, but we don't know that yet, because this is a universal function for all the Kubernetes objects: it just gives us a runtime.Object, and we need to test whether it could be of type v1.Deployment — and if it is, we want to put it in this deployment variable and then do the Create. How do we do that? We can use a type switch on the object type, and if the type is a v1.Deployment — case *v1.Deployment — then we know it's this type. If it is not this type, then we can make a catch-all default, and return an error: unrecognized type. The type is the GroupVersionKind; this universal deserializer also returns the type, so if we don't recognize it, we can give a clue, and then we might have to add more cases. So if it's a *v1.Deployment, then deployment equals the object, but with the *v1.Deployment type — we're just converting this object to the *v1.Deployment type. Save this, and then we could actually test it. Let's have a look whether we have deployments: kubectl get deployments — no, we don't have any deployments, and we don't have any pods. Okay, let's run
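The reading, decoding, and type switch described above can be sketched like this (a sketch, not the lecture's exact code; os.ReadFile is used in place of the deprecated ioutil.ReadFile, and the helper name readDeployment is mine):

```go
package main

import (
	"fmt"
	"os"

	v1 "k8s.io/api/apps/v1"
	"k8s.io/client-go/kubernetes/scheme"
)

// readDeployment decodes a YAML manifest into a typed *v1.Deployment
// using client-go's universal deserializer.
func readDeployment(path string) (*v1.Deployment, error) {
	appFile, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("readfile error: %s", err)
	}
	obj, groupVersionKind, err := scheme.Codecs.UniversalDeserializer().Decode(appFile, nil, nil)
	if err != nil {
		return nil, err
	}
	// The deserializer returns a generic runtime.Object, so a type
	// switch checks that it really is a Deployment.
	switch obj := obj.(type) {
	case *v1.Deployment:
		return obj, nil
	default:
		return nil, fmt.Errorf("unrecognized type: %s", groupVersionKind)
	}
}
```

More cases (Service, ConfigMap, …) can be added to the switch later, which is the "give a clue" part: the returned GroupVersionKind tells you what type the unhandled manifest was.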
this deploy. Oops, I still have an error somewhere — oh yeah, I didn't change this code here. So we say deploymentLabels is a map[string]string, and then we print "deploy finished, did a deploy with labels" and output the deploymentLabels. All right, let's try again and check our deployment: kubectl get deployments — we have the hello-world deployment, and we have the pod running. Let's try another go run — and this is a bit annoying: you can only run it once, because the Deployment already exists. So let's change our code a little bit, so that we can make a change to our app.yaml and it will still apply the changes. What I want to do now is check whether the Deployment already exists: if not, I want to create it; if it exists, I want to update it. So instead of Create we can also do a Get: a Get with a context, then the name of the Deployment, and then the GetOptions. So: the context, the name — the Deployment name should be hello-world — and then the metav1.GetOptions. What is returned? A Deployment, but I actually only want to know whether it exists or not, and if it doesn't exist it will return an error, so I will just check if there's an error: if the error is not nil. And I want to split this out: if there is an error and it is "not found", then I will do one thing; if it's any other error, I will just return that error. There is a package within the Kubernetes client that we can use for that: an errors package that can check whether something is "not found", so I'm going to add the k8s.io/apimachinery errors package — you might have to do a go get of this package if you don't have it yet. What you can then do is: if the error is not nil and errors.IsNotFound of the error — IsNotFound returns true if the specified error was created by NewNotFound — so if this Deployment Get returns "not found", then we go into this conditional; else, if the error is not nil and
the error is "not found" — but we are going to reverse this: if it is not IsNotFound, then it's a real error, say a deployment get error. Otherwise, if it is "not found", we just move this Create code in here, because if it's not found we need to create it; if it is found, we're going to do an Update, and then we need the UpdateOptions. That way we can overwrite our Deployment if it has already been created. So, there's a deployment — let's test this. No deployment: "deploy finished, did a deploy with labels map[app:hello-world]", one replica. Let's now put three replicas, and then instead of an error we should get "deploy finished" — and now you have three replicas and three pods running. And if you put it back to one, it should still work. So this will now do the update if it already exists, and create it if it doesn't. Next, we can also have another function to wait for the pods, to see whether they are running or not. Here they are running quickly, because we have already downloaded the image and there are no health checks on it, but other pods can take a couple of minutes before they are in the running state, so we want to check for that. Now that we have our deploy running, let's have a look at how we can wait after our deploy. We do our deploy, but ideally we want to make sure that all the pods are running, because the deploy only creates a Deployment object, a Deployment resource in Kubernetes; it doesn't mean that our pods are already running. That's why we are getting these deployment labels: these labels are going to be pushed to our pods, so our pods will also have the labels app: hello-world, and we can filter on those — as long as they are unique — to see whether our pods are running. So I'm going to make another function call after deploy(client), and I'm also going to pass deploymentLabels, which is a map[string]string — the key is going to be app and the value hello-world. waitForPods I'm going to
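The create-or-update flow just described can be sketched as follows (a sketch assuming a readDeployment helper that does the universal-deserializer decode covered earlier, and the "default" namespace used in the lecture):

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deploy creates the Deployment if it doesn't exist yet and updates it
// otherwise, then returns the pod-template labels for later filtering.
func deploy(ctx context.Context, client *kubernetes.Clientset) (map[string]string, error) {
	deployment, err := readDeployment("app.yaml") // decode helper from earlier
	if err != nil {
		return nil, err
	}
	deploymentsClient := client.AppsV1().Deployments("default")

	_, err = deploymentsClient.Get(ctx, deployment.Name, metav1.GetOptions{})
	var deploymentResponse *v1.Deployment
	if err != nil && errors.IsNotFound(err) {
		// Not there yet: create it.
		deploymentResponse, err = deploymentsClient.Create(ctx, deployment, metav1.CreateOptions{})
	} else if err != nil {
		// Any other error from Get is a real problem.
		return nil, fmt.Errorf("deployment get error: %s", err)
	} else {
		// Already exists: overwrite it with the new spec.
		deploymentResponse, err = deploymentsClient.Update(ctx, deployment, metav1.UpdateOptions{})
	}
	if err != nil {
		return nil, err
	}
	return deploymentResponse.Spec.Template.Labels, nil
}
```

Note that the returned labels come from Spec.Template.Labels, i.e. the labels stamped onto the pods, not the Deployment's own metadata labels.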
call to call it, and I'm just going to copy this: it's going to be called waitForPods, we're going to return an error in case something goes wrong, we're going to pass these labels, which are a map[string]string, and at the end we return nil. What are we going to do? We are going to block our Go program with an indefinite loop; to make an indefinite loop you just write for and curly brackets, and then we stay forever in this for loop until we hit the break. Once we hit break, the loop is over. In this loop we are going to list the pods, and only when they are all running are we going to break; otherwise we just keep our program running. Or we could say that we only run it for ten checks — after checking ten times whether the pods are running, we abort — or we just keep it running, and you can abort with Ctrl-C. So we are going to use the client again — oh, it's not going to be AppsV1, because it's not a Deployment or a DaemonSet or a ReplicaSet; it's a Pod, and a Pod is in a different group. Let's have a look at what we have: CoreV1. You see you have autoscaling, batch, certificates — a lot of API endpoints here that are typical for Kubernetes. CoreV1 contains the pods, so CoreV1() and then Pods(). Pods accepts a namespace; the namespace can be default — you could make a variable of this, or even pass it as a parameter instead of repeating it. And what are we going to do? We're going to list the pods. List accepts the context and metav1.ListOptions, and it returns a PodList and an error, podListError. If there is an error, return podListError. And now we can iterate over the pods: we are counting the pods not running, so podsNotRunning starts at zero, and then we do a for loop: for key, value := range the pod list — podList is not an array, it's a struct; podList.Items is the array, so we're going to iterate over that, and then we get a pod. We don't need the key, so we
use an underscore, which means we don't need that variable, and the value is a pod. Then we can say: if pod — what do we have here — pod.Status, and what is in Status? Pod.Status.Phase. If the phase is not equal to running, then we say podsNotRunning++, so plus one. And then, if podsNotRunning is zero, we can do the break. Actually, let's make this podsRunning instead: if the phase is equal to running, then we increment it, because we need at least one pod running, otherwise we just exit immediately. So: podsRunning, and I will also need the total — I can use the length of the items. If podsRunning is greater than zero, then we have pods running, but podsRunning also needs to be equal to the length of podList.Items, because all our pods need to be running — and then we can break. What is still not happening is that we are not passing our labels. So let's pass our labels too; we can pass them in the ListOptions. There is a LabelSelector — "a selector to restrict the list of returned objects by their labels; defaults to everything" — so we say LabelSelector, and it's going to be a string. We can again use some helper functions if we want: we already used the errors helper functions from apimachinery, and there are more helpers in there — there's a labels package, and we just need to remove this "api" from the import path, because it's not under api, it's under pkg/labels. If we then have a look at labels — hmm, that doesn't seem to work; I'm just going to save. Sometimes it's a bit difficult, if you haven't imported a package yet, to get Visual Studio Code to autocomplete. This is clearly the labels package, but it doesn't want to autocomplete. If I Cmd-click on it, I get the apimachinery labels documentation, and then I can also have a look here: there is a ValidatedSelectorFromSet — "ValidatedSelectorFromSet returns a selector..." — oh, and
now I see what is going wrong: if you call your variable labels, as a map[string]string, you cannot use the labels package from the import. So I just need to rename this one to deploymentLabels, and now I can use labels from my import: ValidatedSelectorFromSet, and I can pass a labels.Set — and this labels.Set is a map[string]string, so I can just pass this deploymentLabels into ValidatedSelectorFromSet, and it will return a Selector. I'll call it validatedLabels, with an error: if my labels are invalid, it will also return an error, so it's a very nice helper function just to make sure that your labels are correct. And the Selector then has a String() function to produce the label selector string, so what we pass to our ListOptions will be 100% correct. So now we do the waitForPods; we have a for loop. I'm just a little bit worried that we are going to break out of the for loop too quickly, because not all pods might be created at the same time. What we could do is also use the replica count: if we take deploymentResponse.Spec.Replicas, then we could use the number of replicas that we expect in our waitForPods function, just to be 100% sure. For our simple use case I don't think we need it; we could also just do a time.Sleep here — which is not very nice, but we could wait a little bit before checking whether this Deployment is created. In general, with minikube, the pods are created pretty quickly, so for our use case it shouldn't really be a problem, but if you want to be sure and cover all the edge cases, you will need to make sure that the number of pods running equals the replica count. So while we are checking, we can print "waiting for pods to become ready (running %d / %d)": podsRunning, the ones that we found, and the total, the length of podList.Items — and a newline. Then we do a time.Sleep of five seconds, so we sleep five
seconds and go through the indefinite loop again, until we break out of it. I think that's it; I'd like to test this. Let's spin up our minikube cluster, and let's go over the code one more time: we have podsRunning at zero, we do podsRunning++ if the phase is running, and only if podsRunning is greater than zero and equal to the total number of pods being returned do we break. Sounds okay; let's test it: go run — aha, "waiting for pods to become ready". You see, that's why we need this greater-than-zero check: when we check the first time there are no pods yet, and without the check it would already have kicked us out. Okay: "deploy finished, did a deploy with labels map[app:hello-world]". We could actually move this fmt line higher up, so that it also outputs "one out of one". So we do kubectl get pods; we have one. Let's try this with three — I'm not sure if this is still going to work, because when we change it, there's already going to be a pod running. That's why we need this deploymentResponse.Spec.Replicas, so that we can pass it along as well. expectedPods is an int — well, Replicas is a *int32, so we'll dereference it, and then I just need to change all the returns, add a zero everywhere, make it an int32, and make it an int32 here too. Still an error — that's it. Then we can add expectedPods, pass it here, add a parameter, and we just need to compare: podsRunning equal to expectedPods. Okay — and also, because podsRunning is an int, we need to convert between int32 and int. We could have done that conversion earlier, and then we wouldn't have had to pass an int32 around; we could have done it here as well, if you want. Okay, let's try this: let's now start five pods. "Waiting for pods to become ready" — aha, and now it works: first we have three out of three, and now we have five out of five. So this is how we can also have a function
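Putting the pieces of the wait loop together, a sketch might look like this (namespace hard-coded to "default" as in the lecture; the int conversion of the replica count is done by the caller in this version):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
)

// waitForPods blocks until the number of running pods matching
// deploymentLabels equals expectedPods (abort with Ctrl-C).
func waitForPods(ctx context.Context, client *kubernetes.Clientset, deploymentLabels map[string]string, expectedPods int) error {
	// Turn the label map into a validated selector like "app=hello-world".
	validatedLabels, err := labels.ValidatedSelectorFromSet(deploymentLabels)
	if err != nil {
		return err
	}
	for {
		podList, err := client.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
			LabelSelector: validatedLabels.String(),
		})
		if err != nil {
			return err
		}
		podsRunning := 0
		for _, pod := range podList.Items {
			if pod.Status.Phase == corev1.PodRunning {
				podsRunning++
			}
		}
		fmt.Printf("waiting for pods to become ready (running %d / %d)\n", podsRunning, len(podList.Items))
		// Break only once at least one pod exists, all listed pods are
		// running, and the count matches the expected replica count.
		if podsRunning > 0 && podsRunning == len(podList.Items) && podsRunning == expectedPods {
			break
		}
		time.Sleep(5 * time.Second)
	}
	return nil
}
```

The caller would pass int(*deploymentResponse.Spec.Replicas) as expectedPods, since Replicas is a *int32.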
to wait for the pods, and then we can have another function after waitForPods where we do something with those pods — maybe we want to update another system, or execute something in them, or verify something. It's always good to have a function to wait for a certain state, and this is typically implemented with a for loop that runs forever until we break out of it. So that's it for this demo: we have created our first Go program that can interact with our Kubernetes cluster, and the other API calls are very, very similar, so once you know how to work with pods, you can use this clientset to interact with all the other APIs as well. In this demo I'm going to rework the Kubernetes demo, the deploy that we did in the previous demo. Instead of supplying this app.yaml locally, I want to have it on my GitHub: whenever I make a change in my GitHub repository, I want it updated on my Kubernetes cluster. So I'm going to create a new GitHub repository, add a webhook, and when this webhook is invoked, I want the deploy to happen on the Kubernetes cluster. What do I need to do for that? Let's have a look. First, I don't need these, so I can remove them, and the getClient function I'm going to change a little bit, because I eventually want to run this inside my cluster. So I'm going to add an inCluster bool, and then if inCluster is true, we are going to do something different than when we're not in the cluster: the config is going to be different, because inside the cluster there is rest.InClusterConfig, and this also returns a *rest.Config, just like this one. So I'm going to move this into my if here, declare the variables err and config up front, and this also returns an error. So now we have a getClient that is going to create a clientset depending on whether we pass inCluster or not. We are first going to pass false, because I want to test it outside the
cluster first; then, when it works, I want to move it into the cluster. So we might not need everything here anymore — let me clean this up a little bit. What we are going to do next is create a server, an HTTP server, that is able to receive our webhooks from GitHub: every time there's a push, GitHub is going to send us a webhook. We need to be able to capture those webhooks; GitHub is just going to send us a POST request, and we need to capture it. http.ListenAndServe — this is how you listen on a port in Go. The address can just be :8080, so that it listens on port 8080, and we don't need a handler here; we are just going to define one HTTP handler ourselves. We are going to say that when /webhook is hit, an HTTP handler function is invoked, so we need to write our server, and our server will have that handler function. Let's create a file server.go — it's still going to be package main, but you could move this to a separate package if you later want to make it a bigger program. Then we're going to have a function for our webhook, and I'm just going to copy the function signature from here: HandleFunc accepts a function — this is the function I'm going to write — with our request and our response writer from net/http. Then we're going to write a struct that's going to keep all the information that we need: type server struct. And what information are we going to keep? The client — that's the first thing. Now we can instantiate it: s is a new server, and our client field is the client, and then we can pass the function. But if you pass this webhook function as-is, it's going to work — it's going to compile — but the webhook doesn't have access to our client. If you make this function a method on our struct, then we will be able to reach the client using s.client. I'll save this, and then we just need to do s.webhook, because now webhook is a method on the struct. Save this, and
fmt.Printf "test", just to see if it works. Will it work? No, it doesn't, because my Kubernetes cluster is not running. Let's just comment this and this, and try again. Okay, it seems to be working: curl localhost:8080 — page not found; and now /webhook — it outputs test. So this is working, all good so far; we can uncomment this, our server is working. What do we need to do when we hit this webhook? We need to do the deploy on the Kubernetes cluster. And where do we get this deployment file, this app.yaml, from? From git. The webhook will supply us some information in the JSON of the push notification, and we can analyze that. There is a GitHub SDK for Go, so let's have a look at that: I type "github golang sdk" into Google, and we get go-github, so we can do go get github.com/google/go-github/v45. And how does it work? You can construct a new client; we can do authentication, so we can pass our access token in case our repository is private, and then we can use this client, which has several ways of querying the GitHub API, all described in the godoc reference. For example, we want to download a file from our GitHub repository, so if I just search for download here: DownloadArtifact, DownloadContents — artifact is probably something else; DownloadContents downloads a file, and we can pass an owner, a repo, and a file path. So we can try to download our file using this function from the Repositories service in this GitHub SDK. GitHub will send us a push; we just have to analyze it — it's JSON — and this push request will tell us which files were updated and added. Then, for every file that is updated or added, we can invoke this DownloadContents, and then we just need to do the deploy on Kubernetes, to apply the YAML definition that we put on our GitHub repository. So this webhook — let's have a look at how this works; there's a webhook section here: go-github provides
structs for almost all GitHub webhook events. What do they look like? We need a push event, so if I look in the list there's a push, and there's an example payload: we already see here "commits, an array of commit objects describing the pushed commits", and here's an example of how it looks. We will extract from the repository the name — we have the full name, and we have the repository owner's name, so we could use owner name plus repository name, for example — and then we just have to extract the files that were added and send them to our Kubernetes cluster. This is the event, and we can parse this event using this GitHub SDK, so we don't need the structs ourselves; we don't need to create structs to translate the JSON into Go — this is all done for us. So we can basically copy this code, handle an event for a GitHub push, and extract the parts that we need. Let's try that out; let's first initialize the client, and we are going to support authentication, so let's copy this. We could create another file, github.go, or we could just do it here: we have getClient, and we add getGithubClient. What will it return? We don't know yet; it's going to return this new client from github, so let's make sure that we do a go get of this go-github package. We add it here, we call this github, and github.NewClient — let's save this. In order to use it there's a /github that we need to add to the import path; it's going to be like this: package github, and then NewClient gives us a *github.Client, so we can return that, and we can pass an access token. And what if we don't have an access token, what if the GitHub repository is public? Then we pass nil. So we can say: if the accessToken is empty, we just return a client with no authentication; otherwise we add the context here, use a static token source with the access token, and create the new client. It uses this oauth2 package to make sure that every time we do a GET or
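The getGithubClient function described here follows the standard go-github authentication pattern; a sketch (assuming the v45 module path from the lecture):

```go
package main

import (
	"context"

	"github.com/google/go-github/v45/github"
	"golang.org/x/oauth2"
)

// getGithubClient returns an unauthenticated client for public
// repositories, or an oauth2-backed one when an access token is supplied.
func getGithubClient(ctx context.Context, accessToken string) *github.Client {
	if accessToken == "" {
		// Public repo: no authentication needed.
		return github.NewClient(nil)
	}
	// StaticTokenSource wraps the token so the oauth2 http.Client
	// attaches it to every request the GitHub client makes.
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: accessToken})
	tc := oauth2.NewClient(ctx, ts)
	return github.NewClient(tc)
}
```

The caller can feed it the token from the environment, e.g. getGithubClient(ctx, os.Getenv("GITHUB_TOKEN")).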
So there's almost nothing we have to do ourselves; we just have to glue all this code together. getGithubClient returns a new GitHub client, and we need to store that again: we can say githubClient := getGithubClient(ctx, token), where the context is a new context.Background() and the second parameter is our token. I'll just read the token from the environment, the GitHub token. Does it all work? What does it return? Just the GitHub client; there cannot be any error here, so we don't need error handling. Then we have githubClient, and we need to add it to the server struct as well, a githubClient field of type *github.Client. Actually, maybe we don't even need these intermediate variables: we could say s := server{client: client, githubClient: githubClient}, which is the same thing with a little less code, which is always beneficial. Okay, so far so good. I save it, there's still an error here, I'm not sure why... okay, I needed another go get, and maybe my go.mod was not updated or something like that.

So what do we need to do now? Let's have a look at this webhook example code; we can copy-paste it to start with. What is our payload? github.ValidatePayload, and we can just pass it the *http.Request: if we pass the request, it extracts the payload, the JSON, itself. We also need the webhook secret key, and we have to get that from somewhere too, so I'll provide the webhook secret as an environment variable as well. This is a secret we define when we create the webhook, just as a validation that it's our own webhook making these requests and not someone else's; if you don't have the secret, you cannot impersonate GitHub.

If the error is not nil, what should we do? We should probably send a 500 back to the client, which we can do with WriteHeader(500). We'd also like to print the error on our screen, or eventually in our logs, so we log "validate payload error" plus the error, and then we return to stop the function. We could put those three lines in a helper function and call it, or we're just going to be a little lazy and paste them. First we validate the payload, then we parse it with github.ParseWebHook, and if it parses we get an event. We can check what type of event it is with a type switch; these are Go types from this SDK, and there's a *github.PushEvent, which is the event type we're going to use. We probably want a default case as well, to log "event not found" plus the event type. What's this error, am I probably using the wrong function? Ah, I see what I did: this is the http.ResponseWriter and this is the *http.Request, the webhook secret key should be a []byte, and the writer call is w.WriteHeader(500), writing the header with status code 500. Now that should all work; we save it (it also needs the request)... and it looks like it all works. So the webhook comes in, we validate it, we parse it, and then we decide what to do with it.
Now that we have access to this push event, we can extract some information, mainly the files. The event contains the commits, so let me write a function for that and pass the commits to it: files := getFiles(event.Commits). I'll write getFiles separately, because it's going to take some space. It takes the commits, a slice of the go-github HeadCommit type, and what do we return? A string slice. We need to loop over these commits, so let's range over them and have a look at what's inside one commit: we have Removed, we have Added, and we also have Modified. Let's say allFiles is a string slice. We want to append to it, but Added is also a slice, so we basically want to merge two slices, which we can also do with append: we just add three dots after our slice, and the two slices are merged into allFiles. We can do the same for the modified files. But what if a file is then duplicated? Let's remove the duplicates as well. How do we do that easily? We can put all the files in a map and then put the map back into a slice. So we have allUniqueFiles, a map[string]bool, and we iterate over allFiles with for _, fileName := range allFiles, setting allUniqueFiles[fileName] = true. Then we still need to convert it back into a slice: for fileName := range allUniqueFiles, append fileName to allUniqueFilesSlice. I'm probably doing this in too many steps; I could have written into the map directly, but now I've already started. Finally we return allUniqueFilesSlice.
When I write something like this I always want to test it, so let's create a server_test.go and quickly write a test: func TestGetFiles(t *testing.T). How do we test this? We get the files, files := getFiles(...), and we have to supply the github.HeadCommit values. It's a slice, so we need an extra set of curly brackets, and we set Added to a slice with one file, "test.txt", and no modified files. If len(files) is not equal to one, we call t.Fatalf (not log.Fatalf) with "expected only one file", meaning I expected one file and got a different count. Let's try this out... "files declared but not used", that will be here... and "Commits not found", because it needs to be event.Commits. Let's also print the result, "found files %s", joining the files together with a comma using strings.Join. Okay, that should work; run the test, and it passes.

And then back to the server. Now we have the files, so what do we need to do next? We need to download the contents of every file from GitHub. So, another loop: for _, fileName := range files, and then we have githubClient.Repositories.DownloadContents, which gives us a ReadCloser, the response, and an error: the downloaded file. I don't think we need the response. If the error is not nil, we again log "download contents error". What goes into the call? A context; we don't have one here, and we only need one, so I'll say ctx := context.Background(). Then owner, repo, file path, and options. The repo I can find in this event: event.Repo.Owner.Name is going to be the GitHub username, event.Repo.Name is the repository name, the file path is fileName, and the options are a github.RepositoryContentGetOptions, which can probably stay empty. (My MacBook seems unable to keep up for a second; it's because I'm making a bit of a mess here.) I think these event fields are pointer strings, so I want to make sure we pass the actual values. Save this. Now I have this downloadedFile of type io.ReadCloser, so I need to make sure I also close it, with a defer. Then we can read it just like we did with our HTTP streams: body, err := io.ReadAll(downloadedFile), and that's the file body, with error handling on the ReadAll again.
Now that we have the file body, we can do the deploy. deploy needs a context, so let me just use the same approach again, ctx := context.Background(), and pass the context in, plus the client. (We could also move deploy onto the server struct, and then we wouldn't have to pass the client, but it doesn't really matter here.) We get back the deployment, the pods, and whether there was an error; I don't think we need the first two. But we need to pass this file body in somehow, so I'll add a parameter appFile of type []byte, and then deploy doesn't need to read the app file itself anymore, because it can read everything from those bytes. Save this; this works. And handle the deploy error.

So what else do we need? We're first going to test this outside our cluster. What will it do? It starts a server, initializes my GitHub client and my Kubernetes client, and then waits for a webhook. If a webhook arrives, it validates it, parses it, and gets information out of it, like the owner of the repo, the name of the repo, and the files; then for every unique file it does a deploy, and we can just log "deploy of <fileName> finished".

All right, let's try to get this operational. We need a webhook secret, and we're going to enter the environment variables here. If you're on Windows, I think the easiest approach is to add a launch configuration ("Go: Launch Package"), which has an env section you can use: you can set the webhook secret and so on there, then choose Run Without Debugging, and the environment variables are injected. I'm not going to do that; I'm just going to put them on screen here: the webhook secret is "my-secret" (I can choose it myself, so I'll keep it simple) and the GitHub token.
We still need to create a GitHub token, so let me create one for my own GitHub account. This is my account; on the top right I'll go to Settings, then Developer settings at the bottom left, and I'm going to create a new personal access token: click Generate new token and enter your password. We're doing a Go demo, so it only needs to be valid for seven days, and we need the repo scope, which should allow us to access the repositories and download the data. Generate token; this is the token, I'll copy it and enter it here as the GitHub token. Then I run go run ., which won't work until I start my minikube cluster.

But how can GitHub reach my server if it's running on my computer? Either you have a public IP address, which effectively I don't (this is just a residential connection; I do have a public IP address, but port 8080 is not going to be reachable on it), or I'd have to do some complicated port forwarding. The easiest way to set up the forwarding is to use something called ngrok; it's another piece of software you can download. If you go to ngrok.com ("serve web apps with one command"), you can sign up for free. You can download it with brew install ngrok, or on Windows there's a zip file, or you can first install Chocolatey, the package manager, and then install ngrok; there's also a Docker image. It's the fastest way to put anything on the internet. Once it's installed you run ngrok; I already have it set up, so if I just type ngrok it gives me the help. ngrok tunnels local ports to public URLs: ngrok http 80 would make port 80 public, but I'm running on 8080, so I do ngrok http 8080. So I'm online, my session will expire in two hours, there are terms of service you can read, and I should be able to reach my server at this URL. Let me open another bash here and curl that URL. It's not working; I get a whole error page, because my server is not running. We need go run . (just the dot, because we also have server_test.go now). Now I can go back to my bash screen and I get a 404 not found, which also shows up in the ngrok window, though this window is too small to see it well. So the first attempt failed because the server wasn't running, and now I get a 404 not found: it's working.

So now I just need to take this URL plus /webhook and set up the webhook in GitHub. I'm going to my repository; I created a separate repository for this, go-deploy-kubernetes-github, and it's private, so we immediately test the token as well. First I'll create the webhook, and then I'll create a file. Webhooks: the payload URL is the ngrok URL plus /webhook, the secret is the one I supply as an environment variable, "my-secret", and Enable SSL verification should work, because ngrok is doing the SSL for us. Which events would you like to trigger this webhook? Just the push event: if there's a push, it should trigger the webhook. Add webhook; the webhook is now active and should have a check mark next to it. Then Add file, Create new file, and I'm going to use the app.yaml that I used in the previous lecture, the kubernetes-deploy app.yaml. I'll just copy-paste it (this is the one I want to deploy) and give it a name first. Let me also check that my Kubernetes cluster is empty and no deployment is running: kubectl get deploy shows none, so I'm not cheating. Let's test this webhook; I'm not even sure it's going to work, but I hope so. Oh, something already happened here: "event not found". Right: when you create a webhook, GitHub sends an event, and I didn't really think about that case. It's the hook event GitHub sends when the hook is created, so we could capture that one too and just log "hook is created" instead of "event not found". You'd have to restart your server for that to take effect, though.
And you'd have to recreate the webhook if you want that nice output; but just pushing a file now should trigger the webhook as well. So, go-deploy-kubernetes-github: this is where I configured the webhook. It's a special repo that's empty, and it's best to keep it separate, because otherwise you'd just be randomly pushing files to your Kubernetes cluster. You could add a filter on .yaml files and all these things if you want some more complexity, but this should be it. Commit, let's try it, and let's have a look: "found files app.yaml" (I need an extra newline here), "deploy of app.yaml finished". Wow. And we now have the 200 OK here. Let's have a look... oops, I'm opening too many windows. Is the deploy there? Yes, and the pod is there too; that's good news. Let's try to update our file and see that an update is also taken into account: three replicas, commit, "found files app.yaml", "deploy app.yaml finished". So now we have the second webhook delivery, and kubectl get pods shows three pods.

So I kind of wrote a whole CI/CD here in just a few lines: there are CI/CD systems that take your git repository, ingest these files, and put them in Kubernetes, and that's essentially what we rebuilt. This is our working version, but in the next lecture I'm going to move it inside the cluster, because we're still running outside the cluster with this ngrok setup. Once you put it in the cluster, you can make it a complete service that runs within your Kubernetes cluster and monitors your GitHub repository to see if there are any files that need to be deployed.

Now that we've successfully tested our GitHub deploy locally, I'm going to run it within a Deployment on my Kubernetes cluster. If you have another look at our main.go, getClient had a boolean parameter, false, because we wanted to be outside the cluster. I'm going to change it to true, and now it will expect us to be within the cluster, so the in-cluster code path will be executed instead of the kubeconfig one. Another approach, instead of this flag, would be to check whether the kubeconfig file exists: if it exists, we're outside the cluster, and if it doesn't, we're probably inside the cluster and should load the in-cluster config. That would also work, but I'm just going to keep it simple for now: getClient(true), we're within the cluster. go run . will now hopefully say something like "unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined". Those are the environment variables it looks for; they are defined within a pod so it can contact the Kubernetes service, which is just the REST API. And if you contact that REST API just like that, you don't have any permissions to create pods or do anything else, so we still need to give our pod permission to create deployments, and we can do that using a service account.
So let's have a look at deploy.yaml, the YAML definitions I created to run the application we wrote in the previous lecture, our GitHub deploy, on a Kubernetes cluster. If we start from the top, we have a Deployment I call github-deploy: the label app: github-deploy, one replica for now, and a serviceAccountName, because I will create a service account called github-deploy and attach the permissions to it. We have a container, the name is github-deploy, and the image is wardviaene/github-deploy. If you want to deploy your own github-deploy, you'll need a Docker container registry; you could use Docker Hub, which is what I'm using. The first part is your Docker Hub login and the second is the repository, so create this Docker Hub account first, and then you can build and push the image to have it available. I'm going to show you in a second how we're going to build this Docker container, but you will need your own Docker Hub account to be able to push this image.

The port: I give it a name, http (you don't really have to give it a name, but I did), and the containerPort is 8080, because we run on 8080. I specify two environment variables, because we don't really want these values in the code we're going to push to GitHub: the GitHub token comes from a secret, which we still have to create; the secret name is going to be github-deploy, and the webhook secret comes from the same secret under its own key. What Kubernetes is going to do is look in this secret and inject the values into our pods. So that's basically our Deployment, but we still need some other resources to get this working.

We have another resource, a Service, also called github-deploy, on port 8080. This is going to be how we contact our application, our GitHub deploy, from outside the cluster, because it is of type LoadBalancer. If you deploy this on AWS or Google Cloud with their Kubernetes offerings, this LoadBalancer will, in the case of AWS, create an AWS load balancer, and then you'll have a public hostname that you can configure as the webhook. I'm going to use minikube, because that's the easiest way to run a development environment, and you can still use a LoadBalancer in minikube; you just have to enter a command to tunnel it, minikube tunnel, which I'll also show you. That creates a link between your local machine and this Service, and then you can still use ngrok to expose it.

Then we have the ServiceAccount github-deploy, and then we have a Role. This Role is necessary to give us the correct permissions: the apiGroup is apps (if you scroll down to our deploy code you see clientset.AppsV1(), which is this apps API group), and the resource is deployments, which is our Deployments here, so it follows the same structure nicely. The verbs we allow are get, list, watch, create, update, patch, and delete; that's a bit more than we need. We use create and update, and in waitForPods we also do a list, but that's on pods, which I didn't give permissions for because we're not using it here. You could add that as well; in that case the apiGroup is empty. And then we have the RoleBinding, which is going to link this Role with this ServiceAccount; it's also called github-deploy, and it says the subject is the service account and the role is github-deploy. So that's it: we have the RoleBinding to link those two, the ServiceAccount and the Role; we have the Service; we have the Deployment that references the serviceAccountName; and then we have the secret.

So what do we still need to do? We need to push our container and create our secret. Let's start with the container. I have created a Dockerfile here, a pretty generic one that packages our Go program into a Docker image. You need Docker installed to be able to build this; Docker is available for Windows, Mac, and Linux, so you can just download and install it if you don't have it yet. What are we going to do? We take the official golang image and use it as the builder: it creates a new directory, github-deploy, and cds into it. This stage is just there to compile our program, so we copy everything from our current working directory, everything that's right here, into the container. We also have a .dockerignore file that ignores the YAML definitions and the .git directory, so we don't upload those into the Docker build. Then we install curl and git, because we need them for go get to work, and then we build. Because we're on Alpine, a slimmed-down Linux distribution (this is golang:1.18-alpine), we need to disable cgo, so we have a few flags here to make the build work on Alpine Linux. If you don't want to use Alpine you can remove the -alpine suffix, but then you wouldn't be able to use apk add, because apk is specific to Alpine; without it you get a Debian-based image, I think, so you'd have to use apt-get instead. So here we build our Go program into a single binary using this go build line, and the output is github-deploy: in the github-deploy working directory we have the file github-deploy, which is our binary.

Then we start another container, just a plain alpine image, so it's very minimal; there's nothing in it. We add ca-certificates, bash, and curl: bash and curl are very handy for debugging, and ca-certificates is necessary if you want to make outgoing calls and verify TLS certificates. Then we say COPY --from the builder stage of this github-deploy executable into /github-deploy, so the image just has this one extra file, our executable, and when the container starts it executes /github-deploy.

Let's try to build it: docker build, and I give it a tag, which is necessary to push it to my Docker Hub account, so it has to match: docker build -t wardviaene/github-deploy. And what do I want to build? My current directory, so I just add the dot. Now it's building... and the build is finished. You can see all the steps here; my steps are cached, because this is not the first time I've run it. If I don't want to use the cache there's --no-cache, which you don't need, but I just want to show you the full output: now you see it downloading dependencies and installing everything, and this build takes a little longer than the cached one. If I then ran docker build again without that flag, it would be quick again, because everything is already built.
The next step is to push it to Docker Hub; this is hub.docker.com. If you don't have an account yet, you can create a new one here, or sign in with your own account. My account is hub.docker.com/u/wardviaene, and here you can see I've pushed this github-deploy before. I will push it again just to make sure I have the latest version, and then we can start using it. So how do I push this image? First you run docker login, which asks for your Docker Hub credentials so that you can push Docker images to your own account. Then I do docker push plus the name of my image: it takes the Docker image I just built and sends it to Docker Hub, and then within our Kubernetes cluster we can use it. This will take a few seconds to get pushed.

The next step is to create the secret, and to create it you can use a kubectl command. I'd rather use a create-secret command than have the values hardcoded in the deploy YAML, because I'm then going to push that file to GitHub; that way we enter a command that is never visible in git while still keeping our definitions in git. The push is finished, so let's start our minikube cluster and create this secret. I always have to start the minikube cluster again, because I do a minikube delete each time to start from a clean minikube cluster. My minikube is running, so I'm going to create this secret: kubectl create secret. It's a generic secret; if you have a look at the help, you can create a docker-registry, a generic, or a tls secret, and I'm creating a generic one. I'll make sure this command is also in a README in my GitHub repo for this demo. I need to enter the --from-literal flags with key=value pairs: the webhook secret is "my-secret", and the GitHub token is the token I just created. Oh, and you also need to add a name: kubectl create secret generic, and the name is github-deploy, because that's the name we refer to in the deployment. And it created the secret: kubectl get secret, and if we then have a look at this secret, you see the actual data as key/value pairs, with the values base64-encoded: the webhook secret, and then again the token value.

So this should work; let's try to deploy it: kubectl apply -f deploy.yaml. This creates the Deployment, the ServiceAccount, the Role, and the RoleBinding: one deploy, one pod, and one service. How do we then access this service from our host system? Because it is of type LoadBalancer, we can use minikube tunnel, which creates a tunnel between my host system and any service of type LoadBalancer. If you are using minikube you can follow the same steps; if you are on a real cluster, this LoadBalancer service will spin up a load balancer that gives you a hostname that should be reachable directly. "Starting tunnel for github-deploy": we need to keep this window open, so I'm just going to open another bash window. curl localhost:8080 gives "404 page not found", so this is our service within Kubernetes that we are hitting now. So let's now run ngrok again, ngrok http 8080; this time, instead of going to a local instance, ngrok goes to our minikube tunnel, the tunnel goes to the Service, the Service goes to the pod, and in the end we get to our real application. This URL changed, so I need to copy-paste it again as the webhook: go-deploy-kubernetes-github, Settings, Webhooks. I can delete my old webhook (actually I didn't have to delete it, I could have just updated it, but it doesn't matter): webhook secret "my-secret", just the push event, and add the webhook. And now I don't get the green check... it just took some time: we did a POST, and now the last delivery was successful. Let's have a look in the logs to be sure. I'm going to open another window; look, our POST gave us a 500. kubectl get pods, then kubectl logs on the pod: there's nothing in the logs yet, but the delivery went through.
So let's continue and see what happens when we trigger the webhook. There's still a possibility that I didn't properly handle this GitHub hook event, or that we get a validate-payload error; there's also a backslash-n missing somewhere. Let's just continue for now and see how it goes. code app.yaml: if we trigger a change here, it will trigger our webhook. So I made a change; let's go and have a look at our webhook's recent deliveries. Okay, we have two, and we can see the event: the repository, the owner, and here is the commits array, with the modified app.yaml, which we can then send to our Kubernetes cluster. So let's have a look whether this worked or not. kubectl get pods: yes, I have my pod running, and kubectl get deploy shows my deployment running. There is just one pod; let's try to run three of them. I just need to make one change in my app.yaml, three replicas, commit, and now we have three of three pods. Let's now have a look at our logs, kubectl logs... oh, we still get this "event not found"; that's why we had the 500 error on the hook event. Maybe there is still something wrong here; most likely the type is still from a different version, and that's why it's not matching. It also wasn't displaying immediately because I don't have a backslash-n here, so these logs might not show if there's no newline. But then, when the webhook triggered, we found the files and did the deploy, and that's the most important part here. We'd definitely want to capture that first event, but even if you don't capture it, nothing really breaks. I will try to push the corrected code to my GitHub repository so that you also have it there; it's probably going to be something small. So now we have this working: we have these logs here, the deploy, the webhook works, we have ngrok showing two 200 OK statuses, and we have the minikube tunnel that we set up for github-deploy. So we now really have our application running within Kubernetes.

You can also go and have a look inside this pod by using kubectl exec, and because we installed bash you can enter a bash shell and look at what's running here: we have github-deploy running, and that's really the only binary we put in this image. The reason it can contact our API server is that the kubernetes service is defined somewhere through those environment variables, and if we curl that IP address using HTTPS (let me just clear the screen for a second), you can see we can actually contact this API server. And you see that if you don't authenticate as any user, you're anonymous and you get a "forbidden", and that's why we needed this service account. The service account is mounted somewhere within our pod, under /var/run/secrets/kubernetes.io/serviceaccount; there we have the CA certificate and, most likely, a token (or another certificate) to authenticate, so that we can assume this service account, connect to this API, and then use the Role to do the get, list, watch, create, update, patch, or delete. And that's what this Go SDK does for us: in getClient, rest.InClusterConfig takes those environment variables and contacts the Kubernetes cluster using the secret of our service account, and that's why we don't have to do all these steps ourselves.

So that's it for this lecture. If you're deploying this in a real cluster, you're going to have a real hostname that you can use to expose your application, and then you don't have to use ngrok. The only downside here with minikube is that you have to use all of these steps to get to your actual pod; if you use a real production cluster, you don't need them. This was quite a long demo; I hope it also works for you when you try it yourself. If not,
reach out to me in the Q&A or send me a message.