Hello everyone. My name is Niek Róbalik. I know that my name is pretty hard to pronounce, so don't feel bad about it. I will briefly talk about how we can build functions without Docker or Podman.

Why is this important? As you probably all agree, some people don't want to have Docker or Podman on their machine, some people are not even allowed to have this kind of technology on their laptops, and some would simply like more flexibility.

Before I go on, I will quickly give an overview of functions. I suppose you learned all the good things from Lensys and Mauricio's talk, but just to highlight a few points: we are using the `kn func` plugin to drive the functions, and the container images are built by Buildpacks. This build step is what requires Podman or Docker on the developer's machine. Once the container image is produced, it is deployed as a standard Knative service.

The standard workflow for developing functions is simple. Let's say I would like to create a Node.js function: I run the first command to scaffold it, then I implement the business logic, and then I deploy my function, and I can repeat those steps forever. The build is the step that requires Podman or Docker on the developer's machine.

How can we solve this? We already have Kubernetes somewhere on the internet, so why don't we use the resources on the Kubernetes cluster?
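The standard workflow described above can be sketched roughly as follows. This is an illustrative CLI session, not output from the talk; the exact flags and the registry name (`docker.io/myuser`) are assumptions and may differ between `func` versions:

```shell
# Scaffold a new Node.js function (creates func.yaml and a code template)
kn func create -l node myfunction
cd myfunction

# ... edit the generated source to implement the business logic ...

# Build and deploy; this step builds a container image with Buildpacks,
# which is why it normally needs Docker or Podman on the local machine
kn func deploy --registry docker.io/myuser
```

After deployment, the function is available as a regular Knative service on the cluster.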
There are a couple of options. One of them is that we can use Tekton to define a pipeline that drives the build and deployment of the function on the cluster. There is an obvious problem, though: how do we get the source code from the developer's machine onto the cluster? Again, we have multiple options.

The first, let's say obvious, option is that we push the source code to some GitHub repository, fetch it onto the cluster via Tekton, and then execute the build and deployment there.

The second option could be to directly upload the source code from the developer's machine to the cluster. This might be beneficial when we are developing the function and would like to iterate fast, so we don't want to go through the GitHub repository.

Another option might be to not use Tekton at all: we can just package the source code as an image and push it to some container registry. This has been proposed by Mink, for example. But I think the Tekton approach has several benefits: this way we can easily plug a CI/CD solution into our system, we get some stability, and we can also cache the build artifacts, or even the image layers, on the cluster, so subsequent builds are faster.

I will just quickly show you a demo. This is a very simple, generated function; the function itself is not important. The important thing is that I will use the approach with the Git repository, so all I need to do is specify the GitHub repository that contains the source code of the function and set the build type to git. Because we have limited time, I already pushed the source code to the Git repository, so all I need to do is execute the deployment. I just run `func deploy`, and as you can see, it starts doing something. I will go back to my presentation, and this diagram might help us understand what's happening under the hood.
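The git-based remote build shown in the demo looks roughly like this. This is a hedged sketch: the repository URL is a placeholder, and the exact flag names for remote/on-cluster builds have changed across `func` releases, so treat them as assumptions rather than the definitive invocation:

```shell
# Point the function at a Git repository instead of local sources,
# and ask for the build to happen on the cluster (Tekton-backed).
# The URL below is a placeholder for the repo holding the function code.
kn func deploy \
  --remote \
  --git-url https://github.com/example/my-function \
  --git-branch main
```

With this, the local machine never runs a container build; the clone, the Buildpacks build, and the deployment all happen inside the cluster.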
So basically, that guy on the left might be me, or maybe you, or some other Docker haters. The func CLI creates all the resources for me: it creates a Tekton pipeline with tasks. There are three tasks at the moment. The first task clones the repository and stores it on a persistent volume that has also been created for me. Then the Buildpacks task builds the function, builds the container, and stores the image digest. Finally, that image digest is used to reference the container in my container registry and deploy the function as a Knative service.

If I go back to my pipeline, as you can see it is still deploying. The cluster is running on Minikube on my laptop, so it might be a little bit slow.

Anyway, this is the approach with the Git repository. If you would like to use the direct source upload instead, we can just change the first task in this pipeline: we can implement some, let's say, basic receiver application that waits for incoming connections. Then, from the developer's machine, we open a connection to this pod, copy a diff of the source code, or something like that, and execute the build and deployment again.

Okay, so let's see if my function has been built. As you can see, the function is up and running. I'll just quickly check. Sorry, typing my name. We can also check the Tekton resources: we can describe the latest pipeline run, and as you can see, these are the Tekton resources that have been generated for me, so as a developer I don't need to know about them. Here I can check the status of all the steps: the first fetches the sources, then building, and then deploying.

Okay, so that's all. Thank you for your attention.
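The checks performed at the end of the demo can be sketched like this. These are standard `tkn`, `kn`, and `kubectl` commands, but the exact resource names shown on screen in the talk are not reproduced here:

```shell
# Inspect the most recent Tekton pipeline run generated by the func CLI;
# its task list should show the fetch, build, and deploy steps
tkn pipelinerun describe --last

# Confirm the function is up and running as a Knative service
kn service list

# Alternatively, query the Knative service resource directly
kubectl get ksvc
```

If everything succeeded, the pipeline run reports all tasks as completed and the Knative service shows a ready URL that serves the function.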