So my name is Waldemar. I'm the CTO of LocalStack. You can reach me on this Twitter handle or by email, and I would be happy to connect with you after the conference as well. I also want to say thanks to the organizers for putting together such a great event where people can come together and talk about the latest developments in the Python space.

Just a brief outline of the agenda: I want to talk a bit about what LocalStack is, some background about the project, a brief architecture overview, and then really dive into how you can actually use LocalStack. I'm going to try and run a few interactive demos, and hopefully the demo gods will be with us and everything will work. Then I'll also talk a bit about the LocalStack internals, because it's written in Python and I think there are some nice details we can share about that, and about some advanced usage and features. I'm happy to take some questions towards the end.

So, a quick intro and background, the context we're operating in. Maybe just a quick show of hands: who of you is actively involved in cloud development, say with AWS, for example? OK, so that's a majority of the audience, I would say, maybe 75%. What we're seeing a lot is that the cloud is great: it's very scalable, it's very performant. But sometimes the dev loop can be quite slow and tedious. You need to deploy your changes to the cloud, wait for the results, see if anything fails at runtime, then do another deployment and see if that works. That slows down the quick iteration cycles that are critical. There's also the question of remote debuggability: if you have a red build in your CI pipeline, how can you actually replicate that on the local machine? And then ultimately there's the aspect of managing lots of different dev accounts for your organization.
If you're maybe on an SRE team or a DevOps team that needs to provision all these AWS accounts, it can become quite a bit of overhead, and also costly. So that's the context we operate in, and what LocalStack brings to the table is what we call a fully functional local cloud stack. You can literally develop your AWS applications on the local machine, even offline in some cases. I've actually been doing some work on the flight here, literally developing some sample apps while being offline, which is quite nice.

It ships as a Docker image, is reasonably easy to install, and has support for some 50 to 60 AWS APIs right now: different compute services, various databases, messaging, and also some more sophisticated, exotic APIs. We'll talk a bit more about those in the course of the presentation.

The 10,000 feet view of what we want to achieve is this: your application, the blue box here, moves through different stages from the local development machine through CI/CD all the way to production, and along the way it's just talking to these API endpoints, not really knowing that it's talking to emulated endpoints in the local context and in the CI system before actually moving to production. So it's really about providing these gray boxes on this image, the emulated AWS services that are just running on the local machine.

You might be asking: why local? A lot of people ask why you would even bother doing this. What we're hearing in the space is that it helps you stay in control of the environment. People like to locally attach a debugger in an IDE, have quicker iterations and cycles, just the speed of development, reduced management overhead, and also removed restrictions.
If some of you are working in an environment where it's hard to even get access to a cloud account, it can be very easy to get started and experiment with these cloud APIs without having any costs associated. There are also some interesting discussions on Hacker News you can check out about the pros and cons of local cloud development.

A bit of history of the project: it started as an open source project quite some time ago already, in 2017. There was an initial bump when it got some traction from a tweet by Jeff Barr, the chief evangelist at AWS, and from then on it kept growing in the open source. The initial phase was really just bootstrapping and getting the ideas out. The middle part here is the early adoption phase, where people started using it more and more, and now we're entering what we call the growth phase, because it's now really taking off and getting real. LocalStack is now also a company, there's a team behind it, and we're seeing a lot more innovation happening in that space. If you happen to have used LocalStack in the past, I'd encourage you to take a look at it again with a fresh view, because it's very different today from what it was in the past.

I'm not going to go into too much detail, but it's essentially a very frequently used system, and we've been able to add a lot of improvements to the platform recently. For example, we improved the startup time: it previously took almost 10 seconds for LocalStack to start up, and it's now basically available in about one second. There's a new plugin system, and a multi-arch build: if you happen to be using an M1 machine, you can now get an ARM64 Docker image. So a lot of really cool things have been happening recently, and I'm going to go through some of them in the talk as well.
Okay, so the very high level architecture. I should mention that LocalStack is written in Python, which I think is why it's also a good fit for this conference. This is a slightly overloaded chart, but basically what it's trying to show is: you have a Docker container, which is where everything is running. Inside, we have the main LocalStack Python process, which has one canonical entry port, port 4566; that's just a random port that we chose. Then we have the runtime, which consists of a bunch of utilities for process management, request parsing, plugin loading and so on, and the gateway, which is really the dispatching logic that forwards an incoming request to the corresponding service implementation.

For example, we have a Lambda service provider, CloudFormation, Kinesis; basically for each of these AWS services there's one service plugin. These have their own internal logic: some of them spawn new Docker containers, like Lambda, for example. Some of them call external processes; for Kinesis, for example, we use a third-party tool called kinesis-mock. So there are quite different types of logic. There's also inter-service communication happening. For example, CloudFormation, which is used to deploy templates, actually talks to all these other services in order to get the state and get the resources deployed.

So this is the very high level overview, and the goal is to be lightweight, easy to use, and cross-platform compatible. That's why we chose Docker as the runtime.

Just a short word on the service providers. Essentially, what we do these days is very much driven by the API specifications. AWS, and the other cloud providers as well, provide detailed API specs about their APIs.
And we actually use those API specs to generate stubs and interfaces in Python. Here, for example, on the right-hand side, we can see the API spec for the CreateFunction API in Lambda. You can see all the details here: the path, the method, the expected response code. What we generate from that is the interface, which can then easily be implemented in Python by the developer. We make heavy use of a library called botocore, which contains the AWS representation of these specs in Python code.

Each service keeps its state in memory. So each service provider has a state container, an in-memory, ephemeral representation of the state, which is indexed by an (account, region, service) tuple. You have this hierarchy in AWS: accounts, then regions, then services, and those make up the mapping from the identity to the state. And then we have some common mechanisms for persisting that state to disk and reloading it. (I think I'm hearing some audio feedback, but hopefully it's going to be fine.)

As also mentioned, there are some external service providers. Kinesis, for example, is where we spin up a third-party tool inside the Docker container, kinesis-mock, and basically just forward the requests to it, because that's the canonical implementation for Kinesis right now.

Okay, so that was a very quick overview of the architecture of LocalStack. What I would like to do now is go into some demos and show you how it works. We're going to start with some very basic usage of LocalStack: we'll start it up and then run a few commands to create some S3 buckets and SQS queues, put some messages on a queue, and just play around a bit with the services.
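The per-service state container described above can be sketched roughly like this; the names are illustrative, not LocalStack's actual classes:

```python
from collections import defaultdict

# Illustrative sketch of an ephemeral, in-memory state container indexed
# by an (account, region, service) tuple
class StateContainer:
    def __init__(self):
        self._stores = defaultdict(dict)

    def get(self, account_id, region, service):
        # each (account, region, service) combination resolves to its own store
        return self._stores[(account_id, region, service)]

state = StateContainer()
state.get("000000000000", "us-east-1", "sqs")["alerts-queue"] = {"VisibilityTimeout": "30"}

# a different region resolves to an isolated, empty store
print(state.get("000000000000", "eu-central-1", "sqs"))  # → {}
```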
So I have a terminal prepared here. What I'm going to do is start LocalStack in dev mode; it's the same as running it in Docker, we're just going to do it in dev mode. Then I'll go back to the few samples I have prepared. I hope you can read this from the back; let me zoom in a bit. The left-hand side is just the LocalStack container output, which is maybe not super relevant.

Okay, so now we can take a look at sample one. This has just a bunch of commands for creating resources. The simplest one is, for example, creating a bucket. What you'll notice is that we have this awslocal command here. It has the same API and the same interface as the AWS command line: if you're familiar with the AWS CLI, it has all the commands for interacting with the different services, and awslocal is basically the same interface, it just points to the local APIs.

Okay, so we created a bucket. We can create a file with some content, say "hello world", and then we put this file with awslocal s3 cp: we copy the file, and you can already see there's some output happening here, an S3 PutObject. If we do a list on this particular bucket, test, we see that the file is there. Really simple; it kind of shows you how to interact with the system.

We can then also create an SQS queue. SQS is the Simple Queue Service, essentially used for sending and receiving messages in AWS. We've just created the queue here, and now we can send a message to it. We get back some attributes, like the message body MD5 hash, the message ID and so on. And then we can also receive the message with this command here, and we get the message back. So fairly straightforward; this is the hello world of how to interact with LocalStack.
Okay, I hope that was easy to follow so far. There are lots of configuration options you can use; it's a highly configurable system. There are various network configurations: you can configure the ports, Docker networks and so on. There are some service-specific configurations; for example, you can inject a certain latency if you want to simulate the real-world behavior of AWS, where resource creation sometimes takes some time, so you can actually simulate those delays and configure them in the system. Lambda has a lot of configuration around how to mount the code into the Docker containers and so on; we'll see that in a second. And there are more settings for debugging, log output, persistence and more. We tend to assume sensible defaults for most of the configuration, because we just want to make it really easy to use out of the box, but if necessary, you can set the specific configs as well.

There's also a web user interface. I'm probably not going to go into too much detail, but it's basically a way to browse the resources that you created on the local machine from a web UI; just a different representation of how to look at things.

Okay, so I'm switching gears a bit and going to a slightly more complicated scenario. Next we want to look at S3 bucket notifications. Let's assume we have an S3 bucket, and we're going to use that bucket to push log files to, some JSON-encoded log files. What we then want to do is have a Lambda function, written in Python, that scans these logs and checks if there are critical messages in there. If so, it puts them on an SQS queue, which can later be consumed by a consumer. So a fairly standard producer/consumer example. We've prepared this to run on LocalStack, so switching back to my examples, I'm going to go to sample two.
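On the configuration point above: a common pattern for this kind of system is environment variables with sensible defaults. A minimal sketch; the variable names here are made up for illustration, not LocalStack's actual config keys:

```python
import os

# read an env var, falling back to a sensible default when unset or empty
def env_int(name, default):
    value = os.environ.get(name, "").strip()
    return int(value) if value else default

def env_str(name, default):
    return os.environ.get(name, "").strip() or default

EDGE_PORT = env_int("DEMO_EDGE_PORT", 4566)    # main entry port (hypothetical var)
LATENCY_MS = env_int("DEMO_LATENCY_MS", 0)     # injected latency, off by default
LOG_LEVEL = env_str("DEMO_LOG_LEVEL", "info")
print(EDGE_PORT, LATENCY_MS, LOG_LEVEL)
```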
And what we have here is actually implemented as a Terraform module, or a Terraform configuration. Some of you may be familiar with Terraform, but just as a quick refresher: if you take a look at main.tf, Terraform is a way to specify resources that should get deployed to your cloud environment. It's a declarative way, infrastructure as code, that lets you define these resources without having to call each API individually.

The way this looks in Terraform is that you have these resource sections. For example, we have the S3 bucket we saw before, where we store the logs. We have an SQS queue, which is going to be called the alerts queue, and it also has a policy attached, an IAM policy. There's the Lambda function, the one we're going to use to do the filtering of the logs, and then some more IAM roles, general boilerplate that also needs to be generated as part of the Terraform configuration. And this piece here is the bucket notification: here we're linking the S3 bucket's object-created events to the Lambda function. Whenever an object is created in the S3 bucket, we call the Lambda function. Maybe this is familiar to some of you; a quick show of hands, who's been using Terraform? Okay, yeah, almost a majority, so you're familiar with that as well.

Again, we have a small wrapper script called tflocal. The process is obviously to first run terraform init, which initializes the project, downloads some modules, and makes sure everything has a fresh state. Once that's done, if I were to run terraform apply here, it would fail, because I don't have AWS credentials configured on this machine. It would tell me: hey, I don't know how to get the credentials, please give me access to AWS.
That's why we have the tool called tflocal, which is again a small wrapper script that deploys this Terraform configuration against LocalStack. We can now do a tflocal apply, and it will call Terraform under the covers. We now see that it's proposing these changes: it's basically coming up with a plan that can then get executed against LocalStack. We're just going to say yes here. You can already see on the left-hand side that a bunch of log output is starting to happen. Terraform is now checking the state, what is already configured, creating a few resources, making sure they are properly deployed, basically running a deployment loop to make sure that the state converges to the desired state. And as you could see up here in the resource section of the plan, these are all the resources that were defined in the configuration we looked at before. There's actually a lot more information in here, because Terraform applies a lot of default values: if you don't specify them as part of your config, they just get added here as defaults.

S3 buckets especially are, I think, sometimes slower to deploy because they have some timeouts. Once that's done, you can see it's now checking the object lock configuration for the S3 buckets, so a lot of different calls are now happening from Terraform. And now it's creating the bucket notification as well, and that is complete. We now have this stack deployed against LocalStack, and we can follow the scenario again: put some log files into the bucket and have the Lambda triggered from that. So we're going to do that now. We have one of these log files prepared here; this is what they look like. It's just got a timestamp, a message, and some CPU utilization values, and we basically want to say: if the CPU is higher than a certain threshold, then we want to trigger an alert.
So, a fairly simple example of some log monitoring. What we can do now is, I'm going to switch down here, go to sample two, where I have my commands, my cheat sheet. We're going to copy this file into the S3 bucket with this s3 cp command. Now you can see here on the left-hand side that LocalStack is actually spawning the Lambda function, and we already see some output happening.

Let's take a look at the actual Lambda function that's doing the processing. For those of you who are familiar with Lambda functions in AWS: you have this simple interface where you implement this handler function with an event and a context, and it gets automatically called by the cloud provider, or by LocalStack in this case. What we're doing in this Lambda is pretty simple: we get the records, loop over all the records in the log file, and then we have these thresholds here. If it's higher than 90%, we say it's critical CPU utilization; otherwise, if it's higher than 60%, it's high utilization. And what we actually saw in the output is that it's creating these warning and critical messages for us already. It puts these messages on SQS, and we can consume them from there; a bit further down here it's doing the SQS SendMessage call.

So we're going to consume the messages now, oops, and just call receive-message, which is basically... oops, okay, I probably have the name spelled incorrectly, that's interesting. Let me just double-check real quick. Classic. So if we look into our main.tf, the queue name was alerts queue... SQS list-queues... ah, okay, so it's a slightly different name that we're getting here, just "queue1". Okay, so this was a slightly different name, but you get the idea: you can just get these messages from SQS and consume them.
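The handler logic just described can be reconstructed roughly as follows. This is simplified: the demo's handler reads the uploaded log file from S3 and sends each alert to SQS via send_message, but the thresholds match the ones shown:

```python
# Reconstructed sketch of the demo's log-filtering Lambda handler
def handler(event, context):
    alerts = []
    for record in event.get("records", []):
        cpu = record.get("cpu", 0)
        if cpu > 90:
            alerts.append(("CRITICAL", record))   # critical CPU utilization
        elif cpu > 60:
            alerts.append(("WARNING", record))    # high CPU utilization
        # in the demo, each alert is then pushed to SQS via send_message
    return alerts

event = {"records": [{"cpu": 95}, {"cpu": 70}, {"cpu": 10}]}
print(handler(event, None))
```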
What we can also do now is go back to our Lambda function here and make some live changes, and I want to quickly show how this works and what it means. Let's assume we now want to make some modifications to the Lambda handler we just defined. We have a mechanism that we call Lambda hot reloading: you can make changes to the Lambda handler on the local file system, and they are automatically reflected the next time you invoke the Lambda function. There's no need to redeploy the Lambda; you just make a change, run the invocation again, and it's automatically reflected. We achieve this by using a special bucket name, __local__ (dunder local dunder), which indicates that this is just a local "bucket" where we use this mounting mechanism.

So if we go back to the handler, what we're going to do is just add some "Hello world" and also "EuroPython 2020". Okay, and then if I basically copy the same log file to S3 as before, it triggers the Lambda function again, and we can now see, I hope you can actually read it from the back, that the output is updated with the change we just made. So that's a fairly easy way to iterate really quickly on your setup and your Lambda functions while you're developing your local apps.

Okay, I hope that was not too rushed and was clear from the demo. Speaking a bit more about integrations: we already saw Terraform, and it's really important to us to have all sorts of different tools integrate with LocalStack. So there's Terraform, which we already saw; there's also Pulumi and other infrastructure-as-code frameworks; and the AWS CDK, which we're going to look at in a second. Maybe some of you are familiar with that as well.
The Cloud Development Kit. Actually, maybe a quick show of hands again: who's been using it? Okay, so that's fewer people than with Terraform, but a few hands were raised, so let's talk about it in a second. And then there are also other things like the Docker Compose integration, different application development frameworks like Serverless or AWS SAM, and also CI/CD systems. So we really try to have a very active ecosystem of ways to integrate LocalStack into your environment.

Talking a bit about the CDK: this is yet another way to define infrastructure, and it works a bit differently than Terraform. The way the CDK works is that you define your infrastructure in a programming language. It can be Python, it can be TypeScript, Go, anything really. But it's basically still a declarative specification: you just create an object tree of what your infrastructure should look like. This then gets compiled: there's a cdk synth step, a compiler that compiles it into a CloudFormation template. CloudFormation is a YAML or JSON specification that just has the resource declarations. And then there's the processor, which is really CloudFormation, that makes the creation of the resources effective. We also implement CloudFormation in LocalStack, which is why we can use the CDK as well.

So, again, demo time. What I want to do now is take a look at a slightly more complicated application, which is using AppSync. AppSync is an AWS API that's based on GraphQL schemas, essentially: you can define GraphQL APIs, and then you have query and mutation operations to read data from a backend or add data to a backend. And there are different types of backends you can have, for example DynamoDB tables or RDS databases, or you can even call Lambda functions and other things.
But what we're going to demo here is just a GraphQL API that's connected to DynamoDB, and we're going to deploy the whole thing against LocalStack. All right, so this is sample number three; I'm just going to take a fresh start here with the instance and go into my sample three folder.

First of all, let's take a look at what this CDK script looks like. The stack itself is actually written in TypeScript; it could have been Python as well, but this is just to show the different languages. It has a class here, CdkDemoStack, actually taken from one of the AWS samples, I believe, and then it just has a bunch of different resources. You have this AppSync GraphQL API, and there's a schema associated with it. The schema has definitions for the different types: basically we have query and mutation operations, so we can retrieve items from DynamoDB with all and getOne, or we can save and delete items as well. So these are the two kinds of operations, query and mutation.

Then there is the actual DynamoDB table that's also part of the stack. We need to create the table itself, with a partition key that uses an id as the name. And then there are the data sources, which connect the DynamoDB table to AppSync as a data source. So there's quite a bit of boilerplate involved in connecting the different pieces. There's also this concept of a resolver in AppSync, which basically makes sure that the types in the GraphQL schema are associated with actual actions that we can run against the backend, the DynamoDB table in this case. So you can see it's a reasonably large and also non-trivial example.
So what we want to do now is deploy this against LocalStack, and as you could have guessed, there's a cdklocal command that we can use. It's not "apply" in this case but "deploy": cdklocal deploy. What this does is it looks at the script we just saw, creates the CloudFormation template, and then basically runs this CloudFormation deployment against LocalStack. You can see a bunch of output happening here; I've enabled the debug logs. It's basically making sure that all the different resources are now deployed, everything we saw in the example before: the DynamoDB table, the GraphQL schema, the API, all the different pieces.

And now we can actually run commands against this; it's now deployed. So again, going to my cheat sheet here. We can, first of all, list the GraphQL APIs. This one was just created by us and has the basic information. You also get an endpoint here, a local endpoint that can be directly invoked for the GraphQL API that was created, and also a WebSocket endpoint if you want to do real-time communication. Then we can get the API ID of the API we just created; it's just an identifier. We can also retrieve the API key, which is just a generated key that we can use to invoke the API.

And now we can start running some curl requests against it. Oops, I'm having some issues with the line wrapping here. So we run a curl request that passes the API key as a header, and the payload is a GraphQL operation, in this case a mutation, to save a new item called "test one". As you can see on the left-hand side, it was actually triggering some DynamoDB PutItem calls in the background. So again, we make an HTTP request to this GraphQL endpoint.
LocalStack detects which API the request is associated with, looks up the resolver, and then creates the request against DynamoDB in the background. And now we've added this item. We can add another item and get the item ID here. Oops. So this is "test two", and we get back an item ID. Then we can also retrieve it via the query part of the API, not the mutation but the query: we pass this particular item ID to the getOne operation, and it returns us the item, again doing the full round trip against the DynamoDB database. So this really allows you to interact with these resources in a very easy fashion.

Okay, so that was a slightly more complex example with lots of moving pieces, and it again shows nicely how to iterate quickly: if you wanted to make some changes, for example in the schema logic here, it's really quick to make them and deploy them again against the local instance.

All right, so that's the first part of the demos. Now I want to talk a bit more about the Python internals, because I think that's maybe interesting, especially for you all with a Python background. There are a couple of different patterns that we've established along the way. The project has been around for quite some time, we've been getting a lot of different requirements, and the code base has grown organically over time; we've also refactored a lot of it in the last year or so. But here are some of the highlights I want to mention.

One is SSL socket multiplexing. One requirement we have is that we want to expose our port 4566 both as HTTP and HTTPS. In fact, if I go back here and do a curl on localhost 4566 with /health at the end, I get the health endpoint information. I can also do the same over HTTPS; unfortunately, I need to specify the... okay. So it's also an HTTPS endpoint.
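A minimal sketch of how plain HTTP and TLS can be told apart on the same socket, by peeking at the first bytes of each connection; this mirrors the heuristic described here, not LocalStack's exact code:

```python
# A TLS handshake starts with non-printable bytes (a TLS record begins with
# 0x16), while plain HTTP starts with an ASCII method like "GET" or "POST".
def looks_like_tls(first_bytes: bytes) -> bool:
    return any(b < 32 or b > 127 for b in first_bytes[:5])

assert looks_like_tls(b"\x16\x03\x01\x00\xc4")      # TLS ClientHello record header
assert not looks_like_tls(b"GET /health HTTP/1.1")  # plain HTTP request

# in a real duplex socket you would peek without consuming, e.g.:
#   head = conn.recv(5, socket.MSG_PEEK)
#   if looks_like_tls(head): conn = ssl_context.wrap_socket(conn, server_side=True)
```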
You can actually also use a valid certificate that we provide for localhost.localstack.cloud; then you don't even need to skip the SSL verification. That's actually a valid SSL certificate for localhost. So basically, multiplexing SSL traffic and non-SSL traffic over the same socket is something that is very, very useful for us. And the way we do this is with a very simple mechanism: we wrap ssl's SSLSocket class, which is basically the factory that creates new SSL sockets, and we override it with a duplex socket. The duplex socket essentially peeks at the first five bytes of each connection and checks whether the bytes are in a certain range: if they are less than 32 or higher than 127, it's a good indication that it's not regular HTTP traffic, and most likely SSL. There are also some libraries that can achieve this, but we happen to use this approach because it's been serving us pretty well. It's just a nifty thing that has come up over the years. I'm actually curious to hear about your experiences, maybe after the talk, if you have good solutions for some of these problems. So that's SSL socket multiplexing.

The other thing I want to point out is plugin loading. By now, LocalStack is a pretty sizable code base, several hundred thousand lines of code. Previously, we were essentially importing the entire code tree on startup, which was terrible, frankly; it took very long to start up, even on new hardware. So what we're doing now, and I'd really encourage you to take a look at this project, is called plux. We actually open sourced it. It's a very nifty piece of software: it's a plugin mechanism that uses the setuptools entry points mechanism. Setuptools is the packaging machinery used when you publish something to PyPI, for example.
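Entry points are essentially "module:attribute" references, so nothing has to be imported until a plugin is actually requested. A stdlib-only sketch of that lazy lookup (plux adds discovery, namespaces, and more on top):

```python
import importlib

# resolve an entry-point-style "package.module:attribute" reference lazily
def load_plugin(spec: str):
    module_name, _, attr = spec.partition(":")
    module = importlib.import_module(module_name)  # the import happens here, on demand
    return getattr(module, attr)

# stand-in for resolving something like an "acm:default" provider plugin
dumps = load_plugin("json:dumps")
print(dumps({"service": "acm"}))  # → {"service": "acm"}
```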
What you see here in the bottom left corner is part of the entry points specification of the localstack PyPI package. I hope you can read it in the back, but basically it defines the service name, acm:default, which means the default implementing provider for the ACM service should be looked up in this module. So we have this information pre-import: we don't need to import any code in order to know which modules we need to load to instantiate a service provider plugin. I believe this is quite useful for other projects with large code bases: if you have this issue of lazily loading code as plugins, I'd really encourage you to take a look at it. So if there are two things to take away from this talk: LocalStack is great, and plux is great. Hopefully I was able to get that across. It's a total game changer for us, because the startup time is really sub-second now and we just do this lazy loading, which is extremely powerful.

Okay, the next common pattern is serialization of state using the pickle library. As I mentioned before, the service providers by default have just ephemeral state in memory; when you tear down the instance and restart, it's all gone, but there's also a serialization mechanism. We use dill, which builds upon pickle and adds some more functionality on top of it. One example: if you have synchronization primitives like locks or queues, it's problematic to reload them from persisted state, because they might be in an acquired state. So when we reload the state, we go over the entire object graph and reinitialize things like queues and locks with new objects, so that they can actually be acquired again.
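The reinitialize-on-restore idea can be sketched with pickle's __getstate__/__setstate__ hooks; a simplified stand-in for what the dill-based persistence has to handle:

```python
import pickle
import threading

class Store:
    def __init__(self):
        self.items = {}
        self.lock = threading.Lock()

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("lock")  # locks can't be meaningfully serialized; drop on save
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.lock = threading.Lock()  # fresh, unacquired lock after reload

store = Store()
store.items["queue1"] = {"messages": []}
with store.lock:  # serialize even while the lock is held
    restored = pickle.loads(pickle.dumps(store))

assert restored.items == {"queue1": {"messages": []}}
assert not restored.lock.locked()  # usable again after reload
```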
So that's another pattern we've come across over time. The next point is runtime code patching. We use a variety of third-party libraries that we depend upon, and sometimes we need to make just minor modifications to them. Instead of always creating a fork and making the modifications there, we apply monkey patching in quite a few places. What it would normally look like is on the right-hand side here: you keep a reference to the original function, assign a new function in its place, and the new function can call the original and then update the result. It's fairly straightforward. But what we did is introduce an @patch decorator which makes this very easy: you apply it to the patching function, and it passes in the original function that you're patching. Again, something that's been very useful when working with third-party libraries that you need to patch. Process management is another one. We have a number of abstractions for process management: there's a server abstraction, then for example a Docker container server class, and then a proxy Docker container server — so it's quite a class hierarchy. The proxy Docker container server, for example, creates an SSL proxy within LocalStack, spins up an external Docker container, and then proxies the requests through. It's just helpful for maintaining the lifecycle of external processes and containers. And in terms of testing, we use pytest quite extensively. For example, in fixtures where we create AWS SDK clients, we configure them with the endpoint URL: if we're running against local dev, we set the endpoint URL; otherwise, if it's against prod, we just use the default boto clients.
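A minimal version of such a patch decorator could look like this — a sketch in the spirit of the helper described in the talk, not LocalStack's actual implementation:

```python
import functools

def patch(target):
    """`target` is a (holder, attribute-name) pair. The decorated
    function receives the original callable as its first argument, so
    it can delegate to it and post-process the result."""
    holder, name = target

    def apply(fn):
        original = getattr(holder, name)

        @functools.wraps(original)
        def patched(*args, **kwargs):
            return fn(original, *args, **kwargs)

        setattr(holder, name, patched)  # install the monkey patch
        return patched

    return apply

# Illustrative usage: force json.dumps to always emit sorted keys.
import json

@patch((json, "dumps"))
def dumps(original, obj, **kwargs):
    kwargs.setdefault("sort_keys", True)
    return original(obj, **kwargs)
```

After this, `json.dumps({"b": 1, "a": 2})` yields `'{"a": 2, "b": 1}'` — the patch wraps the library function in place without forking the library.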
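The endpoint switching for test clients can be sketched like this. The function name and the `TEST_TARGET` environment variable are illustrative, not the actual test-suite configuration; `http://localhost:4566` is LocalStack's standard edge port:

```python
import os

LOCALSTACK_ENDPOINT = "http://localhost:4566"

def client_kwargs(target=None):
    """Keyword arguments for boto3.client(): against LocalStack we pin
    endpoint_url (plus dummy credentials), against real AWS we pass
    nothing extra and let the default credential chain kick in. A
    pytest fixture would then simply yield
    boto3.client("s3", **client_kwargs())."""
    target = target or os.environ.get("TEST_TARGET", "local")
    if target == "aws":
        return {}
    return {
        "endpoint_url": LOCALSTACK_ENDPOINT,
        "aws_access_key_id": "test",
        "aws_secret_access_key": "test",
        "region_name": "us-east-1",
    }
```

Because the same factory serves both targets, the identical test body can run against AWS and against LocalStack, which is exactly what the parity tests mentioned next rely on.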
This is very helpful for what we call parity tests, where we run our integration tests once against the real cloud, against AWS, and a second time against LocalStack, to see what the differences are — ideally the results are identical. Okay, I need to keep an eye on the time. Again, pytest is a great library for testing, and things like resource cleanup are also extremely valuable and helpful there. There's one more thing I wanted to briefly share, which is a feature we're releasing very soon called Cloud Pods. Basically, it's a way to take a persistent snapshot of your instance. As mentioned before, by default when you start LocalStack the state is ephemeral, but there's this new feature now, and I can briefly show it — I think this should still be within time. So I'm just going to show a few sample commands here. Basically what we're going to do is create some state: create an S3 bucket, create an SQS queue. Okay, we've seen this before. Now we have this new command, localstack pod, in the LocalStack command line, and we can commit the state — similar to Git commits, we can do a commit of this state to pod number one. Actually, before I do this, I want to make sure the right local pod is being used. So I commit the state into pod one — again, a cloud pod is basically a notion of capturing the state of an instance — and once it's been created, I can do an inspect on this pod. The inspect is an interactive ncurses CLI here: you can see the S3 bucket that was created and the SQS queue. That's the content of this cloud pod. Now we can create some more state — let's also create an SNS topic and an IAM user — and then we update the pod.
So we do another commit operation — pod commit; again, the state was successfully committed into a new commit — and then we can do another inspect and see that the state has changed: now we additionally have the SNS topic and the IAM user that were just created. And here's the cool part: if I now restart LocalStack — so we just restarted the instance — then since the state is ephemeral, if I do a list-queues here, it's empty, right? There are no queues, because we just restarted. But now we can do the pod inject operation, which injects the state we previously committed. And if I do another list, it actually shows the queues and all the other resources that were in there. So this is a bit of a sneak preview of how we envision state management — really treating the internal state of LocalStack almost like Git objects that you can easily share, push, pull, and so on. This is just a quick overview of what these operations look like: we just saw pod commit, which pushes the state from the local instance to a local cloud pod storage, and you can obviously also push that to a remote backend. Okay, it looks like I'm running a bit out of time, and I want to spend one second on a very quick announcement. We actually have version 1.0 of LocalStack coming up — it's scheduled for release this week, if all goes well actually today. We have a lot of cool new features and optimizations in the product, a huge set of new and advanced features. So hopefully that creates a bit of hype — give it a try, test it out; we're looking forward to the feedback. We also have a short promo here, whether you haven't used LocalStack before or have used it already.
So if you post about us on LinkedIn or Twitter and tag us with LocalStack and EuroPython, you'll get a bunch of free licenses for your team — three licenses for three months in your account. So if you want to check it out and give it a try: if all goes well, this afternoon we should be ready to hit the release button. The team is quite excited and working on this right now. Okay, I think that's a wrap. Sorry for going a bit over time, but again: we've seen that local cloud development with LocalStack is feasible and also enjoyable, and enables very quick dev and test loops locally. Python is a great language for our purposes — it's dynamic, it has the ability to do monkey patching, and we were able to introduce a lot of really cool optimizations that were very specific to our problem, like the lazy code loading with Plux plugins. Future directions are that we want to look into things like hybrid scenarios — blurring the boundary between local and remote, so it's not always black and white; you might have some shades of gray where some resources are remote and some are local. We also want to dig more into state management based on the Cloud Pods I was just showing, and we're now also releasing a LocalStack extensions mechanism, which is essentially a way to very easily plug extensions into the system — adding new service providers, or intercepting all the service calls and doing some logging to a downstream system, for example. So we're very curious to get your feedback, and also your questions now, if you have any. Thanks a lot — that's it from my side. Yep, there's a question here.
Yeah, so there's an open source version, a community version of LocalStack, which has the core set of services you'd use — Lambda, DynamoDB, I think almost 30 services — and then there are also what we call pro extensions, which include some more advanced features, and the licenses would be for that. So the question was whether the CloudFormation or Terraform scripts are the same or need modifications. For the most part they can be literally identical. These days you can take a lot of AWS-provided CloudFormation samples and deploy them directly into LocalStack. There are a few minor, subtle differences when it comes to endpoints, for example, where we use local domain names, and a few minor details, but for the most part the resources you deploy are really identical. Yeah, thanks. So the question was whether there's a way to extend the lifecycle mechanism for external processes. It's certainly something we could add as part of the hierarchy I was showing. We tried to introduce abstractions from the ground up — okay, what is a server? It has certain characteristics, like a port, usually some endpoint, and you can derive from that. Docker is one branch of this class hierarchy, but I think we could also integrate something for virtual machines, as you mentioned, where you manage the lifecycle of VMs. Definitely, yeah, great point. Any more — yeah, one more question. Yes, it is. That's actually part of the upcoming release I just mentioned: we now have multi-account support. Previously you could only have multiple regions, but now you can have multiple regions and multiple accounts, and also cross-account IAM enforcement. This one, please. Yeah, the question was whether there's an easy way to integrate a mock for something like Textract that's not currently available in LocalStack.
I think the extensions mechanism I mentioned before — maybe we can take it as a follow-up — is a great way to get new extensions into LocalStack. We're actually going to demonstrate this with our v1 release, based on a Stripe API emulator, so you can plug in a Stripe API emulation. And I think you can apply a very similar approach for Textract and other APIs. In fact, Textract is also on our roadmap, but if you want to do it yourself, the extensions mechanism would be the way to go, I think. Yeah. Are there any more questions, maybe from the online Q&A, or any other questions? Okay. All right, thank you so much for your time — and just a great conference. See you all.