I hope this is going to be a good long afternoon for all of you — you've been here since 2 o'clock, so I'm sure you've already had a good experience with the speakers before me. My name is Amit Sharma, I am a Solutions Architect with Amazon Web Services. Is anybody here not aware of what Amazon Web Services is? No? Alright. So Amazon Web Services is an Amazon.com company — it's a separate business unit of Amazon itself — and we are purely into infrastructure web services. When I say infrastructure web services, it's purely around offering storage, compute, networking and so on, as a pay-as-you-go model, for our customers to build their sophisticated, scalable and secure web applications. And how many of you here are purely technical, and how many are from the business side? Okay, so a few of you are from the business side and the others are purely technical. For the folks from the business side, I'll try to put in some of the business aspects as well so that you get some context around it. I was given this agenda on infrastructure automation — I'm sure Jasveer also spoke about how NDTV has been using Puppet for their entire infrastructure and application automation, and so on — so my topic will primarily be around how you achieve this automation in Amazon Web Services. Just before I start, there is something called the Activate Program that we have, so please register for that. It's open for everyone, and it gives you a lot of benefits: we give promotional credits, you also get a 30-day trial of the Business Support that we have, we do a number of trainings, we do a number of office hours where customers from a similar domain can come and do some brainstorming, plus a lot of special offers that we keep running — so just register for it. There is no charge for registration as such. So, a quick agenda. It's a bit of a long agenda, so please bear with me.
Mitesh from Nightingale Springboard actually told me to keep it as informative as possible, so I'll try to do that — I just hope I don't bore you in the process. I'll talk about AWS and automation, and there are multiple services — Elastic Beanstalk, CloudFormation, OpsWorks — which allow startups, and in fact customers of all sizes, to automate to the maximum extent possible, and all of this is fully integrated into Amazon Web Services. We'll talk about the pros and cons of each; you'll hear these three names very frequently in this session. Plus I have a couple of demos on Elastic Beanstalk and CloudFormation, which should be interesting for those of you from a technical background. And then of course there is Q&A — but please interrupt me whenever you want to, no problem. A quick overview of the Amazon Web Services products and services. At AWS we do not categorize our services into IaaS, PaaS, SaaS and all that, because we believe the boundaries have blurred and there are no clear distinctions as to which service falls in which domain. So as you can see, we categorize our services into networking, compute, storage, databases, applications, etc. During the course of the presentation I'll cover some of those services in more detail. Now, since you are startups, I'm sure a lot of you are listening to your customers very seriously, and your customers will always say: give me a fast application; at the same time, give me a service which is always on, with lots of features — so you will want to put as many features as you can into your services. And last but not least, they will also expect a lot of customization and personalization in your web services. That's easy to say, but how do you actually do it? And how does that fit into the overall context of automation? Before we jump into automation, let us see automation in the context of the larger picture.
That is, when you run applications in the cloud — in, say, Amazon Web Services, or for that matter in any cloud environment — there are certain rules, certain approaches, that you have to take when you create your application. You cannot just take an application running in a physical environment, put it into the cloud, and expect the benefits of the cloud to be visible as well. That does not happen. It has to be taken very seriously: when you create an application and put it onto the cloud, there are certain rules you need to follow. Technically, your application may well run if you just pick it up and drop it into the cloud, but that will not give you those benefits. So what are those rules? Let us first see those, and that will also set the context for automation. Rule number one: serve every web request that you get. Every request that comes to your application should be served. I will not go too deep into how Amazon Web Services allows you to do that, but customers like NDTV and others have actually leveraged it extremely well — they leveraged the entire global infrastructure to make sure that none of the requests that come to their websites are dropped. Second: serve the request as fast as possible. The color combination on this slide may not be too great, but you can see the approach — choose the fastest route. Many DNS services across the board have features like latency-based routing and routing to the nearest PoP (point of presence), and a lot of startups actually leverage that — we are talking about startups like Instagram, Pinterest, Netflix, etc. They all leverage that to make sure the request is routed to the nearest PoP. Next: offload your application servers. Now this is a slightly abstract concept but at the same time extremely powerful — if there are services available to do a particular job, then don't load your compute nodes with that job.
In this case, for example, content delivery networks — I am sure you have heard about what CDNs are. Try to offload the static and even dynamic content to the CDN rather than letting those requests go back to the compute nodes. This concept alone can take something like 50% of the load off your compute nodes, so your compute nodes can do only what they do best. Next is caching. This again is a bit of an under-valued concept that we have seen across the table — I keep speaking to a lot of customers in the north of India, and caching is not given the importance it should be. Caching can happen at every layer: the CDN is caching at the web layer, and then there is the caching that is possible between the app and the database layer, which is memcached, Redis and all of that. There are services available for this, so we advise our customers to leverage them. Apart from that, single-digit-millisecond latency is where it matters. This is more in the context of the database: when the database has to scale, it becomes extremely important that you don't put everything onto MySQL or a relational database. Just think about putting some of that load onto some of the NoSQL stores like MongoDB and some of the other services which are available. Very important concept — not very easy to execute, though — but the big companies like Shazam actually leverage NoSQL for their unique use case. You know what Shazam does: you play a song near it and it will actually go and find it. It's amazing, right? None of the relational databases in the world would be able to handle that, and they leverage NoSQL for it. Next is scale. Scale is a very important concept: your application may work extremely well for a few hundreds or thousands of users, but when you actually scale to much larger numbers — say millions of customers — the whole architecture needs to be designed accordingly.
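The caching-between-app-and-database idea mentioned here is usually implemented as the cache-aside pattern. Below is a minimal sketch of it; the in-process `Cache` class is a toy stand-in for a real memcached or Redis deployment (which you would reach through a client library), and the names are illustrative only.

```python
import time

# Toy in-process stand-in for a cache such as memcached or Redis.
class Cache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.time() > expires:      # lazily drop stale entries
            del self.store[key]
            return None
        return value

    def set(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

def fetch_user(cache, db, user_id):
    """Cache-aside: check the cache first; hit the database only on a miss."""
    key = f"user:{user_id}"
    user = cache.get(key)
    if user is None:
        user = db[user_id]             # expensive database read on a miss
        cache.set(key, user)           # populate the cache for next time
    return user

db = {1: {"name": "Asha"}, 2: {"name": "Ravi"}}
cache = Cache(ttl_seconds=60)
print(fetch_user(cache, db, 1))        # first call: miss, reads the database
print(fetch_user(cache, db, 1))        # second call: served from the cache
```

The point the talk makes is that the second read never touches the database layer, which is exactly the load you want off your relational store.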
So scale-up is one option; scaling out is another paradigm that comes with the cloud. Don't try to jack up your compute nodes from two cores to four cores to eight cores and so on — try to spread that load across multiple compute nodes in parallel, and when you don't need them, just shrink back and return those nodes to the pool of resources. That's horizontal scaling, and we have this concept of Auto Scaling, etc., which allows customers to implement all of this easily. Next: simplify your architecture with services. If you have been using something like queuing or some of the database services, you can either build those services on your own on compute nodes, or you can just offload them to the services that most cloud providers would generally have. Queuing and databases are some of the low-hanging fruits which you can offload in the cloud. The nice thing about this is that, instead of focusing on your business, you sometimes tend to focus more on building a service for yourself — whereas in a cloud-like environment you can focus more on your application and just leverage something which is already given to you as a service. The whole point is that you move away from building the service and move to consuming the service. Services, in most cases, do not have a single point of failure, they scale extremely well, and they have security, etc., baked in as part of the service. Next comes the automation part of it. As you can see, out of the various aspects of designing your application for the cloud, automation is just one aspect — but it's very important if you really want your application to scale to that level. So: automate for operational management. And this is true for almost every cloud — or at least, if you are evaluating a cloud, you should keep in mind that anything in the cloud should be automatable.
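The scale-out-then-shrink-back behaviour described above boils down to a simple threshold policy, which can be sketched like this. The thresholds, bounds, and function name are all illustrative, not any particular Auto Scaling API:

```python
def desired_capacity(current, avg_cpu, scale_out_at=70, scale_in_at=30,
                     minimum=2, maximum=10):
    """Return a new node count given the fleet's average CPU utilization (%).

    Mimics a simple threshold policy: add a node when CPU is high, release
    one when it is low, and always stay within [minimum, maximum].
    """
    if avg_cpu > scale_out_at:
        current += 1          # spread the load across one more node
    elif avg_cpu < scale_in_at:
        current -= 1          # return an idle node to the pool
    return max(minimum, min(maximum, current))

print(desired_capacity(4, avg_cpu=85))   # high load: grow from 4 to 5 nodes
print(desired_capacity(4, avg_cpu=15))   # low load: shrink from 4 to 3 nodes
```

A real Auto Scaling setup evaluates a policy like this continuously against monitored metrics; the key design point is the min/max clamp, which keeps a runaway metric from launching an unbounded number of instances.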
What that means is: launching a compute node, attaching storage, creating storage, increasing the size of storage, creating new routers or new routes, etc., should all be an API call. And that is what typically happens, as you can see — everything just becomes an API call. And all of these APIs are available through SDKs across various programming platforms like PHP, .NET, Java, etc. That's a very important aspect: if you are planning to go for a cloud-based infrastructure, make sure that you have full visibility into what can and cannot be automated, because this is the holy grail, the whole point, of going to the cloud. So these are some of the services — and this is the prime agenda as well. OpsWorks, CloudFormation and Beanstalk have varying amounts of flexibility in them and they are made for slightly different purposes, and I'll try to convey that to all of you. Then there are some other aspects like bootstrapping, Auto Scaling, CloudWatch, etc., which these services leverage — a very important asset. The whole concept of bootstrapping is to create a bare-minimum image of the operating system and let the machine, when it comes up, take its character at the very last moment. What that means is that the machine should not itself be hard-coded as to whether it's an application server, or a database server, or a web server — it should get its character at the very last moment. That's the whole idea of putting your workloads into the cloud. That is called bootstrapping, and some of the OS vendors like Red Hat, CentOS and Ubuntu now officially support these concepts in cloud-like environments. Lastly, there is the economics side of each cloud provider — for example, they will each have their own way of billing, etc. — so that is extremely important for you to understand whenever you choose one. In Amazon, there are multiple ways of doing that.
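The "take its character at the very last moment" idea is typically done with boot-time user data: every instance boots from the same bare image, and a first-boot script installs whatever the role needs. Here is a minimal sketch; the role names and package lists are illustrative assumptions, not anything prescribed by AWS:

```python
# Map each role to the packages that give the machine its "character".
# (Hypothetical roles/packages for illustration.)
ROLE_PACKAGES = {
    "web": ["nginx"],
    "app": ["php", "php-fpm"],
    "db":  ["mysql-server"],
}

def build_user_data(role):
    """Render a cloud-init style first-boot shell script for the chosen role."""
    packages = " ".join(ROLE_PACKAGES[role])
    return "\n".join([
        "#!/bin/bash",
        "yum update -y",
        f"yum install -y {packages}",    # role-specific packages installed here
        f"echo {role} > /etc/server-role",
    ])

print(build_user_data("web"))
```

The same base AMI launched with `build_user_data("db")` instead would come up as a database server — nothing about the image itself is hard-coded to a role.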
I'll not go into this — this is more for the business folks as well — but there are options, for instance reserved versus on-demand instances, which allow you to predict your workload and get the maximum savings by committing to some of those plans. So let's now jump into the automation aspect. Why automate? What can we automate? This is what you would like to understand here — and then we'll see what the tools and methods are. So, this was the Ford factory, way back, a hundred years ago. People used to manufacture cars on assembly lines — an extremely good concept. Assembly lines actually helped turn a luxury product into a commodity product, right? They dropped prices so that cars could be mass-produced and standardized — an excellent idea that actually got something out into the market very nicely. However, assembly lines had their own problems. They were all human-driven, so there was always scope for errors, and the quality was also somehow not consistent — two subsequent cars off the same assembly line might not have the same characteristics. And apart from that, only black — you know that very famous comment from Henry Ford: you can order the car in any color, as long as it's black. Fast forward not a few years but a hundred years, and now we have robots manufacturing most of the cars — that is where automation comes in. So let us extend that back into IT as well. These are a couple of my colleagues, actually — they are racking a server. Nothing wrong with it; the only issue is that they are racking it upside down, and the rails wouldn't come out of the slot. So let us quickly look at some of the building blocks that I'm going to talk about. There's EC2, for example — virtual servers — so you can select from a whole variety of server types which are available in the cloud.
Route 53 is the DNS service, which allows you to register a domain and then use Route 53 to serve all the DNS requests, etc. Auto Scaling is for horizontal scaling: depending upon the load criteria, etc., it will keep launching more and more instances, and when the load comes down it will terminate some of those instances. So that was the compute and networking mix; next is the storage. Elastic Block Store is a block store, as you can see from the name of it. You can attach it to your compute node, format it with whatever file system you want to use, and start using it. S3 is an object store — storage for the internet. There is no limit to how much data you can put there. It's the Simple Storage Service; it has eleven nines of durability. It cannot be mounted like Elastic Block Store, but at the same time it has huge capacity, both in terms of storage and in terms of how much throughput it can handle. Next are the instance stores: these are the direct-attached storages which are available within the EC2 instances; those are ephemeral in nature. Next are the databases. RDS is the Relational Database Service — we have MySQL, Oracle, SQL Server, Postgres; all of these options are available in RDS now. DynamoDB is the NoSQL, very-low-latency and highly consistent database that we have. You don't need to provision servers for it — it just scales depending upon the load that comes in, or you can define your own throughput requirements. Or you can run your own databases on EC2 — for example DB2 and all that. Now, I spoke about automation: all of the aspects I spoke about earlier — the storage, compute, databases, etc. — all of those are enabled and available through APIs across multiple platforms.
And there are also integrated developer environments — Visual Studio and Eclipse, for example — so you don't need to do standalone development; it can be integrated into those environments as well. Right, so now we come to Beanstalk — Elastic Beanstalk. Simply put, with Elastic Beanstalk you upload your code. Suppose you have a WAR file created for your web application, or a PHP app — you zip up that PHP app and upload it into Beanstalk; it supports multiple platforms like PHP, Java, Node.js, etc. Once you deploy it, the host is already there, the operating system is already there — in fact everything right up to the application server is already there — and your code just sits on top of that. That's it, and your code will be up and running in less than 10 minutes. You don't need to worry about launching a load balancer, launching the EC2 instances, launching the storage and all that; that entire aspect of launching a web application is fully offloaded to the service. Now, this is typically called platform as a service, but we don't officially call it that — we call it Elastic Beanstalk. The thing here, what differentiates it against other PaaS-like offerings, is that you still retain full control of the underlying infrastructure. You can still log in to each of those EC2 instances that have come up, do your own thing in there, change the storage, or whatever you want to do. On top of that, it can maintain multiple versions of your application — you can have version one, version two, version three, version four and so on — and at the same time you can have multiple environments in an application. Why would you need that?
Because you can have an application, and you can have dev, test, staging and production environments for the same application — the different versions of the code can be maintained in these. If you want to switch from one to another, it is as easy as swapping a URL; and if you are not happy with version four of your application, you can always go back to version three. That is all baked into Beanstalk — you don't need to do anything extra. You can also save the entire environment configuration so that it becomes repeatable. There are CLI options as well, and this is also integrated with Git — so if you are already pushing your code to Git, that Git repository itself can be used with Beanstalk to launch the environment. These are some of the aspects that you can control. You can control the region — you can pick whether you want to launch it in Singapore, in Tokyo, or whichever region. Plus you can also select the container type: it can be PHP, Java, .NET, whatever container you want. It can be a single server, or a full production workload across multiple instances and all that, and a database can also come up as part of this. And this is what Elastic Beanstalk will launch in the background: it will launch a load balancer, and behind it the EC2 instances running your code, and Auto Scaling will automatically be configured on it so that you don't need to do that — when the load increases, the number of servers will automatically grow and shrink depending upon the load. And this application will automatically be available at something.elasticbeanstalk.com, which is the URL for your application; you can also map it to your own URL using a CNAME. These are some of the prerequisites for creating a CLI-based environment.
It allows for zero-downtime deployment: as I said, you can have multiple environments for a single application, and you can switch between the various environments with the click of a button. So if you're happy with a staging environment and you now want to put it into production, there's something called swap URLs — just with swap URLs, your staging environment will go into production. What is the cost of Beanstalk? It's absolutely free — only the resources that get launched as part of Beanstalk are charged for. So, did you get a fair idea of what Elastic Beanstalk is? I'll show you a quick demo. It's a pre-recorded demo — I do not have any connectivity here — but I think it should give you a good idea. Now, in the console, pick the relevant service and give the application a name. Here you're selecting a platform — you can see there are multiple platforms. You pick the platform and select whether you want a test environment or a production one, depending on your needs. In this case we selected a sample application — nothing too fancy, it just displays a sample web page. You can also launch your database along with that. Then you select an instance type — what size of instance do you want to use — and what key pair you want to attach. And that's it. Now your application has been submitted, and Beanstalk is automatically creating that entire infrastructure in the background. This is a screenshot — you will see all the events that are happening in the background: the load balancer being created, the hosts being configured, security groups being created, etc. From start to finish it will take about 10 minutes. Once that is done, you will see that all the events are successful and the status of the application is ready. Now, clicking on that — this is the sample page.
As you can see, the something.elasticbeanstalk.com URL has come up. Now what we are going to do is replace this with our own custom code. This is index.php — nothing but a plain Hello World. We are going to zip that into a zip file, upload it, and update the Beanstalk application that was just launched. Now it is uploading a new version of the application — in the console you will be able to see all the versions of the application. Once this is deployed, refreshing the same URL, you will see that the application has been updated — so this new page comes up. As you can see, the whole value proposition of Beanstalk is that as a developer you don't need to worry about the underlying infrastructure — storage, networking, load balancers, compute and all that. Just push your code and let Beanstalk do everything for you in the background. [Audience:] So basically it's a template for the Beanstalk? Does it provide templating functions for Beanstalk, like being able to copy the templates? — Yes. These versions are effectively templates, and there is something called parameter groups in the background which allows you to say, "I always want to launch my servers with this particular capacity." So that becomes a template. [Audience:] Is the templating vendor-neutral, or more on Amazon? — More on Amazon, yes. In Amazon we also have the concept of machine images, AMIs. But the whole idea of Beanstalk is that you don't need to worry about images — you have your code, and that is what you want to focus on. Just push that code onto Beanstalk and let it do everything. [Audience:] But what if you want to templatize the whole application — the scaling, the database, the other services? — I'll come to that; CloudFormation is the service for that.
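The zip step in the demo — bundling `index.php` into the archive that a new application version is created from — can be sketched like this. This is illustrative only; the actual upload would go through the console, the CLI, or an SDK:

```python
import io
import zipfile

def make_app_bundle(files):
    """Bundle an application's files into a zip archive, returned as bytes.

    files: dict mapping archive path -> file contents (str).
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, contents in files.items():
            zf.writestr(path, contents)   # write each file into the archive
    return buf.getvalue()

# The same single-file app as in the demo: a plain PHP Hello World.
bundle = make_app_bundle({
    "index.php": "<?php echo 'Hello World'; ?>",
})
print(len(bundle), "bytes")
```

Each such bundle becomes one immutable application version, which is what makes the "roll back to version three" behaviour described earlier possible.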
So there is a different level of abstraction across these three services — Beanstalk, CloudFormation and OpsWorks each have a different level of abstraction, and at the same time a different level of granularity in what control you can have over them. [Audience:] What you showed, you can also do with simple commands — create, configure and so on; it's all one command each. — Correct, it's easy if you know that terminology; but there are a lot of things, like the configuration details, which you would need to understand. That's the whole value proposition of Beanstalk: you don't need to do that — you have your code, just upload it to Beanstalk and let Beanstalk take care of the rest. You can always do it on your own — you can launch an EC2 instance, attach a volume, and configure it yourself, and do whatever you want to do. The thing is, that's an effort, and you can avoid it by using these services. [Audience question about instance sizing.] — No, if you do not do vertical scaling, it will keep launching more and more t1.micros, depending upon your configuration. So it's always important to make an estimate initially — maybe a t1.micro may not be good enough for me, so let me launch with an m1.medium or an m1.large — and see how that pans out in terms of load, and then decide where you want to settle, going higher or lower from there. [Audience question about mixing instance types.] — No, you cannot mix and match, because it will try to create a homogeneous environment. Now comes CloudFormation. CloudFormation is again an abstraction at another level. What it allows you to do is create a JSON template of your entire infrastructure. In Elastic Beanstalk you create an application and throw it at the service, which takes care of everything; on the other end of the spectrum, you can say "I want to control my entire infrastructure as code" — and that is what CloudFormation does.
So it makes infrastructure into code. You can actually write a JSON template — I'll show you some samples here — a JSON file of your entire infrastructure, and upload it. You can even define some application stack scenarios in CloudFormation. So this is a stack. Suppose this is your infrastructure in Amazon — there are multiple services you are using — and imagine that you have a requirement to redeploy this environment from Singapore into Tokyo, or you want to create this environment in Ireland as well. One way is to keep doing this manually, one piece at a time. Another is that you create a CloudFormation template out of it, and whenever you want, you redeploy it and it creates everything again, without you having to do anything manually. There is a company in Delhi who run quarterly quiz competitions — online coding contests and all that. They need a temporary setup of some 30-35 servers which do only this: the quiz competition, the code evaluations and so on. They don't set all of this up manually. What they do is they have a CloudFormation template; they launch it, it creates something similar to this, and they are ready for the coding competition. After 15-20 days, when the competition is over, they just terminate the stack and all of it gets released automatically — so you don't run the risk of forgetting a particular resource and still getting billed for it. This CloudFormation approach is good when you want your infrastructure to become repeatable. So let us see what a template is. The template can actually be stored in Git, Subversion, or whatever code repository you use, and you can create multiple, repeatable stacks out of it. Let's do a quick anatomy of a CloudFormation template. So, this is JSON.
You recognize this, of course, right? This is JSON. You can define: I want to launch this kind of resource, with these names and these particular properties — launch it with this key pair, with this AMI, with this particular type of instance. Similarly, you can define restrictions, so that you can restrict a user to always launch a t1.micro and never launch anything bigger than a t1.micro. And apart from that, you can also get outputs. Suppose your infrastructure has launched an EC2 instance — you would like to know the address of that instance, and that can also be extracted out of the CloudFormation template as an output. So that was the "what" aspect: how you launch the entire infrastructure. Second: you now have EC2 instances running — what do you want to do on them? That can also be controlled, using the user data. Here, as you can see, the script automatically installs all of these packages. And apart from that, suppose you have some code in a Git repository and you want it copied into a particular folder — in this case under /usr/local — it can actually pull that code and deploy it there on your instance. So it not only automates the entire infrastructure, it customizes the operating system and the server-side environment, and deploys your custom code on top of it. All three of these aspects of automating your infrastructure and application can be covered through this. Now, if you recall Elastic Beanstalk — in Beanstalk you focus only on the code and let the entire stack be launched for you automatically, whereas in CloudFormation everything underneath is totally customizable by you. You can define whatever you want to happen in there.
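The template anatomy just described can be sketched as follows: a Parameters section with an AllowedValues restriction, a Resources section declaring one EC2 instance, and an Outputs section surfacing the instance's address. The AMI id and key-pair name below are placeholders, not real values:

```python
import json

# Minimal CloudFormation-style template built as a Python dict.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "InstanceType": {
            "Type": "String",
            "Default": "t1.micro",
            "AllowedValues": ["t1.micro"],   # restrict users to t1.micro only
        }
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",           # placeholder AMI id
                "KeyName": "my-key-pair",            # placeholder key pair
                "InstanceType": {"Ref": "InstanceType"},
            },
        }
    },
    "Outputs": {
        "WebServerAddress": {
            # Extract the launched instance's public DNS name as an output.
            "Value": {"Fn::GetAtt": ["WebServer", "PublicDnsName"]}
        }
    },
}

print(json.dumps(template, indent=2))
```

Because the whole stack is just this one JSON document, redeploying the same environment in Tokyo or Ireland is a matter of feeding the same file to CloudFormation in another region.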
So does that give you an idea of the difference between Beanstalk and CloudFormation? I also have a demo for CloudFormation. This is the CloudFormation console. There are multiple sample templates already available on our web page — you can, for example, create a WordPress site; templates for most of the common stacks are already available. You can select some of the options in there. Since this one is launching a database, the database parameters can also be configured here, so when the database comes up it will automatically have some of these options pre-configured. This is called tagging: whatever resources you launch in Amazon can actually be tagged. Tagging is a powerful feature — it allows you to do cost allocation and chargeback; we'll talk about that. The status shows that stack creation is in progress. Here also you can see all the events — everything we defined in the JSON template is being created, and all the events are getting logged — until finally the stack reaches the create-complete state. After that you can see the outputs: there's something called the Outputs tab, where you can see the outputs of that CloudFormation template. So now the entire application has been built out. If you want to terminate all of this, all you need to do is right-click on the stack and delete it, and CloudFormation will actually destroy all the resources in the background. So in this demo, this is the site that got created. Any questions about this? I hope it gives you an idea of the distinction between the two services. So now let's move on to OpsWorks. Now, the two services that we spoke about — Beanstalk and CloudFormation — we did not talk about how you manage your applications with them. Just think: you have, say, hundreds of servers, all having a particular version of an application.
And now you want to update all of those hundreds of servers with a new version of the application. How will you do that? That is not explicitly covered in either Beanstalk or CloudFormation. There are ways of handling it — in Beanstalk you can bring up another version in another environment and swap the URLs; in CloudFormation you can create another stack and then swap the URLs — but there is no native mechanism for it in those two services. OpsWorks is actually made to handle that kind of scenario. It is an end-to-end application management solution. It is built on Chef — so if you are using Chef, it can actually take your Chef recipes and include them as part of this — and it allows full control and automation of the application that you are deploying. These are some of the management challenges when you deploy applications: when you are scaling to a very large level, you will see that none of the shell scripts or the manual efforts will be able to handle them, and that's where OpsWorks becomes very valuable. It's a tool with which you can deploy, test, publish and monitor. This is just an example: you can do the continuous integration for your entire development pipeline using Jenkins, and once you want to deploy, you push it to OpsWorks. In an OpsWorks-like environment you have full control, right from the underlying server environment upwards. An app together with its servers and database is called a stack. And then you have layers — web layer, app layer, DB layer — so when you throw a server into a particular layer, you don't need to think about how it's going to be configured. OpsWorks will automatically take a server added to the web layer and configure it as per the layer definition that you provide.
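The stack/layer idea can be modelled in a few lines: a layer carries its configuration (here a list of Chef-style recipe names, purely illustrative), and any server thrown into the layer picks that configuration up automatically. This is a toy sketch of the concept, not the OpsWorks API:

```python
class Layer:
    """A role within a stack: its recipes define how member servers are set up."""
    def __init__(self, name, recipes):
        self.name = name
        self.recipes = recipes
        self.servers = []

class Stack:
    """An app plus its servers, grouped into layers."""
    def __init__(self, name):
        self.name = name
        self.layers = {}

    def add_layer(self, name, recipes):
        self.layers[name] = Layer(name, recipes)

    def add_server(self, layer_name, hostname):
        """Adding a server to a layer configures it per the layer definition."""
        layer = self.layers[layer_name]
        layer.servers.append(hostname)
        # The returned plan is what the agent on the server would apply.
        return {"host": hostname, "apply": layer.recipes}

stack = Stack("my-app")
stack.add_layer("web", ["nginx::default"])
stack.add_layer("db", ["mysql::server"])
print(stack.add_server("web", "web-01"))
```

The point of the model is the one the talk makes: you never configure `web-01` directly — dropping it into the web layer is what determines its setup.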
And this is how OpsWorks works: the orange box in the middle is the agent, which keeps polling the OpsWorks service. Whenever there is a deployment or a release, it pulls the deployment metadata from OpsWorks, gets some of the build information from S3, gets the actual code from GitHub, deploys it on the server, publishes the Chef logs and so on back to S3, and acknowledges back to OpsWorks. A question from the audience: is there a dedicated Chef master server running for each of the stacks or each of the environments? The OpsWorks service manages that, so it effectively acts as the Chef master. On the question of the service being in beta: our definition of beta is different. Beta does not mean that the service is not production ready or not eligible for support; beta means that we are still not at feature parity. Some of our services have been in beta for a very long time and have only now come out of beta, but we offered production support throughout; from our perspective, a service stays in beta until it reaches feature parity with whatever we are already offering the customer. Yes, correct, but the point is that there is no Chef agent that sits on a Beanstalk instance. If you want to run your own you can, but it doesn't come bundled as part of the Beanstalk service. OpsWorks, as you can see, is heavily dependent upon the agent that sits on the EC2 instance. In Beanstalk, you can launch another version of the application, or another environment, and then swap the URLs, and so on.
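The agent loop just described can be sketched as a toy simulation. The dictionaries standing in for the services, and all the field names, are invented purely for illustration.

```python
# Toy sketch of the OpsWorks agent cycle: poll the service, and on a pending
# deployment fetch metadata, fetch code, deploy, ship logs, acknowledge.
def agent_cycle(opsworks, s3, github):
    command = opsworks.get("pending")           # 1. poll OpsWorks for work
    if command is None:
        return "idle"
    meta = s3[command["metadata_key"]]          # 2. deployment metadata from S3
    code = github[command["repo"]]              # 3. application code from GitHub
    log = f"deployed {code} with {meta}"        # 4. run the deployment
    s3[command["log_key"]] = log                # 5. publish logs back to S3
    opsworks["acknowledged"] = command["id"]    # 6. acknowledge to OpsWorks
    opsworks["pending"] = None                  # work item consumed
    return "deployed"

opsworks = {"pending": {"id": 7, "metadata_key": "m", "repo": "app", "log_key": "l"}}
s3 = {"m": "v2 settings"}
github = {"app": "app-v2.tar.gz"}
print(agent_cycle(opsworks, s3, github))
```

Because the agent pulls work rather than having work pushed to it, the servers never need an inbound connection from the OpsWorks service.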
But that is a bit of work. If you want to customize OpsWorks, you can take the raw AMI that OpsWorks gives you, customize it with your own code, and then publish it back into OpsWorks; there is no straightforward way of doing that right now, but it is possible. Similarly, in the Beanstalk case there are some AMIs which are built into Beanstalk, and you can take such an AMI and launch from it. And OpsWorks does allow you to run your own custom Chef recipes. That's all about Chef. So does that give you an idea of the OpsWorks service? Great. This slide kind of summarizes the whole thing: Beanstalk, OpsWorks, and plain EC2. It is convenience versus control. With Beanstalk, you just upload the code and forget about it. With OpsWorks there is a bit of work, but it still abstracts away a lot of the details; you can start from a bare minimum and build up from there. And on EC2, of course, you can do whatever you want to do. So this is the whole nature of Amazon Web Services: focus on your core application and let AWS handle the undifferentiated heavy lifting. Thank you very much. I can take any more questions that you may have. On the question of what you pay for: all these services are free as such; there is no charge for any of these services, but the resources that get spun up by using them are charged. On estimating the bill: say I have 10 servers; out of those 10 servers, 5 will run 24x7, and the other 5 will only run 50 percent of the time. You can only estimate; there is no guarantee that the estimate will match your final bill, because it depends upon your load. So I think the question is about estimating the bill in advance, before you deploy the infrastructure. That is difficult, but that's the whole value of cloud: you worry about your cost model rather than about the infrastructure, because the infrastructure is not going to go down.
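The 10-server estimate mentioned above works out to simple arithmetic. The hourly rate here is a made-up placeholder, not a real AWS price.

```python
# Back-of-envelope monthly estimate for the 10-server example above.
HOURS_PER_MONTH = 730
rate = 0.10  # $/hour, illustrative placeholder only

full_time = 5 * HOURS_PER_MONTH * rate        # 5 servers running 24x7
half_time = 5 * HOURS_PER_MONTH * 0.5 * rate  # 5 servers up half the time
estimate = full_time + half_time
print(round(estimate, 2))
```

This is only an upper-bound style estimate for the instances themselves; the actual bill still varies with load.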
Plus there are data transfer rates which you cannot predict. Data rates? Yes, data out, correct. So you can predict the maximum for the instances, and for data transfer you can make an educated guess. That's correct. Great. Thank you very much. Thanks a lot. Thanks.