Okay, I think that's good. Everybody that's not in the room now doesn't want to be in the room, so good afternoon, everybody. Thank you very much for joining us today, or joining me today. I know it's the last session of the day, which is usually the most difficult session, because you're tired of all the breakouts you've been in throughout the day. It's also after lunch, so the food has settled and you've become lethargic. So I really appreciate you taking the time to be here and listen to what we have to say.

I want to talk about Cloud Foundry on AWS, and what really brought this on is that we had a lot of customers asking us how customers become winners on AWS, or what makes a successful customer on AWS. One of the examples that always gets brought up is someone like Airbnb. How does Airbnb become a first-class AWS citizen and get to disrupt their industry in the way that they did? We bring this down to three pillars that any customer really needs to adopt in order to become successful and become cloud native, and I'll speak about cloud native in a second. These three pillars are elasticity, agility, and availability.

A customer needs the ability to elastically scale up and down when they need to. The reason for that is, firstly, demand: how do you match capacity to your customers' demand as it increases, but also not over-provision and overspend when you don't need that capacity? How do you grow and shrink dynamically and effectively without needing a large operations team to consistently be there? How do you become agile, so that you can go from the inception of a new idea, to the development of that idea, to delivering that idea in production as a feature or a new product within a matter of days as opposed to months? And how do you build all of this on a platform that allows you to be 100% available, or nearly 100% available?
So Airbnb is the best example that we always bring up when we talk about this. Not only have they been able to build a platform that is so successful, they do it with an operations team of merely five people. A five-person operations team gives them the ability to do millions of reservations per night. They iterate and release new features on a bi-weekly cadence, and they're able to create an environment and a user experience that keeps customers coming back, because they're able to focus on their business, their business model, and what brings value to their customers, without needing to worry about things like database management or scaling out their application. They have achieved this by creating a cloud native pattern.

We talk about cloud native very often in the industry. It's become one of those words that, if you hear it, you kind of get a shudder, because you're excited, but then you break out in a cold sweat because you don't know how to do it, right? But cloud native is nothing more than a collection of the right patterns, and the adoption of processes, methodologies, and tools, combined in the right way to give you the three pillars I just mentioned: the elasticity, the agility, and the high availability. Using these patterns correctly allows you to effortlessly scale and contribute to your business value. Companies like Airbnb have achieved this by adopting some of our native serverless services. They've got a very unique containerized implementation, adopting containers and using those to ship across their environments in order to adhere to their security, uptime, and processing patterns.

Every pattern is unique to the business that runs it. A cloud native pattern for one business is not necessarily the same pattern for someone else, and the tools that you use do not need to be the same tools that someone else uses. A very effective pattern is to use a combination of strong DevOps mechanisms that
essentially take a developer, with a unified experience, from his laptop or his development machine all the way through into production. Give the developer the power to control not only the code that he's writing, but also the environment and the dependencies that that code will operate in. That allows the developer to have freedom and creativity in the work that he's doing. It allows him to not worry about limitations that might be imposed on him later in the cycle by an operations or a security team. Take that code, and the definition of requirements that he expresses inside his code, and automate that through a CI/CD engine that runs regression tests and makes sure the code adheres to business policies, and that does inspection on libraries to make sure it conforms to the right libraries and the security principles that get enforced by the security team.

A lot of our customers come to us and say: we know we want a DevOps pipeline, or we want to use something that allows us to iterate faster, but we get stuck at the security team. Our developers are now doing agile. They're developing applications. They're pushing them into source control, our pipelines are picking them up, but our security teams are still taking three weeks, three months, or even as much as a year to review that application before we can go to production. Using a strong set of integrations in your CI/CD practice, you can automate many of your security and review tasks (one that already does that is library scanning), and that gives that value and that power back to the developer and doesn't allow them to be blocked by security review.

A simple practice, where a security team investigates libraries that are available in repositories like npm or Composer, or any of those package management solutions for development tools, like NuGet, then copies that source code and stores it in a local repository for consumption within their environment, accelerates development and delivery for customers. At Amazon.com, we have thousands of third-party open-source libraries that we develop against, that our security team has already whitelisted, reviewed, and made available, and we can consume such a library in as many projects as we want without needing to go through a subsequent review every time we use it, because we lock a version of that source code into our repository, build against it, and then consume that version, and that source code that has been built, in our applications.

So you need to be able to build all of that and deliver that cell to your base environment. This pattern doesn't change whether you deploy to something like a physical environment on premises or into a cloud environment. A cloud native approach is all about the pattern. So how do we take this cloud native approach, this pattern that I just spoke about, and how does Cloud Foundry really benefit me in getting to that point of being a cloud native citizen?
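Before moving on to Cloud Foundry itself: the version-locking practice described above can be sketched in a few lines. This is a hypothetical illustration, not an Amazon tool; the allowlist contents and the requirements format (`name==version` strings) are assumptions for the example.

```python
# Hypothetical CI step: check an application's declared dependencies against
# a security-approved, version-locked allowlist maintained by the security team.

APPROVED = {
    "requests": "2.31.0",  # versions the security team has already reviewed
    "boto3": "1.34.0",
}

def check_dependencies(requirements: list[str]) -> list[str]:
    """Return the requirement lines whose library/version is not on the allowlist."""
    violations = []
    for line in requirements:
        name, _, version = line.partition("==")
        if APPROVED.get(name) != version:
            violations.append(line)
    return violations
```

A pipeline stage like this fails the build on any unreviewed library, which is exactly what lets the security review happen once per library version instead of once per release.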
Well, to begin with, Cloud Foundry provides you with many of the guardrails that I already spoke about. Out of the box, Cloud Foundry provides you with a basic tool set to give this pattern a productive run inside your company. It provides you with a central platform for administration. This means that every developer, every ops engineer, every administrator, and every business reviewer has a central place where they can get a top-down view of all the applications and the value that your business brings to the environment.

It already sets up dynamic routing for you, which allows that atomic unit, that cell that the developer built, to be deployed to any section of your infrastructure, and requests to it will be dynamically routed. So in a microservices world, if one team builds, for example, an authentication module that can be reused or consumed by a different product inside your business, routing between those operations will already be handled for you at the platform level.

Role-based access control for deployments is a key requirement for all of our customers that are deploying applications into the cloud. They need an extension of IAM, to be able to limit different teams from accessing different enterprise data sets or different applications, and they also need an audit trail. They need to know when a user deployed a specific application or made a specific push, for example. Cloud Foundry already has that ingrained inside the platform itself. It provides you with an auditable trail of who did what and when, and by limiting access to different parts and portions of the business in one layer, you don't have to set up a complicated multi-account strategy to control which applications run where.

And then, obviously, application security: that CI/CD practice, with integration through something like Concourse, and library scanning across all of the third-party software modules and partners available in the ecosystem, makes it possible for you to unblock at that layer. Change requests can natively be incorporated into the platform.

Right, so we have this benefit of Cloud Foundry. We can make it easy for you to get to that cloud native pattern. But how can we make your life even a little bit better by running this Cloud Foundry platform on AWS? Well, we have something called the AWS Service Broker. At re:Invent in 2017, we released the AWS Service Broker. The AWS Service Broker is built on the Open Service Broker API specification, and it integrates seamlessly with all platforms that support that specification, most notably Cloud Foundry. So in 2017 at re:Invent, we launched the Service Broker with ten services, the ones at the bottom right. Early this month, in April 2018, we launched an additional six services, and we actually have a seventh and an eighth service that aren't listed, on top of that. We will be adding more services every four months, growing this list as customer demand comes in and more customers require more of our services.

So what does this mean? What does the Service Broker allow you to do in AWS? Right, so Pinterest is an interesting case study. Pinterest has an architecture, they have a pattern, where they use native AWS services to allow them to do millions of pins a day. And how do they do this?
They provision resources that get deployed and stored on S3, and then they do low-latency metadata extraction using, for example, a service that we launched lately called Amazon Rekognition. So they have these static images, or pins, that get posted by users, millions of these pins a day, and then they extract metadata from those pins using our Amazon Rekognition service. The Amazon Rekognition service is a machine-learning-based service that investigates and looks at a photo and then returns information like: oh, there's a car in this picture; there's a woman in this picture; there's a kid with a skateboard jumping over a bench in this picture; this is probably a picture of a bunch of people at a park, right? It creates metadata context for that image. Pinterest then goes and creates a complete catalog of that image, categorizes it, and then uses that to better target images and suggestions to their users.

Now, their pattern uses native AWS services. They consume native AWS services, and if you've noticed, Amazon Rekognition is one of the services that you can use and provision through the AWS Service Broker. So using the AWS Service Broker and running Cloud Foundry, your developer can build an application and request the Amazon Rekognition service to be provisioned and set up on his behalf. The developer never needs to log into the AWS console. He never needs to call the API manually using the CLI or create a bash script to do it on his behalf. He just requests the service as an internal service in Cloud Foundry. The platform goes and sets it up, provisions it, creates the required roles and credentials that the developer needs to access it, and binds that to the application. The developer then pushes his information, or uploads his file, to something like S3 and triggers that run against the image. At no point does this developer ever leave that unified experience that we spoke about earlier.
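To make the metadata-extraction step above concrete: the real call would be `boto3.client("rekognition").detect_labels(...)`, which needs AWS credentials, so this sketch only shows handling a Rekognition-shaped response. The sample response and the confidence threshold are illustrative assumptions.

```python
# Sketch of processing a DetectLabels-style response from Amazon Rekognition.
# In a bound CF application, the response would come from:
#   boto3.client("rekognition").detect_labels(Image={"S3Object": {...}})

def summarize_labels(response: dict, min_confidence: float = 80.0) -> list[str]:
    """Keep only the label names returned above a confidence threshold."""
    return [
        label["Name"]
        for label in response.get("Labels", [])
        if label["Confidence"] >= min_confidence
    ]

# A trimmed-down example of the shape DetectLabels returns:
sample = {
    "Labels": [
        {"Name": "Car", "Confidence": 98.1},
        {"Name": "Person", "Confidence": 95.4},
        {"Name": "Skateboard", "Confidence": 62.0},
    ]
}
```

The filtered label list is what an application like Pinterest's would feed into its catalog and recommendation pipeline.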
He still stays inside Cloud Foundry, yet he has the benefit of a managed machine learning process that can do millions of low-latency image recognitions per second. That allows your developer to focus on the value of the application that he is building, without needing to go and deploy a convoluted MXNet deployment inside of CF, or an OpenCV image recognition algorithm into CF, and spend his time building that machine learning library. His focus stays on: how do I make that image recognition that was provided to me by AWS a better, integral part of my application, and how can I focus on giving my user a better user experience? The developer doesn't want to worry about the machine learning algorithm that's detecting all of the meta information; he wants to focus on the user experience and the value that his product is giving to the customer.

So now you have a platform that provides you with the controls, the mechanisms, and the automation in Cloud Foundry, but also the integration through the AWS Service Broker to let you use native services, so you can focus on the value of your business.

It is always about the patterns, and you can have multiple different patterns. A very basic example of a pattern is that you have a monolithic application that requires something simple, like a MySQL database. This database gets provisioned by the AWS Service Broker, which injects the connection string, the username, and the password inside of Cloud Foundry. That application can be shipped across multiple environments and use an on-premises MySQL database, an RDS MySQL database, or any version of a MySQL database. That's the basic pattern. But we have seen that our customers are truly getting value from using our native services. So let's look at this pattern, for example, right?
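As a brief aside on the MySQL example a moment ago: when the broker binds a service, Cloud Foundry hands the credentials to the application through the `VCAP_SERVICES` environment variable as JSON. A minimal sketch of reading such a binding follows; the service name `mysql-db` and the credential fields are hypothetical examples.

```python
# Read the credentials of a bound service from Cloud Foundry's VCAP_SERVICES.
import json
import os

def get_service_credentials(service_name: str, env=os.environ) -> dict:
    """Find a bound service by its instance name and return its credentials."""
    services = json.loads(env.get("VCAP_SERVICES", "{}"))
    for bindings in services.values():       # keyed by service/broker label
        for binding in bindings:
            if binding.get("name") == service_name:
                return binding.get("credentials", {})
    raise KeyError(f"no bound service named {service_name!r}")
```

This is how the application stays portable: whether the binding points at an on-premises MySQL or an RDS instance, the code only ever reads the injected credentials.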
This is the pattern that I just spoke about. You have AWS services providing the basic infrastructure, so you're running your Cloud Foundry distribution within AWS across multiple Availability Zones. For those that aren't familiar with what an Amazon Availability Zone is: one Availability Zone consists of multiple data centers, spaced out in a geographic area for redundancy, for security, and for uptime, right? So in that configuration, your deployment is not only sitting in one or two data centers, it is spanned across at least three or four data centers.

In front of that, you set one of our Application Load Balancers, a highly available, strongly consistent, persistent load balancer that can handle thousands of requests per second. You deliver your applications through this platform; your API and everything for CF, all of that, goes through this load balancer. How do you secure that load balancer? Do you have to buy something like an F5 instance? Do you have to go and set up your own iptables rules? Using AWS, you can leverage services like AWS Web Application Firewall, which allows you to transparently protect against things like SQL injection into your APIs. How do you protect against DDoS? With a convoluted and complex setup of your DNS? No, you deploy AWS Shield in front of your applications, and AWS Shield will effectively protect you from DDoS attacks.

For your persistent data storage layers, the actual infrastructure layer that you want to run against for BOSH and all those types of things, and for the Director: do you launch that MySQL database on an EC2 instance? No, you use an RDS instance with a read replica in a second region, which allows you near-100% uptime and an RPO of a couple of minutes, right? What about storage across multiple application nodes?
Amazon EFS is a native NFS-capable storage layer that you can connect to all of your application nodes and use to store data across those layers, to share things like session state between those nodes. So that's just to get CF really running well, very secure, and following good practice on AWS. Now you can put something like Amazon CloudFront in front of that and ship all of your application endpoints to the edge node closest to your user, to any of our edge nodes that have been deployed in regions all around the world.

Inside of your application nodes, you have buildpacks that have the AWS SDK integrated. So a developer requires only that buildpack. It's already configured with the appropriate AWS SDK, whether that's Node.js, Go, Python, PHP, or any of the available languages, and he simply provisions the Rekognition service through the Service Broker. The SDK has all the information he needs to make the API calls. He makes those API calls to push data to S3, thereafter pushes that into Rekognition, gets the metadata back, and stores that data in his application.

So this pattern is probably one of the most complete cloud native patterns that you can have, without needing to build anything substantially different from what you ran on premises. None of the value of Cloud Foundry has been lost in this pattern. This pattern is still a Cloud Foundry workload. You have only added value to it by transparently adding other layers of native AWS services to it. In this pattern, you will have higher performance, higher uptime, and more security than without the additional layers. Your iteration will be faster, because your developers will be able to iterate and deploy quicker and focus on the application for your business value, as opposed to building out their own services, like an image recognition service.

But what if I don't just want to use the Amazon Rekognition service? Maybe I do want to create a deployment that's capable of running on premises, an OpenCV application that processes images on-prem. What happens then? What if I do want to invest in that kind of architecture, where my application can essentially switch between Rekognition and my own OpenCV application? This is a problem that has already been solved, back in 2014, and that architecture is called the ports and adapters architecture. The principle behind the ports and adapters architecture, or the hexagonal architecture, is that, as part of your class-based and function-based execution, each component that writes to a persistent function or feature works through a port, and each port then slots into an adapter for consumption. So your application has a unified model; let's call that the image processing model. You then have a function, say image.process, and you can switch that adapter based on a config flag: if I detect that I'm running within AWS, process this image using Rekognition; if I'm running on premises, process it using my on-premises deployment. I don't want to go too deep into what the ports and adapters framework looks like, but we will be posting a blog soon that goes into a little more depth on how you can actually use multiple services to achieve the same result in an abstracted way. So keep an eye out for that blog.

And that's all I have to say today. I have a couple of minutes for you to ask me any questions, if you have any, and I would just once again want to thank you for taking the time, at the end of your day, to come out and hear what I had to say. Thank you very much.

(Audience question about the AWS Service Broker.)
So the AWS Service Broker is an open source project. You can access it in our AWS Labs GitHub repository. You can go to github.com/awslabs, and if, inside that repository, you do a search for "service broker", you will find the resources in there. Service-broker, yeah.

Are there any other questions? Yeah?

Yeah, we have VPC endpoints for many of our services. I'd be lying to you if I told you exactly which services those all are. We also recently launched something called PrivateLink, which also exposes a couple of other services, and it even allows you to create a VPC endpoint of your own for a service that you want to deliver. So if you have multiple accounts, you can create a PrivateLink that essentially gets exposed in another account or another VPC and can be consumed internally, which is a pretty cool feature. We actually have some customers that are playing around with that to expose things like APIs at a platform level, and things like that. But yes, we have many of those services, and it natively and frictionlessly works with these integrations. So if you want to store data in S3 and never actually have to egress out of the private network, then a VPC endpoint works completely. You can also manage access to the data inside of that endpoint by applying an IAM policy to that actual VPC endpoint.

Any other questions? Great. Thank you very much, everyone.
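Coming back to the ports and adapters idea mentioned near the end of the talk, here is a minimal sketch of the pattern: one image-processing port, two interchangeable adapters, chosen by a config flag. All class and flag names are hypothetical, and the adapters are stubs rather than real Rekognition or OpenCV integrations.

```python
# Ports-and-adapters (hexagonal) sketch: the application depends only on the
# ImageLabeler port; deployment config selects which adapter plugs into it.
from abc import ABC, abstractmethod

class ImageLabeler(ABC):
    """The port: the only interface the application code sees."""
    @abstractmethod
    def labels(self, image: bytes) -> list[str]: ...

class RekognitionLabeler(ImageLabeler):
    """Adapter for the AWS-hosted path."""
    def labels(self, image: bytes) -> list[str]:
        # A real deployment would call Amazon Rekognition via boto3 here.
        raise NotImplementedError("requires AWS credentials")

class LocalOpenCVLabeler(ImageLabeler):
    """Adapter standing in for an on-premises OpenCV pipeline."""
    def labels(self, image: bytes) -> list[str]:
        return ["unlabeled"]  # placeholder result for the sketch

def make_labeler(running_on_aws: bool) -> ImageLabeler:
    """The config-flag switch described in the talk."""
    return RekognitionLabeler() if running_on_aws else LocalOpenCVLabeler()
```

Because the rest of the application only ever calls `labels()` on the port, swapping Rekognition for the on-premises implementation is a one-line configuration change rather than a rewrite.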