For our next talk, we're going to hear from the AWS and Facebook Connectivity folks working on bringing the Magma products to AWS, and about some creative approaches to that going forward. We've talked throughout the day about Magma being focused on reducing friction and reducing costs, and we believe that ultimately cloud-based deployments combined with edge deployments are likely the successful model. So I'd like to welcome Sigit, a senior partner solutions architect focusing on edge and cloud at AWS, Sudhi Khande, who is on the development team at Facebook Connectivity, and Rabi Abdel, a principal consultant and telecom industry specialist at AWS. So gentlemen, take it away.

Would you mind giving me the share permission? Can we add Rabi as a co-host? I have it now.

First of all, thank you for having us today. We are very excited to be on this call. With me today are Sigit, who is a senior partner solutions architect at AWS, and Sudhi Khande, who is a network engineer at Facebook. Today we'll spend some time on an overall solution overview for Magma on AWS, then dive deep into the solution itself and the architecture, and we'll finish with a quick demo to show you some of the capabilities we built together. Before I dive into the actual implementation, I'd like to spend some time on why this really matters to the industry. If you look globally at how new industries are emerging, with new sites that require wireless connectivity, the number of new sites needed is growing dramatically. Research out of Harvard discussed the opportunities for industrial and commercial IoT deployments based on private LTE and 5G, and it showed millions of new sites needed across different verticals; if you look at the industrial and manufacturing vertical alone, you see about 10.5 million sites just for that sector. This is why the network really needs to be scalable, the coverage needs to be consistent, mobility is needed, the time to set up a network has to be optimal, and the operation of those networks needs to be optimized and very cost-effective. On top of that, if we think about the devices that will run on those networks, the expectation is that the number of devices will grow from roughly 150 million in 2017 to 750 million by the end of 2023. For that to be possible, non-public networks (NPN) built on 5G and LTE are really the technology needed to deliver the throughput and latency these industry segments require. Unlike Wi-Fi, which has power limitations (you get up to about four watts, which may not be optimal for coverage), with private LTE and 5G on licensed spectrum or CBRS you don't have that limitation, so you get better coverage further away from the base station. In addition, the security that NPN on 5G and LTE provides lets you include a physical security element, the SIM card, with mutual authentication on both sides of the network to establish the connection, which suits use cases with strict security and privacy requirements. This is why NPN is becoming so important, and Magma is a good fit for such technologies.
Now, AWS and Facebook have been working together on deploying Magma on AWS. Because of the nature of Magma as a distributed architecture, and because of the capabilities AWS provides for the edge cloud, with different offerings and different infrastructure for different use cases, AWS services were really a great fit for Magma's use cases. And because of the complexity that managing a distributed architecture can bring, having automation and a single pane of glass was essential to make deploying the Magma components across the different edge cloud capabilities that AWS brings possible and manageable from a centralized location. Today we're going to spend some time talking about the automation framework we built, which allows you to deploy and provision Magma components on different AWS services. Before we dive deep into the actual solution, it's important to give you an overview of the edge services we have at AWS and how we map them back to the Magma architecture. For that I'd like to bring in my colleague Sigit to take us through it. Sigit?

Okay, thank you for the introduction. As mentioned, from the AWS side I've been working with the Facebook Magma team for the past few months on how AWS services for the edge can help deploy Magma. As you can see in this picture, AWS has a complete set of services for customers and partners such as the Magma team to build interesting network applications. This ranges from the AWS Region on the left-hand side, which consists of multiple Availability Zones, plus Local Zones and CloudFront points of presence. In the middle there is the on-premises facility; this is where AWS brings AWS Outposts. AWS Outposts provides the same AWS hardware infrastructure, AWS services, APIs, and tools, such as CloudFormation, Systems Manager, and AWS Config, to give developers the same experience whether they build in the Region, in the cloud, or at the edge. That's the goal. Also in the middle you can see AWS Snowball. AWS Snowball is a rugged device, commonly used outside of the data center. If you don't have a rack or a data center, AWS Snowball can help you build Magma on-premises. It can operate fully disconnected, for customers with connectivity limitations such as mining sites, school districts, or housing and residential areas; there are a few projects going on under exactly these conditions. So with no connectivity to the internet and no Region connectivity, Snowball can help. In the next picture you see AWS Snowcone on top of a shipping cargo vessel. AWS Snowcone is in the same family as Snowball, but with a smaller form factor, about two kilograms, even smaller than a shoebox, so you can deploy private networks at small sites without any connectivity. And on the right-hand side there are the AWS IoT and related services. These are the AWS services that can be used to create applications on top of private networks, such as video analytics and robotics applications. We have a few projects going on with video analytics, for example at Carnegie Mellon University, where we deliver private networks along with social-distancing video analytics, so the CMU campus can notify officials if people are too close to each other, and so on.
So these are the services that can help developers do this. On the next slide, let's go a little deeper on AWS Outposts. As you can see, Outposts brings native AWS services, infrastructure, and operating models to any data center. Basically, when a developer uses CloudFormation, AWS Config, Systems Manager, all the APIs developers are used to for deploying in an AWS Region, they can also use them on Outposts. You bring the cloud on-premises, and this is where the term "edge cloud" comes from. The good thing about Outposts is that it's fully managed and supported by AWS. There is a persistent service link connection back to the Region, so customers have access to the latest hardware and software and don't have to worry about upgrading the BIOS or the software on the servers. Just like the AWS cloud, you work with the usual APIs, so customers and partners such as the Magma team can develop and deploy with the same experience in the cloud and on-premises. Okay, that's Outposts. The next slide is about AWS Snowball Edge. Snowball Edge answers the requirement where the customer doesn't have a data center and there's no connectivity. Snowball Edge is rugged, about as big as a traveling briefcase; if you're traveling to another city and carry a briefcase, that's roughly what Snowball is like, but rugged. It has 52 virtual CPUs, it provides block and object storage, and optional GPUs, so advanced machine learning and video analytics can be done even in a disconnected mode. With Snowball Edge, customers and partners such as the Magma team can build and deploy private networks for rugged, disconnected, non-data-center site environments, for example customers at remote mining sites that want private networks, where you can't put a data center. Another use case is search and rescue operations, or even a school district in this COVID-19 situation, where we can deploy private networks with Snowball Edge. And the next slide is Snowcone. Snowcone is in the same family as Snowball, but with a much smaller footprint, about two kilograms, even smaller than a shoebox; when you buy shoes, Snowcone is about that size. It's the smallest member of our AWS Snow family: two kilograms, 8 TB of storage, two virtual CPUs, and 4 GB of memory. With this, we can build a private network for a remote, disconnected site requirement, for example for first responders, who can carry a Snowcone in a backpack and provide a private network for robotics. In our re:Invent session in 2020 we showed our partnership where a four-legged robot carries the Snowcone and an access point to provide a walking private network, and there are even drone use cases where the drone can carry the private network. Snowcone fits very well if you want to build private networks that require portability; you can ship it via FedEx, UPS, and so on. And the Magma team has been working with us to deploy the Access Gateway on AWS Snowcone. On the next slides, now that we understand the AWS services at the edge, we look at how we design with them. The first case here is a connected campus, meaning an enterprise site that has internet connectivity and a connection to the AWS Region.
So if you look at the top box here, the top box is the AWS cloud, the AWS Region, where customers and partners can deploy the provisioning and subscriber functions, such as the HSS or UDM. And if a customer uses CBRS, for example, the CBRS-related functions can be deployed in the cloud as well. The same goes for the control plane: the MME, HSS, and SMF can be deployed in the Region. Then on Outposts, which sits on-premises in the bottom box, customers and partners such as the Magma team can deploy the user plane functions, such as the P-Gateway, S-Gateway, Security Gateway, or UPF, for low-latency connectivity to outdoor or indoor radio networks; you just have to provide L2 or L3 routers or switches to connect to the radio network. There is also the possibility of a full mobile core deployment, meaning both control and user plane can be on Outposts. At the same time, the MEC applications (MEC meaning Mobile Edge Compute, or Multi-access Edge Computing), the applications that require low latency such as real-time video analytics, can be deployed on Outposts, so this is a great fit for low-latency applications. That's the first reference architecture. On the next slide I'll show you the Snowcone reference, which we are working on with the Facebook Magma team as well: we deploy the user plane on Snowcone, while the rest of the core functions stay in the cloud, to save space and to provide local breakout to the customers. So, for example, the S-Gateway, or the Magma Access Gateway, can be on a Snowcone, two kilograms, about eight inches long. It's fully portable, so we can send it to remote mining sites or even into school zones. So this is the Snowcone reference. The next slide is the same use case but with more compute power. There are customers that require applications at the edge, for example real-time video analytics for physical or social distancing, worker safety, or zone fencing at universities and in school zones, so both the 4G or 5G core and the applications can be put on the Snowball. That's where Snowball, with its 52 virtual CPUs, is very suitable. And to save time, the next slide combines both use cases: you have a customer with a main campus, such as a mining company, that has Outposts with connectivity to the Region, and at the same time a remote satellite mining site that requires the Snow family, such as Snowcone or Snowball. And the last reference architecture I want to show, on the next slide, is where a customer doesn't need real-time use cases, but has IoT sensors with low throughput and doesn't want to deploy Snow or Outposts. They can just deploy the radio network on site and put all the core network, control and user plane, in the cloud. This is very useful for customers that want basic connectivity, and the Magma team has deployed this as well; in fact Sudhi from Facebook will do a demo of it, which is very interesting, so stay tuned. This is the use case we can demo on this call itself. Let's go to the next slide. This shows the end-to-end Magma solution on AWS. Rabi, my colleague here, has been working with the Facebook team on it, so I will hand it back to Rabi. Go ahead, Rabi.

Thank you, Sigit, for the introduction. That was really great.
So of course, when we worked with Facebook to bring the Magma end-to-end solution to AWS, one of the reasons we approached it this way was to work backward from the challenges the industry faces in launching these products into the market. If you remember, when I started my talk I mentioned the millions of new sites being rolled out and the need to deploy a mobile network for those sites; it has to be done in an automated fashion and in a way that is easy to scale. The approach we took was to look at the challenges the industry faces in launching these new services. Start with simply launching a product: if you look at how CSPs do it today, it's a daunting exercise to figure out the right bill of materials, the right components that have to work together to build the end-to-end service. With AWS, because we have a rich partner ecosystem and the AWS Partner Network (APN), we can quickly onboard partners into an end-to-end solution, because those partners are already running on AWS and using consistent APIs across each other, which makes it much easier to streamline that process. The second thing is how you place an order and do the fulfillment for the service itself. If you look at the AWS tools and services in that area, AWS Service Catalog gives you the ability to instantiate an entire infrastructure by clicking a button, which was a great tool for streamlining much of the ordering process. And because everything in AWS runs as code, as infrastructure as code, it becomes easy to version-control the entire infrastructure and deploy it in a matter of minutes, ready and up and running. The third thing is taking the service live and managing the continuous integration and continuous deployment of Magma components on AWS infrastructure. We have off-the-shelf AWS services and tools that let us do that in a configurable fashion: AWS lets you create your pipelines, configure different triggers for those events, and build your own flows integrated into the pipeline, including manual approvals before rolling out any updates to the network (a small illustrative sketch of interacting with such a pipeline follows below). And finally, when we talk about operating the network, it has to be done in a consistent way, and CloudWatch together with third-party orchestration solutions from our Marketplace makes the entire network easier to maintain and operate. Taking this into account, we see three use cases that can take advantage of a full Magma end-to-end solution on AWS: a typical CSP providing MPN services to its enterprise customers; CSPs providing fixed wireless access to their customers, enabling broadband services to be rolled out, especially in rural areas where connectivity is very limited; and finally, the automation framework can certainly be used within a CSP's public mobile network itself, leveraging these lifecycle-management capabilities for both the core and the network, using the automation framework that I will explain in more detail on the next slide.
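To make the pipeline idea a bit more concrete, here is a minimal sketch of interacting with such a CI/CD pipeline from the AWS CLI. The pipeline, stage, and action names are purely illustrative assumptions; only the CLI commands themselves are standard.

```sh
# Hypothetical pipeline name; only the AWS CLI commands are standard.
PIPELINE=magma-orc8r-pipeline

# Inspect the state of each stage (source, build, test, approval, deploy).
aws codepipeline get-pipeline-state --name "$PIPELINE"

# Start a run manually; normally a push to the linked repository triggers it.
aws codepipeline start-pipeline-execution --name "$PIPELINE"

# Approve a manual-approval gate before the change rolls out to production.
# The approval token comes from the get-pipeline-state output above.
aws codepipeline put-approval-result \
  --pipeline-name "$PIPELINE" \
  --stage-name Approve \
  --action-name ManualApproval \
  --result summary="Validated in staging",status=Approved \
  --token "<approval-token>"
```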
Now, looking at the architecture we put together, there are a few things I'd like to highlight. The first is what runs in the AWS Region. If you look at the diagram, we have all the services running in the AWS Region, alongside the control plane and any IT application supporting this private network, for example a video analytics application; all of the application control plane runs in the Region itself, with the option, of course, of running those applications at an edge location as well for better latency. The reason this is possible is that, as I explained, if you have AWS Outposts running on-premises, it is just an extension of your AWS Region: the same experience and the same console you use to manage and control your AWS account are used to manage and control your on-premises environment as well. That makes it possible for the same automation framework and the same pipelines you have in the AWS Region to manage the lifecycle of the infrastructure you have on-premises too. And that makes it possible to deploy Magma components either directly in the Region, with no change, or on-premises, just by changing the configuration and the choice of where to deploy those components. If we look at the remote sites, as Sigit explained with the Snowball and Snowcone devices, the provisioning of those devices and shipping them with the right image needed to run the network is also done automatically using the automation framework. And not just that: the automation framework also exposes APIs and endpoints that allow any portal system to be integrated into this automation pipeline, so the ordering and service fulfillment of the MPN can be done in an automated fashion. This automation pipeline, and specifically the AWS CodePipeline capabilities within it, lets the repository containing the actual code be integrated into the pipeline, so any push or change to the repository triggers the pipeline; as a result, you can test the change in your testing environment before rolling the update out to production, and that's all done automatically within the framework. And finally, the Marketplace lets you deploy any application from our partner ISVs to run on top of this MPN network. Before I hand over for the demo so you can see things in action, I'd just like to clarify that, conceptually, any of the components supported by Magma can be part of an SKU that becomes part of this end-to-end building block, and the AWS Marketplace has applications from our ISV vendors that can work alongside the solution to provide the end-to-end offering for the specific vertical and industry we are trying to target. With that, I'd like to hand over to Sudhi to take you through the demo.

Thanks, Rabi. I'm going to share my screen. All right. Hello, everyone. Good afternoon to the folks in the Bay Area, and hello to everyone else in different time zones. My name is Sudhi, I'm a network engineer with Facebook, and today I'm going to be talking about deploying Magma on AWS. My talk today is spread across four different categories.
We'll first start off by discussing the Magma architecture and the workflow for deploying all the Magma components on AWS, followed by the data plane elements: deploying them in AWS Regions as well as looking at far-edge scenarios using Snowcone and Outposts, as Rabi and Sigit mentioned. And finally, we'll talk about building Magma from the Magma source code and creating custom artifacts on the fly using AWS resources. After the slides, we'll spend most of our time going through a video demonstration, and we'll also have a small sneak peek at Snowcone in between. All right, let's get started. Our motivation is that we want to enable any user with a credit card to deploy the Magma core in a consistent, repeatable, frictionless, and pay-as-you-go manner. We want to ensure they are not encumbered by any CAPEX to deploy Magma. To start down this journey, we begin at the AWS Marketplace: the user goes to the Marketplace, finds the Cloudstrapper AMI, and uses that AMI to deploy something called a Cloudstrapper node. Cloudstrapper runs on Ubuntu 20.04 today and is deployed on a t3.micro instance, so from a resource standpoint it's really not resource-heavy; it's actually pretty simple and should fit on the free tier. You can think of the Cloudstrapper as the conductor of a symphony orchestra: it's the single point of contact that interacts with AWS, with Terraform, with Ansible, and with any number of tools we use to deploy Magma services today, and it does so in a very consistent fashion. The AMI ensures that all the dependencies required to deploy any of the Magma services are already preconfigured on the Cloudstrapper. So the user takes this Cloudstrapper instance and deploys the orchestrator in one of the Regions, and as soon as that is done we've completed the control path. The next question becomes: how do we deploy the data path? Using the Cloudstrapper, we go back to the Marketplace, find the Access Gateway AMI, and with it we deploy an access gateway in one of the Regions and connect it to the orchestrator. Now that we've done this once, we can use the same template file to repeat the same process in the same Region or in another Region, or, because these gateways run in an EC2 environment, we can use the same AMI and go to a near-edge site like an Outpost, or even out to a Snowcone in a far-edge scenario. So the question becomes: this workflow sounds very good, but how easy is easy? Let's look at the orchestrator first. At the simplest level, the Magma orchestrator is a collection of orchestrator container images and Helm charts. Because we want the deployment process to be as simple as possible, we bring in our Cloudstrapper node. The Cloudstrapper uses a set of template files where we generate certificates for the deployment and specify geographic constraints for where the orchestrator should be deployed; for instance, us-west-1 versus us-east-2, we can express a geographic affinity for where it needs to go. The domain name associated with the orchestrator, and any additional modifications we may want to make on the Terraform side, are all handled on the Cloudstrapper. The Cloudstrapper then runs Ansible, which interacts with Terraform to create the orchestrator; a rough sketch of bootstrapping a Cloudstrapper follows below.
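Since the talk doesn't show the exact commands, here is a minimal sketch of what bootstrapping a Cloudstrapper node could look like from the command line. The AMI ID, key pair, and playbook name are placeholders rather than the project's actual values.

```sh
# All names here (AMI ID, key pair, playbook name) are illustrative placeholders;
# only the AWS CLI and Ansible invocations themselves are standard.

# 1. Launch the Cloudstrapper from its Marketplace AMI on a small instance.
aws ec2 run-instances \
  --image-id ami-0xxxxxxxxxxxxxxxx \
  --instance-type t3.micro \
  --key-name my-keypair \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=cloudstrapper}]'

# 2. SSH in (the AMI is Ubuntu 20.04 based) and kick off the orchestrator playbook,
#    which drives Terraform to stand up the orc8r stack in the chosen Region.
ssh -i my-keypair.pem ubuntu@<cloudstrapper-public-ip>
ansible-playbook orc8r.yaml
```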
Now, going a little deeper into the orchestrator, we can subdivide it into three layers. We start by deploying the AWS services. Cloudstrapper has all the necessary credentials to interact directly with AWS, so it goes ahead and instantiates all the AWS services; we use a variety of them to realize the whole infrastructure, starting from EKS, EC2, Elastic File System, Elasticsearch, and Secrets Manager. On top of that we layer the data store: because the orchestrator, which includes the NMS for Magma, maintains quite a bit of state, we use AWS RDS instances, one for the orchestrator and one for the NMS. Once we're done with the AWS services and the data store, we have the entire infrastructure we need to deploy Magma on top of it, which we do in step three. In step three, the container images are applied onto the infrastructure we deployed previously, and then we see all the end-to-end systems come together. The Magma orchestrator provides functionality ranging from the NMS, which gives us full lifecycle management of the access gateways and the entire ecosystem, to maintaining subscriber databases, and it aggregates logs, metrics, and events from the gateways, all running on the infrastructure the Cloudstrapper assisted in deploying. So now that we've deployed the control plane, we move to the data plane. The question becomes how quickly we can deploy it and how we can use the same methodology to replicate to near- and far-edge scenarios. We again start this journey at the Cloudstrapper, but instead of filling out Terraform-related information, we specify site- and gateway-level specifics. Cloudstrapper makes use of an AWS CloudFormation stack and deploys a VPC, which we call a site, with all the networking resources required; within that site we realize a gateway, and the gateway is powered by the AMI that's already in the Marketplace. Having done this once, if we go to a location where the customer provides an edge site, we can deploy the same gateway using the same AMI onto a Snowcone, a Snowball Edge, or an Outpost. So it's that simple to deploy resources on AWS. Then comes the question: what about the distro? How do we make a customized distribution, build it, and create artifacts on the fly? Again, we start our journey at the Cloudstrapper. In this case it instantiates an EC2 or Lambda instance, and we do this in two steps. Step one is to create a build environment, the place where all of our build process is going to happen; when we create it, we configure it with all the dependencies needed to build all the Magma images. Once built, the images are pushed to an image repository, which could be Docker Hub or a cloud-provided container registry, and we also push the Helm charts either to GitHub or to another cloud-provider code repository. It's important to remember that this build node is very much an on-demand construct: it may be built with a large amount of resources, but only for a very short time; it's a burst resource. At this point, I'd like to pause the presentation and move over to some demonstrations. We're going to do three demos today, starting with deploying an entire orchestrator.
We'll do the orchestrator first, followed by deploying the data plane elements, and then we'll finish by looking at a demo of how we build Magma artifacts. The demo starts at a point where we have already deployed a Cloudstrapper instance from the AMI. So let's get started. The Cloudstrapper instance has been deployed, and we SSH into that node. Once we're in, we first make sure that Cloudstrapper has all the necessary credentials to communicate with AWS, with GitHub, and with Docker, so that it can pull all the container images and Helm charts that are required. Having done that, the next thing is to configure our orchestrator. We provide some information about how we want to deploy it: a cluster name, a DNS subdomain, the label of the container images we're going to pull, the version of Magma we'll deploy based on the GitHub tag, and finally the repository the Helm charts come from. In this instance, we're deploying the orchestrator in ap-southeast-2, which happens to be Sydney, and we can see there are no EKS or EC2 instances running in that Region yet. The subdomain we're using also does not yet exist in Route 53, and there are no key pairs. So when we run the Ansible playbook, all of this should get generated, and we'll have a fully functioning orchestrator after the run. The last thing to check is that the container images actually exist in our Docker registry, and we validated for all four images we need that the 1.3.0 tag exists, and likewise that the Helm charts exist in the repo we're pointing at. At this point we're ready to run our Ansible playbook. It's a single playbook; this run started around 12:48 UTC and ended at 13:06 UTC, finishing in about 18 minutes. In 18 minutes we were able to go from zero to one hundred and deploy a full orchestrator. I'd like to pause the video here for a second and draw a quick parallel with what we saw on the slides. The last three commands Ansible ran were the three terraform apply commands (a rough sketch of these staged applies appears below). Thinking back to the sub-steps required to deploy the orchestrator: the first terraform apply deployed all the infrastructure, including the RDS instances, onto AWS and instantiated everything ready for Magma; in the second terraform apply we seed all the secrets needed to interact with Magma; and in the final terraform apply, with all the infrastructure in place, we deploy all of our Magma services on top of this cluster. At this point the Cloudstrapper node outputs the kubeconfig, and as part of the installation it also shows the main.tf Terraform file along with the Terraform state file. At the bottom of the screen we see the name servers; those are the name servers we need to forward our domain's DNS resolution to, so that any public endpoint can resolve the addresses of the Magma components. We can now see that there are three EC2 instances running, and that there is now an auto scaling group for these EC2 instances.
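For reference, the staged applies described above roughly follow the pattern used in Magma's orchestrator deployment templates; the exact module and resource names below are assumptions and may differ between releases.

```sh
# Module and resource names follow the pattern in Magma's orc8r Terraform templates,
# but treat them as assumptions; they can change between releases.
terraform init

# 1. Stand up the AWS infrastructure: EKS, EFS, Elasticsearch, RDS, Route 53, and so on.
terraform apply -target=module.orc8r

# 2. Seed the certificates and secrets the orchestrator application needs.
terraform apply -target=module.orc8r-app.null_resource.orc8r_seed_secrets

# 3. Deploy the orc8r and NMS releases onto the cluster created in step 1.
terraform apply
```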
Prior to running the Ansible playbook and Terraform, we did not have any RDS instances, but now we have two of them, one for the NMS and one for the orchestrator. And finally, this is one of the services deployed on the stack: the Magma orchestrator is multi-tenant and org-aware, so in this case we went to the master org and created a test organization that we can use for testing, and we're able to reach this endpoint via the web browser, which means the entire orchestrator deployment was fully successful. At this point we're done with deploying the orchestrator; it took all of 18 minutes from starting the Ansible playbook, plus a few checks here and there to make sure all services came up. Now I'd like to move on to demo number two, where we deploy the gateway appliances. On this screen we're modifying some of the cluster elements, where we specify which Region we'd like to deploy the specific access gateway in and which AMI, already present on the Marketplace, we'd like to use to instantiate this particular gateway. We also make sure that the key pairs we need in order to connect to the gateways are present on AWS. Finally, we reference the essentials CloudFormation stack, which would be needed if no key pairs existed; but since we already have key pairs generated, we won't need it here. Then we specify some site-specific information: the CIDR ranges for the VPC, the SGi interface, the eNodeB, and of course the name of the site. We specify all of that in a YAML file, followed by configuring the gateway, where we specify an ID for the specific gateway and make sure the variable association for the image is set appropriately. At this point we're ready to rock and roll and deploy our site and gateway; we'll do both at the same time, deploying the Menlo Park site and access gateway A together. This run started at 21:18 UTC and ended at 21:21, for a total runtime of three minutes and 23 seconds. At this point the access gateway is only provisioned with the Magma base image; it's not yet configured. We can see that there is now a stack for Menlo Park, and we can see the EC2 instance deployed with it. We'll now SSH into that environment, and without any additional configuration we can already see that the Magma base image has been loaded onto this particular gateway. Obviously the challenge and check-in would fail, because this gateway has not been checked into, nor configured to communicate with, any particular orchestrator. To configure it, we run the second Ansible playbook, agw-configure, which deploys the control_proxy.yml, ensuring the gateway talks to the correct orchestrator, and the rootCA.pem, which allows it to bootstrap (a sketch of roughly what that configuration looks like follows below). Currently in the demo, we're creating an access gateway B in the same site, so this is the part about scaling the number of gateways.
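As a rough illustration of what that configuration step leaves on the gateway, here is a sketch using the standard Magma AGW file locations; the controller domain names are placeholders, and the agw-configure playbook may do more than this.

```sh
# Placeholders: the controller domain names; file paths follow the standard Magma AGW layout.

# Point the gateway at the right orchestrator endpoints.
sudo tee /var/opt/magma/configs/control_proxy.yml <<'EOF'
cloud_address: controller.magma.example.com
cloud_port: 443
bootstrap_address: bootstrapper-controller.magma.example.com
bootstrap_port: 443
rootca_cert: /var/opt/magma/tmp/certs/rootCA.pem
EOF

# Install the orchestrator's root CA so the gateway only trusts the intended cloud.
sudo cp rootCA.pem /var/opt/magma/tmp/certs/rootCA.pem

# Print the hardware ID and challenge key needed to register the gateway in the NMS.
sudo show_gateway_info.py

# Force an immediate check-in by restarting magmad (it also checks in periodically).
sudo service magma@magmad restart
```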
Without having to make many configuration changes, we just run the Ansible playbook again with additional variables, and within about a minute and 19 seconds we've deployed a second gateway. The runtime this time was much lower because we didn't have to deploy another VPC; it's part of the same VPC. We then configure the second gateway to communicate with the same orchestrator and give it the rootCA.pem that allows it to bootstrap. At this point, we check this gateway into the orchestrator. We run show_gateway_info.py on the access gateway, which gives us all the information we need to add it to the NMS. Adding just the hardware ID and the challenge key, and making sure the gateway has the control_proxy.yml, creates the association between the cloud instance and that gateway instance, and the rootCA.pem ensures that it isn't a nefarious gateway connecting to the cloud, that it is actually authorized to connect to that particular cloud. Now, to force a check-in, we just restart the magmad service; check-ins usually happen on a periodic basis, but for the purposes of this demo we restart it forcefully. And there it is: we're fully checked in. All the services on the access gateway are running. This is version 1.3.2, and if we refresh the gateway, we see the health is good. I'd like to pause the demo here for a second and do a quick sidebar on the AWS Snowcone. We received a Snowcone from Amazon not too long ago, and within a few hours we were able to work with them to get the networking and the site component sorted out to the point where we could log into it and confirm that the Snowcone had the base Debian image with the Magma services running on it. This is just a sneak preview; we're not ready to pass traffic through it yet because the second interface hasn't been turned up. However, without doing any additional work, we now have magmad fully running on this particular gateway. Going back to the video, it takes us into demo number three, which is all about building our Magma images from the Git source and pushing them to our Docker or container registry and our Helm repository. To begin that process, we again validate that the AWS access key and secret key are still valid and that we have credentials for our Git repositories and Docker registry. We then look at the build.yaml file, where we make sure that the tag we'd like to check out from GitHub is correctly identified, the orchestrator label we'd like to publish is defined, and the Helm repo we'd like to publish to already exists; you can see it's a brand new repo with nothing in it. After that, we run build-provision.yaml. This particular Ansible playbook essentially just deploys a build node; this one took just under a minute to deploy the whole build node. We can see that this also used CloudFormation, and there's a build-orchestrator stack in there. If we look at the EC2 instance, it comes up as a t2.xlarge, and that is by design: the build process does take a fair amount of resources. However, as I said at the beginning, this is a very on-demand resource and doesn't need to run at all times. We then run the Ansible playbook build-configure.yaml; an illustrative sketch of the kind of steps it automates follows below.
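To give a feel for what the build node does, here is an illustrative sequence of the kind of steps such a playbook automates. The repository paths, tag, image names, registry, and chart locations are all placeholders, not the playbook's actual tasks.

```sh
# Illustrative only: paths, tags, image names, and registries are placeholders,
# not the actual tasks inside build-configure.yaml.

# Check out the release tag to build from.
git clone https://github.com/magma/magma.git && cd magma
git checkout v1.3.0

# Build and push an orchestrator container image (Dockerfile path is a placeholder).
docker build -t registry.example.com/magma/controller:1.3.0 -f path/to/Dockerfile .
docker push registry.example.com/magma/controller:1.3.0

# Package the orchestrator Helm chart and publish it to a chart repository.
helm package path/to/orc8r-chart
helm repo index . --url https://charts.example.com/magma
```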
This is the playbook that ultimately fetches all the artifacts, deploys all the dependencies on the build node, and by the end of the run has published all the artifacts to the various registries. The build started at 01:04 and concluded at 01:17, in just under 13 minutes. We've deployed all of our Helm charts and the four container images to the container registry. There it is, 01436: that's the Helm chart we were looking to publish, and here we're expecting to see the Magma LTE tag, and there it is. All the artifacts have been published using a very quick framework that requires very little manual intervention; it's just a set of Ansible playbooks that ultimately simplify the entire experience of deploying Magma services along with building Magma services. At this point, this concludes the demonstration, but the question then becomes: what's next? The principal architect of this work, Arun Thulasi, made a PR this morning to push all of this code into magma/experimental. It will be available on GitHub once it's merged in; all of this code will be merged shortly, and the documentation has been written so that any part of it can be used at any point. The AMIs don't actually exist on the Marketplace yet, but the documentation allows any developer to create their own AMIs, create their instances, and then go through the process of deploying their orchestrator, sites, and gateways accordingly. I'd like to pause here and pass the baton back to Phil.

Thanks, Sudhi. I think Kendall's taking it from here. Yes, I was going to take over. We do have a couple of minutes for questions. If anyone has a question, please feel free to unmute yourself and ask it now.

Hey, I have a question. Any chance to run the access gateway on some custom hardware, a custom gateway at the edge? Any option to avoid Snowball or Snowcone, or is that against the AWS approach? Thank you. Great demo, by the way.

Sure. Sigit or Rabi, if you want to take that question. Yes, so we have sent our Snowcone to Sudhi and the team for exploration, and I think Snowcone is a great service to start with because, as I mentioned, it's smaller than a shoebox, it's two kilograms, and we can send it via UPS or FedEx. And in the future, right, Sudhi, I think we can work together on Snowball and larger devices. Also, if you pay attention to AWS re:Invent 2020, there will be new 1U and 2U form factors, so those will be very interesting to try. So basically, just to be clear, the whole team here is exploring all the options to solve our customers' requirements together. We start with Snowcone, then Snowball, and in the future we will work with the 1U and 2U Snow, sorry, Outposts. That's basically what we can share here. Anything to add, Rabi? Go ahead. Yeah, just quickly: there's nothing stopping you from doing that. We wouldn't recommend it, though, because Snowcone, Snowball, and Outposts give you that consistent experience, and you'll be able to leverage the same toolset and the same AMIs that you use in an AWS Region to deploy and provision those devices. But yes, conceptually, you can do that. Right.