Hello everybody, thank you for coming, and we really appreciate everyone coming out to see us at the very end of the day. It's exciting to be here in Tokyo for the first time. I came in 2018 and 2019 as a participant, and this is my first time as a speaker, so I really appreciate you all coming today. Likewise, it's my first time here, so I'm really enjoying it, and I hope you enjoy the last day and get something out of the presentation. I'll introduce myself quickly first. My name is Mario Aloso, I'm an enterprise technology architect at ExxonMobil, and I'm actually based out of Kuala Lumpur, Malaysia right now; I've been living there for three years. I wear many hats, and one of them is over our enterprise strategy in automation, observability, and a lot of our cloud native technologies, so a lot of the things we're going to talk about for being successful at the edge are capabilities I help oversee. I'm Chad Furman, I'm the Senior Principal Product Manager for Ansible at Red Hat, and also just a huge automation evangelist. I actually had his job two years ago, so we used to be partners, and now I've moved to the vendor side, but we still get to work together quite often. What we're going to do today is tell this in a story format, because I think what a lot of people don't really think about is that when you're looking at edge, there's so much more to it than just "oh, we're going to deploy an application." The way a lot of these things start is you have a single app that you're doing a POC on, you start testing it, and then you have to think about how you're going to manage and scale it in a massive infrastructure environment like ExxonMobil's. Okay, so next, like Chad said, you've got a POC, you've run some ideas, you've got something working, but now you're trying to see how you can take this to the next level and scale it out.
And one of the things you often see, especially with devices on the edge or any POC in general, is that it's easy to manage at that small scale, right? So when you start thinking about taking this to production, and how to manage it past the incremental steps you've been taking, that's when you start looking at all the little things you need so that once you get it out there, you've got all the foundations necessary to support it. So in order to really be successful at this, you have to build a foundation. And this is one piece I loved coming from a large company to Red Hat: at Red Hat we generally talk about operating systems and Kubernetes and modern applications, but there are so many more little bits and pieces to it. And really, it comes down to our meme here: one does not simply scale. This is the first thing I actually drew when I came to Red Hat, because I needed to explain it, and we will go into it in very, very minute detail. When you're deploying a large manufacturing site, anything out in the field, or a retail location (and I've run this past a friend of mine who used to work at Walmart), these are all the things you actually need to get from a data center to near edge and far edge, and then take data back from the edge and push it into the cloud so you can visualize it and get everything you need out of it. Yeah, and one of the common things you'll see as we talk through this is that a lot of what's up here is probably very familiar: things you've seen before and are used to using in your data center and other environments.
And so a big part of the conversation we're going to have today is about how to make sure these things are in place so that you can manage at a much larger scale, with less access to the devices you're putting on the edge. So let's start at the beginning. To build out this foundation, you have to start with the fact that you have multiple groups with different needs, whether it's DNS or storage or things like that. So how do you actually track, manage, and request all of those services, and get to a more DevOps type of methodology, versus the way we've done it before, where you put in a ticket and hope that at some point someone resolves it and gets you the resources you need? Yeah, so we'll start off with service tracking and management. I'll go through a few of these, but we've all been there before. You have things like automated governance, right, the policy side. Maybe you have governmental policies you need to adhere to, maybe financial policies, or in the healthcare industry HIPAA, or PCI, and things of that sort. So at the beginning stages, you have to make sure those things are in place so that you have the governance to put your devices out there. Next we have ITSM. Now you're starting to think about how you make these service requests, right? How do you automate the ability to take in requests from customers and clients, and handle them in a way where you can actually track the services, the requests, and the responses that go out for them? The next one we're all very familiar with: IPAM and DNS. It's never a DNS problem. It's never ever a DNS problem.
It's never the network's fault, right? So here, obviously, it's about being able to track what you're putting out there at scale, so DNS is very, very important. Then there's identity management, and we'll talk more about that when we get to the network portion, but it leads into things such as zero trust: understanding how these devices are put out there, who's managing them, and who has access to them. And then obviously one of the most important foundational parts is your PKI infrastructure. How are you managing the secrets and the certificates, the things that are going to make these devices secure? Because the further out to the edge you get, the less likely someone is going to be able to go touch a device, and the less likely you'll get somebody on the ground who can. So you have to have security in place to manage those devices remotely. So now that we've covered some of the security pieces, the basic things we need: one thing that always surprised me a bit coming to the vendor side from being a customer was that we always go straight to the operating system and the apps, and I would always ask, so how are we going to connect to those apps? This is where we talk about network connectivity. Networking, which a lot of people just assume is there, whether you're at a remote far site using cellular, or at a regional site like a bank or a retailer. All the things you need for networking, including the things on the left, need to be there, so you need to have DNS. Let's say you're running a retail location and you lose network connectivity: how do all your systems know how to talk to each other? Kubernetes doesn't work very well if Kubernetes doesn't know how to talk to itself.
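To make the cert-rotation point concrete, here is a minimal Python sketch of the kind of check an automated PKI job might run against a device inventory. The device names, the 30-day window, and the inventory shape are all illustrative assumptions, not anything specific to the setup described in the talk.

```python
from datetime import date, timedelta

ROTATION_WINDOW_DAYS = 30  # renew anything expiring within 30 days (assumed policy)

def certs_due_for_rotation(cert_expiry, today):
    """Return device IDs whose certificates expire within the rotation window.

    cert_expiry maps device ID -> ISO expiry date string, e.g. "2024-06-01".
    today is an ISO date string for the current day.
    """
    cutoff = date.fromisoformat(today) + timedelta(days=ROTATION_WINDOW_DAYS)
    return sorted(
        device for device, expiry in cert_expiry.items()
        if date.fromisoformat(expiry) <= cutoff
    )

if __name__ == "__main__":
    fleet = {
        "pos-terminal-01": "2024-01-20",   # inside the window -> rotate
        "cam-gate-03": "2023-12-30",       # already expired -> rotate
        "valve-monitor-07": "2024-06-01",  # plenty of time left
    }
    print(certs_due_for_rotation(fleet, "2024-01-01"))
```

The point is that this kind of check runs continuously and feeds an automated renewal step, rather than a person eyeballing expiry dates across thousands of devices.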
So having the networking and the firewalls and the security and all of those things you need in order to talk to those endpoints is crucial to actually being successful at the edge. And this comes from us working somewhere that has locations in countries all over the world. To add to that, when you're looking at networking on the edge, you're also looking at connectivity, right? You're looking more at wireless and 5G, those types of connectivity, and less at hard wire, the typical copper and fiber you might be used to in the data center. Once you start adding those complexities, well, we already know the network is important, but now you're adding complexity just to get the connectivity part down solid. I think what most people don't really think about, but we really had to worry about, was how you actually do wireless security. Wireless security ties back into your PKI, which needs certs, which means all your devices need certs on them, plus cert rotation and cert revocation, to make sure your devices are secure on your network and you're not just getting random people connecting to it. Okay, so now we've got network connectivity. We actually need to secure the traffic, approve it, make sure only the right things are getting on there, and then of course monitor it, which leads to SIEM and SOAR. A lot of assumptions come in now where we think we know everything that's on the network, and that we have all the identity we need there. But how do we actually take that information and pipe it into your SIEM, so your SOC can come back and tell you if there's a security problem? You want to be able to track everything going in and out of your network, your application layers, your hypervisor layers.
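One tiny piece of that monitoring pipeline can be sketched as follows: flag traffic from devices that aren't in your known inventory, the kind of rule a SIEM correlation step might apply. The inventory, event shape, and device names here are illustrative assumptions, not a real SIEM rule format.

```python
# Assumed device inventory; in practice this would come from IPAM/identity management.
KNOWN_DEVICES = {"pos-terminal-01", "cam-gate-03", "valve-monitor-07"}

def unknown_senders(events):
    """Return the sorted set of device IDs seen in events but not in inventory.

    events is a list of dicts with at least a "device" key; in a real
    pipeline these would be parsed log or NetFlow records.
    """
    return sorted(
        {event["device"] for event in events if event["device"] not in KNOWN_DEVICES}
    )

if __name__ == "__main__":
    seen = [
        {"device": "pos-terminal-01"},
        {"device": "rogue-ap-9"},   # not in inventory -> flag it
        {"device": "rogue-ap-9"},
    ]
    print(unknown_senders(seen))
```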
And then of course locking things down with firewalls, and hopefully getting at some point to zero trust, though we've realized that for a lot of people zero trust is something they're just now starting to dip their toes into. All of these things need to be there before you can even get to where we want to go next: infrastructure. Now we're finally at infrastructure, and we're ten minutes in and we haven't even talked about building the basic compute you need to run your applications. So now we get to hypervisors and containers and applications, all the things we need to actually run the apps that generate revenue for our companies. This is probably where a lot of people are more familiar, right? Now you've got the hardware. You've got the traditional Linux VMs and Windows VMs, the things you're going to run your applications on top of. And as we move more into cloud native and applications are delivered more rapidly, if you're using Kubernetes or containers to manage those applications, you have to make sure you can deliver those things on top of this layer too. But as Chad said, there were a whole lot of things we had to take care of before we could even think about putting the foundations together to deliver those applications. So now we're at the point where we can actually provide our developers, clients, and customers the foundation they need to deploy their applications. Okay, so now we have infrastructure and we have apps. Now we actually need the sensors: things like video cameras, vibration and valve monitors, all the things that actually help us perform our business, like cash registers, point-of-sale scanners, and so on. We've gone through this whole massive stack to finally get to the very end, where we actually make our money.
So if you're at a store running a cash register, this is where items get scanned in. You get the data, it pushes up somewhere into a database or into the cloud, and you can actually visualize the money your company is making. And this is the final point for us: okay, now we finally have a manufacturing plant, it's manufacturing products, and we can scan everything that's happening and feed it back into a feedback loop to get the information we need. But before we could do that across hundreds of locations, we had to do all of these other things and get them to a point where they were essentially infallible, because there aren't IT people at an end site. There's not an IT professional sitting at your retail store who can go in and fix your network. There's definitely not one at a manufacturing site making isopropyl alcohol who knows how to fix anything on the left-hand side of this chart. And ultimately, as complex as this is, the goal is to make sure the people delivering the software and applications that make this happen have as seamless an experience as possible. That's what we have to do: when we get these foundational parts right, we're giving them a platform to deliver those applications on top of. So real quick, before we get too far: does this make sense to everybody? Does this sound crazy? I see some heads nodding, so okay. In summary, before we can scale the edge, everything has to be automated, everything has to be available as a service, and it has to be completely repeatable. Hopefully there's some sort of service catalog, a place where you can request things, and pipelines you can run through CI to go and request the different components you need, so that you're not
stuck on an island, unable to do the things you need. We saw that with certs forever: until the ACME protocol came out a few years ago, every time anybody needed a PKI cert they literally had to go find whoever was on the PKI team to get it for them. And especially when you're talking applications and Kubernetes and wireless certs and all of that, there are hundreds of thousands of certs you're now responsible for managing, and unless you have that automated, it's a nightmare to take care of. So this is what it actually looks like at the end of the day: everything here needs to be lit up, everything needs to be accessible. In the world I came from, even before I was at Exxon, every single one of these icons was at least one team, if not five different teams, that I had to call and request services from. And this is why I really became an advocate for automation, because at the end of the day, if there's an API you can call to request these things, it just makes your life so much easier. Cool, so how do we actually get started here? Those pictures were nice, right, but what we're going to break down now is all the little components that we need. Now you get down to the level of the services, the APIs, and the foundational things you need to start implementing to make those possible, so we'll run through a few of them. There are a couple of pages, and we'll put it all together as we go, but if you look down here at the three columns and phases, the first one is really what you'd call tactical automation. These are services, going back to the ITSM portion, the service requests that are available to make these things happen.
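The "call an API instead of filing a ticket" idea can be sketched very simply: a self-service request gets validated and serialized before being sent to a provisioning backend. The field names, supported services, and schema here are entirely hypothetical, invented for illustration; they are not a real Red Hat or ExxonMobil API.

```python
import json

# Hypothetical self-service request schema -- illustrative only.
REQUIRED_FIELDS = {"site", "service", "size"}
SUPPORTED_SERVICES = {"vm", "dns-record", "storage-volume"}

def build_provision_request(fields):
    """Validate a provisioning request and serialize it for the API call.

    Raises ValueError early, so a bad request never reaches the backend
    and never turns into a human-routed ticket.
    """
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if fields["service"] not in SUPPORTED_SERVICES:
        raise ValueError(f"unsupported service: {fields['service']}")
    return json.dumps(fields, sort_keys=True)

if __name__ == "__main__":
    body = build_provision_request(
        {"site": "kl-plant-2", "service": "vm", "size": "small"}
    )
    print(body)  # this JSON body would be POSTed to the provisioning endpoint
```

The design point is that validation, not a human reviewer, is the gatekeeper, which is what makes the request repeatable and automatable.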
These are the basic necessities you need to even get started. How do you have self-service provisioning, a self-service API for whatever service you're trying to provide? How do you have a provisioning API, whether it's provisioning a VM or fulfilling any other kind of request? You have to have some sort of API to make that happen, and to configure whatever component it is. You have to have network and storage automation, and then obviously, through ITSM, runbook and ticketing automation. We call this tactical because, in our experience, you sometimes have pieces of this in different silos. Some organizations have one piece, some have another, but when you're looking at the edge and the foundation you need to actually be successful, even if components are developed in different organizations or areas, you want to make sure they all get created at some point. The best way to look at it is that you probably want to create them together in some unified manner, so you understand where all the different components are coming from and can put the foundation together. Ultimately, these are things that need to be tackled at some point, and here are some examples of how some of them run: network and server automation, ticketing automation, self-healing automation (which you can do in bits and pieces, though you probably want to be more deliberate about it), in-place upgrades, IPAM, DNS, and all these different things. Next you have the actual artifacts that go to the device and how you're building them. That's when you get to infrastructure as code, building with infrastructure-as-code platforms such as Ansible
and other tools to actually build the operating system, the applications, and the systems that get deployed on top of those edge devices. A lot of these will be familiar from the DevOps cycle. And then you get into SRE. DevOps is the practice of how you take these things and put them together, but now you're looking at the actual operational impact of managing them at scale. SRE focuses on reliability and security, how you get things out at scale and make sure they're up and running, which is such a huge component of edge computing: without a direct line of sight into how these things are running, you need some confidence that they're up. So SRE principles are a big part of understanding how you can manage these things at scale. And a big one that really bit us, that we really didn't think of: most enterprises have very locked-down computers and laptops. So one of the biggest enablers was actually creating dev workspaces. Our developers couldn't do good development on their laptops, so we ended up creating VS Code and CodeReady Workspaces on top of Kubernetes, so everyone could have a very similar developer experience. That way you avoid "it works on my laptop but doesn't work in production," especially when you're pushing out to 5,000-plus devices. And this is actually phase one of the automation: you've got to automate all these things to even get started, those beginning phases we looked at before. All right, phase two. Notice we didn't say Kubernetes or containers or cloud or anything there, because that was really just the foundational fundamentals. This is where you actually start doing things:
containerization with Podman and Docker; maybe starting to look at things like Kubernetes; and actually getting that whole dev environment, maybe even the local environment, working so developers can do true testing on their local systems that actually replicates into the cloud or into the edge sites. API management is possibly one of the things we both fought for for many years, because you need a catalog of APIs in a centralized place where people can go and say, I need an API to request a virtual machine. In the cloud that exists; it's part of all the cloud providers. But when you're trying to do this for remote site A in Papua New Guinea, how do you call the API to request a VM in the middle of a different country? So, Kubernetes, and actually getting to the whole delivery pipeline now. Thinking about pipeline creation, it doesn't have to be on Kube; it could just be a CI/CD pipeline in GitHub to get your application and your artifacts built, repeatable, and tested. And hopefully you start getting toward a software bill of materials, where you know what's actually in there, because that's going to come back on you later whenever you get audited. And then of course you're getting into application integrations: how do the different apps talk to each other, and how do you make sure the certs are compatible and allow you to talk to the different applications? Maybe start doing operators; and hopefully you've already been doing container scanning, but absolutely start doing it, making sure your containers contain what you think they contain and are running the things you want them to. Now you've kind of got phase three. We've got the foundational part.
In phase two we looked at all the second-level integration parts that help you actually put applications or services on top of the foundation. Now we're looking at the advanced parts: how do you manage day two, how do you make sure these things are up and running, and how are you managing them from the point of view of the clients and customers benefiting from whatever services you're putting out there? So you're looking at unified observability: how do you view 10,000 devices, like we said at the beginning, in some sort of high-level way so you can manage them at scale? Now you're bringing in a lot of the SRE components, so SLOs, understanding the objectives. One thing about these devices: they're all going to be different. They're going to have specific, niche use cases for why and where they're deployed, so you need to understand the objectives for the reliability of the service they're running, and obviously do some capacity planning. Again, when something's out there and you don't have direct line of sight, concepts you can borrow from things like chaos engineering and reliability testing are going to be huge: being able to manage what happens when something goes down, because some of the devices you're going to put out there, particularly at the edge, can have safety implications, and can really be detrimental if you're not aware of how they operate in the worst of times. And I should say that going through these three phases took about five to six years. This isn't something that happens overnight; it's about getting people to go from a very waterfall, very Windows/.NET mindset into a Linux mindset and into these practices.
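The SLO point above boils down to simple arithmetic: an availability target implies an error budget, and downtime spends it. Here is a minimal sketch of that computation; the "three nines over 30 days" numbers are illustrative, not anyone's actual targets.

```python
def error_budget_remaining(slo_target, total_minutes, downtime_minutes):
    """Minutes of downtime left before the SLO is blown.

    slo_target is a fraction, e.g. 0.999 for "three nines".
    A negative result means the budget is already exhausted.
    """
    allowed = (1.0 - slo_target) * total_minutes
    return allowed - downtime_minutes

if __name__ == "__main__":
    # 99.9% over a 30-day month (43,200 minutes) allows ~43.2 minutes of downtime.
    month = 30 * 24 * 60
    print(error_budget_remaining(0.999, month, 10.0))
```

At edge scale, the value of this framing is that it turns "is the device fleet healthy enough?" into a number you can alert on per site.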
Those phases are the baby steps of walking your way to eventually getting to cloud native development. And I think that's one of the biggest takeaways I hope everyone leaves with: this isn't something where you go back to work on Monday and say, hey, we're going to stage three. You start at stage one. So this is what it all looks like together, and this is definitely that foundational curve, where you really want to start at that tactical level and slowly move up through infrastructure as code, get to site reliability engineering, and somewhere in the middle of that edge progression you'll really start to see the uptick of being able to deploy at scale. And this is where we put this slide back up, because this is my chaos slide that I like to talk to. So, any questions? There are probably a lot of things up here you've seen already. So the question was, can we speak a bit more about recovery? It depends on which type of recovery you're talking about, like disaster recovery. Sure, let's go with recovery. When you're talking cloud native, one of the things I always try to think about, and one of the things we fought over a lot, was: do we try to restore it to exactly the point where it was, or do we live in a cloud native world where it's ephemeral? I loved that conversation back in 2014, when containers really first started coming out and everybody was saying, oh yeah, it's ephemeral, and I was saying, but how do you run an application without a database? How do you do this? And of course, that's when we started adding storage and things like that. So from a recovery perspective, I would hope that you have your backups automated as part of all this too. And you saying that makes me think I need to update this deck with recovery as part of it, so thank you.
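One common automated-recovery pattern at the edge is to keep a known-good system image alongside the newly updated one, and fall back automatically when health checks keep failing. Here is a minimal sketch of just that decision logic; the slot names and retry threshold are illustrative assumptions, not any particular product's behavior.

```python
MAX_BOOT_ATTEMPTS = 3  # assumed policy: give the new image a few tries first

def choose_boot_slot(active, failed_boots):
    """Pick which image slot to boot next in an A/B update scheme.

    active is "A" or "B"; failed_boots counts consecutive failed health
    checks on the active slot. After MAX_BOOT_ATTEMPTS failures we fall
    back to the other (known-good) slot, with no human on site.
    """
    if failed_boots >= MAX_BOOT_ATTEMPTS:
        return "B" if active == "A" else "A"
    return active

if __name__ == "__main__":
    # Slot A's new update has failed its health check three times: roll back to B.
    print(choose_boot_slot("A", 3))
```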
I think there are still considerations you need to make for concepts like immutability and being able to provision rather quickly, right? I think back to when I was an early sysadmin, where we used Kickstart for everything. We were really successful with it because we didn't have to be close to the box: we could scratch a machine and build it up again, and there was some sort of data stream that brought it back up to the state it needed to be in. A lot of those concepts are coming back when we think about how we recover and get back to the state we need to be at. The tooling is changing, but it's that same methodology that lets us think of recovery in terms of speed and getting the data back. Actually, I just thought of another add-on to that. That's one of the things I really like about the newer OSTree-style Linux builds with greenboot, where you can have two different boot versions. When you update, you boot into the updated version, but if there's a problem with it, you can just roll back and boot from the previous version. So it makes operating-system-level booting really resilient. It also does really small delta updates instead of full RPM updates, because you're going to be pushing all of this over the air, and if you're pushing over cellular, you want to push as little as humanly possible, because it gets really expensive, really fast. There's only one more session after us before the end of the day, so please, any questions at all; I really enjoy hearing what you all think. And if you ask in Japanese, I could probably guess what you're saying. I think this is the first time I've heard the words "near" and "far" used for edge. Is that common terminology?
So I've noticed this lately with edge conversations: everybody is using different terminology. I've heard provider edge, I've heard retail edge; there are so many different ones. So I tried to keep it really simple: near edge is more like your ROBO, your remote branch office, or a regional data center, where there are probably less-skilled IT people. And then far edge is your device-level edge, where there are no skilled IT people to manage those devices. So you don't have a remote-branch-office type of scenario; it's just data center and far edge. Okay, that makes sense. Do you mind telling me what industry you're in? I'm curious. So, you had asked about this: the LF Edge Foundation has a taxonomy white paper available that describes it, and I was going to come up and talk to you about it afterwards; Red Hat is a premier member of LF Edge. There is a taxonomy they worked on, and it's very similar to this, so there is something out there, but it is still an open question. Yeah, it is. So LF Edge is basically trying to put it out there: at least, here's our definition, and here's how we fall into it. Yeah. Unfortunately, you see this a lot; everybody tries to come up with their own naming for it. A while back there was fog computing and these other terms, and I really hope we can come up with a common definition everyone agrees to. But it also depends on the industry. I work quite a bit with manufacturing because of my background, and if you say something to them like, what was the word I said the other day? Oh, controller. A controller means something completely different to somebody in an industrial process control environment: a controller to them is what actually controls the robots. Whereas if we think of a controller, we probably think of, well, in my world, it's Ansible.
It's called the controller, and in other places a controller can mean different things again. So you use just a two-tier model, not a three-tier model? That makes a lot of sense. I'm seeing that more with people who just do cloud: you'll have a cloud and then you'll have edge, or you could just have your data center and your far edge. I see that with retail locations quite a bit, actually. In the manufacturing world, they actually have a five-tier model, what they call the Purdue model. There are five levels of networking: a level for the sensors, a level for the devices that talk to the sensors, a level from those devices to the database, and then up to the business networks. That's a very, very common model in manufacturing. I just tried to do this in a simple way, to show how we've seen it in retail and manufacturing, and where we came from, or where I came from and he's still at. Okay. Anything else? I appreciate everybody coming. Feel free to talk to us anytime afterwards. I'm available on LinkedIn, and Martial is also available there. So please reach out anytime. Thank you very much.