So I think we'll get started. Thank you for joining. This is not the pragmatic, how-you-do-XYZ-with-OpenStack talk. But I think what's really interesting, and applicable today, is how do you actually take an investment like OpenStack, move it into a hybrid cloud, and then move it beyond that into what we call borderless computing. So remember when you were young, when we started out with finger painting. You could do pretty much whatever you wanted to. You could take anything you wanted, any colors, and put them anywhere you wanted on a piece of paper. And being a child and doing that was actually kind of fun. But then you got into kindergarten and everyone started saying, well, hey, make sure you draw inside the lines. Don't draw outside the lines. You'd actually get a frowny face versus a smiley sticker for doing that. As we got older, I started traveling. I grew up in Maryland on the East Coast. I've been on the West Coast for 21 years or so; actually no, longer, more like 25 years. And we had this notion of going from state to state. Maryland's a very small state, so you move between things very easily, and you had this big sign. And I remember saying to my dad, what the heck's that mean? He goes, oh, we're entering a new state. But the roads were the same, or maybe a little bit crappier; Maryland had better roads at the time. Everything kind of felt the same, but it's like, oh, okay, there's this change, right? Something is actually different. Cars are the same, roads are the same, but things are starting to change. And then we start to get on airplanes and travel outside of the United States and go somewhere else. Then all of a sudden it becomes very real that you're in a very, very different environment. You have to prove your identity. You have to, today, answer questions like, well, what are you doing here? How long are you gonna stay? 
What hotel are you in? They're really trying to figure out what's going on, and the rules about what you can and can't do are changing. So why do borders matter, and why is this applicable to anyone at OpenStack Vancouver? Hopefully you understand that the exact same metaphor around finger painting and traveling applies to how we actually use, consume, and take advantage of computing resources, whether you still think of them as raw compute, network, and storage, or think of them as higher-level services. There are borders, right? And there are technology borders. For the most part computers are the same, the chips are the same, the disks and SSDs are all kind of the same. Yet when we talk about hybrid, or we talk about borderless computing, the notion of being able to seamlessly use all of these things together, kind of like finger painting in pre-kindergarten, really feels like something that can't be achieved today, that it's further out than most people think. And so for me, at least, it's kind of interesting to realize where we've been. The notion that we've only been at this for about 50 years is, I think, largely lost on the community at large: how far we've actually come. I mean, in the 1970s we had mainframes. You had to go into a special building that you had access to; everything was kind of there. We all know these stories, but the timeframe we're actually looking at here, 40-plus years, is very, very short. We went into client-server. I'm older, so I remember all these trends because I was living them, and we moved then from the desktop to what I consider some of the big mega-trends that hit: internet and cloud, and then mobile. What's interesting is that we have 10 times the number of users every cycle, and I actually think that's gonna keep going up. Even more importantly, we have a hundred times the data being generated, and I'll explain why I think that's really, really important. 
This quote is from my former boss, Eric Schmidt at Google: every two days we create as much information as we did in totality up to 2003. I think this quote's like three years old, too. So the amount of data that we're generating, the amount of data we need access to, the way we process it, and the way we make services out of it is gonna be key to why this notion of hybrid and borderless computing, at least in my opinion, is important. We all have some sort of touchpoint with information technology, right? IT. But the IT that I grew up with, going to school a long time ago, is very, very different, I think, from what modern IT is today, where it's rooted around: where's the data being generated? What APIs can we put on that data, not only to access it, but to generate services that then present their own APIs? Right, so there's data, APIs, and services, and data, as we know, is always growing. Any number you put on it is probably gonna be wrong. We were speaking with a Gartner analyst the other day, and I made a comment that kind of made her laugh: we're dealing with 50,000-year-old hardware in our brains. It's very hard for us to actually think exponentially. We think linearly, one foot in front of the other. That's just the way we're built. So any estimates on data, I believe, are gonna be inaccurate, even though they already feel crazy in terms of how big they're getting. But what people are also starting to understand is that data by itself doesn't really do a lot. And so you've probably heard terms like the API economy, where there's massive exploration around how you actually access the data. And services, right? Everything is adding new capabilities, new ways to process the data into information, which then is useful, and then we can use that to do important things. I currently reside in San Francisco, and we have this notion of personalized everything. 
Uber and TaskRabbit and Caviar and all these kinds of things. Modern IT, in my opinion, is around personalized data for everybody and every organization. Cloud computing has actually liberated the fact that all of this data can be used. Now, I believe, and I don't know if this number is right, that the majority of data still goes unused, meaning we don't do anything with it. But we're starting to try to catch up, and cloud computing was a big deal. Again, for me, I remember when it might take three months just to get a computer. We had to do a CapEx budget and a justification and send the order in, and I always screwed it up, so then we had to send it back, and then it would come, and somebody would burn it in, and then they'd put the operating system on it, and then three, four, six months later they'd hand it to you and say, here you go. And it's like, well, great, I need five more, what do I do now? And then you get back in line for a six-month cycle. Then came the dawn of virtualization, and what VMware did in terms of extracting what had existed on mainframes a long, long time ago and popularizing it. I actually used it when it first came out, but I used it for QA, for Windows installs, so that I didn't have to keep re-imaging Windows machines. When it actually moved into the data center, with a deal with IBM, it became this dawn of virtualization, and you came down to a number of days to actually get things spun up, which was a huge deal compared to three to six months. Nowadays, with most cloud providers, it's less than two minutes, and with the new dawn of containers, which are very, very fast, it's sometimes sub one second: 500 milliseconds, for example, on the platform that the company I work for provides. So what's interesting here is that trends go in two phases, in my opinion. One is: keep doing the things that you're doing, just do them faster. 
Then eventually you get to a point where you can't do it any faster, and then you actually start thinking about how to do things differently. We're getting to the point where we're gonna start thinking about things differently. And at least for me, and a lot of the people at Apcera, and people throughout the industry who have heard me speak or know of me: the world does not need faster VMs or faster containers. They're fast enough. Now, how are we actually gonna do things slightly differently with the technology that we have? But this cycle is pretty amazing, and if you look at the 45-year cycle I put up on the previous slide, this one is actually about 11 years or so, right? So in 11 years we've gone from three months, and that's gracious, it's probably more like six to nine months if I remember correctly, down to less than a second to actually get a computing resource that you can log into and start doing things with. So there are clear benefits, right? The economies of scale reduce overall spending; capital costs shift from CapEx to OpEx. Now, sometimes that can be more expensive, sometimes less expensive. The interesting part here is that it's choice, and for me, the only thing that keeps getting more expensive in technology is the people sitting in this room. Everything else is getting cheaper. So anything you do that involves more people, probably don't do that; look for another way. And cloud computing is a way to say: don't worry about the people to maintain the buildings, figure out energy, powering, cooling, keeping computers running, fixing them, all that other stuff. So even though an Amazon bill can be extremely expensive sometimes, you have to look at the head count that you're saving a lot of the time. And you get a lot of business and technology agility. 
We had a little gathering last night, and someone who ran a lot of the technology at Zynga was there, and he was talking about the move from Amazon into Zynga's own cloud, I think it was called zCloud, and now they're moving back into Amazon, right? And he was kind of scratching his head at why they were doing it, but the agility that cloud computing provides is pretty obvious. Yet these borders keep popping up, right? Zynga did a lot of tremendous innovation, in my opinion, in figuring out how to move between Amazon and zCloud, back and forth. But it's not easy, and for the most part, at least from what I've seen, people who want to move to the cloud pick one provider. I think Gartner just came out with a big Magic Quadrant that shows Amazon way up and to the right, ahead of everyone else, to levels that people didn't expect at all. So most people are gonna go, well, if we're gonna go there, let's go with Amazon. So some of the benefits: private cloud, right? Security, reliability, high performance. Public cloud benefits: more cost-effective, agile, flexible, easier to deploy. Anything you put in either one of those columns, I can promise you, is probably gonna be wrong depending on who you're talking to. But there are differences, right? There are reasons for certain companies. Last night, again, someone said, hey, do you believe we're all going to public cloud? And I said, absolutely not. I do believe hybrid is where everything's going. And that's why we have such a huge focus at Apcera on hybrid. But even when I was at Google doing Gmail, I realized that I would trust Gmail over anything that any IT organization could set up in terms of a mail system, knowing what was going on underneath the covers: data at rest, data in motion, access, audit trails, all this other stuff. 
And at least for me, when I get into a hotel room, if I wanna make sure my wifi connection's working, I just type in Google.com and hit enter. I assume Google's always on, it's always there, right? And yet, when I started there, they were built on computers that were put together with Velcro. They were literally open trays, and all the pieces were Velcroed in. Hardware never talked to software; software never talked to machines, which was kind of a light-bulb moment for me. These all have borders in terms of public and private. For the most part, anyone, at least from my perspective, who says they have a hybrid cloud strategy is saying: these apps will run on-premise, and totally new greenfield apps will run in the cloud, and never the two shall meet. And we know that there's just limited viability to that type of approach. And so there are lots of challenges around this. For example, private cloud is expensive to build out in terms of CapEx, time, and people. It's not always as reliable as people think. Public cloud is perceived to be less secure, and some of that's been kind of proven out; again, I would probably trust public cloud more, to be honest with you. The biggest one for us is that public cloud is inconsistent across multiple vendors and your private cloud: the ability to secure, regulate policy, regulate service access, regulate network connectivity between these things. And so for us, and I believe once technology actually gets to a maturity level that everyone can trust, hybrid cloud is the real answer to having that choice. It has a lower total cost of ownership. I think of the availability and the agility, kind of like the Zynga model of going back and forth and being able to pick and choose depending on what they were trying to do. One of the other big things here is that compute, network, and storage, and doing that faster, as I said, is interesting, but we're gonna change. 
And where we're changing now is what I call the services ecosystem. And so if you look at Amazon, Amazon has, of course, lots of computers in lots of geographies, but the stickiness that they're really, really driving is around services, right? If you want to build your software faster, and by the way, software systems will never get any simpler than they are today; tomorrow they're gonna be more complex. You want to build less and assemble more. And what Amazon's doing is giving you all the common off-the-shelf services to assemble a system, so you just build what makes you different. But what's interesting is, again, going back to my comment, most organizations that we've talked to will pick one cloud. But what happens if another cloud comes up and they really do have a service that you absolutely want for one application? Most people won't do that. They'll say, don't worry about it. At least to my knowledge, the only customer that Amazon has lost was to IBM, and it had nothing to do with their computers or their network or storage. It had to do with the Watson service. And this company had an application where they really wanted to use state-of-the-art machine learning, represented by the Watson service. And so, hopefully this isn't news to anyone. It's a massively growing market. It's projected to get to about $90 billion in less than five years. That's pretty big, even in terms of all of the numbers that you see flying around. And hybrid is obviously something that everyone has been talking about. But it means a lot of different things to a lot of different people. I think the point that we can all agree on is that it is going to be a massive opportunity. So from my perspective, it's not perfect, and I think we all know that it's not perfect. Again, if any of you saw the keynote the other morning where Google and Rackspace were trying to demonstrate some of the hybrid cloud stuff, it almost worked. 
Well, almost isn't good enough. It has to be seamless and trusted, and it has to be able to, in my opinion, deliver trust across any type of computing resource that you use. And that's hard, and for the most part, people think it's boring and it's not very sexy and nobody really cares about this stuff. But it's one of the biggest impediments to why hybrid cloud, at least in terms of actually using these resources totally fungibly and interchangeably, hasn't really occurred yet. Now, what's interesting about the borders, in my opinion: I had a reporter ask me the other day, well, what should happen? And I said, well, what should happen is that we should probably have a common set of APIs across all vendors, both public and private, and a way to securely and transparently interconnect all these networks on the fly. And she looked at me and said, I agree. And I said, and I will probably die before that happens. It's just not gonna happen. But it feels like it should, right? They're still machines, the same machines all over the place. Yet the way you provision them, the way you secure them, the way you manage them, the way you monitor them is totally different. The way you access services, the way you set up things like auto-scaling groups on Amazon versus all kinds of different things on Google Compute and SoftLayer: they're all different. And that just solidifies these borders, solidifies friction, and it drives people back to, oh yeah, we have a hybrid cloud strategy: we do this over here, and we do something totally different over here. So imagine if those borders didn't have to exist. And again, this is non-trivial and it's not easy, but if they did not exist, what would the world look like? What was interesting for me when I came into Google, in 2003, was that there were a couple of mantras in place. Hardware never talked to software. 
So literally, the hardware people never had to say diddly-squat to anybody about the machine they were about to unplug, or that they were walking down the aisle with a tray of hard disks; hard disks were the ones that just failed all the time. And that was kind of interesting, because I had come from a company called TIBCO, for about 12 years, where we really tried to make sure the hardware was reliable, and we knew what it was doing, and everybody knew where everything was running. The other interesting thing was that at Google, software people didn't talk to machines. They talked to a system called the Borg, right? An intermediary, in terms of saying, hey, could you please do this for me? I couldn't say, oh, I want that seat right there. The Borg would look across and figure out all the seats and say, okay, you're gonna sit here today. Now, what was interesting is that at the time, the Borg was very rudimentary, but I think the light-bulb moment for me was that there wasn't a border between what I was trying to do and, let's say, Google search or Google ads. The interesting thing, though, is that if you got stuck on a machine where Google search was running, you were kind of SOL, and your process really got starved out. And so we would shoot that one, and it would get moved around by the Borg until we were actually happy. But again, that was 12 years ago. Now, what's interesting is that consumer tech seems to be running ahead of enterprise IT tech. And borderless computing is really getting real in consumer tech. I mean, I remember a time when you'd take a picture and then you had to plug it in and sync, and then you had to copy it to get it somewhere where you could archive it, and all of these different technologies were slowly coming and delivering the promise of borderless computing, and it was not fun; it was a little bit painful. But for the most part now, I take a picture. 
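To make that "software never talks to machines" idea concrete, here's a minimal sketch in Python of a Borg-style intermediary. This is purely illustrative; the class and method names are invented, not Google's actual Borg API. Jobs are submitted to the scheduler, which picks the machine, and a starved job can be "shot" and rescheduled somewhere else, just like the workflow described above.

```python
# Toy Borg-style scheduler: callers submit work and never pick machines.
class Scheduler:
    def __init__(self, machines):
        # machines: dict of machine name -> free capacity (arbitrary units)
        self.machines = dict(machines)
        self.placements = {}  # job name -> machine name

    def submit(self, job, demand):
        # "Hey, could you please do this for me?" The scheduler picks the
        # machine with the most free capacity that fits the job.
        for name, free in sorted(self.machines.items(), key=lambda kv: -kv[1]):
            if free >= demand:
                self.machines[name] -= demand
                self.placements[job] = name
                return name
        raise RuntimeError("no machine can fit job %r" % job)

    def reschedule(self, job, demand):
        # "Shoot" a starved job and let the scheduler place it again,
        # excluding the machine it was starving on.
        old = self.placements.pop(job)
        self.machines[old] += demand
        old_capacity = self.machines.pop(old)
        try:
            return self.submit(job, demand)
        finally:
            self.machines[old] = old_capacity
```

For example, a job co-located with a hungry neighbor gets moved: `Scheduler({"m1": 10, "m2": 4})` places a big job on `m1`, a small one on `m2`, and `reschedule` moves the small one onto whatever else fits.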
It automatically goes to Dropbox, goes to Carousel, goes to iCloud; it's all over the place. Luckily, nobody cares about the photos I take, so if they're stolen, it's no big deal. But the interesting part is, I could literally take my phone right now, take a picture of you, pause maybe a second, put it on the floor and crush it, go get another one, and all my stuff comes back. For the most part, I can do that with this. What's crazy is that, because I live in San Francisco, I'm a real big fan of Tesla. I have a Tesla, and everyone's like, oh, well, a Tesla is a different kind of car. I literally, about four months ago, upgraded my car: I drove in, and 40 minutes later I drove out. The car looked exactly the same, but it was the newer P85D or whatever. All my stuff was already switched over; it's all electronic, and it all just reappeared. The car was a fungible asset; I could just throw it away and get a new one, and everything came back. I think what Google taught me early on is that IT and the hardware resources that we use should work exactly the same way. Yet a lot of this absolutely-has-to-work-now type of technology in IT runs on that computer in the closet: please don't touch it, do not pass go, stay away, right? And so we need to move past the border, so to speak. So at least for us at Apcera, we want to easily leverage any available compute resource, and by compute I mean not just compute but network, storage, and services. Across all clouds, all infrastructure, from any vendor, in a connected, seamless, but highly secure and trusted fashion. I've been talking a lot about trust lately, and it's interesting, because I can't really define it, but I can tell you: you know when you have it, and you know when you don't. And I'm pretty sure a lot of people don't have trust not only in the public cloud yet, but they don't have trust in any type of hybrid cloud. 
And at least the customers that we've been talking to really get kind of wigged out when we try to draw a connecting line between what's running on-premise in their data center and anything out in Amazon or Google or SoftLayer or Azure. So why does it matter? Remember the modern IT economy: it's around data, and then the APIs, and then the services that generate more APIs, which hopefully turn the data into information that's actually useful. The amount of unstructured data being generated is astounding, and for the most part, I'd say 85-plus percent of it is going unused today. I was actually right here about two or three months ago at a conference called TED. TED has actually been in Vancouver for the last two years. And there was a talk from an MIT PhD grad student, and he talked about the fact that even in our perceived world, we only perceive this much of what's really going on around us. And then he showed what looked like a still picture, and he goes, this is actually a video. And you're looking at it, and it's just like a picture; it's not moving at all. And it was a picture of a bag of chips; you can go on YouTube and actually see it now. It's pretty amazing. Behind a soundproof barrier there was a high-speed video camera that was watching it. He flipped through a little bit of math and algorithms, and then showed that his algorithm could recreate the conversation that was happening in the room between two people, with the bag of chips between them. The only reason I point this out is that it literally looks like a bag of chips, and it looks like nothing is going on there, and yet he extracted all of this information out of it. It is a great talk; I would recommend you go see it. The coolest part was what he did next, which was, once he had the infrastructure and all the stuff working, he took a weekend project to do the exact same thing but actually recreate the material composition of something, just from a video. 
And so then he showed a picture of a tree, and it was barely moving, but all he needed was that little bit of movement, and then he ran a fully automated simulation where he would pull on the tree, and it was materially correct. Now, what's interesting for us in San Francisco: what if you could just have cameras watching buildings and say, hey, this building might have a problem if we have an earthquake over 4.5? The point I'm trying to make is that it sounds like, oh, that's way, way out there, some MIT PhD grad student type stuff. But I'm telling you, in the world we live in today, even as enterprises are trying to extract more information out of the data, we're barely tapping what's below the surface. And so the ability to bring a massive amount of compute resources very quickly to a problem, to extract things that people can't see, is gonna be the difference, in my opinion, between who wins and who loses. So, IoT. IoT is a big thing that I've been watching. It's very interesting: right now we're in the very, very first baby steps of sensors everywhere generating data. The trend, though, is pretty clear to see; it just depends on how long it takes us to get there. And for me, again, that thing that I can't define if you ask me to, trust, is what we need to get there. But I'll give you a quick example. So I'm old; I'm trying to run and stay in shape. But let's say I have a pacemaker, or you can talk about an airplane engine. Eventually the sensors are creating data that is being farmed, manipulated, and watched through cloud services, and then they wanna close the loop and send something back to change the behavior. We're talking about pacemakers and we're talking about airplane engines, but within enterprise IT, we wanna take the data, we wanna process it, we wanna make intelligent decisions (actually, we don't; we want the machines to), and we wanna send control back to actually affect what's going on. 
And in certain situations it's very easy to see, oh, we can kind of maybe do that today, in contrived examples. But I'd ask you to just think in your head: all right, well, what happens if we are talking about the pacemaker that keeps Derek standing up? What level of trust has to exist for me to trust the sensors, maybe on my Apple Watch if I ever get one, or my Fitbit or whatever? They have to trust the data that they're sending to the cloud service. They trust the communication, they trust the cloud service. The cloud service has to trust the communication back to the pacemaker, to say, oh, Derek's about to fall over, and he's on stage; that wouldn't look good. You start understanding this notion, again, of that thing I can't define, trust, and how we actually need to look at that. And I'm not gonna do that on my iPhone, right? There are gonna be massive amounts of computing resources and things all over that I wanna be able to take advantage of, whether I own them or not, to do this. And so for us, breaking down these barriers is the mission of the company. The company started about three years ago, and we're trying to solve some really, really hard problems. But whether we do or not, the industry has to figure out how to get rid of these borders in order to really take advantage of what we have. Not to play the old card too much, but my iPhone is more powerful than the supercomputers I used in college. And yet, for the most part, I use it to check email and look at Facebook and Twitter, things like that. But with machine learning, with all different types of things around anomaly detection and threshold management, there's a tremendous amount of information that can be mined from what we're already doing today. And we're not gonna do it with our own data centers, right? We're gonna do it with all of the resources that are available. And the person that can do that in a trusted fashion, in the fastest way, is gonna have an advantage. 
And so for us, with a hybrid cloud operating system, we wanna be able to deploy, orchestrate, and govern (and again, govern is just as bad a word as policy) across any asset that we might have access to, whether we own it or not. We talked about the pacemaker example, but there's this notion of even the passport, right? Which I have to use to get back into the United States tonight. Do I know you? Can I trust you? What resources can you get access to? Remember, there are, and I might have my math wrong, getting close to eight billion people in the world. That's all talking about the carbon-based stuff. Think about how many different software systems need identity and trust in this new world. The reason IoT is such a big deal for so many massive players is that if anyone missed the boat on mobile, they can make it up a hundredfold in IoT. It's gonna be that big. So for us, policy and governance is the key. And again, these are often bad words, because people see them and go, oh crap, that means we gotta slow down, this is gonna suck, we can't do what we need to do. I fundamentally believe policy, governance, and security done right get out of your way. It's kind of like guardrails: it makes you safe, but it doesn't slow you down. And I think most of us have seen what happens when companies try to go very fast and things get kind of loose. All of a sudden you feel like you're safe, but you're not. I had a CEO, who I'm not gonna mention, but I remember sitting in front of him one time, and he says, Derek, let me make it as clear as possible: go as fast as you possibly can, but keep my name out of the newspapers. And I remember laughing and chuckling at that, but then I also said, wow, what does that really mean? One, he wanted hands off. He's like, don't tell me what you're about to do. Just keep going, but keep my name out of the newspapers. And he was serious, but he also was kidding. 
But then, for the first time in United States history, I believe, a CEO lost his job because of an IT breach: not the CIO, not the CISO, but Target's CEO, right? That's a pretty big deal. And so companies need to innovate. They need to go faster, but how do they do that in a safe way? Most people think these are opposing forces, and we don't. So: policy for utilizing infrastructure, for securing access control, and for new technologies and environments. We did a demo yesterday in the marketplace room where we showed the notion of a developer deploying an application, and that's not new and exciting, right? You want that friction to go to zero. But the interesting part was that policy was wrapped all around this workload as it moved from OpenStack to Amazon to Google. Totally seamlessly, yet governed by policy. Network access was exactly the same everywhere. There was no concept of, well, let's try to recreate that on Amazon and make sure we kind of got it right and hope that we did, all right? And it worked. I didn't have to high-five anybody. It worked every single time, all right? And we had every single region in Amazon represented, every single region in Google. I think the last little app we showed was a map app that just showed which application was responding and where it was on the globe, and we kept hitting refresh, and it's ping-ponging between Japan and Ireland and Europe and North America. The app was fairly simple, but it was connected to a database running in Amazon. We were redoing the network on the fly as everything was moving around, and it was a trusted environment. A lot of people that came up after the demo were like, well, how did you do that? And I said, I can promise you, I didn't do it in a weekend, and I didn't do it in the last six months, and I definitely didn't do it by myself. So there are a lot of hard problems here, but they are tractable problems. 
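To sketch what "governed by policy" could look like in that demo, here's a minimal Python model of policy-gated workload migration. The policy shape, cloud names, and workload names are all invented for illustration; this is not Apcera's actual policy language, just the idea that the rules travel with the workload and are checked before any move.

```python
# Illustrative policy: which clouds a workload may run on, and what it
# may connect to. The same rules apply wherever the workload lands.
POLICY = {
    "web-frontend": {
        "clouds": {"openstack", "amazon", "google"},  # free to move
        "may_connect_to": {"orders-db"},
    },
    "orders-db": {
        "clouds": {"amazon"},  # the database stays put
        "may_connect_to": set(),
    },
}

def can_place(workload, cloud):
    """Policy check: is this workload allowed to run on this cloud?"""
    return cloud in POLICY[workload]["clouds"]

def migrate(workload, placements, target_cloud):
    """Move a workload only if policy allows the target cloud.
    Its network rules are defined by POLICY, not by the cloud, so
    connectivity is identical before and after the move."""
    if not can_place(workload, target_cloud):
        raise PermissionError(f"policy denies {workload} on {target_cloud}")
    placements[workload] = target_cloud
    return placements
```

So the frontend can ping-pong between OpenStack, Amazon, and Google while the database move is refused outright, which mirrors the demo's behavior of moving the app freely while it stayed connected to the database in Amazon.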
So the future: internet of things, big data, this modern IT economy. How do we actually go fast enough and use all the resources that we need? When we started the talk, I told you this was not gonna be a pragmatic talk per se, but more of a where-things-are-going talk. Borderless computing driven by trust, I think, is the future that we're looking at. If we back it up to today, what does that mean? What's the first step? And for me, that's true hybrid computing, right? And hybrid computing in terms of: we're here at an OpenStack conference, so how do we root some of that with our investment in OpenStack, probably running on-premise, and I imagine there are some cloud service providers here that are using it, and then seamlessly extend that out to any compute infrastructure, any public cloud? And again, at least in my opinion, more importantly, a public cloud that offers a competitive advantage, usually through a service, maybe through pricing. Although I actually don't think you're gonna move around to get pennies on the dollar, the way everyone says. Well, if you've got 500,000 workloads, maybe. But it's interesting to see this technology at work in terms of leverage: well, we could move, by just clicking a button, in less than three seconds, all our workloads off of Amazon to Google. But more importantly, it's: how do we take advantage of these services that are coming online? I'll give you a good example from our 50,000-year-old hardware that can't think really well ahead; at least I can't. 5G is about to hit in 2020. And I remember when we went from 3G to 4G, and then 4G to LTE, and we're stepping up. And it's not just speed of access. Remember, I said you keep doing the same things you're doing faster and faster, and eventually you get fast enough that you actually do things differently? 5G is gonna radically change the way we do a lot of different things, whether it's conscious or not. 
And so now, what's interesting is the telco providers, who to date really haven't been able to take full advantage of cloud computing opportunities, right? They might be able to come up with services that, if something like OpenStack plus Apcera exists, and it does today, allow us to friction-free direct certain workloads to take advantage of that service, all based on policy, all governed by technology that you can trust. It becomes very interesting that now hybrid means multiple public clouds plus private infrastructure on OpenStack. Today, we've got the technology that you can see, you can feel, you can touch. We'll install it, it'll work, I promise. It's been about three years in the making in terms of how we did this. Today, we have SoftLayer, Amazon, and Google, and we're gonna add Azure. And of course, on the private cloud side, OpenStack, which is why we're all here. But then also VMware, in terms of the 800-pound gorilla. What's interesting, though, is there's this massive ecosystem toolkit. I believe the trust comes eventually in an integrated solution. But today, I think most enterprises look at a toolkit, a toolbox of things that they wanna pull from, including OpenStack. And for us, we are very conscious about putting things back into that toolbox, and also partnering and integrating with things in there. Schedulers, network, container security are probably a lot of the things you've heard as you've been walking around the conference for the last several days. Thank you. I'm happy to answer any questions that you might have. Hopefully, you found it useful. Again, it's not how to deploy a Heat template on OpenStack today. But I promise you, this will become relevant faster than we think. Again, because we're using very, very old hardware. Any questions? You gotta have at least one. So the question is, when we're moving workloads, are we touching a database or something in terms of workloads? Our system is comprised of three logical planes.
A routing plane, layer three through seven. A management plane, which is the persistent state of packages and jobs and where things are running and how they link and channel together. That obviously is being updated, right? It's not a single point of failure; it's a replicated system. When the workloads are actually moving, though, all of the policy is wrapped around the workload, and it's being enforced at the endpoints. So if I move you from Amazon to Google, but you are connected to a database in Amazon, we redo the network on the fly based on authenticated policy that's digitally signed when the workload appears in Google in our runtime. Does that make sense? Sure. So the question is, what happened to the data that my app was using when my app got moved? In the demo we gave yesterday, the app's database was actually provisioned and introduced into our system from Amazon. So we just used RDS. And so it didn't go anywhere. What we did was, when the workload appeared in Google, policy was wrapped around it and said it should be allowed to have access to this database. And when our system sees that, it transparently figures out how to build routes on the fly, secure routes from Google all the way back to the database in Amazon. Now, if you wanted to move the database, that's going to be a huge issue, but not an Apcera issue. It's just a general issue around, well, how do you move something like an S3 bucket to Google Cloud or IBM SoftLayer? And of course, there are technologies that try to make that a leaky abstraction, but leaky abstractions are usually bad, at least in my opinion. Yes? It's actually very similar to the question that the other gentleman was asking. You mentioned earlier all this analytic data, big data, exabytes or however much it is.
So a lot of the companies we talked to actually want to, not really burst, like they want to do a hybrid cloud, private and public, but they want to burst compute into the public cloud because of either video encoding or some kind of other analytics. So in those cases, you really have to move the data around, because otherwise you can't really burst compute into the public cloud if you're trying to access data in your private cloud. So how do you guys handle that? Yeah, so that's a great question, which I don't have a great answer for. There's still nothing as fast as a truckload of disks running down a highway, right? Speed of light and latency are important. But what's interesting about the demo we gave yesterday, which might not have been obvious, is that our network topology, and how we actually reroute and secure everything, is full mesh. So in other words, when the app moved over to Google and was accessing the database, there was a direct, peered, secure network connection between those, and the latency between those is pretty darn good. That being said, there are also things that, and this will really get woo-woo on you, but if I study kind of the way the brain works, the brain works in layers. There's a massive amount of data coming into your visual cortex, which gets processed in parallel and then moved on to different layers. And usually the signal is higher, the entropy is lower, but the amount of data is less. And so I think you're gonna start seeing that. So again, my math might be off, but a GE engine on an airliner generates, I don't know, 40 terabytes. We can't download that all the way to the ground, check it, and then bring something back up. But there could be something where it's trusted compute that goes up to the airplane, processes it, and sends specific amounts of data down. In your case, when you do bursting, I don't think cloud bursting is actually a really good example of hybrid in reality.
But I do see dev/test get spun up where the data is actually being generated there. So in other words, it's not actually operating on something else and then it kind of goes down. Again, I said I wouldn't have a great answer for it. Data gravity, and I worked with the gentleman who I believe coined the term, is real, right? Speed of light is a constant. We can't change it, at least I don't think we can. Any others? Thank you very much. I appreciate it.