So good morning, everyone. Thanks for coming to my talk. So this is a talk about networking. For those of you who read the talk description: the title is the same, but the content is a bit different from what the description says. I took some liberties. I'll explain what the talk is about in a few minutes, but first let me introduce myself. I put my picture up there, not so much for you all, but for the folks who see the slides later.

I'm CTO at a startup of about 50 people called Midokura. I've been there for six years; my expertise is in SDN for data centers, and I'll come back to that in a few minutes. Previously I worked at Amazon on NoSQL databases and caching systems. I'm a software developer, an architect, and a team manager. Midokura was founded six years ago, not by me; I joined early on. It created and maintains an open source SDN called MidoNet. MidoNet has been open source since 2014; initially it was closed. It's integrated with OpenStack, Kubernetes, vSphere, and Eucalyptus. So down here I have, on the left, the three open source projects: OpenStack, Kubernetes, Eucalyptus; VMware, obviously, is not open source. We have OEM relationships with Dell and Fujitsu. And for about six or seven months we've been working on industrial IoT projects: we have a customer and a partner with whom we're doing smart factory and fog networking projects.

So what is this talk about? As I said before, my background is in data centers, and I've only recently really begun to explore what industrial networks are like. So I'm here to have a conversation, to throw some ideas out and see if they bounce back. I'm open to and welcome feedback and pushback. I'm happy to have a conversation, so interrupt at any time and ask questions. I'm going to describe what I perceive as industrial network challenges, focusing on factory and plant networks, which is what I've been looking at primarily. I'm going to compare and contrast those challenges with what the data center has seen in recent years. Then I'm going to describe what kind of network I think we want, and finally why virtualizing the industrial network becomes essential, a necessary step to achieving that network intelligence. I'll also say that while a lot of other talks have covered networking of gateways and networking of devices, I'm taking the perspective of the factory or plant network owner. So when we talk about security, I'll be talking about it from the perspective of an IT or OT network operator that wants to protect their network from the devices or gateways you might be developing.

Before we begin: I'll be saying the words "industrial IoT" many times. What do I mean by industrial IoT? For me, it's a complex of things. It's a transformation that is happening, but not necessarily a new phenomenon; industry has been doing things like IoT for a while. What's different? You're gathering data? They've been gathering data. Optimizing? They've been optimizing. Maybe what's different is the pace at which new devices are being added to the industrial network, the kinds of devices being added, and where they're being added: devices that weren't previously connected. Perhaps they belong to the OT in the sense that they monitor operational technology, but they're sometimes connected to the IT network directly.
New devices are being added to collect existing data from systems that cannot be evolved, that cannot innovate very fast, because their innovation cycles aren't short. And of course, part of IoT is the systematic optimization of the whole product pipeline across its different stages, from supply to production and out to distribution.

So what are the general challenges that industrial IoT brings? I mentioned this a few seconds ago, but there's this explosion of IP-enabled devices. Now, we do have best practices for how we develop these devices, and I heard a great talk yesterday about protecting the device from security threats. But in the early wave of IoT, the devices are not very well protected, and we know about worms and malware like Mirai. Generally, solution providers, certainly in the first generation, don't have that expertise, and their go-to-market requires them to focus on their solutions. So we often end up with devices that aren't protected, either because they don't have the compute power to do encryption or because time-to-market required the vendor to move faster. And from a network operator's perspective (I'm a vendor; I want to sell to the network operator), I don't think the network operator today can trust that the devices have been built and secured correctly. Additionally, even if they have, defense in depth, which I'll talk about later, requires us to secure at every layer.

Another thing I've observed in my recent study of industrial IoT is that the solutions seem to be very vertically integrated. Solution providers seem to bring devices, sensors, their own gateway, possibly a secured channel to the cloud, and then a cloud platform. That's not strictly the case across all providers, but certainly there are very integrated solutions and not a lot of interoperability across them. Please, if you know otherwise, I'm happy to take examples and go back and study; I'll take notes as well. The technology in the space is fragmented: lots of different protocols. Every domain, every industry (utilities, electric power, manufacturing, smart cities, connected cars) has its own protocols, its own vendors, and its own certifications as well.

The IoT is also forcing, or at least accompanying, changes that are happening to teams. In parallel with the IoT are IT/OT convergence and industrial Ethernet. I'll just mention that briefly: industrial Ethernet and industrial IP are the use of standard Ethernet and IP protocols to run the OT networks. That doesn't mean they're the exact same devices we use in IT, and certainly not in the data center, because data center gear isn't ruggedized. But it's the same protocols, and there's a hope that the innovation cycle will speed up as industry picks up the same technologies as the data center. And there's a push to break the silos between OT and IT teams, so the team dynamics change. As you bring devices and gateways into the factory and the plant, there's also another interesting problem: who do they belong to? As I mentioned before, we've spoken to people and seen them putting sensors and gateways on the IT network directly. It's generally the OT team that understands those devices, and they purchase them.
They're charged with making that project go live, but it's actually connecting to the IT network. And if, as a manufacturer or a plant owner, you want to deploy your own gateways, you want to build a project in-house, then the question is: who manages that? OT equipment traditionally has a very slow cycle, a very slow refresh rate, and no patching. Everything is tested and hardened as is, and you don't touch it, because it works and we don't want to break availability and safety. So it doesn't move as fast. The devices being developed for IoT, even in industry, have different assumptions: they're new, they're immature, they get auto-updates, and they change quite frequently. The auto-update policy isn't even controlled by the plant operator.

Now let's focus in a little more on the security challenges. The industry expects to be very heavily targeted, and it already is: thousands of exploits per year. And the OT technologies natively have very little defense. They were built on the assumption that there was an air gap, that they were isolated environments, that no one could get in, that they weren't plugged into anything. So protocols like Modbus don't have any authentication or encryption, and the same is true of the other fieldbuses. To make the problem worse, you can't take IT security products and use them in OT, because they don't understand the OT protocols. They don't look inside the OT payload. Even if they can block specific MAC addresses or IP addresses or UDP ports, they don't understand the Modbus commands and the Modbus parameters. So you need OT-specific firewalls; I'll sketch what that means at the end of this section.

A little history here. There have been cyber attacks on industrial networks since the early 2000s; an example is the attack on Venezuela's PDVSA during the general strike of 2002 and 2003, when the networks were attacked by disgruntled employees who were participating in the strike. However, the industry didn't become broadly aware of the threat of malware until Stuxnet in 2010. Since Stuxnet, and then the follow-up, so-called "sons of Stuxnet" malware like Duqu, Flame, and Dragonfly, there's been a lot more awareness and fear in the industry. And therefore OT-specific firewalls are being developed. Not only do they exist today and vendors sell them, they're even being virtualized. I'll come back later to why I care about virtualization of firewalls.

Remote access is something that's needed. This was unusual to me, coming from the data center. Sometimes customers will give us access to go and debug our solution on their servers, but generally they don't set up remote access, because they have DevOps teams that read the manuals and know how to debug any product they purchase. The data center is even more fragmented in this respect, because in the data center there are application teams, and application teams often own solutions that are siloed, that they themselves depend on, and therefore they will debug them themselves. In industry, remote access is common, but managing remote access, revoking privileges, auditing, and so on is not well handled. We talked about auto-updates, so I'll skip that. And on to the fragmented community: whereas in the data center and in IT we have a set of standards, a certain set of protocols, a certain openness, and therefore security can be handled by one very large community, the industrial domains are very fragmented and each has its own protocols, so it's harder to spread knowledge, to make advances, and to share.
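Going back to the OT-firewall point for a second, here's a minimal sketch in Python of what function-code-level whitelisting for Modbus/TCP could look like. The frame layout follows the published Modbus/TCP format, but the policy itself, "this zone may read from PLCs but never write," is purely an illustrative assumption, not something any particular product ships with:

```python
import struct

# Modbus function codes, per the Modbus application protocol spec.
READ_COILS = 0x01
READ_HOLDING_REGISTERS = 0x03
WRITE_SINGLE_COIL = 0x05

# Illustrative policy: this zone may read from PLCs but never write.
ALLOWED_FUNCTION_CODES = {READ_COILS, READ_HOLDING_REGISTERS}

def inspect_modbus_tcp(payload: bytes) -> bool:
    """Return True if this Modbus/TCP request should be forwarded.

    A port-level IT firewall would pass anything on TCP/502;
    here we look inside the payload at the actual command.
    """
    if len(payload) < 8:
        return False  # too short for an MBAP header plus a function code
    # MBAP header: transaction id, protocol id, length (2 bytes each), unit id (1).
    _txn_id, proto_id, _length, _unit_id = struct.unpack(">HHHB", payload[:7])
    if proto_id != 0:
        return False  # protocol id must be 0 for Modbus
    function_code = payload[7]
    return function_code in ALLOWED_FUNCTION_CODES

# A read of holding registers passes; a write to a coil is dropped.
read_req = struct.pack(">HHHBBHH", 1, 0, 6, 1, READ_HOLDING_REGISTERS, 0, 2)
write_req = struct.pack(">HHHBBHH", 2, 0, 6, 1, WRITE_SINGLE_COIL, 0, 0xFF00)
assert inspect_modbus_tcp(read_req) is True
assert inspect_modbus_tcp(write_req) is False
```

A conventional IT firewall would have made its decision at "TCP port 502: allow" and passed both requests; the whole value of an OT firewall is in that last lookup on the function code.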
Actually, I'll come back to the auto-update point for a second, because I read about a plant that was producing a batch of medication. Some auto-update software triggered during production of the batch, and it updated a component that was storing the audit trail for the process. Without that trail of what happened as the medication was produced, the medication can't be sold, because you don't know what happened to it during production. So the batch was lost. That's the kind of problem that isn't so much about security; it's just stabbing ourselves in the foot. I'm mixing my metaphors.

So then let's come back to this concept of the air gap, which is very interesting. The air gap was sort of a standard for a very long time: if you have critical infrastructure, just isolate it. No one can go near it, nothing gets attached to it, and so it's secure. But actually, that's been a fantasy for a long time. In fact, people take USB drives, plug them into a laptop in the enterprise, and then bring them into the OT setting. And malware like Stuxnet and its followers, its copycats, has many, many vectors of infection; USB drives are one of them. Essentially, people in industry now argue that still talking about the air gap gives a false sense of confidence and just drives certain kinds of procedures under the covers, out of the view of the network. If you at least acknowledge that you're going to need certain network communication, you can manage it and observe the traffic to see what's happening. If you say "air gap," then people go and do other things, right? They'll bring their laptop over, or they'll bring USB drives. And studies have shown that, in fact, even in organizations that think they have an air gap, there are many, many connections between the IT and the OT. Because even if you start out with an air gap, some requirement eventually forces you to make a connection and break it. And then you track that exception on paper or in an Excel spreadsheet.

There's one more point I wanted to make about the air gap. Ah, yes: we still hear anecdotes that in sensitive industries where the cost of an attack would be just so high if production were to stop, like oil and gas, they're unplugging gateways that were previously connected for optimization, because the benefits of the optimization aren't significant enough to counterbalance the risk of a production stoppage. So what happens to these players that are unplugging their IT from the OT? Eventually, nimbler players will keep at it and figure out how to do it. So they're gambling with their competitiveness, and clearly that's not a situation that's going to last very long.

So I'll talk a little bit about defense in depth. Defense in depth is a security posture based on fortresses: fortresses and castles have multiple layers of protection, and the idea is that you protect everywhere you can. Not just in the technology, of course: you need protection in the policies and procedures that employees follow, and you need to protect physical assets physically. Cameras, security guards, and so on are all part of defense in depth. And then, furthermore, the technology. You want defense at the device: you have to build your devices securely. Then you want the network to protect you as well, and so on outward.
And clearly, you want separation between the critical parts of the network and infrastructure and the non-critical parts. Something interesting in this space surprised me: Trend Micro did a study just a few years ago about the pagers, the beepers, used for alerts. The systems are secure, but some component will generate an alarm, say "the boiler on the second floor of building two is overheating," and that alert goes through a trigger that sends an email and sends a page. So people receive these things on their beepers. And beeper technology is old, and it's not secured; there is no encryption in it. So essentially many critical infrastructures are broadcasting sensitive pieces of information: the names of devices, sometimes parameters that can indicate what kind of device is there. Is it a Siemens? Is it a Rockwell? What kind of device is in what location? And a little bit about the procedures, too. So there's a lot of information that bad actors can accumulate just by passively monitoring your pages.

ISA-99 (now part of IEC 62443) is an industry standard that helps us deal with security and implement defense in depth on the technology side. It's a standard, there's a process to go through, and there's guidance about how to do this. I just wanted to show you this picture. Imagine that underneath these rectangles is a network: your standard Purdue reference model network, where you have the internet, the DMZ, the enterprise, with some machines in each of these layers, and down here you have levels 0 through 2 of the actual production process. I've skipped the step of showing you that, so I apologize. What they ask you to do is essentially cut your network up into zones that share certain security characteristics and that will have common policies to protect them. So that makes these sort of bubbles, and those are what we call zones. And then across any zones that need to communicate with each other, you have these red lines: the conduits. So you very explicitly model what's allowed to talk to what. This is defense in depth applied to the industrial network. Once you've done this, you're very careful about anything you add: only those red lines are the communication that's allowed. And then, of course, you put firewalls where the red lines are. You either put in bidirectional firewalls and do whatever level of inspection you need, however deep into the packet you need to go to inspect what's crossing. Or you might even do unidirectional traffic in some cases, using data diodes. This is something I wasn't familiar with at all from the data center: you have pieces of equipment that will only allow communication in one direction. That was news to me; I thought it was very interesting. You use physical means to block access in the other direction, so you can have a stream of information moving only one way.

That said, the tools to implement this in the network today are not very advanced, because these zones all correspond to VLANs. So you're basically going to set up a bunch of VLANs and subnetworks, with IP address management and routing for each one. And all of this has to be done in some physical way, because when you want to put a firewall here, how do you do that? How do you get the traffic to the firewall unless you have wires? So you're managing VLANs.
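As an aside, here's what zones and conduits could look like as data rather than as wiring. This is a minimal sketch with invented zone names, where the conduit list is the single source of truth and the firewall rule set is generated from it, so the intent lives in one model instead of being scattered across switch configs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Conduit:
    """An explicitly allowed communication path between two zones."""
    from_zone: str
    to_zone: str
    protocol: str  # e.g. "modbus-tcp", "opc-ua"
    port: int

# Illustrative zone-and-conduit model; the names are invented for the example.
ZONES = {"enterprise", "dmz", "scada", "process", "safety"}

CONDUITS = [
    Conduit("enterprise", "dmz", "https", 443),
    Conduit("dmz", "scada", "opc-ua", 4840),
    Conduit("scada", "process", "modbus-tcp", 502),
    # Note that no conduit touches the safety zone; it stays isolated by default.
]

def generate_rules(conduits):
    """Expand conduits into firewall rules; everything else is denied."""
    rules = [f"allow {c.from_zone} -> {c.to_zone} {c.protocol}/{c.port}"
             for c in conduits]
    rules.append("deny * -> * (default)")
    return rules

for rule in generate_rules(CONDUITS):
    print(rule)
```

With a model like this, deciding that safety should be isolated from process is a change to the data plus a redeploy. In most plants today, though, none of this structure exists in software; the intent lives in the wiring.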
And you're probably also using a spreadsheet to track your VLANs, where your devices are, and what's connected to what. Spreadsheets are not a good tool in 2017. The zone and conduit design is spread across network switches. We were in the same situation in the data center until a few years ago, and many data centers still operate this way. But people manage: they basically have their VLANs, and they have a script, or a set of scripts, that writes the VLAN configuration and can synchronize it. If they're advanced, they've written a set of scripts and software to manage their devices: managing many devices uniformly, but basically managing the configuration of individual devices via scripts. And so the intent, which VLANs and which zones and conduits are implementing your defense in depth, is mixed on the device with the actual state, and it's hard to tell the difference in an audit. If the device has a port open, or allows routing between two subnets, is that intentional, or is that a misconfiguration? If you have a set of scripts and you've built your own system on top of the devices, you're in a better situation. But that's the point: no distinction between intent and current state. These setups are hard to audit and hard to change. Because once you decide, so if I go back for a second, you might have started with the process and safety systems in the same zone, and then you decide that, actually, safety should be isolated from process. And you have to rewire physically.

Then there are those lucky enough to use Wi-Fi, which to my knowledge is not very widespread in factories and plants; I've heard numbers like 5% of networking in those settings is Wi-Fi. I know that in other settings, like smart cities and smart transportation, mobile and Wi-Fi are much more essential and widely used, but in the factory and the plant, not a lot of Wi-Fi. Even if you have Wi-Fi, you save yourself the wiring, but the device configuration is still a problem. Unless you have wireless LAN controllers and start to move towards virtualization, which is what they're doing. So, I forget the name of this classic bicycle. Penny-farthing? Pardon? Penny-farthing. OK, the penny-farthing. I thought it was a good image: you can ride one, but it's not easy to balance, and it's a bit dangerous. And then again, it's hard to place firewalls: every time you want to place a firewall between two things, you need to get the traffic to the firewall.

So let me talk about my experience in data center networks. When we started in 2010, OpenStack, many of you are familiar with OpenStack?, was just getting started; I think in early 2011 it got a lot of industry support. There were other clouds: Eucalyptus, Nebula, some others, and CloudStack, of course, I should mention. What was happening was that people were really taking on virtualization. We'd had compute virtualization for a long time, and storage virtualization for a long time; VMware was around well before 2010. But as people started to do cloud, what's the difference between cloud and compute virtualization? Cloud implies self-service. Cloud implies dynamic resources, dynamic scaling, all-you-can-eat, and not just self-serve but also maybe self-troubleshoot. All sorts of unshackling of the application teams from the IT team.
And that's why, in the enterprise, many, many application teams were fleeing to the public clouds, creating panic and chaos in the IT teams and the security teams as well. And I'm not mocking; I think it was for good reason. The network was seen to be in the way. The IT team in general was seen to be in the way. I remember when we launched our NoSQL database for the Amazon shopping cart: we had to order machines, had to know exactly how many we expected to use, and had to make sure we accounted for a bump. But what's reasonable? Then you had to get a load balancer VIP. And each of these takes time; it's a ticket filed in a system. So people moved forward with cloud, and cloud freed them from that, because in cloud you could just go to an interface, a cloud management system, and ask for the resources you wanted.

But the network wasn't really focused on initially in cloud. We solved some problems, but we introduced others. Security was not looked at at all. In fact, as services and microservices have become more and more popular in the data center, and we're seeing more and more monolithic applications being broken down, we haven't really accounted for east-west security. People are starting to address that today, but many systems are still not secure: many service-oriented and microservice architectures aren't secured east-west. So once an attacker gets in, the attack can very easily propagate east-west. What happened was that the number of applications, how dynamic they were, and the business value they introduced through a fast innovation cycle and competitive advantage: all of these things pushed the network and IT to become application-centric.

So I'll focus on what we at Midokura primarily thought about. We were primarily looking at the network. We found that compute virtualization was working pretty well; storage virtualization is still a problem; but we chose to focus on network virtualization. And primarily, our pitch to the community was: decouple the physical network from your intent, your logical network, your application network. This is not a pitch that all SDN vendors took, and I'm not saying it's the right one, but it's one perspective. Our perspective was that you can build a physical network very easily; easily, that is, if you have employees with the proper skill sets. You can maintain, expand, and evolve physical networks for stability and for scale, both in servers and in bandwidth. But in order to give people the agility they wanted, you want to decouple the logical network and let the logical network be driven by APIs. The logical network therefore had to have concepts like, of course, Layer 2 and Layer 3, switches and routers. But not just those: load balancers, source NAT, service-chained firewalls, deep packet inspection, and things like that. And along with that, you also need to be consistent with the cloud model: self-service networking, but also self-troubleshooting.

So what do I mean by self-troubleshooting? Certainly in the early days of SDN in the data center, it was very hard to debug these new SDN tools and technologies. Instead of using standard tools, you had to go read someone's manual; your network engineer's training didn't apply, and the application developer doesn't really know much about networking. So that was a problem. And we were still very much tied to IP addresses and MAC addresses.
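To give a flavor of what "a logical network driven by APIs" means in practice, here's a hedged sketch. The client API below is invented for illustration; it isn't MidoNet's or any other product's actual API. The shape of the idea is what matters: the application team declares the logical topology it wants, and the SDN layer is responsible for realizing it on whatever physical fabric exists:

```python
# Hypothetical SDN client; the class and method names are invented
# for illustration and don't correspond to any real product's API.
class VirtualNetwork:
    def __init__(self):
        self.objects = []

    def add(self, kind, **props):
        """Record a logical network object (router, subnet, LB, ...)."""
        obj = {"kind": kind, **props}
        self.objects.append(obj)
        return obj

# The application team declares the logical topology they want...
net = VirtualNetwork()
router = net.add("router", name="app-router")
web_net = net.add("subnet", cidr="10.0.1.0/24", router=router["name"])
db_net = net.add("subnet", cidr="10.0.2.0/24", router=router["name"])
vip = net.add("load_balancer", vip="10.0.1.10", pool_subnet=web_net["cidr"])

# ...and the SDN layer realizes it on the physical fabric, so the
# physical network can be maintained and scaled independently.
for obj in net.objects:
    print(obj)
```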
And maybe Kubernetes and other container technologies have shown us in recent years that IP addresses don't really matter to application folks. Application folks just want the data to move; they don't want to know about subnets and routers and routing tables or what have you.

Another thing that was tremendously powerful was micro-segmentation. We could actually give (and this is just standard today) every VM, every server, every container in the data center its own firewall. That's the concept of micro-segmentation. And that firewall can have a set of rules that depends on a set of tags, for example, so that regardless of where the workload is placed, it has a firewall rule set that is consistent with the other members of its tier in the application. And, as I mentioned before, we were very much stuck modeling the physical world in the logical layer. Intent-based policy was an attempt to get away from that, and it succeeded to some extent. It hasn't been widely adopted yet, but this is where Cisco ACI and others are very strong: basically you just say, I've got this group of endpoints, and I've got that group of endpoints, and I'd like them to communicate over port 80, HTTP only. That's the contract. So you have endpoint groups and contracts, and you can get away from understanding the network the way a network engineer does. You shouldn't have to debug the network that way. So that's what we did in the data center.

So let's talk about differences. There are many, and please chime in; I'm sure I've forgotten some here. The hardware refresh cycle in the physical world, in the factory, is 10, 15 years, sometimes longer; it depends. There aren't any DevOps teams. Priorities are different; priorities for security are different. In the factory you want availability above all else, and safety of the equipment and the employees as well. In some industries you just can't stop production: you've got oil coming in in large quantities, and your valves and your storage had better be ready to receive it; you've got a bunch of electricity being generated, et cetera. In IT, in the data center, it's data integrity and data confidentiality; it's all about the data.

Speed of deployment: a great example was the Walt Disney Company, who described it to me as an application-scale problem. At the time, I remember them quoting something like 600 different applications running on their platform for various games and things they were doing. Imagine managing 600 applications; how do you do that? I'm not sure that applies in the factory. I mean, I'm not sure you have a problem with that many IoT vendors, but I certainly have heard of a factory having five or six different IoT vendors, each of which has its own devices and device-onboarding procedures and its own pane of glass for managing its solution. And then the speed: clearly, in the data center, every application team wants the ability to deploy as fast as possible. Get out of my way; why should I have a deployment window? It's my piece of the website, and I can update it without taking down anyone else, so why do I have to wait? And by and large, companies have managed to do very fast deployment. So all of these things don't really hold in the factory; the factory is certainly much more static. Looking at all this, does the factory need to change? I'm still arguing that the factory needs to change, primarily because I think we can do security better if we virtualize.
I think we can place firewalls anywhere we want if we can virtualize. I also think the changing team dynamics are important. I hear about technicians who go to install a device; a new sensor has to be replaced. They don't really understand what they're doing; they're given a set of instructions. They plug it in and call the IT person in a NOC, and this goes back and forth. The IT person in the NOC, and it's not their solution, says, no, I'm not getting any packets from that. They wake up the OT person who designed the solution, and they get some manuals out. This process takes hours, and if it doesn't work out, the technician drives home, maybe from a remote location, and tries again tomorrow. That's very costly.

So what is it we want from our networks? The first point I'd make is that the network should allow you to layer the policies of different teams and different perspectives. The central IT team certainly wants visibility. They want governance; they want to see what's happening; and they want to impose standards. That doesn't mean they want to carry the whole weight of the world on their shoulders; they don't understand the nuances, the details. So what you need to be able to do is give them that visibility, and then impose your own additional requirements on top. And as a solution vendor, for example, or an OT team, you should be able to look at the solution yourself and see what's happening at the packet level. Because we are often at the device level: we're looking at data flows, at the temperature readings, at the humidity, the acidity, whatever levels affect your process. But what about the health of the network itself? Are you looking at that? Are your devices doing something they shouldn't, like in the case of the Mirai malware, whose bots in October were flooding Dyn with DNS requests?

This is what I call self-service and self-troubleshooting: you allow people scoped visibility. I have this IoT solution that I'm responsible for, so can I look at how it's behaving? How much data are my devices sending? Just my devices. Who are they talking to? Because remember that the devices, especially if they're IP-enabled, aren't always talking only to the gateways. For one thing, the gateways themselves aren't trusted from the operator's perspective. And the IoT devices, if they're IP-enabled, are just connected to the factory network: they talk to the gateway, but they might also talk to anyone else they want to, and who's to say the gateway itself is only talking to the cloud? Encrypted links: you can do encryption at the application, but the network should do encryption as well, in case your encryption gets broken; or vice versa, you do application-level encryption because the network encryption might get broken. Even within a zone, by the way, it's not clear to me that devices should always be able to talk to each other. You have some zones where certain things talk to each other, but it shouldn't be all open. Sometimes it has to be, because the protocol requires it. But often in Ethernet and IP networks, within a zone we just say, well, it's all the same components, all the same type, it's a bunch of web servers; they can talk to each other. That should all be locked down. So we should be moving to a network that only allows whitelisted traffic.
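Pulling together micro-segmentation, tags, and whitelisting, here's a minimal sketch of what per-device, tag-driven whitelist policy might look like. The tags, ports, and rules are invented for the example; the point is that the rule set follows the workload's role rather than its address, and the default, even inside a zone, is deny:

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    name: str
    tags: set = field(default_factory=set)

@dataclass
class Rule:
    """Allow traffic between tag groups on a specific port; default is deny."""
    src_tag: str
    dst_tag: str
    port: int

# Invented example policy: sensors may talk to their gateway over MQTT/TLS,
# and the gateway may talk to the historian over HTTPS. Nothing else.
WHITELIST = [
    Rule("temp-sensor", "iot-gateway", 8883),
    Rule("iot-gateway", "historian", 443),
]

def is_allowed(src: Endpoint, dst: Endpoint, port: int) -> bool:
    return any(
        r.src_tag in src.tags and r.dst_tag in dst.tags and r.port == port
        for r in WHITELIST
    )

sensor = Endpoint("sensor-017", {"temp-sensor"})
gateway = Endpoint("gw-02", {"iot-gateway"})
internet_host = Endpoint("203.0.113.9", set())

assert is_allowed(sensor, gateway, 8883)           # sensor -> gateway: fine
assert not is_allowed(sensor, internet_host, 443)  # sensor -> internet: dropped
assert not is_allowed(gateway, gateway, 22)        # even intra-zone is closed
```

Note the last check: two endpoints that share a zone, or even a role, get no connectivity the policy doesn't explicitly name.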
The audit trail: it's now kind of standard in data center SDNs that there's a record, a historian, that knows all the flows that went through the network, all the actual network flows, each one logged. This can be just IPFIX or NetFlow, but it can be much more detailed. The SDNs also record, in one place, all the changes that happened across all the devices. You can achieve this in many ways, for example through logs, but having to build it yourself is not convenient. SD-WAN: if you do have a gateway, how does it get connectivity to the internet? What kind of channels can it use: mobile, a WAN, can it create its own private WAN over the public internet? Sorry, the single pane of glass; oh, I put single points of failure on the slide. That's my mistake, sorry for that.

OK, so more characteristics of the network we want. The network should be very prescriptive: only allow very specific flows, only the ones you know are safe. And maybe you can learn what those are. Nowadays we have tools that allow us to observe for a period and then map what we observed onto a policy. That policy is the normal, and it's the only thing you allow. But you might make a mistake. So if you think you're going to make a mistake doing that, and especially if you're trying to do this in a brownfield environment, then you'd better also support dry runs. Virtualizing allows you to put your firewalls and your network policies into a dry-run mode where they're just logging: I would have dropped that; I would have allowed that. Any discrepancy can be alerted on or logged, and you can study it before actually setting the policy to active and pushing it into production. And easy rollbacks; easy rollbacks across the whole set of devices. Now, some of these things people do implement. As I said before, people write a set of scripts. I'm not going to go device by device; I learned my lesson last time. Classically, people had to configure one device at a time to find the problem, synchronize the devices, find which one was misconfigured; maybe this doesn't happen so much anymore. So they wrote a set of scripts. But that's not standard, they have to evolve it themselves, and who has time to keep track of a set of tools built in-house?

Finally, context-based means a lot of things. Context-based prioritization: I've got a bunch of cameras. Actually, we have this use case. In the smart factory project we're doing, the customer wants a bunch of cameras and sensors to be on a separate network, isolated from the rest of the equipment. You're streaming all this video, but you're not looking at all of it, so the video you are looking at should get higher priority than all the other video. The network should provide you an API so you can say: that video right there, that's what I'm looking at right now. Similarly, context would include things like location-based policy: I can look at the video if I'm in the room; I can't look at it if I'm outside the room. You have situations like that as well. And of course, the network should be aware of identity, integrated with identity, so you don't even provision the link if the device is not authenticated. Of course, this can't be true of everything, right? Especially with legacy devices, you can't always do that.
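Back to the dry-run idea for a second, here's a sketch of the mechanism: the same policy engine, but with a mode flag, so a proposed rule set can run in log-only mode against live traffic in a brownfield network before anyone flips it to enforcing. The names and structure are my own invention:

```python
import logging
from enum import Enum

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("policy-dry-run")

class Mode(Enum):
    DRY_RUN = "dry_run"  # log what *would* happen, forward everything
    ENFORCE = "enforce"  # actually drop disallowed traffic

def process_packet(packet: str, allowed: bool, mode: Mode) -> bool:
    """Return True if the packet is forwarded."""
    if allowed:
        return True
    if mode is Mode.DRY_RUN:
        # Discrepancy: record it for study, but don't break production.
        log.info("DRY RUN: would have dropped %s", packet)
        return True
    log.info("ENFORCE: dropped %s", packet)
    return False

# Stage a new whitelist in dry-run mode first...
pkt = "plc-7 -> 198.51.100.4:53"
assert process_packet(pkt, allowed=False, mode=Mode.DRY_RUN) is True
# ...then, once the logs look clean, switch to enforcing.
assert process_packet(pkt, allowed=False, mode=Mode.ENFORCE) is False
```

Rollback, in this framing, is just re-activating the previous rule set across the whole network in one operation instead of reconfiguring devices one by one.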
I have five minutes left, is that right? I'll try to wrap it up. Let's see, where am I? All right, a few last ones. Like in the data center, I'd like to see the network move away from policies based on addresses, because addresses can change, they can be spoofed, and so on. Let's do policies based on authentication and metadata. But who puts on the tags? So we need some intelligence around tagging workloads. And then you want very fine-grained control over how you implement security. Again, we don't build firewalls at Midokura; we do the virtual networking: the plumbing, the redirecting of traffic, the implementation of a logical topology on top of a physical topology. What we're enabling you to do is, flow by flow if you need to, redirect traffic to some kind of virtualized device, or sometimes a physical device, so that no matter what your actual physical topology is, you can get the security you need, when you need it, without sending someone to the factory to rewire or reconfigure your devices. And the integration step is that if it is virtualized, you do want some sort of scheduler, and this is where fog comes in: a scheduler that says, I've deployed the DPI, the deep packet inspection, on that fog node, so that's where you should redirect the traffic. We've done this in the data center, where there's integration between service chaining systems, service chaining APIs, and workload schedulers. This speaks to network function virtualization; I'll sketch the idea in a moment.

And then remote access. We've seen a lot of people provisioning VLANs on the fly, ad hoc, to let someone in. If they're a little more advanced, they'll use a wireless LAN controller, which can do that nicely. Or sometimes the device itself calls home and sets up a tunnel, and then they have SSH and all sorts of things. They solve the problem for themselves, but then you no longer control what's going on there.

So my conclusion is that we need virtualization in industrial networks to get all of those benefits. Then there are questions about how you do it. Do you want to use overlays, encapsulation, tunneling? There's a cost to it. We've had this debate in the data center, and it goes back and forth: it's more efficient in the fabric; oh, but there are advances in programmable NICs, and Linux is getting faster at processing packets. So this will go back and forth, and it doesn't matter very much, because at the end of the day, whether it's done with overlays or not, you have a logical topology on top of a physical topology, you can manage the physical topology without worrying about the logical, and the logical adapts. And then, how do you actually get at the packets in the first place, especially in brownfield environments? This is the problem we're tackling right now: where do we live? Does this virtualization technology live on a gateway? Does it coexist with some other software on that gateway? Or do we deploy new devices? Do we do bump-in-the-wire? So maybe upstream of a switch, we do bump-in-the-wire for certain VLANs that we handle, and we let the legacy system handle the other VLANs. These are the challenges we're facing, the questions we're asking, as we develop our smart factory solution and our fog solution.
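To make the service-chaining and scheduler integration point concrete, here's one last sketch: flow by flow, a classifier decides whether traffic takes its normal path or gets detoured through an inspection function, and a scheduler-style lookup tells the dataplane which fog node is currently running that function. All the names here are hypothetical:

```python
# Hypothetical placement table. In a real system this would come from
# the fog/workload scheduler that knows where each function is deployed.
SERVICE_PLACEMENT = {
    "dpi": "fog-node-3",
    "ot-firewall": "fog-node-1",
}

# Flow classifier: which flows get detoured through which service chain.
# (Matching on a tag here, per the earlier point about avoiding addresses.)
CHAIN_POLICY = {
    "untrusted-gateway": ["ot-firewall", "dpi"],
}

def next_hops(flow_tag: str, final_dst: str) -> list:
    """Build the path for a flow: through its service chain, then onward."""
    chain = CHAIN_POLICY.get(flow_tag, [])
    hops = [SERVICE_PLACEMENT[fn] for fn in chain]
    return hops + [final_dst]

# Traffic from an untrusted gateway is steered through the OT firewall
# and DPI before reaching the historian; other traffic goes direct.
print(next_hops("untrusted-gateway", "historian"))  # ['fog-node-1', 'fog-node-3', 'historian']
print(next_hops("trusted-plc", "historian"))        # ['historian']
```

If the scheduler moves the DPI function to another fog node, only the placement table changes; the policy, and the physical wiring, stay put.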
Since I'm running out of time, I'm going to skip the fog and industrial internet slides and say just a few words about this one: what role is there for open source in all this? There are clearly a lot of projects that address some of it, but mostly the open source has been on the device side, the solution side. With OpenFog, IF-MAP, and the open ICS projects, you have efforts that are more on the factory operator side. But you have these questions I'm asking: can the gateway do network virtualization? Or can the gateway itself be virtualized and run in a fog, so that I don't have to deploy many, many gateways? The gateways currently handle a certain amount of security; how does that interact with a system that's really just trying to focus on security? And the gateways each have their own device management, or they're tightly coupled with a cloud that has its own device management, so you can't get device management across different solutions. That's very hard for an operator to deal with. So let me stop there. I think we have a minute for questions, so apologies for that.

Yes? I'd say the Wi-Fi solutions are doing the best here. But if you look across the entire network, no.

Yes? Right, although they'll talk to you about their air gaps, or they'll talk to you about the defense-in-depth project they did last year. That was the system at the time; they'll only know the situation up to the most recent audit. Ah, no, no. And in fact, I think ExxonMobil now has a project to try to understand and standardize this level of threat.

Yes? Well, the standards in each industry do take care of that. There's very clear guidance about what level of protection you should achieve in each domain. So you mean in terms of security, how secure are the gateways, right? The open-source projects do pretty well, I'd say. But people build their own gateways, and until recently there hasn't been a lot of collaboration. So everybody does their own gateway, and security's an afterthought. We've spoken to small vendors doing their own solutions that had to have their own little gateways, and they might write their own protocols sometimes; they'll write their own wireless protocol to save a little bit of power. How secure is that compared to an open-source solution, right? Or they don't know about trusted computing platform approaches, or they just don't have the time to implement them, because they're competing on features. So security comes a little afterwards. And the buyer, for example a smart city, isn't always able to evaluate how secure the system is.

Today, mostly the sensor vendor builds a gateway to sell with their solution. Today, yes, because essentially you don't trust, you don't want to share, a gateway. There's no force that helps you coexist on a gateway with someone else. If something goes wrong on the sensor, you know how to fix it; but if something goes wrong on the gateway, who do you blame? How do you point fingers? There's currently no solution to that; you need someone to take responsibility for it. EuroTech and Red Hat have a partnership now to sell gateways that can be multi-vendor, and Red Hat will support that gateway.

Sure, I don't know much about it. I've heard about WirelessHART; I've read a little bit. But on the southbound side, the protocols do have good security; I'm not contesting that. OK, we're being invaded. So if anyone else has questions, I'll be here for a few more minutes.
Thank you very much. Thank you.