Great. Mic test? Good. Thanks, everyone, for spending your time with us today. This session is about OpenStack, how you're going to do IoT and NFV, and how to build the hardware underneath the cloud. This really comes from experience seeing customers across the globe doing OpenStack: there's always a bit of a gap in understanding what you actually need to build underneath OpenStack to get it to do what you want. So here's our schedule for today. I'll go through the first part, the requirements for the underlying design. Then we have Noah here, Noah Williams, who will cover how you use that hardware on top. And after that, we have Kevin Stoll, who will go through the even more high-level, application-focused layer on top of that hardware. So let me introduce ourselves. My name is Eric LeJoy. I'm with Hewlett Packard Enterprise, as we all are, and I'm focused on OpenStack, mainly in the EMEA region, doing telco and NFV, which is now bleeding over into the IoT space. Thanks, Eric. My name's Noah Williams. I'm an SDN and NFV solution architect. I work with HPE clients worldwide to help them develop their OpenStack solutions and build NFV clouds. Hey, good morning. My name's Kevin Stoll, and I work for HPE, inside a professional services group that focuses on OpenStack, Stackato, and quite a bit on internet applications, think microservices and large-capacity systems. Thank you, guys. So, any questions before we get into the material? Good, this is interactive. There are two mics in the room; you can also shout out a question and I'll repeat it, and we'll make sure we figure it out. I'm also definitely looking for interaction from you. If anyone out there has experience with what we're talking about and has more data points to add, throw them into the room. This is not focused on any vendor; this is pure knowledge transfer for you to take away. So: IoT and NFV requirements. What we're looking at here is what happens when you go and build hardware. How many people here have actually built a server setup to put a cloud on? Okay, we have a lot in the back of the room. That question is geared towards whether you're a server architect or an OpenStack architect; if you're part of the team designing your OpenStack cloud, this is geared towards you. If you read the description for this session, it was really about walking away knowing what you need to do when acquiring hardware to build the cloud for what you need. These are the main topics. Last year we presented on this, and that was, let's say, a medium-depth dive into all of the technical specs that are now in Mitaka. A lot of this stuff is in Mitaka now and you can just use it, whereas before you really needed to grab upstream code and port it back into OpenStack to get it to work, and getting it all to work together was an even bigger nightmare. What we're going to focus on first is NUMA, because if you don't understand NUMA, it's going to be really hard to understand how the rest of these requirements fit together. So I'll spend a little time on NUMA, and for the rest of this stuff, if you want to ping us after the session, we can really dig down deep into those topics and figure out even more caveats.
And if you're having any issues, we have relationships with Intel, Mellanox, and all the other vendors, so maybe we even know the right contact to put you in touch with at those vendors. Okay, this is one slide, and for me, out of the three sections, this next slide is going to be the most important one for you to take away from this 40-minute session. We'll have 10 minutes at the end for questions as well. So, NUMA. Who here knows what NUMA is and has actually run workloads on it? I see eight hands right now. Nine, nine hands, nine and a half; I saw one go up halfway. So I'm going to laser-point at this. Hopefully this works for you. Yeah, there we go. Can you see that? Good. So a NUMA node: the way you can think of a NUMA node is you have socket zero and socket one, and what that relates to is the CPU sockets on the server. In the old days, you had a server with one socket, one CPU, you had your memory, and everything was dandy. When you start building servers with more than one CPU, each CPU has memory attached to it, those resources are shared, and there's a bus between them, called the QPI bus in the Intel architecture; it's a little bit different on AMD. When you share these resources, the sharing actually flows over that QPI bus, which you can think of as a big bridge between the two systems inside your server. Now, who here knows what a PCI Express bus is? Okay, very good. So with PCI Express, for example, you might have Ethernet interfaces, say one NIC card with two or four Ethernet ports, plugged into a PCI Express bus that goes to socket zero. You might be running OpenStack with your VMs deployed to CPU cores that happen to be over on socket one, which means your data comes in on one NUMA node, has to go across the QPI bus, gets worked on by a VM running on a couple of cores over there, and then somehow has to go back out, if you're lucky on the same NUMA node, for its outbound interface, depending on how you've designed your network. If you really want to get all the efficiency out of your system, meaning you don't have all that overhead, which is really the result of bad design, you want something like this: the VM running on the same NUMA node as the network interface. And, not shown here, there's also memory hanging off each node; you'd want to use huge pages from that same NUMA node. Again, this is stuff we covered last year. It's on YouTube, and we're also here after the session to dig deep into any of these topics that interest you. And I lied earlier: that wasn't the most important slide, it's the one after it, which is really the knowledge transfer piece. There are quite a few items here, so I'm going to talk through them. We have hardware, software, and then issues, and this comes from experience running OpenStack with DPDK on telco and IoT workloads. We'll start with the hardware. If you want to run DPDK, you have to research the NICs first. Don't just let the vendor hand you a hardware order with servers and, say, Emulex NICs or some other NIC without checking. Emulex works as well.
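A quick way to sanity-check that NUMA locality on a compute host is to read it straight out of sysfs. Below is a minimal sketch, assuming a Linux host with PCIe NICs; the sysfs paths are standard on modern kernels, and the output format is just for illustration.

```python
#!/usr/bin/env python3
# Minimal sketch: report which NUMA node each NIC hangs off, and which
# CPUs share that node. Assumes a Linux host with PCIe NICs; a value
# of -1 usually means a single-node (or unreported) topology.
import glob

def nic_numa_node(ifname):
    with open(f"/sys/class/net/{ifname}/device/numa_node") as f:
        return int(f.read().strip())

def node_cpus(node):
    # CPU list for a NUMA node, e.g. "0-9,20-29" on a two-socket box.
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        return f.read().strip()

if __name__ == "__main__":
    for path in sorted(glob.glob("/sys/class/net/*/device/numa_node")):
        ifname = path.split("/")[4]
        node = nic_numa_node(ifname)
        cpus = node_cpus(node) if node >= 0 else "all (single node?)"
        print(f"{ifname}: NUMA node {node}, local CPUs {cpus}")
```

If the cores your VM is pinned to aren't in the NIC's local CPU list, every packet is paying that QPI toll.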
I don't want to put any vendors on the spot, but what you need to do is go to DPDK's website and look at the supported NICs. Then, most importantly, you need to contact that NIC vendor, because DPDK is not something where everything is built by Intel. The way it works is that DPDK is a framework, and Intel made it, but there's a piece in there called a poll mode driver, which is how the packets are pulled out of the NIC and handed up to the VM. That driver is developed by the NIC vendors. So if you want DPDK 16.04, the newest release (the one right after the 2.x series), you've got to make sure that the NIC vendor has actually built a driver for that version; they're tied one to one. If you're making a plan to build a cloud with DPDK, you need to decide what NIC you're going to use, then go to the NIC vendor and make sure they've made a driver for you. That's the key piece. We already looked at NUMA, so the NUMA footprint is really about making sure you plug those cards into the right PCI Express bus for the way you want to design things. If you have two 10-gig NICs with two ports each, maybe you want to put them all on one NUMA node, or maybe on both. And if you run OVS, maybe you want to split OVS processes across both NUMA nodes. That kind of design. The PCI Express version is important, and I think I'm running over time here, so I'll go through this quickly. If you want efficiency, use PCI Express version three and keep everything on three, or make sure your BIOS lets you mix versions two and three without impacting all the devices on the bus. VT-d, Intel's IOMMU support, is a BIOS feature; make sure your server has it, or you won't be able to get high-performance DPDK. VFIO (virtual function I/O), which DPDK needs, depends on that BIOS feature. NVMe, if you're familiar with the storage technology, is replacing AHCI. It takes overhead off the CPU, which gives you more efficiency, so that's something to look at in your servers. It's, again, another PCI Express connection. We won't go deep on the SR-IOV part, but this is really important: SR-IOV doesn't handle all workloads because of the way it pins MAC addresses. If you have any workload, like a transparent firewall or MPLS, where you want to set the MAC addresses on the packets yourself, then SR-IOV may not be the best thing for you. From a software perspective, make sure you get the right OVS version, because you want features like IGMP snooping, MLD for multicast, and other pieces. And from an IoT perspective, where we're heading, also look at things like offloading encryption. If you're going to do TLS encryption for tunneling your IoT devices back to a central repository, or if you've done femtocells in the past or anything with distributed servers, you're going to want to offload the encryption to an outside device, because if you put it inside the cloud, unless you have an ASIC or other hardware to do it, you're really going to kill the CPU. Okay, and these are the last four things from me before I hand over to Noah: issues that we hit. DPDK 2.0, which we used, does not support jumbo frames. If you have any workloads that need jumbo frames, that's going to be a real gotcha. It's still not in there. Intel is developing it and pushing it into the OVS project, which in a sense is really managed by VMware, or that team.
But anyway, it's getting there; just make sure, if you need jumbo frames, that you've researched it and you know it's there. The other thing is that not all virtio drivers are equal. For your VMs, with DPDK you usually use vhost-user as the forwarding mechanism in user space. We found that almost all the NFV vendors are using their own virtio driver; they're not using the upstream one from, say, Canonical or the OpenStack Linux community. You'll run into issues with MSI-X, which is the interrupt mechanism for PCI Express. So keep that in mind: if you put a VM on DPDK and the packets aren't going anywhere, or you're seeing drops, look at that. I already mentioned jumbo frames and the NIC-and-driver pairing, and then there's LAG. If you're doing link aggregation with mobile workloads, or other workloads where everything is in a tunnel and all the traffic has the same source and destination MAC and port, you want to look at NICs that support a different hash, so the NIC can look deeper into the packet than that outer header and actually spread the traffic across physical links in a better fashion. I'm going to stop there. I know that was really fast. If there are any questions on this, come ask us afterwards. We also have a bunch of backup slides that dig further into it. And with that, Noah, I'll hand it off to you. Thanks, Eric. Click up here. Great, so shifting gears a little bit here. I'm coming at this from more of a networking angle, the SDN side of things, where we're used to building the connectivity for your IoT applications. How that applies to OpenStack is that now, instead of calling up an operator and negotiating some sort of network sharing agreement (that might still happen), you can actually build your own MVNO in a lot of cases, depending on the legal ramifications in your country, and you can go out and source your own EPC applications, your own mobile network core applications, alongside the standard IoT workloads for data collection and analysis that we see pretty typically across the board for most IoT use cases. So we're going to go into some of the SDN and NFV pieces, how to build a mobile network slice, and some other things. I'm sure there are a lot of operator representatives here. The service provider industry is in this immense shift to move off custom, proprietary appliances and move to VNFs, taking advantage of the lifecycle management and automation that IT has been using for years. Right now we're in a middle phase: most of the vendors selling telco-style apps have ported their stuff to an x86 data center architecture, but they haven't really optimized it, or deconstructed and rebuilt it in a way that takes advantage of microservices and a smarter network function virtualization infrastructure. We're moving towards an end state with true microservices, rapid resiliency, and all the things we have at the monolithic appliance layer now, but in a much more prescriptive state in terms of how we want our applications written. SDN plays into this in a big way; it really opens up the programmability of the network, how you send flows from function to function across physical and virtual realms. And here at HPE we see the need to push intelligence that's common across VNFs down into the NFVI, into the network fabric.
If you look at the traditional VNF approach, or a traditional network function with a monolithic chassis, traffic typically comes in on some sort of line card, gets distributed across a set of resources inside the enclosure, and is then sent back out a line card to the network after processing completes. This is most relevant to data-plane-intensive VNFs, the VNFs that see every packet from every user on a mobile or fixed-line network. Moving away from that, when these vendors who typically had a shelf application move into the virtual realm, they put a front-end load balancer in the virtual realm, and you can't really build a high-performance load balancer there; you have limitations, because you can only get as big as one server or one CPU socket, for instance. So you need to push more intelligence back down into the network, and we use OpenFlow and SDN for that. You can see a list of common functions that every data-plane-intensive VNF requires, and if we push those into the SDN layer, we can take advantage of that commonality and solve a lot of the same problems the same way, making economies of scale work in our favor. For access networks and service function chaining applications, we go one step further. There are many cases where you want to apply different service chains, or different packet forwarding characteristics, to different endpoints. If you look at traditional service chaining, which on routers is policy-based routing, or if you look at network service header and other router-influenced forwarding mechanisms, they don't really take advantage of endpoint awareness. They still go hop by hop, deciding which resource is next in the service function chain. So we go one step further: by adding a PCRF, or another subscriber information repository, as an interface into the SDN controller, you can more effectively program your OpenFlow fabric to forward packets across your service chaining infrastructure. This effectively lets you load-balance at the subscriber level across all your available elastic middleboxes, the data-plane VNFs performing filtering, anti-virus, optimization, and other functions that need to sit inline. And in turn, those functions can write back into the SDN controller and do flow offload, elephant-flow offload; they can also do redundancy switchovers, moving towards a fully programmable network where both the applications and the subscribers, as they attach and detach, feed back into the SDN fabric. So, shifting gears again to IoT. IoT is a lot of things, with a lot of different applications. I pulled out some different IoT areas on this slide and created a sample service chain for each. For the top one, I took the more mobile-centric endpoints that are possibly generating a lot of data and created a service chain that has IPFIX record generation (basically NetFlow records), mobile data optimization to keep your throughput ideal, and then, because we're subscriber-aware and endpoint-aware, we can create and lifecycle-manage a custom firewall for each one of those devices.
So both the connected truck fleet application and tracking airplane jet engines, or whatnot, can be served with a similar service chain, and using the programmability of the network you can increase the security for those devices by creating custom firewall zones for each one, but managing them individually, so the impact of changing them is smaller and much more prescribed for the actual use case. In all three of these use cases we're taking advantage of network programmability and service chaining, and applying them in a way that creates a more prescriptive network service for each workload type. We did the same thing on the slide for application control and sensors, as well as video, where you don't necessarily want to incur the expense of sending large amounts of 4K video through every network function on your network; you'd rather take a lighter-touch approach, with a lighter-touch firewall and just some basic monitoring on the periphery. Another important aspect when we look at the network side of IoT, and I alluded to this in the introduction, is that as a network operator you really want to protect your asset classes from one another. In this case you could be an MVNO, renting a network slice from operators around the globe. You may own part or all of the network slice, depending on how your integration with the mobile networks works, but it lets you create a separate tenant, separate quality of service, and, in many cases, assign dedicated resources. IoT workloads are often critical to the operation of very important systems, and you need to take a more conservative approach when assigning resources and segmenting your network. You can't just throw everything into two availability zones and assume all the workloads will receive equal treatment. You really need to be more deliberate about how you apply CPU pinning, and in some cases server dedication, to your more valuable workloads; on the back end you can charge your clients for that increased level of service, while still taking advantage of all the automation that OpenStack and NFV bring to the table. In each of these network slices, we have an EPC, we have a service function chaining architecture for your inline network services, and we have a group of what we see as common IoT applications: a big data service cloud, analytics collection, data acquisition, device management (HPE has assets in all these areas), plus connectivity back to a private tenant network that may or may not extend to your client's premises. The idea is that your high-value assets, say the middle slice here and the blue slice, really need more dedicated resources because they're mission critical: heart rate monitors, industrial controls where a failure or loss of connectivity puts billions of dollars at risk. Whereas your video pump might get a reduced level of service. This lets us use a new 3GPP work item to basically let everyone share the same eNodeB and then have a separate EPC for each application type. This network slicing is probably one to three years out, at least in terms of market adoption, but it opens up a lot of different avenues. I think there's another session on slicing later today as well.
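As a toy illustration of those per-endpoint chains, here's a sketch of subscriber-aware chain selection. The device classes and function names are invented for the example; in a real deployment the SDN controller would consult the PCRF and install OpenFlow rules, rather than look anything up in a Python dict.

```python
# Toy sketch: subscriber-aware service function chain selection.
# Device classes and chain entries are hypothetical examples only.
SERVICE_CHAINS = {
    # data-heavy mobile endpoints (truck fleets, jet engine telemetry)
    "mobile_telemetry": ["ipfix_export", "mobile_optimizer", "per_device_firewall"],
    # low-rate application control and sensor traffic
    "control_sensor": ["per_device_firewall", "basic_monitor"],
    # high-bandwidth video: lighter touch, skip the heavy middleboxes
    "video": ["light_firewall", "basic_monitor"],
}

def chain_for(subscriber):
    """Pick a chain for a subscriber record, e.g. from a PCRF lookup."""
    return SERVICE_CHAINS.get(subscriber["device_class"], ["default_firewall"])

if __name__ == "__main__":
    truck = {"imsi": "001010123456789", "device_class": "mobile_telemetry"}
    print(" -> ".join(chain_for(truck)))
```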
Another trend we're seeing in the operator space is that we now have the ability to put cloud-managed resources, using OpenStack and SDN control, at the edge. In an enterprise or fixed-line use case, we're now shipping out CPEs, customer premises equipment, that include some compute; it's not just standard IP forwarding or layer-two forwarding. On that compute we have the ability to launch, lifecycle-manage, and run high-value applications, and we can move them closer to the edge. The same thing is happening in the mobile network as we adopt Cloud RAN technology, where all the baseband processing that used to sit at the base of the tower is moving to the first aggregation point in these access networks. We enable compute to exist there, run the baseband much more efficiently, make your mobile network operate as the first application, and then enable things like local breakout to a CDN: you can put additional content there, and you can put loopback rules there for latency-sensitive peer-to-peer apps that need to talk to one another. So it gives you some new options for where you put your workloads from an NFV perspective, while still taking advantage of OpenStack and all your cloud management techniques. There's another session, I think today or maybe yesterday, about extending OpenStack to these CPEs, these endpoints, and having a small enough footprint for each one, so I'm not going to go into too much detail there. Another edge theme here: data acquisition and analytics is a huge part of IoT. The idea is you have hundreds of devices out there, or I should say hundreds of millions, billions of devices, and you're collecting data on all of them as part of your IoT application. Now that we have compute closer to the network edge, you can add virtual sensors and do a first round of processing on that data, and you can enhance it with network analytics, location and other things a mobile network can provide. Basically, you want to move your data processing closer to the edge where it makes sense, but still take advantage of economies of scale and centralized resources where you need to run big-data processing on really large amounts of data that require a dedicated data center. And with that, I'm going to pass it over to Kevin, who's going to talk about some of the workloads. Okay, thanks. So this morning, as Eric, Noah, and I were talking, there were two topics. One was what color to dye Eric's or Noah's beard, right? Because everybody here has the cool mohawks and we want to see some color. The other one was this: Eric laid out a blueprint for high-performance OpenStack. He gave you a hardware spec, a software spec, and some of the configs around them. And what Noah just laid out was how to push that performant system out to the edge, and then how to overlay it, asking the network to take on some of the load and start ingesting these messages from IoT devices. So to put those together: Eric gave us a performance hardware basis, and Noah gave us functions we can push into the network stream. Those are really good tools, but what do we put on top of them?
How do we translate this into something a business can use, something a customer will see the benefit of? How does that translate into something tangible and tactical? So, a very generic and quick definition: Internet of Things devices. It's a euphemism at this point, or an idiom, but these Internet of Things concepts have the seven attributes on the left side. These are devices that are sensing something: think of a temperature sensor, a door opening or closing, a motor coming on or off, something of that nature. They're generating events, and they're small messages, real-time messages, in high volume. Think millions or billions of messages across an ecosystem of devices. Just think of all the Nest thermostats: they're generating information about the ecosystem inside your home, and think about how many of those have been purchased. That's really the biggest market, smart home devices, and wearables, as we all have. And some of these topics translate back quite a way; I've worked on designs as early as 2004 built around these same concepts, capturing network data or security data from the network and processing those messages. They have the same attributes. All the logs from all of our systems have the same concepts, the same metadata footprint, if you will: small messages, less than a megabyte or even less than a kilobyte, generated en masse. We want to process, aggregate, correlate, understand, and react to them in near real time. Real time not defined in nanoseconds, but real time for humans: near-instant response, a couple of seconds of response time. So those are the attributes of an Internet of Things device-type architecture. I've already covered some of this, but what's the performance profile? It's real-time processing. We don't really have tolerance to wait for scheduled processor time or network time, and some of what Eric covered solves those problems: we get low-level access to the processor, dedicated access to the processor and the network. There's a tremendous amount of network traffic, and we can't handle all of it on the application side, so we're going to ask the network, which is what Noah just talked about, to absorb some of this volume or deal with it in some way: route it differently, handle that load, offload it from the application, or at least make it more manageable for the app. And probably the biggest one Eric solved for us: I want zero-oversubscription resources. In a traditional virtualization or hypervisor model, the whole point is getting one piece of sheet metal to host far more OSes than you were willing to buy hardware for. Well, we're going to pivot that model, rotate it, and say: if Eric gives me 20 cores in a system, I'll give him one back for system processes and take the other 19 to run my app. I want zero oversubscription, and I want very high throughput in that context. So a great use case for this is event stream processing. We just talked about this with the Internet of Things, and event stream processing has these attributes.
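To make those attributes concrete (small messages, high volume, near-real-time handling), here's a minimal generator-based sketch of the ingest, filter, enrich, and aggregate pipeline Kevin walks through next. The message fields and stage names are hypothetical; a production system would sit behind a Kafka- or Kinesis-style ingestion tier rather than a Python list.

```python
# Minimal sketch of an event-stream pipeline:
# ingest -> filter -> enrich -> aggregate.
from collections import defaultdict

def ingest(raw_events):
    for evt in raw_events:          # stand-in for a message bus consumer
        yield evt

def keep_valid(events):
    # Drop malformed readings instead of letting them poison aggregates.
    return (e for e in events if "sensor_id" in e and "temp_c" in e)

def enrich(events, site_lookup):
    for e in events:                # add context the raw device omits
        e["site"] = site_lookup.get(e["sensor_id"], "unknown")
        yield e

def aggregate(events):
    # Per-site running average; in production this would be windowed.
    totals = defaultdict(lambda: [0.0, 0])
    for e in events:
        t = totals[e["site"]]
        t[0] += e["temp_c"]
        t[1] += 1
    return {site: s / n for site, (s, n) in totals.items()}

if __name__ == "__main__":
    raw = [{"sensor_id": "a1", "temp_c": 21.5},
           {"sensor_id": "b2", "temp_c": 19.0},
           {"bad": True}]
    sites = {"a1": "plant-east", "b2": "plant-west"}
    print(aggregate(enrich(keep_valid(ingest(raw)), sites)))
```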
I'm not going to go through all of these; you can read what's on the slide. But the macro topics are: you ingest these messages, and then you do something with them in the pipeline. You filter and massage them, basically fix them up or enrich them, translate them, aggregate and compute on them. You do something to the message to make it more than just the raw payload that came off the device. You may store it for regulatory purposes, or because you want to bill a customer for something around archival or backups, or even for replaying those messages: say you want to see how fast you ran down a path, animated in real time. Then obviously analytics, and once we've produced all this interesting data, we have to present it in some format, some UX, or consume it in some other fashion. Just as an example, Amazon Web Services has a system called Kinesis, and everything I just described on the previous slide, this is their view of what it looks like. They break all of IoT down into three big concepts. I could have shown you the equivalent from Google App Engine or Azure IoT; they're essentially the same, with different colors and different names. I'm going to bounce back one slide so people can finish taking pictures. The concepts are the same, these high-level concepts, and we can consume them as offerings from public providers, or, if you're so inclined, build them internally inside a private network and adopt whatever security practices that network has. Okay, bouncing forward one more. So let me carry the story on a little. If we were going to build an IoT system privately, there are a bunch of subcomponents that are almost required, and the community has kind of settled on what those are; this is just a dictionary list of some of them. The one I'm going to focus on is Cassandra. Tying those together: Eric gave us a platform for performance, and Noah gave us access to virtualized network functions we can manipulate, or ask to offload or take responsibility for some tasks. If I pull those together to build an event stream processing system, and I run one of these subcomponents, what does that look like, and why would I do it in the context of what Eric and Noah already laid out for us? So, one subcomponent is Cassandra. If you're not familiar with it, Cassandra is essentially a key-value store subsystem. It gives you persistence to disk at very large scale. It has a consistent hashing ring that distributes the data you want to store across multiple nodes, and those nodes correlate to disk structures underneath. These are homogeneous nodes; they form an ecosystem, a cluster that communicates on a peer basis, and it lets you run queries against this key-value store without interruption if you have a node failure of some sort. So those are the basics of Cassandra, at a very high level, obviously. What I'm about to show you is essentially the architecture for running Cassandra on AWS EC2, but I'm going to give you the blueprint, based on what Noah and Eric just showed us, for running that Cassandra without the performance hit of EC2: you get direct access to the network, the disk, and the CPU.
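For the consistent hashing ring Kevin mentioned, here's a minimal sketch of the idea in plain Python. It's illustrative only: real Cassandra layers virtual nodes, replication factors, and partitioner choices on top of this.

```python
# Minimal consistent-hashing ring, sketching how Cassandra-style stores
# map keys to nodes. Real Cassandra adds vnodes and replicas.
import bisect
import hashlib

class Ring:
    def __init__(self, nodes):
        # Each node owns the arc of hash space ending at its own hash.
        self._points = sorted((self._hash(n), n) for n in nodes)
        self._keys = [p for p, _ in self._points]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first node at or past the key's hash,
        # wrapping around the ring if we fall off the end.
        i = bisect.bisect(self._keys, self._hash(key)) % len(self._keys)
        return self._points[i][1]

if __name__ == "__main__":
    ring = Ring(["cass-1", "cass-2", "cass-3"])
    for k in ("sensor:a1", "sensor:b2", "sensor:c3"):
        print(k, "->", ring.node_for(k))
```

The useful property: adding or removing one node only remaps the keys on that node's arc, which is what lets a cluster keep answering queries through a node failure.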
So, general requirements for deployment: this is what an individual Cassandra node might look like on a piece of hardware. You want memory for the JVM heap, X number of cores, so much solid-state disk, RAID laid out a particular way, and a performant network, hopefully one gig plus. The blueprint for virtualization is similar, just smaller. In a hardware cluster we might have larger nodes with a lot of JVMs running; in a virtualized system we'll have even more nodes, but they'll be much smaller. Think a Superdome-versus-Raspberry-Pi type of model, but with a lot of either one. In this context, each node is 16 gigs of RAM, four vCPUs, a couple hundred gigs of solid state, and a one-gig network. These nodes are pinned to cores: that's the piece Eric just talked us through. They have access to dedicated memory segments, dedicated network, and specific JBODs, individual disks as it were. We're potentially consuming a lot of messages to ingest into Cassandra, so we have access to a network that can offload or handle some of that work for us. And one of the most important things for event stream processing: no scheduling waits. None on the processor, none on the network, none on the disk. With the hardware tools Eric laid out, we have full-performance IOPS from an individual VM. So, in that context, tie those three together: Eric gave us hardware tools, Noah gave us network tools, and we overlay a subcomponent of an event stream processing system, on top of which we can build an app to handle IoT systems or IoT applications. Here you go. Great, thank you, guys. So, excuse me, here we go: call to action, a summary wrapping up what we talked about. Good, a 10-second waiting period. Thank you for your attention. Dramatic pause; presenting with skill here. All right: the right hardware. Coming from HPE, we're going to say, yeah, HPE hardware, right? But the right hardware really comes down to your workload. Is it NFV, IoT, is it going to be RSA, or typical regular servers? Or maybe you're looking at things like ARM CPUs, which is a whole other branch, and by the way, DPDK is coming to ARM; one of the OPNFV projects is doing that, and you should check it out. So: the right hardware, and offloading your bottlenecks. A lot of what we talked about is how you offload work from the CPU and put it right onto the hardware, and that helps with all the workloads we talked about today. And the very last one is really about how you manage the traffic going between your IoT devices and your cloud. It comes down to SDN and service function chaining, which give you deterministic transfer of the IoT data between the IoT device and the analytics engine. You're really trying to get deterministic, let's say, lag, deterministic lag, while getting rid of jitter in the transfer between the IoT device and the cloud. Outside of that, any questions so far? And keep in mind, we can give you the slides if you come up afterwards, and we can also set up some time to go deeper into the topics we skipped. We have a lot of backup slides and other information.
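To ground that zero-oversubscription, pinned-cores point, here's a hedged sketch of the "one core for the system, nineteen for the app" split on a Linux host, using os.sched_setaffinity. On an OpenStack compute node, the equivalent for guests is done with flavor extra specs such as hw:cpu_policy=dedicated, rather than in application code; the 20-core count is just the example from the talk.

```python
# Sketch: leave core 0 for system processes and pin this process to
# the remaining cores, mirroring the 1-of-20 / 19-of-20 split above.
# Linux-only; for OpenStack guests, Nova's dedicated CPU policy plays
# this role instead of application-level pinning.
import os

SYSTEM_CORES = {0}  # cores reserved for the OS (example value)

def pin_to_app_cores():
    app_cores = set(range(os.cpu_count())) - SYSTEM_CORES
    os.sched_setaffinity(0, app_cores)  # pid 0 means the current process
    return app_cores

if __name__ == "__main__":
    print("pinned to cores:", sorted(pin_to_app_cores()))
```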
So, we have mics out there; raise your hand or just ask questions and I'll repeat them. Any crickets out there? Yep. So, thanks for the intro. I guess my question is: you talk about the data services aspect, data acquisition and so on, and then the hardware optimization. From an end-to-end IoT use case point of view, how much benefit do you see from this hardware optimization versus better data services, better pipelines, better scale-out? Where is the most bang for the buck? Two-part question, two-part answer, right? So, the bang for the buck in terms of getting, let's say, the best ROI. Can we frame it that way? Yeah, yeah, exactly. So the best return on your investment would be: if you already have hardware in place, jump in and figure out how you can use this stuff in, say, your OpenStack cloud. Hopefully get NUMA in there, upgrade OpenStack so you have those features; that's almost zero cost for you. You just use what you have now, and use it better. The next thing is to really take a look at, like Kevin was talking about, what type of workload you're running in that cloud for your IoT. That's really where we'd need to sit down, look at what the packets look like, what the flows look like, and then there would be a different answer from each of the three of us based on that. Yeah, some of that's predicated on a particular client's requirements. A great consulting answer is: it depends, right? One particular client Eric and I have been chatting with prefers to have many of these subsystems deployed on-premises, at the buildings where the devices are generating messages, and they want at least 50%, if not more, of the analytics and the pipeline performed on-site, so that if the connection is severed, the building is still live, still available. I think traditionally, most folks would say, okay, we'll put some network edge devices there, backhaul all of that across dark fiber, and process it centrally. That tends to make more sense, but you always have some additional requirement that says we want this building to survive any kind of cut, and so you have to put stuff on-prem, and you end up with more of a hub-and-spoke architecture, which predicates this kind of model. Yeah. Thank you. Hey, Kevin, you were talking about... Hello, Anthony. How are you? I'm good. You were talking about different app profiles, and I think you had an AWS service profile up there. That one's probably primarily public cloud, and there's also one through Azure and one through Google as well. If you had a customer who was using a little bit of public cloud but probably a lot of private cloud, which way would you recommend they go for that app service profile? Maybe add a little bit more to your question; I'm trying to piece it together. Where would you lead them if they were more private-bound rather than a pure public cloud play? Oh, for sure. So if regulatory restrictions, or whatever their preference is, point to private, you kind of go down the path of building some of this on your own, right? If they have access to hardware, ironically, for this particular application I would probably point them towards more of a purist hardware approach. But if they already have a big investment in OpenStack, hopefully, right?
You can build that system with overlays for the NFV and the performance tuning in the hardware, and maybe use the OpenStack cluster as a shared resource. A customer you and I have dealt with ran Hadoop and Cassandra in concert on a system like that. That may not be a production deployment; it may be targeted at staging, non-prod, smoke testing, that kind of thing. But I could see use cases there as well. Thanks. Yep. Great. We've got one or one and a half more minutes. Any other questions? You all looking forward to lunch? Yes. Me too. Okay, I guess thank you, everyone, for your time. Awesome, thank you, everyone. Cheers.