All right, good evening, and thanks for joining us this late in the day. I know we stand between you and the party, so we're going to try to make it entertaining. Tonight we're going to talk about OpenStack, SDN, and NFV, all the three-letter acronyms in one slide, so this should be good. So without further ado, let's jump into the introductions. My name is Valentina Alaria, and I work for Plumgrid. I run product and solution marketing and training, and I do a lot of work with customers. I've been a member of the OpenStack community for a number of years; my very first summit was the Santa Clara one, many years back, and it's exciting to see how the community has grown. And it's awesome to be in Tokyo. I love Tokyo. I do a lot of work with customers and a lot of education around product adoption, helping customers take technologies and products and transform their businesses. Plumgrid is a player in the Neutron area, in networking. We have a micro-segmentation offering for OpenStack that enables multi-tenant solutions. It's built on a really novel technology in the data plane called IO Visor, and you'll discover more about it later on. And we provide a set of comprehensive networking and security services to OpenStack users.

Hi, my name is Rimma Iontel. I'm a senior solutions architect at Red Hat. I joined Red Hat out of the telco community; I worked for Verizon for about 14 years, so lots of telco experience. And now I'm bringing all that experience to this new software-defined way of doing things, working on NFV. I primarily work with our partners, like Plumgrid, focusing on all things related to NFV, OpenStack, and software-defined networking. So without further ado, let's jump in. If you attended the keynote today, and if you've been around OpenStack, you've probably heard about network function virtualization.
A few years ago, and I was actually there at the beginning of it, carriers realized that the current model was not sustainable. They had highly customized equipment that was very expensive and that was running services that were very complex: difficult to operate, difficult to maintain, basically needing to be closely babysat. Anyone in operations at a telco will tell you it's a huge job to manage the network and the services, to make sure you meet all the requirements of your customers as well as, in many cases, regulations, because of the essential services telcos provide. So introducing new services in that environment was always very complex. Any new feature you want to add, it's 18 months of lead time. How do you sustain that and, at the same time, stay in business? So a few years ago, telcos started looking at ways of revolutionizing their business, joining this whole agile, over-the-top crowd that was pretty much eating their lunch. And they started looking to cloud. The cloud model seemed like a clear path toward a new, more agile business that would let you provide services faster, easier, better operated, and always on. They still want their services to be available 24/7; that requirement doesn't go away. You want them to be reliable. You want to be able to serve all of your customers, from the consumer market to enterprises and small and medium businesses, everybody. And you want it to be affordable, to yourself and to the customer. So in comes NFV, network function virtualization. But it has to be a very special type of virtualization. It has to be ready for the telco environment I just described: six nines, about 30 seconds of downtime per year, but done differently from what telcos are used to. Not with tightly coupled hardware and software where everything is deployed in quads for redundancy; you want to do it in a more agile, better managed way.
You deploy it on COTS hardware, commodity silicon that's available to everybody and can run x86, from a laptop to a router. You use open source software, which is in one way cheaper, but in another way faster changing, because you don't have to wait for one vendor to add that one feature you want. It can happen much faster, it can be shared between many different users at the same time, it can be tested faster, and it can be deployed faster as well. It's community driven: more eyes on the code, fewer bugs. That's the Red Hat way, right? That's what we promote; that's why we work with open source software. You have APIs that are open and standard that anybody can write to. And you still want all of that, but on the other hand you want it to be fault tolerant. You don't want your network failing. You don't want your calls being dropped, your video lagging, or any of the other services telcos provide degrading. You want your VPNs to be secure, et cetera. So, fault tolerant. And you want automated deployment, because you don't want to rely on one person taking care of everything; you want machines to take care of everything. So scalable, upgradable: all of that ease of operations is one of the top worries for a telco. And everything they looked for is available in OpenStack. Your network function virtualization infrastructure and your virtualized infrastructure management, that's OpenStack. Let's look at it a little closer. The infrastructure is not just the hardware underneath but everything that runs on top of that hardware: software-defined networking, storage, compute, and something that controls all of that infrastructure. You want your controllers to be reliable. You want your compute, networking, and storage to be distributed, and also reliable and always available. You don't want any bottlenecks; you want high performance.
And you want security, because of the diversity of tenants you have. You don't want them clashing with each other, right? If you have the Coca-Cola Corporation and the Pepsi-Cola Corporation as customers, you never want them to see each other, but you still want the same infrastructure to run applications for both of them. So security is also one of the top concerns in any telco. So you have this great infrastructure, and you have a way to control that infrastructure. How do you manage and operate it? That's something that's not currently in OpenStack, but work is being done to introduce it, and it also gives other communities room to expand on it: not just OpenStack, but other add-ons, other products that can work around it. That's the management and orchestration space. Right now MANO, as defined by ETSI NFV, has certain functionality, but that functionality is going to get expanded, because what you want in your networking is high visibility. You want to know exactly where each session is going, where each packet is, where your events are. You want to know if a failure occurs, or, even better, if it's about to occur. You want the ability to heal your services, the ability to deploy your services easily, and a way to manage those services easily. That's the goal of the MANO portion of NFV. There are things already in OpenStack, like Ceilometer, and projects people are working on right now for virtualized network function management, like Tacker, but it's not enough. It needs to be expanded. You need more ways of getting information about what's happening in the infrastructure layer, as well as visibility into your applications. That's the portion that isn't really present in OpenStack right now and can still be addressed. And the whole reason you're doing it, the whole point of having NFV, is your virtualized network functions.
The only reason you have a platform is so you can run applications on top of it. And those applications span a gamut from traditional IT applications to very specialized telco applications. Here you have to remember that not all of those applications have been designed to live in a cloud. The ones that have: excellent, they're ready for this platform. The ones that haven't, the ones that require certain hooks into high availability, resiliency, and reliability, the ones that act as pets instead of cattle, also need to be able to live on this platform in the telco environment. You can't just say, OK, we'll only work with cloud-native applications; it just doesn't work that way. So you have to take into account stateful applications as well as stateless ones, control plane and data plane applications, and you need to be able to treat each of those applications or services according to its needs. Within the VNF ecosystem, a variety of vendors are providing applications, and some are porting them over from the traditional world. So you see applications from traditional telco vendors like Cisco and Nokia or Alcatel-Lucent, and then you see newcomers to the space, whose applications are quite different and sometimes better suited to this environment. As a whole carrier platform, what you end up with is something that is cost effective, easy to operate, and can run a variety of different applications in a way that can be easily managed by traditional telco operations. You want to make sure they can expose, through very standardized and open means, all sorts of information about the applications and the platform, and sustain the resiliency and high availability requirements that telcos demand.
And from the point of view of OpenStack, being able to deploy your controllers highly available, deploy things in a distributed fashion, and have your applications run on top of that platform is what gives you a carrier-grade environment. It allows the companies that adopt this approach to be first to market with the newest and greatest applications, even beyond what they've been able to imagine up till now. From Red Hat's point of view, we provide that platform. We have a VIM, which is OpenStack, and the network function virtualization infrastructure in terms of compute, which is also part of OpenStack, and storage, which is our Ceph product, a highly distributed, reliable storage. And then we work with our partners to provide all the other pieces that are not part of our portfolio. One of those partners, Plumgrid, provides the SDN capability that gives a lot of flexibility to the underlying networking that sustains this platform. We have a huge ecosystem of VNFs; we are very open, and we work with everybody who is willing to support OpenStack as their platform. And we work with a variety of management and orchestration vendors as well. Altogether, you get the full NFV vision realized and suited to the carrier environment. So, SDN. If you're not familiar with it, one definition is the separation of the control and data planes, where the control plane is logically centralized and the data plane can be either software or hardware, but is distributed and controlled by the control plane, with one control plane able to control multiple distributed data planes. And instead of having hardware provide all the functionality, you have software functions running on top of very generic hardware.
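The SDN definition above, a logically centralized control plane programming many distributed data planes, can be sketched in a few lines. This is a toy illustration only; the class and rule names are assumptions for this example, not any vendor's API.

```python
# Toy sketch of the SDN split: one logically centralized control plane
# programming several distributed data planes.

class DataPlane:
    """A forwarding element that only applies rules pushed to it."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}              # match -> action

    def install_rule(self, match, action):
        self.flow_table[match] = action

    def forward(self, packet_dst):
        # Unknown destinations are dropped until the controller programs them.
        return self.flow_table.get(packet_dst, "drop")


class Controller:
    """Logically centralized control plane for many data planes."""
    def __init__(self):
        self.data_planes = []

    def attach(self, dp):
        self.data_planes.append(dp)

    def program(self, match, action):
        # One decision, pushed to every distributed forwarding element.
        for dp in self.data_planes:
            dp.install_rule(match, action)


ctrl = Controller()
edge1, edge2 = DataPlane("edge1"), DataPlane("edge2")
ctrl.attach(edge1)
ctrl.attach(edge2)
ctrl.program("10.0.0.0/24", "out_port_1")

print(edge1.forward("10.0.0.0/24"))     # out_port_1
print(edge2.forward("192.168.1.0/24"))  # drop
```

The point of the sketch is that forwarding state lives in the distributed elements while the decision-making lives in one place, which is exactly the separation the talk describes.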
Following that thought even farther: each individual network function, instead of running on customized, monolithic hardware, very specific to a particular vendor's proprietary implementation, gets migrated into the software world, either in a virtual machine or, going even farther, in a container, running on generic hardware where it can be very quickly deployed, upgraded, changed, and updated, and where, with additional hooks and additional development, it can meet the performance requirements. Maybe not quite the same as the customized hardware, but good enough, and with some fault tolerance and reliability as well. And let me hand it off to Valentina.

Thank you. So, just going back to the previous slide for a second, oops, if I can. What is common to this model is that you can see there is an evolution, and what a telco, or anyone looking at adopting NFV, is left with is really a variety of form factors, deployment models, and different levels of performance. So something we started looking at as Plumgrid, but also just working with customers, was: how do we bring some of those features and functionalities, in a more abstract way, right into the kernel? That's what I'm going to cover in the next section: some of the technology evolution that has happened in the kernel space that enables a variety of virtual network functions to be brought right inside each compute node. When I say kernel, I mean I can take each of these VNFs, switching, routing, load balancing, firewalling, and deploy them right inside your compute layer, making them available to your bare metal, containers, and VMs in a more cohesive way. So I want to introduce you to a Linux Foundation collaborative project called IO Visor, and I'm very proud to be part of it.
The IO Visor project satisfies the need for extensibility and programmability at the kernel level, and this programmability and extensibility now foster the creation of an even broader ecosystem of network functions. Again, what's unique compared to the traditional model you're all familiar with is that, instead of leveraging just the traditional form factors of physical, virtual, and container-based appliances, you start distributing the functionality, implementing these features inside each of your compute elements. So imagine, back to Rimma's point earlier, how this can enable a more cloud-automation style of model for deploying these functions. From a performance perspective, if you have a common abstraction layer, it's much easier to work on performance improvements across platforms and frameworks. What IO Visor, as an example of such a technology, does for you is bring the ability to create very generic IO modules right in the kernel. By IO modules, I mean the ability to define manipulations on packets, functions on packets. For those of you not familiar with this type of technology, the best way to think about it is like writing a program in C, something you would normally run in user space, but instead pushing it down into the kernel. And you can have as many of those programs as you want chained together, giving you the ability to form chains of functions and services right inside your kernel. Now, IO Visor as a framework is a very generic technology. It's built on top of the Berkeley Packet Filter, BPF, for those of you familiar with it, and it can be applied to more than just networking. As I said, I spend quite a bit of my time with telcos and cloud customers in general.
And one piece of feedback they give me every time they see the IO Visor technology is how important it is for them that it enables visibility, analytics, and tracing right inside, once again, the kernel. Traditionally, the way we've monitored networking infrastructure has very often been based on agents, sampling of packets, or spanning packets up to a collector that would then go and analyze them. Technologies like IO Visor instead help you bring a visibility layer right inside each compute node, really helping with your troubleshooting and monitoring, back to what Rimma was talking about earlier: being proactive in understanding that an issue is coming, that something is about to break in your infrastructure, so you can go and fix it. Now, when you look at how IO Visor impacts VNFs and NFV, what it does for you is take any generic network function you have seen implemented as an appliance, and we discussed many examples there, and give you the framework to push all these functions right inside the kernel space. For those of you familiar with network functions virtualization, you know there is a concept of service chaining: defining that, for packets that belong to a specific application, you want those packets to traverse a chain of functions. You want them to go through function one, then function two, then function three, things like routing, firewalling, load balancing. What the IO Visor framework does for you is enable you to define that chain right inside the kernel itself. Now, one of the things that gets telco customers excited about this type of technology is that we don't have a crystal ball, right? We don't know exactly what the future is going to look like a number of years from now.
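The service chaining idea above, each IO module as a small function on a packet, and a chain as their composition, can be sketched like this. The function names (`firewall`, `nat`) and the packet representation are assumptions for illustration; real IO Visor modules are BPF programs in the kernel, not Python.

```python
# Service chaining sketch: a packet traverses function one, then function
# two, and a drop anywhere ends the chain early.

def firewall(pkt):
    # Drop anything aimed at the blocked port by returning None.
    return None if pkt.get("dport") == 23 else pkt

def nat(pkt):
    pkt = dict(pkt)
    pkt["src"] = "203.0.113.1"   # rewrite source to an assumed public address
    return pkt

def build_chain(*functions):
    """Compose packet functions into one service chain."""
    def run(pkt):
        for fn in functions:
            if pkt is None:       # dropped earlier in the chain
                return None
            pkt = fn(pkt)
        return pkt
    return run

chain = build_chain(firewall, nat)
print(chain({"src": "10.0.0.5", "dport": 80}))  # NATted packet
print(chain({"src": "10.0.0.5", "dport": 23}))  # None (dropped by firewall)
```

Adding a new function to the chain is just adding one more element to `build_chain`, which mirrors the extensibility argument made in the talk.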
And the cloud model is really enabling a transformation of the business opportunities for the telco as well. The beauty of technologies like this is that they're fully extensible. Say that tomorrow you, Mr. Telco, have a new use case in mind and need a new network function, a new security function, or a new tracing function. A framework like this enables you to just go and create it and add it right inside your service chain. This is a very strong transformation that we're seeing, and I'm going to give you some examples of how it gets applied to use cases that should be familiar to you as an audience if you've been looking at NFV for your environment. Now, before I talk about the use cases, I'm going to mention a concept that's quite unique to what Plumgrid does, but I hope it's going to help you visualize how this technology can then be applied to NFV use cases. We're not really talking products here; this is just to give you an idea of how you can templateize and summarize the network needs of applications. We use a construct that we call virtual domains. For those of you completely new to Plumgrid who have never heard of this, the best way to think about it is that it's like a private data center. It's the bubble you define for an application, for a user, for a tenant. And this bubble is completely decoupled from your physical infrastructure; inside it, you can define any arbitrary network topology. Back to the NFV model, this would be your service chaining definition, where I say: my application has two tiers, a database tier and an app tier; they're going to be connected in a specific topology, and they're going to have some security policies between them, and so on and so forth. These virtual domains, these bubbles, allow you to create your virtual network infrastructure and define this chaining of features and functionality.
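The two-tier example just described, an app tier and a database tier with a policy between them, can be written down as a small template. This is a minimal sketch with assumed field names; it is not the actual Plumgrid virtual domain schema.

```python
# A virtual domain sketched as data: a per-tenant bubble holding an
# arbitrary topology plus the security policies between its tiers.

virtual_domain = {
    "tenant": "customer-a",
    "tiers": ["app", "db"],
    "links": [("app", "db")],   # topology: app tier connects to db tier
    "policies": [
        # Only the database port is allowed between the tiers.
        {"from": "app", "to": "db", "allow": ["tcp/5432"]},
    ],
}

def is_allowed(domain, src, dst, proto_port):
    """Check a flow against the domain's security policies (default deny)."""
    for p in domain["policies"]:
        if p["from"] == src and p["to"] == dst and proto_port in p["allow"]:
            return True
    return False

print(is_allowed(virtual_domain, "app", "db", "tcp/5432"))  # True
print(is_allowed(virtual_domain, "app", "db", "tcp/22"))    # False
```

Because the whole domain is just a declarative template, it can be versioned, copied per tenant, and instantiated automatically, which is what makes the construct useful for the automation story later in the talk.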
Now, a technology like this, if you put lines instead of icons, will look very similar to what Neutron does with networks and routers and external networks and so on. What a technology like IO Visor does for you is take this definition of a network and make it distributed. What I mean by that is that it's going to make this network live across every single physical compute node in your environment. It's a mental shift from network functions being available through appliances, where you need to procure an appliance and bring it up, and even if it's software you have to manually bring it up or automate its bring-up, and then worry about making it available, making it redundant, having it work reliably for a number of applications, and making it scale out. This brings all those features and functionality right inside your kernel layer, making it a lot easier to bring the cloud model to a telco and satisfy the stringent requirements a telco business has. Now, let's look at a couple of use cases that should be familiar. The first one: everyone pretty much agrees that the virtualization of the CPE is one of the key applications for NFV, the one everyone thinks is coming. What I want to show here is how this kernel technology I just discussed, and obviously all the innovation coming with OpenStack that Rimma went through earlier in the presentation, enable a transformation of the CPE. If you look at the existing solution, why is the CPE such an interesting scenario for telcos to take care of? You have these edge networking devices; they're standalone, they're spread everywhere, and they provide key services, things like IP management, quality of service, routing, NAT. Now, while they provide critical services, they're usually built on pretty cheap hardware, right?
So they're prone to hardware failures, and because they're running complex software, so they can provide these complex and advanced services, they're also prone to failures from a software perspective. Now, one thing that's attractive to a telco looking at the NFV transformation is to say: OK, how can I break away from this coupling of features with hardware in a scenario like the CPE? How can I come up with a model where, instead of having my control plane and data plane both running in the remote CPE location, I bring some of that back into my cloud, my NFV cloud, so that I can centrally manage and operate it, push upgrades, and add new features, and at the same time have a simple, easier-to-upgrade data plane running at all the CPE locations? Let's start with the pure SDN model. An SDN model tells you: you have control plane and data plane separation; you put some data plane functionality right in the CPE and keep your control plane running in the cloud. Now, one of the challenges is that often, when you do this separation, you end up with a very basic data plane running inside your CPE, so you still need to rely on the central control plane, plus an advanced data plane running in the central location, to enforce the security, the NAT, and the more advanced features and functionality. And you can see right away that a model like this doesn't scale well, because if you need to haul packets back to a central location, your performance is going to be suboptimal. And if you ever experience a failure and an interruption of service between the cloud and your local CPE, you're not going to be able to provide the features you need. So you don't have support for what we call the headless model, right?
That's where you chop the head off your remote data planes and you still want to be able to provide services to the local endpoints. So what technologies like IO Visor, and many others happening right now, enable is the ability to run the full data plane you need right inside the CPE. At that point you have a pure model where your control plane is just a control plane, and all it's doing is interacting, from an API perspective, with the rest of your OpenStack so you can automate it, configure it, and define all sorts of applications on top of it. And your data plane is then locally providing all the security and routing functionality you need. Now, if we look inside the CPE, in the example of an IO Visor-enabled data plane, what happens is that we have our cloud control plane running remotely, we have a local mini control plane that can talk to the central control plane and take care of upgrades and things like that, and then you're capable of defining all your kernel functions, which get implemented right inside your CPE environment. I don't want to go into too much detail, but I'm certainly open to talking more offline if you're curious about exactly how this gets implemented. The idea is that every single packet traversing your kernel can be serviced through these functions. So you can see how the model completely transforms, from physical appliances to virtual appliances to these fully distributed, in-kernel functions. Now, another example that I really like, that I'm very passionate about, is a customer of ours that we've worked very closely with for a number of months now. What they're doing is building a communications-as-a-service cloud solution. If you asked them, they probably wouldn't even call themselves an NFV cloud provider, but what they're doing is really bringing a lot of the NFV models into their cloud environment.
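The headless behavior just described can be sketched in a few lines: the CPE data plane keeps enforcing its last-synced configuration even when the cloud control plane becomes unreachable. The class and rule names are assumptions made up for this illustration.

```python
# Headless CPE sketch: forwarding never consults the controller, so losing
# the control plane ("chopping the head off") does not interrupt local
# service; only configuration changes pause until connectivity returns.

class CpeDataPlane:
    def __init__(self):
        self.rules = {}            # last configuration pushed from the cloud
        self.controller_up = True

    def sync(self, rules):
        # New configuration only lands while the control plane is reachable.
        if self.controller_up:
            self.rules = dict(rules)

    def handle(self, dst):
        # Local forwarding decision based purely on local state.
        return self.rules.get(dst, "drop")


cpe = CpeDataPlane()
cpe.sync({"lan": "permit", "guest-wifi": "rate-limit"})
cpe.controller_up = False          # WAN outage: control plane unreachable
print(cpe.handle("lan"))           # permit -- still enforced locally
```

This is the contrast with the thin-data-plane model: because the full data plane lives at the edge, no packet needs to be hauled back to the central site for a policy decision.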
What they do is work with small companies, small customers that are starting up a business and want to create a call center or a support center. Those customers don't want to incur the hardware and software expenses of bringing up one of those environments themselves, so they go and work with our customer to get these communications-as-a-service environments created for them. Now, when this customer first started looking at OpenStack, one of the biggest challenges they faced was that they wanted to create customer environments encompassing a number of network functions chained together, and they wanted to use a number of different vendors concurrently in each of these environments. They were using a collection of physical appliances and virtual appliances, trying to stitch everything together, plus using some of the physical network constructs like VLANs for isolation. This might sound very familiar to many of you, because many of you come from that world; that's really the beginning of the journey. And they were sharing with us that it would take something like 75 tickets to be opened for any of these environments to be created, and anywhere from three to four weeks for a single environment to come up. If you're building a cloud model and a cloud business, that's clearly not acceptable, right? So they started looking at solutions out there, obviously looking at OpenStack, talking with the community, and trying to figure out how to bring this agility and automation into their environment. And they really had the requirement to be able to carve out these multi-tenant environments. As Rimma pointed out earlier, especially in a telco scenario, it's very important to have strict multi-tenancy and isolation so that you can isolate components throughout every function you provide.
So what we did with them is help them design a model based on this IO Visor kernel component, where they could just define these customer environments through the concept of virtual domains. As I said earlier, when we define a virtual domain, we help you define how you chain all these functions together. Now, nothing says all these functions need to be Plumgrid, right? In this specific example, we picked some functions from Plumgrid and some functions from our partners; we worked with load balancers and firewalls and routing appliances, and we helped them stitch all of these things together. The beauty of this model is that you can fully automate it from beginning to end, because you can define a virtual domain as a template, and by defining some of these functions as third-party functions, you declare that they need to be stitched together, and we automatically stitch all of them in. Some of the more common functions we can implement right inside the kernel, so again you have this distributed, high-performance layer that gives you the switching, the routing, the isolation, and the security. I find this a very appealing case study: you can see the transformation of their business, going from three to four weeks and close to 100 tickets, as I said earlier, to running scripts in a few seconds to bring up an environment. Now, I'm not claiming it takes a few seconds to bring up a customer from beginning to end, because you still potentially have to procure new hardware and make sure the environment is up and running. But just for the automation and configuration of the environment, this is something you can really squeeze into a very short amount of time. For them, it was also important to be able to scale resources out and in, right?
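The template-driven bring-up described above, replacing dozens of tickets with a script, can be sketched as one template expanded into an ordered provisioning plan. All step names, vendor labels, and field names here are hypothetical, invented for illustration; they are not the customer's or Plumgrid's actual workflow.

```python
# Sketch: a virtual-domain template expanded into concrete, ordered
# provisioning steps for one tenant, mixing in-kernel functions with
# third-party (partner) functions that get stitched into the chain.

TEMPLATE = [
    ("create_network", {"name": "tenant-net"}),
    ("attach_function", {"kind": "router", "vendor": "in-kernel"}),
    ("attach_function", {"kind": "firewall", "vendor": "partner-a"}),
    ("attach_function", {"kind": "load-balancer", "vendor": "partner-b"}),
    ("apply_policy", {"isolation": "strict-multi-tenant"}),
]

def provision(tenant, template):
    """Expand the template into a per-tenant list of provisioning calls."""
    plan = []
    for step, params in template:
        plan.append({"tenant": tenant, "step": step, **params})
    return plan

plan = provision("call-center-42", TEMPLATE)
print(len(plan))         # 5 steps, executed in order by the automation
print(plan[1]["kind"])   # router
```

The same template serves every new customer environment, which is what turns a weeks-long manual process into a repeatable script.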
As their customers' demands fluctuate, they want to be able to let them grow and shrink and have all the network functions follow. Again, a kernel-based model will automatically scale in and out with you, because it exists wherever you have VMs and simply doesn't exist where you don't, so it really helps with this dynamic, elastic model. Hopefully this was a good example to give you a feel for the transformation we're seeing, especially in the virtual network function ecosystem for NFV, how SDN is helping transform this business, and obviously how OpenStack is providing the underlying platform to foster this innovation. And right now I'd like to open it up for any questions or comments. Sure. Do you mind using the mic, since you have it right there?

[Audience question: Is IO Visor mainly developed in C?]

So, there are frameworks. IO Visor has kernel components, user space components, and development tools, and there are different flavors and SDKs that can be used, in different languages. Yes, C is one of those. And if you're interested, there are sessions on that tomorrow, actually, during the collaboration day. Any other questions or comments? Well, thanks a lot for attending the session. Both Red Hat and Plumgrid have booths at the marketplace, and the expo is open tomorrow morning, so if you want to chat more, feel free to stop by, and we'll be hanging around here a little longer if anyone wants to talk. Thanks again. Thank you. Thank you.