All right, hi everyone. My name is Eric Sarault, and I'm the software product manager at Kontron Canada. Today we're going to go over what Kontron does on the software side as a hardware manufacturer. First things first, who are we? We have a footprint in 12 countries worldwide, with production facilities distributed across the globe, and everyone here today is from the communications business unit, which ties into the OpenStack venture; all of us are located in Montreal, Canada. So who are our customers? Obviously, if we're talking about OpenStack, we need to talk about the cloud. One of the main use cases for the platform we've announced is private and hybrid cloud deployments, as well as public cloud; the platform is really targeted at quickly delivering those environments. We're also really big on the telco and service provider side. We have 30 years of experience in that field, with a strong ATCA background and engineering that we brought into a rack-mount form factor. There's also the media and transcoding use case. This is where our platform excels today, and we're bringing the new OpenStack use case to the same platform. We have the highest density of 1080p 60 fps transcoding capability out there, and I'll cover one of the main customer segments we're talking to with that: mostly ISVs and OEMs. Most of you probably don't know us because we're a big white-labeling shop. Our customers like to develop and put forward their own brand, and we don't force ourselves to the forefront. So again, white labeling is why most of you probably don't know us. Here are some examples of where our products are used. Anti-DDoS: if you ever got protected from an attack rather than just null-routed, it was probably by a Kontron box.
Core routing: if you've ever used the internet, ever looked at cat pictures or anything, it probably got routed through our gear. For anyone out there following video game streaming — some people don't get it, kids really like it — one of the biggest platforms out there is actually powered by our hardware. And lastly, if you watched the World Cup of Hockey, the ads around the boards are tailored to the specific countries watching the series. Since we're from Canada, obviously it's Tim Hortons, but all of that ad placement is also powered by our systems. This brings us to the hardware layer. At heart we're a hardware provider, and we're stepping into software. This is the portfolio we have right now, which we're looking to triple in size over the remainder of the year. Our main investor is Ennoconn, a subsidiary of Foxconn, which unlocks a wide variety of hardware profiles: NAS, commodity servers, carrier-grade systems, modular servers like our flagship MS2900 series, as well as white-box switches and on-premise appliances. This allows us to deliver an entire rack environment from a single provider. With a catalog that versatile, we needed to make sure the software approach made sense, because otherwise what's the point? So we partnered with Canonical — I see some familiar shirt colors out there — and we're leveraging MAAS, Juju, and Landscape to deliver those applications. We really embraced the notion of making the platform modular, because ultimately, who are we to force a technology choice on customers that they'll have to live with for the next cycle of five to ten years, if you're looking at it from a telco standpoint? At the deployer layer, the technology we're using is MAAS: from the minute you open the box, it provisions the machines and gets them working out of the box.
We're also leveraging Juju, which handles the deployment and service modeling of the applications. OpenStack is complex, and not everyone has the manpower, or the money, or both, to just throw at the problem until it's solved. So we wanted to deliver something that works out of the box and makes it simple for customers to get up and running, thinking about delivering their application rather than figuring out an infrastructure problem they have no background in solving. Landscape, on that front, covers patch management and user management, so you're able to update systems more easily, which is a lot more convenient when managing a larger environment. We've also added a Nagios deployment on top of it, covering both hardware and application monitoring of the solution — a true end-to-end monitoring environment — while remaining fully open source, because we have a strong commitment on that point. And lastly, everything is deployed in containers. The whole debate of "should I do containers in OpenStack, or OpenStack in containers?" — well, the lovely folks at Canonical deliver it through LXD, which brings a level of efficiency and flexibility that's more easily achieved than just dropping bare metal and deploying OpenStack on it. At the infrastructure layer, where the applications run, we're leveraging OpenStack's Newton release to run the environment — I don't have to explain that one to you guys, right? We also partnered with 6WIND to deliver DPDK for the environment. One of the key challenges with OpenStack is delivering network throughput and getting as close as possible to line-rate performance. Standard Neutron out of the box, which is based on OVS, will run at 4 to 5 Gbps of throughput.
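For context, the gap between stock Neutron throughput and line rate comes down to the datapath: default OVS pushes packets through the kernel, while a DPDK-accelerated OVS processes them in user space. As a rough illustration only — using upstream OVS-DPDK options, not Kontron's or 6WIND's actual configuration — the switch shows up in the Neutron OVS agent settings on a compute node roughly like this:

```ini
; openvswitch_agent.ini fragment (illustrative upstream OVS-DPDK settings,
; not the vendor configuration described in this talk)
[ovs]
datapath_type = netdev                        ; user-space datapath instead of the kernel one
vhostuser_socket_dir = /var/run/openvswitch   ; where vhost-user ports for VMs are created

[securitygroup]
firewall_driver = openvswitch                 ; kernel iptables firewall does not apply to vhost-user ports
```

The point the talk makes is that 6WIND's approach hides this plumbing entirely: the Neutron API stays vanilla and only the datapath underneath changes.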
With the 6WIND implementation behind the solution, we're much closer to line rate: we're seeing 8.5 Gbps, and there are some knobs that can be tweaked to go even further. We're also using Ceph, as pretty much everyone is, to power the storage out of the box, and it's bundled with Canonical's Ubuntu Advantage package, because as a hardware provider we're not renowned for software support, so we wanted to leverage a trusted partner to deliver that support to our customers. This brings us to our platform. For anyone who wants to see it, it's a bit too heavy to carry over here, so it's at our booth, A8. It was also showcased during Monday's session from Mark Shuttleworth, where he deployed Kubernetes on a bare-metal system — an MS2900. What we have here is our flagship product: two integrated top-of-rack switches embedded in the system, along with nine compute sleds, or nodes, which are either single or dual Xeon D. We also have an E3 variant with a CPU-GPU combo for anything to do with media and transcoding workloads. In this case we're using the single Xeon D, and on the first node we deploy Canonical's tooling to handle the deployment of the solution — in this instance, we're choosing to deploy OpenStack. We follow the OpenStack Foundation guidelines of having three controllers and two Neutron nodes for a fully redundant environment. 6WIND's DPDK version of Open vSwitch is also deployed on top. The nice thing about their solution is that it remains transparent to the environment: we keep the Neutron API vanilla, as it is in OpenStack, and behind the curtain it's 6WIND's Virtual Accelerator — a tuned-up OVS — that's in the platform. And lastly, we have a sled running Landscape for user management and monitoring.
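A layout like that — three controllers, two Neutron nodes, control-plane services in LXD containers — is the kind of thing Juju expresses as a bundle. The fragment below is a heavily trimmed, hypothetical sketch using public charm-store names; the machine mapping and unit counts are illustrative, not Kontron's shipping bundle:

```yaml
# Illustrative Juju bundle fragment (hypothetical, not the vendor's actual bundle).
series: xenial                          # Newton-era Ubuntu LTS
applications:
  keystone:
    charm: cs:keystone
    num_units: 3                        # three controllers, per the redundancy guideline
    to: ["lxd:1", "lxd:2", "lxd:3"]     # control-plane services live in LXD containers
  neutron-gateway:
    charm: cs:neutron-gateway
    num_units: 2                        # the two dedicated Neutron nodes
    to: ["4", "5"]
  nova-compute:
    charm: cs:nova-compute
    num_units: 2                        # remaining sleds act as hypervisors
    to: ["6", "7"]
```

Deployment then comes down to pointing a MAAS-backed Juju controller at the bundle, which is also what makes scaling a matter of adding machines to MAAS rather than re-architecting anything.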
This leaves two sleds for compute nodes, to run a small proof of concept, trial, or benchmark, but it also allows you to scale up further down the platform's life cycle and keep expanding the services you need. Maybe you need more throughput out of Neutron, or you're planning to deliver Load Balancing as a Service from Neutron and want to add an additional sled for that — you're able to grow it. Some of our customers are actually stacking six of these and delivering them as an appliance to their own customer base. We have a really strong presence with ISVs who are trying to virtualize their dedicated appliances, and where they see our platform shine is that we take the infrastructure problem and solve it, so they can focus on how to roll this out to their customers without worrying too much about virtualization or containers. It comes in as a turnkey solution where they know exactly what that box is running — closer to the dedicated-appliance model they're comfortable with — while still leveraging the advantages of containers, virtual machines, and so on. And we have the entire portfolio, right? This illustrates a typical 48U rack that we're able to drop at a customer's premises. We have two top-of-rack switches — you can add a third one for management if you like — which are 100 Gig; we have other variants as well. We propose a two-system deployment of the MS2900, which allows for a lot of versatility at the controller layer. We also have commodity hardware: your run-of-the-mill 2U dual Xeon E5, in 12 x 3.5-inch and 24 x 2.5-inch configurations, for a multi-tiered Ceph storage cluster — based on open source, so you can scale the amount of data you want to store while keeping the price at something reasonable. And customers can use the MS2900 or any other system out there.
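To give a sense of scale for such a storage tier — with purely hypothetical drive sizes, since the talk doesn't quote any — here is the usual back-of-the-envelope math for usable Ceph capacity at the default 3x replication:

```python
# Back-of-the-envelope usable capacity for a replicated Ceph tier.
# Drive counts match the chassis described in the talk; drive sizes and
# node counts are assumptions for illustration, not Kontron specs.

def usable_tb(drives: int, drive_tb: float, nodes: int, replicas: int = 3) -> float:
    """Raw capacity across all nodes divided by the replication factor."""
    return drives * drive_tb * nodes / replicas

# One chassis variant per tier, three storage nodes each (assumed):
hdd_tier = usable_tb(drives=12, drive_tb=8.0, nodes=3)   # 12 x 3.5" HDD capacity tier
ssd_tier = usable_tb(drives=24, drive_tb=1.92, nodes=3)  # 24 x 2.5" SSD performance tier

print(f"HDD tier: {hdd_tier:.0f} TB usable")   # 96 TB
print(f"SSD tier: {ssd_tier:.2f} TB usable")   # 46.08 TB
```

In practice planners discount these numbers further — Ceph clusters are kept below the near-full ratio, so usable capacity is typically budgeted at around 85% of this figure.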
The basic guideline we give our customers is: as long as there are two NICs and a dedicated IPMI port, you're probably fine running on it. We're not expecting greenfield across our whole customer base, and it doesn't make any sense to throw out a rack or two of hardware just because it wasn't meant to work with the solution. So we intend to play nice with the other hardware vendors out there. The biggest strong point on that front: in an 8U form factor, we're able to pack more than 2,000 threads and nine terabytes of memory — in 8U, again — with a very low power footprint. And coming in September, we're looking at doubling those numbers: just north of 4,500 threads and close to 20 terabytes of memory, again in an 8U form factor. This is really pushing the boundaries, making sure that power efficiency and density can be deployed out there. So what was our mission statement when we were figuring this out? You're probably thinking, "yeah, okay, it's another OpenStack in a box, we've all heard this 20,000 times." But here's the thing: everybody has tried to keep something locked in, to lock the customer in, and the reality is that everyone's sick of it. Nobody wants to commit to one vendor, go into lockdown mode, and be stuck with a solution three years from now when they want to leverage new software or new options out there. We fully understand that. So we wanted to make it turnkey: remove the infrastructure challenge from those ISVs and get them up and running out of the box in a matter of hours. It's about focusing on deploying your services instead of wondering how the hell you're going to get it all working in a week. We also wanted a modular approach: we have quite a versatile hardware footprint, and we made sure it's able to scale across the data center. It is not limited to one rack — it scales across multiple racks.
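Those density figures check out if you assume top-bin Xeon D sleds. The arithmetic below uses assumed per-sled specs — a 16-core/32-thread dual Xeon D configuration with 256 GB of RAM per sled, which are my assumptions since the talk only gives totals — for four chassis stacked in 8U:

```python
# Rough density math for four 2U chassis (8U total), nine sleds each.
# Per-sled specs are assumptions chosen to be consistent with the quoted totals.

chassis, sleds = 4, 9
cpus_per_sled = 2            # dual Xeon D variant (assumed)
threads_per_cpu = 32         # 16 cores x 2 with Hyper-Threading (assumed)
gb_per_sled = 256            # assumed memory per sled

threads = chassis * sleds * cpus_per_sled * threads_per_cpu
memory_tb = chassis * sleds * gb_per_sled / 1024

print(threads)               # 2304 -> "more than 2,000 threads"
print(round(memory_tb, 1))   # 9.0  -> "nine terabytes of memory"
```

Doubling the per-sled core count and memory, as the September refresh implies, lands right around the quoted 4,500-thread / ~20 TB figures.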
Some of our customers like the fact that we can daisy-chain it, but we made sure we're able to spread it across the data center, because nobody is going to dedicate a specific number of racks up front — ideally, you just keep populating as you go along, right? We also wanted to play nice with the commodity off-the-shelf hardware that's available right now. And our strongest mission statement was aligning ourselves with the communities out there. We wanted the platform to remain fully open source, and that's why we chose Canonical on that front: there are no shenanigans in the scripts and tools. It's open; anyone can use it. We actually worked with 6WIND to get some enhancements made, and we didn't keep them to ourselves — they were published back to the Juju charm store, and now everyone can leverage the enhancements we requested. And there's no alteration to the OpenStack distribution or any of the other software options we ship, because, again, we don't want the customer wondering, "when I'm about to upgrade this platform, am I going to run into issues? Am I going to be able to upgrade? Am I going to be able to leverage the 2,500 developers who are committing and dropping a new release every six months?" No one can fight against that, so we made sure we deliver it in the system — no vendor lock-in. And lastly, upgrade capability: we made sure that was at the very heart of the product, because, again, OpenStack evolves every six months, and if you're looking at Kubernetes, which is every three months, well, you've got to keep up — otherwise the platform becomes stale very, very rapidly. And of course, selecting a partner that can go to market with us. So, a bit of info on where we're going.
Since we have a strong presence on the telco and service provider side, we wanted to have the entire picture, not just solve one small problem and let people figure out the rest. So, the Open Source MANO model — this is a simplified view, so don't freak out when you go Google it — but basically, what we've covered so far is the NFVI, the infrastructure layer, where OpenStack is a key driving force for both MANO and ONAP, the AT&T model that the Linux Foundation is now taking care of. We also wanted to enable the SDN part. Yes, there are deployments that are your typical Layer 2 and Layer 3, but there's more and more interest on the SDN side, in shaping your traffic dynamically and being able to react to whatever is happening in your network. So we're looking forward to bringing OpenDaylight (ODL) and ONOS into the equation as part of the environment. ONOS is more driven toward CORD — the central office re-architected as a data center. So we went from the mainframe, decided it wasn't cool, split everything up, and now we're pretty much going back to the mainframe; but ultimately this makes a lot more sense for management and centralized control. On the orchestration front, there's OpenStack Tacker, which is really gaining ground, and there are also customers looking for branded solutions out there, so we're looking to play nice on both fronts and to include some of our partners who are here today. We have a working demo showcasing F5, Fortinet, and Brocade Vyatta all playing nicely together in a service chain — which is usually the part where it gets uncomfortable when it comes to SDN and NFV. But we wanted to solve that and show something that actually works. So that's where we're going on the SDN and NFV side.
Further down the road — as we look at becoming more of a solution provider rather than just a hardware vendor — we're enabling OpenStack for media workloads. Right now those customers are dealing with traditional bare metal; they have an environment that does the job, and it works great, but it just sits beside the rest of their infrastructure, and nobody likes managing a third wheel that isn't properly integrated with everything else. So this is about enabling Intel GVT-g properly in those environments. We're also working on a standalone Kubernetes platform — again, this was demoed on Monday, up and running in 10 minutes, so that's going to be a quick one. And also big data: we found we have a really niche sweet spot for MapReduce workloads, in terms of cores, memory, and drive capacity, where we're obliterating your typical dual Xeon E5 configuration. Any application that's built for microservices and really benefits from being thread-optimized is where our platform shines. And down the road, again, ONAP and Open Source MANO: we want to keep an open mindset, because different players will select different solutions, and we want to be able to cater to both sets of customers. I'm running out of time, so if you want to come over, our booth is just in the corner, A8, and we have the platform there. Any hardware geeks — it's there, you can touch it, no worries, it's a mechanical sample; it's what we deployed the OpenStack and Kubernetes environments on. You can also have a look at our website, symkloud.com — with a K; we have a German background, so obviously that's why the K is there. And you can reach out to me directly by email. Thank you for your attention, and I hope to see you at our booth throughout the rest of the day. Any questions? No? All right, thanks.