Thanks Chris. It's great to be here. So we're doing a lot more collaboration and cooperation with Ubuntu, and why are we collaborating with Ubuntu? Because it's one of the host operating systems we've embraced within our telco NFV product portfolio. Going forward we're working a lot more with Chris's team, and the plan is to make sure all our VNFs can be instantiated and certified on Ubuntu cloud. For that it's really important to work with somebody who actually gets what we need, who is willing to work with us hand in hand and make the small tweaks we need, so that we can take a VNF and have the same SLA, the same runtime characteristics, running on an Ubuntu cloud. That's really all NFV needs to be. It's not that hard, but for some reason a lot of people come here and say: just take your application, put it in a VM or a container, and install it on third-party components. It's not quite that simple, and I think a company like Ubuntu gets it, because they've been working with operating systems down in the depths. So we've been collaborating a lot more with Ubuntu to make sure our VNFs can be instantiated and certified on an Ubuntu cloud operating system, so we can deliver the same SLA as if it were running in a single physical hardware box, while our end customers can pick whatever x86 hardware they want, from different vendors, with different PCI devices. That's really important to us, because we want to provide flexibility to our customers. We're not interested in shipping them monolithic boxes anymore; we're interested in shipping apps, apps in containers, and we want to
have a host operating system that gives us the same characteristics as if we had done the fine-tuning ourselves, and that's really what Ubuntu is able to deliver for us at Ericsson. So we're doing a lot of development and testing, CI/CD, all based on Ubuntu. Okay, thank you very much. Thank you.

So we're very excited about that work with Ericsson, and very excited to be working with telcos and customers with Ericsson in the coming months, and certainly in 2015. So let's take a step back from there. As I said at the beginning, we see OpenStack in private infrastructure as a service, public infrastructure as a service, and network function virtualization in telcos right now. Put your telco hat on for a moment: people are using private infrastructure as a service on OpenStack, with some examples in the room. AT&T with Silver Lining is a great case study here, and NTT do a lot of work in terms of private infrastructure. It's how developers want to develop apps. You may have a very large bill at the moment on AWS, and the question is how you bring that development internally. We talk to lots of customers now who will say that even though they're a telco, even though they have hosting operations, they may be spending one, two, three million dollars a month on AWS, and that really doesn't make sense for businesses that take pride in running large hosting operations themselves. These teams often move very early. We see public infrastructure as a service here in France: Cloudwatt and Numergy are two fantastic examples of clouds backed by Orange and SFR, both running OpenStack based on Ubuntu, making public infrastructure as a service available. And then network function virtualization: we're doing work in the background, involved in projects like TeraStream, and Domain 2.0 with AT&T. So that's OpenStack in the telco. Let's take one step up
again from there and think about the real problem we're trying to solve. The problem is addressing three massive changes in the industry. First, declining revenues: existing products are simply not profitable, or not growing in terms of their contribution to the business. Second, extraordinary growth in the requirement for connectivity and bandwidth, often from over-the-top providers who are consuming that bandwidth and not necessarily contributing back to the costs of building it out. And third, new providers entering the market: Google Fiber, companies like Free here in France, who are attacking the problem without some of the legacy hardware and infrastructure that the incumbents have. This is a perfect storm of challenge for the industry, and it needs to be addressed not just through NFV but through a fundamental change in both the use of commodity hardware and the use of the scale-out operations we would more typically see in a startup. To that point, I think it's worth dwelling on the WhatsApp case study. The last data I saw from Andreessen Horowitz, on Friday, is that WhatsApp are now handling 7.2 trillion messages a year; the total number of text messages sent in the world is 7.3 trillion a year. It's essentially the same number of messages. And what's the number that really stands out about WhatsApp? It's not the 19 billion dollars they sold for, although that is a big number. The number that stands out to me is 32. Anyone know why? 32 engineers, across all development and all operations. They've now got 550 million users, so that's roughly one engineer, for development and administration, per 14 million users. Take that same ratio and apply it to British Telecom: how many people are they now allowed to employ to do messaging at BT? Six, for all development of all voice, text and video messaging. Take the number to AT&T and it doesn't get much better: you're at 24. That's the challenge we have to address.
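As an aside on the arithmetic: the per-engineer ratio quoted can be checked in a couple of lines. (The 550 million figure is the one given on stage; the "one per 14 million" ratio matches the earlier, widely cited ~450 million-user milestone, so both are shown.)

```python
# Back-of-the-envelope check of the WhatsApp staffing ratio quoted above.
ENGINEERS = 32

users_on_stage = 550_000_000
per_engineer = users_on_stage // ENGINEERS
print(f"550M users / 32 engineers = {per_engineer:,} users per engineer")

# The "one per 14 million" ratio implies roughly the ~450M-user milestone:
implied_users = 14_000_000 * ENGINEERS
print(f"14M per engineer x 32 engineers = {implied_users:,} users")
```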
So, on that note, thinking about that problem as being much bigger than just virtualizing the VNFs we have: we have to fundamentally reinvent how telcos develop and then distribute services. Martin?

Thank you, Chris, for the introduction. So if we look at those new players, we would actually love more telcos to behave in exactly the same way. What do they do differently? This slide shows how many machines there are per human. There are no numbers on there, but if you have a data center yourself, how many servers are there per human operator? Any guess? If you're running a good data center, it's about 200. Now if we look at these companies, how many are there per human operator? It's actually 20,000. One person is responsible for 20,000 servers. How can they do it? That's something we want to explore later on. Another thing that's happening is this avalanche of new software coming out. All of it is open source, so you can go and try it, but you just don't have enough hours in a day to try all those new things, and it's completely different from how you used to design software. Managing this, getting to use it in exactly the same way as the people who wrote it, is really important. You need to be very lean and have the right ways to apply this everywhere in your organization; it's not easy. So how do we want to help you? The first thing the Googles and the Facebooks and others have taught us is that you have to think scale out instead of scale up. Bigger boxes are no longer enough: if you're dealing with 500 million users, you need to design things differently. This is exactly the reason why our operating system is used so dominantly in public clouds: it's written for scale out. It assumes that IPs are not static; it assumes that there are changes happening all the time. We're also working
together with the Facebooks and others on Open Compute hardware: the type of hardware that is no longer installed little by little, but just wheeled in, and it's all open hardware that you can inspect and purchase as well. And of course, putting OpenStack on there makes it extremely easy to get the best out of that hardware. But what if you have to deploy hundreds and hundreds of servers, as they do on a daily basis? You can no longer go around with a USB stick or a DVD as you used to. This is where we want to help. We've made an open source tool called MAAS, Metal as a Service, and it basically allows you to provision operating systems on that bare metal from zero. It provisions Ubuntu, but you might be surprised to know that it also does Windows, and pretty soon CentOS and other operating systems as well. So if you need to provision large numbers of servers, you should definitely take a look at it. But the real problem is: what do I do with those virtual machines once I can get them in minutes? It's like climbing a mountain. It used to take you three months to get a server; now you get it in three minutes. Problem solved? Well, you actually find there's a bigger mountain afterwards: you now have to integrate the software, and that takes you three, six, twelve months. We think it should actually take you minutes, not months, to deploy complex solutions, and for that we made a new tool, also open source, called Juju. We wanted to come and show you how it works with workloads that interest you: not the WordPresses and the other types of things you might see deployed everywhere, but telco workloads, deployed, integrated and scaled in minutes. So it's demo time.

Thank you, Martin. My name is Samuel, and I'm going to present an integration of a global voice service system. If you want, you can take your phone and call that number, and you're going to be asked a few questions. You can do
that at any time, from now until the end of the demo. So let's have a look at this workload. I want people to call a system, be asked a few questions, have the answers recorded and processed, and see a dashboard of the results. To do this I'm going to need 15 VMs in this example. First I need phone numbers; I don't need a VM for that. Then I need an IMS, the core of the telco network. I need the answering machine, so an IVR. Then I need to collect the information, process it, and send it to the dashboard. That's good enough to start with. Eventually you're going to have to move that to production, so you then want to add monitoring, log management and NTP, and that's the bare minimum; eventually you're also going to have a backup solution. If you look at it, in a standard environment this is probably going to take you a couple of months. What we want to do with smart tools is bring that down to 30 minutes. How do we do that? We think we can design once and deploy many, so let's move to the demo. Okay, this is the infrastructure we are deploying; it's currently running on Amazon. You can see on the left here three nodes, which are the IVR you saw before; this is the core of the network, and my screen is very small so we can't see the other bits here, the move-to-production bits. I can export all of that: it saves a bundle, a YAML file, and you can send that file to anyone, and they can import it on another Juju instance. I've got one prepared for you with all the bits and pieces it's going to deploy, and there it is. That is the environment of your client: it's another environment, it's your local computer if you run it on containers, it's anything. You design a workload once and you can ship it to someone else, and they can run it immediately, adding just a few bits of configuration. And that's it. So now, if you call the system, we should see those bits and pieces moving, because those are actually your calls being processed.
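The "design once, deploy many" step shown here (exporting the running model as a bundle and importing it elsewhere) can be sketched in miniature. Note this is an illustrative stand-in, not the real Juju bundle schema; the service names are taken loosely from the demo.

```python
import json

# Minimal model of what a deployment bundle captures: which services run,
# how many units of each, and which services are related to one another.
bundle = {
    "services": {
        "clearwater-ims": {"units": 3},   # IMS core (three nodes in the demo)
        "restcomm-ivr":   {"units": 1},   # the IVR answering the survey calls
        "dashboard":      {"units": 1},   # real-time survey results
    },
    "relations": [
        ["clearwater-ims", "restcomm-ivr"],
        ["restcomm-ivr", "dashboard"],
    ],
}

# "Export" the design once...
exported = json.dumps(bundle, indent=2)
# ...and "import" it in a completely different environment: same topology.
imported = json.loads(exported)
print(f"{len(imported['services'])} services, {len(imported['relations'])} relations")
```

The point of the real mechanism is the same: the whole service graph travels as one declarative file, so a second operator reproduces the deployment without re-doing the integration work.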
You can see the three questions: asking your role in OpenStack, asking which distribution you're using, and asking where you are in deploying OpenStack. The system also processes your location, so the map on the top right displays where your base operator is, so who's calling. I'll do a call for you if you want... sorry about that. Okay, that's our voice system; there's a demo-time effect. Okay, we'll do it again later. So we switch back to the presentation.

Yes, so if you walk through the conference you have probably seen SDN, software-defined networking, about 50 to 100 times. It seems a lot of companies think there is a reason to do networking differently, and we actually want to help operators find which one is best for them. That's why we built the OpenStack Interoperability Lab, OIL. Thanks to Juju we can deploy OpenStack like Lego blocks, but also take out one block and put another one in. This allows you to put any SDN in there, and it's not only us that can do this; anybody can. You can choose an SDN, put it in there, test whether it's the right one for you, then deploy with another one, and another one, and another one, so in the same day you can test an OpenStack deployment multiple times with different components. We do it about 3,000-plus times every month, and we don't only do it with SDNs: we also do it with storage and hypervisors, and we do it with the latest trunk versions every day. So if you are a provider of SDN solutions or storage solutions or anything like that, and you want to test, OIL is the program you want to talk to us about, and there are some relatively big companies that are joining.

All right, so we've covered the concept that we want to accelerate productivity within development teams, and that means being able, as a team, to access lots of modular blocks, bring them together, and then roll out services quickly. And the other part we're
saying is that there are more and more parts of your infrastructure, with OIL, that you can then use and test. So what about OpenStack itself? What does Ubuntu, as an operating system and then as a provider of OpenStack, bring to this ecosystem? We really think of just two use cases. One is a reference OpenStack, which for many data center clouds is easily adequate, and there are two ways we help telcos there. One offer is BootStack, where we'll actually operate a cloud for a telco so they can focus on running services on top and reselling them; or we'll help support people actually building those clouds themselves. But as we go into the real NFV use cases, OpenStack itself, and in fact distributions in general, need work to make them really suitable for the type of use cases that people like Alan have to address underneath the core network. For that we talk about extreme OpenStack, and for us that's an engagement where we'll work with a particular telco and, let's say, look at IPv6 support throughout the whole of OpenStack and whether it has been properly supported; how we deal with workload isolation; how we can pin workloads to specific cores on a system; how we deal with telco-specific security requirements. All the time that we do that work, we're trying to make sure it's done upstream and arrives quickly in OpenStack itself, and then becomes available in the standard distributions. But we recognize that at the moment there's still a lot of work to be done, on standard OpenStack itself and on the distro, to make it suitable for full NFV workloads. Any questions on that, reference versus extreme, and what needs to be done to OpenStack itself? No? All right. So what I wanted to do next was bring up some very exciting guests who've been instrumental in making progress around open VNFs and getting them ready, and I first wanted to invite up Paul Drew, who's
the general manager of Project Clearwater at Metaswitch, to come on stage and talk a little bit about the project and what you're doing with it on top of OpenStack. Thanks, Paul.

Thank you very much, thank you, Chris; I'll just switch slides here. Thank you. Welcome, everybody. I'm very grateful to Chris and the rest of the Ubuntu team for giving us the opportunity to talk here. I'm going to talk a little bit about Metaswitch Networks, then Project Clearwater, and one other project we've got, Project Calico, which plays into the OpenStack networking space. So, as Chris mentioned earlier on, telecoms networks are moving towards being software based; it's all going cloud based, with generic hardware, and this is very much where Metaswitch Networks is playing. As a vendor we've been long-standing in the network, deployed for around about 30 years; we have about 650 employees, with a thousand-plus global customers, service providers, selling into OEMs as well. So we've got a long heritage of building software, and now cloud-based solutions. I'm going to talk a little bit now about the demo. This is the architecture of the demo Samuel showed earlier on. As you saw, several of you called in on your mobile; the call actually went through Truphone, a phone network based in the UK. That was the number you dialed: a UK number owned by Truphone.
Truphone then forwarded it on to Project Clearwater, which was what was instantiated via Juju, running in the cloud infrastructure, and we had Telestax there with their application. So when you made an incoming call, it hit the IMS core; the IMS core looked up the number, applied terminating services, and forwarded it on to Restcomm. The IVR you actually heard was implemented on Restcomm, which then pushed data into the dashboard, and that's what Samuel showed: the dashboard where we can see in real time the results of the survey. So that's where Project Clearwater sat in that demo and how we fit. It is basically designed as an open source IMS core running in cloud infrastructure. We took the view that we were going to build a ground-up, cloud-based VoIP platform, an IMS-based platform, using cloud technologies: we use open source databases like Cassandra and Memcached to allow it to scale, very much along the lines of the earlier discussion about how you scale a WhatsApp or a Facebook. These are the sorts of technologies they use to scale their deployments, and this is what I think we need to do in the telco space. We come from a heritage of scaling up with software, but to actually scale out, and have pools of servers with distributed databases, it really helps to design for the cloud from the start and put it in the cloud infrastructure. So that's what Project Clearwater is: an open source IMS core, supported by Metaswitch Networks. Much like a lot of open source platforms, we offer support infrastructure that fits around it, and it's built around open source and open-standard APIs. Then just one final thing we're also doing in the OpenStack networking infrastructure: Project Calico, another open source platform. If you look on the left, this is a standard diagram of an OpenStack networking stack. It is deliberately a complicated diagram; I'm not
going to attempt to go through it, but it's based around VLANs and overlays. Project Calico replaces all of that with a layer 3, routed infrastructure, much the same way the internet backbone has been built out and scaled over the last 20 years. It's a routed fabric that uses BGP for routing, so it is a completely simplified way of scaling your OpenStack networking, and we're working with Canonical to get Project Calico into the OIL program and have it be one of the options you get as part of an OpenStack distribution. So thank you very much; I'll hand back over to Chris.

Thank you very much, Paul; if you hang around, I think we'll have questions in a few minutes, that'd be great. So, Jean Deruelle, great to have you come up on stage. Jean is a co-founder of Telestax; could you maybe explain a little about Telestax and what you're doing?

Hi, welcome everyone, and thank you to the Ubuntu team for inviting me to speak about our integration with Juju and how we can scale NFV in a telecom environment. I'm Jean Deruelle, one of the Telestax co-founders, and Telestax is the company behind the biggest open-source telecommunications platform, called Mobicents. What we figured out building Mobicents over the past decade is that building telecommunication services is hard: it takes a lot of time and integration, and the number of people capable of building telecom services is very small. So we thought about how we could go from there and leverage people who know a lot about the web: how to integrate communication services into web applications, or build telecom services as easily as building web services. We wanted to expand from the traditional blue circle of telecom developers to a much wider range of developers, so that the telecom operators can use an ecosystem of partners and developers to build innovation
and help them in fighting the OTT players. So we built a solution called Restcomm, which you saw in this demo. Restcomm takes care of handling all the complicated stuff for developers: all the traditional telecom protocols like SMPP, SS7, SIP and so on. Web developers don't want to learn those protocols; they just want to build innovative services. This platform allows you to integrate all communication features into your application, be it voice, messaging or video, very easily, with standard APIs and standard languages that web developers and native mobile application developers are used to. We also saw that sometimes development itself is too time-consuming, so we built a visual tool that you can use to create new services with drag and drop. We have a number of verbs you can use to do text-to-speech, audio conferencing and DTMF recognition very easily, so you can build your IVR tree without any development knowledge and control the call flow of your calls directly from a visual tool, which is actually what was used to build the demo application you saw today. The goal is really to make it as fast as possible for telecom operators to build new services and push them to market, so that they can focus on innovation, new services, and adding value for their subscribers. All of this is supported by our open source communication platform; you will recognize typical components and network elements, and I won't dive too much into this slide, but basically Restcomm exposes all the telecom network assets in an easy way for web or mobile application developers. It's also capable of doing Internet of Things: typically we are focused on application-to-person or person-to-person, but Mobicents plays a huge role in the Internet of Things as well, as was shown by the FCC CTO, Henning Schulzrinne, at a conference that happened last month. But the next step is that operators don't have to develop everything themselves.
So we want to introduce a telecom app store, where operators can leverage an ecosystem of partners, developers and a community of open source developers who have already built applications that can go to market today. We want to help the operator install the solution and go to market right away, without months of building new applications, or millions of dollars of integration work to build new innovative services. We want them to be able to push applications that are available today from our ecosystem of partners and generate revenue right away when deploying the solution. And that's it, thank you.

Thank you very much. I wanted to bring something to light: we were actually planning to do this demo a week from now, at another event. I went on holiday last week and told the team, could you prepare a demo here? So in one week, probably even less, they actually prepared a telecom demo. That's the kind of time frame in which our team, together with partners like Telestax and Metaswitch, made it happen, and that's the kind of time frame we would like everybody to be able to turn things around in. Thank you.

Thank you very much to our guest speakers. I think we've got time for some Q&A before we do a final wrap-up. From the room, maybe some questions? Go ahead, sir. Hello; if you could introduce yourself, that'd be great. I was just about to do that. So, for the sake of completeness: my questions are solely mine, and may or may not be questions of my company, which is Oracle. I have a question for the Metaswitch presenter. I saw the architecture, but I was not really sure what the grand architecture delivers; is it like telephone services?

So basically Project Clearwater provides the IMS core, in IMS terms the I-CSCF, S-CSCF and BGCF functionality, and sits in the core network; Telestax was acting as an application server, in IMS terminology. So, coming here,
cloud computing has been new to me, for about three weeks now, and I noticed in many presentations, and I attended a couple of those, that a major challenge people face is predictability of performance: the hypervisor eating quite some time, impacting latency, impacting jitter, and causing a lot of other hard-to-control, hard-to-predict technical challenges. So I'm wondering whether within this demo you went into these, or whether it was more of a functional demo; or, leaving the context of this demo aside, how ready do you think this environment is for speech and telecom clouds?

Okay, so there are two sides. There's the signaling side: as an IMS core, Project Clearwater is just in the control plane, just signaling. And there are the media components; the media server was actually part of the Telestax application, and the media components are much more sensitive to things like you said, like the hypervisor putting a halt on stuff. If you hold up a conversation for 200 milliseconds, everybody starts going "well, what happened?"; the call quality drops and your conversation flow stops. So the media components are much more critical in terms of performance in OpenStack and cloud environments. Project Clearwater is on the signaling side, and there you need to measure the latency through the system and check whether you've got enough capacity to go through it and cope with bursts of traffic. The old classic example was American Idol: at eight o'clock they say dial into this number, and the amount of traffic in the network suddenly shoots up. But it's a less difficult challenge to cope with signaling load and signaling overload in the core of the network than with the media. To cope with the media overload you probably need things like SR-IOV, or an Open vSwitch that has been optimized, or Project Calico, to route your traffic through.
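Paul's point about measuring latency through the system can be made concrete. The sketch below computes jitter the simple way, as the spread of inter-arrival gaps around the sender's pacing; the timestamps are invented for illustration, and a real deployment would read these from the media server's jitter-buffer statistics (RFC 3550 defines a smoothed variant for RTP).

```python
import statistics

# Hypothetical one-way arrival times (seconds) for media packets sent at a
# steady 20 ms interval; in practice these come from the media server's
# jitter-buffer statistics rather than a hard-coded list.
arrivals = [0.000, 0.021, 0.039, 0.062, 0.080, 0.104, 0.119, 0.141]
SEND_INTERVAL = 0.020

# Inter-arrival gaps, and how far each strays from the 20 ms ideal.
deltas = [b - a for a, b in zip(arrivals, arrivals[1:])]
deviations = [abs(d - SEND_INTERVAL) for d in deltas]

jitter_ms = statistics.mean(deviations) * 1000
worst_ms = max(deviations) * 1000
print(f"mean jitter: {jitter_ms:.1f} ms, worst deviation: {worst_ms:.1f} ms")
```

Numbers like these, tracked continuously, are what tell you whether the virtualization layer is eating into your latency budget before your users do.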
That's where you tend to see the problems with performance: actually routing through the networking stack in OpenStack. As a company we've got other products at play in that space; that's where we've had to put most effort in, and it is the most difficult technical challenge to resolve.

So, if I may, could I ask the media company representative to answer the same question for me, please?

So, I would like to thank Paul, who already answered the part of the question about the hypervisor layer, where you can do a lot of optimization. One thing we can see is that OTT players have already built their infrastructure on the cloud, so it's happening today: they do provide voice and video on the cloud. If you look at Google, for example, providing Google Hangouts-type services, there are a lot of things you can do in terms of monitoring the media server, the jitter buffer, and what's happening in the media, to know the quality of your call, so you can improve the boxes where it runs. And thanks to Juju you can also create a hybrid environment: for example, you can run the signaling side on the public cloud, and if you find from your measurements that the virtualization layer is too much of a problem for the media, you can scale the media on typical hardware boxes with Metal as a Service.

Yeah, so I think your point, that we want predictable and deterministic performance from both the cloud and from the underlying hardware, is absolutely noted. Historically the industry has had to go to very custom hardware with very custom versions of Linux, and in fact one of the things we're excited about working on in this space right now is trying to bring that deterministic performance from a much more standard Ubuntu, and then all the way up through the cloud.

So, I'm really asking myself... I still feel there is some kind of sense of imperfection, and I think there are some different opinions. I found a lot of tuning proposals here in the sessions, and some of them call it
the "deep drilling"™ and some others call it "super optimization"®, and whatnot. But wouldn't it be beneficial, over time, if OpenStack had something like Amazon has, which is different types of instances? I heard someone proposing today that we should have tens of thousands of instance types, optimized for storage, for network, for this and that; I'm getting a headache already when I hear the number 10,000.

Yeah, I think you should get a headache. We've got time for one last comment; I know we're out of time, but Alan, go ahead.

So, I think you bring up some really good points, but most of these are just configuration. For example, things like memory and cache costs really come down to configuration. If you think a little bit differently about what vendors like ourselves have done in the past: what we're really trying to do is recreate, in the cloud, the same deterministic performance we would have on a monolithic runtime box. The way to do that is to have things like CPU core pinning, to have a specific accelerated vSwitch for a specific workload, and to take all of that and certify it on an Ubuntu host operating system. There might be some additional tweaks we need to make, in the kernel for example, on the Ubuntu host operating system, to make sure those VNF workloads actually work on any x86 platform, or even an ARM platform, or something else.

Okay, thank you very much, Alan. All right, I'd like to wrap it up and say thank you very much; we've come to the end of our session. The key messages were: let's get an industry-standard operating system out there that's cloud first; OpenStack is ready for the convergence of both the data center and the network itself; there's an explosion of innovation now happening, in terms of bringing established applications from established vendors into VNF status as well
as new players coming to the table; and we really think that we can attack that core problem we set out at the beginning, of making sure that telcos can respond to the challenges of companies like WhatsApp, with their magic 32 engineers and 500 million users. Thank you very much for your time today.
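As a technical footnote to Alan's point about CPU core pinning: on Linux the primitive underneath is the scheduler affinity call, which a process can also apply to itself. A minimal sketch (Linux-only; real NFV deployments typically pin guest vCPUs to host cores at the hypervisor level, for example with libvirt's `vcpupin`, rather than per-process like this):

```python
import os

# Restrict this process to a single core so the scheduler cannot migrate it;
# avoiding migrations (and the cache misses they cause) is one ingredient of
# the deterministic performance discussed in the session.
allowed = os.sched_getaffinity(0)   # cores we are currently allowed to run on
target = {min(allowed)}             # pick one of them
os.sched_setaffinity(0, target)     # pin ourselves to that core

print(f"pinned to core(s): {os.sched_getaffinity(0)}")
```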