Bienvenue and welcome to day two of the OpenStack Summit. Please welcome Chief Operating Officer, OpenStack Foundation, Mark Collier.

Glad we could make it here for day two. So I want to talk about a few things. And I want to first start with a quote that a lot of you are probably familiar with. It's from William Gibson. He's a futurist. Over 20 years ago, he said that the future is here, it's just not very evenly distributed. And that's a pretty powerful idea. I think that's why so many people get excited about this. But when you think about it, it's actually kind of sad, right? I mean, I would like to see the future be more evenly distributed. And I guess I'm a bit of an optimist in that I believe that 20 years from now, the future will be more evenly distributed. There are a lot of headlines that will tell you all the doom and gloom in the world, but I think that there's a lot of reason for optimism. And I think we'll look back 20 years from now at this quote and say, he was right at the time, but a lot's changed, and the future really is more evenly distributed. And so I'm going to be talking all about distributed: the concept of distributed as it applies to technology, of course, but also to society, and all the ways in which this trend is fundamentally changing everything about our world, everything about our planet. It's the biggest trend in our lifetime, in my opinion. And I think if we pay attention to it, we might see some reasons why OpenStack has real relevance in this trend. So when you think about evenly distributed opportunity, it's economic opportunity, it's technology access. Certainly, access to the internet is a fundamental part of the future. And not everybody's there yet. So William Gibson is right today. Not everybody has access to the internet, believe it or not. We are all very lucky; everybody in this room is probably tweeting right now. Well, that's not the case everywhere in the world. But over the next six years, another billion people will come online. And the thing that is really important to note is that they're not just getting online. They're all going to have smartphones. So this is not very long from now: six years from now, another billion people on the planet will have internet access on a smartphone. And that is distributed power. And when we talk about evenly distributing the future and economic opportunity, I think we have to not just limit our field of vision to the West, or the first world, or the rich countries that some of us may have flown here from or live in. We need to really think about the whole world and everybody's opportunity. And if you look at the projections in sub-Saharan Africa, you'll see that in just five years, two-thirds of people in sub-Saharan Africa will have access to 3G coverage. That's incredible. I mean, this is just something I personally am passionate about. I think that everybody should have access to the technologies and the economic opportunities of our times. It shouldn't be limited to a handful of countries or people. And so I think this is very good reason for optimism that the future will be more evenly distributed, that we're on the right path. And if you think about those phones, those smartphones, that the next billion people will have, that all of you have in your pockets or perhaps that you're typing on right now, they're really supercomputers.
I mean, the amount of power that's in your hands, that distributed power that you carry with you in your pocket, is a supercomputer. So as we think about the next billion people getting that access, everybody gets a supercomputer, right? That's pretty exciting. This is a stat I recently read in an Andreessen Horowitz presentation that blew me away: on iPhone launch weekend, Apple sold 25 times more transistors than existed in all PCs on planet Earth in 1995. That's on that one weekend. So that's just a massive transformation in terms of power in people's hands. And it's not just about these $1,000 phones that some of us may be fortunate enough to afford; especially if you're an OpenStack engineer, you're probably doing OK right now. But there are a lot of people out there that are now getting access to smartphones because of these types of initiatives. Google now has a handset that runs Android, primarily for India. It's about $100. And the Firefox phone is targeted at $35. So that's a pretty big gulf, $1,000 for a new iPhone versus $35, but the fundamental power in your hand is not that much different. If it's connecting you to people, if it's allowing you to organize as citizens in your country and effect change, it's a lot of power. So when you combine that ubiquitous connectivity, which many of us enjoy, but the rest of the world is quickly going to have access to, with a supercomputer in your pocket, that is distributed power. And I think we really have to let this sink in, just how fundamental a change this is going to be for our planet and for governments and for all of us. And it's exciting because when I talk about power, I'm not just talking about MIPS and CPU computation and transistors. We're talking about the ability to make the world the one we want. And hopefully, in my view, it's one in which the future is more evenly distributed and those opportunities can reach out to every country in Africa and beyond. And to share a few more data points here about this trend: this particular graph shows interest peaking in an application called FireChat. Now, many of you may not have heard of FireChat, but this is an application on a mobile phone. It can run on a $35 handset as well as a $1,000 handset. And this exploded in popularity in Hong Kong during the recent protests, when people found that they had limited connectivity to the internet. And in fact, they needed to stay organized. And so they used a peer-to-peer application that was actually able to keep people connected through a mesh network without even having access to the internet. So that's distributed power. It's affecting the whole world. And another example that I think was really quite interesting happened just in the past couple of weeks. Many of you may be familiar with this, but in Hungary, there was a proposal to tax the internet. So remember that of that distributed power, one of the two key elements is ubiquitous connectivity. So they weren't going to cut off the internet. They were just going to tax it. And people were not happy about that. In fact, they took to the streets with mass protests. Do not take away my internet, they said. And in fact, they had to cancel this internet tax. It was going to be a tax on bandwidth. And so the question you have to ask yourself is, in 2014, and this happened in the last two weeks, where is the power right now? Is it with the guy in the suit? Or is it with all those people in the street holding up their cell phones?
I think it's pretty clear that the power is shifting. It's becoming more distributed. And this is something that's affecting democracy in a positive way, in my opinion. And I think that's pretty exciting. So once upon a time, all you had to do was be the guy in the suit. And that's no longer the case. I don't have a suit on, thankfully. But anyway, I think these are all just signs of this trend of distributed power. But it's not just about citizens and democracy and organization and making the world we want. I think there are a lot of signs that economic opportunity is starting to become more distributed. This is a really cool excerpt from an amazing report that Bill Gates put out. Now, many of you Linux faithful, once upon a time, probably had Bill Gates on your dartboard. But he's gone on to do some pretty amazing things with his billions of dollars. And this report I found incredibly inspiring; a lot of very good reason for optimism. The picture on the left is Nairobi in 1969, and on the right is the same city in 2009. So I think that there is a lot of evidence that opportunity is spreading, and it's not limited to just a handful of countries. And in my view, that's a very positive thing. And I think some of the technologies driving this are ubiquitous connectivity and mobile computing, and of course, we know that all those experiences are backed by cloud. So eventually I have to talk about cloud. I promised Jonathan that I wouldn't just use this time to talk about my economic views. So we're actually gonna move down a level, from humanity to IT. So I guess we're gonna have to talk about IT eventually. And if there's any time left, I'll talk about OpenStack, but no promises. So, IT. Where does this trend towards distributed power play out in IT? My thesis is that it's playing out everywhere, all around us. IT is certainly not immune to the trend. In the bad old days as an employee, IT said, I built your Windows machine, Dave. And you took your Windows machine and applied the 300 patches to it, and off you went, right? You didn't have a lot of choice. And many of you probably didn't want a Windows machine, but that's what you got. And times are certainly changing in that regard. Jonathan mentioned this yesterday in his keynote, that power is distributing to the business units, right? They are making more decisions about technology than ever before, because it affects the bottom line, because technology is strategic, because software is strategic, and central planning committees are not what they used to be. And that's A-OK with me. I think you'll see that the power actually is distributed even further than departments, down to individuals, right? I'm sure many of you, if you didn't like the phone that your company gave you, or they didn't give you a phone, you went and bought the one you wanted, or a laptop, et cetera. So that's certainly a trend in the IT world. And to really boil this down to my core belief here: over the long run, distributed beats monolithic. Now, this does have some applicability in cloud computing; I promised I would eventually talk about cloud. So let's look at the cloud computing market and ask ourselves, how is this trend towards distributed power playing out there? And let's be honest, there's a monolith in the room that we can't ignore, and they're a very powerful company. And look, they have some amazing technology that a lot of people have found valuable.
But the great irony, I think, of cloud computing is that while cloud architectures are distributed by nature, in fact, we used to call it distributed computing before some marketing guy came along and said, you know, cloud is a better name for it. So the architectures have been distributed, but not the power; it's been fairly dominated by one player, and the headlines will certainly bear that out, right? Amazon dominates, dominates, dominates. I mean, this is what we've been hearing in the headlines for a while, right? And they have a very impressive track record. However, if we take a little bit closer look, it's fair to point out that these headlines are all from 2013, and things move fast in a distributed world, right? I mean, a month ago, Hungary was gonna tax the internet, and now, after protests, they're not. So things move quickly, and headlines from 2013 are not necessarily the best indicator of what's really happening on the ground. And so, for me, I would say that if one monolithic provider were enough, then what are all of you doing here? You probably have some other opportunities, some other things you're interested in besides Amazon. And in fact, if we look forward, past the headlines, and look more at the trend lines, I think you'll see that OpenStack, which is a distributed effort by design, built by all of you, used by all of you, from hundreds of companies, is on a pretty good trend here. This may not have shown up in the headlines yet, but in the trend lines, I would say that the distributed power of OpenStack, built by many different people and used in many different use cases, is a pretty powerful thing. And you know, Amazon is certainly impressive, but I think one provider is simply not gonna be enough. And just yesterday, we heard from some very impressive users, BMW, Time Warner Cable, BBVA, who decided that they wanted cloud, but they needed it in their own data center. This is not unusual. This is a very popular model for cloud computing, and OpenStack's one of the ways to do that. And so, because software is eating the world, as everybody has been saying for a while, it's no surprise, right, that everyone's gonna have a little bit different cloud strategy, because it affects every business. So there's not gonna be one cloud strategy that's gonna work for everybody. There's not gonna be one cloud provider that's gonna work for everybody. And in my view, what we really need is not one vendor; we need one stack. And that's actually what we can collaborate on across the industry, with input from users. And in this world, we believe that the distributed trend is gonna play itself out with, of course, distributed computing, but certainly with a number of different vendors. You know, we have over 400 companies supporting OpenStack. This is one of the major reasons that we hear users are interested in adopting OpenStack: they have a lot of choice, as Jonathan talked about yesterday. We certainly have a lot of different models, whether it's a distribution, an appliance, a public cloud, a private cloud, a hybrid cloud, and I'm excited for you all to hear from some of our users that are gonna be speaking today, because many of them are pushing the envelope on the many different models in which you can use OpenStack to meet your needs. And it's certainly distributed by vertical. You know, we have banks, we have film and television.
Many different industries are adopting OpenStack in their own way. And ultimately, distributed R&D is what builds such a great platform at such a fast pace. We've had over 2,000 developers contribute to OpenStack, from 100 companies, from all over the world. And so at the end of the day, my thesis and my claim to you is that power will be distributed at every level, including cloud computing. I do not think cloud will be immune to this trend. And I think power is actually already distributed when we think about cloud computing. So let me share a little bit more data with you. I love data and graphs, and maps for that matter. So hopefully you'll enjoy a quick tour through the wide world of cloud computing. So this map: does anybody know what these dots represent? Anybody? AWS, yeah. So these are data centers, or regions, where Amazon has public clouds. And you might look at this and say, well, that's very impressive. They have a lot of capacity. They're in every region of the world. But if that capacity is not in the country that you need, then it's not really capacity, right? Like if you need a cloud in France and someone comes to you and says, don't worry, I've got one in Germany, that claim is a little bit suspect. And I would say that it's not really capacity; it's audacity to tell you that that's good enough for you. Don't worry, we figured it out, we're in Germany. Well, that's great if you need the technology they have and you're in Germany, but if you need a cloud in France, it might not be enough. And so if we look at all of the public clouds we have today powered by OpenStack on the same map, we have multiple clouds in Germany, multiple clouds in France, multiple clouds in the UK, and in many other regions of the world, Mexico City among them, and I think that's pretty exciting. In fact, just to highlight one example, there's a recent OpenStack-powered public cloud that you should really learn more about, because they're doing some really innovative things. It's called Cloud and Heat. Now this is a company that is actually distributed in terms of how they deploy their cloud, into multiple buildings. And the reason why they have these buildings distributed throughout Germany is that they are using the excess heat from the servers to heat the buildings, hence Cloud and Heat. So there are many different ways, many different models, many ways to innovate. And that is certainly what we're seeing with the OpenStack public clouds. And I believe that we will see many, many more throughout the world when we reconvene at our next summit in just six months. And last but not least, if you add in all the private clouds running OpenStack, just the ones we know about, the ones many of you have been nice enough to tell us about in our user survey, I think you'll see that, in fact, OpenStack is everywhere today. And that's just evidence that one provider is not enough. I think that's pretty self-evident at this point, or we wouldn't be seeing clouds all over the planet. And at the end of the day, one thing I want to make clear is that I didn't get up here today to bash Amazon, believe it or not. I actually think that they're a very impressive technology company, and in fact, two of the three users we're gonna be hearing from today are using Amazon with OpenStack. And that's great. And contrary to what you may see in the headlines, OpenStack actually has no natural enemies, believe it or not.
Open source is really not about enemies. It's about helping everybody use the technology in the way that they want. And in fact, much like human beings have no natural predators, OpenStack has no natural enemies. Amazon's not an enemy; there really are no enemies. OpenStack is available to everyone for free. Anyone can come to the summit, provided you buy your ticket in time. So remember that next time. But we want everybody here. It's a big tent, and that's the nature of open source. And sometimes that gets a little bit lost in all the headlines, but at the end of the day, it's exciting to hear from users today who are using OpenStack with Amazon. And that's absolutely the right thing to do for them. So if we look at OpenStack itself, the technology: I promised I would maybe get to OpenStack if we had enough time. So all right, a couple of minutes on OpenStack. The technology itself is distributed; distributed computing and cloud are really kind of one and the same. And I believe that if we look forward a couple of years and think about where OpenStack is going to evolve to, it will itself become more distributed as a technology. There are a lot of discussions going on about this in the design summit this week, with the technical committee and a lot of the leaders in the community, about how we can make it easier for our users to mix and match and compose different services made up of different OpenStack projects. Because you may not need every single project to solve your need. So I don't think OpenStack itself, as a technology, will be immune to this trend. And I think we'll see stronger API contracts and more emphasis on how individual components can be mixed and matched to meet the needs of different people. Because at the end of the day, OpenStack is not trying to be a monolith any more than anyone else is. If we wanna be on the right side of history, we should be thinking of ourselves as distributed all the way down. So the last thing I wanna do is talk a little bit about the summits. Now we are certainly a very distributed community. We have people here in this room from 59 countries. And if your country is not on this map, please come see me afterwards and we'll color it in. We have orange crayons in the back. But I think I got them all. So this is how diverse and how distributed our community is. And it was very important that we brought our very first summit in Europe to Paris, because we have a huge community here. But it's not just to serve the needs of the community in Paris; we all came from across the globe to be here. And that's the way we manage our summits. They're all global by nature, but we do distribute them around the world. A year and a half ago we went to Portland, and then we moved on to Hong Kong, our first summit outside the United States, a huge milestone. And of course, Atlanta six months ago. And today we're in Paris. So those of you who are astute observers may have noticed on the back of your hoodies that in fact the road does go on from Paris, and there are a couple more stops planned for next year. So a few of you have probably deduced where we're going next year, but I'm very excited to tell you that we are going to Vancouver in May of 2015, which is a gorgeous city. And I hope all of you can make it and take that train. There's not an actual train; it's a metaphor. Stay with me here. And then on to Tokyo.
And what's so exciting to me about this trend, and I'm into trends, if you haven't figured that out by now, is that by the time we get to Tokyo, this will be our third consecutive summit outside the United States, which is just absolutely fitting for a distributed community working on a distributed cloud platform. And I hope that that trend continues, and I hope to see you all in Vancouver and Tokyo and around the rest of the conference. Thank you very much. Next up, to talk about even bigger thoughts, is Tim Bell, who runs infrastructure for CERN. So I'm very pleased to bring out Tim Bell. Tell us what CERN's doing.

Thanks, Mark. Bonjour à tous et à toutes. Thanks a lot for the chance to talk to you about CERN's experiences as we move towards an environment around OpenStack. People often describe this as a journey; many of the user stories describe it in these terms, and it really is a cultural and technology transformation. So what is CERN? CERN is the European Organization for Nuclear Research, an organisation that supports 11,000 physicists from around the world, a worldwide collaboration taking on difficult problems. And these scientists use the facilities at CERN in order to understand the universe. So this is basic research: what is the universe made of? How does it work? There are, however, spin-offs that come from this. Many of you with smartphones will be using capacitive touch screens, invented at CERN in the 1970s, and the World Wide Web came in the 1990s, but these are not the focus of the work we do. So what do physicists worry about when they wake up in the morning? We had a great event in July 2012, where two of the major experiments at CERN, CMS and ATLAS, stood up and independently, without talking to each other at all, announced a fundamental particle, the Higgs boson, with the same mass. That counts as a scientific confirmation, and it's been described as being the equivalent of landing a man on the moon. At the same time, on the personal side, Professor Higgs and Professor Englert, who in the 1960s had come up with these ideas, were able to go to Stockholm to collect the Nobel Prize in 2013. So that's 50 years between an idea and actually confirming it as a result. However, we're not finished yet. There are some major questions that we have around the universe: what it's made of, what the standard model looks like that describes how the particles fit together. Among the things we're puzzled about is, why don't we have more antimatter? The universe started off with a big bang, lots of energy; we ought to have equal amounts of matter and antimatter. Luckily, we are largely matter; there is some antimatter out there, but it's really very small. So we participate in various experiments, such as this one attached to the International Space Station, which, being outside of the Earth's atmosphere, can observe antimatter particles coming in from outer space. But there are other problems we're facing. We've lost 95% of the universe. When we look at the planets and stars, and how they move and how the universe expands, we know that the universe should be a certain mass. However, when we actually count the planets and the stars, we see that we've only got 5%. There's something out there, dark matter and dark energy, which has to be present to describe why the cosmos moves as it does. Looking out to future discoveries, there are some really interesting questions about gravity.
We can describe three of the other forces very well with the standard model, of which the Higgs is now a confirmed part of the jigsaw, but gravity is a real problem. We suspect there are particles called gravitons, and we suspect that these briefly appear in our part of the universe, the four dimensions that we perceive, but then could potentially be moving into other dimensions, a bit like if you were on a tightrope and you saw an ant, and then the ant disappeared as it walked around the other side of the tightrope. So as we take the LHC further, we hope to be able to discover some of these particles and understand the universe further. So when we're faced with a problem such as this, how do we solve it? The first thing to do is to bring together a large community, and the second thing to do is to design experiments. The LHC, the Large Hadron Collider, was conceived in the 1980s. At the time, a huge number of technology problems were confronting us, but we chose to set off on a path to build a 27-kilometer ring, 100 meters underground, straddling the border between France and Switzerland, so that we'd eventually get to a point where we could construct these experiments. If you ever get a chance to go underground, and we had around 80,000 people come over to an open day at CERN in 2013, what you'd see is these blue pipes. They are actually surrounded by magnets, and inside there are two one-centimeter tubes. The magnets themselves are cooled down to minus 271 degrees centigrade, two degrees above absolute zero. And the tubes inside have a vacuum which is 10 times less dense than on the moon. In those tubes, we send round protons, hydrogen nuclei, in two directions. And at four places around the ring, we cross the beams. Where the beams cross, that's where the four detectors sit. These detectors can be viewed as digital cameras. The slight difference between these and your standard Instamatic is that they are roughly the height of Notre-Dame, they weigh about the same as the Eiffel Tower, 7,000 tons, they are 100-megapixel cameras, and they take 40 million pictures a second. That creates, among other things, some great pictures. It also creates one petabyte a second of data. To handle this, we have massive computer farms 100 meters underground, filtering this data down to levels that we can record, looking for the patterns of the physics that we want to investigate. However, we then have to record this data and analyze it in detail. So how do we do this? In 2014, in the CERN Computer Centre, we've got around 100 petabytes of data, primarily stored on tape. We're currently recording our last year of running before we start the upgrade of the LHC to higher energy, at around 27 petabytes a year. 11,000 servers, 75,000 disk drives. We've got some people who are pretty busy doing disk drive swapping. Looking out to 2015, we're going to be doubling the accelerator energy. That will certainly mean a significant increase in data rate. However, as always, we look forward further than that. And when we look at the plans for how we can use the LHC and increase the energy, we're looking at 400 petabytes a year by 2023. The compute power needed is likely to be around 50 times what we have at the moment. So with that prospect, we clearly have to have a computing environment which is reasonably flexible, to be able to address these kinds of needs and to perform the computing necessary for the physics.
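Those numbers imply a staggering filter ratio. Here is a quick back-of-envelope check in Python, using only the figures from the talk; the continuous-running assumption is mine and deliberately overstates the raw total, since the LHC does not collide beams every second of the year.

    # Back-of-envelope scale of the LHC filtering problem, from the figures above.
    SECONDS_PER_YEAR = 365 * 24 * 3600      # about 3.15e7 seconds

    raw_pb_per_second = 1.0                 # "one petabyte a second" off the detectors
    recorded_pb_per_year = 27.0             # "around 27 petabytes a year" kept on tape

    # Deliberately naive upper bound: assume the detectors produce data all year.
    raw_pb_per_year = raw_pb_per_second * SECONDS_PER_YEAR

    reduction = raw_pb_per_year / recorded_pb_per_year
    print(f"raw upper bound : {raw_pb_per_year:.1e} PB/year")
    print(f"recorded        : {recorded_pb_per_year} PB/year")
    print(f"=> the farms keep roughly 1 byte in every {reduction:,.0f}")

Even allowing generously for beam downtime, the underground farms are throwing away all but about one part in a million of what the detectors see.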
So in the Geneva data center, we have a nice data center. It used to have one mainframe; it used to have one Cray, and it was great. It's got a raised floor; you can walk around underneath. However, when you put in standard industry servers now, you can only put in a small number. People ask, when are we going to fill up the rest of the racks? And we just can't. Six kilowatts per square meter is the maximum that we can cool in this environment. For those of you that are interested, the center is actually on Google Street View, so you can go in and wander around. It had about 25,000 people through as tourists last year, so it's actually a tourist attraction. Since the facilities are paid for by the people of Europe, it's only fair that they should also be allowed to come along and see what we're using the money for. The center itself we tend to call the Geneva data center, but actually it's in France; we're just over the border. And when I say just: CERN is an international organization, so the data center and my office are over the border, and I then walk 50 meters over to the restaurant on the other side to get my cup of coffee, which is a Swiss cup of coffee. However, just the center in France isn't enough to address these requirements. Clearly, we were at a point where upgrading that data center would have been a significant investment. So instead, what we looked to do was to expand the computing facilities using other member states' facilities. We asked the countries contributing to CERN to propose to us a data center location, and Budapest in Hungary was chosen. We have 200-gigabit line connections between the two sites. So clearly we were rather interested in the discussions around the internet tax in Hungary, since that would have caused us a significant cost. So the good news is that we've got a new data center, and the physicists are pleased because they've got the possibility of additional computing resources. The bad news is that in today's economic times, we can't be asking for more staff. Equally, given that we want to scale out the computing, we have to make sure we use the resources we've got to the maximum. We have legacy tools that we wrote 10 years ago that we use to manage the data center. For those of you that are used to looking after 100,000 lines of Perl, it's not something that gets you up early in the morning to come into work. And user expectations are being set by public cloud services. They don't want to fill out service tickets. They don't want to wait weeks while machines are provisioned and cabled up for them. They want to click on the interface, get a cup of coffee, and come back to a working system. So how could internal IT be providing them these kinds of facilities? What we did was to challenge some of the fundamental principles of an organization such as CERN. We're a leading research organization from the physics point of view, but from a computing point of view, we're actually no longer leading edge. There are many other organizations running at scales beyond ours. So we shouldn't need to do things that are special. We shouldn't be producing custom requirements lists that then mean we have to produce custom solutions. And we need to address the staffing question. People are accustomed to a situation where you get double the computing capacity every 18 months. With people, on the other hand, it's actually quite difficult even to maintain the current level as technical debt accumulates, and they therefore need to maintain as well as advance.
At the same time, culturally, we wanted to find open source communities. It's very much in the culture of the organization to be contributing to open source, and we wanted to learn from those communities, to contribute back the areas that we felt were of general interest, but above all, to use those communities to drive cultural change within CERN. So what did we do? We sat down for a few weeks, did a lot of prototyping, and selected a tool chain around these areas: Puppet for configuration management, Kibana and Elasticsearch for monitoring, Ceph for storage, and then, as the key part of this, OpenStack, using the RDO distribution, in order to bring a flexible and agile cloud to our users. So where are we now? We started off with what was pretty much a research project in 2011, with Cactus. For those of you that tried it, it was an interesting experience, but it was already clear that the rate of maturity of the software was going to exceed the speed with which we could get our organization ready for production. So we started the investment, doing the tooling necessary, the training necessary, and in July 2013 we went into production with the Grizzly release. By production, I mean that when someone creates a virtual machine, we promise we will maintain an environment with that virtual machine in the future. We currently have four Icehouse clouds at CERN. The main CERN one is actually 75,000 cores, because while I've been away traveling for the last week, the guys have installed another 5,000 cores behind me, and I didn't have time to update the slides. We've got three other instances at CERN: those large computer farms I talked about next door to the accelerator. While the accelerator's being upgraded, those farms are idle. So what did the guys do? They spun up OpenStack, 45,000 additional cores, 100 meters underground, in order to deliver additional simulation capacity for the physicists. Current outlook: we have about 2,000 additional servers on order, and we'll be hitting probably 150,000 cores in total on the site, between Geneva and Budapest, by the first quarter of 2015. All code that we've written that is of any interest to the community we have submitted upstream; all the code we feel is not of general interest is on GitHub, under the CERN Ops repository. People are welcome to browse it and see if it is of interest. The two areas that we've looked at in detail are Nova cells and identity federation. Cells allow us to scale to the sort of size we're talking about, and also to look out at the challenges coming by 2023. Cells let you build up small units of OpenStack and assemble them together into what appears to the end user to be a single homogeneous resource. This allowed us to simplify the experience for the end user while still allowing us to scale out the underlying environment. We have cells in Geneva, cells in Budapest, and a set of front-end servers that arrange to direct the work appropriately and schedule it. At the same time, we're facing another set of challenges, which is that we're seeing a large number of OpenStack clouds appearing alongside the four I talked about at CERN. We're seeing the 50 or 60 organizations that collaborate with us deploying OpenStack as well. And we're also starting to look at the use of public cloud resources outside of CERN. Clearly, if we carry on growing, we need to find ways to satisfy that capacity requirement.
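To make the cells idea concrete, here is a toy sketch in Python of what the top-level scheduling decision amounts to; the cell names and the pick-the-emptiest policy are invented for illustration and are not CERN's actual configuration.

    # Toy illustration of the Nova cells idea: front-end API servers route each
    # build request to a child cell, so users see one homogeneous cloud.
    # Cell names and the capacity-based policy here are invented.
    from dataclasses import dataclass

    @dataclass
    class Cell:
        name: str
        total_cores: int
        used_cores: int = 0

        def free_cores(self) -> int:
            return self.total_cores - self.used_cores

    CELLS = [Cell("geneva-a", 30000), Cell("geneva-b", 30000), Cell("budapest-a", 20000)]

    def schedule(requested_cores: int) -> Cell:
        """Pick the child cell with the most free capacity (one simple policy)."""
        best = max(CELLS, key=Cell.free_cores)
        if best.free_cores() < requested_cores:
            raise RuntimeError("no cell has enough free capacity")
        best.used_cores += requested_cores
        return best

    print("instance lands in cell:", schedule(requested_cores=8).name)

The real mechanism is Nova's cells scheduler rather than anything this simple, but the shape is the same: users talk to one API, and capacity can be added a cell at a time, even 100 meters underground.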
So CERN has an organization that allows us to do collaboration with industry, the CERN Open Lab. During one of the summits, we had some discussions with Rackspace where we clearly identified a common interest in solving this problem. So in October last year, Rackspace joined the Open Lab collaboration, and with that we set off to solve it. The code for this is not research; it's now in the production releases. The server side came out with Icehouse; the clients came out with Juno. People can now deploy federated identity on OpenStack. At the same time, this is clearly a technology change, but the cultural change in many ways is a larger problem. We have people who can accelerate very fast and rapidly understand the techniques that can be used. On the other hand, we also have people for whom, for the applications they're running, some of these techniques don't appear so relevant, and it brings us back to Hooke's law, which is the law that says you can extend a spring to a certain point under load, but beyond that it deforms. And it is key that while we focus a lot on the progress you get on the one side of the spring, we make sure the tension doesn't get to the point where it deforms. So focus on the new technology, but also keep in mind that there will be some people who need a little bit of persuading. So how did we go about doing this? We assembled a small team, and it really was of this sort of size. It consisted of a mixture of experienced people who had been running services before and a set of new hires. Naturally, CERN has a certain rotation of staff; many contracts are short term. These people come in with basic Linux skills and then leave CERN knowing Puppet, OpenStack, Elasticsearch, and Kibana; they don't spend very long before they find new job opportunities. And with this team, we then went through the process of building something that we could show people. Demonstration is an extremely forceful technique. However, a number of people came up with concerns, and to be clear, these are rather extreme examples of how people react. There were people that felt it was going a little bit too fast; they wanted to take things more slowly. But we have a fixed window: the LHC starts up again in April next year, and the transformation had to be done by then. In fact, this week, while we are here, they've just started dismantling the old configuration management system that we were using. At the same time, there were the people whose services were organized in silos. This means they had the budget and the staff for the entire service. To turn around to them and explain that now, please give us your budget, and here is a quota on the cloud, is a major cultural change. We had the experienced service managers who had been running services for a long period and were finding that the people joining their teams had actually been taught Puppet and OpenStack at university. And they were saying, you're doing it all wrong; you should do it this way. So we actually had a skills inversion: rather than the new joiners sitting next to the specialists to understand how to use the tools, we found the new people explaining to the more experienced ones the best way to go. And then finally, we have the people for whom their servers are precious. What they want is a unique configuration that you can't get from anywhere else.
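For a sense of what that federation work looks like from a user's side, here is a rough Python sketch of the second half of the Keystone v3 flow: exchanging an unscoped token (which the OS-FEDERATION extension issues once your home identity provider has validated a SAML assertion) for a project-scoped one. The host, token, and project ID are placeholders, not CERN's.

    # Sketch of scoping a federated Keystone token (Icehouse server, Juno clients).
    # Step 1, not shown: the SAML machinery in front of Keystone turns an identity-
    # provider assertion into an UNSCOPED token via the OS-FEDERATION auth endpoint.
    # Step 2, below, is the standard v3 call that scopes it to a project.
    import requests

    KEYSTONE = "https://keystone.example.org:5000/v3"   # placeholder host
    unscoped_token = "<token from step 1>"
    project_id = "<a project the federated user maps into>"

    body = {
        "auth": {
            "identity": {"methods": ["token"], "token": {"id": unscoped_token}},
            "scope": {"project": {"id": project_id}},
        }
    }
    resp = requests.post(f"{KEYSTONE}/auth/tokens", json=body, timeout=30)
    resp.raise_for_status()
    # The scoped token arrives in a response header and works against Nova, etc.
    print("scoped token:", resp.headers["X-Subject-Token"][:16], "...")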
Now in many cases, these requirements can be justified, and it involves a large amount of consulting and discussion to work out the right balance between implementing consistent environments and implementing specialized configurations that potentially cost elsewhere in the organization. So, looking back at how science has evolved: Newton wrote a letter to Hooke, the man behind the spring law I mentioned, in which he talked about dwarfs standing on the shoulders of giants. And the evolution of science has been on this basis: each person makes a small amount of progress, and the next person takes over and builds on top of that to get further. Large collaborations like we have around the LHC, the ATLAS collaboration is around 2,000 people, these communities get together and work on the basis of transparency, meritocracy, and a shared vision. When we look at the World Wide Web, Tim Berners-Lee started off with a very nice text-mode browser. He didn't feel that embedded graphics was really something that was needed. Now, if we'd been in a situation where the only browser available was the one that Tim wanted, we'd probably still have a basic text interface. We might equally have ended up on Internet Explorer 5 as the only browser that we could use. However, what we have now is a hugely vibrant ecosystem, based on some solid core APIs, allowing healthy competition in other areas. So in summary, thanks to all of you for your efforts helping out on OpenStack. When you're contributing code, fixing patches in the documentation, working through problems with others on the mailing lists, coming to meetups such as this, giving feedback on the user survey, just remember that along with helping out OpenStack, you're helping us understand how the universe works and what it's made of. Thank you very much.

Why OpenStack? OpenStack features an open architecture, optimal scalability, good compatibility, and a comprehensive ecosystem. Why Huawei? Huawei is a leading global ICT solutions provider and offers a wide range of NFV solutions and products. Huawei joined the OpenStack Foundation in October of 2012 and became a gold member in November 2013. From the release of Grizzly to the release of Juno, Huawei has dedicated itself to supporting the OpenStack community. The Huawei Compass incubation project has been developed as an external open source project and has received support from customers and partners. Huawei is committed to making contributions to the OpenStack community. Huawei is dedicated to continuing its innovation based on OpenStack, fostering cooperation, and opening up further for business. The result has been that cloud computing solutions are now offered to customers in the fields of IT, CT, and ICT. Let's work together and embrace OpenStack.

Love hearing from Tim Bell, as always. I hadn't realized till he gave his talk that, in fact, with CERN having a presence in Hungary, that internet tax I talked about earlier would have actually affected him. If I'd known, I would have been in the streets protesting, because nobody puts a tax on Tim Bell on my watch. So I'm glad that was reversed. So, quick question for the audience. How many of you booked your travel here by going into a travel agent's office? Now how many of you used an online tool to book your travel? Cool.
So I think it's only fitting, in the world Jonathan described yesterday, where everything is on the web and powered by cloud, that we get to hear from Rajiv Khanna, who's vice president of infrastructure for Expedia, the world's largest online travel agency. So come on out. Let's hear what he's gonna say.

Good morning. How you guys doing? All right, let's try it again. Good morning. Yeah, that's much better. At least I know everybody's awake. So I want to talk to you about all the cool things that we're doing at Expedia with OpenStack, how we are speeding up innovation, and how we're getting faster to the market. To start with, a little bit more about Expedia. Expedia is made up of various different brands that we own across the world. And we are one of the largest, or the largest, online travel agency in the world. I'm sure you've seen some of these brands and have used them. And if you haven't, I encourage you to go try them out. Across these brands, we operate 150 sites in many different countries, in various different languages. We get about 60 million unique customers that visit us on a monthly basis. You can shop 365,000 hotels and over 400 airlines on our sites. And we have 14,000 employees distributed across 30 countries. All this is really good stuff; everybody loves numbers. But at the end of the day, technology is at the core of all the things that we do. Like most industries, travel is a very competitive industry. It's important that we are quick to the marketplace, that we are fast, and that we are very innovative along the way. Otherwise, we lose business. So, I'm sure everybody here has been to a party and participated in a conga line. You don't wanna see me do it; it's not pretty. But conga lines are not that much fun when it comes to delivering infrastructure, and I'm sure you can relate to a lot of this. So here's the infrastructure conga line. In the front, you have data centers. Then you go to racks. You need to have servers. You gotta cable them up, and you gotta make sure you keep track of your assets. Then you gotta hook it up to the network. Oh, don't forget to install the operating system. Storage as well. Then you gotta install the application and configure it. Get the firewall rules done, get the load balancer rules done, and on and on and on. And this is a pretty common problem that exists across the industry. This isn't an Expedia problem; conga lines exist everywhere. And as I said, they're a lot of fun at parties, but not when it comes to delivering infrastructure. So our goal is to actually get rid of the conga line. The other thing to keep in mind: what happens when we don't have enough things in place, when we have a shortage of certain assets? What's the human nature? You go out and panic-buy. I know I have a lot of things sitting in my closet which I thought I needed at some point; I only needed one, but it was available only in short supply, so guess what? I went out and I bought two. And I have a pile of stuff sitting in my closet that I never use. So what does that look like for the enterprise? It's money, it's real money. If you think about it, in the data center you have racks and racks of equipment consuming power, cooling, and money, equipment that is either sitting idle or under-consumed. The whole intent for us is to make sure that we get the right efficiencies; we don't wanna be wasteful.
A few things in your closet may not feel that bad, but if you pile them up and do all the math, it really adds up. So think about a scenario. You're a developer. You have this brilliant idea that's gonna add millions to the bottom line, and now you wanna go start developing against that idea. But guess what? Get in the back of the conga line. You'll be ready to go in about three weeks, if you're lucky; it may be months in some cases. So asking a developer to put a ticket in is really gonna kill the innovation that they wanna do and slow them down. And our intent is not to do that; we wanna solve that problem. So we went out and asked our developers: tell us one thing, what do you want the most? And the response was, we want fast. Not just fast: way fast. And you can add as many "way"s in front of the "fast" and it's still not good enough, because this has to be just about instantaneous. They want it when they need it. They wanna be able to consume it immediately. There shouldn't be a need for a conga line. And at the end of the day, all of this matters because we wanna be fast to the marketplace, and it adds to our growth and it adds to our business. So when we started to design a solution based on OpenStack, we set forward some principles that we wanted to live by. It has to be friction-free, and in order for it to be friction-free, it has to be end-to-end automation. It has to be self-service. Once again, we are trying to get rid of those tickets. But at the same time, we are a pretty large consumer of public cloud, so it has to support both our internal private cloud as well as a public cloud. And we wanna avoid vendor lock-in, not just from the hardware point of view, but for all the software that we consume and the public cloud providers that we consume. We use AWS today, but we may wanna change that over time, so we wanna have that flexibility. And at the same time, we wanna be able to balance the needs of IT with the needs of the business. It's obviously all about speed. It's about getting out there, being innovative, and letting people do what they need to do to be innovative, to help grow the business. But at the same time, we don't wanna lose control of the entire environment. So we wanna find the right balance of control and openness. And in most environments, we wanna earn the business from our internal customers. We don't do very well when we go into a conversation with, you must use this, or you must use this offering. Generally speaking, and we heard about credit card consumption yesterday, people swipe their credit card and they know what's possible. So we have to have an offering that's competitive with the rest of the market. So here we are. We started this in April of 2013, and when we got started, we needed some help. We needed some partners along the way. We worked with Scalr and we worked with Mirantis. With Scalr, we were able to get integration with a lot of our internal systems. We integrated with Active Directory. We integrated with our ITSM suite, as well as a variety of different deployment tools that we have in-house, along with some of the financial controls to be able to know who's consuming what, so we can do showback. And at some point, we may choose to do chargeback to our internal customers. At the end of the day, this isn't free, and you have to be able to figure out what the total cost is. And along with all that, obviously, OpenStack is in the middle of all of this.
With OpenStack, we've been able to stand up an internal cloud along with the option to go to a public cloud. We have a choice; this is all about choices. We have a choice to go to AWS. We can go to Rackspace. We can go to Google or any of the other public providers that we may choose to use. So if you look at this design, we are able to address pretty much all of the design principles that we had. We have choice. We have the right level of control. We are able to consume both internal and external clouds. And it's self-provisioning: as a developer or as a consumer of this service, you have the option to interact with it through an API, through a CLI, or, if you really choose to, through the GUI as well. So what's the outcome? So far, we have three regions stood up: two non-production and one production. The adoption of this platform has gone viral. We have over 20,000 instances that have been stood up to date, and they go up every day. And the way we interact with our development community is a lot different; we interact in a much different way than we did before. There's a lot less noise. There are a lot fewer tickets that we have to deal with. They are moving much faster; they are able to move at the rate of the business. And we are seeing a lot more rational behavior. There isn't a lot of hoarding going on; people aren't holding on to stuff. We are seeing the true, natural consumption that we expected out of an elastic compute platform. People are creating machines, and they're giving back machines. Hence, we've been able to grow the environment, and one of the larger problems that we have at the moment is trying to keep up with the capacity on the back end. And that's a good problem for us to have. And the net result of all of this is that we are speeding up deployments, we are speeding up the way we do business, and as a result of all that, we're speeding up innovation, which results in better business value. So what's next? We need to stabilize the platform. We've had some issues along the way; we believe we are almost there. We've done a lot of work in the US, but we are a global company, and we want to go global. Back to choices: we want to be able to put this footprint wherever we have a presence and where it makes sense for us from a business point of view, but at the same time be able to leverage the Amazon map that was shared before, or the other public cloud providers, as necessary. Because we don't think we need to be in every location and every region, and we want to be able to leverage both. We want to enhance the capabilities. We want to offer up database as a service, monitoring as a service, load balancing as a service, firewall as a service, really maturing the platform from infrastructure as a service to a platform as a service. But we have lots of old stuff, legacy stuff. This is where we run our business today. We have over 30,000 instances of operating systems running in the environment, and we need to figure out what to do with that. We like all the goodness that we get from the platform, and we want to be able to leverage that goodness within our legacy footprint as well. And our application groups have a lot of work to do. I would say we've done a lot of the heavy lifting up front, but they have a lot of work to do to figure out how to consume this platform. They have to do some re-architecture work.
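To make the self-service point above concrete: with the 2014-era python-novaclient, "use the API instead of a ticket" looks roughly like the sketch below. The credentials, image, and flavor names are placeholders, not Expedia's actual setup.

    # Minimal sketch of ticket-free provisioning via the Nova API
    # (2014-era python-novaclient). All names and credentials are placeholders.
    from novaclient import client

    nova = client.Client(
        "2",                                    # compute API version
        "dev-user", "s3cret", "team-project",   # username, password, tenant
        "https://keystone.example.org:5000/v2.0",
    )

    image = nova.images.find(name="ubuntu-14.04")
    flavor = nova.flavors.find(name="m1.medium")

    # One call replaces the whole provisioning conga line.
    server = nova.servers.create(name="idea-prototype-01", image=image, flavor=flavor)
    print("building:", server.id)

    # Giving capacity back is just as easy, which is what makes hoarding
    # unnecessary on an elastic platform:
    # server.delete()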
Obviously, anything new that's being developed is being developed to this architecture, but we still have a lot of old applications, legacy applications, that have to be redone. So in closing, I want to thank a few folks. I want to thank my team. They've put in a lot of hard work over the last several months, and they've worked some long hours to get us where we are, and I want to thank them. At least one of them is here, and the others are not. I want to thank our partners. They've been struggling with us along the way as we figure this thing out, and it hasn't always been easy. And I want to thank the community. A lot of this wouldn't be possible without you guys. So keep going. We think we're just getting started; we have a long way to go. And once again, thank you very much.

The world's largest online travel agency. And one quick update: I did hear on Twitter that, in fact, the map of all of the attendees did not include some of you who made it all the way from New Zealand. So I'm happy to say that we actually have people here from 60 countries. New Zealand, I know you're out there. So, 60 countries: that's not bad for a distributed community. Next, I'm very happy to introduce Weston Jossey from Tapjoy. So come on out.

You all are a little... you're hungover. It's okay, you can just admit it. You all were out last night, it's fine. Well, thanks for having me. As he said, my name is Weston Jossey. I run operations over at Tapjoy, a mobile advertising company. I'm not here to announce that we are pivoting and building our own Large Hadron Collider, unfortunately. It would be a fun pivot, but it's not something that we're very good at. So let me tell you a little bit about our story and our first year on OpenStack. So, some history. What is Tapjoy exactly? Well, we are a global app-tech startup. So what exactly does that mean? Well, for mobile developers all around the world, we power a couple of things. The first and foremost is monetization: how do you monetize the users that are coming to your app, so that you can run your business? Two is analytics: we wanna provide you a clear and concise way of understanding how your business is performing and how you might need to change it. Three is user acquisition: we help to get users onto your platform, so that more people are using your app, your game, your newspaper app, whatever the case may be. And then there's also user retention: how do we keep the users that you already have from leaving the platform that they're already on? So Tapjoy is quite large. We have over 450 million monthly users across 270,000 apps. It's an absolutely massive platform to get to work on on a daily basis. We have a worldwide presence; we're basically in every single country all over the world. The thing that I like to say is that the sun never sets on the Tapjoy empire, because in the middle of the night, when I'm sleeping, I still get pages occasionally, because people are still using it in Japan somewhere. So it's definitely worldwide. So, a little bit of technical detail. As Mark was talking about earlier, I am that guy who does use AWS. In fact, we use it a lot. We grew predominantly on AWS for our first two years, and to date we have right around 1,100 VMs on AWS running at any given moment. That was measured last month, and it will probably continue to go up every month from here on out. We have active regions in Asia, Europe, and North America.
And annually, we now process over one trillion requests, which is a lot to have to handle on a daily basis. So, OpenStack. It's not just AWS for us; it's also about OpenStack. In early 2015... no, 2015 hasn't happened yet. In early 2013, we began assessing the viability of buying our own hardware and starting to divest ourselves of just using AWS. And so what we did was analyze the landscape. We looked at a couple of different vendors; we looked at a couple of different options. And we narrowed in on OpenStack very quickly. It was clearly the winner; it was clearly the front-runner. And the reason for that is what you see in this room today. It's all of you. It's the community that's come together around this platform. It's the fact that we have people like CERN and Expedia pushing upstream to the platform. It's clearly going to win, and so we wanted to get behind it. So what we decided to do was to start looking for an OpenStack partner, a hardware vendor, and a colo provider, and we started doing all of that in the summer of 2013. It was important for us to find partners that were willing to work with us on our projects, because in-house we didn't have that expertise on OpenStack. We didn't necessarily have a wide, expansive expertise on how to build out the infrastructure that we wanted to build, because we were very, very good at AWS. I can go in and I can shoot the shit with the best of them, so to speak, when it comes to AWS. But OpenStack? I'm a newbie. I'm going to a bunch of the talks here because I'm learning more and more about OpenStack every day. So we really needed to have some good people with us. So, the plunge. I have a great guy on my team, James Moore. He's unfortunately not here today, but he basically helped me build our OpenStack deployment from day one. It's really his baby; he did so much of the hard work, and honestly, he'd probably be better up here than I am. We decided to build out what we called Tapjoy 1, which we built as a unit for our data science department. At a company like Tapjoy, we're obviously analyzing a bunch of analytics and a bunch of data on a daily basis, and we need a lot of different applications to run on that cloud, and to run performantly. So we run Hadoop, HBase, and a bunch of complex data modeling, and that really powers our brain. It helps us make decisions about what to do in the moment for all the users that come to our platform. And what we did was define our requirements for what we wanted to build around the ratios that we were working with. For us, it was really important to think about how many CPUs per disk, and what the appropriate ratio of RAM per CPU was. Those are the sorts of things that we started looking at, and we weren't necessarily going to get that on AWS. And so when we started to build these requirements, we got to work from a blank slate, and it was the first time that we'd ever really been able to do that, because we got to define exactly what we wanted to build, exactly how we wanted it to look, and exactly the right ratios. So we launched this summer, in June of 2014. We cut over with zero downtime, and since we went live, we've also had zero downtime. So if there were some wood up on stage, I'd be knocking furiously right now, because if you're an ops guy, you get terrified when you talk about downtime. But so far, so good. So what does it look like? Well, we have 348 all-purpose data nodes that we built out.
It's a 3U configuration, 12 nodes per 3U, and on each node we have an E3-1265L v3; that's a four-core, eight-hyperthread, 2.5 gigahertz processor. These are the low-power chips, and you'll see why in a minute. We have four one-terabyte drives per node, that's one spindle per core, and 32 gigabytes of RAM. Each of these has a dual 1 Gb NIC, for fully non-blocking networking throughout our entire infrastructure. Another reason why we chose this particular setup is that it's very flexible. It's very recyclable if we want to use it in the future for our app servers, database servers, whatever the case is, and so we decided to choose this. We also have 12 management nodes. These are a little less sexy, a little less interesting, but they're basically E5-2650 v2s, 2.6 gigahertz, with 128 gigs of RAM. These guys have the SSDs in them, and they have dual 10 Gb NICs coming out. So for us, it was a density play. We did all of this over only three racks. Each rack can draw upwards of 17 kVA, and the reason we were able to do that is that we had good partners. The people who worked with us were willing to go that extra mile to help us design the infrastructure that we wanted, and to be creative. Metacloud contributed very heavily to our network design and helps us power our OpenStack deployment; we are running basically a Metacloud variant of OpenStack. Equinix is our colo provider, and they did a fantastic job helping us design the cooling and power requirements to make this work. We are running this on the East Coast; our data center is actually in Virginia, right next to the AWS facilities, and getting that sort of power density is not necessarily the easiest thing in the world. So, some of the tools that we use. We have two open source projects that have recently come out, and I hope all of you go and take a look at them today. We have a tool called Slugforge, which we released, which is basically Capistrano meets containers. It's a great deployment tool for rolling out your code or your infrastructure in a containerized way. It's really fantastic; I hope you come take a look at it. We also have Chore, our pluggable back-end queuing system, which we use to plug into systems like SQS, file systems, and a couple of internal queuing systems that we're working on at this moment. So I want to get a little philosophical with you now and talk about why I think OpenStack is gonna win and why it matters. I don't know if you guys have seen this campaign yet. It's the new Android L campaign that Google's been running, the "Be together, not the same" campaign. I think it's really powerful, and I think it's a good metaphor for the entire OpenStack community. So what is it all about? It's about the fact that fragmentation is okay, right? That was one of the big knocks on Android when it first came out. It was like, well, how do I design for it? As a mobile developer, what's the right screen resolution I'm working for? What's the processor that's actually running on the system? How much RAM do I have to work with? And everybody was ragging on it, saying, oh, this is why Apple's gonna win: I've got three devices that I have to code for at any given moment, and that's it, right? But Android? No, no, they were right. They had 11,000 distinct Android hardware variants in 2013 according to OpenSignal, and I'm sure that number's gone up over the last year.
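To make the "pluggable back-end queuing" idea mentioned above concrete, here is a minimal sketch of the pattern Weston describes for Chore: application code talks to a single queue interface, and an SQS, filesystem, or in-house back end plugs in behind it. All of the names below (QueueBackend, MemoryBackend, publish, consume) are hypothetical illustrations, not Chore's actual API.

    import abc
    from collections import defaultdict, deque

    class QueueBackend(abc.ABC):
        """Interface every back end (SQS, filesystem, in-house) would implement."""

        @abc.abstractmethod
        def publish(self, queue_name, message):
            """Append a message to the named queue."""

        @abc.abstractmethod
        def consume(self, queue_name):
            """Pop the oldest message, or return None if the queue is empty."""

    class MemoryBackend(QueueBackend):
        """In-process back end, handy for tests; a real deployment could plug
        an SQS- or filesystem-backed implementation in behind the same interface."""

        def __init__(self):
            self._queues = defaultdict(deque)

        def publish(self, queue_name, message):
            self._queues[queue_name].append(message)

        def consume(self, queue_name):
            q = self._queues[queue_name]
            return q.popleft() if q else None

    # Application code only ever sees the interface, so swapping back ends
    # is a configuration choice rather than a code change.
    backend = MemoryBackend()
    backend.publish("events", "user_install")
    print(backend.consume("events"))  # -> user_install

The point of the design is that moving from SQS to an internal queuing system becomes a configuration change rather than a rewrite of the application code.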
Eleven thousand variants is just a mind-boggling amount of configurable hardware out there that you can choose from and play with. It's absolutely amazing, and the way that Google did it is they iterated on it, they expanded on it and they improved on it, and all the hardware vendors jumped on as well. So here's my challenge to all of you: I think we can get to 10,000 unique variants of OpenStack over the next couple of years. A few sizes do not fit all. As somebody who uses AWS on a daily basis, I can definitely tell you that there are times when I wish I had a completely different system to work with. A lot of times I want a lot more disks than they're willing to give me, and I don't necessarily want to have to pay through the nose for it. There are only seven modern variants on the AWS platform right now. That's it, seven; that's all you have to work with. You may get them in slightly different sizes, but underneath the hood it's just seven. So I think we can create 10,000 different ones. I think we can create an ecosystem that is flexible, uses fantastic hardware, and is all built on top of this great core foundation that is OpenStack. If you need a lot of CPU, throw in a bunch of CPU. If you need a bunch of disks, like we did (we needed four terabytes per node), throw in a bunch of disks. If you want a bunch of RAM, toss in a bunch of RAM. So I want to know who's got the great reference design for Xeon E5s, because every day James Moore is pitching me on how we need to have these crazy CPU-driven compute nodes sitting in our architecture. And I want to know what other people are doing with it already and why it works for them. I want to know who out there is already expanding with some of these one-and-a-half-terabyte vertically scaled nodes, the E7 series, specifically the 8893, because it's pretty interesting to see how that would potentially work in a VM-based environment. Backblaze also just came out with their 180-terabyte pods, where they blew Amazon out of the water in terms of cost efficiency on their storage. I want to know who's gonna figure out how to integrate that into their core OpenStack platform. And honestly, I want to know what your workload looks like. I want to know because I want to use it to help influence our designs, to help influence how we think about our hardware and our designs going forward. So what will you choose? What's going to be your variant? How will it look for you? So how do we get there? Well, we've got a lot of hard work ahead. It takes a long time to spin up your first OpenStack deployment. If you haven't done it already, it's gonna take a little bit of time, but it's okay. You've just gotta stick with it. It took 12 months of effort for us at Tapjoy, and there were some bumpy points along the road, with some delays in hardware back in January and February. But we made it. I also want to know: how do we get cheaper? How do we continue to drive down the cost of deploying your very first, or multiple, OpenStack deployments, so that there's no question about the cost savings you can derive by doing it yourself? And how do we figure out how to do this so the bills hurt a lot less, so that it's more like paying monthly bills? Because honestly, going and asking for a multimillion dollar check from your board is not exactly the most pleasant process I've ever been through in my entire life. It's like going and asking for a seed round.
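Weston's point about ratio-driven sizing (spindles per core, RAM per CPU) comes down to a few lines of arithmetic. The sketch below is a minimal illustration, not anything Tapjoy published: the target ratios and node counts are taken from the figures in the talk, and the names (NodeSpec, meets_ratios) are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class NodeSpec:
        cores: int
        spindles: int
        ram_gb: int
        disk_tb: int

    def meets_ratios(node, spindles_per_core, ram_gb_per_core):
        """True if the node meets or exceeds both target ratios."""
        return (node.spindles / node.cores >= spindles_per_core
                and node.ram_gb / node.cores >= ram_gb_per_core)

    # The all-purpose node described in the talk: 4 cores, 4 x 1 TB drives, 32 GB RAM.
    node = NodeSpec(cores=4, spindles=4, ram_gb=32, disk_tb=4)
    print(meets_ratios(node, spindles_per_core=1.0, ram_gb_per_core=8.0))  # True

    # Sizing a cluster for a target raw disk capacity is then simple arithmetic:
    target_raw_tb = 348 * 4  # the deployment described: 348 nodes x 4 TB each
    print(target_raw_tb // node.disk_tb, "nodes")  # -> 348 nodes

Working backwards from workload ratios like this, rather than from whatever instance shapes a public cloud offers, is exactly the blank-slate advantage the talk describes.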
And how do we win the hearts and minds of the next thousand startups? So let's be together, not the same. Don't just copy your infrastructure: iterate on your design, find some great partners, hire hard workers, and remember it's okay to make mistakes. Ask for help, and share your story. If you wanna hear more, I'm giving another talk that goes into more detail about exactly how we accomplished what we accomplished; we're in Salle Passy tomorrow. I'll also be doing a little bit of Q&A on Twitter. I'm at Dusty West, or you can email me directly, west at tapjoy.com. And yes, I am hiring in Boston, Atlanta, San Francisco and Seoul, South Korea. So if you're interested, come let me know. All right, thank you everybody. Thank you. Thanks, Mark. That was great. I love it when users really get into the detail about what they're running. I think he's absolutely right: we should all be sharing our reference architectures and learning from each other, not just at the code level but in terms of configuration. So next up, we have a panel talking about telcos and OpenStack and how the two work well together. And once again, we do have an opportunity on Twitter for you to ask questions. Now, yesterday morning we actually did not get to some of the Twitter questions, but I promise we will get to some of them today. The good news is many of the questions from yesterday were actually about NFV: explain it to me, what does it actually mean? Yet another buzzword to learn. So we'll get to some of those questions, but please go ahead and tweet your questions with the appropriate hashtag and we will get to them. So next up, I'm gonna introduce Michael Still, who's gonna be moderating the panel. He is the project technical lead for the Compute project, and he's kind of a big deal. So come on out, Michael, and introduce the panel. Wow, there are kind of a lot of you out there. So I know it's been a long morning and we all want to get to the break, so we need to make this as interesting as possible. But that's kind of on you, not me, because you need to ask good questions on Twitter. But let's keep the ball rolling and get on with it. So I'd like to bring out my panel members, please. We have Markus from Swisscom. Thanks, man. Zheolong from Orange, and Toby from AT&T. And I think I should get an award for getting that right, by the way. So thanks for coming along. Now, I think one of the interesting things about this panel is that these guys are users. This isn't filtered by their vendors or anything like that. This is an opportunity to hear what they actually need from the OpenStack community, and I think we should make the most of that. So let's start off with: could you all briefly introduce yourselves? Yes, so I'm Markus Brunner. I'm working for Swisscom. I'm leading their standardization effort, and OpenStack in particular, because we believe that OpenStack is the de facto standard in cloud infrastructure. So that's our interest there. Zheolong Kong, I'm working at Orange Labs as a team leader. We are working on cloud computing in general, and in particular on OpenStack, for two or three years now. And I'm Toby Ford. I work for AT&T. I'm responsible for architecture and strategy for our cloud and platform infrastructure. I've been working on OpenStack for quite a while, four years. Cool, so let's start with an easy question: how are you guys currently using OpenStack? So on the Swisscom side, we actually use OpenStack from Piston with networking from PLUMgrid.
We use that as an infrastructure layer for our application cloud; on top of this OpenStack, we run Cloud Foundry. We believe that Cloud Foundry might be the de facto standard for the higher layers. That's where we basically produce our applications, and where we also allow third parties to deliver their applications and use our infrastructure out of that cloud system. And that's what we do at the moment. That's purely IT-oriented work; NFV, I think we're gonna talk about afterwards. Definitely. At Orange, yes, we are using OpenStack in several ways. First, for nearly two years, we have been working on the adoption and adaptation of OpenStack for our internal IT applications. Our senior VP, Jay Swish, gave a detailed presentation yesterday about this subject. Secondly, we are working with the company Cloudwatt on building public cloud offers on top of OpenStack. And last but not least, Orange is contributing actively within the OpenStack community to improve the Neutron component. Cool. And then for AT&T, we've been using OpenStack for the last four years with a number of different partners. We've tried to stay as close to vanilla upstream OpenStack as possible, and where we've made any innovations, we've tried to contribute those back. We've used it primarily for internal applications and a lot of our developer-focused external offers, so our API exposure programs are all running on OpenStack. And then in the last year, we've really focused on NFV and targeted a few of those applications. Okay, so people keep using the term NFV. Give me a non-telco description of what it actually means. Essentially, it's virtualizing network functions: switches, routers, DNS, BIND, DHCP, the typical kinds of functions you would need to run a network. And then for telcos specifically, things that get into mobility systems: the three major components of the mobility system, the RAN, the EPC, and the IMS core. These three components have many subtending components that would get virtualized in an NFV solution. So maybe part of the misunderstanding is: how do telco data centers differ from the enterprise environments people might have seen before? Are we talking about just racks and racks of computers, or is it different from that? In fact, telco operators such as Orange host and manage a variety of different applications across many data centers. One particularity of these telco applications is their huge heterogeneity in terms of their runtime environments: for example, the operating system, the middleware, the library dependencies. Secondly, many telco applications have to support real-time communication, which doesn't tolerate much latency, so real-time performance is also an important requirement. Probably to add to the real-time one: I don't think it's really about real-time, it's really about guarantees. So we have a set of services which require certain guarantees, and which also legally require certain guarantees. And to add something I was not hearing so far: security. Telco systems tend to be secure at the moment, at least to a certain degree; we can discuss the degree. So that's one thing which I believe is a bit different, though with our footprint in banking, that also needs to be highly secure on the IT side.
The last thing I would add to that is that existing telco environments are very regulated, at least in North America, so the actual facilities we use have a lot of legal rules we have to comply with. And so we're doing a lot of work to try to work around those as we go forward. So the next question I have on my little bit of paper is: why is NFV special? It seems to me that maybe some of the answer is real-time, but are there other things we need to be thinking about when we think about this use case? Yeah, so when we started to look into OpenStack, and that's a few years ago already, OpenStack was not terribly well suited for large I/O loads. And if you really think about it, I mean, Toby was talking about the mobile network: basically every mobile packet would flow through that cloud system, right, if you have these types of functions there. That's the level of I/O load we need to handle. And I think that's a bit special, at least from how it was built up. I see a lot of work going in that direction now, and I like that. Yeah, I mean, I think NFV is an important thing for us because at one level it's about being more efficient and trying to run a more competitive environment, trying to find something that will run multiple workloads and use up as much of the assets as possible. That's a very important driver for us, because we have a lot of competition coming from interesting new areas, and they typically come at us with infrastructure that's all shared and quite unified. This is not typically how telcos have run; it's typically been very siloed, and we're trying to change that and pool our resources together on one common platform. So that's a key part of it. As well as time to market and extensibility: our competitors are rolling things out much faster than ever before, and we have to make an environment, an infrastructure, that will extend quickly. I agree with Markus and Toby about the importance of NFV, and I would like to add a technical point. NFV doesn't mean simple virtualization, just replacing physical machines with virtual machines for the telco applications, since simple virtualization may lead to performance issues and doesn't provide the expected scaling capability. By NFV, we are often talking about two aspects: first, having a carrier-grade cloud infrastructure, and secondly, adaptation of the existing telco applications and platforms. So may I add to what Toby said: since we want to move from a relatively vertically integrated, box type of business to this sort of platform, hopefully an NFV platform with these functions on top, the whole discussion comes in of what guarantees the platform gives to the VNFs, these applications on top, and who is responsible for end-to-end performance guarantees in this type of setting. So if we buy a VNF from a third party and we put it on our NFV cloud system, who's now responsible if something doesn't go well, specifically if it doesn't go well from a performance or security perspective? So I see we have some questions from Twitter, but I'm gonna ignore them for one second, because I have a question and I've got the microphone. What about less cool features? Do you guys need live upgrades, good performance at scale, cluster-wide scheduling, things that I don't see really being talked about as part of the NFV use case? Or will you never upgrade your cluster, for example? Yeah, this is a particularly sensitive point for me.
I mean, since we have a lot of sites with OpenStack, we really need help to make it so that we can upgrade, get rolling upgrades happening, and get continuous integration happening for these sites. We anticipate deploying OpenStack in a large number of locations. Not exactly huge setups per location, but a lot of them. And that really requires that we get the lifecycle management down solid, and that we're able to not only deploy quickly but maintain it over time. So that's key. And then obviously, if we have a lot of these locations, having some level of integration across sites is gonna be essential. Okay, cool. It's very important, but I mean, I get the feeling the whole cloud paradigm is actually exactly about helping do that. Now, I'm not sure in detail how far OpenStack is on that, and if it's not there, then it's a feature which is needed. I thought it was there. Okay, cool. So let's take some Twitter questions. Let's start off with: what's your primary use case for OpenStack inside a telco? What are you gonna do with these OpenStack installs? At the moment, as I said, we run application IT workloads on it. If we really go NFV, all types of network functions are candidates. And to be quite frank, that's not something we're gonna do tomorrow. So we looked into it, and it's basically a bit of a lifecycle issue: if there is a function at its end of life, we're probably gonna move that to a more cloud, NFV-based system. The immediate benefit of moving before end of life is not apparent at the moment. Yeah, for us, beyond the IT workloads that are in our existing system, we're actively pushing things that have already been working quite well NFV-wise in a virtual environment, things like Vyatta. People have used Vyatta, for example, on top of virtual machines or containers to do one particular function in our network called the customer edge router, which is a much smaller kind of setup that doesn't have the performance requirements of a mobility system for all consumers, and that can be deployed on a per-customer basis. So we're starting that right now. That's an initial kind of toe-in on this concept that does exist today. I think the obvious follow-up question is: will my mobile phone calls ever get routed over OpenStack? Will it ever be my fault if my call doesn't work? Yes, sure. Yes, I believe so. I mean, already today a lot of our calls end up on some sort of virtualized systems, not OpenStack yet, but virtualized types of systems. I don't see any reason why not. Okay, cool. Yeah, I'm pushing as hard as possible to make that happen by the end of next year or in 2016. Okay, I think you need to come back and tell us when it's ready so that we can all make a lot of calls. Yeah, and I can blame you. So, next question from Twitter: what improvements in OpenStack that aren't code would help you the most? I guess people are thinking docs, deployment tools, that sort of thing. I think if OpenStack could provide natively integrated design, planning, configuration, and automatic deployment tools to manage multiple data centers, it would be fine. And if these tools could also manage the change management of the data center hardware, it would be better. I think the testing aspect, Tempest and these things around being able to do full integration testing, API testing, is essential in my view to get to what we were just talking about.
Yeah, it certainly seems like you guys care about reliability. Yes. Ah, which is kind of our next question; that's kind of funny. How important are continuous integration, delivery, and DevOps in your NFV journey? That's a difficult one. We have long discussions about it. If we really go NFV, if your phone calls would rely on continuous integration, at the moment we don't feel comfortable enough to do that, to be quite honest. If it's for IT systems, for applications, where they can fail, things like that, it's okay. But if it really comes to the core network bit, if our whole Swisscom-wide network is down because something goes wrong, continuous integration is a bit of a stretch at the moment. We'll probably leave it to some early adopters first to try that. This area is very important to me. I feel like Agile, Agile methods, not just continuous integration and deployment but test-driven development and the sort of collaboration that Agile brings, is an essential ingredient to what's transforming us as we get into this space. For me, I think it is an essential step for us to get to. I mean, I trust my car to be continuously built today, so I will trust in the future that my phone will be as well. So, if some of you aren't going to be doing continuous deployment, how long do we need to support our stable releases for? Do you guys roll out code every six months, every 15 years? What does it look like? So today, we probably have the equipment there for something like 10 years or more. Or more, yeah, some of it even more. And you don't do firmware upgrades on that equipment once it's deployed? We actually wrap certain boxes in plexiglass so they don't leak all over everything else. Exactly, some of it is no upgrade, and I don't touch it if it's working. Exactly. I have some bad news, by the way, but we'll talk about that later. We need stable releases because, I guess, it will take some time. Testing was mentioned before: if you go through some test cycle and so on, you want to have a certain stability for a certain time, specifically for OpenStack and really the infrastructure layer. If it's about applications and VNFs running on top of it, it's probably a different discussion; there, agility might even be helpful, but the baseline infrastructure should be rather stable. I think this is really the paradox, or the problem, that we have to address: if we don't do continuous integration, then we have to live with a lot of past sins, and we have to find the right balance between the two. Because, as was said, how do we get to a really stable thing that is being changed all the time? Okay. So, what's the interest in having the same level of openness in your network hardware that OpenStack gives you? That's very, very high. I feel like this is another very essential point to what we're trying to get to. We've suffered in the past with being railroaded or stuck in a dead end with one vendor and one proprietary solution, and I think we're realizing how important and essential it is to be more open and collaborative. The various telcos around the world are solving the same problems; there's not a lot of differentiation in what we're trying to solve for. I think collaborating, being open and transparent about what's going on, is a great way to help the customers, and ourselves, get better value out of the telcos. So I think it's key. Yeah. I think as a telco operator, we're of course interested in hardware standardization, but not to the point of Open Compute.
For the moment, I think we're quite satisfied using standard x86 servers; that's already sufficient for a telco operator. Looking at more Twitter questions: are there any specific challenges you face with your OpenStack initiatives? Is there anything you'd like us to do that we haven't already discussed? I was listening to Tim before, and it was reminding me of our challenges when it comes to the organizational issues. I mean, in telecommunications you have a box; typically you have somebody responsible for exactly that box, or that service, or that feature, or something like that. And if we start changing that, you also need to change your organizational structures, and you need to take away something from somebody and give them something new. So the whole personnel issues around these topics, I think, are a big hurdle. And that has nothing to do with whether it's OpenStack or not; that comes a bit with the horizontalization and a bit with the open way of doing things and doing things differently, which also means there's a sort of mental change required in the company. That will take quite a bit of time. Yeah, for us, a key area that needs work is performance through x86 boxes as hosts, especially networking: the number of packets per second, getting something that's more deterministic and low latency through the boxes. So work around integration with Open vSwitch and making Open vSwitch whole, integration with the various SDNs that are coming, making the whole overlay solution performant; these things I think are essential as we try to scale. Today we're doing networking that's all on dedicated physical hardware, and we want to get away from that as much as possible. So that area of performance and networking is key, plus how the network grows out of the data center, or integrates with the network outside of the data center. Yeah, that's good; it comes basically together with what you're saying. Yeah, I couldn't reiterate that point more: the WAN and the LAN are really integrated in the future. So Toby, you mentioned Open vSwitch. What sort of networking model are telcos running? Does it tend to be something open like Open vSwitch, or is it a proprietary SDN? Well, we have three or four different variations, and almost all of them have Open vSwitch as a part of the picture. So we have examples that are flat, using physical VLANs and physical hardware, and then we've been using the GRE Open vSwitch model for quite a number of sites. We have a number of other SDN vendors in our picture as well. Still, not any one of them is a perfect solution quite yet. Yeah, I would agree. We put a lot of belief in this movement on SDN; OpenDaylight might help, but it's not there yet at the moment. At the moment we run PLUMgrid in the data centers; it's a sort of vSwitch-based solution for the data center bit. I think this is still an open question. There's a lot of discussion around it here, and I see a lot of Swisscom people in the audience talking only about this particular issue. Okay, cool. So we only have a couple of minutes left. Is there anything else you'd like to say to us that you haven't had a chance to say yet? Well, one thing certainly is pushing on the VNF vendors, the people that make the software, to think more like, my new favorite term, midget cattle, instead of the pets way.
Because telco NFV solutions have the same problems that large enterprise back-end systems have had, where things get stuck in a very monolithic way. Breaking that out into a scale-out model, in a way that is truly scale-out, using smaller, disposable components, and not relying on a big VM or essentially replicating in a VM the vertical integration you had in a piece of hardware: getting to scale-out for those things is key. And I think there's a huge opportunity in the ecosystem that's forming around us, in the startups that are showing up and actually building telco solutions in a more truly scale-out way. Yeah, it seems like it's not just performance that you guys need; you need reliability and upgradability and all that other stuff we need to do for everyone else as well. Exactly, cool. I would like to also emphasize the importance of NFV for telco operators, since without a carrier-grade-ready cloud infrastructure, NFV cannot become a reality. So we are glad to see that in the OpenStack community a working group, the OpenStack NFV work group, has been created and is focusing on these issues, and we will be glad to see some concrete results from this work group. So Markus, you have eight seconds. So I really would love to see more work on the packaging of the VNF, and on how you bring it onto the platform with the certain guarantees or requirements this VNF has, so that you have a reliable system at the end of the day. Cool. Well, we're out of time, so if you could all help me thank the panel. Thank you very much. Thank you. I appreciate everybody participating. Now, the telco market is a trillion dollar market, so if we can get OpenStack helping them, that's a pretty big impact for all the work you guys do. I just wanna say that we have people here, as I said, from over 60 countries. So please enjoy the rest of your week, make sure that you meet people from other companies and other countries, get to know them, share your stories. And when you go back home to those 60 countries that you came from, please bring back a little piece of the future and help bring OpenStack to your country, and help me prove William Gibson wrong. Last but not least, and most importantly, I just hope you all have a good time with OpenStack. Thank you.