Users and customers here have some great stuff to show, so I'm going to go ahead and start by bringing up Previer Chondry from Bloomberg, who is going to tell us what they're doing with OpenStack at Bloomberg. So, Previer Chondry.

Hey, what's up? So, a little over a year and a half ago, when I first got pinged by a recruiter from Bloomberg, my first thought was: what does Bloomberg do? I've since come to learn this, and it's important: we're a services company, and we're incredibly focused on our customers. The service we provide is primarily financial data and analytics for people in the financial space. We do a lot of other things too, we have a TV channel and a radio channel and all that kind of stuff. But what that really means is that we're a technology company, because we have to be a technology company in order to provide all of these services.

Just to give you a feel for the scale: on a given day we send out about 22 million instant messages. That by itself isn't terribly interesting. We also handle about 220 million messages, things like email. Again, only mildly interesting. Then I came to learn that we actually run one of the largest private networks in the world, with around 20,000 routers across our private WAN. We also have the largest server-side JavaScript deployment in the world: 22 million lines of JavaScript code in production on the server side. And, this is one of the more interesting stats, we process tick data from the financial markets. When market feeds come in, we process all of that data, which works out to 45 to 50 billion ticks per day. For those counting along at home, if you were counting with a uint32, you would have overrun it 10 to 11 times every day. So that's a feel for what we do in terms of technology.

When we started looking at OpenStack, we had some interesting design goals in mind for the kind of elastic infrastructure we wanted to build. The first focus was high availability; that's really important for us. It was also incredibly important that the architecture not leave opportunities for cascading failures. We didn't want to put up a cloud infrastructure where one thing goes wrong, cascades down the line, and takes the whole service down; that's not something we can live with. Another design goal was to make sure we could scale down to small sizes, not only scale up. We have a lot of node sites around the world, about 200 or so points of presence, and the opportunity to have a programmatically defined infrastructure in those locations is huge. It saves us a lot of time and a lot of energy, and it makes us very nimble when it comes to deploying those kinds of services to different parts of the world. And another design goal was to keep our stack as open source as possible, in well-defined layers, so that if something went wrong we could pull a layer out and replace it with a new piece.
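As a quick sanity check of the tick-count arithmetic mentioned a moment ago, here is a minimal sketch; the 45-50 billion figure comes from the talk, and everything else is just the size of an unsigned 32-bit integer.

```python
# Back-of-the-envelope check of the counter-overrun claim, assuming
# 45-50 billion ticks per day and an unsigned 32-bit counter.
UINT32_VALUES = 2 ** 32  # 4,294,967,296 distinct values before wrapping

for ticks_per_day in (45e9, 50e9):
    wraps = ticks_per_day / UINT32_VALUES
    print(f"{ticks_per_day:,.0f} ticks/day -> counter wraps ~{wraps:.1f} times")

# 45,000,000,000 ticks/day -> counter wraps ~10.5 times
# 50,000,000,000 ticks/day -> counter wraps ~11.6 times
```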
Looking ahead to when we started analyzing OpenStack and thinking about how we were actually going to use it, we came across a few kinds of problems we had to solve on our own. I'm going to put them into two categories: problems that are below OpenStack, and problems that are above OpenStack.

For the problems below OpenStack, we had to sort out how to do highly available databases. We ended up settling on the Galera plugin for MySQL to do a multi-master setup. We had to figure out how to make the message queue highly available, so we did some RabbitMQ clustering and had to play around with HA policies and things of that nature to get that piece right. We also had to figure out even more basic things than that: what hardware platform, what kind of servers we were going to use. Related to that was how we were going to do storage in a highly available fashion, for which we ended up using Ceph, with the awesome folks from Inktank helping us out tremendously. So those are some of the problems below OpenStack that we had to solve.

OpenStack itself plugged in really nicely and solved a lot of our problems by giving us the APIs and the programmatically definable infrastructure. But then there was another layer on top of OpenStack that we also had to work out: how do we do some of the basic housekeeping? For log aggregation from the hypervisor level, we put together a system involving Elasticsearch, Logstash, and Kibana. People have done that before; it's not really all that new. We also had to define a metrics layer for the hypervisors, collecting the information we need to make sure our infrastructure is healthy. We ended up using Graphite, with some tomfoolery around carbon relays and caches and things like that.

In addition to that, we had some orchestration issues to sort out. When it comes to an individual hypervisor, or an individual instance of a service, we don't care so much about how many nines of reliability we get out of it. What we wanted was five or six nines of availability at the service level, in aggregate, so that an individual instance can die and nobody cares: an orchestration service sees that it died and relaunches one in a new availability zone if necessary. We had to work on that; we ended up using Nodejitsu, and we're looking at a few other folks to help us sort out that kind of problem. Looking further out at operational efficiencies, one of the interesting technologies we're just starting to play around with is the VMS technology from Gridcentric. The main thing I want to emphasize here is that we had to solve a lot of these problems ourselves, but we wanted other people to be able to learn from it.
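For the Galera multi-master setup mentioned above, a minimal health-probe sketch might look like the following. It assumes the PyMySQL client, placeholder host names and credentials, and simply reads the standard Galera status variables.

```python
# Minimal Galera cluster health probe (illustrative only); node names and
# credentials are placeholders, not an actual deployment.
import pymysql

NODES = ["db1.example.com", "db2.example.com", "db3.example.com"]

for host in NODES:
    conn = pymysql.connect(host=host, user="monitor", password="secret")
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW STATUS LIKE 'wsrep_cluster_size'")
            _, cluster_size = cur.fetchone()   # nodes currently in the cluster
            cur.execute("SHOW STATUS LIKE 'wsrep_ready'")
            _, ready = cur.fetchone()          # 'ON' when the node accepts queries
    finally:
        conn.close()
    print(f"{host}: cluster_size={cluster_size} ready={ready}")
```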
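The RabbitMQ HA policies referred to above are typically mirrored-queue policies. Here is a hedged sketch that sets one through the management plugin's HTTP API; the broker URL, vhost, and guest/guest credentials are placeholders.

```python
# Declare a mirrored-queue ("HA") policy via RabbitMQ's management HTTP API.
import json
import requests

url = "http://rabbit1.example.com:15672/api/policies/%2F/ha-all"  # %2F = default vhost
policy = {
    "pattern": ".*",                    # apply to every queue in the vhost
    "definition": {"ha-mode": "all"},   # mirror queues to all cluster nodes
    "apply-to": "queues",
}

resp = requests.put(
    url,
    auth=("guest", "guest"),
    headers={"content-type": "application/json"},
    data=json.dumps(policy),
)
resp.raise_for_status()
```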
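The Graphite piece is largely about getting metrics into carbon. A minimal sketch of pushing a hypervisor metric over carbon's plaintext protocol looks like this; the host name and metric path are made up for the example.

```python
# Push a single metric to a carbon-cache/relay using the plaintext protocol:
# one "metric.path value unix_timestamp\n" line per data point.
import socket
import time

CARBON_HOST, CARBON_PORT = "graphite.example.com", 2003  # placeholder endpoint

def send_metric(path: str, value: float) -> None:
    line = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
        sock.sendall(line.encode("ascii"))

send_metric("hypervisors.compute01.load1", 0.42)  # hypothetical metric name
```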
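The orchestration idea, letting an individual instance die and having something relaunch it elsewhere, is sketched below using the openstacksdk client rather than the Nodejitsu tooling the talk mentions. The cloud name, instance-naming convention, fallback availability zone, and image/flavor IDs are all assumptions, and real code would also need network wiring, error handling, and rate limiting.

```python
# Illustrative watchdog loop: replace failed instances of a service in another
# availability zone. This is NOT the orchestration stack described in the talk;
# every identifier below is a placeholder.
import time
import openstack

WATCHED_PREFIX = "svc-frontend"   # assumed instance-naming convention
FALLBACK_AZ = "zone-b"            # assumed alternate availability zone
IMAGE_ID = "IMAGE-UUID"           # placeholder image
FLAVOR_ID = "FLAVOR-UUID"         # placeholder flavor

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

while True:
    for server in conn.compute.servers():
        if not server.name.startswith(WATCHED_PREFIX):
            continue
        if server.status in ("ERROR", "SHUTOFF"):
            # An individual instance died: nobody cares, just replace it.
            conn.compute.delete_server(server)
            conn.compute.create_server(
                name=server.name,
                image_id=IMAGE_ID,
                flavor_id=FLAVOR_ID,
                availability_zone=FALLBACK_AZ,  # relaunch in another AZ
                # networks and security groups omitted for brevity
            )
    time.sleep(30)
```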
So, go to the next slide. There we go. What we're doing, and we just put this up today, so I wanted to announce it here, is that we've taken all of the cookbooks we had to put together to solve these problems, and we're making them available on GitHub under an Apache license. We're not trying to turn this into a huge project and rally a big community around it; it's really for folks who have had to solve problems similar to ours, so that people can learn from it. And of course, comments, questions, flames, always warmly accepted. But other than that, thank you very much.