Good afternoon, folks. Welcome to the HP-sponsored Lightning Talks. We've got a great lineup of speakers today. Before I hand it over to our first speaker, I want to tell you about the draw we're doing during this session. If you don't already have a ticket, please raise your hand and keep it up until you receive one; you keep one half of the ticket. If you're an HP employee, you do not qualify. You get a chance to win this beautiful, it's actually a very beautiful, 10-inch tablet that runs Android. So these are the Lightning Talks. They're going to be five minutes, and just five minutes, so I apologize if I have to cut somebody off. It's not me trying to be rude; we just have eight or nine presenters today. We Canadians do try not to be rude. Sorry about that. All right, keep your hands up if you're interested in a ticket. First we have Lance Albertson coming up. He's the director of the Open Source Lab at OSU. A round of applause for him.

So, my name's Lance Albertson. I'm the director of the Open Source Lab. If you don't know what we do, we provide infrastructure hosting for a lot of medium-to-large open-source projects. I deal with everything from Apache to the Linux Foundation, that whole bunch, so we dive into a lot of stuff. What I'm going to talk about is our adventures in installing OpenStack and managing it with Chef. This slide is supposed to have something else on it, but it doesn't. Oh well. Anyways, as we all know, OpenStack has lots of services. It's really complicated, it's a pain in the butt, and there are so many different ways you can install it. Me being a system administrator, I like having control, and I like using the tools I'm used to. We've recently moved toward using Chef. So: Chef to the rescue. Chef really helped us out, but it was really complicated in the beginning. We like it because we already use it, and everything was on Stackforge, which is great.
The Stackforge cookbooks gave me a really good repository to see how to lay things out, but it was a little complicated. There are lots of moving parts, and it was kind of difficult to see how everything fit together. Me being a new OpenStack user at the time, it was intimidating to figure all that out. So what we ended up doing, for those of you who aren't familiar with Chef, is create a wrapper cookbook. We created our own cookbook called osl-openstack; it's actually on our GitHub. We wrapped what the Stackforge community had provided in their cookbooks and layered our own little things on top. It worked out really well: we could deal with the site-specific things we were doing, and we could simplify how we were deploying our databases to match how we do things. One thing that was really annoying at the beginning with the Chef deployment was that there were just so many different roles. It was modular, so we pared it down to exactly what we needed: a controller node, a compute node, eventually a Cinder node, and so forth. It's been really good, and it's working out pretty great.

To test all that, we used a tool called Test Kitchen. It's part of the Chef ecosystem, an integration-testing tool, and it's really awesome. It uses Vagrant by default, but what we actually ended up doing, once we got our OpenStack instances up, was use OpenStack itself to do our integration testing on Chef. So OpenStack has really helped us out on that side of things. The kitchen-openstack driver is awesome and works really, really well, so we can do a lot of integration testing, tests and everything. The last thing I want to talk about is Packer. Packer is a cloud-image creation tool. For those of you who know the guy who created Vagrant: the same guy created Packer. It is an awesome, awesome tool.
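To make the Test Kitchen piece concrete, a minimal `.kitchen.yml` for the kitchen-openstack driver might look something like the sketch below. All names, credentials, and the wrapper-cookbook recipe are illustrative placeholders; check the driver's README for the exact attribute keys.

```yaml
---
driver:
  name: openstack
  openstack_username: <%= ENV['OS_USERNAME'] %>
  openstack_api_key: <%= ENV['OS_PASSWORD'] %>
  openstack_auth_url: <%= ENV['OS_AUTH_URL'] %>
  image_ref: centos-7        # illustrative image name in your cloud
  flavor_ref: m1.small
  key_name: kitchen-key

provisioner:
  name: chef_zero

platforms:
  - name: centos-7

suites:
  - name: controller
    run_list:
      - recipe[osl-openstack::controller]   # hypothetical wrapper recipe
```

With something like this in place, `kitchen test` boots a throwaway instance in the cloud, converges the wrapper cookbook on it, and tears it down, which is the workflow Lance describes.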
One of the things is that I deal with a lot of different platforms at the Open Source Lab, and I don't want to have to deal with five different tools to create an image. I want a sensible way of doing that, and that's what Packer gives me. I can make an image for Vagrant, I can make an image for OpenStack, I can make an image for basically any platform. Chef actually has a repository called Bento that has all the various Packer templates they use. It also takes care of a lot of the things you don't want to worry about when you're creating VMs on different platforms, like, on Windows you have to do this weird thing with networking, and so forth. Their repository is really geared toward Vagrant, but we wanted to make OpenStack VMs with it, so we forked it and created our own templates. If you want to be able to build all of your OpenStack VMs on various platforms, check out our fork, and we'll accept pull requests. I need to talk with JJ and the Chef folks to actually get it incorporated upstream. So that's my talk. I don't know how far under five minutes I was. All right. Thanks so much.

Next we have Jay Hendrickson. Sorry, is he in the room? All right, great. For the folks that are just joining, if you haven't received your ticket to enter the draw over here, raise your hand and we'll have one of my lovely staff members come over and give you one. We'll start.

Hi, my name is Jay Hendrickson, I'm a product manager at HP, and we're going to talk about hardware. Somebody's raising their hand. Okay, sorry, sorry. So, OpenStack is software, but deep underneath all of that stuff there's hardware, and it's not there just to create heat. It's actually the fundamental infrastructure underneath OpenStack. So we're going to talk a little bit about designing the hardware stack. When you first start designing your hardware stack, you have to create a plan.
The first question you might ask yourself is: what distribution should I use? You could go to the OpenStack Foundation, grab all these modules, and piece them together, and after several months or years have some type of distro. Or you could get a distro from one of the many vendors out there. There's HP Helion, Red Hat has one, SUSE has one, Ubuntu has one, my dog has a distro. It's pretty good; it's got a couple of issues, but it's pretty good. Once you come up with the distro you're going to use, you're faced with the complexity of architecting the hardware. And the thing to think about is that you can't just say, okay, what do public clouds use, I'll use that same hardware. If you do that, you're going to have some serious issues, especially if it's your first private cloud. If you have 20,000 racks of servers, a rip-and-replace approach to hardware works great. But when you're building your first private cloud, it may be a very small cloud, a single rack, and if a node goes out, it could be catastrophic. So you need to think about the types of hardware you might use, and it's going to be a little bit different.

The next thing is, you need to expect that your workloads are going to change. If you're doing traditional IT (and when I say traditional, I mean everything up through virtualization) but you haven't done a cloud before, then once you get a cloud working, the behavior of your users is going to change. Think of it like your mom's favorite meatloaf recipe. For years and years you've been making mom's meatloaf, and it's great. Then you take a culinary class, and you start saying, what if I try this? What if I use less butter? What if I use more of that? Your recipe starts to change, and how you cook starts to change. But that all happens after you took the class.
So your workloads are going to change. You think you know what your workloads are, but you don't know what you don't know, and they are going to change. So you need to think about the hardware underneath, because those workloads will change. You need to expect that you're going to scale up, and you need to expect that you're going to scale out. In other words, you're going to add more memory to nodes, you're going to add more CPU power, you're going to add more hard drives for storage, all kinds of things. And you're also going to scale out: when you build a cloud, you start with something small and build it out. So you want hardware that lets you do that.

Then you want to mitigate risk. After all, you're building a private cloud; you're going to spend, I don't know, half a million dollars, 400,000 dollars. You put your budget together, you go to your CIO, you say, I want to spend all this money. Maybe you want to think about getting some reliable hardware, hardware that is easily managed, hardware that's flexible, that you can redeploy as things change. And then, of course, the last consideration is: how much time do you have?

So let me get to the point and talk about hardware design. First, there's the management control plane. Now, I work for HP, in the HP server division, so for us the best hardware platform for the management control plane is the DL360, one of the most popular servers on the planet. I'm not going to specify how many nodes you need in the control plane, because it depends on the distro you're using and how things are deployed, so I'll leave that up there for a second. Then compute: we use the DL360 for compute as well. They're very flexible and very reliable. We use 18-core Haswell processors in these Gen9 platforms because that gives us a nice, dense compute platform.
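The scale-up arithmetic behind that density is easy to sketch. All the numbers below (node count, socket count, overcommit ratio, VM size) are illustrative assumptions for a small first rack, not HP sizing guidance:

```python
# Toy capacity estimate for a small private-cloud rack.
# Every number here is an illustrative assumption, not vendor guidance.

def vcpu_capacity(nodes, sockets, cores_per_socket, overcommit):
    """Schedulable vCPUs across the compute nodes at a given CPU overcommit."""
    return nodes * sockets * cores_per_socket * overcommit

def vm_capacity(total_vcpus, vcpus_per_vm):
    """How many uniformly sized VMs fit in that vCPU budget."""
    return total_vcpus // vcpus_per_vm

# 8 compute nodes, dual-socket 18-core CPUs, 4:1 CPU overcommit
total = vcpu_capacity(nodes=8, sockets=2, cores_per_socket=18, overcommit=4)
print(total)                  # 1152 schedulable vCPUs
print(vm_capacity(total, 4))  # 288 four-vCPU VMs
```

The point of the exercise is the sensitivity: double the per-VM vCPU count, or halve the overcommit because the workloads changed, and the same rack holds half as many VMs.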
Let's see, I won't read all this out to you, I'll just talk a little more. For Swift storage we use DL380s, with large-form-factor drives, because we're looking for capacity, not necessarily speed. Again, they're extremely flexible, so as your OpenStack platform changes, you'll be able to... you'll be able to... Time's up. Time's up. Okay. Any last message? One last thing. Wait, there's more: "Designing the Hardware Stack for Your OpenStack Private Cloud," that's on Wednesday at 4:30, and there I won't have to rush. Thank you so much. That was great.

All right, next up we're going to have the very funny and very personable Clint Byrum. Don't have a ticket for the draw yet? Raise your hand. Clint, maybe... All right, here we go.

All right, so this talk is like a Lord of the Rings marathon viewing party, and I'll explain that in a moment. Why is it not on the screen? Please make it on the screen, Cody. That's fine. Can everybody read that? All right, can I talk now? Aha. If the keyboard controls work, it'll actually work. So I will explain why this talk is like a Lord of the Rings marathon viewing party, including the extended editions. But first I want to thank HP for having lightning talks, and thank the OpenStack Foundation for putting on this event. Also, I want to remind everybody that all of the words I'm about to say are not the words of HP. They are mine, so please don't get me fired. It is like a Lord of the Rings viewing party because halfway through this, you'll be jealous of Gandalf for having fallen to his death about halfway through.

So now I have a question for you. Does anybody know why OpenStack is like Chewbacca? By the way, there's zero content in this presentation, so you're welcome. Okay, OpenStack is like Chewbacca because outsiders don't understand a word it says, and if you do it wrong, it'll probably tear your arms off. Next. OpenStack is like a MiG-29. It's a beautiful aircraft. That is not why it's like OpenStack, though.
It's like OpenStack because now that Larry Ellison owns one, nobody expects him to use it for its intended purpose. Switching to guns. All right, this one is for our host country. Thank you very much for your hospitality, Canada. I'm about to get thrown out, I think. Why is OpenStack like poutine? No, not this Putin. I am not making any jokes about this man. Not that one either. This poutine. Because right now you're all excited to try it, but by the end of the week you're going to be full of regret. I'm kidding, it's delicious. It's delicious. All right, so why is OpenStack like Canada? Anyone? Because it's awesome? All right, I like that one better. No, because it's full of extremely polite people, except when the decision on what language to use comes up, and then the gloves come off. Lip-a-ton! All right, and why is OpenStack like the Holy Grail? "It doesn't exist"? Ouch, that hurts as a developer. It does exist. Because it promises all sorts of magical things, but it only delivers violently ejected cattle. All right, and why is this summit like the Rocky Horror Picture Show? Because it started out as a couple of lost kids in the woods, but I think they're going to try and serve us a meatloaf at the HP party. It is getting out of hand, folks. All right, that's all I have. Thank you, and try the fish.

So that's going to be a tough one to follow, but I'd like to welcome up Sriram. He's an HP Helion MVP. Is he here in the room? All right, Sriram from CloudDon.

Hello, everybody. I cannot possibly beat what Clint did, but I'll try my best. We've just finished the first day and I'm already tired, and we've got four more days. I can relate. So I'm Sriram Subramanian, founder of CloudDon, a research, analysis, and SI firm, and I'm also an HP Helion MVP. I've been plugged into the OpenStack community since the Cactus days; I did my first install on Diablo.
I'm here to talk about how the HP Helion portfolio can help with your DevOps strategy. Now, who can define DevOps? What is DevOps? Well, if anybody really knows, I would like to meet you in person; I want to learn from you. But until then, I'll start with a common DevOps toolchain. Again, it's a very short talk; I could spend an entire hour talking about DevOps, what we can do, and all the options that are available. I'm just going to pick one toolchain and see how moving to the HP Helion portfolio could possibly help you. This is one typical toolchain. There are various components here: build, CI, deploy. Then there are other tools and services you would use, like backup, configuration management, or source control. There's also commonly software for planning, issue tracking, or collaboration.

Now, you have a lot of infrastructure going on here. You'll probably have different environments: something to run your CI server, something to manage your backup service, storage behind it, a storage backend for that. So you have operational complexity to watch for, and you have a multitude of heterogeneous environments. And finally, as an app developer, you want to focus on your app. You don't really want to care about infrastructure. I'm not saying that the pain of infrastructure will go away, or that you can move away from it completely, but with suitable choices you can hope to have a better environment, so you can put more focus on the app.

Now, just like with DevOps, I can't possibly list everything in the HP Helion portfolio, so I'm going to focus mostly on IaaS and PaaS. If you look at the components here, some of them are services, some of them are software, and some of them you can't really classify, like plain block storage. And of course there are people and process, a key part of DevOps; you cannot get away from that.
So one possible way you can improve your DevOps toolchain is to see if you can run your software on an IaaS layer, whether that's HP Helion OpenStack or HP Public Cloud, and then move your services wherever possible to the PaaS layer. Here's what it could look like if you move. You can replace your build with Maven, for instance, your deploy and CI with the HP Helion CLI, and you can replace your monitoring with the New Relic service that comes with it. If you take your remaining services and run them on VMs or instances on the IaaS layer, you still have to manage those things, but you end up with a homogeneous environment, a homogeneous infrastructure, so you take away some of the pain. You could argue that you're putting everything in one basket, and if that fails, that's also going to be a problem. But at least you have a homogeneous infrastructure, so you can focus more on your application. Again, this is just one possible implementation; it's not going to solve all your issues, but it's something to think about. I'd ask you to take a look at the HP Helion portfolio and see how it can help your DevOps processes. If you have any questions, feel free to reach me, sriram at clouddon.com, or find me on Twitter, @sriramhere. Thank you.

Great, thank you so much. Next we have someone from HP Labs who I'm excited to introduce. He goes by JK. So where's JK? Perfect. He's going to talk about something pretty interesting with regard to Neutron, which is, I know, all of our favorite topics.

Hi, everyone. I'm JK Lee from HP Labs. I'm here to talk about an interesting way to express your complex networking policies and how to automatically compose them. So, network management is challenging.
Policy management is challenging because there are many different types of policies, from security access control to QoS to middlebox deployment, but the current interfaces for them are mostly fragmented and pretty low-level. There's no single pane of glass where you can manage them all together in one place. And there are many different silos, or stakeholders, who have their own versions of networking policies on the common network: operators, tenants, and application admins. We may also have computerized programs, like SDN apps, that dynamically generate networking policies triggered by external events such as a security alert. Currently, the way conflicts between these different policies get detected is typically angry phone calls from customers: hey, my network doesn't work. And the way those conflicts get resolved requires human effort, another round of phone calls to decide which policy is more important. It's a little bit messy.

So we propose a solution called PGA, Policy Graph Abstraction. This is an application-level graph abstraction that allows users to specify their policy very naturally, as simply as drawing whiteboard diagrams, just like you typically do when you discuss and reason about networking policies. We have a couple of examples from real policies. For example, the application administrator, in Graph B, allows port 80 traffic, HTTP traffic, from employees to the web tier just by drawing this graph, and that traffic should go through a load balancer, marked as the LB box. It also allows traffic from web to db and from db to db, with edges allowing port 7000. We have another interesting example, Graph C, which is an SDN app, HP Network Protector. It monitors DNS traffic from normal hosts, and if something goes wrong or an anomaly is detected, the host is quarantined, and quarantined hosts can only talk to the remedy server.
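To make the whiteboard-graph idea concrete, here is a toy sketch of composing such allow-edge graphs. This is a drastic simplification of the actual PGA composition algorithm; the labels, ports, and the exclusivity rule below are all illustrative:

```python
# Toy composition of "whiteboard" policy graphs.
# Each policy is (set of allow-edges, is_exclusive). An edge is
# (src_label, dst_label, port). An exclusive policy (like quarantine)
# prunes any other edge touching its protected labels.

def compose(policies, exclusive=frozenset()):
    allowed = set()   # union of all allow-edges
    granted = set()   # edges granted by exclusive policies themselves
    for edges, is_exclusive in policies:
        allowed |= edges
        if is_exclusive:
            granted |= edges
    return {e for e in allowed
            if e in granted
            or (e[0] not in exclusive and e[1] not in exclusive)}

# App admin: employees reach web on 80; web reaches db on 7000.
app_policy = ({("employees", "web", 80), ("web", "db", 7000)}, False)
# Operator: admins may SSH anywhere, including quarantined hosts.
ops_policy = ({("admin", "quarantined", 22)}, False)
# Security app: quarantined hosts may ONLY reach the remedy server.
quarantine_policy = ({("quarantined", "remedy", 443)}, True)

result = compose([app_policy, ops_policy, quarantine_policy],
                 exclusive={"quarantined"})
print(sorted(result))
```

The ops edge into the quarantined label is dropped during composition because only the exclusive quarantine policy may grant edges there, which is the kind of conflict resolution that otherwise happens over angry phone calls.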
That quarantine behavior is a kind of exclusive access-control intent, and we have an intent API for it: by marking that node as exclusive, its edges cannot be changed at all, and no other edges can be added to the quarantined host. We have other intent-level APIs that help users clearly define and clarify their intents. For example, the cloud operator uses a dotted edge to express a pure service-chain requirement that has nothing to do with access-control intent. And we have an algorithm to compose these together into something like this. Although each individual policy graph looks very simple, the composed graph can be quite hard to produce manually. Our automated algorithm is also scalable; it took about 60 minutes to compose 20K enterprise IT access-control policies. You can easily walk through the graph and see that, for example, the engineering department deployed on campus A with quarantined status, marked at the blue node, can only talk to the remedy server, while the engineering department on campus A with normal status has connectivity to the other servers and hosts on the network. And we have an algorithm to implement this through the service chains required by the individual input graphs. Currently we are integrating this into the OpenStack Horizon GUI, to let users easily drag and drop policy graphs and edit them. We then compose them with our graph composer, and the result is ready to be deployed down to the Neutron side. We also have policy enforcement based on dynamic external events. We have a brownbag session Wednesday at 11:30, so we can talk a little more then if you can join our session. Thank you.

Thank you so much. All right, next I'm going to invite Monty Taylor. A lot of you know him. He's a distinguished technologist, and he's going to talk about cloud-init. Possibly interesting. Okay, that's fine.
So hi, I'm Monty, and I'm going to talk to you about Glean, which is the thing I wrote to replace cloud-init, because cloud-init wasn't working out for me. There are a couple of reasons for that. So it turns out when you boot a VM you need... oh, I can see it down there, that's exciting. When you boot a VM, you need to know what's happening at the time you boot it. As a precursor to this: I use Ansible for orchestration, so I don't need a lot of bootstrapping at boot time. I need a very minimal set of things. I don't need something with the ability to execute extremely intricate scripts or install things onto my VM when I boot it. I need something to get an SSH key on there, and maybe networking, depending on how broken things are.

So there are a few things I absolutely have to consume at boot; there's really no other choice. The first is network configuration. I suppose I could build an individual image for every single node on a non-dynamically-allocated network, but something tells me I would be called even more insane than I am right now. Then maybe SSH keys. You can actually bake SSH keys into an image you're deploying, which we do, but sometimes you may also want to overlay a different SSH key, one you might want to use in a test rig. Or maybe you don't like the idea of baking a public key into an image because you're freaked out by putting a public key somewhere people can see it. I don't really know why you'd be freaked out by that, since it's called a public key for a reason, but there are many things I don't understand.

The thing is, the network-information part of this should be really easy, because there's this dynamic host configuration protocol called the Dynamic Host Configuration Protocol. It's pretty ubiquitous; just about everybody on the planet uses it and has done so for 20 to 30 years. It's the basic thing that allows your machines to get IP addresses when they boot.
Some people, for reasons that surpass understanding almost as much as my obsession with this picture, if you've seen my other talk today, decide that they don't want to use DHCP in their data centers, because they're completely out of their ever-loving minds. What those people think is a better idea is to run a custom-written agent on a node that does file injection and overwrites my own config files that I put on the node. I find that rude.

So, the thing about cloud-init is that it's great. It solved a lot of problems for a lot of the world, and it handles many cases. It doesn't handle all of the cases, though. It especially doesn't handle this particular case, where the networking information isn't actually available in standard places; part of that has to do with patches that haven't landed in OpenStack yet. It also has some dependencies that conflict with my particular use case, which is spinning up the nodes we use to test OpenStack. Cloud-init depends on the same things OpenStack depends on, which means that testing the dependencies of OpenStack, which is what I'm trying to test, is hard. It's also kind of frozen right now because they're rewriting it.

So rather than wait for that to sort itself out, I wrote a thing called Glean. It's small, it has zero dependencies, it only handles static network config from the config drive, falling back to DHCP if your environment is sane, and optionally it will read some SSH keys out of the config drive. It doesn't do anything else, because anything else doesn't make sense. There's a patch up to Nova to put something like this into the config drive. Hopefully the Nova team will land it in Liberty, and if they don't, I'm going to find you and delete all your code. But there's a blob like that, and you can generate static config from it.
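What a tool like Glean does at boot can be sketched roughly like this: read static network info from the config drive, emit interface configuration, and fall back to DHCP when no static data is present. The JSON shape and field names below are illustrative, not the exact metadata format Glean consumes:

```python
import json

# Simplified sketch of boot-time network initialization in the Glean
# style. The JSON layout here is an illustrative stand-in for the
# network metadata a config drive might carry, not the real schema.

def render_interface(net):
    """Emit Debian-style interface config for one network entry."""
    if net.get("type") == "ipv4" and "ip_address" in net:
        return (f"iface eth0 inet static\n"
                f"    address {net['ip_address']}\n"
                f"    netmask {net['netmask']}\n"
                f"    gateway {net['gateway']}")
    # No static info on the config drive: your environment is sane,
    # so just ask DHCP.
    return "iface eth0 inet dhcp"

metadata = json.loads("""
{"networks": [{"type": "ipv4", "ip_address": "203.0.113.10",
               "netmask": "255.255.255.0", "gateway": "203.0.113.1"}]}
""")
print(render_interface(metadata["networks"][0]))
```

The whole job is a few dozen lines of parsing and templating, which is why something with zero dependencies is enough.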
It's already integrated with Disk Image Builder, so if you want to make an image that uses this, "disk-image-create debian-minimal vm" gets you a Debian image, bootable in a cloud, that uses Glean to handle all of its boot-time initialization. You know what has fewer dependencies than minimal Python, though? If you thought that was crazy: there's this language called Rust that they just released as 1.0, and I do have a version of Glean in Rust that you can play with. We're not using it anywhere, and it's not integrated with anything, because they just released Rust 1.0, but it has even fewer runtime dependencies. That's the crazy thing I wanted to tell you about today. Thank you.

Great. Thank you so much, Monty. Next we're going to have Godwin come up and speak about OpenStack telemetry, which is something that is near and dear to a lot of our hearts.

My name is Godwin Efjong, from OpenStack Professional Services. I'm going to talk about Monasca, monitoring-as-a-service at scale. This is typically a presentation we give in one hour; since I have five minutes, I'm going to rush through it. Okay. Monitoring, needless to say, has been around for decades. However, the existing solutions do not address the requirements of large-scale public and private clouds in terms of performance, data retention, and security. Traditionally, performance, scalability, and data retention have been limited to just hundreds of servers, but we know that in a typical large-scale enterprise cloud there are thousands of physical servers and hundreds of thousands of VMs that need to be monitored. Oops, sorry. Why is it doing that? Excuse me. And monitoring-as-a-service is not fully addressed by solutions such as Amazon's CloudWatch; it's not enough, in part because it's not open source. So now that we've stated the problem, my question to you is: what is the solution? The solution is Monasca.
Monasca is an extensible, scalable monitoring solution that leverages a state-of-the-art analytics database. All components in Monasca can be scaled out and clustered for fault tolerance, all of its components are API-driven, and the API supports querying metrics and measurements. With Monasca, we get what we don't get with other open-source systems: metrics, alarm definitions, and threshold calculations, all done server-side. Existing monitoring solutions such as Nagios and Zabbix do not even come close to the performance, scale, and data-retention capabilities of Monasca. What really sets Monasca apart is the separation of alarm definitions from the alarms created from them by the threshold engine. I'm not really going to go into the details, but this is the Monasca very-high-level architecture. It uses a REST API, as I mentioned. The REST API authenticates against the OpenStack Keystone service, everything is associated with tenants, and it supports multi-tenancy. Metrics received by the REST API are published through the Kafka message queue. It also has a notification engine that consumes alarm-state-transition messages from the message queue and sends notifications, such as emails, to users when alarms fire. We use Kafka, as I mentioned earlier, and it also uses MySQL, and plugins exist for HBase and Cassandra, which I'm not going to talk about. The Monasca UI, as you can see here, is fully integrated into the OpenStack Horizon dashboard. It visualizes the overall health and status of the core components of OpenStack, so you can see Cinder and Nova monitoring, Swift, and all of that; we have the VMs shown as devstack and mini-mon down there. Monasca has also been integrated with the open-source metric dashboard Grafana. Monasca is fully open source; the code is in the Stackforge repositories on GitHub. It is not currently an OpenStack incubated project, but we are targeting incubation.
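To make the alarm-definition versus alarm separation concrete, here is a toy sketch of the idea: one definition, many alarms, one per matching dimension set. This is illustrative only; Monasca's real threshold engine consumes metrics from Kafka and supports a full expression grammar:

```python
# Toy sketch of server-side thresholding: a single alarm *definition*
# spawns a separate alarm per host as matching measurements arrive.
# Illustrative only -- not Monasca's actual engine or data model.

definition = {"metric": "cpu.idle_perc", "op": "<", "threshold": 10.0}

def evaluate(definition, measurements):
    """Return an alarm state per host for one '<' threshold definition."""
    alarms = {}
    for host, value in measurements:
        # The definition is written once; alarms materialize per host.
        triggered = value < definition["threshold"]  # op is '<' here
        alarms[host] = "ALARM" if triggered else "OK"
    return alarms

states = evaluate(definition, [("node-1", 4.2), ("node-2", 57.0)])
print(states)   # {'node-1': 'ALARM', 'node-2': 'OK'}
```

The payoff of this split is operational: you define "idle CPU below 10%" once, and the engine tracks it independently for thousands of hosts without per-host configuration.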
We are working with the Ceilometer PTLs and talking about how to integrate it into OpenStack. It is being used in production by companies like Time Warner Cable and Workday, and we are working with companies that are contributing to the code, including HP, Rackspace, Cisco, IBM, and others; you can see the logos all there. My call to action right now is that we are looking for developers and contributors. This started as an internal project we were working on at HP, but we are looking for more developers and contributors to work on things like testing and so on. For resources, you can check out the links there; those are the links for Monasca, and the wiki page is at wiki.openstack.org/wiki/Monasca. Thank you.

Another round of applause for my good friend Godwin. So Monasca is in Stackforge, I think, so you can actually go and see the code and everything like that if you're curious. Next up I'm going to have Tarik, as well as his colleague from Brocade, who's been up here. We're trying to finish on time, so can you do your presentations at the same time? Just do two at once? Oh boy.

Going last wasn't a good idea, especially after Jay Leno here and the guy who wrote cloud-init. All right. So this is not my presentation; I cannot do the NFV part. Yeah, sure. And then you guys go? Sure. So I'm going to close this, if it's okay with you, open up my presentation, and put that on the screen. Let's see where it is. Yeah, mine is not about NFV, actually; it's not technical at all. Oh, really? Okay, so we do NFV first and then I do mine. Okay.

Hi guys. As you can see, we are so well matched right now, dressed exactly the same. Tom, why don't you go ahead and open it up. Yeah, hi. I'm Tom Nadeau from Brocade. I run our open-source projects: OpenDaylight, OpenStack, OPNFV.
So we got together about a month ago and started talking about why we don't build a commercial version of an open-source orchestrator, and we set out to prove that you could do this, that it works, and that it's pretty straightforward. So, I think this was your slide? Oh no, this was my slide. Sorry. We had kind of a scheduling snafu and I found out about ten minutes ago that I needed to be here, so sorry if I'm a little discombobulated. One of the things that we talked about, and that I've talked about with a lot of my customers, is that they're kind of sick of, shall I say, VMware. So they look at all of this swirl of open-source components and they say, hey, isn't there a way to put these things together to actually give me an open-source version of that, for all the obvious reasons, right? So, in what, five minutes or less: this is possible. Maybe it's obvious to all of you in here, but we go out and we talk to people and they go, really, you can do this and it works? And yeah, no kidding, this works.
So we put together a demo for the layer 1, 2, 3 thing 2 weeks ago works great less filling all that stuff and basically we put together brocades open daylight distro with our virtual router so you can use it and you can use it and you can use it and it works great and I think yeah I think the interesting thing was that when we started talking about it it is in NFV if you look at it what is happening in the IT and NFV is trying to accelerate it in the network and it is a lot of things that we are doing care of in NFV where people are talking about it quite a lot but not many are doing so as part of this demo what we wanted to show was that what the art of the possible is and how myth versus reality actually works so we are talking about a lot of things and you know we at HP we have a lot of slides but being able to show a working thing even though simple that's what we showed and we thought what's a better way of with so many different open source open standard bodies and under the auspices of app NFV and I'm hoping some of you got a chance to attend some of the sessions today with using app NFV as the host two different companies who have different products we were able to bring them together so quickly and be able to to show this demo development time of less than two weeks a lot of the problem was people were on vacation different meetings and different time zones but we were able to put it together just because it's based on open standards and as you see we were able to use HP's distribution of open stack Brocade's distribution of of open daylight controller and we were able to bring it together in this time and being able to show a site failover and site failover you'll see and you'll say you know folks in networks have been doing it since you know networks have been around but the beauty of this was this site's failover was based on analytics so what the art of the possible it showed was that if a site goes down you don't have to blindly move everything over 
to the other side. You can send analytics to an orchestrator, which can then provide commands or controls to the ODL-based controller: part of the components to one site, part to the other, all based on analytics. And we did it very quickly because of what open source and open standards have been able to bring. Thank you very much. You should also say, I think, that the demo we did is recorded, or will be recorded soon. It is recorded; it's available as part of OPNFV. It was done working with AT&T, who came up with the use case, and it is available for anyone who needs it. We're also showing a demo of basically the same thing at our booth this week as well. All right, thank you very much. So we have one more, last presentation. We ran a little over, so I apologize, but I'm hoping that the chance to win this tablet over here is going to make it all worthwhile. I'd like to now bring up Tom, sorry, Ben, I said Tom, and he's going to give us a pretty interesting presentation. But he might need some help, so I think we can just drag it over. Hello, everyone. I know you guys had a lot of technical sessions today, so I'm going to do something a little bit different. My name is Ben Zadeh. I'm part of the Helion OpenStack professional services team. My background is development, but part of my research during my graduate program was on technology adoption, because human psychology was very, very interesting to me. So I want to talk to you about behaviors that lead to technology adoption, human behaviors, and how we can take some of those learnings and apply them to OpenStack. This is a picture of a brain before and after it was triggered by an enjoyable experience, and as you can see, it's common sense: the enjoyable experience triggered more activity. You may be wondering how that relates to OpenStack. Many companies have figured out a way to directly connect pleasurable experience to money
to adoption, and one of those was Cialis. Cialis came to market late; the market was dominated by another product, so they had to come up with a strategy. Their strategy was coupons. Now, if you take a picture of this coupon and take it to your local pharmacy, you get three free pills. They figured out that while people are using those three pills, if you approach them during that time period and give them more information, it is more likely that they adopt the product, because they experienced it and had a positive experience. Now, you may argue that using OpenStack is more complicated than taking a Cialis; it definitely involves more people. With Cialis you have one, two, if you're lucky maybe three people involved, but with OpenStack you have a chain: multiple players, multiple decision makers. They have dissimilar viewpoints; they have dissimilar interests. On the other hand, the product is different: it's open source, it has many challenges and unanswered questions, and it's disruptive, disruptive to many players. So when it comes to a technology like OpenStack, we need to look at the entire chain that it impacts; we cannot evaluate adoption by looking at just one part of that chain. Now, what are the constructs that we need to look at? That's why I want to introduce you to these two models. These are technology adoption models; they've been researched to death, I promise you. Both of them, the technology adoption model and the technology acceptance model, tell us that the main constructs you want to look at are perceived usefulness and perceived ease of use. Loyalty, trust, all of those play a factor, but they all feed into these two constructs. If you measure these, and if you have both of them, then there is a strong possibility your customer will have the intention to use the product. So take that: if you have dispersed, multi-directional POCs within an organization, you cannot truly measure those two
factors that I just mentioned, perceived usefulness and perceived ease of use, because you're measuring one part of the organization; you're not looking at the entire chain. You need to look at how it impacts the entire chain to truly measure whether your organization will adopt OpenStack. And that's why I introduce you to SPOC, I'll call it, trademark. SPOC is a super POC. It's a POC that goes beyond just your DevOps team standing up 16 nodes of OpenStack and trying to do what they do with VMware. It involves your developers: they need to develop something that is a cloud-ready application and test it in your OpenStack environment. It involves your business leaders: they measure and do cost analysis, analyze bursting into the cloud, and compare that to traditional systems. And that requires a little bit of... so that's my call to action. That was my presentation; I hope you enjoyed it. Thank you so much. I appreciate everybody sticking around. I know that we're a little bit over. I'd like to now do the draw for the HP Slate, which I assume is why you're all still here. I'd actually like to call BDL Garby up to actually do the draw. You might know BDL Garby; he's a former PTL of something or another, for Debian or whatever they call it. Does everybody have a ticket?
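(Editor's aside.) The two-construct adoption model Ben describes, perceived usefulness and perceived ease of use feeding an intention to use, together with his weakest-link argument about the stakeholder chain, can be sketched as a toy scoring function. All weights and scores below are hypothetical illustrations, not values from any published TAM study.

```python
# Toy sketch of the talk's Technology Acceptance Model (TAM) intuition:
# perceived usefulness (PU) and perceived ease of use (PEOU) drive the
# intention to use. Weights are invented for illustration only.

def intention_to_use(pu: float, peou: float,
                     w_pu: float = 0.6, w_peou: float = 0.4) -> float:
    """Combine the two TAM constructs (each scored 0..1) into a single
    intention score. In the full model PEOU also feeds PU indirectly;
    this sketch keeps only the direct paths."""
    return w_pu * pu + w_peou * peou

def chain_intention(scores: list[tuple[float, float]]) -> float:
    """The talk's point: adoption depends on the entire chain of
    stakeholders, so take the weakest link, not the average."""
    return min(intention_to_use(pu, peou) for pu, peou in scores)

# One enthusiastic team cannot carry a skeptical chain:
chain = [(0.9, 0.8),   # DevOps team: high PU, high PEOU
         (0.7, 0.3),   # developers: useful but hard to use
         (0.4, 0.5)]   # business leaders: unconvinced
print(round(chain_intention(chain), 2))  # → 0.44, the weakest stakeholder
```

Using `min` rather than an average is the point of the SPOC idea: a proof of concept that only exercises the DevOps slice measures the 0.86 link while the organization adopts at the 0.44 one.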
And while we're waiting to get those tickets out, just a few more things. After this, you can go down to our booth; we're going to have actual live racks of hardware there, demonstrating hardware from multiple vendors, very exciting. We also have a number of live interactive demos. Be sure to check out our awesome area over here called the Community Lounge. It's an amazing view, and we're giving away s'mores and these hoodies; you can actually get badges ironed onto them, there are Keystone badges and everything. Yeah, there's free beer, we do that. Actually, I'd like to note that there's free beer at our lounge every day after 3 o'clock: beer and s'mores and fire pits and all kinds of stuff. Oh, and a party Tuesday night. All right, I've got a number from BDL here. It is 816 072. You have it? All right, sweet! Whoo! A round of applause for the winner. Thank you so much.