Okay guys, may I have your attention for a second. Just a note: there is a board upstairs where you can put your lightning talk for tomorrow if you want, and all the attendees can vote for the lightning talks. So please don't forget about it. That's it. And I think it's time. So David has the first lightning talk, so enjoy.

Thank you. Can you guys hear me? Okay. So as he said, my name is David Halász and I would like to talk about how to smuggle TCP traffic through HTTP. I'm a software engineer here in Brno at Red Hat, working on the ManageIQ UI, among other things. I'm also the maintainer of the Sass port of the PatternFly project, which is a UI component library. You can find me on GitHub and Twitter.

First, about ManageIQ. It's a web-based cloud management platform capable of controlling all the things, such as infrastructure, middleware and others. It can automate your workflows, provision stuff and collect your capacity and utilization metrics. It looks like this. I'm not sure how visible it is, but never mind.

In the last few months I was working on remote consoles in ManageIQ. Those are basically web-based consoles to VNC or SPICE sessions, letting you control the virtual machines. Those services are provided by the hypervisor, their traffic is encapsulated in WebSockets, and the browser basically interprets those WebSockets and renders them onto an HTML canvas. The problem with this is that it's kind of slow, because it all happens in the browser, and I was thinking about how to improve it.

To understand how I proceeded, you first need to understand how this proxy was implemented. First, the client sends an HTTP Upgrade request to the web server. The web server does a technique called socket hijacking, which basically passes the TCP socket to a different thread, and in the meantime it opens a TCP connection to the VNC server. If everything is fine, there is then a TCP connection between the web server and the VNC server. The web server informs the browser with an HTTP 101 Switching Protocols response, the connection converts to a WebSocket with the proxy translating between the two endpoints, and you basically have the console in the browser.

The HTTP Upgrade mechanism behind WebSockets was introduced in HTTP/1.1 to upgrade from older versions of HTTP, or to TLS or HTTP/2. But you can upgrade to basically anything, with the advantage of being able to pass extra parameters in the URL, the headers or the cookies. So I was asking myself: why don't we upgrade to VNC or SPICE, as we needed?

For this, I needed to create a wrapper for the desktop client, which is pretty much the same as the one in the web server. First, the browser calls the client proxy with a request. The proxy starts listening on a local port and informs the browser about the port, which displays it in the user interface; you have to type it into the client, something like localhost and the port. Then the client connects to the proxy, and the proxy sends an HTTP Upgrade which gets hijacked as in the previous example, with the only difference that it uses plain TCP instead of WebSockets.

The implementation of this is called Per. It's not fully complete; we have some issues, and the JavaScript library is not fully implemented yet. The server is implemented in Ruby as a Rack middleware. The client side is a browser plug-in, a front-end library and a native application written in Go. It can smuggle anything TCP-based through HTTP. And it has a nice logo of a purring cat.
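[Editor's note: a minimal sketch of the hijack-and-pipe idea described above, written in Go since the talk's native client is Go; the actual server is a Ruby Rack middleware. The `/console` path, the `vnc` upgrade token and the backend address are made-up examples, not taken from the project.]

```go
package main

import (
	"io"
	"log"
	"net"
	"net/http"
)

// upgradeToTCP takes the client's raw TCP socket away from the HTTP
// server and pipes it byte-for-byte to a backend VNC server.
func upgradeToTCP(w http.ResponseWriter, r *http.Request) {
	// Only act on an explicit Upgrade request; "vnc" is a made-up token.
	if r.Header.Get("Upgrade") != "vnc" {
		http.Error(w, "upgrade required", http.StatusUpgradeRequired)
		return
	}

	// Open the backend connection first, so we can still fail over HTTP.
	backend, err := net.Dial("tcp", "127.0.0.1:5900") // example VNC address
	if err != nil {
		http.Error(w, "backend unavailable", http.StatusBadGateway)
		return
	}

	// Socket hijacking: detach the TCP connection from the HTTP server.
	hj, ok := w.(http.Hijacker)
	if !ok {
		backend.Close()
		http.Error(w, "hijacking unsupported", http.StatusInternalServerError)
		return
	}
	conn, bufrw, err := hj.Hijack()
	if err != nil {
		backend.Close()
		return
	}

	// Tell the client the protocol switch happened, then go raw.
	bufrw.WriteString("HTTP/1.1 101 Switching Protocols\r\nConnection: Upgrade\r\nUpgrade: vnc\r\n\r\n")
	bufrw.Flush()

	// Blind byte pump in both directions; no WebSocket framing involved.
	go func() { io.Copy(backend, conn); backend.Close() }()
	io.Copy(conn, backend)
	conn.Close()
}

func main() {
	http.HandleFunc("/console", upgradeToTCP)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```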
The implementation is kind of complicated, but the W3C is drafting new standards for opening TCP and UDP sessions from the browser, so it might get easier in the future. The project is available on GitHub. If you're interested in making it better, reach out to me or open an issue. And I have some stickers. Thanks.

That was fast. Okay, so we have five minutes for questions.

Well, you should understand the speed here as user experience, not latency. User experience: with a web widget rendering into a canvas, you cannot be as fast as with native desktop clients such as Vinagre or another VNC client. Latency-wise it's kind of the same; you cannot really feel it because, look, wait, I'm going to go back: it's done locally here and here, and it is a normal TCP session. So the bottlenecks are here on the local machine; it's the same machine here. I'm not sure if I answered the question.

So, ManageIQ is a web-based interface and we have just a single port open, 80 or 443, and you cannot see what is behind ManageIQ. So we needed a way to smuggle the traffic through HTTP, to avoid opening ports on the firewall. And based on the URL we use in the HTTP request, we can select which VNC server we want to connect to, which VM you want to access. So you can use the same host and port and distinguish using the URL or a cookie.

Yes? Yes, I was prepared for this question. The question was whether it will work through an HTTP proxy. If we tell the HTTP proxy that it's WebSocket, even if it's not, it works. Maybe it's a bug in Apache, so if we have any Apache devs here: I found a bug. Otherwise it will fall back to WebSocket and tunnel the traffic through WebSocket normally, packing it here and unpacking it here. Thanks.

Hello, everybody. Can you hear me? Yeah, it's great. My name is Tomáš Kukrál and I'm currently working as a cloud architect at Mirantis, but none of the worst use cases I will be presenting are connected with my current employer; I have really seen all of them in a previous job, when I was consulting. So I will try to present to you the worst use cases for Ceph I have ever seen. First of all, I would like to say that usually it's not the fault of Ceph, but of people who don't understand it. Ceph is usually presented as a distributed object storage with magic parameters and the different types of storage it can provide. And I really do think it's a magic storage, but you should think carefully about the features it offers. Yes, it can run on commodity hardware, but please don't try to run it on old laptops you found in your office and put into the server room.

Is any of you aware of how Ceph stores the data? Is anybody using Ceph? Raise your hands. I will try to describe it briefly. A minimal Ceph deployment has these two daemons. The OSD daemon, which stands for object storage daemon, is what actually stores the data. The monitor daemon is responsible for taking care of the status of your cluster; it checks that the OSD daemons are working properly, and this kind of stuff. Now I will skip all the details of how objects are stored in Ceph, but there is something called a pool, and you can set parameters on this pool. You can set maybe 10,000 parameters, but size and min_size are the most important ones. Size basically defines the replication factor of the pool.
So if you store one object and you set size to, for example, five, Ceph will save this object for you five times. It's something like RAID, but much, much better. And min_size says how many of those copies have to be available for the cluster to keep working. For example, with min_size one, you say that at least one copy of an object has to be present in your cluster for it to keep serving.

And here is a really elementary architecture. You can see we have four servers. Three of them are running OSD daemons; these servers actually store the data. The last server runs the monitor daemon. And there is a switch, because Ceph uses the network to communicate.

I will skip the boring part and come to the worst use cases. If you look for recommendations for Ceph, they will usually say: dedicated network and SSD journal. But I think there should be a big warning, probably in red letters, that this list is not complete, because the recommendations for Ceph really depend on how you use it, what you want from Ceph, and which features you use.

So, yeah. Many people state as their requirement that they have two servers and they want to run Ceph on them. Please don't let your friends run Ceph on just two servers, because it's something like driving a car on two wheels. Yeah, it works, it can be funny for quite some time, but I would not do it in production. So don't let anybody run Ceph on two servers.

Yeah, these people are funny. Actually, for every use case here there were people who wanted to run it in production, so these are real. These people really love RAID, and they use RAID volumes for all the OSD daemons, thinking it's a good idea because the data will be safe, and then they just set size to one, which means that each object will be stored only once in the whole cluster. Setting size to one is something like running RAID 0 over your network and across all of your servers. So please, really don't do it this way.

Yeah. "We are using VMware, and we have read that Ceph is a really good storage, surprisingly cheap and really fast, and distributed, and we want to use it." Actually, VMware doesn't have support for Ceph, so you will end up with something like an NFS or iSCSI export. This approach can be fine if you use it for only one instance, but many people think they can mount a huge volume from Ceph, export it to VMware through one server, and then run all their instances as if Ceph were connected to VMware directly. No, it doesn't work, it doesn't scale, so don't do it this way. If you want to use VMware, then you should buy storage which is supported by VMware, or use some real virtualization.

Yeah, this one is also funny. The monitor daemon is called a monitor daemon, but that is the last thing it has in common with Nagios. The monitor daemon is there for monitoring Ceph-specific things, like whether objects and OSDs are up, and it also maintains the cluster maps, such as the OSD map. No, you cannot use Nagios instead of the monitors. And it's not fine to run just one monitor, because if that server goes down, everything will be down.
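[Editor's note: to make the size and min_size settings from the talk concrete, this is what they look like with the standard ceph CLI; the pool name and the numbers are example values, not a recommendation.]

```sh
# Create a replicated pool (name and placement-group count are examples).
ceph osd pool create mypool 128

# Keep three copies of every object in the cluster...
ceph osd pool set mypool size 3

# ...but keep serving I/O as long as at least two copies are available.
ceph osd pool set mypool min_size 2
```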
Yeah: "It wasn't using all of the resources, so we also ran many other services on our storage nodes." Ceph has one big disadvantage here: it doesn't use much CPU and memory until it starts replicating and backfilling, and then it will need everything described in the reference architecture. So don't try to use up the rest of your servers' power just because it looks unused; Ceph will use it eventually.

Yeah, and the second group of people who think Ceph will save their life are those who want replicated data: Ceph is a distributed storage, so let's mount a RADOS block device on all the nodes and run a database on it. No, it doesn't work this way, and it's not going to fix your silly architecture design. You should do replication at the application level, not at the storage level, the lowest one. It really will not work this way, and never, ever mount the same RBD on more than one node.

Yeah: "We want to have a replicated application, and we want to run it in the United States and, I think, Brazil or something." No, it doesn't work, because you need really low latency between your OSDs, and this will add a really huge latency. I don't know the exact latency between US West and US East, but it doesn't work this way, and it's not going to bring your data any closer to your application.

Yeah: "Let's buy nano instances, attach five-terabyte volumes and run Ceph on them." No, it will not work, because there are some minimum requirements. It is technically possible to run Ceph on Amazon Web Services or any kind of public cloud, but I would not recommend doing it.

So, the last thing, which I would really love you to remember from my talk, is that Ceph is not going to fix your silly architecture. It's just storage. It has some really magical features, but there are also disadvantages, and please don't ignore them. Think before you do, and read the reference architecture. Do you have any questions?

Sorry, the question was whether this is just academic or whether I get customers who are running it this way. I actually have customers who try to do it, but I have never agreed with a customer to use this kind of weirdness; I was always trying to stop them. Any more questions? Okay, thank you for your attention, and next time I will prepare some kind of workshop on how to run Ceph on AWS or this kind of stuff. Thank you.

Okay, so hello. My name is Jiří Vaněk. I'm working at Red Hat in the OpenJDK team, and I'm going to make a little bit of fun of Java Web Start, and send software development ten years back. So, Java. Java had good security long before security was actually cool. It had proper signing, it had proper permission control, sandboxing, it had a remote launching protocol, it had auto-updates; it had everything. You could deploy your application easily and safely, and you were happy. Unluckily, Java Web Start and the plugin, as designed by Sun, were always closed source, and that was bad, because no security audit was done in time, and there were zero-day exploits pretty often, until the plugin became absolutely unreliable. Actually, I think it was never really cool, and people didn't like it. Still, up to now, it's the only solution for some corporations.

Now, IcedTea-Web. IcedTea-Web is an open implementation of Java Web Start and the plugin.
It is trying to be fully compatible with the Oracle implementation, but its architecture is completely different, which is really good, because when Oracle bought Sun, they were thinking about open-sourcing the plugin and Java Web Start, but they didn't do it, and I think that means something. So yes, IcedTea-Web is trying to be compatible, and here it is.

The plugin is dead. Deal with that. If your corporation depends on the plugin, close your business. The IcedTea-Web team, which currently means only me, had some time for some fun. Mostly it was the Java Web Start -html switch, by which we are trying to keep some legacy games and applets alive. We run pretty well in offline mode, we have full desktop integration, and we can run in command-line-only mode. We have a custom sandbox and many, many cool security features, with the disadvantage that the Java sandbox is no longer considered secure. If you're interested, see my last year's presentation; that hasn't changed a lot, but the thinking about it has changed a bit.

So, there is one big benefit: Java is no longer the slow monster it was at its worst. People are always complaining that Java is slow and useless and whatever, but I think the hardware has finally caught up, and Java applications are really reliably fast. Or maybe the promise the Sun engineers gave us some ten years ago, that we keep the same code but run it on better and better VMs, so it becomes faster, maybe it came true. So maybe we can put this all together and run all these cool features in a real, normal application, as you will see.

So, what do we need? Everything we need for a normal application. We need source code, but in Java we can sign it, so everybody knows it's ours. We can get the dependencies and additionally put a manifest file inside. We will have our Java Network Launching Protocol (JNLP) file. And we will create, in addition, a launcher which will be placed on the PATH. So we are shipping only the launcher; everything else will be done by IcedTea-Web. This works perfectly fine also with applets, but please, really, don't do that. It does have some disadvantages: for example, the resources are downloaded the first time, and they are always updated at launch time. And it also has some advantages, which are more or less the same things. IcedTea-Web is trying to do its best to turn these disadvantages around, but they remain disadvantages.

Still, demo. We have our application, which is really simple, an application like this; it just lists files, so something like the ls command. Just a signature, so we know we are really running my application. We built it, we have it in some jar, here it is, we signed it, and we created the launcher. This is the most important thing. The launcher, the only thing we are shipping to the customer, launches Java Web Start, in this case my custom build. It knows at which URL to find the JNLP file, and it passes all the arguments inside. So the application will behave like any other application, not like some strange Java application. I will launch my small internal server. Here we go. And here we go to play.

So what happens when we launch our application? Okay. Oh, what's that? Some certificate dialog. Anyway, yes, we hate it, everybody hates it; when you are using this properly, you will have about five startup dialogues, so you need to start to love them. We can run it. Oh, here it is. We have ls. Yeah, we invented the universe.
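[Editor's note: for context, the JNLP file the launcher points at is a small XML descriptor; a minimal sketch might look like the following. The names, URL and main class are made-up examples, not the actual demo's.]

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://localhost:8000/" href="ls-demo.jnlp">
  <information>
    <title>ls demo</title>
    <vendor>DevConf demo</vendor>
  </information>
  <!-- The jar is signed; without this element the app runs sandboxed. -->
  <security>
    <all-permissions/>
  </security>
  <resources>
    <j2se version="1.8+"/>
    <jar href="ls-demo.jar" main="true"/>
  </resources>
  <!-- Default program arguments declared in the descriptor. -->
  <application-desc main-class="demo.Ls">
    <argument>.</argument>
  </application-desc>
</jnlp>
```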
We can launch it, for example, on this, my home directory. Sorry. Is it better? I will. Is it better? Okay. So we can see that the arguments were really processed by the Java application correctly.

Now, yeah, this is really nice. What does it mean? This is some application signed by some strange DevConf. But I don't trust it. It's supposed to only read the disk; it should not access anything else, but it actually can, because it's a signed application, so it can do anything. So we can limit it: it is allowed only to read my system. And it's working again. And that's good; otherwise it fails, and it should fail, because I forbade it completely from doing anything on my system, and it was trying to read the system.

So, that's it. However, I still see something called regular applications, and regular applications should behave correctly like this in a command-line environment. So what will happen when we disable the display? Here it is, the dialog again; you cannot get rid of it. Here you go. But I see this is something which nobody likes, so you really cannot remember that. And now we are finally at the stage when the Java application... oh, what was that? It was not remembered. That's not nice. Okay, everybody close your eyes, maybe it will start to work. Anyway, I will try to remember it differently, from the GUI. Maybe it will work now. No, it will not. Somebody is still looking. Not asking anymore.

So, yes. It's Java Web Start, so it's remote. It's trying to be secure: you need to keep it in a sandbox, and you can adapt the sandbox. It will keep making you angry with various certificate issues and the like, but in the end you can really configure it, and you can run a Java Web Start application like any other application, normally. And it has some advantages and disadvantages. And as I was saying, this works pretty fine also in applet mode. This is some page on the web; it has several applets which cooperate via JavaScript. Here I have a launcher for that. You can see again: javaws, some arguments, the -html switch and the web page, and some secret switch which behaves like this. Hopefully it will work. Does it run? It's relying on the network in this room, so probably not. Who is downloading movies? It shouldn't matter, it should be okay. How many minutes do I have? Oh, here it is. All the applets on the page really started up, and if they support JavaScript, they can communicate like any other legacy applets on the same page. So this user can run it from the command line as a command, and it will work for him; the applet will work for him. So, yes, that's it. That's all this demo was about to be. So, thank you for your attention.

You know, let me... Do you want the remote control for the presentation? You have to mix the minutes including the... All right. So, they told me I have 30 minutes including... I'm just kidding. All right. So, everybody, my name is James Labocki. I work in the Integrated Solutions business unit at Red Hat, and I want to talk to you about some of the things we're doing around strategic design. So, how many of you have heard of OpenShift? OpenStack? CloudForms? ManageIQ? Ansible? Okay, all right. So, one of the things our team struggles with is bringing those pieces all together. We work on something called Red Hat Cloud Suite, which basically tries to integrate all of those into a cohesive user experience, which today we don't really have, because all of those are developed by separate open source communities.
So, I want to talk to you about one of the things we're doing to make that user experience and design bring all of these together better. I'm going to go really quick. But before I begin, here's a question: what percentage of people younger than 30 dream in color? Raise your hand if you have a guess. 100%? Anybody else? Guess? Come on. It's 80%. Okay? So, what percentage of people older than 60 dream in color? 20%. So, why is that? Color TV became mainstream in the 1960s, at least in the US, and there was a study showing that once people had color TV, they suddenly started dreaming in color. It's pretty fascinating, right? You wouldn't expect that to happen. So, the idea we had was that if people can see things, it removes a lot of ambiguity about what you want to deliver. So, we try to build rapid prototypes and basically bring clarity to the things people want to see. When we talk about running OpenShift on OpenStack, or running OpenShift and extending it with Ansible, what does that mean? We're actually prototyping these things in a lab to show what they look like.

So, what we're not doing in our team: we're not building open source projects; those already exist. We're not trying to build a product. We're just trying to understand and prototype how these things should come together, and build that. And we're also not selling this; when you see this demo, it's not for sale. But we're more than happy to have you send us your contributions, I guess, right?

Okay, so there's a group at Red Hat called Open Innovation Labs. I don't know if you've heard of them, but they're a consulting organization that helps people build microservices-based applications. A customer might come to us and say, I want to build a new mobile app using containers and Kubernetes and Ansible and all these things, and we actually help them develop that app. They come into a lab with us, they spend eight weeks with us, a residency, and they pay us to develop their new applications using the new technologies. Over on the left-hand side is what they have: their own lab with all this racked gear. We have an office in Boston, one in London and one in San Francisco, and we're looking at opening more in other cities in the future.

So, what we do is our team, down at the bottom, works with the Labs team, and we understand what they are doing with their customers and what they need to operate this platform. Initially, let's say you want to develop a new application that runs on OpenShift and OpenStack and leverages Ansible and CloudForms: you're actually going to use multiple user interfaces; it's not cleanly put together from an architecture standpoint yet. Right? So, we jointly create personas with them and we build prototypes. Literally, we will work with them, we have a group of engineers, and we will go build the front-end UI and the back end and figure out how this stuff works. And the whole idea is that we can validate that with customers faster. So, let's say we have an idea: we really think that Ansible and OpenShift can come together and extend OpenShift to do really interesting things.
We can actually go try that in a lab, and then enterprise customers, large banks, animation studios, retail customers, will come and tell us what they think about it. The whole idea is we validate the prototype there, we share it back with our engineering teams, product management and engineering, and we can take concepts that are in that prototype and deliver them in products. We shorten the feedback loop: instead of a six-month loop of build it for six months, ship it, and then find out the user experience isn't right, we can do this very quickly, build concepts, validate them with hands-on users, and then actually deliver them. The whole goal being that we get rid of these prototypes over time as the upstream communities build the functionality we need. Does that make sense? All right.

So, what I want to show you is a quick prototype, one of the ones we're working on right now. If you were to go work with the Open Innovation Labs, we're building a console. How many of you have heard of PatternFly? It's a UI development framework. We have a couple of PatternFly UI developers on our team, and we've built this. So, if you come to the lab and you say, I want to build a new application, I can come in here and create a new topology. You'll see I've already created one here, and a topology can have multiple projects in it. I can relate multiple OpenShift projects, and I can also create projects for AWS, OpenStack, RHEV, VMware, whatever providers I want. That creates a data model, and then I have multiple promotion stages: dev, UAT, delivery, right? And what happens is, when I hit the build button, that data model is saved to the back end, and then Ansible picks it up and automatically deploys those on any of those cloud providers I want, OpenStack, AWS. So, this takes all the time you need to provision OpenShift, any of your DevOps tools that don't fit inside containers today, and all that, and automates it, so that on day one you can come in and start deploying new applications. We're building this prototype today, and basically once you hit the build button you can see that this gets provisioned multiple times.

We're finding that this is actually helping us, because we're able to go back to the OpenShift development team, the OpenStack development team, the CloudForms team, the Ansible teams, and tell them about the personas that are using these, the gaps that we have in our products and need to close, and then figure out where to develop them. Instead of automatically assuming we need to develop a new feature in this product to solve it, we better understand the whole use case.

All right, so one of the places we're going is integrating a couple of ideas. We're starting to onboard other people's technology into this platform as well. How many of you use Elastic or the EFK stack in any of your... anybody? Elasticsearch, Fluentd, Kibana, no? Logging, data, metrics. If you use that, what we're starting to do is deliver services through this platform to better understand how they connect. So, when someone goes to that interface, we'll be able to deploy the logging service, connect everything up and give them contextual searching.
So, where we're going is that you'll be able to select any of those topologies in there, and you'll start to see logs and data from them. We're also doing this with Insights. Has anybody heard of Red Hat Insights? Okay. With Red Hat Insights, we collect data about customers' deployments when they share it with us, and then we marshal all of our support cases. We have over a million closed support cases; we keep them in Kafka, we do some case matching, and we can predictively tell you what's wrong with your environment around stability, security, reliability and availability. We have over 30,000 documented knowledge base articles, and we're going to start integrating that. So, what you'll start to see is, underneath all of my stages, I'll have a little health indicator, and it'll say there are 13 issues in your development environment, for example. I can go there and see that there are nine stability issues, and I can click through to the exact issue, this one being the RPM command breaking due to RPM database problems. When I select it, it'll actually tell me how to resolve it, based on our knowledge base and case matching. And then we're going to figure out a way to include the resolution in the pipeline: I'll take an Ansible playbook, add it to my pipeline, and the next time a build happens, the problem gets resolved.

So, we're trying to think way outside the box, bring the user experience together, develop quickly, and start to get things validated. Anyway, that was a lot of information. This usually takes a lot longer; I usually give a longer demo and show the back end, how it's all running on OpenShift and how we're working with people. But hopefully that gives you a good idea of how we're trying to use a strategic design approach to validate things. So, please feel free to send me a tweet or a direct message on Twitter, or email me. Any questions?

It is, actually. So, there's a GitHub repo called rht-labs where the majority of the code sits. There's a little bit of stuff with certificates that we keep private, but yeah, absolutely, rht-labs has the console and those pieces, and we're basically doing two-week sprints. You're more than welcome to join us too if you want to help. Cool. What size T-shirt are you? A large or a medium? All right, it's an American medium, so you might need a small. All right, thank you. Thanks.