Well, hello and welcome again to yet another OpenShift Commons briefing. Today, we're pleased to have Brad Micklea from Codenvy talking to us about Eclipse Che, which is the underpinning for the Codenvy offering. For those of you who don't know, Codenvy was recently acquired by Red Hat because we just thought it was such awesome stuff, and we're really pleased to have them as part of the Red Hat family now. But I really am going to let Brad introduce himself and his topic, containerized workspaces for cloud-native development. The format for today's session is that he's going to give a presentation and some demos for about half an hour or so, and then we'll have live Q&A afterwards. If you have questions, post them in the chat. I'll try to get to them; if I can answer them, I will. Otherwise, I'll read them out during the Q&A and open up the lines for people to ask questions. So without any further ado, Brad, please take it away. Thanks very much, Diane. Thanks for having me. It's interesting, actually, because Diane and I met at Summit before Codenvy was acquired and talked about doing this. So it's kind of fun to have set it up, been acquired, and now be doing it from within Red Hat. What I'm going to talk to you about today is Eclipse Che, which is, of course, an upstream open source project, and specifically about containerized workspaces for cloud-native development. This is an area that I write about periodically on Medium and other places. I've been a committer on Eclipse Che for the last couple of years, and an executive with Codenvy for the last three. The history of Che is actually kind of interesting, so I'll briefly go into that before we jump in. Che actually didn't start as an open source project. It started as a proprietary product, strictly as a cloud IDE targeting JavaScript, which was a pretty obvious place to target a cloud IDE.
As the technology grew and a community grew a little bit around that product, it occurred to Tyler Jewell, our founder, that it really needed to be, and deserved to be, an open source project. And so the code was donated to the Eclipse Foundation and became Eclipse Che. Before the code was donated, though, in late 2013, the group of developers who built the cloud IDE actually completely rebuilt it as a set of microservices that were containerized in Docker. So we've been using Docker for a really long time, and we certainly went through a lot of the early teething pains. Things are much better with containers today than they were back then, but we'll talk a little bit about the state of things today. Specifically, one of the things that I find interesting when I look at a very, very high level at the market, and obviously this is a big simplification, is that you have over 80% of enterprise companies at this point reporting that they are using continuous integration tools and doing continuous integration. That's a huge percentage, really. It means that very, very few now are not using it in some capacity. But when you look at the development side, that part is not continuous. That part has not yet been automated, containerized, and made more efficient. In fact, while we as developers build all these machine learning, cloud-native, and continuous integration tools, all these wonderful tools to help everybody else, the ones that we use ourselves are still locked very much into our desktops. It's a paradigm and a model that was brought to the fore 20 or 30 years ago and has really not evolved significantly since then. That I find just surprising, to be honest, and it's what we're going to talk a little bit about today.
Now, Che started with a vision, and that vision was pretty simple, really: that anyone at any time should be able to contribute to a project without needing to install software, without needing to configure it, without needing to figure out what version of the build system to use, what parameters to pass in, how to connect this piece of functionality to that one, or what happens if I have the wrong package on my machine. All those things are often subtle, or very obvious, inhibitors to people absorbing and playing with open source technology. We just said, you know, it feels like we're far enough along in the technology cycle that this shouldn't need to be the case. And as we began to look at the problem, we started with the obvious question: what's needed in order to make a contribution to a project, an open source project or whatever? You need an IDE, obviously, or some kind of code editor. You need a set of project files, which is your source code. But you also need a runtime, because no one writes code that is never going to get run. Now, a desktop IDE has a concept of a workspace, and when you speak to somebody who uses a desktop IDE and you say the word workspace, what they think of is the project files and the configuration around that. But what it explicitly excludes is the configuration of the IDE itself, the installation of the IDE itself, and the installation and configuration of the runtime. That's a lot of things, really, that it excludes. The other thing we very quickly realized, and I think now, especially as cloud has become so pervasive in every part of what we do, is that the localhost runtimes, which have served us well as developers for so long on our laptops and our desktops, really are constrained. They're constrained in a lot of ways. The obvious way is scaling, although not everybody builds apps that actually need more horsepower than their laptop has. But they're constrained in two other important ways.
They're very difficult to share. When it's on my laptop, I don't really want to just go and hand my laptop off to another developer, so if I need some help, I'm kind of limited to screen sharing, for example, which is a bit like trying to drive a car from the passenger seat. You could technically do it, but it's an enormous pain. Controlling the laptop itself is also fairly constrained. There are a lot of variables going on in my laptop at any point. In fact, at the beginning of this presentation, you saw I had issues with BlueJeans because I had Slack running. These are two unconnected apps. There's no reason for them to affect each other, but they just do. And so that becomes a source of issues and problems. So the solution was conceptually simple, but technically quite difficult: we needed to redefine the workspace to include the IDE and/or editor, all the project files and configuration related to that, plus the runtime and everything needed by the application in order to properly execute, but also to be debugged and to be built, all those things where developers go beyond just running the app but that are critical to what we do day in, day out. And what we kind of dreamed of as we put this together was saying, hey, you know, continuous integration has revolutionized the way people build apps, the way they move those apps forward in the pipeline towards a release. We should be able to do the same thing for development. It should go beyond just "I'm an individual coder." It should acknowledge my place within a larger community, a larger team, and recognize that standardization of runtimes and tools doesn't necessarily mean a reduction in choice, but it can mean a reduction in issues and problems and speed bumps. All right, so enough talking. Let's see what Che looks like. Now, one of the easiest ways to get into Eclipse Che is obviously from the site. You can go in and do either a SaaS-based install.
Well, I shouldn't say install. You can open an account in our SaaS, or you can do a local install. Now, both get you the exact same functionality. The SaaS is my preferred way just because I find it the easiest, frankly: I don't need to have Docker installed, I don't need to do anything with my own machine, and I kind of escape those localhost runtime limitations. Now, if I do that, I can go and just create a workspace, of course. But the other kind of cool thing about packaging every piece of a project together into a single configuration, like Che does, is that you can now start to create launch points for projects. So if I'm particularly interested in, let's say, Vert.x as a project, and maybe I've never played with it, instead of having to go and configure everything manually, I could click this button. Or, in fact, I can go directly to the repo, and if I come down, you'll see there's a developer workspace button. If I just click this, it's going to pop me right into Eclipse Che, running in the Codenvy cloud, and it's going to configure me a workspace, perfect for Vert.x. You can see it's grabbing the Docker container now. It's going to instantiate that Docker container, and it's going to then inject agents, which is what's happening now. We keep the recipe for the Docker container standard so that it could be a recipe that is pulled, let's say, from your production environment. And then, after the fact, we inject the agents which developers need: things like terminal access, SSH, perhaps a debugger, and language services to allow you to do autocomplete or error checking for the particular language. Once the containerized workspace has started under the hood, so to speak, then we start the IDE. This is a lightweight browser-based IDE, as you can see. It takes about 100 megs per tab in my browser, so not a great deal. But it is an actual IDE. It does have debugging, it has autocomplete.
It has a lot of the functions that you'd expect out of an IDE. The nice thing about it, though, is I haven't needed to install anything, and I haven't needed to configure anything. Although I'm running this right now on a laptop, I could, and have, run Eclipse Che and Codenvy like this on a Chromebook. We actually have customers of Codenvy where the dev leads sometimes use their iPads to go into Codenvy, look at the code, and do checks for code reviews or what have you, so they don't necessarily need to be at their computer. Now, I mentioned autocomplete. If I hit Ctrl+Space, I get standard autocomplete. And if I do that again, it's basically instantaneous, as you can see. Now, the reason why, and this surprises a lot of people, is that they expect a cloud IDE to be laggy, because logically what you imagine is happening is that my browser is sending a packet of information back to some service, which is going to analyze it in a centralized way, interact with the code in a separate container, and then send the result back to my browser. But that's not what's happening here. What we've actually done is clone the code into my personal container. Inside that container, we've injected the language server, in this case for Java, so that language server is operating directly, in kind of a localhost relationship, with the code. And the only things that get sent between the browser IDE and that container are a set of events and very, very small, lightweight messages, but not full content. That content is then stored partly in the browser so that it can be optimized. As a result, you get very, very fast response times, even over quite a large distance. Now, I can do other things. Let's take name; this is such a small little app, I don't know that this is going to be the best example, but let's see if I can get them on the same screen.
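For a sense of how small those messages really are, here is a sketch of the Language Server Protocol's wire format, which frames a JSON-RPC payload with a Content-Length header. The request shown is a generic LSP completion request; the file path is illustrative, not something from the demo:

```python
import json

def frame_lsp_message(payload: dict) -> bytes:
    """Frame a JSON-RPC payload the way LSP does: a Content-Length
    header, a blank line, then the JSON body."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

def parse_lsp_message(data: bytes) -> dict:
    """Inverse of frame_lsp_message: split the header from the body and decode."""
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.split(b":")[1])
    return json.loads(body[:length])

# A completion request carries positions and document URIs,
# not file content, which is why the traffic stays lightweight.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        "textDocument": {"uri": "file:///projects/app/Main.java"},
        "position": {"line": 10, "character": 4},
    },
}
wire = frame_lsp_message(request)
```

The whole request is a couple of hundred bytes, which is why the round trip stays fast even over a long distance.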
All right, so I'm going to take name, and now let's say I want to do some refactoring and rename it to myName, and you can see: done. Now, this is too simple an app, but if this was a multi-project, multi-class, multi-package app, the refactoring also works at those levels, so I can refactor the class, I can refactor the project and the packages, and it's going to move everything around for me. It's got error checking, so if I do something silly and just add in a package that I'm not actually using, it's going to go and say, hey, you're not actually using this, you don't need it, so I can go and kill it and clean things up for myself. And you saw there, and actually let me just do that again to call attention to it a little bit better, that the autocomplete is of course location-specific. In this case, because I'm up in the import section, it's going to show me autocomplete for a bunch of packages; it's not going to show me individual classes. And of course I don't see packages when I'm down in the body of the class. I can of course navigate around the file; this is a very small and simple file, so that's not going to be super exciting. I can open declarations, I can navigate the file structure, and if I do Ctrl+F12, I can even see all the inherited classes and jump to those, so I could go and jump to the actual Java Object class and scan down and take a look at that, which is obviously a fairly meaty one. So all those kinds of editor interaction things that you would expect are there. Now, you'll also notice that at the bottom here, I have a terminal. In this particular case, I'm only running a single container. The color, by the way, of this container name, in this case ws-machine, indicates its running status; in this case it's green, all is good. I've got my terminal here, and I have root capabilities, so I can do whatever I want. I could run top if I want, and just check out what's in here.
You can see that if I go into the actual project... why am I having so much trouble here? My fingers are not behaving properly; I apologize, guys. And you can see that I've got the same structure, actually, I guess I should open it up here, that I have above. Hello. So these are both using the exact same source code, pulling from the same location; the changes that I make down here will be reflected above. So if I go back down here, and let's go into the project directory, and there's my new file.txt, and I of course can edit it, and you can see the edit is there. So lots of interaction. I can use this just as I would any other IDE, and of course I can use, I should be clear, any other IDE. I don't have to use this browser-based IDE. So if I'm particularly married to IntelliJ, or the Eclipse desktop IDE, or Sublime, or whatever I want, I can actually use a mount and sync between my local file system, where that desktop IDE will interact, and the container where my app is located. This gives me the familiarity of my desktop IDE for the points where I'm doing actual editing, but allows me to build, run, and execute tests inside the containerized runtime, which is provided by Eclipse Che. So for some people, that's kind of the best of both worlds. Obviously we have Git and Subversion menus. I can interact with Git over here and check out all the branches, origins, local branches, everything else. Or, since Git is typically installed in the container as well, I can do it directly from the command line, whatever my preference is. Now, let's actually shut this down, and let's show a slightly meatier example. So let's go back into the dashboard. Although this is, as I said, all hosted on Codenvy, everything I'm showing you here is Eclipse Che; it's just that it's being run inside the Codenvy cloud. So let's go to create a workspace, and let's do something a little bit more in-depth. Now, the concept in Che of a stack is quite important.
A stack is basically a set of runtime, infrastructure, and dependencies into which I can inject my code and get my running application. Now, we package a number of stacks in what we call our stack library, and you can see that for a large number of different languages you've got all these different pre-built stacks, optimized for that language. You can also import a stack if you have a recipe elsewhere, in a Compose file or a Dockerfile, or I can just start typing a Dockerfile here, eclipse/ubuntu, and that's going to get me a new Ubuntu container from the Eclipse Foundation. But let's go back to the ready-to-go stacks, because what I actually want to do is a multi-container application. Increasingly as developers, we're working on more complex applications and microservices, and we want to be able to replicate the production infrastructure when doing dev. And that means having multiple containers with their own network in place. Let's choose Java and MySQL; that's going to give me a Java stack and a MySQL database separately. I'm going to come down, and I can give it any name I want. Let's call this one OS Commons. You can see I can control the amount of RAM allocated to my database and my dev machine. Now, because I'm running this in the Codenvy cloud on a free account, I only get three gigs, so you can see I've kind of optimized for three here. If I was running this in my own cloud instance where I controlled the resources, or if I was running this on a server, I of course could ramp these up much, much larger. Now, this is going to deploy the Pet Clinic application to this particular environment. Let's go ahead and create it. Now, the process is going to be the same, except in this case you can see that we're prepending the Docker instructions here with which of the containers they're operating against. So of course we start the database first, and you can see the Docker container was instantiated.
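Conceptually, a stack like this boils down to a recipe describing each machine. As a rough sketch, here is what a two-machine environment might look like expressed as JSON via Python; the field names, agent names, and image tags are illustrative, not the exact Che stack schema:

```python
import json

# Hypothetical two-machine environment: a Java dev machine plus MySQL.
# Field and agent names are illustrative; real Che stacks have their own schema.
environment = {
    "machines": {
        "dev-machine": {
            "image": "eclipse/ubuntu_jdk8",   # illustrative image name
            "memoryLimitMB": 2048,
            "agents": ["terminal", "ssh", "ws-agent"],
        },
        "db": {
            "image": "mysql:5.7",
            "memoryLimitMB": 1024,
            "env": {"MYSQL_DATABASE": "petclinic"},
        },
    },
}

# Serialized, this is the kind of recipe a stack definition carries around.
recipe_json = json.dumps(environment, indent=2)
```

Because the whole environment is just data like this, two developers who start from the same stack get structurally identical, but separate, runtimes.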
Now we're executing all the various commands to start up MySQL, load in the dataset, et cetera. Once that's complete and the database is up and running, then of course we'll start our app server container, which will then connect back into that database. There's an overlay network being run between these two containers. And again, this is my own private containerized environment. Anyone else who comes into Che and runs the exact same stack with the exact same app will get an identical containerized environment, but it won't be the same one; we will each get our own copy. And that's because it's very important as a developer that I have my own sandbox, a place for me to play. I might want to invite other people in, in some cases, to help me or to work on the same app with me, but that usually isn't the default. I typically want to have my own space. Now you can see my OS Commons workspace has started. The IDE has been spun up. And now you can see we have a dev machine and we have a database machine. This particular terminal that I'm seeing here is in the dev machine. And so if I look, there's my Java web Pet Clinic app, and I can go into the source code here, of course. Let's split this and let's add a terminal for the database by clicking on that little button. And of course, my database machine looks totally different. Now, one of the things, let me move this out of the way, one of the things I didn't really show much in the last part of the demo, and actually I'm just going to zoom out for a second, is the command bar. And this gets pretty interesting in this type of configuration where I have multiple containers. Because now I can do things like a build and deploy on my dev machine, okay? So that's going to start, and that's running all my Maven commands. You can see they're going by rather quickly, but they're there. And if I go into run and into my command palette, I can see I also have a show databases command.
And of course when I run show databases, I want to run it against the database machine, not the dev machine. So now this is showing me the databases that are inside my MySQL instance, and there's my Pet Clinic. Now, I'm not a super expert in databases, so to be honest, if you'd asked me how to show a database in MySQL, I'd have to resort to Google. The nice thing is that when I create these workspaces for a project, I can embed these commands into it. So I can basically preload a workspace with a set of commands that have value for people contributing to this particular project, and show databases may be one of them. I may want to clear data, I might want to reset data, I might want to change or shut down and restart connections. All of these things can be built in as commands, and each individual user then still, of course, has the ability to override commands, change them, or delete them. So for example, if I take the build command, you can see in this case this is doing an mvn clean install and it's skipping tests. Maybe I don't want to skip tests, so I can either create a new "build, no tests" command, or I can just delete this, because it's only in my workspace, and when I make this change and save it, it's not changing it for everybody else. It's not changing the template; it's only changing it for me. So let's go ahead and put that away. Let's come back to here, and our app server has started up, and I can now click this little blue preview link. Now, the preview link is going to open up in a new tab, and this is going to point to whatever domain my Eclipse Che is installed at and automatically select the correct port. And that's important, because if you've been using containers much, you'll know that when you open port 8080 in a container, it doesn't get externalized from the container as 8080. It's actually mapped at runtime to a port selected at random from the ephemeral range.
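As a rough illustration of what such a preview-URL macro does, here is a sketch in Python: a template substitution against the container's runtime port map. The `${server.port.8080}` macro syntax and the port values here are illustrative, not necessarily Che's exact macro names:

```python
import re

def resolve_preview_url(template: str, port_map: dict) -> str:
    """Replace ${server.port.NNNN} macros with the ephemeral host port
    that container port NNNN was actually mapped to at runtime."""
    def substitute(match):
        return str(port_map[int(match.group(1))])
    return re.sub(r"\$\{server\.port\.(\d+)\}", substitute, template)

# Suppose Docker mapped container port 8080 to a random ephemeral host port:
runtime_ports = {8080: 32769}
url = resolve_preview_url(
    "http://node1.example.com:${server.port.8080}/petclinic",
    runtime_ports,
)
# url is now "http://node1.example.com:32769/petclinic"
```

If the workspace is restarted and the mapping changes, regenerating the URL from the same template picks up the new port automatically.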
So you always have to figure out with containers where your critical ports have been mapped to. Eclipse Che takes care of that for you. We actually have a number of macros that you can add into your commands to determine this, so that when your preview URL is generated, it just automatically chooses the correct port. And if I were to rerun this and that port changed, of course my preview URL port would change as well. It makes things a lot easier. Here's my Pet Clinic app. So of course it works as you would expect. I'm going back and hitting the database for all these things. It's just the standard Pet Clinic app; there's not much too exciting about it. But what I think is important here again, just to reemphasize, is that without having to install anything, without having to configure anything, I am able to provide for users of my project, developers of my project, and contributors to my project a very simple single link, which brings them into a web interface in which they can edit, build, debug, run, change, and contribute pull requests to any project that they want. They don't need to know a great deal about the project. I can preload it with commands so that everything is at their fingertips. It really lowers the bar for community contribution. And our experience has been that it makes teams, whether they're teams working on open source projects or on closed source enterprise projects, far more efficient, because now it's much easier for me to focus in on what I need to do as a developer and much less on the infrastructure I need to do it. The last thing I'll talk about a little bit is sharing. I can, of course, take this URL up here at the top, and if I give that to somebody else with sufficient rights, they can actually come into my workspace. Or what I can do is go back into the workspaces, select my workspace, and go to share. And now I can actually add people explicitly; they have to be in the system so that I know that they're kosher.
I'm pretty sure that I've got, yes, my personal account there. And then I can go and remove them, change them, anything else. You can see my SSH keys have been dropped in, and a list of projects. I didn't show this, actually, but within an Eclipse Che workspace you can have multiple projects, so you're not limited to a single project. In our parlance, a project basically has a one-to-one relationship with a repo, and so it's important, as you can imagine, to be able to have a single workspace with a single runtime able to support multiple repos' worth of source code. If I'm interested in the runtime, I can go in here and look at all the things running in my database machine, the size it is, all that good stuff, the different agents that are running. I can look at the environment variables which have been set. I can look at the dev machine, same deal. And I can get the full config as a JSON file that I can now pass to somebody else. So the other way I could share this is simply by taking this config, copying it, and sending it to somebody in an email or an IM, and they're going to get an identical configuration to what I have in every way, which is really good for quickly onboarding people or getting help from somebody else. All right, so that's the demo. Let's stop this and jump back to the slides for a little bit, because I want to talk a little bit about futures as well. So, as Diane mentioned and I alluded to at the beginning of the call, Codenvy engineers originally contributed Che to the Eclipse Foundation, and the employees of Codenvy are the leads and many of the committers of Che. Now, of course, Che today has over 100 contributors, and the committers are much broader than just Codenvy employees, but there's still a bulk of them there. In May 2017, so about a month or so ago, Red Hat actually acquired Codenvy, and we're very pleased and excited to say that a big part of the driver of that was the strength of the Eclipse Che project.
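To give a feel for why sharing-by-config works, here is a hypothetical sketch in Python of a workspace config being exported as JSON and re-imported. The field names, repo URL, and command lines are illustrative, not the exact Che JSON schema:

```python
import json

# Hypothetical exported workspace config; field names are illustrative,
# not the exact Che schema.
workspace_config = {
    "name": "os-commons",
    "projects": [
        {
            "name": "petclinic",
            "source": {"type": "git",
                       "location": "https://github.com/example/petclinic"},
        },
    ],
    "commands": [
        {"name": "build", "commandLine": "mvn clean install"},
        {"name": "show databases",
         "commandLine": "mysql -u root -e 'show databases;'"},
    ],
}

# Sharing the workspace is just sharing this text: whoever imports the
# JSON reconstructs an identical configuration, commands included.
exported = json.dumps(workspace_config, indent=2)
imported = json.loads(exported)
```

Since the config is plain text, it travels fine over email or IM, and the round trip is lossless.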
The great news for any Eclipse Che user is that, as you might expect, Red Hat will be open sourcing most of what was proprietary Codenvy into the Che project. That's important because the relationship between Che and Codenvy was a kind of upstream-downstream relationship, which is fairly typical. Codenvy took Che and then added a number of what we'd call enterprise features: LDAP integration, OAuth integrations, the ability to do multi-user and multi-tenancy, horizontal scalability across clustered nodes to allow for very, very large-scale dev teams, and integrations with third-party tool chains. So all that goodness that was in Codenvy, targeted at enterprises and bought by many, many enterprises, is now going to be pushed into the upstream Che project for everyone to consume as part of the open source. And we're super excited about that. It was something that, as Codenvy, we didn't have the resources to be able to do, but it's just always something that we kind of wished we could do. So big thanks to Red Hat for supporting us in that. And that makes Eclipse Che 6, which is in development now, I think possibly the most exciting Che release we'll ever have made. You're going to get multi-user support and authentication from Codenvy; user, team, and organization management from Codenvy; fine-grained permissions, security, and SSL from Codenvy. So all that stuff, and workspace idling, that a lot of people who used Che were like, ah, I wish I could get that stuff, because that's critical stuff for me and my team, but maybe I can't afford Codenvy or whatever have you. Now that's all going to be in Che 6. Also very, very exciting: Red Hat engineers for the last nine months, in fact, have been working on porting Eclipse Che, which was built on top of Docker and today relies very heavily on Docker, to use a new SPI which allows for Kubernetes and OpenShift as alternative machine implementations, as we call them. In other words, when I start a workspace today in Che, it always starts in Docker.
I don't have a choice; Docker is the orchestrator. But with Che 6, I will be able to select amongst orchestrators: do I want it to be part of a Kubernetes install, an OpenShift pod, or Docker? So I have a lot more flexibility, and this is going to make it, I think, a lot more appealing to a lot more organizations out there. We're also heavily revamping the IDE. You're going to see an IDE that is not just more powerful but simpler and more beautiful, and that's important to us. We've always taken the approach, and believe very strongly, that we didn't want the IDE within Che to be just like the Eclipse desktop IDE or just like the IntelliJ IDE. Although those are impressive tools and they're very powerful, we felt that they had tipped over into a level of complexity that was unnecessary for many people. We've always wanted the Eclipse Che IDE to remain streamlined and as simple as possible while still being a full-blown IDE, something powerful that can do refactoring, that can do debugging, that can do those tasks which you expect at the core of an IDE. So I think we're going to make another big advancement with that in Che 6. And the last part is that we're going to be expanding the language support via new Language Server Protocol integrations. Now, if you're not familiar with the Language Server Protocol, it's pretty exciting. It's a protocol standard that was developed by Microsoft, Red Hat, and Codenvy, and it really flattens the way that language servers have been done in the past. Rather than building a language server that works only with one IDE, it detaches those two concepts, and you get a language server which can be plugged into VS Code, or Eclipse Che, or the Eclipse desktop IDE, and on and on; more and more IDEs and editors are supporting the LSP going forward.
But it also means that folks like Eclipse Che, where we don't necessarily have the community contributions for everybody to just go and start building a brand new PHP or Python or C++ language server for Che, can now absorb those from wherever they pop up, in whatever community. So for example, early in the Che 5 release train we got a rather wonderful little surprise when Zend, who are kind of the preeminent experts in PHP, contributed a fantastic language server, two different debuggers, and a bunch of tutorials and docs to Eclipse Che, to make it a powerful tool for PHP. Just a wonderful addition, and it kind of happened seemingly overnight, because once you've got the language server there, plugging it in is not that hard. I think that's going to be a really exciting release. Now, overall our strategy is to make Eclipse Che into a complete workspace server for both individuals and teams. And that team aspect, I think, is what's going to become much more enjoyable in Che 6, and I think it will be the next big leap that Che makes. So Che is going to be capable of running either as a standalone workgroup server for groups of developers who really just want what Che has, but also, very exciting for us, it's going to be embedded into DevOps suites like Red Hat's OpenShift.io, where it plays a part in a much larger embedded tool chain and where it can now reach earlier, to help add context and value to the work item planning section, and can reach forward, to help with things in the CI/CD world. So let's talk briefly about OpenShift.io. This is kind of futures stuff, and I will caveat this by saying I'm quite new to the OpenShift.io team, but I'll tell you a little bit about what I'm excited about. So this is not necessarily official positioning or anything like that; it's just how I think of why OpenShift.io is exciting to me, because I believe that every development project needs to be managed, developed, tested, and deployed with professional tools.
This is what we saw when we were building Codenvy the company: it took a lot of effort, not just once but in every sprint, for us to build out, maintain, and streamline the tool chain that took us from issue planning through editing, through continuous integration, through deployment for our SaaS. So, similar to how we looked at the development problem and said, why should an individual developer need to configure their laptop just to experiment with or contribute to an open source project, we looked at this and said, wow, Red Hat has really taken that vision and made it even bigger. Because they're now saying, why should a development team need to go and configure a whole host of DevOps tools and then keep those tools integrated and keep those tools optimized? What would happen if that whole tool set could be brought together, pre-integrated, and then, especially, if an analytics engine could be attached to it, so that it could begin to learn how the whole process, and the code itself, could be made better? For example, by alerting developers that a new package they've added includes an LGPL license, or letting development teams know that a package they're using in their app is not used by very many people in the community, and that a more popular package which seems to do something very similar is XYZ. That gives the development team an opportunity to evaluate that choice, to consciously choose to use something unusual and different, maybe because it fits their needs better. Or did we just happen upon a kind of narrow little path in the woods, not realizing that there was a superhighway right beside us? Maybe we want to jump back onto that thing which has larger community support. So I'm very, very excited about this.
As you can see, Eclipse Che is going to play an important part in OpenShift.io as kind of the coding engine, the IDE engine, within this, but fabric8, Jenkins, a brand-new issue management system, all this intelligence and machine learning, plus of course OpenShift itself, are the host of tools that surround it and, I think, make it even more exciting. All right, the last couple of things I wanted to talk about bring this back down to earth a little bit. What I'm showing you here is the impact that using these containerized and shareable developer workspaces had on Codenvy's dev team. We did this test for six months: three months before the team made the change and three months after. And this data and testing came from a couple of years ago at this point. But what we saw was pretty impressive. Obviously administrative tasks didn't change, but we did reduce the amount of time that was spent building environments, waiting for tests to execute, all that kind of stuff. So the team had a lot more time for coding. Now, that made a pretty substantial impact on our releases. We went from one product to three products. So you would think that by expanding the number of products and packages we were delivering to market, with the same size dev team I should add, the number of commits per day per engineer would drop and it would take longer to get a commit into production because of that greater complexity. However, thanks to optimizations we were able to realize through these containerized workspaces, the number of commits per day actually increased, because developers could now flip from branch to branch, project to project, repo to repo much more quickly. And that helped significantly reduce the amount of time it took to get a commit out into production. This has, of course, since dropped even more from 5.2 to today. And so I'll wrap up there. Thanks very much, folks. I'm really happy to answer some questions now.
Well, there's a bunch of questions in the chat, so you can pop over there. Veer had a couple of good ones, and I'm wondering if I can get Veer to unmute himself and ask directly. Yeah, this is Veer. Thanks, Diane. So the first one: I see that we are creating containers behind the scenes, right? So when you deploy, or when you're trying to build, a multi-tiered application, each component becomes a container and it is running. So it is creating a containerized workspace. Now, part of this question is already answered by Peter. My initial question was, are we actually saving the container itself, or the container image, and then passing that container image around? Or are we passing the code from that container? What is the intent there? And I think Peter said that it is still Git-based. So based on that answer, my understanding is that we are using that container as a workspace, that's it; ultimately, the deliverable from this IDE is still code in Git. So it is still Git-based, and the code still resides in Git. We didn't wanna change that; Git does its job very, very well. Now, when you talk about passing a project from a developer to a developer, though, there's a nuance there. If all I want to do is pass the code itself, then of course Git does that perfectly; I don't need anything else. What Eclipse Che allows me to do, though, is pass to that developer not just the code, but also the runtime needed in order to execute that code. And that can be extremely helpful. Now, to go back and answer your first question, Veer: the image itself is not passed from developer to developer. That would be unnecessarily heavy. What I pass from developer to developer is the recipe for that image, or really for the workspace. And that's what I'm showing now.
So this JSON file, you'll see, includes information about, for example, the memory for the database and for the dev machine, so it knows how much to provision in terms of size for those machines. It has the compose file built in. But it also, as I get down here, did I go past it? No, it's right here in the middle. It also, of course, passes in the Git location and the branch, and this can even include the commit ID itself if I wanna get very, very detailed about what I pass to somebody else. And that reduces the chance of misunderstanding. I can send somebody this file and I know that they're going to land in an environment that is set up the way my environment is set up, that they're gonna pull their code from the correct branch, and even the commit ID that I need in order to get my question answered from them, for example. So that gives me a lot of precision. Does that make sense? It does. To follow up on the same exact thing that you said: I do understand the reason for passing this JSON instead of an image, which makes sense from developer to developer. But if I could get an image, for example, outside this, then can I not take that image and run it directly on my box, and use that image directly? Like, at the end of the day, it is a Docker container. Can I not? Well, absolutely. Oh yeah, sorry, I didn't mean to imply that you couldn't do that. You absolutely can. It's just that the image itself is quite large, and so generally people don't want to move it around. Remember that for a team, and here I'm gonna be talking a little bit more about Eclipse Che 6 than Eclipse Che as it exists today, the concept would be that you would run a centralized Eclipse Che cloud, let's call it, for your team. And so the execution of those containers would happen on a centralized set of resources. As a result, you know, I don't need to move an image back down to my machine.
I can simply move the pointer around, and people can execute those images on that central resource. Now, that still gives me the option to pull the image down to my local machine and run it locally if I want to do that as well. Great, great. I had another question, but if someone else wants to ask in between, that's also fine. I see there are some questions in between. So ask your question, Veer, because you're unmuted and it's timely. Okay. Can you talk a little bit more about the integration between a desktop IDE, like Eclipse or IntelliJ or Red Hat's JBDS, which is nothing but Eclipse, and Eclipse Che? And I have two parts to that question. First, how does it work? Second, what is the use case for it? I mean, is there a reason why someone would want to do it? Yeah, so some people just don't want to use a cloud-based IDE, and I think it's really that simple. I like it, and obviously a lot of people like it, but not everyone. Naturally, if someone has been coding for 20 years with the same IDE, they've developed a kind of muscle memory with that IDE, and they may be much more efficient with that particular desktop IDE than they would be after changing to another one. In some cases, to be perfectly honest, and this should be no surprise to anybody, the Eclipse Che IDE, being five years old at best, has maybe 15 or 20 years less feature growth than something like the Eclipse desktop IDE. So naturally the Eclipse Che IDE doesn't do everything that the desktop IDE does. For those reasons, some people like to use their local IDE. In that case, we've got, as I said, a container which will perform a mount-and-sync using Unison, doing a differential transfer between the local files and the Codenvy container, so that you're not shipping the whole thing back and forth constantly, which would get quite slow. Okay, so how does that sync work?
So basically, you create an SSHFS connection between a folder on your local machine and the file system of the container that you're targeting within Codenvy, within Che, pardon me. There's an initial sync done by Unison to make sure that those two file locations look correct and are in sync. And then, as you're typing and changing things in your local desktop IDE, which changes files in your local folder, it sends those changes up to the Eclipse Che container. Again, only the differences, not the whole thing. So it can take a little bit of time on the first run as it does its initial sync, but it tends to be fairly responsive after that. And as I said, the use case for that is situations where somebody doesn't wanna give up their desktop IDE, but does want to be able to take advantage of greater resources or greater flexibility in running builds and executions in a containerized workspace, maybe on a shared server, in the cloud, in other locations where they can get more horsepower than on their laptop, for example. Yeah, perhaps it makes more sense when a team workspace comes into existence, right? Yeah, although we do have a number of customers, for example, with very large and demanding apps, and some things for data science and the like, that need a very large amount of horsepower and just don't run happily on a laptop. And so in those cases, the developer may want to use a desktop IDE to do the editing, but the actual execution of the jobs is much better suited to a much larger server in the cloud. Okay. All right, thank you. There's one other question from earlier on: Jack had asked, can you compare stacks versus factories versus workspaces? Great question, yes, and something I really should have done. Thank you for that question. So a workspace you can think of as the thing the developer interacts with and touches.
Each developer gets an instance of a workspace, and that's where they do their work. A stack you can think of a bit more like a template for the runtime components of a workspace. Remember that a workspace includes the runtime components, plus the source code, plus the IDE. The stack is really targeting that runtime aspect. So when I choose the Express stack, it's going to give me a certain Linux version, MongoDB, Bower, Node, Express, NPM; it's gonna put all that infrastructure and those packages and dependencies, which I would need to do Express development, into my workspace. It's still up to me to tell it where to get my actual source code, and I still have, of course, the option of using the Che IDE or a desktop IDE. So that's a stack. What people tend to do, for their particular project, unless it's fairly vanilla, is create a custom stack, one that is really appropriate for their particular project and includes just the tools and packages they need for that project to work. And I can share those stacks in the same way that I would share a workspace, so I could share them with the rest of my team or my community. Now, a factory is something a little bit different. What I can do, once I have a workspace, is say: I wanna now make this exact same workspace, so the runtime plus the location of the code plus the IDE, available to anyone as a one-click link. So I'm gonna call this OS Commons and I'm gonna create it. And just like that, I now have two links that I can use. Anyone clicking on either of these two links is going to get their own workspace, but it will be 100% identical to the one I created for this particular demo called OS Commons. The only real difference between these links is that one is kind of obfuscated and one is fairly open.
So if I'm working, let's say, in an enterprise where everybody knows everybody, it's useful to use the top one, because it indicates who the owner that created it is; if there's a problem with it, you can come and yell at me. If I'm doing this for a public project on GitHub, I'd probably use the second one, because it still needs to be maintained but I'm not sure I wanna expose to everybody who I am and all that information. Within this factory I can further customize things. I can tweak the recipe if I need to, although that's not very common. I can change the amount of RAM that's allocated, and you can see I have all the commands that are available within that workspace. So if I don't think people are gonna use one of them, maybe I'm the only one that likes to show databases, I can just delete it, or I can create a new one. And then I can also have it execute commands upon open, and that can be nice; again, it just simplifies things for people. So for example, within Codenvy, when we had a feature branch that people were working on, one of these factories would be created for the feature branch, and there would be two of them: a developer link and a reviewer link. The developer link would just open up, as you'd expect, into the code so the developer could start working. But the reviewer link would automatically build and run the app in whatever state it was in, kind of the state of the head of master. So the reviewer didn't need to interact very much with the actual IDE: they just clicked the link, the IDE opened, the latest code commits ran, they clicked the preview button, bang, they're in, they can look at the app. But it still allowed them the freedom to go back into the IDE and analyze the code if they needed to. So factories, in other words, are really good for sharing a particular workspace with a larger group.
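For readers following along without the demo on screen, the kind of JSON recipe Brad describes, with machine memory limits, an embedded compose file, a Git location and branch, workspace commands, and on-open actions, might look roughly like the sketch below. This is loosely modeled on the Che 5-era workspace and factory format; the field names are reconstructed from memory, and every value here (repo URL, branch name, memory sizes, command lines) is hypothetical rather than taken from the demo.

```json
{
  "v": "4.0",
  "name": "os-commons",
  "workspace": {
    "environments": {
      "default": {
        "recipe": {
          "type": "compose",
          "contentType": "application/x-yaml",
          "content": "services:\n  dev-machine:\n    image: eclipse/node\n  db:\n    image: mongo\n"
        },
        "machines": {
          "dev-machine": { "attributes": { "memoryLimitBytes": "2147483648" } },
          "db":          { "attributes": { "memoryLimitBytes": "1073741824" } }
        }
      }
    },
    "projects": [
      {
        "name": "my-app",
        "source": {
          "type": "git",
          "location": "https://github.com/example/my-app.git",
          "parameters": { "branch": "feature-x" }
        }
      }
    ],
    "commands": [
      {
        "name": "run",
        "type": "custom",
        "commandLine": "cd /projects/my-app && npm install && npm start"
      }
    ]
  },
  "ide": {
    "onAppLoaded": {
      "actions": [ { "id": "runCommand", "properties": { "name": "run" } } ]
    }
  }
}
```

In this sketch, the developer and reviewer links Brad mentions would simply be two factories sharing the same workspace definition but differing in their "ide" actions: the reviewer variant would trigger the build-and-run command on open, while the developer variant would open straight into the code.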
Now, the last thing I'll mention, which isn't in the menu because it's done more at the command line, is something we call Chedir. Chedir is kind of cool. With Chedir, you drop a set of config files into the root of your repo, and then somebody can run Chedir up on their local machine, and it will not just clone the repo but actually pull down all the containers needed in order to run it locally. So if you do prefer running Eclipse Che on your local machine, in your own little sandbox, then Chedir is very similar to the factory concept but brings that to your local machine. Well, that brings us almost to the end of our hour here. And I think there have been a couple of other comments and questions about OpenShift.io, and I just wanted to reiterate what Peter said in the chat: OpenShift.io is in heavy development, in alpha, and we've all been a little overwhelmed by the interest. So if you're interested in participating, go to OpenShift.io and sign up, and then go see if you can find your sales rep from Red Hat, and they can nominate you for some early access. But just be aware that it is emerging tech. We're really looking for feedback from our customers and potential customers and people that have the time to give it a test run. So please do give us feedback. And that brings me to my final point and question for you, Brad: how do people get in touch with the project, and what's the best way for them to reach out to you and to the folks working on Eclipse Che? Maybe we can end on that note and a slide, somewhere in GitHub? Absolutely. So the Eclipse Che project is hosted on GitHub; you can go to eclipse/che. Everyone hangs out there; it's really the locus of everything we do with Che. So if you have questions, you can file them as GitHub issues and people will jump all over them. You can check out pull requests, of course. But in the wiki we also have things like the roadmap.
We have weekly milestone and planning meetings, which are held on Blue Jeans, so you can just click the link and join every Tuesday at 7 a.m. Pacific. And you can look at how to contribute, our coding guidelines, and all that good stuff. We're always looking for new input, new thoughts, new contributions. If you have other questions, you can also email me at bmicklea at redhat.com and I'll be happy to respond as best I can. Well, thank you. And I think when we have the next release out, maybe we'll get another go-round, and as it surfaces more in OpenShift.io, I'm sure I'll have the OpenShift.io folks come and give a more full-blown talk on that whole workflow as well. So thank you very much, Brad, for taking the time, and thanks to everybody who's joined us today. Again, I'll reiterate: tomorrow is the Kubernetes 1.7 release update from Clayton Coleman, same time. Check the calendar at commons.openshift.org/events for other upcoming things. We've got a really full summer of briefings, because things are supposed to slow down in the summer and they just are not. So please do keep an eye out for new topics, and if there are topics you wanna dive deeper into, please let us know and reach out to us at OpenShift Commons on our Slack channel, or hit us up via Twitter at OpenShift Commons. So thanks again, and this recording will be posted, probably in a day or so, on the YouTube channel, and then in a blog post with reference links back to some of the Eclipse project material on the OpenShift blog. That's all we have today. Thanks again, Brad, for all your efforts to get this done. Thank you, Diane, it was a pleasure taking part. We've got a great meeting going on here; I love these topics, they're all really interesting. All right, take care.