Oh, okay. All right. So, as far as being a testing platform, one of the first things you do when you want to run Arquillian is to select some type of container that you're going to run against. Some examples could be a Tomcat instance, a WildFly instance, JBoss EAP, GlassFish, or whatever — that's the target environment that you want to run this test or these tests against. Or it could be a remote thing like an OpenShift instance. When Arquillian starts up, it will connect and communicate with that container. Based on how you set up the test, it will then deploy the deployment that you have defined — the target, the scope of this test, essentially the application that you want to test — to the target environment, along with a bunch of Arquillian things. And then it can move the whole test execution over into that runtime, instead of only having it run inside the IDE. Now you have the advantage of actually being inside the runtime that you want to test. That gives you access to all the things that runtime provides: security managers, entity managers, data sources, JMS connections, and all of those types of standard things. When Arquillian is done running the in-container test, it moves the test results back over to the client side, and everything looks as if you ran a normal JUnit test. And then of course it cleans up the environment. So that's the basics of what Arquillian, as we know it from the Java EE space, does. And this is what the JUnit version of the test would look like. You have the JUnit extension point that hands the execution process over to the Arquillian runner. At that point Arquillian has control; it can do essentially whatever it wants.
One of the first things it does is look for the @Deployment-annotated method, relying on a library called ShrinkWrap to bundle up whatever you want to deploy to the application server. You can add classes, packages, files, full JARs; you can resolve them from Maven repositories and so on. Then it attempts to reuse the component models that exist in the environment you're targeting. In this case we're using CDI, so you can inject back the live bean that is actually deployed and running in the server. It's not a mock, it's not a proxy of some sort — it is the actual thing that the server operates on. And then you come to the test method, which executes as any normal JUnit test, essentially: you get the values and you assert on the state. That result is then handed back down to the client. That was the in-container version; this is the client-side version. There are kind of two points of view: there's the interest of being inside the environment that you want to test, and there's the point of being on the outside of the environment, because you have remote entry points like a JAX-RS service, a remote EJB, HTML pages, or whatever. The only difference from Arquillian's point of view is that you set the deployment's testable attribute to false. Then Arquillian won't do anything besides deploying this for you, and you're up and running. Then you can, as we're doing here, use the REST Assured library to communicate with one of our REST services — just set it up. For those who have used Arquillian in the past with a bunch of the extensions, we know that it's a bit complicated to know what works together and what doesn't. So we started something called the Arquillian Universe BOM, which includes all of the different extensions and tries to align the different versions so that everything should work together.
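The in-container flow just described can be sketched as a minimal test class. This is a hedged sketch, not code from the talk: the bean name GreetingService and the asserted value are illustrative, and it assumes the standard Arquillian JUnit runner, ShrinkWrap, and CDI APIs.

```java
// Minimal sketch of an in-container Arquillian test.
// GreetingService is a hypothetical CDI bean, not from the talk.
@RunWith(Arquillian.class) // hand execution over to the Arquillian runner
public class GreetingServiceTest {

    @Deployment // Arquillian looks for this method to build the deployment
    public static JavaArchive deployment() {
        // ShrinkWrap bundles classes/packages/files into an archive
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(GreetingService.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject // the live CDI bean from inside the server -- not a mock
    GreetingService service;

    @Test
    public void shouldGreet() {
        // runs *inside* the container, with full access to the runtime
        Assert.assertEquals("Hello", service.greet());
    }
}
```

For the client-side mode mentioned above, the same class would mark the deployment `@Deployment(testable = false)` and call the deployed REST endpoint from outside, for instance with REST Assured.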
So instead of relying on the Arquillian BOM, you can now rely on the Arquillian Universe BOM, and you just follow the same pattern: the group is org.arquillian.universe, then the name of the module or extension. In this case we want the Arquillian JUnit integration, but it could be the TestNG integration, the Cucumber integration, the Spock integration, or any of the other test frameworks that we support. And in this case we're using the Chameleon container, which is essentially a proxy container that allows you to call all the other containers. So there's one dependency to include all the different JBoss versions, the different GlassFish versions, and so on. So let's see that up and running. That's the basics of Arquillian. This is the same test that we were seeing on the slides, the in-container version. As far as how it's configured, we can see in the POM here that it's just relying on the Universe BOM — it's a bit small maybe — and the Chameleon driver. And it's defined in the Arquillian configuration file that this should run on a WildFly 9 managed server. One of the features of Chameleon is that as long as you haven't defined where this server is, it will actually download and extract it for you, which makes it easy to demo and to run from CI systems and so on. Since this is just a JUnit test, the only thing we need to do is right-click and say Run As JUnit. It's starting up the server, deploying the application, and we've got a green bar. You could look through the log and see what's happening, but it's not that interesting. Do people understand the concept of Arquillian so far? Any questions around that? Maybe I should ask the other way: is there anyone who understands it at all? Two, three, okay. All right, I can explain more. So what Arquillian does in the background here is kind of the first steps that we looked at.
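The BOM-plus-module pattern described above might look like this in a POM. This is a sketch under assumptions: the exact artifact names and the version placeholder are illustrative, following the naming pattern from the talk rather than copied from it.

```xml
<!-- Sketch of the Universe BOM pattern; version is a placeholder. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.arquillian</groupId>
      <artifactId>arquillian-universe</artifactId>
      <version>${version.arquillian.universe}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- same pattern: org.arquillian.universe + the module/extension name -->
  <dependency>
    <groupId>org.arquillian.universe</groupId>
    <artifactId>arquillian-junit</artifactId>
    <type>pom</type>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.arquillian.universe</groupId>
    <artifactId>arquillian-chameleon</artifactId>
    <type>pom</type>
    <scope>test</scope>
  </dependency>
</dependencies>
```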
As far as you have defined Arquillian, it will look for the deployment. It will start the container you have configured — in this case the WildFly container — as part of the test lifecycle. Then it will deploy the deployment and move the whole execution of the test over into that container, so you're running inside that container. Now, that's the wrong test per se, but with this one we could actually, for the fun of it — let's see. Let's start up a WildFly server outside of Arquillian's control and then change the configuration to the remote adapter, so it will communicate with an existing server. And we can debug that server. When we then run the in-container test, we can set a breakpoint inside the test method and see where it actually executes. If we rerun it — see, it's not starting the server up anymore. Then we hit the breakpoint, and you can see the actual stack trace. Maybe a bit hard to see, but the call is coming in from the HTTP server on the WildFly side. So it's actually not executing in the IDE: the request is forwarded and executed in the remote environment, and then the result is passed back to the client. It will look and feel like it was executed as a local test, regardless of whether it's run from Eclipse, or even from Surefire, or whatever. So one of the principles of Arquillian in general is that the test case itself shouldn't have any information about where it is running; it should only define what it needs to run. That lets you create portable tests, in the sense that they can swap between different environments. You can have the same test case or test suite run on both GlassFish and WildFly and see that they work the same in both locations. And it can execute from wherever you want it to — the IDE, or maybe Surefire, or Gradle, or whatnot.
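Switching from the managed container to a self-started server, as done in the demo above, is just a configuration change. A sketch of that swap in arquillian.xml, assuming Chameleon's `chameleonTarget` property (the WildFly version string is illustrative):

```xml
<!-- Sketch: swap managed vs remote without touching the test itself. -->
<arquillian xmlns="http://jboss.org/schema/arquillian">
  <container qualifier="chameleon" default="true">
    <configuration>
      <!-- managed: Arquillian starts/stops the server as part of the lifecycle -->
      <property name="chameleonTarget">wildfly:9.0.0.Final:managed</property>
      <!-- remote: attach to a server you started (and can debug) yourself -->
      <!-- <property name="chameleonTarget">wildfly:9.0.0.Final:remote</property> -->
    </configuration>
  </container>
</arquillian>
```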
One of the big problems we were trying to solve was the big integration test suite kind of thing, where you end up writing a test, going to Maven, having to build the whole thing — which can take an hour — and then you have one failing test. We wanted to be able to stay in the IDE and just code the test and run that single test as if it were a normal unit test. We achieved that, somewhat, through the use of the ShrinkWrap libraries, which can use the incremental compilation that happens in the IDE and just package up the stuff instead of having to go through the whole Maven build. And we call it a testing platform instead of a testing library because that's really what it does: it manages the life cycle of the different things you put on top of it. It manages the life cycle of Selenium, for instance, through the Drone extension. It handles the life cycle of containers with the container extension. It can be used from multiple different testing frameworks, or as a standalone thing outside of a testing environment. And it's built to be flexible and extensible, in the sense that we don't know which libraries or component models we'll need to support — we know which ones are here today, but we don't necessarily know which ones come tomorrow. So it's built around extensions and enrichers, so the platform can stay the same and you can evolve it along as a modular system. We're going to look at one of those extensions today, Arquillian Cube, which is the extension that operates and controls the life cycle of Docker containers — the basis of this talk, essentially. So Cube, as I said already, manages the life cycle of a Docker container, or multiple containers.
And it has some magic, so you can either use the Docker containers as-is, or you can map a container adapter from the Java EE world on top of a Docker instance, and Cube will handle all the IP addresses and port mappings and all that stuff for you. You just say that on that Cube there is a Java EE server, and Cube figures out the rest. And of course you can operate on multiple containers at once and orchestrate how they run together, how they're started, and when you want to stop them, etc. It's not just built around the idea of an application server inside a Docker image; it can be used with any library — anything that opens a port, essentially, in any language. Dropwizard, Spring Boot, Netty, Node, Vert.x, Bash scripts — whatever you want to make, to Cube it's just a Docker container. So we're going to look at the first test. Part of the premise of this talk is to see how we can use Docker to enhance our current testing techniques, as well as how we can test Docker stuff in general. The first test here is a simple WildFly-based Docker image that is custom made and opens up all the management ports and so on. So we're going to use Docker as — I don't know, it's not a pre-configured image, but we're going to change its state. It just starts up a WildFly server, and we're going to deploy into it as we were doing in the other example, but this time we're going to do it through Docker instead. In the test case itself, we're also going to use the extension called the Arquillian Persistence Extension, which helps you deal with database data, essentially. You can say: before this test method starts, insert this YAML file with data and make sure the state of the database is correct. And then we're going to execute these different JPA queries against it and see that the results are as expected.
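The seed-then-query pattern described above could look roughly like this. A hedged sketch: the dataset paths, entity, and repository names are made up for illustration; the `@UsingDataSet` and `@ShouldMatchDataSet` annotations are the Persistence Extension's way of seeding and verifying database state around a test method.

```java
// Sketch of an Arquillian Persistence Extension test; names are illustrative.
@RunWith(Arquillian.class)
public class UserRepositoryTest {

    @Deployment
    public static WebArchive deployment() {
        return ShrinkWrap.create(WebArchive.class)
                .addClasses(User.class, UserRepository.class)
                .addAsResource("META-INF/persistence.xml");
    }

    @Inject
    UserRepository repository;

    @Test
    @UsingDataSet("datasets/users.yml")            // seed known data before the test
    @ShouldMatchDataSet("datasets/expected-users.yml") // verify DB state afterwards
    public void shouldFindSeededUsers() {
        // JPA queries now run against a known, seeded database state
        Assert.assertFalse(repository.findAll().isEmpty());
    }
}
```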
The only thing you have to do, when you're using the Universe BOM, to get Docker in there is to add the Arquillian Cube Docker dependency. And for persistence it's arquillian-persistence — there's a pattern in how they're named — and then arquillian-chameleon. So this is the arquillian.xml file when you're using Docker — or sorry, Cube. There's an extension section for Docker. In the minimal case there are a bunch of rules: if you don't define anything, it will try to figure out where it might run these things. If you're on a Linux box, it will look for the Linux Docker socket; if there's a Docker Machine up and running, it will check whether there's only one running, and that kind of thing. It tries to figure out where it should run. But in this case, we're telling it to run on the Docker Machine called dev. And then there's a definition of the image. There's an image called wildfly; we're going to use the Dockerfile located within our project structure to build this image as part of the test run. Then there are a couple of port bindings — fairly standard Docker stuff. Due to how WildFly works with its authentication mechanism and so on, you have to set up a management user as long as you're not running on the same host; that's why these two properties are there. As for the Chameleon configuration, we're just telling it to use a WildFly 9 remote adapter, and the combination of Cube and the Docker integration will then set the IP address it's going to use, look at the port bindings — for instance, that you've remapped the default 8080 port to 8081 — and update all of that configuration for you. So let's see how that looks. Right — as far as the JPA part of this test goes, it's going to run against the default installed... what is it called? The default example database, what is it called? H2, thank you. H2, and nothing else.
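A sketch of what that arquillian.xml might look like. This is an approximation, not the file from the demo: the property names follow Cube's extension-configuration style, but the image name, Dockerfile location, port numbers, and credentials are illustrative placeholders.

```xml
<!-- Sketch of an arquillian.xml for Cube + Chameleon; values are illustrative. -->
<arquillian xmlns="http://jboss.org/schema/arquillian">
  <extension qualifier="docker">
    <!-- run against the Docker Machine called "dev" -->
    <property name="machineName">dev</property>
    <property name="definitionFormat">CUBE</property>
    <property name="dockerContainers">
      wildfly:
        buildImage:
          dockerfileLocation: src/test/resources/wildfly
          noCache: true
        portBindings: [8081->8080/tcp, 9991->9990/tcp]
    </property>
  </extension>

  <container qualifier="wildfly">
    <configuration>
      <!-- remote adapter: Cube starts the container, the adapter attaches -->
      <property name="chameleonTarget">wildfly:9.0.0.Final:remote</property>
      <!-- management user required when not on the same host -->
      <property name="username">admin</property>
      <property name="password">admin-password</property>
    </configuration>
  </container>
</arquillian>
```

Cube then rewrites the container adapter's host and port configuration for you, based on the Docker Machine's IP and the remapped port bindings.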
So it's all internal, all inside the same image. The test case itself doesn't really... Is that the right one? So, in this case, it's the same as before: we just do Run As JUnit. And now, if I find a little window here, you should see — well, it's unclear now, but — now we're inside the dev Docker Machine, and we can see that the WildFly image has been started up. It's been up for nine seconds, and there it's starting to run the test. And we should see the Docker image go away and — boom, gone. So that's the most simple version of Cube: starting up a container from some existing Docker image, building the image on startup to match your... I can show that as well. We defined here, in the arquillian.xml, a pointer to our local folder that has the Dockerfile and what it needs — essentially just setting the password and exposing the ports. Yes, absolutely. The format that you saw here was our own, essentially. It's loosely based on Fig and on the variables that the Docker service itself takes in, essentially. But we do also support the Compose format; the Compose format came after we started this thing, which is why it's not the default option at this time. Can the test run without these port bindings? Well, technically you could — it would only work on localhost, essentially, because on a normal remote machine you wouldn't see the container's IP at all, most likely. But running locally you could do that; you could configure it to run directly against the container without going through the host — that's possible, I believe. So I'm going to look a bit at the orchestration part of it, meaning several dependent Docker containers that need to be started, some of them in a different lifecycle than the one the test container starts in. And you can compose them based on different templates that extend each other.
As far as the JPA part of this goes, this time it's going to be the same test as we saw before, but configured to run against a MySQL instance instead, which means two different Docker containers. The configuration — I mean, the basics are still the same. There is now a property we've exposed called autoStartContainers, where you can put an expression saying which of the defined containers here — either wildfly or mysql:latest — you want to start up before anything else happens. And then we're adding a link so that the containers can see each other by name. Based on that property saying which one to auto-start, you can have multiple different MySQL images, for instance, that share a link. It could be MySQL 5, it could be MySQL 6, it could be Postgres. Just by swapping which one you start first, essentially, you define which link the WildFly server will see and which database you'll be running the test against. That allows you to fairly easily fan the same test out over multiple different database servers and so on, which is an interesting thing. So, right, to your point, I guess: that format is the Arquillian Cube format, loosely based on Fig and some other options, but it looks fairly similar to the Compose format. We also support the default Docker Compose format. So this would be the version of the same configuration, but done in the Docker Compose format instead, and you just define that you want the definition format to be Compose. So, let's look at that. Again, this is the same test we saw before; we can just run it. We should start to see the MySQL container starting, WildFly starting — green bar.
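The two-container setup with a link might be expressed like this in the Compose format mentioned above. A sketch under assumptions: service names, the build path, and the link alias are illustrative, in the v1 Compose syntax of that era.

```yaml
# Illustrative docker-compose.yml for the two-container setup:
# WildFly linked to MySQL; swapping the linked service swaps the database.
wildfly:
  build: src/test/resources/wildfly
  ports:
    - "8081:8080"
  links:
    - mysql:database   # the app sees the database under the alias "database"

mysql:
  image: mysql:latest
  environment:
    - MYSQL_ROOT_PASSWORD=secret
    - MYSQL_DATABASE=test
```

In arquillian.xml you would then point the docker extension at this file and set the definition format to Compose, with `autoStartContainers` (or the link graph itself) controlling startup order.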
That was with two containers — splitting up the same example as before, but now over two containers, with Cube dealing with both of them, and it should have stopped both of them as well. Any questions around that? Yeah. Well, you could — let's see — in this file, instead of this property, you can say mysql, for instance, and then wildfly. But if you just leave it to Arquillian Cube, when it tries to determine the order it will look at the links and so on as well. It will start things in parallel if it can, but if it needs one to be started before the other, then it will make sure that MySQL is up and running before the WildFly one starts, for instance — basically, based on which containers link to which, it will figure it out. Okay, that was orchestration. Containerless. So containerless is what an Arquillian container is, but for Docker: you can deploy Docker images, you can deploy templates, you can deploy into a Docker host, essentially. And that's where all the other alternative languages and servers come in. So you define that you want to add the Cube containerless extension. This is our JavaScript application — it just has a simple REST API, nothing too fancy. And in the same fashion we have the deployment method, but in this case we're actually deploying the Docker definition instead of a Java application deployment to an app server. Beyond that, we have the same testing abilities further down. So there's a Dockerfile template instead of a Dockerfile, and I think the only variable we support for now is the deployable filename, which is the tar file that's the output of that deployment method — so you can dynamically build up files to be added and automatically built as part of the test run.
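A Dockerfile template for the Node.js case described above might look like the sketch below. The `${deployableFilename}` variable name is an assumption based on the talk's description of "the deployable filename", and the base image and ports are placeholders.

```dockerfile
# Sketch of a containerless Dockerfile template for the Node.js app.
# ${deployableFilename} is assumed to be the tar produced by the
# @Deployment method, substituted by Cube at build time.
FROM node:0.12

ADD ${deployableFilename} /app
WORKDIR /app

RUN npm install
EXPOSE 8080
CMD ["npm", "start"]
```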
So now you can sit and hack in the IDE, make changes to the JavaScript application, start up the test case, and the image will be built with the JavaScript files from the IDE and then started up. And that's essentially the same as far as configuration goes, except that now you have a containerless configuration instead of the Chameleon one we saw before, and the Cube format is the same. Just to see it run as well. There's the npm install... we're done already. Oh, there's an npm start. So that's a pure JavaScript application, as opposed to a Java app server or anything like that. It could be a Python application, it could be whatever. Running a bit short on time — what, two minutes or something? Oh, seven minutes, okay. So I'm going to drop the last demo and show something that's brand new — so new that it doesn't actually work yet. One of the advantages when you have a containerized system and you're controlling all the servers — all the different Docker containers — is that you can also start to fiddle with the network, and you can do all of these kinds of magical things that you really haven't been able to test very easily unless you have some kind of VM system. So the new extension to Cube is something called Cube Q. Anyone who knows Q from Star Trek knows that it's the entity that has control over all space and time and everything, right? That's essentially what Q does. Q intercepts your normal Docker composition setup and inserts proxies in every possible place. So where you would have been reaching your container directly, you're actually communicating with the proxy, which then forwards to your container; and when that container talks to another server, there's a proxy in between as well. So now you have control over all the different endpoints, and that allows you to do things like this, for instance.
You can tell Q: while this block of code executes, make sure that any communication with server one — which is our container — on port 85 will time out within five seconds. And then you can see: did my database driver actually handle that correctly? Did my REST service reconnect, and so on. So we can run this now just to see how that looks. It still just runs as a normal test case. It's a bit small probably, but you can see the original configuration on top: we have two servers, one communicating with the other. And then the proxy comes in and overrides all the communication between them to go through itself, and you can programmatically control the communication flow between them. So we got a connection-refused exception, because we told it to do essentially that. But of course you have other options in Q as well. On a connection, you can set the bandwidth, you can tell it to just be down, you can set the latency, you can slice up all the packets into much smaller bits, and you can have a very slow close, for instance — either on the upstream or the downstream connection. So that gives you an extra level of control. So what's coming? The Kubernetes and OpenShift 3 support we have, more or less, in the latest alpha releases; more work will be put into those, and also around CoreOS and Mesos. So Cube is essentially the Arquillian abstraction that deals with anything Docker-y and the control over those things. If the next step for you is figuring out what Arquillian is and becoming a part of Arquillian, go to the arquillian.org website, which has a lot of guides, etc. Or you can join the discussion on the Arquillian discussion forum. Any questions? Anyone? Yes, that's me. What exactly? Well, as far as the integration there is now: you can use the containerless container as before; you can deploy a Docker image that you're creating locally.
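A rough sketch of what such a Q test might read like. Since the extension was brand new (and, as said above, not fully working yet) at the time, the API names here — the injected NetworkChaos resource and the on/using/exec chain — are assumptions for illustration, not a confirmed API, and the host/port values are placeholders.

```java
// Hypothetical sketch of a Cube Q network-fault test; API names are assumed.
@ArquillianResource
NetworkChaos networkChaos; // handle to the proxies Q inserted between containers

@Test
public void shouldSurviveTimeout() throws Exception {
    // while the block runs, traffic to server1:8080 times out after 5s
    networkChaos.on("server1", 8080)
        .using(timeoutInMillis(5000))
        .exec(() -> {
            // call the service through the proxy and assert that the
            // client/driver handles the timeout (retry, reconnect, ...)
        });
}
```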
Push that to the OpenShift instance, which will build it and start it up. That's one part. And the other is to start and control pods — whichever pods you want — and essentially do the same thing as you saw here. That already works, to some extent, in OpenShift 3 now. OpenShift 3 will also add a bunch more stuff on top, so you'll have more control over communicating through services, setting up routes, and all those kinds of features as well. But the basics — starting up an image on OpenShift, starting up multiple images on OpenShift, doing the orchestration parts of it, and also building something in OpenShift — are functioning as of Alpha 6. Oh, yes, Swarm. Swarm, right. Not currently — we haven't done anything against Swarm yet, but that's something to explore. Then we're out of time. So that was me. Thank you for coming and listening.