So, welcome everybody to this OpenShift Commons briefing. We are really pleased to have with us Ivan Dwyer from Iron.io, and we're going to talk about running and handling asynchronous workloads in OpenShift today. We're very pleased to have one of our OpenShift Commons members, one of the newer ones and a long-time partner with OpenShift, doing this session. So I'm going to let Ivan take it away and introduce himself, and we'll have Q&A after the presentation and the short demo are done. The recording will be available afterwards.

Excellent. Well, thank you, Diane, and thank you to everyone who's on the line joining this session, this OpenShift Commons briefing on handling asynchronous workloads in OpenShift with Iron.io. What that really means is a fairly new concept of event-driven computing in this modern cloud era. I'll introduce myself real quick. My name is Ivan Dwyer. I head up the business development unit at Iron.io and primarily work with our partners in both the cloud technologies and developer services ecosystems to build out really meaningful integrations that solve complex business problems across a wide range of industries. Red Hat over the years has been a great partner to us as a growing company, and it's really exciting to watch and be a part of the OpenShift developments and where things have been going: the V3 release, the Red Hat Summit a few weeks ago. It's all very exciting stuff and we're happy to be part of it. So, we have a lot to cover today, so let's jump right in. I always like to frame the conversation first, so I'll briefly talk about the modern cloud and what that means for developers. Then we'll get into this event-driven computing pattern, which is really the heart of the session. We'll talk quickly about what Iron.io does and where we fit in this world.
I'll give an actual live demo, and then we'll talk about how Iron.io and OpenShift are integrated, both in the public cloud and on-prem. Now, it really is just an exciting time to be part of this ecosystem, and an even more exciting time to be a developer; companies of all kinds really recognize the importance of developers to their business and provide the support and resources to make great things happen. With this, there are just endless possibilities for innovation, and the cloud has really come a long way to make that possible. Let's take a quick look at our history, and we don't have to go back that far to do so. In the quote-unquote beginning, we used to have to deal with racks of servers. When we needed capacity, we added more servers, always involving IT. Our applications were packaged as a single entity and had to scale as such. Deployments were done as major releases only, and those happened very infrequently due to the long testing cycles, having to reach milestones, the waterfall model. And as it relates to this topic, any handling of asynchronous workloads had to be built by hand; there were just no tools around it. All of this led to a lot of waste and inefficiency, but it was all we knew. And then the cloud came and changed everything. Infrastructure became virtualized. Applications were broken apart into more logical tiers. Software could be updated a little more easily. And in our world, some toolkits and middleware components came about to build these things out more effectively. All of these things were definitely a step in the right direction, but there was a lot more left to be desired. That's where we are with the modern cloud now. Containers have really taken over from VMs in many ways, not always, but in many ways.
Microservices, with all their buzziness, have become an extremely effective way to architect large-scale distributed applications. Software is continually updated behind the scenes. And these asynchronous workloads become API-driven, without the need for additional work, software definitions, or custom application development. This modern cloud era is where both Red Hat and Iron.io sit, which is why it's really exciting to be a partner with Red Hat in the OpenShift world. Now, this is the very familiar modern cloud stack. It was first born as a way to provide on-demand compute, storage, and networking resources so that applications and APIs could be built and deployed easily. But we found that IaaS and SaaS were just not enough in most cases, given the complexity of managing and configuring applications, deployments, services, and so forth. This is really where the PaaS layer has come into play in more recent years, providing the glue between infrastructure and services, everything needed to power the application so that developers can really just be developers. Of course, Red Hat was very quick to recognize the need for this platform layer, and so OpenShift was born, providing a fully comprehensive environment for powering these applications in both public and private cloud platforms. With the battle-tested Red Hat Enterprise Linux under the hood, I really believe that OpenShift is unique in its offering and really has a great place in the ecosystem. And being able to extend it by working with partners and ISVs such as ourselves really gives developers everything they would need to innovate.
And that's what it's really all about: empowering developers so that they can do their jobs and innovate, so that they can focus on the business logic and end-user solutions and not have to worry about what's working under the hood. So, developers want abstraction. They don't want to have to worry about dealing with infrastructure. They want to be able to get up and running themselves, without involving IT. They want the freedom to use the languages and tools they're most familiar with and that fit the job. And of course, they want consistent environments across dev, test, staging, and production without having to do a ton of configuration and always checking where they are in the life cycle. But at the end of the day, and most importantly, developers want to write code. So that's really where we are now in this application world, and this modern cloud stack really provides developers everything needed to build, deploy, and scale applications. But as it relates to this conversation, what about the workloads that happen in the background? Do they follow the same model? Can we apply the same principles and technologies? Yes and no, but we'll dig in a bit further here. Just for a point of reference, GitHub once said that they were 50% background work, meaning that of everything that happens on GitHub, half is happening behind the scenes, away from the user interface. They've done a great job of building out a lot of this asynchronous functionality, and that's what we do here at Iron.io and what we want to talk about today. As we said, there's this new theme of event-driven computing. As is often the case, the patterns have been around for some time, but there are fresh principles to go along with the current landscape.
Given the proliferation of IoT applications and the rise in popularity of microservices, more and more workloads are happening asynchronously, triggered by some event. That event could be an actual real-world event. It could be an application. It could be a user on mobile. It could be a machine. There are all sorts of things happening, and it's really important for applications to be able to react accordingly. That's really what we're talking about here. But before I get into the actual pattern, I want to make a distinction between applications and tasks, so we really understand why there's a need for a different type of platform environment for this type of work. On the application side, they're hosted and they have to be highly available. Traffic is distributed by load balancers, and capacity is adjusted by adding and removing instances: the elastic promise of the cloud. On the other hand, tasks only need the runtime available for the duration of the process itself. They're not load balanced. Traffic is handled by queuing up jobs, and scaling is done by adding more concurrent processes within the same resources instead of having to scale instances up and down. Also, when looking at where to make the distinction within your own applications, it often comes down to what's real-time and user-facing versus what's asynchronous and in the background. That's really where we start to look at our applications as a collection of components, features, and processes where we can make those distinctions. So when you're building out these kinds of applications, it's all about identifying the right pieces: what's part of my core application, and what's an asynchronous task? And as we've seen microservices come about, a lot of these tasks actually follow the same principles. So when I talk about microservices, I tend to be talking more about micro-tasks, actually.
I think that's the logical evolution of where that pattern is going, and the characteristics of these processes are very similar to those of microservices. They should be independently developed and deployed and follow a single responsibility. They should be stateless, easily interchangeable, have minimal dependencies, and of course be asynchronous. From a functionality perspective, it's the processes that fall outside of the user response: calls to third-party services, things that are very long-running like encoding, any kind of transaction or billing process, anything that needs to scale out in bursts, doing something a million times at unpredictable moments. And then of course anything that's scheduled; if you look at cron jobs, those are basically just tasks that happen asynchronously. So when we talk to our customers, we do these reviews of their applications and identify the pieces that make sense in this task-centric model as opposed to an app-centric model. Getting a little more specific, we see a wide range of use cases here at Iron.io. A few of the more common occurrences: sending emails and notifications, individually and in bulk, really makes sense to do asynchronously, where you might be using SendGrid or Twilio as a service. You might have to send a million emails at once; those are very small processes, but they need to scale out in a very large and time-sensitive manner, so that's a good fit. We also see a lot of multimedia encoding. These can be very memory- or CPU-intensive tasks, and they can take a very long time if you're dealing with satellite imagery or even medical imaging.
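To make those micro-task principles concrete, here's a minimal sketch of a single-responsibility, stateless task in Ruby. The payload fields and the email "delivery" are hypothetical placeholders for illustration, not Iron.io's API:

```ruby
require 'json'

# A single-responsibility, stateless task: everything it needs
# arrives in the payload, and it keeps no state between runs.
def send_email_task(payload_json)
  payload = JSON.parse(payload_json)
  # A real worker would call a service like SendGrid here; we just
  # build the message so the sketch stays self-contained.
  message = "To: #{payload['to']} | Subject: #{payload['subject']}"
  { 'status' => 'sent', 'message' => message }
end

result = send_email_task('{"to":"dev@example.com","subject":"Hello"}')
puts result['status']
```

Because the task is stateless and self-contained, any runner can pick it up, and a hundred copies can run side by side without coordinating.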
These things require a lot of heavy lifting, and they require the right resources for the job itself, and they all happen asynchronously behind the scenes, because no one's waiting for a one-terabyte image to be encoded; it's outside of that user loop. Again, transaction processing. Billing is the classic example: when you buy something on Amazon, they give you the confirmation right away, but behind the scenes they might kick off a variety of jobs: processing the credit card, writing to a database, adjusting inventory. All of those things happen asynchronously, and they don't make you, as the user, wait for all of that before giving you the next step. So all of those things happening in the background fit this task model. Crawling the web is a good example of a scheduled job. Comparison sites might daily crawl a bunch of shopping sites and pull out the latest data. Those kinds of things happen behind the scenes and fit this asynchronous model. And then, as it relates to the general transfer and processing of data, we see a lot of this in the IoT world: passing data from source to destination, collecting data from sensors, delivering it to data warehouses and back-end systems, and doing any of the processing in between. Those are all asynchronous use cases. There might be some third-party services that you're connecting to; I mentioned SendGrid and Twilio as great examples. And then anything that's scheduled, like a daily email blast: you don't want to think about that as part of your application, it's just an asynchronous task. So we run into all of these. We see a lot more, but I would pick these as probably the most common that fit our platform, which we'll get into a bit. But thinking of it from a workflow perspective: how do all these things work together, and what does event-driven really mean?
Essentially it means responding and reacting to an event trigger automatically and then executing a process or chain of processes accordingly. Those triggers can happen in a variety of ways. Webhooks are a great example: when you update your repo in GitHub, it generates a webhook, and you can then send that into Slack and notify your entire team. Those webhooks are triggers that can kick off a variety of workflows. Callbacks are the same thing, just within your code. Then there's the direct API call: within your application, the task has an endpoint, so it's just a simple API call to trigger a workflow. When you're dealing with IoT devices, maybe the sensor hits an API endpoint when it triggers or captures an image. Stream processing is another interesting one: streams can kick off a continual workflow of processes, passing more and more data through as they go. Again, I mentioned transactions, which could just be an API call. And then the schedule: something that happens regularly. Now, these triggers kick off processes, but those processes generally need to end up somewhere. The destination might be a database or analytics system. It could be another API: you could use these event-driven workflows to build out the application APIs that you then extend to the developers in your ecosystem. So an IoT platform provider might expose an API and have all these event-driven workflows behind the scenes that actually generate the data backing that API. You could go directly to your app UI: mobile applications could have these workflows in the background, continually updating the front end or triggering a variety of notifications, like push notifications. And then you can send all this stuff to logs; there's so much data being generated these days.
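The trigger-to-task relationship above can be sketched with a tiny dispatcher: each incoming event type (a webhook, a sensor reading, a schedule tick) maps to a task that gets queued for asynchronous execution. This is generic Ruby for illustration, not Iron.io's actual API, and the event names are made up:

```ruby
# Map each (hypothetical) event type to the task it should kick off.
HANDLERS = {
  'github.push'    => ->(data) { "notify_slack(#{data['repo']})" },
  'sensor.reading' => ->(data) { "store_reading(#{data['value']})" },
  'schedule.daily' => ->(_)    { 'run_daily_crawl' }
}

TASK_QUEUE = Queue.new

# An event trigger just enqueues the matching task; execution
# happens asynchronously, away from the caller.
def handle_event(type, data)
  handler = HANDLERS.fetch(type) { ->(_) { 'unknown_event' } }
  TASK_QUEUE << handler.call(data)
end

handle_event('github.push', { 'repo' => 'myorg/myrepo' })
puts TASK_QUEUE.pop
```

The point of the indirection is that the caller returns immediately; whatever dequeues the task later decides where the result lands: a database, another API, the app UI, or logs.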
But one thing that's really important is the execution component, of course, and that's where this task-centric platform comes into play. AWS calls them Lambda functions; we call them workers; it's really just a matter of taste. Essentially, the pattern is what we just talked about: what is the single piece of code that does one job? And then the platform provider, and this is what we do, abstracts away all of the thinking around operations, infrastructure, and choreographing those tasks. We just run them. In this event-driven workflow, these tasks just need to be executed automatically; developers never have to worry about spinning up infrastructure to run them. That's really why it's such an exciting thing to be doing and why it's a little different from deploying and scaling applications, even in the platform-as-a-service layer. Now, another thing that's really important to keep in mind when building out these workflows is the importance of leveraging a message queue within the pipeline, from the triggers to the execution to the destinations. The queue acts both as a way to dispatch the workload and as a way to persist the task state. These workflows tend to cross various systems, they might even be crossing firewalls, so you're dealing with a lot of data in transit, and it's really important to keep that queued and persisted so you're not losing anything as part of the workflow. If one of the tasks fails or one of the result endpoints can't be reached, you still have that state persisted, so you're not losing data and you're not losing the execution path along the way. That's really, really important, and something we promote to all of our customers: make sure that everything is queued.
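That persistence point can be sketched as a simple retry loop: a task only leaves the queue once it succeeds, and on failure it is re-enqueued with an attempt counter, so a flaky endpoint doesn't lose data. This is a generic illustration of the idea, not IronMQ's actual semantics:

```ruby
# A task is only dropped once it succeeds; on failure it is
# re-enqueued with an attempt counter so state is never lost.
def process_with_retry(queue, max_attempts: 3)
  results = []
  until queue.empty?
    task = queue.shift
    begin
      results << task[:work].call
    rescue => e
      task[:attempts] += 1
      if task[:attempts] < max_attempts
        queue.push(task)                  # persist and retry later
      else
        results << "failed: #{e.message}" # surface for monitoring
      end
    end
  end
  results
end

flaky_calls = 0
queue = [
  { attempts: 0, work: -> { 'ok' } },
  { attempts: 0, work: -> { (flaky_calls += 1) < 2 ? raise('timeout') : 'recovered' } }
]
puts process_with_retry(queue).inspect
```

The "give up after N attempts" branch is what lets monitoring tools flag a task whose code is simply wrong, rather than retrying it forever.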
We're dealing with asynchronous work here, so the queue is a key piece of that, and we'll talk about how we solved it with our message queue. Okay, moving along. With these new patterns, event-driven workflows and tasks instead of applications, developers have a new set of goals, and it's very much in line with how we look at platform-as-a-service for applications. We want to build these highly scalable and reactive back-end systems, and they have to respond automatically. But we want to do all of this without having to manage infrastructure; developers don't want to deal with infrastructure. Because it's event-driven, we don't have a good idea of capacity; it could be very unpredictable. We don't want to worry about having enough VMs and servers up and running to handle the scale; we just want to know that we can run these processes without worrying about that. We want to be able to dispatch and distribute these workloads without having to write a ton of configuration scripts; it should just be done under the hood. What platform-as-a-service does for applications, we want a tool or platform that does for tasks as well. And we want to be able to collect, transform, and deliver all this data very seamlessly, having these pipelines and workflows just connect without a bunch of translations or proprietary formats. And then all of these components that handle task-centric workloads, we want integrated within our application platform; we don't want to maintain two different systems for our applications and our tasks. Now, of course, these goals introduce a whole new set of challenges, just like anything else with a new pattern.
I found a funny tweet when I was looking into microservices a little more. What we've found over the years, and this is key to what we do, is that building functionality for asynchronous concurrency is extremely complex. There are just a lot of moving parts, a lot of components, a lot of things to deal with that developers don't want to have to deal with. This is a challenge that we've taken on, and it's really key to what we aim to solve: there's a need for a task-centric platform to handle these workloads, and that task-centric platform needs to be very tightly integrated with the app-centric platform. That's really where we see OpenShift and Iron.io coming together to form a comprehensive developer-oriented platform. Okay, so where do we fit in this world, and what do we do in the context of event-driven computing and these asynchronous workloads? That's always been our focus: we build technology for powering asynchronous workloads, meant for distributed applications of all kinds: mobile, web, and IoT. We do this through a variety of services, including IronMQ and IronWorker. IronMQ is a message queue service, and IronWorker is a task-centric environment: it includes a scheduler, it includes a runtime, and it does all the choreographing under the hood. IronWorker has IronMQ built into it, acting as a task queue. So you can use them independently, or you can use them together as one complete task-centric environment. From a developer perspective, working with Iron.io is meant to be very simple, much like it is with OpenShift, so developers can really focus on writing code without the hassle of dealing with the rest. Our process, just like any development platform, starts with the build. You build your tasks.
You can build in any language; we have native SDKs for most every language, and you can even containerize your tasks with Docker. That's where we're moving: full native Docker support. Once you have your task code built, you upload it; think 'upload' instead of 'deploy'. You commit it to your repo, package it, and upload it to our environment, and that becomes what we call a code package. That code package is then ready to run at any given time. Your responsibility is just to set the event triggers. What is going to kick this thing off? Is it a schedule, or some webhook, or do I just want to run it on demand? What you don't have to worry about is spinning up any infrastructure to run it; it's simply available to run because it's been uploaded to us. And then it scales, and you don't really have to think about that either. All you have to do is set the concurrency level: I want task A to be able to run 100 times concurrently. We then distribute that workload accordingly, on demand, based on the incoming volume, and there's no provisioning needed on your end. So we see this as a very simple, task-centric development workflow, and it's really user friendly; we have a very easy-to-use REST API to do all of this. Now, in our world we have a number of concepts and key words that I thought were worth going over, because they may be a little different from an application platform environment. Workers are the task code, what we think of as a unit of containerized compute. Runners are basically the runtime agents that spin up containers and process the workloads. Stacks are the base language and library dependencies, packaged as stack images.
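To give a feel for what "just an API call" looks like, here's a sketch of building the JSON body a client might send to queue a task by its code package name. The field names (`tasks`, `code_name`, `payload`, `priority`) and the example endpoint follow Iron.io's v2-era REST docs as I recall them; treat them as assumptions and check the current API reference before relying on them:

```ruby
require 'json'

# Build the request body for queuing a task by code package name.
# Field names follow Iron.io's v2-era REST API; illustrative only.
def build_queue_request(code_name, payload, priority: 0)
  {
    'tasks' => [{
      'code_name' => code_name,
      'payload'   => JSON.generate(payload), # delivered to the worker as a string
      'priority'  => priority
    }]
  }
end

body = JSON.generate(build_queue_request('hello', { 'to' => 'dev@example.com' }))
puts body
# A real client would POST this body to an endpoint along the lines of
#   https://worker-aws-us-east-1.iron.io/2/projects/{project_id}/tasks
# with a project token in the Authorization header (assumed URL shape).
```

In practice the native SDKs wrap this call, so a developer just says "queue task `hello` with this payload" and never touches the HTTP layer.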
So if I want to run Ruby 2.1, I choose the Ruby 2.1 stack. Queues, again, are how we dispatch the workloads through our message queue service. Schedules are pretty straightforward, much like cron, but managed in the cloud so you don't actually have to maintain that yourself. Concurrency is how we deal with scale: the number of tasks run at the same time. This isn't parallel processing, it's concurrent processing, meaning I'm running 100 of the same job at the same time, but not necessarily in parallel. And then clusters; this is important when we get to deployment; they're the location environment for the runners that do the actual workload processing. Getting under the hood: as I mentioned, building this stuff out is fairly complex, and from the work we've done over the years and the feature set we've developed, we definitely know the challenges, but we also know what needs to be done, not just for the choreographing of task-based work but also from a management and security perspective. Starting at the top, we have our native libraries for the major popular languages. That's really all developers need to interface with, because those natively interface with our API, and the API handles all of the features. From a code management perspective, we do code history and code versioning, so it's very much like checking in your application code. We have a dashboard and various monitoring features along with it, so you can actually manage and maintain your tasks, your schedules, and your queues.
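The concurrency idea above, running many copies of an I/O-bound job at the same time rather than one after another, is easy to demonstrate with a toy sketch: total wall time drops to roughly the slowest single task instead of the sum. Plain Ruby threads stand in for concurrent task slots here; this is an illustration of the principle, not Iron.io's scheduler:

```ruby
# Four independent "tasks", each simulated as a 0.1s I/O-bound call.
TASKS = 4.times.map { |i| -> { sleep 0.1; "task-#{i} ok" } }

def run_serially(tasks)
  tasks.map(&:call)                                     # ~= sum of task times
end

def run_concurrently(tasks)
  tasks.map { |t| Thread.new { t.call } }.map(&:value)  # ~= slowest task
end

t0 = Time.now
run_serially(TASKS)
serial_time = Time.now - t0

t0 = Time.now
run_concurrently(TASKS)
concurrent_time = Time.now - t0

puts "serial: #{serial_time.round(2)}s, concurrent: #{concurrent_time.round(2)}s"
```

For CPU-bound work you'd want true parallelism across machines, which is the distinction the transcript draws: concurrency caps how many task slots run at once, not where they run.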
Within the actual choreographing of the workloads, I mentioned the queue; we also set priorities, since some jobs might be more important than others, and schedules, of course. An interesting thing, and this goes back to the difference between applications and tasks: a task doesn't go down. You're not dealing with downtime; you're dealing with "okay, this job is persisted, let's retry it." We'll retry it until it succeeds, or we'll notice that it's just never going to work because the code is wrong, and we can use the dashboard and monitoring tools to figure that out. And then we handle security. It's a multi-tenant service: you authenticate, and we encrypt a lot of the data in transit as well as the code packages themselves, and we provide various logging and integrations to see what's happening. Getting into the components under the hood: we maintain a fairly complex environment, again abstracted away from the developers. Moving from left to right, we have two components that manage the priorities and the schedules of jobs. Jobs get placed into a task queue, and that queue is IronMQ. What really happens here, from a developer's perspective, is that the uploaded custom code is matched to a Docker image. We have these runners on the right here, and each runner is a Docker container. That container is continually pulling jobs off the queue; for each job it spins up another Docker container that merges the customer code with the base Docker image, executes the task, and then kills that container. That agent runs continually on the runner server, just pulling jobs and running jobs, with containers spinning up and down, all happening concurrently. What's important to note is that we can deploy that runner in the public cloud or on-prem, so you can choose where you want different workloads to be processed. It's really flexible in its deployment. And of course, when do you want to use this?
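The runner behavior described above, a long-lived agent pulling jobs off a queue and executing each in a fresh, short-lived unit of isolation, can be sketched with plain Ruby threads standing in for containers. This is a conceptual model of the pattern, not Iron.io's implementation:

```ruby
# A runner is a long-lived agent: it pulls jobs off the queue,
# "spins up" an isolated unit of work per job (a thread here,
# a Docker container in Iron.io's case), runs it, then tears it down.
def run_jobs(job_queue, concurrency: 4)
  results = Queue.new
  workers = concurrency.times.map do
    Thread.new do
      while (job = job_queue.pop)   # nil sentinel stops the worker
        results << job.call         # execute the task in isolation
      end
    end
  end
  workers.each(&:join)
  out = []
  out << results.pop until results.empty?
  out
end

jobs = Queue.new
5.times { |i| jobs << -> { "job-#{i} done" } }
concurrency = 2
concurrency.times { jobs << nil }   # one sentinel per worker for a clean stop
puts run_jobs(jobs, concurrency: concurrency).sort.inspect
```

The key property mirrored here is that capacity is the number of concurrent slots, not the number of servers: the queue absorbs bursts, and the slots drain it on demand.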
We hit all of the major themes of the modern cloud. Microservices, obviously a hot term, but it makes sense to think of these independent services as tasks. Same with mobile: running a serverless back end. As a mobile developer you don't want to have to think about infrastructure, so we provide a mobile back end that doesn't interfere with the user experience. We're seeing a lot of really interesting things in the IoT world; IoT is asynchronous in nature, so it's a very close fit with us. You can choreograph these workloads using both our queue for passing data and the task-centric platform. And then again, hybrid: deploying your workloads in public and private cloud environments. Where I think we're unique in this hybrid world is that, because we're dealing with tasks and not large applications, it's really easy to offload individual workloads to the cloud. If you're a large enterprise with a lot of sensitive data, you might have some workloads you want to offload to the cloud and others you want to keep in-house. We make it really easy to pick and choose by breaking things apart into individual tasks. You're using the same API in the private and public cloud with us, so it's really consistent and a great migration path for some large enterprises. Okay, real quick: why would you choose us? Again, there's this concept of a serverless environment: you can power these really large-scale workloads without ever having to think about provisioning and managing infrastructure. Of course, there's no such thing as a truly serverless environment, but from a developer perspective it's entirely serverless. That's a theme we've been talking about for years now, and it's just starting to make sense to developers, so it's pretty exciting to see people build out some really high-scale tasks, and the first thing
they always talk about is how awesome it is that they don't have to worry about infrastructure. Again, that means no ops: you don't have to worry about configuring the dispatching of workloads, managing infrastructure, or scaling at this workload level. We've shrunk the unit of scale to a task running inside a container with very minimal dependencies, and that makes for more effective scalability. And again, we have a very developer-friendly API with client libraries across all major languages; this entire environment makes it really easy to get up and running in minutes. We integrate closely with the platforms, and we can deploy to any cloud, public or private. I think I'm going a bit over time, so I'm going to power through some of the case studies here real quick, just to back up what we do and why you'd choose us. Bleacher Report is a great example. When a story breaks, they have a mailing list with subscribers in the millions, and they need to hit those subscribers in a very time-sensitive way with that new story before it becomes old news. So that story, as an event, triggers a master worker that goes to the database and spins up thousands of tasks within IronWorker, and each one of those runs concurrently to send thousands of push notifications. Because they broke that apart, they can send millions of push notifications in under a minute. They really distributed the load through us, and they never have to worry about having the infrastructure ready to handle that load, because they know we scale it out on demand when that event happens. That's a really common and great use case for us. Hotel Tonight, another popular mobile app, used us to build a pretty complex ETL pipeline, collecting data from a variety of sources and passing it through a workflow that does various filtering before delivering it to its end
destination. Each step in that pipeline is done through IronWorker, and it's just running 24/7. They never have to think about the inner workings under the hood; they just know it's always happening, and they get updates through the monitoring tools we provide, so they can be confident the data is getting from source to destination effectively using Iron.io. A great example of a mobile back-end user is Untappd. It's a very popular app, like Foursquare for beer, and it's just one guy who's been working on it in his spare time, so he obviously doesn't want to have to run a back end or worry about infrastructure. Every time someone checks in, that kicks off up to 10 different transactions, whether it's writing to a database or posting to social media. Each one of those tasks runs concurrently and then refreshes the results. By using concurrent processing, he was able to cut his response time from 7 seconds, when it was done serially, to 500 milliseconds done concurrently. So, just a few quotes from those use cases; we talked about how people are really into the idea of not having to worry about infrastructure so they can focus on building out features and business logic. That's why we're pretty excited to see the number of customers and use cases continue to grow on our platform. Okay, we're going to do a quick live demo. I'm going to remind everyone of my job title before I give this demo. It's just going to be a very simple hello world in IronWorker, showing how that scales, and then we'll walk through our dashboard real quick. Okay, it's going, okay, alright. So we're just going to look at a quick Ruby task here; it's about as simple as it gets. So if I just run this as a Ruby task... okay, there we go, that's pretty exciting. But now let's package it for IronWorker. So what does that mean?
I've taken this code, and this is the meat of it: that's my task. Then we have what's called a worker file, which is our packaging format. For this very simple job I just select the runtime and say exec hello.rb, and that's all it really needs. For a more complex example I might have some gems I need to include, or a config file with keys for a service I'm using, but this one is pretty basic.

So how do we run it? First let's talk about local development. We've done a lot of work to make this Docker-native, so let's kill that and look at this. What we're doing here is using Docker locally to run the task, choosing the same stack it would run on in our environment. It's a very good way to know that what you run locally is the exact same environment as production, and for more complex examples with different dependencies that's very important for consistency. So we'll run that... and this is actually running inside a Docker container. Again, not that exciting, but it shows the workflow.

Okay, so how do we run this in Iron? Let's look at a couple of steps. We want to package it, so we'll zip it up, then upload it; iron_worker upload is how you do that, iron_worker being our Ruby CLI. Then we'll queue it. Alright, let's look at this real quick: it's uploaded, it went to my project because I had my token... hold on, I'm getting chat messages here about the audio.
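For reference, a worker file for this kind of job looks roughly like the following. This is a sketch of Iron Worker's Ruby-based .worker packaging format reconstructed from the description above, so treat the exact directives as illustrative rather than authoritative:

```ruby
# hello.worker: Iron Worker packaging file (Ruby-style DSL; directives illustrative)
runtime "ruby"     # the stack the task runs on
exec "hello.rb"    # the entry-point script, i.e. the task itself

# A more complex job might also declare dependencies, for example:
# gem "rest-client"     # include a gem the task needs
# file "config.yml"     # bundle a config file holding service keys
```

From there the workflow shown in the demo is roughly: zip the code, run iron_worker upload hello, then iron_worker queue hello (exact CLI names and flags may vary by version).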
Got it, maybe it's the headphones or the connection. Sorry about that; I was echoing, so let's hope this improves.

Okay, so this has been uploaded, we queued it, and we ran it, so let's go to the dashboard. This is my task, and it just completed: you can see the time it started, it took 24 seconds, and it ended here. Revision 3 means this is my third revision of the code, because I had to update that hello world script so many times. Let's look at the log real quick: this is the output, this is it running in Iron Worker, in the cloud.

A quick tour through the dashboard. I can schedule this: say I want the job to run every day, I pick my hello task and set it to run every one day, and it becomes a scheduled job that executes on that schedule. The code is versioned, and there's a webhook URL here, which could be your event trigger. These settings are interesting: because we allot a certain amount of concurrency, if I want certain jobs to scale out more than others I can say I want this one to run 10 times concurrently, and to retry only three times before ending it. There's a lot we can do at the code level as well. We can also watch our usage over time; we've run 22 tasks. I can get to IronMQ from here too. I didn't think I'd set up any queues yet, but there's one from another demo I was working on. This is running on AWS; not much usage here, but that's our dashboard.

Now, here's the OpenShift side. I've got my OpenShift application here; I set it up and added both IronMQ and Iron Worker, and this dashboard is what I get when I click that link, so it's fairly integrated. The way I get these services is through the marketplace, where both Iron Worker and IronMQ are available. So I've deployed my application as-is to get a task up and running, but how does
that fit into the applications themselves? In a very simple Rails application I might have a controller action that says: ten times, run this Iron Worker task. I just include our gem, create the task, and run it ten times. So in my front-end application, which is about as boring as it gets, I trigger this hello world worker, a hello OpenShift worker, and if I go back to my dashboard you can actually see them running right now: running, running, running, complete. You can see the status go from running to complete; I just ran those ten concurrently. Now imagine a much more complicated use case than that. Going into the log, it's the same process, all of it running. So you can see how easy it is to get jobs up and running with Iron.io, and how quickly you can bind the service to your applications in OpenShift Online and integrate the Iron API within those applications.

I'm going to go back to the presentation real quick, because I want to talk about the different deployments, online and enterprise. Okay, that is not me DJing on that slide, that's a friend of mine; I'm no DJ. So, Iron.io and OpenShift: this was our booth a couple of weeks ago at the Red Hat Summit, and as you can see we are an OpenShift partner and a member of Commons. We have a variety of deployment models. We started as a public cloud business, but through popular demand we began doing on-prem deployments, and in the middle we found a sweet spot for enterprises and businesses that want all the benefits of the cloud but in their own environment. It's the same multi-tenant service and the same public cloud scalability, everything, except it's their own dedicated cloud. So we span from pure public cloud to very deep on-prem, secure systems behind the firewall, and all of those
deployments fit within OpenShift's deployment models. We just showed the online integration, which is very simple, just add us from the marketplace; and then there's OpenShift Enterprise, which is more the private cloud, on-prem deployment. Here's us in the marketplace, which we just saw; but how do we integrate with OpenShift Enterprise? One of the big reasons I'm so excited about OpenShift v3 is how well it fits with our own deployment and packaging models. Adopting Kubernetes under the hood, and Docker as the way to package and deploy services, matches how we do it as well. Both IronMQ and Iron Worker can be packaged as Docker containers, which makes them easy to integrate and deploy. We went through the Red Hat container certification process for IronMQ already, so that's done, and we're doing the same for Iron Worker.

As for actually deploying it: with Kubernetes under the hood for highly available service deployment, we can deploy the IronMQ service alongside the Iron Worker service as pods and scale them out accordingly, and the task runtime can be deployed as pods as well. That's where things get really interesting: we can scale out the runners in our environment within Kubernetes pods. Say I need 100 concurrent workers available: I can spin up the runners inside pods and just throw jobs at the runtime, and it all fits within the OpenShift Enterprise service model. Then, as you would with any other service, you scale via the replication controller and just add nodes. That's another thing that makes OpenShift v3 so easy; it's really simple to scale up and down. You can do that for the service instances, so if I want to scale up IronMQ I make sure I have enough nodes for a highly available IronMQ, and again add more nodes for workload capacity. That's where the runners come into play, scaling out the task-centric worker runtime using the same scale-out model as
you would with services. That's huge, because it gives operators of OpenShift Enterprise a really easy way to add capacity in the same way they do for services and applications. Then we work together with the customer and with the OpenShift team on the service broker: similar to the online model, where you bind the application to your account, or bind the service to your applications, and provision accounts accordingly, you can do the same thing in the enterprise world. And again, Iron.io is multi-tenant by design, so we can segment that multi-tenancy to the organization for an enterprise deployment. It's all very tightly packaged together, it works very well with the OpenShift v3 model for enterprise deployments, and we're super excited to see people adopt Iron within OpenShift and happy to be part of the ecosystem.

Okay, I'll end with a quote that basically perfectly frames this deployment model for both OpenShift and Iron.io. I found it through IDC: the concept of public and private paths in a hybrid world, where workloads can be directed to either public or private instances according to how the enterprise sets application policy. That is exactly what we've talked about: distributing workloads across different environments based on where they should run. Some are sensitive and stay secure on-prem; some are less sensitive and can run in the public cloud. Being able to direct those easily is going to be super important for building out these hybrid cloud solutions, and both OpenShift and Iron.io are all about being flexible and integrating nicely to work together.

And that's going to do it. As far as next steps, we're here at Iron.io and you can find out more. One thing we like to do with people who are interested in learning more is set up a pair programming session, where we give you a hands-on walkthrough of the platform and much deeper information about Iron.io than the hello world
one I gave. We can also do architecture reviews, where we get into identifying where a task-centric computing model makes the most sense. You can start a free trial with us and get up and running in minutes, and you can find us in the OpenShift Online marketplace; just search for Iron.io or Iron Worker. Okay, with that, I think I'll open it up for questions.

Alright, well, Ivan, thank you very much for this. It was certainly an eye-opener for me; I didn't realize how easy it was to embed asynchronous tasks. Even though it was a pretty simple hello world demo, it was pretty awesome to see basically one line of code add in asynchronous tasks. And I'm always excited to see pair programming on the menu, because I think that's a great way to learn. We haven't had any questions in the chat, so that must mean you've answered most of them. I'm going to unmute everybody; if anyone has a question, you're welcome to ask. Otherwise, I'd like to thank you again for sharing this, and I hope everyone will give it a try on OpenShift and give us your feedback. Great, well, thank you again. Thanks, we'll talk to you all soon; this recording will be up very shortly.