Hello, everybody. Welcome again to another OpenShift Commons briefing. Today we have special guests from IBM, from the Cloud Pak for Applications team. We have David Harris, who's an offering manager for Accelerators, and Chris Bailey, who's chief architect for Cloud Native Solutions. And they're going to be talking about IBM Cloud Pak for Applications and the new Accelerators for Teams. So stick around for the Q&A afterwards, and please post any questions in the chat. Thank you. Hi, everyone. I hope everyone is doing as well as can be expected in these times. My name is David Harris. I'm an offering manager for the Cloud Pak for Applications, and in particular, I focus on a set of capabilities and content that we call Accelerators, all with the aim of simplifying the development and delivery of cloud-native applications in the enterprise. I'm joined today by my colleague Chris Bailey, who's chief architect for this. He'll be on the Q&A helping out as I'm presenting, and afterwards as well. So, I'd like to start by giving a bit of context around the Cloud Pak for Applications, the motivation for the approach that we've taken with this offering, and ultimately what's in the box. But I'll focus primarily on, as I said, development of cloud-native solutions, including some tech preview enhancements we've recently delivered in version 4.2. If the demo deities are with us, I will hopefully show you some of this in action. We should leave some time for Q&A at the end as well. So, broadly looking at the application space today, we see these three challenges fairly ubiquitously across enterprises. In a way, they're very familiar, but perhaps they have some nuances in the new world of hybrid multi-cloud. Namely: how can you deliver faster? How can you contain operational costs, which are ever increasing?
And partly what's causing that high cost is the complexity that we're seeing as enterprises adopt this transformational journey that they're on, from legacy applications and methodologies towards the cloud-native platform. What does all this mean? Obviously, there are a lot of different viewpoints on this. I've called out one here: an analyst report from IDC. You can see on the left some of the stats that they're calling out, that over the next few years we are going to see an increase in cloud-native development for applications. We're going to see a huge increase in the number of daily deployments, driven by broader adoption of DevOps practices. And all of this is ultimately underpinned by this great shift we've seen towards deployments on containers. On the right-hand side, they talk about the need for a development and deployment platform which helps you build applications that can run across distributed infrastructures, and about this concept of an application stack to provide those cloud-native capabilities that we think of, like elastic scalability and the operational concerns. Key to all of this is a microservice architecture for development, and key technologies emerging such as Docker containers and Kubernetes. And we're going to talk in this presentation about how, through Accelerators, hopefully we can drive towards this future state where teams are more efficient. So we believe with the Cloud Pak for Applications that any successful application strategy requires you to be able to balance your investments: to be able to deliver those new experiences for your customers more rapidly and not be disrupted by new companies emerging in your industry, but also to look after your existing investment, the business-critical applications that you need to keep running. They need to stay on supported versions of software.
And for some of them you will also want to unlock more value. So how do you bring them into a containerized world? How do you provide APIs so that they can be leveraged by your new cloud-native applications? With the Cloud Pak for Apps, what we want to try and do is provide everything that you need for today and what you will need tomorrow. And it would be very rude of me to not mention OpenShift at an OpenShift Commons briefing. Across our portfolio we've been going down a container-based direction for quite some time now, specifically a Kubernetes-based direction. We see containers as the practical way of achieving multi-cloud portable workloads, and Kubernetes as the way to have common management and orchestration for those workloads. Together, IBM and Red Hat have contributed to almost every part of the Kubernetes platform. And we truly believe that building this in the open has been critical to the breadth of the ecosystem that's built up and the rapid pace of innovation and adoption that we've seen around these technologies. But we also see that what's crucial to our customers in particular is the integration of that innovation into a coherent, secure and supported offering. And that's why IBM has really doubled down on OpenShift as the foundation for our Cloud Pak strategy, where we deliver a common experience for all of our solutions now. This is our bill of materials for what's in Cloud Pak for Apps at the moment. As you can see on the top left, we have the licenses for those traditional runtimes: the familiar WebSphere family of products and JBoss EAP. In the bottom left, we have a rich set of modernization tooling to help you unlock the value in those investments. This can help you analyze binaries and start to move towards containerized deployments. And on the right-hand side, which is what we'll talk about today, is how to develop new cloud-native applications.
So this includes the OpenShift platform, the complete set of IBM and Red Hat runtimes, developer tooling, and this set of capabilities that we've called Accelerators for Teams. Coupled alongside this, Cloud Pak for Apps has this concept of a ratio table within its license that basically allows you to repurpose your entitlement over time, at a pace which suits your particular company's modernization journey. So when looking towards cloud-native, we see enterprises having to face two challenges at once. Namely, how do they deliver faster, but how do they retain enterprise governance? Solving these at the same time can be quite challenging. We started doing some research and some interviews with some of our clients, and this was a developer journey for an EU bank. What they were basically telling us was it was quite typical that it would take 90 days to develop and deploy a new solution. As you can see from this, the emotional journey that this developer is on is not a particularly pleasant one. A lot of time is getting spent bogged down just getting set up, getting infrastructure and onboarding, and actually getting configuration and integration right. We saw this particularly as they were starting to move those applications out of development into staging and production environments. What we'd like to do with Accelerators is to transform this cycle into one where a developer truly feels productive for the whole time that they're engaged on the project. And this is our notion that we want to take those 90 days down to a matter of hours, through the ability to codify the decisions that were being taken and the handoffs that were being made between teams. So what we try to do with Accelerators is to enable multidisciplinary teams.
That is, anyone with a vested interest today in how a solution ultimately gets delivered, to collaborate and codify their decisions so that developers are empowered and safeguards are in place, so that issues aren't encountered further down the line. We provide content and capabilities across the full SDLC. So starting on the left: how do folks actually design and ideate around a cloud-native solution to solve a business problem? We provide a set of content with the IBM Garage. These are the services organization who have best-practice agile principles. They have a set of proven-out reference architectures, and we have built exemplar applications using those and the Cloud Pak for Applications. We provide a collaborative solution development tool which helps development teams actually be productive much faster, and ultimately this allows us to automate day zero, if you will, as well as capabilities to ensure that you've got CI/CD and speed of delivery, and day-two operations, observability and maintenance. So it's not just day zero, it's everything for the lifetime of the application. Now, the core set of capabilities that we have been working on across version 4 of Cloud Pak for Applications are application stacks, developer tooling to help create applications with those stacks, and toolchains to facilitate the building and deployment of those applications in a governed way. Starting with developer tooling: we recognize that it's important to let developers use the tools and IDEs that they love, and so we have always tried to take a flexible approach where, where we have had to introduce new concepts like the application stack, we're making sure we're supporting developers to use them in an easy way. We've been working very closely with Red Hat on this to have a consistent, opinionated view on how folks should be developing cloud-native applications.
So whilst we had initially started with a CLI, which we open-sourced, called Appsody, we've recently announced that we have started to move towards odo, and in future we'll provide the same capabilities through odo instead. This is one of the examples of where we're starting to converge on technology choices. Looking now at application stacks themselves: the best way to think of these is as a best-practice technology stack for containerized applications. So this will include a particular runtime and framework. We support the full set of IBM and Red Hat runtimes here, so you can see Open Liberty, the new Quarkus runtime, Node.js and Spring Boot. They include common operational capabilities, so for example health checks so that an application can be automatically restarted. We include endpoints for monitoring and for OpenTracing. The stacks themselves are semantically versioned for auditability and control of updates. We provide a lot of labels on the actual deployed artifacts so that you can trace back to where an application has come from. This allows an enterprise to deploy and manage these applications at scale. They are of course completely customizable, so they are a starting point, but obviously we recognize that you may want to change a particular technology choice if it's not something that you're already using, or you would like to provide a stack which better suits a particular workload type, should we say. One way that you can do this is through this concept of stack inheritance: stacks can build on previous levels of stacks, and you can gradually abstract concerns away from the application developer. On the left we have this base stack for Node.js, which is very similar to an S2I approach where you package your application with a best-practice Dockerfile, and then we have what we call the cloud-native stacks, such as Node.js Express.
This is where we include a pre-configured Express server which has those endpoints I mentioned for metrics, monitoring, health, things like that. And you can even go a step further. We don't include this in Cloud Pak for Applications, but we've prototyped it in open source, where you can basically provide a functions-as-a-service-like programming model by including the Connect middleware, so a developer just has to write a request handler, effectively. One of the great things about this is it also allows for much easier maintenance. For example, if I wanted to bump the version of Node.js in that base stack from 12 to 14, then any stacks which extend it would also be updated, so the Node.js Express stack and all the applications built from either would just have to be rebuilt to get that update. Finally, we'll take a quick look at the pipelines that we provide. We provide a set of pre-configured tasks, all based on Tekton running as OpenShift Pipelines, that can easily be triggered through webhooks in GitHub to set up a CI/CD workflow. One of the reasons that we chose Tekton in particular is that it's a Kubernetes-native technology, which allows for a much more consistent approach in how you manage your overall technology stack for application development and deployment. We provide a set of capabilities for running tests, for linting, for signing images, and for verifying that the development stack that someone has chosen to use is one that is approved for the particular deployment environment that you're targeting. So I'd now like to talk about some of the tech preview additions that we've made, and this is where things get quite interesting. As I mentioned, we look to provide content which spans the full SDLC and truly enables multi-disciplinary teams to collaborate and build solutions.
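As a rough sketch of the shape of a Tekton pipeline like the one described here, something along these lines would chain stack verification, build, and signing (the task names and parameters are illustrative, not the exact ones shipped in Cloud Pak for Applications):

```yaml
# Hypothetical Tekton Pipeline: task names and params are illustrative only.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-verify
spec:
  params:
    - name: git-url
      type: string
  tasks:
    - name: verify-stack            # check the stack is on the approved list
      taskRef:
        name: stack-policy-check    # hypothetical task name
      params:
        - name: repo
          value: $(params.git-url)
    - name: build-image
      runAfter: [verify-stack]
      taskRef:
        name: buildah               # build the container image
    - name: sign-image
      runAfter: [build-image]
      taskRef:
        name: image-sign            # hypothetical signing task
```

A GitHub webhook would then be wired to a Tekton trigger that creates a `PipelineRun` for this pipeline on each push.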
The most recent capabilities that we've added are to help architects and business owners in particular lay out a cloud-native application topology of multiple connected microservices and backing services that can solve a business problem, whilst ensuring that developers are productive and that these solutions can be delivered quickly. We're starting with the fundamental building blocks for those solutions: namely, microservices which communicate over REST, and microservices which are event-driven, so reactive microservices, perhaps using a technology such as Apache Kafka. So I'd like to walk you through the workflow that we have with these. We start off with what we provide with the Garage as best-practice reference architectures; we actually bring these into this workflow. We have the best-practice application stacks for creating applications. We deploy through a series of operators, which I'll talk about in a bit. Clients can work either with the Garage or entirely using the on-demand materials that they have through things like the Cloud Architecture Center. A solution architect and business owner can start to collaborate in this tool we call Solution Builder. This allows them to load those reference architectures to learn from, or start from scratch with a blank canvas and drag components onto it to design their intended solution. They can then generate that topology, and what this will do is set up all the Git repos for the microservices, as well as GitOps repos for configuration. I will talk a little bit about GitOps in a moment. We provide the build pipelines so that you can go from the microservice source code to the GitOps topology, which will have the built image ready for deployment. And we also provide the deployment pipelines and the operators so that you can actually run those applications on OpenShift, and these will automatically connect to each other. They will be viewable in the OpenShift topology viewer.
They will basically show as green before an application developer is even involved. When they do get involved, they simply have to check out the respective GitHub repo and they can start to commit their changes. Those will trigger the build and deploy pipelines, and of course operations aren't left out of this: they have full control of how things get deployed through that configuration-as-code approach with GitOps. I did promise I would talk a little bit about GitOps. This is a methodology which is starting to emerge; Weaveworks, I believe, coined the term, and we do have a very good blog if you'd like to find out a bit more about it. But effectively it is configuration as code, and it allows you to record your intended state for a Kubernetes deployment within your source code repos, or within source control, so that you have an audit trail. It's very easy to do rollbacks and to promote between different environments. The approach we've taken is to have a GitOps repo per environment. A developer checks in code, that automatically gets propagated to the dev environment, and pipelines will deploy it to a development environment on OpenShift. Once you're happy with that, we provide a tool called services promote which will collect all the information from the dev repo, propagate that into the production repo, and then trigger another set of pipelines to deploy. We also use this tool called Kustomize, which is now baked into kubectl itself, and that allows us to tailor the configuration through overlays. So things like the number of instances you would like can be altered between development and production environments without having to change the application itself. It's all underpinned by another concept we're introducing called service binding.
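To make the overlay idea concrete, a production overlay in a GitOps repo might look like the following two small files (paths and names here are illustrative, not the exact layout the Accelerators generate):

```yaml
# overlays/prod/kustomization.yaml -- minimal Kustomize overlay sketch;
# the dev overlay would point at the same base with different patches.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base              # shared Deployment, Service, etc.
patchesStrategicMerge:
  - replicas.yaml           # prod-only override
---
# overlays/prod/replicas.yaml -- bump the instance count for production
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-a      # must match the name used in base
spec:
  replicas: 5               # the dev overlay might leave this at 1
```

Because only the overlay differs per environment, the application itself is untouched when promoting from dev to production.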
Service binding is an open specification which both Red Hat and IBM and a number of vendors have been collaborating on, working to define it and working to make services coherent with it, so that people know how to connect to them. It basically enables the dynamic discovery and configuration of microservice-to-microservice and microservice-to-backing-service connections. How does it work? If we look at an example: this is a very, very simple topology. I have microservice A and microservice B, which have a dependency on microservice C. If you're familiar with OpenShift and Kubernetes, this might be a typical location for those microservices. storefront-dev is the name of my namespace. Microservice A and microservice B are exposing on port 3000, because they're Node.js and that's the default; microservice C is Java and exposes on 9080. So if you were to connect to these, you would usually need to know that up front. You can actually code that into your application, but it means that you end up very tightly coupled. Ideally, you would want to be able to do it dynamically, and the way that we can do this: so this is a truncated version of our deployment configuration for microservice C, and the addition that I've highlighted here is the statement which basically says, I am going to provide an OpenAPI, so a REST-based endpoint, for other microservices to connect to. The effect of that is that we use it to create what's called a secret, which is a way of basically passing configuration around. Within that secret we put the full address for the microservice, including its port, and potentially any operational or optional context. So if you wanted to say respond to /v2 because you're using v2 of the API, all of that can be put into the secret. So microservice C creates this, and it's basically a container of configuration on how to connect to it.
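The provides statement and the resulting secret might look roughly like this. This is a sketch of the mechanism only: the API group and exact field names are illustrative, and the real CRD used by Cloud Pak for Applications may differ.

```yaml
# Microservice C declares that it provides a REST (OpenAPI) endpoint.
apiVersion: app.example.com/v1      # hypothetical API group
kind: RuntimeComponent
metadata:
  name: microservice-c
  namespace: storefront-dev
spec:
  service:
    port: 9080
    provides:
      category: openapi             # "I expose a REST API others can bind to"
---
# The operator then materializes a Secret that other services can consume.
apiVersion: v1
kind: Secret
metadata:
  name: microservice-c
  namespace: storefront-dev
stringData:
  url: http://microservice-c.storefront-dev.svc:9080
  context: /v2                      # optional context, e.g. the API version
```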
So microservice A, which is for example a Node.js front end that wants to connect to C, adds a consumes statement in its deployment YAML which says: I'm looking for something called microservice C which exposes a REST endpoint, so give me that secret. We take it and inject it into microservice A. By default we do this through setting up some environment variables, so that the application can read from those variables and knows how to connect to microservice C. The same secret can be injected into microservice B. What's important is that they no longer need to know in advance the location of microservice C; instead, they're dynamically discovering it by stating that they consume microservice C. And this is useful for when you start to move where that microservice is located. For example, if I move microservice C to a staging environment and its address now changes, I don't have to change anything in microservice A or B, because that discovery and that connection is all handled through the secret. That's microservice-to-microservice discovery. With databases it's a slightly different process, but it effectively works in the same way: we use something called the Service Binding Operator, and microservice C will create a service binding request. The first part of this, the backing service selector, basically asks to look up a particular service and for a secret to be created if it doesn't already exist. Within that secret, just as before, it puts the full address for the database, but what it also adds is the credentials that are required on how to connect to that database. The reason this is an advantage is because the alternative is you would have to store this either within the application source code or within the application configuration in the GitOps repository, and generally it's accepted that storing credentials in this way is a bad idea; I think we've seen a number of recent exploits of this which have been very public.
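The consuming side of the microservice-to-microservice case might be sketched like this (again, the API group and field names are illustrative of the mechanism, not the exact CRD):

```yaml
# Microservice A declares that it consumes microservice C.
apiVersion: app.example.com/v1      # hypothetical API group
kind: RuntimeComponent
metadata:
  name: microservice-a
  namespace: storefront-dev
spec:
  service:
    port: 3000
  consumes:
    - category: openapi
      name: microservice-c          # "give me the secret microservice C created"
      # By default the secret's keys are injected as environment variables,
      # e.g. something like MICROSERVICE_C_URL, so the application code reads
      # the address at runtime instead of hard-coding it.
```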
So that's the secret created for the backing service. The next step is to say which microservice actually needs this. So within the application selector we're saying that this is required by microservice C, and that will then get the secret and have that information injected in the same way as before. So with the combination of service binding and provides/consumes statements, we now have full dynamic discovery and configuration for both microservice-to-microservice and microservice-to-backing-service connections, and this allows for true portability of applications through different environments. So I'd now like to stop sharing the presentation, and instead, if the demo gods are with me, we will share a live demo. Just give me one moment. I'm going to assume everyone can see my screen, so please shout if you can't. For the purposes of transparency, I'm going to set up an entirely new GitHub organization, just to show that I'm not cheating, and introduce you to Solution Builder. I'll make it a little bit bigger so that hopefully you can see it, but this is a tool that we use to design application topologies. As I mentioned, it includes reference architectures that we have collaborated with the IBM Garage on. This one, Coffee Shop, was actually from a collaboration with Red Hat on how to explore the difference between REST-based microservices and reactive ones, using Quarkus as well as Open Liberty. We have Storefront, which is a simple REST e-commerce solution with a web-BFF pattern for back-end microservices, which connect to their respective data stores. Or, as I say, you can start from scratch. We provide three components at the moment: REST microservices, reactive microservices, and a database.
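Pulling the database binding flow together, a service binding request of the shape described above might look like the following sketch. The group, version, and kind values for the database are illustrative, and the exact ServiceBindingRequest schema may differ by operator version.

```yaml
# Sketch of a ServiceBindingRequest binding microservice C to a Postgres
# database. Group/version/kind values for the backing service are illustrative.
apiVersion: apps.openshift.io/v1alpha1
kind: ServiceBindingRequest
metadata:
  name: microservice-c-postgres
  namespace: storefront-dev
spec:
  backingServiceSelector:           # which service to look up; a secret with
    group: postgresql.example.com   # the address AND credentials is created
    version: v1                     # if it doesn't already exist
    kind: Database
    resourceRef: storefront-db
  applicationSelector:              # which workload needs the secret injected
    group: apps
    version: v1
    resource: deployments
    resourceRef: microservice-c
```

The credentials thus live only in the operator-managed secret, never in the application source or the GitOps repository.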
I can click and drag them into a topology. Just to make it easier, I'll drag them together: in this one I'm going to have a reactive front end, a reactive back end, and a REST back end, and that reactive back end is going to connect to this Postgres database. I can say within my reactive front end what topic I want to produce messages on. I can drag the connection between that front end and a back end, and this will automatically receive the information for that topic. I can add bindings so that I can connect my reactive back end to a database, and in the same way I can add a binding to connect my reactive front end to a REST back end. You can give the components different names, and you can choose from a base set of technology stacks: so if you didn't want to use Java Open Liberty, you could use Spring Boot, you could use Node.js, you could use Quarkus. Within the blueprint itself you can give the application a name, and you can provide some choices on how you want to do GitOps, and how you want to configure Kafka should you be using that, which we automatically include if you start to include reactive components. So I'm going to start with Coffee Shop. I'm going to save this as my own version, I'm going to tell it to go to my newly created GitHub organization, I'm only going to create a dev GitOps environment for now, and everything else I am going to leave as default. I'm just going to save all that, and once I click generate, what that will do is scaffold out those GitHub repos and the GitOps repos that I was talking about earlier. I just need to give it my token, so I'm just going to quickly grab my key. You can see it start to generate, and you can expand each of these nodes to see what it's doing. Basically, what this is doing is creating the repos and then populating them with scaffolded-out applications which already have the appropriate configuration, so that they are deployable to OpenShift and they know how to connect to each other. It shouldn't take too long for these
ones, as I have done these fairly recently and it caches parts of them. There we go. So if I go back to my GitHub organization and do a refresh, you can see the three microservices that have been created, and the GitOps repo. I can now quickly show you what the development experience will be like as well. Let's make sure that's empty. I'm going to clone one of the microservices and open it up in VS Code. Within this we have a very simple application, the base for a REST microservice, which basically just spits out a dummy hello world page, as well as the operational capabilities such as MicroProfile metrics and, in this case, OpenAPI. It also has all the deployment configuration, which we can see here. The important thing is that, as shown in the diagram, it has this provides statement, so it is showing that it provides a REST API and other microservices can connect to it. So that's a very quick demo of how quick it is to get everything set up and running. I am going to go back to the presentation now, as I just want to summarize what we've just seen. The capabilities that we provide in Accelerators allow multidisciplinary teams to design applications based on best-practice reference architectures and compose solutions of connected microservices and back-end services in Solution Builder. This can be used to automate day zero, with all the required source code repos being set up, all the required configuration for ease of deployment to Kubernetes and OpenShift, and how to connect those microservices together. GitOps is provided as a way to have a single source of truth for your deployments on OpenShift, which makes it a lot easier for operations to control, and very easy to recreate should there be a disaster recovery situation. The application stack capabilities provided with Cloud Pak for Apps mean that those scaffolded applications already have health checks for Kubernetes restarts, and they already have Prometheus
metrics included, OpenTracing, and performance dashboards which can be visualized in Grafana, and the applications also show up logically within the OpenShift topology viewer. The approach that we have taken with Cloud Pak for Applications is designed to help you meet the twin challenges of both delivering faster and retaining governance, making sure that you're using pre-approved application stacks, but with all of this based on open standards and open technology with, crucially, enterprise-level support. Thank you very much for listening. That has been a lot of me talking, but hopefully not too long, and we have got some time for questions, so I'd like to stop sharing now. Thank you, David, that was great, and I'm so happy that the demo worked. I mean, you talked a lot about, well, first of all, you said it's tech preview in 4.2 for Accelerators for Teams, and just for people that are watching, that's the 4.2 version of Cloud Pak for Applications, not OpenShift 4.2, correct? So Cloud Pak for Applications 4.2 includes OpenShift 4.4. And when are you moving to other versions, so when does 4.3 come out, I guess, is the better question? So Cloud Pak for Applications is on a quarterly release cycle typically, so if you follow that train of thought, it would be third quarter when we see a new update. So does that mean what you've shown as tech preview will now be GA in third quarter? Because it's great. Not necessarily. The reason that we've put this out as tech preview is really to show the intent and the journey that we're on, but we want to make sure that we can really get feedback from our customers to inform the design decisions early on, because there are some things that we know inherently we will need to add, or new features that we will need to add, but we'd like to hear from our customers where they want to draw the line between what is a developer's concern, what is an operational concern, and what configuration do they need to declare upfront when designing solutions
and how we can best help them. So we're already engaging with a number of our advisory board clients on this, but we wanted it in the hands of developers and architects as soon as possible, which is why it's tech preview now. Can you share with us some of the things that you're hearing from your clients? So one thing that we saw very quickly is how we had designed the use of Kafka. We currently have point-to-point connections between reactive microservices, but we found that there are some very common patterns where a microservice would produce and consume on the same topic, and we didn't have a way to represent this with the view that we had set up. So we're currently working on a way which mitigates that, so the topic can be a sort of shared asset that's viewable in the viewer. We heard some things around securing communication with Kafka, which we currently aren't able to do, but a near-term update will be able to do some things along that line. We are hearing that a lot of folks have their own application stacks they would like to bring in, so we're looking at how we can make this tool potentially more flexible. There's a number of very early requirements that we've had that are really exciting for us, because it shows us the appetite for what we're doing. I muted myself so I could listen to you. As you're showing us the journey, which is great, how do you see the journey going? You know, more, how do you see cloud-native development, and what you're working on, and GitOps, where do you see the future going? But I'd actually like to ask that question of Chris, if he can share his views there. Hang on, yeah, hang on just a second, I'm getting, oh, which Chris? Sorry, go ahead, Chris Bailey. You have too many Chrises. Sorry, go ahead, Chris. Yeah, sure. So there were a couple of foundational pieces that, you know, you really need to start being able to build complex solutions and manage them at scale. So that was bringing in things like GitOps so that you
have a multi-component management system, and it was bringing in service binding, which makes it possible for us to migrate applications between environments. That's kind of a foundational basis. But service binding is something that we now need to drive out through the community, to get all of the providers of operators making that possible. Now, one of the things you'll then see in Solution Builder is that we'll start to dynamically read from the installed set of operators that implement service binding, as a set of services and capabilities that you'll be able to use in Solution Builder. So the capabilities you'll be able to use will start to match what you've got available in your OpenShift install. But going beyond that, we'll start to look at some of the non-functional concerns, and by that I mean things like security. So as David said, in Solution Builder today we work with Strimzi as the implementation of Kafka, but it lays down an unsecured Kafka, and that's because we don't yet know what the security requirements would be for the application. So one of the things you'll see us start to do in the future is allow you to configure security policies: should all of the traffic on a Kafka topic, or between two REST-based microservices, be encrypted and do certificate exchange? So we've got this foundational set of capabilities, but in the future it's going to become more about the richer set of components you can use, more complex applications using a greater set of services, and then being able to start securing those, starting to potentially add requirements that you want to have in terms of performance, understanding what your performance criteria are, and building custom dashboards. So all of the components we lay down today are already, you know, health-check and metrics enabled, but you still have to build your own dashboard. The fact that we know the components you've got in Solution Builder means we
know the services that you're using like Kafka we can actually automatically build custom dashboards that represent all of your microservices uh represents you know the topics that are are um being used in Kafka giving you an application level dashboard and if you tell us things like performance requirements then we can start to overlay um you know alert manager to alert you when uh SLA's are being missed or you know for optime or or performance criteria at the front end and then I'm going to assume other day two operations and then also um you know you mentioned high availability earlier um so more of those enterprise grade features yeah absolutely the aim is to try and you let that solution architect define not just the topology but the requirements on the solution and helps them make that true yeah I think that's something that we've we've seen start to evolve a lot over the years in terms of like the access code approach so we talked about skittops's configuration of code but we're already seeing solutions around like compliance as code um and also being able to to store that decision in a way that can can be policy enables a much faster delivery cycle certainly and it kind of couples with that idea of of shift left where if you can you can already put these concerns up front in the development lifecycle you're you're much faster and you don't have the more costly discovery and a remediation when it comes later down the line hey Chris short I know you wanted to jump into you're welcome to you know I'm good now thank you Chris Bailey for chiming in I appreciate that thank you no that's um I mean that's great to hear the direction that you are going especially all the enterprise grade and security I know we would love to focus more on security as well and I've heard you talk a lot about collaboration with red hat such as service binding um and other teams and also um the tecton you mentioned open shift pipelines now so what are your thoughts on server lists or 
service mesh, and other technologies that are coming out?

Yeah, certainly. Since the acquisition of Red Hat, what we've seen is Red Hat obviously staying an entirely independent organization, and that has been really critical to the growth of the OpenShift platform. But what it has allowed is for IBM, as I say, to double down on the capabilities that are provided there, and to reduce the overlap, or the competition, that we've seen where necessary. So, for example, take Knative as a way of doing serverless deployments of containerized applications: we started by shipping our own version of Knative, and now we work very closely with Red Hat on OpenShift Serverless, so we leverage that capability instead. In the same way, I believe we used to use our own Istio, and now we use Red Hat Service Mesh instead, and we're starting to integrate things like Kiali. So yes, there's definite collaboration and consolidation, I guess, because for us that allows us to build value on top of those capabilities rather than producing something that is fundamentally the same. I think that's a trend that's going to continue to grow, and we're already looking at the new runtimes such as Quarkus, which is obviously a great choice for serverless Java. Chris, anything additional?

Yeah, I'd just say that one of the things we did in the Runtime Component Operator, which is how we deploy and manage these services, is we made the use of Knative and OpenShift Serverless a deployment decision. All of those microservices are already ready to be used with OpenShift Serverless: in the app-deploy.yaml, your Kubernetes configuration for deployment, you can set a single flag to true, and that will deploy it as an OpenShift Serverless service rather than as a regular Kubernetes pod. We built that in from day one, because serverless is something that's going to become increasingly important over time, so it applies to everything we lay out and build. The way we thought about it is that serverless is really a scaling policy: what you're saying is that you want to scale on demand, from zero up to however many requests you've got, and then back down to zero afterwards. You could also have a scaling policy that says, I want to have six replicas. So that's the way we represented it in the Runtime Component Operator's configuration: you choose whether you want a scaling policy that is serverless, or a static number of replicas, or replicas plus pod autoscaling.

So how has it been working in open source? I know you have before, in the past, but now you're diving further into it.

I would certainly say that for large parts of IBM it's no different, because we've been doing it for 10 or 15 years. I don't think there's a single project that I've worked on in IBM that is not now open source. The first project I worked on that wasn't was IBM's implementation of Java, but that became open source through OpenJ9, in the Eclipse ecosystem. So pretty much every line of code I've ever worked on at IBM is open source, and for large parts of IBM it's been no different. In fact, we've been collaborating with Red Hat on things like service binding, on GitOps, and so on; the fact that there was an acquisition isn't going to change the approach, because all of this collaboration has been done through open source communities. For a lot of these things, Tekton for example, now OpenShift Pipelines, and the same with things like Istio, IBM and Red Hat were already collaborating through those open source communities. So in a lot of ways things haven't really changed: we're still collaborating, just as we did before.

It was kind of a leading question, because I've seen a
lot of the collaboration, especially with the runtimes and other teams and upstream projects, and it's always amazing seeing everybody work together in the open, and seeing it benefit both Cloud Pak for Applications and the OpenShift Container Platform.

Absolutely.

I had other questions. Hey, Chris, do you have any other questions in the stream?

No, I do not, sorry.

That sounds good. Okay, those were most of my questions too. Is there anything else that you would like our audience to understand about Cloud Pak for Applications, or your direction, or even any technical information? Because we have a lot of technical people that watch this.

I think I've certainly spoken enough that my voice is going, so I will let Chris add anything that I've missed.

I think the only thing to end with is that, as David has said, what's out there in terms of Solution Accelerators, Solution Builder, and so on is there as a tech preview. Our intent is to continue to build on it, and like any open source community or any tech preview product, the more feedback we get from users, the more we can do to build something that actually does what users want it to do. So I encourage you to reach out to myself or David through whatever form works best, whether that's email, Twitter, LinkedIn, or any of the open source communities that we work in, and give us some feedback.

I'll put our contact details, both email and Twitter, on the screen now, so if anyone does want to get in touch, please do.

Thank you. And some final thoughts: you mentioned the service binding collaboration. The hour right before this one is the Developer Open Office Hours, and next week, at the hour right before this one, it's going to be service binding, so that's Open Office Hours on service binding for anybody who wants to join. Also, at commons.openshift.org you can join the OpenShift Commons community, and this recording will be posted on the OpenShift Commons YouTube channel as well as all the streaming services. So thank you again; we very much appreciate you joining us today and sharing all your insight.

No problem. Thank you very much for having us and giving us the opportunity.
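As an illustration of the service binding the speakers describe, the Service Binding Operator that IBM and Red Hat collaborate on lets you connect a workload to a backing service declaratively. This is a minimal sketch only, assuming that operator's `binding.operators.coreos.com/v1alpha1` API; every name and namespace below is hypothetical, and the exact field shape may differ by operator version:

```yaml
# Hypothetical example: bind a microservice Deployment to a Strimzi-managed
# Kafka cluster via a ServiceBinding resource. The operator then injects the
# service's connection details into the workload.
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: order-service-kafka    # hypothetical binding name
  namespace: my-solution       # hypothetical namespace
spec:
  application:                 # the workload to project bindings into
    group: apps
    version: v1
    resource: deployments
    name: order-service        # hypothetical Deployment name
  services:                    # the backing service(s) to bind against
    - group: kafka.strimzi.io
      version: v1beta2
      kind: Kafka
      name: my-cluster         # hypothetical Kafka custom resource
```

Driving this pattern out through the operator community, as Chris describes, is what lets Solution Builder discover bindable services dynamically from the installed operators.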
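The "single flag" deployment decision and the "serverless as a scaling policy" framing Chris describes can be sketched against the Runtime Component Operator's `RuntimeComponent` resource. This is a sketch under the assumption of that operator's `app.stacks/v1beta1` API; the image and names are hypothetical, and only one scaling policy would be active at a time:

```yaml
# Sketch: serverless treated as one of three scaling policies in the
# Runtime Component Operator's deployment configuration (all values illustrative).
apiVersion: app.stacks/v1beta1
kind: RuntimeComponent
metadata:
  name: order-service                                    # hypothetical component
spec:
  applicationImage: quay.io/example/order-service:1.0.0  # hypothetical image
  service:
    port: 8080

  # Policy 1: serverless -- deploy as a Knative/OpenShift Serverless service,
  # scaling on demand from zero up to current request load and back to zero.
  createKnativeService: true

  # Policy 2: a static replica count instead.
  # replicas: 6

  # Policy 3: replicas plus pod autoscaling.
  # autoscaling:
  #   minReplicas: 2
  #   maxReplicas: 6
  #   targetCPUUtilizationPercentage: 70
```

The point of the design is that switching between a regular Kubernetes deployment and OpenShift Serverless is a one-line configuration change, not a rewrite of the service.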