Good evening, everyone. This is my first JUG talk in Singapore. I had wanted to join for a while, and thanks to my colleague, who convinced me, here I am, and it's a good thing. Red Hat has actually had a presence in Singapore since around 2000; I can't remember exactly, but it was before I arrived in Singapore in 2008, and I've been here almost 10 years. I've done some events, but mostly events driven by Red Hat. As an engineer at Red Hat, I'm very happy to be able to share our experience, especially on the open source side, so I hope you learn something today. I'll talk a bit about the history. Most of you know open source: it's true that we sometimes work from garages, from Starbucks, from home. Red Hat is actually different from the typical startup, and I'll talk a little more about the history of Red Hat, how it started and when they acquired us, to give you a little perspective on how open source happens. I'll talk about some of the projects and about the Java user group. I'll try to focus on Java, obviously, but not just Java, so I'll also touch on the other projects. I'll talk about what's happening on the JVM, from the JDK level all the way up to enterprise Java and the future of Java. It won't be too technical, but as Michael and I discussed, if we really want to deep dive into some of those areas and technologies in the future, we can do that, and I'll let Michael talk about some of the upcoming events; you'll have plenty of opportunity for that. Some of them will cover WildFly and so on, and also a little about the future of enterprise Java. Have you heard of WildFly, actually? Yes? That's good. And MicroProfile; I'll touch on that briefly towards the end, if you're interested. For many of you, JBoss is synonymous with an app server. JBoss actually started with Marc Fleury in 1999. The goal was to provide an open source alternative to the proprietary world, to vendors like IBM and BEA at the time, who were among the first to implement some of the Java specifications around server-side Java development. The servlet specification, the EJB specification, all of that existed only in proprietary implementations, but JBoss was one of the first to deliver open source implementations. The first release was in 1999. I was working at Barclays Capital at the time on the e-commerce team, and we had already introduced some extreme programming techniques, especially pair programming. I was sitting next to one of the developers and said, you don't seem to be working on our project (we were working on a master data project); he was actually working on an open source project. And for good reason: like many people at the time, we were a bit resistant to being forced to use the big app servers, where it was usually a mandate from the application architecture team to use EJBs on WebSphere, and a lot of us were looking at alternatives. So one of my colleagues had already joined the JBoss team and was working on the JMX microkernel. So there's a bit of history. JBoss was acquired by Red Hat in 2006.
Just before I joined Red Hat, I was a customer; I used JBoss for many years, and in 2007 I joined, so it's going to be 10 years next month. I'll talk you through some of the evolution of JBoss. On one hand it has followed the enterprise Java specifications, because we are licensees to Oracle, and to Sun before that; on the other hand we have to answer the demand from users. I'll talk about the community as well, and we'll go a little into the open source model and the open source culture: what the dynamics are, what happens, how things grow and develop. So we were acquired by Red Hat, as I said before. Red Hat was founded in 1993, had its IPO, and acquired us in 2006. There have been a lot of major milestones; I'll focus mostly on the middleware side, but there was also an important shift, especially after we started to release OpenShift Enterprise, where you started to see people moving workloads not just on premise or to virtual environments, but also to the cloud. We had JBoss running on Amazon, as we already had AMIs for Amazon, and then we started to work on OpenShift to provide our own implementation running in those environments. That shift triggered a lot of changes, not just in the TCK or the various enterprise specifications; the shift in the industry triggered a lot of things that made us change JBoss fundamentally at its core, and I'll go through that as well. We did a few other acquisitions. We acquired FeedHenry, the mobile backend-as-a-service, and that was another change in the middleware space: we talk about Java, but FeedHenry is actually a Node.js backend. That may seem a little different; people think of us as a Java-only shop, but we're not. We do a lot more than that, and I'll touch on that later. And for API management, we acquired 3scale. When we acquire companies (I'll talk in more detail later), they may be proprietary companies, and we go through a process of open sourcing them. Okay, so let's talk about the open source model. That's the garage, the people; it's not just about developers. Actually, let me step back a little. You see one million projects here: these figures are from the Black Duck and North Bridge 2015 Future of Open Source survey. Part of what we provide is IP indemnification for the people who subscribe to our products. When we do an acquisition and go through the open sourcing process, we have to make sure that the source we deliver is compliant from an IP perspective. As part of that process, we of course go through all the code. Just take JBoss, the application server: in itself it has hundreds of sub-projects, and I'm talking just direct dependencies; there can also be transitive dependencies that bring in others. And similarly for Node.js, it's an even bigger ecosystem. So that represents the open source world, and the open source world is very, very large. But Red Hat doesn't write all of that, obviously; that would be a lot.
But what we do, and what we are very good at, is bringing first-class components together into something that's stable, standard, and supportable. Because what matters to enterprise users is something they can reproduce. They want to develop and deploy exactly the same thing, without changes; they want to focus on application development. I was talking to Michael earlier about one of the biggest problems: more and more we hear about bugs like Heartbleed, for example. How do you fix a Heartbleed vulnerability in a deployment of a thousand servers, where you have different versions of an app server, or different open source components you've downloaded? How do you do that today? That's what we answer. That's where Red Hat brings the knowledge and provides the mechanisms to deliver those fixes to you asynchronously. So this is more than just people coding and pushing to GitHub or Subversion; there's a lot more behind it to harden and certify it. We also work with others. We're part of the expert groups, for example, with Oracle, IBM and others for Java EE, and the same for other standards bodies, such as those for web services, and there are a lot of them. Lately we've also been working with IBM and others on MicroProfile, collaborating to drive standardization and using open source as the channel to drive it. A shared problem is solved faster. When I was a customer, we had problems, needs and requirements, and we started to contribute to open source. Very often, open source projects start from a real need: Nike recently open sourced something, and I'm sure Redmart will probably open source something too. So it's not driven by product marketing or product management; it's driven by real issues and by people working together. You see that with OpenStack, you see that with other projects, with Docker, same thing. And when these projects are initiated outside, we contribute to them too. We're not just consuming; we are extremely invested in them. We might contribute 90% or 99% to a project, but sometimes it's just 10%. In the Linux kernel, for example, we are one of the largest contributors. Transparency is very important. Everything we do, including in my team, engineering upstream, we do on GitHub or otherwise in the open. We have discussion forums. There's nothing my team does that you're not aware of or can't find. All our IRC discussions are available, we keep notes of every meeting, and every discussion happens openly. This is very important: there's nothing we hide, and that culture is in Red Hat's DNA. It started at JBoss, and it was a perfect fit. The same goes for MicroProfile: we encourage that collaboration to drive innovation. That's really at the heart of how we function, and of the culture of Red Hat, and it's something very strong. Sometimes I get comments about it, but I think this is normal, and it's actually a pleasure to work in a company that walks the talk. So, typically, what happens with these projects, as I mentioned, is that part of our work is to take them and productize them: to make sure that they are certified.
We make sure they run on all these operating systems without regressions, and that they are certified against all these JDBC drivers and all these architectures, whether it's ARM, RISC and others. That's the work we do, work that people then don't have to do themselves, because it takes a long time. I'll go into more detail on some of that later. And it's not only at the runtime level. We also contribute, for example, PatternFly; you can access it at patternfly.org, I think. It's a set of Angular-based, open source UI components with a common look and feel that you can use to assemble UIs, and we use it for all our projects as well. Everything we do, from the UI down to the runtime, we make open. So now I'm switching to Java. I'm sure this picture is familiar to many of you: is Java dead? No. That's something I wanted to touch on, because it comes up quite often, sometimes due to the silence around what's happening with the enterprise Java specifications, et cetera. But Java is here first. It was actually designed for IoT, for embedded devices, if you remember the first iteration of Java. Now we see it a lot in microservices, whether it's MicroProfile, WildFly Swarm, Payara or Spring Boot: it's Java-based. The whole Android ecosystem is also Java-based, with new JVM-based languages layered on top of it, like Kotlin or Ceylon. Event-driven reactive frameworks are based on Java too; look at Vert.x as one good example. And it's adaptable to new paradigms. So Java is here. We invest a lot of time and effort in Java; of course, as I mentioned before, we also invest in Node.js and other languages. But Java is definitely here. I just wanted to get that out of the way. So, I mentioned investment in Java. What does that mean? Our commitment is obviously as a licensee of Java. Who here has used Java EE 6? Yeah, a few. A lot of the spec enhancements in Java EE 6 actually came from JBoss: CDI, Bean Validation, et cetera, were contributions from Red Hat engineers. On the JDK side as well: Jason Greene wrote a blog recently about all the work we do with Oracle, helping them around Java 9 and the Jigsaw project. We introduced modularity ourselves because we were waiting for it; it was supposed to be in Java 8, so we introduced it already in JBoss AS7 with JBoss Modules, because there's a lot of goodness there, including notions we wanted to inherit from the OSGi world. So we have a lot of expertise, and we contribute it directly with Oracle. We've also invested in the future, working with other groups (I'll touch on that at the end of the presentation) around the future of Java in the enterprise, because Java is here to stay. There are a lot of requirements, especially with the direction we're taking around cloud-native hybrid environments and very high-density, constrained devices, so we're working on that. Of course, we have committers and lots of contributions to OpenJDK. Red Hat is probably the most active there on the AArch64 ARM processors, and also on ultra-low-latency garbage collection: we have projects that run with extreme memory, extremely large heap sizes.
That's definitely something that matters to us in JBoss: the OpenJDK, layered on top of the operating system, is very important for optimization, especially for near-real-time applications like trading, betting, military and government systems, where we also provide solutions, so we have to work at all those levels. A question from the audience: do you know how the OpenJDK compares to the Oracle JDK in terms of popularity? That's a good question. I don't have stats, but I know that people running Linux typically use OpenJDK. Of course, we give the choice: we actually test the IBM JDK, the Oracle JDK and OpenJDK. So I don't have exact numbers, but OpenJDK is definitely a very popular implementation; of course, I'm talking from a Red Hat perspective. I don't know if there is data around that, but it would be interesting to check. So, in JBoss, what's our goal? Our goal is to deliver middleware to you: runtimes, but we also provide tooling, and capabilities to manage your applications once you deploy them. When I talk about management, I mean application performance monitoring, logging, metrics; we also have bytecode instrumentation, so you can work AOP-style. On tooling, we work very closely with Eclipse. We also work with Microsoft on Visual Studio Code; we actually do the Java part of VS Code. It's a very strong collaboration: since we announced the collaboration with Microsoft, we have teams working actively to make your life better, and there's definitely investment in those directions. We don't just do runtimes; we do integration too. We have an integration runtime; we did the acquisition of FuseSource, if you remember. So Apache Camel is a project that we have, and it's not all about Java: Apache Camel runs on the Karaf runtime, an OSGi runtime. So we have expertise not only in Java but also in OSGi and Node.js; whatever workload you decide to go with, we have those capabilities. We also have mobile, from the SDK to the push notification servers to the mobile backend-as-a-service. Again, we have teams writing Swift, Android, whatever clients you want, and all the open source equivalents. And finally, we also work very closely with Docker; I'll mention that later. All our projects you can download as a zip, but they are also available on Docker, and you can do a git pull as well; I'll come back to that. So: tooling, capabilities, multiple runtimes, meaning OSGi, Node.js, Java. We also offer cloud services. Everything we have, all the projects we do, even security and identity management (we have a project called Keycloak) and the app server, runs on bare metal, in virtualized environments or on the cloud, whether you decide to deploy on Amazon, on OpenShift, on Azure, or on Google Compute Engine. We support that, and OpenShift runs on Azure today. And you don't have to worry about things like, to take one of the questions from a recent meeting, how do I do service discovery? We actually implement all these integrations with the different cloud providers, so there is auto-discovery, et cetera.
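Coming back to Apache Camel for a moment: here is a minimal sketch of a Camel route in the Java DSL, assuming a Camel 2.x classpath; the directory endpoints are purely illustrative, not from the talk.

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileMoveRoute {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Poll a directory, log each file, and pass it along: a classic integration route
                from("file:inbox?noop=true")
                    .log("Received ${header.CamelFileName}")
                    .to("file:outbox");
            }
        });
        context.start();
        Thread.sleep(10_000); // let the route poll for a while
        context.stop();
    }
}
```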
Some of the projects are very popular. I mentioned the JBoss application server: after seven iterations, we actually renamed JBoss recently, and even the rename we did the open source way. We asked the community to come up with a number of names, shortlisted the top three, and also asked our legal team which ones were viable; some names were definitely not right, but we came up with WildFly. The reason we did that is that we wanted to dissociate the app server from the brand: we have so many projects that when we say JBoss, everyone thinks of the app server. So we renamed it WildFly. We have a number of other projects. A lot of you probably know Netty. Trustin worked on Apache MINA before; when we hired Trustin, we started Netty with him. Now he's working at Line, and he also worked for Twitter. A lot of our engineers are very much sought after by some of the bigger players, like Apple and Twitter, because we work on technology that works at scale. We invested a lot of effort in those projects together with those big players, and it's actually a good thing when they hire Trustin or Norman: we still work together, on the same concerns. As I mentioned before, the open source model is about solving common problems; we share the same problems. That's exactly what happens with Netty: whether Apple invests in it, or Twitter, or Line, or Red Hat, it benefits everyone. That's very important. I'll talk a little more later about Undertow; that's another example. We were satisfied with Tomcat, but we wanted more performance. So one of our team members, working out of Australia, not in his garage but probably in his own bedroom, came up with Undertow over Christmas, and it's an extremely good project. In WildFly 9, I think, we replaced Tomcat with it entirely. Actually, we used JBoss Web, which was a fork of the same Tomcat code base, and that's what was replaced with Undertow. Okay. I'll spend a bit of time on some of the other technology we contributed. Sometimes, as part of the spec, you also have to provide some of the security aspects of Java EE. We implemented that with PicketLink. PicketLink became quite popular, and we wanted to provide something that could be used across other runtimes. What we tend to do is look at those capabilities and extract them, and so PicketLink turned into a project called Keycloak, an SSO and identity management service that supports OAuth and other protocols, not only SAML 2 but the newer ones as well. So you'll see that we follow the trends very closely, but we also share the same problems, and the community helps us stay really in touch with what's happening, to the point that Red Hat itself now uses Keycloak for its own SSO across all our infrastructure. And Vert.x: I think I asked someone, are you using Vert.x by any chance? No? I see you smiling, so let me tell you something. We ran the first prototype of one of our key products internally on Vert.x, and it was about two times faster than our own implementation, which had been developed over five years. Vert.x is built on top of Netty, and it's very good. If you don't know Vert.x, it's an event-loop mechanism, so asynchronous.
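To make the event-loop point concrete, here is a minimal sketch of a Vert.x 3 verticle serving HTTP without blocking; the port and message are arbitrary, not from the talk.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class HelloVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // The request handler runs on the event loop, so it must never block
        vertx.createHttpServer()
             .requestHandler(req -> req.response()
                 .putHeader("content-type", "text/plain")
                 .end("Hello from Vert.x"))
             .listen(8080);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new HelloVerticle());
    }
}
```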
I think we have Clément Escoffier from the Vert.x team coming soon; he'll probably give a talk in June, so that'll be great if we can assemble all the people interested in that technology stack. Vert.x is at its third iteration now, so it's even better today. I'm sure you'll be happy to join that; we'll keep in touch with you when it happens. The last one, this funny icon there: if someone knows it, I have something to give away. Who knows this one? You can't exactly do a Google search on a picture, so I'll give you five seconds. It looks like an owl; it's actually a hawk, and it's called Hawkular. We do the icons openly too, by the way: you can go to jboss.org/design and download them. So, Hawkular helps you with the same thing: application performance monitoring. Especially now, with everything running on REST, people are interested in business transaction monitoring. What happens when, especially with Vert.x for example, you have an asynchronous call into another asynchronous call: how do you trace those business transactions? So we work with OpenTracing, and also with Zipkin; we support Zipkin, and we bring all that goodness in there so you can monitor these complex microservices environments. Distributed tracing, distributed testing and debugging is a complex subject, so that's an area where we invest. I mentioned before that we don't just deliver things as a zip. Who is using Docker here? Oh, a lot of people, that's good. So it's a docker pull away. Whether you're interested in writing Java EE applications or reactive applications, you want to add performance monitoring or SSO and identity management, or even an in-memory data grid or distributed caching, you can do that. We put everything on Docker, obviously, and that's probably the fastest way for you to consume it. And on GitHub we have all the quickstarts, everything to get you up and running, for every single project we have. Okay. So, there are a lot of videos and conferences on this. Ten years of JBoss: what happened during those ten years? I want to map it to what happened with Java EE, the enterprise Java world, because a lot of what we do, especially on the JBoss application server side, has been driven by the enterprise Java specifications; we are compliant with them, so we have to follow the spec and certify. But at the same time, we have also reacted, and been proactive, on some of the trends, and I'll touch on that. First, in 1999, we started with a JBoss that was based on JMX. I think everyone knows JMX: very, very easy to configure, and you have all the tools and instrumentation available to control your app server. That was one of the good things. It was still small and very easy to use, and it was popular because instead of a one-gigabyte download, people could go online, download and unzip it without paying anything. That's how it got popular. Then we moved to JBoss AS 5 and 6, and we replaced the JMX microkernel: we wanted modularity, we wanted to give people the option to assemble the app server they want, and we did that with the POJO-based JBoss Microcontainer. That took a little bit of time.
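Since the original JBoss was built around JMX, here is a minimal, self-contained sketch of the instrumentation model it relied on: a standard MBean registered on the platform MBean server. The PoolConfig bean and its attribute are hypothetical, just to show the mechanism that consoles like JConsole build on.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxDemo {
    // Standard MBean convention: the interface name is the class name plus "MBean"
    public interface PoolConfigMBean {
        int getMaxConnections();
        void setMaxConnections(int max);
    }

    public static class PoolConfig implements PoolConfigMBean {
        private volatile int maxConnections = 20;
        public int getMaxConnections() { return maxConnections; }
        public void setMaxConnections(int max) { this.maxConnections = max; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Once registered, the attribute can be read and changed live from a JMX console
        server.registerMBean(new PoolConfig(), new ObjectName("com.example:type=PoolConfig"));
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so a console can attach
    }
}
```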
The reason why (someone was asking me about JBoss AS 5; I was a customer then, not yet an employee) JBoss AS 5 took a long time to come is that it coincided with the acquisition. That's when we introduced at Red Hat what we call the productization process. That's why today we can reproduce any build. We can give customers and developers, anyone who uses WildFly or EAP, what we call a productized version of the upstream code. That's where we bring API stability and the 50-plus or 100-plus certifications. So when people take a release family of, say, EAP 6, and they move to 6.4, no regression will be introduced. We have a release taxonomy of major, minor, and CP, cumulative patch, that allows us, for example, to fix a Heartbleed-style problem: you don't want your entire infrastructure to go down because you fixed something. So this took time, and all the knowledge and expertise that Red Hat had, we brought across to JBoss, to make sure JBoss projects could be hardened and stable for the kind of target users Red Hat had. We are in stock exchanges, military, government, transport; all industry verticals use JBoss today, as they were using RHEL. So we had to be mission-critical already. That's the transformation that happened here: we were no longer just the cool kids delivering quick and fast. We still do that, we still have a very fast iteration process, but it slowed down at that point, and then we accelerated again. With JBoss AS7, the work towards what became WildFly started. We also started to follow, and be in lockstep with, the spec, and to contribute to it; we did a lot of contributions here. We also renamed it to WildFly, and we brought a lot of improvements like Undertow, which, as I mentioned before, is the JBoss Web replacement, the Tomcat replacement. Also, in 2010 we realized that we wanted higher density in the data center, for the operations people, people running deployments like Redmart's. Memory costs money. With the options you had at the time, if you were running a Java application and wanted to scale out with more and more servers, you were going to spend more and more. If you have maybe 10 instances, that's okay, maybe 200 MB each just for your app to start; but when you have a thousand applications running, that starts to matter. So with AS7 we went completely back to the drawing board and made sure we were ready for the next generation of high-density cloud environments. And we had OpenShift starting, so for us it was already a problem we had to solve. Obviously we couldn't put IBM or WebLogic there; it was already quite significant to run. Could we give developers on OpenShift a cartridge where they could run a Java application? It would have cost us a fortune otherwise. So by design we were forced to go back to the drawing board and really think about how to serve those kinds of requirements. With AS7, we reduced the footprint of our app server tenfold. There's a slide on this later, but we did a great job: thread creation, the bootstrap, the startup time. We actually ran competitions to achieve a sub-one-second startup of all the Java EE 6 services.
And I think at one of the Japanese JUGs, someone got it starting in 800 milliseconds with the Java EE 6 services. So that was the driver behind it: to allow for high density. We also made lots of changes for better management, and for the ability to patch things very quickly in those environments. All that work has been moving towards supporting cloud-native applications, towards cloud readiness and scalability. We introduced (just checking the time) modularity: I mentioned that the spec for Java modularity was supposed to come earlier, and it will only happen with Java 9, with the Jigsaw project. While waiting for that, we brought it into AS7 with JBoss Modules. So in 2010 we already had the vision to be where Dropwizard, Spring Boot, WildFly Swarm and now MicroProfile are. We had that vision initially: to really reduce the core. We reduced the core to the capabilities that revolve around control and security, so we have the same control plane, same security, same privileges, et cetera, to control and bring services up and down. That's the work we did there, and we kept building on it. We also introduced other changes around messaging, if you use messaging: we worked with Apache ActiveMQ to bring HornetQ into it, which became the Artemis project, a very high-performance JMS implementation. And with the latest JDK's server-side JavaScript capabilities, which we integrated through Undertow, you can now run JavaScript on the server with WildFly. So that was the app server. Now, when I talk about the cloud and a different type of architecture, like microservices architectures, our answer has been WildFly Swarm, and I'll touch on that later. And we actually certified MicroProfile 1.0, which is JAX-RS, CDI and JSON-P. I mentioned that the improvements we made were very much focused on performance. Here's an example, from the Great Indian Developer Summit in India; I don't know if you've been, but we usually attend every year. One reason is that Bangalore is the number one city in the world for use of our technology, and actually six of the top 10 cities are in India, which is fantastic. This was two clusters of four Raspberry Pis, first generation, 256 MB of RAM, running an application that used JMS and stateful EJBs. It was running on batteries, over Wi-Fi, and it was shared with the public: we were asking people to unplug and dismantle things, and the application kept running; I was accessing it from my laptop. It shows that we had finally achieved our goal of increasing density per core. And the Raspberry Pi is a one-core ARM machine, less than 800 MHz, 256 MB of RAM, so really low spec. So, I mentioned Undertow. Who has heard of Undertow here? Oh, quite a lot of people, that's good. Undertow is used quite a lot; even with Spring Boot it's an option, and you can choose between Tomcat and Undertow. And the main reason we built Undertow, why Stuart Douglas, the project lead for Undertow, wrote it, is performance.
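To give a feel for the API, here is a minimal sketch of an Undertow hello-world server; handlers run on non-blocking I/O threads by default (blocking work would be dispatched to a worker thread), and the port and message are arbitrary.

```java
import io.undertow.Undertow;
import io.undertow.util.Headers;

public class HelloUndertow {
    public static void main(String[] args) {
        Undertow server = Undertow.builder()
            .addHttpListener(8080, "localhost")
            // This handler runs on a non-blocking I/O thread
            .setHandler(exchange -> {
                exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                exchange.getResponseSender().send("Hello from Undertow");
            })
            .build();
        server.start();
    }
}
```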
The more we stress-tested the app server for performance, the more quickly we could see where the pain points were, and one of the bottlenecks was Tomcat. There were other bottlenecks of course, but the more we saw, the more we replaced those bits, so we had fewer and fewer bottlenecks, especially around HTTP traffic. I can't remember exactly how old the project is, probably three or four years now. But we could saturate network cards: one million requests per second, zero-byte responses of course, but that's definitely the goal. It's used now in the WildFly application server, and we also give it to customers. We have the concept of bake time: in the open source world, when we elect something, a project that's sufficiently robust, there's a period we call bake time. The good thing with open source is that you have that notion of feedback, which is very important. The project lead assesses that feedback, and he's the only one who will say, okay, this is ready. That's our QE; the QE is the entire world. We have thousands, actually millions, of downloads. The community is not just the people who write code; it's the people who download it and give us feedback. So that's very important, and that's why we promoted Undertow to be used within our products: the stability, the performance, the maturity and the bake time. And there's a lot of goodness in it. There's HTTP upgrade support with multiplexing, so you can run multiple protocols through the same entry point. You can handle both blocking and non-blocking use cases; some applications actually do blocking, and I don't know if you use it for that, but definitely both work. You also get, of course, Servlet 3.1 container support, load balancing and reverse proxying, including with HTTP/2 and HTTPS. And there's more: ALPN, an extension to TLS that HTTP/2 over SSL requires, and OpenSSL support for much higher performance gains for encrypted communications. About the way we work: we have one project lead, and typically we share the load. There are probably 40 people or so working purely on upstream WildFly; you can see all the contributors on the WildFly website, some from Red Hat, some not. To get an exact size, you really have to go online. I don't know exactly for Undertow, but you can just go on the Undertow page: sometimes the team is two or three people, sometimes it's hundreds, depending on the project. And the team is not just the upstream development. There are also the people on the QE side, who write the test suites, and there's the productization part: how do I make Undertow run within this container and these containers, on this operating system, on those architectures? So the team is not just one or two people, or 10 or 20; it's a much larger community plus internal people at Red Hat. There's a whole chain: for a project like Undertow, between the time you do a pull request or a build and the time it's out, there's a lot of process involved. We do have a CI/CD infrastructure. You can see it online, and you can go and participate.
Everything around WildFly is actually public, I believe: you can see the builds running, and you can even start them yourself. We use CloudBees, and we also use our own infrastructure. So we work with a lot of people; it depends on whether we own the project or not, but that's usually how it works. We've also introduced a lot of other things. For example, I mentioned Heartbleed: we have something called the Victims database, which contains the vulnerability signatures, so you can do vulnerability detection, and we have a Maven plugin for that, for example. When you do a pull request, that generates builds; you've probably seen that in a CI/CD talk, and it's the same principle. We have continuous integration and continuous delivery, totally automated, and as we find bugs in testing, we bring the tests upstream, obviously. We have different checkpoints, different exit criteria, as we promote a build from one phase to the next. Okay. And to the point where, I don't know if you saw the Red Hat Summit, we're actually going to give all of that to you now. You'll be able to create a project (we've already started to do that with a starter), whether it's a Spring Boot or a Vert.x project, and it'll be in your cloud environment. You'll be able to edit it with Eclipse Che, for example, or in your own editor by doing a git clone and pushing your changes later. That can then trigger different CI/CD pipelines: you take your jobs, do your work, commit, it triggers, and the build gets promoted. That whole chain is really our focus today: productivity, bringing all those first-class projects into this environment. Okay. So, I mentioned WildFly Swarm. When we started on modularity in 2010, we saw that lots of people use only one or two parts. More and more, it's really REST, persistence, messaging, maybe transactions and a few other things. Someone answers: "We use REST, more than REST actually, and JPA." So what APIs do you use typically? Do you use REST? Persistence, JPA? And what else, do you use messaging also? "We're not using JPA. Depending on the application, sometimes we use Morphia and MongoDB directly. Some other systems are using Postgres, but I'm not sure how." Through Hibernate, or through what? "I don't know, I'm not sure." Okay, maybe I'll rephrase my question. Typically you expose a REST endpoint and you persist, and the use cases often go little beyond that. Of course there's some logic; maybe you have some business rules, or a rules engine that makes decisions based on the data you receive, but typically you use very little of the platform. So that's where we are, and when we started WildFly Swarm, that was the notion; that was the driver behind it. Also to create an executable jar, similar to what Spring Boot and Dropwizard do. Our stuff was embeddable, and through Arquillian we already ran tests that created those jars and let you test them like JUnit; that's what Arquillian was created for. But that's how we started it.
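For those who haven't seen Arquillian: a minimal sketch of an in-container test, where ShrinkWrap assembles a micro-deployment on the fly and the test methods run against it like plain JUnit. The Greeter bean is hypothetical, standing in for your own code, and a container adapter (for example, embedded WildFly) is assumed on the test classpath.

```java
import javax.inject.Inject;
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

// Hypothetical CDI bean under test
class Greeter {
    String greet(String name) { return "Hello, " + name + "!"; }
}

@RunWith(Arquillian.class)
public class GreeterTest {

    @Deployment
    public static JavaArchive createDeployment() {
        // Build the archive to deploy: just the bean plus an empty beans.xml to enable CDI
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(Greeter.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    Greeter greeter;

    @Test
    public void shouldGreet() {
        Assert.assertEquals("Hello, JUG!", greeter.greet("JUG"));
    }
}
```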
Now, the direction we've been taking has also influenced the MicroProfile implementation. We've been working with IBM and Tomitribe to evolve it into a cooperative group, and potentially a specification and a standard. Okay, I keep talking, so maybe a little water. Do you have any questions so far? So that is the next step, as it says here on the slide: the evolution of enterprise Java. Lots of people have had a stab at this, because, as I mentioned before, you don't need the whole spec. That's why I was asking what you use exactly: we realized that the profile of the applications people deploy is very, very small. They use a very limited number of APIs. You don't need the whole server; you just need a set of capabilities. How do we address that? That's what drove the discussion around the creation of MicroProfile. It happened through an open collaboration, with a number of players listed here: Payara, which is based on GlassFish; Red Hat with WildFly Swarm; the London Java Community; Hammock, a project along very similar lines that implements MicroProfile; IBM; and Tomitribe. We sat down together, and I think Mark Little announced it last year during the Red Hat Summit, so you'll find more details there. Essentially, it's our answer, those people's answer, to how to bring enterprise Java into those environments: cloud-native applications, hybrid clouds, higher density, smaller sets of APIs. The first set of APIs delivered in release 1.0 is JAX-RS for REST, CDI, and JSON-P. That was in September 2016, so it's already there, and with WildFly Swarm we already support it. It follows a principle similar to open source generally, which is a great vector for standardization: there's no point creating a standard first and then going off to implement it. Open source is a great way to validate, like the bake time we use for our runtimes; it's a very similar principle for building consensus and then standardizing, and when it's done in an open, collaborative manner, it's much, much more efficient. A question from the audience: "I think there is a risk that, as we saw with JPA, once you have a specification and several implementations, it's harder to evolve, right? Because you have to wait for the application servers to implement the next version of the specification. Is that going to be the case for MicroProfile as well? Is it going to be frozen for, like, two years?" No, that's exactly the problem we want to address. That's why there's no intent to standardize right now: to allow that velocity, that release cadence where we capture as much feedback as possible. "Will there be only one implementation for now?" There is a roadmap, actually. I don't have the details here, but there is definitely a roadmap with releases over the next year or two. And what we want is for people to contribute and collaborate. You can go directly to microprofile.io; it's still formative, obviously, it has only just started, and you can join some of the MicroProfile forums today.
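To show how small that 1.0 surface is, here is a minimal sketch of a service written against just the MicroProfile 1.0 APIs, JAX-RS and CDI (JSON-P would come in for JSON payloads); the class names and paths are illustrative, not from the talk.

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Application;
import javax.ws.rs.core.MediaType;

// CDI bean providing the business logic
@ApplicationScoped
class GreetingService {
    String message() { return "Hello from MicroProfile"; }
}

// JAX-RS resource exposing it over REST
@Path("/hello")
public class HelloResource {
    @Inject
    GreetingService service;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return service.message();
    }
}

// Activates JAX-RS under /api
@ApplicationPath("/api")
class RestApplication extends Application { }
```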
You can see the discussions live between our teams, IBM, the London group, the Brazilian groups, and the Hammock project as well. And you can already run examples: implementations are already available. The idea is to be vendor-neutral, and that's the key thing; that will prevent exactly what you just described. It's actually an Eclipse Foundation project, and that is another important step, because the Eclipse Foundation is based on meritocracy (excuse my English) and ensures vendor neutrality. That's very important: meritocracy means the leadership can change over time, based on your investment and your commitment. It's also a great legal and technical infrastructure for such a project to live in, and it accepts the Apache license, so it's very friendly; it removes all the blockers you mentioned before. So it's the right environment, and this has been done collaboratively: those discussions happen among the members, precisely to encourage exactly that. Okay, that's me for now.