Welcome everyone again here to the Eclipse Summit India. It's nice to see a lot of familiar faces again; I've been here at the Eclipse Days the last two years. My name is Daniel Megert; people know me as Dani. As Mike already said, I've been working on Eclipse since day one, which means even before it was open sourced, when it started inside IBM. I have had different roles all around the Eclipse platform and JDT leadership: I'm leading the platform, I'm leading JDT, and I'm also on the Eclipse PMC. As such, I represent the top-level project in the Architecture Council and the Planning Council. And I'm very honored that the committers actually voted me in as one of their representatives on the Eclipse Foundation Board of Directors. That happened this year, and I'm very proud of it. The other role I have is leading the overall Eclipse desktop and SDK team in IBM. Most of the people from my team are here in Bangalore, which is why it's always nice to come here and visit my folks. So why am I doing this talk? Over the past years, when I was speaking at conferences about technical stuff like Java 8, people approached me and said they are new to Eclipse and would like some background, some history: how did it all start? So I said, okay, let's do something like that, and I've been giving this talk for the past two years at several conferences. So this talk is really for you guys. If you have any questions during the talk, just come and ask; you don't have to wait until the end. What we will do here is first hear a little bit about the history of Eclipse, how it started, and then in the second part I'd like to show you why the project has been so successful over all these years. So at the beginning: how did we start? And with "we", it's really IBM. At the beginning, in the late 1990s, there was something that you all know: .NET. Microsoft had started working on that.
And they had something ready in 2000: that was .NET 1.0. People thought back then that the name Eclipse was chosen to compete against Sun, which is now Oracle. But in fact, it was really that IBM wanted something against this .NET framework from Microsoft. The second point is that back then, the landscape of IBM tools and software was really bad. Every tool, every piece of software was its own silo; they couldn't talk to each other, they could not be extended, and there was a lot of code that was complicated. So one goal for IBM was to have a platform they could actually build all their tools upon. And third, and not least important: IBM works closely with a lot of software vendors, and they also need to have a business. For them it was very hard back then to extend the existing IBM tools, because there were no extension points; it was really hard. So one goal was to give those software vendors, and the community, a way to extend the IBM tools and provide services for the community itself. So when we started, everything started inside IBM. We didn't set out to do open source. We started with the idea that this is something IBM-internal and then we make products out of it. We didn't think about open source at the beginning. So everything was closed; we were not allowed to talk to friends and colleagues. Everything was done inside IBM. And, like software development back then, we didn't talk to customers either: we didn't care what they were saying, we just did what we thought was the right thing to do. That's how everything started back in 1998. So here you see that in 1998 we actually started with Eclipse, which had another code name back then; it wasn't called Eclipse at the beginning. And the first thing we did was start with the team building the widget toolkit that you all know as SWT. And why did we do that?
Everyone in the team believed that the right approach is to use the native look and feel, and not repeat the failure of Swing, where everything is emulated on all the platforms. We really believed in that approach, and personally I still think it's the right way to go. I know we now also have theming in Eclipse that allows products to have a theme that looks alike on all platforms, but personally I prefer being able to switch to the dark theme in the OS, and not just in Eclipse. I still think this is the right approach. So after we had SWT, we started to actually develop the platform. And one goal we had was that we need a dedicated client that proves the platform works. That dedicated client is the Java tooling, the JDT that you know today. And in order to make sure that we have these boundaries (that we have a platform, we have JDT, and we also had CVS and PDE support already in 1.0), we separated the teams. The team that did the platform was in Canada, the team that did the JDT Core stuff was in France, and the JDT UI team was in Switzerland. That's how we made sure that we have the right APIs and that there is a certain boundary, which forces you to have the APIs ready when you work that way. Now, after that, the discussion started inside the company: do we want to do that? Do we want to make that open source? And you can imagine there were a lot of things that spoke against it. The first thing is, of course, the lawyers. The lawyers were the first ones who said, no, that's never going to work. Then there was the product and sales team, who said we cannot sell something that we give away for free. This was really a long process that took more than a year. And at the end, in 2000, as I mentioned, the .NET 1.0 beta came out, and that was the trigger that actually convinced the whole management that we have to make it open source.
And in November 2001, I think it was shortly after OOPSLA in Tampa, where we had initial talks about Eclipse, it was open sourced. This was the famous $40 million investment from IBM that was given away. But what you must not forget: you cannot just put something into open source. What IBM had to do was evangelize the whole thing. We had to give keynotes; we already had Eclipse Days back then; we had the Eclipse demo camps; we had code camps where we made sure that people actually use the technology and get to know it. That took a lot of time for the whole team, because the developers had to do all that themselves; there was no team around the developers that would do something like that for us. So, how many of you work in open source? How many get paid for open source? How many like to work in open source? Okay. You have to imagine that back then, in 2001, there were not many people or companies doing open source. So the reaction of the team was a bit different from what you see here in the room. The reaction ranged, let's say, from hesitation to "I don't want to do that, I don't like that." There was also fear that future employers would see their code or see their comments. It took quite a long time until everyone felt comfortable actually doing open source and having interactions with people that are not part of the team. So what are the key lessons of making the thing open source? First of all, the developers learned that it's a good thing. The good thing is you get early feedback on your stuff: you really get people that use your things and give you feedback. The other thing is that the developers and the clients use the same channels. Those who use your stuff use, for example, JIRA or Bugzilla, and the developers use the same tools.
And that made things a lot easier for everyone involved. The other lesson is that the switch is not for free. If you have some software in your company and you think, let's make it open source, you have to be aware of certain things. One example is the bug database that I mentioned before. If you look at your internal bug database, you will find some bad words that you would never want to see in public. You will probably complain about some vendors or about some people. So you have to make sure that the bug database you make public gets cleaned and is ready for everyone to look into. You also have to be aware of cultural differences: one word might be okay in one country but not in another. So if you go open and worldwide, you have to have a clean bug database. The other thing you have to be aware of: if you put your thing into open source, especially these days, you need to have tests. If you have some code and no tests, people will not believe in your software. If the tests fail, they will also not believe in your software. And if you only have 10 tests, they will not believe in it either. So you really have to make sure you have good tests that are clean. Otherwise, products that are built on your technology will also suffer, and that can have sales impacts and such things. So it's really crucial that your tests are there and work. Now, something else that IBM specifically learned: in order to get all the benefits that I mentioned, like cooperation, getting early feedback, and using the same tools, you don't necessarily have to make your stuff open source. You can also do open commercial development, which means you give your clients access to your tools, you share them, and you react to their feedback.
With that, you get all the benefits, but your source code stays with you in your company and you do not allow others to extend or use it. That's another lesson, which paid off for IBM when they made the Jazz platform public for their clients. So now let's move on to the Eclipse way. The Eclipse way is really a software methodology; it's about a software process. It explains the success of the Eclipse team, how the project achieved this success. It's used by the Eclipse team and it's also modified by the Eclipse team. With "the Eclipse team" I mainly mean the Eclipse platform team, but a lot of the things you will see later are also adopted by many projects, especially all the projects that participate in the release train. So why should you learn another methodology or another agile process? You have Kanban, you have Scrum, you have all that stuff. So why should you look at this one? The reason, from my perspective, is that the Eclipse way has worked for over 15 years. For 15 years, we have shipped software on time, with high quality, and we still have the same innovation in it. We don't have as many new features as we had in the earlier releases, but even in Neon we have a lot of new features, and we continue at that pace. And I think that's a big differentiator to other software methodologies, which come and go. We really focus on shipping the software on time with high quality, and with this process, so far we have succeeded. So let's get started. It's really not much; it's five practices, I would say. The first one is milestones. The second one is early and incremental planning, and the focus here is really on incremental planning. The "early" is not done by every sub-project as early as I would like, but it is done at least early enough. The third practice is continuous integration and continuous testing.
That is very important. The fourth practice is the end game. And last, but not least, there is the decompression, which is very important as well for the whole team. So this is how we developed software in the past, and with "we" I mean IBM. At some point you realize you have to start working, and then you look at the calendar and there are only three months left, and then it's too late, and at the end you're done. I really hope that this is the past for you guys as well. If not, then you should think about switching to the Eclipse way. So how can we fix that? The fix for the problem is really to split the whole release into small releases. That's what we call milestones. Each of these milestones is like a miniature release: we do full planning, we execute, which means we implement the stuff, and at the end we do a real test pass and test the features. And it's not just simple splitting: the outcome needs to be usable. It's something that the community can download, use, and give feedback on. That's really important. And these days it's even more important, because all the projects in the release train also participate in that cycle. They have an offset of one or two weeks, and after that they ship their stuff on top of what we deliver at the end of the milestone. And after that, all the packages that you can download are built for every milestone, so that with each milestone you can switch to the latest milestone version. Now I just want to quickly show you what a milestone plan looks like. Here you see that some people do the planning on the wiki, where you can really go to the wiki and see the live plan. Others do it directly in Bugzilla, simply by setting a target milestone.
So what we really learned is that the milestones reduce the stress, because they move the stress from the end of the big release into separate chunks at the end of each milestone. And that is really something good. Here you see how the development of the release 4.6 worked. You see the things that I mentioned before; we will go into them later. But the important thing is we have the seven milestones where we plan, develop and stabilize. And what is also important: we have special milestones. Not every milestone is the same. Milestone 6 is important because we freeze the API. That's there so that adopters and other projects in the Eclipse space and in the release train can be sure nothing changes after that point. Then M7 is another point where we freeze the features. With that, no feature work is allowed, and we use the milestone itself for polishing the features that we already have, and also for performance improvements that are necessary. So those are the special milestones. Now, when we go to early planning, with early planning I really mean iterative planning. What happens there is we let the committers do the plan. It's not that I go there with the PMC and tell them: you have to do this for this year. It's really each sub-project that has its ideas. The committers have their ideas. There are committers that say, okay, I want to work on this feature, and usually the PMC says yes, you can work on that feature for that release; just let us know how you want to do it. It's really bottom-up planning that happens here. So one input is the committers, as I mentioned. The other input is coming from the community. But of course, if someone suggests a very great feature but does not work on it, there is no guarantee that we will do it for them. We look at the input from the community, and if something is really great, we say, okay, yes, that's worth putting energy into, and we'll do it.
And I have to be honest with you guys: some of the committers are paid by companies. And for those committers, the companies of course have some requirements that need to be put into a release, and of course they will work on those features as well. So that's what happens there. Now let me show you what such a sub-project plan can look like. Here you see the plan for JDT UI. Let's scroll a bit down. You see that for the next release, Oxygen, which we are working on now, we really have three top-level items that are important for the team, and only those: Java 9 support, JUnit 5 support, and then the usual maintenance, critical bug fixing and such things. That's how a plan can look. And each of the items has a link to a Bugzilla entry where the community can follow the progress, or the lack of it, depending on how it goes. After the sub-projects have made their plans, the PMC looks over those plans. As I said before, the PMC only intervenes if we think that a feature looks unrealistic, or maybe it doesn't fit into the SDK and should rather be an additional plug-in that can be put on the marketplace. Otherwise the PMC usually says: yes, that's good to go. What the PMC does in addition is decide, with the community, what target platforms we support with this release. That's also part of the PMC's work, figuring that out with the different parties. For example, do we still want to support Solaris? For this release, we dropped Solaris, HP-UX and AIX, so these will no longer be there for Oxygen. That's gone. So that's what the PMC does in this role. Again, very important compared to other projects where you make a plan at the beginning and then it's dead after a week: our plans are iterative.
First of all, all the component plans, the sub-project plans, are updated when you close a Bugzilla entry, so you automatically have an overview of what's going on there. The other thing is that we update the plan quarterly at the PMC level. We look at where we are, we give feedback to the sub-projects when we think they are not on track, we update the target platforms, and we also put the BREEs, the Bundle Required Execution Environments, into the plan at this point. So people see, for example, that when they want to use SWT, they need at least Java 1.7. Only at the end, when we ship, do we mark the plan as final. Until the last day of development, it always says this is a draft plan, and only at the end does it really become final. And one thing I want to mention: as said at the beginning, we adapt the process and we always change it and try to make it better. Until last year, the PMC had a plan that listed a lot of static items, but we didn't have links to the sub-project plans. With Oxygen, we dropped that completely; we just link to the sub-projects, and they have full control over what they do with their plans. Now, let's go to continuous integration. Our build is completely automated. We produce a full build of the SDK on a nightly basis. These builds are meant as a "let's see whether it works" check, and if something doesn't work, we fix it for the next nightly build. Then we have the weekly integration builds, where we expect that things work together. We also expect that people who work on features have them in a state where the whole development team can actually consume the feature and work with it. We want to make sure that the development team itself works on the latest integration build. And then the next level is the milestone build, as I mentioned before.
This is really the build that we want the community to pick up and give us feedback on, so that we can polish a feature, or maybe remove it again if it's not stable enough. This is really important for us. Now, some of you are doing agile practices, I hope, and you may say, okay, that's not continuous build, right? Continuous build means every commit triggers a build. But this is just not possible at the size of the SDK. On the other hand, on the development side we use Git and Gerrit, and people in all sub-projects have to use Gerrit. In order to bring something into the repository, they upload a change to Gerrit, and that triggers a Hudson build of everything. After that, we run the tests for that particular Git repository. So for every change, we really have a build and we run tests, and only if Hudson says yes, that's good, can you submit the code into the actual Git repository. So we do have some sort of continuous integration as it is in the books. As I mentioned before, we always try to do better. We don't want code in a state where you say, yeah, my feature is in there, but it doesn't work yet. We really want the integration build to be usable by all the people in the development team. Some of you might know the term "eat your own dog food," and I think it's a privilege if you work on something that you can actually use yourself. I think that's the best situation you can be in: you suffer from what you do. I think that's really good. So as I said before, we really like it when people use the milestones, but in the beginning, in 1.0 and 2.0, almost nobody used them. We went back to the folks and asked them: why are you not using our milestones? And they said they thought the milestones were just bug fixes; they were not aware that we had features in there.
So I'm not sure whether we invented the New and Noteworthy, but at least we added a New and Noteworthy to each milestone, so that people were aware that we have new features in these milestones. I think we have done that since 2.3 or something, so we have been doing it for a long time. What we also do is, at the end of the release, collate all the New and Noteworthy documents from the individual milestones into a New and Noteworthy for the release. And what we also started to do is a Tips and Tricks document, which every one of you should read. If you do JDT development, there are lots of good things in there that you can still learn, I guess, so take a look at it. And the point of this slide is really: we are the community. If you use the JDT tooling or any other tooling in Eclipse, make sure you give feedback. It doesn't help if you say "it's not working and I'm going to IntelliJ" or "it's just crap"; just give feedback. Even if it's just a bug that you file, we look at it and we try to fix it, if it's really a bug and not a user error. So that's my point here: we are the community, and everyone should help and make things better, if you indeed want them to be better. Now, testing. I cannot stress this enough: testing is crucial. I'm not saying you have to write tests before you write the code. I know some people do that, and I know there are applications where this really makes sense, for example scientific stuff where you know your problem exactly; then you can write the tests up front. But at least after writing the code, you have to make sure that you have test cases. Imagine you go to a new employer and you get a code base of one million lines of code and you have to do some refactorings. If you don't have any tests, I'm sure you will be lost, because you cannot change anything without risking breaking something else somewhere. So our Eclipse SDK by now has hundreds of thousands of tests that we run with each build.
So each nightly build runs these hundreds of thousands of tests. These are correctness tests: they test whether something is working or not. Then we have performance tests. They run scenarios like opening 10 editors and closing them again, then compare the results in a database against other releases, and then you see whether we have a performance issue or not. Another thing that's important is leak tests. For example, we have tests that open an editor, close the editor, and then make sure that the instance counts are still the same and that no editor, or part of an editor, is leaked. If one does leak, the tests fail. That's also something we have, and it is very important. And as I mentioned before, we are a platform. We want people to use us, and we want them to be sure that with the next release their stuff still works. So we are very, very careful about breaking things. Even when we went from 3.0 to 4.0, we had a compatibility layer so that people were not broken. And you can only do that if you have tools that test whether your APIs are breaking or not. So we have API tools that verify that we don't break the APIs, and we have tools that can tell you whether you are using non-API code. I would just like to show you two of these reports. Here we have an API tools verification report. In this report, of course, we have zero compatibility warnings. If, for example, we deleted a method that is API, it would show up here. If the bundle versions are not correct, it would also show up here. And it also shows when one of the components uses stuff that it shouldn't. You see here, for example, that Compare at the top has five warnings. This does not mean a breakage, but it means that Compare is using stuff inside the SDK that it shouldn't use. And here I just randomly picked the PDE JUnit runtime: here we have a report that lists everything which uses internal code.
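The leak tests mentioned above follow a pattern that can be sketched in plain Java with weak references. This is a minimal sketch of the idea only, not the actual Eclipse test framework; the `Editor` class here is a made-up stand-in, not the real Eclipse editor API:

```java
import java.lang.ref.WeakReference;

public class LeakTestSketch {

    // Stand-in for an editor with some state; hypothetical, for illustration.
    static class Editor {
        final byte[] state = new byte[4096];
    }

    // "Open" an editor, keep only a weak reference to it, then "close" it
    // by dropping the strong reference.
    static WeakReference<Editor> openAndClose() {
        Editor editor = new Editor();
        WeakReference<Editor> ref = new WeakReference<>(editor);
        editor = null; // closed: no strong reference remains
        return ref;
    }

    // Returns true if the editor instance was garbage collected,
    // i.e. nothing accidentally kept a reference to it.
    static boolean editorCollected() {
        WeakReference<Editor> ref = openAndClose();
        for (int i = 0; i < 10 && ref.get() != null; i++) {
            System.gc(); // only a hint to the VM, but usually sufficient here
        }
        return ref.get() == null;
    }

    public static void main(String[] args) {
        System.out.println(editorCollected()
                ? "no leak: editor instance was collected"
                : "possible leak: editor instance still reachable");
    }
}
```

A real leak test would fail when the weak reference is still reachable after closing, which is exactly the "instance counts are still the same" check described in the talk.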
In OSGi, you have a manifest where you specify what is exported, what is public, and what is internal. The Eclipse application runs in a non-strict mode; that means even if you refer to internal stuff, it still runs, but it's discouraged. That's why the discouraged-access warnings you see here are issued. And the good thing for you to know: all these reports can be generated either out of the IDE, or there are Ant tasks with which you can produce them. So if you are in a situation where you have clients, you can also create these reports. Or if you want to make sure that you don't use internal stuff, you can use these reports to verify that this is not the case. So, the end game. What is the end game? You cannot have milestones until the end and expect that you have a product. At some point you need to stop doing normal development, adding features, and come into a mode where you alternate test passes and fix passes: test pass, fix pass. We have between three and four such release candidate cycles where we do that, and where we encourage the community to help us with testing and reporting. The goal is also that with each release candidate, we from the PMC put a higher burden on the developers: they have to get approvals from higher and higher levels. First, a committer can approve a fix; at the next level, it has to be the project lead; and at the end, it has to be the PMC that approves any fix. The other idea behind slowing down the development is that if the developers are not allowed to develop anymore, they have time to document things. That's the period where they update the Tips and Tricks, where they write the F1 help, where they have time to put the New and Noteworthy into a shape that is consumable by clients. That's also part of the end game. And why can we do that? We can do that because part of finishing the release is already distributed into the milestones.
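The public-versus-internal split described above lives in each bundle's `MANIFEST.MF`. A hypothetical fragment (bundle and package names are made up) might look roughly like this; the `x-internal:=true` directive is what lets PDE flag discouraged access to an exported-but-internal package:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.mybundle
Bundle-Version: 1.0.0
Export-Package: com.example.mybundle.api,
 com.example.mybundle.internal;x-internal:=true
```

Packages not listed under `Export-Package` at all are invisible to other bundles; packages exported with `x-internal:=true` are resolvable, but referencing them produces exactly the discouraged-access warnings shown in the report.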
We have test and fix passes at the end of each milestone as well; that's why this works pretty well. So here I have a live chart; you can use Bugzilla to generate it. I could demo that if you don't believe me. It's really just a chart generated out of Bugzilla, and the orange part is features or enhancements. It shows M1 to RC4, here on the right side. I show it to you to prove that things really work. You see that after M7, the amount of fixes goes down drastically. Then from RC2 it halves, and toward RC3 it goes down again. So this really works and it really pays off. Now I have a question for you guys: why do you think this one here, M5, is so low? Yes? Yes? Christmas time. M5 comes in January, right after Christmas; that's why it's that low. It's not because the committers were just lazy at that time; it's really Christmas. Now let's go to the decompression. The decompression is not for the sake of the project or the release itself; the decompression is for the sanity of the developers. It's really so that they know: when we ship at the end of June, we have time to breathe. We don't have to immediately start over with the next release. Maybe I have some cool project that I want to look at, some cool idea that I would like to implement, or an idea for the future that is not yet certain. They have time to actually work on that, and that usually happens in July and the beginning of August. Of course, we also have those guys in the team who on the first of July start planning and working on the next release. We will not stop them from doing that; that's completely their own business if they want to do it. But the decompression is really there to give people time to relax and come back. So the conclusion of all this is: it's the team that makes the process work, and the team needs to be able to change the process.
If you take Kanban, for example, and you want to do it with your team but you're not allowed to modify it, then it will either be painful for everyone or it will fail. So you need to have a process, and it doesn't need to be the Eclipse way, but it should be one that the team itself can influence and actually shape. Coming back to that, I think I forgot something on the previous slide: the retrospective. The retrospective is also something that we do, and I forgot it because we didn't do it this year. We usually do it every second year these days, because the team consists of about the same people, and the outcome is often the same if you do it with the same team. But the retrospective is really important to look back at what went well and what went badly. And as I mentioned before, transparency is really important, so the community can also learn from these things. I'd like to quickly show you how this looks. You see here, from Mars, that every team, every sub-project, had to go and collect what went well and what went wrong. Then we came together as a team and discussed the team issues and the cross-team issues. The whole exercise would be useless if we didn't have any outcome from it, so out of the retrospective we got about 10 points that we said we wanted to do better in the next release. I think we did quite well in the new release; this is really helpful and brings a lot. And as I said, you can find all of that on the wiki. If you want to learn what can go wrong, you can look through it, and then you don't have to make the same mistakes yourself; you can learn from it here. Okay, with that I'm done and I'm taking questions. Yes, yes, yes. So, the performance tests are quite heavy, so they only run in the integration builds, but all the 100,000 correctness tests, the leak tests, and the API tools verification tests run every day.
Those are JUnit tests, but we have some frameworks, for example, that allow you to do leak testing and instance counting and such things. Yeah, everything is in Eclipse in the top-level project; I can give you pointers if you want. Any other questions? Right, nobody has a hard one for me? Good. Okay, then thanks for joining, have a nice day, and enjoy the Eclipse Summit.