Good afternoon. Shall we start? OK, so hi, my name is Avitash, and this session is going to be on the features of Java 9. It will be covered by three people: me, then Manoj will cover the modules, and then it will be continued by Allahbaksh. So let's start the session. We are talking about how Java 9 is a boon to enterprises and how enterprises can use Java 9 for their own benefit. First, a brief timeline of Java's history. The Java beta was released in 1995, and over the years it has grown to cater to the different needs of enterprises. We had the first release in 1996. Then in September 2004 we had JDK 5, then JDK 8, and now we are on the path to JDK 9. The big things that came along this timeline were generics and invokedynamic; Java 8 had lambdas; and now Java 9 has Project Jigsaw, a REPL, and various other features. So what are the key features of Java 9? The major thing everybody is talking about is Project Jigsaw and modularity support. After that, HTTP/2 and WebSockets; then we have the REPL, or JShell — these topics will be covered in different sessions — and the Process API, jlink, multi-release JARs, and other features. I will be talking about HTTP/2 in this session. Just a quick check: how many of you are web developers or have done web development? Oh, cool, OK. So I will cover a brief history of HTTP as well as how Java now uses HTTP/2. We started with HTTP 0.9, the first published recommendation of HTTP. It came in 1991, and it had only a GET method available. So all a client could do was shoot a GET request; it would get a response from the server, and the server would close the connection. There was no concept of headers; there was nothing else HTTP could do at that point in time.
It had only basic support for HTML — no media content, nothing. Then came HTTP 1.0, which is what we use today; it's also called 1.x, and it was the base of modern-day browsing. It was published in 1996. It supported multimedia content — images and video in addition to textual content. It also included the POST and HEAD methods. Headers were included in this version: we had Content-Type, Accept, and User-Agent. It supported basic authentication with username and password. What was the drawback? HTTP 1.0 didn't cater to the need for multiple requests over one connection: once a request was served, the connection would be closed. So what did 1.1 do about this? HTTP 1.1 came with the idea of persistent connections, which we use today: you establish one TCP connection and then you can send multiple requests over it. But it has its own drawbacks — that's why we are now moving to HTTP/2, which is far better than what we have had so far. It was published merely three years after HTTP 1.0, in 1999, and a few more methods were added in HTTP 1.1. The basic thing that came here is that the client could close the connection. Initially the server would just send something and then close the connection; now the power came to the client — until the client sends a connection-close request, the connection stays open. It also came up with the concept of pipelining over a persistent connection, where a pipeline is established between the client and the server and multiple requests can go out back to back without the client waiting for each response.
So if you have a web page that shoots ten requests to the server, how does the client know which response belongs to which request? That was identified by the Content-Length header, which was the demarcation between the various responses. (OK, something has messed up here on the slide.) Here are some quick facts about HTTP/2. Everybody says HTTP 2.0, but really it's not HTTP 2.0, it's HTTP/2 — that's the official name. There was a discussion going on about whether HTTP/2 would drop plain HTTP and be HTTPS only, but that myth has been broken: HTTP/2 supports both HTTP and HTTPS. Most of the vendors, however, have agreed that they will support HTTP/2 only over TLS, that is, HTTPS. So in theory it's not bound to HTTP or HTTPS, but in practice you will find HTTP/2 used mostly over HTTPS. One more quick fact: if you have an existing server that supports HTTP, do you need an additional server to cater to HTTP/2? No. Your server can support both HTTP 1.x and HTTP/2, and it depends on the client: if the client supports HTTP/2, the server will respond accordingly; otherwise the server will respond with the HTTP 1.1 protocol. OK, what are the key features HTTP/2 has brought to the table? It's a binary protocol. It supports streams. It supports multiplexed connections. There is a mechanism called server push — in all other cases we have server pull, where the client requests and the server responds, but here the server can also push data without the client asking for it. And you have header compression. I will go into each topic one by one. So what is the binary protocol HTTP/2 supports? Right now HTTP is textual: everything we send and receive goes as text.
HTTP/2 says the data transfer should happen only in binary. Why? First of all, binary is easy to parse, compress, and encode, and it's easier to transport a binary message on the wire than a textual message. It's also less error prone, because with textual content you have to take care of escape characters, whitespace, \n, \t, and all those things, especially during transfer. Here, it's just ones and zeros on the wire, so you don't have to worry about special characters like \n or \t. So what are streams? HTTP/2 came up with the idea of converting the transfer of content into streams. If you establish a connection from the client to the server, the connection is carried by multiple streams. How does a stream work? Each stream is divided into multiple frames, and there are two main kinds of frames: a headers frame, which contains the metadata, and a data frame, which contains the content. Each stream is assigned a unique ID, and each frame carries the ID of its stream — that's how frames are matched up between the server and the client. To identify whether a stream was initiated by the server or the client, HTTP/2 says that streams initiated by the client get odd-numbered IDs and streams initiated by the server get even-numbered IDs. There are other types of frames too, like RST_STREAM, SETTINGS, and PRIORITY. To give you a quick view, RST_STREAM is the frame used to close a stream.
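As an aside, the frame layout just described is small enough to sketch. Every HTTP/2 frame starts with a 9-byte header: a 24-bit payload length, an 8-bit type, 8 bits of flags, and a 31-bit stream ID (per RFC 7540). A minimal illustrative parser — not a full codec, and the class name is mine — might look like this:

```java
import java.nio.ByteBuffer;

// Sketch of the 9-byte HTTP/2 frame header described above (RFC 7540, section 4.1).
// This is an illustration of the layout, not a complete frame codec.
public class FrameHeader {
    public final int length;    // 24-bit payload length
    public final int type;      // 8-bit frame type (0x0 = DATA, 0x1 = HEADERS, 0x3 = RST_STREAM, ...)
    public final int flags;     // 8-bit flags
    public final int streamId;  // 31-bit stream identifier (reserved bit masked off)

    public FrameHeader(int length, int type, int flags, int streamId) {
        this.length = length; this.type = type; this.flags = flags; this.streamId = streamId;
    }

    public static FrameHeader parse(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        int length = ((buf.get() & 0xFF) << 16) | ((buf.get() & 0xFF) << 8) | (buf.get() & 0xFF);
        int type = buf.get() & 0xFF;
        int flags = buf.get() & 0xFF;
        int streamId = buf.getInt() & 0x7FFFFFFF; // clear the reserved high bit
        return new FrameHeader(length, type, flags, streamId);
    }

    // Client-initiated streams carry odd IDs; server-initiated ones carry even IDs.
    public static boolean isClientInitiated(int streamId) {
        return streamId % 2 == 1;
    }

    public static void main(String[] args) {
        // A HEADERS-like header: 8-byte payload, type 0x0, flags 0x1, stream 3
        byte[] raw = {0x00, 0x00, 0x08, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03};
        FrameHeader h = parse(raw);
        System.out.println(h.length + " " + h.streamId); // prints 8 3
    }
}
```

Because every frame carries its stream ID in this header, frames from different streams can be interleaved on one connection and still be reassembled correctly, which is the basis of the multiplexing discussed next.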
If all the streams are closed, the connection is torn down. So what is a multiplexed connection? In HTTP 1.1, although a client could send multiple requests to a server, at any point in time only one response could be in flight in the pipeline between the server and the client. What this leads to is head-of-line blocking, which is a basic problem of HTTP 1.1. Take this example: you have four different resources being transferred through the pipeline — three of 1 KB and one of 20 MB. Until the 20 MB resource is processed, all the resources queued behind it have to wait for it to be completely sent to the client. That is head-of-line blocking: you cannot really have parallel, simultaneous data transfer going on. Over time, if a page is requested by many clients, the server keeps responding to blocked or dropped requests, so you get a waterfall of blocked requests. How does HTTP/2 solve this? HTTP/2 gives you multiplexing of streams. If you establish a connection between a server and a client, your data transfer can happen in parallel through multiple streams. On top of that, since streams are broken down into frames, frames from different streams can be interleaved during transfer. In the example here, frame two from stream two can be combined with frames one, three, and four, so this becomes a single data transfer between the server and the client. All the header frames might be grouped together and all the data frames grouped together and sent. That is how multiplexing between the server and the client works. Now, what is header compression?
If you look at a basic HTTP request, you will see that a large amount of the data is taken up by headers. You have headers like Content-Type, User-Agent, Content-Length, and all sorts of others, and every request and every response carries them. So there is a lot of data going back and forth just in headers. What HTTP/2 does is take care of these headers through compression. The server and the client maintain a table, and redundant headers — say User-Agent, which is constant for a client across a connection — are encoded and compressed using Huffman encoding. So Accept and User-Agent, as you can see here, are converted into a single compressed header which is then exchanged between the server and the client. To give you an example: an HTTP 1.x message might have, say, 1.5 KB of headers, which would take seven to eight TCP round trips to leave the client; if you compress those headers, it takes one to two round trips. So there is a huge drop in latency — your client can send and receive data much more quickly. So what is server push? This is an intelligent step in HTTP/2. When you access web pages, the server can often know — through caching or other algorithms — that after page one comes page two, or that after resource one, resource two will be requested. So intelligent servers can be built that say resources X and Y are frequently accessed together: when resource X is requested, the server can send resource Y as well. That is what server push provides — the server gives you resource Y without the client even asking for it. This happens through a frame called PUSH_PROMISE.
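The shared-table idea behind this compression can be shown with a toy sketch. This is only an illustration of the principle — real HTTP/2 header compression is HPACK (RFC 7541), with a static table, size limits, eviction, and Huffman coding, none of which appear here, and the class and method names are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the shared header table idea behind HTTP/2 header
// compression: the first time a header travels it goes in full and is added
// to a table kept in sync on both ends; later occurrences shrink to an index.
public class HeaderTable {
    private final List<String> entries = new ArrayList<>();

    // Returns what would conceptually go on the wire for one header.
    public String encode(String name, String value) {
        String literal = name + ": " + value;
        int idx = entries.indexOf(literal);
        if (idx >= 0) {
            return "index=" + idx;  // tiny reference instead of repeating the bytes
        }
        entries.add(literal);       // remember it for subsequent requests
        return literal;             // first occurrence travels in full
    }

    public static void main(String[] args) {
        HeaderTable table = new HeaderTable();
        System.out.println(table.encode("user-agent", "demo/1.0")); // full literal
        System.out.println(table.encode("user-agent", "demo/1.0")); // index=0
    }
}
```

The point of the sketch is just that a header like User-Agent, identical on every request of a connection, costs its full byte length only once.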
This reduces extra handshakes, because as you know TCP makes you do a three-way handshake before any request can be sent. When a PUSH_PROMISE frame is sent to the client, a new stream is established and the frame tells the client: this is data the server is sending you without you having asked for it. There may already be algorithms, as I said, working on making servers intelligent — sending frequently accessed resources to the client before they even ask. So this is how it happens: the client requests a resource, say X; the server sends X along with a push promise for Y, saying, this is data you will likely need along with X — here it is, you don't have to ask for it. OK, so that was the brief history of HTTP. What has HTTP/2 brought into Java 9? There is a major change: a java.net.http package has been added which contains most of the HTTP functionality. Some of the types, for example, are the request, response, and client classes as well as their builders — these are abstract classes, and there are other classes you can use for your client connection. If you are a web developer in Java, you might know that to establish an HTTP connection right now you use the HttpURLConnection class, which is now on the verge of deprecation. Going forward, this new package is where all future HTTP enhancements will go. There are support classes like Http2Frame and HeaderFrame, which are HTTP/2-specific and convert your data into frames before sending it. OK, here is some code — I think it's not completely visible. So how do you make a request in Java 9? You create a GET request like this: you have an HttpRequest builder which creates a new builder for you.
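The flow being described looks roughly like this. One caveat: in the actual JDK 9 builds this API shipped in an incubator package (jdk.incubator.http) and was only finalized as java.net.http in Java 11; the sketch below uses the finalized form, and the helper method names are mine:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

// Sketch of the new HTTP client API (finalized as java.net.http in Java 11;
// incubating as jdk.incubator.http in JDK 9). Helper names are illustrative.
public class Http2Demo {
    // GET is now a builder method, not a string constant.
    public static HttpRequest buildGet(String url) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .GET()
                .build();
    }

    // Synchronous call: blocks until the response body arrives.
    public static String fetch(HttpClient client, HttpRequest request) throws Exception {
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    // Asynchronous call: returns immediately with a CompletableFuture;
    // calling join() on it waits for the response, as the talk describes.
    public static CompletableFuture<String> fetchAsync(HttpClient client, HttpRequest request) {
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);
    }

    public static void main(String[] args) {
        HttpRequest request = buildGet("https://example.com/");
        System.out.println(request.method() + " " + request.uri()); // no network needed
        // To actually send: fetch(HttpClient.newHttpClient(), request)
        // or fetchAsync(...).join() for the async variant.
    }
}
```

The client negotiates HTTP/2 with the server transparently and falls back to HTTP 1.1 when the server doesn't support it, which matches the "one server, both protocols" point made earlier.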
You pass in your URL and then you issue a GET. Initially GET was a string, but now GET is a method. And when you receive a response, you can process it however you like. In earlier versions of Java you didn't have async calls for HTTP — you would spin up separate threads to cater to async requests. But now native support for async has come: here you can see this async GET shoots a request to a particular URL and gets a future object as the response, and you can use that future to get the response back. You call the async send, then you call join to wait for the response to come — your main thread joins there — and then you get the response from it. So native support for async has come into the HTTP package; you can make asynchronous calls right from native Java code. Other than this, I just want to add that HTTP/2 support has been available from libraries like Netty, Jetty, and OkHttp, but now there is native support in Java 9, so you can write this code without any external libraries for HTTP/2. Beyond that, as far as I know, there's nothing else new added for HTTP/2. So yeah, that's all from me. This was the journey from 0.9 to 2. Thank you.

The topic I'm going to talk about today is JDT and Java 9: what features in Java 9 have affected JDT, what the Java 9-ready status of JDT is as of now, what's pending, and what will be up in the next few weeks. I am Manoj; Shashi is not here, and Jay is sitting right in front of me. We three work on the Eclipse JDT team at IBM. The agenda is something like this: I'll be covering the Java 9 features from a JDT perspective.
Primarily we'll be talking about modules, with Milling Project Coin in passing, and I'll give a general introduction to modules — a detailed talk will be given by Allahbaksh later. We'll talk about the changes in JDT for these features and end with a demo. I'll also give references for you to download the bits, and resources if you want to discuss the design or even contribute. We'll conclude with what you can expect in the next few weeks. So let's see what a module is. A module, as per the definition, is a self-describing entity consisting of code and data. Informally, I would say it's something that contains a lot of packages. Every module has a name — I'm sure some of the Oracle guys will not agree; yes, there is an unnamed module, but let's not be bothered about it right now. So let's say every module has a name, and sometimes we want to search for a particular module. For that you use the module path, which is provided to the system. The module path is akin to the classpath in the sense that it's a list of directories, for example, in which you go and look for things. The difference is that on the classpath you look for a particular type; on the module path you look for an entire module. So what triggers the search? A particular module says: I am dependent on some other module — I require another module. There is a dependence relationship that comes into the picture, and that's what triggers the search for the module. After the search, you resolve the modules and you get something called a module graph — we'll come to that. This module graph introduces the concept of readability: I can say that a module reads another module.
So let's say we have a consumer module which says: I am dependent on the producer module. Essentially, this is the module graph that comes out after this phase — a simple graph with two vertices, the consumer vertex and the producer vertex, the dependency shown by the arrow. Readability means the producer is readable by the consumer. It doesn't necessarily mean the types of the producer are visible to the consumer — for that you have something called accessibility. For example, the producer has the freedom to say that some of its packages are exported and some are internal. Whenever there is an accessibility problem, it is the duty of the compiler to flag the error. Then there is implied readability: if the producer is dependent on some other module, the producer can make that module readable by the consumer using a particular keyword. All this information is captured in a file called module-info.java. This is Java 9-specific, and the compiler — in particular the JDT compiler — is directly affected by it. It introduces several new keywords. Take our old friend, the consumer depending on the producer. The consumer module is described by this file, which contains the keyword module — a new keyword — followed by the name of the module, consumer, and in the body it says: I require the producer module. The producer might have two exported packages, prod.ext1 and prod.ext2, and an internal package, prod.intern. So the producer's module-info.java looks like this: again the module keyword, then the producer's name, and then it says: I export prod.ext1 and prod.ext2. For the second one, you can see it says I'm exporting to the consumer — this essentially means that only the consumer module can see the prod.ext2 package. The producer also depends on a couple of other modules, privmod and pubmod.
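The two module descriptors being walked through here would look roughly like this (module and package names as given in the talk; note that the `requires public` of the early Jigsaw builds became `requires transitive` in the final Java 9 release):

```java
// module-info.java of the consumer module: it reads the producer
module consumer {
    requires producer;
}

// module-info.java of the producer module
module producer {
    exports prod.ext1;              // visible to every module that reads producer
    exports prod.ext2 to consumer;  // qualified export: only 'consumer' sees this package
    // prod.intern is not exported, so it stays internal to the producer

    requires privmod;               // plain dependency, not passed on
    requires public pubmod;         // implied readability: whoever requires 'producer'
                                    // can also read 'pubmod'
}
```

Each descriptor lives at the root of its own module's source tree; the two are shown together here only for comparison.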
pubmod is required as public, which means pubmod is also accessible to the consumer module. So that is the theory — what does our compiler do? We have to identify the module by parsing (we have the grammar for that), then figure out the dependencies by going to the module path, and we get a module graph. At this point JDT will flag a lot of errors if there are issues: for readability, for instance, it is an error if a module depends on itself — that is, if there is a cyclic dependency, either direct or indirect. And of course, looking at the export rules, we figure out the accessibility; again, errors will be flagged by JDT. So these are the places where the JDT compiler is affected. What do you do with the module-info.java file? Just like any other Java file, we compile it into a class file, and the module is represented in the class file with new attributes, which hold tables of what is exported and what is required. And where do we see these modules? They are available in source form, as module-info inside a JAR, and of course there is a new format, JIMAGE, which I believe is going to remain internal. Again, JDT is affected in the sense that you need to read these modules through the file-system provider. Oracle provides the jrt-fs file system, which implements java.nio.file and gives you a file-system walker with which you fish out these modules, and internally we populate the lookup system. OK, so this slide is not clear. On the right side I wanted to show the Java model. In Java 8 you see the JARs listed; in Java 9, what is listed is modules. I hope the demo will show it clearly.
So, to summarize from our compiler perspective: we have the grammar changes; we do error reporting as per the JLS at various levels — implied readability and accessibility checks; then of course translating to and reading from the class files; and we have the module path and command-line changes. That was about the Eclipse Java compiler, and the compiler is the base part. In Eclipse JDT we provide other things built over the compiler, like the Java model, the search capabilities, the DOM, content assist, quick fix, et cetera, as you already know. So where does a module fit in terms of the Java model? We had a lot of discussions as to whether we should have multiple modules per project or a single module per project. Finally we decided to go with one module per project: a project itself is an abstraction that contains a lot of packages, so it is similar to a module. If you want to follow the discussion and the issues, there is a bug where the entire discussion was captured, along with the reasons why we came to that decision — we had issues with cyclic dependencies, et cetera. So from the Java model perspective, that's where the module fits in. As for the other JDT features: we had to read the new format, and we have content assist in the module-info.java file. The module-info.java file has keywords that are not relevant anywhere else, so we have that content assist only there. For the DOM part, we don't have any changes as of now; we might provide a DOM for module-info depending on requirements. The command line has changes, and we have a pretty primitive migration assistant, which I'll demo, with which you can start to move your code from Java 8 to Java 9. Then there is something called Milling Project Coin — some five things are listed here.
One of the things I'm going to talk about is a small change that allows effectively final variables to be used as the auto-closeable resources of a try statement. Coming to the demo: I had a video, but somehow the video is not playing, so I'm not sure why it's not showing up. You can, of course, install the Java 9 support now: you can install it from the Marketplace by searching for "Java 9 support (beta)", and you can also install it from the update-site repository — you can get the support from here. Once it's all installed, let's see a project with a Java 8 setting. To get a first introduction to Java 9, you change this to Java 9 and check that the compliance is indeed 1.9 — it is, because it depends on the JRE. Then you create a new source folder with the same name as the source folder you have, and make sure you have the "create module-info.java" setting checked. And here you have your first module-info. It's pretty basic, in the sense that it exports all the packages it finds; you can then prune it — keep the internal stuff internal and export whatever you want. It requires java.base, which is built in. Some basic keyword completion and content assist work here, and some of the basic exports of packages also work here. This is still a work in progress. That's about moving a project over. Now let's say you have a first project which already has a module-info file, and a second project which has a dependency on the first project — given by "requires public first.mod". And let's say we have a third project which has a dependency on the second project.
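As a quick illustration of the Project Coin item mentioned at the start of this section: since Java 9, a final or effectively final local variable can itself be named as the resource of a try statement, instead of having to be declared inside the parentheses. A small self-contained sketch (class names are mine):

```java
// Java 9 Milling Project Coin: an effectively final variable may be used
// directly in the resource list of a try statement.
public class TryDemo {
    static class Resource implements AutoCloseable {
        final StringBuilder log;
        Resource(StringBuilder log) { this.log = log; }
        @Override public void close() { log.append("closed"); }
    }

    public static String useResource() {
        StringBuilder log = new StringBuilder();
        Resource r = new Resource(log);   // effectively final: never reassigned
        try (r) {                         // legal in Java 9; a compile error in Java 8,
            log.append("used;");          // where you had to write
        }                                 // try (Resource r2 = new Resource(log)) { ... }
        return log.toString();            // close() has run by the time we get here
    }

    public static void main(String[] args) {
        System.out.println(useResource()); // prints used;closed
    }
}
```

If `r` were reassigned anywhere after its initialization, it would no longer be effectively final and the compiler (ECJ included) would flag an error on the `try (r)` line, which is exactly the check described in the demo.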
And here you can see that the interface from the first project is usable, because the second module said it requires the first project with the public keyword — so the first project is exposed to the third project as well. OK, I guess that's about it. Oh, and I had one more thing, about the try statement allowing effectively final variables. This is not available in the beta; I'm just taking the latest code. Earlier, we never allowed a local variable declared outside to appear here; now we allow it as long as the variable is final or effectively final, and if it is not, we flag an error. This is a small part of Project Coin. So that's about it. Then there are some errors which might crop up: here you can see that ECJ, the Eclipse Java compiler, flags errors like "cycle exists", where it figures out there are cycles. So that's the support we have currently, and that's about my demo. If you want to contribute, please do install the Java 9 support from this place, take part in the discussions, and file bug reports — file good bug reports, otherwise you might get comments like this. If you don't like to file bugs, please fix the bugs; we are ready to take them up. That's it. Thank you.

I'm the guy who is between you and lunch, so I'll keep it short. A little about myself: I'm a product technical architect at Infosys. I have worked a bit on modularity, in the sense that we used to assess the modularity of software systems. Some work was done on C# and Scala by me — I'm the author — and other related work for Java was done by my colleague Girish, a professor at Purdue, and Shantanu Sarkar. This talk is about the kinds of problems we faced in very large software systems.
We'll define what we call a large software system, and probably some of these problems can be solved by the Java 9 module system — and we might introduce new problems, because any time we fix a problem we actually introduce new ones. But that's the journey. The reason this is a boon to enterprises is that enterprise code is not a day or two days of code — it is kept for years and keeps running, from '95 or '96, from when people started using Java. It gets patches upon patches, different branches for different customers, and all these things create a lot of problems. It's pretty difficult to figure out whether some features are still used or not, for one thing; and second, which feature is used by which module. Unless and until you break your code up properly, it is next to impossible to figure out how things work. And I actually thank Sergey and team for their contribution, because some of the problems we faced back then were due to the fact that we didn't have an IDE that could handle that big a source base — which was not well written, I agree; it was never well written, because of naive developers — but the indexing time and everything else took so much that people never went ahead with it. So, some of the problems a regular enterprise application faces — problems even with Java itself. First, classpath hell: the number of classes in the JARs on the classpath is so huge that people write a long batch file, keep it, and run through that. And because of this classpath hell, different problems crop up. One is slow startup, because your classes have to be loaded into memory and put into the PermGen space and all that sort of thing.
The other thing is security. Many people are aware that security is very important to enterprise applications, and security means different things in different contexts. A person who is just a developer thinks of security differently from a person who is writing a tool, for whom something like the Unsafe class is part of the picture. That is a real problem, because a person who can read a password as a plain string can manipulate it and read it out. So security is different for different people, and that creates issues — in one case for the tool writer. Another security question is: who is accessing my class, and should access be granted by default just because I say public? Public means everyone can read my class; there is nothing that lets me say this public class can't be read by anyone else. Yes, OSGi solves that problem to some extent with internal packages, but otherwise, in a plain Java application, until now a public class means it can be read by everyone. The other thing is maintenance. Large software systems tend to be difficult to maintain. To give you a sense: there would be people just supporting such a system, and they develop special tools and techniques just for fixing those bugs. One of the tools I had created for that was a code search — the one I mentioned in connection with Sergey. We had a big product — we'll talk about that product without naming it — and the problem was that everyone on the team used vi for development. The reason they used vi is that — these are the days of 2007, 2008 — they used to check out the source code from CVS.
And the code base was so large that if you opened it in Eclipse, Eclipse used to take 10 to 15 minutes just to index the whole source tree. That made it very difficult to even get started, and when you invoked content assist or Open Type, it made things worse. This created a lot of problems, because if you want to do an impact analysis — to change something, add a feature, or fix a bug — you have to figure out which classes depend on it, and that dependency analysis becomes quite difficult. What we did was use JDT to index the whole source code, keep it in Lucene/Solr, and read from that. That was a temporary fix, but it's a telling example. And it pointed to a much bigger problem: in a single package, someone had put 2,500 .java files. You can't remove 2,500 Java files in a day, split them out, and repackage them, because you don't know what is going to break later. The build system was mostly plain builds. Now, some of these problems are faced by all regular applications, but I'll take that example.
There was another transaction system, and it contained a class with roughly 15,000 lines of code. I don't know who wrote it, but you found out about it at the last moment, just as you were about to deploy. Naive developers don't write unit test cases; they think it's beneath their dignity. There were 100-plus jars, multiple jars everywhere. If you look at Hadoop, Spark, or any of these open source projects that run in distributed environments, they carry a very large number of jars, and that creates problems. jmod tries to solve that to some extent, but the underlying problem is bigger than that. Native code is another issue. You may have code written in C++ for a variety of reasons; performance is one of them, although with the JIT, by current benchmarks Java is only about three times slower than C++, while development time in C++ is much higher. But the C++ code may already exist, so how do you reuse it? With jmod and the module system you can package that native code along with the module, which solves some of these problems. Otherwise, if you have ever installed a wrapper over OpenGL, or over an OCR library whose name I'm not remembering, you'll know what I mean. If you use JOGL, which is just a Java wrapper over OpenGL, you'll see there is a lot of native code underneath, and the native binaries, 64-bit versus 32-bit and all the version differences, cause a lot of problems.
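The jmod packaging mentioned here can be sketched from the command line; a minimal sketch, assuming a JDK 9 install, with the module name and directory layout purely illustrative:

```shell
# Package compiled classes together with their native libraries into a
# .jmod archive (paths and the module name are hypothetical examples).
jmod create \
    --class-path build/classes \
    --libs build/native/linux-x86_64 \
    com.infy.imaging.jmod

# Inspect what went into the archive:
jmod list com.infy.imaging.jmod
```

Unlike a plain jar, a .jmod can carry native libraries, launchers, and config files side by side with the classes, which is what makes it useful for the JOGL-style wrapper case described above.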
And the biggest problem is that there are too many naive developers in the service industry. People come straight out of college, join, and start writing code; they just want to write code, nothing more, no analysis. There might be a framework or a jar that already does the work very well, but people don't pick it up, for various reasons: maybe the license, maybe the learning curve, maybe the fact that their manager evaluates them on the number of lines of code they write. So there are too many people, and that is one of the biggest problems. Another problem is that there are too many containers: you have a Spring container, you have Tomcat, you have lots of these, and on top of that there is now Docker, which actually makes things much simpler for developers. One thing JDK 9 does very well here is project Jigsaw, which breaks the complete JDK down into multiple modules that can be used separately. I no longer need to ship a complete JRE of 60 MB or more just for some simple work. That makes things much simpler, because one reason I went for Docker, or any container system, was that I wanted a smaller footprint so I could easily transfer my image; if the JDK alone is 60 MB, that creates a real problem for me. With a smaller runtime profile I can take just what I want, package it into my image or Dockerfile, and use that, which is a very good thing. Containers also create another problem, when someone is moving from one version of Java to another.
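The smaller-footprint idea maps onto the jlink tool that ships with JDK 9; a minimal sketch, assuming a JDK 9 install, with the module list and output path illustrative:

```shell
# Build a trimmed runtime image containing only the modules the
# application actually needs, instead of shipping a full JRE.
jlink \
    --add-modules java.base,java.logging \
    --strip-debug --no-header-files --no-man-pages \
    --compress=2 \
    --output /opt/app/runtime

# The image carries its own launcher, so the container needs no JDK:
/opt/app/runtime/bin/java --list-modules
```

The resulting directory is what you would COPY into a Docker image in place of a full JRE.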
For example, if I'm working with the Spring Framework, say Spring 3.2, that works very well on JDK 7. But if I move to a later version of Spring, at some point I lose compatibility with JDK 7, and then I am guaranteed to have to migrate my code to Java 8. And in many cases the code has been running on JDK 7 for a long time and people don't want to touch it; there is a rule in software that if something is running, don't touch it. So getting those things to progress, getting the new version adopted, takes a lot of effort. There are too many frameworks, and you have to check which framework works with what. Maven is just a packaging and build system; it solves some of the dependency problems, such as the Spring Framework depending on a specific version of a logging library, but it doesn't solve the underlying problem. All of this fed the narrative that the JDK was dying and Java was dying, because no visible progress was being made. Thanks to Oracle, which has made a lot of progress recently, people have come back and are waking up again, and thanks also to Google, which adopted Java as the programming language for Android. But if you look at the way things are going, JavaScript is the only language people actually want to learn, because they can write the front end and the back end, and they say Node will scale very well. I have a lot of disagreement with that philosophy.
It's not that I don't support Node, but Avitash can also tell you what kind of problems they faced running a production service on Node. It does work, but it doesn't scale the way people assume when they just drop Node in and expect it to scale. And what about the transaction support and other facilities that some of the containers used to provide? That isn't there yet, or at least you don't know how to build a microservice with transaction support built into it. So some of these problems existed not only for the JDK but for other frameworks and languages as well; people just made a lot of noise about Java not doing things properly and declared Python or JavaScript the language of choice forever. OK, so packaging was one of the stated problems, and the JDK is trying to solve it, so let's talk about that. The other thing was modularity: if modules are properly defined, that solves part of the problem. Manoj talked about named modules. For example, if I have a large Java codebase and I have to migrate it to a newer version, the first thing I will do is look at what it depends on and ask whether I can run the application without changing anything. One way is to just run it as-is, but then you get no advantage from the migration, and you might run into problems later. The other approach is to migrate the application over a period of time to the latest framework or platform, in a way that does not hamper my work but also adds value to the customer.
So, named modules. If someone is writing a new business logic module, he can create a named module easily by defining a module-info file, packaging it as a jar or as a jmod, and distributing it with jlink. But there are a lot of existing classes and jar files that would have to be converted to the module system, and that won't happen in a day or in a month; it will take a long time. That is why the unnamed module was introduced alongside named modules. When you run Java 9 there are two paths: the module path and the class path. Whatever you put on the module path has a module name; we'll see shortly what happens when I put a plain jar there. The class path is the other part: all the jars I had earlier, I just drop onto the class path, and together they become the unnamed module. There is one unnamed module, and it can read every module declared on the module path. So if I write a new jar with a module-info file and put it on the module path, the code still sitting on the class path can read it. The class path jars live in the unnamed module, and everything in the unnamed module is treated as exported. That is needed, because if I convert something, put it on the module path, and the rest of my code can't read it, that would be a horrific problem. The unnamed module exposes everything in it, exactly as the class path always has: everything on the class path can be read.
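A named module like the one described is simply a jar with a module-info.java at its root; a minimal sketch, with the module and package names hypothetical:

```java
// module-info.java at the root of the new business jar.
// On the module path this jar becomes the named module com.infy.accounting;
// dropped on the class path instead, it would simply join the unnamed module.
module com.infy.accounting {
    requires java.sql;               // another named module we read
    exports com.infy.accounting.api; // only this package is visible to readers
}
```

Everything not listed in an exports clause, for example a com.infy.accounting.impl package, stays invisible to other modules even though its classes may be public.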
Conversely, if I put a jar that was built with a module-info onto the class path, it becomes part of the unnamed module. So just by placing the same jar on the module path or the class path, which are two separate things, I can make it behave as a named module or as part of the unnamed module. And a named module can't depend on the unnamed module: you can't write requires for something that sits in a class path jar. For example, my jar can't say requires for a Commons Collections library that is sitting on the class path. Now, that's fine as far as it goes, but some jars are going to evolve over time, and people have written their code with the implementation packages separate from the service packages. What about those? Take a library like Guava, the example given most of the time. I can just put it onto my module path, and that makes it an automatic module: the jar file name itself is taken as the module name. Earlier you saw module names like com.infy.accounting for, say, an accounting module. In that module I can now say requires guava, provided the Guava jar is present on the module path, not the class path; if it is on the class path, you get an error at compile time saying the module can't be found. And every package in an automatic module is by default readable by everyone; it exposes everything in it. Over time, if the owner of the jar, the person contributing it, decides to move to Java 9 and add a module-info, that's perfectly fine.
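Assuming a guava.jar without a module descriptor sits on the module path, the automatic-module naming lets a real module require it; a sketch, with the business module name hypothetical (newer jars can also pin a stable name via the Automatic-Module-Name manifest entry instead of relying on the file name):

```java
// module-info.java of com.infy.accounting, with guava.jar on the
// module path (not the class path). Lacking its own module-info,
// that jar becomes the automatic module "guava", named after the file,
// and every package inside it is exported to everyone.
module com.infy.accounting {
    requires guava;
}
```

If the same guava.jar were left on the class path, this requires clause would fail at compile time, exactly as described above.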
Otherwise, everything in that jar remains visible to everyone, because code referring to it from the class path still has to be able to read it, and that's fine. So if I have to migrate an application to take advantage of Java 9 modularity, automatic modules and the unnamed module give me a path, but as a developer or an architect you will migrate in a step-by-step process. It's not that in one day you can take everything, write a module-info, and decide which packages you expose and which you don't. And in many cases I want something like a qualified export, which means I want to expose some packages only to my friends; I don't want others to see them. Some of my data is private to me, and I don't expose it at all. In other cases I'll say: this one other module may see the package, and I don't want anyone besides that module to see it. That is exactly what a qualified export gives you: I export these packages to a very specific set of modules, and to no one else. This helps a lot in a migration exercise, because if module A is exposed only to modules B, C, and D, then over time I can see exactly what is being used by whom and gradually change the whole module. That makes some things better, but then we have a problem with class loading, and we have a problem with reflection.
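The qualified export described here is a one-line addition to the module descriptor; a sketch with all module and package names hypothetical, mirroring the A/B/C/D example above:

```java
// module-info.java sketching an ordinary versus a qualified export.
module com.infy.a {
    // ordinary export: readable by any module that requires com.infy.a
    exports com.infy.a.api;

    // qualified export: only the listed "friend" modules may read it
    exports com.infy.a.internal to com.infy.b, com.infy.c, com.infy.d;
}
```

During migration this lets you freeze the current consumers of a legacy package in the `to` list and then shrink that list release by release.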
Reflection in Java 9 makes sure that even at runtime you cannot see something that has not been exported by another module. Suppose I have a package, say com.infy.crm, exported from my CRM module, and a type in some package that the com.infy.account module has not exported. I can't simply say, fine, I'll use reflection and access that type anyway from a module it wasn't exported to. The runtime enforces it, so at compile time and at runtime the type system is looked at in the same way. And this brings us back to the service loader, which arrived in Java 6. The ServiceLoader has mostly been used by plugin writers and by people who write containers. What you can do is define a set of implementations: for example, I have an interface IFoo and implementations AFoo, BFoo, and CFoo, and I can keep those implementations in a separate jar and declare that they can be loaded dynamically. To declare how they should be discovered, I put a file under META-INF/services whose file name is the fully qualified interface name, and the implementations can then be loaded dynamically at runtime.
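Both points, runtime-enforced exports and the classic ServiceLoader lookup, can be seen in a few lines of plain Java; this sketch needs JDK 9 or later, and the printed driver list depends on what happens to be on the path:

```java
import java.sql.Driver;
import java.util.ServiceLoader;

public class Java9RuntimeDemo {
    public static void main(String[] args) {
        // Every class knows its module at runtime, and reflection obeys
        // the same export rules the compiler does: java.base does not
        // export jdk.internal.misc unconditionally, so isExported is false.
        Module base = String.class.getModule();
        System.out.println(base.getName());                        // java.base
        System.out.println(base.isExported("jdk.internal.misc"));  // false

        // Classic Java 6-era ServiceLoader lookup: implementations are
        // discovered via META-INF/services files (and, on Java 9, also
        // via `provides ... with` clauses in module descriptors).
        for (Driver d : ServiceLoader.load(Driver.class)) {
            System.out.println("driver: " + d.getClass().getName());
        }
    }
}
```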
This is pretty useful. For example, consider an airport taxi service where multiple companies offer cabs, say Meru, Ola, and Uber, and there is a centralized desk, the official airport aggregator, who says: you all implement my service interface, I'll just drop your jar onto the class path, and it works. That used to be the case. Now, how do we do this in Java 9? Everything will still work fine, but over time you get back into class path hell: I have to scan through all the jars on the whole class path to figure out which jar actually provides the service and its details, which is exactly what I wanted to avoid. In Java 9 a module can declare this instead. For example, java.sql is a module; besides its exports it says uses java.sql.Driver, meaning the driver implementation can come later from any jar. And in another jar, for example the MySQL jar, I say: this implementation of java.sql.Driver is provided by com.mysql's driver class. As Manoj said, once the module-info file is read you have this information, so I know I only have to look in that module to get the implementation class, rather than searching through all the jars on the class path to figure out which implementation I'm looking for. So that is one part of it. But people actually have quite a hard time deciding whether to use OSGi or not. OSGi was solving some of these problems, and there are lots of threads on Stack Overflow and other places where people say: I'm thinking of using OSGi, which is a container and which does a lot of good things
for me. Should I go ahead and use OSGi, or should I wait for Java 9? For that matter, Java 9 and OSGi solve two different problems. There are similarities between them, but OSGi is more dynamic, while project Jigsaw, the Java 9 implementation, is much more static; it solves a different problem. Jigsaw brings things in at the native, JVM level, whereas OSGi brings them in at the framework level, on top of the software system; in the JDK it is an internal implementation. So you will definitely get a lot of advantage from Jigsaw, but OSGi solves some problems that, by design, it was a goal of Jigsaw not to solve. So you can still use OSGi together with Jigsaw. There are still some problems; JEP 261 is working through one of them. For example, by design they have said that a package cannot live in two jar files: if I have a com.infy.account package, I can't put some of its types in jar A and some in jar B, because then you don't know where things come from. Another case is implementations of services tied to the platform modules, things like annotation processing or transaction systems that are part of java.base or the base module set: once those are loaded, you can't override them with something provided by your container. The way they have addressed this is with layers. The boot layer loads everything on the module path, plus the platform modules from the base jmods, and then you can create a new layer, and in this layer you
can say: I want to load this specific jar here, and it has an implementation of my transaction support. If that jar is not loaded, layer one looks at layer zero; otherwise layer one uses its own implementation and ignores what was loaded in layer zero. That solves some of the problems, but it is still being worked through; I saw some of the discussion that is happening, and I think over time it will get settled, though one recent blog post I read called it broken. JEP 261, plus the changes happening around Java EE and OSGi, will change this picture. That was the prepared part of the presentation. The major point is that moving an application over is quite a cumbersome thing, because you don't want the code to break, so you take a stage-by-stage approach. At the same time you do want to go ahead and use some of these things, because there is an advantage: you have a big piece of software, and you could take some part of it and expose it as a microservice for others to use. If you want that kind of approach, there is no other way than to start doing it right now, see what breaks, and over time evolve the whole product in the right direction. Any questions? Yes, as you rightly said, the migration has not been done; we are just preparing some of the things that will make it better for us. And the basic problem is that most of the containers are still nowhere near supporting JDK 9, which is the reason that, in many cases, the migration exercise gets delayed. For example, Spring will only support JDK 9 from 5.0 or 5.1, and if you take a JBoss server or any
other server that most enterprise applications run on, they still don't have support; it is still under development. So you are still dependent on a lot of other moving pieces that are not yet in place. Yes, build tools are another thing; Maven has some support in a beta, I suppose, and the rest of the tooling is still a major gap. And the product we have, it still uses... Any questions? Thank you.