So I'm Nicolas Frankel. I've been working for about 20 years in IT, and I've held a lot of different technical roles: I started as a developer, and I've been a team lead and an architect. What is quite specific is that most of the time I was a consultant. I don't know what a consultant is in Singapore, but in France it basically means that you are paid by one company, and that company lends you to another company to do some work. So I've seen a lot of different customers, a lot of different contexts, different ways of doing things. In most of those roles I stayed for a couple of years, so I had to take testing seriously, just for regular maintenance purposes.

As Michael mentioned, right now I work for a company called Hazelcast. Yes, we are known as a caching company, but an in-memory data grid is actually a bit larger than that, even though our use case is generally caching. Hazelcast IMDG offers distributed data structures — in general, a map. And yes, in general it's used for caching, but instead of keeping it inside a single JVM, we can do replication or sharding and distribute it all over the network. We also have another product called Hazelcast Jet, which is about stream processing, also in memory; it leverages Hazelcast IMDG to do stream processing. However, my talk today is not about Hazelcast or Hazelcast Jet — it's about integration testing.

So yeah, I wrote a book. Actually, I didn't plan to write a book. As I mentioned, I was interested in testing for maintenance purposes, and if you check the literature, there are a lot of books about unit testing — it seems everybody needs to have written a book about unit testing. The thing is, most of those books more or less say the same things, and the scope of unit testing is pretty clear. You might want to discuss what a unit is — is it a class, is it something else — but in general the debate is pretty much constrained to that. However, when I looked for books on integration testing, because we were doing integration tests, well, I found nothing. And I thought it would be nice if I could write about what I experienced, my failures, and what I learned from them, in a book.

On the Mentimeter, you had a couple of questions about testing: hey, who is doing unit testing and integration testing? In general, those are pretty well known — at least the terms are well known. Mutation testing might also be interesting; if you don't know about it, I have another talk, you can check it on YouTube or wherever. But the concept itself is good: it makes sure that your tests are actually testing something, that you didn't forget the assert or whatever. You also have end-to-end testing, you have performance testing — and even within the realm of performance testing, you have load testing, stress testing... there are a lot of different kinds of tests. Once I stumbled upon a page that listed something like a hundred different kinds of testing. In this talk, however, I only want to cover integration testing, and this is already a very, very wide realm.

Let's check how integration testing is different from unit testing. As I mentioned, with unit testing you test a unit in isolation. We can discuss again what a unit is — in general, it's a class — and we could debate for a long time whether that's relevant or not, but for this talk it will be a class. Integration testing, however, takes it a step further.
You're not testing a unit only; you are testing the collaboration of multiple units. And this small step makes a huge difference, because now you've got a lot of different things involved. If you have been on the internet, if you have been interested in learning about unit testing or integration testing, you'll find two different positions, and sometimes they are, let's say, flaming — people are very aggressive in the way they state them. One stance is: hey, the only thing that counts is unit testing. And the other stance is: the only thing that counts is integration testing.

So first, I like to draw a parallel, and I use the parallel of a prototype car. Unit testing a prototype car would be like testing every nut and bolt of the car separately. Integration testing would be like assembling the car and making sure it works correctly on a test drive. So if you oppose unit testing and integration testing — if you say, hey, only have unit testing — it means that you test each piece, each nut and bolt, and then you say, hey, the car is ready for production, it's good. And nobody in their right mind, I believe, would ever do that. It would be too risky: you have no guarantee that the car would work at all. But on the opposite side, if you say, oh, only integration testing works, this is the only way to go, you should have no unit tests — that means you first assemble the car, you take it on the test drive, it fails, and now it takes ages to understand why. And after, say, two months, you understand it was because of one faulty piece. Again, this is crazy: you've invested so much in testing a failure that could have been caught much, much earlier in the life cycle. So my conclusion is that both are required. Both are complementary. They shouldn't be opposed to each other; they should work in collaboration, and both of them should be done.

Now that this is said: as I mentioned, when you go into the realm of integration testing, you make a baby step, but this baby step opens up a lot of new interesting things. The first concept that arises is what we call the system under test. The system under test exists in the unit-testing realm too — it's the class, or the unit, or whatever — but there it's very well constrained. By going into integration testing, you are assembling some bits and pieces, and now you need to define, for each test, what you want to test; you define that as the system under test. This new concept brings interesting problems, but you can also keep the solutions that you had before. You probably have dependencies of your system under test, and probably there are some inputs on the side. So just keep the things you were doing before: keep dependency injection and keep test doubles. Just as a reminder, the test doubles are dummies, mocks, stubs, spies, and fakes. I guess you are familiar with those concepts; you are probably already using them.

Something I also need to emphasize is that testing is about return on investment. In the realm of unit testing, I say it a lot: you have people advocating for 100% code coverage. If you do mutation testing, you know this is just nonsense, because it's very easy — in every one of your projects, I can achieve 100% code coverage. I just need a robot, a script that creates tests, and I just don't put any assert in them, and I have 100% code coverage. Easy peasy. So this is a bad metric.
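To make that concrete, here is a minimal sketch of the kind of assert-free test that games coverage — the PriceCalculator class and its method are made up for illustration:

```java
import org.testng.annotations.Test;

// Hypothetical production class
class PriceCalculator {
    double computePrice(int quantity, double unitPrice) {
        return quantity * unitPrice;
    }
}

public class PriceCalculatorTest {

    // Executes every line of computePrice(), so coverage tools report it
    // as fully covered — but with no assertion, this test can never fail.
    @Test
    public void computes_price() {
        new PriceCalculator().computePrice(10, 2.0);
    }
}
```

A mutation testing tool would flag this immediately: mutate computePrice() however you like, and the test still passes.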
The second thing is that in some cases you can achieve 100% line coverage and 100% branch coverage and still have missing cases. The easiest example is a boundary: you test above the boundary, you test below the boundary, and you always forget to test the boundary itself. It's very easy to forget. And if you test above and below, perfect, you have 100% line coverage and 100% branch coverage — but you don't have 100% mutation coverage, which in that case would help you catch it.

So if testing is about return on investment, the problem is that the larger the system under test is, the more fragile it becomes. There are a lot of moving pieces that are now part of what you test, so it's easier for it to break for an unexpected reason. It's also less maintainable, and the return on investment gets lower and lower each time. If we admit that testing is about return on investment, you need to organize your tests in a certain way. And here I didn't invent anything: you probably know about the testing pyramid — the bigger the system under test, the fewer tests you have. At the top, in general, you test only the standard paths. Depending on your software, of course, you probably have two or three main scenarios that you want to make sure work. If you are in e-commerce, your main scenario is: you go to the shop, you pick items, you put them in your cart, and you do the checkout. And perhaps you have two different scenarios: one where you are logged in, and one where you can do an anonymous checkout. Even in software as complex as e-commerce, you can have just two scenarios, and both of them just need to work; you don't need an "I can remove items from the cart and add them again" scenario up there. Those should have been tested lower in your testing pyramid.

Integration testing brings new challenges, and I basically focus on these three: integration tests are slower than unit tests; they are more fragile; and the larger the system under test, the harder it is to diagnose failures.

First problem: why are they slow? Well, because you are using infrastructure resources. You might use the file system, you might use a database, you might use something else from the infrastructure. When you are doing unit testing, in general you keep everything in memory, and in-memory is fast. As soon as you go outside this memory boundary, things get slower. And then you use containers. Here, "container" has two meanings. In this talk, I will only talk about containers as in a Spring container, a Tomcat container, or a Java EE container. But if you are already using containers as in Docker containers, that's also another reason why tests might be slow, because they depend on infrastructure resources.

How can we cope with this slowness? Well, there is no real way to make those tests faster. What we can do, however, is separate integration tests from unit tests. The integration tests will still be slow, but at least you will fail fast — and that's very important. It's like in my prototype car scenario: you don't want to assemble the whole car and take it on the test drive only to uncover the fact that there was one faulty bolt. You want to know that the bolt was faulty as soon as possible. So you want to fail fast, and that will speed up testing. How would you separate them? Well, it depends a lot on the build tool.
I hope that none of you uses Ant — but it might be the case, and then I'm sorry for you. You might be using Gradle, and then I'm also sorry for you, for different reasons, because I personally hate Gradle. So in the following I will use Maven as my example, but I guess what I will be telling you would be more or less the same in Gradle.

In Maven, you have the Maven life cycle, and its phases are executed in order: first you have compile, then you have test, and so on. Each phase has plugin goals bound to it; they might be bound by default, or you might bind them yourself. The plugin that executes the tests is called the Maven Surefire plugin. By default, it is bound to the test phase, and by default, if you don't configure it, it will run every test class whose name starts with Test, ends with Test, or ends with TestCase.

The people at the Apache Foundation don't have much imagination, and they are developers, so they are very lazy, like myself: they copy-pasted the Surefire plugin and created something they called the Failsafe plugin. It has different defaults: by default, it runs test classes whose names start with IT, end with IT, or end with ITCase. It gets a bit harder, because there is not only one phase. For unit testing, there is only the test phase; but for integration tests, you have four phases. You have a setup phase, pre-integration-test; you have the real thing, the integration-test phase; you have the teardown, which is post-integration-test; and then you have the verify phase, where you check whether every one of the tests was okay — and fail the build if not. Imagine you didn't have setup and teardown phases: you would have to do all of that inside each one of your tests.

Something that also needs to be mentioned is that the Failsafe plugin needs to be configured and bound explicitly to those phases. It looks like the snippet below. This is probably what you need to do in every one of your POMs — or, of course, if you are an architect, you will probably provide a parent POM containing this kind of thing. Otherwise, if you write IT-suffixed tests, they won't be executed at all, and you will wonder why.
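Here is a minimal sketch of what that POM snippet typically looks like — the version number is just an example:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.22.2</version>
  <executions>
    <execution>
      <goals>
        <!-- integration-test runs the *IT tests; verify fails the build
             afterwards, so post-integration-test teardown still runs -->
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```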
If unit tests are quick — and they need to be — you can run them on each commit. Integration tests, however, really depend on your context; they might be very, very slow, and running them on each commit might not be that great. So, depending on your context, you might think about not running them on each commit but at regular intervals. It might be hourly, it might be daily, it might be twice per hour — it depends. But you need to take into account that the feedback will probably take more time.

Second problem — so the first problem was that they take more time — the second problem is that they are fragile, because they are using dependencies that are outside the scope of your software, outside the scope of memory. There can be many of them, but the main issue is that they are external. Just to mention a couple: the file system — that one is very mundane, but you are probably using the file system. You might be using time. You are probably using a database or a data store of some sort. These days you are probably also using web services, whether REST or SOAP, mail servers, FTP servers, message queues, Kafka, whatever. And that makes the tests brittle. So let's look at a couple of examples of how we can cope with this fragility.

The first one is time and the file system. For time: I once had to create a batch job that had to run every Monday at — I don't remember the exact time. And I designed it in a way that I couldn't test, because I created the clock itself inside the class. That was a very bad idea on my part. So the first thing you need to do is: don't do that. Don't create something time-dependent inside your class; use dependency injection to inject it, and then it becomes testable. Likewise — and you are probably already doing this — don't use the old File API, but use Path, because Path is an abstraction, whereas File is a concrete class in the JDK.

Regarding databases, or data stores in general: testing the service layer is quite easy because, as I mentioned, you can reuse the same tools as before. You use mocks — you mock the repository and you can check the service layer. But if you want to check the repository layer, would you mock the database? It wouldn't be very relevant: you'd say, hey, I want to test that when I execute this query I get this result, and then in your mock you say, hey, when I receive this query, I return this result — it's pointless. So in that case, you might use fakes. Fakes are just like the regular dependency, but in general they are not production-ready. They are not made for production; they might not provide clustering or whatever. They just do the job, but no more. Or, if you are already in the realm of Docker, there is a nice project called Testcontainers that allows you to create those dependencies on the fly during your tests.

But let's forget about Testcontainers, because not everybody is containerized. I will use the example of an Oracle database. Imagine we need to put some data in the database and check that the SELECT query we create is correct. There are several ways to achieve that. The first one is to use an in-memory data source, such as H2, and hope that the gap between H2 and the Oracle database is zero, and be okay with that. Or we could use Oracle Express, which is free, and hope that the gap between Oracle Express and Oracle in production doesn't introduce any issue. Or, if you have a good working relationship with your DBAs, you could add a dedicated remote schema for each developer and for each continuous-integration process. That was once proposed to me at a customer, and it worked pretty nicely. The problem is that the closer you are to the real infrastructure, the more complex the setup. For example, when I had this one-schema-per-developer setup, I had to configure things so that I got the username of the person executing the test and mapped it to the schema we were using. It's nothing very complex, but you still need to do it. At the other end of the scale, we have the H2 in-memory option, and we can more or less reduce the risk: for example, H2 has a dedicated Oracle compatibility mode. It's not the real thing, but it's closer to the real thing.
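As an illustration, here is a minimal sketch of creating such an H2 data source in Oracle compatibility mode — the database and class names are made up, and the URL options follow H2's documented settings:

```java
import javax.sql.DataSource;

import org.h2.jdbcx.JdbcDataSource;

public class TestDataSourceFactory {

    public static DataSource oracleLikeDataSource() {
        JdbcDataSource dataSource = new JdbcDataSource();
        // MODE=Oracle enables H2's Oracle compatibility mode;
        // DB_CLOSE_DELAY=-1 keeps the in-memory database alive for the
        // whole JVM instead of dropping it on the last disconnect.
        dataSource.setURL("jdbc:h2:mem:testdb;MODE=Oracle;DB_CLOSE_DELAY=-1");
        return dataSource;
    }
}
```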
Now, web services. Even if you are not using databases, I'm pretty sure you are using web services. So we need to think about how to categorize them. Some organizations might say: if the web service is hosted by our organization, we will use the real one, and if it's hosted outside, we will have another strategy. That's not how I organize it myself. Basically, I categorize them like this: either it's a REST — or RESTful, depending on how you look at it — web service, or it's a SOAP web service. And we will use fakes. Again, you can use Testcontainers if you want — provided, of course, there is an image available to you. If the web service is developed and provided outside your organization, chances are that using Testcontainers is not that great.

So how do we fake RESTful web services? You can use any micro framework you want. Here, for example, I use something called Spark Java; it's very similar to Sinatra in the Ruby world. With very few lines of code, you can create a web server, and then you just serve static JSON files. Afterwards you need to decide how you will map each of your tests to its JSON file, but you can use the name of the class or the name of the method — it's quite easy. Here is how you use Spark Java: in a main method, you set the port and then you start the server. It's based on Java 8 lambdas, so you can say, hey, when I receive a GET request on this path, I return a static string; or, when I get this path with a specific placeholder, I can grab the placeholder and build the response, or serve a file, or whatever. In a few lines of code, you get this fake running.
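A minimal sketch of such a fake, assuming the Spark Java dependency is on the classpath; the paths and payloads are made up for illustration:

```java
import static spark.Spark.get;
import static spark.Spark.port;

public class FakeCustomerService {

    public static void main(String[] args) {
        port(8080); // fixed port that the system under test points to

        // Static canned response for a collection resource
        get("/customers", (request, response) -> {
            response.type("application/json");
            return "[{\"id\":1,\"name\":\"Jane Doe\"}]";
        });

        // Placeholder in the path, echoed back into the response
        get("/customers/:id", (request, response) -> {
            response.type("application/json");
            return "{\"id\":" + request.params(":id") + ",\"name\":\"Jane Doe\"}";
        });
    }
}
```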
Now, about SOAP web services. It would be possible with Spark Java, but it wouldn't be very efficient, because what you would need to serve is huge chunks of XML. So perhaps you know this tool, SoapUI; in general, it is used by regular testers. You have a UI, and you can say: hey, when I receive this request, I want to return this response. It has very good documentation, and it understands a lot about SOAP web services — authentication headers and so on. In general, this is how you would use it: you get the Web Services Description Language (WSDL) file, either online or you create it yourself; then you create a mock service, you craft the response, you run the service, and then you point the dependency to localhost. But the problem is crafting the response. I don't know about you, but I've already seen SOAP responses a megabyte in size; that's not really great. So what you do is: you just send a request to the real service, get the real response, copy-paste it, and perhaps change it a bit, because otherwise it won't work. And the problem with using a graphical user interface is that we want to do all of that in an automated way. It's hard, but we can do it with SoapUI, because there is a JAR for that, and the code looks something like this. There is an API, and this is how it looks. It's not fun, it's not easy, it's a bit fragile — but it works.

As a conclusion about faking web services: I would advise you to use the same rules as for unit testing. The validation needs to be simple, you need to test one thing, and — please, for the love of God or whatever — you need to keep your testing logic simple. I see a lot of engineers, especially junior ones — and I'm guilty of that myself — who want to be smart. We want to be DRY, we don't want to repeat ourselves. So we think: oh, here I have the same line repeated in two tests, I will create a dedicated class for that. And now your tests become a mess. And integration testing can already be a mess; there is a lot of complexity to handle. When I read a test, whether integration or unit, and I need to check ten different files to understand what it does, it's not great. So please don't try to be smart. Think about the person who will be maintaining your tests afterwards. In my opinion, your tests should read like a real-world scenario, and your tests should be as self-contained as possible.

I mentioned that integration testing is about the collaboration between multiple classes, and that we probably have dependencies. But what did we forget up to this point? Well, we forgot the most important dependency — and I assume you are using it in one form or another — what we used to call the container. The term is loaded now, because "container" also refers to Docker containers, or OCI containers if you want to be pedantic. But in general, you are using Spring or Java EE, and that is the first level of container you are using. So let's check. I will have a big part on Spring, because I'm quite a Spring fan, and a small part about Java EE, which can be skipped depending on the audience — if none of you uses Java EE, that's fine; also, I'm not an expert there, I just checked how it can be done. Meanwhile, I would like you to write in the chat whether I can skip the Java EE part or not. I could skip the Spring part as well, but that would be awkward, because it's really a big chunk of the presentation.

So, Spring configuration testing. The assembly of the code is made through Spring. If we want to test beans, we will probably mock their dependencies — we have beans that depend on data sources, for example — but how can we make sure that the assembly of some parts of the application is working? The thing is, we must design our configuration — our assembly configuration — to be testable in itself. I don't know what size of projects you have been working on, but in general, at least before microservices, applications were huge, and most of the time we created one huge monolithic configuration file. That is clearly not testable, because it means you cannot pick and choose the beans you want to assemble and test. So you need to think about this testability, and you need to design your configuration and break it down into fragments. Each fragment would be a set of either real beans or fake or mock beans.

I have this example: a standard Spring application with the good old layers of controller, service, and repository, and I just want to check that the service and the repository work together. The repository says: hey, I depend on something called a data source. If you package this configuration together with the real data source, then you are cheated out of a way to test your configuration outside of a JNDI-providing container. So what you need to do is have a main config that says: repository, I need a data source. And then you have two additional configurations: one for testing purposes, where you define a bean of type DataSource, and one for production, with the JNDI lookup. Then, depending on your context, you assemble the main config plus the testing config, or the main config plus the production config. And it doesn't need to be XML only — of course, I hope you are not using XML anymore; perhaps you are, perhaps you have legacy applications, and you can do it with XML — but you can also do it with regular classes. So here I've created two different kinds of beans, and they have the same name.
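A minimal sketch of what those fragments could look like in Java config — all class, bean, and JNDI names here are made up for illustration:

```java
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;
import org.springframework.jdbc.datasource.lookup.JndiDataSourceLookup;

// Hypothetical repository used for illustration
class CustomerRepository {
    private final DataSource dataSource;
    CustomerRepository(DataSource dataSource) { this.dataSource = dataSource; }
}

@Configuration
class MainConfig {
    // The repository only knows it needs *a* DataSource;
    // which one it gets is decided by the fragment assembled with this config.
    @Bean
    CustomerRepository customerRepository(DataSource dataSource) {
        return new CustomerRepository(dataSource);
    }
}

@Configuration
class TestDataSourceConfig {
    // Test fragment: embedded in-memory database
    @Bean
    DataSource dataSource() {
        return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
    }
}

@Configuration
class ProductionDataSourceConfig {
    // Production fragment: JNDI lookup, only works inside a real container
    @Bean
    DataSource dataSource() {
        return new JndiDataSourceLookup().getDataSource("java:comp/env/jdbc/myDataSource");
    }
}
```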
Actually, they don't even need to have the same name, because you are probably injecting by type anyway. One of them does the JNDI data-source lookup; the other creates an in-memory data source using H2. And if you use profiles, it gets even better, because you can say: hey, I activate this profile, or I don't activate this profile. But even without profiles, you can do the assembly: here I have my main fragment and my production database fragment, and I assemble them however I want.

Something I failed at, and that I learned the hard way: you should prevent coupling. If you are using XML, your XML files shouldn't reference each other; there should be one top-level test or production file that references all of them.

Another tip: there is a high chance that in one of your repositories, you forget to close a connection. If you forget to close a connection, it means that one connection object is removed from the pool for a long time. After a while, all the items in the pool have been removed, and you have pool exhaustion. It's not great when that happens in production, because now you need to debug it. And in general, when it happens in production, the first workaround is to increase the size of the pool, which defeats the purpose, because now it becomes even harder to find where the problem comes from. So in development, just set the maximum number of connections in the pool to one. Then, again, you fail fast: if you forget to close a connection and return it to the pool, you will notice immediately.

I mentioned XML, but I hope you are using Java config by now. Again, I don't know your context, it depends a lot — but if you can use Java config, please do. So now, how do we test? If you are in the Spring world, you are probably already using Spring Test. It integrates with a lot of testing frameworks. I am myself a fan of TestNG; I'm not a fan of JUnit. I haven't checked JUnit 5, so perhaps I'm behind the times and perhaps I should have — but the thing is, JUnit 4 was heavily biased toward unit testing. So if you wanted to also do integration testing, it was either very hard, or you needed to bring in TestNG. And then you would use TestNG for integration testing and JUnit for unit testing, which didn't bring any advantage. So I said: okay, just use TestNG for every kind of testing, and it will work.

So this is a sample with TestNG. Here you see: I declare a context configuration, and I use my main config and my test config as configuration classes — those are the fragments I was talking about. And I don't know why I have "sample TestNG" in my title and a SpringJUnit4ClassRunner in my annotation; there is something wrong with that. Let's just pretend that you didn't see it.
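For reference, a minimal sketch of what the corrected version could look like with Spring Test's TestNG support, reusing the hypothetical fragment names from the sketch above:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;
import org.testng.annotations.Test;

import static org.testng.Assert.assertNotNull;

// Assembles the main fragment with the test data-source fragment
@ContextConfiguration(classes = { MainConfig.class, TestDataSourceConfig.class })
public class CustomerRepositoryIT extends AbstractTestNGSpringContextTests {

    @Autowired
    private CustomerRepository repository;

    @Test
    public void context_wires_repository_against_in_memory_database() {
        assertNotNull(repository);
    }
}
```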
At one point or another, I will probably want to test with the database. And if you are working in Singapore, you are probably working in the banking industry — well, that's what I assume, but perhaps that's because I'm far away from Singapore. If you are working in the banking industry, there is a high chance you have transactions. Just as a reminder: transactions are not part of the persistence layer; they are part of the business layer, because only the business understands what needs to be rolled back. So they need to be implemented in the service layer, and this is done very easily by adding the @Transactional annotation — Spring will take care of the magic.

Now, by default, Spring Test rolls back the transaction at the end of each test. The problem is: when a test fails, how would you audit the state? It's not a great idea to roll back transactions. So I would advise you to use the @Commit annotation instead, meaning that every time you do something, it gets committed; and if at any point you still want to roll back, you can do it on a per-method level. That means that at the beginning of a test you should remove all the data, and at the end you leave it there — instead of the reverse, which is what most people do: create the data at the beginning and remove it at the end. This is just the opposite way around. I've found multiple times that if you don't have the data in the database when a test fails, it's very, very hard to understand what happened.

End-to-end testing: if you remember the testing pyramid, at the bottom we have a lot of unit tests, and the further we go toward the top, the more we have integration tests with a bigger and bigger system under test. At the top, the system under test is the whole software, including the front end. And the problem when you go that far is that the UI layer is super fragile — not because it's fragile per se, but because it changes a lot. A business analyst or business owner will say: hey, I don't like this button, can you move it to this location? And now your CSS selector is broken and your test fails, but it shouldn't. So I would advise leaving the UI out, or leaving it to the realm of manual testers if you can. The URLs, on the other hand — in general, business owners don't care so much about URLs, so they can be pretty stable. So just test the URLs.

In order to test a URL, you have this thing called MockMvc. A URL is just one step above the controller: you could test the controller, for sure, but the URL is one step above, so it's even more integrated — the system under test is even bigger. And this is the kind of code that you can write: you perform a GET at the root, and you expect that the HTTP status is 200 and that the returned view is "welcome". You can also check that you have the right items in your model, or whatever.
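A minimal sketch of such a test — the WebConfig configuration class is hypothetical, and the "welcome" view is the one from the slide:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.view;

@WebAppConfiguration
@ContextConfiguration(classes = WebConfig.class) // hypothetical MVC configuration
public class WelcomeControllerIT extends AbstractTestNGSpringContextTests {

    @Autowired
    private WebApplicationContext context;

    private MockMvc mockMvc;

    @BeforeMethod
    public void setUp() {
        mockMvc = MockMvcBuilders.webAppContextSetup(context).build();
    }

    @Test
    public void root_url_returns_welcome_view() throws Exception {
        mockMvc.perform(get("/"))                   // GET at the root URL
               .andExpect(status().isOk())          // expect HTTP 200
               .andExpect(view().name("welcome"));  // expect the "welcome" view
    }
}
```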
If you are using Spring Boot — so far I've talked about Spring, but not Spring Boot — you have the @AutoConfigureMockMvc annotation; with it, you can just inject a MockMvc object and you don't need to care any further about it. Spring Boot uses a different annotation, @SpringBootTest, instead of the context configuration one. It also brings a couple of what they call layer tests. Honestly, I don't find them that interesting, but just be aware that they exist and that you can test layers. In general, I prefer to test slices, but if you want to test a layer, that's perfectly fine.

So, Java EE: should I talk about Java EE or not? Is there a chat window I can bring up where people will tell me... People didn't write anything. Great, that's really nice feedback, folks. Not really? I have two. Okay, so let's forget about Java EE. And that brings me to the end of the talk.

The recap for this talk: integration testing introduces unique challenges. The first answer to those challenges is to very clearly separate the unit tests from the integration tests. Remember to use test doubles — you already know them, so reuse them. You should design your software for testing — and I'm not talking about "hey, let's do TDD and everything will work out fine"; the Spring configuration example should highlight what I mean. And remember that the closer you are to reality, the slower it probably is, but also the better the guarantee that it will work. There are always trade-offs, and I believe that's the real problem of integration testing. I see a lot of talks, I read a lot of blog posts, and most people who write or talk are very prescriptive. They say: hey, if you do this, you will be the best; if you don't do this, life will be hell. And what makes me very afraid is that most people who watch or read like that. They want to have rules, they want to have order; they don't want to think. And the problem with integration testing is that you need to think a lot. There are no hard and fast rules. It's a lot about trade-offs; it's a lot about return on investment. So be very careful with that. I tried not to be prescriptive in this talk; I tried to present you with options.

If you're interested in what I told you, you can check my blog — I try to publish weekly blog posts. You can, of course, follow me on Twitter. And if you're really interested, you can get my book on Leanpub. It's not free, but if you're nice to me, I can give you a coupon — and it's not very expensive, especially if you're living in Singapore on a Singaporean salary, probably. And now there is time for some questions, folks. Do you have any questions? Actually, Niko, do we have a question?

Hello. Hello. Yeah, this is Michael speaking. Thanks a lot for the nice talk. For me, when talking about unit testing and integration testing, I have a lot of issues with the vocabulary, because very often we do things that are kind of hybrid — sometimes we just mock one dependency, but it's still a kind of integration test. So I was wondering: for example, the example you showed with an in-memory database, or a fake — how do you call it? Is it still an integration test, or...?

You're correct — we should first name things. In my book, I say unit testing is testing a class in isolation; as soon as you go beyond that, I call it an integration test. That's my vocabulary. We can agree on a different vocabulary, but for sure, before we discuss integration testing, we should use the same words to describe the same reality. So it's not really important what you call it; the important thing is that you know what it is. And of course, if you are using H2 in memory, it will be a very fast integration test, and then you might say: oh, because it's very fast, I make it part of the test phase — and that's perfectly fine. That's really perfectly fine. And thanks for bringing that to my attention; I should probably rephrase it: you should not decouple integration tests from unit tests, you should decouple slow tests from fast tests, because that's really what you want to do. Although — I know JUnit 5 probably brings this now — with TestNG, you can have categories. You can say: hey, this is a slow test, this is a fast test, this is a business-case-one test, this is a business-case-two test.
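A minimal sketch of what such categories can look like with TestNG groups — the group names are made up for illustration:

```java
import org.testng.annotations.Test;

public class CustomerTests {

    // Fast: pure in-memory logic, safe to run on every commit
    @Test(groups = "fast")
    public void computes_discount() {
        // ...
    }

    // Slow: talks to a database, better run at regular intervals
    @Test(groups = "slow")
    public void persists_customer() {
        // ...
    }
}
```

A TestNG suite file, or the Surefire and Failsafe `groups` configuration, can then include or exclude these groups on each run.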
So you could have a first stage of your rocket, so to speak, which is the unit tests; the second stage would be the fast integration tests; and the third stage would be the slow integration tests. Again, as I mentioned, it's really hard to be prescriptive; it depends a lot on your context and what you are doing. But that's a good question. Okay, thanks.

Thanks for the reference, Bill. Yeah, I knew there was something in JUnit 5, but I've been doing too much TestNG, and now I don't really want to do anything else, because TestNG does everything I need. Other questions?

I have one. Do you monitor the performance, actually? Can you integrate performance testing into the integration tests? I think it could save time if you can monitor the output. Since it's slow and it's running anyway, you could try to find some performance bottlenecks thanks to an integration test.

You could. To be honest, I've never done it. And to be really honest, only at one customer did we really, really focus on performance. It was at Nespresso, because the Nespresso eShop serves the whole world minus a couple of countries, so it was very, very important for us to get good performance. But it didn't work like that: we had dedicated campaigns for performance. Now — I'm thinking out loud here, so don't take what I say for the truth — the thing is, this can probably only be done if your scope is small enough. If your scope is big, then you will probably need dedicated campaigns. And if you do performance testing, as I mentioned, you should really be careful, because what is performance testing in your case? Will it be endurance testing? Endurance testing is where you run for days and you check how long your software can run without any maintenance — in that case, you probably won't do it this way. Are you looking for peak performance? You need to define clearly what you mean by performance testing. It might be worthwhile, but then it would probably be for very low-level software, like high-frequency trading, that kind of thing. Again, I've never done it, so this is just what I'm thinking on the fly. Thank you.

Other questions? No questions. So thanks a lot for your attention, folks. I hope you liked this talk. And if you were too shy, or if you have a question that you didn't think about, feel free to ping me on Twitter — my DMs are open. Thanks a lot for the invitation, Michael and team. And I hope to see you sometime, somewhere — especially in Singapore, once this COVID shit is gone. In the meanwhile, take care.

Yeah, so Nicolas, we do Voxxed Days in Singapore — I mean, not this year, but maybe next year or the one after. So it would be nice to have you as well. And Nicolas, I think I'm sharing my screen: this is what I tried to capture from your talk.

Wow, that's amazing. That's amazing. I'm really amazed — I have no words for that. I'm really trying. So it's an experiment. Without knowing entirely what the lecture would be about, I think it's interesting. I didn't manage to catch everything, but... Tell me if I got something wrong. I will — but I will definitely share it. If you send it my way, I will definitely share it and tweet it; ping me, and I will be very, very happy to retweet it.
That's really huge. Thanks a lot for that. There's almost no error, actually. Okay, so ping me first on Twitter, send it to me, and then I will check. Or write an email and we will check. Okay, and then we'll go from there. Thank you. No, thanks to you. Yeah, it was very cool. Thank you everyone, see you next time. Have a good evening, goodbye folks. Sleep well.