Thank you. For those of you staying, die-hards, you must be desperate to learn something about financial systems and open source and distributed systems. Great. I've been very fortunate in my career in that, having worked for a variety of different organisations, I've had the chance to travel the world. [The remainder of this opening is unintelligible in the recording.]
So Apache Ignite is the open source project available from the Apache Software Foundation, at ignite.apache.org, OK? I'll show you the website a little bit later on; there are some things there that are useful and worth having a look at. GridGain is the company originally behind the development of Ignite, and they open sourced it some years ago through the Apache Software Foundation. So they are the company that developed this technology about nine, ten years ago. And so there is a little bit of a difference between the two, so it's important to understand the difference; the key point is that Ignite is memory-centric. OK, so in terms of the big picture then, this architecturally kind of sums it up, if you like. So if we look across the middle there, we see this memory-centric storage: scale to thousands of nodes and store terabytes of data. And the other thing is that because there is now integration with ZooKeeper, you can literally scale to many, many thousands of nodes if you really want scale to reach that sort of level. And historically then, the caching aspect of it, the reason this technology came about, was to solve two problems, OK? So first, scale. Scale is achieved because it's cluster computing; you just add more resources as you need them. And the second thing, performance. And the performance is achieved because of the ability to cache data in memory and run at memory speeds, OK? So that's really useful. Now what's happened over time, particularly with Ignite, is that more features and capabilities have been added. So if we look in the bottom left-hand corner there, we can see Ignite native persistence. So Ignite can now save data and state. It can be a system of record. No longer do you need to just cache things in memory; if you really want to add persistence, it's very easy to enable it. It can be done through one line of code, for example in your Java programme, or through an XML configuration file which each node kind of picks up. 
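To make that concrete, here is a minimal sketch of what the XML route might look like in Ignite 2.x, where persistence is switched on for the default data region. Treat this as illustrative rather than a complete configuration file:

```xml
<!-- Illustrative Ignite 2.x Spring XML fragment: turn on native
     persistence for the default data region with a single property. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```

Each node that starts with this configuration will write its share of the data to disk as well as keeping it in memory.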
And this is useful in those scenarios, for example, where you're dealing with very large quantities of data. So you may have tens of terabytes of data. Not all of that is going to fit into memory. And so Ignite can page data into memory, OK? First in, first out, or it can use other techniques to evict data that are not being used and read in the things that you need. And therefore this is a very useful feature: the ability to handle larger quantities of data and, again, the ability to really extend the capabilities of the in-memory store. Now there's another benefit as well, of course. If your cluster is running, let's say you have a massive power outage, or, you know, say an asteroid or something hits your data centre. If you're running in memory, then of course everything goes down. You know, you've lost it; state and data are gone. With this persistence capability, because it uses logging and partition files, each node managing a small part of the overall data, in the event that something catastrophic happens it's much, much easier to bring the cluster up again. And this is one of the things that Sberbank needed; it's a major requirement for them because they have very stringent kind of SLAs, which we'll get to a little bit later on. Another thing that Ignite does, if we look in the bottom right-hand corner there: third-party persistence. So this is kind of a useful feature. So a key point, if you like, is that the space Ignite is positioned in, that they're trying to really address, is this kind of no rip and replace. So the idea is that you are probably using systems that you have invested time, money and effort in over a long period of time. Typically these may be relational systems, transactional systems, but it can work with other things as well: HDFS, NoSQL as well. But let's consider the scenario where you're working with transactional database systems: Postgres, MySQL, Oracle, DB2 and so on. 
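The first-in, first-out paging just mentioned can be sketched in a few lines of plain Java. This is a toy illustration of the eviction policy only, not Ignite's actual page memory; the class name `FifoCache` is mine:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy FIFO cache: when capacity is exceeded, the entry that was
// loaded into memory first is the one evicted to make room.
public class FifoCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public FifoCache(int capacity) {
        super(16, 0.75f, false); // false = insertion order, i.e. FIFO
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict once we exceed the memory budget
    }

    public static void main(String[] args) {
        FifoCache<String, Integer> cache = new FifoCache<>(2);
        cache.put("page-a", 1);
        cache.put("page-b", 2);
        cache.put("page-c", 3); // evicts "page-a", the first page loaded
        System.out.println(cache.keySet()); // [page-b, page-c]
    }
}
```

The full data set would live on disk; the cache just bounds what is held in RAM at any moment.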
So in these cases what Ignite can do is take schema information, take data, cache that in memory using the power of the cluster, parallelise operations, and run those operations at memory speeds. So this is really a benefit. And let me give you an example. So earlier this year I was in Brussels and I was presenting at a Java user group, and there was a gentleman who came up to me at the end, and he said that we are a big Postgres user, and he said we like the product very much, it solves many of our business problems, but there are some cases where performance is poor and we're trying to address those types of scenarios. And in this situation they are using Ignite to really offload that processing to the cluster, run those operations at memory speeds, and they are seeing vast improvements and the ability to really do the kind of things that they want to do with that data. Okay, so just above the memory-centric storage then, we see a range of grids, if you like, or features and capabilities. So starting from the left-hand side: SQL. Now the thing is that whether you like it or you hate it, SQL is intergalactic data speak. Okay, there's no escaping that. So for the vast majority of BI tools, it's very easy to plug them in if you need them. The other thing is, of course, skills availability. If you need a good SQL developer, it's very easy to find one. Ignite is a key-value store. There are other great key-value stores out there as well, but again, historically this product has been around for quite a long time and key-value was one of the original kind of strengths and features that they introduced, and the value can be anything. So you can be using simple types, character, integer, floating point, string, whatever, or these can be user-defined complex types. So things like, say, a financial instrument or a healthcare record, and then as deep as you want, as deep as the programming language will support. It does transactions, and it does the two types of transactions. 
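The caching-in-front-of-a-database pattern described above can be sketched like this in plain Java. This is an illustration of the read-through/write-through idea that Ignite automates through its `CacheStore` integration, not the Ignite API itself; the class and field names are mine:

```java
import java.util.HashMap;
import java.util.Map;

// Toy read-through / write-through cache in front of a "database":
// a miss loads the value from the backing store, and a write goes to
// both, so the cache and the back end stay consistent.
public class ReadThroughCache {
    final Map<String, String> database = new HashMap<>(); // stands in for Postgres/MySQL/etc.
    final Map<String, String> cache = new HashMap<>();

    String get(String key) {
        String v = cache.get(key);
        if (v == null) {                  // cache miss: read through to the database
            v = database.get(key);
            if (v != null) cache.put(key, v);
        }
        return v;                         // subsequent reads run at memory speed
    }

    void put(String key, String value) {
        cache.put(key, value);            // write through: both copies updated
        database.put(key, value);
    }

    public static void main(String[] args) {
        ReadThroughCache c = new ReadThroughCache();
        c.database.put("cust:1", "Alice");
        System.out.println(c.get("cust:1"));          // Alice (loaded from the database)
        c.put("cust:2", "Bob");
        System.out.println(c.database.get("cust:2")); // Bob (propagated to the back end)
    }
}
```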
So it does both lock-based ACID transactions and it does the lock-free optimistic transactions. Excuse me. Let me give you an example of why these are useful. So consistency is something that we look for, particularly in database management systems: the system moves from one consistent state to another consistent state. In between, of course, whilst the transaction is running, perhaps some level of inconsistency does occur. But when we commit, either it's successful and we're okay, or there is some error or some problem and we have to roll back to a known state. I'm a great fan of MOOCs, massive open online courses. So earlier this year I was on the website of one of these providers; as you know, there's Coursera, edX, Udemy, Udacity, many, many of them around. And I happened to like a particular course. So I got my credit card out, I put the details in, hit the pay button, and it came back and said there is an error. All right, I thought, the transaction didn't complete, not a problem. A while later I checked my credit card bill and I noticed I'd actually been charged. So I go to the merchant, and they say we haven't received the money, we haven't been paid. I go to the bank and they say you've paid for this. Where's the money? Okay. Now there's an example of a poorly designed system; that should not happen. Okay. If I want to transfer 100 euros from my bank account to your bank account, my bank account needs to be correctly debited and your bank account needs to be correctly credited. So there is a situation, you know, where it's an atomic unit of work. Either all of it happens or none of it happens. So Ignite will guarantee these ACID operations. And the other thing is, one of the things that it does, particularly when working with these third-party persistence capabilities and transaction-based systems, typically relational, but some NoSQL products are transactional as well: Ignite will keep its cache and the back-end system consistent. Okay. 
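The all-or-nothing semantics of the 100-euro transfer can be sketched in plain Java. This is an illustration of the atomicity guarantee only, not Ignite's transaction API (in Ignite itself you would wrap the two updates in a transaction obtained from `ignite.transactions()`); the class name and account names are mine:

```java
import java.util.HashMap;
import java.util.Map;

// Toy atomic transfer: either both the debit and the credit happen,
// or neither does. A failed transfer leaves both balances untouched.
public class AtomicTransfer {
    static void transfer(Map<String, Integer> accounts, String from, String to, int amount) {
        Integer src = accounts.get(from);
        Integer dst = accounts.get(to);
        if (src == null || dst == null || src < amount)
            throw new IllegalStateException("transfer aborted; no account was changed");
        // Both updates are applied together; any failure above "rolls back"
        // by never touching the state at all.
        accounts.put(from, src - amount);
        accounts.put(to, dst + amount);
    }

    public static void main(String[] args) {
        Map<String, Integer> accounts = new HashMap<>();
        accounts.put("mine", 100);
        accounts.put("yours", 0);
        transfer(accounts, "mine", "yours", 100);
        System.out.println(accounts.get("mine") + " " + accounts.get("yours")); // 0 100
    }
}
```

The MOOC payment story is exactly what happens when the debit and credit are not tied together like this.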
You don't have to manage that; it will take care of that for you. Other things: compute and services. Unfortunately, no time to cover these today. Streaming: it will happily connect with streaming technology, Spark Streaming, Flink, you saw the previous speaker talking about that, and there's Kafka, all of these. There are adapters and connectors for all of these technologies. And so one of the benefits that Ignite brings is that it has these data streamers. Okay. So it can take the data, parallelise it amongst the cluster, and then we have the opportunity to run operations in parallel upon this data. And of course the data can be finite, maybe coming in and stopping at some point, or it can be continuous. And the other thing is that, particularly with continuous data, we can do things like complex event processing. So we can say, all right, you know, I'm interested, say, in the last five minutes' worth of data, or I'm interested in the last 100 events, or whatever; these things are supported and perfectly doable. And machine learning as well. So machine learning was really driven by one of the major users of Ignite out there, because previously the problem was that if you wanted to apply some of this capability to the data that you have in your cluster, you had to offload it, ETL, you know, take it out, use third-party tools on it, do some analysis and then you get some results. Now, you know, for small quantities of data, that's doable, it works. Okay, but once you scale up, you've got terabytes, petabytes of data, enormous amounts of, you know, data to manage, it becomes really hard. And therefore the ability to run these operations, machine learning algorithms, in place on the data that you have is a real benefit. Again, we'll touch upon this a little bit later on. Okay, so generally my focus, as someone that works for GridGain, is purely open source. However, from time to time, it's worth just mentioning a few things. 
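The "last 100 events" style of window mentioned above can be sketched in plain Java like this. It is an illustration of count-based windowing only, not Ignite's streaming API; the class name is mine:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy count-based sliding window: keeps only the most recent N events,
// the kind of bound behind "give me the last 100 events" queries.
public class SlidingWindow {
    private final int maxEvents;
    private final Deque<Double> events = new ArrayDeque<>();
    private double sum;

    public SlidingWindow(int maxEvents) {
        this.maxEvents = maxEvents;
    }

    public void add(double value) {
        events.addLast(value);
        sum += value;
        if (events.size() > maxEvents)
            sum -= events.removeFirst(); // the oldest event falls out of the window
    }

    public double average() {
        return events.isEmpty() ? 0.0 : sum / events.size();
    }

    public static void main(String[] args) {
        SlidingWindow w = new SlidingWindow(3);
        for (double v : new double[] {1, 2, 3, 4}) w.add(v);
        System.out.println(w.average()); // 3.0 (only 2, 3, 4 remain in the window)
    }
}
```

A time-based window ("the last five minutes") works the same way, except events are expired by timestamp rather than by count.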
And so if we look here, here is the key difference between the commercial GridGain and the open source Apache Ignite. Okay, so there's a few things. And it's the kind of, you know, Red Hat model. Essentially, the software is given away for free, and you have some additional services around that, you know, professional services, training, bug fixes at enterprise level, plus a few enterprise features. And these are the kind of enterprise features that Sberbank in particular utilise. So if we look at this, things like data snapshots and recovery, okay, so the sort of point-in-time recovery. The ability that, yes, you may take a snapshot of your system, say, on the hour, every hour. But let's say you want to take incremental snapshots as well, you know, every couple of minutes, so that in the event of some failure, you can recover back to some point in time. That's an enterprise feature. Monitoring and management, security and auditing: more sort of stringent requirements than may be available in the open source product. And then data centre replication. So being a large commercial bank, they run this technology across three main data centres around the world. And again, to allow for, you know, acts of God, natural problems, power outages, serious sorts of issues that may take down one or more of these data centres. So it's important for them to have this ability at the sort of commercial scale that they need. Beyond that, though, I mean, the basic open source technology actually does quite a lot as well. So you can start from that if you want, and then if you find there are some commercial features you need, obviously I would suggest you talk to GridGain. Okay, so let's have a look at some of these challenges. Okay. And then if time allows, I've got a little bit under sort of 20 minutes or so to finish this. I'll show you a couple of demos, some simple sort of examples of some of the things that I've talked about. And we'll see how that goes. 
Okay, so Sberbank itself: core banking services at scale. So essentially the situation they are in is that basically they pay some of these larger organisations huge amounts of licensing fees, and they want to move away from that. They also want to redevelop a lot of their services and systems to take account of a lot of the new technologies that are coming out there. Now, we've all got smartphones, okay, and we all do banking on our smartphones. I'm kind of late to it. I mean, I started very late. My wife was a lot quicker than I was, but I found it very, very useful: the ability to transfer money, to pay bills, all of these kinds of capabilities. And of course, if you are a major organisation with millions of customers, you know, and thousands of branches, historically, reinventing and introducing these new services, getting a better idea of what your customers are doing and getting this kind of 360-degree view of the customer, which is really what they're trying to achieve, so that they can target specific services at specific customers and make it more personalised for them, it's hard. And so a lot of what they want to do is to replace the existing technology and bring in a lot of this new technology and things like that. And that's what cluster computing gives them: that capability. Again, for those two reasons that I mentioned earlier, scale and performance. So scale, again: just add more resources as you need them and remove them when you don't need them. And the performance: they utilise the in-memory capability very efficiently and very effectively. But equally, because of the amount of data that they deal with and some of the SLAs they deal with, it's important to have that persistence capability as well, which we touched on a moment ago. Okay, so what does their actual system look like? 
So if you are familiar with this, Cheyenne, one of the biggest sort of supercomputer clusters: essentially, their system is architected and looks a little bit like this. Okay, so this thing, you know, 4,000 nodes, 300 terabytes of RAM, 50 petabytes of disk, and there's a link there which actually goes into a lot more detail describing this particular system. So they've got huge amounts of data, comparable to this. And equally, they're trying to manage a lot of it in RAM, to keep as much of it in memory as possible. But because of some of the reasons I stated a little bit earlier, it is important for recovery and other purposes; the persistence really does aid them in this. Okay, so the first thing then, challenge number one: large memory size. So typically, they are dealing with reasonably large amounts of memory, about a terabyte per server, which in monetary terms these days is pretty affordable. Memory is cheap. The other thing is that Ignite, being a Java-based product, obviously there are issues in terms of garbage collection, okay, on-heap, off-heap, as you know. And therefore, particularly with Ignite, they've rewritten some parts of it so that it utilises its own memory management system and does not rely upon Java's. That would be a problem with large amounts of memory, okay, because things like stop-the-world garbage collection can have a serious impact upon performance. Okay, so on-heap is not an option. Off-heap is the primary storage, and they need something else in addition. Okay, so the ability to use off-heap memory management is really going to assist them and help them in this respect. 
And if we look at the durable memory that Ignite provides: so off-heap removes noticeable GC pauses, okay, one of the key benefits that they get; automatic defragmentation; some of these other features, okay; predictable memory consumption, okay. So these are helpful, okay, and in large deployments and environments, we want to get some sense of how the system is performing, where we can tweak it and where we can tune it, and getting some understanding of how it's behaving is really, really helpful. Now, the other thing is, if you look at some of those points there: so fully transactional write-ahead log and instantaneous restart. So this goes back to that persistence capability that I talked about a little bit earlier on. So in this scenario, Ignite is no longer behaving just as a cache, okay; it behaves as a system of record, as a distributed database system that persists data, okay. And so the write-ahead logs are used for recovery purposes. Each node manages some small part of the overall data, and it's got partition files which keep that data as well. And so, like some other systems, it avoids in-place updates because they are slow, right? So we write these changes to a write-ahead log. Sometime later, we flush these changes to the partition files, shrink the size of the log, and, you know, rotate it, and this is how it handles these scenarios. Store a superset of data on disk. So again, with this native persistence capability, what they're able to do then is that all of the data are stored on disk, okay. Some of the data may be cached in memory. And that's entirely up to Ignite, based upon its strategies in terms of how it reads in data. And again, there's some tuning and configuration that you can do with that. But this durable memory is the idea that you are persisting data now, not just caching it any more. All right. So some useful things there that the bank has found and been able to apply directly to some of the problems that they see. 
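The write-ahead-log mechanism just described can be sketched in plain Java: log first, then update memory, and on restart replay the log. This is an illustration of the recovery idea only, not Ignite's on-disk format; all names are mine:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy write-ahead logging: every update is appended to the log before
// the in-memory state changes, so after a crash the state can be
// rebuilt by replaying the log in order.
public class WalStore {
    static class Entry {
        final String key;
        final String value;
        Entry(String key, String value) { this.key = key; this.value = value; }
    }

    final List<Entry> wal = new ArrayList<>();    // stands in for the log file on disk
    Map<String, String> memory = new HashMap<>(); // in-memory working copy

    void put(String key, String value) {
        wal.add(new Entry(key, value)); // log first...
        memory.put(key, value);         // ...then update memory
    }

    void crash() {
        memory = new HashMap<>();       // power outage: RAM contents are gone
    }

    void recover() {
        for (Entry e : wal)             // replay the surviving log in order
            memory.put(e.key, e.value);
    }

    public static void main(String[] args) {
        WalStore store = new WalStore();
        store.put("balance", "100");
        store.crash();
        store.recover();
        System.out.println(store.memory.get("balance")); // 100
    }
}
```

Checkpointing, flushing changes into partition files so the log can be truncated, is the optimisation on top of this that keeps the log from growing without bound.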
Challenge number two: instantaneous restart. So they have a pretty stringent requirement. Now, with all the vast quantities of data that they have and the size of the cluster that they have, they have this requirement that in the event that things fail, and failures happen, you know, it's the nature of life and we see these problems occurring from time to time: a five-minute SLA, all right. Now with memory-only systems, if we lose everything, let's say through a power outage, our entire cluster goes down, and it's going to take some time to rebuild that data for them. It's going to take time to load all that data in and, whatever processing that we're doing, perform some level of recovery upon that. It's slow. It's going to take time. It's far better to have this capability, you know, with the logging and the partition files that are maintained. Persisting the data means that we can bring that cluster up a lot faster. That's very, very useful. Okay: cannot wait for data loading, need to operate from disk. And therefore, this is a significant sort of benefit that they found. And they are actually one of the driving forces behind this capability that Ignite has added. They specifically requested this. And because they are a large customer and have a huge sort of user base, and in terms of the scale that they operate at, you know, naturally, this is something that eventually came to pass. So we've touched upon this a little bit already. So each node maintains a write-ahead log, okay, and also partition files where we keep some part of the overall database. And if you are familiar with database recovery systems, you know, you've worked with relational systems or other types of database management systems, I mean, it should be fairly obvious to you and reasonably understandable. You know, this is not kind of reinventing things that we don't know about. 
All of these are tried and tested techniques that have been around for a very, very long time, if you've worked with database management systems. There's a lot of detail about this as well. If you want to know the nitty-gritty of how they actually implemented it, there's an entire wiki that goes into the depths and the descriptions of how this is actually supported at a very, very low level. Okay, so if you're interested in the details, I'm happy to help you out and point you to those resources. Okay, challenge number three: huge data model. So one of the things, again, through various legislation, historical requirements, you know, legal requirements, is the fact that they need to keep track of products that they've offered in the past. And therefore, sometimes, you know, things are versioned. They have to keep all these different versions. And with a lot of these sort of older products, if you like, you know, when we are doing this versioning, that inevitably leads to a large number of data types as well that we have to think about. Things may change over time. Things get versioned. We may not use the same types in the new product that we did in the previous product. All of these changes have to be maintained so that at some point in time, if for legal reasons they need to go back in time and show how things were at some point, they can do that. Okay, so that's very, very important. Fast replication and partitioning: again, the idea of cluster computing helping them in terms of achieving these sorts of capabilities. One of the nice things that you get with Ignite, which is a benefit that is also available in other types of distributed systems, is this idea of co-located processing, for example. So data that has some association with other data has a kind of natural affinity, if you like, and we can store that data together. That means that we are bringing the processing to the data. We are storing that data together. 
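The co-location idea rests on a simple mechanism: records that share an affinity key hash to the same partition, and a partition lives on one node. Here is a toy sketch of that mapping; it is illustrative only (Ignite's rendezvous affinity function is more sophisticated), and the names are mine:

```java
// Toy affinity function: records that share an affinity key map to the
// same partition, so a customer record and all of that customer's
// orders end up on the same node and can be processed together.
public class Affinity {
    static int partition(Object affinityKey, int partitions) {
        // floorMod keeps the result in [0, partitions) even for
        // negative hash codes.
        return Math.floorMod(affinityKey.hashCode(), partitions);
    }

    public static void main(String[] args) {
        int parts = 1024;
        // An order uses its customer id as the affinity key, so it lands
        // in the same partition as the customer record itself.
        int customerPart = partition("customer-42", parts);
        int orderPart = partition("customer-42", parts); // order's affinity key
        System.out.println(customerPart == orderPart); // true
    }
}
```

Because a join between a customer and their orders never has to leave the node, there is no cluster-wide shuffle for that query.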
It helps us achieve better performance because we don't have to search across our cluster any more. We know exactly where that data resides. Distributed SQL joins: so Ignite supports SQL, SQL-99. Yes, joins are probably the most expensive operation in a relational database system, but again, the system can keep statistics, it can keep useful information about data sizes, the tables, access patterns, indexes; all of this information is available to it, and therefore it lets a good optimiser make great decisions in terms of what's the best approach to retrieve data. There's nothing to stop you doing a distributed join across an Ignite cluster. Now, if you do it badly, you will bring the system to its knees. That's life, unfortunately. It's one of the things that you get. There's great flexibility and great opportunity on the one hand, but equally, use it with care. There's nothing to stop you doing it. Ignite will not prevent you from doing it, but this is part of the overall architectural decisions that you make, and the types of queries that you run, the access patterns, and again, using these kinds of co-location capabilities. They've chosen wisely in terms of how they use these distributed SQL joins. It can be done, but obviously use it with care and caution. The co-located processing we already talked about is a very useful feature, and again, it helps in terms of boosting performance. There are some demos and things that show Ignite's capabilities in this space. You will get some sense of what sort of performance benefits you can achieve. In terms of the distributed SQL, then, we've talked about SQL-99; I've mentioned DDL and DML feature support. If you're working with the three primary languages that Ignite supports, which are Java, .NET and C++, these are the top-level languages, there is this binary format that you can use to store in one language, for example, and retrieve in another. 
That is something within larger organisations that you often find: different departments, different parts of your company may be working with different programming languages, different frameworks. That happens. Ignite provides some flexibility. There is support for additional things coming down the road. There is a REST-based interface as well, very, very useful. But things like, for example, if you're a data scientist and you're working with things like Python or R or those technologies, then there are thin drivers being developed for those. The next big thing that's being added, certainly as far as the machine learning capabilities and deep learning capabilities go, is TensorFlow support, which is coming in 2.7; the release is almost imminent. Indexes in RAM or on disk, dynamic scaling: so all the capabilities that we would expect of cluster computing in terms of failover and recovery, the ability to scale up and down as we need it, that's all supported. It's the standard stuff. Compute grid. This was one of the things that we saw a little bit earlier on. How this works: if you're familiar with fork-join or MapReduce, it's exactly the same approach here. Divide and conquer. We have something that comes in, some compute task. We break that down. In this case, for example, we've got a cluster of two machines, and it will do it in half the time. I mean, that's, again, with machines of equal capability. But one of the things that Ignite can do, again, is recognise where resources are being underutilised. Where machines have additional power, additional processing capability available, Ignite will be able to use that and be able to perform, you know, operations on those. Okay, so the machine learning. So, Sberbank was one of the drivers behind this again, because to work with enormous amounts of data and then not have the capability to run sort of in-place operations covering machine learning algorithms, it's a big bind. I mean, you have to do ETL. 
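The divide-and-conquer shape of the compute grid is exactly Java's own fork/join framework, so it can be shown on a single JVM. This is a sketch of the pattern, not Ignite's compute API; splitting across cluster nodes follows the same recursive split-and-combine logic:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide and conquer with Java's fork/join framework: split a task in
// half, run the halves in parallel, combine the results. The same shape
// as farming a compute task out across cluster nodes.
public class ParallelSum extends RecursiveTask<Long> {
    private final long[] data;
    private final int lo, hi;

    ParallelSum(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= 1000) {            // small enough: compute directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) / 2;          // otherwise split in two
        ParallelSum left = new ParallelSum(data, lo, mid);
        ParallelSum right = new ParallelSum(data, mid, hi);
        left.fork();                      // run the left half asynchronously
        return right.compute() + left.join();
    }

    public static long sum(long[] data) {
        return ForkJoinPool.commonPool().invoke(new ParallelSum(data, 0, data.length));
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data)); // 50005000
    }
}
```

With two equally capable workers the wall-clock time roughly halves, which is the "cluster of two machines" point above.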
There's really no choice. So, the opportunity to really work with machine learning on the data that you have, to do your analytics, to be able to do things like fraud detection, for example, very, very important in real time, is very, very helpful to them. And so, what they've done with Ignite is these algorithms have been implemented from the ground up to take advantage of distributed processing, large-scale parallelisation, and again, there are some features that they've added, things like partition-based datasets. So, the idea here is that if you're running a machine learning job, and let's say it's been going for half an hour, and perhaps it may need another half an hour to complete, if you lose some parts of your cluster, for example, the thing is that the job will still continue. It's able to process that data because of these additional sorts of features that they've built in. Because typically machine learning is an iterative process, and typically you have this notion of data and state, and as algorithms iterate, things change, okay, and you want to preserve where you've reached in a particular sort of run of an algorithm to be able to recover from that. So, Ignite will happily do that for you. Okay, so, no ETL. Okay, backups and snapshots. Need to back up data, consistent data restore, restore on different clusters. So, these are a few of the enterprise features now we're starting to get into, okay. So again, the needs of enterprises: sometimes you have maybe a cluster of a different size. Perhaps you want to restore the data to that very, very quickly, and one of the enterprise features allows you to do this. Backups, of course, are important, you know; even when we're running with the persistence capability enabled, we may still need to ensure that we've got adequate backups. You know, if you work with relational systems, for example, any type of database technology, you know that these types of things are done on a regular basis. 
We need, again, support for this. And so, these things, snapshots and recovery: so snapshots, think of them as like sort of database backups, if you like, and the ability to recover your cluster from external storage to a sort of known state; and again, point-in-time recovery as well, very, very important. And I'll give you an example of why this is relevant. So, not with this particular bank, but another example that I've heard of: we are human, we make mistakes. Developers make mistakes, okay? And so, there was a case where a developer wrote a SQL query that set all the bank balances for this particular organisation to zero. All customers' balances were set to zero, okay? So, the ability to go back to just before this problem and restore the system to a point in time very, very quickly, once the bank realised what the problem was, and be able to recover: that's a very useful capability. Particularly if you're working across lots of time zones, you know, you have a need to keep customers happy, a need to have the system fully operational. These, again, are very key features to enterprises. The last thing here is data centre replication. So, this is probably best illustrated with a graphic like this. I mean, this is not actually where Sberbank have got their data centres; I think we're not allowed to tell you where they are, for obvious reasons. But the way that these things can work is that you typically have clusters, and they may work in a variety of different ways in terms of how you do your replication. So, it can be active-active, for example, or active-passive. You may just have a backup, for example, where the data are being replicated to it, so that in the event that the primary goes down, the backup can be brought up. But, you know, if you've got active-active, then there's more data flowing between these. You know, it's, again, architectural decisions based upon needs. And so, there's a sort of a lot of configuration that you can do in this space. 
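The hourly-full-plus-incremental-snapshot scheme described earlier is what makes recovering to "just before the bad query" possible. Here is a toy sketch of that restore logic in plain Java; it illustrates the idea only, not GridGain's snapshot feature, and all names and timestamps are mine:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Toy point-in-time recovery: one full snapshot plus timestamped
// incremental snapshots. Restoring to time T means loading the full
// snapshot and applying every increment taken at or before T.
public class PointInTime {
    final Map<String, Integer> full = new HashMap<>();                        // hourly full snapshot
    final TreeMap<Integer, Map<String, Integer>> increments = new TreeMap<>(); // minute -> changes

    Map<String, Integer> restoreTo(int minute) {
        Map<String, Integer> state = new HashMap<>(full);
        // Apply increments in timestamp order, up to and including `minute`.
        for (Map<String, Integer> delta : increments.headMap(minute, true).values())
            state.putAll(delta);
        return state;
    }

    public static void main(String[] args) {
        PointInTime p = new PointInTime();
        p.full.put("balance", 100);
        p.increments.put(2, Map.of("balance", 150));
        p.increments.put(4, Map.of("balance", 0)); // the bad query landed at minute 4
        // Recover to just before the mistake:
        System.out.println(p.restoreTo(3).get("balance")); // 150
    }
}
```

The zeroed-balances story is then just `restoreTo` with a timestamp before the bad query ran.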
One other thing that's worth pointing out: if you're familiar with the CAP theorem, and sometimes people ask about this, you know, consistency, availability, partition tolerance, the idea that you can only have two of these at any one time, that in a distributed environment it's hard, or impossible, to get all three of them. So, it's worth adding here that Ignite does the C and the A, okay? So, it's consistent and available. And as for partition tolerance, you know, there are things that we can do to mitigate that and help with that as well. But we need to think about these scenarios. Okay, so a little bit in terms of the numbers then, just for the last sort of couple of minutes that I have left available. So, these are numbers from the Apache Software Foundation, okay? So, we are number one in terms of the developer mailing list, number two as far as the user mailing list is concerned, and only Hadoop, Ambari and Camel are ahead of us in terms of overall number of commits. So, typically, it's about a million downloads per year. And so, the community is very active and it's worldwide, okay? And I would encourage you to participate and get involved. Okay? So, as I promised, a couple of things that I can show you in the last couple of minutes that I have, and we'll leave time for one or two questions if you have them. So, let me just show you this, first of all, okay? So, this is the landing page at the Apache Software Foundation. In the top right-hand corner, if I can try and... Does that work? No, it doesn't. Not on here. Okay, sorry about that. I think we're not able to see this page. Can we switch to the... I think your AV is designed to just handle the PowerPoint slides but won't show the... Unless I go back here and say share my desktop. Let's try the display. There may be an option here. Arrangement, mirror display. Let's try that. See if that works. Okay, that works. Great. All right. There we go. 
All right. So, if we just have a look here, there we go. Okay? So, this is the landing page, and again, let me try this. Okay, that works this time. Great. So, there we go. At the top there you can see screencasts, okay? So, there's a couple of videos, 10 minutes of your time, that show you how you can take a relational system, the example they use is MySQL, and Ignite can read in the schema. It can generate all the plumbing, the infrastructure, for you. It creates a project. You just read that project into your IDE, put in your credentials, the port number, the IP address of the server, and it will happily work with that: take the data from there, cache it, and your cluster can be as big or as small as you want. Okay? And then you have the ability to process that data at memory speeds, and any changes that are made in the cache are automatically propagated to the back end as well. Okay? The two are kept consistent. Okay? That's a major feature. Other stuff. So, let me just quickly show you the... Let's minimise this. Okay? Let's close this and then just show you this. Okay? So, if you have the opportunity, you can just download the... this is 2.6, okay? So, this is the binary version, and all I have on my system is just Java. That's all it needs. And this will work pretty much anywhere. It will even work on small ARM-based machines like the Raspberry Pi; there's an ARM version of Java for them, and Ignite will happily work in that environment. And so, lots of examples. So, all of the sort of capabilities we talked about: key-value, persistence, machine learning examples, there are genetic algorithm examples, lots of other code examples, and all of this stuff works standalone. You don't need a cluster to get this working. All you do is just read in the pom.xml file and it will create the project for you.
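The cache-to-backend consistency described here comes from Ignite's read-through/write-through cache store. A rough sketch of the relevant piece of cache configuration, assuming a `CacheJdbcPojoStoreFactory` wired to a Spring data source bean named `dataSource` pointing at the MySQL database (the cache and bean names here are illustrative; the generated project wires this up for you):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="PersonCache"/>
    <!-- Cache misses are loaded from the database, and updates made
         through the cache are written back, keeping the two consistent. -->
    <property name="readThrough" value="true"/>
    <property name="writeThrough" value="true"/>
    <property name="cacheStoreFactory">
        <bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
            <!-- "dataSource" is an illustrative bean name for the JDBC data source. -->
            <property name="dataSourceBean" value="dataSource"/>
        </bean>
    </property>
</bean>
```

With this in place, application code only ever talks to the cache; propagation to the relational back end happens underneath.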
And then just lastly, to wrap up, let me show you one other thing, which is this here. Okay? So, if you are working with Kubernetes, for example, then Ignite will happily work in that environment as well. Okay? So, this is just running locally in Minikube. I've got two pods here. And if we just have a look at this and look at the logs, for example, this will show us that we've got Ignite running as a service inside this containerised environment. Great for the DevOps folks. Okay? Now you can deploy this anywhere you want: an external cloud, an internal cloud, a combination of the two if you want. And this is great because we've got a standard set of commands for managing this environment, and we can deploy it wherever we want. That's very, very helpful. So, overall, I think the message here is that the bank has benefited significantly from a lot of these capabilities. They've been a major driving force behind some of the new features, for example the persistence and the machine learning, very much driven by these large customers who wanted capabilities that Ignite previously didn't support. And you can treat Ignite as a system of record as well now, if you want to run it in that way. And the other key message is no rip-and-replace, which is very, very important, because it means you can maintain your investment in what you're already using, but perhaps use Ignite to handle certain types of use cases or certain types of problems where performance may be an issue with your existing system; you can offload that to cluster computing and run things there at memory speeds. Okay? So, I think I'm almost out of time. Maybe time for one question, perhaps two, and I'll be around outside as well. If you want to reach out to me, just do a search on my name. I'm on LinkedIn, you know, feel free to reach out and connect. If you wish to do so, I'm on Twitter as well.
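For cluster discovery inside Kubernetes, Ignite ships a Kubernetes IP finder that lets pods find each other through a service. A sketch of the relevant node configuration, assuming Ignite 2.x's `TcpDiscoveryKubernetesIpFinder` (the namespace and service name below are illustrative and must match your deployment):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <!-- Pods look up each other's addresses via a Kubernetes service. -->
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                    <!-- Illustrative names; match your cluster setup. -->
                    <property name="namespace" value="default"/>
                    <property name="serviceName" value="ignite"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```

Because discovery goes through the service, scaling the deployment up or down with the usual kubectl commands is enough for new pods to join or leave the cluster.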
And I think, let me just have a look, as I said before, this is a great place to go to. The community is very active, very lively, so get involved. Any questions that you have, no matter how complex or how simple, don't be afraid to ask. Great, thank you. So, a question? Hello, your profile said you're a global rockstar. What's your favourite band? Too many to say. I like Eric Clapton. So, no, done. All right, so, all right guys, thank you very much for your time. As I said, I'll be around, so grab me, or drop me an email, firstname.lastname at gridgain.com, and that will reach me. Great, thank you very much for your attention.