Well, thanks, everyone, for coming. My name is John DesJardins. I'm the CTO here at Hazelcast, and I'm joined on stage by Neil Stevenson. I'll introduce myself and then let Neil introduce himself. I joined Hazelcast four years ago from Cloudera. I'm a big fan of open source, and I saw the potential of taking a lot of the capabilities we had at Cloudera but doing them in real time, with a much simpler architecture that could also deliver zero downtime. When you go from batch to real time, suddenly you're a real application that people use, and five nines becomes actually important. All of those things led me to come to Hazelcast, and I'm really excited about our talk today. I started to get involved with FINOS last year because of our customer Morgan Stanley, and was having a look at different projects. One of our longstanding supporters at Morgan Stanley, Stephen Goldbaum, is one of the leads for the Morphir project at FINOS. So anyway, I'm very excited to be here today to talk to you about how we can process financial services data in real time, and then visualize that data with the Perspective framework. Neil will be helping to demonstrate that capability. Thank you, John. So just a very quick hello. I'm Neil Stevenson. I've been at Hazelcast for six years. Previously I was working for a number of consultancy firms, and we just loved open source software. I was using Hazelcast, I got in touch with the Hazelcast guys, and I said, your software is great, but it's missing this feature. And the answer I got back was, why don't you build it? So that was my introduction to Hazelcast. I'll hand back to John because I'm conscious we're short of time, but that's the way it goes with open source: you don't just take, you also give. So thanks for coming today. We're going to talk a little bit about what Hazelcast is.
I'll talk a little bit about how it's used in different financial services areas, and then how we envision it working with Perspective and potentially with some of the other FINOS projects. We'll also talk a little bit about some of our key differentiators, and then we'll hand it off to Neil to demonstrate the Perspective and Hazelcast capabilities. So this is our vision. This is why I get excited to come to work every day at Hazelcast: our goal is to empower the world, meaning we think this should be a democratized technology that companies of all sizes can take advantage of to work with data, and not just know about things, but act. When you start to think about real-time data, if I pull all that real-time data up onto a dashboard and I'm looking at the dashboard and then I have to go to the bathroom or eat lunch, it's not really real time, in the sense that there's no action when I'm not looking at the data. So we're talking about real time in the sense where we can trigger automated actions driven by rules, machine learning, or business logic, or fed out via alerts and other channels. And we do run everywhere. We have everything from people running Hazelcast on things like heavy trucks and in warehouses and factories, to people running us, obviously, in the cloud and in data centers. And we work with streaming and stored data. Our biggest users are in the financial services industry, working in real time with a lot of things. Securities trading data is continuously changing, so being able to analyze that data and work with it is critical, but also payments. And really, everything is moving even more to real time. You look at corporate payments, and that world is now moving to caring about instant payments and real-time payments, and, oh, wait a minute, what does that do to our cash flows?
So now we need real-time visibility of our liquidity, real-time analytics of the influence of foreign exchange, which isn't usually a real-time thing, except, well, actually right now it kind of is. So we really see that real time has flowed from the trade execution side through the financial system, to where every part of finance now cares about real time, and a lot of other industries as well. We'll talk a little bit more about that, but it's not just about analyzing; it's also things like real-time offers of banking products and investments, or real-time offers in lending, and things like that. If you know what someone's doing at that moment as they're banking and you offer them, say, a payday loan, well, you probably don't want to offer someone a payday loan when they just got paid. Maybe you need some context. We're big into open source and the ecosystem, and that's why we're here today. These are some of our top partners, just to highlight a few, but I want to call out in particular our friends at Red Hat, who we're actively working with and who also gave us a shout-out earlier. But we're actively partnering with all the data technologies, all the cloud vendors, and many others in the ecosystem. I'm going to move past this slide because I think everybody sort of understands it, but for context, if you get the presentation later, you have some examples of real-time SLAs from the real world. The only one I'm going to hit on is the Google RAIL model.
Google studied people's expectations for applications, and they analyzed this extensively, because Google doesn't really do things halfway. They found that when people click on something, they expect a response within 100 milliseconds that says, yes, my app saw me click and it's doing something, and within 1,000 milliseconds, or one second, they expect that the data is starting to load. If either of those things doesn't happen, they think the app's broken, the network's down, something's wrong. So that means that latency is the same as downtime. What's different about Hazelcast is that we work with both data in motion and data at rest in a unified, not product, but a unified runtime. We're talking about one server and one jar file. So what do we offer in terms of the capabilities of this runtime? Most people who do know us, and how many people have heard of Hazelcast? A lot of you. You probably know us for caching, or maybe as a data grid. You have applications that need fast access to data, so they may write to Hazelcast, and we may write through asynchronously to other databases where that data may live for longer periods. By writing through Hazelcast, you get sub-millisecond reads and writes for your application, and because we can also replicate continuously, active-active, across data centers or cloud regions, we can deliver five nines. We also have zero-downtime rolling upgrades and other features to deliver uptime, and that also means the databases we're sitting in front of can sometimes be taken down and the application doesn't know. But we also do a whole other thing, which is that the in-memory store has evolved to do computation. It started with just the idea: hey, I'm putting all this data in Hazelcast, but I want to query the data that I've put into my key-value store, and if my query runs on the cluster, it's going to execute faster. And so we started doing more and more computation.
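The caching pattern just described, where the application writes to the in-memory layer and the database is updated asynchronously behind it, can be sketched in a few lines of plain Python. This is a conceptual, stdlib-only illustration of the write-through/write-behind idea, not the Hazelcast API; the class and key names (`BackingStore`, `WriteBehindCache`, `ACCT-1`) are made up for the example.

```python
import queue
import threading

class BackingStore:
    """Stand-in for the slower system-of-record database."""
    def __init__(self):
        self.rows = {}
    def save(self, key, value):
        self.rows[key] = value

class WriteBehindCache:
    """In-memory map that serves reads instantly and flushes writes
    asynchronously to the backing store (write-behind)."""
    def __init__(self, store):
        self.store = store
        self.data = {}
        self.pending = queue.Queue()
        threading.Thread(target=self._flush, daemon=True).start()
    def put(self, key, value):
        self.data[key] = value          # application sees the fast write
        self.pending.put((key, value))  # database is updated in background
    def get(self, key):
        return self.data.get(key)       # read served from memory
    def _flush(self):
        while True:
            key, value = self.pending.get()
            self.store.save(key, value)
            self.pending.task_done()

store = BackingStore()
cache = WriteBehindCache(store)
cache.put("ACCT-1", {"balance": 100})
print(cache.get("ACCT-1"))   # the app never waits on the database
cache.pending.join()         # let the async flush complete
print(store.rows["ACCT-1"])  # the database has caught up
```

Because the database sits behind the queue rather than in the request path, it can be slow, or briefly down, without the application noticing, which is the uptime point made above.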
We realized there was real power in that. Not moving data is a key to performance, and so we introduced our streaming and batch processing engine, our distributed compute capabilities. That allows you to process continuously changing data being fed in from your Kafkas and your MQs; it could be fed in from IoT devices, or from raw network streams. Anything that's a TCP/IP protocol, if you send it to a Hazelcast cluster, it might take a little bit of custom coding, but we'll figure out a way to receive that data and process it, and we can do that in very real time. This is what I used to work with at Cloudera, and a lot of other data platforms in the open source space have this; the official Delta architecture comes from another vendor, but I'm not going to talk about them because I didn't work there. You get data coming in, you store the data, then you query the data, then you take the output of that query and you store it, then you maybe do some more preparation, and eventually you get to a data product: high-quality data for a particular use case. There's a lot of storing, and most of the time that storing is on spinning disks. Spinning disks are bad, and nowadays NVMe is super fast, and of course we also partner with Intel, so we can use their Optane memory, which is like flash that you stick inside your server directly. So this is about architecture if you care about real time. It's also about architecture if you care about zero downtime. There are a lot of moving parts. I'm not going to go into those moving parts, because we don't have those moving parts. We have one runtime, a peer-to-peer architecture, so every node of Hazelcast has a bit of the data and is going to do a bit of the computation. We're not just partitioning the data to let you scale data storage; we're partitioning the ingest and the query. We can execute the machine learning algorithms on the particular node where my data lives, or where Neil's data lives.
They may be on different partitions, whether it's our investment banking and trading accounts or our payments and deposit accounts. Hazelcast knows where all that data lives, and we can partition the compute by saying: partition by account ID. So we can distribute that input stream across the cluster, or if we want to do a particular query, we can execute that query on the cluster in a way where it knows where the data is and executes where the data is. So you're not moving data over the wire, and that's going to bring those execution times for your compute down, typically under a millisecond. The other cool thing is that if I add nodes, I'm adding capacity for my compute, my analytics, and my data. And I'm only losing one out of however many nodes: if you have, say, 35 nodes in a cluster and you lose one node, you've lost 1/35th of your data and your compute. Of course we have backup copies of the data on other nodes, and we will restart the compute on other nodes, so all you're going to see is a brief pause in the compute, and the data won't be lost. We can also write the data out to disk, so we have the additional guarantee that we will not lose your data. So we will not lose your data, we're going to execute in very real time, and we can do so in a way that's very easy to operate, because it's just a Java application. I can go to any company in the world and say, hey, we've got this new thing, it's Hazelcast, we're putting it in production, it's just a Java application; do you know how to run those? Most people can do that. At Cloudera we were like: who are your best people on your ops teams? We're going to send them to Cloudera training, and that's all they're going to do now; they're not going to be doing other operational tasks. But that scale-out also means we are fast, and that model allows us to do stream processing with over a billion events per second.
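The "partition by account ID" idea above can be sketched with stdlib Python. The sketch shows the essential mechanism: the partition for a key is derived from a hash of the key, so any member can work out which node owns a given account's data and send the compute there instead of moving the data. The node names and the round-robin partition-to-node assignment are simplifications; a real cluster maintains a partition table that migrates partitions when members join or leave.

```python
PARTITION_COUNT = 271          # illustrative; partitions far outnumber nodes
NODES = ["node-a", "node-b", "node-c"]

def partition_id(key: str) -> int:
    # Deterministic within one process; a real grid uses a stable hash
    # so every member computes the same answer.
    return hash(key) % PARTITION_COUNT

def owner(key: str) -> str:
    # Toy model: partitions spread round-robin over the members.
    return NODES[partition_id(key) % len(NODES)]

# Every event for the same account ID routes to the same node, so a
# per-account computation never crosses the wire.
events = [("acct-1", 10), ("acct-2", 5), ("acct-1", 7)]
for account_id, amount in events:
    print(account_id, "->", owner(account_id))
```

Adding a node changes only which partitions each member owns, not the key-to-partition mapping, which is what makes the elastic scaling described above possible.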
I think our benchmarks have gone up to somewhere around six or eight billion events per second, and we're able to do so with consistent low latency. This is a real-world benchmark, and we were able to hit 30-millisecond 99th-percentile latencies on a real-world query engine. By comparison, with Flink and Spark, well, let's just say don't go past a million events per second. And by the way, if you try to do this with KSQL, we don't even put KSQL on this chart, because after about 5,000 events per second you're just going to start to get ugliness in the latency. So that's why I think it's cool. And it's used in a lot of the biggest banks in the world. If you're working for a bank, Hazelcast is probably in use somewhere; and if you're not working for a bank but your customer is a bank, they're very likely using it in many areas. Real-time offers at BNP Paribas. We've got authorization and fraud processing for every single cardholder and every single transaction at Capital One. We're also used at a lot of the biggest retail sites in the world, like Target. Basically, if you care about performance and you need five nines, people tend to use Hazelcast. Sometimes there are other technologies that are similar in one part or another, but they're not providing both, right? They're not providing distributed, on-grid, scalable compute where we do all of the job management and workload sharing across the cluster, or even across threads. In fact, I think we're unique in our peer-to-peer job management and peer-to-peer distributed execution. This, by the way, works very similarly to Flink and Spark in that it is a directed acyclic graph. The pipeline of logic is broken out into stages, and those stages can be running in different threads or on different nodes of a cluster, but it's data-aware. And again, it's automatically distributing that work.
So as you add nodes to a cluster, they say, hey, I need some work, I need some data, and that scaling is very elastic. So that is kind of a quick run-through of Hazelcast versus Cloudera-style platforms. And I'm just going to pause there for questions before we talk a little bit more about how we see working with FINOS. So, Flink is very similar architecturally to Spark, right? It has a master-worker model: there are executors and there's a master node, and if the master node goes down, then no jobs get distributed and you don't really know what's going on. So you have to have a backup for the master node, but it's active-passive, right? The workers are more of a distributed model. And Flink and Spark require some place to store the data during the pipeline, which you have to add into the architecture, so that's another moving part. It's a lot of moving parts. Our difference is: not a lot of moving parts. And that means we can continuously process data with high uptime and high resilience. The other difference is the fact that we're built on a data grid. That means every node has data and is able to do continuous processing, so you're not moving data over the wire. And if you're not moving data over the wire, again, you're saving on latency and you're becoming more efficient with how much hardware you're going to need: fewer nodes in your cloud or fewer nodes in your data center. So we're going to do a lot more throughput with less hardware. That also matters for how much raw data we can move. We often end up replacing Spark as people start to realize: I have a need to run something in Spark, and it's great at throwing compute at lots of nodes, but it just assumes that compute is cheap and storage is cheap.
That was the basic architectural assumption of the whole big data world: yeah, it doesn't really cost that much to run five more nodes. Well, if I'm trying to do risk calculations, I probably don't need just five more nodes; I might need 50 more nodes or 100 more nodes, depending on what my SLA is. It also means that whether your SLA is to run a risk calculation every minute, every 30 seconds, or every hour, we can continuously do that calculation. And in risk, we're often also used for aggregating all of the data needed for those risk calculations. Some of our customers are basically pulling all the data into Hazelcast, where we can use the streaming to aggregate or integrate, and then the data grid to serve the data. Then they might use something like DataSynapse, or something else they already have in place, because they already have it, and we're just providing a super-fast data source. But that super-fast data source now needs to be kept up to date continuously during the day. Maybe I'm not running my risk calculations continuously; maybe I'm running them every hour. But guess what, if some event happens, I also want to be able to run an ad hoc calculation, right when a country gets invaded, or Elon Musk tweets about some cryptocurrency. I can use those events to determine that I need to rerun my risk on a particular asset class, or a particular whatever. So there are other examples. And we can deliver a particular window of latency with a particular deployment very, very consistently, because of our architecture. Whereas when you get into Spark, Flink, and similar technologies, maybe you can tune it and get pretty good latency, but when you get a spike or something like that, then all bets are off.
And you can't just go scale a Spark cluster, plus the supporting data and all the other pieces, to handle the fact that today we have twice the normal trading volume, whereas you could do that with Hazelcast. All right, I'm going to hand it over to Neil. Oh, sorry, before I get to that: FINOS Perspective was the first project we selected to work with, because people often do want to see what's going on inside the platform, even though what we really want to do is automate the actions and the responses to the data. And this seemed like one of the better tools for that in open source in financial services that we'd been seeing. We've integrated with Grafana and Kibana and things like that, so we just thought we should try to integrate with this. It's what I would call a sort of alpha of the integration, and we're going to continue to improve it. When we get it to where we think it's beta-ready, we'll put it out and contribute it. Well, we're not going to contribute Hazelcast itself; Hazelcast is already open source, and we'll show how Hazelcast open source can be used with Perspective. And then we're also going to start to look at other projects. I'm intrigued; I didn't really understand Legend that well until the last talk, which I just attended, and I'm keen to have that as a data source. And we're used with things like chatbots to provide AI-driven, real-time conversational banking, or conversational customer engagement, and things like that, so we're intrigued by maybe integrating more with the Symphony platform, and we're looking for more ideas. We're definitely also going to take a look at how we could execute Morphir within the Hazelcast runtime, so that we could allow Morphir rules to be executed with our speed and performance. So with that, I will hand it over to Neil. Thank you, John. Oh, and my microphone is on.
Okay, so I'll just talk briefly over a slide and then swap to my laptop, just to give you a little bit of background. The example we have here is continuous querying on a stream of trades. What we mean by a stream of trades is: we have some sort of connection to the stock market, which is coming into our business through an airlock and sitting on a Kafka topic. So we're getting: somebody is buying 1,000 of IBM, somebody is buying 1,000 of stock ABC, 1,000 of stock DEF. And we want to know what's happening in the market, what volumes are being traded, because this is something we're interested in for optimizing our business. The way we're going to do this is to read off a Kafka topic and keep a running total. It doesn't sound like rocket science, and indeed it's not. So if we just plug in my laptop, hopefully the screen will change. I don't want keyboard setup assistance. If we take a very brief look at our trade data: it's coming in on a Kafka topic, this is Kafdrop, and basically we've just got a piece of JSON coming in. What we're doing with this JSON is reading it into Hazelcast. It's NASDAQ stocks, so there are about 3,000 of them, and it's a five-node Hazelcast cluster, so each node has to keep track of about 600 running totals. Now, if you did this in a database, that's a GROUP BY query, an aggregation, essentially a table scan. That's not the sweet spot of what a database does. At large volumes, you find that query might take four hours to tell you what's happening. Here we're doing it immediately: we're getting the current volumes of the 3,000 stocks on the NASDAQ on five nodes. If you had 6,000 stocks, you could just go to 10 nodes, or whatever number you need. Now, if you're in the stock market, this is just your core business logic. You might look at this and say, that one at the top of the list, Raspbian Therapeutics, has done 42 million so far today.
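The running-total logic just described can be sketched in a few lines of stdlib Python: JSON trade events arrive one at a time, as they would from a Kafka topic, and we keep a per-symbol running volume. The field names (`symbol`, `quantity`) are assumptions about the demo's JSON, not its actual schema, and a single dict stands in for state that Hazelcast would hold partitioned across the cluster.

```python
import json
from collections import defaultdict

running_volume = defaultdict(int)

def on_trade(raw: str) -> None:
    """Consume one JSON trade event and update the running total."""
    trade = json.loads(raw)
    running_volume[trade["symbol"]] += trade["quantity"]

# Simulated feed; in the demo these events stream in from Kafka.
for msg in (
    '{"symbol": "IBM", "quantity": 1000}',
    '{"symbol": "ABC", "quantity": 1000}',
    '{"symbol": "IBM", "quantity": 500}',
):
    on_trade(msg)

print(dict(running_volume))  # {'IBM': 1500, 'ABC': 1000}
```

Because each symbol's total lives on exactly one partition, this is incremental work per event rather than the repeated table scan a GROUP BY over stored rows would be, which is why the totals are always current.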
You can see the numbers change in front of your eyes, but maybe 43 million seems a bit on the high side. That maybe takes you into some sort of drill-down: we want to see the individual trades that have made up that volume. Is it a hedge fund that's made a massive move, or is it lots of investors following advice on Robinhood? But fundamentally, that only gets you so far, and it just starts what-if questions. And that's when you get into SQL. You maybe can't see that too clearly on the screen, maybe you can, let's try it. What this query is, is a join of a Kafka topic with data that's in Hazelcast. When you do a query, most of the time you don't really care where the data's come from; you're just interested in the answer. So Hazelcast can use SQL to query data that Hazelcast holds, joined with data that's somewhere else, as long as Hazelcast knows how to get hold of it. But the reality is, if you're not going to stare at screens, as John mentioned, what you want is alerts. And here we go, there's our Slack system, we're getting alerts coming out. If you're a senior manager, you're not sitting at a screen, you're in meetings and whatever, and suddenly there's a buzz on your phone: something's gone mad in the markets, a ship has got stuck in the Suez Canal. Google that, another ship from the same company got stuck somewhere else; I'm sure it's just a coincidence. So this is Hazelcast, but let's have a look at FINOS Perspective. This is FINOS Perspective, running in a Jupyter notebook on my desktop, connected to a Hazelcast cluster running out in the cloud. And if we just look at the code, scroll up, now look up: that's all the code, and essentially it's just an SQL query.
So let's start from portfolios, and then FINOS is doing all the heavy lifting with Perspective. I can put that query into Slack and do it that way, the same query, and here comes my result, but that's a static query. What Perspective is doing nicely is running it continually and giving us this moving view, and I can do all the nice parts: I can change it into a bar chart or whatever and see how that stock moves. So that was kind of it for the demonstration, because we're very low on time. But if you wanted, you can look at that same data using an SQL interface; this is DBeaver, where it's just a JDBC connection. Where's the refresh button? I can't see it. It's over there. No, never mind, there is a refresh button somewhere that I can't see; that's what happens in a live demo. So there's our data in Hazelcast: we can see it changing in Perspective, and we can manipulate it from wherever we happen to be. If you're close enough, I could be doing this on my cell phone. Any questions? We've got a minute and a half left. Well, there's another variety; SQL's just the one that everybody likes. What's currently running is an SQL query that produces a finite result set, but Hazelcast also does SQL with infinite results. If you're querying something like a Kafka topic, which is essentially a queue, that never ends. So you can do an infinite query against certain data sources, if it's an event stream, or a finite query if it's something like a map or a table, something that's finite. Yeah, so fundamentally, and the question was around our SQL capabilities, we support both streaming SQL, where we can take a stream of data and treat it effectively like a table, and SQL over the data we're storing, where it works like a database, and we can also combine those.
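Combining stream and stored data, as just described, comes down to enriching each streaming event with a lookup into the data the grid already holds. Here's a stdlib-only sketch of that stream-to-map join: a list stands in for the Kafka topic, a dict stands in for the Hazelcast map, and the field names and reference schema are made up for illustration.

```python
# Stand-in for the reference data Hazelcast would hold as a map.
reference = {
    "IBM": {"name": "International Business Machines", "sector": "Tech"},
    "ABC": {"name": "ABC Corp", "sector": "Pharma"},
}

def join_stream(trades):
    """Emulates:  SELECT t.symbol, t.quantity, r.sector
                  FROM trades t JOIN reference r ON t.symbol = r.symbol"""
    for trade in trades:
        ref = reference.get(trade["symbol"])
        if ref is not None:                   # inner-join semantics
            yield {**trade, "sector": ref["sector"]}

stream = [{"symbol": "IBM", "quantity": 1000},
          {"symbol": "XYZ", "quantity": 50}]  # XYZ has no reference row
print(list(join_stream(stream)))
# [{'symbol': 'IBM', 'quantity': 1000, 'sector': 'Tech'}]
```

In the real engine the lookup happens on the node that owns the key, so the join, like everything else, avoids moving the stored side of the data over the wire.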
Some of the SQL logic for streaming is window-based; that's a specific syntax you wouldn't necessarily have out of the box in data tools. And some of it, like just selecting from a source such as a Kafka topic, works like ordinary SQL, and so that would be very easy to plug into this tool. That's actually where we're looking to take this next: figuring out that tighter integration. Beyond this framework, we're also looking at better out-of-the-box support for Apache Arrow. We already work with Avro and Parquet and a lot of those formats from the data world, but Arrow is on the roadmap for us as well. So, I think we're out of time, but any other quick questions? Otherwise you can catch us over at our booth. Yeah, we can be pulling data from anywhere, working with the data, keeping it in Hazelcast, and making it super fast. It doesn't matter where the data is; we can also do change data capture. So we could be doing change data capture from your data warehouse, if it supports some sort of change data capture format, and feeding that in as a stream to continuously keep Hazelcast up to date. We could be querying that stream in real time, but we could also be storing that data in Hazelcast for fast analytics, where you really care about the ability to query, say, the last day's data, and those kinds of things. In terms of format, you can query things like a CSV file or an S3 bucket; all sorts of possibilities. Your data is going to be in a range of silos, and we work with that. We're often used as a common operational data layer in front of lots of databases, with real-time computation on that operational data.
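The window-based streaming syntax mentioned above expresses a simple idea: events carry timestamps, time is cut into fixed-size windows, and each window emits one aggregate when it closes. Here's a stdlib sketch of a tumbling-window sum over a key; the window size, event shape, and in-order-events assumption are all simplifications for illustration (real engines also handle late and out-of-order events via watermarks).

```python
from collections import defaultdict

WINDOW_MS = 1_000  # 1-second tumbling windows (illustrative)

def tumbling_sums(events):
    """events: iterable of (timestamp_ms, key, value), assumed in order.
    Yields (window_start_ms, {key: sum}) as each window closes."""
    current_start, sums = None, defaultdict(int)
    for ts, key, value in events:
        start = (ts // WINDOW_MS) * WINDOW_MS  # window this event falls in
        if current_start is None:
            current_start = start
        if start != current_start:             # previous window has closed
            yield current_start, dict(sums)
            current_start, sums = start, defaultdict(int)
        sums[key] += value
    if current_start is not None:              # flush the final window
        yield current_start, dict(sums)

trades = [(100, "IBM", 10), (900, "IBM", 5), (1200, "ABC", 7)]
for window_start, totals in tumbling_sums(trades):
    print(window_start, totals)
# 0 {'IBM': 15}
# 1000 {'ABC': 7}
```

This is the semantics that windowed streaming SQL wraps in declarative syntax; a finite query over a map, by contrast, simply runs to completion over the data present at that moment.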
So that's a very common use case for us in financial services, whether it's an aggregated view of positions and latest exposures, an aggregated market data hub, or maybe real-time FX calculations, pricing, and things like that. So, all kinds of uses: payments, real-time payments. One of the things we also do really well is run ML algorithms in a distributed, data-aware way, so that we can execute, say, a fraud algorithm in less than a millisecond. All right, thank you all for coming, and feel free to stop by the booth for a deeper conversation.