Hello, traders. Good morning. How are you doing today? So we're going to try something a little bit new and different today in Discord. We've got an AMA session with Sovietyslav Toda, the lead engineer here at Bookmap. Basically, we want to give you a chance to not only meet some of the people that are working hard behind the scenes, but also to learn a little bit more about the platform itself and how it works under the hood. So before we get started, I'm just going to go through the disclosures, so bear with me for a moment. First, the general disclosure: all Bookmap Limited materials, information and presentations are for educational purposes only and should not be considered specific investment advice nor recommendations. And the risk disclosure: trading futures, equities and digital currencies involves substantial risk of loss and is not suitable for all investors. An investor could potentially lose all or more than the initial investment. Risk capital is money that can be lost without jeopardizing one's financial security or lifestyle. Only risk capital should be used for trading, and only those with sufficient risk capital should consider trading. Past performance is not necessarily indicative of future results. Right. Okay. I see Sovietyslav is here. Let me give you a brief introduction first before we get stuck into the Q&A. So Sovietyslav has been instrumental over the years in designing and building some of the core Bookmap features and functionality. He's been on the team a lot longer than me, one of the true OGs of Bookmap; I think he joined back in 2015. He's seen many iterations of the platform and is uniquely placed to have watched Bookmap evolve from the early days into what it is today. So he's really a true expert when it comes to the technical side of things. The idea today is obviously to pick Sovietyslav's brain. Ask questions, anything you can think of. Don't be shy. 
Write those questions out in the Special Events channel and we'll do our best to answer them. Hello, Sovietyslav. Can you hear me okay? Yep. Hi everyone. I have a first question for you. How's my pronunciation of your name coming along? I've been practicing. It's fine. It's fine. It's very close. I never have this problem, by the way. No one ever gets Sam wrong. I kind of feel left out. While we're waiting on questions here, maybe you could start by just giving us a brief overview of your role, a taste of your responsibilities at Bookmap. Basically, I joined the team when it was slightly past the... I was going to say prototype stage, but it wasn't really a prototype. It was a more or less functional application; I believe it was already being distributed to customers. So I joined at that phase. Initially, Bookmap was, to my knowledge (again, this is something that predates me), a tool to analyze an algorithm's behavior. So you write your own algorithm, you want to debug how it performs, you want to see what is actually going on with it. You can obviously read the logs, or you can try to visualize it with some self-written tools. And Bookmap was that self-written tool initially. That, again to my knowledge, is what it evolved from. At the point where I joined, it was no longer an internal tool; it was already being distributed to customers. I don't know the details of how big the customer base was at the moment, but yeah, it evolved from there. I guess my first task was OpenGL rendering, in fact. So when OpenGL rendering doesn't work for you, you can blame me, I guess. No, it's interesting. I don't think a lot of people know that Bookmap originated for algos and for high-frequency trading, and it's kind of evolved into something a little bit different, I think. 
Yeah, and actually, while we are trying to make Bookmap as useful as possible for normal traders, there are still parts from which you can recognize that this was originally built more or less as an analysis tool. Like zooming into a single second: there are some cases where you might call it useful, but for the average trader it's not. You can even argue that for a quant, resolution down to a second is probably a bit more than you need, but it doesn't really cost anything in terms of implementation, I believe. So, yeah, this is how it was implemented. So no one has any questions yet for Sovietyslav? So, Owen, you can see the chat, can you? Yeah, yeah. I see. So, about the programming language. Should I read the questions out loud so everyone can hear? Yeah, I think that's a good idea. Okay, so question: what programming language is Bookmap written in? Why was that language chosen? It's mostly Java, and in fact it's somewhat for historical reasons. There even was a point, around when I joined, where we thought maybe it should be C#, because C# is used by a lot of financial applications, but we decided to stick with Java. The main benefit of Java is that it's cross-platform, and it helped us greatly when we were still trying to support Mac and Linux. The reason I'm saying "trying" is because you can see that the Linux version is not perfect, but yeah, we are trying to make sure that it works, and Java helps. Just to be clear, I don't think the issues affect everyone; for me, it looks fine. Yeah, so, sorry, I kind of lost the thought. I wanted to say something before I respond to the next part of the question, which I can see. Okay, yeah, I will speak slower. So, about why Java: Java was and still is helping us make the application cross-platform. However, at the same time, the application is not purely Java. 
It has a bunch of C++ code that is used for rendering. It has adapters, which are not always Java; in fact, at a certain point we had a bunch of C# adapters, and this is what kind of limits our ability to support different platforms. If we were using pure Java for rendering (and I'm not saying it's impossible), most likely we would be running into problems with performance, and we were running into problems with rendering performance in the initial stage. This is why OpenGL was used. So this is how all this native code ended up inside the application, and this is what makes supporting multiple platforms difficult. So, answering the question, have you ever thought about .NET Core? Well, we basically thought about .NET, as I said. I believe at that point it was .NET Framework; I don't think .NET Core was popular or even existed at that point, but I'm not a .NET expert, so don't quote me on that. But yeah, we did. In that initial phase, we thought about rewriting the prototype in C#, but we decided that, well, we have something in Java, and we don't have a very strong benefit coming from C#. Again, I'm not 100% sure, but I believe at the point where we had to take this decision, .NET Core was not really... at least it wasn't popular. So if we would go with .NET, it would be, yes, portable: it runs on every operating system, as long as that operating system is Windows. That's the kind of joke about that version of .NET. There is Mono; you could run it on Linux with Mono, but this gets into all kinds of fun stuff. Java runs much better if you want to run it on a different operating system, or potentially a different platform. We didn't get there, but theoretically Java is something that could run, to a degree, on Android. I won't get into it, because there is a lot of stuff needed to get it running on Android; it's not like we could just take our current application as is. Yes, next question. 
Yes, feel free to pick it up when I'm getting stuck. As a beginner trader, what is the best approach to using Bookmap for the first time without being overwhelmed? Well, it's better to ask one of our education guys; they would probably give a better answer. I'm more of a technical guy. But the way I would approach it is to realize that the market is not really that complicated. Try to think about what really happens on a single instrument. First of all, the general mathematical idea: when you have a big complicated problem, divide it into a bunch of separate problems and solve those one by one. So, the first step: you have instruments, and instruments are independent. From this point on, you can reason about just one instrument. I'm not saying that instruments are independent in terms of deeper interactions; if there is a panic on one instrument, it might somehow affect another. That is all true, but they are independent in technical terms. The exchange does not take data from instrument A and move it to instrument B, unless we are talking about spreads, but let's not talk about spreads for now. So instruments are more or less independent. And on every instrument, what happens there? There are orders, and orders sit at certain price levels. In fact, what you are seeing in the order book are just limit orders; you are not seeing anything else in a normal order book. I believe it's called a limit order book because, technically, there are a few other types of orders. There are stops; they exist separately, and we can get back to them in a second. Limit orders are the simplest ones, and you can see them: it's just "buy at this price or better" or "sell at this price or better". You can modify it, and it can sit there. Then there are raw market orders: just "buy now at whatever price" or "sell now at whatever price". And there are stop orders. The interesting part about stop orders is that they are kind of a container for an order that gets unpacked on a certain condition. 
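The building blocks just described can be sketched in code. This is a minimal illustrative model, not Bookmap's actual internals; all class names here are invented for the example. The key idea is the stop order acting as a container that "unpacks" into another order when its trigger price is reached:

```java
// Illustrative sketch of basic order types. These class names are
// invented for the example; real exchanges and Bookmap model far more.
public class OrderTypesDemo {
    // A limit order: "buy/sell at this price or better"; the only type
    // visible in a normal (limit) order book.
    record LimitOrder(boolean isBuy, double price, int size) {}

    // A market order: "buy/sell now at whatever price"; it never rests
    // in the book.
    record MarketOrder(boolean isBuy, int size) {}

    // A stop order is a container holding another order (market or
    // limit) that gets unpacked when the trigger condition is met.
    record StopOrder(double triggerPrice, Object contained) {}

    // Unpack a stop order if the last trade price reached its trigger;
    // otherwise it stays armed. (Simplified: buy-stop semantics only.)
    static Object maybeTrigger(StopOrder stop, double lastTradePrice) {
        return lastTradePrice >= stop.triggerPrice() ? stop.contained() : stop;
    }

    public static void main(String[] args) {
        // A stop-market unpacks into a market order; a stop-limit would
        // contain a LimitOrder instead.
        StopOrder stopMarket = new StopOrder(4500.0, new MarketOrder(true, 2));
        StopOrder stopLimit = new StopOrder(4500.0, new LimitOrder(true, 4501.0, 2));
        System.out.println(maybeTrigger(stopMarket, 4499.0)); // still armed
        System.out.println(maybeTrigger(stopMarket, 4500.0)); // unpacked
        System.out.println(maybeTrigger(stopLimit, 4500.0));  // unpacked to a limit
    }
}
```

Note how the "container" framing makes stop-market versus stop-limit a matter of what is inside, which is exactly the distinction drawn next.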
A stop market order gets unpacked into a market order at a certain price, and a stop limit gets unpacked at a certain price too, but it contains not a market order but a limit order. I obviously won't get deeper into it, because we could talk about that for too long. But those are the fundamentals. So try to understand how the market works; it's not complicated. Everything complicated about the market is emerging behavior from what people are doing using those effectively very simple building blocks. I guess I'll jump to the next question then. So what's the next one? Why are you using different class loaders for each add-on? Good question. If a class loader were able to easily unload part of its classes, it would be possible to use one class loader. I'm not saying it's impossible to unload classes from a class loader, but it's somewhat tricky, and it provides a lot more control for us when we just load every add-on into its own class loader. Another benefit is our compatibility system. If you've played with the API, you probably noticed an annotation that's called... well, something about version, I don't remember off the top of my head, but the point of that annotation is to tell Bookmap which version the add-on was implemented for. If Bookmap finds an add-on built for an older version, it will try (it's not always going to succeed) to cover some common cases and effectively create a compatible environment for that add-on. So the add-on will more or less believe that it's running in an older version of Bookmap. That's not exactly how it works; you kind of modify the add-on on the fly to achieve that, and there are a bunch of technicalities. But yeah, that's another reason why we need separate class loaders. I have a question for you, Sovietyslav. The stops and iceberg indicator obviously leans on the MBO data, Rithmic MBO. It's really powerful and really interesting for me. 
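The per-add-on class loader idea described above can be sketched roughly as follows. This is a simplified assumption of the mechanism, not Bookmap's actual loading code; the jar paths and method names are placeholders. The point is that because each add-on gets its own `URLClassLoader`, its classes live in an isolated namespace and can be released simply by closing that loader:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

// Sketch: one isolated class loader per add-on jar.
public class AddonLoaderDemo {
    // Load an add-on jar into its own class loader. The parent is the
    // application loader, so the add-on can still see the host API.
    static URLClassLoader loaderFor(Path addonJar) throws Exception {
        URL[] urls = { addonJar.toUri().toURL() };
        return new URLClassLoader(urls, AddonLoaderDemo.class.getClassLoader());
    }

    public static void main(String[] args) throws Exception {
        // Two add-ons, two independent loaders (paths are illustrative).
        URLClassLoader a = loaderFor(Path.of("addonA.jar"));
        URLClassLoader b = loaderFor(Path.of("addonB.jar"));
        System.out.println(a != b); // separate namespaces
        // "Unloading" an add-on is just closing its loader; classes it
        // defined become eligible for collection once unreferenced.
        a.close();
        b.close();
    }
}
```

This isolation is also what makes the compatibility-shim trick possible: the environment visible to a given add-on can be adjusted per loader without affecting the others.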
How does that actually work? How are we able to see stops and detect stops getting triggered? Okay, I obviously will not be able to answer this in detail, because that would mean describing the proprietary algorithm. But being very brief about it: we are watching for data anomalies. The data transmitted by the exchange doesn't really have a special marker for icebergs. There are certain anomalies, and by watching the data closely you can figure out that, well, this looks weird, and if you think more about how the matching engine itself works, you can figure out that this specific anomaly most likely means a certain type of order. As an interesting example, I believe we currently cannot detect stops on Cedro. Well, no, we can't detect stops on Cedro because there is a different trigger mechanism. But this is an example: different matching engines behave differently, and this is why tricks that worked on one matching engine might not work on a different one. Yeah, sorry for interrupting. Okay, Owen's got a question there. What was the most difficult or complicated project you worked on or developed for Bookmap? I guess there was a bunch, so it's a little bit tricky to remember. One of the projects is... ah, okay, I know which one. That was by far probably the trickiest. We had an issue, and we still have it: a bug currently acknowledged by the Java developers, which is probably down to a bug in the microcode of a certain family of CPUs. That was probably the absolute worst issue we ever had to deal with, probably also in terms of impact, but especially in terms of diagnostics. What happens is that on certain machines, Bookmap will just close suddenly, because the JVM crashes inside garbage collection code. We obviously didn't know this from the beginning. To us, it looked like, well, it crashes. Why would it? Why would garbage collection crash? Probably because there is some corruption of memory. That was our first assumption. 
We made users test memory. We tried on certain machines we had access to. We tried replacing the GPU. We tried replacing a bunch of stuff, so basically we tried to locate the issue. At some point we narrowed it down to a certain combination of motherboard and CPU. There are more combinations, but we located one, and at that point we realized there is something wrong about it. We tried building the JDK from scratch, from the sources. We tried making modifications to the JDK itself to figure the issue out. We didn't really narrow it down to a specific problem, but we figured out that something is going on with the parallel garbage collectors in Java, at which point we just reported it to the Java developers. And from there, we are waiting for it to be resolved, because, let's be real, we are not a big enough company to just go and fix the JDK. We would have to divert a huge amount of resources; it doesn't look like a simple bug. We tried to fix the JDK, we tried to figure it out over a matter of weeks, so we had to stop there. But yeah, this one still exists. We are still hoping that Intel will probably release a microcode update. But our understanding is incomplete, and so is the understanding of the Java developers. Yeah. I guess I want to say Oracle, but I'm not sure it was Oracle; other entities work on Java too, so it was probably not the Oracle team. Yeah. So that's like an underlying issue with Java itself. Yeah. That's kind of the price of building on top of certain technologies. And this actually opens a topic (I don't know if we will have extra time, but we can talk about it): basically, when you're building on top of some technology, you have to accept the issues that come with it, and sometimes you have to deal with those issues or work around them. This is why we are often not so eager to adopt some frameworks or readily available solutions: it's all great as long as it works. 
When it doesn't, you suddenly have to figure out something that you didn't write, and it's not always easy. Okay. We've got a couple more questions here. Yeah. What's the landscape of data feeds? Is it in high demand from other software? How to get it? I'm not sure I 100% understand the question, so maybe someone can help me here, at least to understand it. The landscape of data feeds... I probably know the answer to the end of it. Like ARCA: I believe there are legal reasons why we would need to arrange a bunch of stuff around it to actually get it, and that's why I believe it's not supported, but again, I'm not sure; I'm probably not the right person to ask. Yeah, no, I'm not sure either. Maybe, Barboza, could you rephrase that question or try again? Yeah. Sorry, yeah. If you want me to give an answer, please narrow it down a little bit. In the meantime, I will continue. So, in terms of data: which exchange provides the best quality of full depth data? I'm not sure I have a definite answer. We had a good experience with CME and Rithmic for the most part. The data is detailed, if that is what you are worried about; there are some cases in terms of how it behaves during high load. But this is the only data provider we currently have that is providing so much data. dxFeed is interesting, but it's different; totally different, with completely different limitations. So yeah, there is no clear answer. If you want full depth, dxFeed has it, Rithmic has it, and probably a bunch of other providers also have it. At some point it was more limited; now everyone is starting to have full depth data, and probably almost everyone has it at this point. But yeah, those are pretty good ones in my experience, except for the performance issues that you might face. Next question. So, when will the automatic signing process for the dxFeed add-ons be finished? 
Being honest, the more of those add-ons you all develop, the faster it will get finished. The reason the process is currently manual is that there is not that much demand, so it's okay to deal with it manually at this stage. If we have more people developing add-ons, that's going to change. But dxFeed is a very special case. I'm not sure; I don't think fully automatic would even be possible there, because there are a bunch of restrictions depending on which data you are trying to access through dxFeed. They are pretty strict, and if you're writing an add-on, we must ensure the add-on is not violating anything. I'm not sure it's possible to automate the signing process in that particular case, because a human would probably still have to take a look. So, what does GPU mean? The answer given is correct: it's basically just a video card. It's the same kind of abbreviation as CPU, central processing unit, which is your processor; GPU is the graphics processing unit. I wasn't sure if that was an actual honest question or if he's just trolling at this stage. Okay, we'll see. Oh no, okay. There you go. Sorry. Now I feel bad. I feel like I've been patronizing. So, moving on. I think that's all the questions for now. I've got another question for you about performance, because this is often a big discussion point around Bookmap. There's no getting around the fact that you need quite a powerful machine to run it. Maybe you can talk a little bit about how we're working on improving the performance of the platform. Yes. So first of all, I want to explain how we ended up in this situation. You have two approaches to handling the data. You can either say, well, fine, we will sacrifice the data quality and resolution and try to just deliver a high-level view. Or you can go with the opposite approach, which is what we did: we tried to deliver as much data as we can without dropping it in the process. 
This is where you can zoom in almost forever and see exactly what happened there. And this obviously comes with a cost: you need to process all the data. I want to point out that currently we are somewhere in between. We are not just saying, well, we are never going to drop data resolution; we have some mechanisms for reducing data resolution. Historical data has to sacrifice some quality for performance reasons. And if your Bookmap is lagging too much, in 7.3 I believe there is a new mechanism that's going to aggregate the data; it will warn you, it's going to turn it on, or whatever. Sorry to interrupt. I think your mic is kind of fading in and out. Interesting. Is it better now? Yes, it's better now. Probably I shouldn't turn my head. Okay. I think I got the gist of it, but it was just fading in and out. Okay. About the performance itself, that is how we ended up here. We are trying to optimize the performance, and there are multiple directions in which we are working. For developers this is probably obvious, but I will answer anyway. We are trying to just directly optimize the code: we profile it, we locate the place that is too slow, and we try to speed it up by rewriting it in a better way. Sometimes we multi-thread stuff, so we try to ensure that if you have more cores, we will be able to take advantage of the extra resources of your computer. We are trying to use your GPU for rendering. But here I want to be very clear that you don't need an extremely powerful GPU for Bookmap to run. You need a decent one, but if it's not too old, you're probably going to be fine. So there is no need to go for the best GPU and pay several times your computer's price, given how expensive those currently are. It doesn't matter that much. Yeah. I'm not sure I can tell you much more. Okay. We are trying to optimize it. 
There is some rendering code written in C++, but the main reason for that is interfacing with OpenGL, so it's not so much for performance reasons. So it's not RAM or GPU performance that you would recommend? RAM and CPU performance. Single-threaded CPU performance matters, but you also want enough threads. The definition of "enough" is tricky; it depends on a lot of stuff, including what you want to watch and how much other software you are going to run in parallel. Because, obviously, Bookmap is not going to be the only process on your computer. But what I wanted to say is that single-threaded performance matters, because some parts of Bookmap are not multi-threaded. Okay. Yeah. About why we upgraded from Java 14 to 17, the main reason was actually that bug I was talking about. According to the Java developers, it should be more stable on certain processors, the ones that are affected. If I'm not mistaken, it's the 9th and 10th generations of Intel CPUs that are affected; apparently not every processor in those generations, and presumably a microcode update might fix it, at least for some. But anyway, according to the Java developers, Java 17 was recompiled to be more stable, with some compiler flags that, if I'm not mistaken, ensure alignment of certain instructions, and that somehow makes it more stable. If you want, I can look up the ticket number and you can read about it first-hand; why not? I mean, I'll just copy-paste. So this is the ticket that we are talking about here. Yeah. Answering the question, what are threads? A thread, in programming, is effectively this: you need to somehow define a sequence of commands that is going to be executed. In the simplest case, which is called a single-threaded application, you just have one sequence of commands. Say: create a variable, add variable A to variable B, print the result, whatever, but you do all of it in sequence. 
There might be some checks, like if A is greater than B then do something, otherwise do something else, but this is all executed in sequence. Threads let you say: well, okay, I'm doing this, but while I'm doing this, you should also start doing that. It comes with all kinds of fun challenges, because there is synchronization between those threads. As soon as they start working with the same area of memory, it is now your problem to ensure they're not going to do something stupid, like simultaneously read and write the same address. Well, assuming your code actually cares about it, because there are exceptions where it's going to be just fine even in that situation. But yeah, it comes with its own challenges, and usually it also comes with certain overheads: you will spend more CPU time just to ensure that your multi-threaded code is working correctly. In other words, say you have a single-threaded application and it performs its calculation in 10 seconds. If you split it into two threads somehow, in most cases you are not going to get it running two times faster. It depends on the task. There are tasks that are perfectly parallelizable (I'm not sure that's the technical term); on certain types of tasks you will actually get a two-times increase in performance. Other tasks might be nearly impossible to parallelize. It's an entire topic of its own. Okay. Oh, we've got another question here from a trading analyst. What are the next steps in API development? Is there a roadmap for what other developers should be prepared for in the near future? By the way, I can tell he's a programmer by the way he writes "add-on". The capitals. And by his questions, by the way. So, there is no clear public roadmap right now. You might try to bug us so that we create one, but being honest, we have a bunch of stuff that we need to do, so it's probably not going to be a top priority. 
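The threading trade-off described earlier (splitting one computation across threads, keeping shared memory safe, and not necessarily getting a 2x speedup) can be illustrated with a small sketch. This is a generic Java example, not Bookmap code; the task here, summing an array, happens to parallelize cleanly because each thread writes only to its own slot:

```java
// Single-threaded vs two-thread version of the same computation.
public class ThreadSplitDemo {
    // Single-threaded: one sequence of commands.
    static long sumSingle(long[] data) {
        long total = 0;
        for (long v : data) total += v;
        return total;
    }

    // Two threads, each summing its own half. Each thread writes only
    // to its own array slot, so there is no racing write; join() gives
    // the main thread a safe (happens-before) view of both results.
    static long sumTwoThreads(long[] data) throws InterruptedException {
        long[] partial = new long[2];
        int mid = data.length / 2;
        Thread t0 = new Thread(() -> {
            for (int i = 0; i < mid; i++) partial[0] += data[i];
        });
        Thread t1 = new Thread(() -> {
            for (int i = mid; i < data.length; i++) partial[1] += data[i];
        });
        t0.start(); t1.start();
        t0.join(); t1.join();
        return partial[0] + partial[1];
    }

    public static void main(String[] args) throws Exception {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        // Same answer either way; the speedup depends on the hardware,
        // the task, and the thread start/join overhead.
        System.out.println(sumSingle(data) == sumTwoThreads(data));
    }
}
```

For tiny inputs the two-thread version is actually slower, because starting and joining threads costs more than the work itself; that is the overhead mentioned above.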
The way you can impact the development of the API is basically just to request things, either through support or through the forum. The forum is an interesting place in that regard. To some of you, it might look like we are not watching it. Usually we are. Yes, we are not always responding immediately, and there are some threads that are looked at less frequently. But it is important for us to understand that, well, this feature was requested by, say, 10 different people. That's really valuable information, and we are trying to collect this data in support too. So when you write to support and request something, they try to collect it into a document and classify it, group those questions together to figure out, well, this is the same question. But it's not always that obvious, especially to non-technical team members, that two requests are the same question. On the forum, you have more flexibility, because you are the ones asking the questions. You can take a look and figure out, well, that guy is already asking for what I need, or maybe he is asking for something very similar. But answering the question directly: there is no publicly available roadmap at this point. As for the site: is there a site to vote for already-requested features? Basically, right now, the forum is filling that role. It's not a perfect fit, because again, if you requested something through support, it's not going to end up on the forum; but if someone else asked for it through the forum, it will end up there. And, as kind of a life hack: if you see a request that you want and it's already on the forum, and there are many people under it, maybe just write to support saying, well, here is a topic and there are already many people asking for it. We have probably seen it, but maybe we missed it, or maybe we forgot about it. I mean, it's not a bad idea to draw our attention to a topic if you see that we are probably missing something. 
Sometimes, though, your request is just going to be too complicated to do. Some requests are not going to be completed just because they're very hard, and we are not going to get into two years of development for the sake of one feature unless we believe it's really, really important, because we have other issues that we also need to address. It's a balancing act; it depends on what your request is. Okay. Do we have any more questions? That's it. If you want, we can talk about multithreading and possible performance. Yes, sorry. I mean, I can ask you some more things if you like. Taking it a little bit higher level: what are your favorite things to work on on the platform, and what are maybe your least favorite things, the ones you don't like working on? The most favorite stuff is actually usually figuring out something really complicated. While this bug with a certain family of processors was quite frustrating, it also was extremely fun to figure out, because when you figure it out, you actually feel like you accomplished something, not just did something simple. So, yeah, that's the best part. The least interesting part is probably routine tasks: something you have to do just because you have to, like writing UI. It's necessary; you need to do it. I wouldn't say that I hate it or anything, I'm fine with it, but it's much less fun than actually diving into something that is complicated. Yeah, no, that makes a lot of sense. I can see you like a challenge, a technical challenge, a little bit like me in that respect. Solving puzzles, solving a problem. Yeah, you can get a lot of satisfaction out of that, I think. Yeah, and there's always the day-to-day grind stuff that needs doing, that isn't necessarily something we want to be doing. But yeah, I can relate to that as well. 
Did you want to talk about the heatmap a little bit? I mean, obviously the name Bookmap itself comes from order book and heatmap; it's one of the core features of the platform, visualizing the historical order book over time. Yeah, what are the main challenges there? So, about the heatmap first: why does it even exist? You can represent the data as a DOM, just the current state, what the sizes are on every level. And some traders, after a certain time, kind of remember what it was like: okay, this size moved down, maybe there was a guy who moved his order down. That kind of stuff. So you might actually end up reconstructing the heatmap in your brain to a degree. But, well, it's not convenient; it takes your brain power, and you're not going to do it perfectly. So we decided, why not make the machine do it? In terms of challenges, I guess the main challenge with the heatmap is that it's a lot of data. Instead of just having one column, you have a bunch of pixels that need to be filled correctly. That is, if you're not sacrificing data quality. Because there are shortcuts: you could say, okay, we are only updating it every 10 pixels horizontally, or maybe we only have, say, 100 blocks across the entire screen, which cuts the task dramatically. Or you could say you are not allowed to zoom the two axes independently; you have to zoom both price and time at the same time, because this actually makes the two-dimensional task effectively one-dimensional, making it much easier. But we didn't do any of that, and this means we are required to handle a lot of data in order to accommodate all those possible combinations. And this is why it gets tricky, because there might be millions of updates within your screen. And in fact, there will be: on a 4K screen, zoomed out vertically as far as you can and horizontally as far as you can, you will have like 16 million pixels, and probably even more updates than pixels. 
So yeah, and obviously every pixel might contain multiple updates if you zoomed out far enough. So there are interesting data structures optimized for this case, and this is actually why our cache folder is so big. Some of you have asked why the Bookmap cache is so huge: it's exactly because, while we are receiving the data, we are also unpacking it into those data structures that later help us answer those requests quickly. Well, somewhat reasonably quickly, because it's still a big task, and we are not able to do it instantly in most cases. Right, and what has been the evolution of the heatmap? The heatmap was included in the original version, was it? Yeah, at least when I joined. At that point, the algorithms behind how we build the heatmap were relatively straightforward; it basically just applied every update, and the color map was basic. I cannot get into that stuff too deeply because it's proprietary, but the algorithms behind how the colors and sizes are mapped, all that stuff, evolved quite far over the years. Yeah, so answering the other questions: can you add a tooltip option to the Converse icon on the heatmap? Good point. The answer is always yes, but we need to make this a high enough priority feature request in order for it to actually get into the pipeline. Because again, we have a ton of stuff that we need to do: fixing bugs, developing new features; there is really a lot of stuff, so we have to prioritize. If this feature is requested by a significant number of users, it's likely to get implemented. Specifically, this one is probably going to come out once we figure out mouse listeners; not just figure out, but implement. We talk about it a lot. I don't know when we will get to it, but we want to implement some ability for add-ons to interact with the mouse, to react to mouse movement. A tooltip is probably going to be there in some form, but I cannot commit to any dates. 
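The pixel problem described above, many raw depth updates folding into one on-screen cell, can be sketched with a toy bucketing structure. This is purely illustrative; Bookmap's actual cache structures are proprietary and far more elaborate. The sketch maps each update to a (time pixel, price pixel) cell and keeps the last size seen there, so the cell count, not the update count, bounds the rendering work:

```java
import java.util.HashMap;
import java.util.Map;

// Toy aggregation of depth updates into screen-pixel buckets.
public class HeatmapBucketsDemo {
    // One cell of the on-screen heatmap grid.
    record Cell(int timePixel, int pricePixel) {}

    final Map<Cell, Integer> lastSizePerCell = new HashMap<>();
    final long nanosPerPixel;     // horizontal zoom level
    final double pricePerPixel;   // vertical zoom level

    HeatmapBucketsDemo(long nanosPerPixel, double pricePerPixel) {
        this.nanosPerPixel = nanosPerPixel;
        this.pricePerPixel = pricePerPixel;
    }

    // Fold a raw depth update into its pixel bucket: later updates in
    // the same cell simply overwrite earlier ones.
    void onDepthUpdate(long timestampNanos, double price, int size) {
        int tx = (int) (timestampNanos / nanosPerPixel);
        int py = (int) (price / pricePerPixel);
        lastSizePerCell.put(new Cell(tx, py), size);
    }

    public static void main(String[] args) {
        // 1 ms per horizontal pixel, 0.25 price units per vertical pixel
        // (illustrative zoom settings).
        HeatmapBucketsDemo h = new HeatmapBucketsDemo(1_000_000L, 0.25);
        h.onDepthUpdate(100_000, 4500.25, 10); // both updates land in
        h.onDepthUpdate(900_000, 4500.30, 25); // the same pixel cell
        System.out.println(h.lastSizePerCell.size()); // one cell survives
    }
}
```

The hard part that this sketch ignores is exactly what was described: because both axes zoom independently, the bucket boundaries change with every zoom, so the real structures must answer "what is in this cell" for arbitrary zoom levels rather than one fixed grid.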
How often should we shut down and restart Bookmap for optimal performance? Well, ideally, if Bookmap were perfect, your GPU driver had no glitches, everything had no glitches, you should never have to do it. But reality is different, so it will depend a lot. The main practical reason why you might want to restart Bookmap is to discard historical data. Right now, Bookmap is not perfectly efficient when it's aggregating the trades that are outside the left edge of the screen. That aggregation is needed to build the volume profile, or whatever that column is called. So as you accumulate more and more historical data, even if you are not watching it, it's going to have some performance impact, because it plays a part in certain computations. So yeah, you might want to restart, you know, to discard it. You should probably be fine restarting it once a day, for sure. If you have to restart it more than once a day, that's something we would definitely treat as a bug. It's not normal if you have to restart Bookmap more frequently than once a day. We know that some people have it running for a week or so, and it seems to be okay, but I cannot really comment in detail because it depends on your data. Basically, if you had data with absolutely no trades and a relatively calm heatmap, you would probably be fine running for months. But it depends on the data. In the heatmap, we often see a liquidity tunnel around the current price. Is there a possibility to mask out a defined region around the current price? That's a good question. That's one of the projects we are working on. We have some advanced filtering operations for the heatmap, but I'm not sure if it lets you go that deep. Yeah, applying some filtering when you've got market makers and a lot of static big orders that pollute the heatmap a little bit, it would be convenient to filter some of that out.
Yeah, what I guess I can say is that we have a data-editor type of add-on. So if you want to take a shot at implementing this yourself, you can. There are data editor examples in our repository; I'm going to post the link. The readme mentions what's called a data editor module. Sorry, I didn't do it properly. Basically, just scroll to the point where it says "data editor module". You can actually modify the data and try to filter out whatever you believe is useless; you can apply different algorithms. We're basically offering you a job there, I think. You can sell it on our marketplace, so there is a way for you to actually monetize it if you want to. But is this level one or not? No. I mean, it depends on what you define as level one, but usually no, because level one is typically just BBO. The demo strategies repository contains a bunch of examples for the more advanced API, but even through the simplified API you can access the full order book, as much as is provided by the data source. So no, that's not level one. Ah, level one, I apologize, yes, okay. There's a naming collision. The entire stack is called the "layer one" protocol. This has absolutely nothing to do with the level one concept that is used in market data, like level one, level two; it's not related to layer one. The reason it was called "layer" is that at some point we drew a diagram. There was layer zero, where we ingest the data; layer one, where we process the data; and layer two, where we visualize the data. And the name stuck with the protocol. So this is now called the layer one protocol, or the layer one stack, because now we basically stack a bunch of layer one modules on top of one another. And this is why, by the way, the low-level modules, the data providers, are called layer zero add-ons: because they are at the bottom of the stack.
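To give a flavor of what such a filter could look like: below is a small Java sketch of the masking idea from the earlier question (hide levels within some number of ticks of the mid price). This is my own illustration, not Bookmap code; the class and method names are invented, and the real data editor interfaces are the ones in the demo strategies repository:

```java
// Illustrative only: the shape of a depth-filtering pass, like what a
// data-editor add-on could apply before the heatmap sees the data.
// Returning 0 for a level effectively hides it from the visualization.
public class DepthMaskFilter {
    private final int maskTicks; // half-width of the masked band around mid price

    public DepthMaskFilter(int maskTicks) {
        this.maskTicks = maskTicks;
    }

    // Prices are in integer ticks. We compare doubled values so the mid price
    // (which can be a half-tick) stays integral and exact.
    public long filter(int priceLevel, long size, int bestBid, int bestAsk) {
        int mid2 = bestBid + bestAsk;                 // 2 * mid price
        int dist2 = Math.abs(2 * priceLevel - mid2);  // 2 * distance from mid
        return dist2 <= 2 * maskTicks ? 0L : size;
    }
}
```

A real add-on would also have to decide what to do when the band moves: levels that leave the band need their true size restored, which means remembering what was masked.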
And this is the name from that diagram. Does layer zero source code have to be open source? Well, it depends; I would suggest discussing it on a case-by-case basis. The basic rule is that if you develop something at layer zero, it is open source. I believe that only applies to layer zero, though; I don't think we have such a requirement for layer one. If you want, you can ask our support, or I can try to check it for you, but it would be best if someone reminds me. Basically, I believe layer one does not have this restriction; layer zero, yes. And this is because we wanted to make a distinction between reading proprietary data and open data. But I don't think this is a restriction for layer one. Is there anything that you particularly wanted to discuss, Sovietyslav? I don't have a specific topic. If you want, I can just talk about some general stuff, but otherwise, no. We've got a few people typing in chat here. Oh, okay. In the meantime, I can say something about using the GPU generally, because this is a question I've heard from time to time: why don't you use the GPU more for computations? In fact, we have looked at it, more than once. Yes, there is GPGPU, if I'm not mistaken, general-purpose GPU computation, so doing arbitrary computations on the GPU, not just rendering graphics. We could not really apply that effectively to what we are doing. You have to upload the data to the GPU and download it back. It's worthwhile if you have a bunch of complicated calculations to do, like maybe training a neural network or applying a neural network to your data, and maybe we will get there one day. But at this point, we don't really have much to offload. As for rendering, rendering is on the GPU, but what we render is not really that complicated for modern GPUs; they handle it fine.
In the meantime, I can also tell you about one interesting bug that affects certain laptops. There are certain laptops that have an inadequately designed power delivery system. What you will observe on those machines is that once you load the CPU and GPU simultaneously, your CPU gets into, I believe it's called power throttling: it hits a power limit that suddenly gets lowered, and the frequency just drops. What this means is that once you run Bookmap on this kind of machine, your GPU will start being used, and you will suddenly find that your CPU dropped from, like, three gigahertz to 0.5, which makes the computer totally unusable. I have no idea what the designers of those laptops intended those devices for, because I believe this issue also shows up in all kinds of video games, and probably in basically everything that uses the GPU and CPU simultaneously. But yeah, this problem exists, and it's another of those fun issues to diagnose. You can check for it: those are typically Intel laptops, and you can use the Intel Extreme Tuning Utility to take a look at what's limiting CPU performance. If you see that the CPU is consuming a very small amount of power and at the same time it's flagged as power limited, or power throttling, I forget exactly how it labels it, but basically power limiting, then yeah, you have one of these laptops. But those are relatively low-end machines. I hope not. I think it's safe. It sounds interesting: by working on Bookmap, you unearth a lot of anomalies. You mentioned Java already, and also certain laptop manufacturers. So you get all kinds of edge cases that can be a challenge to deal with, I suppose, even if it's not necessarily anything to do with us; it's the hardware, or the underlying fundamentals of the languages. Yeah, that's where supporting any system is actually interesting, because you're not living in a vacuum.
There are a bunch of other systems that you are building on top of, and those systems are also not perfect. Yeah, so answering the next question: what kind of laptop specs would you recommend for Bookmap? Well, again, it depends, like with any question about what computer specifications we recommend. It depends on the data. If you have a heavy use case, a ton of instruments and data-heavy add-ons, you probably want a top-of-the-line machine. If you just want lighter use, you might get away with a slower one. So it depends. It's the same kind of balancing act as with a desktop, with the one correction that laptops are probably about 30%, or even a bit more, slower than an equivalent desktop. By the way, keep in mind that part numbers for laptops and desktops don't really match. What I mean is that you might see a GPU or a CPU with a very similar number and assume, as I did a long time ago, that they would be about equal. Typically no; the laptop part is usually a couple of generations behind, so about 30% roughly. Yeah, you can also get overheating problems as well; cooling is a big issue for laptops. Yeah, that's a very good point. So you need adequate cooling and adequate power delivery. You don't want the laptop to throttle. Whether you are fine with it being hot is your personal preference, and how much the fans ramp up does not affect performance, obviously. Next question, I guess. I have a specific question about the Rithmic data provider. Is BBO calculated by Bookmap, or do you use the data delivered from Rithmic to paint the BBO lines on the chart? It is calculated by Bookmap, but last time we checked, it was exactly the same data as what's received from Rithmic. Typically, the way updates are received, you just get the depth-of-book update and then immediately the BBO update, or the other way around. So you are getting the same data twice.
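The point that BBO falls out of the depth data can be shown with a minimal order book sketch. This is a generic illustration, not Bookmap's implementation: maintain a sorted map per side, and the best bid/ask are simply the extremes of the book.

```java
import java.util.TreeMap;

// Sketch of deriving BBO from depth updates. This illustrates why a separate
// BBO feed usually carries no extra information: once you track the full
// book, the best bid and ask are just the top of each side.
public class OrderBook {
    private final TreeMap<Integer, Long> bids = new TreeMap<>(); // price -> size
    private final TreeMap<Integer, Long> asks = new TreeMap<>();

    // Size 0 means the level was removed, a common depth-update convention.
    public void onDepth(boolean isBid, int price, long size) {
        TreeMap<Integer, Long> side = isBid ? bids : asks;
        if (size == 0) side.remove(price);
        else side.put(price, size);
    }

    public Integer bestBid() { return bids.isEmpty() ? null : bids.lastKey(); }
    public Integer bestAsk() { return asks.isEmpty() ? null : asks.firstKey(); }
}
```

After every depth update, the derived BBO changes only if the top of the book was touched, which matches the observation that the explicit BBO message arrives immediately before or after the corresponding depth message.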
Do you recommend installing Bookmap onto its own SSD? I definitely recommend having the Bookmap cache folder on an SSD. That will help a lot, because Bookmap uses memory-mapped files for the cache. This means that if you have free RAM, the computer will keep this data in RAM; if you don't have free RAM, it will dump it onto the disk. If you then need to access this data, it will be pulled back from the disk into your RAM, and this is where you might see a slowdown. With an HDD, those slowdowns will be more noticeable. So yeah, an SSD is preferred, let's say. By the way, don't worry too much about wearing out an SSD; I don't think it's that big a deal. I've never seen a TLC SSD worn out in practical use cases. I'm using mine pretty heavily, and in four years or so I've only used about 20% of its cycles, and you are probably not going to be using yours that heavily in most cases. A QLC drive, with my kind of usage, would probably hit the end of its life in four years, but I don't know if that's a big deal either. I cannot promise you that QLC is going to be perfect for you, but for most real-world use cases, QLC should also be fine. Keep in mind, though, that QLC SSDs lose performance when writing large amounts of data. That should probably not be an issue with Bookmap if you are just getting real-time data, but it might be somewhat of an issue if you're trying to open a huge feed file, like, I don't know, a few gigabytes, because that gets unpacked into a bunch of data in the cache folder. A QLC drive might drop in performance, but you probably won't feel it; it will drop from something much higher to something like 100 megabytes per second. So maybe it's not such a big deal, as I just said. Next, to expand on this question: what if you just want to look at one instrument, such as ES, plus the BTC-USD multibook, for example? Naming a CPU and an amount of RAM would be nice.
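For the curious, the memory-mapped-file mechanism described here is standard Java NIO. A minimal self-contained demo (not Bookmap code) of mapping a file, writing through the mapping, and reading it back:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal demo of memory-mapped files: the file is mapped into the process's
// address space, so its pages live in RAM while memory is free, and the OS
// transparently evicts them to disk and reloads them under memory pressure.
// That reload from disk is exactly where an HDD hurts and an SSD helps.
public class MmapDemo {
    public static long writeAndReadBack(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putLong(0, 42L); // write lands in the page cache, not straight on disk
            buf.force();         // ask the OS to flush dirty pages to the device
            return buf.getLong(0);
        }
    }
}
```

The key property for a cache workload is that reads and writes through the buffer are plain memory accesses; the operating system, not the application, decides when disk I/O actually happens.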
The BTC-USD multibook is what concerns me, but in fact ES is also not light. It's one of the most active instruments; especially during volatility, a very significant portion of the updates in the market is going to be ES. I will not name a specific CPU, but you can take a look at the thread about picking a computer for trading. About the amount of RAM, I wouldn't go below 16 gigabytes, because if you are going to open at least a browser in parallel with Bookmap, the browser will happily consume a bunch of memory. And if you are in the habit of having, like, 20 tabs open, you can find that the browser is using more memory than everything else in your system. Right now, I would never buy a laptop with less than 16 gigabytes. 32 sounds future-proof in a certain way, but it also depends on what you are going to run. If you are talking about one or two instruments, you are most likely going to be fine; if you want to run some heavy use cases later, maybe more. But keep in mind that it's not actually that hard to change the memory on many high-end laptops. Usually, you just pop off the panel on the bottom of the laptop and swap out the memory modules, SO-DIMMs, I believe the form factor is called. What are the new features that you are working on for the future? There are a bunch of those, but I'm not sure I'm allowed to talk about any particular feature at this point. And another one: for visualizing the data, do you use the timestamp on reception of the data, or the timestamp that is delivered by Rithmic? Very good question. Right now, we are using the client's timestamp, so whatever was on your machine at the time when we received it.
The reason it's done this way is that you might have multiple instruments, and we make the assumption that time only goes from left to right; time never jumps back. This helps us in terms of performance and keeps the protocol simpler in general. So there is this limitation. We are considering providing some metadata that would let an add-on, or whatever needs this information, receive the timestamp delivered by Rithmic, but there is no ETA; this feature is not currently in the works. It's just something that we might do. We are aware that it would be nice to have. Is there a possibility that Bookmap's minimum chart refresh rate may be reduced below 8 milliseconds in the future? I'm a little bit confused by "minimum"; it's really the maximum chart refresh rate, since 8 milliseconds is the smallest delay that you can select. An 8-millisecond refresh delay means you are getting about 125 frames per second, which is quite high. If you want more than that, I don't think it's practical with Bookmap right now. Yeah, I'm not sure I see a practical use case where you would benefit from having more than 120 frames per second. Yeah, I mean, most standard monitors are 60. If you could get up to 300 or maybe even 500, that's getting into gamer territory. Okay, I think the questions are dying down. Unless anyone has anything else, we should maybe wrap up there; we've been going about an hour. Yeah, so thank you for joining us, Sovietyslav, and thank you guys for joining us as well. I hope you found it useful. We'll probably be uploading this session to YouTube if you need to rewatch it or if you missed anything. Yeah, it was really interesting stuff. Thanks for your time. Later today, be sure to catch Tom B as well; he'll be live in the Traders Lab voice channel here in Discord.
And as Bruce is away all this week, I'll be helping out tomorrow with the JTraders Live webinar. So I'll see you all then. Yep, enjoy the rest of your day, guys. Thank you very much. Have a nice day.