So, this is RubyConf. I hope you've enjoyed the conference so far. Did you? No? Raise your hands. More fun, I know. You're a bit tired, and this talk is going to be a little technical — it's the last one for today, so I hope we get through it successfully. The talk is called "High-speed cables for Ruby" — probably not the clearest title. Maybe we should have called it "Cables at Tiffany's", which actually describes what we're doing here. And we literally had a cables problem just now: as I understand it, there's no live recording of the slides, so they're going to be combined with the video later from the PDF. Okay. I'd like to start by explaining what I'm going to talk about, so we're on the same page. First of all, what is a cable? By "cable" I mean a tool for building real-time applications. And from one definition we have to go to another: what is a real-time application? What is the real-time web? You could check Wikipedia, but I don't think it's a good explanation — "some updates", "checks" — what's this all about? Let me explain it a little differently. Real-time communication is simply a different type of communication. Different from what? From what we have in almost every web application: request-response, a kind of synchronous communication. Request-response implies the following: in order to get data from the server, a client must send a request. There is no way for the server to send data to a client without the client requesting it. By the way, did anyone play ping-pong at the first day's after-party? That was pretty fun — that's actually where I came up with the idea for this slide. It's really request-response, and the server is not feeling good. So that's not real-time; that's the typical request-response cycle. Real-time is different.
And the difference is usually two things. First, the communication is bi-directional: messages are passing both ways independently. Second, it usually deals with persistent connections — either real persistence, in terms of sockets, or abstract persistence modeled on top of other transports. Another thing we should care about as backend developers (and most Ruby developers are backend developers, I think) is the difference in how we handle these requests on the server side. Request-response usually deals with a request queue: first come, first served. We have a limited — usually very limited — number of simultaneous requests our server can handle, so it's kind of fixed, and we have a queue. With real-time the situation is different: we have a large number of concurrent requests, and we have to serve them all, but not all together. I imagine a century ago there was a dinner somewhere nearby with a limited number of waiters: we still have a limited number of servers, but a lot of people to serve, and we serve them seemingly simultaneously, but actually concurrently. That's an example of concurrent work, and that's what we're dealing with in real-time. What do we need real-time for? There are a lot of examples. Actually — raise your hand if the application you're working on right now uses real-time features, at least small ones. Yeah, about half of the audience. And it's going to be more and more every day. It's kind of a modern technology — actually not so modern, but popular — and we need it to make the user experience in our applications better. When we talk about real-time, we usually talk about WebSockets. We're not going to talk about the older techniques, like long-polling, Comet, Flash sockets — probably some of you have tried them.
And when we're talking about WebSockets, we're talking about concurrency. The problem is that concurrency and Ruby don't play well together. Yeah, I'll give you a break here. Well, that's kind of common knowledge in the community: writing real-time applications in Ruby is supposedly almost impossible — they're going to be less performant than applications in other languages, such as Erlang. After today's keynote, my goal is to help you unlearn this knowledge, and to convince you that writing concurrent applications in Ruby can be tricky, but it's possible. So you can stick with Ruby instead of switching to something else and building yet another microservice in some mainstream language — I'm not going to name it here, I don't want to. That's what we're going to talk about. Now, quickly about myself. Usually I put this part at the beginning of the talk, but since the title is not so clear, we had to get through the topic introduction first. My name is Vladimir — maybe not so easy to pronounce here in the United States, so you can call me Vlad; it's a little easier. Many people call me palkan, after my GitHub or Twitter handle. I do a lot of open source: my own projects, and I contribute to Rails. I spent two and a half years trying to get Action Cable testing merged into Rails, and it's finally going to be in Rails 6. It should have been in Rails 5, but — well, it's a long story. Currently I'm working as a team lead at a company called... not Elvis Martians, sorry, but Evil Martians. Elvis is everywhere here — every restaurant has its own Elvis. It's so strange. We are known as aliens, you know.
We do product development — consultancy, actually — for big companies and small companies. And there's a bonus: one of these companies is hiring, a company I'm working with right now. You can check the jobs board in the hallway if you want to kind of work with me — not as part of Evil Martians, but as part of that company's team. But that's not the interesting part. What's interesting is our open source work. While doing commercial development, we try to give back to the community as much as possible: we extract frameworks and tools from our commercial projects and make them open source. And I'm going to talk about one such open source tool today. A few more advertisements: we have a blog where we write about what we're doing. And right now I live in Brooklyn — we have multiple bases around the world, in the United States and Russia, and I'm based in Brooklyn. If you're around and want to talk about Ruby, cables, whatever — just ping me. Finally, we've reached the point where the talk really starts. We've already covered the introduction; the outline has two more parts: Action Cable, and Ruby-and-everything. I'd like to start with a particular example: Action Cable. Who's using Action Cable here? Anyone? Again, about half the audience — that's great, more and more people are using it. The first time I gave a talk about cables, almost two years ago, it was just me, nobody else. The situation is much better now. That's why I'd like to talk about Action Cable: it's the most popular cable in Ruby nowadays. And I actually like it — I like the way it's designed. It's a good Rails framework from the API and framework-design point of view. Not the other parts.
Just to remind you what Action Cable is: the framework consists of several parts. First, Action Cable provides a server part — the part responsible for handling WebSockets. Then we have a broadcaster part — the part responsible for sending messages to clients. It implements the publish-subscribe pattern, so we don't send a message to a particular client directly; we use named channels, called streams in Action Cable for some reason. And to manage all this in our application code, we have the channels framework: an abstraction to manipulate WebSocket connections and give your clients access to business logic. A channel is something like a controller in the request-response world, but for WebSockets — it plays the same role, just for persistent connections. What's good about Action Cable is that it's really easy to build a working application in five minutes. It's really good at that: five minutes and you have a working chat. Could you have imagined this five years ago? Ten years ago? Probably not. Even nowadays most people think WebSockets and that kind of stuff is difficult; it has never been easier than with Action Cable. So that's the simple-application story. But how does it scale? You know, this year's RailsConf was focused on how Rails scales — "Rails 6: scalable by default" — and there was nothing about Action Cable. So I'd like to cover that here, because scaling Action Cable is hard. Let me first show some benchmarks related to cables. What do we want to measure? Two things. First, real-time characteristics — one such characteristic of a cable is how fast it can broadcast data to clients. Second, resource usage — that's the more crucial part. But let's start with the first one.
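Before the benchmarks, here's roughly what the channel behind that five-minute chat looks like. This is a hedged sketch: the class and stream names are made up, and the stub base class stands in for `ApplicationCable::Channel` so the snippet runs outside Rails — in a real app you'd inherit from `ApplicationCable::Channel` directly.

```ruby
# Stub standing in for ApplicationCable::Channel, only so this example
# runs outside Rails. In a real app, Action Cable provides this class.
class StubChannel
  attr_reader :streams

  def initialize
    @streams = []
  end

  def stream_from(name)
    @streams << name
  end
end

# A channel is like a controller, but for a persistent connection:
# #subscribed runs when a client subscribes to the channel.
class ChatChannel < StubChannel
  def subscribed
    stream_from "chat_room_1" # subscribe this client to a named stream
  end
end

channel = ChatChannel.new
channel.subscribed
channel.streams # => ["chat_room_1"]
```

The real API surface is about this small, which is why a working chat takes minutes.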
For the first benchmark I used an existing benchmark suite called websocket-shootout. Let me explain what we measure with an example. Suppose we have a live comments feature on the conference site for the live stream of RubyConf. Thousands of people are watching the keynote, of course, and they want to publish comments and share their ideas. Once one person publishes a comment, we want the server to transmit this comment to every connected client. The time it takes the server to do this job is what we measure in this benchmark — the latency of delivering a message to all clients. We want this time to be as small as possible, because if you deliver a message 10 seconds after it was published, it's not real-time anymore: it no longer relates to what's going on on the screen, for example. Let's see the results, against implementations in Go and Erlang — very simple ones, doing the same thing, the same functionality, just in different languages. The thing about Action Cable is that, well, it doesn't do great. The more clients connected to the stream, the higher the latency — it's an almost linear function. At 10,000 clients, 10 seconds of latency. Not real-time, right? Not good. That doesn't mean we shouldn't use Action Cable — it would be easy to stand here and say "don't use it", but it's a bit more tricky than that. Of course you can use it; it depends on your use case. I call it the cable theorem: with Action Cable, you either use not-so-crowded channels — channels without that many subscribers, say under a thousand or two — or you accept high latency. If you don't care about latency, you can use Action Cable without any problem. The second part of this theorem is not so positive: no matter how you use it, resource usage is going to be high.
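To see where that linear latency comes from, here's a toy pub/sub hub in plain Ruby — not Action Cable's implementation, just the shape of the work: one publish triggers one delivery per subscriber, so with a single event loop, delivery time grows linearly with the subscriber count.

```ruby
# Toy pub/sub hub: named streams, each holding a list of subscriber callbacks.
class Hub
  def initialize
    @streams = Hash.new { |hash, key| hash[key] = [] }
  end

  def subscribe(stream, &handler)
    @streams[stream] << handler
  end

  # One broadcast does O(N) work for N subscribers: that loop is exactly
  # where the linear latency on the benchmark chart comes from.
  def broadcast(stream, message)
    @streams[stream].each { |handler| handler.call(message) }
  end
end

hub = Hub.new
inbox = []
3.times { |i| hub.subscribe("comments") { |msg| inbox << "client #{i}: #{msg}" } }
hub.broadcast("comments", "great keynote!")
inbox.size # => 3
```

The faster runtimes aren't doing less work per message — they're just doing each delivery much more cheaply, and in parallel.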
And that's about CPU usage — that's what the server monitor looks like when you're running Action Cable and doing some stuff: kind of under pressure. And memory — we're going to focus on memory much more in this talk. As you can see, to handle the same number of active clients, the Action Cable Ruby solution needs much more memory than servers written in other languages. But those are benchmarks — not the best way to evaluate a technology. We need some real stories, and here's one I'd like to tell. This story is actually scarier than the benchmarks. The project is called Equipe — equipe.com: a platform which provides live statistics and live results for equestrian shows. Horses, sports, something like that — I don't really understand it, honestly. A lot of people in Europe watch it — it's a Swedish project, and about 10,000 people every weekend follow these textual broadcasts of the shows. To handle that many people, the project used to run more than 20 dynos on Heroku — 20 gigabytes of memory just to handle 10,000 clients — and they still experienced problems. That's much worse than what I saw in benchmarks. Why? What's the problem? Why is real-life memory usage even worse than in benchmarks? I tried to think about it and did some investigation; let me walk you through my assumptions. First, let's talk about how we implement WebSockets in Rack-based applications. We have this thing called rack.hijack. Rack is the interface every Ruby web application uses — a common interface that bridges the web server, which handles connections and all the HTTP stuff, and your application. It defines the format of data input and output, and it was designed for request-response communication, so it's kind of synchronous. Then, about four or five years ago, rack.hijack was introduced.
rack.hijack allows you to hijack the underlying IO object — the socket — and use it in your application, managing it yourself rather than within the web server. This trick lets us build WebSocket servers within Rack without running a separate process, like we did before. But that comes at a price: it hurts our performance and our memory consumption, because we have to run a separate IO loop to handle the sockets, and we have to do low-level stuff like parsing the WebSocket protocol ourselves. So there is definitely an overhead. Let me come back to this quote for a minute — it's from an article by the author of iodine and plezi, who is doing great stuff. Who knows about these tools? Just one person? That's strange, because these two are probably the future of Ruby web servers. iodine is like Puma, but more efficient and more performant, and it has some great features for real-time: it uses its own protocol instead of rack.hijack, which is much better because you don't have to handle any low-level stuff in your code. And plezi is a web framework built on top of it. If we run the benchmarks against iodine, its real-time performance is almost the same as Go's, and its memory usage is much better too. And the thing is, the way iodine handles WebSockets could become the standard in a future Rack. There is already a pull request proposing this kind of interface for Rack. It's implemented by iodine and by another server whose name I'm not sure how to pronounce — maybe Agoo; it's a Japanese word for some kind of fish. (Oh, you know about this framework? Sorry, I'm not that familiar with it.) The idea is to replace rack.hijack with rack.upgrade — that's a work-in-progress name — which gives you access to a kind of abstract IO object, not the socket directly. You implement some callbacks, and you can do real-time.
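The hijack API itself is tiny. Here's a hedged sketch of a Rack app performing a full hijack — not a complete WebSocket server, since after the hijack you own the raw IO and all the protocol parsing yourself. The simulated environment at the bottom (a no-op hijack proc plus a `StringIO` standing in for the socket) is purely for illustration; a real hijack-capable server like Puma sets these keys itself.

```ruby
require "stringio"

# A Rack app that takes over the socket when the server supports
# rack.hijack. After calling the hijack proc, the raw IO is ours:
# the web server steps aside and we run our own IO loop.
class HijackApp
  def call(env)
    if env["rack.hijack"]
      env["rack.hijack"].call       # full hijack: steal the socket
      io = env["rack.hijack_io"]    # the underlying IO object
      io.write("hello from the hijacked socket\n")
      # ...the WebSocket handshake and frame parsing would happen here...
      [-1, {}, []]                  # response is ignored after a full hijack
    else
      [200, { "Content-Type" => "text/plain" }, ["no hijack support"]]
    end
  end
end

# Simulate a hijack-capable server: a no-op hijack proc and a StringIO
# standing in for the real socket.
fake_socket = StringIO.new
env = { "rack.hijack" => -> {}, "rack.hijack_io" => fake_socket }
status, = HijackApp.new.call(env)
status # => -1
```

Everything after `env["rack.hijack"].call` — the handshake, the framing, the IO loop — is exactly the overhead the talk is complaining about: it all runs in your Ruby process.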
But all the low-level stuff is handled by the web server, and it can be much more efficient. There is also a pull request for Rails adding support for this new work-in-progress Rack API to Action Cable — just a proof of concept that the API works. I tried it, and it turned out it doesn't help much here: for some reason, hijacking is not the biggest problem of the current Action Cable implementation. So we have to investigate further. My second thought was: okay, with Action Cable we have long-lived objects. Every connection initializes a bunch of Ruby objects — almost a thousand of them, which is almost 60 kilobytes of memory just to open a connection. That doesn't include subscriptions, message passing, all that stuff. What's wrong with long-lived objects? My hypothesis was that they cause heap fragmentation. I captured a heap dump from a running Action Cable instance once, and I was a little surprised. How to explain this picture: every vertical line is a page of memory allocated by Ruby; red dots are occupied slots — a live object lives there — and blank spaces are unoccupied memory. What we see here is fragmentation: a phenomenon where we have a lot of allocated memory, but it's not used efficiently. We have these strange blank stripes — probably there were connection avalanches, clients disconnected somewhat later, and we were left with these empty spaces. That's probably the problem. We know there is work going on to solve it. But this particular snapshot shows that we waste about 60% of memory for nothing. So, we've started talking about future improvements to Ruby — let's talk about that a little more. I once had a discussion in The Complete Guide to Rails Performance Slack — a very good community, a lot of people focused on performance.
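By the way, a heap dump like the one behind that fragmentation picture takes nothing but the standard library: `ObjectSpace.dump_all` emits one JSON document per live object (address, type, size, and so on), and a visualization can then plot those addresses page by page. A minimal sketch:

```ruby
require "objspace"
require "json"

# Dump every live object in the process as one JSON line each.
# In a real investigation you'd do this inside the running app server
# (e.g. via rbtrace or a signal handler) and analyze the file offline.
dump = ObjectSpace.dump_all(output: :string)
lines = dump.lines

# Each line is a standalone JSON object describing one heap entry.
first = JSON.parse(lines.first)
first.key?("type") # => true
```

Grouping the entries by their page-aligned addresses is what turns this raw dump into the red-dots-and-blank-stripes picture from the slide.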
And Nate, the author of the book, asked me: what do you think about Ruby 3 — would it be useful for Action Cable, for example? That's an interesting question, and the answer turned out to become part of this talk. I don't have to read the whole thing here; I'll go through it quickly in the slides. The last thing mentioned there, which relates to the heap fragmentation we just saw, is the generational hypothesis. It's a hypothesis common to language runtimes with generational garbage collection, and it states that most objects die young — they do not survive many garbage collection cycles — so there is a young part of the heap which is collected more often. Long-lived objects are kind of excluded from this rule: they survive a lot of cycles, but they do eventually get collected, and the space they were allocated in is not freed immediately — we have to wait for a full cycle. I'm not the person who's best at GC internals, but I think somewhere around here lies the truth of why we have this problem. And as I already said, compacting GC — if it becomes part of Ruby 3 — will be really helpful for this kind of application with long-lived objects. Another thing to think about: okay, we're going to have Guilds, and Guilds are great — they're like goroutines in Go, right? Spoiler: or not. Let's review what we have right now: we have threads, and we have fibers. The first question is: what the hell is a fiber? Who uses fibers? Anyone? Okay — a few more people than those who know what iodine is. That's interesting. How to explain what a fiber is: if you know what a thread is — threads run Ruby code concurrently, and Ruby switches between threads automatically, so you don't have to care about when execution moves from one thread to another — a fiber doesn't have that. You have to do everything manually.
So a fiber is a very, very simple abstraction for doing concurrency. And a thread actually kind of includes a fiber — inside every thread there is an internal fiber. What we want to know about fibers and threads is that to start a new thread we need to allocate a stack for it, and for now the stack size is fixed: one megabyte for a thread, and 128 kilobytes for a fiber. And where is Guild in this picture? Guild is the next level up: it contains threads, and threads contain fibers (we don't care about the fibers too much right now). Now, coroutines and Erlang processes are used for real-time like this: for every WebSocket connection, you spawn an Erlang process or run a goroutine — that's usually how it's implemented in existing frameworks. If we wanted to use Guilds the same way, running a Guild per connection would mean allocating one megabyte of RAM just for one connection — that won't help our real-time application from the memory-usage perspective. And actually, Guilds are not about concurrency. Guilds are for parallelism; they're going to replace processes: instead of forking processes, we can use Guilds and utilize all the cores available on the machine. That's what Guilds are good for — not concurrency. Some good news — okay, let's skip this: dynamic stack sizes are very early work, and I'm not sure they're going to be ready by 3, but that could help. Another good thing is the threadlet: something in between threads and fibers. I'm not sure about the stack size for threadlets; I think it should be the same as for fibers, so pretty small — much smaller than a thread's. What's the difference between a threadlet, a thread and a fiber? As I said, it's something in between, as its position here shows: a threadlet is a fiber, but it switches automatically on IO operations — when reading from or writing to a socket. So it's a semi-automatic fiber.
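The "everything manual" part of plain fibers is easy to show: a fiber only runs when you `resume` it, and it hands control back with `Fiber.yield` — contrast that with a thread, which the VM schedules for you without being asked.

```ruby
order = []

# A fiber never runs on its own: each #resume runs it until the next
# Fiber.yield, then control returns to the caller.
fiber = Fiber.new do
  order << :fiber_part_one
  Fiber.yield              # give control back to the caller, mid-body
  order << :fiber_part_two
end

fiber.resume               # runs up to the Fiber.yield
order << :caller
fiber.resume               # runs the rest of the fiber body
order # => [:fiber_part_one, :caller, :fiber_part_two]

# A thread, by contrast, is scheduled automatically; we only wait for it.
thread = Thread.new { order << :thread_done }
thread.join
```

A threadlet/auto-fiber would sit between the two: explicit like a fiber, but with the switch on blocking IO happening for you.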
Actually, the second name for threadlet is auto-fiber — I'm not sure what the final name will be. There's a feature request in the Ruby bug tracker; you can check it and follow along. The combination of all these things — Guilds to replace processes (which also saves a bunch of memory), threadlets to handle concurrency, and of course compacting GC — will definitely improve the performance of real-time applications in Ruby. Well, we'd have to rewrite everything using these new things, but eventually we'd have really good performance. Somewhere in 2020, maybe — two years from today. I don't know. Are you going to wait, or not? Let's take a break and think about it a little. So, I'm thinking — what am I going to say on this slide? Yeah. Okay. Now I'm ready. So far we've talked about how to improve Ruby to be as performant as other technologies. But the question is: do we need to do that? Ruby is not an exclusive language, right? We can use Ruby with other languages — or, even better, we can make other languages serve our needs. Why not? Why should we chase performance when there are already performant languages? We just have to learn how to use them with Ruby. We keep writing our application code in Ruby, but still run performant applications. And that's the last part of the talk — we have plenty of time for it. Cool. It's about a project called AnyCable, which is an implementation of exactly this philosophy: do not replace Ruby with anything — combine, and build a kind of hybrid application. So what is AnyCable? It's not a single gem or library; it's a collection of different libraries, services and — how do you say it — tools, which lets you use logic-less servers written in other languages to handle all the hard work, while keeping all the interesting work — the application code — in Ruby.
If you remember the parts of Action Cable — server, broadcaster, channels, clients — the problematic part is the server. It's the weakest part of our stack. So we can move it somewhere else, and that somewhere is called AnyCable; AnyCable is responsible for that part. It looks like this: you have a separate WebSocket server instance which handles all the connections and serves the WebSocket clients, and you still have your Rails or Ruby application doing the request-response things. These two are connected to each other somehow, because you want to access your business logic — your channels — from over there. Why do you need this? You can build WebSocket RPC, or replace your controllers with WebSockets, why not — there are such projects — or render parts of views synchronously. There are a lot of use cases where you want access to your Ruby application, to keep all the business logic in one place and not duplicate it between different services. So it's not about microservice architecture, actually: the WebSocket server is not a microservice — it acts like a proxy, for example. It plays the same role here. The question is how to manage this communication. When I first started working on this project, I thought about building my own protocol on top of TCP, maybe some binary protocol for messages — and I'd never have finished, I think, if I'd decided to do that. Luckily, such a thing had already been built — and the good thing is it was built at Google, though they don't say whether the "g" is for Google or for something else. And it's really good: gRPC, a universal RPC framework. What does universal mean? It means you can write clients and servers in different languages from a single definition file — you auto-generate the client and server code, and they communicate with each other. It's really good technology. From a technical point of view, it's just HTTP/2 combined with protobuf for data serialization.
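For flavor, here's what such a definition file looks like. This is a hypothetical sketch in the spirit of AnyCable's RPC service — the method and message names are illustrative, not the actual AnyCable `.proto` file:

```protobuf
// Hypothetical gRPC service in the AnyCable spirit: the WebSocket proxy
// calls Connect when a client connects, and Command for channel actions.
syntax = "proto3";

service RPC {
  rpc Connect (ConnectionRequest) returns (ConnectionResponse) {}
  rpc Command (CommandMessage) returns (CommandResponse) {}
}

message ConnectionRequest {
  string path = 1;
  map<string, string> headers = 2;
}

message ConnectionResponse {
  string status = 1;
  string identifiers = 2;
}

message CommandMessage {
  string command = 1;
  string identifier = 2;
  string data = 3;
}

message CommandResponse {
  string status = 1;
  repeated string transmissions = 2;
}
```

From one file like this, the gRPC toolchain generates a Go client for the WebSocket server and a Ruby server stub for the application side — which is the whole point of "universal".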
All you need, as I said, is a definition file describing your service. Run a script using the gRPC build tools and you get a library which acts as that server or client. That's the good part. So the final diagram looks a little complicated — yeah. It's not as simple as Action Cable from the infrastructure point of view, because you have to run at least three processes: one for handling RPC, one for handling WebSockets, and still your usual Rails or Ruby application to handle HTTP requests. But the value we get for this slightly more complex infrastructure is really good. So, let's talk about benchmarks again. WebSocket servers: there are two implementations of AnyCable-compatible WebSocket servers — anycable-go, written in Go, and erlycable, written in Erlang. The second one is kind of my playground for experiments, so nobody but me uses it — I don't think anyone wants to use Erlang, for some reason; everyone is afraid of Erlang. So anycable-go is your choice if you want to try it. Benchmarks: no surprises here. The performance of anycable-go is almost the same as that of a very simple Go application, and the memory usage is really good. The CPU usage charts are beautiful — I like these slides because we can compare how the Erlang chart and the Go chart differ. If you look closely, Erlang's CPU usage is really, really uniform, while Go's has some artifacts. The conclusion: the Erlang virtual machine is still the best in the world at concurrency — but Go is not that bad either. And coming back to the real-life example, that project: after suffering with Action Cable and a lot of dynos, I helped them switch to AnyCable. It was actually pretty easy. The number of dynos decreased fivefold, the number of gigabytes decreased tenfold — and so did the number of bucks per month.
That's pretty good, I think, for a small project like this. A real-life example. Why is AnyCable so efficient — why don't we have memory issues? Apart from the Go application handling most of the hard work, another thing that distinguishes AnyCable from Action Cable is that we do not have long-lived objects. We do still have a Ruby part, and it's a very interesting part: from the Action Cable point of view, nothing changes. You don't have to write anything new — you use the same channels as with Action Cable. We make Action Cable think it's working with real sockets, but these are not sockets: they're temporary objects which quack like sockets. It's Ruby duck typing in action. It can be a little tricky, but it works — and it works surprisingly well. Another question many people ask me: isn't this connection between the WebSocket server and the RPC server over HTTP/2 a bottleneck? How performant is it? It turns out it's very performant. A no-op RPC — a plain Ruby gRPC server doing nothing, just accepting a request and responding with a predefined response — handles more than 5,000 requests per second. That's pretty good. AnyCable is about twice as slow, because we have to deal with all this quacking stuff, initialize objects, and so on. But I'm not sure it's realistic to hit that limit — thousands of requests per second is a lot. And if you do want to hit the limit, we have an option for you. I won't talk about it much here because it's very technical, but let me tell you the idea — it's not clear from this slide, unfortunately. The idea is the following: most channel actions — subscribed callbacks and perform actions — are logic-less.
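Before moving on — that "quacks like a socket" trick, in miniature. This stub is illustrative, not AnyCable's actual class, but it's the same duck-typing idea: the caller only ever needs an object that responds to the socket methods it uses.

```ruby
# The transmitting code just writes to "the socket"; it never checks the
# class. So anything responding to the right methods can stand in.
class FakeSocket
  attr_reader :transmissions

  def initialize
    @transmissions = []
  end

  def write(message)
    @transmissions << message # collect instead of touching a real network
  end
end

# Hypothetical caller, standing in for code that writes to a real socket.
def transmit_welcome(socket)
  socket.write('{"type":"welcome"}') # the caller can't tell it's a fake
end

socket = FakeSocket.new
transmit_welcome(socket)
socket.transmissions # => ["{\"type\":\"welcome\"}"]
```

The caller can't tell the difference, the fake object lives only for the duration of one RPC call — and that's why there are no long-lived objects. Now, back to those logic-less channel actions.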
If you're just streaming from a channel or doing some broadcasting, you don't have to touch your database, for example, or other parts of your application code. The logic lives behind the channel. So maybe we can execute this within the Go application, without doing RPC calls at all. And yes, we can. Here is where mruby comes on stage: we can embed Ruby not only into small devices but into other languages, such as Go. The idea is to grab the simple channels from the Ruby application, compile them with mruby, and execute them within the Go service. I call it mruby cache — it's not actually a cache; maybe "inlining" is a better name. This way we can avoid calling RPC at all, which could be very effective if there's latency between your WebSocket server and your RPC server. You don't have to run them on the same machine — and if you do, it doesn't make sense; but if you run them in some cloud environment where you don't know how the network behaves, it makes sense. The work is still in progress — I wanted to release it, but while running benchmarks I got some segmentation faults, so I have to investigate. mruby is actually C, so I have to learn a little bit more. You can check out talks on the subject — there are some examples of how to use Go with mruby; it could be interesting. A little bit more about AnyCable: despite being complex from the infrastructure point of view, it's really easy from the Ruby point of view. You just add a gem, configure the subscription adapter in your Action Cable config, and run a few services — at least three, or maybe two if you use some hacks. We think it should be used only in production: since it's compatible with Action Cable, you can still use Action Cable in development. We have some tricks to ensure compatibility, because not every piece of functionality is supported.
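Those "few services" come down to something like this — the command names are the conventional ones from this era of AnyCable, and any flags or ports are illustrative, so check the AnyCable docs for the current form:

```shell
# 1. The gRPC server wrapping your Ruby app's channels:
bundle exec anycable

# 2. The logic-less Go WebSocket server holding the connections:
anycable-go

# 3. Your usual app server, for plain HTTP requests:
bundle exec rails server
```

The Ruby code you write stays plain Action Cable; only the processes around it change.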
We have compatibility checks — runtime checks — so you can make sure your code will run the same way with AnyCable. As a side effect of using a separate WebSocket server, we get some interesting features. One of them is zero-disconnect deployment. What happens when you redeploy your code with Action Cable? You have to disconnect all the connected clients, because you have to reload the whole server — you can't deploy otherwise. If you have a large number of connections and you disconnect them all at once, they all try to reconnect at once. When I discussed this with DHH, he told me they had this problem at Basecamp and called it a connection avalanche: when a lot of clients try to connect simultaneously, there's still a request queue to serve them, maybe some race conditions — it can just kill your server. Surprisingly, AnyCable solves this issue. It wasn't intentional — we just can't have the problem. Since the WebSocket server is logic-less — it doesn't contain any application logic, it's just a proxy — you don't have to restart it on every deployment. On the RPC side, you can add a load balancer such as Envoy, which supports gRPC, and if you have at least two RPC instances you can implement a rolling update. A lot of DevOps work, maybe — we plan to release our Helm charts for Kubernetes soon; we're running it on Kubernetes ourselves. So you can have disconnect-less deployments: your clients needn't disconnect. There is one caveat: if you change the structure of your connection identifiers — major updates — you'll have to disconnect clients anyway. But in most cases you shouldn't have to. Another side effect is that we added comprehensive analytics and instrumentation support to AnyCable, so we can finally know how many clients we have, how many messages they send, and a lot of other useful stuff — for customers, for ourselves, whatever. That's our dashboard. Not so many clients, sorry.
And another thing: it's Rails-free, actually, so you don't have to use it with Rails; you can use it with other frameworks. To use the same channels framework, we have Litecable — a Rails-free Action Cable implementation, actually dependency-free. It doesn't have all the features, but it has most of them, so you can use it in a plain Rack application and connect it to AnyCable. And something new: have you heard about GraphQL and its subscriptions feature, which doesn't work well for some reason? In the AnyCable repo there is an issue related to GraphQL subscriptions where a colleague of mine explains what's wrong with the stock implementation, and then how to make a GraphQL implementation compatible with AnyCable. We use it in production right now, and it's good. So that's something new from us. Yeah, we're out of time, and I'm actually finishing the talk. The new release came out today — I do conference-driven development, trying to release something for every new conference; it motivates me to work on open source. We also built a beautiful site with documentation and a lot of useful stuff. And — it's probably clear by now that AnyCable is built by Evil Martians — you can hire us to build your real-time solution. To make that more clear, we added a page called AnyCable Pro. Actually, there are no "pro" versions — we just provide support, or help you in any other way you want. One more slide, just to sum up the ideas: real-time in Ruby is possible, but for now we should enhance our Ruby with something else — C-based web servers, other languages, bridges, whatever. And Ruby 3 — we're waiting for it; I hope AnyCable will become useless one day. Unfortunately, not yet. So thank you, guys — I have some stickers — and let's go listen to... what?