Well, the door is closed, so I guess it's time. Fantastic. Thank you all for being here. I'm Lorenzo Mangani from QXIP, the company behind Homer, sipcapture, the HEP protocol, and a few other things. A good part of the team is here with me today; FOSDEM is basically the only occasion over the year where we actually meet. And hey, I see a lot of friends in the audience, so thank you for being here.

Really quickly, because we don't have a lot of time today: we're going to present some news about our project, specifically Homer 7, and how it finally rejoins the RTC community.

A little big picture of our stack, which has been evolving over time, for those of you who are not familiar with Homer. Homer is a collector which is purposed for tracking and generating metrics, statistics, time series and indexing out of data. Originally we were SIP-centric, a very SIP-focused platform. (This is not scrolling, right? Well, okay.) The platform was initially created to track SIP, then it evolved to also track RTP and RTCP statistics, logs and so on and so forth, until RTC became popular and we had to rethink everything all over again, which is what really happened in Homer 7.

Homer 7 is really an evolution of the project which tries to get rid of some of our design limitations and open the doors to whatever is coming next, so we don't make the same mistakes again. We dropped being SIP-centric. We dropped being tied to specific databases. We dropped our internal way of doing statistics. We dropped the way we were doing correlation. We basically threw it all away: we kept the concept of Homer and we tried to rethink how it works internally.
So instead of having, you know, statically mapped protocols, now we have custom ones, so anybody can take our platform and define a new protocol. Of course SIP is going to be there already, but we did this to prepare for, really, events: events coming from Janus, mediasoup, and now also Jitsi can be indexed and can be correlated with other protocols. It becomes a generic platform for collecting data flows and events.

So, we threw it all away. The back end was originally done in PHP and it was a bunch of excellent garbage that we couldn't maintain anymore, so we threw it all away and we redid it in modern technology. We picked Node.js just because there are a lot of people that are good with it, so we can get help and we can get contributions.

Again, it's designed to support any protocol, so this is going to be a generic collector. Of course it's tailored to voice and real-time communication, so it comes already with some concepts that maybe other platforms don't have in terms of tying things together and correlating.

It's integration-ready. We no longer force people onto our little island, "come and see the statistics that are made for you"; rather we do the opposite. We open it up so that we can send and stream whatever the platform is processing and generating to pretty much anything that's out there, and I'll show you some of this.

We went standalone. The previous versions of Homer could only be assembled using OpenSIPS and Kamailio as shells for the capture functions.
This is no longer the case: we have our own independent capture servers and agents. We have two branches: one is in Go and it's tailored for performance and stability; the other one is in Node.js and it's tailored for hacking. I'm working on the second one, of course.

And then we tried to turn the project into a building block for other people to do the same thing we did in their platform. Homer, until the previous version, was something that you had to install and use, and that was it. Now we want to allow people to just use, let's say, a headless version of Homer where they provide the API, they provide the, sorry, the visualization or the integration. Maybe they want to capture some stuff and use it in a customer portal; they don't need our entire stack, they only need specific elements of it. So we broke it all down.

And this is more or less how it looks today, left to right. On the left, starting from the classics, we have all of the VoIP platforms that we support natively, and of course this doesn't include anything that you can port-mirror or just SPAN to the platform, that doesn't matter. Asterisk, FreeSWITCH, Kamailio, OpenSIPS, RTPEngine, RTPProxy: they all come with native HEP support, so you flip a switch and they're sending data to our platform. We now also have a hardware set of agents that we use for normal sniffing, statistics, lawful interception, a bunch of purposes, which is an ongoing partnership with Kubro. And then we have all of the newcomers: Janus, mediasoup and Jitsi Meet are now finally first-class citizens in the platform.

All of this splits between two main sockets. One is the HEP one that you're probably already familiar with, so that's our encapsulation protocol, which is native in those platforms that we mentioned. The other one is just a generic JSON socket that we use to receive whatever you send to it.
Inside this socket we have processors, or pipelines, that are modular, where we can decide: hey, here's a new type of report that Jitsi Meet is going to send from the browser, and from those reports we can go and tap out any data. This is completely dynamic; it's not something that's hard-coded, because those events are not based on specs. They might change: maybe next week there's a new metric being shipped out by one of these products that you want to make use of, so we tried to make it easy for this to be possible.

Once data makes it into Homer, it gets processed, indexed, prepared for correlation, and then it splits into the three main groups that you see on the right. TS is for time series, Data is for whatever we index and make searchable, and Logs are logs. Those are, let's say, ordered based on complexity; there are different use cases and different types of data that we want to handle, and now they're all separate. So we have those three nice pipelines: with the same event we can choose, let's say, to pick those two properties and store them in the database for search, and take those three metrics and send them to Prometheus or InfluxDB or whatever we want. And you can mix all of the above.

What we really wanted is to finally give people the choice, the option to do what they want. Why? Because we find that a lot of our users and customers already have some of those systems. They already use them, they're already tracking system load or some other application's status and events somewhere, and we just want to make it easy to plug into those platforms, be it metrics, logs or whatever else. So you see here on the list: InfluxDB, Prometheus and Elasticsearch are mainly the three most popular targets that we have for statistics, but Elasticsearch, for instance, can take whole documents.
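The three-way split just described could be sketched like this: from one parsed event, elect a couple of properties for the search index, a few metrics for a time-series backend, and keep the raw event as a log line. All field names here are hypothetical, not Homer's actual mapping schema.

```javascript
function splitEvent(event) {
  return {
    data: {                                 // searchable index (e.g. Postgres/Elasticsearch)
      session_id: event.session_id,
      type: event.type
    },
    ts: [                                   // time series (e.g. InfluxDB/Prometheus)
      { metric: 'rtt_ms', value: event.payload.rtt },
      { metric: 'jitter_ms', value: event.payload.jitter }
    ],
    logs: JSON.stringify(event)             // raw log line (e.g. Loki)
  };
}
```

The point is that the same event fans out to all three pipelines at once, and an integrator picks which outputs actually get shipped anywhere.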
So you could build a Homer where you just send everything to Elasticsearch, and that's all you have: you don't have a UI, you don't have any of our databases, you just stream it all there and hopefully mix it with other data that you have. Same for time series, and so on and so forth.

We'll get into more detail. Internally it's all simplified. We have those sockets; we come with those two initial ones, but hopefully people will contribute more or extend them. We have the API, which is mostly used for connecting to the user interface, the clients and so on. And then the two main features that I mentioned before. You know, this is just fresh out of the oven, so I decided not to go into a lot of technical detail, but rather explain how they work.

Proto mapping is the stage where we define what comes into the platform from either of the sockets. Once we parse everything it becomes a JSON object, and from this JSON object we can start deciding what we're going to use for what. Maybe there's a specific header that we're going to elect for correlation; maybe there are specific headers that we want indexed in a certain way, that we want to be wildcard-searchable, or whatever. The integrator will decide for the majority of cases.

And on top of these proto-mapping rules we have correlation rules. For those of you that used the old Homer, correlation was sort of a static concept: you were going, for instance, from a Call-ID, trying and adding suffixes and postfixes, or maybe extracting an X-CID, to find what was the next step. In Homer 7 this is a cascading concept: a SIP leg could find another SIP leg, this other SIP leg could find a log, this log could find media statistics, and so on and so forth. Of course it can be a circular concept, if that's the case, but you get the gist.
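A sketch of that cascading correlation idea, under assumed rule and field names (none of this is Homer's real schema): each rule says "a record of type A links, via this field, to records of type B matching that field", and matches are followed recursively, with a guard so circular chains terminate.

```javascript
const rules = [
  { from: 'sip', field: 'call_id',    to: 'log',   match: 'cid' },
  { from: 'log', field: 'session_id', to: 'media', match: 'sid' }
];

function correlate(store, start, seen = new Set()) {
  const key = `${start.type}:${start.id}`;
  if (seen.has(key)) return [];              // circular chains stop here
  seen.add(key);
  let found = [start];
  for (const rule of rules.filter((r) => r.from === start.type)) {
    const linked = store.filter(
      (rec) => rec.type === rule.to && rec[rule.match] === start[rule.field]
    );
    for (const rec of linked) {
      found = found.concat(correlate(store, rec, seen));
    }
  }
  return found;
}
```

Starting from a SIP record, this would pull in its logs, and from those logs the media statistics, one hop at a time.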
We just want each match to be able to find more information and bring it all together.

There is of course a new user interface to leverage all of the above. It's based on HEPIC, which is the commercial version of our stack, so this brings a whole lot of new features into the experience. The old Homer 5 interface, I think, is the main reason why we didn't get a lot of contributions. It was homemade, it was a mess, it was something only we could understand, and not always: there are parts that me and Alexandr were going back to that, you know, we also couldn't read. So we decided to take it all, throw it in the bin, and have somebody redesign this properly. This is now a proper UI where a web developer can come in and, within hopefully 10 to 15 minutes, make the extension that they want without getting into our minds; I mean, our code is probably the issue here. So this is no longer the limitation, other people are working on this, and it makes a lot of sense.

It's pretty much the same thing: we run a time-based, or time-range, application. We have widgets, which are mainly useful for displaying charts or looking for data. The latest addition that you see here is Loki support. Homer is now a full-fledged Loki client, so we can search using the tags and such, the same way that Grafana does. In the latest release Homer is already capable of doing so: you can send SIP, or logs, or whatever it is that you want to store as a simple log with labels, to Loki, as opposed to the main database. So we can now even create different storage types for the data that we send, and the beauty of it is that Loki of course is providing labels, so the search is dynamic: as you type you get all of the options, and it's really easy to get data together. Of course logs correlated to SIP, SIP correlated to logs.
So it doesn't matter where you start, you're always going to end up with the full circle of information.

Time-range selection is just like the previous version: everything that you have on a dashboard, everything that you're doing, is based on a range that you have selected or that you're changing as you go. A typical case: what's happening in the last 24 hours, oh, there's a spike, let me zoom into that. What is the spike from? Going back to the log messages and all the statistics that are behind it.

We mentioned custom protocols and statistics, so this is a zoom into how those widgets are configured. We currently support InfluxDB and Prometheus, so the Homer UI can display data straight off those two platforms without mirroring, without doing anything: we write to them and we read from them in terms of displaying. Multi-select, I mean, this works pretty much just like the originals: the InfluxDB widget is loosely based on how Chronograf works, the Prometheus one, of course, on how Prometheus works.

Widgets for search, specifically, can be fully customized. If we're talking about SIP, we're going to have all of the standard SIP headers. But what if we're talking about Jitsi events? What do they have, what are the headers that we care about, and do we really care about the same ones? Users, or integrators, here can choose what they index, as we said, and of course everything that they index becomes available in this dynamic form. So if you have 10 fields, any of your users can choose whatever fields they want to have on their desk every day, to make it comfortable.

Inside those widgets (I don't see the time, I'm going to check it, good), inside those widgets everything is again dynamic. You can see here an example of some protocols that we have in our test stack: we have SIP, we have logs.
We have RTP reports, which now can be indexed as searchable data if you have that need in your support case. We have the Janus protocol. I didn't make it in time to add Jitsi, but that's what we implemented just over the last few days.

Search results are dynamic tables: depending on what they are, they display different columns. When you select one of those rows, the system starts fetching all of the correlated legs and correlated information. So you start from a SIP session, you end up with maybe three SIP sessions, because they're correlated, plus some logs, and so on and so forth. I don't want to get into more detail about this because I guess everybody is pretty much familiar with the scope of this application.

We display everything as flows. This now includes anything that's captured: in previous versions this was only valid for SIP, or let's say network types of events; now it's everything that we capture. So in the flow you will see logs, you will see RTP statistics, you will see whatever you send to the platform that belongs there.

So, the juicy part: RTC time series. For this I'm going to zoom in to that previous image. How do we do this? Janus has its own type of events, and they're completely different from those of mediasoup, and sure enough completely different from those of Jitsi. What we didn't want here is to create an adapter for each, so the JSON socket that we have is simply able to recognize types by the properties that they're carrying. So yes, let's say for the Jitsi events, the way we collect them right now is not from the server but rather from the browser, using analytics.
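The property-based type recognition mentioned a moment ago could be sketched like this: rather than one adapter per platform, peek at which keys an incoming JSON event carries. The detection keys below are illustrative guesses, not the real discriminators.

```javascript
function detectType(event) {
  if ('janus' in event) return 'janus';                          // hypothetical Janus key
  if ('producerId' in event || 'transportId' in event) return 'mediasoup'; // hypothetical
  if ('conference' in event && 'endpoint' in event) return 'jitsi';        // hypothetical
  return 'generic';
}
```

Because detection keys are data-driven like this, a new event shape next week means adding a rule, not writing a new adapter.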
There's a function, I don't remember what it's called, where you can inject the library into your Jitsi client, just like they do for Google Analytics or callstats. We can take all of those events and, instead of sending them to somebody else's storage, we send them to our own, we index them the way we want, and we can make sense of them, mostly as time series, because it doesn't really make sense to keep any of those events as originals. But when you start extracting the round-trip time, the jitter, or whatever else, you can create really, really nice pictures such as this one.

So you can instantly start tapping into what your platforms are measuring. You can decide how you tag those platforms: is it going to be by IP? Are you going to give each its own name? Maybe it comes from signaling; maybe you have a specific field within the events that tells you who the customer is. All of this is now programmable, so hopefully any of you can go in and use it for the standard stuff, whatever is common within those platforms. But if you're developing one of those platforms, the best part is that you can start offloading some of these tasks directly into Homer.

This is an example of the same in Elasticsearch. As we said, now you're free to store whatever data you want wherever you want. By default we use Postgres for indexed data, but again, you want to send it to Elastic? Do it. You want to make statistics in another platform? Great. Freedom is really the key here.

That said, last few minutes: why should you use Homer, and Homer 7 specifically? The first reason is that it's now full-featured. The previous platform, by our own admission, had several limitations that kind of restricted the use cases. This is no longer the case, so we should be ready for anything: generating time series, indexing whatever you want. And we're vendor-agnostic.
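Coming back to the browser events for a moment, reducing raw analytics events to time-series points could be sketched as below: keep only extracted metrics plus a tag for grouping, and drop the originals. Field names (`rtt`, `jitter`, `customer`) are assumptions, not the actual report schema.

```javascript
function toTimeSeries(events) {
  return events
    .filter((e) => e.rtt !== undefined || e.jitter !== undefined)
    .map((e) => ({
      tags: { source: e.customer || e.ip || 'unknown' }, // how you tag is up to you
      fields: { rtt_ms: e.rtt, jitter_ms: e.jitter },
      time: e.timestamp
    }));
}
```

Points in this shape map naturally onto an InfluxDB or Prometheus write, which is where the talk says they end up.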
By now it's been many, many years, and I think we've worked with everything and anything out there. Some of our customers are incredibly large; we also follow tier-one networks, Fortune 100s out there. It's incredible, you know, how many phones large enterprises have, and finally they realize: oh, I have to monitor this stuff. So we've seen it all, and we decided to stay agnostic: we don't care about anything specific.

Compatibility is global. HEP-native, of course; this is a booster, because with all of the platforms that already support HEP you don't have to do anything to use it. And finally there's a data model behind this: Homer 5, again, was a little bit of an original concept, while this one is standardized, so we can support it more efficiently.

The use cases: of course packet capture, which was the previous one; RTC analytics, which is the new chapter that we're really focusing on, so talking to anything, troubleshooting, doing alerting and alarms based on all of this new data that we now collect, intersect and correlate; big data, because we can stream whatever we're extracting to anything; and finally machine learning, which is new. This data is really easy to use for something, and we have a different presentation for that that doesn't fit today.

Lessons learned with Homer and using Homer. There's a thin line between love and hate: your customers get pissed really easily, especially the big ones, so if you cannot offer them a good model, if you cannot understand what their problem is, they're going to leave for somebody who does. Early birds get to find the best bugs: if you start monitoring when you start designing, you don't have a problem to solve later, you already know the initial issues, and you don't force your customers to go through bugs that you don't even have visibility over. Complexity is the hardest thing to capture: the later you do it, the worse it's going to be when you go and try to capture an already-complex system that
spans over 12 data centers and already has 12 protocols going in ways that not even the engineers understand. Go and explain it to the monitoring guy, good luck; it's going to take weeks just to understand what the problem is. So doing it early pays off. And then, of course, the higher you fly, the harder you fall: when you have a lot of traffic and you start having issues, those cost money. So working on this early is really paying off for many of the customers that we're working with.

Installing the easy way: Docker. We have Docker containers for everything, and there are already recipes that mix all of the features, so hopefully you can go in there and find something that falls close to your use case and use it as a starting point. If you don't find it, open an issue and we'll do it.

How to support Homer: by using it, by talking about it, by letting us know why you don't use it sometimes, or what's preventing you from taking it where you need it. We need community input. We've been working really, really hard to keep everything open source and to keep it useful. Now other platforms are also able to use the HEP protocol to do the same thing that we're doing, and we're super excited about it, but we really need the community feedback to understand what we're doing right and wrong.

Are there any questions? I don't think I have any time. We've got two minutes, two minutes for questions. Does anybody have a question? And keep in mind, you get a shirt if you do. Yes?

Q: What's your specific usage for HEP, this transport protocol? It seems that it's mostly a transport for data analysis?

A: Well, what we were hoping for HEP was for it to be a generic transport that
anybody could use, to make a platform, to do the same thing that we're doing, or to make it easy for somebody to monitor a new product they're making. So what we're hoping for is that it continues, or becomes, a standard that more and more platforms endorse. We hope not to be the only ones using it, and I think in the next presentation we'll find out that we're not. I hope I make it there.

Q: What's the relationship between HEPIC and the Homer product, and the HEP protocol?

A: Really, HEPIC is something that we designed because some really large companies apparently really don't want any open source in the platform. So we kind of have two tracks: HEPIC is something that's more designed for the very large and complex networks, for people that need to spend money to do it; Homer does really the same, but it's fully community. They don't have any code in common; they're just using the very same encapsulation protocol, and they go for the same goal. There you go, thank you.

Q: My question is, what's the compatibility between Homer 5 and Homer 7 in terms of capture agents?

A: 100%. They still use the HEP protocol; nothing changes. Everybody that was, yeah, HEP 3, let's say: from HEP 3 up it's all the same. OpenSIPS remains the leading citizen in terms of variety, but anybody that sends HEP is able to continue. Cool? The difference is in the JSON socket: now we're also able to get those non-structured events and whatnot.

Those shirts are nice, come on guys. One more. There you go. Sorry, shirt, I'll bring you one.

Q: Is it packaged in some distribution?

A: Not yet, no, mostly because it's very modular, so the way that you assemble it makes it into many pieces, and hopefully you don't need all of them. So no, we don't have that yet. Actually, you know, we've never been huge fans of making static packages, also because it's different technologies, but we'll get there, hopefully as a contribution. Thank you. Well, you get it anyway. Thank you so much.