I think we can start, so let me introduce Fred. I hope everyone is looking forward to finding out something new about monitoring. We'll try.

Good morning, everyone. Thank you for making time on a Sunday morning. I know it's early. Everyone awake already? There was an entertaining talk just before, so that was good. So, I'll be talking about the issue of monitoring. I have a particular take on it, which might be interesting, but I'll cover general topics first and then go into two particular technologies that my group works on that have an unusual angle on this. The first part might be familiar and boring; if so, just make some yawning sounds and I will move on. If it's interesting, just ask questions anytime and we'll get this done.

So, everyone has heard the word monitoring, right? Everyone knows monitoring is a good thing. But sometimes I like to step back a moment and ask: why does monitoring exist? Why do we even need such a thing? If you think about it, all of monitoring is a way for a robot to do your job, in the sense that once you have written a program and it's running somewhere, you want to make sure it keeps running happily. And if something breaks, and it inevitably does, you want to be told that something is broken. So at the most basic level, it is there to warn you when something goes wrong, so that you don't have to spend your own attention noticing it and fixing it.

I'm sure everyone here is a programmer; everyone has seen hundreds of failure modes. They come in all kinds: not just bugs in your program, but hardware failures, network failures, security breaches where you have to take action. A huge gamut of problems, obviously, and there are too many to check manually. So again, we must delegate the checking to some kind of machine. In some cases the failures are very soft failures, in the sense that it's not that something is broken, it's that someone would like the software to sell more products. You know, a website that wants to sell 3,000 widgets a day instead of 2,000, and they're tweaking web server settings and so on, and they want to find out whether that has been successful or not. Sometimes the failure we want to monitor for is that soft.

So, how does one find out that something has gone wrong? Well, there are some schools of thought where you basically just wait for reports to flow in: you fire off a software package, launch it, deploy it, maybe with an A/B type deployment, but you basically wait for someone to tell you that something is wrong, and otherwise you just keep on and build, build, build. But normally the most responsible thing to do is to let a computer program monitor all the other computer programs around and tell you when something is wrong.

Again, all this is very basic, but it gets interesting in a second. Who can name more than five monitoring systems off the top of their head? I have only seven or eight there, but with a little more effort one can come up with 20 or 30. It looks like every year someone builds a new one that does something very similar to all the others. It's a very fertile ground for reinvention.
And sometimes just choosing one is itself a big difficulty, because again there are so many, and also because people come in with different biases, different traditions: they're familiar with some and not others. Or their platform or infrastructure of choice might have one or two already installed, and then you make that fit whatever you need. People tend to be very much creatures of habit; they go with what is easiest, and they hate choosing from 30 options because choosing is a lot of work. That tends to make the reinvention problem a little worse, because anything new and flashy naturally attracts attention. I just get frustrated thinking about it, because it's funny how many new projects come up, advertise themselves as the greatest new thing that will do everything except a few things, and then suddenly your decision matrix has become much more complicated and you have to decide whether to go with the fashions or not.

But the general patterns are all the same. You want to identify what it is that you want to measure, whether it's basic system statistics or something more complicated. You need a way of pulling the data in from its sources. You need to store it somewhere long term. You usually need to draw it: our visual system is very tuned to finding patterns, so people are used to visual renderings of the data to make good sense of it all. Other people don't mind grids of numbers, and computers of course don't mind raw numbers at all, but sooner or later one has to visualize everything, just for the humans. And if something goes wrong, obviously you want to be notified. These five basic elements are in essentially every single monitoring system out there; they change in taste and flavor and emphasis, but the same elements are there.

The interesting aspect to me, and what motivates this talk, is not the common elements, though. The common situations are already addressed in most of these tools. The interesting aspect to me is what you do if something very unusual happens, sort of what Dan Walsh was talking about in the container talk a moment ago. If a failure occurs that is not already well handled by the monitoring statistics you have collected, what else can you do? Are you stuck with what the system gives you, or can you go beyond it? I like to go beyond, and I like to explore how to go beyond for those unusual cases where you must, where the mystery is just too harsh.

So why might we need something more than the normal monitoring system's data? Well, first of all, if you think about everything that goes on in a large system, there is simply too much, in terms of the number of different types of data you might want to collect. There can be thousands, tens of thousands of different measurements you could take, and if you take them all, you'll bog the system down extracting them, and you'll bog down your network and your storage just transporting the stuff. So you literally cannot collect everything you might want. And you also can't poll frequently enough to get very high resolution data, because again that has a multiplicative effect on resource requirements.
So some systems try really hard to have very high data rate capabilities, but at some point you always run up against the limits of your storage or network infrastructure, at which point you have to decide: okay, I'm going to collect only these statistics, only every 10 seconds, and I'll make do. But sometimes that's just not enough. And in order to handle even that much data, some of these systems need enormous infrastructure, very large clusters of distributed databases to store all these time series. It's just heavy, heavy infrastructure.

So what I call ad hoc monitoring means: if you have an unusual monitoring need that the basic system does not already provide for, what can you do that's a little personal, that satisfies your need just this moment? Some systems are good at satisfying personal needs and some are not. This one is really good at personal, small scale, exploratory kinds of monitoring, where it lets you answer questions you couldn't forecast ahead of time. So I'll be talking for, I don't know, 10 or 15 minutes about this tool. Anyone familiar with it already? Not you guys.

So, Performance Co-Pilot is an old tool. I say that because even though it's new to many people, it has existed in some form for probably two decades. It used to be a commercial product from SGI for their large computing clusters. It was open sourced probably a decade and a bit ago. And it has gotten some mind share partly because of its commercial history, and partly because, as part of that commercial history, it had to be made very robust and fully documented, held to higher standards than just a little open source project, in some ways.

Its main claim, as compared to a more integrated, single purpose monitoring system, is that it's a toolkit, in that it is designed to let you define exactly what you monitor as an end user, as a developer. You're not limited to what a system administrator wishes to monitor for you. You're fully encouraged to use the same system for the personal needs you have at that particular moment. And the way it does this is an interesting architecture which decouples collection and storage and analysis into distinct but very simple processes, with a very simple processing model, so that momentary ad hoc monitoring needs don't conflict with others.

So one aspect that makes it interesting is that there's no hard-coded, centrally configured overall system there. It consists of several independent pieces. The key part, the basic part, is a single daemon called the collector, and all it does is fetch, on demand, the current live statistics that a particular performance monitoring user would like. It's a very efficient little daemon, written all in C; it doesn't require a Java virtual machine or anything like that. And it's able to serve any of thousands and thousands of metrics, from per-process minutiae up to system level statistics and many other things. It's an extensible widget: it can pull statistics out of database servers, or out of normal Unix daemons that have any sort of statistics exporting engine, like the name servers do. Almost anything that has a little statistics exporting function, this guy can generally import and then make available. But the key is that it does not do anything with the data. It is merely a server. It does not store the data. It does not poll it. It just holds it ready for when a client comes to call, which means it can make available thousands and thousands of potentially useful metrics without spending any time gathering them if no one is listening.
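To make that concrete, here is roughly what an on-demand fetch from the collector daemon (pmcd) looks like, using the stock pminfo client. The host name is made up and the values are illustrative, but the command and the shape of the output are the standard ones:

```
# fetch current values for one metric from a remote collector
$ pminfo -h webhost1.example.com -f kernel.all.load
kernel.all.load
    inst [1 or "1 minute"] value 0.16
    inst [5 or "5 minute"] value 0.21
    inst [15 or "15 minute"] value 0.25
```

Nothing was being computed or stored ahead of time; the collector fetched those numbers only because this one client asked.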
Then a whole separate module, called the logger, exists, and its job is to talk to the collector from the previous slide. It can talk to it locally on a machine, or it can talk across a network, so it can log remotely as a native capability. All it does is send periodic query packets to the collector saying, give me the current values of these 100, 200, 10,000 statistics, and then it saves those to disk pretty much unmodified. So it's a very efficient path from network packet straight to disk, with very little processing involved. It's really a small, lightweight tool that creates files from the recorded history of these metrics. And it's highly configurable, even reconfigurable at runtime, in that each of these logger processes can change what metrics it collects and how frequently.

And unlike in other systems, it is not a central system service kind of tool. This is a tool that an individual developer or user can, if they wish, start completely for themselves. It's an unprivileged process that just writes files full of metrics. So a developer doesn't have to ask the sysadmin to please log more for him for the next two minutes. The person can just start this totally by themselves, point it at whatever nodes are running their applications, and it will do the job without any special coordination with anyone else. That is pretty special. Many of the other systems require sysadmin reconfiguration, or stopping some central service, or setting up more databases to do anything; a lot of headache, depending on the system, of course. Here it can be totally personal. And because it's such a single purpose tool, it's very modular, very Unixy in flavor. So if you want to monitor 10 machines, you run 10 copies of it, and each is very small, so that's okay. It's very straightforward to launch it either directly or under a sort of intermediary tool, like what we call pmmgr, which will monitor machines that you designate, launch these little loggers for you, and bring all of your data back. But it can be totally personal. A sketch of such a personal logging session appears below.
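Assuming only the standard pmlogger(1) configuration syntax (the metric list, interval, host, and file names here are invented for illustration), a personal logging session might look something like this:

```
# my.config: which metrics to record, and how often
log advisory on every 10 seconds {
    kernel.all.load
    kernel.all.cpu.user
    disk.all.read
    mem.util.free
}
```

```
# record from a remote collector into a local archive, as a normal
# unprivileged user, stopping on its own after 30 minutes
$ pmlogger -h webhost1.example.com -c my.config -T 30min ./myarchive

# later, replay the recorded history with any PCP client tool
$ pmval -a ./myarchive kernel.all.load
```

No central service gets reconfigured for any of this; the archive is just a set of files in the current directory.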
Now, how does one actually use all this data? Performance Co-Pilot comes with dozens of tools, generally little programs, but some larger ones too. Each of them is capable of either talking to a live collector, consuming live data from the local machine or any machine on the network, or reading the archive files that a logger has generated. So each client tool that shows you system stats can look at history or at present data, local or remote, all selected by command line options; it's a built-in capability of all of these tools. It's kind of interesting: you're used to running the top command, or iostat, all these guys. They're monitoring tools of sorts, right? You're used to having to log into a machine, run iostat there, and stare at the results for a few minutes. Well, with this one, the same data that iostat would print for the local machine, it will give you for any remote machine you have access to, as well as for any historical data you have from those machines, with just a few different command line options.

So for example, pmstat is just a little vmstat impersonator, but it uses the PCP infrastructure to make remote connections. Here you see us looking at three separate hosts concurrently, in parallel; those are the three hosts named there. In this case it just scans some pre-selected set of stats from those three machines and intermingles them on the console for you. It tries to be very simple, reasonably intuitive, but it hides a bit of magic, in that we're actually talking to different machines, collating the data, converting all the units, and making sure that even if one of your machines is a Windows box, we still convert the statistics to the same homogeneous dimensionality. Everything will be kilobytes instead of megabytes, that kind of thing. It's all handled by the infrastructure automatically.

To do that unit conversion, the system has to keep track of metadata for every measurement. So it knows a piece of documentation text about everything it has. It knows the data types. It knows the units, which is a little different from data types: values can be fully dimensioned numbers. So you don't get confused if a kernel version changes and starts printing something as kilobytes instead of megabytes; the system does not get confused. It still knows exactly what the numbers represent, so that clients can convert without any confusion. It's a subtle issue, but it means that every single chart that any of the tools shows is fully dimensioned. It's meaningful scientific data, not just a curve with a number from 0 to 100. You never have to ask, 0 to 100 what? It's always 0 to 100 kilobytes per second. That's something this tool takes very seriously, where others do not. (There's a sketch of what this metadata looks like below.)

And the bottom there is just a little demonstration that this tool is very happy to deal with textual measurements too, not just numeric ones. One of the agents that comes with PCP simply listens to systemd journal entries, and there are others that listen to log files. It can transcribe log records into the data stream, into the metric stream, just like it would transcribe numeric results. So it can function as a combined source of numbers as well as text, and binary data too, if one really wants to go that far.

It does have a GUI, or two or three, depending on how you count. So once you collect the data, you can use countless of these little command line tools or libraries to process it, but there also exist GUIs that we ship. There's a Qt-based one, which I think there's a screen dump of on the next slide. There are various web-based user interfaces that run as normal JavaScript web applications. And again, all of these run with almost zero infrastructure. They're very, very lightweight, personal level servers. You don't need to install Golang and a big auxiliary database or anything like that. Everything is very small footprint, very easy to install; it can be personal, very little infrastructure. So yeah, that's just what the pmchart view looks like. I don't have a screenshot of the web interface, but that's not hard to find. You're very much invited to try it out, of course. It's available on Fedora and many other distros.
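To make both points concrete, here is roughly what the multi-host usage and the per-metric metadata look like; the host and archive names are made up, and the output is paraphrased from memory rather than captured verbatim:

```
# vmstat-style lines interleaved from three hosts at once
$ pmstat -h host-a -h host-b -h host-c -t 5sec

# the very same client pointed at recorded history instead of live hosts
$ pmstat -a ./myarchive -t 1min

# full metadata for one metric: one-line help text, data type,
# instance domain, semantics, and units
$ pminfo -dt kernel.all.load
kernel.all.load [1, 5 and 15 minute load average]
    Data Type: float  InDom: 60.2 0xf000002
    Semantics: instant  Units: none
```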
The view on the right is just PCP's version of, I think, the atop program, which uses the same local or remote, live or historical capabilities to draw the exact same kind of stats that the normal live, local tool does. So it generalizes across time and space, so to speak, and it still looks familiar, still looks comfortable.

And one final, higher level tool: the PCP inference engine, as it's called. It too is flexible about local or remote, live or historical data analysis, but it's a little tool that takes mathematical expressions over metrics. So for example: if the load average is more than five for the last 10 minutes, then do something. Those kinds of conditions are easy to express. (There's a sketch of such a rule below.) And it, too, is not a core part of the system. It is, again, an ordinary, potentially personal client that you can run just for your processes, or just for your machines, or just for your containers; it will monitor only them and do only what you need at that time. So again, near zero infrastructure.

Then we have tools to take other systems' data in, or send data out. It's very much an open system, in the sense that the data formats are all documented, the APIs are all documented, and data flows in and out are well supported already. So it's not difficult to send data to and from it, in fairly original form. We take it seriously that we are not a data silo. It's not like data comes into our system and never leaves again. We encourage the data to go both ways. We don't want to own it; we don't want to be the central gatekeeper of your monitoring data.

I just want to mention this one. While many of these tools are lower level, this one is a little smarter. It's a new tool that we're building just now, and it demonstrates how one can assemble the individual pieces I've already mentioned into a more high level, more intelligent system. It turns out that a fairly small JavaScript web app can use and integrate the pre-existing pieces to provide something fairly unusual, which is a guided, educational sort of tool. I wouldn't call it intelligent. I don't have a screenshot, sorry, but it's not hard to imagine: picture timelines of only a few metrics at a time, plus explanatory text about what the system suspects the problem might be and what its possible cures are. And the system does all this by its own monitoring of only selective measurements at a time. So for example, there might be a single screen that measures disk latencies, and if it detects they're above a certain threshold, it prints explanatory text: okay, your disks are getting saturated, these are various possible cures, and you can look here to analyze whether the saturation is due to errors or just overload from too many processes. And it guides you down through a tree of possible explanations, automatically, based on its own analysis of the metrics. And it's all very lightweight, very small. Again, a personal sized tool.
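Here is the rule sketched earlier, the load average staying above five for ten minutes, written in what I believe is standard pmie syntax; the threshold and the action are invented for illustration:

```
// high-load.pmie: evaluate once a minute
delta = 1 minute;

// true only if the 1-minute load average exceeded 5 on each of the
// last 10 samples, i.e. sustained for roughly 10 minutes
all_sample (
    kernel.all.load #'1 minute' @0..9 > 5
) -> shell "logger 'PCP: sustained high load'";
```

It can run as an ordinary user with `pmie -c high-load.pmie`, or be pointed at an archive with `-a` to replay history through the same rules.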
So that's pretty much all I have on Performance Co-Pilot. I hope it helps motivate the ad hoc aspect of what I'm really interested in about the monitoring problem space. But it only addresses the part of the problem where the instrumentation already exists somewhere in the system. It helps if the kernel exports those statistics, but it doesn't help in the cases where the stats don't already exist in a native form.

So the next tool I'm going to talk about is another tool that our group works on, of course. And this is a tool which, in this context, can be used to fabricate statistics, to pull statistics out where they were not already present. We can dig into the middle of software and pull out numbers that it wasn't going to give us; we're going to pull them out anyway. We need this kind of thing because sometimes, well, often these days, our software stacks have many, many layers. We have libraries upon libraries and virtual machines and kernels, and sometimes multiples of these things. And if something goes wrong with the overall system, it is really difficult to narrow down which layer might be responsible. So sometimes you have to dig into the internals of these layers, even if those layers weren't instrumented properly in the first place; you have to investigate what's going on inside them.

Our general approach is to look at this as a debugging problem: imagine sending a debugger into any of the various layers, because we don't know where the problem might be, and letting the debugger snoop around and collect statistics for us, for our investigation. Once the investigation is done, the resulting numbers can go back into the monitoring system and flow up to the user the same way normal statistics would.

So our tool of choice for this task, for investigating the internals of mystery layers of software, is a tool called SystemTap. Anyone familiar with it, heard of it a little bit? Okay. Some of this will look familiar, because between here and PCP the mentality has some common elements. Again, it's a toolkit rather than a single integrated widget. It's a tool-building tool. While it comes with a lot of pre-packaged samples and pre-packaged information, it is really there to let an educated person perform new kinds of research, new kinds of investigation. It deals with live data as opposed to historical data; that's one place where the PCP slide and this one differ. But in other ways the mentality is very similar. It's a personal tool, meant to satisfy your personal needs at the time. It's not something that a system administrator needs to do for you; it's something you do for yourself.

So you can think of it as a programmable debugger, like GDB, except much faster, and able to run in many, many different layers of the software stack, up to and including the kernel. That is an unusual combination, to have such a wide scope. It's a programming system, so there's a programming language involved. So it's, again, a higher level of commitment to use it, but it gives one a tremendous amount of power to investigate things. And we found that tools that don't provide this much control just can't answer some kinds of questions that really do happen. We really did need this level of expressibility.

So if you know you might need to look into running software, how do you start? The first step is to see if someone else has had the same problem before. Our first advice to users is to look at our library of samples. It's up on the web, and it ships as part of the distribution itself. It's a whole gamut of a couple of hundred examples. Some solve very common jobs. Some are silly stuff; there are games. There are sysadmin spying tools that an administrator might use to keep an eye on other users. There's a wide variety of things.
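As an aside, to show how low the barrier is: the examples are indexed online at https://sourceware.org/systemtap/examples/ and, if memory serves, running one of the classics looks about like this:

```
# count system calls by process name until interrupted
$ stap syscalls_by_proc.stp
Collecting data... Type Ctrl-C to exit and display results
```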
So about half the time when we get asked for help, we can point people to one of the examples that already exists, and that's good enough. If, on the other hand, there isn't such an example, then the next step is to write one of these SystemTap programs oneself. I'm just going to give a quick overview of what that looks like, to give a sense of the power; it won't take very long.

The first element is to identify the points where we want to extract information. These points can be anything from places in the source code of any software you have access to, to any sort of event source that exists in the kernel, to interfaces to auxiliary user space programs, so that a SystemTap program can talk interactively to a user or to another program. Anything that can be phrased as a form of something happening, perhaps with parameters: a function that got called and has parameters, a statement that got executed, a performance counter that overflowed because we had too many cache misses. Any of those kinds of things can generally be represented as a SystemTap probe point, and there's a little naming syntax to identify them. A single SystemTap script will have one or more, generally many more, of these probes, because one is generally interested in multiple events and in coordinating between them.

Once we've named the places where we want to gather data from, the next step is to decide what to do with the data, and we have many choices. It's a full programming language. It looks a bit like C, so you see it has printfs. All these dollar variables are for accessing context from the event source: if we set a breakpoint inside a function, then the dollar variables are whatever actual variables are available at that point in the target source code. It lets us access them just like gdb lets you print a variable from a program you've stopped in. This is exactly the same thing, except we can also pretty-print, and we can manipulate things. We can express general logic. We can traverse data structures. We can process strings, which is important: some other systems deal only with numbers and let you have one kind of condition but nothing else, whereas here one can express fairly complicated predicates, like: has this process done a system call in the last five seconds? Those kinds of expressions are expressible here, where other systems will not let you do that. We have some built-in statistics features too, which make some things very straightforward. (A small sketch of a complete script follows below.)

We can also intervene: we can actually make changes to a running program. Sometimes that's necessary if there are security bugs, or sometimes you want to inject faults, trigger some unusual response; you want to simulate an out of memory indication, for example. So we have ways of letting the SystemTap user modify state. It's not just for debugging but for bug fixing purposes too; this sometimes comes in handy. Obviously, there are different levels of privilege for different levels of destructiveness.
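To give a feel for it, here is a small but complete sketch in the SystemTap language. The daemon path and function name are invented for illustration; the probe syntax, dollar variables, and aggregate operators are the real ones:

```
#!/usr/bin/stap
# Time a function inside a running daemon and summarize the latencies.
global start, lat

probe process("/usr/sbin/mydaemon").function("handle_request") {
    start[tid()] = gettimeofday_us()
    # $$parms pretty-prints the target function's parameters, gdb-style
    printf("request: %s\n", $$parms)
}

probe process("/usr/sbin/mydaemon").function("handle_request").return {
    if (tid() in start) {
        lat <<< gettimeofday_us() - start[tid()]  # statistics aggregate
        delete start[tid()]
    }
}

probe end {
    if (@count(lat))
        printf("n=%d avg=%dus max=%dus\n",
               @count(lat), @avg(lat), @max(lat))
}
```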
For those who care about the implementation aspects: it's a full SystemTap language, a programming language, so it has to be parsed and executed. Our execution model traditionally is the Linux kernel module based one, meaning we actually create a kernel module on the fly that does the same job the script requires. But we also have two other backends, not based on Linux kernel modules, that are similar in capability. There's a pure user space one based on the Dyninst library. Have you heard of it, what Dyninst is? It's a C++ library that lets one inject code into running processes; a binary rewriting, profiling, instrumentation library that comes out of a university. We can use that same library to inject SystemTap instrumentation into running programs, as a completely unprivileged, user space sort of thing. We have also been in touch with, and monitoring the progress of, the Linux BPF virtual machine. Have you guys heard of that? We've been keeping close track of how they're progressing, and we have a prototype whereby SystemTap uses that virtual machine, which is in kernel 4.5 and up, I think, and which lets simpler SystemTap programs be executed in a different, protected environment without all the kernel module safety worries. So we're constantly investigating more and more execution backends for the same scripting language, the same scripting environment, to adapt to different people's needs. It's a very broad capability tool.

The reason it came up under ad hoc monitoring, though, is that once you can look inside how a piece of software works, you can find out the numbers it wasn't telling you. You can find out how many web requests go around a loop, how frequently it goes through, how full its internal caches are. It might not tell you anything about its own internal caching statistics, but with this kind of thing, if you can actually look at the data structures, you can pull out those numbers, even if there was no other way of getting them. You can pull out those numbers and expose them to Performance Co-Pilot, and then they can be injected into the rest of the monitoring system.

So I'm pretty much ready with that. I'm happy to answer questions even now if you have anything.

So if I have an application that I want to instrument to expose some internal state, should I be looking at an API to expose it into PCP or SystemTap, or some generic instrumentation framework that could make it available to anyone?

Right. The question was how a person who wants to instrument their program could do so, to get it into PCP or SystemTap. And the answer is yes to both questions, in different ways. PCP has a library, several possible libraries actually, which you can link into your program, including a very low overhead one that works via shared memory, and then any numbers you put in there become automatically visible to any PCP clients downstream. If you want to identify SystemTap-level points, then yes, there's another way of doing that, which is the exact same method the DTrace system has: you put some macro calls into your program via a header file we provide, and that identifies potential breakpoint locations and the data that goes along with them. This becomes a very high performance, very portable way of denoting where you think the interesting points in your program are, which makes them very discoverable to other people. And many, many serious programs already have this exact kind of instrumentation attached: browsers, databases, shells, it's all over the place already. Chances are that if you run SystemTap's listing mode (stap -L) to find these on a Fedora or RHEL distro, you'll find hundreds of them already. You're welcome to add your own, and they're basically zero overhead: literally a no-op is all that exists in the object code, so it's very efficient.
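For the SystemTap half of that answer, here is a minimal sketch of those macro calls. The provider and probe names and the surrounding program are invented, but <sys/sdt.h> and the DTRACE_PROBE macros are the real, DTrace-compatible interface:

```
/* myapp.c: compile normally, e.g. gcc -o myapp myapp.c */
#include <stddef.h>
#include <sys/sdt.h>

static int handle_request(int id, size_t len)
{
    /* compiles to a single no-op plus an ELF note when not probed */
    DTRACE_PROBE2(myapp, request_start, id, len);
    /* ... real work would go here ... */
    return 0;
}

int main(void)
{
    return handle_request(42, 128);
}
```

A SystemTap user can then attach to the marker and read its arguments, along the lines of `stap -e 'probe process("./myapp").mark("request_start") { printf("id=%d len=%d\n", $arg1, $arg2) }' -c ./myapp`.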
So there's still lots of work in progress. We're still fairly new to the broad cloud world, so we're still finding limitations in how easily one can deploy these tools to a large cloud. We're still working on scaling and on making sure it's easy to run, but we know it's important and we're on it. We're also building support for more and more scripting languages: our Python and Java support in both sets of tools is getting better and better, and I think Golang is also coming in many respects. I think those are the biggies, but we know more needs to be done.

And, okay, when I say "solves", it's not completely solved, not 100% solved, but it addresses a larger fraction of the problems than your naive monitoring system would. So, again: with PCP, we get very good ad hoc capabilities, assuming there is enough data potentially available. And if you add SystemTap to the mix, then you can make an order of magnitude more data potentially available, data that comes from practically black box binary blobs. You can kind of crank them open and see what's inside, and that data, too, can then be put toward your ad hoc monitoring needs. So I think these two systems have, to some extent, a bit of synergy, and using them together lets one dig into problems that alone you can't, and that you can get nowhere near with just a plain old, standard, charty kind of monitoring system. And just some links. That's all I have. I think we might have time for one or two more questions, maybe. Anyone else? Sure.

So, how exactly do you export metrics from SystemTap to PCP? I didn't understand that.

Yes, I left a little tantalizing hint; there's a YouTube video that actually demonstrates it. The way it works is that the SystemTap script exports a procfs file, and PCP knows to look for procfs files with a certain syntax in a certain place, so it can pull SystemTap-generated live statistics from a synthetic procfs location and transport them into the normal PCP data stream. So that's one way; there are others, but that's the easiest. (A sketch of the SystemTap side of this appears below.) And there's a little YouTube video, something like three or four minutes long, that shows the whole thing working, with profiles and numbers and all kinds of stuff this can be used for.

That's it. All right. Well, thank you very much for making time and sitting through, and if you folks have questions afterwards, of course, I'll be around.
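For the curious, a minimal sketch of the SystemTap side of that procfs bridge; the event being counted and the file name are invented for illustration:

```
#!/usr/bin/stap
# Count an event and republish the running total through a synthetic
# procfs file that a PCP agent (or anything else) can poll.
global reads

probe vfs.read { reads++ }

# appears as /proc/systemtap/<module_name>/reads while the script runs
probe procfs("reads").read {
    $value = sprintf("%d\n", reads)
}
```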