Are we going to start? So, hi everyone. Hi, George. Thanks for joining me. As this slide says, my name is Larry Price. I work at Canonical on the newer, shinier parts of the popular Linux-based operating system, Ubuntu. This afternoon I'm here to talk to you about how we run classic applications in a confined ecosystem in Ubuntu. So what do I mean by classic application? And furthermore, what do I mean by a confined ecosystem? Let's start with a little bit of background.

My job focuses on the idea of convergence: bringing the same OS to any platform, whatever its screen size and computing power. Convergence has driven a lot of really cool software over the years, exploring new concepts of security and usability for the future. While we want to take advantage of these new architectures and toolchains when we build, we also want to take advantage of the tremendous scope of stable, mature software available in the archives. And when we use that software, it should be seamless for our existing users.

So a classic application, in this context, is an application that I can currently install with a traditional package manager (apt, a deb package, or a software center) and then run using either the terminal or a graphical discovery interface such as the Unity Dash. These applications are terminal-based or graphical X11 applications. They're the applications we've been using for years to make our Linux desktop experience complete: browsers, editors, games, and many others. Without those apps, it wouldn't be your Linux desktop; it would just be a Linux desktop. Some of these applications have already been migrated to new display servers and new packaging systems; this talk will focus on those which have yet to be migrated.

Classic applications were designed to be installed and used in an unconfined ecosystem. The way applications have traditionally been installed and accessed resources has grown up organically alongside the tooling and other parts of system management. Throughout this process, applications can, or must, access parts of the system that they were never deliberately given access to, and that means software can be used to snoop on your private business. This is not only an issue on the desktop, where you're likely checking on the machine once a day or so, but also a potential threat on internet-enabled devices which are infrequently updated or checked for health issues and tend to be scattered throughout your home.

Dependencies are another place where classic applications hit a problem. Typically apps depend on several other packages, and sometimes those packages can conflict with other versions of packages installed by other applications. Worse, sometimes applications need newer versions of packages that aren't available on the system, or similar package versions have different patches on different distros. An application can't pull in those changes until a newer package version becomes available, which may not be until a distro makes a new release. This can prevent users from getting the best versions of pieces of software at any given time.

On top of that, we also see an increasing pollution of our root file system. This is hard to read, but this graphic is a great visualization of the way the Linux file system looks. You may notice that it's pretty complicated, and some of these things are a little bit redundant.
For instance, this is the contents of my PATH variable. We have our user-runnable binaries in /bin, /usr/bin, /usr/sbin, /usr/games, /usr/local/bin, /sbin, /usr/local/games, and any number of dot directories in my home directory, with the odd random directory sneaking in under /usr/local as well. Similarly, library and data files appear wherever they want. These are the results from running this on my machine, and I don't consider it a complete list; data directories just end up wherever they need to be.

So, to generalize the goal of confinement: we want to move away from letting apps snoop on each other without permission, and we want to take back control of our file system, putting these applications in a known location and giving each application its appropriate dependencies.

Our traditional package manager, and certainly the underlying deb packaging system, is over 20 years old and carries all of those years on its shoulders. Because of this, we thought it would be best to introduce a new packaging toolchain. Historically, a couple of years ago, Canonical had click packages, which were the first sort of solution to this problem. An application packaged in a click is bundled with any specific dependencies it needs, and it takes advantage of frameworks to access and share the resources it might need. It also uses AppArmor to create a confined application and specify any security features it needs. Clicks were used to package applications for the initial Ubuntu Touch devices, giving them a bit of a head start. We used Mir as the display server instead of X11, along with Unity 8, which sort of gave us our confined ecosystem. But these toolchains were missing some important features and lacked maturity.

Fast forward to today, and we've seen the emergence of snap packages, also referred to as snappy. Snappy is essentially another iteration on clicks, with better version management, confinement definitions for your application, and new ways to interface with system resources. You can use snaps today on Ubuntu or on several other distros. Snaps all live in a special root directory and can be completely confined, or unconfined in the less stable channels, if you must. Snap apps come bundled with all their dependencies co-located, meaning all apps will have exactly the versions of libraries they need and no longer have to wait on package maintainers or your distro. Snaps are automatically updated once a day, taking the pressure off the user to keep their system up to date. And snaps come with the option for delta updates, which means the toolchain can download a diff rather than the entire package, potentially making downloads smaller and faster. As far as confinement is concerned, snap apps are relegated to their own little file systems and, by default, require interfaces to be connected in order to perform system tasks like painting to the display, using audio, and using the network.

In a hybrid or classic environment, alongside a traditional package manager, we can continue to pull in and use our classic applications while enjoying confinement security for our migrated apps. However, one exciting promise of snaps is the ability to run in an all-snaps environment. But this puts us in a bind: we can't run classic apps in that all-snaps environment.
Our classic applications are looking for an unconfined system, and they're looking for the good old X11 display server, and this all-snaps environment provides neither. We need a way around this issue to allow users to run applications just as they would on a classic system.

To resolve this, we've been working on a suite of tools which we call Libertine. A libertine is a person who acts without moral restraint; libertinism is an extreme form of hedonism, living one's life for pleasure while thinking only of oneself. In a sense, our classic applications are practicing libertines, and it's our job to pull them back to more modern standards. These applications need a safe place to go while the rest of the world falls in line, and we provide that safe space as a Linux container.

The containers are generally indistinguishable from a traditional Linux desktop environment. A Libertine container is an unconfined root file system that we work with, and it bind mounts specific resource directories so apps can access things like the home directory and some system resources. Internally, we continue to use apt and dpkg to manage our packages, even when the host system is not allowed to use dpkg, even if dpkg is not installed on the host system at all. Libertine also connects to a program called XMir, which allows us to run an X11 window isolated on top of Mir. Every installation, removal, and system modification is done inside the container, protecting the outside world from whatever these classic applications may be doing.

The Libertine suite has many parts, abstracting away the underlying complication of running classic applications as well as possible: a C++ library, a GUI, Python launchers and command-line interfaces, and tools to control other aspects like launching XMir and communicating with the clipboard. I'll go over a few of these now.

Again, this is a little difficult to see. It's just JSON: dictionaries, arrays, and all the good JSON stuff. Libertine uses JSON as a database file, keeping track of all installed packages, created containers, and modified bind mounts; we just use it as a database, maintaining an individual file for each user on the system. Using JSON means we aren't pinned to any database framework, which is good because we can also just ask users for the raw database file when debugging, and it's usually only 30 or 40 lines of JSON.

The crux of our container management is our Python library. It contains an abstraction around the database we just saw, plus utility functions, but its primary use is to abstract away the handling of the different implementations of Libertine containers, of which we support three types: chroot, LXC, and LXD. Chroot is a special case that we maintain only for use on devices whose kernels are too old to support LXC or LXD. With the chroot backend, we use the debootstrap command-line tool to create a base server file system in the user's home directory as a chroot jail, from which we'll run all of our commands and applications. When we run a command within a chroot, we generally try to run it unprivileged inside that jail, but every now and then we have to run a privileged command. We prefer not to use chroot containers except on the devices that require them; the backend is deprecated, though it's still useful.

LXC containers have been our preferred container type for most of Libertine's lifetime. LXC combines kernel cgroups and namespace isolation to provide an isolated environment for applications.
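To make that database description a bit more concrete, here is a minimal sketch of reading such a per-user file with plain Python. The path, key names, and structure shown are assumptions for illustration only, not the exact format Libertine uses.

```python
import json
import os

# Hypothetical per-user database location and structure; the real Libertine
# file layout may differ from this example.
db_path = os.path.expanduser("~/.local/share/libertine/ContainersConfig.json")

with open(db_path) as f:
    db = json.load(f)

# Walk the containers and list what the user asked to have installed.
for container in db.get("containerList", []):
    print(container.get("id"), "backend:", container.get("type"))
    for package in container.get("installedApps", []):
        print("  -", package.get("packageName"))
```

Because it is just a flat file, there is no schema migration or database daemon to worry about, which matches the goal of keeping the whole thing readable in 30 or 40 lines.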
The LXC tooling makes it straightforward for us to keep the container's files in the home directory.

[Brief interruption while the room's audio recording is sorted out.]

So, with LXC containers we set up bind mounts inside the containers so that we're able to access the home and system directories. Setting up bind mounts with LXC lets us integrate the traditional desktop features we expect, and we have a great many of them to make sure we can access things like audio from the host system. Also, the LXC Python API is quite mature and works out really well for our needs. For instance, this is a code sample showing how we use the LXC container attach API to run a command inside the container directly from Python.

But the newest, shiniest backend for Libertine is LXD containers. LXD is a new face on top of the LXC implementation, simplifying container management and providing speed and usability improvements as well. Libertine LXD containers are outliers in that we depend on the LXD backend to maintain the root file system, though we still bind mount the user's home and application data directories locally. For the most part, setting up LXD is just like setting up an LXC container: we bind mount devices, map the current user into the container, and install some default packages.

One weird thing about LXD is how we have to set up /run/user. /run/user is a system directory for runtime files used by user processes once the container starts. Unfortunately, this directory is completely overwritten when the container starts, so we need to remap it using some startup scripts. To work around this, we create a custom script and copy it into the container; the script creates the missing directories and fixes their permissions so that they're remapped into /run/user when the system starts up by default. We add that file to the container using the files API, which is part of the pylxd mechanism.

But the Python API for LXD is not quite feature-complete, and we can't use it for all of our use cases, so we use a hybrid solution here: the Python API plus calls to subprocess.Popen to start processes manually. As an example, here we use the Python API to update the devices, which is just a dictionary; we use the pylxd API to create a profile from that dictionary, and we use the pylxd exceptions to tell us when there's an error. On the other hand, when we run applications we have to use subprocess.Popen to launch them, because that's the best way for us to run LXD applications in the background while still getting all of their output in the foreground.
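As a rough illustration of that hybrid approach, here is a short sketch using pylxd for the profile and device handling and Popen for running the app. The container and profile names, device entries, and file paths are made up for the example; this is not the actual Libertine code.

```python
import subprocess
from pylxd import Client
from pylxd.exceptions import LXDAPIException

client = Client()  # talks to the local LXD daemon over its unix socket

try:
    # Devices and profiles are plain dictionaries in pylxd, so updating them is easy.
    profile = client.profiles.get("libertine-example")  # hypothetical profile name
    profile.devices.update({
        "home": {"type": "disk", "source": "/home/larry", "path": "/home/larry"},
    })
    profile.save()

    # The files API can push a helper script (for example a /run/user setup
    # script) into the container without shelling out.
    container = client.containers.get("my-classic-apps")  # hypothetical container name
    with open("setup-run-user.sh", "rb") as script:
        container.files.put("/usr/local/bin/setup-run-user.sh", script.read())
except LXDAPIException as error:
    print("LXD rejected the request:", error)

# Where pylxd falls short, shell out to the lxc CLI with Popen so the app runs
# in the background while its output still reaches the foreground.
app = subprocess.Popen(["lxc", "exec", "my-classic-apps", "--", "xterm"])
```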
We also have session services that are responsible for making sure containers are started and stopped appropriately for LXC and LXD. This is tied to battery life, so that containers aren't running all the time, and it ensures that when a user stops one application within a container, the container doesn't stop; we wait for all applications to be closed.

Moving on to the realm of binaries: we have a Python binary called the container manager, which lets you create and manage containers, install packages, and modify bind mounts. This is a basic overview of using the Libertine container manager to create a container. It's fairly straightforward: we just call the create command with any number of flags, all of which have intelligent defaults, so the user doesn't have to know what kind of container backend we're using. Here I've shown that as just a box; it updates the database and builds the file system using the container's backend. And this is, similarly, what installing a package looks like. Note that a package name may not line up with the binaries it installs; we're only keeping track of what the user actually asked us to install. The purpose of the container manager is to keep the user from having to deal with the low-level concepts of any particular container type or package management system, and to let them focus on the key concepts of creating containers and installing packages.

We also have a D-Bus service written in Python to replace the work being done by the container manager. All of this isn't fully implemented yet, but many of the commands that we use in the container manager take a very long time to run, and bad things will happen when those things are interrupted: interrupting dpkg, for instance, can cause broken packages to appear in the container. The aim is to solve that with a completely asynchronous approach, where we use Python threads and send updates over D-Bus signals. Although we would like to replace the container manager's backend with this in the future, we currently just use it to list container IDs and application IDs.

Which brings us to liblibertine, which is the route to discovering container information for several external libraries. It's currently just a wrapper around a client for libertined. Many of the low-level libraries that access Libertine are written in C++, and liblibertine gives them a convenient way to access Libertine information without having to talk directly to D-Bus. The code sample here is just making a simple D-Bus call from C++.

We also have a GUI written in C++ with QML. This is an easy way for us, and for much less technical users, to manage containers and install packages. It's mostly a series of lists: a list of containers showing what you can do with them, and a list of packages showing what you can do with those. For the most part, the functionality in the GUI is the same as in the command-line tool.

And now the real meat: libertine-launch, which is the Python tool that we use to actually launch applications. What libertine-launch does is come up with the environment for running apps, forward that environment to the container implementations, and actually run the application. libertine-launch can be run by hand, but we generally prefer it to be used from a launching tool; on Ubuntu with Unity 8 we use it through ubuntu-app-launch.

pasted is a daemon, a companion binary that helps applications use the clipboard. It just uses Qt signals and slots to wait for copy and paste events from the underlying content hub, and then forwards those events to our X11 apps. Doing this, we can use copy and paste pretty much like we would use it with these applications on a classic desktop.
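liblibertine itself is C++, but the idea of asking the session service for container and application IDs is easy to sketch. The example below uses Python's dbus bindings, and the bus name, object path, interface, and method names are placeholders invented for illustration rather than the real libertined API.

```python
import dbus

# Placeholder names: the real service exposes its own bus name and interface.
bus = dbus.SessionBus()
service = bus.get_object("com.example.Libertined", "/com/example/Libertined")
manager = dbus.Interface(service, dbus_interface="com.example.Libertined.Manager")

# Ask the service which containers exist and which apps each one holds.
for container_id in manager.ListContainers():
    print(container_id, list(manager.ListApps(container_id)))
```

A thin wrapper along these lines is what lets launchers and shells stay out of the container-management business entirely.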
So what actually happens when you launch an application through Libertine? Clicking on the app's icon calls libertine-launch with a container ID and an application ID, which were probably found using liblibertine, which in turn found that information through libertined. libertine-launch will then start utilities like pasted and XMir, and it'll set up a session by composing the environment and setting up a whole bunch of sockets to communicate between the two. It will then launch a container instance through the Python library, which will do any number of things based on the container type: update bind mounts, launch the window manager that connects back to libertine-launch, and finally launch the app. And that's most of the full picture.

To review: we create the container using one of the Libertine management tools, and that sets us up to install applications through apt, even when apt is not on the host system. Libertine keeps the container and its installed applications confined to their own root file system, with a few small holes poked in it to let them access things like the home directory and other resource directories. At that point, something or someone can launch an application via libertine-launch. We prepare the container environment to mirror the host, set up the window manager, point the container at the host's display variable, and launch the application.

With all of that laid out, where can we actually use Libertine? The answer is: on most of our modern user-facing systems. Unity 7 is the current default desktop session on Ubuntu, and using Libertine there is a little bit silly, because Unity 7 is not a confined environment: it's already running X11, and you can already install deb packages. But we're finding that we can use Libertine just as well here, and there are a lot of users who want to run their applications in a confined container without mucking up their host system. Since Unity 7 is already running X11, classic apps can use the running X server and connect to the running window manager, so we don't need any special tools for that. And because Unity 7 is not a target platform, applications in containers are not automatically discovered by the Dash, so we have to jump through some hoops to get them to show up, either by writing custom desktop files or by launching manually with libertine-launch.

Alright, so Unity 8 is the new and improved successor to Unity 7, running Mir as its display server instead of X. Desktop Unity 8 is also not particularly confined, but running Mir instead of X does give us some of the security benefits mentioned previously. In Unity 8 you can run some applications natively, such as modern Qt apps, but it can't run all the workloads we want natively just yet. Libertine works well in Unity 8, and it's built into the application discovery logic there, so apps show up in the Dash and get an icon just as native apps do; tapping one runs libertine-launch just as we normally would. Except in this case there's no X11, so the system will also start an instance of XMir for the application and attach a window manager to it. XMir allows us to run X11 apps in an isolated window on top of Mir. Each running Libertine application gets its own XMir instance, blocking communication between the apps through X. And with that done, we've successfully launched X apps in an X-less environment.
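To illustrate the per-application XMir idea, here is a deliberately simplified sketch. The display number, the Xmir invocation, and the cleanup are assumptions made for the example; libertine-launch handles all of this through sockets and the container backends rather than like this.

```python
import os
import subprocess

def launch_with_private_x(app_command, display_number=10):
    """Roughly: give one app its own X server on top of Mir, then run the app there."""
    display = ":{}".format(display_number)

    # Start a dedicated Xmir instance for this single application (assumes an
    # Xmir binary that accepts a display number like a regular X server).
    xmir = subprocess.Popen(["Xmir", display])

    # Only this app sees that private display, so X apps can't snoop on each other.
    env = dict(os.environ, DISPLAY=display)
    app = subprocess.Popen(app_command, env=env)

    app.wait()
    xmir.terminate()

launch_with_private_x(["xterm"])
```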
Ubuntu Touch also runs Unity 8 and Mir, so in theory we can run classic apps on top of it there too. However, users are prevented from using apt to download and install packages, because Ubuntu Touch is a confined system by default. So even apps that would work in a desktop Unity 8 session might not be installable and runnable on Ubuntu Touch. And with varying hardware and screen sizes, not everything will run as well there.

But on some devices, like the BQ M10 tablet, we pre-install a Libertine container with an essential set of apps, such as LibreOffice and other X apps, and we've found that they work just fine. It gives a peek at a truly converged environment. Of course, everything works almost exactly the same way here as it did in desktop Unity 8: launching XMir sessions and so on. Creating one of these containers from scratch needs the internet and a fair chunk of your device's resources, which is one of the reasons we pre-seed these containers.

All right. So our end goal has always been running classic apps on a completely confined system, which brings us to snaps. As mentioned, snaps are a packaging format that allows completely confined deployment, on your personal machine and on all-snap systems alike. In building the Libertine snap, we pull down the source and create one large package file. We needed to update some paths and environment variables to be more flexible for snaps, and we needed to make some architectural changes to Libertine and libertined. Snaps also encouraged us to hurry along our LXD container backend, because there's an LXD snap available that we can use.

In order to access resources from inside a snap, you have to connect interfaces, and we connect to quite a few. This is the short list of what we currently use: lxd, to connect to LXD; network and network-bind, to connect to sockets and the network; home, to connect to the user's home directory; and all the rest of your standard graphics interfaces.

Great, Libertine demo time. This never works, right? Well, plot twist: I've been running this presentation in Firefox, running in a Libertine container from the snap, the whole time. But it might be interesting to also run some other applications real quick. Something like xterm. Come on. That takes a bit of time. In it we can ps for Firefox, and you can see the command that's being run within the container; this is running completely isolated in its own root file system, separate from the host. Maybe we can run something like Leafpad and edit documents. I can open something that already exists, a test file, and that also works fine, and it doesn't crash, which is good. And one more: how about some Minesweeper? Pretty good, we got 200.

[Audience] This is on Unity 8? This is all on Unity 8, yeah. To be clear, this is Libertine running on Unity 8; Libertine is a snap, running an LXD container. And with that, we can go ahead and close these. Oh, wrong one. There we go.

To very quickly review: we wanted to be able to run classic desktop applications in a fully confined, X-less ecosystem and packaging format, and we use Libertine to do just that, mostly through Linux containers, with a bit of divergence in the backends, shipped as a snap. And we have it working on a lot of our modern machines. There's obviously a lot that we're still working on. All of these things, Unity 8, Mir, Libertine, and snaps, are moving targets and still being worked on, and there are quirks with a lot of the apps you might want to use. Our snaps are big. And I recently, I think, may have introduced a regression where audio doesn't work; that's what I get for working too hard on that demo.

And that's it for me; all my contact information is up here. Am I out of time? No, right on time, so we still have time for questions, though they may not come through on the recording, since the recording is only capturing this microphone.
So, are there any questions?

[Audience] Will everything run in the same container, or is there one big container per app?

Right, so the system is set up so that you can create as many containers as you want, and our underlying backend will know which container an application belongs to, which container is the right one to run it in. By default they all go to the same place, but you can split them up if you want.

[Audience] I think you said in the beginning that one of the motivations was apps spying on each other; you can't prevent that without separate containers, can you?

Right, so they all run in the same container by default, and we work around the problem of apps snooping on each other. It's a bit rough; ideally we'd want each app to have very good isolation so we could model that properly. We prevent the apps from talking to each other through X11 by using individual instances of XMir, so each application has its own XMir that it's running on. But they can still touch the same file system, yes.

[Audience question about which display server or packaging format he'd prefer.]

No comment. Although I will say it's nice to have a diverse ecosystem of new display servers and packaging formats on the horizon.

Anyone else? OK, thank you.