to go. Hello everybody, good afternoon. I'm Anil from Nexenta. I'm the community leader and one of the developers of the Nexenta project. In today's presentation I will give you a quick rundown of what Nexenta is, of what's involved in making Nexenta, and how Nexenta works with Ubuntu and Debian. So the agenda for today starts with the history of the Nexenta project and of the ecosystem around it, and then we go into how we actually build Nexenta, how we utilize OpenSolaris and the dpkg and APT packaging system, the challenges that we faced in creating Nexenta, and the tools that we built because of the features that were unique to OpenSolaris. Then we go into some of the projects that have been built out of the Nexenta community. After the technical overview we'll go into the history of the project with Debian and how we want to collaborate more with Debian going into the future.

So I guess we can start off with: what is Nexenta? Nexenta is a community-driven open-source distribution. It is a server-only distribution that combines the OpenSolaris kernel with the Ubuntu LTS userland. We started off with the NCP-1 release, which was based on the earlier Ubuntu LTS, and the recent releases were based on Hardy. So we basically combine the OpenSolaris kernel, and specifically the ZFS file system, with the Debian packaging system. It is largely used as a storage platform, although it's also used for other server purposes. Our focus has been on the server till now. These are the upstreams of Nexenta: Debian and Ubuntu provide us with all of the userland packages that we use in the distribution, and we basically replace the Linux kernel with the OpenSolaris kernel. In the coming slides I'll go into the features and what OpenSolaris brings to the table. This is the history of the project. OpenSolaris was born in 2005, when Sun decided to open up Solaris 10 at the time. Nexenta was initially going to be a desktop.
The first port, back then it was called NexentaOS, and we ported the Ubuntu LTS release, which was the 6.06 release, and we basically had a complete desktop with Linux replaced with OpenSolaris. So that's an early screenshot from one of the alpha versions of Nexenta. But we soon found out that, with what it took to maintain a desktop distribution, we just didn't have the resources to do it. There were over, I think, 5000 packages at the time, many desktop packages, and we did have a very active community of desktop users around Nexenta, but it was proving very hard to maintain, because there were just too many bug reports and RFEs coming in, and the developers simply could not handle it. Which was when we decided to change the focus of the distribution from being an all-purpose desktop distribution to focusing on the core, which is to just provide a very stable CLI-only system. So NexentaOS was basically deprecated in favor of the Nexenta Core Platform, which is what the project has been focusing on since late 2007 or 2008.

So what's the Nexenta Core Platform? We basically combined the OpenSolaris kernel with the Ubuntu LTS userland. NCP-1 was based on the first LTS, which is the Ubuntu 6.06 release. It was released in February of 2008, and we had about 5000 packages in the APT tree. This is the NCP-1 release; we codenamed it Elatte, E-L-A-T-T-E. Following the NCP-1 release, which was basically the first core release that we did, we actually built quite a community around it, and we had a very active user base who used Nexenta Core Platform as a storage distribution, as basically a storage server, largely because of ZFS. With the success of NCP-1, when Ubuntu Hardy was released, we decided on the NCP-2 port. By that time, the OpenSolaris community had released build 104, which brought in a lot more features.
And by May 2009 we had the final release of NCP-2, and by this time we had around 13,000 packages supported. So there was quite a large number of packages, but most of them were unsupported; they were basically the universe from Ubuntu. We supported a core of around 5000 packages, which were focused on the server and not on the desktop. So if you look in the APT tree today, you'll find GNOME and Xfce, but they're not really supported by the community as such, or rather not supported directly, but indirectly by the community. With the NCP-3 release we kept the Hardy environment, and we basically upgraded our kernel from build 104 to build 134. This brought in a large number of ZFS changes, like deduplication, et cetera, which basically consolidated the distribution's position in the storage space.

So the project itself has had a history of about five years, and a majority of that was as a core platform. Today, Nexenta is widely recognized as one of the best distributions to use for storage purposes. We basically want to grow that, build on that reputation, and pack in more features; I'll come to that shortly. Nexenta was not supposed to be just a distribution. It was a core platform on which other appliances, other distributions, could be built. The major sponsor of the Nexenta project is Nexenta Systems, and it creates a commercial distribution, a storage appliance; that's basically how it makes its money. There's also another community distribution called StormOS, which is an Xfce-based desktop built on the Nexenta Core Platform. So today we have something like Xubuntu with the OpenSolaris kernel. Those are the websites, and here you see screenshots of the Xfce desktop and the UI of the storage appliance. There were other derivatives planned, but they didn't pan out fully, because we didn't really focus much on the desktop since, again, the community was not big enough to maintain so many packages when it came to the desktop.
So now we finally get to the technical stuff of what goes into building Nexenta. This slide basically shows the process of how we build ON and how the ON packaging system works. At the top, we basically grab the tarball of ON. ON stands for OS/Net, which is basically the kernel and userland utilities provided by the upstream OpenSolaris project. They're compiled with the Sun Studio compiler to produce the system binaries. This is a one-go process: it builds everything in one make. It also generates pkgdefs files, which are basically similar to Debian control files. The pkgdefs files are used to build SVR4 packages; SVR4 was the early packaging system used by OpenSolaris and Solaris. So the pkgdefs files are basically parsed by the SVR4 scripts, and we build the SVR4 packages.

We come in at that layer, the SVR4 package-building layer, and we replace it with the Debian packaging system. So we have scripts that convert the pkgdefs, the SVR4 definition files, into Debian control files. This is usually a one-to-one mapping between the Debian fields and the SVR4 fields; where there isn't one, we've taken good defaults. That's how we then simply generate those control files, use the Debian build commands, and build the Debian packages. So today we have between 200 and 300 packages that start with sunw, and all of these come from OpenSolaris. These form the core of the OpenSolaris kernel. And on top of this, we have packages from the GNU userland, which is basically the Ubuntu LTS userland.

This is the evolution of a package as it makes it into Nexenta today. First is the upstream tarball that the upstream project releases. Debian, of course, adds its own patches and its definition and control files, and provides the binaries.
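As a rough illustration of that pkgdefs-to-control mapping (the field names on the SVR4 side are standard pkginfo keys, but the mapping rules and file names here are hypothetical, not Nexenta's actual conversion script):

```shell
# Hypothetical sample of an SVR4 pkginfo-style definition.
cat > pkginfo.sample <<'EOF'
PKG=SUNWcsu
NAME=Core Solaris, (Usr)
VERSION=11.11
CATEGORY=system
EOF

# Map each SVR4 field to the roughly corresponding Debian control
# field; a field with no counterpart would get a sensible default.
awk -F= '
  $1 == "PKG"      { printf "Package: %s\n", tolower($2) }
  $1 == "VERSION"  { printf "Version: %s\n", $2 }
  $1 == "CATEGORY" { printf "Section: %s\n", $2 }
  $1 == "NAME"     { desc = $2 }
  END              { printf "Description: %s\n", desc }
' pkginfo.sample
```

The real scripts of course handled dependencies, file lists, and maintainer scripts as well; this only shows the shape of the field-by-field translation.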
Ubuntu then appends its own version number on top of the Debian versioning, and Nexenta replaces Ubuntu's with its own set of patches. So we're basically three steps away from upstream, and we want to improve that. Today, we maintain most of the patches on the packages that we change. There are two reasons for that. One is that we have been lax in how we work with upstream and get our patches upstreamed. The other is that there's been no clear process for the community to send patches upstream, even if they want to. So one of the reasons I'm here is to figure out the process, and a relatively simple way for a package maintainer on Nexenta to easily get Solaris-specific changes upstream, into either Debian or the upstream developer itself. I'll be looking for input from the Debian community after the talk, and we have scheduled a BoF session for later. So if you have ideas on how we can improve sending patches upstream, please get in touch.

This is the general repository structure in Nexenta. It's similar to any other Debian distribution. At the top level we have various directories representing the various releases. The Elatte releases are the NCP-1 releases, the Hardy releases are the NCP-2 and 3 releases, under which we basically have three main categories. Main represents the core packages and the packages supported by Nexenta. Contrib usually has community-built packages, but they're not really supported as well. And non-free has some packages from OpenSolaris where we don't have the source and we just have the redistributable binary. So it's fairly similar to any other Debian distribution. Now, this situation will change a little soon because of a project called Illumos, and I'll come to that in a future slide. Any questions so far? I'll also take questions at the end, but if you have any questions, please feel free to ask them; just raise a hand and I'll stop. Okay.
So what were the challenges faced when we built the Nexenta distribution? First, it was a major change for Solaris, the build system and the packaging system, to change from SVR4 to the Debian packaging system. A lot of it was that there was no equivalent mapping of the definition files to the control files, so we had to take good defaults. But even then, when you consider the number of packages, about 300, we ran into issues, and we had to basically test for I think six to eight months until we finally had a working system.

Another thing: Sun provides its own set of userland binaries. So another question we had to ask ourselves was, do we place the GNU utilities under /usr/bin, or the Sun utilities? And we made the decision to go with GNU, because that's what a large part of the community is familiar with. Which again led to the question, then what do we do with the Sun libraries, the Sun binaries and utilities? So for any case where there was a conflict between a GNU command and the upstream OpenSolaris command, we pushed the OpenSolaris command under /usr/sun/bin. Commands like zfs, for example, which don't really exist in the GNU userland, we put under /usr/bin. But for other commands where there was a conflict, we moved them to /usr/sun/bin.

So the next question arose: those in the OpenSolaris community would like to use this distribution because of APT and the familiarity with the dpkg packaging system, but they don't want the GNU commands. In that case, we basically added support for an environment variable called SUN_PERSONALITY. If you set this, we have patched the execve call to check for the existence of that variable and then automatically add /usr/sun/bin to the path. So even if your PATH doesn't have /usr/sun/bin, if you have set SUN_PERSONALITY to 1, it will automatically pick the Sun commands rather than the GNU commands.
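As a sketch of what that looks like on a Nexenta system (the variable name is as described in the talk; the exact commands resolved are illustrative):

```shell
# Default: the GNU userland tools under /usr/bin are found first.
which tar                         # e.g. /usr/bin/tar (GNU tar)

# With the Sun personality set, the patched execve() effectively
# prepends /usr/sun/bin, so the Sun versions win without editing PATH:
SUN_PERSONALITY=1 tar --version   # resolves to /usr/sun/bin/tar
```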
So this way we basically provided a GNU system to the user, but he had the option to get a completely upstream OpenSolaris Sun environment if required.

Apart from that, we also added SMF support to packages. Solaris has support for starting services in a distributed and parallel manner. Rather than starting servers serially via the init scripts, SMF allows you to create a dependency graph of services depending on other services, so two independent services can be started together. We added SMF support to server packages like Apache and rsync, and we added it to mail servers and FTP servers. So currently, if you run any of these servers, you can use the SMF commands to enable and disable the services.

Recently, upstream OpenSolaris moved from the SVR4 packaging system to the new IPS packaging system designed by Sun. Previously, OpenSolaris used to provide the SVR4 definition files, which is what we used to create Debian packages. But now, since upstream stopped providing the pkgdefs and moved completely to IPS definition files, we had to rewrite the whole package-mapping script to use IPS rather than SVR4 definition files. Actually, the IPS definition files turned out to be better than SVR4, in the sense that, since IPS is more modern, it was closer to the Debian scheme of using various variables and control files than SVR4 was. So it was fairly easy to create the new mapping, and now we have tested our new scripts and we basically have a build system based completely on IPS definition files. NCP-3 is about to release next week, after which NCP-4 will move to the IPS-built packages rather than the SVR4 packages.

We also modified Debian tools so they work in the Nexenta setup. As I explained earlier, ON is built in one go, and then the various Debian packages are built from the proto area. The proto area is where the ON build places all of the built binaries. So we wrote a set of debhelper additions; we added a few developer scripts to the debhelper package.
dh_installsmf added SMF support to packages: you basically have to place the SMF XML manifest into the package control directory, and it added simple SMF support. We added dh_installproto, whose purpose is to pick up files from the proto area rather than the Debian temporary directory and place them into the Debian package directory. dh_sunstrip stripped the Sun libraries that were built by ON, and dh_installdeb basically created the .deb out of all of this. Just the addition of these four to debhelper allowed us to use CDBS and the other Debian rules and control files in an easy manner to create Debian packages out of ON.

We also wrote a new utility called apt-clone, which is a wrapper around apt-get, but it uses the advantages provided by the ZFS file system to create completely safe upgrades. ZFS is the root file system used by Nexenta, and ZFS allows you to take easy snapshots and clones. So if you have to install a new package, for example, you could simply create a ZFS clone. ZFS clones are basically copy-on-write clones, so they don't really take any space. What we did with apt-clone was, when someone ran apt-clone install apache, for example, it would clone the existing root file system, apply the upgrades to the clone, not to the original copy, and then, once installed, the user could boot into the new clone. If everything was all right, he could proceed with using that clone. If something went wrong, he simply destroys this new clone and goes back to the old system. This way you never lost a usable state of your system, and you could always incrementally install packages and go back in time to a different state of your installed system. So you always had a working state, and you could fall back to it anytime you wanted. And ZFS made all of this very simple. apt-clone is a simple Perl script that wraps apt-get. So we do not allow a user to upgrade via apt-get.
Anytime a kernel package is upgraded, we recommend you use apt-clone, so the system is always snapshotted and cloned. So we never have issues where an upgrade fails and a user has to reinstall the system; he simply has to go back to the older clone. That's the simple method of using the command: you simply replace apt-get with apt-clone. You can also activate an earlier checkpoint if you want to make it the default, and move back to a different checkpoint later on; you can move between checkpoints, basically. You can also delete all the checkpoints if you no longer need them. But as I said, ZFS clones are copy-on-write clones, so they don't really take up space unless you make changes.

Yes, you have a question? I'll repeat the question if you want. [Audience] Hi, is the checkpoint of the entire file system? Yes, the complete root file system is a ZFS pool, so yes. Okay. There was another question. [Audience] All right. So then how does that interact with, for instance, logging? Because there are some things you want to keep logs of. For instance, you don't necessarily want to keep the dpkg.log that shows installation of something you ended up rolling back. Right. So, well, initially the whole root file system is basically on the same pool. So in the case where you create a new clone, a snapshot is created of the files, so those will be maintained in the version as they were. So if I do apt-clone install apache, the older files will remain in the state they currently are, and the new clone will have the modified versions. So what you could do in such a case is basically create a new pool and then mount that particular directory on the new pool. The root pool is called syspool, and that's what's cloned. You also have the option of mounting a pool at a particular path, so you could simply create a new pool called pool2 and mount /var/log there. [Audience] So that way, when you take a snapshot of syspool, this is the clone, and this is left as it is. Correct. Right.
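The apt-clone workflow described above looks roughly like this (only the install form is quoted in the talk; the checkpoint subcommand names here are illustrative and may differ on a real NCP system):

```shell
# Install inside a fresh clone of the root file system; the running
# system is untouched until you boot into the new clone.
apt-clone install apache2

# List the accumulated checkpoints, make an older one the default
# boot environment, or drop the ones you no longer need.
apt-clone list
apt-clone activate <checkpoint>
apt-clone delete <checkpoint>
```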
So it's a clone of the file system; in this case it's syspool. But whenever you clone a file system, it's a clone of the file system itself and not the whole pool. I'll come to ZFS and how the pools work in a bit; that's the next slide.

So I basically wanted to highlight the things that OpenSolaris brings to this distribution, and by far the biggest use that our users have found is of ZFS. I'm sure you've all heard of this file system that was open sourced along with OpenSolaris. ZFS is a file system that basically brings volume management into the file system layer itself. Here are just a few of the features offered by ZFS. It's very simple volume management: the disks themselves are abstracted into pools. You could create a pool with two disks, and that pool would basically distribute data across those disks. Later on, if you were to run out of space, you could simply add another disk to the pool and it would grow instantaneously. You can take snapshots, which are basically read-only copies, and you can create a clone out of a snapshot, and then you have two versions of the same data that can go in separate directions. These are copy-on-write, so only the changes you write are saved; otherwise they just point to the older data. Simple network sharing: ZFS allows you to share any directory or pool over NFS, over CIFS, or as a block target. It has complete end-to-end data integrity: there's no fsck-like utility for ZFS; there's a 256-bit checksum that runs throughout the file system, and this is checked in the background all the time, so there's no bit rot. If ZFS ever discovers something wrong, it immediately tries to correct it. It also has built-in compression and deduplication, which can be turned on, and you get many options under compression, from the most time-saving to the most space-saving. It also includes mirroring, striping across disks, RAID 5, RAID 6.
In fact, there's a better version of RAID 5 called RAID-Z included in ZFS, which overcomes the write hole of RAID 5. And there's actually support for triple redundancy; I think it's called RAID-Z3. It also has two levels of caching, via the ARC and the L2ARC. The ARC is caching of the file system data in physical memory, and the L2ARC, the level-2 ARC, on SSDs. So you can basically add one SSD to a pool consisting of slow spinning disks, and you can get 3x the read and write speeds. So you basically get the speed of SSDs at the cost of spinning disks.

Again, as I said earlier, ZFS allows us to provide completely safe upgrades via ZFS clones. Nexenta was, in fact, the first distribution to provide completely safe upgrades; OpenSolaris added support later, for boot environments, which is similar to the apt-clone functionality. It also supports ditto blocks, so you can define how many copies of the data you want written to the disk if you're worried about integrity. You can also define variable block sizes at an individual per-file-system level.

The way ZFS pools work is, you can create child file systems under the top-level pool. So you could have syspool at the top, and you can have syspool/a, /b, /c, which are three different file systems. You can also extend syspool/a into syspool/a/1, /2, /3. And each of these can have a unique mount point, wherever you want. And you can take a clone of one particular node of that tree, or you could take a snapshot of the top level and set it to take snapshots of all the children. So ZFS offers fine-grained control over how you manage the various file systems. With all of this, this is the main reason why Nexenta is very well suited as a storage distribution: it simplifies storage management to a very simple level. And it's all handled by just two commands.
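Those two commands, sketched against a hypothetical set of disks (the device names are illustrative):

```shell
# Pool-level operations: create a mirrored pool, later grow it
# by adding another mirror pair.
zpool create tank mirror c0d0 c0d1
zpool add tank mirror c0d2 c0d3

# File-system-level operations: nested datasets, each with its own
# mount point, plus a snapshot and a writable clone of it.
zfs create tank/home
zfs set mountpoint=/export/home tank/home
zfs snapshot tank/home@before-upgrade
zfs clone tank/home@before-upgrade tank/home-test
```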
The zpool command handles the pool-related changes, and zfs handles the file system changes.

The next technology I want to talk about is DTrace, dynamic tracing. This is a functionality provided by OpenSolaris which basically gives you probes all throughout the system. The great thing about DTrace is that it can run on production systems with hardly any effect, with hardly any CPU load. So you can run it on a live system to diagnose any kind of fault directly. There are around 70,000 probes laid throughout the system and the libraries, basically, and you can hook into the entry or exit of any of these function calls. DTrace probes have also been added to multiple server packages like Apache, PHP, et cetera. So this is an example of a simple DTrace command: whenever a process opens a file, it prints out the process and the file name. What we're doing here is hooking into the open() system call on entry and simply printing out the name of the program and the file it's opening. That's a simple one-liner, and there are hundreds of such one-liners that do various tasks. In fact, the language you see within the quotes is a language called D, which is very similar to C, and you can write whole scripts that allow you to collect complicated data from the system to try to diagnose a problem. DTrace has been used to optimize multiple libraries and systems. In fact, I've heard of cases where a Linux library that also exists on Solaris was optimized because of DTrace, and those changes then reflected a considerable optimization on Linux as well. So DTrace is one of the jewels among the OpenSolaris technologies. Do you have a question? Okay.

The third technology I want to talk about in OpenSolaris is Zones. Zones is a lightweight, OS-level virtualization technology. It basically allows you to create VMs based on the existing system; it's not a full VM, it's basically a jail-like environment.
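For reference, the open() one-liner described a moment ago would look roughly like this in D (reconstructed from the description, not copied from the slide):

```d
/* Fire on entry to the open(2) syscall; print the program name
   and the path it is opening. Run as:
   dtrace -n 'syscall::open:entry { ... }' */
syscall::open:entry
{
    printf("%s %s", execname, copyinstr(arg0));
}
```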
So it basically separates the various zones in terms of the applications and what they can interact with within each zone. In the case of Nexenta, we basically debootstrap the Nexenta system into a particular zone. A zone basically has a root file system, which is a ZFS file system, an IP address, and a host name. You take five minutes and you basically have a VM on top of the global zone. So an installed Nexenta system is a global zone, and you can create multiple Solaris containers within it. And each of these is a very lightweight VM, completely separate from the others, with its own IP address. So you could have applications that are running in separate zones and then interacting with each other. This is very easy to set up, and you create a zone in about five minutes. The advantage of this kind of lightweight virtualization over things like VirtualBox and VMware is that it's very light on CPU and memory usage. So a system with about two GB of RAM can have about six to seven zones running on it. What we have found on our development boxes is that you could have a zone for about two to six megabytes of RAM, and they basically have the same power as the global zone.

The Nexenta community built a package called dev-zones, which is a wrapper over zones. This addresses a very interesting technical challenge in developer environments. Say you have a very powerful box and you want to share it as a development resource among multiple developers, but you don't trust them with root access to the machine; the development itself, however, requires root access. So what dev-zones does is allow you to define a base zone, which the administrator sets up as the complete developer environment. The base zone has all of the packages and the development libraries, et cetera, installed on it; this is basically the environment a developer would need to work on a particular project. You then create simple user accounts.
These are non-root, non-sudo accounts on the system. And once a user is logged in, he can simply run dev-zone create, and an instantaneous copy of the base zone is created and handed to him. So any developer can log into the machine, provision a dev zone for himself, work on it, and destroy the zone later for others to use. So a system that can handle six to seven developers in various time zones can be used by around 40 to 50 developers. You don't have to spend resources on buying 40 different machines; you just need a powerful enough machine to handle multiple zones for users at various times. We have used this project quite successfully for Nexenta package development, where we have one powerful machine with dev-zones installed, and any developer, any community member who basically wants to build a package, we give him a zone on the developer machine. He logs in, builds the package, logs out, and destroys the zone. This can, of course, be extended to any kind of development environment. This package is hosted on nexenta.org.

We also wrote our own autobuilder, which is basically a utility, or rather a set of utilities and nodes, that pulls packages from upstream and rebuilds them for Nexenta. Initially, NCP-1 had about 5,000 packages, and all of these were manually built by a small team of developers. The same was done for NCP-2: we had about 5,000 packages in our repository, all of them manually built. But then we saw that this doesn't really scale well, and there are a lot of packages upstream that can simply be pulled in and rebuilt without any compile issues or complaining; we simply needed the people and the time to do it. So we basically used dev zones along with the autobuilder that we wrote, which checked the various packages and their dependencies. Whenever it saw that a package was not present in Nexenta but had all of its dependencies fulfilled, it was simply provisioned to an autobuilder node.
So whenever a node got a job that said, hey, package A has all of its dependencies satisfied but is not itself built, the node simply pulled in the sources, rebuilt for Nexenta, and uploaded to our repository. So we got through about 7,000 to 8,000 packages in the course of two months, and our package count increased from 5,000 to the 13,000 it stands at today. The autobuilder also allowed us to track the most common types of errors. Whenever a build failed, we could tag it as a build failure caused by a particular variable that is different between Linux and Solaris. Basically, we have a set of around 20 known porting fixes in code that need to be done to port a package from Linux to Solaris. So we tagged these packages accordingly, and then a developer went in, manually made the change, and built the package. So the autobuilder basically allows us to use dev zones to increase our package count and only involve developers when a package does not automatically build and manual patching is required on the package. This is also on our website. I know that Debian has a similar build system, but we kind of went and wrote our own here because we had very unique needs in terms of how the Solaris packages were built.

Yes, your question? [Audience] I'm sorry if I just missed it, but which compiler and which library are you using to build these? Right. So as far as Nexenta is concerned, the OpenSolaris part is basically built with Sun Studio, because that is the best-supported compiler for ON, but all of the userland is built with GCC. [Audience] And you're using the OpenSolaris libc for a reason? Yes. [Audience] So I don't understand why you're using the OpenSolaris libc.
[Audience] All of the attributes of the operating system that you've described as being interesting and different are kernel-space things, and it seems to me like you've made a huge amount of work for yourselves, not to mention the licensing issue that you've never adequately responded to, about linking between GPLv2, at least, and CDDL-licensed libraries. So I'm curious what the motivation for the OpenSolaris libc is. It seems like that's the source of an awful lot of work.

Actually, that is something I was going to cover in one of the later slides, but on the question of libc itself: GNU libc does not really... well, Solaris is completely tested and QA'd with the Sun libc, and GNU libc is not really a drop-in replacement for the Sun libc as it stands today. It would also need to be heavily ported to work fully with the Solaris kernel. So we basically... currently, we are really focusing on stability, and once we ported GNU libc, we'd have to work on getting it to the same level of stability, whereas the Sun libc is already tested by Oracle and Sun. So we decided not to do that. As far as ON is concerned, as far as the kernel layer is concerned, we just took everything upstream provided to us. The Solaris libc and kernel share a lot of private interfaces, so it'd be very challenging, I think, to take just a stock GNU libc and make it work, at least work well, on the Solaris kernel.

[Audience] Yeah, I understand this very well. My name's Bdale, by the way. I have a long history of doing library and kernel interface things and dealing with cross-architecture porting and all sorts of stuff.
[Audience] So I understand exactly what's involved, but it still seems to me that the combination of the amount of work that has to be done to build a complete user space on top of a different C library implementation than it was originally intended to be built on, combined with the legal ambiguity of the continued use of a libc under CDDL with a huge amount of GPL user space in this kind of combination, whether that's actually harder than doing a port of glibc to the OpenSolaris kernel remains really unclear to me.

I'm going to suggest that we take that... both of these are... there are two separate points here, right? One is the technical challenges, and the other is the legal challenges. And we can talk afterwards. [Audience] Certainly, yeah, be happy to. This is a topic of interest. In fact, I was going to discuss this shortly, in one of the future slides, about the challenges faced when collaborating with Debian for Nexenta. So let's move on to the next slide.

So, Debian and Nexenta. Nexenta basically has some history with Debian. As was pointed out, there was some ambiguity on the licenses, on the licensing interpretation, and basically also some technical challenges. One of the points we got was that OpenSolaris was not fully open: there were small bits, including the internationalization library, for example, that were redistributable but closed in themselves. So recently, a community project led by Nexenta released a branch of ON called Illumos. So we're calling this the Illumos project. It was announced a couple of days ago, and if you follow some of the tech sites like Slashdot and other news sites, you would have heard of it. This is a project started by Garrett, who's also with us right now; wave to everyone, Garrett. Garrett is, by commits, the largest contributor to OpenSolaris, the kernel. So the Illumos project was basically created to address this challenge of having a fully open distribution, without the closed bits.
We no longer wanted those closed bits in OpenSolaris, so the primary focus of Illumos was to open up these closed libraries and drivers. The biggest component of all of these was the internationalization part of libc, and we actually have a fully open implementation of that in place, taken from FreeBSD and NetBSD. You can go to illumos.org; we announced this to a very big crowd of around 300 people from the community on the third. We also have open replacements for other small closed portions, like drivers. The aim is to have a completely open ON, with all of the closed parts replaced, in a very short time, about a month or two.

Another goal of the Illumos project is to remain fully ABI-compatible with OpenSolaris. Solaris prides itself on its binary compatibility, so a binary from an older Solaris release will run just fine on OpenSolaris, and with this branch of ON we also wanted to make it a goal to retain that level of compatibility. So tomorrow, if an application switches from ON to Illumos, it'll still work just fine.

Another point I want to make is that Illumos was started by Nexenta, but it's intended to be fully community-owned and governed. We have been talking to other members of the OpenSolaris community who shared the same concerns about these closed portions of ON, and they have come on board, as have multiple distributions, including some of the biggest distributions in the OpenSolaris community. As it stands today, we have about 99% of the OpenSolaris community on board with the Illumos project. Another goal of Illumos is to be completely self-hosting, so you can build Illumos on Illumos. And I want to make a point of mentioning that this is not a fork of ON, but rather a branch of ON: we will be closely tracking upstream ON, while maintaining our open replacements for the closed libraries.
Also, any changes that the community wants to make can be integrated directly into Illumos. We have reached out to Oracle, and if they want to take in changes from our branch, they're welcome to. OpenSolaris as a community was largely run by Oracle, but Illumos is completely community-governed, and Oracle is welcome as a peer. We've had an incredible response to the Illumos announcement; we are actually still seeing mails coming in, and haven't been able to reply to many of them. There was a big need in the OpenSolaris community for someone to step up and do this. There were previous efforts to open things up, but they never really got anywhere.

One of the reasons we're here at DebConf is to work with the Debian community and figure out how we can have a closer relationship in terms of packages and patches. We want to address this challenge of having to remove the closed bits. Multiple distributions, like Nexenta and community distributions like BeleniX and SchilliX, are also moving to Illumos. We talked to other distributions like MilaX and StormOS, and they've also shown interest, though they haven't fully committed yet. But my take is that they'll also move to Illumos soon. So we have almost 100% of the OpenSolaris distributions moving to the Illumos branch of ON.

Now I want to come to how Nexenta can work with Debian. Nexenta, as of today, is based on Ubuntu LTS, but we are open to working more closely with Debian, since we are currently about three steps away from the application developers. We want to reduce the number of steps between us and upstream, simply because we want closer collaboration in sending our patches and Solaris-specific changes upstream.
Being closer to Debian also helps keep the package quality higher, because there's a shorter path for getting Solaris-specific changes in, rather than following the current three-step process. I will also shortly try to restart the discussion of Nexenta being a port of Debian. There are two ways to go here. I've already spoken to the DPL and the kFreeBSD folks, and we have some ideas and some options open as to whether Nexenta would be an unofficial or an official port. There are pros and cons to both, so I would like to hear your ideas if you have any. We have a BOF scheduled; I don't think it'll show up in the events list because it's showing as unapproved, which I guess means I didn't submit it correctly, but we are meeting at the hacklab on the left side, I forget the name, at 5 p.m. We can also have a discussion after this talk.

We're looking at discussing how patches can be sent upstream from Nexenta to Debian. Nexenta currently maintains patches for about 2000 to 3000 packages. These are mostly minor changes, usually because the upstream authors develop the package on Linux, and in certain cases some system definitions or environment variables, for example, are not available on Solaris. So a patch is usually mostly an #ifdef, or a change of a variable type.

We want close collaboration with Debian also because it brings to Debian the powers of ZFS. We have a very active community of ZFS users at Nexenta, and we believe that bringing this file system to the Debian community would be highly beneficial. So if you would like to share your ideas on how we can work together better, or if you have concerns about the legal or technical challenges that we may face and how we should go about it, I would love to hear your views.

So, I have a question.
In fact, I think that basically ends my slides, and I'll open this up to Q&A. That's my email ID, and I'll be posting the slides up shortly. So please, questions.

Hi, how does the purchase of Sun's assets by Oracle affect your work?

Right, so Illumos was in a way a response to that, because once Oracle purchased Sun, we basically saw radio silence from Oracle about the future of OpenSolaris. While we believe that Oracle will continue to keep the source open and not close it, we believe that the OpenSolaris distribution as such may not have a long future, and Oracle is focusing its energies and strategies towards enterprise Solaris. That works for Oracle, I guess, but the community that depends on the OpenSolaris kernel needed something different, in this case a branch. The other concern was: what if Oracle closed the tap on the closed binaries and stopped providing them? That would mean you could no longer build a distribution that needs those closed bits, which is why we went ahead and just replaced them. Today we are in a position where, if Oracle were to close the tap, we could convert this branch into a fork as such. In fact, this is what we have heard across the board from the various OpenSolaris users and community members we talked to after the release of Illumos. The OpenSolaris community was waiting for something like this, and after a period of about five to six months with no news, when Illumos came out to fill the gap, the community just converged around it. That's what we are seeing right now, and we have, I would say, ambitious plans for Illumos in the near future.

Another question there.

Hi. So the three main features that you said Nexenta would bring to Debian, or that OpenSolaris brings to Nexenta, are ZFS, DTrace, and Zones.
And if you replace Zones with Jails, we have all of those in our hopefully soon-to-be-released kFreeBSD port. I heard you've talked with those guys, which is great; communication is good. What would you say the advantages of Nexenta are, from our point of view or from a user's point of view?

You mean bringing Jails to Nexenta?

Well, no. I mean, you said the three main compelling advantages of Nexenta are ZFS, DTrace, and Zones. FreeBSD, I believe, has ZFS and also DTrace and Jails, which are similar to Zones. So what are the reasons one might choose Nexenta over Debian GNU/kFreeBSD once they're both released?

So the version of ZFS in FreeBSD is actually an older version of the file system, so it does not have the latest features. Also, I don't think you can have a root ZFS file system on kFreeBSD as of today. And, well, I don't fully understand the differences between Jails and Zones, but from what I understand, Zones go a longer way, in that, for example, the userland and applications were changed to understand how Zones work, and I don't believe that's the case with FreeBSD. The same goes for DTrace. Technologies like DTrace and ZFS are developed on OpenSolaris and heavily tested there. They have been ported to other systems like FreeBSD, but I don't believe them to be of the same caliber as on the OpenSolaris platform.

Any further questions?

So, Anil just touched on a few of the improvements in the OpenSolaris kernel. There are quite a few others; Crossbow is a great example. The network virtualization technology that's in OpenSolaris, and the investment that continues to be made there by the upstream, is enormous. And I think we're going to continue to see that even if there's no OpenSolaris binary distribution. The code base that makes up that foundation in ON is continuing to evolve at a rapid pace, even standing as it is as an enterprise-grade OS.
We also think the work that has been done by the teams that are part of FreeBSD is just incredible, but we believe there's value in diversity here. It's for the same reason that we don't ask "why would you have a SPARC port? You could still run all your programs on x86." There's value in diversity.

Okay. Any further questions? Okay. So thank you for attending this presentation. I hope I've given you a better understanding and view of the Nexenta project and how we want to work with Debian, and we're looking forward to your input. We are at the hacklab; the BOF is scheduled for 5 p.m. today, but if you want to come join us on the way there, please do. We look forward to working with Debian. Thank you.