So this is going to be a rant about how the so-called UNIX tradition, the system of a hierarchy of processes and file permissions and all that sort of stuff that comes down to us from the original UNIX, really needs to be scrapped. It's well past time that we effectively replace the UNIX userland entirely. Not really the kernel; that's mostly fine. We can keep most of, say, the Linux kernel, though some of it would have to be changed to present a different kind of abstraction to userland. And to be clear, I'm picking on UNIX here specifically, but Windows and the other alternatives have all sorts of similar problems of their own; they just made different variants of the same mistakes.

I should also say this is part of a larger rant about general software quality and why it's so hard to make good software these days. I think one of the major reasons is that we're building on top of really shoddy platforms, and that goes for UNIX, but it also goes for Windows and basically everything else, and likewise if you're talking about the web as a platform. It's really unfortunate where we are today, because everything is this huge kludge, this huge giant mess. Yes, of course, there are very good reasons, economic and some technical, why we don't just scrap the whole thing, why these things have hobbled along for so long, and why the path forward usually taken is to just pile more stuff on top. Certainly, that's true at each step along the way.
There are very good reasons why each of those choices was made. But add it up over the decades, and what we get is a bigger and bigger mess. That's what all these platforms, UNIX included, feel like. They feel like a work area, like a desk where you have all your stuff piled up and you don't want anyone to touch it, because hey, you know where everything is. But all this mess has accreted over decades now, and you, the person who owns the desk, or in the analogy the community of people keeping track of this mess, you know where everything is, you're comfortable with this giant mess, and it would actually be a huge burden to you if it were to be scrapped and replaced with something much more sane. That's unfortunately where we are: UNIX and the other platforms are these giant messes, and the people in charge, the people doing the actual work, like maintaining the Linux kernel and maintaining the UNIX userland, are all comfortable with the mess. But it's extremely inhospitable to newcomers, and it creates all sorts of burdens for people who aren't experts in the whole giant mess but have to work with it tangentially, who have to deal with some parts of it, as you do if, say, you're building user software, or actually, in many cases, if you're just a regular user.

What the general programmer experience feels like these days, and in fact in many cases the general user experience, is that all but the simplest pieces of software are rat holes unto themselves, where something doesn't act like you expect it to, and figuring out how to fix it is this giant tangent that leads you on to 20 other tangents, and those 20 other tangents probably each have their own 20 other tangents.
That's what the modern user experience is often like for power users, and it's what the programming experience is almost always like. Even just the act of building software and keeping track of your code with version control: all of those tools, build tools and version control systems, have gotten so complex, I think in many cases because the underlying platforms they're abstracting over have also gotten really complex. If that underlying platform were simpler, this wouldn't filter up to the software above it.

So what's so bad about Unix? Well, first off, almost anything to do with terminals and shells. I should qualify that right off the bat by saying that clearly the idea of a programmatic user environment, where you have complete programmatic control over the system, is totally sound. But everything from there about terminals and shells is really a disaster, and it doesn't have to be this way. The core problem here is just the sheer complexity of the whole arrangement. Sure, in the simple case, hey, you bring up the terminal, you have a shell there, you type commands, and it does what the command says, it seems really simple for the trivial cases. But what we end up dealing with inevitably in any complex system is all the edge cases. When anything goes wrong, the whole complexity of the thing tends to rear its head, and it's this huge drain on your time and attention. Just think of all the moving pieces, and the conceptual complexity they involve, in a very simple command like, say, `ls -la` redirected to foo. To really understand what's going on here,
You have to know like 15 different things. So first off there's this terminal thing, which is a pseudo device in the Linux kernel, like a pseudo hardware device that interacts with the shell process. Text typed at the terminal gets to the shell process; it looks at that command and interprets it as invoking the ls program, which should be on your PATH, right? Otherwise it won't know where to find it. And what the hell is this PATH? Well, it's this environment variable thing, which has to do with process hierarchies, with data handed down from processes spawning other processes. Anyway, the PATH variable lists a bunch of directories, and ls should be found in one of those. So the shell process forks itself and execs ls, and in the fork, the text `-la` is passed into the parameters of the main function, by C convention into the char array there, char pointer I should say. And what `-la` means is entirely up to this particular program we're invoking, the ls program. The `> foo`, however, is totally different: that's something the shell itself recognizes as part of its syntax. So it says, okay, before I fork and exec to run the ls program, I need to open this file foo in the current working directory, which is part of the shell process's state. We're going to open that file for writing, and in the fork where we're going to exec ls, we're going to swap out the usual standard output file descriptor, which is zero, or is it one?
I always forget which it is; I think it's one. Anyway, we're going to replace that file descriptor with the file descriptor for the open file foo, which we're going to write to, such that when we exec the ls program, it has the file foo as its standard output. So the command output doesn't go out to the terminal; it goes instead to this file foo. And because we didn't run this command in the background, the shell itself is actually waiting for the ls fork to return an exit code, and then it will collect the finished child process and continue on its way. And I think I've even left out some details, but nonetheless, what I've just described is an extremely complicated story. You could say, well, sure, as just a casual user of the command line you don't have to know all that. But the problem with the Unix environment, and really with all of our software, is that inevitably you're going to hit a point where you do have to know all these moving parts, because there's going to be something you just don't understand about why something is going wrong until you understand the whole complete story. So I think it's very important we ask the question: does the story really need to be that complicated? And I say no, it does not.

Beyond even just the complexity of all that, almost all the particulars feel like this giant kludge, where even the naming conventions make no sense to a newcomer. It's just this jumble of names that has been piled up over decades. Sure, there are a lot of people out there very comfortable with all these names and familiar with them, but if you're not, they just don't make any sense; the naming is all strange. There are inconsistent conventions for both the naming and syntax of command parameters. And the standard Unix shell and all its variants, whether bash or dash or zsh, all of them, they're all really terrible.
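To recap the `ls -la > foo` story in code: here's roughly the sequence of steps the shell performs, sketched in Python for readability (a real shell does this in C, and I'm ignoring signals, job control, and error handling; `run_redirected` is my own name for the sketch):

```python
import os

def run_redirected(argv, outfile):
    """Roughly what a shell does for `argv > outfile`: fork, redirect
    the child's standard output to the file, exec, and wait."""
    pid = os.fork()
    if pid == 0:
        # Child: open foo for writing and make it stdout (fd 1),
        # then replace this process image with the program.
        fd = os.open(outfile, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        os.dup2(fd, 1)               # stdout is file descriptor 1
        os.close(fd)
        try:
            os.execvp(argv[0], argv)  # searches PATH; never returns on success
        finally:
            os._exit(127)             # only reached if exec failed
    else:
        # Parent (the shell): wait for the child and collect its exit code.
        _, status = os.waitpid(pid, 0)
        return os.WEXITSTATUS(status)

# e.g. run_redirected(["ls", "-la"], "foo")
```

Every one of those calls (fork, open, dup2, execvp, waitpid) is a moving part the user eventually has to learn about when something goes wrong.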
And the shell languages themselves are just really bad dynamic languages. Over the decades they've piled on all these convenience notations, features which can save experts a few keystrokes but are just not worth it in the grand scheme of things; they put a huge burden on everyone else. I don't think it's an exaggeration to say we'd be better off just having a commonly used dynamic language, whether Python or Ruby or JavaScript, who cares, as the standard shell language. Even though it'd be more verbose to run commands, you'd just call functions: say `run`, where the first argument is the name of the program you want to run, followed by strings for the remaining arguments. That would be better, because you'd have a simple, notationally consistent language. The shell languages are just cobbled-together messes of ad hoc conveniences, which is why Perl, for example, is a shitty dynamic language that no one uses anymore: it takes all its influence from the shells. It was a worthy experiment at the time, but we learned better, and the better dynamic languages that have survived moved away from that direction.

I should be clear, though, that simply replacing the shell language with something more sensible, and somehow removing that layer of legacy cruft which is the whole terminal concept, all of that would be an improvement, but it wouldn't really solve the root problem, which is all that inherent complexity I described behind initiating a process with a simple command like `ls -la`. Also be clear that all that complexity can't just be sidestepped. It's really tethered to the heart of most Linux distributions, where everything is glued together with a bunch of shell scripts. That situation has improved somewhat over the years, because systemd came in and replaced a whole bunch of old startup scripts, but there are still many places in Linux systems.
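Going back to the dynamic-language-as-shell idea: a `run` function like the one I mentioned might look like this sketch (the name `run` is hypothetical, wrapping Python's real `subprocess` module):

```python
import subprocess

def run(program, *args):
    """Hypothetical shell built-in: run a program with the given
    argument strings and return its captured output."""
    result = subprocess.run([program, *args], capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{program} exited {result.returncode}: {result.stderr}")
    return result.stdout

# Instead of `ls -la > foo`, you'd write something like:
#   open("foo", "w").write(run("ls", "-la"))
```

More verbose than `ls -la > foo`, sure, but there's one consistent rule for quoting, arguments, errors, and redirection, instead of a pile of ad hoc notations.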
The shell just can't be avoided there; it's tied up with the whole thing. It's the central glue that makes the thing go. And of course, if anything goes wrong on your system and you want to fix it, and you investigate internet forums to find the answer, the answers you get are almost always going to be in the form of some kind of shell business you'll have to deal with.

Okay, so the shell and terminals suck. What else sucks? Dependency management. Package managers are, in concept, a very good idea. It's just that, well, sure, very often software installed with a package manager does work, but very often it doesn't, and the question is: why? Aren't these systems supposed to solve exactly those problems? If a piece of software works on one person's machine, on the developer's machine, say, why is it such a headache to get it working on other people's machines? If you used a PC in the DOS era, you may remember that it used to be as simple as copying all the files of a program into a directory on the target system, and then it would work on that system, assuming it worked on other systems with the same version of DOS, barring of course issues with the low memory barrier and hardware support. But what we didn't deal with back in those days was one installed program messing up another because they have conflicting dependencies. If package managers managed dependencies simply by ensuring that all the static pieces needed are there on your system, then package management would work brilliantly. But they do more than that.
That's the problem. Many packages, when installed, end up running scripts and other programmatic code that reaches in and modifies parts of the system, that mucks with configuration, and that's where everything goes wrong. In other words, these package managers do a reasonable job of ensuring that all the packages you think should be on the system are actually there, but what they don't do is verify that those packages are in their virgin state, because the way we write our software, things end up just recklessly mucking with the state of all of our pieces, all of our packages. So of course the package managers can't give real assurances, because they don't really track all possible configurations of these packages; they just track the packages themselves. But in many cases, whether or not your software actually works hinges on the configuration state.

I think what's really terrible about these sorts of software failures is that they're not really considered bugs. These software packages are written to certain expectations about their running environment, and when that environment doesn't meet those expectations, the authors of those programs don't treat that as a bug. They just say, oh, you need to fix your environment. We have this tangled system where the pieces are supposed to work together, or at least live side by side peacefully, but there are many, many scenarios where no one's really in charge of, or feels responsible for, the cases where the pieces don't fit together and don't cohabitate well. So how could we fix this?
Well, I think first off you need a much stricter notion of package management, where there are strong guarantees that separate packages can't mess with each other, and also that packages can't get into states where suddenly they aren't the package expected by other packages. If A depends upon B, B really shouldn't be able to get into any state where it doesn't satisfy what A expects. So how do we ensure that? Well, part of it is just having higher standards for our software in terms of how much configuration we deem reasonable for a program to have. But aside from higher standards, we could also minimize configuration state if we recognize that most configuration, a huge, huge chunk of it, concerns just two things. One is resolving references, just hooking up how A is supposed to find B. The other huge half of configuration is all about security, about permissions. Those two things, resolving references and permissions, describe the root of probably 80%+, even 90%+, of all the stupid problems I've had to deal with on my computer: things like, oh, that thing that's supposed to be in that directory wasn't there, or was misnamed, or had the wrong permissions, all that sort of garbage. So if you could somehow get rid of all, or at least most, of the configuration state concerning resolving references and permissions, you would be eliminating, I think, the primary cause of stupid errors.

Okay, so now the question is: what would a system look like that solved all these problems, or at least mitigated them? First I'm going to do a broad overview of my proposal before going into the details. So, first off: for package management to really make sense,
I think it actually has to be something done at the kernel level: a kernel-level notion of what packages are on the system. Bear with me, I know it sounds strange, but I think the rationale will become evident fairly soon. So within this system, all packages, meaning mainly all programs, but also all shared libraries, and then also all files and all directories, all the things the kernel identifies, are known by both a UUID and also hash IDs. The idea is that the UUID identifies something through all of its versions, whereas the hash ID both verifies cryptographically that this is what we say it is and also identifies the version. These IDs really should be machine-independent. They're not just some artifact of one particular system: when you copy a file around, when you distribute a package, these IDs are going to be the same on every single system. And so when we resolve references in configuration in the system, it's all through these IDs. What we're getting away from is hooking up references in the form of file paths. That's extremely error-prone; that's a huge source of all our headaches; that's what we don't want to do. Unlike file paths, these IDs are going to be the same on every single system. Now, of course, there are details here. How can you enforce UUIDs being unique across all systems? You can't really do that, but I think for all practical purposes it'll work well enough, for reasons I won't get into quite yet.
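A minimal sketch of the two-ID scheme, in Python (the class and field names are my own, not a worked-out format). The hash ID is just a cryptographic digest of the content, so any two machines holding the same bytes derive the same ID, while the UUID is minted once when the package is first created and carried along through every version:

```python
import hashlib
import uuid

def hash_id(content: bytes) -> str:
    """Version/integrity ID: derived purely from the bytes, so it is
    identical on every machine holding the same content."""
    return hashlib.sha256(content).hexdigest()

class Package:
    def __init__(self, content: bytes, identity=None):
        # UUID: stable across all versions; minted once at creation.
        self.identity = identity or str(uuid.uuid4())
        self.content = content
        # Hash ID: changes with every new version of the content.
        self.version = hash_id(content)

v1 = Package(b"program bytes v1")
v2 = Package(b"program bytes v2", identity=v1.identity)
assert v1.identity == v2.identity   # same thing through all its versions
assert v1.version != v2.version     # but each version is distinguishable
```

References in configuration would then name `(identity, version)` pairs rather than file paths, which is what makes them mean the same thing on every system.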
I won't get into quite yet Now conventionally directories on Unix and all systems like Windows They're a listing of names mapped to actual behind-the-scenes file IDs So in a sense conventionally Directories are actually what give files their names and you can have hard links so cold to the same file and from multiple directories But then they'd have different names in different directories That's not how things work in the system in this system Each file and directory has a set of made of data optional made of data or tributes associated with them Including possibly a name but also things like the creation date and whatever else you want But all of those things including any name in fact are all optional You don't have to give your files and directories names They can just exist because the true name of anything is really just its set of IDs The idea here is that we want to move away from any sort of conventional file hierarchy. There's no notion of root directories There's no notion of Mounting partitions any of that sort of business is just you have a bunch of files and directories known by their IDs And rather than thinking in terms of browsing up and down neatly organized hierarchies because in actual fact, they're never really neatly organized Instead of even attempting that what we're really doing is just relying on search search of the made of data of the files and directories So as I mentioned, we want to make the security system as Reluctively simple as possible. And so we actually really only have two privilege levels There's no real concept of user account privileges per se It's just every program running has either admin privileges or non admin privileges or maybe you call them super I don't know. Maybe they'll just be called super user privileges or whatever now also understand. 
There's no process hierarchy. There are just your installed packages, and programs, when installed, are given admin privileges or not. When an installed program runs and it has admin privileges, it can do certain things, like in fact install and remove packages on the system; that's one of the privileges. Each installed program, each installed package, and each user account, including whatever we call the admin account, whether it's superuser or whatever, all of these things have their own individual file space. The rule is that admin programs can see anything on the system; they can do anything they want, of course. But normal programs can only access their own file space and the non-admin user account spaces. And understand, there's still a notion of a logged-in user, but there's no process hierarchy, right? So when a user program, a non-admin program, is running, it can actually see all of the user accounts, whether those people are logged in or not. We're not trying to protect the normal users of the system from each other, because I think in modern computing that just doesn't make any sense; it's an out-of-date notion. It certainly made sense back in, like, the 70s, when you had a bunch of people logging into terminals, and of course you wanted to protect those users from each other, because they were a bunch of strangers all sharing the same machine. It just doesn't happen anymore that strangers are really sharing the same machine. Sure, you have people like spouses sharing the same laptop and so forth, but the notion that we're really going to keep one person's files on the machine secure and truly private from the other user of the machine, that's a totally fanciful notion, because one of those people is going to have admin access, right?
So this barrier between users messing with each other just becomes totally meaningless.

Now, when it comes to IPC, inter-process communication, there really should be one primary mechanism, the default way of initiating all contact between programs, and what that should be is something like a request-response mechanism. Any program can send a request to any other program, and that program then comes back with a response. To my understanding, what I'm describing is actually quite like what kdbus is going to be on Linux, though I think that is complicated by all the concerns about the conventional Unix permission system; we're dispensing with all that, so it's a much simpler model here. Do understand, though, the implication is that each program effectively has to be like a server. It has to listen for requests and respond to them, sort of like the way Windows programs always have to have a message pump going. Now, what requests a program responds to is entirely up to that program; it's just a matter of each program's external public API. For many programs, the set of requests it responds to is going to be very simple. A lot of user-facing software, like a game for example, doesn't need to do much in relation to other programs; it just needs to be started, maybe paused, and maybe stopped, so that might be a good bare minimum for every program to implement. Because understand, in this system we're trying to minimize as much configuration state as possible, and which programs happen to be running at the moment is itself configuration, and so it's a source of error, where you try to talk to a program but, oh wait, it's not running. So the idea with this IPC mechanism of request and response is that a program should always be in a state of readiness to respond to these requests. Maybe the program is dormant, or maybe it was never loaded into memory and started executing at all, but the idea is that the
kernel, when your program receives any sort of request, spins it up, and it should then be ready to give a response. So again, there's no process hierarchy here. There are no programs managing other programs, as much as possible. Each program, to external appearances, should be this stateless, always-available thing. It should be more like a service, basically.

Now, this request-response mechanism is a good baseline for meeting most common communication needs, but then of course there are special cases where you have a lot of data you want to send back and forth, and this request-response may not be most suitable. So between processes, through this request-response mechanism, we can share handles to things like files, pipes, and chunks of shared memory, and that way you can have higher-performance mechanisms for IPC. But understand, the general pattern here is that everything is initiated through this request-response. I'm not totally certain about some of the details, like, when you hand off a handle to a file to another program, should you have to also do some kind of system call that validates that file handle, so that it gives that other program permission? Because maybe these file handles that you share between programs are actually system-wide IDs instead of process-specific ones. So there are details like that.
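Setting those handle-passing details aside, the basic request-response shape can be sketched like so. The dispatcher dictionary stands in for the kernel's table of installed packages, the lazy start-up stands in for the kernel spinning a dormant program up, and every name here is hypothetical:

```python
class Program:
    """In this model a program is a service: dormant until a request arrives."""
    def __init__(self, handlers):
        self.handlers = handlers   # the program's public API: request -> handler
        self.started = False

    def handle(self, request, payload):
        if not self.started:
            self.started = True    # the "kernel" spins the program up lazily
        handler = self.handlers.get(request)
        if handler is None:
            return {"error": f"unknown request {request!r}"}
        return handler(payload)

installed = {}   # stand-in for the kernel's table of installed packages

def send_request(program_id, request, payload=None):
    """The one primary IPC mechanism: any program can send any request
    to any other installed program and gets back a response."""
    return installed[program_id].handle(request, payload)

# A game only needs a tiny API: start, pause, maybe stop.
installed["game-uuid"] = Program({
    "start": lambda _: {"ok": True, "state": "running"},
    "pause": lambda _: {"ok": True, "state": "paused"},
})
```

Because the target is an installed package rather than a running process, "is it running right now?" stops being a piece of configuration the caller has to worry about.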
I haven't totally worked out all of those, of course.

And lastly, again, we're trying to minimize as much configuration here as possible, but there's going to be some legitimate amount of configuration that our programs need, so where should we put it? The traditional Unix answer is, hey, we'll have this /etc directory and a bunch of well-known file paths to certain config files there. That's really error-prone, because you have a bunch of different programs all mucking with the same state there, the same configuration. Then there are all sorts of other conventions for storing configuration in files. Now, I'm not positive about this, but I suspect it actually is a better system to have a central registry. The problem, I think, with the Windows registry is that it has a complicated permission system. It also is kind of reckless in terms of what programs can modify what part of the registry, and then a whole lot of programs totally abuse the registry, because they're not really storing config in the registry so much as they're storing data; it gets abused as sort of a general-purpose IPC mechanism instead of really a place to store config. So the registry I have in mind would just be a freeform storage of key-value pairs, but every user account, including the admin account, and every program would have its own separate namespace for these key-value pairs, and of course non-admin programs could only touch their own namespace, not the namespaces of other programs. As for touching the user account namespaces, I'm thinking maybe only admin programs should be able to touch that stuff. Like, do you want regular programs to fight over what your language setting is, or what your user preference for mouse speed is? I'm not sure that should even be possible. Maybe non-admin programs can read user account preferences but can't modify them; maybe even something more complicated than that.
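Whatever the exact access rules turn out to be, the core structure of the registry I have in mind is easy to sketch (the class and method names are mine, and the admin flag would really come from the kernel, not the caller):

```python
class Registry:
    """Central config store: freeform key-value pairs, with one namespace
    per program and per user account; non-admin programs touch only their own."""
    def __init__(self):
        self.namespaces = {}   # namespace id -> {key: value}

    def set(self, caller, caller_is_admin, namespace, key, value):
        if not caller_is_admin and namespace != caller:
            raise PermissionError(f"{caller} may not write namespace {namespace}")
        self.namespaces.setdefault(namespace, {})[key] = value

    def get(self, namespace, key):
        return self.namespaces.get(namespace, {}).get(key)

reg = Registry()
reg.set("editor-uuid", False, "editor-uuid", "theme", "dark")  # own namespace: fine
try:
    reg.set("editor-uuid", False, "user:alice", "mouse_speed", 9)
except PermissionError:
    pass   # a regular program can't fight over a user's preferences
```

The point of the namespacing is that two programs can never collide on a key, which removes one whole class of the shared-/etc-state problems.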
I'm not certain. It might also be best if there were a separate namespace for every pairing of a user account and a program, so that when a program stores settings that affect the currently logged-in user, it would have one place to store that, separate from where it stores settings for other users. That might be a thing; I'm not certain, and maybe there's a simpler solution. This whole registry business is probably what I'm least confident about. I know the Windows registry has become a huge mess; I'm not exactly certain why, though, and perhaps if we just avoid the extra complications it introduced, it would turn out fine. It's an open question.

So now consider how much the system I've just described simplifies things. We're getting rid of any notion of a process hierarchy, so there's no real fork-execing. There are no environment variables. God, you don't have file handles which a child inherits from its parent process, so there's no notion of standard in and standard out, no inherited permissions or anything like that, no exit codes, no program arguments, none of that. We're also getting rid of, or I guess you could say radically simplifying, the user and file permission system. There's no notion of groups, there's really no file ownership per se, there's no exec bit, there's no mounting of partitions, none of that stuff. And as I mentioned at the start, we're also ditching basically everything about terminals and shells, so there are no sessions, there's no job control, you know, foreground versus background. There are no crappy languages, only proper programming languages meant for general-purpose work. And that brings me to the question of, well, okay, if we're ditching terminals and shells, what does the replacement look like?
Well, first off, you don't really need a terminal to have an interactive prompt. You could just do something like what the JavaScript console in the browser does, where you're presented with a prompt and you get back a response, just like an interactive session of a dynamic language at the terminal. So whatever language you use for your command prompt, that language itself would provide its own interactive command prompt. These command prompts, however, would have a call-response format. It would not be like a terminal, where there's basically this stream of text input and stream of text output that can be written to and read from in an interleaved fashion by different processes. We don't want that at all. We just want the shell itself to present the user with a command prompt; they enter something, they get back a response, and that response is displayed in a self-contained text area. It doesn't just constantly stream and interleave with whatever comes after. If you follow the terminal model, then you have to introduce all sorts of complications to deal with questions like, well, this process is running in the background, but we don't want it to spit stuff out to the standard output, so we have this notion of sessions and so forth. We want to sidestep all of that. So no, it's just call, response, call, response, like you would see, as I mentioned, in the JavaScript console in the browser. Now, for certain commands that you issue, you'll want to get back a streaming response, like maybe an ongoing updated log whose output you're watching. The idea then is that in the response area you would have a scroll; a response doesn't have to be a static thing.
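The basic call-response shape, minus the streaming case, is simple to sketch. Each command produces one self-contained response cell, like a browser console; evaluating the input as Python is just for illustration, since the real shell language is an open question:

```python
cells = []   # the scroll: a list of (command, response) pairs, never interleaved

def submit(command: str) -> str:
    """Evaluate one command and record it with its self-contained response."""
    try:
        response = repr(eval(command))
    except Exception as exc:
        response = f"error: {exc}"
    cells.append((command, response))
    return response

submit("1 + 1")
submit("sorted([3, 1, 2])")
# The scroll shows commands and responses strictly one after the other:
for cmd, resp in cells:
    print(f"> {cmd}\n{resp}")
```

Because each response is its own cell, nothing a background task prints can ever interleave into the middle of it, which is exactly the terminal behavior we're trying to get rid of.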
A response can be a live, constantly updating view of data being streamed in from the request that was made. So for example, if the command you issue sends an IPC request to some other program, perhaps it expects back as a response a file handle, and then your command prompt, your shell, or whatever you want to call it, takes that file handle, constantly reads from it, and updates the response to that command. But understand, the scroll of your shell itself always shows the commands and the responses one after the other, not interleaved as might happen in a terminal.

I also think it would be really nice if, when typing your commands, you had all the full features of a proper text-editing environment. It's just really obnoxious how, say, the standard cut and paste shortcut keys don't work in terminals, for legacy reasons basically. That's just really, really terrible. In fact, depending on the shell language, but in what I have in mind, you very often would want to format the code you want to execute across multiple lines. You want to use the enter key and not have it actually execute the command. So think shift-enter, or maybe control-enter, as what you press to actually execute the command; if you just hit enter, it should take you down to the next line, as normally happens when you're writing text. And whatever this shell language looks like exactly, one thing you're going to want to do a lot is issue requests to various processes on the system, various programs, so basically you want your shell to provide some standard library functions for making such IPC requests.

Lastly, one thing I would really like to see modernized about the command-line experience is that the output should be much prettier. It should be readable in a way that the output of traditional commands, you know, traditional logging to the terminal, just isn't, because terminals come from this earlier, archaic era with no real
notion of text formatting. So I think the solution is that many responses in the shell should actually display HTML output. Or, you know, I don't like HTML, there are all sorts of issues with it, but something like HTML at least, some sort of response that is formatted text. And having HTML, or something like it, would also be a way to imitate the behavior of a lot of terminal programs these days that are sort of interactive, where you issue one command but then it prompts you for the next step. Well, if the response that comes back in our shell is HTML with some kind of form, then you can click on it, or use keyboard input or whatever, to submit another command, and so you could get a very similar effect. Like, maybe some command wants you to hit yes or no: are you sure, really sure, you want to do this? In the response it could present a form with a yes button and a no button, and you type Y or N, or you click on the button, and it submits the command again, this time saying, yeah, I'm really sure. You could do other interesting things, like present a table or a list where you can click on various elements to get more information, or maybe as a convenient way to execute a command on some element of the list. You could do all sorts of stuff, but of course, past a certain limit, you probably should just start writing an actual proper interactive program, if that's what you really want.

Okay, I hope that gives you some idea of how we could have a command-line environment that's more modernized and also ditches a lot of the legacy complications of the traditional terminals and shells. Another big remaining question you might have, though, is: what would development look like on this system?
Because, as I mentioned, for a program to run on this system it really has to be an installed package. What that implies is that in the course of development we're going to want an easy way of installing a sort of temporary package. So I think what we'll need is to reserve half the address space, the ID space, of our packages for private use, so that when doing development you can choose a temporary ID for your program, just so you can run it on your own local system. Then, of course, when you want to actually publish your program, you use a public ID.

So anyway, you're developing a program. You have all these source files that you compile into a running program, or whatever the process is with your language, and you want to install the result. You're going to need what we might call a manifest file that describes your package and its dependencies, and understand that we consider the target system's API and ABI combination part of the dependencies here, because we want to make sure that when you install a package, it's targeting the correct platform. Then, with that manifest file, and probably just a single directory that contains all the files that need to be part of the package, there's some sort of system call that takes the specified directory and manifest file and installs them as a package. And then you have your program installed as a package to which you can issue IPC requests and thereby kick off its execution.

I suppose you could argue this sounds more roundabout than the conventional way of just compiling your executable and running it, but even that isn't so simple, because you have to set the executable bit before you can run the file. And if what I described is more complicated, well, you can just automate it as part of your build process; probably not a big deal. You may still be wondering, though.
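To make the manifest-and-install step concrete, here is a minimal sketch. All the field names, the "tmp-" ID convention (standing in for the private half of the package ID space), and `install_package` (standing in for the install system call) are invented for illustration.

```python
import json

# Hypothetical manifest; every field name here is invented.
# A "tmp-" ID stands in for the private half of the package ID space
# that is reserved for local development.
manifest = {
    "package_id": "tmp-0001",
    "name": "my-program",
    "target": {"api": "sysapi-1", "abi": "x86_64-v1"},  # platform as a dependency
    "dependencies": ["somelib-2.3"],
}

INSTALLED = {}  # stands in for the system's package table

def install_package(files, manifest):
    """Stand-in for the install system call: takes a directory of files
    plus a manifest and registers the result as a package, refusing
    packages that target the wrong platform."""
    if manifest["target"]["api"] != "sysapi-1":
        raise ValueError("package targets the wrong platform")
    INSTALLED[manifest["package_id"]] = {"files": files,
                                         "manifest": manifest}
    return manifest["package_id"]

pkg_id = install_package({"bin/main": b"(compiled program)"}, manifest)
print(pkg_id)
print(json.dumps(manifest, indent=2))
```

In a build script this whole dance would be one automated step, which is why the extra ceremony over "compile and run" shouldn't be a burden in practice.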
What about the case of dynamic languages, or rather languages with runtimes, where you don't necessarily have a binary executable? How does that work? Conventionally, we invoke, say, the Python interpreter, and as an argument we pass in the name of the script file that is the entry point of our program. What we could do instead is have a small binary which doesn't contain the full runtime but drags the bulk of it in from a shared library. This way you could take your Python script, or your whatever-dynamic-language script, and package it up into something that fits basically the same package mold as natively compiled programs. Whatever the details and complications here, I just think it's really important that, whatever we do, we arrive at some solution where every language isn't inventing its own package management system on top of the system-wide package manager we're supposed to already be using. If our system package manager doesn't accommodate all of the things we want to install on the system, then I think we've failed, because for those special cases we'll be dragging in extra complexity, which is exactly the sort of thing that's so terrible about the systems we already have.

So I hope now you have some notion of what using the system might look like. I'll end here by addressing just a few miscellaneous lingering questions. First off: what about network sockets?
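Before that, the small-launcher idea can be sketched as follows. In the real design this would be a tiny native executable that pulls the interpreter in from a shared library; in this sketch, Python's own `exec` plays the role of the dragged-in runtime, and the names are invented.

```python
# Hypothetical sketch of the small launcher binary for runtime languages.
# The packaged script ships alongside the launcher in the same package
# mold as natively compiled programs.

PACKAGED_SCRIPT = "result = 'hello from a packaged script'"

def launcher(script_source):
    """Stands in for the shim: hand the packaged script to the runtime
    (here, exec) and collect whatever it produces."""
    namespace = {}
    exec(script_source, namespace)  # the dragged-in runtime does the work
    return namespace

print(launcher(PACKAGED_SCRIPT)["result"])
```

The interesting property is that, from the package system's point of view, a Python program and a C program now look identical: one installed package with one entry point.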
The problem with network sockets is that they effectively represent a kind of configuration state. You have all these different programs that might be installed on the same system, and there's potential for conflict because they might want to use the same ports. And we can't just resolve these conflicts automatically as they come up, because external programs, stuff outside the system, expect certain programs on our system to be running at certain ports. I do have some notion of how this problem might be addressed. You could basically have the system automatically assign ports, rather than letting programs choose which ports they want to use; that way the system can ensure no two programs are using the same ports. And then you could have, running on a well-known port, a lookup service that tells other systems, when they want to know, what port a given program is running on. So for example, another system asks: the program with such-and-such public UUID, what port is it using? This would perhaps be a workable solution, though it does introduce a little overhead in having to look up these ports. But probably the bigger flaw is that it presumes every system in the world has some notion of the packages I described, which of course will not be the case.

Another question that comes up is: what about plugins and modular programs? How exactly do they fit into the package system? I think what we don't want is for a package to be installed but then bring in all these plugins or modules, so that the package has totally different configuration state than what it started with. In fact, it's more than that.
It's not just more configuration; it's more code. So this program, which was understood to be one thing when it was installed, is now something quite different, and that violates the whole spirit of this package management system: as much as possible, we want the things we install on our systems to be just what they say they are and not something else. I suspect the solution here is simply that any plugin or module for a program gets installed as an extra package, and then the existing program that depends upon this new plugin or module gets its manifest, the one describing its dependencies, updated to include the new module. I think something that simple may actually be a workable solution.

Another question people might ask is whether what I described is like Plan 9, which wouldn't be a favorable comparison, because Plan 9 was a total failure: it didn't replace Unix at all. There are a few ways in which what I described sounds a little bit like Plan 9; getting away from conventional hierarchical file systems was sort of an idea in Plan 9, for instance. But I don't think there's actually major overlap, because what Plan 9 was really all about was an attempt to take Unix and somehow make it network transparent, to do some sort of rethink of the system that made it somehow, supposedly, more suitable for networking. I think Plan 9 ultimately failed because that vision never really coalesced; it was never really clear that there was any benefit to being network transparent in that way. I think network transparency like that may actually just be a mistake. What we want out of our platforms is for them to take care of the local system, and all networking concerns should really be handled at the application level.
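Speaking of networking: the system-assigned-ports scheme and its lookup service from a moment ago can be sketched like this. The starting port number, the table, and every name here are invented for illustration.

```python
import itertools

# Hypothetical sketch: the system hands out ports itself, so no two
# programs can collide, and a lookup service (which would listen on a
# well-known port) answers "what port is that program using?"

_ports = itertools.count(49152)   # arbitrary starting range for the sketch
PORT_TABLE = {}                   # package UUID -> assigned port

def assign_port(package_uuid):
    """System side: the system, not the program, picks the port.
    Idempotent, so a program always gets its one assigned port back."""
    if package_uuid not in PORT_TABLE:
        PORT_TABLE[package_uuid] = next(_ports)
    return PORT_TABLE[package_uuid]

def lookup(package_uuid):
    """Lookup-service side: answer queries from other systems."""
    return PORT_TABLE.get(package_uuid)

web = assign_port("uuid-web-server")
mail = assign_port("uuid-mail-daemon")
print(web, mail, lookup("uuid-web-server"))
```

The extra round trip to `lookup` is exactly the overhead mentioned above, and the scheme only works between systems that both speak this protocol, which is its bigger weakness.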
In fact, in a way, Unix itself suffers from thinking too much about the networked environment, because it was originally developed for minicomputers with a bunch of terminals attached, so already it was designed to accommodate a sort of networking. That's how it brought in this complicated notion of user permissions and process-hierarchy permissions and all that stuff, which I want to get rid of. So I understand people making a comparison with Plan 9, but I think my system actually goes in the other direction, back towards a network-agnostic platform.

The last question I've been thinking about is: how do we implement this system? Ideally, of course, as I described at the start, you would take, say, the Linux kernel and modify it to present a different set of userland abstractions. That, of course, would involve a lot of work and require a lot of expertise that I don't have; I'm not a kernel hacker. I'm sure it would be interesting work, and I'd like to see it done eventually, but I think the shortcut to getting such a system up and running, at least as a proof of concept, is to do what's called application virtualization. This is basically what so-called containers, like Docker, have done: it's not hardware virtualization, we're not embedding an operating system within another operating system; it's just presenting an abstracted environment for programs to run in. So the thought is that I could define a set of APIs such that programs written to them could run within a sort of abstracted environment inside Linux, but with all the appearances of the system I described. For example, instead of using normal system calls for dealing with files, you would use the ones provided by my API, and then you would be working with files in the manner my system prescribes. Now, I imagine such an implementation would not be ideal in certain ways. Take the whole request-response IPC mechanism: I
don't know exactly how one might implement that within Linux, in userland terms. It could probably be done, though with some overhead relative to what you would get if it were properly implemented at the kernel level. So this implementation wouldn't be ideal in terms of performance, though it probably wouldn't be all that bad either. And I imagine there are parts of the security model that couldn't be enforced properly, so it may not provide the proper security guarantees that a real implementation of the system would. Still, despite all these drawbacks, I think that would actually be the way to proceed, certainly as a means of getting a proof of concept of the whole system out there, but also to work out kinks around particular issues and use cases which I'm certain I have not properly accounted for. So in fact, if you have any thoughts about how the system I described won't work in some key way, or has some major issue, and perhaps you can propose a solution, I'd really like to hear that sort of thing. And I hope that, even if you didn't agree with everything I said, you're at least starting to think about how our platforms could be better, how they could be greatly simplified and cause us less pain.
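P.S. To leave the application-virtualization idea slightly more concrete: the sketch below shows the shape of a shim file API that a program would call instead of ordinary file system calls. The class and method names are invented, and files here just live in a dict; a real shim would translate each operation into plain Linux userland calls underneath.

```python
# Hypothetical sketch of the virtualization shim's file API. Programs
# written against it never issue ordinary file system calls; note that
# files are addressed by ID, not by a path in a hierarchy.

class VirtualFiles:
    def __init__(self):
        self._store = {}

    def write(self, file_id, data):
        # A real shim would map this onto Linux syscalls under the hood.
        self._store[file_id] = data

    def read(self, file_id):
        return self._store[file_id]

env = VirtualFiles()
env.write("notes", b"addressed by ID, not by path")
print(env.read("notes"))
```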