I'm actually from IBM, I will admit that. If you have any questions as we go along, please ask them; if you're wondering what I'm saying, somebody else is likely to be wondering as well. If you want to discuss something, though, and you might feel argumentative at some point in this presentation, I suggest we leave that to the end, and we can continue over lunch.

So I call this "When UNIX meets the mainframe mindset". I should also say that CMS Pipelines is an IBM product, and I wrote it, I suppose. I'll start out by outlining what I think the mainframe mindset is. Then I'll speak of something called CMS, which stands for Conversational Monitor System, Cambridge Monitor System, Console Monitor System, or something like that; it's an IBM product. Then I'll do a couple of slides to show that I do understand what UNIX pipes are about. Then I'll speak about how CMS Pipelines evolved from essentially the basic UNIX model. The evolution came about because I wanted to support computing paradigms that did not fit the UNIX model, and I'll walk through how it evolved; CMS Pipelines is now entirely different from the UNIX pipeline. And finally, I'll do some summing up.

Now, first of all, the mainframe mindset is an old mindset, right? Mainframes are old stuff. They're still around and they're kicking ass. But if you want to describe it in one sentence, it would be multi-programmed and multi-user nowadays. And if it's multi-programmed and multi-user, then there are things that you do in UNIX that you don't want to do. Like, you don't want to take an interrupt every 10 milliseconds if you're idle, right? Because that bothers some other user. There is a long tradition of efficiency, and the reason for that was that originally these machines weren't very fast. You'd say: but they're fast now. Yes, they're fast, but there are still other users. So inefficiency is not tolerated, and it's punished, because you get charged for what you do. There is a very real motivator in the business world to be efficient here. Another thing: the tradition comes from punched cards. Probably few of you have ever seen a real punched card, but that's the reason for the 80-byte record: that was the punched card. And if I may put in a plug, the mainframes are reliable, robust and usually well documented, unlike some things I've seen in the UNIX world.

Now, CMS, which is my time-sharing preference if you haven't guessed it, was written in the mid to late 60s for the then recently announced System/360. It was part of a research project, so it was more like: let's build something to support ourselves. It was, and still is, a single-user system: one user has a dedicated machine in which he runs CMS. He boots CMS, if you like. CMS has its ancestry in what was known as the Compatible Time-Sharing System, written at MIT in the early 60s. I'm sure none of you have ever heard of that before, but that's the mother of all time-sharing systems. Now, even though you had your own machine, technology evolved and it became desirable to multi-program these users, and we did that by supplying each user with a virtual machine, so he had the illusion of a standalone system. CMS is now part of the IBM product z/VM.

Now, here is a typical CMS installation. And I kid you not, CMS was designed to run on this kit; it was built on this kit. There's a CPU to the very left for you. That particular model there had all of 64 kilobytes, and that was a big one.
Then there's a console printer-keyboard, so that's your teletype of the day, or IBM's equivalent of a teletype. It was a Selectric printing mechanism with a golf-ball print head, so there was no carriage that moved out and hit you or anything. This was a standard feature of the 360 Model 30, which is what that is. We have the two disks here: the CMS system disk on the left and the user disk, the 191, on the right. Obviously the system would support only one user at a time. The user would arrive, plunk in his disk pack and IPL the system, if it wasn't already IPLed, and off you went, banging on the printer-keyboard. And there were editors, compilers — whatever you associate with interactive time-sharing use. Tapes as well.

And this is the unit record equipment, as it was called then. The lady is feeding cards into the card reader. Typically, before you got CMS, all programming on IBM kit was done by punching cards: you punched your program into the cards, had them compiled, got your output usually as a card deck, took it away with you and ran it whenever you wanted. So that's the card reader. Sorry? — And don't drop the deck? — Ah, but they had serial numbers. So if you dropped them, you took them to a sorter and sorted them. And if they didn't have serial numbers, you were in deep shit, yes. And honestly, the answer, sir, is: don't drop your cards. That's a card punch on the other side of that machine. And this is a printer. Now, this reader could read 1,000 cards a minute and the printer could print 600 lines a minute; there were faster printers, even then. But this is your basic CMS configuration.

Now, all of this can be virtualized, so that you have only one bigger and more expensive box, but you then have a number of concurrent users. And each user has his own mainframe: he can IPL CMS if he wants to do interactive work, he can IPL OS/360 if he wants to do that, or he can IPL CP, as the control program was called, if he wants to test the control program itself. So the control program ran under itself. No night shifts; everything done in prime time.

Now, the bad news is that in CMS, and in the tradition of all of these systems, you had dedicated software for each kind of I/O device, and you did not have a unified standard-in/standard-out model. That means that if you wanted to access devices natively in CMS, you would write assembler programs, and you would use different macros for different devices, let me put it that way. This is how it worked until I did CMS Pipelines; then we had a standard I/O interface.

As for shell languages, CMS works differently from UNIX. In UNIX you have the initial program, which is usually the shell; it reads your input, interprets it, and you can even program in the UNIX shell. CMS is sort of two-level: you have a command prompt, if you like, and you can issue commands, and these commands can be what are called EXECs, or REXX programs, which are then interpreted. So you would typically edit a file — your command script — and then execute it. We could call it a shell, but it's a different kind of shell. The point is that those kinds of languages are interpreted, and not really suitable for data manipulation. But they're good at sorting out logic, for instance: I have this kind of record, what should I do with it? Actually doing it is possible, but it's not cheap and it's not easy.

But the combination of procedural code and pipelines — which is essentially functional programming, because you're not concerned with the actual workings of the program that reads the record or anything; in a pipeline you select the function you want performed, a transformation if you like, and that's the way the UNIX pipe works too — the combination of the two means that you can write what logic you need to write in a language that is easily programmed and debugged, and when you then want to churn some data, you hand that over to the pipeline, and the pipeline does that very efficiently.
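To make that split concrete, here is a hedged sketch — not from the talk — of a small REXX exec that keeps the decision logic in interpreted code and hands the data work to a PIPE command. The file names and the string being looked for are invented; the stages used (<, locate, count, var) and the way a pipeline can set a REXX variable in the calling exec are the usual CMS Pipelines facilities, to the best of my knowledge.

  /* CHKLOG EXEC -- logic in REXX, data work in a pipeline (sketch) */
  arg fn ft fm .                      /* which file to look at                   */
  if fm = '' then fm = 'A'
  /* Let the pipeline do the heavy lifting: read the file, keep the records      */
  /* that contain ERROR, count them, and put the count into a REXX variable.     */
  'PIPE <' fn ft fm '| locate /ERROR/ | count lines | var errors'
  if rc <> 0 then exit rc             /* aggregated return code from the pipe    */
  /* Back in interpreted code: decide what to do with the result.                */
  if errors > 0 then say errors 'error records in' fn ft fm
  else say 'No errors in' fn ft fm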
Now, the UNIX pipe, the way I see it, is essentially a series of programs, and you can pass data from the output of one to the input of the next — from standard out to standard in. It is linear, so you have essentially standard in on the very left and standard out on the very right here. The point is that pipes are built on top of these three system calls, so the shell itself has not got a lot of programming to do to make a pipe. Originally it would simply fire off the programs as it scanned them along the command and forget all about them, and when it got to the last program in the pipeline it would exec that program rather than fork it. That's the reason why the return code from the pipe is the return code from the last command. The dispatching of the individual programs in the pipeline is unpredictable — I mean, you can observe what it is, but I don't think you can predict it in general; it's reproducible but not predictable, let me put it that way. And the return value, as I said, comes from the last program; if any of the other programs fail, you don't get to know.

Now, this is where the mainframe mindset says: no way. No way do we work that way. We don't do that here. Some of you may know that bash, version three, introduced set -o pipefail. If you haven't done so already, make sure all your shell scripts start out that way, or set it in your profile or your rc file or something: you definitely want to know if something goes wrong. But it's interesting that that's the only change to UNIX pipes since the second edition of UNIX, which hit the streets in 1972 or thereabouts.

Now, CMS Pipelines — I'll just say CMS Pipelines here — is a set of interconnected programs, but not necessarily, I should say, with a linear topology. In general you would have a multi-streamed program; that's very common. The dispatching is not preemptive: it's coroutines, and it is predictable, and you can reason about it. You'll see later why this is important. Another difference from UNIX is that the movement of a record from one stage to the next is unbuffered. Whereas in the UNIX pipe you go through the buffer cache, and essentially when you fill up whatever is allocated to the pipe, you switch dispatching — that is not the way it works here. Also, it's not integrated with the CMS shell; it is simply a CMS command, and PIPE seemed the obvious name to choose. It could have had any name, obviously; it's just a command.

Now, I mentioned that the basic interface to the hardware was rather device dependent. So instead of the UNIX I/O model, I adopted a model where I have device driver programs for what you would otherwise do with I/O redirection. So I run a program that figures out how to twiddle a disk file rather than doing I/O redirection. But it looks much the same, except that there is a vertical bar in front, because the greater-than sign is not redirection: it is in fact a program that happens to be called greater-than.
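As a hedged illustration of that last point (the file names here are invented), where a UNIX shell would run "grep ERROR <input >output", the CMS Pipelines way is a pipeline in which reading and writing the files is itself done by stages:

  PIPE < INPUT DATA A | locate /ERROR/ | > OUTPUT DATA A

Here < and > are ordinary programs: < reads the CMS file INPUT DATA A and writes its records into the pipeline, locate keeps the records containing ERROR, and > receives the surviving records and writes them to OUTPUT DATA A. Nothing is redirected by the command interpreter; everything between the vertical bars, including the "redirection", is a program being dispatched as part of the pipeline.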
There are many host command environments in CP and CMS where you can issue a command and get a response, and the pipeline idiom is that you read your command from the input, issue it, and produce the response on your output. So that's pretty straightforward. Filters — also in the UNIX terminology — are programs that somehow massage the data passing through them, but don't interface to anything else. The tradition is that they should be well-defined, small, and efficient, and I certainly subscribe to that tradition as well: the mainframe mindset does not preclude keeping things simple, let me stress that. Selection stages are programs that select records based on some criterion — essentially what in UNIX would be called grep; grep will select some records and discard others. And then there's a slew of other things that don't fall into this model: there are gateways, space warps, TCP/IP is integrated into all this. You can do wonderful things.

Now, the basic model: what is a program in CMS Pipelines? It is a program, and the program has a number of input streams and a number of output streams. It's entirely up to the program how it reads and writes those streams, but it's up to whoever writes a pipeline to decide how they're interconnected. There is a syntax to express that as a linear command, but I won't show it to you here. The dispatcher essentially starts up the program; when the program writes output, the dispatcher regains control and dispatches something else to have this record consumed, and so on. There is much more dispatcher activity than there would be in a similar UNIX pipe. On the other hand, the path length — the full round trip from the point where you write a record until it comes out on the other side of the pipe — is about 150 instructions. So we're pretty efficient here. So the dispatcher starts the stage; it has to call it at its entry point, clearly, so that's what we do.

Now, what can a stage do? It can peek at a record. Peek means that it gets to see the record wherever it is, but the record stays there. This is what OS people would call locate-mode I/O processing; I call it peek — I think that's easier to understand. So you peek at a record; now you can see it. Then you can decide what to do with it, how to transform it, where to send it, and then you produce an output record: you write. And after you have written the derivative work, if you like, you then consume the input record. At that point the input record is released and you're not supposed to look at it anymore, because it'll be gone. Programs can do whatever they like — there's no way one can enforce this, of course — but you won't get a lot of sympathy if you break the rules of this game.

To support the multistream model, you need a way to select which stream to read or write. So peeking, writing and consuming are relative to the currently selected stream; obviously peek works on an input stream and write works on an output stream, so that's implied. You can also wait for a record to arrive on any of your input streams, which allows you to respond to whatever arrives first. Your program could have the convention that a record on the primary input stream triggers this action, a record on the secondary input stream triggers another action, or whatnot; it's entirely up to the program.
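Here is a hedged sketch of that protocol as a user-written filter in REXX — not code from the talk, just an illustration. PEEKTO, OUTPUT and READTO are the usual CMS Pipelines stage commands, and uppercasing stands in for an arbitrary transformation. The peek-write-consume order is the point: the derived record is written before the input record is consumed.

  /* UPCASE REXX -- a filter stage: peek, write the derived record, then consume  */
  signal on error                 /* a nonzero RC (12 = end of file) ends the loop */
  do forever
     'PEEKTO line'                /* look at the next input record; it stays put   */
     'OUTPUT' translate(line)     /* write the transformed record downstream       */
     'READTO line'                /* only now consume the input record             */
  end
  error: exit rc*(rc<>12)         /* end of file is normal; anything else is not   */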
Finally, you can sever a stream. When you sever a stream, it means that you do not wish either to produce more output on it or to consume more input from it. Once you've severed a stream, end-of-file will then, sort of, travel out that way; how far it propagates depends on the programs, in general. Now, if you follow the protocol — peek, produce and consume, in a loop — then your program does not, as it's called, delay the record. This is an important concept, and it will become clear very soon.

So that was the initial implementation: much like the UNIX one, a straight single pipeline, standard I/O, device drivers — but they're just plain old programs. I did do the aggregated return values: the return value from the PIPE command is, sort of, the worst return value from any of the stages, rather than some random number. And it all fitted within the 8K CMS transient area initially, because that was the only place it could run. It's somewhat larger now.

Right. So the first extension was the master file update: you have a master file, you apply transactions to it, and you get a new master. And I should probably have drawn tapes rather than disks, because that was how it was done: your old master on one tape, your transactions on cards, your audit trail on the printer listing, and the new master on another tape, right? Now, we can do that in UNIX, but I wanted to generalize it, because what I was really after was something called multi-level update. Here the CMS tradition is the opposite of the UNIX tradition. In UNIX, you keep the most current version of the file as the file — of a program, for instance — and you then have diffs that can take you back to the various earlier incarnations of the program. CMS starts with the base version of the program, and you then apply updates to it: you apply all the updates each time you compile the program. This has a lot of advantages — seen from the CMS mindset, I mean; I'm not knocking UNIX on this one.

What I wanted to do was, first of all, to apply a multi-level update without using intermediate files. So this update, this update, and this update are performed in parallel. It's really driven by this file here, which is in the sequence of whatever records need to be updated, and the update is driven by the sequence numbers we had before, right? So this program will read the file, apply the update log against it, and then produce the updated version, just like you would do in UNIX. This one will then do the same thing, and this one will do the same thing. So, essentially, you have one record from each of those files in storage at the same time; you don't have to hold the entire file. However, you want the update log for the first update to come out in front of the one for the second and the third; you don't want those interleaved — you'd go crazy if you read that.

So here's one of my gadgets. It's called fanin. Those of you who have done electronic engineering probably know what fan-in is there, and it seems a good metaphor here. Fanin will copy its primary input to its output, then its secondary input to its output, and then its tertiary input to its output. And that means that the later streams have to be buffered. So you buffer the update logs, in the hope that they are smaller than the master file, and it becomes very efficient.
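A hedged sketch of that ordering behaviour, with invented file names: fanin's primary input comes from the first pipeline segment, and the segments after the ? end character feed its secondary and tertiary inputs by referring to the label f.

  PIPE (end ?) < UPDATE LOG1 A | f: fanin | > ALL LOGS A ? < UPDATE LOG2 A | f: ? < UPDATE LOG3 A | f:

Fanin passes everything from UPDATE LOG1 first, then everything from UPDATE LOG2, then UPDATE LOG3, regardless of when the records become available. When the later streams are produced by running stages rather than read from files — as in the update scenario above — they have to be buffered so that their producers are not held up while fanin is still copying the primary stream.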
Now, that was the motivation for me doing this to pipelines. And in the development of this it turned out — oh my goodness — that I had my focus on one thing initially, and I did something to the pipeline, like introducing multi-stream pipes, and then we discovered that it was really for something else that that particular feature was essential. I mean, multi-level update is nice, but multi-stream pipes are essential.

Now, what happens here? The locate stage selects records on some criterion. The ones that it selects go out on the primary stream, and they go to this guy, which is a faninany — fan in from any stream — and it just reads records as they arrive and copies them to its output. Now, the records that are discarded by locate are not thrown away: by convention they are passed on to the secondary output stream. And then we can do this to whichever records we want. So far, you could easily write a shell in UNIX that would support the same paradigm. Where you can't compete is here: we select some records, we do something only to those records, and we then move them back into their original sequence in the file. Ooh, how did we do that? This is where not delaying the record comes in.

Locate peeks at the record — it's in a buffer somewhere out here — and then makes up its mind and writes it. If the record is selected, it goes this way and this guy gets it; if locate didn't select it, it goes the other way to whoever is out here. At the point when this guy writes his output record, nothing else within this pipeline segment can move: they're all blocked. This guy is blocked because this guy has not consumed the input record, and this guy is blocked because whoever is out here has not consumed the input record yet. So that's why not delaying the record is an important concept, and something that pipeline programmers need to think very carefully about. Suppose you put a sort in here instead: sort by necessity has to read all its input before it can produce output, and you'd get a different order of records — possibly not the one you want, possibly the one you want, who knows.

The point is that the selection stages soon got a secondary output stream for the records they did not select. With this paradigm we can also chop the record up into, say, three segments, pass only one of the segments through a particular stage, and then join them back together again as one record. So this paradigm also allows me to apply a transformation to a subset of each record, and UNIX won't be able to do that either. The term for a cascade of selection stages is a decoding network, because the electronic term for this is a decoder.
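For what it's worth, here is a hedged sketch of that select-transform-merge idiom as a single multi-stream PIPE command. The file names and the selection string are invented; ? is declared as the end character, and the labels a: and f: name the second streams of locate and faninany.

  PIPE (end ?) < INPUT DATA A | a: locate /ERROR/ | xlate upper | f: faninany | > OUTPUT DATA A ? a: | f:

Records containing ERROR take the long way round through xlate upper; the others go straight from locate's secondary output to faninany's secondary input. Because none of these stages delays the record, faninany receives the records in their original order, so the output file is the input file with only the selected records transformed.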
Now, having done the multi-stream pipelines, it started to become embarrassing that you would write a pipeline, and then one of the stages would issue an error message after you had already erased a file, or done something else you didn't want to do. So I then implemented the syntax check. If you look at a program in the pipeline as an object, it is an object that has two methods: a syntax check method and a main run method. The rule is that the syntax check method must not do anything irreversible. So it's sort of a voting mechanism — it's essentially a two-phase commit, right? Everybody does the syntax check and says "yeah!", and then we can go. And if anyone fails, we stop right there. The point is that if we have the syntax check, and — what's that? that's a greater-than, right? — if greater-than doesn't start because something else had a syntax error, then it can't clobber a file, because it never gets to run. It never gets the chance.

To implement this object-like nature, I need what is called a program descriptor. That has information about the program: what it supports, for instance. Must it be first in a pipeline? Must it not be first? Or can it be anywhere? Putting a stage that reads a file anywhere but first in a pipeline doesn't seem to be a useful thing to do. You also see up front how many streams it supports; if it were connected to more than the supported number of streams, that would be rejected out of hand at the syntax check. You also see whether it supports arguments, must have them, or must not have them. And then there is the method to perform the syntax check, and the main function.

So that was the syntax check. Now, something that UNIX also can do, actually, is essentially a subroutine pipeline: that is, from one program that is running in a pipeline, you run another pipeline. So here comes the interesting part. End-of-file can travel into the subroutine pipeline: if there's no more data, then these guys will see no more data. That's understandable; I think everybody does it that way. But end-of-file cannot travel out of a subroutine pipeline. If this guy terminates for whatever reason, then end-of-file will of course travel this way, and it will also travel this way, so this one will see end-of-file on its output — it's end-of-file just the same, whether it's an input or an output. This guy will then also see end-of-file, and then the whole subroutine pipeline will have terminated. At the point when those guys terminate, the original connections are restored. So if there is more input, this program can then peek at it and decide what to do.

The subroutine pipeline you issue can be manufactured for the particular data at hand. You peek at a record: this is case A, B, C or D. For case A I run this subroutine pipeline, for case B I run that one, for case C I run this one. UNIX can't quite do that, because a read in UNIX is a consuming read. The difference here is that the record on which the stage decides to issue this subroutine pipeline is still available, and the subroutine pipeline will also see that record. If the subroutine pipeline shouldn't see the record, you can throw it away first; that's acceptable.

So that was the subroutine pipeline run against the record at hand. This next one exploits the ability of the stages in the pipeline to collaborate. When a record arrives that begins with an asterisk, tolabel terminates. That causes these two guys to terminate, and then the entire subroutine pipeline is terminated, and the streams are reconnected. And we have the record beginning with the asterisk still here, not consumed — that's very important. So this has essentially performed some operation on the input stream up to the occurrence of the next record that starts with a particular string.
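Here is a hedged sketch of that idiom as a REXX stage — not code from the talk, and it assumes a convention where group headers begin with an asterisk and lets xlate upper stand in for whatever the group processing really is. PEEKTO, READTO, OUTPUT, CALLPIPE and tolabel are the usual CMS Pipelines names; the *: connectors tie the subroutine pipeline to this stage's own input and output streams.

  /* GROUPS REXX -- run a subroutine pipeline for each group of records (sketch)  */
  signal on error
  do forever
     'PEEKTO header'                 /* the record that starts the next group     */
     'OUTPUT' header                 /* pass the header on unchanged              */
     'READTO header'                 /* consume it                                */
     /* Process records up to, but not including, the next record that begins    */
     /* with an asterisk; tolabel terminates on that record and leaves it in the */
     /* pipe, so the next PEEKTO above will see it.                               */
     'CALLPIPE *: | tolabel *| xlate upper | *:'
  end
  error: exit rc*(rc<>12)            /* end-of-file is the normal way out         */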
So if you have groups of records that need to be processed, and they can be processed efficiently by a subroutine pipeline, then you can do them in a loop, so that your loop is not over every record but over every group of records. The idea is that the subroutine pipeline runs built-in stages: that's real assembler code, and there's no interpreted code involved in the subroutine pipeline. That's where the efficiency comes in.

There's something called dynamic reconfiguration, which is a variation on this. That was made to support nested includes. So we have two stages, A and B, and B decides that it needs to divert from its current input stream to another input stream. It then says: I want to read the included file and have it presented on my primary input as if it had been there all along. What happens is that, just as with the subroutine pipeline, this connection is temporarily broken and the new pipeline is inserted. Now, the difference is that when this added pipeline comes to end-of-file, the end-of-file is presented to this program. So it sees end-of-file, and it then says: get rid of my current input stream — and that restores the original connection. The reason for this is that you might want to do something at the end of each sub-file you've included — maybe you want to add something there. But you certainly need to see end-of-file, as well as beginning-of-file, for each of these sub-files you include.
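A hedged sketch of how a stage might do that, assuming the ADDPIPE and SEVER stage commands and the *.input: connector behave as just described (the file name is invented, and the end-of-file action is only a placeholder):

  /* Inside a REXX stage: splice an included file ahead of our own input (sketch)  */
  'ADDPIPE < INCLUDED SCRIPT A | *.input:'   /* the new pipeline now feeds our      */
                                             /* primary input stream                */
  do forever
     'PEEKTO line'                           /* records from INCLUDED SCRIPT A ...  */
     if rc <> 0 then leave                   /* ... until the added pipeline ends   */
     'OUTPUT' line
     'READTO line'
  end
  say 'end of the included file reached'     /* the place for per-file wrap-up      */
  'SEVER INPUT'                              /* drop the added stream; the original */
                                             /* connection is restored              */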
Now, setting up a pipeline. The CMS Pipelines module has not only a dispatcher but also a scanner and a parser to parse the command, and this is done in several passes over the input command. First of all, you determine the overall structure of it, so that you can allocate whatever descriptors you need. Then you resolve the programs — the verbs have to be resolved somehow. First it looks for built-in programs — built-in commands, if you like. Then it looks in attached filter packages: you can package up your own programs and have them work like built-in filters, but they could be supplied by somebody else; users can add very fast filters through this interface. If it isn't resolved from those, it will look for a REXX stage. The way we sort out what a resolved program actually is, is to look at the resolved entry point: if that is an executable instruction, then it's a program with the very fast and simple interface; if it's anything else, then there is essentially a magic number that says "this is a program descriptor", and then you know how to deal with that.

So the starting of the programs goes through several decision points. First: is the structure OK? If something is broken in the input command, we give up there. Then, when we scan: can we resolve all the stages, do they have the correct attributes, and so on and so on. Then we do the syntax check. And only at that point do we start the programs. So it's not until we've been through all the other checks that programs are allowed to do anything that could do damage, or allocate resources, or whatever.

Now, even when the programs are running at this point, there is still — and this is sort of the latest thing — the commit levels. The problem I'm trying to solve here is that some programs need to allocate resources. They can't do that in the syntax check, because in the syntax check they would have no chance to deallocate those resources if some other stage's syntax check fails. So I wanted to be able to allocate resources in a controlled fashion, so that I could prevent the main pipeline from running for a larger and larger class of errors. The commit level itself is just a 32-bit integer. A stage is at some particular commit level, and the program descriptor shows which commit level the stage starts at. Most programs have a default of zero; others start on commit level minus one. There are some rules.

Now, when the pipeline starts running, it starts at the lowest commit level of all the stages in the pipeline, and only the stages at that level are dispatched. They can then do whatever they want at that point. When a stage has done whatever it needs to do at that particular commit level, it commits to a higher level. Typically it would go from where it started straight to zero, but it could take it in many smaller steps. By the time the dispatcher dispatches this program again, everything else will also have moved up to the same commit level. The return value from the commit function is the aggregate return code at that point. So if anything else has failed, the program gets to know, and it can then deallocate its resources and terminate itself. Once the dispatcher has seen a program return with a non-zero return code, it will not start further programs; it will simply run whatever is already there, supply the return value on the commit, and expect the programs to unwind and terminate. In general, data will not be allowed to flow. And this is exactly the way I want it; it's not something that just fell out by coincidence — it's designed this way.

Stages can allocate resources. They can have a protocol that says: I allocate resource A on commit level minus 10, resource B on commit level minus nine, and so on. And this is the way to avoid deadlocks in resource allocation. And the stages can determine that the pipeline will be abandoned before the first irreversible action. As I said, by convention — but it's entirely convention — data moves on commit level zero. If you have two programs next to each other on commit level minus 10, they can pass a record between them if they like, but they had better be sure that they both are at that commit level. There are some interactions between programs and all that, which I won't go into here. And REXX programs, being interpreted, have an interface layer; this interface knows whether the program has done a commit or not, and it can perform an implicit commit when needed, so that REXX programmers don't need to worry about this unless they want to exploit it.

As an example, a safe file-replacement method. The greater-than program checks, first of all, the file name syntax in its syntax check routine. Then it starts on commit level zero and produces a temporary file. Then it commits to one, and by convention all data has stopped moving when you get to commit level one. It can now inspect what is essentially the return code from the PIPE command, and it can verify that the data transfer in fact completed correctly — or at least that nothing failed in the transfer phase. If it doesn't get return code zero at that point, it can remove the temporary file and leave the existing file alone. And finally, if everything went okay, it can replace the file by renaming the temporary file. So this avoids the embarrassment of trashing the output file, because you have something that runs after the data transfer, not just after the syntax check.
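As a hedged sketch only — this is not the real greater-than stage, and it assumes that a REXX stage can issue the COMMIT pipeline command and find the aggregated return code in RC, as described above for the commit function — the shape of such a stage might be:

  /* NEWFILE REXX -- shape of the safe-replacement protocol (sketch, not the real >) */
  signal on error
  do forever                          /* commit level 0: data is flowing              */
     'PEEKTO line'
     /* ... append the record to a temporary file here ...                            */
     'READTO line'                    /* consume the record                           */
  end
  error:
  if rc <> 12 then exit rc            /* a real error, not end-of-file                */
  'COMMIT 1'                          /* by convention, all data movement has stopped */
  if rc = 0 then
     say 'rename the temporary file over the original'        /* placeholder action   */
  else
     say 'delete the temporary file; leave the original alone' /* placeholder action   */
  exit 0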
So, in retrospect, I think it took about 15 years to arrive at all of this. I have a picture here of the entire development team, but we haven't got the technology to show it, so I'll pass it around. It lets us support many data-processing patterns that won't run in a UNIX pipe. (Hang on — yes, come along, good, because I'm almost done.) Generalized enhancements turned out to have some unexpected side effects: from multi-stream pipes you got decoding networks — I think that got everybody's attention; from nested includes, dynamic reconfiguration and the other kinds of subroutine pipelines; from the syntax check, the rest of the program descriptor. That's it. Any questions, comments, discussion, shouting, protests? Okay.

Did I find I had to put a huge amount of debugging support into this — how do I debug a pipeline? First of all, I think the question is really: how do I help the user debug his pipeline, right? I issue meaningful and identified messages, for instance — I issue a message and identify the exact point in the command string where the parse fails. There are options to get additional information, and you can insert stages in the pipeline that display the data going through, so that you can find out why a record didn't go where you wanted it to. Nobody seems to know about the amount of diagnostics you can actually get, I'm afraid.

Yes — the syntax checking is really on two levels. There is an overall gross syntax check: if there's an argument string where there shouldn't be one, that is handled by the scanner. Then there's the syntax method of the program itself, and if that returns non-zero, the syntax check has failed for that stage. It's then up to the program itself to issue meaningful messages, in one way or another. Anything related to the contents of the parameters of a stage can only be checked by that stage.

It does — the question was: does the commit level replace the syntax check? And you're quite right, it does. If I had done the commit levels first, I wouldn't have done the syntax check; it would have been up to the program to perform the syntax check, but it would then do it at a low commit level. If you like, you can consider the syntax check routine to be commit level minus infinity; that's essentially the way to think of it.

Yes — the question is: I speak of efficiency, and yet the library switches tasks for each record; is that correct? Well, it has to do that, because I want to maintain record semantics. So it has to be that way, and I just have to make it efficient, right? The coroutine kind of work evolved in the 60s, before you had generalized multi-programming. You have multi-programming in the sense that you have separate units of execution, each with its own state, but they switch tasks by giving up control voluntarily. — That's a preemptive dispatcher, not a coroutine dispatcher, yes. And the other thing is that we don't switch more than we have to. Also, this is all within your virtual machine, so it all runs in one address space. I wouldn't be able to do it in 150 instructions if I had to switch address spaces.
Also, since there are no real task switches at the multitasking level, the chance of cache pollution is small, and that certainly helps as well. But let me give you an example from real life. While we were doing this pipeline thing back in the early 90s, there was a guy at an account who had written a PL/I program to look for things in a file — essentially locate. And he was very proud of his program, because it ran very fast: around four times faster than the equivalent pipeline. Now, there was an algorithmic issue; once the algorithmic issue was sorted out, he could take his handcrafted PL/I program and try to keep up with the pipeline. So that's the algorithmic side of it. And the alternative is to handcraft a program every time, right? Nobody wants to do that. Nobody ever wants to do that — except those who earn their salary by doing it, and they are paid by the hour. So that's the situation. Yes? No questions?

So, when do we get it in UNIX — when is the UNIX version coming? Well, I've been asked this question before, and I've dabbled a bit in UNIX in the meanwhile. I don't know how many of you are familiar with Linux/390, but there is now a UNIX implementation that runs in a virtual machine side by side with CMS. That means that it runs on the same hardware, and therefore the programs could conceivably be carried over. There is one challenge here, and that is that the Linux system runs in ASCII — you are familiar with ASCII, I'm sure. Now, if I tell you that CMS runs EBCDIC, you probably know what I mean. EBCDIC is quite like ASCII; it's only that the blank is X'40' instead of X'20', right? So it's twice as good, isn't it? But that is a challenge.

The other thing I didn't tell you is that the whole thing is written so that stacks are pre-allocated: if you have a single address space, you can't have unbounded stacks. The way these programs are written, I compute the required stack space and allocate it initially, so the stack size required by any of the built-in programs can be determined at compile time.

To make it happen on Linux — that could happen, and I have seriously looked at it in two ways. One is simply taking the code, somehow fixing the ASCII issue, which is non-trivial, and then writing something around the CMS Pipelines module. CMS Pipelines is built like a modern system, right? It has interfaces, and most of it is written completely independently of the system underneath; that's why it runs in CMS as well as elsewhere, because it relies on the object code being the same format and executable in both places. So to make the pipeline run on Linux for S/390, I'd need to produce an ELF-format object, sort out the ASCII issue, and then put something around it to map the system interfaces between the two. I've also looked at taking the 370 machine code and converting it to RS/6000 machine code, because that would be another way of pulling off the same stunt, and I'm interested in the RS/6000 architecture on top of that. A number of people have looked seriously at converting 370 machine code to RS/6000, I think, but nothing much has come of it. Now, the good thing about Linux for S/390 and the RS/6000 is that they are big-endian machines — that is, the opposite of the Intel ones, right?
So the lowest-addressed byte of an integer is the most significant one.

How many lines of code do you think there are? Depends — depends on who asks me. When I wanted it announced as a product, I wanted the number to be small enough to be announced, so I counted lines of code in a very strange fashion and came up with a very small number. Not an exact number, no; but, I don't know — 200,000?

And now you want to ask me what kind of language it's written in. Well, to say System/390 — or System/370 — assembler is a misnomer. The assembler is what's known as a macro assembler, and the code I write is more like Pascal: it has nested procedures, it can allocate variables, and I can write generalized instructions that react to the format of the data, and things like that. So there's very little actual machine code in it; what there is, is compiled by the assembler, if you like.

Is that it?