Hello and welcome to this talk on the history of the mainframe, where we will learn something about, I'd say, old and current hardware. We have the wonderful — I forgot the names, sorry — Claudio and Nico. Thank you, I'll just hand over to them. Enjoy the talk!

Yeah, hi, and welcome to our talk. We're going to lead you a little bit through the history of the mainframe. We're going to start at the S/360 and go right up to the current time and Linux. But before we do that, let's first say who we are. My name is Nico, I am a developer for KVM on s390, and I'm also a co-maintainer for kvm-unit-tests on s390. — Hi, my name is Claudio, I am one of the co-maintainers for KVM on s390 and one of the co-maintainers for kvm-unit-tests on s390.

So first of all, let's answer the question: what is a mainframe, exactly? The term originates from a time when computers were made out of several boxes — you could say racks, today. Maybe one box was the power supply, one box was the tape drive, one box was the hard disk, and one box was the actual CPU. The box that contained the actual CPU was the heart of the whole computer: the main frame. That's where the term originates from. At some point the term became so common that people started to refer to the whole computer as the mainframe, so today "mainframe" designates a computer architecture that started somewhere in the 60s, and Claudio is going to introduce what that meant and how it came about.

Yes. So we start in the late 50s, beginning of the 60s. There were many manufacturers of computers, of different types of computers, big and small, and they were all different and all incompatible. Even the same vendor sold many different lines of systems that were incompatible.
Maybe the systems had some small variations — maybe the same system slightly faster, a little more I/O, a little bit more memory — but it was more or less the same system. And if you wanted to move to a significantly bigger system, it was a completely different, new type of system: different architecture, different instructions, different operating system. Operating systems were written for one specific type of machine and were not compatible with one another; they were very closely modelled after the hardware itself. Even in the cases where you had the same hardware attached to different machines, the drivers needed to be written from scratch. That meant that moving your software to another machine, maybe even just a bigger one, meant completely rewriting it. Everything was different: the OS interfaces were different, even the programming languages were different, and the machine code — it was a mess. And it was a mess for the vendors themselves too, because each vendor had many different lines to cover the spectrum of performance that customers might want, and they had to support many incompatible platforms. So from the vendor side as well, it was a huge mess.

What happened was that IBM realized they were not exactly on the leading edge anymore. There was strong competition, and IBM risked becoming just a company that sells computers, like all the others. So a task group was created to address this issue, and its recommendation was to create a line of five compatible systems spanning a 200-fold performance range. Surprisingly — or at least for those times it was surprising — IBM followed the advice, did what the task group suggested, and replaced the whole product line with compatible machines.
The whole project was massive, estimated at 675 million dollars, which in the 60s was a lot of money — of which 30 million for software. But, as these things go, it went a little bit over budget: it ended up costing five billion, which for the time was an even more massive amount of money, of which 500 million just for software. Almost the whole original budget, just for software, basically. Toward the end of the project IBM was in financial difficulties — they sometimes literally struggled to find the money to pay the salaries, because they were really, really broke. Thankfully the mainframe worked, and they made a lot of money out of it. So that was a moment where they literally bet the company on the mainframe: if the mainframe had not been a success, IBM would not exist anymore.

So what came out of this thing? Some innovations. Eight-bit bytes: there was no concept of bytes at the time, there were characters, and they were usually around six bits, because with six bits you could represent all the letters and numbers and a few more symbols — that was enough for the time. But the mainframe introduced the concept of eight-bit bytes. Then, the concept of an instruction set architecture: you have an abstract instruction set, and then you have machines implementing those instructions, and you can have a cheap machine or an expensive machine, a slow machine or a big machine, and it's the same instructions. You can take your software from the slow machine — if your company grows and you want to buy or rent a new, more expensive machine —
you can just do it, move your software, and it just works.

Solid Logic Technology was, at the time, an innovation: basically putting small standardized components — some transistors, some resistors — on small printed circuit boards in a specified configuration, so you would have standardized modules for whatever kind of function you wanted, and you could build your machine by slotting in the standardized modules. Another interesting thing was, again, hardware abstraction in the OS.

So how does the S/360 architecture look? First of all, it's big-endian, because at the time that was the most logical and obvious thing — you write from left to right, big digits first, right? 24-bit addresses, and consistent instruction formats: instructions are 2, 4 or 6 bytes long, and the first two bits of an instruction indicate the instruction length. This is interesting because modern mainframes today still have the same system — instructions are still 2, 4 or 6 bytes, and the first two bits still indicate the instruction length. Take note, Intel. Registers: sixteen 32-bit general purpose registers, which was a lot in the 60s — if you think about the 8086, which came out roughly ten years after the first mainframe, it had half as many registers and they were half as big. We have one program status word (PSW), which is like a mix of program counter and flags: some control flags, interrupt flags, whether we are in supervisor mode — kernel mode — or in user mode, and a few other things. And four 64-bit floating point registers — optional. Also take note that the registers are 32-bit, but addresses are actually 24-bit; that's important later. Channel I/O is interesting, because it's not anything like port I/O or memory-mapped I/O, and it's not even like DMA in the usual sense, because the device cannot just write into memory anywhere.
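That two-bit length encoding, unchanged since the S/360, is simple enough to sketch. A minimal illustration in C — not taken from any real disassembler, just the rule as described:

```c
#include <assert.h>

/* S/360 (and still z/Architecture today): the two most significant bits
 * of an instruction's first byte encode the instruction's total length:
 *   00 -> 2 bytes, 01 or 10 -> 4 bytes, 11 -> 6 bytes. */
static int insn_length(unsigned char first_byte)
{
    switch (first_byte >> 6) {
    case 0:  return 2;          /* e.g. AR  (0x1A), an RR instruction */
    case 3:  return 6;          /* e.g. MVC (0xD2), an SS instruction */
    default: return 4;          /* e.g. A   (0x5A), an RX instruction */
    }
}
```

With this, a disassembler can walk an instruction stream without decoding anything else first, even though the stream mixes 2-, 4- and 6-byte instructions.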
There's a channel controller that is programmed by the operating system; the device doesn't write to memory directly, it just communicates with the channel controller, and the channel controller writes into memory where the operating system told it to. So it's kind of like having DMA with a built-in IOMMU — and that was there in the 60s already. Interrupts, with classes and subclasses. Protection — I will cover that in the next slide. Decimal (BCD) arithmetic: this is interesting. Eight-bit bytes mean that you can put two base-10 digits in one byte, one per nibble, and then you can do base-10 arithmetic, for example for banks, or for COBOL — which was itself a very new thing at the time. And then we have what we call DAT, dynamic address translation — virtual memory — on one specific model, which also had multiprocessing with multiple CPUs.

So, protection. There was no virtual memory on most S/360s, but there was a mechanism to protect memory: storage keys. Each two-kilobyte block of memory had an associated four-bit value, plus an optional fifth bit. There was a four-bit field in the program status word; if that field matched the storage key of the block of memory, access was granted, otherwise it was not. Optionally there was fetch protection: depending on whether the fetch-protection bit was set, reading was allowed but not writing, or even reading was not allowed. And key zero could do everything — kind of like kernel mode. Why is this interesting to show? Because this is still there today. It's four-kilobyte blocks now, not two kilobytes, but it's the same mechanism.
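As a sketch of the rule just described — this is a simplification in C, not the exact architected check, and the struct layout is invented for illustration: each block carries a 4-bit key plus an optional fetch-protection bit, the PSW carries a 4-bit access key, and key 0 overrides everything:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of S/360 storage-key protection (illustrative only):
 * each block of storage has a 4-bit key and a fetch-protection bit;
 * the PSW carries a 4-bit access key. */
struct storage_key {
    unsigned int key : 4;       /* access-control bits */
    bool fetch_protected;       /* the optional fifth bit */
};

/* Stores are allowed when the PSW key is 0 (think kernel mode)
 * or when it matches the block's key. */
static bool store_allowed(unsigned int psw_key, struct storage_key sk)
{
    return psw_key == 0 || psw_key == sk.key;
}

/* Fetches are only checked when the block is fetch-protected;
 * otherwise anyone may read. */
static bool fetch_allowed(unsigned int psw_key, struct storage_key sk)
{
    return !sk.fetch_protected || store_allowed(psw_key, sk);
}
```

So a program running with PSW key 3 can read, but not write, a key-5 block without fetch protection — and can do neither once the fetch-protection bit is set.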
It's still there today: storage keys are still there. Linux doesn't use them, but the hardware has them.

Hexadecimal floating point is also interesting. It's of course not IEEE compliant — mostly because the IEEE standard came out about 20 years later. Another interesting thing is that the long and the short format only differ in the size of the fraction, so the range was the same; the longer one just had more precision. And the exponent is not in base 2 but in base 16, so it indicated in which nibble the radix point sat, not in which bit. No NaNs, no infinities — there was an interrupt that could be used to signal when something went wrong, like division by zero, things like that.

These are two of the models — one of the smallest and one of the biggest models of S/360 that came out — and you can see they covered a very wide range of performance, from ten kilo-instructions per second to ten mega-instructions per second: from very slow to, for the time, actually very fast. From eight kilobytes of memory, which is very low, to four megabytes. It's also interesting to look at the weight: they were quite heavy. Almost one ton for the small model, and the big model was from six to almost thirteen tons. And if you wonder why such a wide variation: it depends on the memory. More memory means you need more of these gigantic steel frames containing the memory.
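Going back to hexadecimal floating point for a second, here is a hedged little decoder for the short (32-bit) format as a sketch: 1 sign bit, a 7-bit excess-64 exponent in base 16, and a 24-bit fraction — the long format differs only in the width of the fraction, which is why the range is the same:

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Decode a 32-bit "short" hexadecimal floating-point value:
 * bit 0 = sign, bits 1-7 = exponent in excess-64 notation (base 16!),
 * bits 8-31 = fraction (six hex digits).
 * value = (-1)^sign * 0.fraction * 16^(exponent - 64) */
static double hfp_short_to_double(uint32_t w)
{
    int    sign     = (w >> 31) ? -1 : 1;
    int    exponent = (int)((w >> 24) & 0x7f) - 64;
    double fraction = (w & 0x00ffffff) / (double)0x01000000;

    /* base-16 exponent: 16^e == 2^(4e), so ldexp applies it exactly */
    return sign * ldexp(fraction, 4 * exponent);
}
```

For example 0x41100000 decodes to 1.0 (exponent 1, fraction 1/16), and 0xC1200000 to -2.0 — note there is no hidden bit and no NaN/infinity encodings anywhere in the format.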
So more memory means it's actually heavier and bigger — and more power.

The S/360-67 had virtual memory: some control registers to handle it, a new PSW format because that was needed, four-kilobyte pages, and a TLB. We actually looked at the documentation, and it went into very, very specific detail about how the TLB works — how new entries are formed, how entries are evicted — and we wondered: why are they putting so much detail into how the TLB works? Well, it had eight entries. So you needed to write your software carefully to make sure you wouldn't thrash the TLB all the time; that's why it was so detailed.

There was a bunch of operating systems written for the S/360. I will cover some of them in the next slides; some I will not cover because they're not so interesting. But there's one I want to mention: TSS/360, the Time Sharing System. It's the operating system that IBM wanted people to use for time sharing — concurrent, interactive use of a multi-user system. Most of the other operating systems were for batch processing; they were not meant for interactive use. So IBM put lots of resources into this TSS system — and it didn't really work very well. I have read somewhere that it took ten minutes to boot and would then run for approximately ten minutes before crashing. I don't know if that's true. But IBM really pushed it, really wanted it to be successful.
So: OS/360, the big one, took a lot of time to develop, because it was big and complex. So they decided to write smaller OSes to fill the gap, because they needed to sell hardware, and hardware needs an OS. So they wrote some smaller operating systems. Then they realized that the big OS/360 was actually too big and couldn't run on the smaller machines — so, oops, maybe they should actually keep the smaller OSes for the smaller machines. Also, some customers had just invested in the smaller OSes and didn't want to switch. So: BPS was actually not an OS, just a collection of standalone programs that would run directly on the machine, like a compiler and such things. Then BOS, a basic operating system that would run in eight kilobytes, so also on the smallest machines. TOS, a tape operating system, for when you had tape but not a disk, because disks were expensive. And DOS, the disk operating system, which could run in sixteen kilobytes of memory.

OS/360 was the flagship for the S/360 mainframes. Three variants, all with the same API and job control language. PCP: a single task, could run in 48 kilobytes. MFT, meaning multiprogramming with a fixed number of tasks — i.e., memory was partitioned into fixed slices, and you could run different tasks in different slices. Of course it was all batch processing, so not interactive. And MVT, multiprogramming with a variable number of tasks, which was big and required half a megabyte of memory, which was actually a lot of memory at the time. They all had structured file naming — the file names had a structure, so you could have hierarchies — plus various forms of remote access (except for PCP) and subtasks (except for PCP). Actually, PCP was not even released in the final version for the S/360, because, well, you can just use DOS, I guess.

So, Claudio, what could possibly go wrong if you have an operating system with a variable number of tasks but no virtual memory? Memory fragmentation! So what happened is that memory could get fragmented, and if a program needed more memory, the OS would just figure out which process to literally swap out to disk — not a page: the whole process would be swapped out, and the memory freed for another process. Not exactly super nice, but it worked.

Another interesting thing is the Cambridge Monitor System, CMS, written by IBM's Cambridge lab as a single-user operating system that could run bare-metal on the S/360. People really liked this single-user operating system, but they wanted to have it kind of multi-user. So they did the logical thing: they wrote a hypervisor, to basically have virtual machines, and ran their single-user operating system inside a virtual machine. And since you had multiple virtual machines, you had a multi-user system — kind of, logically. Of course the virtual machine was the trap-and-emulate type. The first version ran on a specially modified S/360-40, which did not have virtual memory — they literally hacked the microcode to get it — and it was used as a prototype for the actual CP-67, Control Program 67. CP-67 was actually used in production, but not supported by IBM. IBM didn't like this stuff — they wanted to push their TSS, remember — so they never really supported it, they didn't really care too much about it in the beginning. Eventually it became VM/370 — VM stands for virtual machine.

Another interesting thing is ACP, the Airline Control Program. Real-time, transaction oriented, not a general purpose OS — you cannot even compile a program on it, you need to cross-compile for it. But it can do lots of transactions. So airlines used it for reservations, and then banks were like: hmm, you can do transactions in real time... hmm. So banks were using it too, and then IBM was like: maybe we should not call it the Airline Control Program anymore, it's more like a Transaction Processing Facility — yes, that sounds nicer. And it still exists: now it's called z/TPF and runs on the 64-bit architecture.

So, the 70s: a new evolution of the architecture, S/370 for the 70s. Virtual memory was now part of the base architecture, but it was different from the S/360-67 — different and incompatible, so you couldn't just take an operating system from the 67 and run it here. It supported two-kilobyte and four-kilobyte page sizes. There were some other interesting things, like built-in support for multiprocessing. Later on the architecture was extended — the eXtended Architecture — which had a switchable, per-process 31-bit mode. Yes, it's not a typo.
It's 31. If you wonder why 31: well, we had 32-bit registers, so the highest bit was used to switch between 31-bit and 24-bit mode, for backwards compatibility. At this point only four-kilobyte pages were supported for virtual memory, and four-kilobyte blocks for storage keys. The channel I/O was completely rewritten, in a very nice way. Also vector instructions — which unfortunately I don't have the time to talk about, but they were very nice — only on one specific model in the beginning. And then ESA/370, the architecture with access registers: access register mode is a crazy thing that can be used to have up to 256 address spaces at the same time in the same process. It's still there in the architecture, and it's quite complex — I gave a talk about it in 2019, in case you're interested in what it is and how it works; you can just watch the recording, or ask us later. Another interesting thing is LPAR, logical partitions: you could slice the mainframe into logical partitions that behave like completely separate machines. So you could run, say, production and testing on the same machine, because they were actually two different partitions that could not influence each other, and it was stable. They're still there, by the way, in modern mainframes. That was the S/370.

So, virtualization. At some point IBM starts to realize: hey, maybe virtualization is important.
So first, in 1980, they had a small extension to allow VM to run faster, and then in '84, with S/370-XA, came the Start Interpretive Execution (SIE) instruction, which is kind of like the VM-entry instructions you have on modern x86 machines — but it was there in '84 already. Nested paging was supported out of the box from the beginning, and most instructions execute inside the guest without even an exit. There's a control block describing the guest CPU, and it's still there today: SIE is the instruction that KVM uses on s390 to run virtual machines.

In the beginning there were some clones that merely copied the instruction set without being exact drop-in replacements — except for the Soviets, who actually did want to clone it. Then in the 70s one guy, former IBMer Gene Amdahl, formed the Amdahl Corporation and started selling drop-in replacements for IBM mainframes. Some other companies noticed that and said: hey, we can do that too — and so they did. It's a bit embarrassing, but at some point some of the competitors sold better hardware than IBM itself.

So this is an overview of the operating systems. We can see DOS on the left; OS/360, which became MVS, then OS/390, and then z/OS, which is still there. The control program, which is VM, is now z/VM. TPF. And then we have a bunch of Unixes. The first one is an actual Unix, ported with the help of AT&T, running inside TSS — basically using TSS as a supervisor, with the Unix kernel kind of running as a process inside TSS. Then, also in the 80s, Amdahl Corporation had a different port of Unix running inside VM instead: to avoid interacting directly with the hardware they relied on VM, and that was quite successful, and IBM kind of had to run after them. So there were IX/370 and VM/IX, which were actually two different pieces of software — one based on the Unix port from above, and one a different thing.
I searched the documentation — I dug into all the old documentation and announcements — and I couldn't really find much information about this. In 1988 AIX/370 was released, again a proper Unix, and in 1991 AIX/ESA, which could finally run bare-metal and not necessarily under VM. Then the whole thing was kind of merged into MVS itself, into the flagship OS. Then Linux happened, and in 2001, z/OS Unix System Services: basically IBM merged Unix into z/OS — z/OS is actually POSIX compliant. And, interestingly, in 2008 someone ported OpenSolaris to the mainframe, but it was a short-lived effort.

So let's talk about ESA/390, the further evolution of the architecture. We get additional floating point registers — we now have 16 in total — and in addition to hexadecimal floating point, we now also get the proper IEEE 754 floating point format, so you can do both, whatever you prefer. We also get a few additional instructions to work with immediates and with relative addresses. And I think the most notable thing is suppression on protection, and we need to look at what suppression on protection actually means.

If we want to talk about Unix, something very important we need to talk about is the fork system call. What the fork system call does: if you have a process, and that process executes a fork system call, you get a second process that is identical to the first one. Because copying a process means you'd have to copy all its memory, and that would be slow, most operating systems — almost all Unixes — do it the following way. You have the pages of the first process; these are read-write. Then the process executes the fork, and the pages are marked read-only. Now, what happens when the forked process tries to write to one of these pages?
It tries to write, but the page is read-only. So what happens? An exception is generated, and the operating system can do copy-on-write: it copies the page and then performs the write on the copy, so the original contents of the page are not destroyed. Now, an important property for this to work is that the exception happens before the instruction that caused it has actually done anything. It must not happen that you do a write, then cross a page boundary, and only then get the exception at the moment you cross the page boundary — that would be bad, because it would mean something had already been written to the previous page. That property is very important for Unix-style operating systems. And the 390 architecture did not behave that way: it could happen that you got the exception, but part of the instruction had already been executed. That's not so good — it makes fork very hard to implement. So basically you can say the 390 was not a very good fit for Unix-based operating systems.

So in February 1993 IBM realized that and added suppression on protection to the architecture, which basically guarantees that when you get this protection exception, the write has not been executed yet — plus a few other things. What's kind of interesting: in 1993 there already was a Unix on the mainframe, right? So how did they do that? We don't know. We assume they had some internal knowledge of exactly how the machine works inside, so they could kind of work around it a little bit — but it was probably very hacky. I do know for a fact that they did copy-on-write, because I looked into the documentation, and yes, they were doing copy-on-write somehow. So we don't know exactly how — but, yeah, the architecture now has support for suppression on protection.
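The fork-and-copy-on-write behaviour described above is easy to demonstrate from user space. A POSIX sketch — the kernel does the page copying behind the scenes; nothing here is s390-specific:

```c
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* After fork(), parent and child logically have identical memory; the
 * child's write triggers the protection fault, the kernel copies the
 * page, and the parent's data survives untouched. */
static int parent_survives_child_write(void)
{
    char *page = malloc(4096);
    strcpy(page, "parent data");

    pid_t pid = fork();
    if (pid == 0) {                     /* child */
        strcpy(page, "child data");     /* copy-on-write happens here */
        _exit(strcmp(page, "child data") == 0 ? 0 : 1);
    }

    int status;                         /* parent: wait for the child */
    waitpid(pid, &status, 0);

    return WIFEXITED(status) && WEXITSTATUS(status) == 0
        && strcmp(page, "parent data") == 0;
}
```

The child sees its own modified copy of the page while the parent still sees the original contents — exactly the guarantee that breaks if the protection exception can arrive after part of the instruction has already stored something.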
We can do proper Unixes on the architecture now. Good.

Okay, so: bipolar versus CMOS. Bipolar and CMOS are two technologies for how transistors work, how CPUs are built. Mainframes, being so old — bipolar came first — were traditionally based on bipolar technology. And bipolar technology is interesting, it's very good, because it's so fast: you can switch the transistors really, really fast. But it has a striking disadvantage: it runs extremely hot. Things get very hot when you use bipolar technology, and IBM really had to spend lots of effort to get rid of all the heat. At my university we always had the problem that the basement was very cold, and I remember asking: guys, why is it so cold here, can't you turn the climate control up a little? And they told me: no, it's not possible, it's already turned down as far as it will go — the building was built in the 70s, when they expected to get more bipolar machines, so they built a huge AC into the building, and now the basement is always too cold. Yeah, okay. So that was bipolar times.

So IBM slowly started realizing: we have to spend lots of engineering effort just to get rid of all the heat. And in 1994 IBM started digging into CMOS technology. CMOS meant the machines were a bit slower — the transistors could switch a little more slowly, initially — but they ran much cooler. They built a line of machines based on CMOS technology, removed a few optional features, for example the vector instructions — and the first CMOS machines, I have to admit, were not so nice.
They were slow, prototype-like things. There were some applications, but they were really, really slow. Slowly IBM kept evolving the CMOS technology — we found a paper with some data on how they designed the CPUs at the time — and CMOS was getting better and better, and at some point it was actually better than the bipolar technology, and it didn't have all the hassle with the heat. With the fifth and sixth generations of these CMOS-based machines, they were actually faster than the bipolar machines, and eventually the whole mainframe moved from bipolar to CMOS technology.

One more interesting thing about the architecture: IBM built machines that had more than two gigabytes of RAM at the end of the 80s, yet addresses were just 31 bits. So you might ask: how can you use more than two gigabytes of RAM when you only have 31-bit addresses? There are two parts to the answer. First, for the address space side, there is the access register mode Claudio mentioned earlier, which basically means that one application does not have just one address space — one application can have up to 256 address spaces. This way an application can effectively address something like half a terabyte, allowing it to deal with a lot of data. Second, how can you physically address more than two gigabytes of memory with only 31-bit addresses? For that, IBM invented a concept called expanded storage. Expanded storage basically means you can tell the machine: hey, I have this page here in physical memory, please put it in this other storage that is almost as fast as my RAM, and later I'll just take it back from there. So for a very long time the mainframe was actually able to deal just fine with 31 bits, and it also meant you didn't have to adjust your applications much — and keeping compatibility with your applications is always important.

But finally, in 2000, it happened: the mainframe moved to 64 bits. What IBM basically did: registers were extended to 64 bits, and there were new instructions to deal with the 64-bit registers. We still kept compatibility with 31-bit — in fact also with 24-bit, so you can still run a 24-bit application just fine, it's not a big problem. The page table levels were extended, and you can basically say that from this point on, the clone manufacturers were dead: they could not keep up with the switch to 64 bits. I just want to point out that the number of page table levels is variable per process — that helps performance. Good.

So let's look a little bit at the evolution. With the z900 in 2000 we get the so-called IFLs. IFLs are special processors that can only run Linux; the idea is basically that if you only want to run Linux on a certain processor, you get that one cheaper. With the z990 we get the zAAPs, which are again specialized processors, meant to run databases and Java applications, also for licensing reasons. Then, in 2005 — mainframes don't have a BIOS.
We have something that's a little bit nicer: a fancy web UI where you can configure your computer and how it's going to boot. It's called the HMC. It originally ran OS/2, and in 2005 we finally switched it to Linux. We got large pages — a little late to the party, but we got them. In 2010 our processor reached a clock speed of 5.2 GHz, probably the fastest clock speed at the time. With the zEC12 we got transactional memory. With the z13 we got vector instructions again — different ones, not compatible with the previous ones, but we got them — and we got SMT, so you can run multiple hardware threads on one core, which was also made mainly for Linux. With the z14 we stopped supporting 31-bit operating systems — which doesn't mean you can't still run 24-bit applications; just your operating system needs to be 64-bit. With the z15 we get support for Secure Execution, which basically means running confidential VMs on your mainframe. Good. And the z16 — it's so new I always forget it — has the neural network assist, which is basically neural-network acceleration.

So, let's take a look: if you want to port Linux to an architecture, you need a compiler first, and for Linux that basically means you need GCC. So we first need to answer the question: how did GCC come to the mainframe?
If we dig into this a little, we find that in October 1993, code appears in GCC for the so-called i370 backend. It was developed outside of IBM, by someone who wanted to compile applications for the MVS operating system — it was really targeted at that: you could compile applications for MVS and run them on the mainframe. It was not really meant for operating systems, basically just for that one purpose. And then for quite a long time absolutely nothing happens. In 1997, IBM wants to replace a compiler they use internally, which had been developed at the end of the 80s. They're looking for a C compiler, and they take a look at this i370 backend for GCC, and while analyzing it they find: yes, you can actually port GCC, and it works on the mainframe. Then they look at their requirements a little and realize it's not quite what they need, so they start a new port, the so-called s390 port, which has less backward compatibility — hence also the different name, 390 versus 370. So they develop a GCC backend for the mainframe. In 1998 the Linux ports to the mainframe start, and that further accelerates the development of the compiler — for example, we get ELF support in GCC, and other features that you need for an operating system, for Linux in general. And finally, after quite some time during which the s390 backend was maintained out of tree, in 2001 IBM upstreams the code and assigns it to the Free Software Foundation, and since then GCC has carried the s390 backend natively upstream.

Good, now we have a compiler. Imagine you're employed at IBM, sometime in maybe 1998. There's this shiny new operating system called Linux — you might be playing around with it at home a little — and you have this nice GCC compiler. So what do you do? You basically start prototyping a port of Linux to the mainframe, and that's actually what happened.
So in 1998, IBM engineers, in their free time, basically build a prototype: yes, we can port Linux to the mainframe. They show it to people, and people seem to like it, and IBM eventually makes it an official thing, but they're a little bit hesitant. So on the 18th of December 1999, the source code of the S390 port of Linux is released, but just on IBM's FTP server. It's not an upstream thing at first; you can download it from IBM's FTP server, so it's just an IBM thing.

Until, in January 2000, just a little bit later, it appears in a release of the Linux kernel: with Linux 2.2.14, Linux actually learns how to run on the mainframe upstream. That was fast, that was really fast. People seemed to like it, it seemed to be interesting, and very quickly after that the first Linux distribution for it appears, Marist Linux, which was targeted only at the mainframe. And so Linux runs on the mainframe; that's basically the story.

So I mentioned earlier that there were actually two ports of Linux to the mainframe. There was the S390 port, which was developed by IBM engineers in their free time and later made an official IBM project, and then there was also the i370 port, named after the GCC backend. This one was developed outside of IBM, by people who did not have access to the newest, shiniest machines, so the i370 port was also compatible with older boxes. It also used a slightly different syntax.
So it was more familiar to people that program the mainframe, easier for them to understand the code, whereas S390 uses a somewhat different syntax. It also uses a completely different toolchain: the S390 port uses the GCC S390 toolchain, and the i370 port used the i370 toolchain of GCC.

You have to say that the i370 port was certainly less stable than the S390 port. I read online that it could boot and spawn a shell as the init process, but then it tried to open the console and it crashed. So you got quite far, but it was not very useful. Still, for something done by a volunteer, basically outside of IBM, it was actually a considerable achievement, whereas the S390 port, of course, was done by IBM employees. The i370 port was then later abandoned when the S390 port was published by IBM. Yeah, so that's the story of the two ports of Linux for the mainframe.

Good. So the question I asked: why was this interesting? Why did the source code come into the Linux kernel so quickly, why did people build Linux distributions for it, and why was there general interest in this? These are the reasons that I found; maybe there are more, but this is what I found.

The first reason that I found is consolidation. In the 2000s the data center looked like this: it was boxes. You had one box for the web server, one box for the database server, another box for the mail server, then another box for the FTP server, and whatnot.
You had just an absurd amount of boxes in your data center. And because the mainframe knows virtualization quite well, with z/VM and Linux you could basically put all those boxes on a single mainframe. That was quite attractive, because it would save space in your data center, it would save the hassle of running around and replacing hardware all the time, and it would also save energy. So this was one reason why people said Linux is interesting: I can just take z/VM and Linux and run all my boxes on the mainframe.

The second reason that I found is Java. People really liked Java, they wanted to run Java applications (okay, they did, maybe, I don't know why), and Java worked well on Linux. So it was attractive for people to run Java applications on Linux.

Another reason was Unix: application vendors often said that they wanted a Unix-like OS. So, a Unix-like OS: why not Linux?

Also saving costs; I think that was also a very important point. Linux is a comparatively cheap operating system, while mainframe operating systems are expensive. Later, IBM also introduced, as I already mentioned, cheaper CPUs that could only run Linux, the so-called IFLs. And I think you also have to mention this: it was just cool. People liked Linux, they ran it at home, so maybe they also wanted it on their mainframe.

Good, so let's take a little look at the evolution of the Linux kernel. In 1999, Linux for S/390 is published. In 2001, Linux learns how to run on the 64-bit S390x architecture.
That's what we call the 64-bit variant. The 64-bit kernel, of course, can run the 31-bit userspace just fine, and it still does that today.

In 2003, Linux on the mainframe gets SCSI support. Up until that point you had to have specialized storage for the mainframe, so you had to buy a specialized storage box just for your mainframe, which was expensive, and you maybe already had some SCSI storage around somewhere. So since 2003 you can use your SCSI storage with the mainframe.

In 2008, the Linux kernel learned how to run VMs on the mainframe, with KVM. And what demonstrates the virtualization capability: in 2008 IBM showed a mainframe, or rather a Linux partition on the mainframe, running around 200 Linux VMs, which is quite a lot. So it shows that the platform can do virtualization quite well.

In 2013, the Linux kernel learns how to talk to PCI devices on the mainframe. It's basically a more standard interface to the hardware, for NVMes, for network cards, for hardware security modules. But don't believe that you can just take any hardware and plug it into the mainframe; most of the time you still need specialized hardware with some adjustments. And if you're wondering why PCI support came so late: mainframes had channel I/O, which was actually quite good. Very good, even.
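Incidentally, whether a Linux host can run such VMs is easy to probe from userspace; a small sketch (hypothetical helper, not from the talk): KVM shows up as the /dev/kvm device node, and on an s390x host guests run with hardware support, while on any other architecture QEMU can still emulate the mainframe in software.

```python
import os
import platform

def virt_report() -> dict:
    """Report what this Linux host offers for running mainframe guests.

    On an s390x host with KVM, guests run with hardware support;
    on any other architecture, QEMU can still emulate s390x in software.
    """
    arch = platform.machine()
    return {
        "arch": arch,
        "native_s390x": arch == "s390x",          # are we on a mainframe?
        "kvm": os.path.exists("/dev/kvm"),        # kernel exposes KVM here
    }

print(virt_report())
```

On a laptop this typically reports `native_s390x: False`, meaning the QEMU route mentioned later in the talk is the way to experiment.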
So PCI was basically not needed, until it was, which was still quite late.

In 2015, IBM releases LinuxONE, a specialized system that can only run Linux. The idea, again, is that you save costs: with a traditional mainframe you need to have at least one processor that is a non-IFL processor, while with LinuxONE you can run just Linux, and so again save a little bit of cost if you intend to run just Linux.

In 2017, the mainframe gets support for SMC, which is basically a socket-like connection that uses shared memory in the background; it works locally and also remotely between mainframes. And in 2020, Linux learns how to run confidential VMs with Secure Execution.

For Secure Execution, Claudio has a talk. Yeah, Secure Execution alone is a whole talk, which I have already given, actually, at the KVM Forum in 2019. So if you're interested, just look that up, or talk to us afterwards.

So these are a few references that we have. You can find the documentation, the manuals, the Principles of Operation as we call them, the manual that describes the instructions and the CPUs and some extensions. You can look them up using these numbers, and then you will find the documents.

And in case you want to play with it: buy a mainframe. Yeah... maybe not, no, not really. If you have an application and you want to try it on big endian, you can actually use this link: sign up there, and you will get a Linux VM running on a mainframe. So in case you have something and you want that architecture, and maybe you like the mainframe, you can just sign up there, get the VM, try your software on it, and play with Linux a little bit. In case you enjoyed that, there's also zPDT, which basically allows you to run z/OS, but it's also expensive, and a product. And there is also QEMU, which can emulate the newer mainframe hardware, and for the older hardware there are other emulators and software you can use. Claudio, now? Well, copyright worked differently
in the 70s. And if you did not put a copyright notice in your software, or whatever you were publishing, then it was not copyrighted. And IBM didn't bother to protect their operating systems in the beginning, because why would they? If you get the OS, you're going to buy a mainframe to run it, right? So who cares? Then the clones started to arrive, and then they started to care.

So you can still get OS/360, DOS/360 and TOS/360 for free; they're literally public domain. You can just Google them and you will find them. And for VM, you can actually even get VM/370 in the public domain, because IBM didn't care about VM in the beginning; they were pushing TSS, which you can also find, because it didn't really go well. So if you want to play a little bit with these things, you can literally just Google, and you will find websites where you can download an image ready to run in an emulator.

Good, and that's the end. Any questions?

My question is: how many of your customers are still running non-Linux applications, and are there special features to communicate between Linux and the other operating systems?

I actually don't know how many customers do. There are special features; I think it's called HiperSockets, which you can use to communicate between z/OS instances and Linuxes. Many customers use it to have a legacy application on z/OS, and then you have a fancy Java, Linux, whatever application that talks to your traditional application. But the numbers we don't know; I mean, someone at IBM does know, but not us, it's not us.

I found one point missing on your list of reasons.

There are many things missing in this; we prepared a presentation and had to throw away about two thirds of it.

Yeah, but a customer of mine bought mainframes larger than their mainframe usage, so they had to think about how they can put load on the mainframe that
isn't generated by batch processing, and that's the reason why they adopted Linux on the mainframe.

Ah, okay: the CPUs got faster than the usage they had reserved, so they could use the cheaper capacity. Interesting, interesting detail.

I noticed that the version numbering was a bit weird, like z990, that's nine, and then up again, and at some point there were other jumps. Is there a reason for that, like marketing?

So it's just like Apple skipping numbers. Yeah, actually, if we go back, we can see something interesting. If we go back to this slide here, you see G1, G2, the generation one, generation two, and G5, G6, and then we jump to z900, because it sounded cool, and then z990, but then, why that nine? Because this was G7, this was G8, this is G9, G10; then they decided to do something else here, and then we are back on track. So it's marketing. I hope it will stay like this, but it's not in our control. It's not on purpose, it's just that we can't count.

Thanks, first of all, thanks for the talk. IBM has another kind of obscure hardware platform with the POWER platform, and I spent a lot of time working on POWER, and POWER has the neat possibility to run SMT-8, so basically eight threads on one core, which is kind of crazy. So I was puzzled a bit whether this is also available on the mainframe, or whether we are talking SMT-2 here.

SMT-2.

Okay, thanks.