Today, we'll start with Jack Cummings. He's been a Linux user for about 20 years, and a NixOS user for the last two or three. He works at Intel on the solid-state drive SoC team, so I think this is going to be a very interesting presentation. So... let's start. Different, at least.

All right, like Rock said, my name's Jack. I work in Vancouver, British Columbia, Canada. That's the view from our lab, actually. Not the view from my cubicle, I wish, but... What we do in our office — we do a lot of things — we're part of the team that designs the SSD controller ASICs. And if you don't know what SoCs are, it stands for System on a Chip. It's part of Moore's Law going forward: we integrate more and more into a chip, and where previously it would take, whatever, ten chips to run a computer, now it takes one. So the controller ASIC for these things is getting bigger and bigger, with more and more stuff put on it. I spend a lot of my time at work figuring out how to be more productive, how to make the people I work with more productive, how to make the computers that we have more productive, and just be able to do more.

So this is an SoC I worked on. It had the project name Sage Peak. Someone mentioned earlier that they use birds for project names — that's what ARM does, actually. We use places. There's no meaning associated with them, so it gives us a common thing to talk about that no one outside can figure out. This ASIC went into a product called Fultondale, which is an NVM Express SSD. NVM Express is itself a fast and nifty technology that we worked on, actually, with the Linux kernel developers, developing a lower-latency I/O stack. There's actually a whole bunch of these things on the market right now: Samsung has a few, we have a few, and more people are making them.
If you want to talk more about why this is really nifty compared to SATA or SAS or other kinds of technologies, I'm the person to talk to. So back to what a system on chip is. They're integrated circuits, as you can see there; everything is in that one little package, and in this case actually a single die. It integrates all the components of a regular computer onto a single chip. This includes all of the I/Os — input-outputs — basically electrical things that turn digital signals into analog signals. They're everywhere: they're in phones, they're in the Internet of Things; it's all SoC-based computers now. So it's kind of a big business, right? There are billions of these things made every year.

All right. What are SoCs assembled from? Well, it turns out lots of things. The part that we do most of is digital logic. And digital logic is still analog electronics; it's just that we do some statistical proofs and characterizations and analysis to make sure that things behave digitally, so it's either on or off. What we actually write is what we call register transfer level (RTL) code, which is what the diagram there shows — it's actually an inverter. You can't really see it from the slide here, but there is a pin called in on the left and a pin called out on the right. And that is actually what it looks like in silicon; that's cribbed from a Magic schematic. The code there is what infers that. You can tell there's a fair bit of boilerplate here: the input and output declarations are actually the wires, and the always_comb says that whenever the input changes, the output should change too, and it should be the opposite of the input. That's kind of the definition of a trivial example. But this is a single gate, and some of the designs we work on go up to the order of 50 million gates.
So what happens when you stitch all this stuff together is you get something called a netlist, which is actually just a humongous graph where your nodes are logic gates and your edges are wires. We use Verilog and SystemVerilog. They're both terrible, organically grown languages. They kind of remind me a bit of C++: not a nice, pure design language, but something that has evolved over time and grown all kinds of warts. We also don't do the actual place and route at our office. We generate a netlist that contains all the standard cells — the AND gates and OR gates and flip-flops and everything else — and hand that off to another team. I'll get into why that's important later. But that in itself is a huge job.

A bit about the problem that I solved and the solution I came up with using Nix, and what made it better than what we had before. One of the problems in hardware design is that the cost of screwing up is really high, so we tend to spend a lot of time making sure we got it right the first time. Once we ship something off to the fab, the foundry, to actually be manufactured into a real thing — and that's kind of what's neat about this: you write code that turns into something you can see, well, with a big enough microscope — it takes months for them to get through all the wafer processing, and it's very expensive. So if you have a bug, like back here: if I forgot that little tilde in front of in, that's no longer an inverter — pardon me, it's a buffer. So that wouldn't work. That could be a seriously critical bug. So anyway, we're fairly rigorous about our design flow. We start from requirements, look at those requirements, and figure out how we're going to meet them; that's called architecture.
We write a specification to meet the requirements, and then we do something that's incredibly valuable — this is actually the reason why things work — we hand the same specification off to two people. We hand it off to a designer, who actually writes the code, and we hand it off to a verification engineer, who will then write a verification environment for it, a model as it were. Then, when they get to various checkpoints along the way, or whenever they feel comfortable, they start comparing the two, to make sure that they both agreed on what the specification was and that the design meets the specification. It turns out that that is a huge part of what we do. It's incredibly labor intensive, and it's very difficult to converge on. I'm sure everyone knows testing is hard, and it's really hard to test everything, but we kind of have to. We have a special methodology we've developed that at least lets us know what we've done and when we're done.

So once we've done the verification, we have the RTL, and the RTL is the code that we can synthesize into placed gates and hand off to be implemented. The actual synthesis is a bit like compiling code, except that you add additional constraints: it must run at this frequency, it must fit inside this area, it must use these certain standard cells. And then — because we don't trust our tools, because they're actually quite crap — we have another tool that we run to make sure that what the first tool did was right. Then we generate this netlist, which we hand off to the layout team. They do their layout and hand us back placed gates, and then we run that tool again on it to make sure they didn't wreck the thing when they were doing layout. Working for Intel has certain advantages: we get what they call super-SKUs of a processor.
So they'll take a high-end Xeon processor, cut out half of the cores that are too slow, disable hyperthreading, and crank the clock frequency on the others, because they're all specially screened silicon. These are Ivy Bridge EP compute servers running at 4.5 gigahertz. They're really good at single-threaded performance. They probably have less throughput than the newer Broadwell EP stuff, but none of our CAD tool vendors understand how to do threading properly, so it's still all about single-threaded performance. And interestingly enough, because our CAD tool vendors don't like changing operating systems much, we have to run with these archaic kernels. I'm not kidding — that thing is, what, six years old now? It's been patched, I don't know, either 7,000 times or 700,000 times. This actually causes unique problems with the NixOS stuff, because glibc 2.19 is the last glibc that supports kernels that old, so I have to have a special version of Nixpkgs that uses an older glibc.

So this is the way we used to do things. We typically use make in isolation for various tasks, and since some of our tasks have multiple files, we use touch files. We have a bunch of shell scripts written by engineers, and engineers aren't really good at writing code. Then we have these Perl scripts that parse the output of make --debug, which is fragile and kind of horrific, and there's more confusion. To get around all this, we have checklists that we follow to make sure that everything was done right, which usually means you have to go back and try to figure out why your make scripts failed — and more confusion. And of course it's usually right before you hit your deadlines, when everything's coming together and there's the most stress, that you find that, oh dear, we forgot to compile that in, or somehow we got the wrong version of something. It can lead to tragedy.
An example. I'll come back to what this is doing — and there is a bug in here, actually; I just discovered it going through the slides this morning. If someone points it out when I come back to this slide, I'll be very interested. This is verification, and it uses all kinds of nice things that Nix has. There's a map-reduce: you can see here, the regression is actually mapping the attribute up top there over a function that builds simulations, and then reducing all of the outputs of the simulations into a coverage database. And it's neat to see that these kinds of things are — well, okay, maybe not exactly built into Nix, but it's really easy to just use them. Doing a map-reduce in make is a lot more work, and it's kind of hard to tell what you're doing.

So I'm going to talk a bit more about the design partitioning and the design process we follow, because it is important. When you start looking at the design flow, you can almost start picking derivations out of it: doing an elaboration is just a derivation. The design flow is kind of a dual of the partitioning and the actual tasks. The point of partitioning is that we want to give individual engineers a block that they completely understand, so that they can take ownership of it. And this is an important thing for engineering: engineers work better when they take ownership of things. When you have four owners for something, people don't put the same quality of code into it; whereas if they own the block and there's a bug in it, then that's their responsibility to fix.
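A minimal sketch of that map-reduce shape in Nix — all the names here (`runSim`, `mergeCoverage`, the `tests` set) are hypothetical illustrations, not the actual slide code:

```nix
{ lib, runSim, mergeCoverage }:

let
  # Hypothetical regression description: test name -> list of seeds.
  tests = {
    smoke  = [ 1 2 3 ];
    random = [ 10 11 ];
  };

  # Map: one simulation derivation per (test, seed) pair.
  sims = lib.concatLists (lib.mapAttrsToList
    (name: seeds: map (seed: runSim { inherit name seed; }) seeds)
    tests);

in
# Reduce: merge every simulation's coverage into one database.
mergeCoverage { inherit sims; }
```

Because each simulation is its own derivation, Nix's laziness and caching mean only simulations whose inputs changed get re-run.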
But the problem is our designs are too big to give ownership of everything to one person, so we have a hierarchical process where we recursively partition things down until we get to blocks that individual engineers can deal with. It's important because it bounds the complexity of design and it bounds the complexity of verification. A hand-waving mathematical argument: the possible state inside one of these blocks is 2 to the power of the size of the state vector inside. But when you're doing verification, we do black-box verification, so you can really only stimulate the outside of the block. It's kind of a perimeter-versus-area thing: the bigger the area of your block, the harder it is to reach all the state inside. So if your block is too big, it becomes very difficult to reach all the state inside.

Back to the design tasks. This is what we apply recursively to the blocks. The interesting thing here is that a requirement — pardon me, a specification — for a higher-level block becomes a requirement for a lower-level block. So you can do this recursively; I guess your fixed point is when you reach something a single person can deal with. And when you roll this back up, you have different levels of hierarchy: subsystems, and then a chip level. I was the chip-level lead for the last chip I worked on. I don't understand how the individual details of the blocks work, but I know they meet their specifications; I know where the specifications are, and I know they've been verified, so I don't need to know. We don't do code reviews, surprisingly, because we verify that the blocks meet their specifications. I actually don't really care what's in the blocks as long as they synthesize.

All right. We've gone through this same chart a few times. So this is what's implemented in Nix.
We could actually implement the architecture and layout stages — the hooks are there — but architecture is kind of fuzzy in the process we follow, and layout is something a third party does for us. The design, verification, implementation, and formal equivalence checks are all done with Nix on the project I did. And this is actually the entire task graph of what I did. It's a bit of an eye chart; let's see if I can zoom in a bit. You can actually see the derivations. Oh, come on, Evince. You can see the verification task here, and all the different derivations that get used as we grind through. And you'll notice — you'll recognize some of these from up above.

So why Nix works for us: it's nice and pure. It does lazy evaluation, which is important because some of these derivations can be incredibly expensive. It does fine-grained dependencies, which really helps with only building what you need to. After one of the presentations yesterday, I changed this bullet to "good reference documentation", because the language specification is actually really nice: all of the built-ins are well described, and how everything works is really nice. The assertions in Nix are actually awesome — I love them — and that's important when you have a lot of different types of derivations.

All right, let's talk a bit about design tasks, because it gives an idea of what things are like at the bottom. This is what a directory structure looks like. We have these .v files and the .svh files, which are Verilog files — SystemVerilog is an extension of Verilog. Verilog looks a lot like C with teeth. This directory structure is important because we use Mercurial — or we could use Git, it doesn't really matter — for our blocks. Each block lives in its own branch. And when a designer decides that they're at a good point to release something, they tag it.
And then when we integrate stuff, we just merge in tags. That works nicely because all the stuff for this meh block is in its own directory structure, so when we merge it with some other block there are no conflicts. Also, the way this works is that at a higher level, like a subsystem level, I just import that meh.nix file, which contains all the information about all of the block's RTL files. What's going on here is that I use a probably lesser-known feature of Nix. In Nixpkgs it mostly gets used for patch files that are version-controlled in the Nixpkgs repository itself: if you reference a bare file in Nix, it just copies that file into the store, and the variable then becomes the path to where it was copied in the store. That turns out to be really helpful when you're iterating on stuff: as soon as you do a nix-build, everything is copied into the store, and you can go back to editing your workspace without worrying about interfering with the build that's currently running.

So this is what one of those meh.nix files looks like. The only really complicated thing here is the compile units — it kind of begins to look like I'm using Nix like make here. This is a simple compile unit: it's a list of attribute sets. The reason for this is a Verilog LRM thing: it deals with the scope in which the compiler will pay attention to includes. And because includes are a terrible source of impurity, I tried to bound that very carefully, to the point where if you want to `include a file, you actually have to specify in the includes attribute the list of files you want to include, so that you aren't including something from some random spot in the environment that's not tracked by Nix. That way, if you change something in that include file, your hash changes as well and Nix rebuilds everything.
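A minimal sketch of what such a block file might look like, assuming a hypothetical `nixcad` helper set with `makeLib` and `makeElab` functions and purely illustrative file names:

```nix
# meh.nix -- hypothetical sketch of a block description.
{ nixcad }:

rec {
  # Bare file references: Nix copies each file into the store,
  # so workspace edits can't disturb a build already in flight.
  compileUnits = [
    {
      files    = [ ./meh.sv ./meh_fsm.sv ];
      includes = [ ./meh_defines.svh ];   # `include scope, tracked by Nix
    }
  ];

  # Compile everything into a simulator-specific library...
  lib = nixcad.makeLib { name = "meh"; inherit compileUnits; };

  # ...and elaborate it to check that ports and symbols resolve.
  elab = nixcad.makeElab { name = "meh"; libraries = [ lib ]; };
}
```

A subsystem can then `import ./meh/meh.nix` and reuse these attributes without recompiling the block's sources.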
So there are a couple of useful attributes in this meh.nix file: the lib attribute and the elab attribute. The lib attribute basically compiles everything into a library, which is a big binary file specific to the simulator. That is what we use at a higher level: we just start using the .lib files from all these little blocks, and then we don't need to recompile things. And the elaboration — I suppose the analogy is linking a binary to make sure that all of your symbols are resolved. In this case, it makes sure that all your ports match up and that everything fits where it's supposed to. These are relatively cheap things to do, and they catch a lot of problems.

So when we want to use this meh block at a higher level, this is something that instantiates the meh block. It has compile units too, because it needs to instantiate it somewhere, and it has a lib too, because it has code. But the interesting thing is that during the elaboration, you'll notice the libraries list now pulls the meh.lib attribute out of here. What that does is, when it does the elaboration, it says: okay, pull this elaboration library out of the store. And then I've got a bunch of regular IP that's kind of derpy that we get from third parties, and those just use standard derivations for putting that stuff in the right spots. And of course, since everyone packages their IP differently, if you want to put the models or the specifications all in the same spot, you have to have different derivations for all of them.

All right, the verification flow. I talked a bit about this before, but basically what it comes down to is that this is probably the core of what lets us do SoCs without many revisions. I'm not going to spend a long time talking about it because I don't really have a lot of time. Basically, we extract requirements from specifications.
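A sketch of how a parent block might reuse the sub-block's compiled library, again with the same hypothetical `nixcad` helpers:

```nix
# subsystem.nix -- hypothetical parent that instantiates meh.
{ nixcad }:

let
  meh = import ./meh/meh.nix { inherit nixcad; };
in
rec {
  compileUnits = [ { files = [ ./subsystem.sv ]; includes = [ ]; } ];

  lib = nixcad.makeLib { name = "subsystem"; inherit compileUnits; };

  # Elaborate against meh's precompiled library instead of
  # recompiling its sources -- Nix pulls meh.lib from the store.
  elab = nixcad.makeElab {
    name      = "subsystem";
    libraries = [ lib meh.lib ];
  };
}
```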
We map those — what we call design requirements — into verification requirements, which are the things we want to see and the things we want to check. We map those verification requirements into tests, and we run those tests. That only describes part of what we do, because that's all just directed tests, which is probably what's better understood in the software world. But the majority of our functional coverage comes from constrained-random simulations. When I talk about coverage, I'm not talking about code coverage; I'm talking about functional coverage. Functional coverage means that we gave the design the stimulus we wanted, it did the right thing when given that stimulus, and we saw it. This is important because what you really care about is that the thing did what you wanted it to.

As a quick question: does anyone know the difference between verification and validation? Anyone? All right. Verification is that a design meets a specification. Validation is that a design is suitable for a purpose. And they're not quite the same thing, because we don't always get the specifications right. But that's kind of outside of what we do.

So that brings us back to this. Anyone figure out where the bug was? It's an off-by-one: this is supposed to be a one. Testing! All right, pardon me. So what we're doing here is we have five different tests, and we run a whole bunch of seeds for each, because they have random components, and our simulations have random stability, meaning that as long as the design doesn't change, the same seed will generate the same stimulus. That's important for debugging: when something crashes, you want to be able to figure out why it crashed. But you'll also notice there's a bunch of skips in there. That's because, as you may have noticed, when you're designing or testing something, often your bugs are actually in your tests, not in your design or your code.
So this is just acknowledging the fact that when you start really testing things, especially with random tests, you come across all kinds of things you never expected, like address collisions and all kinds of stuff. The reason this is supposed to be a one is that when you do a range from zero to 200, that's actually 201 elements, not 200. So what this does is run all the simulations. This regression attribute here is a big list of lists of all of the simulation elements, which are generated with this build-regression function, and then we merge everything with this coverage thing here. There's stuff built into our language that says: when you see something, record that you saw it; and it keeps a database of what it saw. Then we merge across all the simulations we ran to see what we saw working everywhere. And if we saw everything we wanted to see, we're done, and it worked. This also means that if we never saw, say, a read followed by a write that worked, then that's a pretty good place to look and say, oh, there might be a bug there, because we probably generated the stimulus for a read followed by a write but never saw one working. It's kind of a fascinating way of looking at things.

All right, let's talk a bit about one of the simple derivations. They're all basically based on the standard environment, the mkDerivation stuff. As I was saying, the assertions are pretty awesome. You can actually read those assertions; they kind of make sense of what they're doing: builtins.isList, sure. Make sure that it's a valid module name — for instance, if you have a space in a module name, it's not a legal Verilog module name. The next two are important for the way the tool flow works: since libraries is a list, you could give it a list of strings, and those are...
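The off-by-one he describes falls straight out of Nix's inclusive range function; this is a generic illustration, not the slide's code:

```nix
# lib.range is inclusive on both ends, so 0..200 is 201 elements.
let lib = import <nixpkgs/lib>; in
{
  fromZero = builtins.length (lib.range 0 200);  # 201 seeds -- the bug
  fromOne  = builtins.length (lib.range 1 200);  # 200 seeds -- the fix
}
```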
If you tried to feed those into the simulator, it would just crash. But the assertion here goes through and checks this CAD-type thing on all of the derivations to make sure it matches. And the next assertion — I discovered this one a while in — is that if you have two libraries with the same name, the simulator doesn't pay attention to the first one. It's probably an error in one of your derivations somewhere else, but it's kind of annoying that you lost half your design somehow.

The reason for doing these derivations this way is that there's an ecosystem of CAD tools, and there are certain points at which we divide things — compile units, for instance. They are logical partitions where you can swap out tools. So this is using a tool called ModelSim for doing elaboration, but there are many other tools that do elaboration, many different simulators; for instance, there's the open-source Icarus Verilog simulator and a few others. The idea is that if you go back to the meh.nix file, it just calls make-elab. It doesn't care which simulator you're using. So in the back end here, you could swap in Icarus, or VCS, or NCSIM, or whatever other simulator you want. It takes the same arguments, the same compile units; it just does something different under the hood and generates a different derivation.

The reason I put this next part in here is that it uses another nifty thing I discovered in Nix. We do these things called ECOs, which stands for engineering change order. What happens is that once we ship off the netlist and hand off to layout, we actually can't re-synthesize anymore. Once we've handed it off, it takes 12 weeks for them to finish doing all the layout. But if we find a bug that we need to fix in the netlist, we can't just re-synthesize it.
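A sketch of what such guard assertions can look like; the attribute names (`cadType`, `moduleName`, the `"simlib"` tag) are hypothetical stand-ins for whatever the real tool flow uses:

```nix
# Hypothetical guards on a makeElab-style function.
{ lib, moduleName, libraries }:

assert builtins.isList libraries;

# Every library must be a derivation of the right tool type,
# not a bare string path.
assert lib.all (l: lib.isDerivation l && l.cadType or null == "simlib")
  libraries;

# No two libraries may share a name, or the simulator silently
# ignores one of them.
assert lib.unique (map (l: l.name) libraries)
    == map (l: l.name) libraries;

# A space makes an illegal Verilog module name.
assert !(lib.hasInfix " " moduleName);

"ok"
```

Because Nix assertions fail at evaluation time, these mistakes surface before any expensive CAD tool ever runs.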
So what we have to do is write a script that disconnects wires, rewires standard cells, and instantiates new standard cells in the netlist itself. And this is kind of complicated, because we don't simulate the actual netlist — it's not worth doing, because it's incredibly painful to simulate gates. Just don't do it. There's no value in it, because you don't actually gain any knowledge, and you spend a lot doing it. So we simulate the actual RTL code. Say we have this netlist handoff that we did as usual, and it turns out we shipped it, and then the verification team, in their last simulations, discovers a bug. What we have to do is write an ECO against the RTL — just fix the code — and verify the fix. That gives us: okay, verification is good, you fixed the bug. Then we have to patch the netlist, and then patch the placed gates. And the problem with that is that we then need to make sure those fixes were equivalent. That is actually what we use those formal equivalence tools for. This part of the flow isn't as labor-intensive as the verification flow, because once you move into a more rigid formalism, you can use more powerful tools. The placed gates and netlists are quite rigid formalisms — you're in the realm of pure Boolean logic; there isn't really any room for interpretation.

So this is what the ECO scripts look like. The reason I brought this up is that you always end up doing more than one; in this particular design, there were 34, and that's across the entire chip. You also have to apply them sequentially, and that is just a fold left: you take the original netlist and apply each ECO in sequence, and it all goes into that fold left. This is another place where Nix is really helpful, because these are expensive to do: you have to load the whole design in, and each one of these scripts takes about five minutes to run. And you can see there are 34 of them.
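The sequential ECO application he describes is a left fold. A minimal sketch, where `applyEco` is a hypothetical function producing one derivation per patched netlist:

```nix
{ lib, applyEco, originalNetlist }:

let
  # Hypothetical list of ECO scripts, applied strictly in order.
  ecos = [ ./eco001.tcl ./eco002.tcl ./eco003.tcl ];
in
# Each step is its own derivation, so every intermediate netlist is
# cached in the store (or a binary cache) and never recomputed.
lib.foldl
  (netlist: eco: applyEco { inherit netlist eco; })
  originalNetlist
  ecos
```

Adding ECO number 35 then only costs one five-minute run, because the 34 intermediate results come straight out of the cache.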
So 34 times 5 minutes is a long time just to regenerate this stuff to make sure it works. What I typically do, after I generate the last ECO, is push it into a binary cache, which is accessible via NFS, so that if someone has the misfortune of needing to add another ECO, Nix will just pull the intermediate generated netlists out of the binary cache, which saves immense amounts of time. There's also this ticket thing in here, because when we find a bug, we have to file a ticket, so we have a nice place to put all the discussion of what the bug is and what happened with it. This applyEco actually puts comments into the netlist saying exactly which bugs are fixed in it, which is a handy tracking thing.

Now on to the tools. The CAD tools are the worst software I've ever used. Considering we pay an immense amount of money for them, they're just horrible. As you can see, part of what I do is maintain something that looks like Nixpkgs, but imports Nixpkgs: I load the tools in from their tarballs, patch them all to use Nix, and execute them from inside the Nix store, just like regular Nix packages. However, you come across things like: no one actually uses libtermcap anymore, because about 20 or 30 years ago ncurses replaced it, and it's actually ABI compatible. So — before patchelf grew the ability to change the DT_NEEDED entry in an ELF file — I was actually using sed to change the DT_NEEDED in the binary. And, I don't know, it's kind of satisfying to do great violence upon the tools. In the future, I'm very keen on the fake chroot environment stuff, because that means I wouldn't need to do this anymore: I could just create a Red Hat or CentOS chroot environment, throw the CAD tool in there, and run it from there.
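A hedged sketch of packaging a proprietary tool tarball this way. The tool name, version, tarball, and the libtermcap-to-ncurses swap are all illustrative; `--replace-needed` is the modern patchelf equivalent of the sed hack he mentions:

```nix
# Hypothetical derivation wrapping a vendor CAD tool tarball.
{ stdenv, requireFile, patchelf, ncurses }:

stdenv.mkDerivation {
  name = "vendor-sim-10.4";

  # Proprietary tarball: the user must drop it into the store manually.
  src = requireFile {
    name   = "vendor-sim-10.4.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  nativeBuildInputs = [ patchelf ];

  installPhase = ''
    mkdir -p $out
    cp -r . $out
    for bin in $out/bin/*; do
      # Point the binary at Nix's dynamic linker and libraries, and
      # swap the long-dead libtermcap for ABI-compatible ncurses.
      patchelf \
        --set-interpreter ${stdenv.cc.bintools.dynamicLinker} \
        --replace-needed libtermcap.so.2 libncurses.so \
        --set-rpath ${ncurses}/lib \
        "$bin" || true
    done
  '';
}
```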
Unfortunately, I can't really do that in the environment I have right now, because I don't have root access, so I can't use chroot. All right, I mentioned this a bit earlier, but we have a lot of external IP. We don't do everything ourselves, and one of the problems is that when you have 400 pieces of external IP and you want to actually compile them all, no one gives you things in the same format. You can't ask them all to put the files in the same spot. Nix is really good at managing all of this, because you just have different derivations for all of them. You can manage them independently; you can see the version numbers, which you can pull out of the build-time dependencies, so you can figure out exactly what went into a particular derivation. That made the job I had to do as librarian a bit easier, because otherwise it gets untenable pretty quickly. Normally we'd have to have one or two people doing this, but I acted as the librarian as well as actually writing Nix and doing all the chip-level integration.

The other interesting module in the Nix CAD stuff that I wrote is this project module, which is kind of a meta layer on top of all of the derivations you saw before. It basically says which set of derivations to use, which CAD tools to use, and which versions of CAD tools to use — I think I went through about 40 different versions of various CAD tools throughout the project, because they're so buggy. Also which standard cells to use: when you think of AND gates and OR gates and NOR gates and exclusive-OR gates, you'd think there are, I don't know, 20 or 30 of them; I think we had about 6,000 standard cells, because there are different drive strengths, different threshold voltages, different sizes. And the technology you're implementing to — the metal stack, the silicon process — that's all important when you're doing synthesis, and the operating conditions too.
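A sketch of what such a project meta layer could look like; every attribute and value here is a hypothetical illustration of the kind of pinning he describes, not his actual module:

```nix
# project.nix -- hypothetical top-level pinning for the whole chip.
{
  # Exact CAD tool versions, since they churned ~40 times over the project.
  cadTools = {
    simulator = "modelsim-10.3";   # swappable: icarus, vcs, ncsim...
    synthesis = "synth-2014.09";
  };

  # Standard-cell library and process technology used for synthesis.
  stdcells = {
    library    = "example_28nm_hvt";  # one library out of ~6000 cells total
    metalStack = "9-metal";
  };

  # Operating corners: silicon runs ~3x slower hot than cold, so
  # timing has to close at both extremes.
  corners = [
    { name = "fast"; tempC = -40; }
    { name = "slow"; tempC = 125; }
  ];
}
```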
Silicon — I'm not sure if everyone's seen any overclocking stuff, but there's a reason you cool things down to minus 40, or liquid-nitrogen-cool processors to make them go faster: silicon is faster cold. It gets really slow when it gets hot, and the difference in speed is a factor of 3 or so. So things that work in the fast corner don't work in the slow corner, and things that work in the slow corner don't work in the fast corner, so you have to meet timing across all of these, and it's actually a fairly challenging task.

So I wrote a metrics module. This is the area of a block over time. I have a bunch of hacky scripts that look at reports generated by various CAD tools and write JSON to feed into InfluxDB — at which point the InfluxDB people promptly stopped using JSON, because they decided their JSON parser was slowing them down, so they went to a different format, which means I have to rewrite all my scripts. I used Grafana to generate graphs — I think it was yesterday someone was talking about using Grafana and InfluxDB, so I thought that was kind of interesting. We used morgoth for anomaly detection, and there are a couple of anomalies in that graph that are kind of interesting. So Hydra, during its constant evaluation, evaluates blocks and extracts metrics from them, so I don't have to individually look at the output of all these things; I can just look at a dashboard. Say someone screws up one of their blocks and it synthesizes down to nothing — because our synthesis tools are at least smart enough to know that if you tie the reset for a block off to zero, it can never come out of reset, so they don't bother implementing anything in there.
So this chart is interesting. We synthesize at both the block level and the chip level, and what's interesting to see is how much area something uses, how many gates it has, when we synthesize it at the block level, versus when you put it at the chip level with all the other blocks. The anomalies here are actually at the chip level, because what had happened is that in one of the register buses, used to read and write registers in the design to control what it does, there was a bug that made it so it could never actually read or write registers, so all those registers got blown away. You can see the block got to like half the size it was supposed to be, which means something is badly broken.

And I have a special module for controlling when we have to make a handoff to someone. We build board support packages, we build netlist deliverables, we do ECO deliveries, so that we have a bundle of ECOs we hand off to the people doing the layouts, and they can see: here are the scripts you need to run on the placed gates to make sure it works. And for the board support packages, there are all the register header files, all 300 specifications you need about all the IP. So all those things you want to abstract a bit, because everyone wants them a little differently.
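A handoff like that could be sketched as one derivation that bundles outputs of other derivations into a single deliverable. Everything here is hypothetical (the input names, file extensions, and layout); it only illustrates the bundling idea:

```nix
# Hypothetical sketch: a "handoff" derivation that gathers the deliverables
# for one consumer (here, the layout team) from other derivations' outputs.
{ runCommand, netlist, registerHeaders, ecoScripts }:

runCommand "layout-handoff" { } ''
  mkdir -p $out/netlist $out/headers $out/eco
  cp ${netlist}/*.v         $out/netlist/   # the placed-and-routed netlist
  cp ${registerHeaders}/*.h $out/headers/   # register header files for the BSP
  cp ${ecoScripts}/*.tcl    $out/eco/       # ECO scripts to run on placed gates
''
```

Because each consumer gets its own handoff derivation, the same underlying outputs can be repackaged differently for the layout team versus the firmware team.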
We use Hydra a lot. To give an idea of the scope of what we're doing here: there are about 20 blocks, four subsystems, the top level, and the board support packages. For each of the block-level blocks there are 24 derivations that get evaluated, from doing an elaboration in the simulator, an elaboration in the synthesis tool, and a synthesis, to checking all the power collateral, making sure it meets timing, linting, all kinds of stuff. The subsystems have fewer, because some of the subsystems are difficult to synthesize by themselves, and the chip level has more, because there are more checks you need to do at the top to make sure it all works. That doesn't look very pretty, actually; there's a lot of orange, red and brown on there. Things are in a constant state of not working, but that's okay, because if everything worked all the time, that means you should have shipped a while ago.

So I'm in the process of open-sourcing all this stuff. I actually have approval from Intel to do this, which was a presentation I had to make. I'm not sure if you guys know, but I think Intel this year became the single largest contributor to the Linux kernel, so we have a lot of people that work on Linux inside Intel, which is great; they turn out to be the most knowledgeable people I know about how computers work. I think it comes from having to read specifications, understand how everything works, and figure it out yourself. So I have approval to do this, but the problem is I have to meet a bunch of legal requirements saying that I'm not using commercial code, or code under an incompatible license. Nixpkgs is under the standard MIT license, so Nix CAD will also be MIT-licensed, because it's a runtime user of Nixpkgs, and the suggestion from the open-source team was to use a license equivalent to what the community is using. One of the reasons I'm doing this is because I don't honestly expect anyone
to pick this up immediately, but I have noticed that the code I commit to Nixpkgs gets better. Like, the ZFS module I contributed, I don't know, three or four years ago has subsequently evolved into something better, because people use it and find ways to improve it. That's in stark contrast to code I commit to internal repositories, which rots until it breaks, and then people get really angry at me instead of just fixing it. There are a couple of things in here where, if someone else uses this and finds a different way to do it, it makes everyone's life better. That's actually something Intel does a lot, which is why Intel develops Linux: even though developing Linux helps Intel's competitors, it also makes Intel's stuff better, and it makes the entire ecosystem better. Another thing is that we demonstrate we're doing interesting things, and that's important for our site locally, because we need good people, and good people are interested in doing interesting things. So demonstrating that we're doing interesting things means maybe we'll get good applicants. On that note, we will be hiring next year, so if anyone's interested...

And I kind of like it, but it's a bit of a special-snowflake infrastructure right now, and I want to grow beyond that a bit. The other thing I need to do is finish open-sourcing Nix CAD. I was planning on being done by this point, but I had a tapeout in the way, so I had a lot of work to do and didn't have a ton of time. The big reason, though, is that I have to take like ten hours of training to be able to use this code-scanning tool, and the training is only offered every two weeks. I also want more purity: right now there are lots of opportunities for impurities to come in, because I'm not even chrooting stuff, so you could just be using random files out of people's home directories. And there's actually a fairly vibrant community on GitHub for doing open-source hardware projects, so I'd probably port a few of those to using
it. I want to put hierarchical constraints in there; I'm not going to explain what that is, it would take too long. And the next thing I would love to be able to do is use the update operator to take an environment and update all the derivations in it based on whatever tools you're using, so you can stitch together whatever toolchain you want. The first case is the environment we're currently using, which is the ModelSim simulator, Design Compiler, which is a synthesis tool, and Formality, which is a formal equivalence checking tool. And, I have one slide left I think, it would be nice to be able to use open-source tools.

Any questions?

Q: Thank you, that was great. How easy has it been to convince your colleagues to use this and get them up to speed on it? Are you the only person on your team using it, or is there a whole team?

A: So there are a couple of interesting things there. One nice thing about working for Intel is that every seven years you get an eight-week sabbatical. I got my sabbatical this summer, so I went on vacation for eight weeks, came back, and people were still using it. It turns out that because I can do a lot more of this stuff, and maintain a lot more of it, with about the same effort as maintaining one of those scripts that breaks all the time for doing just one of these steps, people actually want to use it now, because they don't have to do it themselves. It's a big productivity improvement for me to be able to sit on top of all this.

Any more questions?

Q: Is there a technical reason for using this old kernel?

A: For using which?
Q: This old kernel.

A: Oh, it's somewhat unfortunate. The EDA vendors, the people who write the CAD tools, say: our CAD tool will only work on Red Hat Enterprise Linux 3 or something like that, and we're only ever going to test it on that, because it turns out that compiling software is really hard and we don't want to test it anywhere else. So they don't really want to change any of this stuff. What happens is, speaking of Venn diagrams, the only place where the Venn diagram of all these CAD tools intersects is one particular version of Linux, SUSE 10 or something, that went out of support four years ago. And I'm sure Intel is paying SUSE a lot to get patches for all the security vulnerabilities in a kernel that old.

Any more questions? Okay. Thank you.