And thank you so much for showing up for the first session of this morning. I really appreciate it. And yes, thank you for this really kind introduction. My name is Regina, and if you have any questions or remarks or anything, I'm more than happy to talk to you after this session, so just come and talk to me. Right. So I got my PhD, I think six months ago or so, in grassroots design, grassroots innovation, but we're going to be talking about something completely different today. We ran a small project with the FCDO's Frontier Technologies Hub in the UK to try to figure out whether distributed manufacturing was something that was already at a stage where it could scale, or whether preliminary work needed to be done to support the work of distributed manufacturers. So my question is, because we're in the hardware track, right: do you know about distributed manufacturing? Who does? One, two, okay. Maybe you know about it from me. Okay, my next question would have been: how many of you are involved in distributed manufacturing? One of you? Okay. Can I ask you what it is that you're involved in? Me? Yeah. There's a group of people out of the Gathering for Open Science Hardware that is starting a program called the Open Science Shop. They're trying to form a collective that will eventually do distributed manufacturing of science hardware, but they're in the very, very early stages of this. Amazing. Okay. So for our listeners online: there's a project in its early stages that will do distributed manufacturing within the framework of open science hardware. Very cool. All right. So, okay, well, this is good, because I came with two definitions for you. Of course, when you're starting a new project or something, it's good to define what we're actually talking about.
And I feel like scalability is a term that is used quite often and quite generally, but it's actually quite interesting, because when you look into the literature, we found that the term has no generally accepted definition. What you see here is that scaling is not necessarily about growth only: it means both the growth potential of a project or company and its ability to exploit economies of scale, and it is indeed the promise of exponentially increasing returns to scale. Now, this definition was really designed for conventional businesses, whereas with distributed manufacturing we're looking at a slightly different approach. Therefore, what we are working with is a definition that was created for the humanitarian sector, which I thought was quite interesting. Elrha is an organization in the United Kingdom, and they have an excellent report on scaling humanitarian innovation. Their definition is: building on demonstrated successes to ensure that solutions reach their maximum potential, have the greatest possible impact, and lead to widespread change. This is a more human-friendly approach to scaling anything. Right. And with that I would like to come to distributed manufacturing. Distributed manufacturing is an approach to production that involves producing in smaller quantities, in more locations, and closer to the point of use. And what I also need to say here is that putting these two terms together, scaling and distributed manufacturing, is actually a challenge in most of the literature, which should not come as a surprise, because distributed manufacturing is usually understood as a small-scale practice. Right.
And still, it can make sense to try to support those endeavors and also to see how they can be scaled, because what we have seen, especially over the last couple of years, although it's been going on for a while, are these larger trends where traditional centralized manufacturing has been failing people on all sorts of different levels. Centralized manufacturing is often built on long and unresponsive supply chains, so it can't really respond to sudden surges and needs. It is also far from the consumer, so consumers get what is being produced, and that's it. And it often uses resources that are already scarce in inefficient ways. These are of course major problems, right. At the same time, distributed manufacturing has advantages that we have seen, for example, during the COVID pandemic, when makers were stepping up and producing protective equipment, right. What we've also seen is that in humanitarian aid, in man-made and natural disasters, supply chains often break down. There's no way to bring things to a particular place, exactly when it is most urgent, and there, too, distributed manufacturing can help. And what we're also seeing is that a lot of large corporations are actually engaging in distributed manufacturing. They just don't really talk about it, because of course they want to keep the information to themselves, which makes perfect sense. But they are looking at the future of supply chains, trying to figure out how it might work in the future, and also at producing parts for machines on demand, for example. Anyway, I'm happy to send you some reports about this if you're interested. So what I have to say is that basically people are quite optimistic about distributed manufacturing. A lot of people, and also a lot of large corporations, think that it could be a way to provide a crisis-resilient means of production, and it is predicted that we will be seeing some future collapses.
Right. So what we did is a literature review, which is what I have presented to you until now, and then we did a bunch of case studies. One of the things we looked at is this 3D printing project. I know the 3D printing model looks a little awkward; it's a 3D printing project, not just 3D printing. We interviewed a lot of amazing people who are experimenting with smaller or larger scale distributed manufacturing, and we tried to understand how they have been approaching growth and scaling, and especially what happened with them before and leading up to the pandemic, and then during the pandemic as well. That's how we created a bunch of case studies. And what we got out of this is a framework, at least a preliminary framework, and I would like to give you a brief overview of it. I know that a lot of this is very new, but I hope that some of it might be inspiring for you as well. What we found is that efforts to scale distributed manufacturing actually have to focus on the ecosystem that a distributed manufacturer is in. So we looked at the ecosystem on three different levels. Micro is the distributed manufacturer themselves: one organization, one particular node, so to say. The next level, meso, could be the collaboration between the distributed manufacturer and others, or it could be a regional scale. And then macro, of course, is the global level, which we also see a lot, I think, with the networks, people trying to bring these different initiatives together, but also entities like the United Nations or international development agencies can act on that level and support the development of the ecosystem. So we organized the framework around this. The other part is the five success factors that we found. These success factors are necessary when you're trying to scale distributed manufacturing, either from the inside or the outside. And the first of these five is make, or making products.
Make, or making products, which requires, for example, having access to different inputs and different skills. Sell, or selling products in the market, for which one of course needs market knowledge and the readiness of the market as well. Operate, or how the organization that does the making and the selling runs, and the infrastructure it sits in; some of that infrastructure they actually have some impact on, other infrastructure has to come from the outside, and they also have to understand the regulatory environment. Collaborate, which I think is something we can all relate to: how do they actually work together with others in their own ecosystem, and how can that be supported? And invest, or the need to invest to grow or scale distributed manufacturing. This includes experimentation, as well as the iterative development of business models, because for distributed manufacturing, alongside traditional business models, we don't really have these novel business models yet, so people have to understand that iteration is needed. And of course, access to funding, right? You can't always do it on your own. So now I would like to give you two brief examples, because I know that I'm almost out of time. Basically, along the lines of these five success factors we have created this whole complicated matrix of all the things that are needed, or might need to happen. Some of them might already be there in the ecosystem, some of them might be new. For example, for making: looking at quality and compliance, whether quality assurance is already there can be a challenge, but it can also be an enabler, right? For example, if you have ISO certification, it can give your customers confidence in your products. At the same time, if you are not able to reliably produce in the quality or quantity that is needed, it's going to hinder your growth, and especially your scaling.
These are some basic things that need to be said and need to be compiled somewhere, and this is why we have created this report; I'll tell you a bit more about that in a second. What we also tried to do is gather different strategies for how to overcome these challenges and how to turn them into enablers. So for example, as you see on the right, we have the micro, meso and macro levels. Something like a shared ERP system is something that can help on the global level, on the macro scale, with quality assurance. And then on the micro level there are a couple of smaller tools that we've got, for example process documentation. We've seen that it really helps people scale up their efforts, and we know that it's always a pain to do proper process documentation, but it's really, really helpful. Right. Then, for example, under invest we have the different business models. Of course, as I said, uncertainties about business models are still something that makes it really, really difficult for people to grow or scale distributed manufacturing efforts. At the same time, getting the right business model can be something that unlocks funding, starts generating money, and basically helps you become sustainable. Again, these are basic points, but they need to be gathered somewhere. So what we recommend here is an iterative approach that includes trial and error. And of course that's easier said than done, because you have to make money to pay your bills, to keep your machines running, and so on and so forth. So what we're recommending on the macro level, the top-down level, is to share information globally on different tested business models, share them openly, and especially also the context in which they have either worked or failed, so that people can learn from those. And we have also created a checklist for scaling distributed manufacturing.
Some of these are maybe for high-level use, but some of them might be enablers that you want to apply yourself if you're doing distributed manufacturing. So again, this is about creating healthy ecosystems for distributed manufacturing. It should include building partnerships, including strong relationships with local communities; standardizing manufacturing processes, especially for distributed manufacturing, right? Automating manufacturing and administrative processes, because automating away the things that you don't need to think about in your daily life is something that is going to help you scale, obviously. Building the capacity of all actors and stakeholders, which can be done locally in a makerspace, or together with a TVET institution, right? Investing in skills development, which is similar, and also in knowledge transfer, so sharing knowledge with each other and with the infrastructure. Creating a supportive policy and regulatory environment. And then, and this is especially important, prioritizing social and environmental sustainability in all aspects of the operations, because if what we're going to see is a bunch of distributed manufacturers popping up in places who then don't actually pay attention to how, say, waste is managed and so on and so forth, we've seen some examples where that could lead to further environmental destruction, as opposed to the promise of being more environmentally friendly. And finally, of course, marketing the benefits of distributed manufacturing, so that people start talking about it.
I know that this was a lot of information. The report is actually going to be downloadable very, very soon, today or tomorrow, so I hope that you are going to go and download it and check it out. We still have a lot of work to do on it, but we are also planning on launching an ongoing knowledge-sharing group, and I would like to invite all of you to come and join it. You can either talk to me, give me your address and I promise to add you, or reach out to the team. And I would also like, of course, to thank the Frontier Technologies team for helping us push this work on scaling distributed manufacturing, and thank you so much for your attention. Thank you very much. Are there any questions from anybody? I need to run over to get you the mic. So you mentioned scaling, but if I wanted the world to contain one million more widgets, and I don't care where they are made, it would seem to me just from first principles that a more efficient way of doing that is centralized manufacturing; you could call it efficiency through scale. So how is this tension resolved?
You're absolutely right: centralized manufacturing has actually been designed to do exactly that, to do things efficiently and fast, and it makes perfect sense. The problem is when you see things like the COVID pandemic, where suddenly things were not available because the supply chain had broken down. What I wanted to highlight in the beginning of the presentation is that, especially again in humanitarian aid settings, for example, there are always scenarios that are not the norm, and these future collapses, so to say; we need to find ways to cope with them. And so distributed manufacturing, local production, of course has different other advantages too, like being close to the end user, so being able to individualize and personalize particular things while also producing at larger scales. At the same time, it is possible, and we've seen some examples, that distributed ways of operating can actually be more useful in those crisis scenarios. Thank you for that question. Anybody else? No questions? Really? So thank you very much. Up next, let's get the details correct here, while she's coming up to set up, we have Joyce. Hi, come on over. She's going to be speaking about the verifiable computing project, so please give her a round of applause. Okay, hi. Yeah, my name is Joyce and I'm here to talk about the verifiable computing project today. So today I'm just going to talk about these things: why do we care about verifiable computing, and why things like FPGAs matter for verifiable computing, right, and what we can do about it. What I will not cover is things like computer architecture, digital design and VLSI design; those are more complicated topics that I'm not going to cover today. So what do these devices have in common? They are all computers, all of them, from phones to servers to drones to even programmable controllers and industrial devices. All of them are computers, right. So, why do we care?
So from a user perspective: let's say you're a journalist going on a sensitive assignment, and let's say you want to do an interview in a sensitive location. If I put my device into airplane mode, can I be sure that it really is off, that I'm not transmitting anything? Right now we don't have that. Even for chip design: if I send a chip design to a fab, how do I know that the chips I get back are exactly what I designed, that they exactly match my design files? Right now it's very difficult to verify, as I'm going to show. I do have a real-life example. For several months now, a bunch of devices belonging to a friend have actually been compromised by a certain adversary, devices such as an iPhone and a MacBook. I mean, these are the parts inside your MacBook, right, and you can see that they use a bunch of chips that are made by Apple, and you don't know how they work, you don't know what software is actually running on them. Take the Apple T2 chip, introduced in 2018 into the MacBooks: there are a bunch of security flaws in all of these chips, which allow for certain things, like being able to run code. And in this case this adversary has effectively said, you know, "I own this phone", and not just the phone but the MacBook itself, "I'm able to tell that this person is here, at home." And the question is, how did the phone get bugged, how did the MacBook get bugged, even when it was locked? The person said they left it unattended, but they locked it; so how did the adversary unlock it? They don't know. And for the phone, the adversary was able to tell us every single thing that was said on the Zoom calls, while police forensics were not able to find any trace of software, presumably because it was wiped. The phone is not even jailbroken; the owner is not even a technical person and does not know how to deal with
these things, right. So as you can see, many consumer devices nowadays are essentially black-box platforms, and users do not know how they operate or how they work, and there are lots of security flaws even in the chips themselves, which we think of as trustworthy hardware. This is a simple inverter: if I put, say, a 0 on the input, I get a 1 on the output, and vice versa, right? So here are two inverter designs for a chip, two of them side by side. The difference here is very difficult to see, but if you look closely you can tell that the doping has been changed in this section compared to the original design. So what happens? No matter what input you give it, this inverter will always remain on; it will only ever give you a 1. And when you apply that to something more complex, like an Intel processor, for example, one of the things that we may be interested in is the random number generator, which is used for key generation, for AES and that sort of thing; you need to get a random seed from somewhere, right? This is how this generator looks in hardware, this is how it looks in the Intel implementation, and this is the Ivy Bridge design. So what do you have? You have two state registers, K and C, and C is incremented by one. So what happens if I set state register K to some fixed value in hardware, so that the register is always a fixed value, and I constrain the state register C so that its entropy is reduced? Let's say you have a 128-bit C: I can effectively reduce the key complexity from the claimed 128 bits to something much smaller, and in this example the trojaned design still passes the built-in self-test. And as you can see, the trojan is very, very difficult to spot, because even if you use a scanning electron microscope there's no way to see it.
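The stuck-at inverter described here can be modeled in a few lines of Python. This is purely an illustrative sketch; the real trojan lives in transistor doping, which is exactly why imaging does not reveal it.

```python
# Toy functional model of the dopant-level inverter trojan described above.
# The real attack changes transistor doping, not the logic layout, so the
# two chips look identical under a microscope.

def genuine_inverter(x: int) -> int:
    """A correct inverter: the output is the logical NOT of the input."""
    return 1 - x

def trojaned_inverter(x: int) -> int:
    """Trojaned inverter: the doping change ties the output permanently
    to logic 1, regardless of the input."""
    return 1

for x in (0, 1):
    print(x, genuine_inverter(x), trojaned_inverter(x))
```

Functionally the difference is trivial to observe; the point of the attack is that physically, per the paper discussed below, it is nearly impossible to see.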
You're just changing the dopant levels in the chip; you're not changing the structure itself. So yeah, I'm not going to get into all the details of what the paper says, but yes: it reduces the attack complexity from the claimed 128-bit security level to just n bits, where n is chosen by the attacker, and, as the paper says, the hardware trojan can still pass the self-test. And as I said earlier, because all you're doing is changing the dopant levels in the chip itself, not the structure of the chip, there's no way to find this trojan optically. The only way to detect it is if you have a golden chip and you compare all your production chips against it, and that's hard. And you see, it's very difficult: if you don't even know that a trojan has been put in to begin with, you have to know what you're looking for before you can find it, and it's very difficult to inspect this and figure out where the compromise comes from. This is the sort of thing a state-level actor or a well-funded actor could do. Unfortunately, there's no open-source chip fabrication process as yet, although work is being done on these things. A very famous example is Sam Zeloof, you know, with his Z2 chip, which has literally a thousand or so transistors, and this is all done at home, by the way; this is his lab, all at home, and I think this is an older picture. So yeah, I'm not trying to say we should all build our own chip fabs at home, that becomes very expensive. But what we can use is FPGAs, because they are general devices, right, and they are used in all sorts of applications, from networking to radio and a lot of others. So it's very difficult to target, like, a certain batch of FPGAs, to say "I want to modify this batch", because you have no idea where a batch of FPGAs is going to end up, across all sorts of applications. So it's much more difficult to target from a supply chain perspective, right. And if you write the HDL code, you can actually inspect that code; you can verify the operation of the FPGA.
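To get a feel for why pinning K and shrinking C is so devastating, here is a back-of-the-envelope sketch in Python. The conditioning function is a stand-in (SHA-256, not Intel's actual DRNG logic), and the constants are made up for illustration:

```python
import hashlib

def drng_output(k: int, c: int) -> bytes:
    """Stand-in for the RNG's conditioning step; NOT Intel's real design."""
    return hashlib.sha256(k.to_bytes(16, "big") + c.to_bytes(16, "big")).digest()

N = 8                 # the trojan leaves only n = 8 bits of real entropy
FIXED_K = 0xDEADBEEF  # hypothetical constant K baked in via doping

# A victim draws a "random" value from the trojaned DRNG:
victim_key = drng_output(FIXED_K, c=0x2A)

# Knowing K, the attacker searches just 2**N candidates instead of 2**128:
recovered = next(c for c in range(2 ** N) if drng_output(FIXED_K, c) == victim_key)
print(recovered)  # 42
```

The outputs still look statistically random, which is why, as the talk notes, the trojaned design can pass the built-in self-test while being trivially brute-forceable.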
And if you so desire, you can modify your devices, if you have the HDL. This is the general structure of an FPGA: you have a bunch of configurable logic blocks, around the edge you have your input/output blocks, and then you may have other blocks, like memory and multipliers, with routing you can configure in between. So in a specific design, this is a general overview of how the FPGA looks, and each of these blocks is a programmable logic block, right? And this is how a programmable logic block looks, a very simple part: all you're doing is configuring it. If you program this logic block, you can say, okay, set up the inputs, and with a certain clock, for example, on the clock edge I give you a certain output. And the I/O block looks like this, so you can configure the inputs, the outputs and things like that, just by writing code in a hardware description language. There are certain platforms, for example, where you can actually experiment with this for mobile communication and the like. And inside, this is how it looks, so you actually have an idea of how the whole device looks, right: the controller here, which handles all your power and whatnot, and then the main processor inside, which also handles the encryption and the clocking. And this is how it is wired, right, which is your very long HDL code, and this is what you implement in your gateware, basically. So when you start out with an FPGA project, what you do is something like this: I start with a high-level design, like, what is my design supposed to do; then I write my code in whatever hardware description language, Verilog or VHDL; then I can synthesize that design and do simulation; and once I'm done simulating, I can actually implement the design and do place and route.
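The "configuration is just data" point can be made concrete with a small Python model of a look-up table (LUT), the building block inside each programmable logic block. This is a sketch of the general idea, not any vendor's actual architecture:

```python
# A LUT's "program" is simply its truth table, stored in configuration
# bits; the logic inputs just select one of those bits.

def make_lut(truth_table):
    """truth_table[i] is the output for the input bits encoding index i."""
    def lut(*inputs):
        index = 0
        for bit in inputs:          # pack the input bits into an index
            index = (index << 1) | bit
        return truth_table[index]
    return lut

# The identical "hardware" becomes AND or XOR purely by reconfiguration:
lut_and = make_lut([0, 0, 0, 1])
lut_xor = make_lut([0, 1, 1, 0])
print(lut_and(1, 1), lut_xor(1, 0))  # 1 1
```

Reprogramming the device just means loading different truth-table bits, which is exactly what the bitstream carries.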
The final output, the bitstream, can be programmed into a physical FPGA device. So the three general processes of the FPGA design flow are: synthesis, which takes your hardware description language, like Verilog, and converts it into a netlist file that describes the logical connections of the blocks, your I/O blocks or configurable logic blocks and that sort of thing; place and route, which takes that synthesized design and turns it into a physical implementation for a specific FPGA, so if you're using a Xilinx part it creates the layout for that part; and finally bitstream generation, where the bitstream is written into a file and programmed into the device. There are open-source tools for this whole procedure, right: there are open-source tools like yosys for synthesis, nextpnr for place and route, and bitstream generation projects covering a whole bunch of devices, like the iCE40, the ECP5 and so on. What I want to talk about, because I'm using it, is something called LiteX. It's a framework for building systems-on-chip, right, a processor plus peripherals like PCIe, Ethernet and so on, in an FPGA design. So you can build things with a soft CPU combined with some other peripheral like PCIe or Ethernet, and it supports RISC-V, which is an open instruction set architecture, with which you can customize your own design with whatever peripherals, whatever functionality you want, all out of the box, with open-source IP for the peripherals. So yeah, I have an example which I'm going to demonstrate today using an Acorn CLE-215. This is what the design kind of looks like, although I don't have the SMT parts and so on. In code, I can actually change all my definitions, like what the inputs and outputs are and what they do, and I can change that in code, unlike a physical chip, where I can't go back and change things to change functionality based on my needs.
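The three stages just listed (synthesis, place and route, bitstream generation) can be caricatured in a few lines of Python. Real tools such as yosys and nextpnr work against actual device databases; everything below, from the netlist format to the site name, is invented for illustration:

```python
# 1. "Synthesis": a tiny design, y = a AND b, becomes a netlist of LUT cells.
netlist = [{"truth_table": [0, 0, 0, 1], "inputs": ["a", "b"], "output": "y"}]

# 2. "Place and route": each cell is assigned a concrete site on the device.
placement = {0: "SLICE_X0Y0"}   # hypothetical site name

# 3. "Bitstream generation": serialize the per-site configuration bits.
bitstream = "".join(str(b) for cell in netlist for b in cell["truth_table"])

# Sanity check: "simulate" the configured LUT straight from the bitstream,
# which is the kind of inspectability the talk is arguing for.
def lut_eval(bits: str, a: int, b: int) -> int:
    return int(bits[(a << 1) | b])

print(bitstream, lut_eval(bitstream, 1, 1))  # 0001 1
```

The takeaway is that every stage produces an artifact you can read and check, which is much harder to say of a fabricated ASIC.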
So why RISC-V? The processor we're using here is RISC-V. Why do we use that? Because it's an open-source instruction set architecture, and a very simple one: you can implement just the most basic 32-bit integer instruction set, RV32I, and the base spec itself is frozen, although you can make your own custom extensions, so if you want some feature, something that's not supported by the spec itself, you can implement that. And this is how the instruction set looks; this is just the RV32I 32-bit integer part. So the Acorn: this is basically a mining device, it used to be a cryptocurrency miner, and now it's not profitable anymore to use for mining, so you can buy a whole bunch of these for less than 100 US dollars. This is the layout of the device and what it has, and it uses a Xilinx Artix-7, in fact the highest-end FPGA in the Artix-7 family. It should cost a few hundred dollars on its own, but you can get a board now for less than 100 if you know where to look. And right now in the office I have one set up. I was not able to build Linux for some reason, but I have a demo coming up right now. Of course I did not bring the device, because the last time we did this, for the camp, there was a lot of stuff to bring, right? So I just have it set up on the desktop at the office. This is how that looks; now let's see, let's do a reboot. So yeah, of course, I was trying to compile Linux and didn't manage to get it to work in time for the demo, but the BIOS itself starts up, and you can see that it is ready and waiting for a Linux image. So yeah. There are two projects: one is the RISC-V laptop, which is supposed to be a verifiable laptop, developed around a RISC-V processor implemented on an FPGA; and the other project is the LWI project, similar to the laptop, where we are investigating the possibility of implementing mobile communication on FPGAs and software-defined radio. This is what we are working on currently.
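How simple RV32I is can be seen by decoding one instruction word by hand. The field layout below follows the RISC-V spec's I-type format; the helper itself is just a sketch:

```python
def decode_i_type(word: int):
    """Decode an I-type RV32I instruction such as ADDI (fields sit at
    fixed bit positions; sign extension of imm omitted for brevity)."""
    opcode = word & 0x7F
    rd     = (word >> 7) & 0x1F
    funct3 = (word >> 12) & 0x7
    rs1    = (word >> 15) & 0x1F
    imm    = word >> 20
    return opcode, rd, funct3, rs1, imm

# 0x00500093 encodes "addi x1, x0, 5", the canonical "li x1, 5":
print(decode_i_type(0x00500093))  # (19, 1, 0, 0, 5); opcode 19 = OP-IMM
```

A full RV32I decoder is only a handful of such cases, which is part of why soft cores for it fit comfortably on hobbyist FPGAs.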
And I expect the RISC-V laptop to work around the third quarter of this year. So here are some references on this topic. There is one big project which is actually very well documented, and you want to check it out; LiteX, of course, which is the thing I am using for this demo; and the final link actually shows how a project built Linux on LiteX and implemented the processor on an FPGA, and things like that. And this is the paper on the stealthy dopant-level hardware trojans that I talked about earlier in my presentation, and it is very fun to read. That's it, thank you very much. Thank you very much, Joyce. I am sure there will be a dozen questions, really cool talk. I would love to have one of these laptops. So what's the top clock rate of this RISC-V core running inside the FPGA? If I remember correctly it is set at 100 MHz. It's probably set at 100 MHz, so it's not that fast, right? So the thing I am really trying to figure out: first, we are running the 7-series, and this is the Artix; in this demo we have the highest-end Artix-7, and the next in the line is the Kintex-7, where supposedly the fabric is much faster, or somewhat faster, than the Artix-7. So what I am trying to figure out is, for a given speed grade, what the performance is like for the processor, and I expect to publish those numbers once we get there. The chip would also need to drive the display and that type of thing, so the LVDS I/Os? Yes, on the FPGA itself: the display controller itself you should be able to implement on the FPGA, maybe with one additional chip for HDMI or whatever display you are using. Cool talk. You were talking about LiteX and you were talking about yosys, so how does it work? Do you program your stuff in Migen and then you give it to yosys, and then it comes out? What's the flow? So LiteX itself is basically just a set of tools. If I remember correctly it uses FHDL, which is also Python-based, so you can build modules, you can build modules in FHDL, which you can then add to the LiteX
project's build system. So I am still experimenting with it, and I hope that I understand how to use it properly. Alright, last question. So, if we are talking about the same level of attacker: if you are buying chips, you can pay cash, and once someone has a CPU there is not much surface area left to attack it; but if you use a soft CPU like this, it can be constantly under attack. I mean, if someone manages to get access to the computer, they can probably replace the FPGA configuration with a compromised one, whereas it is much harder to replace the CPU. So what is the scope here: are we talking about one targeted attack, or a persistent level of threat? It is also a bit confusing to me how FPGAs help, given how complex the translation from HDL to bitstream is; it is not like humans can realistically tell that the bitstream has not been compromised. Does that all hold together, right? I mean, you have a point, but what I want to say is: first, if an attacker has physical access to a device, then no matter whether you are using a hard-silicon processor or a soft processor in an FPGA device, it is basically finished to begin with; if an attacker already has physical access to a device, that is game over, right? This is the problem I am actually thinking about, and I am actually looking at not just people in sensitive positions but also ordinary users, and the basic question is: what sort of tests can an end user do when they get a device that is loaded with some bitstream? What sort of tests can a user do, what sort of things can a user check, and what do they need to know about those tests, for them to be confident that the device is actually clean? Those are some questions I think we will have to follow up on, and I don't have an answer for that yet. That's it, I think we are actually running out of time; we are already over, past our hardware schedule and time slot, so I am sorry I can't take more, but you already asked a
subsequent question, so thank you very much, Joy.

Good morning everyone. As mentioned, I am a fellow at NUS, but today I want to represent the group behind the open-source language P4. Today I don't want it to be purely non-technical — I want to make it a bit technical, so we will see some quick demos later, and we can have a longer period for the demo. Before we start, I want to set the context. P4 is a language intended for programming different networking hardware, or networking chips, including SmartNICs, FPGAs, the Intel Tofino, or even Linux networking. If you want the details, I gave a one-and-a-half-hour tutorial about P4 at my last event, so you can check it out via the YouTube link — maybe I can send out the slides — and learn more about P4 there.

I will give you a little introduction, or maybe an analogy, for why we need P4. Everybody likes talking about programmability, right? We have CPUs that are fully programmable, we have GPUs and so on, and then we have the network. Everybody knows about programming: you write your own program — it can be simple or complex, up to you — you put it into source code, compile it into a binary for your PC, and then the computer behaves the way you want based on the application. Similarly with mobile devices, for example Android: you can have very complex apps, you write them using, say, Java, compile them into classes plus a manifest, pack them into an APK, put it in the Play Store or wherever, and then your application can run on any hardware that uses Android.

Now the question is: can we have the same thing in the network? Basically it's very similar. I have a network — it can be 1.5, it can be 2.5, or it
can be 100. I have my own networking logic; I want to set the behavior of the network, including the behavior of the packets coming into and going out of my network. So I need to define how the packets behave, translate that into a program in a programming language, feed it into the P4 programming system, and of course you need a compiler, driver, or runtime to interact with the hardware. The idea is very simple — similar to what we have with computers and ordinary programming.

Now the second part: we cannot just program, we also need somewhere to test. Everybody knows that with computer programs you usually pull the code into the IDE or the database, build and run it, and some of you may run it inside containers — one container or multiple, up to you, depending on the application — and you check the behavior before you deploy to production. And for mobile applications, everybody knows there is the Android emulator, which is very nice: you don't need a real device to program for Android, you just use the emulator to develop your program and make sure it can run on any device that uses a similar version of Android.

So now the question is how to test a networking program in a test environment. We have what we call Mininet, which can also be used to verify P4 programs. It's not quite a simulator — you could say it's an emulator, because you actually run the real software switch, plus some software hosts based on containers and so on. Basically it's an emulator of a small-scale network, and you can change the behavior of your switches, your hosts, and everything inside it. Okay, let's get to the demo — any quick questions? No?
Let's see the demo. Before I open my computer: does anyone know how routing works? On the internet there are thousands of routers in the world, but do you know how IP routing is actually done? You may imagine the process is very complex, because there are many prefixes, IP addresses and so on. But in terms of the packet: if a packet goes from the left side to the right side, crossing a router, almost nothing changes — the data is not changed, the IP addresses are not changed — only the MAC addresses change. Why? Because each router has its own MAC address: when I send the packet onward, I need to put in the next hop's MAC address, so the next device knows the frame is for it. So in terms of the packet processing itself, routing is very simple.

Now let's see how we express this as a P4 program. A P4 program is basically — you could say it's like a hello-world of match-action tables: you match something, and then you apply an action. Here we can see the logic of routing. We have a table, the IPv4 forwarding table, and what do we want to match? The IPv4 destination address in the header. The type of the match is lpm, longest prefix match, because in routing, if several prefixes match, I'm looking for the longest one. Then we define the action, ipv4_forward: as we saw before, when the packet comes to my device I just need to change the MAC addresses — I don't need to touch much else — and within the action I also need to direct the packet to the proper egress interface, and of course decrement the TTL to avoid loops. So, looking at the packet: the destination MAC address is my address when it arrives, but when I send it onward I need to change it to the next hop's; I need to send it out the correct port number toward the correct destination; and I need to decrement the
TTL, the time to live. It usually starts at something like 255 hops; if it reaches zero I consider it a loop, so I need to decrement the TTL at every hop.

So let's see the demo. The setup is like this: I have a topology in Mininet with switches s1, s2, s3 and s4, and hosts h1, h2, h3 and h4. If you want to see the details of the demo example you can look it up, but let me open up my VM. As I mentioned before, I have the table defined, but no actions: the program is there, but without actions, so basically nothing will happen. Let's try it. Compiling this is the same as compiling an ordinary program: you compile the P4 code and then launch Mininet. This is the Mininet console. You can, say, go into a host, and it will open another terminal. The host is a real component — it runs on Linux, so you can give it any Linux command, for example ifconfig, and see its IP and so on. Basically it's a container: every host in Mininet is a container, which you can consider one of the cool features. Now, as in the demo plan, we want to check the connection between h1 and h2, so let's try to ping h2 from h1 — the simplest way is just to run ping. And it doesn't work at first, as we know: there is a program, but there are no actions yet. You can also do it from here if you want — first get the IP address of h2, then ping it, same thing. Now let's try to make it behave like a router. I
just close this one. Now let's clean up the previous build — basically removing what we built before. To speed up the demo, I'll just copy in the solution. Now you can see the actions, as we discussed: I take the MAC address and change it — previously the packet was addressed to me, and I rewrite it for the destination — then I specify the egress port, and I decrement the TTL. So we have this ready; let's rebuild. Depending on the topology, you may have to wait several minutes, but with powerful hardware it can be quick. Now it's done, and as described before, our intention is to make h1 and h2 communicate. They are on different subnets — 10.0.1.x and 10.0.2.x, different subnets — which is the situation I discussed before. Now let's see whether it works. The program we created compiled successfully; if you want to see the details you can do the same. So, as before, let's ping from h1 — and it works. So this is the overall idea of how we can program our networking: we are not limited to programming one switch, we can program as many as we want with the same program. Any questions?
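To make the per-hop logic from the demo concrete, here is a minimal Python sketch — this is an illustration, not P4, and the table entries, MAC addresses and port numbers are made up — of what the ipv4_forward action does: longest-prefix match on the destination address, rewrite of source and destination MACs, TTL decrement, and egress-port selection.

```python
import ipaddress

# Hypothetical match-action table: prefix -> (next-hop MAC, egress port).
# In P4 this corresponds to an lpm table whose action rewrites the MACs,
# decrements the TTL, and sets the output port.
TABLE = {
    ipaddress.ip_network("10.0.1.0/24"): ("08:00:00:00:01:11", 1),
    ipaddress.ip_network("10.0.2.0/24"): ("08:00:00:00:02:22", 2),
    ipaddress.ip_network("10.0.0.0/16"): ("08:00:00:00:03:33", 3),  # shorter fallback prefix
}

MY_MAC = "08:00:00:00:00:01"  # this router's own MAC address

def ipv4_forward(pkt):
    """Apply the routing match-action step to a packet (a plain dict)."""
    dst = ipaddress.ip_address(pkt["dst_ip"])
    # Longest-prefix match: among all matching prefixes, pick the longest.
    matches = [net for net in TABLE if dst in net]
    if not matches or pkt["ttl"] <= 1:
        return None  # drop: no route, or TTL would hit zero (loop protection)
    best = max(matches, key=lambda net: net.prefixlen)
    next_mac, port = TABLE[best]
    # Only the Ethernet header and the TTL change; IP addresses and payload do not.
    pkt["src_mac"] = MY_MAC
    pkt["dst_mac"] = next_mac
    pkt["ttl"] -= 1
    pkt["egress_port"] = port
    return pkt

pkt = {"dst_ip": "10.0.2.7", "src_mac": "aa:bb:cc:dd:ee:ff",
       "dst_mac": MY_MAC, "ttl": 64}
out = ipv4_forward(pkt)  # -> forwarded via port 2 with TTL 63
```

In the real demo these entries are installed into the switch at runtime; the sketch only mirrors the per-packet logic.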
Okay, if there are no questions, I have a bonus. Maybe somebody is thinking this only works in software simulation — no, it's already verified to work with hardware, and one of the targets is the Intel Tofino. Even though Intel has since discontinued the Tofino, there are other targets that can be used — FPGAs, SmartNICs, even Linux eBPF and so on. The point is that P4 is the language: the target can change, but the language stays the same. So here I have an Intel Tofino switch, and server 1 and server 2 acting as host1, host2, host3 and host4, connected to it. I just want to show that the same program also works on the hardware side. So let's do the demo. I'm accessing the switch, and again I run make, because I have a program I want to compile — of course the community compiler, the Intel Tofino compiler and an FPGA compiler are all different compilers. As you can see, it's done. Next we need to start the switch to load our binary: you can see there is a binary for the program, plus the connection information — basically how you will interact with this switch. Okay, now it's done. Since we're on real hardware, the servers are something you need to configure manually — in the emulator the hosts are configured automatically, but here you do it yourself. So I need to assign the ports; obviously the link is down because I haven't assigned any port to the server yet. I assign the port and bring it up, and then I need to set things manually because it's a real server — basically just configuring the addresses and routes. And you know that before I can ping I would need ARP, but since I
know the MAC addresses, I just set them statically, and they are handled by our program. Then I configure the IPs; now you can see all the IPs are configured and ARP is already resolved. I can configure my own IP, but I cannot ping the destination yet, because there are no rules set up in the switch. Now I set up the routes in the switch as well — basically the same as before — and the switch we configured handles the forwarding. I think with that, thank you very much. Any questions, anybody?

Yes — what do people use this for? Can you do something like a firewall for your machine? It depends on your creativity. Some people use it for switching, some for routing, some for firewalls, some for GRE tunnels, and so on — whatever you work into the logic of your program; some people even try to do things like in-network machine learning. I think we are really short of time, so I'm sorry, we end here. Thank you very much.

My name is Rahim, I'm an engineer, and I'll be presenting open-source software tools for FPGAs. Just a short introduction: I've been working in this field for quite some time; you can look at my profile if you want, and I'm on Twitter. Okay, so to put things in context, I'd like to refresh your minds about the FPGA development cycle. I came across a paper from which I extracted the figure on the right: a typical FPGA development consists of a software aspect and a hardware aspect — we are designing an electronic component from the software side. We have requirements and specification, and everywhere along the way we always have verification and
checking that things are designed the way we want. Then we have design, implementation, integration and verification. When you look at the software aspect of the tools: we start with the design specification in RTL, register transfer level, which is basically designing the circuit logic using a hardware description language — I think an earlier talk mentioned this — using languages like Verilog and VHDL. Then we go through synthesis to get a gate-level design; before that we can always run behavioral simulation to be sure the design does what we want. Once we have the gate-level design, we do the layout — the place-and-route aspect — and this is where we choose which FPGA we want to implement on. Afterwards we get an FPGA bitstream, which is the file sent to the FPGA to program it.

Not to repeat too much, but the FPGA is shown on the left: a field-programmable gate array, so an array of gates, and within it logic blocks — lots of logic you can use, registers, etc. The key thing about an FPGA is that you have all these buses, the interconnect, which let you customize the functionality within the FPGA, and on the outside you have the I/O blocks. Some low-end FPGAs have only plain input/output signals, but the higher-end ones can have more advanced blocks, including high-speed blocks. And as FPGAs evolved over time, vendors added more block RAM, DSP slices, etc. — processing blocks you can exploit to increase the complexity of your designs.

Today we are mainly interested in synthesis and place-and-route from the software side, but we have to remember this is an electronic component that we program — there's a parallel between the hardware and the software. I put the paper here; it actually refers to
some IEC standard, so it's always interesting to see that for some of this you also have those specifications. Now, the commercial (not open-source) tools for FPGA development are mostly free of charge these days: for Xilinx, now bought over by AMD, the tool is called Vivado; for Altera, which was absorbed by Intel, the Quartus tools; and there is also Gowin, a fairly recent FPGA vendor from China with quite small parts. I'm a Xilinx guy myself, though I actually use several FPGA families — there's a lot of choice in what to use. On the left you can see simulation, RTL analysis, then synthesis, implementation and bitstream generation: these are the standard steps those tools do for you, with your HDL in the middle. FPGA programming is quite niche; it's not as popular as C++ among software engineers — it's more electronics people moving into the programmable side. FPGAs usually sit in the transition between an accelerator you eventually want as an ASIC, where you first exploit the FPGA for tests.

So the focus of this talk is really Yosys and nextpnr, as per the title. For me, as a scientist, this started from that early paper in 2018-2019 where they introduced this free and open-source architecture — architectural, because they wanted to be as generic as possible — and a framework providing a flow for placement and routing. The early stage was really from that paper, on small FPGAs, the Lattice iCE40: low-end — not low quality, but small resources — and hobby-priced FPGAs. This generated a lot of hype and kick-started the open-source framework for FPGAs. I've made the PDF available — it's on my GitHub somewhere, so you can click through to these things. Yosys can really be compiled from source: instead of a tool you download from Xilinx that is a
50 GB download with massive files — half of what gets installed is sometimes never used, because they make you download UltraScale support or whatever parts you will never use — the Yosys and nextpnr tools are much smaller, and being open source you have access to everything, can see what they are doing, contribute, etc. They mainly support Verilog, but there are initial attempts to add VHDL. Additionally to Yosys there are also side projects that specialize per family: the famous one is Project Trellis, which is used for the Lattice ECP5 FPGAs.

Maybe I should also talk about RISC-V here: there is the PicoRV32 RISC-V processor. Everybody has heard of RISC-V, the instruction set; 32 is for 32-bit, and the letters denote extensions, like M for multiplication and C for the compressed instructions — so RV32IMC. With such a processor you basically know the base, and you can remove or add specific instructions yourself — unlike, say, an Arm Cortex-M4 — because you really have access to all the sources to build and modify, and you can look at aspects of security. This is why it's used for a lot of open RISC-V development.

One of the additional projects, from F4PGA, is Project X-Ray, which deals mainly with Xilinx FPGAs. It's a project that tries to document the bitstream of those FPGAs. If you use Vivado and you have a certain design, you create a bitstream — and that bitstream format is really the vendor's intellectual property — so they essentially reverse engineer the bitstream to identify what changes in it when you modify the design. You can actually help contribute here: if you have a specific FPGA that they don't have in the catalog at the moment, you can use their small test designs, the fuzzers, which generate minimal designs and study the resulting bitstreams, and ultimately this feeds into the project being able to generate proper bitstreams itself.
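As a concrete illustration of the flow discussed in this talk, here is what a typical open-source build looks like on a Lattice iCE40 part (where this toolchain started). The file names, part number and package are placeholders — adjust them to your own design and board:

```shell
# Synthesize Verilog to a netlist with Yosys (iCE40 target)
yosys -p 'synth_ice40 -top top -json top.json' top.v

# Place and route with nextpnr, using the board's pin constraint file
nextpnr-ice40 --hx8k --package ct256 --json top.json --pcf top.pcf --asc top.asc

# Pack the textual place-and-route output into the final bitstream
icepack top.asc top.bin

# Program the board over USB
iceprog top.bin
```

Other families follow the same shape with a different back end, e.g. `synth_ecp5` plus `nextpnr-ecp5` and the Project Trellis packing tools for the Lattice ECP5.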
So at the beginning you have these two aspects: reverse engineering and generating the bitstream. Personally I still use it as a tool, but ultimately maybe some of these big companies will do more and more open source once they see the appeal of having a community. Of course, Xilinx has such a large variety of FPGAs that you have to focus on specific ones — the most popular FPGAs are the ones supported at the moment.

The next tool is nextpnr, the place-and-route tool. Same thing: you compile the whole thing from source, and then there are different per-family projects — ones targeting the Gowin (Chinese) parts, and some experimental Intel support — so you have groups of people specifically focusing on each of those. I think for a short technical presentation, going into too much depth or even demoing is too much, but later on you can try those FPGAs yourself: the Gowin and Lattice ones are quite small FPGAs, whereas Xilinx — AMD now — and Intel have some quite big ones.

To introduce quickly: we also have an FPGA community in Singapore — we have a Facebook group, and recently we've had FPGA days, so everybody is welcome to join those days where we introduce FPGAs and the workflows; if you want something explained, you can comment, and people will talk about applications. Over this summer we'll try to have a session specifically on those, with proper tutorials. Additionally, to finish, we also have a session called Papers We Love — I think my teammate at the back has organized days in the past. For the FPGA aspect we have these two papers on complete open-source designs. I don't have this particular FPGA, but you can still buy one from AliExpress or whatever source, and then go through the steps — when you read the paper they explain what to expect, and you do some testing. This session is not going to go too deep; the goal is to introduce this new movement, quite
driven by RISC-V, actually, from my point of view — open-source tools from which you can learn, and from which you can develop specific aspects, to optimize and improve things in terms of security and in terms of other concerns. I think that was my last slide, so if you have any questions I'll be available; I don't want to spend too much time because I've already used my slot. Thank you very much. Questions? We have one minute left. Anybody? No questions?

If someone wants to get started with these tools, what is the best way? Do they have to compile them from source, or how does that work? Usually you git clone the repository, then there's a whole sequence of make etc., and afterwards, when you have the tools, there are examples — and there are a couple of videos on YouTube that I've discovered that actually go through some examples; I will add them to another slide. To be honest, compared to the usual vendor tools, this one is more of a Linux affair where you script a lot, but you can also automate a lot of things. Alright, thank you very much.

Why did I always pay attention in music class? I knew if I cut up in class, the partition would be D major. Good job. Do you want to go to the screen? Like that? Alright, let me know if it's all good. Are we good? Can I use this one? Yeah, you can use that.

So Jinme is the CTO of Subnero. He's an engineer; he makes devices that can handle water. Do you agree?
I think about water during the day — I make them during the day, and then at night I do other stuff, I don't know. We were also asked how to get involved with the Hackware meetup — we have the web page. No? It doesn't matter, take it away.

Alright, cool. Thank you Harish for that interesting introduction. My name is Jinme, I'm the CTO of Subnero. I also organize Hackware, which is a monthly electronic hardware meetup in Singapore — we just had one two days ago at the hackerspace, next one next month if you're interested. I love everything at the intersection of hardware and software; I've been working at this sort of intersection for 15 years, and you can find me here on socials. I want to talk about our journey at Subnero using the Julia language as a modern embedded systems language. It's a very strange relationship, Julia and embedded software: usually embedded software is written in C or some kind of low-level language, but I'll go through the journey and tell you how we decided to go this way and how it worked out.

A quick overview and context of what we do at Subnero: Subnero is a local home-grown startup. We make underwater wireless devices — we make the physical devices and we write all the software that runs on them. These are what we call software-defined underwater modems: basically a communication system where all the communication work is done in software instead of in hardware. So instead of what we heard about earlier in the previous talks, with FPGAs and ASICs and all of those things, we do a lot of the processing in software, because it's much easier and you can update, configure and change things much faster than doing it all in hardware — but that's a different topic. We run this stuff on physical devices, NVIDIA Jetsons, and we use embedded Linux as the main operating system, but we have to do all the signal processing and
the numerical computation required for the communication side of things on the devices, in software. So we created a framework called UnetStack, which is a network stack — your standard networking layers, but designed for underwater, with a lot of special things for underwater modems. The entire journey was basically about how we should implement this stack: it has a lot of special requirements, and that drove which language it would be implemented in, so we spent some time thinking.

Let's look at the requirements. It has to be software-defined, which means it does all the processing in software. It needs to do a lot of numerical computation — error correction coding, a lot of math. It needs to do signal processing: take all the data that's coming in, process it, and deal with it. It needs to work with hardware, because the data is all coming in from physical hardware. And lastly, it needs to be configurable in the field.

So what does that give us in terms of language requirements? Software-defined means it needs to run on some kind of Linux platform — not necessarily, but it just makes life easy if you have an OS underneath. Numerical computing means we need something high-performance but also high-level in a way, because writing very performant code at a very low level just gets very hard — I'll talk about that in a bit — and we'd also like GPU access, since the hardware we run on has GPUs, so being able to use the GPU easily would be really good. Signal processing means we need something with low latency — something that can work really fast and deal with low-latency scheduling and threading and all those kinds of things. Hardware integration means whatever language we choose needs to be able to do low-level stuff: we
need to be able to talk to GPIOs, talk to hardware buses, and all of those things. And in-field configuration means we need something scriptable and high-level, so users can use it and maybe write some little scripts — a bit like what we saw with P4 earlier.

So we have this really weird set of requirements: low-level and high-level, scriptable but also low-latency. That is hard to solve, and it is what in a lot of real-world scenarios people call the two-language problem. The easiest solution — which is what we initially went with — is a two-language solution: one language does all the high-level stuff, one does all the low-level stuff, and you somehow stitch them together. That's what we went with about 10 years ago: the idea was to use Java for the high-level part and C for the low-level part. Java does all the high-level stuff. Performance-wise, people might think Java on an embedded platform is crazy, but I guess Android has shown that Java is actually quite performant — the JVM can run in very little memory and is quite performant on an embedded system. It's also quite scriptable: there's a JVM language called Groovy which is a scriptable cousin of Java. And then anything that was low-latency and needed real low-level or GPU access went into C. This worked for us for a while, but the two-language approach has a problem, and that's the bit where the two sides talk to each other — in this case JNI — which is where it gets really painful, because every time you change the language on one side, keeping everything in sync and working just gets hard. JNI was rigid and painful, and basically we ended up not changing the C side as much, which means we didn't add features to the C side, which means a lot of the product features we wanted to ship, we couldn't ship, or they got delayed.

So, having seen this for a while, we were thinking about a solution, and I had been
following JuliaLang for a while, and by the time we decided to look into this it was mature enough, so we decided to try using JuliaLang for some of it. Who here has heard of JuliaLang? Cool, that's sweet — lots of people.

So it's a high-level, high-performance dynamic language created at MIT, version 1.0 around 2018, so it's new enough, and it's open source. It's a strange choice, because JuliaLang is normally used in machine learning — so why on an embedded platform? But it has a lot of really interesting characteristics for embedded use cases. It's super fast: it uses LLVM at the back for just-in-time compilation. It provides a lot of first-class features for IO, control, logging, profiling. It's dynamic, it has nice scriptability to it — you can write DSLs in it very well. It has a great community on the numerical-computing side of things, and it has some really nice language features that make composition of things very easy.

For me, I think the interesting thing was that the community is really great — JuliaLang has a really nice community, and there are a lot of open-source packages available, especially in the numerical-computing and signal-processing space, so we could use and leverage a lot of that. It's low-level in the sense that you can very easily interact with the OS and hardware — I'll show some examples of this later — and it makes life very easy when you're doing low-level stuff, unlike Java. It has great support for GPU integration, even on embedded devices — there's an example of this coming up as well. And, what I think really took the cake personally: the community really cares about memory and speed. The community continuously runs benchmarks for performance and memory use, so every package, every core feature, everything tends to have discussions about whether you're doing it this way or that way, whether you're doing more allocations or using more
time. And that entire mindset is really useful in an embedded world, because you don't have much memory and you need to do things fast; having that mindset consistently throughout the entire community is, I think, what really sealed the deal for me. That said, it's not really easy to learn — it has a pretty steep learning curve, and there are a couple of very specific new ways of thinking that you need to pick up. Once you have those it's straightforward, but it took me and the team quite a bit of time to ramp up.

So what worked? I'll go through a few examples of things that worked for us. First, speed of development: you can write very high-performance code that is still very terse. Here we are reading something from an ADC into a buffer, then reinterpreting it and sending it up to a higher-level function that works with the data. In the middle of the reinterpretation we need to do funny things like take a 32-bit integer, shift it down by 8 bits, take 24 bits of that, and convert it into a 32-bit signed integer. All of this can be done in a nice, terse one-liner — and the best part is that while this looks very high-level, it compiles down to something very low-level. There's actually a construct that lets you look at the low-level code it generates — I didn't have it on the processor here, otherwise I could have shown the generated code — and it's very short: it generates very short, fast code even from something that looks this high-level and complex. Things like the dot-equals (.=) operators do in-place operations, so this entire code doesn't do any allocations, which means you could run it in a loop forever and it won't allocate — which means you're probably going to get pretty good speeds. Stuff like this is what I think makes Julia great
Low-level control. This is something I struggled a lot with in Java: how do you do ioctl calls? When you're talking to hardware, you've got to do ioctl calls for all kinds of random stuff, and in Java you've got to go through JNI, and it's painful. With Julia, there is a ccall function in the base library that basically lets you link to any C library, or even to OS calls like an ioctl. All you need to do is write some data structures for the kind of data you want to pass down into the ioctl, and then just call it. It's basically how you would write it in C, written in Julia — very easy to write, very easy to reason about, very easy to maintain, even at a low level.

And then, because of the ML relationship and what people use Julia for, the GPU integrations are very mature, especially on the NVIDIA side. Here, for example, is a simple function that does a vector multiplication: it multiplies X and Y and puts the output in out. This is pure software code — it will run in Julia as-is. If I call it like this, it does the vector multiplication 100% in software, no GPUs involved. And all I have to do is add @cuda at the beginning, and the whole thing runs on a GPU. There's a lot of magic involved to make that work, but for maintaining code where you want to say "in this scenario use a GPU, in that one don't," this gets really, really comfortable. We really enjoyed that.

But not everything worked fine, and here is what didn't: threading and scheduling. Julia, at least in the earlier versions, didn't have much granular control over threads and scheduling, and this clashed with our environment. Remember that thread I was showing earlier — it reads data out of the ADC all the time, and it cannot block.
Getting that working in Julia was a struggle. Julia uses green threads, which are great for computation but not great for IO, so that was a struggle; and it does depth-first scheduling, which also means it's not great for real-time work. We fought a lot with that in the old Julia world. In Julia 1.9, which is coming up soon, they have a first solution: the community knows this is a problem and has been trying to fix it, and 1.9 basically allows interactive threads. We did come up with our own solution, using some low-level stuff we had to import — it wasn't pretty — but with 1.9 we won't have to do that.

The other problem is that Julia doesn't really have a standard approach to bundling and deploying applications. It's designed to be used more like a REPL — that's the main use case — and there is no real bundling story; there are community tools, but nothing official. In a production device this is very critical. Again, the community knows: there was a keynote at JuliaCon last year about how this is a big problem, and hopefully there will be a solution soon.

The third thing, which also bit us for a while, is that Julia takes a long time to warm up. Julia is JIT-compiled: functions are compiled by LLVM into native code the first time they are called, which means a newly launched application takes a while to compile everything, cache it, and only then run everything fast. So you do get speed, but it needs time to warm up — a little like the JVM, if you've used that. Some large applications can take minutes to start, which is very painful in an embedded use case where you want to reboot the device and have it start working immediately. There are workarounds, which we used initially with some success.
With Julia 1.9, which is coming up soon, they will cache this compiled code, which means that as long as you don't change the code, you should get a fast start. Again, the community knows about this, people are working on it, and hopefully we'll see fixes soon.

While writing this talk I was thinking: we did all this, we moved to Julia — but people love Rust, and everybody asks, why didn't you use Rust? Personally, I found there wasn't much of a community around signal processing compared to Julia, so we didn't have much leverage; we would have had to write a lot of those things ourselves. That was one of the reasons. We looked at Go as well, and it's even worse there — although I do like Go, and we could have done a lot more with it, the libraries and the community are very different in Rust and Go. But if there's anything else you think we should have used, come talk to me — I'd love to hear about it; this is the stuff I generally like to play around with.

So where are we now? All of our low-level code is now in Julia — most of our shipping devices run Julia for all the low-level stuff. Most of the high-level code is still in Java and Ruby; the migration is slow, in the sense that new code gets written in Julia, and a lot of the tooling has moved to Julia as well. The really low-level embedded stuff — the firmware for microcontrollers — is still in C; I hope we can move that to Rust at some point, but it needs a little time.

As a summary, what I really wanted to share with everyone is this: Julia is not just for numerical work. Julia can work very well on small embedded systems — small Linux boards, Raspberry Pi kind of stuff. It has some teething problems; it's only a ten-year-old language, which is new in programming-language terms. But I think it has a lot of promise, and if you're looking for a language that meets the kind of requirements we had, give Julia a chance.
Try it out — it can do some really fun stuff, it's really powerful, and it's great. That's all. A quick shout-out: we're hiring at Subnero, so if you like this kind of stuff, come talk to me. There are stickers for both Julia and Subnero at the back, I'm going to be here the whole day, and I'll be happy to talk about any of this. Thank you. Questions?

[Audience] First question here: I'm curious — I didn't know Julia did domain-specific languages. Are you making use of that?

A little bit, but not as much as we wish we could. I think once some of the high-level stuff moves over — when we do things like user configuration — that's where a DSL would really shine. So far, mostly small things: for some of the IO work we wrote some macros that let us create our own low-latency threads.

[Audience] Something I would suggest you look into is the Erlang BEAM — it's really good at dealing with protocol stuff, and it would solve all your concurrency issues.

I've definitely looked into that; it's something that has come up many, many times.

[Audience] On the last point about hiring — how has it been finding Julia developers in Asia?

Surprisingly, we had one great success: we found someone locally in Singapore who was a Julia fan. But yeah, it's hard — very hard. I'm hoping we can start a community; we're trying to start a Julia meetup in Singapore to build enough interest and excitement so we can reach more developers.

[Audience] You were saying you're open for jobs, right? I'm currently studying AI applications — I have no real knowledge of Julia, but it would be great to learn it on the job as an intern.

Let's talk afterwards, and we'll be happy to look at what we can do.

[Audience] And one question for the organisers: is there a way to watch the FOSSASIA live streams in China?
Live streams in China — does somebody else want to take that one? Any more questions?

[Audience] Can you see Julia as a microcontroller project?

Can you see Julia on a microcontroller? As far as I know, there's no real work going on there. What I see in the Discord or the chat when people ask about this is mostly "just go use Rust instead" — which is probably a better answer than trying to solve this right now.

[Audience] Wouldn't the difficulty be in finding someone who actually learned Julia at a university — from NUS or anywhere?

Surprisingly, for a while — maybe a year or so, apparently — it was taught at NUS, in a statistics course I believe, but it was just one person teaching it, and I don't think it lasted.

All right, I think that's about it. Thank you very much.

Well, that joke's no good — not now; my sons get very upset: "oh no, not you again." Okay, so are we good to go? All right, thank you very much. Yes — next after me will be lunch, so keep that in mind as a target.

What I wanted to do in this talk, when I put in the proposal, was to level-set, to the extent I can, this whole thing about the ISA, about RISC-V. There is so much stuff floating around, and a lot of confusion as to what on earth it is supposed to be and why you should care. The terms that get used: "oh, it's an open-source CPU" — no. "Oh, you can do whatever you want" — yes, but what is it that you can actually do? Can you build the chip? Not necessarily, because you don't have the capability. So what I wanted to do is find a way of explaining this in sequence. My intention behind this talk is not really directed at this audience, but at a future audience who may be watching online at some point, wondering what on earth a CPU is. So I'll be covering the basics here, so that hopefully it's useful — so that my grandkids, at some point in the future, understand what their grandfather talked about.
There will be grandfather jokes as well, okay? All right, so that's the topic.

First things first, I want to acknowledge this gentleman — he passed away last month. Gordon Moore, 94 years old. One of the things he said, in 2008, was: all I was trying to do was get that message across, that by putting more and more stuff on the chip, we are going to make all electronics cheaper. I think that part continues to be true. But the question is: what is it that I can put on the chip that can also make things different? He doesn't talk about that — which is fine, that was not his intent — but it is very telling, because that is exactly how things have gone: look at all the one-dollar, three-dollar ESP chips from Espressif. You can do WiFi, you can do all kinds of amazing stuff — there is a lot of capability. So what he said was true. Thank you, Gordon, for the insight in 1965.

All right, with that, let me move to this slide. Anybody know what this is? Diminishing returns? Diminishing returns — I like that. Any more suggestions? If you are an economist, when you see this you probably understand what it is. An S-curve — smart guy. So what is an S-curve, and why do I care? Because it looks like an S — that's why I care. But more importantly, this is from Clayton Christensen's book, from around 2000, whichever year it was. He talks about the S-curve as describing how technology adoption happens over a long period of time. I'll take that thought process and translate it to where we are today from a chip perspective.

The bottom dark curve — the one that says "first technology" — that first S-curve would be things like the mainframe computers, then subsequently the early microprocessors: Intel's chips, the 6502 that was in the original Apple, the 6800, and all those guys down at that level. These were the early CPUs that started a trend of all kinds of innovation, and it continues.
But it doesn't mean that just because the first S-curve ends at the top, it dies. Not necessarily — it can still be there today. It doesn't expire; it may be deployed to fewer users, but it is there. The second S-curve, from my perspective for the purposes of this talk, is the ARM CPU. The earlier CPUs were never going to be running in a mobile phone — it's not that they didn't try; I know Intel did try — but somehow they just didn't get there, and in came ARM. Today there are more ARM devices out there than Intel devices, which is phenomenal. It doesn't mean Intel is dead.

I am now suggesting that the third technology curve is where RISC-V comes in. In the larger global ecosystem there will be RISC-V CPUs in all kinds of places where ARM cannot possibly be — or where it may be today but will be displaced. Intel is probably never going to show up there, and AMD, and whoever else is left — I'm not sure who else is left. The important thing is that none of those previous technologies necessarily disappear; they will probably retreat, or find a new life in a specific niche, while everything else is covered and done by RISC-V. That's my prediction. We can come back here five years from now and see where we are — and it's recorded.

So, by the end of this talk I want you to understand what a CPU is, what SoCs are, and how one instruction set architecture is making waves. I'll try to do this — I've only got 20 minutes and a lot of slides.

Part three: what is a CPU? We all know what a CPU is, but for those people watching this from the future: a CPU is just a piece of silicon — currently still silicon; there is no biological CPU yet. It has circuits in it that can be manipulated electrically to create some output — whatever the output may be, you decide.
And the way you do that is with software that manipulates the signals that propagate through the CPU. We always use the phrase that the CPU is the "brain" of the computer — I put it in quotes because maybe it's a brain, maybe it's not, maybe it's a shared brain, I don't know. But let's consider it the brain that works with the rest of the silicon on the motherboard, or even on the same chip, to do something — whatever that something may be.

So historically, where have we come from? The earliest ones — anybody here have a device at home with a vacuum tube in it? What do you do with it? Anybody else? I didn't say functioning — I said have a device with a vacuum tube; functioning is a secondary question. A nice piece to put on the shelf — yes, that's fine, it's an interesting thing. If nothing else, it generates heat when you turn it on, so it's a good heater — not that we need heat here, but just in case.

Think about it: all of those vacuum tubes shrank into the transistor, and all of those transistors shrank into a microcontroller — silicon on a chip. The transition was dramatic. And if you want to think of it differently, these are three different S-curves. They are still around; they haven't disappeared, but they have narrowed and — not regressed — picked the space they work best in. You find a lot of vacuum tubes in high-end audio, and maybe some radar stuff; go talk to the military, I think that's where they like these. So there is a space for each of them.

Now, computers are made of silicon chips. In the early days of computers — and even today — if you open one of these devices up, you'll see on the motherboard one big chip, perhaps, and then a bunch of other chips to do whatever else needs to be done: things like video encoding, or storing files to disk.
All of these things are still current and valid today. But if you opened up your mobile phone, how many of those discrete chips would you find? Increasingly, fewer and fewer. If you look at a Raspberry Pi, how many does it have? Very, very few — a lot has been integrated. What does integration mean here? It means putting stuff onto a system-on-chip: the work needed to interconnect all of this now goes into one chip. So the question is: can all of these discrete components be put into one? The answer, obviously, is yes.

This is an example of what a typical SoC — a system-on-chip — looks like. This particular one is from Cypress. It has an ARM CPU embedded in it, plus a number of other blocks — I saw a USB block here somewhere, serial wire debug, the ARM bus, USB 2. All of these are individual circuits that previously had to sit on separate chips on the motherboard; now you have them all on one chip. That's interesting. So now you have a situation where you have one device, everything is in there, and your inputs and outputs are all controlled by whatever connections you may have.

Now, the question here is: how many of these discrete — well, I'll call them discrete for now — components do you have the opportunity to change?
You don't. I mean — sorry, let me rephrase that: you may not. You can, if you pay a lot of money; money talks, right? But it's not an easy thing to do. All of these chips contain many different components — how do you sculpt them to fit what you want to do? One of the things ARM promises, and sometimes actually delivers reasonably well, is lower power consumption compared to the others; that's one of their claims to fame. But can we make it better? If you go to ARM or Intel or AMD and ask, can we improve the power consumption — well, maybe they can, but it depends on who you are. If you are a state actor, perhaps; I have no idea. But can I, myself? Not with those chipsets. It's never going to happen.

For example, say you want to trim some of those blocks you saw on the SoC. I'm speculating — I have no idea, I don't know the truth behind it — but take Apple's M2 chips: they use ARM. Could Apple have trimmed some stuff inside to make it perform better in terms of power consumption? I have no idea. It's possible, if they paid a lot of money — fine, I don't know. Could you do the same with the other components on the SoC? The challenge is that you have no control over everything else; you are dependent on whoever it is that did it for you.

So let's ask the next question. What if I could create a SoC — a system-on-chip — that complies with standards, whatever the standards may be, so it's not a proprietary standard; that can run a standard operating system, for whatever purpose I want — an IoT endpoint, a camera, whatever; and that does exactly what I am targeting it for? For example, I may say I only want integer computation and no floating point, because I don't need floating point. Can I do that in the Intel or ARM space today? Well — just don't use it. Don't use the circuits.
But the rest of the stuff that is already on the chip: first, I paid for it; second, it occupies space and probably consumes power even while it is not being used. You see, the trade-offs are beginning to become a little more obvious. And I want to have it such that there are no NDAs to sign, no reuse restrictions — I can just give it to anybody else — and I can openly publish the design for others to use. Have you heard that kind of stuff before? Yes — the four essential freedoms. Thank you very much. The four essential freedoms, perfectly fine: that is the FSF model, and that is how open source functions.

Cast your mind back 30 years, to 1992–1993, when Linux was happening. We wondered: this will never overtake the proprietary operating systems out there; no way is that ever going to happen; who is going to do it? There was so much to do. Now, 30 years later — where are the others? The only ones that matter are the Linux environments, and maybe the BSD environments as well. The rest are there, again, as niches. It is the S-curve again — keep thinking of the S-curve; it becomes very critical.

So: I can pare down my Linux kernel to do exactly what I want, for the purpose I want it for. Can I do the same thing with my CPU? That is the question to ask, and that is where these guys come in. These are pictures of three architectures — three CPUs. What can I do with the two on the left? It is not going to happen. "I'll give you a different model, I'll give you another model — but can you change what's inside? No, you keep it as it is." Or, if you have a lot of money, maybe. But the last one: hey, do what you want.

So think 1993 — some of you were born around 1993, or maybe not. Around 1993 we recognized that there was this ability to change things, to tweak, to do stuff — that's what was happening then — and people said: no, it is not going to happen, it is not going to succeed.
And now this is beginning to happen in the hardware space, going down into the chip itself. That is dramatic — the next big wave. This is the point where you lose Intel, you lose ARM: the next S-curve.

So why is RISC-V able to achieve that particular design goal? Very quick history — there's a bunch of information here, but the important part is the last line, highlighted: the instruction set architecture, the ISA, that RISC-V is built on is published, open to anyone to use — including removing portions. Just what the Free Software Foundation would love to hear: you can take out, add, do whatever, and share it as well, without seeking permission. The one difference between the FSF model of the four freedoms and RISC-V is that the RISC-V ISA is in the public domain — it belongs to every one of us — so the FSF's copyleft rule doesn't quite apply. It's not under the GPL, so you can technically take the stuff and close it up if you want. So if you say, "hey Harish, since you see this is the next wave, why don't you start a company that takes this, builds a reliable, verified circuit that implements the ISA, and sells it to people" — you don't have to publish it; you're not obliged to; you can keep it private. So yes, it is not entirely open source in that sense, but it starts from an open base, because it's in the public domain, not under a GPL license. That's both good and bad.

I will skip some of this, because what I wanted to show here is that RISC is not new — it has been around for a very long time; it just wasn't understood in the way it's available today. We have the people who created it: David Patterson, in 1980, with RISC-I — that was the original one at Berkeley — and subsequently a few other chipsets. And then there was the challenge these guys at Berkeley were facing.
When you want to teach people chip-level stuff, it's just like 1993 with operating systems: where are you going to get the OS code to teach from? Then Linux came out, and that's how you could learn how an OS really works. How do you do that for a chip? That became an interesting idea — to change that dynamic — and so they crafted the thing that eventually became RISC-V. It is pronounced "RISC-five", not "RISC-vee" — very important. Which leads to the obvious question: was there a RISC one, two, three, four? Yes, there was, but those names were retrofitted to explain the V — they had done four other designs before, so they labelled it that way, which is fine.

[Audience] I think you missed the MIPS thing.

I do have MIPS in there. There have been many attempts at this idea, but the one that has so far seen the most traction is RISC-V. Not that the others haven't had any — they have — but they're at different stages, and some of them are just tanking and not going very far.

All right, part five: what is an ISA? Very quickly. I like this picture because it explains exactly what we're going to talk about. The instruction set architecture is like the dashboard of your car: you have all the buttons and the steering wheel and all the good stuff, and every car has similar stuff. But what is inside the car? When you press a button, when you turn the steering wheel — what does that actually do?
The doing part — the engine, whatever it is you need to move — that is the microarchitecture, and that is what it's up to you to design. The ISA tells you: I need a button to do this, another button to do that; I want to do an add, a multiplication, a subtraction; I need to store a value. Go figure out how to do it in circuitry — that's the circuit part; the ISA is, if you like, the marketing part.

When you look at the number of instructions in these three architectures: Intel has in excess of 1,300; ARM has over 500; and RISC-V has 47, plus a few more. It takes about 6 hours to read the RISC-V document. Would you rather read the document in 6 hours, or 182 hours?

[Audience] I would add one more line item: those are the publicly known ones.

That's being very unfair to those two guys — but point taken, yes, I agree.

So this is where it comes to. In 2015, recognizing that RISC-V as an idea was being adopted by a lot of universities and organizations, Berkeley said: we can't keep running this as an academic project. So they created an entity called the RISC-V Foundation in 2015. Shortly after that — maybe a couple of years later — over issues around blocking people's access to things (I shall not mention the person who wanted to do all of that), they decided to yank it out of the United States: it had been established as a nonprofit in the US, and they moved it to Switzerland and renamed it RISC-V International. RISC-V International is the custodian of the standard for the RISC-V ISA, and any further development comes out of that group in Switzerland.

What they have also done is make it possible for all of us to become individual members of the RISC-V organization at no cost. What does that mean? You can attend talks and events, and participate. It's not that you can't if you're not a member, but it gives you a little bit of a leg up.
Being part of it is kind of nice — it's kind of nice to be able to do that. Interestingly, they are relatively well funded, although it's a nonprofit; memberships are also open to corporate entities and so on, with a different pricing model for that. But the important thing is that they are the custodian of the ISA, so it's never, ever going to be closed up; it will stay available, in the public domain, and they will keep the traction there.

As was shown in a previous talk, these are two pages — two A4-size pages, back to back — of the entire instruction set. You can't read it from here, but you can look it up: it's called the RISC-V reference card. That's it. That's one of the great things. Honestly, years ago I did look at the Intel stuff — there are a lot of instructions. Do I ever want to use all of that? I don't need to, but it's in the chip, so it's up to you to use it if you want. The intent is okay, but it doesn't make sense to me.

So how do I identify a RISC-V chip? This is probably the most critical part. All the chips created under the RISC-V umbrella are named RV, and then you specify whether it is 32-bit, 64-bit, or 128-bit. Which of the others has a 128-bit CPU? Nobody. You may ask: Harish, why do we need 128 bits? Well, it's the same old story — years ago, something like 640K was more than enough, right? You remember that?

So that's the type of chip and the number of bits. Now the important thing: the alphabet soup after that. It tells you what the chip has and what you can use. Going down the list: I is the base integer instruction set, M means you have integer multiplication, A is for atomic operations, F is for single-precision floating point, and so on — all these letters. So if you wanted to run a laptop — a CPU that can run Linux — you need a bunch of these: you need floating point, you need a few other things.
So what happens is that a few of these letters get combined, and there is a specific shorthand: RV64 with I, the base integer set; M, integer multiplication; A, atomic operations; F, single-precision; and D, double-precision floating point. You want all of those in order to run an operating system, so instead of spelling out the alphabet soup, they added the single letter G. When you are looking to buy a laptop with RISC-V in it, that's all you need to look for: RV64G — or, when it happens, RV128G. Anything else? Why would you want to add the other letters? Specific use cases — then you just add those additional extensions, and that's it. And a lot of this can be compiler-defined, so you don't need to put everything in; the chip may not even have the capability, and that's okay.

Part six: show me the code. There are some simulators online if you want to have a look — this is one of them, which is actually pretty cool. There's another one that actually executes the instructions, so you can watch the instruction set at work, how the whole thing runs, and so on.

And the future — I want to close on this last one. I think the biggest challenge we have is fabs: we don't have a way to create the chip itself. Somebody was showing a homebrew chip-fab system — it's not easy for us to do, so that's the next stage of innovation we need to make happen as well. And I would say thank you to Google for trying, for offering the ability to fabricate your designs — I think somebody will be talking about that later. As for 128-bit: before you know it, others will be catching up, and that's fine. And that's it — that's my talk.

[Audience question, partly inaudible: if a chip advertises these letters, how do I know the implementation actually delivers what the letters promise — that it doesn't, say, draw more power than claimed?]
[Audience, continuing] So what is the value — shouldn't the real value be open-source implementations?

Maybe I can answer that. The question is very legitimate: why would I want to care? It's the same question you could have asked in 1993 — you had all these systems running these particular operating systems, doing everything you need; why do you need another one? There was DOS, there was DR-DOS, there was CP/M, there was whatever else. Where are they today? The same idea applies. However: will Intel, AMD, or ARM open things up for you and me to pare down to what we need? That is not going to happen. That's really the benefit here. There's an argument to be made both ways — I understand your point — but the question at the end of the day is: would you want to experiment with this and create something more interesting? Because here you can, and with the others you cannot.

[Audience] But my question is: don't you already have the ISA, and all the open-source tools to do the entire implementation chain, all the way down to sending it to the fab?

We have all the tools already — and verification, making sure you can reuse the circuits you have designed. I don't like that in the semiconductor industry they call these "IP": I have to license the IP from you, when it's really just a library to do something. I prefer to call them libraries that I want to use and need to license from you. And so on.

I think I've run out of time — okay, done. So with that, I think we have to wrap up. Thank you very much — I have to moderate myself now. We will restart at 1:30. Thank you.