It will be an opportune time to take them. It's okay, we've got a few minutes before the next one. And Zoltán, if you're in the room, please step down. Are you? Zoltán Lehóczky, please step down. Okay, we're gonna go... Let me get to the schedule to make this video. Let's see if I'm up first. And then I think I will take the other man. One, two, three. Hands on here. All right. Welcome everyone. It seems that we have some issues with the other presenters, but that shouldn't be a problem, because now I would like to show you how to turn your software into hardware. I'm Zoltán Lehóczky. I'm here from Hungary, actually with my colleague Álmos, who is back there and will do the next talk. I will start with a simple question. Who is a software developer or a software guy here? Whoever is a developer here is probably a software guy. Who is a hardware developer? Okay, we have a pretty good ratio, probably one-to-one, which is unusual, because there are usually not many hardware developers, and the reason is that hardware development is, well, as the name also suggests, hard. But the advantage of going bare metal, down to the chip design level or the logic hardware level, is that you can get a substantial performance increase and power efficiency benefits. And this is exactly what Hastlayer allows you to do, but as a software developer. Because with Hastlayer you write your code, you write your .NET programs actually, and Hastlayer will turn it into a piece of hardware like this one here. And again, the benefit is that it will be potentially faster and consume much less power. So let's see how it works. I will now go to Visual Studio, because we are talking about .NET here, but don't be afraid: you can use not just C#, but also Visual Basic or even managed C++, or functional languages like F#, or even scripting languages like PHP, JavaScript, or Python. By the way, the whole thing is up on GitHub. I will show you the link later.
Now, what we have here is a sample called ParallelAlgorithm. As the name suggests, there is a massively parallel little piece of code here, because that's where you get the benefit of going down to the hardware level: when your application can be massively parallelized. What this sample does is that it first takes an input number and then starts some tasks. Now, is somebody familiar with the Task Parallel Library in .NET here? Nobody? Good, then I can tell you anything. But honestly, what tasks are about is that they are basically an abstraction over threads. So when you start a task here, the task scheduler will eventually select a thread from the .NET thread pool and run as many such threads in parallel as the platform allows. Now, this laptop I have here has two physical cores in its i5 CPU and four logical cores, so the hardware-level parallelism we get here is around four, but we are actually starting 280 tasks. These 280 tasks will still run on those four cores, but when we convert the whole thing into hardware, then on this device here we will get a hardware-level parallelism of 280: 280 little physical processing cores will all work in parallel. Now, what we are actually computing in parallel in this sample is pretty simple. There is a bit of conditional logic here, nothing fancy, but it is executed 10 million times, and then the result is returned. And again, the whole code and many more samples are up on GitHub; you can check it out later. Then we wait for all of these tasks to finish, and once that happens, we sum their outputs together, and that will be the output of the whole application. Now we will run this. This is part of a small console application. What will happen is that Hastlayer, in the background, converts the whole thing into hardware, and we will eventually start communicating with this device here.
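The actual sample is C# using the Task Parallel Library and is on Hastlayer's GitHub; as a language-neutral sketch of the same fan-out/join pattern (all names and the per-task logic are made up for illustration), the idea looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def process(input_number: int, task_index: int) -> int:
    # Stand-in for the per-task work; the real sample runs its
    # conditional logic 10 million times per task.
    return (input_number + task_index) % 2  # hypothetical bit of conditional logic

def run_parallel(input_number: int, degree_of_parallelism: int = 280) -> int:
    # Start one task per "core". On a CPU these tasks are multiplexed
    # onto a small thread pool; on the FPGA each task becomes its own
    # physical processing unit, so all 280 truly run at once.
    with ThreadPoolExecutor(max_workers=degree_of_parallelism) as pool:
        results = pool.map(lambda i: process(input_number, i),
                           range(degree_of_parallelism))
    # Waiting for all tasks and summing their outputs, as in the talk.
    return sum(results)
```

The key property is that the code only expresses *logical* parallelism (280 tasks); how much of it runs physically in parallel is decided by the platform underneath.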
It will execute this piece of code three times, and then it will also execute it on the CPU of my laptop here. So let's see what the results are. And here we have the results. It took roughly 300 milliseconds all three times on the hardware to execute this ParallelAlgorithm sample. Actually, if you take a closer look, it's exactly the same amount of time each run, down to a fraction of a millisecond. And the reason is that with the hardware, you basically get an application-specific co-processor for your application, and the execution time is also deterministic: there is no operating system, no noisy neighbors, nothing else. On the CPU, it took around five seconds, an order of magnitude more than on the hardware. Now, of course, this is not a very scientific measurement, but still, this should give you an idea of what's possible with Hastlayer and with hardware conversion like this. So thank you for your attention. This was from me in five minutes. I will be here for the rest of the day; let's talk if you have something cool for Hastlayer. That's awesome. Thank you for finishing on time. As we are more or less discarding the timetable completely: is there any lightning talk speaker in the room who has a lightning talk to give? No? Blink? Okay. Well, you're next anyway, so come down. Wait, you know, given we have a little bit of wiggle room, it's fine at this moment. It's unusual, but we'll run both speakers back to back. Okay. I love the idea of simplifying FPGA development. Okay, next up. Hello, hello, everyone. I'm Álmos Szabó, and I would like to talk about a new number format called posits. It was proposed by Dr. John Gustafson to become a better, improved version of the existing floating point standards. We have implemented it in .NET C#, and we have also transformed it to hardware logic with Hastlayer.
So, about IEEE floats: you probably know that they are basically everywhere, and that's because they are the IEEE standard. So whenever you use a floating point type on your computer, that's probably an IEEE float. And what is the problem with them, why would we need posits instead? Because, first of all, they are wasteful. For example, in a 32-bit IEEE float, there are over 16 million bit patterns that only mean "not a number". You don't assign a real value to those patterns, and that's just a waste. They are also slower than they could be, because the smallest numbers are represented in a different way; they are called subnormals, and you need separate logic for them. So it makes IEEE computations slower than they need to be, and the chips bigger than they need to be. And the values are not distributed really well, because, for example, a 32-bit IEEE float always has 8 exponent bits, whether you need them or not, even though small-magnitude numbers wouldn't really need 8 bits. So posits could be better: there is only one "not a number" value in them, so all the other bit patterns are assigned to real numbers. All the values are handled the same way, so there are no subnormals. And they handle exponent bits differently, so there are more values distributed around 1, and fewer very large or very small values. That could result in computations that are scaled properly and could be more accurate. So now I would like to go back to Visual Studio. This is the method we are going to run, both on the CPU and on hardware. Let's start it now, because it's going to take some time. What's happening? We just read a number from the memory, create a posit with the value of 1, copy it into the variable B, and in a for loop, running 100,000 times (because that's the number we read from the memory), we add this number to the number A.
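The "exponent bits handled differently" part is the regime field: a variable-length run of bits that scales the value, leaving more bits for the fraction near 1. A minimal sketch of decoding an 8-bit posit (assuming es = 0, ignoring the special zero and Not-a-Real encodings; not the talk's actual .NET implementation):

```python
def decode_posit8(bits: int, es: int = 0) -> float:
    """Decode an 8-bit posit into a float. Illustrative sketch only."""
    if bits == 0:
        return 0.0
    sign = -1.0 if bits & 0x80 else 1.0
    if sign < 0:
        bits = (-bits) & 0xFF            # negatives are two's complement
    body = format(bits & 0x7F, '07b')
    # Regime: a run of identical bits terminated by the opposite bit.
    first = body[0]
    run = len(body) - len(body.lstrip(first))
    k = run - 1 if first == '1' else -run
    rest = body[run + 1:]                # skip the regime terminator
    exp_bits, frac_bits = rest[:es], rest[es:]
    e = int(exp_bits, 2) if exp_bits else 0
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    useed = 2 ** (2 ** es)               # regime scale factor
    return sign * (useed ** k) * (2 ** e) * (1.0 + f)
```

For example, the bit pattern `0b01000000` decodes to 1.0, and longer regime runs push the value toward the very large or very small extremes, which is why precision concentrates around 1.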
So basically we just count up to 100,000 to demonstrate how posits work, and then we convert our result back to an integer with a simple cast, put it in the result variable, and write it back to the memory. So let's see how our hardware generation is going. Our sample has already run on the CPU; it took about 159 milliseconds. And now we are waiting a bit for the hardware version to run on the FPGA board... and there it is. It took about 211 milliseconds on the FPGA implementation. So that's how it works. You can learn more about Hastlayer and the posit number format at the links shown, where you can also find our email addresses if you want to talk more about this. Thank you very much for your attention. Thank you. Are there, firstly, any of the other scheduled lightning speakers in the room? No takers, fine. Ben, you're up. I can take HDMI; I'll need a stereo microphone and a grand piano. We might arrange those for tomorrow. Okay, this thing is going to go out there. Okay, so this is just a quick informational thing. You all heard about the DAO hack; I'm just going to tell you how that came to be. But a little bit of background. This is an interesting tweet that I saw recently. Real scientists often deal with real issues that have real ramifications; they have consequences. They often make the observation that computer programmers don't deal with consequences; we haven't really had to live with that. And I believe that's because any subject matter that has the word "science" in it is not a real science. Chemistry and physics are real sciences. Political science: not a science. Computer science: we're not a science. We have not dealt with real ramifications, so we don't have any skin in the game. We've got some very serious ethical issues. And now that we're talking about building our economy on this crypto ledger stuff and this blockchain stuff, we've got to really get it together. So, one year after Ethereum was released came The DAO.
And on June 17, 2016, we encountered real consequences: existential consequences of over $50 million, and the destruction of a $150 million organization. So yeah. It's not good, you know. Like, if you're writing JavaScript code for the next Facebook killer and edge conditions cause your program to break, you can reload the browser, right? But if three lines of code cause your organization to be destroyed, then maybe you're picking the wrong platform upon which to build your future. So I call this Dijkstra's Revenge. So how did we get here? Okay, where's my code? Okay, here we are. So, real briefly: this link has, I think, the best analysis of the DAO exploit, but this is the actual code that is the problem. There's a function called splitDAO. What the DAO is, is you invest in a project, and if you want to get your money back out of the project, you say "I want to get out of it" and call this splitDAO function in the contract. Up here at the beginning it computes how much money you are owed. Okay, and the very next thing it does is call this withdrawRewardFor with the message sender, which calls a function on that sender's contract that actually sends him the money he's owed. Okay, now after that, the code updates the local balances of how much money the DAO has left, updates your ledger, and all that kind of stuff. Except the problem is that this withdrawRewardFor makes a function call to the guy's contract, and, taking advantage of some of the nasty behavior of Solidity, which is an awful language for writing contracts that need to be secure, it allows him to recursively call the function.
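The real contract is Solidity; here is a hypothetical toy model in Python (all names invented) showing why paying out before updating the ledger lets the caller re-enter and withdraw the same balance repeatedly:

```python
class VulnerableDAO:
    """Toy model of the DAO bug: pays out BEFORE updating the ledger."""

    def __init__(self, total_funds):
        self.funds = total_funds
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.funds += amount

    def split_dao(self, who, receive_callback):
        owed = self.balances.get(who, 0)
        if owed > 0 and self.funds >= owed:
            receive_callback(owed)       # external call FIRST (the bug)
            self.balances[who] = 0       # ledger updated only afterwards
            self.funds -= owed

def attack(dao, attacker="mallory", rounds=3):
    payouts = []
    def receive(amount):
        payouts.append(amount)
        if len(payouts) < rounds:        # re-enter before balances update
            dao.split_dao(attacker, receive)
    dao.split_dao(attacker, receive)
    return sum(payouts)

dao = VulnerableDAO(total_funds=100)
dao.deposit("mallory", 10)
stolen = attack(dao)  # three payouts of a single 10-unit balance: 30
```

Swapping the two steps, i.e. zeroing `balances[who]` and decrementing `funds` before the callback, makes every re-entrant call see a zero balance and do nothing, which mirrors the one-line fix described in the talk.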
And so instead of continuing down and updating the balances, it jumps back up to here, and then comes back down to the part where it sends the guy money, and jumps back up, and comes back down to the part where it sends the guy money, until it runs out of gas and the contract can't execute anymore. And that's how all the money got taken away. Now, had they moved that line, the withdrawRewardFor call, underneath these two lines of code, it wouldn't have happened. That's the whole thing. 150 million bucks, right? So what's the issue here? We've got imperative programming model logic with magic object model behavior, completely not obvious, and we're dealing with billions of dollars of value on top of it, right? No atomic paired operations. It's a freaking ledger. Does anybody know what the word "ledger" means? How do you update one side without the other, right? What should happen is: if any part of the transaction fails, the whole thing should get rolled back. So the recursive function call should have done its nasty bit, realized that the ledger hadn't been updated correctly, and undone the whole thing, right? That's the way a proper ledger works, right? So, you know, in this day and age, what could possibly inspire such obviously defective models? JavaScript and Golang programmers. That's my punchline. Thank you very much. Nice finish. Thanks, Ben. Right. Have any of the other scheduled speakers materialized? Lightning, lightning, lightning. There is no lightning. Hello. I want to talk about how Bunny broke his leg. Does Bunny want to talk about breaking his leg? You've got to come up with a better story than that, man. This has got to be hardware related or something. I slipped and fell while soldering an FPGA. So, hello everyone. I'm Fatima Rafiki, and today I'll be talking about metric visualization in Grafana. I was working with Grafana while I was at Oracle, to visualize data of various virtual machines.
So, first let's understand why it is important to visualize data. It is often said that data is the new oil. Visualizing data helps us analyze it and discover new patterns and strategies, which helps us make important and crucial decisions. Here's an example which shows why data visualization is important: a plot of the average revenue an artist is paid per platform, alongside each platform's total user base. You may see that although YouTube has the lowest pay, it has the largest user base. So an artist would be more likely to choose YouTube: despite the lowest pay per play, because of the largest user base there are good chances that the artist will earn the highest revenue overall. So where does Grafana come into the picture? There are a lot of tools that can help you visualize data, but since this is an open-source conference, and I have some experience with Grafana, I would like to share that with you. Grafana works almost equally well with all databases, but it is often said that a time series database works a lot better with Grafana, because each record has a timestamp associated with it. The most commonly used databases are InfluxDB and Graphite. They are both open-source and very easy to use. The query syntax of InfluxDB is quite similar to that of MySQL, so MySQL developers will not have any issues working with InfluxDB. This is one of the snapshots from while I was working; it shows the CPU utilization with respect to the timestamp, and you can see that it is for one particular host. Now that we have discussed populating the database, we have to add that data as a data source in the Grafana dashboard. These are all the data sources that I have added. The procedure to add a data source is quite easy.
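As noted, InfluxQL reads much like SQL. A hypothetical query of the kind a Grafana CPU panel might issue (the measurement, field, and tag names here are made up, not from the talk):

```sql
-- Mean CPU utilization per 5-minute bucket, for one host,
-- over the last six hours.
SELECT MEAN("usage_percent")
FROM "cpu"
WHERE "host" = 'vm-01' AND time > now() - 6h
GROUP BY time(5m)
```

Grafana substitutes the dashboard's time range and refresh interval into such queries automatically, so the panel stays in sync with the selected window.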
There is a simple form where you fill in the name, type, and URL of the database. The type is whichever database you are using: InfluxDB, Graphite, MySQL, or whatever. Then you fill in the database details, the database name that you used in InfluxDB or Graphite, and you just save and test, and that's done. The data source is added. What next? How do you plot the graph? You just add a graph in Grafana (there's an option to add a graph), and there you can add a query. The query is almost the same as you would use in MySQL: select this field from that database and group it by a time interval, or whatever you want to group it by, and the graph is plotted. These are a few of the graphs that I plotted during that time. The first one is the deployment builds queue count, that is, the builds that are currently queued, and the second one is the deployment builds that are currently running on all the virtual machines, with their respective timestamps on the x-axis. This opens a wide sea of possibilities for data visualization. There are a lot of other tools around, but Grafana makes it really easy and simple, and you can plot any type of data, not only simple graphs: pie charts, box plots, histograms, any of the plots that you want. Eventually you can visualize your data better and more efficiently, make more strategic decisions, gain more insights from your data, and make optimal use of it. If you want to discuss more about it, I'll be walking around the conference and we can discuss, or you can just ping me on my handle. Thank you. Very impressive. The audience can't see it: she finished just as the timer hit zero. Much appreciated. I suspect we'll have a few minutes' break, unless any of the scheduled lightning speakers have turned up. All right.
In that case, we resume in about seven minutes' time, hopefully with the scheduled speakers. Thank you.