That's true, all right. Hi everyone, I'm Hudson Jameson. I work for the Ethereum Foundation doing some DevOps and core developer liaison work. Actually, let's do intros real quick, and then I'll get started on kind of an overview of ProgPoW, and we'll jump into some of the other stuff around it. So that's my intro.

Hi, I'm Martin Holst Swende. I work on the Geth team, and I'm also the security lead for the Foundation in regards to the Ethereum infrastructure.

I'm Ram from Least Authority; I work as a programmer in security research.

Hi, I'm Liz Steininger, also from Least Authority.

Right, so how many of you in here are familiar with ProgPoW? Almost everyone? Okay, we've got at least one person who isn't, so I want to explain it. ProgPoW stands for programmatic proof of work. In the Ethereum white paper it was mentioned that Ethereum was designed to be ASIC resistant — an ASIC being a specialized hardware chip that, in the cryptocurrency world, is meant to mine a specific cryptocurrency algorithm. So Ethereum was supposed to be ASIC resistant. There's this thing called Ethash — people call it different things — and it is the proof of work algorithm used in Ethereum. When ASICs came online in the last year or year and a half, some people started getting worried, because ASICs were originally not supposed to be on the network, and there were concerns — echoing other cryptocurrency communities like Monero and Zcash — that ASICs would join the network in a way that was not as altruistic as GPU mining pools, or would attack the network, or form secret mining groups, and that they would push GPU miners out of the ecosystem. GPU miners were pretty keen on not having ASICs in there. That's where ProgPoW came from: a semi-anonymous group called ifdefelse created it. From ifdefelse, Kristy-Leigh Minehan has been the spokesperson for ProgPoW this whole time; the
other developers have not been revealed. Since then there's been a lot of politics and some technical argument on ProgPoW in general — including some conspiracy theories and some fun back and forth on Twitter — and a lot of research has been done on ProgPoW because of this. I believe it first started being developed around March of 2018, and we're here now in October 2019, still talking about it, so I would say it's a contentious topic within the Ethereum community. Back when the core developers were deciding whether or not to implement ProgPoW, once it was at a point of being decided, they made the decision to implement it into a future hard fork. But because there was all this noise around the development of ProgPoW itself, back in January the Ethereum Cat Herders were assigned to organize audits around ProgPoW. We selected Least Authority for the primarily software part of the audit, and Bob Rao as the person doing the more hardware-focused piece, and those were completed months ago. I'll have Least Authority talk about their end of the deal; Martin has been helping with the Geth implementation of it and also started a testnet, which he can talk about a little bit — I think it's called the Gangnam testnet — for ProgPoW, to make sure it is viable for the network. So we'll get into some discussion around a little bit of the technicals, a little bit of the politics, and reach out to you all to see who wants to ask questions and where you want these topics to go. I'll throw it to Least Authority, then they can throw it to Martin, and we can go from there.

So yeah, we were selected to do this audit, and we were aware this is a pretty contentious topic, but we do see our role as auditors to just look and
report on what we see. We can't really offer too much in terms of what the community should do and how it should handle it; it was more about reflecting on what we found with ProgPoW. We looked at it for a little over a month — I think it was about five weeks — and then we produced a report in mid August. I believe it got published towards the end of August or in September, but it's public now, so that report is out there for people to read. Some of the feedback we've gotten is that people understand it but kind of don't understand it — and that's because we had to get into really technical stuff in the security audit, to dig deep and figure out exactly what was going on. We report things at a high level, and we also gave suggestions on what could potentially be improved about ProgPoW. Basically, we found that it achieved its goal of making ASICs essentially need to be like GPUs in order to work with it, which really disadvantaged ASIC production — I mean, the ASIC advantage if they were produced.

Knowing that, we also wanted to analyze what would happen if hardware advanced in particular ways in the future, and how that would impact ProgPoW — to extend the life of the audit information we provide to the community, because there's only so much you can write about "this is just how it works." So we tried to capture some things in the report that said: if this specific hardware advancement happens in the future, then this is the potential issue that might result. More specifically, that was around the light-evaluation mining attack, and we also made a suggestion on the Keccak function. Those two items seem to be the ones most up for discussion; besides that we made some other suggestions about the documentation. So you can all read the report for more details, but we're also here today for any specific questions about it.

I can just mention the testnet. There was a testnet started a long, long time ago — I can't exactly remember when. It was based on an implementation Parity had done, which was later merged into the Parity client, so there it can be activated by a genesis config. For Geth it was never merged into the client, so the Geth clients that participated in the testnet were basically based off a Geth PR, which by now is kind of stale. As far as I know the testnet is still up and running and the miners are running it — they're also running it with dedicated GPU mining rigs — and I don't know at this point how many epochs it has transitioned through, but it's quite a lot, and as far as I know there have been no problems with it.

Yeah, that's how I understood it as well. So, diving into the politics a little bit — or actually, maybe the background first. There has been some back and forth, so just to air this all out: Kristy-Leigh Minehan is a very magnetic, sort of controversial figure in the space, because of her past affiliations and the way she's handled communications in the back and forth with ASIC manufacturers — and I should mention the ASIC manufacturers have at times also been a little rowdy with their communication, being anti-ProgPoW, obviously, because they're ASIC manufacturers and they weren't going to want it. I don't really want to go into super detail about that, but there are a lot of interesting articles and Twitter conversations to dive into for more information. I think there are two sides to this, the technical and the political. The technical has been fleshed out using the audits and technical discussion, and
the political side has not been fleshed out, because there are a lot of very strong opinions from certain people. There are so many sides of the debate — it's even a debate whether the community at large is loud about it, or whether it's just a few very small minority groups that are really loud about it. So it's really hard to parse the signal out of that, and as someone who's kind of a community manager in the space, this has been one of the hardest things to reason about: getting signal from the community on whether ProgPoW is wanted or needed. We know the GPU miners' positions; we know some individuals in the community and their positions; and we for the most part know the core developers' positions — the ones who have spoken out for ProgPoW. A lot of it is abstaining from an opinion as well, which is a choice, and an important one to be able to make when you're in a group deciding something. So I think it's a really important discussion. For the format of the panel, we might want to take technical questions from the audience first, and then if people want to throw in some politics discussion later — which will probably happen, because more people are familiar with that side of it — that would be cool too. So if anyone has questions on the specifics Liz mentioned, or any technical questions, feel free — and if you don't mind, come up to the microphone so it's on the recording. You don't have to show your face; I'll just hand the microphone to you. Does anyone have any questions?

[Audience] I know we can read the report from Least Authority, but one of the technical questions I've seen come up a couple of times is this notion of: if the creators are these shadowy figures, what if there's some secret ASIC ability in here? Can you summarize the rough conclusion of the report on this idea — is it possible that the developers of the algorithm have a secret ASIC up their sleeves?

At a high level: not that we could find. I mean, security audits aren't a perfect thing — we spend a certain amount of time and we look as hard as we can. Bob Rao, who was looking at the hardware side, also spent a while on it, maybe a couple of months; our team was communicating with him too, and we really dug into some things. That's how we came up with the light-evaluation issue — and even that one, we think, is not really an issue either.

I want to give a little more detail on that one. As far as hardware goes — any secret stuff that hardware manufacturers can do — there are supply-chain or manufacturing-side attacks where people can do certain things, but as far as this project is concerned, that cannot really affect anything other than producing bad blocks. It can only produce bad hashes and slow things down; it cannot affect anything more than that.

The other point in the report is the light-evaluation mining attack, which relates to how things currently work. We have the cache, and the DAG is calculated from the cache. The cache is small to start with and just grows; then you have the DAG, which is much bigger — it started at about 1 GB, it's around 4 GB now, and it grows over time. Now, what if you could evaluate the DAG on the fly? Currently the DAG is precomputed ahead of each epoch — for the next epoch you precompute the DAG and switch to it, so you don't suffer any delay at the transition. But what if you could calculate DAG items on the fly from the cache? For that you need to store the entire cache on-chip, and right now on-chip
SRAM is very expensive in hardware — it's really costly to put a lot of it on a die, and on current CPUs and GPUs there are only small amounts of SRAM available. There's also power: the power to access external memory is much higher than the power to access internal memory, so there's a big advantage when things are in internal memory. Now, according to Bob, who is an expert in hardware, a few years down the line — maybe four or five years, 2023 or 2024 — we may see higher and higher capacities of on-chip SRAM, and it will become cost-effective to put more and more SRAM on the die. That's probably the point where these attacks become more practical.

There are ways to mitigate that. You could increase the number of parents that go into each DAG item, so that the computation part dominates the memory part — because you're also doing some computation, right, to FNV the values together and then hash them. So you could increase the number of parent fetches from the cache; that's one way. You could also increase the size of the cache. Both work against these hardware advances: on one side, hardware is advancing so it becomes more and more feasible to put memory on the die, but if you increase the cache size — say, the next iteration goes to 200 MB or something — then it becomes harder to fit, and the cost works against the attacker.

Okay, do we have other technical questions?

[Audience] On the light mining attack — can you give an overview, or was that pretty much the succinct version?

Yeah, that was basically it. What it means is that the attack is not possible right now; we think it could become possible in four to five years, but there is a mitigation strategy that could be implemented now. And it also affects Ethash, so it's not just ProgPoW — Ethash too.
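To make the light-evaluation discussion concrete, here is a heavily simplified Python sketch of how a single DAG item is derived from the cache. The constants and the mixing details are illustrative, not the exact Ethash/ProgPoW parameters; the point is only the structure: each DAG item is a fixed number of pseudo-random cache reads mixed together, so hardware that held the whole cache in on-chip SRAM could compute DAG items on the fly instead of storing the multi-gigabyte DAG — and raising the parent count or the cache size, as suggested above, makes that proportionally more expensive.

```python
# Simplified light-evaluation sketch (illustrative constants, NOT the real
# Ethash/ProgPoW parameters). Each DAG item is derived from a fixed number
# of pseudo-random cache reads, which is why fitting the whole cache in
# on-chip SRAM would let hardware skip storing the DAG entirely.

NUM_PARENTS = 256  # cache reads per DAG item; raising this is one mitigation


def fnv(a: int, b: int) -> int:
    """FNV-1-style 32-bit mix, the kind Ethash uses to combine words."""
    return ((a * 0x01000193) ^ b) & 0xFFFFFFFF


def dag_item(cache: list, index: int) -> int:
    """Derive one DAG word from the cache: seed it, then mix in
    NUM_PARENTS pseudo-randomly chosen cache words."""
    mix = fnv(index ^ cache[index % len(cache)], cache[0])
    for parent in range(NUM_PARENTS):
        parent_index = fnv(index ^ parent, mix) % len(cache)
        mix = fnv(mix, cache[parent_index])
    return mix
```

Every lookup address depends on the running mix, so the reads are unpredictable — cheap if the cache is in SRAM, expensive if it is in external memory.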
[Audience] This might be too broad, but could you give the mile-high explanation of how ProgPoW works, for context, if that's not too much?

Sure, you're welcome. ProgPoW is a variation of Ethash. I implemented it in Java; it's been a few months, so I may be a bit rusty from memory. First you get some seed values into your mix registers — and the main memory accesses come from the DAG. You load those up, and you have, I think, 16 or 32 lanes. In each of those lanes there are some 16 or 32 math operations you do — these parameters change between spec versions, so I'm not sure of the exact numbers. To do that math you also take, in each lane, values from various parts of the DAG, all over the place, and there are more cache reads than math executions right now. Each lane is largely independent: each lane takes its math from its mix state, takes some stuff from the DAG, does some math, and merges with another lane. You do this about 30 or 40 times and then merge the mix state together. It's designed so that the computation maps naturally onto a GPU architecture, and it saturates the whole GPU by using all the functional units a GPU has — except floating point, because floating point is a murky thing across the vendors. Integers, you know — if you can't do integer math you should get out of computer science, it's pretty simple — but floating point is where things get weird, so they left it out of the spec because it's not deterministic enough. You take all that, hash it together at the end, and that final hash is the result you get. Then you do the
regular old mining stuff: check it against the difficulty, see if it's successful; if not, you toss it, and if you're mining in a pool you have a lower share difficulty to prove that you tried and failed. So the regular mining techniques go on from there. That's the high-level overview of what happens — I have code if you want to look at it. It's also implemented in Go and in Parity.

When you implement this for a GPU, there's also this internal period of 50 blocks, which went down to 10 in the next spec of ProgPoW — and that's one of the reasons the Geth ProgPoW code hasn't been merged: we're not sure which spec version we should merge next. Anyway, if you mine on a GPU, what you do is that every period you recompile the GPU program and reprogram the GPU for the next period's blocks. That's the programmatic part, and it's cheap for GPUs to do, but it makes it harder to implement as a fixed circuit.

On the programmatic part — I totally skipped over it, there are so many moving pieces — one interesting thing is that one of the implementations actually found a bug in a GPU compiler when running these programs, and it had a very simple fix. But the thing about these programs is that they're not really random, they're pseudo-random, and the seed is derived from the block number of the current period. So you can totally run all your compilations ahead of time, up to block 10 or 20 million, to make sure your GPU can handle it. There was some concern initially that this might break GPU compilers, but any responsible miner would be running these programs a day or two in advance to see if stuff blows up — you just need one rig to test to make sure your stuff works.

Awesome, thanks.
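The period scheme described above can be sketched in a few lines of Python. The period constant comes from the discussion itself (50 blocks originally, 10 in the later spec revision); the function names are mine, not from the spec.

```python
# Sketch of ProgPoW's period/seed scheme (period constant per the panel
# discussion: 50 blocks in the early spec, 10 in the later revision).
PROGPOW_PERIOD = 10


def program_seed(block_number: int) -> int:
    """The random program is seeded by the period number alone, so every
    miner compiles an identical kernel for a given period."""
    return block_number // PROGPOW_PERIOD


def periods_to_precompile(current_block: int, lookahead_blocks: int) -> list:
    """Because the seed depends only on block height (not chain contents),
    a miner can compile and test future kernels days in advance on one rig."""
    first = program_seed(current_block)
    last = program_seed(current_block + lookahead_blocks)
    return list(range(first, last + 1))
```

This is why the "test your compiler in advance" advice works: the sequence of programs is fully determined by block height.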
All right, any other questions?

[Audience] I just wonder: is all the controversy non-technical, or are there technical controversies too? And I was curious about your description of the testnet — it seemed to me you'd want a very rigorous testnet; was it that rigorous?

I'll take the second question first. In general, when we set up a testnet to try a new hard fork, what we want is a very lively testnet, so that — for example for Istanbul — we can execute transactions and see that transaction execution still happens the way it should, and that we're all in consensus. For ProgPoW that's totally not needed. The only thing we want to verify is the envelope of the blocks: the proof of work, its verification, and the mining of that proof of work. We totally don't care about the content of the blocks. What we wanted was maximum coverage of epoch transitions, because one of the interesting points where things could break is at epoch transitions — that's when the DAG changes, and if I recall correctly there was some minor tweak in ProgPoW that made it interesting to verify that we could manage moving across epochs — and also this whole thing about the block period of 50 blocks. So the main concern for the testnet is to have it running over a long sequence of blocks and have GPU miners mine them. It doesn't really matter how many clients are on the testnet, or even which clients, as long as at some point, when we take the Parity implementation, we can check that yes, I can import blocks 0 to 2 million and they verify correctly. It doesn't need to happen in real time as the blocks come along.

I'll take some of the technical-controversy questions. I was fairly new to core development when ProgPoW was really getting its main push, so I was drawn to making sure any technical concerns got ironed out quickly. One of the first problems it had was the spec as initially written: was it really cleanly implementable? It depended on code blocks, and the test vectors weren't very well specced out. So I went and implemented it in Java — I had to use the C++ implementation to get some test vectors out of it to verify against. Now, Java has some of its own issues — it doesn't have unsigned integers, which is where most of my bugs came from — but from that I was able to create unit tests and test vectors for transitions, to prove you can get particular blocks. There are so many different functions, and there's a test vector now for each of them, which is what was needed. So I tightened up some of that. The bit I mentioned about compiler bugs was another technical controversy, but that is basically about how good miners are at their DevOps — I mean, this is their business. It's just another thing they need to pay attention to, but the good ones know what's going on, and as long as they're in communication with each other, like a lot of them are, it's totally within what they can handle. I think that covers most of the technical controversies — are you aware of any others?

I think someone said that NVIDIA GPUs were faster than AMD GPUs, and I think there was benchmarking done to prove that wrong.

Yeah, there was something like that. I think that's where the 0.9.3 spec came from — they adjusted some of the numbers. For a particular AMD model, the comparable NVIDIA one benchmarked much lower, so the question is: is it that the NVIDIAs are better at Ethash, or that AMD
is worse? And do you make the same sort of adjustment for both? It's a lot more within the same ballpark now between those two. There are no architectural design decisions that particularly favor NVIDIA or AMD — not that I know everything about those architectures. The big changes were to take a lot of stuff down to 32 bits: all GPUs nowadays are 32-bit GPUs, so if there was a 32-bit version of a particular algorithm, that was preferred. That's why they went to keccak-f800 — it's 32-bit based, and when you put that through a GPU's 32-bit integer units, which are optimized for exactly that, it just blazes right through. So a lot of their changes were about optimizing for the general class of GPUs, not one vendor over the other.

Another technical — I wouldn't say controversy per se, but a question still to be solved — is that in going from Ethash (Hashimoto) to ProgPoW, there is still the difficulty. Ethereum uses a difficulty formula that is totally dependent on the previous block, and if the ProgPoW hashing engine is half as fast, it will take quite a bit longer to find an appropriately difficult block. So there would be very slow blocks, unless a halving, or some modification to the difficulty calculation, is made at the point where we switch. That's still something that needs to be worked out.

[They pause for a photo of the panel before Martin has to leave.]

All right, any more technical questions? Otherwise we'll jump into the political ones.

On the things you mentioned — the difficulty jump — if we did nothing, it would take about three or four hours to cut in half. There's also a slight impact on the ice age; the impact there is that it would be felt maybe two weeks to a month earlier, because of the exponential nature of the ice age. And I think there was a third thing related to the bump that might have gone in, but it's not too critical. It's no worse than what happened when they upped the transaction rate at the beginning — if we communicate ahead of time it will be fine, or we go ahead and do a difficulty cut combined with an ice age adjustment. I think it'll be fine.

[Audience] Can you talk a little bit about the FPGA resistance of ProgPoW — is it cost-effective to actually try to run ProgPoW on an FPGA?

Think about the period — the 10 or 50 blocks of the random program period, where you need to recompile the program every few blocks. That block period is too short to realistically recompile an FPGA bitstream. Furthermore, the power requirements of an FPGA are a lot higher than a GPU right now. So it definitely looks much harder — we were also looking at that, and it definitely seems infeasible. That said, today there exist FPGA accelerators that you can plug in alongside a GPU to offload the Keccak part — to offload the Keccak calculation from the GPU to an FPGA. That works really well for Ethash, which has, I think, two portions of Keccak hashing, so using one of these accelerators on Ethash can definitely make it more effective. On ProgPoW it's a lot harder to get that kind of boost from an FPGA accelerator, because the portion of the work spent on that operation is a lot smaller.

Okay, we have about 10 minutes left, so let's go into politics, I guess. Does anyone have any questions or comments? I'm not going to repeat anything I've already said.

[Audience] I've heard that it has little knobs in it that you can adjust to make it harder — I don't understand it, but, you know.

For that, the answer is we'd need to know when Eth 2.0 is happening, and even then we'd need to know all the details — which we're getting really close to having — all
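The difficulty point above can be illustrated with a toy simulation. This is emphatically not the real EIP-100 formula — the target time, step size, and starting values here are made up — but it shows the mechanism: if the new hash function is half as fast, expected block times double immediately, and a bounded per-block adjustment takes thousands of slow blocks (hours) to converge back, which is why a one-off difficulty cut at the fork is discussed.

```python
# Toy illustration of the difficulty problem at the switch (NOT the real
# EIP-100 formula; constants are made up). Halving effective hashrate
# doubles expected block times until the bounded retarget catches up.

TARGET = 13.0       # rough pre-fork average block time, seconds
STEP = 1.0 / 2048   # bounded per-block difficulty adjustment fraction


def expected_block_time(difficulty: float, hashrate: float) -> float:
    """Expected seconds per block if difficulty ~ expected hashes needed."""
    return difficulty / hashrate


def simulate_switch(difficulty: float, hashrate: float,
                    speed_factor: float, blocks: int) -> list:
    """Block times after a fork where effective hashrate becomes
    hashrate * speed_factor and difficulty retargets one step per block."""
    times = []
    for _ in range(blocks):
        t = expected_block_time(difficulty, hashrate * speed_factor)
        times.append(t)
        if t > TARGET:
            difficulty *= (1 - STEP)
        elif t < TARGET:
            difficulty *= (1 + STEP)
    return times
```

With this step size, the toy model needs roughly 1,400 blocks of elevated block times to converge — the same ballpark as the "three or four hours to cut in half" figure mentioned in the discussion.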
the details about how Eth 1.0 is going to interface with Eth 2.0. I believe the latest answer is that Eth 1 is going to be an execution environment, and then I guess it will be decided whether that goes on indefinitely — it will go for a while, is still my understanding — and then it might phase out naturally, or there might be a push to move everything from 1.0 to 2.0, or it'll be its own shard. I've heard many different things.

My answer to that would be: most likely, but it's really nice if we keep the freedom to say, hey, we don't commit to anything — we might swap out ProgPoW a year from now with something really silly that's trivial to implement in an ASIC, or we might do something totally different, you never know. Though realistically, if we switch to ProgPoW now, I don't think it will be simple to do another switch later. Honestly, I thought ProgPoW would be kind of uncontroversial and a pretty easy thing, and most people would be like, yeah, whatever.

Okay, other questions?

[Audience] The biggest concern for me is a fork — a contentious fork. Is there any evidence — and I apologize, I haven't read the report — that ASICs pose a systemic risk to the system, or that there is something that poses a systemic risk? Or is this just to appease the GPU miners? And if this is a short-term thing before proof of stake, is all of this really needed, given the potential for a contentious fork?

My personal opinion has been developing over time on this — and this is not necessarily a community perspective — but one of the most interesting ideas behind it is the following. Right now we know the major GPU mining pools: you can go to a chart on Etherscan or any of the other block explorers and see a pie chart that says Ethermine and F2Pool and Sparkpool, so we know them. We know that Sparkpool has helped out in the community; we know that F2Pool has helped out a little bit; and we know that Ethermine has helped out a lot with the technical stuff. So Ethermine might be more technical help, and Sparkpool both technical and community help, with their initiatives in the ecosystem — co-working spaces in China, conferences, educational sessions. So we know they're all good actors. The thing I always think back to is Monero: after their fork, a lot of hash rate suddenly dropped off, from miners we had no idea about. Nobody knew there was a Monero ASIC — it wasn't public. Someone built an ASIC, started mining, and could basically do whatever they wanted anonymously. Whereas we know all the actors now. If Sparkpool tried a double-spend — or, I used to think there might be a way they could mess up the 2.0 transition, which is less likely now after talking to Danny Ryan — there are things an anonymous large percentage of hash power could do, like double-spending right before the end, because they know their investment is about to drop off completely. GPU miners can switch to other algorithms; ASICs are stuck on Ethash, or stuck on ProgPoW. So if we're the only major chain implementing ProgPoW, what do you have to lose if you're anonymous and just want to do a double-spend on an exchange? Our confirmation times are much lower. That's the best argument I've heard so far in favor of ProgPoW, so that's where I'm at as far as good arguments go.

And I do shoot down the idea that we made a deal with GPU miners, or that because of the issuance reduction there was an implication that we would be doing ProgPoW. I know at the time some people were saying that, but I did a huge Reddit post that dispelled it; it included timestamps from core developer calls and things like that, where we discussed
not doing that. As far as whether there's going to be a fork or not, there's no way to know. My main opinion, on a personal level, is that there won't be, because to have a successful fork you need a dedicated team of developers who can maintain at least one client. There are people — maybe from ETC or other forks of Ethereum — who could pick that up, but I don't see any of them speaking up right now. Maybe as it gets closer some will speak up and say: yes, we're ready to take on this work, this is so against our principles that we're going to do it. But I just haven't seen that yet, so it's hard for me to believe.

People underestimated that with ETC, though — we had never had that before. So I'm not saying it's impossible at all; it's definitely possible, but I don't know the probability right now.

No, I mean, if that did happen I'd see it arising organically in that same kind of ETC way. There are a lot of people who can fork a repo, run continuous integration, and tweak parameters. They don't have the ability to move things forward much beyond that, but they can at least keep a network going — they can do a simple fork and then hope to gather an army behind them. That's how I'd see that contention happening, and it could happen in either direction.

There was actually code — there was a developer on the Discords back in January or February who was about to do a ProgPoW fork. He had code ready and dates picked. I don't know why he backed off — maybe we persuaded him that ProgPoW wasn't dead — but he was very committed to making it happen. He had code in place, people in place, and connections — or he was really good at bluffing; it's hard to tell without an actual fork.

The other thing that happened around the time of the DAO fork is that there was another proposal which was sort of like: okay, let's make a new one. We're not going to fork, but we want an Ethereum — we're basically going to start from scratch with a new genesis.

Yeah, that's another possibility. So, our time is up, unfortunately. I'll be in the hallway to talk, and Martin has to run. Tell me on Twitter if you took a pic — I'd love to see my face everywhere. Thank you all for coming.