[The opening of the talk is unintelligible in this transcription; from the surrounding context it introduces Coverity static analysis of the codebase, the defect database, and how false positives are tracked.] ... I was working in the StarOffice era and I left out a zero, so I had a 0,0,2 instead of a 0,0,0,0,2. I obviously never found that, so it was there since maybe 2000 or 2001. And I discovered it last year, so that's a 13 or 14 year old bug that was discovered by Coverity. So yes, Coverity does find real problems, and it often finds lots of other things that are not problems, so you get a huge amount of noise. And especially for us, every time we refactored something, a warning that we'd marked as being a false positive would return, because the code had changed sufficiently that it was no longer seen as being a copy or a duplicate of the previously marked false positive. So keeping that noise under control was an ongoing job.
So basically we set out to try and get rid of all of our Coverity warnings, mostly so that when we got new warnings we could know that they actually were new warnings and not old warnings returned again. So we put a big effort in there over the year and a half. So these are the current results. Last year's defect density was 0.08, this year's density is 0.00, and we've had it at 0.00 for about six months or more. We've had it statistically zero for that length of time in practice. For instance, today we have four warnings in Coverity: we've had a run which found four warnings, we fixed those four, and the next Coverity run is still pending, to see whether we get an actual zero or whether it's found more stuff. But I would expect it to be statistically zero. This then compares to projects of our size, and we're obviously in this large bracket. These are the number of defects in our Coverity database over time. That is 2012, 2013, and then that's obviously now. The warnings from Coverity in this chart include the ones that belong to the components we've set to be ignored and all of the things we've marked as false positives. So this is your floor here: you can't go below that unless you go and fix our third-party dependencies. For the most part it's been a downward slope. You get these upward spikes every now and then, especially when Coverity's checkers are modified a little bit; that's what the large jump there is. So it goes up and then comes back down again. How we integrate Coverity: it gives projects, or open source projects, of our size two slots a week, so you can do two builds a week, and then you can't build again until a week later. The way I do it is I don't evenly space them out. I do one build for Coverity, I get back the results, I fix those warnings, and then I do a second build straight away.
We're now at the point where we generally have actually zero warnings, and I can use the second slot to verify that the fixes from the first slot actually worked. If you get it wrong, then you have to wait an entire week. So that's what I do: two back-to-back instead of evenly spaced during the week. The way it is now, because there's such a small number of warnings, I have the results mailed to the mailing list, so you get your Coverity email once a week, and the email generally includes the lines of code that have been affected, that have caused the new warnings. So you generally don't have to visit the website, or stare at screenshots of the website, just to drill down into the bug. I generally send an invitation from Coverity to everybody that has commit access; over the last couple of weeks I've sent out invitations. So if you have commit access and you'd like to use the Coverity website, and I haven't sent you an email, just let me or Norbert know and we'll get you invited. I generally don't give access to people outside the project who just want to observe the bugs, because there's a lot of historic stuff in there that might have some significance, and I don't want people digging through three or four year old warnings looking for stuff, in case some minority of real issues is still in there. But all the new stuff goes to the list, so that doesn't matter so much for them. It takes about four to six hours on this laptop right here to build our side of it and upload it to the Coverity website, and then it takes about 12-plus hours on their side of things to analyse it.
Yeah, the Coverity we're using is 7.6.0. I've updated to 7.7.0, and the Coverity servers are now claiming that it fails on their side, so either we've broken 7.7 or we have some other problem. I'm trying to get in contact with the Coverity people to see if there's a problem that they can fix, or I might have to revert back to 7.6. So that's the Coverity part of this stuff, pretty much unchanged status-wise since last year, except that now we email the results to the list and our numbers are far smaller than last year. It's become routine in our process. Crash testing then was the follow-up project. This is basically Markus's thing that I took over running. What we're actually doing is loading a whole pile of documents, and then saving a subset of those documents, and seeing if anything crashes. And by crashing, in this case: we use a dbgutil build, so we're treating asserts as being a failure as well. So when I talk about crashes, I'm talking about crashes or asserts. So, for example, the existing crashes that we have are all asserts that don't crash in a release build. So it's a higher order of quality that we're holding ourselves to here. When I say that we have this many crashes, a real-world release build of LibreOffice is not actually going to crash on any of the failures we list here. So, on the input side of things, we have 118 columns in our output, and each column is either a file format or a location where we get documents from. Some of them are kind of pointless now: we have a whole pile of StarOffice binary formats that we were loading, but some of those are no longer detected, so they get misdetected and loaded as text. So: see if anything crashes, see if anything trips an assert. Outside of the loading, then, we save a whole pile of these documents. We take all the ones from the compatible formats and output to the common formats: ODT, ODS, ODP, DOCX, RTF, XLSX and so on.
So we export out to those other office formats. We don't load up all the image formats and save them out to other image formats; we just do this for the document formats. So if we load up an ODT, we save that out into all of the compatible formats. One import of a .doc, or one import of an ODT, turns into three exports. So we multiply those there. [The next portion of the recording is unintelligible. The recoverable details are that the testing runs on a 64-core machine with about 4 gigs of memory per core, and that the corpus is somewhere around 80,000 documents.]
[This portion of the recording is unintelligible in the transcription; it appears to cover how the crash-testing runs are organised and how the failures are grouped by format column, ending with a question from the audience about how failures are categorised.]
It just means that if you look at the historical data, it'd say that there are a lot of .doc failures, and a lot of those .doc failures are actually RTF crashes, because some of those are really different formats inside. So you tend to look at the columns and see that this one is, you know, absolutely useless, while one of the other ones is really important. So there's lots of miscategorised stuff in the Mozilla column and lots of archive files. [A partly unintelligible passage follows, discussing how long a run takes and how the results are reviewed.] Again, we've been very serious about just getting these numbers down. There had been two or three new failures a week, but now, for the last two or three months, we're down to about one a week. So they're not evenly spaced. Yeah, so you clip a few peaks from this diagram in order to actually fit things in, because otherwise, if you just left the original peak, you'd have one gigantic peak and then just a straight line, and you wouldn't be able to see any of the local ones. [The rest of this passage is unintelligible; it appears to discuss the make-up of the document corpus.]
Or, if somebody adds in an assert to catch things that previously would have been silently ignored, then for all these documents that were apparently always working we can see whether they actually really work or not: there's no actual crashing in the real world, but a difference can be detected across the whole pile of stuff. So you can use the import testing to find real-world examples of something that you believe is a problem. If you have a legacy assert that just warns, and you realise that it should be a real assert, because this problem should never actually exist, you toggle it on, and if there is a document out there somewhere in the horde that comes through that bad case, then you can fish the failing documents out of the results. It's a pretty common thing to happen in Writer, especially in layout asserts, for the SwIndex assert that warns about indexes that have outlived the stuff that they pointed to. So, that's the importing. There are a lot more documents involved in import than in export: you've got 80,000 documents on import, but for export you only have the subset that can be exported out to the common office file formats. It is multiplied by about three from the input, since generally each document exports out to about three export file formats, but the numbers are still smaller overall. So even though the numbers of documents we're talking about are a lot smaller, we can see that initially we had a huge number of failures in some of those export columns: there were like 3,500 failures, versus the worst that ever happened on the import chart, which was about 400. So there was a big difference initially between the quality of import and export.
So, relatively speaking, our import was never that bad when it comes to crashing, but when it comes to asserts and crashing on export, there's a whole different story there. A lot of .doc failures especially, a lot of failures and crashes on the export to those file formats. Again, this graph covers the same period, from about October 2012 to now. The overall trend goes from big to small, and that's what really matters, and you get these cases where lots of documents share the same flaw. We like those ones. Yeah, so, as for what we were doing during the last year: by last year we hadn't looked at export crashes at all, but we had made some progress on the import crashes. So the three phases were the Coverity stuff, the import side of crash testing, and then, once that was finished, we basically moved over to export crash testing. We also wanted to get the numbers down. So there was one week this year where the crash testing report came back at zero: zero import failures and zero export failures. We hit the ideal. We hit the triple zero, for one week or whatnot. Yeah, and I thought it was kind of interesting, because that happened on the week of the 20th to the 27th of August, when an awful lot of people were on their holidays. So I guess that what we need to do for better quality is just go on longer holidays, and then the bugs will go away. So, yeah, obviously it's not a static project, and a lot of the reason we were doing this was to get rid of the legacy warnings, so that when there were new warnings we would see them straight away, and could tell that an actual new change had been made that triggered them, and it wasn't to do with the old ones.
Similarly for the import and export stuff here as well: as well as it being important not to crash on import and export, what you're really hoping to do is use the whole thing as an early warning system, so that anything that goes wrong — that creates a Coverity warning, an import crash or an export crash — can be detected pretty much in that week, so that the person who made the change that triggered it is still interested in fixing it, and not maybe six months later. So, this is what you get for a typical run. Like, this week we have four Coverity warnings pending the next build, because those four warnings are fixed. This week there are zero import failures, and this week there are four lines in the export section at the bottom, which is four documents. They cover two different issues: we have two asserts, two problems, that have appeared in the export this week. That's a fairly typical week, and that's pretty much where we are. As far as I'm concerned, I think that the whole thing is a success, in that you have a very, very small set of numbers. They do change over time a little bit, but you can use them to tell a single individual that he has made a change that he needs to examine, to see if it's something he needs to revisit, within the week of him doing it. Right, so the reason I cared a lot about the Coverity warnings and the import and export stuff was that I started because I was interested in the quality of the import and the export, from the sense of heading off any security issues around file formats. I didn't think it was worthwhile getting into document fuzzing and document filter analysis while you had all these actual real-world crashing test documents. There's no reason to go searching for difficult crashing test cases when you have a pile of 80,000 documents that created 400 or 500 crashes in the first place.
So now that we're kind of finished with, I suppose, the easy real-world case of import-testing crash documents, and with the theoretical side from the static analysis — the tainted-data stuff and things like that; we've made an effort there to validate data, and we've used Coverity to tell us where some of the data coming in from the outside hasn't had any bounds checking done on it — I'm starting to move on to generating troublesome documents, and finding other crashes like that aggressively by going out and creating them myself. So I've played with a couple of fuzzers over the time. The first one I tried is not bad — you get some good results out of it — but it was not super fast. The best one I've found so far is AFL, the American Fuzzy Lop fuzzer, and that's a lot of fun if you enjoy that kind of thing. The way you use it is that you build your project with the replacement compilers, afl-gcc and afl-g++. It's a coverage-assisted fuzz testing tool that instruments your project: it finds all the branch points, and it keeps track of which input data causes your program to go down one branch or the other, and then it goes back and generates input to try and get you to go down the other path. So it maximises the coverage that your inputs exercise. And it's pretty good, because it also has a lot of smart bits in it: it knows how things go wrong, and it knows the kinds of mistakes people make. So pretty fast after you turn it on, it's giving you documents that are causing your integers to overflow and giving you zeros in all sorts of places. So, you know, you turn it on and things just start breaking. And it's got a really great user interface. I don't really know how much of it works — I just read the README and things like that — but what I do know is that it obviously starts grey up there in the top corner, where it's got cycles done.
When you make quite a bit of progress, it goes yellow, and I'm told that when you're coming close to exhausting all the possibilities of your program, when you're coming to the end of the process, it goes blue. So one of these days I hope to see it go blue. It hasn't gone blue yet. I'll give you some numbers on that as well. The crux of all these fuzzing things is that they give your application a document, start your application, your application loads it and ends, and the fuzzer — like the crash testing — monitors whether your application crashes or not, and then restarts it again with different input. It starts and starts and starts and starts. Again, because AFL has a user interface, there's a number for cycles — I won't show that screenshot because I don't want to — but there's a number that says how many executions it does per second. And if it's under 100 executions per second, the user interface goes red. So obviously the message you're getting there is that if you can't execute 100 times a second, your application is too slow to be fuzzed effectively in this manner. So yes, our office itself — soffice being the fat one — is really slow. I was getting 0.18 executions a second just loading a PNG. So that wasn't really where I wanted to be. So I found out that, for just starting and stopping, one of the big costs we had was configuration loading: parsing it is pretty expensive. Normally we just suffer it once at start-up, and then, you know, it doesn't matter too much. But for these really tight little loops, the configuration is too expensive. So rather than using the fat soffice, we use a much thinner, custom-written tester. That has no user interface and no configuration and so on. With that you get about forty executions a second for PNGs. That's about 200 times faster, but it's still pretty slow.
So yeah, the next part of the story is that there's a special mode where you don't restart the process each time. You just load a document, signal the monitor that you're ready to go again, loop, and take the next document. That way you lose all of the start-up time. And when you do it that way, you get up to 3,000 or 4,000 executions per second. So that's about 20,000 times faster on the simplest documents, from 0.18 to 4,000. That's a pretty big number. You generally only get those speeds with the simple documents. I have to say, when you get to the more complicated ones, like .doc and PowerPoint, then you're talking like 50 or 60 executions per second. And in lots of cases, as you run over time, you go into these long deep valleys for a couple of hours where things are very slow. So, as I said, it's a 64-core box for that. I've currently got about 20-plus instances running for the last month or so, and lots of documents have failed. But typically what I'm finding is that crashes are very rare, but hangs are really common — very, very common. So what I've done is I started from the crash testing corpus. I took the documents we have for crash testing and ran them through the minimiser that finds the minimum set of documents from the input that exercises the maximum of code paths, and I use that as the initial input. So those are the results to date, and the coverage work is actually making some hopeful progress, and I hope to see eventually that it goes blue up there in the corner. OK. Yes? [Audience question:] I wanted to ask, so can you just terminate the process for a while? [Answer:] Yeah, and restart it two days later — so, exactly. I run for a couple of days: between crash testing runs, I do the fuzzing runs.
And then you generally find a bunch of hangs. Fix the hangs, build the source again, do the crash testing for a day or two, then go back and fuzz. And you can restart it: it uses the internal state it had at the time you exited, and continues from there. Any other questions? [Audience question, apparently about the sanitizers.] We haven't done it for a while, but we did do it for a bit. And yeah, that's the thing — we were just talking about it yesterday, that we haven't done it in a while. But we have done it in the past, and we have some results for that as well, but that does need to be updated. That was the undefined behaviour sanitiser, and then there was a memory sanitiser as well, I think. Any other questions? OK, that's it. Thank you.