Yeah, about time. Okay, let's just put that on, let's go. Okay. All right, we're going to start with the next session. The next session is again a panel, the second panel for today. Just like the first panel, it's a bit of an experiment; it's the first time we've run this, well, now it's the second time. For this panel, we thought we'd sit a few experienced LLVM developers at the front of the room, and they're here to answer all of your questions, well, all of your questions that are at least somewhat related to LLVM or its development. So to start off, maybe each of you can give a small introduction and say where your experience lies? Hello, my name's Peter Smith. I'm working at Linaro at the moment, primarily on LLD, so I'm mostly in the linker area, the ELF area, so please no complicated compiler questions for me. Hello, I am Jeroen Dobbelaere, I work for Synopsys. The group that I work in builds a tool called ASIP Designer. That's a tool that allows you to describe a processor architecture in a high-level language, and from that description the tool produces the hardware, the simulation, and the software tools, including the compiler. I mainly work on the LLVM side there. Hi everyone, my name's Nick Desaulniers. I work on Android's LLVM team at Google, and I work on building the Linux kernel with Clang. I should probably also do a brief introduction for myself: I'm Kristof Beyls, I work for Arm. I've been working on compilers there for about 10 years, mainly on LLVM and LLVM-related projects. So I do have one or two questions prepared to get the panel started, but I really want to open the floor for the audience to ask questions as well. Hopefully that will give the most interesting questions. So please do raise your hands if you've got any questions to ask. A question in the back. I'll repeat the question: it was about how asm goto is doing. So asm goto shipped in Clang 9. So if you need it, it's there.
We actually put out an RFC for extending it. So this is the curious case of what it means for LLVM to extend a GNU C extension. We actually have patches. There's a constraint in GCC where, if you try to use output variables with asm goto, you can't; you get an error message in the front end. And there are definitely ambiguous cases, like: what happens if you have two asm goto statements that jump to the same label, but they have conflicting output constraints, where one says put this output in one register and the other says put it in a different register, right? I suppose you can detect that and just error out when that's the case, and still support assembly that isn't overly constrained like that. So patches are out there under review currently, and there's still a lot of work that we're doing there. Bill Wendling is driving that work. There are posts on llvm-dev for the RFC for that. And we'll probably have kernel patches soon that detect whether your compiler supports it and then make use of it. Cheers. So just a quick follow-up question on that, Nick. You said that this is extending a GNU extension. That seems similar to a topic from the previous panel, about how the LLVM and GCC communities work together. Do you have any insights for this specific one? I think this is something where we've picked up on feature requests for it, and I think a lot of the pushback on the GCC side has been that this is not something that's easy to implement. Not necessarily that it's impossible, just that it's not easy, and there are lots of edge cases and things we can think of, kind of thing. And I think if we were able to show that it is possible, and come up with some test cases and try to work out and understand these edge cases, like conflicting output constraints, we might be able to take this back to GCC developers and say, hey, we have it working, with some constraints, kind of thing.
And then there's an actual important code base that may use it, kind of thing. And that may help drive prioritization to implement it as well. With a lot of features like this, it always requires someone to take a crack at implementing something initially, to work out the edge cases and find the sharp corners. And then typically a second implementation really works out the interoperability issues that the initial developers didn't think about. And then there's always the question of, well, is this a priority for me to implement or not? So there's definitely been back and forth that I've had with GCC developers where I've said, hey, as a developer trying to write portable code, it'd be great if I had __has_builtin, right? We have __has_attribute, which helps write code that's more portable across GCC and Clang; if I had __has_builtin, that would help as well. GCC developers got back to me when they got around to implementing it, saying: oh, we noticed a difference between GCC and Clang, which is that GCC happens to expand macros passed to __has_builtin and Clang does not. We think this is a bug, what do you think? I said, yes, this is absolutely a bug. I filed a bug, but it's not my priority right now to go and fix that, kind of thing. So I think that's a tricky part too: if it's going to be part of the explicit language standard, that sets a deadline for when we should have it implemented. There was a question here. I was wondering whether you can maybe tell us something about the following: when you have a patch for LLVM, there's the system of code ownership, but sometimes it's not very clear who is the owner of a certain piece of code, or who you need to ask to review a patch. So, trying to repeat the question: if you've got a patch and you're looking for someone suitable to review the patch for you, it's sometimes unclear who you should ask.
And there is a code owners file in the repository showing, for different areas, who the code owners are, but even then it's sometimes unclear. On the code owners file itself: I think some of the code owners listed there are not very active in the project anymore; they used to be active. And some of the code owners have areas that are so wide, and attract so many patches, that it's impossible for them to look at basically everything. Beyond that, it gets to the point where you somehow need to figure out who would be the most active person who understands this area well enough to review it for you. Of course, if you're working in a team with lots of LLVM developers around you, with lots of connections to other teams and other companies, it's a bit easier to know and understand who to ask. But if you don't have that, it gets a bit harder. So one of the tricks I sometimes suggest to people facing this problem is, first of all, look at who last touched the code that you're touching. That person probably understands the code at least somewhat, and there's a chance they're still active in the code base. Another suggestion: look for anyone who gave a talk on that topic in recent years, or look for who reviewed similar patches in the same area. If they did good reviews in the past, that's a good sign they'll probably do good reviews in the future. And then, yes, just like in any project, there are some areas where there are only one or two experts, and maybe they've started working on a different project, and it gets hard. There's not an awful lot to add to that. You're right, it is a bit diffuse, and there isn't any formal policy on who can sign off and who cannot sign off.
As far as I understand, anyone can sign off on someone else's code, but it has to be in an area that you're known and familiar with, or where it's expected that you make that check; and the standards across the code base vary quite wildly. Some areas have quite strong owners, some not so much. Quite often a good way to do things is to look for areas where people have strong opinions. Certainly if you patch somewhere where someone's got a strong opinion, it will probably get attention at that particular point. So it often helps to find out who's the sort of person you're likely to upset if things go wrong around that particular area. Sometimes IRC is also a good place to go and say, hey, I've got this patch, can somebody advise me who can look at it, that type of thing. So going on the LLVM IRC channel and asking there can often help. Anyone else have anything to add? Question in the back? Or, not a question, a remark. Yeah, Nikolai, what's up? If someone happens to answer, you add them to the review. Do you want to repeat that trick? Yeah, so this particular trick was: if you push your patch to the mailing list and someone, well, makes themselves a victim by responding, then by all means add them to the review. No good deed goes unpunished, I think, in that particular case. Okay, the next remark. You mentioned IRC; now there's Discord as well, which is much more newcomer friendly, and you can ask over there: is anyone willing to review my patch? Okay, yeah, so the comment was that there is also a Discord server. Is it actually listed on the LLVM website, where to find that? I think it's on the home page, on the left-hand side, where you've got all the links, like for IRC. Okay, so yeah, there's also a Discord server that's potentially more beginner friendly.
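The "look at who last touched the code" trick from earlier can be approximated with plain git. This is a rough sketch of my own (the function name is made up): it lists the most frequent recent authors of a file together with any "Reviewed By:" lines that landed commits carry.

```shell
# Rough sketch: list frequent recent authors and reviewers of a file,
# as candidates for reviewing a patch that touches it.
# "Reviewed By:" lines appear in commit messages of changes landed
# through Phabricator's arc command-line tool.
recent_reviewers() {
  {
    git log -n 50 --format='%an' -- "$1"
    git log -n 50 -- "$1" | grep -o 'Reviewed By: .*'
  } | sort | uniq -c | sort -rn
}
```

Usage would be something like `recent_reviewers llvm/lib/Support/APInt.cpp` from inside an LLVM checkout (the path is just an example).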
And so, on the trick of looking at who touched the code last: you can run git log on the individual file or directory, and look not just at the author; if someone committed their patch with arc, the Phabricator command-line utility, the commit message will typically say who reviewed the patch. So you can get a list of the reviewers who have reviewed changes touching this file most recently. In fact, Phabricator will, when you put your patch up, also come up with some reviewer suggestions already, I noticed. Next, a follow-up question on asm goto? Yeah, so I'm interested in developing asm goto, as represented by the callbr instruction, more broadly. Right. And I care about AMDGPU, where interesting things happen with control flow, and one of the things we're thinking about is having target-specific intrinsics that can be called with this callbr instruction. Is that something that you think people would have problems with? Is that something there would be obstacles to in the current implementation? Just at the IR level, obviously; the backend would have to do something. So the question was: could we build additional, higher-level intrinsics on top of the callbr instruction that is used at the IR level to represent this control flow? I can't think of any obstacle. I think you'd probably help us work out more bugs in various transforms on callbr, because it's been a minefield. And that's part of the discussion in the LLVM community right now: the overhead involved in adding new instructions to the IR. You tend to have a lot of case statements in certain places where the default branch is absolutely not what you want, and that happens all over the place. So when you add a new instruction, a lot of things are broken, and it takes a long time to find the broken transforms. But we've put, I've paid a lot of blood into callbr.
Like, if you want to build on top of it, it's relatively stable, kind of thing. A question about the community: a while ago, one of your top contributors left for what could be called political reasons, and they described a negative impact on the community, that the atmosphere had changed. Do you notice that he's missing? Because he contributed so much. I can only give my personal view, and I didn't notice the difference. Rafael was a big contributor to LLD, I guess, so I know him from that particular area. And yes, would the velocity of the LLD project have been faster if Rafael had stayed around? It almost certainly would have been. But I think other people from the community have come up and stepped into that place, and whilst we may not have been as fast as we were before, we have managed to pick that up. But yes, there's always a risk, when anyone leaves the community for any particular reason, that the project will slow down. But generally, when someone leaves a hole, other people rise up to fill it. That's all I can say there. Anyone else want to add to that? It's extremely hard to deal with LLVM from the perspective of a downstream tool that has to support an unknown version of LLVM. LLVM APIs change so much, and if you have five years of branch support, it's a nightmare. I don't know whether you have any experience, I assume you have some experience like that, so if you have any tips on that, I would be very interested. So I haven't had any direct experience myself, but I think one of our earlier presenters today, Alex Denisov, I don't know if he's in the room at the moment, presented about his particular strategy for one of his tools that had to support multiple backwards revisions of LLVM. I think, in effect, he had basically wrapped, versioned interfaces, where he had various different ones.
I think it's one of those things where the community follows the live-at-head mentality, unfortunately. If you want to use LLVM, unfortunately, you pretty much have to subscribe to that; otherwise, you're in trouble. It's painful, yes. I think it's just one of the drawbacks of that particular model. Anyone else? I think I can only make the general statement that it's a trade-off between improving the system you're working on, which requires breaking APIs, or being held back by keeping APIs stable for a long time to come. So, yeah, my guess is that most of the people who contribute a lot to LLVM maybe feel the pain less, because they don't have the use case of having to support the version from five years ago, or multiple versions at the same time. So, yeah. What about using the, what is it called, deprecated attribute, to just keep old versions of overloads, for example, around, and mark them as deprecated? Because we have the same problem as well with some of our stuff; this is actually in an out-of-tree LLVM project. Yeah. I think we should do a better job. I'll try to summarize: Nikolai thinks we should do a better job of using the deprecated attribute and, if I understood correctly, having wrappers so that the old API keeps working but under the hood calls the new APIs, something like that. Where it's feasible. Where it's feasible, yeah. Serge? Serge remarks that the Rust project uses wrappers to get some API stability, and the C API is also mentioned: apparently it's less of a pain, but sometimes you do see the same problem popping up there too. The question in the back. So, LLVM has compiler-rt as a runtime, and it also has the beginnings of a libc. Does this indicate that LLVM is open to taking in more runtimes? If so, I'd love to know roughly where the lines are here, because I've got a load of runtime libraries which I can't distribute really easily.
So the question is, or I'll just try to rephrase: if you want to get any code running, you need runtime libraries. LLVM, for example, has OpenCL runtime libraries, and there's a C runtime library project starting. And the question is: LLVM seems to be open to accepting more runtime libraries; you've got a bunch of runtime libraries and you want to contribute all of them, how do you go about that? This is pretty much speculation on my part, but I think a lot of it depends on whether there's a large subset of the community that's interested in those particular ones. A libc is obviously of interest for quite a substantial portion of the community, although that does bring the drawback that there are a lot of different opinions on how to do a libc. But I certainly think that if the various runtimes that you've got reach a significant number of people, particularly some of the larger community members who can potentially throw more weight their way, you might actually get some of that through. So I think the answer is probably: it depends on whether it's considered useful by a considerable number of the members. I think it's worth a shot. Is it compiler-rt? Sorry, doesn't compiler-rt have separate, OS-specific builds? It does. Right, so I'd assume it would follow a similar model to whatever compiler-rt does. Have you talked to any of the compiler-rt folks about this idea? Well, git log compiler-rt. Sorry. Yeah, I think maybe just one last remark on that: the only way to find out is to actually post an RFC proposal and see what kind of answers you get. Try to explain your rationale for wanting to contribute; that's the best way to find out. Do I get this right that the Rust folks are basically just normal users of LLVM, or is there some sort of cooperation between you guys? Are you working together on some things? So from my point of view, it's not like the LLVM community is a very cohesive, small set of people.
There might be close collaborations between some people in the LLVM community and some people in the Rust community. Me personally, I don't have a close collaboration with Rust people per se. So from my perspective, I would say it's more that they are users of LLVM, and probably some of the Rust developers contribute changes to LLVM. But I'm guessing some people who are active in the LLVM community might see it differently. I'm mostly interested because I would like to know if there have been any changes to LLVM to accommodate Rust. Have there been any changes in LLVM to accommodate Rust? Can anyone think of some? I can't think of concrete examples of changes to LLVM, but the last time I spoke to Alex Crichton of the rustc developers, he explained to me that they have significantly expanded LLVM's interface: they have quite a few additional methods that they expose from LLVM, for whatever their front end needs additionally. And I said, oh wow, this is great, have you thought of upstreaming this? And he said: I'm just so busy, I never have time, kind of thing. So I think a lot of the folks at Mozilla were interested in the compilation pipeline of Rust, through LLVM, to WebAssembly, so you had people working on WebAssembly backends and then Rust front ends, kind of thing, and it didn't look like too much development within LLVM itself. And I think since then they've picked up some people in the Rust community looking at making modifications to LLVM itself. I was surprised most recently to find some folks on Google's Fuchsia toolchain team contributing to the Rust compiler, kind of using existing LLVM experience to extend LLVM for Rust. But concrete examples, I can't give you one, sorry. So, the question is whether in our product LLVM is used for the generation of the hardware, or only for the extensions, the instructions.
So LLVM is used as the C/C++ front-end compiler, and it maps to the specific instructions that you have in your hardware. It's not used to produce the actual RTL hardware, or to do any optimizations there; so it's not LLVM-IR-to-RTL, it's C to the particular instructions that you have defined. There do exist LLVM-IR-to-RTL generators, but I'm not sure if they are open source; there have been talks in the past about such systems. Question? So the question is: you can use LLVM to compile code ahead of time, but you can also use it to generate code at run time, so JIT, and the question is whether you can also use profile-guided optimization when JITing. In our tools we also use LLVM as the JIT engine for the simulation, and we do make use of run-time information there. We are not using profile-guided optimization, although if you can get the numbers, the probabilities, I'm sure you can also make use of those passes when doing JIT. The main issue was actually getting the data back into the compiler. You can just use all the code that LLVM has for ahead-of-time profile instrumentation; you just need to mess with what the runtime does to save the data and get it back out. I think it's comparable to JIT in the graphics world. Maybe one more example I'm thinking of: there have been a few presentations at the dev meetings on how Azul uses the LLVM JIT in their Java virtual machine, and they for sure need run-time information to be able to optimize well. Exactly how much of that infrastructure is fully upstream, versus part of their products, remains a little bit unclear to me, but at least it shows it's possible, even if maybe you don't have all of the necessary infrastructure in open-source LLVM.
Well, as LLVM is mainly just a bunch of libraries, and in mainline Clang you can do this, once you can provide actual numbers, measurements of the number of iterations, you can just include those libraries and get it done; but it is of course extra work. When we are talking about JIT, there are two interfaces, MCJIT and ORC JIT. In other areas LLVM is always eager to have one good interface, yet there have been two JIT interfaces for years; are there any plans to streamline this as well? I don't know too much about the JITs, but I know the author of, I think, ORC JIT, and probably ORC JIT v2, has just started a sort of, I don't know whether it's weekly or monthly, summary of the progress in that area. So there's been, on llvm-dev, a lot more consolidated reporting on that, so if you're interested in that area, it would be worth following that to see if there's any development there. Does anybody want to go first?
It would be nice to not have a long tail of compatibility bugs in weird corner cases. I find myself working on a lot of those; I would like to get back to just traditional compiler optimizations. Something like instcombine is a nice, pure, classical pass. Maybe it's not a top three, but one of the things that keeps on annoying me is, well, maybe the first question is who actually can sign off on a particular change. Most of the time, I find it nice that there's more of this consensus-driven model for making decisions, but at some point some discussions just keep on going and going, and having no decision seems worse than having either option A or option B decided so we can move forward. I'm not entirely sure if there's a solution to that that isn't worse than the status quo, but sometimes that annoys me. I think there has been some move, at least from Chris Lattner, to maybe try and open this can of worms, of saying: how do we come up, as a community, with a decision-making procedure to try and break some of the deadlocks? And I fully welcome that; hopefully, as a community, we can come up with something that, over several iterations, we can all agree with. But I agree with Kristof: the majority of the things that I want for LLVM are more community-based than technical. I guess from a technical side, I would love it if LLVM didn't have such a horrible experience for people with the default build options. The first thing you type, ninja or make, you end up with static linking, a debug build, with as many threads as it can use, and it generally blows people's memory apart. There is a different set of build options to avoid that, but they're not the default, and we don't even document them well, so there are lots and lots of questions on the mailing list about that. As far as actual features that I think hold promise for the future, I'm particularly excited about post-link optimization, and I think that's being explored in a bunch of different spaces,
including parts of LLVM. But a lot of it is like: the linker process has ended and you've thrown away all this knowledge that the linker just accumulated, such as whether there was a relocation here, and who really needs this value to be in this spot. And I think people are proving out that there's still performance we're leaving on the table that post-link optimization can win back. But for very, very large binaries, or certain large programs, there are assumptions that post-link optimization just completely wrecks the binary and doesn't give you something that's actually runnable or usable. I think if these tools were more tightly integrated into the linker, perhaps we may be able to keep some of that information around and actually do further optimizations that we're not doing today. I guess the question was: what's the status of the original pass manager versus the new pass manager? My understanding is that passes are slowly being ported over to the new pass manager. I don't know that there are any deadlines or plans, but do you have something to say about that? I also want to make sense of MLIR, but adding MLIR into our compiler pipeline gives me nightmares, because our customers are already complaining that LLVM is too bloated, and having an MLIR-to-LLVM-IR transition puts up a barrier where you can't do pass reordering across it anymore; it blocks you. So I was just wondering how people feel about this: we've seen that people want an accessible IR, and there is understandable hesitation to expand LLVM IR itself, but I think if we think carefully about what we can do in small steps, then maybe we can get to a point where LLVM IR is as accessible as MLIR. So I wonder if any of you has thought about that before?
No, I haven't really thought about that, but I'm happy to offer an opinion. So you've criticized the design approach where you create something brand new on top, rather than gradually changing something that exists, in this case LLVM IR. I don't know; in this specific case, maybe that's the only way to demonstrate it's useful, and that's how you have to do it. Maybe gradually MLIR and LLVM IR could grow closer together, and LLVM IR just becomes a dialect in MLIR; it might take a dozen years or more to get there. So I would say I definitely recognize people being bearish on additional intermediate representations. One of my favorite examples right now is the project called Cranelift. So LLVM has two different pass managers; it also has two different instruction selectors, and two different register allocators, at least two. But Cranelift says: let's bypass multiple IR conversions and just go straight to machine code, and we think we can cut out a lot of compilation time, or time spent lowering, by not spending so much time going from IR to IR to IR. So I can agree, from that perspective, with the spirit of the question. Where MLIR really shines, to me, is in converting to and from a textual representation; and then your dependency analysis, or use-def chains, or certain compiler passes, are really language-agnostic, and it would be really cool to be able to generate a compiler from an abstract description of your IR. Twenty seconds left; does anyone have a 20-second question? Oh, Andrei, 20 seconds. Just a quick one about the pass manager: there was an RFC on the mailing list a few months ago, are we ready to switch to the new pass manager? The consensus was: yes, we are. The expectation was that by now we would have switched, but no, we have not. As far as I know, all passes within LLVM have both interfaces, and porting is very straightforward, but no, we have not switched; there are a few other things missing, even though there was consensus to switch. All right, I'm afraid we have run
out of time for this session. Thank you, everyone, for the questions, and thank you, panelists. I would also like to request: given that we ran this for the first time and as an experiment, please do share feedback, either in person, by email to me, or on the FOSDEM website; at the bottom of this session's page, as with every session, you can share feedback. Please do share your feedback. Thank you.