All right, everybody. Let's go ahead and get started. As we can clearly tell, it is the morning after the party, if you look around the room. That's also partially why we're running a little late, though I think it was mostly because I just lost track of time. So this morning, we're going to feature some of the work that's been going on over the summer, mostly at Red Hat. We thought this might be a fun idea, primarily because, I don't know about all of you, but as a professional software developer, it's really interesting to see what's going on in research and academia, and we thought we could highlight some of that. After that, we're going to vote on the lightning talks and figure out which ones you want to see, and then we'll have those people come up and give their talks. To make it more complicated and awkward, if you are doing a lightning talk, could you please email me a link to your talk so I can put up your slides, assuming you have slides. I am langdon at redhat.com, that's L-A-N-G-D-O-N. If you're planning on doing a lightning talk, just send me a link and I'll pull it up on this computer so we don't have to swap computers. Hopefully this doesn't go terribly, but I thought it'd be fun. Oh, sorry, I meant to cover a couple more logistics. Obviously this is the last day of the conference. At the end of the conference, we will have a closing event in this same room, where we usually do trivia about the conference, and there will be awesome prizes. So learn everything there is to know about the conference, and then when you come, you'll be able to win the fabulous prizes. Usually it is a significant amount of swag, and when I say significant, I mean significant. I can't remember exactly the time, I want to say it's like 3 o'clock, but basically it's after the end of the sessions.
I hope everyone's been having a good time. Please let us know if there's anything we can change. There is a very good chance I'll say, hey, maybe that's a good idea for next year, but definitely let us know. And we'll pretend like we care. Sorry, no, we really do care, and we really hope you're having a good time. I think it's been going pretty well. So without further ado, let's get started. All right, first one. Because it's intern programs, everybody's got to have their own special acronym, so I may be struggling with some of these; I apologize in advance. This one is the Greater Boston Area Research Opportunities for Young Women, aka GROW, which is a really cool high school program for, obviously, young women. I actually had the pleasure of working with Nazari, here on the left, who was one of my interns. All three of these women were sitting at Boston University and cranking through really interesting stuff, doing essentially machine learning with AI and Python, and eating all of the snacks, like all the snacks. It was impressive. As you can see, they were able to put posters together that showed off their results. These two were actually working together; they were the right two in the last photo. So what they did was look at comments on pull requests, analyze the sentiment of them, and give them a hostility measure, and so on. It was really quite interesting, so if you haven't seen it, you should try to check out that research. In a kind of parallel program, we've also been expanding that research, or related research, looking in general at how diversity works on GitHub and what happens, which is both fascinating and frightening at the same time. So this is Nazari, and see, that's me over here.
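To make the "hostility measure" idea concrete, here is a toy lexicon-based sketch. The word lists and weights are invented purely for illustration; they are not the models the researchers actually used, which would be real sentiment-analysis models trained on labeled data:

```python
# A toy lexicon-based hostility scorer for pull-request comments.
# Word lists and weights are made up for illustration only.
HOSTILE = {"stupid": 2, "useless": 2, "wrong": 1, "terrible": 2, "lazy": 2}
FRIENDLY = {"thanks": 1, "great": 1, "nice": 1, "appreciate": 2, "helpful": 1}

def hostility_score(comment):
    """Positive score = hostile, negative = friendly, 0 = neutral."""
    score = 0
    for word in comment.lower().split():
        word = word.strip(".,!?")          # crude tokenization
        score += HOSTILE.get(word, 0)
        score -= FRIENDLY.get(word, 0)
    return score
```

A real pipeline would replace the hand-written lexicon with a trained classifier, but the shape of the analysis, comment in, score out, is the same.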
She was working on a real software engineering problem, which I think gets pushed under the rug a lot, which is: all right, we've got a whole bunch of scripts and stuff that have to run to make the stats or whatever go. How are we going to do that in some automated way so they actually run every X amount of time? Oh, and by the way, we have a bunch of non-software-engineers who are actually authoring these scripts, so we want to make sure they can update them, because they're data scientists, or what we used to call quants. So how do we let them update the scripts on their own without calling me all the time to update them and run the calculations? This is specifically around statistics in baseball, and if you watch the news, you may have heard about this project; watch this space for some bigger announcements about it in the next few months. It has really been a lot of fun. Oh, sorry, we do have this one; I thought it came earlier. So again, basically the same project: they were essentially working together on different aspects of looking at, like I said, PRs on GitHub. All right, then we have the RAMP program at UMass Lowell. I'm not as tightly integrated with this one. This is, let's see, the Research, Academics, and Mentoring Pathways program. It's a summer jumpstart program for incoming students, including women and racial and ethnic minority students, designed to help you design a pathway to successful graduation. All right, so one of the things we wanted to do was check: are any of these students in the audience? I'm pretty sure the three high schoolers I was just talking about are not here, which is why I didn't mention it earlier. All right, so none of the RAMP students are here either. That's too bad. So here we have a very large program, honestly, and Denise, who is here in the middle, was a big catalyst for it. She's the VP of engineering.
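A minimal sketch of the "run the analysts' scripts every so often" idea: discover the scripts in a directory and run each one in its own process, so the data scientists can drop in updated scripts without anyone touching the runner. The directory layout and interval are illustrative assumptions, not the actual baseball project's setup:

```python
import sched
import subprocess
import sys
import time
from pathlib import Path

def run_scripts(script_dir):
    """Run every .py script in script_dir in a child process.
    Returns a map of script name -> exit code."""
    results = {}
    for script in sorted(Path(script_dir).glob("*.py")):
        # Isolating each analyst-owned script in its own process means
        # one broken script can't take down the whole pipeline.
        proc = subprocess.run([sys.executable, str(script)],
                              capture_output=True, text=True)
        results[script.name] = proc.returncode
    return results

def schedule_runs(script_dir, every_seconds, iterations):
    """Re-run the whole directory on a fixed interval."""
    s = sched.scheduler(time.time, time.sleep)
    for i in range(iterations):
        s.enter(i * every_seconds, 1, run_scripts, (script_dir,))
    s.run()
```

In production you would more likely hand this to cron or a proper job scheduler, but the shape of the problem is the same: non-engineers own the scripts, automation owns the schedule.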
Her official title, I think, is VP, Engineering, Global, but really what that means is she's the software engineering lead for all of Enterprise Linux, which is kind of amazing. And then, let's see. Moving on, we have the SOAR CS program, which I am led to believe is meant to be a pun; I will let you figure out what the pun is yourself. I think those students are here, because I see shirts. If they want to come up, come on, you can come up here. I take no responsibility for picture choices. So what did you work on? I personally worked on a virtual reality project on this platform called Myer, which Dr. Fred's team created. It's essentially a world that you can see in virtual reality, and everything you code pops up in the world. Nice. What did you work on? OK, so I created a SOAR CS app for our group. It's something small, but I used a little bit of experience in Android programming. Nice. So is it a mobile application or web? A mobile application. Nice. Very cool. You often need software to actually run these programs, and that usually gets what we refer to as short shrift, and so everything ends up terribly. I can speak from experience running this conference: all of our software is terrible. So. So I used the micro:bit, which is a microcontroller, and it has a grid of LEDs on its surface, and I used that to make an infinite runner game, which I call Jurassic Jump. If you know that little Chrome game, the one with the dinosaur, my kids literally disconnect the internet to play it. It's an attempt at that on a 5 by 5 LED screen. Awesome. So I used Myer, which is that virtual reality program, to craft an army of snowmen, waving their arms, pumping, getting ready for battle to take over the world. All right. So Kate and I, using the micro:bits, had all these sounds and tones, since it does single tones.
So we tried to work with both of them and the radio feature to make two guitars that play simultaneously. And the song was Here Comes the Sun by the Beatles. Only the first 30 seconds, though; it would take too long to do the rest of it. Cool. Most people can recognize a Beatles song in the first 30 seconds, don't they? All right. I also made a virtual reality thing with Myer. I created a world with trees and grass, and then added thunder clouds to it, too. Cool. I made a Spotify controller using the micro:bit and Python with pySerial and the Spotify API. And what does it do? You can search for songs, you can pause and you can resume, and it'll tell you exactly where you stopped in the song. Cool. Very nice. Hi, everyone. So I'm Fred Martin. I'm the faculty member at UMass Lowell, and also associate dean for student success. Can we have a round of applause for our students who made all these projects and are here? Very nice. Wait, I'm a ham. Before I give the mic back to Langdon: all the projects they just described are on exhibit in the hall, so you can see them with the virtual reality goggles, and you can hear the song. And also, I'd ask my students who created and ran the program: would you please stand up? These are undergraduates at UMass Lowell. Thank you. All right, thanks so much. See, it's not that bad. It's not too awkward. It's working. All right, so this is another program that I've actually been involved with. It's just down the street. As many of you know, we have this thing we call, I don't know who came up with this term, a collaboratory, which is basically a collaboration between Boston University and Red Hat featuring a whole bunch of different projects. Actually, before I move on: a lot of those projects you might have seen talks about here. The Massachusetts Open Cloud is one of those projects.
The ChRIS project, which is also a program for doing health care machine learning work. And then a bunch of other stuff as well. So this has been pretty cool. If you've seen any of these speakers, they are all currently PhD candidates; we had a bunch of them speak, and it's been pretty cool. And that project over there is one of the ones I'm involved in, so I like it the best and I'd like to point them out. Are any of them here? Or is it too early for PhD students? I think it's too early. This is where we were most on the fence about whether we would have people this early in the morning. But you know, hey. Then we also have this global intern and co-op program. We have these kinds of centers around research, particularly in Boston, and also in Brno, which is actually a huge program. If you don't know where Brno is, it's basically the second biggest city in the Czech Republic. Nobody's ever heard of it, but it's awesome, I highly recommend going there, and it's very, very inexpensive. We also do a lot of this work in Tel Aviv, and I can't think of any of the school names. So, long story short, we have these strong relationships with various universities, and we would love you all to be more involved in those. And then, is that really fuzzy for you guys? Sorry, I actually forgot the slides here. So yeah, huge program in Brno. We also had, what did we have, 50 in Boston downtown and then another 50 in Westford? Something like that? No, 56 in Boston, 35 in Westford. And I don't think any of them, surprisingly, were actually gardeners in Westford. But in Brno, like I said, we have a huge program, as well as in Tel Aviv. A lot of interesting programs as well. Like I said, some of that's featured here as talks, and some of it's featured at tables outside, like the SOAR CS folks.
Then we did some lightning talks inside the office, where we had a competition and people voted for the best lightning talks, and whoever did the best got to win a slot at DevConf. Of these two speakers, the one on the left gave a talk about, for lack of a better term, diversity within computer science, or within software engineering. And the person on the right was, or is, working on a really cool system for analyzing your basketball shot: if you can look at how your body moves during your basketball shot versus how a pro player does theirs, and compare those two models, then maybe by making them more similar you'd become a better basketball player. As I said before, all the snacks, like all the snacks. It's been really a lot of fun; I've been heavily involved in some of these projects. And I've just got to stop moving my arm around and screwing up the sound. So this is our Boston office, featuring some of those students. And this is actually, are these students from this program? The all-for-all high school program? No, right? Oh, yeah, okay, okay, sorry. So this is our people teaching an external program of high school students, who are the people sitting here in the chairs over here. That's why I was confused, because I was like, no, those are not high school students, I know for sure. As you can tell, I may not have put all of these slides together myself. All right, and that was it. So, a round of applause for all those people. Extra thanks to Heidi for actually putting these slides together for me and remembering all of the different programs we've been involved with. Yes, I was afraid, are there any here? All right, so let's bring our interns up on stage. You know who you are. We know who you are. Oh, all right, here we've got a lot, okay, sorry. Anybody we're missing? Oh, all right.
Okay, so what did you work on this summer? Okay, so I was an interaction design intern on the user experience team at Red Hat. I worked a lot on the business automation platform, which is under the middleware portfolio, and I worked on PatternFly, which is our design system. I don't know if any of you went to talks on that, but there were a few, and we have a booth outside for the UXD team if you want to participate in some user research we've been working on. Very nice. PatternFly is a very handy set of, you know, things for web development for when you are a backend software engineer and can't do anything related to graphics at all. It's really, really useful, so I highly recommend it. I worked in DevOps with the release engineering team of Red Hat in the Westford office, and I helped write and maintain tooling to help ship Red Hat Enterprise Linux. Nice. Hi, my name is Shania. I am on the engineering team, but I am a UI design intern. I work on the ChRIS project for image processing, and I create mockups using PatternFly. Hi guys, I'm Shruti. I'm an SRE intern. I worked on AWS auditing that would help Red Hat analyze costs effectively. Hi everyone, I'm also Shruti. I worked with the AI CoE as a data science intern on a project called Thoth; there were a couple of presentations yesterday. It's an AI recommendation system for software stacks. Hi, I'm Gabriela. I work on the Ceph team; Ceph is a distributed object, file, and block storage system. I work on a specific tool called ceph-medic, where we run the tool against Ceph clusters and it helps diagnose any issues with the clusters. Hi everyone, I'm Aakanksha, and I'm working with the AI Center of Excellence as a data scientist. I've been analyzing the Executive Briefing Center data for the first half of my internship, and for the second half I've been working on a project that allows us to convert Blue Jeans video data to text so that it's easier to perform analysis. Thank you.
Very nice. Thank you all. All right, a round of applause. I was trying to avoid any feedback, and thanks so much for all your help. I also want to especially thank some of these interns for how much they've been helping with DevConf, so another round of applause. We'll thank them more formally a little later, but thanks. All right, one other person I almost forgot to thank is Beverly, who is, I don't think, here, but she also helped put these slides together. So thanks to her; oh, and her name is right there, I thought it was only in my notes. So we are done with that portion of the show, and now we are going to try to move on to voting for lightning talks. I have never used this software before, so I'm hoping it's going to work out well. Here we have the options. What you can do is go to sli.do, and it's going to prompt you for the key, which is #devconf-2019, and then you can vote on each of these. Many of you are very technical; please try not to hack the system. So the first one is Pulp 3, and basically the argument is to stop using rsync to mirror EPEL. Next one, open source at Red Hat, the story of Foreman and Ansible modules; AWS audit with OpenShift Dedicated, cost saving using spot instances on AWS clusters; fun with statistics; women in open source; how real is AI? And then I don't know how to say this word. Thoth? Toth? I really don't know; I've seen it written far too many times. And then how to recommend the best possible libs for your app. This isn't supposed to be showing the results. Now I've lost my mouse and we're totally in trouble. And it still shows the same thing. Anyway. All right, let's do those votes. All right, raise your hand if you're done voting. Is there a question back there in the audience? It's #devconf-2019. Again, who's voted? Quickly now, quickly. No pressure. It'll be fine.
All right, I think we're going to start with how real is AI? Come on up. Did you send me your slides? Or do you have slides? Yeah, just send me whatever, or you can bring your laptop if you want. Like I said, this isn't going to be awkward at all. It's going to go perfectly, I promise. Do you want to use your computer? Good morning, everyone. I'm Akanksha Dugud, and I'm working with the AI Center of Excellence as a data scientist slash data engineer. All this summer I've been working on analysis of the Executive Briefing Center, which is in the Boston office. We bring all our customers in there and tell them about what kind of technology we have to offer and what solutions can make their work easier. Recently we built a huge Executive Briefing Center for our customers, where we can have multiple meetings at the same time, and that allows us to gather a whole new set of customers. When you talk about having customers, it is really important to know what your customers feel about you, and that's what our team has been working on. We've been working on sentiment analysis and on what topics we discuss over time, so we use two main machine learning approaches, sentiment analysis and topic modeling, for this purpose. Besides this, all of us here have a lot of meetings. At Red Hat, we usually have Blue Jeans meetings, so we are trying to come up with something that allows us to convert the video into text so that we can perform analysis, because we usually perform analysis on text; it's harder to perform analysis on any sort of speech. That's the end goal of this project. So I'm going to touch upon how real artificial intelligence is in terms of speech recognition. As children, we all possessed an innate ability to learn any language using general skills. There is no genetic code involved in learning a particular language; we all have the capacity to make sounds.
But our genetics allow us to make transitions from these sounds to actions to ideas. Having said that, we will talk about how to make a machine learn a particular language. A lot of research has gone into converting our knowledge into a model that can recognize speech. So, artificial intelligence is the simulation of human intelligence processes by machines. These involve learning, that is, grasping information and the ability to understand it; reasoning, that is, establishing logic and relationships in order to come up with a solution; and self-analysis and correction: just as we humans look back and do not repeat the same set of mistakes, we teach the machine not to make the same mistakes again. Having said that, we will come up with a model that converts speech into text. Here is how the machine learning works. First of all, we have a data set which has the correct information: a file that contains a speaker ID, so if ten people are sitting here, we each have one ID, plus an audio recording by each one of us, and then a transcript. This way, the machine knows who is speaking and what that person is saying. When we train the machine on the same text again and again, it gets the hang of what it looks like; once it hears the same word again, it recognizes how that word is written in textual form, and since it knows how each speaker uniquely speaks, it can find out who is speaking. We use a model which is an extension of Deep Speech 2, implemented in TensorFlow, and we tweak it a little according to our needs. It has two convolutional layers, five bidirectional RNN layers, and a fully connected layer. We use a linear spectrogram to extract features from the audio input, and we use connectionist temporal classification (CTC) as the loss function.
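As a rough sketch of the front end just described, computing a linear spectrogram from raw audio, here is a minimal NumPy version. The frame and hop sizes are illustrative choices, not necessarily the ones the project used:

```python
import numpy as np

def linear_spectrogram(audio, frame_size=320, hop=160):
    """Slice audio into overlapping windowed frames and take the
    magnitude of each frame's FFT: the 'linear spectrogram' that
    gets fed into the convolutional layers of the model."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(audio) - frame_size + 1, hop):
        frame = audio[start:start + frame_size] * window
        # rfft keeps only the non-negative frequency bins
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (num_frames, frame_size // 2 + 1)
```

At a 16 kHz sample rate, frame_size=320 and hop=160 correspond to 20 ms frames with 10 ms stride, a common choice for speech front ends.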
So the aim here is to minimize the loss and come up with a transcript that is as good as the speech we are saying. So I have a small demo. I'm sorry. I'm so sorry. So before this, I'm going to make you listen to an audio clip very, very quickly. Hello. So I'm going to make you guys listen to an audio clip very quickly so that we know what our machine is going to do. You guys can hear it, can you? Oh my god. Is there a way we can make everyone hear it? Yes. OK, so can I have a volunteer, please? Anyone? OK, yes. So can you tell people what it sounds like to you? "She had your dark suit in greasy wash water all year." What does it sound like to you? I mean, yes: she had your dark suit in greasy wash water all year. OK, so I'm going to run my program on this and let's see what my machine translates this into. Meanwhile, I'll play it for you guys again. "She had your dark suit in greasy wash water all year." Oh no. Can you guys see it? So my machine has transcribed it as: she had your dark suit in greasy wash water all year. So if we give the machine a correct data set and information to learn from, it is as good as a human, and that is how real AI is. Thank you. All right, let's see who our next winner is. It looks like the Pulp guys have it for the next round. So, all right, come on up. Do you want to send me slides, or do you want to try to run them? You guys have any sound? OK. Hi everybody, a quick introduction. My name is Mike DiPaolo. I've been a sysadmin for years, ever since age 15, and I recently became a software engineer. My name's David. I've been with Red Hat as a software engineer for about five or six years now. OK, so my talk is titled Pulp 3, and it's condescendingly subtitled Stop Using Rsync to Mirror EPEL. So, why would a sysadmin want to mirror a package repo, an RPM repo in particular? Because, you know, say you have a data center where you have 1,000 servers running an application.
To deploy that server plus the application, you need to deploy all the RPM packages, probably something like 1,000 per server, and you need to deploy all of your application. And in Python, you deploy Python packages from, say, PyPI. You probably have configuration management code too, in Ansible, and that's also a bunch of roles and collections. So, rather than have all 1,000 servers reach out over the internet and download gigabytes of data, you'd want to have a mirror of these package repos on-site. One reason is just so they can all download more quickly and save your overall bandwidth. Another reason is that if the upstream does go down, you can still provision your systems and manage them. And some customers have completely air-gapped, physically isolated networks, or some sort of semi-isolated network behind firewalls that they don't want talking to the internet. So, you would think, especially as a sysadmin familiar with writing bash scripts, that the easiest solution would just be to call rsync. rsync is a very flexible utility that is very good at its job of mirroring folders and files from one system to another, or across the same system, as exact copies, replicas. reposync is similar, but understands RPMs a little bit more. And once you have it mirrored, you'd put the content up on a web server so that all of your yum clients, instead of accessing the upstream URL or a mirror list, would access your mirror server on your own LAN. So, at first this seems pretty simple; you might think your script would only be, say, 20 lines of code. And before I get going on the complications of this: I'm not picking on EPEL specifically, it has problems that many other repos have for mirroring. Similar to CentOS or RHEL, there are over 10,000 binary RPM packages; CentOS/RHEL plus EPEL together are about 50 gigabytes.
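The "just call rsync" starting point might look like this. The upstream URL and destination path are placeholders, and this is deliberately the naive version, before any snapshots, checks, or deduplication:

```python
import subprocess

def build_mirror_command(upstream, dest, dry_run=False):
    """Build the naive one-shot mirror invocation: an exact replica
    of the upstream tree, nothing more."""
    cmd = [
        "rsync",
        "-avH",        # archive mode, verbose, preserve hard links
        "--delete",    # drop files removed upstream (EPEL keeps no old versions)
        upstream,
        dest,
    ]
    if dry_run:
        cmd.insert(1, "--dry-run")
    return cmd

def mirror_repo(upstream, dest):
    """Run the sync; returncode 0 means the replica is up to date."""
    return subprocess.run(build_mirror_command(upstream, dest)).returncode
```

This really is about 20 lines, which is exactly why the approach looks so tempting at first.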
But unlike CentOS or RHEL, EPEL does not keep old versions of the same package. So, for example, if CentOS launched with Ansible 2.7.0 and it's currently on 2.7.2, CentOS would keep 2.7.0, 2.7.1, and 2.7.2, but EPEL would have nothing but 2.7.2. So, here's how that 20-line script starts ballooning in size. First, you're going to realize: oh, I don't want to serve a half-finished sync on the web server until the synchronization is complete. So you have to add verification and checks, and move folders based on whether the sync is complete or not. Then you decide you want to keep multiple snapshots of the repo because, say, your production environment is on the snapshot that's two months old, your testing environment is on the snapshot that's one month old, and dev is on the very latest. So you start having to do even more moving, or symlinks, in the scripts. And then you realize you also want to keep even older snapshots, because there's a change in behavior that was introduced six months ago and you want to see how things behaved before then. So maybe now you're keeping 20 snapshots. You also want to be able to easily access old packages that have been removed, because EPEL, for example, removes packages if they're no longer being maintained or have security vulnerabilities. And at this point, you have 20 copies of 50 gigabytes' worth of repos, so that's now one terabyte of disk usage. Now you want to deduplicate identical content, RPMs, across multiple versions of the same repo, so you need to start looking at utilities to replace duplicate copies with hard links. And it's also where some updates will break you, or they have a change in APIs and you need to adapt to the new APIs, so you need to use the latest snapshot but hold back some packages. Eventually, that 20-line script is now hundreds or thousands of lines.
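The hard-link deduplication step can be sketched in a few lines of Python: hash every RPM across all snapshots, and relink byte-identical copies to a single inode. This is a sketch only; dedicated tools like hardlink handle edge cases (permissions, partial reads, cross-filesystem links) more carefully:

```python
import hashlib
import os
from pathlib import Path

def dedupe_snapshots(snapshot_root):
    """Replace byte-identical RPMs across snapshots with hard links,
    so 20 snapshots of a 50 GB repo don't cost 1 TB of disk.
    Returns the number of files relinked."""
    seen = {}       # sha256 digest -> first path with that content
    relinked = 0
    for path in sorted(Path(snapshot_root).rglob("*.rpm")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            original = seen[digest]
            # Skip files that already share an inode with the original
            if os.path.samefile(original, path):
                continue
            path.unlink()
            os.link(original, path)
            relinked += 1
        else:
            seen[digest] = path
    return relinked
```

Note that this only works because RPM files are immutable once published; hard-linking mutable files would let a write through one snapshot corrupt all the others.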
And every few months or so, you have to make some change to unbreak it. Maybe it's not your fault that it broke; it's broken now due to external factors or new use cases. And at this point, like Rick and Morty, you're probably burning yourself out. It's a pain in your side, not the biggest pain in the data center's operations, but a medium-size one that interrupts your work. I'll let David Davis explain what Pulp is and how it can solve the problem. So Pulp is a software platform for managing software packages. It's not strictly limited to RPM, although I think that was probably the first use case it was designed for. So yeah, it's a content manager. Here are some of the other content types that you can see, like Python, Docker, Chef, RubyGems, and so forth. It has a plugin architecture: there's a core, and then there's a plugin for every content type. It's open source, so people can develop their own plugins if they want to add a content type, and we have several that are maintained by community members. If you're interested in looking at Pulp, we have a website and we're also on GitHub; all the code is on GitHub. And it's written in Python; the new version is in Python 3. Some of the new features we've added: we have repository versions, so any time you make a change to your repository, it creates a new version, so you can easily roll back, but you also get easy promotion. If you have a new version of a repository, you can distribute it and make it accessible to clients in a matter of seconds. Also, we have dynamic web APIs in Pulp 3; this was kind of hard in Pulp 2, and it allows you to mimic the Galaxy API. We have improved performance with asyncio in Pulp 3 on Python 3. And we've expanded deferred downloading, which means Pulp won't actually fetch a package until a client needs it, so it saves resources.
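The repository-version idea is easy to model: every change produces a new immutable version, and promotion or rollback is just repointing which version clients see, with no content copied. This toy class illustrates the concept only; it is not Pulp's actual API:

```python
class Repository:
    """Toy model of the repository-version concept: an append-only
    list of immutable package sets, plus a pointer for distribution."""

    def __init__(self):
        self.versions = [frozenset()]   # version 0 is the empty repo
        self.distributed = 0            # which version clients see

    def add(self, *packages):
        """Adding packages creates a new version; returns its number."""
        self.versions.append(self.versions[-1] | set(packages))
        return len(self.versions) - 1

    def remove(self, *packages):
        """Removing packages also creates a new version."""
        self.versions.append(self.versions[-1] - set(packages))
        return len(self.versions) - 1

    def promote(self, version):
        # Promotion/rollback is instant: just repoint, no copying.
        self.distributed = version

    def content(self):
        return self.versions[self.distributed]
```

Because every version is kept, rolling production back to last month's state is a one-line pointer update rather than a re-sync.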
And we have API docs that are auto-generated. A few of the improvements under the hood: we're using Postgres instead of MongoDB; the old version used MongoDB, and it was a memory hog and hard to manage. We're using RQ, which has a smaller footprint than Celery. We have an API, we're not using symlinks, and there's a lot less code as a result. That's pretty much it. All right, a round of applause, thank you guys. Apparently you should go get some Pulp 3. All right, we're going to do one more and then I think we're out of time. Oop, sorry, thought this was off. This will always go to the wrong one for me. Sorry, I've just got to find the window. It's probably letting other people use your computer. All right. It looks like women in open source have the next one. All right, but I don't remember who was actually doing that one. All right, sorry. Did you send me slides? Up on the one called Lightning Presentation. Hi again, I'm Gabriella. I'm Shruti. And our lightning talk is on great women in open source. In a recent GitHub survey, it was found that only 3% of respondents identified as female, which really showcases the lack of female representation in open source. In our talk, we want to specifically highlight two women, Mitchell Baker and Danese Cooper. But before jumping into that, we want to briefly touch upon the history of open source as a whole. So what is open source? The idea of sharing recipes and things you've made with other people has been around for a long time, but the idea of doing this with software was really introduced in the 1980s. The free software movement started in 1985, and it essentially means that software users have the freedom to copy, change, and redistribute the code. The open source movement branches off of that, and it is essentially about using the community as a resource to collaborate on the code that you write. And a large part of this company is open source.
And fun fact: the person who coined the term open source is a prominent female face in the community, Christine Peterson from the Foresight Institute. So Mitchell is the current head of Mozilla and has been since its inception in 1998, and she's also widely regarded as one of the internet's pioneers for bringing open access to people around the world. In 2002, Mitchell joined the Open Source Applications Foundation, where she used her position to advocate for open source technology. Danese Cooper has the nickname of the open source diva due to her contributions. She started at Sun Microsystems and actually quit because Sun wasn't involved in open source at the level she really wanted; so then they made her Chief Open Source Evangelist of Sun, and now she's been part of the community for 17 years. She's been known to knit through her meetings, so that's a fun fact about her. So the next generation of women in open source begins with us. Our internship at Red Hat has really been our first experience with open source, and even our first experience at a tech internship in general. Throughout the summer, we've learned so many new things; it's been a really great experience, and we've learned to use a really huge variety of open source technology. Yeah, and here's just a quick slideshow. She's working on Ceph, on the Ceph team. I'm working as a data scientist, so I use JupyterHub and a lot of open source stuff. Let's see. So, our advice moving on, what we've learned: asking questions and being vocal was something we really had to practice. Being women in this community, it's really important to be heard, because there are so few of us. Go ahead, sorry, that's you. Oh. Our hope is that this talk will highlight the significant contributions that women have made to the open source community.
And we also hope to provide some inspiration for other women in the field, despite the numbers. That's the end of our talk, and thank you so much for listening. So, just hang on. Yes. That's all the time we have for today. But I'd like another round of applause for the presenters who spoke, as well as those who offered to. I hope that was enjoyable. I kind of liked this format; maybe we'll do something similar again next year, and it'll be a little less rough, but I wanted to see if we could make it work. So thanks, everybody. We're actually having a coffee break right now, and then more talks again in approximately 20 minutes. So thanks a lot, I hope you see some good stuff, and I hope you're a little more awake now. I'll talk to you again later.