I actually think I should leave it to you. Okay, so let's start the lightning talks. One last announcement: there are four more entry tickets to the party in the evening. And our next speaker, Nalin, will tell us something. I don't want to mess up your name, so welcome.

Okay, if people have difficulty hearing me in the back, just raise your hand or something and remind me to raise the volume.

So I'm here to talk about building container images without using dockerd — or in fact, at this point, without using Dockerfiles, because we're talking about some really brand new code here. If you've been building container images for a while, you've noticed that they're essentially composed of multiple tarballs, one or more, each one representing a single layer in your image build model, plus some metadata describing what order they're applied in to create the root filesystem of the image, and then of a container that's based on that image.

When we started working on CRI-O, one of the things that was explicitly out of scope was building container images. So earlier this week, we were throwing ideas around on what a tool that did that for the CRI-O ecosystem would look like. Naturally, it would use the same libraries that CRI-O uses for storing images and storing containers, including the working containers that are being used to build an image. It would probably use the image library to pull things down and write them to destination locations. So we started putting something together. The tool, which as of right now is about 73 hours old, is called buildah, because those of you who've met Dan know that he pronounces Docker as "Dock-ah". It has had a couple of other names, and they all failed the Urban Dictionary test. So this is where we're at now.
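The tarballs-plus-metadata idea can be made concrete with nothing but coreutils. This is an illustrative sketch — the manifest shape below is deliberately simplified, not the exact image spec:

```shell
# Build a tiny root filesystem and pack it as one layer tarball.
mkdir -p rootfs/etc
echo "hello from layer 1" > rootfs/etc/motd
tar -C rootfs -cf layer1.tar .

# The image metadata records the layers by digest, in application order.
digest=$(sha256sum layer1.tar | cut -d' ' -f1)
printf '{ "layers": [ "sha256:%s" ] }\n' "$digest" > manifest.json
cat manifest.json
```

A real image format (Docker or OCI) adds a config blob and more fields, but the shape is the same: content-addressed layer archives plus a small JSON file giving their order.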
It may move; it's probably not going to stay in my personal repository forever. Essentially it's a command-line tool with a couple of subcommands: one is from, which vaguely implements FROM; one is commit, which vaguely implements commit; and there are mount and unmount. Between those, you can mess around in the root filesystem, extract your changes into a new image, and maybe build another container out of it. And as of this morning, the code actually works.

So the rest of this talk is essentially us watching a shell script run the commands: building a sample image, writing it back to the storage that OCID is using, verifying that you can see it in OCID, and then maybe building another container image on top of that. So without further ado — actually, do I take questions now? Okay. Everyone clear so far, though? All right, let's see.

This is not a very smart script; it's just restarting OCID. Did I just kill it already? Okay, now this is just me checking that we've got a couple of images. I previously pulled down Fedora and Ubuntu images because I don't want to fight with all of you for Wi-Fi right now. Now we're creating a working container: this is the script invoking the from command, telling it the name of the source image, and it just prints out the name of the working container. There's a mount command which can mount it, and it prints the location where it's been mounted. That /var/lib/containers/… path is an actual live root filesystem. We're going to add a file — basically just create a very small file in the mount point. Then we use the commit command to actually commit it. I think that's running right now; it's going to take a little while.
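The from/mount/commit cycle the script demonstrates looks roughly like the following. The subcommand names match the talk, but treat this as a sketch: it skips itself when no buildah binary is installed, and even then it needs root and a pullable fedora image:

```shell
# Sketch of the demo workflow; degrades to a message when buildah is absent.
if command -v buildah >/dev/null 2>&1; then
    # from: create a working container; mount: expose its live rootfs as a path;
    # commit: extract the changes into a new image; rm: throw the container away.
    ctr=$(buildah from fedora) \
      && mnt=$(buildah mount "$ctr") \
      && echo "built at demo time" > "$mnt/demo.txt" \
      && buildah commit "$ctr" demo-image \
      && buildah umount "$ctr" \
      && buildah rm "$ctr" \
      || echo "buildah present but the demo could not run (needs root and a fedora image)"
else
    echo "buildah not installed; commands shown for illustration only"
fi
```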
What commit is essentially doing is extracting all of the image layers of the original image, extracting the contents of the changes that we made — which is just adding one file — storing them all in a temporary directory, and using the image library that skopeo uses to copy it back to the destination. In this case, I've told it to copy it right back into the image storage that we're using in OCID. The next thing I'm going to do is ask OCID, which has been running the whole time: hey, what's there? There's now a third image, the one I just created. I'm going to unmount it and delete it, which is fine because we don't need it anymore.

Now we'll try it without OCID; I'm going to stop it. Let's go ahead and use that image we just created. We've just created a new working container using the image we created 30 seconds ago. Now we're mounting it. We check that the file is there — yes, it has contents, so the change was correctly added on top of the previous image. We're adding another file, but we're not going to really test anything about that. We're going to commit this to yet another image name. Once again, we're extracting all the layers into a temporary directory and then importing them right back into image storage. I was hoping this would run a little bit faster, because I didn't have a lot more to say in that sentence. Okay, now let's unmount it and delete it. We can actually start OCID and ask it, and now we have four images. Now we clean up, because I ran this script like 50 times while I was testing it earlier today.

That is the end of the demo. This code is in GitHub right now, and it's probably going to move, so don't depend on it staying where it is or having the same name, because it's early days yet — literally. Any questions? Looks like I have five minutes left. No, in fact, I believe... it's not been running the whole time. Oh, that's not helpful. Oh, sorry — repeat the question.
Dan was asking: do I need to have the Docker daemon running at any point during this? The answer is no, because it's using the same infrastructure that OCID uses, and OCID does not have a runtime dependency on Docker either. It's, in fact, using the same libraries that we used to build OCID. It's not running on my system — and I actually forgot to check on that, so thanks for reminding me, Dan. Anyone else? Yes.

Any destination that skopeo knows how to write to, we can specify as the output location when we commit the image. In earlier testing, I was using the directory output, where skopeo just dumps everything into a directory, tarballs and all. In this case, I was dumping it back into the image storage that OCID is using. Those are both available. Someone else had a hand up — yes.

The question is: am I retaining the layers? Currently, we're retaining all the layers from the original image, plus the one for the working container that you've made all your changes in. There is currently no facility for creating multiple layers between the original image and the point where you tag a new image, because I haven't written it yet. It's really new code.

Okay, Antonio, you had your hand up. Oh, yeah — it's github.com/nalind/buildah, which again will probably change at some point, but hopefully there will be a pointer there telling you where it went. Yes, it is in Go, because it's built using Go libraries, and as Dan mentioned, I think in an earlier talk, that's what all the cool kids are using these days. Anyone else? Wave your hand if I'm not seeing you. Okay, thanks very much.

Okay, thanks. That was a really nice lightning talk, even faster than I expected. And we still have a few entry tickets to the social event on the table there. If somebody wants to attend and drink beer, please come pick them up, or I can deliver them. Huh? Oh, fine — not for you. Yes, welcome
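The "any destination that skopeo knows how to write to" answer refers to skopeo's transport prefixes. This is standard skopeo usage showing the dir: case the speaker mentions, guarded so it degrades gracefully without skopeo or network access:

```shell
# Copy an image into a plain directory: layer tarballs, manifest, and
# config land as ordinary files under the target path.
if command -v skopeo >/dev/null 2>&1; then
    skopeo copy docker://docker.io/library/alpine:latest dir:/tmp/alpine-dir \
        && ls /tmp/alpine-dir \
        || echo "skopeo copy failed (no network?)"
else
    echo "skopeo not installed; transport syntax shown for illustration"
fi
```

Swapping dir:/tmp/alpine-dir for a different transport string changes the destination without changing anything else, which is the point being made in the answer.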
Tim, for his lightning talk — Tim, welcome.

One of the questions I've gotten quite a bit over the last couple of days is: what is Fedora doing for CI? Are you guys doing anything? I figured maybe it's time to do a lightning talk. So, getting started: defining — and I purposely put that in quotes — "microservices", because what we have are probably more like mini-services, but I think it's close enough, and it's a buzzword. We basically have separate services for different concerns. There's a system that's only handling results. There's a system that's only responsible for listening for fedmsg messages and triggering the different tests in response to things that go on in Fedora. And that works, as long as the interfaces are well-behaved.

So you can ask the question: well, is there CI in Fedora? And I'm going to answer that with a question: what is CI? In the context of what I'm talking about here, it's a job, generally for testing, spun up in response to an event within Fedora, usually a completed build. Wow, I have duplicated slides. So, in that case: yes, yes, we do have CI in Fedora. The way it generally works is we have a build. Once that build is done, there are things that are run on every build in Fedora, which store results in a place that anything can query. And then there's the question of whether there are specific tasks: we do support having different tests for an individual package and for different modules. If they're there, they get run. If not, the robot cries, because his heart is broken.

This is a demo, and this is going to be on the wrong screen. So this is showing off some of the new — that is not what it's supposed to look like. No, not really. All right. This is just bringing up the Docker containers. So this is our new interface, or will be our new interface. Basically this is running rpmgrill on libtaskotron. Right now it's not actually running; it's scheduled to run soon — it's queued up.
Most of this process is going to be familiar, but it goes through the steps as it does them — cleaning things up, and so on. All tasks that we run, all tests that we run in this CI system, are whole Git repositories that the author has control over. So: cloning the repository, running the test suite — which I kind of wish was faster, because we're all standing here — but it's almost done. You just have to ruin my fantasies, don't you, Adam? All right, so then it goes through, and at the end of these we do store and host all artifacts. If it produces log files, if it produces just about anything, we save that.

This is another part, which is what we're calling a dashboard. You can see right here — this is the thing, well, it just happens to be the one that just passed, but really, let's see if this gets any better. I have screenshots. All right, so here we have two of these in the same update. You can click on any of these to get to the logs, and that ends quickly, so that's exactly why we can go through the demo highlights. So this is that interface — which just happens to be the first one listed — and we can see that the thing that just went through passed. We can do searching here: these two failed, but this one passed, and because it's an update, there are multiple packages in it. This one's been fine; this one, you might want to look at. So let's click on that and actually get the logs — wrong button — the logs from this, and these are taken directly from the system that runs it.

Finally, if you want to get started, come find us — either come find me or come find Martin, in person or in #fedora-qa on Freenode. There's our mailing list, and I'm doing a talk tomorrow at 3 o'clock if you want to see more of the direction we're going in Fedora. So, in the end: don't make the robot cry — we want happy robots. Questions? Because there are only 10 minutes and I don't want to do live demos with network components.
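The flow just described — tests live in a Git repo the author controls; the runner clones it, runs the suite, and keeps whatever artifacts it writes — reduces to something like the following sketch. The repo layout and the runtests.sh entry point are invented stand-ins, not the actual Taskotron code:

```shell
# Hedged sketch of a generic runner step: clone, run, collect artifacts.
testrepo=$(mktemp -d); work=$(mktemp -d); artifacts=$(mktemp -d)

# Stand-in for the author-controlled test repository.
( cd "$testrepo" \
  && git init -q . \
  && git config user.email demo@example.com && git config user.name demo \
  && printf '#!/bin/sh\necho PASS > result.log\n' > runtests.sh \
  && chmod +x runtests.sh \
  && git add . && git commit -qm 'the package test suite' )

git clone -q "$testrepo" "$work"    # clone the repository
( cd "$work" && ./runtests.sh )     # run the test suite
cp "$work"/*.log "$artifacts"/      # store and host all artifacts it produced
cat "$artifacts/result.log"
```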
So, what he is pointing out is that Bodhi, the update system in Fedora, will also query our same results system, so if you add tests for your package, they will show up in Bodhi on the pages for your update. Any other questions? Go ahead. The question was: is there a way for someone who's not the maintainer to add tests? And the answer is yes — with the maintainers. The way we have things set up, with the default way to put things in, a test can change how the package is gated in the future, so they kind of need to know about that. But if you're adding something separate, as long as it's in a separate Git repository, yes — just come find us and we'll help you get that added. Other questions? Okay. Well, thank you very much.

One, two, three. Let's start. Yeah, exactly. Maybe I'll give someone more time. So hello, everyone. My name is Stanislav. I've been part of DevOps automation in Brno for a while. About one year ago, I was the only person here in Brno; right now there are, I think, 14 of us. So we've grown a lot, and we have growing pains, because some of those 14 people are interns or part-timers or people who just finished school, and they don't necessarily come with a lot of experience with Git and everything that goes together with it, especially code reviews. It was happening quite a bit that when I was reviewing their code, I'd say: well, this looks mostly fine, but can you please just split this commit in two? And I saw the fear in their eyes. Like: oh my God, I know how to do git commit and git add and maybe some other things, but how do I split a commit?

Git documentation is great, and there's a lot of literature — the Git book and all the good stuff, I'm sure. But what I was missing was a way to give them hands-on experience with more than just git init and git add and the very basic stuff. So, how many of you have seen try.github.io? How many of you have tried it? I see one, two... Well, very few have.
So I'm not going to go over this; you can go have a look. It's an interactive web interface which tells you: run git init, and it will do something, and it explains things, and it's very nice — but it barely covers the basics. It covers, like I said, git add and git commit and git push and a couple of others like git merge, but it doesn't really cover, okay, how do you split a commit in two, or how do you add just part of a file and not the whole file, if you were coding, coding, coding, and you forgot to commit, and then you realize: hey, actually I want this as two commits.

So what I came up with is an extension, or continuation I guess, of try.github.io, which looks something like this. I have a Git repo on GitHub with submodules — so separate Git repos for different tasks — and I'm going to run through one of them, the one I was just mentioning: splitting a commit. When you go into this repo, it gives you a task, and the task is: split the latest commit into two smaller commits, one for each method that was added. So this is our latest commit: we added square and print square methods. Let's see how we would split it up; I'll go through it quickly. I'm going to git reset — git reset is --mixed by default — to the previous commit, right? That gives me those changes reverted from the commit but still in the current directory. So I'm going to add one of them with git add -p. I don't know how many of you are aware of this, but you can split a hunk and say: okay, I just want to add this part. So I did s, and then I'm staging just the first hunk, and quitting. With this I know that I have only this in the index — only the first method — so I'm going to commit it: "add square". That leaves just the last part, so I'm just going to git commit -a: "add print square". And as far as I'm concerned, I think that should be it, right?
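The split shown above can be reproduced end to end. This runnable sketch uses two files so plain git add is enough; with both methods in one file, you would use git add -p and the s key to split the hunk, exactly as in the demo. File and message names are invented:

```shell
# Throwaway repo with one commit that adds two "methods".
repo=$(mktemp -d) && git init -q "$repo" && cd "$repo"
git config user.email demo@example.com && git config user.name demo
echo 'print("hello")' > main.py
git add . && git commit -qm 'initial commit'
printf 'def square(x): return x * x\n' > calc.py
printf 'def print_square(x): print(x * x)\n' > printer.py
git add . && git commit -qm 'add square and print_square'

# Step 1: undo the commit but keep the changes (reset defaults to --mixed).
git reset -q HEAD~1

# Step 2: stage and commit each piece separately.
git add calc.py    && git commit -qm 'add square'
git add printer.py && git commit -qm 'add print_square'
git log --oneline   # the one commit is now two
```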
Each of these repos has a solution branch, which tries to show how things could be done. Actually, each repo also has a README file that describes how to get the solution; it should be the same for all of them. So git show solution will show you the solution, which is just this commit on the solution branch. I'm not necessarily saying this is the only way to do things, but that's my idea. Right now there are three examples, and I have a bunch of issues that I filed for myself in GitHub with ideas for additional things that could be added. Yes? Oh sure, I can make it bigger — is this better? Cool.

So I currently have three tasks made, tested, and seemingly working. I would like to add more, and this is my way of inviting other people to contribute: see if you can maybe add your own, if you think this is worth it. And maybe I wouldn't mind putting this in a web app similar to try.github.io, but I'm not a web developer, and I don't want to start doing those things, because it would be a disaster for everybody involved. But that's it from me. Any questions?

So, where is it? The question was where it is on GitHub. The GitHub address for this is github.com, then my name, slash git-challenges. I'm not sure that's going to load — there are not that many people on the network now, so it's there. I can, I don't know, try and zoom it. Like I said, I created five issues for myself to keep track of some more ideas, like rebases. And I really like, for example, that you can do git add -p and then e, which opens up an editor, and you can select line by line whether you want to add something — so there's an example for that as well. There's merging and rebasing and all kinds of things. Some of the examples — not this one, but some of them — do require some setup. I haven't figured out a good way to do it, so I have an instruction like: please, before you begin, do git reset something, so that I can get the repo into a specific state. That puts a requirement on the user, but that's it. Any other questions?
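The solution-branch convention is plain Git; here's a minimal sketch of how a task repo might carry its own answer (names and messages invented):

```shell
# Task repo whose answer lives on a branch called "solution".
repo=$(mktemp -d) && git init -q "$repo" && cd "$repo"
git config user.email demo@example.com && git config user.name demo
echo 'task starting point' > task.txt
git add . && git commit -qm 'task setup'

git branch solution                 # the solution gets its own branch
git checkout -q solution
echo 'one possible answer' >> task.txt
git commit -qam 'solution: one way to do it'
git checkout -q -                   # back to where the learner works

git show solution --stat            # inspect the answer without switching to it
```

The learner never has to leave their working branch: git show solution prints the answer commit, which matches how the repos in the talk expose their solutions.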
Thank you very much. Yeah, okay. So, okay, let's — we'll start in a minute. I guess everybody can hear me, so now I have to stop being stupid and start presenting.

Welcome to my talk, called Fixing Community Infrastructure Security With This Proposal, which is a long title. For people who do not know me: my name is Michael Scherer, I'm a system administrator at Red Hat, working in the Open Source and Standards team. I'm not going to spend too much time on me, because it's a lightning talk and it's supposed to be fast. People may have seen me last year at this conference, speaking about the curious case of the Gluster.org shell server that got compromised. To remind people: it was quite bad. Someone coming from a country not far away managed to get root access on one old server. That old server had a /home shared over NFS, with SSH keys, credentials, GPG keys — SSH keys connecting to the root accounts of other servers — and Apache.org... And it happened one week after the other curious case of the SAF.com server that got compromised for the same reason: an old server, no security, etc., etc. And, well, I found that this happens quite often.

So, just for people to estimate: since 2000, how many such incidents have happened across whatever communities you can think of? I'm ready to offer a lunch to whoever gets the exact number — so please come get a lunch ticket tomorrow, because I did get tickets and didn't spend them. 53? Yeah, no, that's not it. Well, I know, because I did the count — so far I found that many, though it depends how you count, and I have a list that I plan to publish. And the problem is that too often it leads to backdoors. I don't know if people remember, but there were issues with the Linux kernel, with IRC — which is business critical for me — with FreeBSD, with jbm, with Ubuntu, with SUSE, with Red Hat, with Fedora, with SAF, with Gluster, with Piwik, with MySQL, with PHP.net, with the Ruby world — basically everybody. And so it started: when we discovered the
security incident, it went up the management chain, and we said: this is not normal, someone needs to fix it — what about the people who found it in the first place? So I'm trying to fix that, and we started with what big companies do: starting a working group. For that working group we have a simple three-step process. The first step is to discover what is going on and try to get a list of everything happening: why are people not doing security properly? Is it too complicated? Is there not enough time? Can we help them? We are still not sure about the second step, which requires getting something out of the first step, and we are quite sure that the third step will be profit. For now, we are still on the first step, and I'm here to request your help.

I need to find all the people who are doing system administration for free software infrastructure — be it for a website, a CI system, a download server, anything. And I managed to come up with a catchy slogan, which seems to be inspired by the Department of Homeland Security: if you see a sysadmin, say something. So for that, I just want you to come and contact me. It's quite easy. If you have a Twitter account, don't use it; if you have a Facebook account, don't use it, because I'm not using those. If you want to contact me by LinkedIn, please don't — I get enough spam from them. You can send me an e-mail at misc@redhat.com; I usually answer that e-mail, unlike my personal e-mail, which I never check — so sorry if you are a Nigerian prince trying to give me money. You can also ping me as misc on whatever IRC channel you can find — you will be surprised — or you can just look for me tomorrow: check for someone with blue and purple hair; that would be me. If you have any question or any suggestion about this, I'm open to all questions. I think we still have 5 minutes for that — unless the question is "you should write a security guide", because that's what everybody
told me, and the problem is not writing more documentation, it's getting people to read it. So that's just to answer that question preemptively. And that's it — thanks for coming. I also have two other presentations tomorrow, with more fun, more interesting stuff, and likely more slides, because they're 20 minutes. Thanks. What? Oh, I think he is still in the other booth, so no questions like "how do you manage to be so fabulous?" You can ask whatever you want. No? Thanks for listening to me.

Do you guys want to start? Okay, awesome. First of all, thank you for being here — I really appreciate it. I also want to thank the only voter who actually voted for this talk. I don't know who you are; it wasn't my colleague — actually, I asked him to vote, I promised him it would be cool, sorry. I thought about calling this something like "running browsers in Docker", or distributing browsers, running them — I mean, there are so many angles, and I honestly don't know exactly what to tell you, because this is a big topic: testing in general, running Selenium.

Who of you has actually used Selenium before, so you know what Selenium actually is? Okay. Who of you has abandoned Selenium at some point in life? Okay, one guy — and someone still thinking, maybe. The thing is, Selenium is actually a great testing API. The only problem is that when you try to roll it out in production, you tend to run into inconsistencies. For example, to actually run a browser, you have a library for a specific language — say, Python or Java — and you have to have a binary driver which does the talking to the actual browser. Now, any of these can go out of sync: you can have the library version mismatching the binary driver, or the browser mismatching. And here's where the fun starts, because from that moment on you have to maintain that infrastructure. Not only is the API not exactly user-friendly, but all the infrastructure adds a lot of weight on people. And I
created Germanium because I actually have a tester in my team — and I kid you not, this is all true — who is 63, this guy. Now, this guy, he doesn't actually use... I don't know if you know, for example: if you want to do something in a browser like a control-click, in the land of Selenium you would first press down Control, then you would send a click, and then you would release Control. Now, this may sound funny, but when you actually have to type it all out — you have to create an action chain, and then you have to put in there, like, key down Control (these are actually in constants classes, so you have to put Keys.CONTROL or whatever), and then you click, and then you release it — it gets less interesting. It's not exactly as fun anymore when all I want to write is: I want to test the website, whether it functions.

So, in order to do that, I decided I would create an API which would be simple for my guy to use. And — oops, okay — the idea would be something like this: I want to be able to just open a browser, saying which browser I want to open, and go to a page, and then do some asserts — like, I know the title is Germanium — and then I click, and I do some other stuff, like waiting for the page load and then waiting for the actual text to be there. Okay, so this is the API; it's the first step in making the testing easier. If you notice, in here there isn't any CSS or XPath locator of any sort, and for the most part I've tried to get rid of them.

I actually wrote a script — where is it — this is my script, in here, for how I'm filling in my time sheets. This is another joke — this is what I'm actually using. We have to fill in time sheets every month, so what I'm doing is just getting the event viewer events from Windows, exporting them to CSV, creating a shell script, and iterating over it. But again, the key thing is that this one actually runs on Outlook, right? So, in order to do that — if you notice, there isn't an XPath locator: you don't see
XPath, you don't see CSS. What you see, though, is: I'm clicking an input which is right of a text. What happens if this text appears multiple times? Well, it will only find the input which is actually right of it. It doesn't blindly keep everything which is to the right; it also does matching on width and height and so on and so forth. These are complexities which you would have to think about yourself if you were writing a CSS locator or an XPath locator. In the world of Selenium, you would actually find all the text elements, even the ones which are not visible on the page, right? Here, these are filtered out automatically.

The same story goes for typing keys. If you notice, in there, there is a type keys with Control-A, Delete, and then I'm using some Python formatting and throwing some other stuff into an input which is right of a text. Why is that? It's because I want to press Control-A and Delete — everything there is that simple. If you had to do that in any other API, you would have to send some enums of some sort, you know, for no good reason. I don't know — I wouldn't like it, honestly, I wouldn't like it. So I decided it's going to be the simplest API I can possibly imagine, and most of the work should actually be done by the API itself. Okay, so this is the first thing. There are a lot of things — it's a long project, and there is a lot of documentation on the site as well: all the actions, drag and drop and so on and so forth, and all the ways of selecting elements — they are documented there.

Now let's get to the next part: the infrastructure. We already discussed the first topic, which was roughly: how I make writing the tests simpler. The second one is: how can I manage my infrastructure? On one hand, I decided to create two versions, for Python and for Java, which actually do the driver management by default, so I don't have to download binary drivers. I actually package them in a package, and then you
can just do, for example in Python, pip install germanium — it's actually my package — and then you can do the following: just open a browser. I did the import before: open browser, and we write "chrome", and this will actually open Chrome on my machine. The driver is already packaged in, so I can do an action like: go to, whatever, Google — sorry, Google, some stuff will go down, okay — and then I can shut it down: close browser.

Now, this is all fine, but if you noticed, the actual browser opened on my desktop, right? If I want to do real, serious testing, I don't want that. I don't want browsers opening on my desktop — I can't use the computer, you know. So, we live in this wonderful world of containers. Selenium has a project called Selenium Hub — Selenium Grid, actually, is the project name. The idea is: what if you could provision some containers which would actually run the browsers, in a container, right? Makes sense — you can just run them in there independently. So I created some Docker containers which fire up the hub, and there are also two more Docker containers, one for Chrome and one for Firefox, which actually run the browsers in there. In order to do that, we'll just start them. I have a script — it's also on the Git page — germanium-run-grid. This is published on Docker Hub, the public one. I'm first trying to destroy the old containers — I'm not super attached to my containers, you know. And what happens now? You see, before, we had only two containers, which were Nexus and Jenkins. If we now do a docker ps, we have these three other containers, which are Firefox, Chrome, and the hub. Let's actually see if they're available. We can connect to them — I have a VNC server in them, and I'm using a VNC viewer to have a look. Okay, so we open Firefox. Firefox is an interesting thing, because, for
example, right now they have broken Selenium support, so having it pinned down in a Docker container is the only way — it connects to the grid and we can access it; otherwise we would have to use the ESR release. Now, if we go to our grid, we see there's some stuff running there — it's actually configured to run up to 10 browsers, and so on and so forth. In order to open a browser there, if we go back to our code, what we do is: we open the browser, and we say "firefox" — but I don't want a local Firefox, right? So what I do is I just append the location of the hub, which is going to be on there — I think /wd/hub — and its port. Sorry. Now this one should open the browser using the hub — which it actually did. But if you notice, now there's no browser open on my machine. I still have the same commands as before, you know; I can still go to Google — let's go to Google — sorry, wrong address, doesn't matter, okay — and I ended up somewhere, you know, whatever that is. And then, if it finishes, hopefully, Mr.
Mandit Martinez, whoever you are — yeah, I won't see whatever's in there. Okay, now it's finished; let's close it.

Now, on purpose, I've ignored an elephant in the room: there's one browser missing in the story, if you noticed — so far — the best browser in the world: IE. Now, IE is very tough to automate, you know, because it needs focus and it needs a bunch of stuff. The good thing is Microsoft offers virtual machines: on a website called modern.ie you can download virtual machines, and what I have in here, magically enough, is a virtual machine, more or less pristine, from the good guys at Microsoft. I also made another thing, which helps me provision these machines. This is also on the page, written in Python. Everything is open source, Apache-licensed — so you know, I'm not selling you anything now; afterwards, if you want to, let's talk. This is an open source project, all of it, everything you see here.

This tool here will normally connect to the internet — you need to run it as an admin. Oh yeah, I actually ran a build: this is a modified version which uses — how do I call that — the local IP, so I'm not going over the internet; I just patched it, which is why it's called "main". If I refresh — actually, this is deleted, or it's not, but I can't access it. And then this tool asks questions. It checks your path and sees: okay, you have IE — okay, fine, do you want me to support IE? You say yes. For all the defaults, except one question, you just press Enter — yes, yes, yes, okay, fine, whatever. And then: where is the hub URL? This is the question you actually have to type an answer to, right? And here, if you type the address — one-two-two, wherever the address was — I already have it configured, so it doesn't matter. It will ask you one last time: is this what you want to do? And if that's the case, it will start pulling in Java for you, if you don't
In my case it actually detected that I have Java 8 installed, so it's not going to do anything. It detects, for example, if you have Edge as well. I tested it only on IE and Edge; I don't provision Chrome and Firefox on these machines. I mean, I'm not going to install a Windows virtual machine when I have Docker containers which can run my Firefox; it's much easier to scale the Firefox and Chrome ones. And at the end, and this is again not a joke, try it out, it will create a desktop shortcut. The moment I double-click it, it also creates the configuration and everything for the hub. This goes onto the C drive by default, into the germanium folder; that's pretty hard-coded at this stage, but it's okay. It downloads the drivers and the Selenium standalone server and so on and so forth, and it creates a batch file which actually registers my IE node. So in order to provision my whole Selenium grid I need roughly ten minutes, and downloading the machines is actually what takes the longest.

That covers the thing I want to test; how the Germanium stuff itself gets tested is where we get into the interesting part. In the Germanium project I'm actually releasing a bunch of things simultaneously: the germanium Docker image, the default one, which just has Python, so you can run the tests inside that machine; the grid; and the APIs. Now, to make sure that stuff doesn't fall apart, I'm using pipelines in Jenkins; this is the Blue Ocean interface, FYI, which looks much cooler. For Python, for example, I'm actually building for Python 2.7, 3.4 and 3.5, and I'm using Nexus to host my own Python package index, because I don't want to release the drivers to the public and then realize I screwed them up and they cannot be run, because something else is broken. So I test against it in this network.
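The generated batch file presumably launches the Selenium standalone server in node role and points it at the hub, which is how a classic Selenium Grid node registers itself. A rough sketch of generating such a file; the paths, jar name, and helper functions are my assumptions, though the `-role node -hub .../grid/register` flags and the `webdriver.ie.driver` system property are the standard selenium-server-standalone conventions:

```python
def node_register_command(hub_url, jar_path, ie_driver_path):
    """Build the command line that registers an IE node with a classic
    Selenium Grid hub (selenium-server-standalone 2/3 style flags)."""
    return ('java -Dwebdriver.ie.driver="%s" -jar "%s" '
            '-role node -hub %s/grid/register'
            % (ie_driver_path, jar_path, hub_url))

def write_batch_file(path, command):
    """Write a minimal Windows batch file wrapping the command."""
    with open(path, "w") as f:
        f.write("@echo off\r\n" + command + "\r\n")
```

Double-clicking the resulting `.bat` (or the desktop shortcut pointing at it) would then attach the VM's IE to the grid.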
The network is completely isolated. I see that everything passes, all my Docker machines are running, and at that moment I can say: okay, everything works. Then I can publish and I can tag it. That's basically it; that's kind of my whole presentation. The thing is, if you want to build tests reliably, and to know it and have peace of mind: I don't release any of the APIs until my tests pass. The API is a subset; if you want the web driver, you can just call get_web_driver and you get the actual WebDriver instance, and you can do whatever you want with it. Maybe I don't offer you an API for something; I don't offer drag-and-drop of files from the desktop, for example. I plan to implement it, but I don't offer it right now. If you want to just upload a file with the file dialog, that is already working, again tested and whatnot. And you can still use the driver directly, because, you know, Germanium is Selenium. That's it. Questions?

[On running tests against several browsers at once:] No, you don't. When you're creating your grid, you basically say something like: I want an instance of Firefox. That's what you're telling your grid. My node is configured for up to ten, not ten nodes, ten Firefox instances running on that node; that's what the configuration value you're seeing there means. You always run with only one browser. It would maybe be a good idea to implement, but that wouldn't really be a feature of Germanium itself, because you get into inconsistencies pretty fast: what if one of them fails and you still have nine of them kind of passing?

[On what we're using it for:] We have a very complicated JavaScript application, with iframes, with whatnot, and we want to test it reliably across all these browsers, and we want to know. I didn't delve into it, but the selector stuff allows you to express objects without CSS selectors, which is very important for us; we don't want to get tied to any particular markup. We refactor it a lot; right now we're switching to Vue.
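The idea behind expressing an element relative to another element, instead of pinning it to a CSS selector, can be shown with a tiny self-contained sketch. This is purely my illustration of the concept using bounding boxes, not Germanium's actual API:

```python
class Box:
    """A tiny stand-in for an on-screen element: a label plus a bounding box."""
    def __init__(self, label, x, y, w, h):
        self.label, self.x, self.y, self.w, self.h = label, x, y, w, h

def right_of(candidates, reference):
    """Keep only candidates that start to the right of the reference element
    and overlap it vertically. A positional filter like this survives markup
    refactoring that would break a hard-coded CSS selector."""
    return [c for c in candidates
            if c.x >= reference.x + reference.w      # starts past the right edge
            and c.y < reference.y + reference.h      # overlaps vertically...
            and c.y + c.h > reference.y]             # ...in both directions
```

So "the input to the right of the Username label" stays valid even if classes and ids change underneath it.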
I don't want any of the CSS classes or view-specific details in my tests. A lot of people use constants; the big problem with constants is that they are not reliable. How do you relate them to other elements in the page? If you have a constant, the CSS selector or whatever, how do you say that it has to be in this other thing, when you can only find it relative to another element? You cannot do that with constants; in Germanium you have this idea of filters. And we're out of time. All right, so that's it for today. Some of you I'll see later at the party, the others probably tomorrow. Thanks for attending. That's it.