All right, we're recording. Hello, my name is Adam Miller. I am the newest member of the Fedora release engineering team; I currently work on Fedora release engineering, and before that I spent three years working on OpenShift Online, so I'm no stranger to OpenShift. It's a lot of fun. Today we're talking about next-generation tooling in Fedora release engineering, and that's a half lie. It is true that we're going to talk about that, but first we're going to cover some background topics. Some of them are fundamental to understanding what we are today as a project, what we were yesterday, where we're going tomorrow, and what that means to release engineering in terms of the things we need to cater to, the goals we have, and the tooling we need to alter, refactor, or build completely new in order to deliver those things. So, background. We're going to start off with what Fedora release engineering is. We're going to define what an operating system is, and that seems very base-level, but when we add a definition of what an application is, and discuss the separation (or possible separation) of those things, it gets interesting. Then: what is a package? What is a distribution? And how do these things relate both to the Fedora project and to release engineering? From there we'll talk briefly about Fedora Rings and where we're going with that from a release engineering standpoint, from a compose and build tooling perspective: what we have in place today and what we're going to work towards tomorrow. And I see Ralph in the audience, and I realize I forgot something on one of my slides that we were talking about for release tooling; I'll mention it here and add it to the slides later. So, background, to kick it off.
What is release engineering? I shamelessly stole a nice formal definition from Wikipedia: it is a sub-discipline of software engineering concerned with the compilation, assembly, and delivery of source code into finished products or other software components. As for the goal for Fedora: there are two URLs here, and because of the contrast they're difficult to see, but the slides will end up on the Internet at some point, so pull them up and read them. One is an overview of what Fedora release engineering is in a more detailed context, with definitions of roles and responsibilities; the other is the Fedora release engineering philosophy. What those two pages basically boil down to is: maintain sanity in the pipeline going from upstream source code to something that is consumable by the Fedora community and the user base, and produce and maintain tools that facilitate that goal. In a nutshell, that is our aim in life and what we want to deliver to the community and the project, and it revolves around this whole concept of build sanity, build reproducibility, and related attributes. Before we talk about the tooling and what these different concepts mean to release engineering, I want to ask a few questions and define a few things. What is an operating system? An operating system is software that manages your computer's hardware and software resources and provides common services for computer programs. Also shamelessly stolen from Wikipedia. At a fundamental level, this is what allows us to interact with our hardware.
It provides the core components that we need to do everything else. And what is everything else? Well, generally everything else is considered an application, at least in the current paradigm of what a computer experience is: not just the operating system but everything on top of it, and that branches and bridges from what we do with servers and big-iron mainframes down to what we do on tablets and cell phones. What does it mean to be an operating system? That brings us to the next question: what is an application? An application is a set of computer programs designed to permit the user to perform a group of coordinated functions, tasks, or activities. Again, I stole this from Wikipedia; I have my credit lines and I'm happy with that. Application software cannot run by itself but is dependent on system software to execute. And the part I love the most, which I felt needed to be said again: it is dependent on the system software to execute.
So it's dependent on the operating system. And that raises a question that I first heard from Alex Larsson; I don't know where it originated, but I heard it from him in a talk he gave, and I thought it was fantastic. I like to bring this topic up because I think about it a lot, probably to an unhealthy extent, just like I think about self-hosted versus non-self-hosted operating systems and primary languages. These are things that keep me up at night, because I am different, and that's okay, healthy or not. I'm a packager; I came into Fedora as a community packager, I love packaging, which is very weird, and I get strangely passionate about packaging software from different language frameworks: Python, Ruby, npm for Node.js, and that whole clutch that is Go packaging. Anyway: what's a package? If we have an operating system, and we have an application, and there might be some definition of where one stops and the other begins, what is the package? Well, a package is effectively a build artifact from source code that can be consumed, and what it is varies based on content versus code. If it's content, it's things like icons, wallpapers, or documentation; but if it's code, then it becomes an application, or it could be part of the operating system, because the base OS is built out of packages, and everything else is as well. So that gets us into the world of packaging systems, at least as a distribution is built; the next slide is "what's a distro?" In a distribution, you have a package format.
For Fedora it's RPM; for RHEL it's RPM; for CentOS it's RPM. That is the thing we centralize on. It's the common component, it's very powerful in nature, and I don't see any reason to get rid of it or try to change what it does at that functional, ground-zero level. But what about things that aren't RPMs? What about things that people in the community actively use, that are flourishing on their own: Python's pip, Ruby gems, Node.js's npm, Maven, and others I'm probably not very familiar with? I know CPAN is still a thing; there's PECL and PEAR, and all of these different systems out there that very large groups of people, in very vibrant communities in their own right, in their own piece of the technology world, are interested and enthusiastic about, the same way I'm enthusiastic about RPM. Maybe not quite as much; I get a little animated in my excitement, and that's good. But where in the stack do we differentiate? How much of that really needs to be brought into the distro? Actually, I don't want to get ahead of myself. This rolls towards "what's a distro?" A Linux distribution is an operating system made as a software collection based on the Linux kernel and, often, a package management system, and that rolls back to the earlier definitions. So for Fedora, what does this mean today versus what it meant yesterday? Yesterday, the distribution was a one-stop shop. Back when SourceForge reigned supreme, nobody who ran an operating system based on a package management system would just randomly grab things and compile them. Nobody ran that in production; that's not the way we did business.
It's not the way anybody who really had a good set of standard practices in place did things. However, we're starting to see a shift: people are running things from npm in production, people are running things from Ruby gems in production. Whether that's a good idea or a bad idea is up to the merit of the people doing the work to debate, and that's fine. But then you add containers: you have these Docker environments, you have rkt, you have runC, all these different things that let you put a contextualized sandbox around an environment that may or may not be what you traditionally thought it was. A long time ago, the yum repository was the place to be. That's where software came from; that's where you got your software, period, full stop. That's changing; we're seeing a fundamental branching out in the avenues for consuming software. It used to be a one-stop shop, and it no longer is. One of my favorite examples of this actually isn't Fedora. Jessie Frazelle is a Docker core contributor, and she's also a Debian maintainer, but she is notorious around the Internet for giving talks about running wildly interesting things in Docker containers. One of the most recent things she did was run Quake (or something like it) in a Docker container on her laptop, on stage at a conference, just to show it could be done. She has a GitHub account with something in the ballpark of 40 different applications, standard desktop applications she uses on a day-to-day basis, and she runs essentially her entire desktop in a set of Docker containers, with everything aliased and wired up. As for window managers, I use i3.
She's also an i3 user, and there's a launcher called dmenu, so when she runs a thing from dmenu it actually maps to a Docker container. It's very interesting to see: that's a completely different software delivery mechanism, pulling directly from Docker Hub as your registry; that's where you get your software from. And if you read her Dockerfiles, the software isn't actually coming from a distro package; the few I looked at were all built from source inside the container. I don't know if they all are, but the ones I looked at were, and I thought that was quite interesting. It's an example of something wildly different from how things used to be done. If you had proposed that to a room of system administrators or traditional Linux users five years ago, they would have laughed or thought you were crazy. But it's not that far-fetched anymore, because of how things have progressed and changed, and because a lot of these avenues for consuming content, code, and programs have matured over time. It's no longer some Wild West crazy idea. So, thinking about Fedora: where does the operating system end and the application begin? I put that question on the slide because I really like it. It's an arbitrary concept, but it's one you have to define, and I think it needs definition at some point, because the reality is there is a difference. Good examples of this are mobile operating systems: Android, iOS, Firefox OS. They have a very firm definition of where the operating system stops and the application begins, such that you can completely upgrade and maintain each one independent of the other, with completely separate life cycles. As an example: I'm a CyanogenMod user.
It's a community distro of Android. I'm a big fanboy of all things community and open source, and CyanogenMod is sort of a free build of AOSP plus a lot of extras. Anyway, I run the nightly builds because I'm crazy; I also run Rawhide on my laptop. They put out these nightly builds, and I can upgrade my operating system underneath my applications every day of the week, and my applications don't need to change. Nothing about them changes; the cache gets erased and the ART VM redoes its caching optimizations on reboot, but you can maintain these things in separate life cycles. If you take that concept into, for lack of a better term, a larger-scale operating system, how do you bring it down there? Well: containers. It's already being done right now. A prime example is RHEL 7 with certified RHEL 6 containers: you now have applications that ran on an older operating system, running on a different life cycle than the OS that actually touches the bare metal. I think that's fascinating, and the whole concept of what we can do with that moving forward in Fedora is very special. Going back to my comment about Rawhide: I love the idea of Rawhide. I want the hottest bits at all times; I want them on my laptop yesterday, today, and tomorrow. But right now, if I do a dnf update and pull in 130 packages, something can break. This actually happened: I pulled in 140-some-odd packages, something broke, and I started going to Koji, because you can't do downgrades; we don't keep history. It's just Rawhide.
It's the latest for the compose. So I went to Koji and grabbed packages one by one to downgrade them, and about 30 packages in I still hadn't found what had broken. I gave up, reinstalled my laptop with the current stable version, and moved on with my life. But if we had an OS-versus-application split, and the OS were, say, an atomic image, I could just do an atomic host rollback, switch to yesterday's tree, and reboot. All my application space would still be there, unmodified, but my OS would be different. We have a lot of advantages there: we can make Rawhide a more attractive option for developers and contributors in the community who want to work on the latest, most cutting-edge technology, with that safety blanket of easy rollback that doesn't modify or alter the application space. [Audience question] Well, that's actually an interesting question. Say, for example, there's a post-install or post-uninstall scriptlet or trigger in the RPM transaction that leaves some artifact behind when I do an upgrade or a downgrade, and now some functionality has changed. That doesn't happen often, but it has happened. Whereas with rpm-ostree,
it is literally a full tree, effectively a git-like snapshot of a file system with a given set of packages. So you have almost a guarantee that the tree that worked for you yesterday is going to work for you today, because it is unmodified. With a regular file system, if you did a yum or dnf upgrade and then a downgrade, you'd still have modifications on the system; its state changed. [Audience comment] I don't disagree with that, and it's a good point: a lot of functionality like that does work well, and I think that's because of the engineering effort, the extra QA effort, and the support teams behind those products. However, in Fedora space I'm thinking more about the cutting edge, the Rawhide nightly builds. We test zero of that, and we have zero guarantees that those kinds of things work; there could even be a bug introduced that breaks some of that functionality. That's more what I'm getting at, because from a release engineering standpoint we're always looking towards the next release. We maintain the current one and get updates out the door, but we're also preparing the next release and its deliverables, so I'm thinking more of those concerns than about long-term maintenance of a long-running system. [Audience comment] Okay, I don't disagree with you, but I want to keep moving, because what I'm getting towards is part of what Fedora as a project wants to do. Very quickly, on the more tightly coupled options: today we have Docker, rkt, and xdg-app. If anyone's not familiar with xdg-app:
it's a very interesting concept for running containerized desktop applications inside what is effectively a container sandbox, shipped holistically as an OSTree. I was actually just about to mention that there's a talk on exactly that right after this one; it couldn't have been more conveniently timed, and everybody should go. I will be there. I only have a cursory understanding of it, and I've only seen it work in a demo on the Internet, so I'm very excited to see it live. Anyway, this brings us to the next concept, one that's been mulling and manifesting and coming to light recently: think of it as the base ring being our operating system, with everything outside it as our application space. That's very interesting from a release engineering standpoint, because what we need to build, and how we build it, might change dramatically. Right now it's all one big clump: a giant package set, a giant repository, a giant composition of things where we generate metadata and track what needs to go into ISO images for the different products. So: Fedora Rings, question mark. What does it mean for release engineering?
We need to be able to adapt more rapidly. This is a big thing: we produce build artifacts, and our purpose is to deliver them as quickly as possible, to cater to the needs of the community and of the contributors. However, we need to maintain sanity within that; we can't let things become the Wild West. A great example of something that has enabled a lot of people to do things more rapidly is Copr. Copr has allowed people to build nearly anything, as rapidly as they like, on any release of Fedora, without breaking the bulk user base, and that's great; I'm a big fan. It caters to that need without breaking the core deliverable: the thing that, from a release engineering standpoint, defines the product that is Fedora Workstation, Server, or Cloud. It lets these things iterate independently, and people can optionally install them on top, so each has its own independent life cycle. Its delivery mechanism is still very similar to our traditional setup, but it can optionally override components within the core environment. In terms of what is actually delivered, we need to maintain this concept of facilitated sanity and stick to the release engineering criteria: things need to be reproducible, auditable, and deliverable. What that means varies by audience, and the link I mentioned earlier has formal definitions, but basically: reproducibility. If anybody attended the reproducible builds talk yesterday, you'll know this; it was an amazing talk.
You missed a good one if you weren't there. It discussed a lot of what this means to us: if you have the same build environment, the same build repository (the same point-in-time snapshot of all the packages), and the same inputs, you will get the same outputs. It's not a byte-for-byte thing, because checksums will differ on metadata like mtime and atime, but you end up with the same output. We also need auditable build tooling, so we can trace builds for security auditing, CVEs, that kind of thing. There are interesting things going on right now in Docker space, where Docker images are just a black box: you may or may not know how those builds were created and what's in them. You can unpack the tarball and stare at binaries all day, but it's difficult to audit. People are working on it; there's an image auditing toolset being worked on on GitHub that I'm looking forward to, and which I'll mention in a minute as something we're working on. So, build tooling today and yesterday. We have Koji. Koji's original form was RPM-centric: it was built in a time when RPM was the delivery mechanism and the distro was the one-stop shop, and it does that very well. Somewhere along the way, image build integration was added, which allows churning out things like cloud images. We also have livecd-creator, which I understand is being replaced at some point; I'll be honest, I don't know exactly what's going on with livecd-creator. And then Pungi: Pungi is a compose tool, and it's been around for a while.
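That reproducibility idea, same inputs giving the same outputs modulo timestamps, can be sketched as a digest that hashes only file paths and contents, never mtimes. This is a minimal illustration of the concept, not the actual reproducible-builds tooling; the `build_a`/`build_b` dicts are hypothetical stand-ins for a build's output file tree.

```python
import hashlib

def content_digest(files):
    """Digest a build output, hashing only paths and contents.

    `files` maps path -> bytes. Metadata like mtime/atime is deliberately
    excluded, mirroring the idea that two builds from identical inputs
    should compare equal even though their timestamps differ.
    """
    h = hashlib.sha256()
    for path in sorted(files):  # sort so directory-walk order can't matter
        h.update(path.encode())
        h.update(b"\0")
        h.update(files[path])
        h.update(b"\0")
    return h.hexdigest()

# Two "builds" with identical contents, listed in different orders
# (as if walked from disk at different times):
build_a = {"usr/bin/tool": b"\x7fELF...", "usr/share/doc/README": b"docs"}
build_b = {"usr/share/doc/README": b"docs", "usr/bin/tool": b"\x7fELF..."}
assert content_digest(build_a) == content_digest(build_b)
```

The point of the sketch is just the comparison discipline: decide up front which bits of the output are part of the contract (paths, contents) and which are noise (timestamps), and only ever hash the former.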
It's currently being iterated on: rewritten and gaining new functionality. Lorax does install tree compositions. Bodhi is the updates mechanism that everybody knows and loves. And then the Wild West is Copr. So, tooling for tomorrow, the arbitrary tomorrow: Koji 2.0. Koji 2.0 will be content-generator-centric, and if anybody went to the Koji 2.0 talk yesterday, you'll have a lot more insight on this. Effectively, a content generator is a build mechanism, a build system or tool, that takes a set of inputs and provides a set of outputs, based on a standard definition of what metadata is required to define and create those things; that metadata and the build artifacts can then be passed between systems. RPM builds would become one first-class content generator, and there could be others. Content generators are actually arriving ahead of Koji 2.0 (we're supposed to have them in Koji 1.11), but in 2.0 they will be a first-class data type, for lack of a better term, and that will allow us, from a release engineering standpoint, to pivot more rapidly onto newer types of build artifacts that need to be generated and produced based on community demand. One of the big things where I'll admit we're behind right now is Docker. Docker has exploded, it's very popular, and from a Fedora tooling standpoint, in terms of what we as a project officially release, there are certain things where I think we're a little behind the curve. Then again, that might not be fair: from a distro standpoint, I don't even know if anybody else is doing what we're working on at all, so maybe we're ahead of the curve. I'd like to think we are; we try to be features-first.
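The content generator contract described above is essentially "describe your build, its outputs, and its environment in a standard metadata document, and hand that over with the artifacts." Here's a toy sketch of what such a document might look like; the field names are illustrative, loosely modeled on the shape of Koji's content generator metadata, not a verbatim schema.

```python
import json

def cg_metadata(name, version, release, outputs, buildroot_rpms):
    """Assemble a content-generator-style metadata document.

    Field names here are illustrative, showing the shape of the
    contract: declare the build, enumerate its output artifacts,
    and record the environment that produced them.
    """
    return {
        "metadata_version": 0,
        "build": {"name": name, "version": version, "release": release},
        "output": [{"filename": f, "type": t} for f, t in outputs],
        "buildroots": [
            {"components": [{"type": "rpm", "name": n}
                            for n in buildroot_rpms]}
        ],
    }

# A hypothetical Docker-image build reporting itself to the build system:
doc = cg_metadata("my-image", "23", "1",
                  outputs=[("my-image.tar.xz", "docker-image")],
                  buildroot_rpms=["bash", "glibc"])
print(json.dumps(doc, indent=2))
```

The loose coupling follows from the shape: any tool that can emit this document and the artifacts it names can feed the build system, without living inside the Koji code base.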
Anyway, one of the things we could then add is whatever the next Docker is. If it's not containers, it could be some other piece of technology that gets really hot in five years, explodes overnight, gets the whole world excited, and is revolutionary or paradigm-shifting in the way we traditionally do things, and we should be able to adapt to that better. Having something centered around this content generator concept means we can have systems that integrate very tightly but are loosely coupled, in the sense that they don't have to live inside the Koji code base to function as though they did. That's going to help us in a big way. Pungi 4 is currently in use and constantly being iterated on. It took compose time from something like 12 hours down to four hours, a huge performance increase. It allows a lot of the build and compose tasks to be farmed out through Koji. Things that should have been able to run in parallel, but couldn't because of Pungi 3's design, have been resolved, refactored, reinvented in such a way that parallel tasks can actually execute in parallel, and it just sped things up a lot for the composes. That is going to help Rawhide look more like a traditional release: Rawhide nightly builds are going to look just like a test candidate or an alpha or beta build or a release, with all of the "everything" artifacts, the ISOs, those kinds of things. The Rawhide nightlies will be identical to that, and that will hopefully lend itself to easier integration with QA processes, because all of the compose outputs, from the release engineering standpoint, will be identical.
We won't be messing with different batches of build artifacts. Next: layered image builds for containers. This is what I was alluding to a little earlier when I said I thought we could have done better, or maybe we're ahead. In Docker, there's this concept of a layered image. Your base image might be a core set of packages; your next layer adds Python; then, because your next layer has whatever's cool this week, say Node.js; on top of that you have Grunt (that thing people run); then insert-thing-here; and then your application on top. So you have multiple layers underneath. Say layer two has a security exploit, and say the build system has 40 images that share layer two: how do you audit them? The layered image build service will allow, number one, the Dockerfiles to be integrated into dist-git; number two, the build to be executed through Koji via the fedpkg client; and number three, automated rebuilds based on audit notifications for CVEs, so that people who use and consume applications and containers shipped as layered images out of the Fedora project can have confidence that they're maintained at a certain level of reliability and quality, much like the distribution itself. It's a new type of deliverable from Fedora, and it's being worked on right now. I actually really wanted to have a demo for everybody today, but I found a bug in a thing. [Audience: the OSBS stuff?] Yes. I have a slide on it in about two slides, but it's OSBS.
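That "which of the 40 images share the vulnerable layer" question is really just an inverted lookup from layer to image. A toy sketch (the image names and layer labels are made up; a real registry would key on layer digests):

```python
def images_sharing_layer(images, bad_layer):
    """Given image -> ordered layer list, return every image that
    includes `bad_layer`: the set needing an automated rebuild when
    that layer picks up a CVE."""
    return sorted(name for name, layers in images.items()
                  if bad_layer in layers)

# Hypothetical build-system inventory:
images = {
    "fedora/python-app": ["base", "python", "app1"],
    "fedora/nodejs-app": ["base", "nodejs", "grunt", "app2"],
    "fedora/other-app":  ["base", "python", "app3"],
}

# Suppose the "python" layer has the exploit:
print(images_sharing_layer(images, "python"))
# -> ['fedora/other-app', 'fedora/python-app']
```

Automated CVE-driven rebuilds are then just this lookup followed by re-submitting each affected image's build, which is exactly why having the Dockerfiles in dist-git and the builds in Koji matters: the system knows what to rebuild.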
It's the OpenShift build service, or really the build service client. OpenShift version 3 bases itself on top of Kubernetes and adds a lot of abstraction for things like builds, build integration, source-to-image building, dev tooling, and the entire life cycle chain. The entry point into the OpenShift dev and build tooling has a profile for a custom builder; the OSBS client tools call into that API entry point and use a custom builder to afford us all of these features of the more verbose build tooling. That's being worked on right now; we're aiming to have something available in staging in the next few weeks so we can announce it and let people start kicking the tires on it. More details to follow. The team is working on it; we have a public kanban board, but it's in a system called Taiga that may or may not be long for this world. We're discussing it; we'll probably get together tomorrow as a group, look at a handful of tools, and try to decide on something for project-management-type things within the Fedora community, and that will probably end up being a very long-winded discussion. Anybody who is interested in project-management-style things in the Fedora space: we should organize, get the group together, and talk about it, because I know the QA team is in Phabricator and releng is in Taiga. As a quick tangent: I would like us to find a project-management-style tool set and standardize on it for our community, so there's an entry point for new contributors who want to see what's going on all over the place and can just say, "Oh, I'm curious what's going on here." That would be great, and it could be integrated into Fedora Hubs. Did everybody see the Fedora Hubs talk? It's going to be amazing.
Okay: who has an Android device? Who uses Google Now, with the cards you swipe? Imagine that, but better, in web format, with widgets and things that pop up, for all things Fedora. You should have gone to the talk. All right: two-week releases. Another big thing we're working on right now is for Atomic. The Project Atomic group is pushing the envelope; they do a lot of iteration, with two-week release cuts for the upstream components they work on. They work with us, and they actually requested that we start churning out two-week releases as well. So for Fedora 23 and beyond (hopefully in time for Fedora 23 final), the plan is to make that a new life cycle deliverable within the Fedora space. That's going to be, I think, the first thing we deliver outside the general six-month cycle, and it starts paving the way for different components or products within Fedora, Workstation and Server and Cloud, to later maybe pick their own life cycles, with the release tooling set up to allow that. The Wild West is going to be Copr, specifically Copr for Docker, which is already out there in staging and being worked on. It's very good tech; you should go check it out. And then "other": we just don't know what's going to come next, what the new interesting thing everybody wants to check out will be. (I'm standing between the slide deck and the camera. Hi, everybody. Sorry.) We don't know what everybody's going to want to work on, and we don't know what's coming in the next few years, but we want to cater to it and be as quick to pivot as possible. So, really quick: Atomic two-week releases. We actually have a plan to make tested, automated releases go out the door.
So effectively So we will trigger an OS 3 kickoff build based on Detecting a change in an artifact that goes into that right now today. That's just set of RPM So there's an update to an RPM that lands in OS tree Koji should right now Bodie does but based on a fed message it will kick off a build No, that build happens It will automatically pick up a test and that test be marked pass or fail If it fails it will go down to a bug filer or yeah bug filer if it Passes it can be marked as a release candidate Once it passes the set of tests and the tests are not yet up on the bird right foot the girl to IO PAG you are into IO. It's an open source get-up phone. You can sign with your fast account. It's good stuff There's gonna be a lot to talk on today. You should go So Pierre a pingoo on IRC he wrote it. It's good stuff We've been using it has existed for like a month So if you don't know about it, it's not like a huge like you missed the boat It's very very new. We've been using it release engineering for our stuff and it's it's been very good for our workflow but anyway, we're gonna host the The atomic test up there that will run in two near well. I went to Kushal's to near talk earlier today Yeah, one. All right All right. 
No, it's a very simple testing framework. The language for it is based on the premise of exit codes of commands, so you can use any test suite — unit testing, whatever, Cucumber, anything. If a test runs and fails, then that step in the test fails. There are things to modify that — you know, non-zero is not a failure, etc. — but it's very simple.

So: first we pass the tests, then it becomes a release candidate, and then at the point of release, the latest release candidate goes in. It will be uploaded to cloud providers, it will be uploaded to FTP, and there will be an automated kickoff to update the website — that does not yet exist, but we're working on it; it's on the wiki if you want to check the status. Then update links, send out an email to announce, and maybe roll back if we break something. And we'll have quick docs, links, download and launch instructions, those kinds of things.

So this is the base idea — and a very shameless rip-off: I stole this diagram from Matt Miller. I want to thank him exponentially for writing this up, because it's amazing.
This is the plan in visual form — and I'm bad at diagramming, so this is my diagram handiwork. This is the layered image build system. So there's dist-git — dist-git is where the Dockerfiles will live — and that's where you will kick your builds off, much like you do RPM builds today. The build will call out to Koji, which links into the OpenShift environment via the OSBS tooling. The OSBS tooling will actually do the build inside of our OpenShift environment, and then we'll export all the build artifacts and store them in Koji. We can also auto-upload them to a Docker registry of choice — whether we host that ourselves or externally has yet to be decided, but it is on the roadmap. So this is the layered build, kind of visualized, versus me just hand-waving and blabbing about it.

So what's next? Containers. More container formats are something that we've had requests for, and that's kind of on the roadmap for some indeterminate amount of time in the future, but it is something that we're looking into. So rkt; runC, which is new, in the Open Container Initiative — or whatever they rename themselves to again, I don't know. And then Freight agent — who knows Freight agent? It's a new — well, not really new — it's a new delivery mechanism for container root filesystems that has systemd-nspawn on the back end, and it's all kicked off with systemd unit files. And then "other" — I mean, what's the next big thing?
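From the packager's side, the flow above is meant to boil down to a single submission, much like an RPM build. Here is a minimal sketch of composing that invocation — the `container-build` subcommand comes from the koji-containerbuild plugin, but the target name, repo URL, and exact flags in this example are made-up placeholders, not the real Fedora configuration.

```python
def container_build_command(target, scm_url, branch):
    """Compose the argv for a layered image build submission.

    Mirrors 'koji build' for RPMs: a build target plus a dist-git
    URL, with the branch passed along so the OSBS side checks out
    the right Dockerfile. Flag names here are illustrative.
    """
    return [
        "koji", "container-build",
        target,
        scm_url,
        "--git-branch=%s" % branch,
    ]

# Hypothetical example invocation for a package named "cockpit":
cmd = container_build_command(
    "f23-docker",
    "git://pkgs.fedoraproject.org/docker/cockpit",
    "f23",
)
```

The design point is that the packager-facing interface stays identical to the RPM workflow; everything OpenShift-specific is hidden behind Koji and OSBS.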
We don't know — I mean, there's new container technology popping up all the time. The Atomic Workstation, though: an Atomic Workstation proposal hit the mailing list recently — the desktop mailing list — somebody in the working group wrote that up. So at some point we're going to have to figure out what that's going to look like, and how we need to build it and how we need to deliver it from a release engineering standpoint. That's something we're kind of trying to pay attention to as it develops and progresses over time. I, for one, am really excited about it, because I've almost just built my package set as an OSTree — as an Atomic image — anyway; I already have Ansible playbooks for my laptop, which is weird.

So, who's not familiar? Nulecule is a containerized application specification, such that you can link together multiple containers to comprise an application — so that you don't have multiple services running in a single container; each service exists in its own. It provides the metadata for linking them, that kind of thing. The layered image build system will, at a later date, support this and allow people to actually pipe Nulecule application builds into the environment and pop out the containers, and people can pull those back down, and — if we do it right — it'll work.

And then the next new hotness: we just don't know. But these are kind of the things on our general future-unknown timeline, the roadmap that we want to work on from a release engineering standpoint in the next generation tooling — on top of, and to the side of, continuing to maintain everything that exists today.

Questions? ... That's either really good or really bad. Thank you all.