I'm a well-informed person, you know, I'm not dumb, but I had no idea you could do any of this. This talk is about secure multi-party computation in the cloud, also known as the C2D project, and these lovely folks, Ata Turk, Mayank Varia, and Parul Singh, are going to come talk to you about all of that. The possibilities expand to things like doing actual statistics across medical data that we can't share. It's really amazing. All right, so without further ado: Ata Turk.

Thank you. So I'll talk about C2D, the privacy-preserving scientific data analytics framework that we are building on an open cloud. This is a collaboration between the Massachusetts Open Cloud (MOC), the SAIL team of the Hariri Institute at Boston University, the Dataverse team at Harvard, and Red Hat. My name is Ata Turk; I'm a research scientist at the MOC, and I'm going to be presenting together with Mayank Varia, Ben Getchell, and Parul Singh.

I'm going to start the talk with some of our use cases, so that you understand what kind of problems we are trying to solve. These are literally the use cases that are coming to us to address on our own platform.

The first use case: companies in Massachusetts want to compute average salary differences across gender, ethnicity, and other groups within the state, but they want to do it without exposing the average of any individual company or of any individual group within a single company. So they want to keep their private data private but still obtain the aggregate average.

A similar use case, again from the Boston area: tier-one trauma centers in Boston want to generate aggregate reports about the cases they service without revealing any patient data. They are governed by very heavy regulations and are not allowed to share data with each other, but they want to be able to say, for example, how many trauma cases they serviced during the Boston Marathon bombing.

Another use case, again in the Boston region: researchers in hospitals want to generate aggregate statistics about the rare diseases they treat, across multiple hospitals. Each hospital has so few cases of these rare diseases that researchers cannot make the generalizations they want to make, so they want more examples across different hospitals. But because of regulations like HIPAA, they cannot share this data with each other, and they want to be able to do this aggregation, this statistical analysis, without revealing any patient data.

Similarly, companies and organizations want to run data analytics jobs in the public cloud, especially jobs that require more computing resources than they currently have in their private data centers, but they do not trust any single cloud vendor. So they are looking for ways to shard their data across multiple clouds and run their analytics across those clouds without revealing meaningful data to any one particular cloud. The sharded data will not make any sense to any single public cloud, so even if one public cloud decides to go behind their back and look at the data, no critical information will be exposed. The assumption is that the multiple public clouds will not collude and combine the shards into a single data set.

Those are the kinds of use cases we are trying to address with the framework we are building.
This framework depends on multiple tools available out there, plus an infrastructure-as-a-service solution, which is the one the Massachusetts Open Cloud provides. The MOC is very important in this respect because it allows different parties to own their own trusted enclaves within a single public cloud without needing to trust the public cloud provider. We are using Dataverse as our dataset repository solution, because Dataverse is one of the well-established open-source dataset repository solutions out there and it hosts a significant number of scientific data sets, especially in the social sciences. And we are using Conclave as our multi-party computation framework, with its MPC backends, to provide privacy-preserving scientific data analysis. Let me briefly talk about these components.

The Massachusetts Open Cloud, as hopefully some of you know, is a very unique entity. It's a collaboration between academia, government, and industry, which you do not see often. The most important aspect of the MOC that we make use of in this framework is that the MOC tries to disrupt the single-vendor cloud model. We want to offer a multi-vendor cloud model: within a single data center, you will have multiple cloud offerings that you can mix and match to create your own cloud. That's what the MOC is trying to enable. And the MOC is not trying to do this in an academic setting; we are really trying to do this in a real data center, as a production-scale public solution. We operate over a 15-megawatt data center in Holyoke, in Western Massachusetts, and our cloud solution is mostly backed by OpenStack and OpenShift.

Dataverse is an open-source software platform for building data repositories. It provides incentives to share data, and it provides mechanisms for controlling access to the data sets uploaded there. It has a very large community and is well-established software; it has been in development for the last ten years, especially for social science data. If you have a paper in Science or Nature, chances are your data set is available in a Dataverse; it's installed in more than 20 repositories worldwide. To give you an example Dataverse installation: the Harvard Dataverse repository hosts more than 70,000 social science data sets. These are data sets that researchers used or created for published papers and decided to upload to Dataverse so that other researchers can download and use them.

The MOC and the Dataverse team are now collaborating to build a new piece of software that we call Cloud Dataverse, which not only allows you to download these data sets but also enables you to compute over them. Cloud Dataverse extends Dataverse to support much larger data sets than those mostly available in the social sciences. We achieve this by storing the data sets in an object store, in our case Swift backed by Ceph, and we add a compute button next to each data set, so that instead of downloading the data you can do on-site computation.
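To make the on-site computation idea concrete, here is a minimal sketch of what reading a Cloud Dataverse data set straight out of the object store might look like, assuming python-swiftclient; the endpoint, credentials, container, and object names are illustrative assumptions, not the actual C2D configuration.

```python
# Hypothetical sketch: compute over a Cloud Dataverse data set in place,
# reading it from Swift instead of downloading it. All names are illustrative.
import csv
import io

from swiftclient.client import Connection  # python-swiftclient

conn = Connection(
    authurl="https://keystone.example.org:5000/v3",  # assumed Keystone endpoint
    user="analyst",
    key="secret",
    auth_version="3",
    os_options={"project_name": "demo-project"},
)

# Fetch the data set object; the compute job runs next to the object store,
# so the data never leaves the data center.
_headers, body = conn.get_object("dataverse-datasets", "study-1234/data.csv")
rows = list(csv.DictReader(io.StringIO(body.decode("utf-8"))))
print(f"computing over {len(rows)} rows in place")
```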
I'm a software engineer at the Software and Application Innovation Lab (SAIL) here at BU, and I'm going to talk about Conclave for a bit.

First, a bit of background. MPC allows a group of individuals to compute aggregate functions over sets of data without revealing any of that data to one another. They can more or less compute over it as though it were in the clear, but nobody reveals anything they don't want to. So it's a really powerful protocol, but it's difficult to implement, because most existing frameworks require either domain-specific languages or knowledge of cryptography, which is hard for people without that background. Conclave is different because it interprets SQL-like statements and automatically generates and dispatches code to MPC backends, so you don't need to understand MPC in order to use it.

As an example, if you wanted to perform an aggregation over a bunch of individual data sets, say log files, you could either do one large MPC aggregation, concatenating all the data and aggregating it, or you could locally pre-compute an aggregation using some distributed backend like Spark and then just submit the counts to a much smaller MPC computation (see the sketch after this passage).

Conclave integrates seamlessly with existing infrastructure, and it supports a pluggable backend structure, so if you want to integrate some new MPC framework that just came out, you can do so with relatively little effort.
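As a concrete illustration of the SQL-like protocols Ben describes, here is a sketch in the style of Conclave's Python protocol files; the module layout and function signatures below are assumptions based on recollection of Conclave's published examples, not a guaranteed match for the actual API.

```python
# Sketch of a Conclave-style protocol: three parties each contribute a private
# relation, and only the aggregate sum per disease code is revealed.
# Module and function names here are assumptions, not necessarily the real API.
from conclave.lang import create, concat, aggregate, collect
from conclave.utils import defCol

def protocol():
    # Each input relation is owned (and readable) only by its party: 1, 2, or 3.
    def cols(owner):
        return [defCol("disease_code", "INTEGER", [owner]),
                defCol("num_cases", "INTEGER", [owner])]

    in1 = create("hospital_1", cols(1), {1})
    in2 = create("hospital_2", cols(2), {2})
    in3 = create("hospital_3", cols(3), {3})

    # SQL-like plan: UNION ALL, then GROUP BY disease_code, SUM(num_cases).
    combined = concat([in1, in2, in3], "combined")
    totals = aggregate(combined, "totals", ["disease_code"],
                       "num_cases", "sum", "total_cases")

    collect(totals, 1)  # only party 1 (the analyst) learns the result
    return {in1, in2, in3}
```

The point of the design, as Ben describes it, is that a compiler can push the per-party pre-aggregations down to a local backend like Spark and run only the small cross-party step under MPC.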
So we've described the three different technologies that are involved in our project: Conclave, Dataverse, and the MOC. Now I want to describe why we are trying to combine them, and why we think it's such a powerful combination.

First, the integration of Conclave and Dataverse. Conclave is a technology that, in principle, lets you compute over data that you cannot read, so that nobody learns anything they shouldn't. Dataverse is where the data actually live: it has tens of thousands of data sets that are already indexed, curated, and available for use. The idea is to make all of that data available for the research community to analyze, to extend, to reproduce results from, and so on, all without needing any real involvement by the owners of that data on Dataverse. Indeed, the owners benefit from having their data be more available and more reusable. Conversely, Dataverse has an extensive access control mechanism, so when we produce new data products as a result of any kind of secure analysis over existing data, the newly derived products can be stored back in the Dataverse repository, tagged appropriately, and get good, strong access controls from it.

Second, the integration of Conclave with the MOC. These two play nicely together very naturally because they're built on the same principle: the idea that you want to federate trust rather than centralize it. People who use the cloud or who do analytics shouldn't have to jointly trust any single computer or any single organization to hold their data. Instead, they can choose on the fly whom they want to trust, or even not have to trust any individual entity, and instead know that as long as one out of n organizations does a good job of protecting their information, they are safe.

Furthermore, in addition to their compatibility from a trust point of view, they're also very synergistic from a performance point of view. Secure multi-party computation, as an idea we've described and as Andre also covered in the keynote, tends to be very network bound, and in particular latency bound. The internet is designed for high-bandwidth communication, but it's not so great from a latency standpoint. So putting all of the work of secure multi-party computation inside the MOC, within a single data center, really helps us out from a performance point of view.

And bringing all three of these pieces together introduces something really powerful. Right now, if you're a data scientist and you do some research and gather a data set, your hope is that you did one large amount of effort to gather the data so that the whole rest of the world can benefit from it. But usually, due to security and privacy concerns, that's not the case: you end up siloing your data, you end up isolating it, and anybody else who wants to do something of a similar nature has to go reproduce the information all over again. With this combination of services, we believe we've designed a system so that when somebody creates a data set, they can make it available without making it readable. Other people can analyze data that they cannot even see, and that way the effort that went into providing, curating, and publishing
a data set gets huge value throughout the entire community.

There are a variety of examples of this, many of which I described on the first slide. Let me just give you a notional, conceptual overview of one of them: the application to medicine. In this example, there are two hospitals, Boston Children's Hospital and Mass General Hospital. Think about the immense data sets that they have. Wouldn't it be great if the information they had about, say, patient outcomes were made available to a larger group of medical researchers, to look for any kinds of correlations, anything they can find about what produces better health outcomes? But of course this data is very, very sensitive for a variety of reasons, so these hospitals don't just make it readily available for the entire research community to view.

The way we envision our workflow going (Parul will describe next, in more detail, how this works), at a notional level, is that these hospitals could put their data in the Massachusetts Open Cloud, always encrypted using just standard cryptographic protections, and register it in Dataverse, so that knowledge of the data sets' existence is public: people know these data sets exist even if they cannot read them.

Then suppose there's a medical researcher who wants to do some sort of joint analysis. Like Ata was describing earlier, maybe they want to analyze some rare disease where neither hospital individually has enough data to understand what's going on, but together, over two or three or more hospitals, there's enough data to do a sophisticated analysis. The way it would work in our system is that the researcher submits a query, think of it as a SQL query or something like it, which then gets transformed, using Conclave, into a cryptographic protocol for computing the result privately. Even though the raw data from the individual hospitals lives on different machines, and they never share the data with each other's machines, collectively the machines can do some communication between them, sending information that looks like gobbledygook. It doesn't look like they're sharing any actual data, but somehow, by following this cryptographic procedure that Conclave enables, the researcher gets the result of the SQL query or analysis they wanted, all without any of the sensitive data ever leaving the pods that the hospitals chose to entrust with their data.

Furthermore, not only is the result of the analytic useful to the researcher who actually submitted the query, it can also get pushed back up into Dataverse, with whatever access permissions the original owners of the data wanted for their derived data products. So in addition to the original researcher who made the query, anybody else consistent with the wishes of the owners could benefit from the results as well.

My name is Parul Singh.
I'm a graduate student at Northeastern, and I'm working on the MOC as a research assistant. I'll explain how we implemented the C2D framework on the MOC.

Our choice was to run C2D in containers, and we needed a container orchestration platform, so we went with OpenShift and Kubernetes, because they give us a powerful job framework and the capability to manage slack resources on the MOC. Right now we have a single OpenShift cluster with multiple projects and parties, but down the line we are planning to integrate elastic secure infrastructure to provide trusted, secure bare metal for the parties.

The C2D framework runs on OpenShift, which runs on top of the MOC. The Conclave web app is how the user interacts with the C2D framework, and for each party we have an OpenShift project. When a user submits a workflow, Conclave web generates an organization pod with all the Kubernetes components, including the Conclave container, which has the MPC backend as well. We use a ConfigMap to load the protocol, the input data, and the configuration; the reason we use a ConfigMap is that we want to decouple the configuration artifacts from the image. And we use an emptyDir, which is scratch space on the pod, to hold data fetched from Swift and to store the intermediate results.

That is how a single organization pod looks, but for multi-party computation to happen we need more than one pod, so each organization has its own pod in the OpenShift environment. The pods definitely need to talk to each other, because MPC requires communication between the organizations, and we do that using a service exposed on each pod. Once the computation is done, the result is stored in Swift, which is an object store, and the analyst can fetch the result from Swift.
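For concreteness, here is a minimal sketch of the per-organization pod Parul describes, using the kubernetes Python client; the image, ConfigMap, and namespace names are illustrative assumptions, not the actual C2D artifact names.

```python
# Sketch of the per-organization pod: a Conclave container with the protocol
# and config mounted from a ConfigMap, and an emptyDir as scratch space.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when run inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="org1-conclave", labels={"party": "org1"}),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="conclave",
                image="registry.example.org/conclave-runner:latest",  # assumed image
                volume_mounts=[
                    # Protocol and config, decoupled from the image via a ConfigMap.
                    client.V1VolumeMount(name="protocol-config", mount_path="/etc/conclave"),
                    # Scratch space for data pulled from Swift and intermediate results.
                    client.V1VolumeMount(name="scratch", mount_path="/scratch"),
                ],
            )
        ],
        volumes=[
            client.V1Volume(
                name="protocol-config",
                config_map=client.V1ConfigMapVolumeSource(name="org1-protocol"),
            ),
            client.V1Volume(name="scratch", empty_dir=client.V1EmptyDirVolumeSource()),
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="org1-project", body=pod)
```

Each such pod would additionally be fronted by a Kubernetes Service so the parties can reach one another for the MPC traffic, as described above.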
Ben is going to show a video demonstration of how the C2D framework works.

Okay, so all of our code is hosted publicly on GitHub, under the organization of that name, but we also made a short demo video. Here we go.

We have a command-line tool we made so developers can launch jobs, but we also threw together a web UI; well, this is the OpenShift web UI. If you look on the right there, there's the protocol that the analyst would have written on their computer, which is just a short Python file. In the top right corner there are two data sets I put in Swift, called in1.csv and in2.csv. Then we go to the UI (I'll fast-forward until we're there): you enter the Swift endpoint, the container name, and the data sets that you want to compute over, upload your protocol, and hit compute.

Then in the OpenShift dashboard you see there's a server handling the MPC traffic; JIFF is the MPC backend used in this computation, and all the traffic it relays is encrypted using public-key encryption, so the server can't know anything about what's being passed through it. These pods are just waiting to start up; once they start up, they will perform the protocol. So it looks like they started up and finished, and then you check Swift, and the output data set, open.csv, is stored there.

If you have questions for the team, I'll bring you a mic.

When you do the analysis, what is to prevent the analysis being constructed in a way that inherently exposes sensitive data?

So by its nature, the system doesn't control that. We are building a policy mechanism: the idea is that data owners have data tags and algorithm tags associated with each data set that identify what kinds of computations can be computed over that data set. But if the data owner allows a computation that eventually reveals information they did not want exposed, the system by its nature is not controlling that. It only makes sure that even if all the data exchanged across the different parties is exposed, if somebody gets that data, still no information is revealed.

Just because I had the same question he did, I want to make sure I understand the answer. If I'm the researcher and I craft a malicious query, there is no safeguard; the safeguard prevents third parties, like AWS or the cloud host, from putting that data together. So as the researcher I could learn something, but as a third party observing packets on the wire I could not?

Right, there's no such protection yet. The policy piece is a work in progress; it does not exist in the software at the moment, but it will. The idea, as I said, is that when data owners provide their data sets, once we build this part, they will specify a policy for how their data may be used, and when a researcher submits an analytic that is not compatible with the wishes the owners specified, it will be rejected. The point is that the data owners would not be willing to allow their data to contribute to an analysis they were not okay with. You can imagine that part of this will be partially mechanized ahead of time: owners could prescribe the kinds of analytics they're comfortable with, and there could also be a method where, if a query is not already on the whitelist the policy prescribes, the owners could manually override and allow something that was not initially part of the set. So some things are decided ahead of time, and things that would have been rejected can be approved by the data owners later if they want. But it's not there yet; that's our plan for the next year.
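Since the policy layer is still on the roadmap, the following is purely a hypothetical sketch of the kind of check just described (tagged data sets, a whitelist of analytics, rejection of incompatible queries); every field and name here is invented for illustration.

```python
# Purely hypothetical sketch of the planned, not-yet-implemented policy check.
# The field names and the query-plan shape are invented for illustration.
from dataclasses import dataclass

@dataclass
class OwnerPolicy:
    allowed_ops: set        # whitelisted, information-lossy aggregates, e.g. {"sum", "avg"}
    min_group_size: int     # e.g. 5: no output may describe fewer than 5 individuals
    min_other_parties: int  # trends must span at least this many *other* data sets

@dataclass
class QueryPlan:
    op: str                     # the aggregate the analyst requested
    min_output_group_size: int  # smallest group any output row describes
    num_parties: int            # how many data owners the query spans

def admit(plan: QueryPlan, policies: list) -> bool:
    """Every contributing owner's policy must permit the analyst's query."""
    for p in policies:
        if plan.op not in p.allowed_ops:
            return False  # e.g. raw row selection is never whitelisted
        if plan.min_output_group_size < p.min_group_size:
            return False  # the query could isolate individuals
        if plan.num_parties - 1 < p.min_other_parties:
            return False  # owner refuses trends over its data alone
    return True
```

Anything failing such a check could then be queued for the manual owner override described above.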
Okay, just a quick clarification: if a person writes a query that would pull out only one individual within a data set, could the owner of the data set set a policy saying that any query using this data must cover at least five individuals? Would that be an example of a policy?

Sure. But first of all, anything that would pull out subsets of the data set, no matter how large or small, would presumably be forbidden by most people. The idea would be to allow things that are information-lossy: aggregate trends, analysis over time like a time series, things that aren't about pulling individual records out of any data owner's data set but about analyzing trends across them. And one could specify not just that a query must involve a large fraction of your own data set, but that it must also involve a large number of other people's data sets in conjunction with your own. You may not even want people to see trends over just your own data, only over yours together with others'.

So how receptive are organizations like Mass General Hospital and Boston Children's Hospital to this new paradigm, and what is the uptake?

Yes, so we are in talks with the tier-one trauma centers, Mass General Hospital, and Boston Children's Hospital about using this framework. Boston Children's Hospital is very much interested in it, and the tier-one trauma centers are also very much interested; in fact, the trauma centers' request is one of the driving forces behind building this framework.

In general, how is the community of businesses adopting or accepting this? What is your biggest hurdle?

An implementation of MPC by the SAIL team is in use by businesses in Massachusetts. Mayank, maybe you want to talk about that?

Sure. Secure multi-party computation, this cryptographic technology, is generally in use by a lot of entities throughout the city and state. There are a lot of folks in the room right now who work at SAIL and could tell you more about it than me, but for instance, the pay equity project that I described at the very beginning is an actual thing that the City of Boston used to calculate the overall pay disparity between men and women throughout the city, without learning individual information about payrolls at any individual employer. They collected data for about one-sixth of the Greater Boston workforce using this. There are other applications in use by the Greater Boston Chamber of Commerce, and so on. So generally speaking, for secure multi-party computation we at BU have done a lot to generate interest within the business community. For this specific Conclave system and the integration with Dataverse, though, so far we've mostly had the discussions with the medical community that Ata described.

I've got two questions. First, what are the operations supported by your MPC, for example addition or multiplication or whatever? Second, do you use regular MPC or MPC with cheater tolerance? Sorry, what was the second question? Cheater tolerance, meaning that one or a few of the data uploaders are dishonest in the computation. Okay. Yeah, good questions.
So, to the first question: the Conclave system itself can, in principle, integrate with any backend that does secure multi-party computation. Only a few of them have actually been integrated so far, but in principle, what Conclave does is take any existing piece of software that does cryptographically secure computation and lift it, so that input programs can be specified at a higher level. You don't have to specify the computation at the level of addition or multiplication; you specify it as a SQL-like query, an analytic, and it gets translated down to an existing MPC engine.

To your second question: the different MPC engines we have integrated have different trust profiles. In general they're honest-majority systems, so at the moment you need a majority of the machines to be honest. But the system is pluggable; you can extend it and connect it to other engines as well. That's just what we've operationally connected it to from a software point of view so far; it's not a rigid choice, and you could do other things.

So, I apologize, I'm not very familiar with multi-party computation; maybe there was an earlier talk that I missed. But it sounded like, in response to one of the earlier questions, that each individual provider, for example Harvard or MGH, is the only one able to decrypt their own data, and the operations you can perform on it come from a given set of operations they allow. And you had said earlier that the encrypted data was just standard encryption algorithms. So I'm curious whether this uses any kind of fancy homomorphic encryption or anything like that, or whether it's really just restricting which aggregate operations people can perform, with each individual organization defining which aggregate algorithms they allow people to run on their data, and then your Conclave or Musketeer backend knowing how to combine those aggregate operations across the different data providers. Is that effectively how it works?

To the first part of your question: it's kind of neither of the two things you said. It's not just encrypted data plus some list of allowed operations, nor is it the full power of something called fully homomorphic encryption; it's something in the middle. If you want to think of it as fully homomorphic encryption, that's the style of guarantee we achieve, but we do it a little differently than what the term literally means. Fully homomorphic encryption tends to be computationally intensive but non-interactive:
you don't have to do much in the way of communication between the parties. Multi-party computation is the other way around: it's not nearly as computationally burdensome, but it involves a lot of communication between the parties. So basically, all of the parties locally store encoded versions of the information, but they also have to do work between them in order to understand the trends across the different parties' data sets. The data in that communication is not encrypted data exactly, but you can think of it that way if it helps: it's some sort of encoding of the information that allows you to compute the actual SQL query you want without learning anything other than the answer to that query. So while these machines work together to process the query, because it's a query over all of their data, they send each other encoded information of a particular type, a type that facilitates actually doing the analysis but that does not allow anyone to view any of the intermediate state while the analysis is being done. And somehow, kind of magically, through the way the crypto works, the output of the query falls out of the system in the clear, but without any of the other byproducts of computing it being available to anyone. I don't know if that answers your question.

Okay, other questions. Can you explain how the protocol works to share the encrypted data?

Sure, while we try to find the slides that show that, I'll give a simple example of how to do this thing called secure multi-party computation. For this example, suppose there are three participants: the green, blue, and red parties. All they want to compute is the sum of their three numbers. It's a super simple analytic; it can get more complicated than this, of course, but to keep things simple, all they want to do is compute the sum of their three numbers. I've depicted the numbers visually: think of the size of this green box as the actual number x, so its length is the number itself.

So they want to compute the sum of their three numbers, but nobody is willing to share their number with anybody else; that's the whole point. So what they do is, locally on their own computers, they split their numbers into three pieces. The green party, party x, splits its number into three numbers x1, x2, x3 that have nothing to do with x, except for the fact that x1 + x2 + x3 = x. Otherwise the three numbers individually mean nothing, and the other parties do the exact same thing. Then they each give one of these three pieces to each of the parties: the green party gives x1 to itself (it doesn't go anywhere), the second piece goes to the blue party, and the third piece goes to the red party. So they've just shared numbers that have nothing to do with their actual secret numbers;
they're just useless junk. And that's what my picture shows: from seeing just one of these pieces, you have no idea how big the actual x was. It could be a little bigger, it could be a lot bigger, and a piece could even be negative, so x could actually be smaller than x3; that's a limitation of the picture. Everybody does the same thing, so each participant receives one piece from each of the other participants, with the property that, remember, the sum of the three green pieces really is x, the sum of the blue pieces really is y, and the sum of the red pieces really is z. Which means the sum of all nine of these pieces is still the answer we're looking for, even though no individual person has learned anything in this process about anybody else's number.

Now each participant can locally take the sum of the three pieces they received. So this person has one number here, this person has one number here, this person has one number here, and the sum of those three numbers is the sum of x, y, and z, even though the local sums themselves have nothing to do with x, y, and z. Now suppose the green party was supposed to learn the answer. Then everybody simply gives the green party the sum of the pieces they hold, not the individual pieces, just the sum, and the sum of those three local sums is the answer to the question. But in the process, nobody has learned anybody else's information. This is what I meant by an encoding of x: splitting x into these pieces and sending the pieces off to the other parties.

Note that to do this addition, we did a decent amount of networking; we sent all these little encoded pieces to the other participants. This is why I said at the beginning of my part that this process is very network bound, and why doing it co-located inside a single data center is useful.

My question was how this works for non-commutative operations.

Okay, I don't have a short answer to that, unfortunately. The basic principle, I'd argue, is the same: you come up with a way of encoding the information such that the encoding is conducive to performing the computation, but any individual's piece of the encoding is not conducive to learning the information. Those were the operative properties here, not the commutativity of the operation. The commutativity made addition quite simple, I agree, and other operations are going to be less good from a performance point of view, but those are the operative pieces, and I claim, without proof, that you can do this for other operations as well.

A quick one: here only one party needed to know the final answer. If all three parties need to know the final answer, do you start from scratch again with different numbers, or distribute the same ones?
No, it depends on the assumptions you're making about these three parties: whether you're concerned about them only from a confidentiality point of view, or also from an integrity point of view. If you're concerned about the three parties learning more information than they should, but you're not concerned about them messing with the state of the computation, that is, you trust them from an integrity point of view, then once one person learns the answer, they can simply tell the other people. What I've shown on this slide is only how to make sure that nobody learns anybody else's sensitive input. With the picture I've shown, if somebody wants to mess with the computation in the middle, say the blue party wants to make sure I don't get the right answer, they could simply not send me the correct value; there's nothing in what I have shown that would protect against people cheating and violating the terms of the protocol. One can do that too. The actual Conclave software does not, but it is definitely possible, and there is a lot of research in multi-party computation that also protects integrity, where along the way the participants prove to the others in the system that they are doing things correctly. But right now we are only trying to protect confidentiality, in which case the answer to your question is simple: once one party learns the answer, it just tells the other two.

Okay, say you have just two parties, and the computation you're doing is calculating the mean of certain categories between the two parties. They're adversarial to each other, so neither wants the other to know its data. If you're not a member of either party and you don't know the data from either one, fine; but if you have the data from one of them, you can infer the data of the other from the result. Isn't that a limitation?

Yep. This is what I was saying in response to an earlier question: secure multi-party computation does not protect against what she has described, which is an inference problem. This is why my example was the sum of three numbers; you've figured out why we used three. If we had done two, this whole thing would be kind of useless, because if you know your own input and you know the sum of the two, then you know what the other person's input is. This encoding mechanism is about protecting the intermediate information in the process of going from the inputs to the outputs. It is not about ensuring that the outputs themselves are safe to reveal; that has nothing to do with this process. That is an independent question, and it's what Ata was describing earlier: one thing we want to add on top of this in the next year of the program is a policy mechanism, so that the owners of data sets can prescribe ahead of time which kinds of computations they are willing to allow their data sets to be used in. But that has nothing to do with this; this is about protecting how you get the answer to the analytic without revealing other information, not about which analytic you would want to have computed in the first place.
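To make the encoding described above concrete, here is a minimal runnable sketch of the three-party additive-sharing sum; doing the arithmetic modulo a large prime is an implementation choice for the illustration, not something specified in the talk.

```python
# Three-party additive secret sharing: each party splits its input into three
# random-looking shares that sum to the input; no single share (or local sum)
# reveals anything on its own. Modulo-a-prime arithmetic is illustrative.
import random

P = 2**61 - 1  # a large prime; all arithmetic is mod P

def share(x, n=3):
    """Split secret x into n shares that are individually uniform but sum to x."""
    pieces = [random.randrange(P) for _ in range(n - 1)]
    pieces.append((x - sum(pieces)) % P)  # last piece makes the total come out to x
    return pieces

x, y, z = 42, 1000, 7                      # the parties' private inputs
xs, ys, zs = share(x), share(y), share(z)  # each party sends one piece to each party

# Party i locally adds the i-th piece it received from every participant.
local_sums = [(xs[i] + ys[i] + zs[i]) % P for i in range(3)]

# The designated output party adds the three local sums to recover the answer.
result = sum(local_sums) % P
assert result == (x + y + z) % P           # 1049

# Caveat from the Q&A above: with only two parties the output itself leaks the
# other input (y = result - x); MPC does not address that inference problem.
```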
Willing to allow their data sets to be used inside of that has nothing to do with this This is about sort of protecting how to get the answer to the analytic without revealing other information Not what analytic you would want to have computed in the first place We're over time, but we can take one more question because now we have a short break No, all right. Let's just have a big round of applause for these guys is awesome