Thank you. Is this on? Okay, cool. Again, I'm Chris Brown, and I'm happy to be here. This is my first DevConf. I also want to say I'm part of the Diversity Scholarship Program, so I'm glad to have that program here and thankful to be a part of it. The conference has been going well so far. First, a little bit about me. I did my undergrad in Computer Science at Duke University, graduating in 2013. In 2015 I went to NC State to start a PhD in Computer Science. I'm currently heading into my fifth year, so hopefully I'll be finishing up pretty soon. For professional experience, after I finished undergrad I spent two years as a Python developer at Bank of America, working on back-end and front-end systems for them. Since then I've had two internships: one summer as a software engineering intern at Blackbot, and two summers as a quality engineering intern at Red Hat, working on Satellite. Now, if you haven't seen the movie Sorry to Bother You, this is the main character, Cassius Green, or Cash. In the movie he's down on his luck, but he gets a job as a telemarketer. At first it's hard for him to adjust, because he's always calling people and they hang up on him, ignore him, or cuss him out. We're using this as our metaphor: making recommendations is hard. A lot of people here have tools they want to recommend, tools and projects they're working on, and people here are also looking for tools to use in their projects. What's the best way to say, hey, you should use my project? That was a brief introduction. Now I'll give a quick overview of my research: things we've tried that work, things we've tried that don't, what I'm currently looking at, and the future of recommendations in terms of what I'm looking at specifically in my work.
Alright, so the overall goal of my research is to improve software engineering behavior and productivity, specifically around the adoption of development tools. There are all of these tools out there, static analysis tools, security tools, continuous integration tools, that are built to help developers complete tasks, essentially to make you better at your work. The problem is that developers don't use these tools. So one of the things I'm looking at is how we can encourage developers first to discover these tools and then to actually use them once they know what they are. Greg Wilson, a big name in the software engineering research community, tweeted that the most interesting topic for software engineering research in the next 10 years is how we get working programmers to actually adopt better practices. So one of the other things I'm interested in is how we get people to adopt the practices and tools that the research community comes up with. Having been in industry and now in academia, I see people in research coming up with all these cool processes and ideas, but in reality software developers don't care; they just want to get their work done. One of the things I want to get into is how we can bridge that gap, so that people in industry can see what's going on in research, and people in research actually care about what people in industry need. There will be a small opportunity to participate in the research at the end, if you'd like; it's just a quick survey once this talk is over. Alright, so we've tried a few things to see what does and doesn't work, and what developers do and don't like. The first thing we found developers don't like is email. For our methodology, we were recommending Black, an automatic Python formatting tool.
It's an open source tool; people can contribute and use it however they want. We looked at Python projects on GitHub and used the Google Cloud Platform to search through GitHub commits for the strings "pep8" or "flake8", essentially looking for developers on GitHub who contribute to projects and commit fixes for Python formatting errors. Based on that, we sent an email saying, hey, we saw you manually fixed this error; if you use this tool, Black, you can fix it automatically and not have to worry about it yourself. We also had a quick survey to see what people thought about what the recommendation said, how likely they'd be to try the tool, and things like that. We found that emails were not good. For our results, we sent 100 emails and only 5 people responded, so a 5% response rate. There were also two not-so-great responses, not included in those 5, which we'll get into right now. The creator of Black actually sent me an email saying, hey, please stop sending these emails recommending my tool. He wrote, roughly: I received a few questions about the automatic emails you were sending in response to code quality setup activity on GitHub. In at least one case the recipient treated your automatic messages as spam, and in all cases it was received as a rather aggressive form of advertising. Unsolicited messaging like the one you're sending out has negative connotations, and this can in turn negatively impact my project. So basically we found that people hate emails so much that they wouldn't respond to our emails, but they would email him to say, hey, can you get these people to stop automatically sending emails recommending your tool. The second response is from a guy in Europe, where there's the General Data Protection Regulation; in Europe you basically can't spam people with emails either.
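The commit-mining step can be sketched roughly like this. This is a minimal illustration of the filtering logic only, assuming the commit records have already been fetched (for example, from the public GitHub dataset on BigQuery); the field names and the `find_formatting_fixes` helper are my own for illustration, not from the study.

```python
# Minimal sketch: find commits whose messages suggest a manual Python
# formatting fix (mentions of pep8/flake8) - the signal used to pick
# developers to contact. Assumes `commits` was fetched elsewhere,
# e.g. from BigQuery's public GitHub dataset; fields are illustrative.
def find_formatting_fixes(commits):
    """Return (author, message) pairs whose message mentions a formatter."""
    keywords = ("pep8", "flake8")
    hits = []
    for commit in commits:
        message = commit["message"].lower()
        if any(keyword in message for keyword in keywords):
            hits.append((commit["author"], commit["message"]))
    return hits

commits = [
    {"author": "alice", "message": "Fix flake8 E501 line-length errors"},
    {"author": "bob", "message": "Add new feature"},
    {"author": "carol", "message": "pep8 cleanup in utils.py"},
]
print(find_formatting_fixes(commits))
```

In the study, each matched author would then have received the recommendation email described above.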
This guy is saying, hey, the way you're using my contact information is not compliant with this law; I have not consented to receive this email, and there's no legal basis for you to be contacting me. So these are two examples showing that people really hate emails. If you have something you want to recommend, email is not the way to go. The second thing we tried was making automated pull requests on GitHub projects. Since emails don't work, what if we moved from email to the actual projects developers are working on? For this we recommended Error Prone, a Java static analysis tool built by Google. It's open source, so not only can you contribute to the code, you can also contribute your own bug patterns if you want to check for things that aren't caught by the Java compiler, or things specific to your own project. There are hundreds of different bugs it can check for that are different from a normal Checkstyle or other static analysis tool. For this we targeted Java projects on GitHub that use Maven. We created a bot to automatically update the pom.xml file of a Java project, add the Error Prone plugin to the project, and say, hey, here's a project where we found some errors; you can fix them by adopting this tool and prevent future errors in your code. The pull request is basically what we sent: hey, here's this tool, we're automatically adding it to the project for you so you don't have to do it yourself; just merge this pull request and it'll make your project better. Here's an example, which basically says: hey, you're not using any error checking in your build; here's an example error this tool can find; if you want to try the tool, just merge the pull request.
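The change the bot proposed amounts to a small addition to the project's pom.xml. As a sketch only, based on Error Prone's standard Maven setup rather than our bot's exact output (the version number is a placeholder):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArgs>
      <!-- Run Error Prone as a javac plugin during compilation -->
      <arg>-XDcompilePolicy=simple</arg>
      <arg>-Xplugin:ErrorProne</arg>
    </compilerArgs>
    <annotationProcessorPaths>
      <path>
        <groupId>com.google.errorprone</groupId>
        <artifactId>error_prone_core</artifactId>
        <version>2.3.2</version> <!-- placeholder version -->
      </path>
    </annotationProcessorPaths>
  </configuration>
</plugin>
```

With something like this in place, Error Prone's checks run on every `mvn compile`, which is exactly why a merged pull request could immediately start flagging errors in the build.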
And this is what we actually edited: we pretty much just added the Error Prone plugin to people's projects, saying, hey, this will automatically run when you build; if you just compile, Error Prone will do the work for you so you don't have to do it yourself. We found developers didn't really like this either. We sent out 52 pull requests to random Maven projects on GitHub, and only two people merged the pull request. We also gathered feedback to figure out why people did or did not like these recommendations. Of the two people who did merge it, one eventually reverted, saying, oops, this is actually causing a bunch of errors; we don't really want this. But we still count it as successful, because they at least tried the tool, figured out they didn't like it, and then went back. There were two main reasons people didn't like the automated pull requests. The first is social context. One of the main themes in the feedback was that people didn't like how we were barging in and changing things on the project. One developer complained that we deleted his release profile, which is part of the XML in the Maven build. He closed the pull request; then he realized we hadn't actually deleted it, we had just messed up the formatting of the whole file. Developers are very picky about formatting, tabs and spaces and making sure everything is aligned properly. Our tool did not care about that; it just added the plugin and sent the pull request. People do not like it when you mess up their formatting. We also received a lot of these: when you contribute to open source software, a lot of projects say, hey, you need to read the contributing guidelines, can you sign this agreement? Our bot does not do that; we just opened pull requests at random.
And so we also received a lot of comments saying, hey, we can't accept this change because you haven't agreed to the rules of this project. The second big issue with our bot was that it interrupted developer workflow. A lot of people complained that the change looked good but was causing a bunch of errors. We failed a lot of Travis CI builds, as you can see if you use Travis CI. The thing is, most of these failures were from the tool saying, hey, these are errors in your code; but people still don't want to have to go check what errors the tool is reporting and decide whether they're worth fixing. That also made a lot of people angry. Alright, so we have this problem: automated recommendations are not good. People view them as disruptive and intrusive; you know, I don't want people intruding on my projects. How can I actually recommend things in an effective way? Prior work also supports the idea that bot-human interactions are bad. Wessel did a study in 2018 looking at the power of bots in open source software and found that developers really like bots when they do things for them; automating tasks, analyzing code, bots are great for that. But developers hate interacting with bots, so chat bots, or bots making recommendations like these, developers really do not like at all. Here's a plug for one of my colleagues: he's trying to get this bots.yaml format started. If you don't want bots getting onto your project, you can add a bots.yaml file saying, hey, I do not want any bots to contact me by email, pull requests, or whatever; if that's you, you should add this file. And we're trying to get bot creators on board with the format too, so they'll say, hey, this project has this file, I'm not going to touch it.
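To be clear, bots.yaml is still a proposal, and I don't have the spec in front of me; purely as an illustrative guess at what such a file might look like, with keys of my own invention rather than from any published format:

```yaml
# Hypothetical bots.yaml - keys are illustrative, not a published spec
bots:
  email: false          # no automated recommendation emails
  pull_requests: false  # no automated pull requests
  issues: true          # automated issue reports are okay
```

The idea is simply that a bot checks for this file before contacting a project, and respects whatever channels the maintainers have allowed.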
So what does work in terms of making recommendations to developers? The thing research shows actually works is peer interactions. This is work by my previous advisor, who found that peer interactions are the most effective way developers learn about new tools. A peer interaction is the process of discovering tools from coworkers during your work activity: whether that's pair programming with another developer, another developer saying, hey, try this tool, or you seeing another developer use a tool and asking, hey, what's that? All of these are examples of peer interaction, face-to-face contact with another person where you decide, you're using this tool, I want to use it too. For this study he looked at the various ways people learn about tools, interviewing and surveying a bunch of software developers: social media, RSS feeds, tutorials, documentation, discussion threads and chats, and people just randomly finding the tool while browsing the internet. Out of all of these ways of discovering tools, he found that peer interactions are the most effective. So we did a study to figure out why. What makes peer interactions, interactions with humans, so effective for recommending tools to software engineers? For this study we had pairs of participants work on a Kaggle data science competition; Kaggle publishes a list of competitions every year, and we had participants complete tasks on the Titanic data set. We had 13 pairs of participants, a mix of students and professionals. We said, here are the tasks; you can use any tool you want, whatever tool you think will help you complete this task with your partner. We recorded each session to figure out which tools were recommended between partners, which recommendations were effective, which weren't, and how people react to receiving a recommendation.
This was presented at the Visual Languages and Human-Centric Computing (VL/HCC) conference a few years back. We looked at five characteristics of suggestions between peers to see what makes a recommendation effective. Psychology research says the first four impact how humans make decisions in general, without considering tools or recommendations; if you're polite, for instance, people are more likely to do what you say. So when people recommend tools to each other, are they being polite, and is that what makes the recommendations effective? For tool observability, we looked at whether the tools have a user interface, or whether they're a command line tool or keyboard shortcut, so whether you can actually see what the tool is doing. We had some real success looking at peers recommending tools to each other: we saw a total of 142 tool recommendations, with about a 50% acceptance rate. So we went from roughly 5% with emails and pull requests to 50% of people accepting a tool recommended by a peer. We have a big fancy table here with a bunch of statistics. The important thing to note is that receptiveness was the only characteristic that was statistically significant, which means the outcome of an interaction or recommendation is directly dependent on receptiveness. Politeness, persuasiveness, and time pressure are all important, but they don't really matter if you want somebody to actually take a chance and look at your tool. So what do we mean by receptiveness? This comes from Fogg, who studied creating persuasive technology. He defines receptiveness as demonstrating desire and familiarity. Two quick examples. In our study, two people were working on a task, and one said, hey, you can use Add Level on a pivot table in Excel. The participant responded, oh, Add Level, yes, awesome.
Here we see that L14, a professional analyst (participant number 14), demonstrated a desire to use this tool to add a level to a table in Excel, and they used it for the rest of the study. On the flip side, for familiarity: in another group, two students, one said, hey, we can do this in R, it's pretty easy, I know how to do it. The partner said, I don't know R, and they never used R for the rest of the study. This shows that people who aren't familiar with a tool are not going to use it; because they don't know how to use it, the ins and outs of the tool seem very foreign to them. So you want to make sure people are familiar with, and understand, the tool you're trying to recommend. Alright, so there are a few problems with peer interactions. One is scalability. We just showed that human interactions are the best way for people to learn about new tools, but if I have a tool, I can't go to every person on earth and say, hey, you should use this tool. How do we scale this to the millions of developers around the world who could benefit from our product? The issue with receptiveness is that, out of all the characteristics we looked at, it's the only one we can't control. You can make the most polite recommendation, pretty please; you can have the coolest tool; you can have the most persuasive argument for why this tool is better than all the others. But if a user is not receptive to the tool, they're not going to use it. That makes recommendations difficult, because we can't control how receptive people are to them. And the biggest problem, reported in the same paper as before, is that while peer interaction is the most effective mode of tool discovery, it's also the most infrequent.
So we see that face-to-face interactions between developers are declining, and peer interactions, even though they're effective, rarely happen in the workplace. Sherry Turkle has a book called Alone Together, which talks about how technology is causing a decline in face-to-face interaction between people. A lot of times, rather than meet with people, we'd rather talk on the phone; instead of talking on the phone, we'd rather text. Face-to-face communication in our society in general is going downhill because of technology, and we see this is true in software engineering too. There was a study of Microsoft developers looking at open collaboration spaces; open work spaces were meant to increase productivity, communication, and collaboration between developers. The survey showed that only about 20% of people actually think open work spaces are beneficial to a coding environment, and other studies show that open work spaces actually decrease productivity for developers, and for workers in general. Then there's the idea of global software engineering: not only are developers not working together in the same office, we have developers all over the world contributing to the same project. When I was an intern at Red Hat, our team was in Raleigh, and we worked with developers in Brazil, in Brno in the Czech Republic, and in India, all around the world on the same project. In that setting it's hard to say, hey, you should use this tool, face to face, when the people working on the project with me are across the world. So what does that mean for getting people to adopt our tools and products? It's as if we're all alone as developers; we don't want to talk to anybody.
How can we effectively make recommendations to other people? The solution we're trying in our work is called Nudge Theory. This is a behavioral science concept made popular by Thaler and Sunstein. A nudge is anything that impacts human decision making without offering incentives and without banning alternatives. A very popular example is the layout of a grocery store. If you walk into a grocery store, research shows the first thing you see is most likely what you're going to buy: if you walk in and see chips, you're more likely to buy chips, whereas if you walk in and see fruits and vegetables, you're more likely to buy fruits and vegetables. Proponents of Nudge Theory would say that as a grocery store owner or designer, you should put fruits and vegetables up front to encourage people to eat healthier and make better decisions. We're not providing incentives to buy fruits and vegetables; you don't get extra money or anything. We also aren't banning you from going to the chips aisle to buy junk food. But just the way things are arranged can significantly impact what you buy. Furthermore, Weinmann and colleagues argue that as more and more decisions are made online, in digital choice environments, the concept of digital nudging is becoming more and more important: using technology to create nudges that change people's behavior. The popular example is a Fitbit smartwatch; it's a watch that tracks your activities, and using a watch like that can help you be more active and make healthier choices, just based on what the watch is telling you, without any other means of forcing you to eat healthier or be more active. For my research, I'm investigating process-appropriate digital nudges.
This is the concept of taking what Nudge Theory research says about making recommendations to humans and applying it to developer workflows: integrating nudges into how developers make decisions about the things and tools they use. What I've been working on this summer is the start of this project; I'll show the future work we plan to do later. We're looking at the suggested changes feature on GitHub. It's a public beta feature introduced in October 2018. It allows developers to go to a pull request and say, hey, I see an issue here; I suggest you change it to this line, which is an improvement. You can then commit the suggestion directly, ignore it, or do whatever you want. What we're arguing is that this is an example of a nudge: you can commit the suggestion, but you aren't forced to; you can leave it if you want, and there's no incentive to commit it. But it's one way that people online can make peer recommendations to each other, saying, hey, here's an issue in your code, try using this line, and letting the person who received the suggested change decide for themselves whether to accept it. This study is divided into three parts. The first part compares how this new feature performs against pull requests and issues: how often people accept recommendations, how quickly they accept them from suggested changes versus pull requests and issues, and things like that. For phase two, we sent an email survey to developers who had either used the feature to make a suggestion or received a suggestion, to see how useful they find the feature, whether it fits their workflow, and things like that.
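Mechanically, a reviewer creates one of these by wrapping the proposed replacement line in a suggestion fence inside a review comment on a line of the diff; the replacement line below is just illustrative:

````markdown
```suggestion
if user is not None:
```
````

GitHub then renders this as a diff against the commented line, with a button that lets the author commit the change in one click, which is exactly the low-friction, no-obligation property that makes it a nudge.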
In the final phase, we wanted to see whether this feature is actually useful for recommending static analysis tools to developers. We made tool recommendations and compared emails, issues, pull requests, and the suggested changes feature, to see which ones developers actually prefer for receiving a recommendation. We plan to submit this to the International Conference on Software Engineering (ICSE); the submission deadline is next week. For the first phase, we found that pull requests are still the most popular way to make suggestions on GitHub, in terms of someone saying, hey, here's a change you should add to your repository, and the developer saying, okay, I'm going to add this change. Suggested changes are close, but not quite on the same level. We did find, however, that pull requests with suggestions are significantly more effective than pull requests without the suggested changes feature, and that suggestions are accepted about twice as fast as pull requests. So even though people may not accept them as often, they can decide whether to accept a suggestion much faster than a pull request or an issue. Again, all of the red rectangles are statistically significant; we ran some fancy statistical tests, and they say this is significant for getting people to make suggestions and accept recommendations. In our survey, developers said they found the feature very useful for integrating into their workflow and for teams. Toward the left is more useful, so the people who found it useful are on the left and the people who said it's not useful are on the right. Nobody said it's not at all useful, and most people said it's useful; not "very useful," but useful.
There are still some issues with the feature itself. A lot of the people surveyed complained about not being able to suggest changes to multiple lines, or to commit suggestions in a batch. Each suggestion is its own single commit, so if you have a bunch of them, you're really stretching out the commit history of that one pull request, and people would rather see them all combined into one commit. As a side note, here's another sign that emails are bad: we sent about 300 survey emails and only got responses from about 50 people, so again, don't use emails to send surveys or recommend tools. Finally, for phase three, we had developers look at mock recommendations for each tool, made with a suggested change, a pull request, an issue, or an email. Again we found that suggestions were the most popular, the preferred way to make recommendations to developers, based on looking at examples from each of these systems. Alright, so moving forward, what I'm trying to look at for my future work is how to apply this suggested changes system to real developers on GitHub. If you remember the earlier slides, the issues we ran into were interrupting developer workflow and social context when we automatically submitted pull requests; and in terms of what receptiveness actually is, we looked at familiarity and desire: how can we target developers' familiarity and the things they desire in tool recommendations? So we plan to build a bot that automatically suggests tools on pull requests, saying, hey, here's a tool that can help you find and fix errors in your code.
To target desire, we say, hey, developers don't want bugs in their code; you don't want errors, and this tool can fix these errors. So we emphasize the error-fixing part, since research shows debugging is the most time-consuming part of software engineering and developers' least favorite part. For familiarity, we emphasize: this is your code, you made this change at the top, this is a project you contribute to, so you're familiar with this area of the code; and we do it without being mean about it and saying, you added this error to this project. For social context, we add information about the tool: the error itself, links to other places in the code where the same error appears, a link to the tool's website, and we fix the spacing issues from before. All of this provides more information about the tool and the things developers want to see when they receive a recommendation or a suggested change for a tool. Finally, for developer workflow, we make the recommendation on the exact line of code that has the error, and we actually provide a fix, which addresses a complaint from the pull request study about all those errors being reported without fixes. Here we say, hey, this tool suggests this fix for this error, and at the bottom there's a commit suggestion button, so developers have the power to decide whether to commit it or hold off. With the pull requests, the tool was automatically built into the continuous integration or build system, ran automatically, and caused a bunch of things to fail; in this case, developers get to choose whether they want to try it or not. Alright, so just to wrap things up: what does this mean for DevConf? We're all here.
A lot of people are here recommending tools. Just a quick glance through the website and the schedule shows all of these tools being recommended at this conference, and this is just from the titles of talks, not the talks themselves. If you're a developer, you can look at this list and think, this is overwhelming; I don't know which talks to go to, and I don't know how the tool in a talk will actually apply to what I'm trying to do. It can feel like a sea of tools. So, some things to consider. We found that the most effective way to recommend tools is through peer interactions; unfortunately, that's not very scalable or appropriate for how software engineering works today. Say you're a tool developer or a tool enthusiast and you want to get your tool out there. The first thing we'd say is to target receptiveness, again looking at the desire and familiarity of programmers. How does your tool fit into developers' desire to improve their projects? What is the goal of your tool, and how can it fit into a developer's workflow? How is your tool familiar to developers; is it similar to other tools, does it fit a workflow developers already follow? What's the best way to show them: you're already familiar with this technology, just use this tool, and it can do the work for you. For social context, developers don't like outsiders or intruders invading their project without being part of the community and the ecosystem of the project. So rather than barging in by sending emails or pull requests, you want to say, hey, here's this project, we just want to integrate safely and smoothly without causing any issues.
And then finally, user workflow: if you target users, the last thing you want to do is interrupt their workflow and make them do more work. In terms of integration, find ways to effectively integrate your tool into the workflow of a developer, so that it's as easy as possible for them to accept the tool you're recommending. If you can do these things, then everybody will be happy. Thanks for your time, and sorry for bothering you.

The mic. I'm going to pass the mic. Sorry. One of the things that can happen when you're doing automated requests in any environment like GitHub is that the bot can be overly aggressive, especially on a busy project with a lot of pull requests. One of the things I was wondering is whether you've considered some way for a project to opt out entirely. They may not want to see 300 or 3,000 of these automated requests coming in; that could really sour perception, and on an active community it could impact the perception of the tooling in a broader sense. I wanted to hear your thoughts on that.

So, yes. One of my colleagues, Martin Monperrus, is trying to get this bots.yaml going. There's a movement from the Bots in Software Engineering workshop, which was held this summer, basically saying, hey, a lot of people on GitHub don't really like bots and don't want bots on their project. He's looking at adding a bots.yaml file, so if you're a developer on a project and you have this file, you can say, hey, we don't want any bots, or, hey, we only want pull request bots and not email bots; you control the scope of the bot interactions, I guess. And for bot designers, we're trying to get them to adhere to what the project wants when they see this file: if it says no bots, then no bots.
If they say only pull requests, then bots should only make pull requests, and so on. There was a link; I can send it to you later.

Regarding the peer-to-peer interactions: you mentioned a study you did with the Kaggle dataset, and that was a face-to-face interaction where you actually recommend tools. What about virtual interactions, where we recommend tools via blog posts or LinkedIn posts or things of that sort? Does that fall under the peer-to-peer interaction category as well?

In the study by Murphy-Hill, my old advisor, he looked at those separately. If a tool is recommended by a peer but through a virtual interface, Twitter for example, which he looked at, people say it's good for learning about tools, but it's still not as effective as a face-to-face interaction.

I think we've shown that human peer interactions are on the decline, especially in software engineering, so I think recommendations like those will become more important.

Right, and I think that study was from 2011. Now we're seeing things start to converge; eventually most people will get tools from social media or blogs rather than from other peers, and in that case it will be important to improve those recommendations. Any other questions?

If no one else has a question right now, I'd like to ask one. You talked about how there's a good amount of research showing that developers in particular don't like interacting with bots in a social way. Do you feel, at least anecdotally if not statistically, that you have something that gives you some amount of hope about making people feel more positive about interacting with a bot that's making pull requests?
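As an editorial aside on the bot.yaml answer above, here is a minimal sketch of how a well-behaved bot might honor such an opt-out file. The file format, key names, and values below are assumptions for illustration; the actual workshop proposal may differ.

```python
# Sketch of a bot honoring a hypothetical bot.yaml opt-out file.
# The flat "key: value" schema and key names are invented here.

DEFAULT_POLICY = {"allow_bots": True,
                  "channels": {"pull_request", "email", "issue"}}

def load_bot_policy(text):
    """Parse a flat key: value bot.yaml-style document (no nesting)."""
    policy = dict(DEFAULT_POLICY)
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "allow_bots":
            policy["allow_bots"] = value.lower() in ("true", "yes")
        elif key == "channels":
            policy["channels"] = {c.strip() for c in value.split(",") if c.strip()}
    return policy

def may_interact(policy, channel):
    """A well-behaved bot calls this before contacting the project."""
    return policy["allow_bots"] and channel in policy["channels"]

# A project that only wants pull-request bots, no email bots:
policy = load_bot_policy("allow_bots: yes\nchannels: pull_request")
print(may_interact(policy, "pull_request"))  # True
print(may_interact(policy, "email"))         # False
```

The point is simply that the project, not the bot author, decides which interaction channels are acceptable, and the bot checks before acting.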
So yeah, the first pull-request study showed that people don't like bots, and other research shows that too. By trying this design in the middle, we're hopeful we can improve how people interact with bots. This type of bot isn't as intrusive as saying "you must do this"; it's just a comment: hey, we saw this error, and this tool can find and fix errors like it. So it's not super spammy. We also haven't figured out the frequency: if we run a tool on a project and there are a hundred errors, of course we don't want a hundred of these showing up on somebody's pull request. So we're looking at when to recommend with this kind of system, and which bot, tool, or error to target. If you target errors developers don't really care about, they're not going to care about the tool either; but if it's a more important, higher-priority error, they may take a look. Those are some of the things we're still looking into. One last question?

Okay. I guess this is sort of the opposite side of that question: there's obviously a history of bots that people feel really positively towards, like the karma bots, the Python bots that do builds, Supybot and things that let you do things from IRC, and so on. You mentioned before that part of it is the bot being something that does things for you. So the question is: can you turn this into a methodology, so that if I came along with anything I wanted to recommend to people, I could find a way to rethink and reframe it so that it comes out the other side like those bots, and doesn't spam people with 200 annoying messages?

Right. Most of my work is on tool recommendations, so this approach is aimed at tools.
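The frequency and priority concerns in the answer above can be sketched as a simple filter: rank the tool's findings and cap how many the bot surfaces on a pull request. The priority labels and the cap are invented for illustration; picking them well is exactly the open question the answer describes.

```python
# Illustrative filter for deciding which tool-reported errors a bot
# surfaces on a pull request. Labels and the cap are assumptions.

PRIORITY = {"security": 3, "correctness": 2, "style": 1}
MAX_COMMENTS = 3  # avoid flooding a pull request with 100 findings

def select_findings(findings, max_comments=MAX_COMMENTS):
    """Keep only the highest-priority findings, capped at max_comments."""
    ranked = sorted(findings, key=lambda f: PRIORITY.get(f["kind"], 0),
                    reverse=True)
    return ranked[:max_comments]

findings = [
    {"kind": "style", "msg": "line too long"},
    {"kind": "security", "msg": "hardcoded credential"},
    {"kind": "style", "msg": "unused import"},
    {"kind": "correctness", "msg": "possible None dereference"},
]
for f in select_findings(findings):
    print(f["kind"], "-", f["msg"])
```

Because `sorted` is stable, ties keep the order the tool reported them in; a real bot would likely also deduplicate repeated findings and track what it has already commented on.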
One of the other things I'm interested in is how we recommend processes or practices, things like code review or testing. How do we say: hey, we noticed you aren't really doing this in your project, can we recommend you try it out? Again, the Wessel study showed that people like bots doing things for them but don't like doing things with bots, so improving that interaction is probably the priority, and then actually showing that these things matter. It comes back to receptiveness: if developers think it's important, they're more likely to have the desire to try it. Or you appeal to familiarity: with the Python black tool, for example, you can say, you always fix this formatting manually in other people's pull requests, but if you use this tool it will format the code automatically, so you don't have to keep doing that and you can spend your time on cooler projects and other things.

Okay, I think we're just about at time. Thank you, this was great. I'd like us all to thank our speaker. We have a short afternoon break, and then there's a final conference event in Metcalf Hall, I believe. Also, there's a survey here if you'd like to provide feedback about the talk; it's just a few questions, to participate in the research.