Hello, my name is Akata Kruzhikova and I am a PhD candidate at the Centre for Research on Cryptography and Security at the Faculty of Informatics, Masaryk University, in Brno, Czech Republic, and I do part of my research in collaboration with Red Hat. My talk will be about contributor security behavior regarding user authentication in open source projects, from a usable security point of view. So, let's start with a short explanation of the context: how do we even investigate user behavior? To focus on the human role in IT security, we have a research discipline called usable security. According to the International Organization for Standardization, usability is defined as follows: the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. Security, on the other hand, is defined, again according to ISO, as the prevention of damage to, unauthorized use of, and exploitation of electronic information and communication systems, and, if needed, their restoration, to strengthen the confidentiality, integrity, and availability of these systems and the information they contain. Basically, the goal of usable security is to balance usability and security, or, let's say, to achieve security via usability. The usable security field has focused on applied problems at the intersection of cybersecurity and usability. However, there are two common problems for any field that combines two disciplines, which usable security definitely is. Experts in one field usually do not have expertise in the other one. So, to be able to understand contributor security behavior, IT security expertise alone is not enough; we need to use some social science techniques as well. As I mentioned before, today we will focus mostly on security behavior regarding authentication.
Firstly, let's look at some more general problems in secure behavior regarding authentication, which relate not only to IT professionals but also to regular end users. Users usually do not follow good security practices regarding user authentication, and not only that: it is very challenging for the designers of security solutions to propose solutions that will be happily accepted by end users. Twenty years ago, it was believed that the main reason behind this is that users are just careless, but now we know that is not true: the problem is usually not that users are careless, but rather low usability and an incorrect understanding of security. Now, maybe you will object that this is probably more applicable to regular end users than to IT professionals, who have a better idea of how security features actually work. And I agree with you. As you can see from the graphic, the green triangle demonstrates the number of users: we have many more end users. And yes, we can imagine that the green area represents not only the number of users but also the amount of trouble these users have in using IT securely, or in using IT at all. However, when we look at the blue triangle, which demonstrates the impact of a security incident, we can see exactly the opposite trend. Regular end users usually harm only themselves with their insecure behavior, though sometimes with quite a big impact; for example, in the banking context the incident applies only to one user, but that user can lose a large amount of money. IT professionals, on the other hand, harm with their insecure behavior all users of their solution, which is many more people. We also have to distinguish between IT professionals and IT security professionals, because their understanding, perception, and behavior can differ, and we have to keep in mind that not every IT professional is trained in IT security.
In fact, even studies of IT security experts demonstrate that IT security professionals follow security recommendations more often than regular end users, and that usability is an important factor for IT developers as well. So for this project, we focused on one specific piece of upstream development: the authentication of independent developers on GitHub. The main reason for that was the possible impact of a security incident, as we saw on the previous slide with the two triangles. Some companies, including Red Hat, use open source projects as a source for their internally maintained repositories, and these internal repositories are maintained according to defined and verified processes. However, the input to these repositories, the source code, is created by independent developers, and these developers are external to the company. So no company policy or processes apply to these contributors or maintainers; no single sign-on is in place, nothing like that. How, and whether, secure authentication is used in these projects is entirely up to the project developers or maintainers themselves. If somebody steals access to their account, or steals their identity, it could have serious consequences. The attacker could more easily smuggle malicious code into the supply chain. This could result not only in the obvious security incidents, but also in a loss of trust: the reputation and credibility of that person, whether developer, maintainer, or contributor, could be seriously damaged. So what authentication methods can we investigate on GitHub? The first method, mandatory for all users, is first-factor authentication: login and password. Even though it is mandatory for everyone, how users perceive it is still interesting. Second-factor authentication is not mandatory; users can choose one or more of the offered methods.
They can start to use a method and then deactivate it, switch it for another one, and so on. One of these methods is the authenticator application, which is basically a software token; users can choose which application they want to use. For personal usage, this is probably the most suitable option, because no additional hardware needs to be bought and maintained. Another option is security keys, basically a standard hardware token. For personal use, people have to buy one, but they can also use a token they got from their company, if they are willing to and, of course, if the company agrees. Then there is the SMS code, which is such a standard method that I think no further details are needed here. I just want to mention one interesting note: we wanted to investigate GitLab as well, not only GitHub, but we did not get enough answers for that platform. Notably, GitLab did not offer the SMS code as a second factor at all. The SMS code as a second factor is not considered very secure these days; you may have noticed that, since a year or two ago, it is no longer possible to use an SMS code as an additional security element for your online banking operations. Regarding recovery options, they are mandatory when at least one second-factor method is activated. The first one offered is recovery codes, strings of characters; you can see them in the picture on the right side. Then there is a fallback SMS number, and finally recovery tokens, as they are called in GitHub, which are basically a connection to an account on one of the most used social networks, Facebook.
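The authenticator-app option mentioned above is based on the TOTP standard (RFC 6238), which derives short-lived one-time codes from a shared secret and the current time step. As a minimal sketch of how such codes are computed (an illustration using only the Python standard library, not GitHub's actual implementation):

```python
# Sketch of HOTP (RFC 4226) and TOTP (RFC 6238), the scheme behind
# authenticator apps accepted as a second factor. Standard library only.
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # low nibble picks the slice
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(base32_secret: str, period: int = 30, digits: int = 6) -> str:
    """TOTP: HOTP keyed on the current 30-second time step."""
    secret = base64.b32decode(base32_secret, casefold=True)
    return hotp(secret, int(time.time()) // period, digits)
```

Because the code depends only on the shared secret and the clock, any app the user chooses produces the same values, which is why the platform does not care which authenticator application is used.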
We sent a quantitative questionnaire via a mailing list to Red Hat employees, asking them to fill it in. In the questionnaire, we asked about their actual usage and perception of their GitHub account and the related authentication methods. We also asked them to do some simple tasks with the authentication log, some checking and filtering of the records. Security behavior regarding authentication, and possible determinants of this behavior, were measured via a model inspired by psychology. Participation in the study was purely voluntary, data were collected anonymously, and the company did not get any information about which of its employees took part in the study and which did not; of course, the company also did not get any individual employee's answers. In the end, we got 83 full responses from representatives of both groups, open source project maintainers and contributors. Most of them work as software engineers at Red Hat. Roughly one-third of our participants came from the US offices, one-third from the Czech office, and one-third from other Red Hat offices, and the data are from November 2020. So let's look at the main results. Most of the people use two-factor authentication, which is a very positive finding. The most used second-factor method is the authenticator application, or software token, where users can decide which application to download for free, and the second most used second-factor method was the hardware token. Notably, the SMS code was the least used method. Next, we investigated how participants perceive the authentication methods offered by GitHub, not what they use, but how they perceive them. Participants were asked to evaluate all the methods, even if they were not using them or had no experience with them, both in terms of security and usability.
Most of the methods were evaluated at least rather positively, both in terms of usability and security, except for Facebook. Facebook as a method got significantly fewer answers than the other methods: fewer than half of the participants decided not to evaluate Facebook and simply skipped this question. This could have different reasons. The method may not make sense to them, for example because they do not have a Facebook profile, so they did not want to spend time even thinking about it, even though they were asked to evaluate all the methods regardless of their experience. Another interesting observation from this smaller sample is that those who did evaluate it rated Facebook very low, both in terms of usability and security, that is, as rather unusable and insecure. An interesting question came up from this: if a system or service deploys or offers some security feature which does not make sense to users, or which users simply do not trust, will the users' perception of the whole system's security be affected by this or not? Okay, let's look at a not so positive finding, which concerns policy. Participants were confused about the rules applicable to their account, which we can see from the 36% of participants who did not know whether any policy applies to their account or not. As I said before, we focused on independent contributors to open source using their personal accounts, so some of our participants could also use accounts managed by their company; 26 of them actually reported having some company policy applicable to their account. However, there is nothing between these two states: either you have your personal account, or you have an account managed by the company. So where is the risk?
It is important that users know whether somebody else is taking care of the security and bears the responsibility, for example the company via a policy, or whether it is their own responsibility, so that they know they have to focus on securing their account themselves. Okay, we have to be careful with interpreting and generalizing our results. Why? Because we registered 252 clicks on the survey link, but we got only 87 full answers, and 83 after cleaning. We sent the invitation to the research to a mailing list with around 10,000 recipients, so the response rate is very low. Since IT professionals are a very hard-to-reach sample, we had to rely on participants' willingness to take part in our study, which always brings self-selection bias. In our case, self-selection bias could mean that our sample consists mostly of people who are more interested in our topic, people to whom the topic is more important. Our participants also perceived themselves as rather IT-security savvy, which could further influence the results: perhaps only people who are aware of IT security took part, and that skewed the results positively, because the topic is important and interesting to them and they may know more about it. Also, the number of participants is not very high; I would say it is quite borderline. But honestly, this problem is not specific to our research; it is common to social science, usable security, and UX testing in general, because people usually do not enjoy filling in questionnaires, especially IT professionals. Since we did not have access to the participants' GitHub accounts, which would, by the way, not be ethical, we had to rely on self-reported data. However, participants were encouraged to work with their accounts via provided links, which could help them fill in the questionnaire more quickly and precisely. To be able to draw stronger conclusions about contributor security behavior, we would need a bigger sample.
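The response-rate concern above can be made concrete with the numbers from the talk (around 10,000 mailing-list recipients, 252 clicks, 87 full answers, 83 after cleaning); a quick sketch of the funnel:

```python
# Survey funnel using the figures reported in the talk.
recipients = 10_000   # approximate mailing-list size
clicks = 252          # clicks on the survey link
full = 87             # fully completed questionnaires
cleaned = 83          # responses remaining after data cleaning

response_rate = cleaned / recipients
completion_rate = full / clicks

print(f"response rate: {response_rate:.2%}")      # prints "response rate: 0.83%"
print(f"completion rate: {completion_rate:.1%}")  # prints "completion rate: 34.5%"
```

A response rate under one percent is why the speaker stresses self-selection bias: the 83 respondents are almost certainly not a random sample of the 10,000 invitees.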
To finish, I would like you to take away these three points from my talk today. Usability and security of authentication are important to IT professionals as well. Not only the authentication of commits is a serious risk in supply chain attacks; weak user authentication is as well. And this is an important topic that needs more attention in the future. If you are interested in more details about this research, please see our article in the Red Hat Research Quarterly from August 2021. That is all from me, so thank you for your attention.