As Ross said, I'm Adam Doupé. This is Yan Shoshitaishvili. I mean, I can pronounce it, but that's because I've known Yan a long time. And so we actually co-direct, here at ASU, the SEFCOM lab, which is the Security Engineering for Future Computing lab. If you're ever in the position where you have to name a research lab, name it as broad as possible. That way you can do whatever you want, and you're not even constrained by your own name. Good advice, maybe? Yeah. Great. So a little bit about our background. It's funny, as I was editing these slides, I was telling Yan that we actually didn't have to update them very much, because we have similar backgrounds. So my background is I went to UC Santa Barbara for three of my degrees. I went there for a four-plus-one program, so I did my bachelor's and master's there, and there I got involved in research. And at the end of that, I was like, I am so happy to graduate. I had a job at Microsoft, I'm going to work full time, and I never, ever want to go to school ever again. That lasted about a year before I decided, man, I really liked research. I literally remember: I'd written a paper for my master's, and I was on the bus ride home from Microsoft in Redmond to my place in Seattle, reading this paper. And I must have read this paper like 30 times, because we were rejected from two different conferences. So I'm working full time and trying to edit this paper, and it just kind of hit me that, wow, there are such cool research problems here. I did all this work, we laid out where the field could go, and then I realized somebody else is going to go do that. But I know all this stuff. I should go do that. These research challenges are here waiting for me. So then I emailed my advisor and decided to go back to Santa Barbara for a PhD.
So I was there for four years, graduated in 2014, and then I ended up at ASU. So I've been here for three years, going on four. And it was at UCSB that I met Yan. So Yan, you want to take it? My history is less cheerful and more traumatic. No, I'm just kidding. You're not a superhero, you don't have a tragic origin story. Well, we have to, we are superheroes. So I went to undergrad in Upstate New York, at Rensselaer Polytechnic Institute, which is the MIT of Upstate New York. It was an interesting time, and when school ended, I was, similar to Adam, super excited to get the crap out of there, because RPI is in an area that's not very, very nice. Anyways, so I moved to Phoenix, went into industry, and was making the nice industry income working at a bank. And then I got very, very bored, similar to Adam, but without knowing what the potential for research in academia was. And so I ended up at UCSB through a completely different channel: through hacking competitions. We'll talk briefly about this today, but Adam and I were both members of the Shellphish computer hacking team, and we participated in hacking competitions around the world, went to Las Vegas for the world championship and so forth. And in Las Vegas at the world championship, participating with Shellphish is kind of where I proved myself to Giovanni Vigna, our advisor, who then invited me to pursue a PhD. And I was so bored in Phoenix, in industry, that I was thinking, oh, sure, why not? And actually, in the qualifying event for the world championship, which is an online thing, we played it at Santa Barbara. Before I was a student there, my first time in the lab, that's where I met Adam. He was still a master's student there. And we literally didn't know each other at all before that, because I was just somebody's friend who showed up.
But we ended up working on the same problem, reverse engineering a virtual machine, which we still remember. That was pretty awesome. We didn't actually figure out that that's where we met until years later, when we were already doing our PhDs together. And so we both kind of grew up in this security lab in Santa Barbara, hacking on Shellphish. And we both moved to our new hunting grounds: Adam led the way and then pulled me here as well. And now we're here at ASU doing this sort of next-generation cybersecurity research and also doing cybersecurity competitions. So we are the faculty advisors of the Pwn Devils, and once in a while we also team up with Shellphish, like a conglomerate team. So this is a student-led organization. They meet Tuesdays and Thursdays for two hours, usually from around 4 to 6. The Tuesday meetings are more beginner-style meetings, where people get introduced to binary exploitation and binary vulnerabilities, and the Thursday meetings are more advanced, where we cover real-world CTF challenges. This is open to everyone; actually, the current president of the Pwn Devils is a junior. And he got started because two years ago he interviewed me for the Grand Challenge Scholars Program and asked me about security stuff. And I said, yeah, I have this club that kind of meets. And this was very informal, they weren't even a student org or anything. So he started joining. He went to all the meetings, he read books on his own. And then about a year into it, he was like, I think we should become a student organization. So he drafted a charter with other members, we voted on it, and he was voted the president. So now he's the president. And actually, we used to have one meeting a week, and then this summer he was like, we should have another meeting for beginners. And I said, I don't have time for what is essentially a full-time class load to do that.
He's like, I'll teach it and I'll run it. So he does those Tuesday meetings all on his own. And so we're very open. We're bad at publicity right now; we're getting better, and we'll do better in the future. But if you want more details, feel free to email me, I'll have my email at the end. I'll be happy to give you all the information and point you in all the directions so you can come and join us. How many people on average attend? I believe the beginner meetings on Tuesdays are about 10 to 20-ish, and the advanced ones are only around five right now. But when we compete in CTFs, we've had anywhere from 20 to 30 students show up; sometimes we do a 48-hour competition. 209 is actually a great room that we've used for that. So CTFs, how many do you guys do a year? It varies heavily. I've been pushing us to do more than one a month. Nowadays there are CTFs every weekend. Yeah, kind of crazy, it's almost non-stop. Actually, next weekend we're taking a road trip to Santa Barbara to play on site with Shellphish, kind of a knowledge exchange. One of these days, hopefully they'll come to us. So yeah, these security competitions are an amazing experience, and from them you get ideas and skills that you can then apply in research. That's kind of our secret sauce; actually, we both got into it that way. So the way I got into security research is I took undergraduate security from Giovanni Vigna, my advisor. And I did well enough in his class that he invited me to join their hacking group. So I proved myself in class, he invited me to join Shellphish, and I would go to the meetings and compete in CTFs with them. And then I finally started asking him about research, and then started bugging him. And it took about three or four months, probably longer, before he actually let me work with him.
But that's how I got into it: by proving myself through these kinds of things. Good stuff. All right. So, SEFCOM. SEFCOM is our lab. So Dr. Gail-Joon Ahn is the founder of SEFCOM. He really started this lab and started doing the type of security research that both Yan and I really enjoy and like to do. There's some kid there; I think that's one of the PhD students. I think that's me, but who can tell? We'll blame it on the projector and not on me. And we have Ziming Zhao, a research assistant professor in our lab, and Yan. So we all four basically co-direct this lab, which is kind of insane. We have about 15 or 16 PhD students, about five or six master's students, and around 10 or so undergraduate students. We'll go quickly-ish, because I want to talk to you guys about stuff, but give you a brief overview of the areas. So let me poll the room, because this is an FSE class, right? What's your guys' background? What area are you in? We're both computer systems. Computer systems. Computer science. Computer science. Computer science. Oh, nice. Yeah, this was a weird year where nobody outside of CS took the class this time. Okay, but they're missing out, because with security and all these types of things, there's a lot of overlap. So in general, we see our goal as improving the security of software, wherever that software may be, in order to keep people safe on the internet, keep your data safe. So things like the Equifax breach, things like the OPM data breach, these are all things that we think about: how can we make these things better?
And so in terms of research, if you can think of someone who's the opposite of a theoretical researcher, that would definitely be us. We do almost zero theoretical-style research. Not that that's not important. A, it's not my strong suit, so I'm not good at that stuff. But B, the way I think about it: you can come up with some new, cool algorithm to solve a particular problem, and that's a cool breakthrough, right? It may not be useful now; it may be useful five or ten years from now; it's hard to tell with that kind of research. But we want to solve the security problems that are happening now. We want to make an impact, and that's really what defines what we try to do: can we actually positively impact things now? So we're very much focused on impact and actually improving the current state of things. And Adam mentioned keeping people safe, keeping data safe. That's super important, and that's the end goal of security research, but we also do a lot of research that, in terms of immediate applications, is very offense-oriented. Offense versus defense, not offensive. So, for example, automatic exploitation of vulnerable software, this sort of thing. The idea is, of course, you build up a lot of techniques to automatically identify vulnerabilities in and exploit software. That might seem purely offense-oriented, but in reality, if everyone has these tools, then those kinds of bugs tend to be found before they become a major problem. Those tools can actually be used defensively. Yeah, and in general, the way we think about things is: just finding one security vulnerability is usually not interesting, right? Like, I could take any of you, train you in security, give you a million dollars and two years, and set you on an iPhone or Windows 10 or macOS.
I could set you on this and you would find an interesting new vulnerability that nobody's ever discovered. Do you believe that? Yeah, so in Las Vegas, at DEF CON, there's what's kind of considered the world championship of CTF. And every once in a while you need a break from CTF. So just this past August, since a zero-day in iOS is a one-million-dollar thing, we sat down and were like, all right, let's find a zero-day in iOS. And literally the first driver we opened had a vulnerability in it. It still requires some amount of access to the device, but you can sit down and you can find it, you know? Right, and the point is, that's cool, I mean, I think that's cool. If you came to me and said, I found this new vulnerability, I think that's very cool. But from a research perspective, it's not very interesting, because how can you scale just throwing money and people at a problem to find bugs? That's something that cannot scale, especially as more and more software is being written and deployed: in your car, on your watch, everywhere. So for us, what we think about is: yes, finding bugs is cool and awesome, but it's even cooler if you can develop an automated system that can automatically find and even automatically exploit bugs. And that mindset helps drive our research; that, to me, is the interesting thing. Students, at least some of the undergrads, always want to either throw machine learning system X at a problem or just find bugs. And it's like, no, those things aren't scientifically interesting. There needs to be some cool new insight. And actually, to temper expectations, that zero-day wasn't actually exploitable for code execution; it was just a denial of service, you could crash a service. Nice. Ross, do you have a comment?
Cool, so some of the areas that we work in, if you want to hit the button. So one area where we have students working, and where we have an NSF project, is software-defined networking. Have you guys heard of SDN? Has anybody admined a network before? Have you admined your own wireless router, been into the admin page? So actually, shockingly, even in a corporate network, the switches you'd buy from Cisco or whoever are similar to that: there's a web interface, and usually it's a horrible interface that only works in Internet Explorer 6. And that's how you configure the switches. So it's an incredibly manual process. Even though they have very cool functionality, you still have to configure these things by hand. So the idea behind software-defined networking is: what if each of the switches were incredibly dumb, and you had one central brain, they call it the controller, that can talk to all the switches and configure and change things at runtime? It can change how packets route throughout the network programmatically, so you can write apps that get data from the controller, see the entire state of the network, and change routes. So you can do things like load balancing at the network level; you can do all kinds of really cool things. From a security perspective, the traditional way of thinking is the firewall at the edge: block all bad incoming traffic. But if you're a malicious employee, say an employee who just got fired and wants to do something malicious, or you have malware running on your computer, you're going to be inside the network, and a firewall usually can't catch that traffic. So one of the cool things we're exploring in our SDN work is: how can we use SDN to make every switch essentially be a little firewall? How you actually do that efficiently and effectively are some of the interesting questions we're looking at here.
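To make the controller-and-dumb-switches idea concrete, here's a minimal toy model of it. This is an illustrative sketch only, not a real OpenFlow implementation; the class names, rule format, and addresses are all made up for this example.

```python
# Toy SDN model: dumb switches holding match/action flow tables,
# plus one central controller that programs all of them at once.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []  # ordered list of (match_fn, action) rules

    def install_rule(self, match_fn, action):
        self.flow_table.append((match_fn, action))

    def handle_packet(self, pkt):
        # First matching rule wins; unmatched packets go to the controller.
        for match_fn, action in self.flow_table:
            if match_fn(pkt):
                return action
        return "send_to_controller"


class Controller:
    """The 'one central brain' that sees and programs every switch."""

    def __init__(self, switches):
        self.switches = switches

    def deploy_firewall(self, blocked_host):
        # Make EVERY switch a little firewall: drop the insider's traffic
        # at the first switch that sees it, not just at the network edge.
        for sw in self.switches:
            sw.install_rule(lambda pkt: pkt["src"] == blocked_host, "drop")


switches = [Switch(f"sw{i}") for i in range(3)]
ctl = Controller(switches)
ctl.deploy_firewall("10.0.0.66")  # hypothetical malicious insider

print(switches[2].handle_packet({"src": "10.0.0.66", "dst": "10.0.0.1"}))  # drop
print(switches[0].handle_packet({"src": "10.0.0.5", "dst": "10.0.0.1"}))   # send_to_controller
```

The point of the sketch is the architectural shift: enforcement logic lives once, in the controller, and is pushed to every switch, instead of being hand-configured per device through a web UI.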
The web, so the web is my background; my PhD was on web application vulnerability analysis. How can we automatically identify vulnerabilities in a website, either through black-box fuzzing and interaction, or by automated source code analysis? So we have students following up on that work in different domains, trying to think of new ways to find new types of bugs, or new bug-finding approaches. Then we have cyber reasoning systems. This is kind of an emerging area. Yan, you want to talk a little bit about your background on that? Yeah, it's an emerging area of cybersecurity. I got into this, again, through capture the flag. At UC Santa Barbara, in my later years, I was the captain of Shellphish. And this event came out called the Cyber Grand Challenge, run by DARPA. I don't know if you guys have heard of DARPA, though. They paid money to invent the internet. Yes, they were the funders of the internet. We would literally not have an internet without DARPA. The internet was originally called ARPANET, and ARPA was the Advanced Research Projects Agency, which was later renamed DARPA, the Defense Advanced Research Projects Agency. They put huge amounts of money into research that is applicable to defense, and security is heavily applicable to defense. And they ran the Cyber Grand Challenge, where they pumped something like a $60 million budget into it. Can you step back a little bit? What was the original grand challenge? So DARPA ran the original Grand Challenge for self-driving cars, in 2006 or so, I don't know if you guys heard of that. Too far back. The basic idea of the self-driving car grand challenge was: let's put up a lot of money to publicize the problem and fund potential developments in self-driving autonomous vehicles.
And so DARPA funded a number of teams who had to compete at a final event, for prize money, racing through a desert somewhere to finish a track. And then DARPA did the same thing in the urban self-driving challenge, I forget the exact name, which was on city streets, same sort of idea. Then DARPA did a robotics grand challenge in 2015; you might've seen YouTube videos of robots trying to open a door and falling over. That's where those came from. And in 2016, DARPA ran the Cyber Grand Challenge, where autonomous systems, fully without human interaction, had to analyze software, identify vulnerabilities, exploit the vulnerabilities, and patch the software to protect it against those exploits. Essentially, they hosted this as an autonomous, no-humans-involved CTF, like a traditional capture the flag, just with all automated systems. So I was the captain of Shellphish at the time when the call came out, and I thought, man, Shellphish should do this. So I dragged UCSB into this contest, and we built a cyber reasoning system that won third place in this challenge. That was a total of one and a half million dollars of prize money for the hacking team. So that's a lot of international travel. So here at ASU, we're pushing forward that research, specifically, at the moment, making these automated systems capable of hacking feats that are reserved for humans nowadays. Your modern exploit, for something like jailbreaking a phone, chains together a huge number of vulnerabilities. A friend of mine demonstrated an exploit at a Pwn2Own-like event in China, where you chain something like 10 vulnerabilities to go from an Android app with zero permissions to full compromise of the trusted execution environment, which is responsible for stuff like your fingerprint reading when you're trying to authorize a payment, things like that.
So no current technique can chain vulnerabilities automatically like that, and that's one thing we're pushing into. Another area we're pushing into is the automatic identification of more and more classes of vulnerabilities. The Cyber Grand Challenge focused mostly on memory corruption. So, a buffer overflow, if you've learned about this, is where a program handles data incorrectly and ends up overwriting its own code, for example, or overwriting the location from which it's executing code and executing the data as code. We're trying to push that into algorithmic attacks and into a number of other types of vulnerabilities. It's just a cool area of research: you build systems that can automatically hack for you. I think the cool thing is, a lot of the web work that we do is analyzing source code, like a PHP web app or a Python web app. These cyber reasoning systems are given binaries, so x86 or ARM assembly code instructions. And from there, they have to reason about the behaviors of the program to see if there's potentially anything unsafe. But they can't just say, yeah, we think there might be a bug here. They need to prove it by actually generating an exploit that can control the program, to show that they can actually exploit it. So, a very cool area of research. And also automated program repair, because these cyber reasoning systems have to defend: in the given problem area of the Cyber Grand Challenge, they had to automatically fix the bug. And of course, in the real world, you can imagine automatically fixing bugs is also desirable. Cool, so we have a new project on access control, and the interesting part about this research is that it's on industrial control systems and SCADA systems, specifically targeting the power grid.
So power operators, like most industrial control systems, have one machine, which is unfortunately typically a Windows XP machine that hasn't been updated in 15 years, connected on some type of network to a bunch of programmable logic controllers that actually control physical processes. And if those physical machines are generating power, you can imagine that if an adversary can get onto that controller, or onto the system that controls the SCADA system, they can cause massive damage. I'm not a power person, I'm not an EE person at all, but they can really mess with the power there. Has anybody here heard of Stuxnet? Yeah, so Stuxnet was a virus allegedly created by, well, I guess we don't know officially yet. I shouldn't say anything; let's wait 50 years and then we can see. What we do know is that it specifically targeted Siemens controllers of physical systems that were used to enrich uranium. What it would do is basically get on a machine, identify that, yes, it's talking to the PLC that is used for nuclear enrichment by the Iranian facilities, and then subtly cause the system to degrade over time while reporting back that everything was normal. So the status display showed all green lights, everything's great, but it was subtly changing things such that they had failure rates of like three times the norm, and the scientists there didn't understand what was happening, because their failure rates were huge. And that's one clear instance, but you can think, man, this could be a really big problem if a nation state had access to the power capabilities of another nation state.
I mean, you think about what would happen if the power went out for weeks at a time. That would be pretty crazy. You can see what happened, look at Puerto Rico. Yes, exactly. Not a nation state, but pretty similar effects. So we have other students working on incredibly low-level security, maybe for the CSE folks. And this is what I actually really love about our lab and about being a professor: at Microsoft, I was on a team that did X, Y, and Z, and that was our product, and that's what we worked on. But here, as a professor, I get to work on crazy high-level web things, next-generation networking, access control, cyber reasoning systems, and also way down low. So the ARM platform has this TrustZone security extension, where basically each register on the chip is banked between the normal world and the secure world, so there are two different copies. Literally the silicon is guaranteeing isolation between these two worlds. And this is what Yan was talking about: this is how, on your Android phone and even your iPhone, when your fingerprint is read, that actually calls into the secure world, into TrustZone, and that does the fingerprint checking. What this means is that even if your kernel in the normal world, the complete OS, is owned, the attacker can't actually get access to your fingerprint data unless they also compromise and take over a component in that secure world. So we're looking at how to leverage this for new interesting security properties, and we're also trying to break some of the security guarantees here. We also have some students looking at, I didn't put it on here, car security. So looking at CAN buses, and how we can use TrustZone or other ARM security features in these kinds of embedded-device scenarios. Yeah, and we're also really interested in mobile device security. Like Yan said, oftentimes we just install apps, right?
You want some app, so you install it on your phone. You don't think, wow, if this app is actually chaining together 10 vulnerabilities, it could completely compromise my device. And so we have a corpus of, I think, 1.5 million Android apps that we've been downloading and crawling from the Google Play Store. We have a super awesome server, I think it's a 2U or 3U server with space for 45 hard drives, that we use to store all this data. Yegana, the student in the middle there, actually assembled that whole server herself, which was super cool. And the idea is that we want to answer interesting questions about the security of mobile apps and the mobile ecosystem. We're working on a project now to look at phishing applications: apps that will put up a login window, let's say for a banking app, and try to steal your credentials. How can we automatically identify that before you ever install the app? So that's a cool, fun project. And on the flip side, Raymond is working in the telephony space. We're trying to combat telephony fraud, robocalls, and scam calls by adding a green-lock, authenticated caller ID mechanism to the telephony network, which is super fun. Yeah, and we have work in forensics. Mike, on the left, is working on Chromebook forensics. He has this really interesting research, because it turns out on a Chromebook, you guys know what a Chromebook is, right, the encryption is done very well. There's a master key that encrypts all the other keys, and the entire hard drive contents are encrypted. So put yourself in the shoes of a forensic investigator: you bust into a hacker's place, you see a Chromebook. Do you just throw it away and say, oh, this is all encrypted, it's garbage, I can't use it?
It turns out from his research, the answer is no: we can actually get forensically useful information off that device. One thing we found is that the extensions you install on a Chromebook, even though we can't read the contents of files or the file names themselves, still leave enough of a fingerprint, based on the file structure and the rough file sizes, that we can say, hey, it's likely that this Chromebook has these extensions installed. Which then means you can go to the companies that have that data, say it's Evernote, and subpoena Evernote for this person's data. But if you don't know that link exists, you can't actually do that. And this is with a completely encrypted Chromebook. I guess that's why you should use full-disk encryption. How do you get around that? Those are different questions. And yeah, the other area that we're really interested in is threat intelligence and security operations centers. A security operations center is basically a group inside a company that's charged with responding to security incidents inside the organization. The way these typically work is very dumb, in that they just literally have people in front of monitors. They'll oftentimes have these really cool-looking rooms with a huge 80-inch monitor with awesome graphics and alerts, but what it's showing is something like "virus detected on this computer," and then a person has to go wipe the machine, quarantine it, and do some analysis. So what we're doing here is actually pretty interesting: we're doing a qualitative study of security operations center analysts, to try to see what the common problems and common themes of their jobs are, so that we can figure out what things to automate.
So this is actually something that's completely new for me, doing this type of qualitative, interview-style research, but I think it's an important step to actually understand what they're doing, so that we can figure out how to help them. Because if you think about the research space, there's an infinite number of things you can do, but you want to choose the things that are going to be most impactful. So we want to first understand what these people's jobs are actually like, to understand how we can make the best impact. Eventually, we've been thinking about, not necessarily for this, how you could actually train people to be more security-aware using games. Could you teach them about phishing, or about different types of scenarios, in a kind of game environment? It's something I'm definitely interested in. Actually, I should mention one of the next steps of our cyber reasoning research. The Cyber Grand Challenge called for full autonomy: no humans allowed. They had an air gap, literally with army personnel guarding it. Serious business. Yeah, serious business. And a robot arm. So the systems got some data out, but a system would burn the data to a disc, and a robot arm would pick up the disc and drop it into another system, to ensure that there was no way data could flow back into the cyber reasoning systems, only out. Yeah, it was pretty intense. But in the real world, this autonomy requirement doesn't really exist; we have humans that can help. And so the next step for cyber reasoning systems is figuring out how we can help humans with an automated threat intelligence agent: how can we have cyber reasoning systems that interact with humans properly, that can leverage, for example, their own expertise and combine that with human intuition, even if the human is lacking expertise themselves? I have a question for you. Yeah, please.
In your lab, do you ever have people that go into a certain area, then see another one they're really interested in, and transition? No, once you start, you're locked in for a while. So, I think in general, different professors have different things that they find interesting. So you may work with one professor and then be like, hey, I think this other area is really interesting, and they may say, yes, go in that direction, or, that doesn't interest me, go find your own funding. It all depends, right? Because we do get funded research projects, so there are things that need to happen, and if nobody does those projects, we look bad, or we have to do them somehow. But in general, take somebody like Raymond: the telephony work is completely outside my area. We got started on this because Raymond came to us and said, hey, I think robocalls and scam calls are really important. We're like, yeah, we agree, but what are we going to do about it? We're computer people, we're not AT&T telephony engineers. And he looked at it, and the thing that got me hooked was that he said there's a service you can use that can put a voicemail in your voicemail inbox without your phone ever ringing. Usually, if you see a missed call from a number you don't know, you just never open it up, right? But this can actually drop a prerecorded message into your voicemail box. So I was like, this sounds insane, how could this actually work? So he did some digging, found some patents. What they do is they make two simultaneous calls. With AT&T, or whoever your provider is, there's a delay between when they receive a call and when your phone actually rings, usually a two-to-three-second delay. So you make two simultaneous calls; one will actually go through, and in about three seconds the phone would ring.
But in that time, AT&T considers the line busy, so the second call goes to voicemail. As soon as they detect that one call has gone to voicemail, they drop the other one and leave the voicemail there. So you look at your phone and see: new voicemail. And of course you're going to listen to it, because you never got a call for it. That kind of started us down this whole path. My whole perspective was: we've solved email spam. I don't know if you guys use Gmail, but Gmail's spam filters are very good; I get so few spam emails. And yet we constantly get robocalls and scam calls on our phones. Why is that, from a technical perspective? So yeah, in general, we're super happy for students to pull us in new directions, because that's what's interesting. We pull each other in new directions too. Like, I used to never really be interested in web, but now I'm working with some of these people on that. So yeah, it's pretty flexible, especially with a group that does so many different things, and especially from the perspective of an undergraduate researcher. Later on in your PhD, for example, it's hard to really fully switch gears, right? But even somebody like Jan, who worked on these cyber reasoning systems, also had a paper on privacy. Yeah, exactly, privacy in online social networks: the information that you leak about your relationships by sharing photos. Nothing to do with binary analysis, which is my usual area. I read the one on software-defined networking, where the honeynets were dynamically made. Yes, yes, we have cool papers there too. And some of the authors that made those papers interesting: undergrad, undergrad, undergrad. We had a bunch of undergrads on those. All right. How many undergrads do you think have been through here in your four or five years? 
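The two-simultaneous-calls trick described above can be sketched as a toy simulation. To be clear, this is not the actual telephony protocol: the carrier model, the delay value, and all the names here are hypothetical illustrations of the timing logic only.

```python
import threading
import time

# Assumed behavior (hypothetical value): the carrier waits ~2-3 seconds
# between receiving a call and actually ringing the handset.
RING_DELAY = 2.5

class ToyCarrier:
    """Toy model of one subscriber's line: the first call claims the line;
    a second simultaneous call finds it busy and is diverted to voicemail."""
    def __init__(self):
        self.line = threading.Lock()
        self.voicemail = []

    def place_call(self, payload, cancel):
        if self.line.acquire(blocking=False):
            try:
                # Line was free: the phone rings only after the carrier
                # delay, unless the caller hangs up first.
                if cancel.wait(timeout=RING_DELAY):
                    return "dropped before ringing"
                return "rang"
            finally:
                self.line.release()
        # Line busy: the carrier diverts this call straight to voicemail.
        self.voicemail.append(payload)
        cancel.set()  # attacker drops the other call as soon as this lands
        return "voicemail"

carrier = ToyCarrier()
cancel = threading.Event()
results = {}

def call(name, payload):
    results[name] = carrier.place_call(payload, cancel)

# The "ringless voicemail" trick: two simultaneous calls to the same number.
a = threading.Thread(target=call, args=("first", "prerecorded spam"))
b = threading.Thread(target=call, args=("second", "prerecorded spam"))
a.start()
time.sleep(0.1)  # ensure the first call has grabbed the line
b.start()
a.join(); b.join()

print(results)            # {'first': 'dropped before ringing', 'second': 'voicemail'}
print(carrier.voicemail)  # ['prerecorded spam']
```

The key point the sketch captures is the race: the second call lands in voicemail well inside the ring delay, so the attacker cancels the first call before the handset ever rings, and the victim sees only a new voicemail with no missed call.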
Probably, I've worked with, I'd say, about 10 or 15, but directly, maybe five or six; for some of them, Gail is the main advisor and I kind of help. So like Kevin Lau, who graduated last year: he was working with us, and he already published one paper on Bitcoin and ransomware analysis, and he got involved in some web security stuff with me. That actually helped me a lot and pushed us forward, and now I have more undergrads picking that up. He's at UIUC now as a PhD student. And he won an NSF Graduate Fellowship and was our CIDSE Outstanding Undergrad, or something like that. I think that was the right phrase; I just made one up. I should know, I run it. The chair of it, yes. So, going along with that, do you ever have undergrads who graduate and just stay in the lab? Yeah, so let's see. Didn't Raymond do it? No, but Connor and Zach and James are all in the four-plus-one program, and I actually really like that. Mainly, with an undergrad student, I usually like to do FURI first, because that gives us a well-defined, semester-long period. Because honestly, not every person is cut out for research, right? Research is a completely different beast from a class project. The longest class project you've done is maybe three or four weeks, right? But this is an open-ended research project, and I don't know how it's actually going to end, or if it will ever end. And so some students, even good people who are very strong technically and get A's in their classes, get lost in this uncertainty, where they don't know. How to move forward. Yeah, how to move forward, or even whether to move forward. So I like the FURI program; it gives us a good working base. 
And then if that goes well, I'll work with them further. And I kind of like the four-plus-one, well, we'll see how it works out, because I can get started with them as undergrads and then that can become their master's thesis, so we have a long time working together. And then, on the flip side, we've had some students like Ferris, in the lower right. He was a master's student who worked with me, not even a thesis student, but he started working with another student on some moving target defense work that I didn't talk about, and he did so well that I convinced him to stay for a PhD. And he stayed. We're very charismatic. So we're excited to convince you to stay for a PhD, and hopefully you'll stay. Yeah, I wasn't able to convince Kevin, but maybe he'll come back; we'll see. All right. So if you want any more information, please email us, and then I think we can spend the rest of the time with questions and a discussion about research. I mean, I think we hit on it a little bit, but, Jan, what are some of the things you look for in a researcher? In undergrads in particular? In undergrads. So for me, the obvious thing is being independently motivated, right? And how do you measure that? When Adam and I were still at UCSB, we had a couple of high schoolers come to us who were so intrinsically motivated that they were playing CTFs on their own. So that's one potential thing to look for. But aside from that, people who have figured out something non-standard, like they run Linux on their laptop, this sort of thing. Something that shows that, okay, this kid is motivated and able to learn stuff on their own. Aside from this, recommendations from other professors. So if you super excelled in a class project, then the next step could be that FURI project that Adam talked about. 
And then, once they start, the thing I look for in a researcher, almost more important than technical competence, or at least as important, is the ability to handle uncertainty, like Adam said. So I've had mentees who were extremely technically proficient but just could not function when there wasn't a clear path forward. And sometimes in research there's not a clear path forward. Oftentimes, I would say. Yeah, it's kind of the standard. Or there are multiple paths. Or multiple. So you just have to figure it out, try things. Yeah, exactly. And that's not a quantifiable thing; it's something where you work with a person and you realize, okay, there was no path forward, and they tried a couple of things, and something worked, and that was great. So I can give some concrete examples. When you're working with somebody, you will get stuck; you're doing research. If you come to them and say, "I'm stuck, what do I do next?", to me that's always a red flag, and we'll have a talk about why that's not good. Because this isn't my research project; I don't have time to do your research project. This is your project, so you have to own it. And part of owning it means understanding that getting stuck is fine, it's natural, it's a normal part of research. You're going to get stuck. But you should be thinking about ways to get unstuck, right? Like, maybe I do this, this, or this. And maybe you try one of them and it doesn't work. And so you can come to me and say, "Hey, I'm stuck on this. I thought about these ways forward. I tried way A, and it didn't work out too well. Of B, C, or D, which do you think is the best way forward?" That shows me that you're actually doing stuff and thinking critically about the problem, which I think is another important quality. 
I like to say, especially to the PhD students, that everybody's smart, especially at the PhD level, right? Intelligence is great; it's kind of a necessary prerequisite, but intelligence is not going to do your PhD, it's not going to get your research paper done, and it's not going to finish your undergraduate thesis or do your research project. Everyone's smart. You need to think critically, which is a different skill than just being generally smart. You need to be able to examine a problem, think critically, and you need to work hard. And that's something I think can often fall by the wayside. I've definitely seen it, especially in undergrads who go directly from undergrad to a PhD: they treat a research project like a class project. You work on it when you feel like it, you start two days before the deadline, which I know is how a lot of the students in my undergrad classes start. But a research project doesn't work like that. A research project takes consistent, sustained effort. And I'm not saying it takes 100 hours a week of work, but it needs like four or five hours of consistent progress every day, because you're doing something new. I mean, I don't know if you felt like this, but on some research projects I felt like I was just banging my head against a brick wall. And if you just keep doing that, enough days in a row, I guarantee you that wall will break, because all it takes is hard work; it will give, and you'll get through. I've definitely woken up in the morning after a dream where I was coding, and the code was floating, and I was trying to debug things, and I was like, oh my god, a new way of solving the problem. 
And, like, it definitely didn't work when I tried it, but then a few days later I had another epiphany that actually made sense, and I was able to solve the problem and progress. And I think part of it, maybe you found this too, coming from industry into a PhD, is that I approached my research like a job. I gave it eight hours a day; every day I was going to work on that research project. I didn't kill myself. I mean, you work like crazy for a deadline; we're very deadline focused, right? So the last two or three weeks before a deadline, you go crazy, working 12 hours a day to just finish the thing, and then after the deadline you take three or four days off or whatever. But it's that consistent effort building up to the deadline that builds the base so you can solve problems. Yeah, I was talking to a guy from an industry competitor, a team that actually beat us, and he was complaining that his industry people worked eight hours a day and then went home. They had no crunch time, nothing. Whereas PhD students work 16 hours a day, and so on. But that sort of rush is not sustainable. So like Adam said, you need to manage your time, but at the same time, sometimes you get really pulled into a problem. It's amazing to see when a student cannot stop working on a given problem because it's that fascinating. That's a good sign. A bad sign is when a student will take on a problem and then say, well, maybe this isn't for me. And there are cases where the problem itself is bad, because in research, some brick walls don't break. Sometimes the problem sucks. We try to frame good problems, but it's hard to see fundamental flaws in a direction until you've started walking down it. And so the project can evolve. 
And so you have to be ready to have a project that mutates a bit as you're doing it. But sometimes a project is complete crap; it needs to be scrapped and you start over. It becomes a problem when you see a student, and I've had students like this, try for a couple of days on a project and then say, actually, I want to do something else; then try for a couple of days on another project, and again, actually, I want to do something else; and so on. After a while, it's one of those things: if you meet one asshole in a day, they're an asshole, but if everyone you meet is an asshole, maybe you're the asshole. It's a good analogy, but I would phrase it differently. All right, well, with that, it's about time for class to be over. Let's thank our speakers again. Thank you, guys. Very quick, sorry: short advice for how to actually start working with a professor as an undergrad? From my perspective: take my class. I want to try you out first, right? I want to see that you're going to be worth my time investment. So do well in my class and stand out. There are 240 students, or 140 students; go to office hours, talk to the professor, get your ideas and your name known to them, so that you can follow up by asking them about research. For me, the pwndevils and those kinds of things are great projects; a lot of the undergrads I work with have come from them. And be super persistent with your emails. Ross and Jan and I get an amount of email that would make you explode. I have students in my class who complain about a mailing list with 10 or 20 emails a week, like, oh, it's flooding my inbox. Welcome to the real world; you're going to have to deal with email. What that means, from your side, is that you should be persistent, because that's, again, a quality we're actually looking for in researchers. 
So one of the secrets is that sometimes I won't respond right away, just to see what you do. If you really want it, if you really want to do research, you're going to follow up with me again, right? You're going to bug me. It's going to be impossible for me to avoid you. That's the kind of persistence I want. And that maybe even extends to before you take the class: if you're just that ready to go, maybe you're ready to go. Take note: you keep hearing these same things over and over again. What are faculty looking for? Number one is always motivation; we keep hearing that over and over. We talked about putting in time; we talked about FURI. Scheduling four hours a day, four days a week really makes it better, as opposed to saying, I'm going to do FURI one hour a day, five days a week. You can't turn on the computer and get stuff done in an hour; it's just not going to happen. So: time management, motivation, and just persistence. Everybody that does research, like they said, is smart. There are people that are way, way smarter than me. 100%. We had PhD students with us who were head and shoulders smarter than us, in both systems and theory, and they just didn't put in the effort to really finish their projects all the way. And so they eventually graduated, but it took a long time. Or sometimes they didn't; sometimes they dropped out. I mean, it's fine, it happens. And then, all right, we're keeping you, because this is after class time. So, yeah, Adam mentioned pwndevils: you can start that with no research experience, no anything. CTF is a great way into security. And it also has its own rewards: world travel, global fame. Tuesdays and Thursdays, 4 p.m. Do you all do something with SoDA too? I'm the faculty advisor for SoDA. 
So SoDA is also a great org to get into, because they do different types of hackathons where you build stuff rather than break stuff, and a lot of companies come to give talks. For getting jobs, extracurriculars matter. Like Jan said, everybody gets A's on their course projects, right? You come and show me the compiler you did in 340; it's like, great, literally 240 other students did that, and literally across the country every CS student is doing that. So you need to do something else to show people that you're motivated. Create a GitHub page. Start coding cool stuff just for yourself, but put it online so people can see it. That's what I know I look for. CTF. Thank you, guys. One more time.