Thank you very much. Okay. Hi everybody, thank you for coming to the conference. NorthSec is really nice and it's going on for the next few days. Today we're going to talk about Linux/Moose, an IoT botnet which we attacked, and how that attack unraveled an ego market. We wanted to introduce ourselves, but that's already been done, so we'll skip it. Today's agenda: we'll first go over a quick recap of Linux/Moose, because it was first studied by ESET and quite discussed in the media. Then we'll show you how we built the honeypot environment for the analysis, so you can replicate it if you want to, and how we conducted a man-in-the-middle attack on the botnet's traffic to access the raw data of what it was doing. We'll then present five characteristics revealed by our attack that characterize the clever scheme Linux/Moose is involved in. We just want to say quickly that this is joint research between GoSecure, ESET and the Université de Montréal; it's a good example of how we can put our skills together to better understand computer misuse. So let's start with the Linux/Moose recap. Linux/Moose is an IoT botnet: a worm that infects routers and IoT devices, actually anything that has an embedded Linux system with a BusyBox userland. What it does is scan for open telnet ports and try to brute-force telnet credentials, like most IoT botnets. Its payload is a proxy service: SOCKS v4/v5, HTTP and HTTPS. At first, when ESET was studying this, they thought it might be running a shady VPN service, because it was using its bots to proxy traffic. But in the end, what they found is that it was proxying traffic to social media sites. So they were like, what? Why? And that's what we're going to see later on: why they're doing that.
Just to continue the recap, the timeline: it was discovered by ESET in November 2014 and thoroughly reverse engineered in 2015. A paper was published in May 2015, and you can find all the gory details there about how they reverse engineered the infrastructure of the botnet. If you're lazy, because it's an extensive paper, you can have a look at the Virus Bulletin article or the Botconf presentation instead. The question we get the most is: why Linux/Moose? Here we have a hex dump of the binary file of Linux/Moose version one, and in there was a string, it's quite small, but the highlighted string reads "elan". "Élan" is "moose" in French. That's why we picked Moose, and being French Canadians, you know, we took the moose as a big symbol. We kind of like that little game of naming malware and plugging our own stuff. But once the Linux/Moose report hit the media, it went on Slashdot, and Slashdot devoted a huge part of its commentary to the name. Why Moose? When someone pointed out that it was because of the "elan" string, they said, oh, you guys got it wrong, I'm sure someone from the UK was nostalgic about the Lotus Elan; it's not about a moose. Then I met some fellows at ESET and they told me, no, no, you got it wrong, it's the Slovak music group called Elán, because in Slovakia that's a famous band that started in the sixties and is still active today; they were really convinced it was that. But we stuck with our moose, because we did the research in Canada and decided to go with it. Following ESET's report, we got some coverage: BBC, Dark Reading, Ars Technica, pretty cool coverage. But then, because of the media attention, the command and control server went dark.
The malware samples had IP addresses hardcoded, packed as integers inside the malicious binary files. So they couldn't just change something in their infrastructure and have new servers hosted elsewhere, and with the attention going up, they had to shut it down. But in September 2015 it started to reappear: we saw new samples, new infrastructure. Now you have the recap of Linux/Moose. And that second sample of Linux/Moose is the one we actually studied. What we want to say now is that this talk is about how we attacked the botnet, so you can replicate that with the honeypots and the man-in-the-middle attack, but it's also about understanding the botnet's operations and the market it evolves in. It mixes a social approach and a technical one to get a really global understanding of the threat. Alright, so: catching it. Since we knew it would change, and we saw that the IP addresses weren't in the code anymore, we needed to be active in the infection process in order to witness how it would be configured. So we built honeypots. How did we build them? We went with a software approach because it could be cloud deployed, so we could distribute it geographically, and it would be cheaper than monitoring hardware. Had we used hardware, we would have needed something to do the man-in-the-middle attack in front of the hardware, plus a way to reboot and manage it, with no possibility of geographic distribution. So we went software. We went low interaction because I didn't want to infect the world with our honeypots, you know; we did something really emulated, so we could do less monitoring with less chance of hurting anyone. But we needed to run the sample at some point, so we side-loaded it with an ARM virtual machine which we infected with the real samples.
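Since those early samples carried their C&C addresses packed as 32-bit integers, here is a minimal sketch of how such a value maps back to a dotted quad. The integer value and the network byte order are assumptions for illustration, not data recovered from an actual sample:

```python
import socket
import struct

def int_to_ip(value: int) -> str:
    """Unpack a 32-bit integer (assumed network byte order) into dotted-quad form."""
    return socket.inet_ntoa(struct.pack("!I", value))

# Made-up value for illustration, not an actual Moose C&C address:
print(int_to_ip(0xC0A80001))  # -> "192.168.0.1"
```

Searching a binary for plausible values like this is one way analysts spot hardcoded infrastructure.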
And we killed the malicious traffic that would have infected others, which is the telnet-based payload Masarah mentioned earlier. Here's the architecture of the honeypots. On the left-hand side you see the QEMU stack: we used Debian ARM images, deployed Linux/Moose on them, and ran this in QEMU. In front of that we put a man-in-the-middle proxy service to do the HTTPS attack, which I'll cover later. On the right-hand side you see that we used the Cowrie honeypot project, on which we mimicked a router userland with a BusyBox environment. This is exactly what the malware was expecting, so we mimicked what it wanted. We used iptables in front of that to redirect the appropriate flows to the appropriate places, and we sniffed all of the traffic with dumpcap in front of everything. As for the components: we picked Cowrie because it's in Python, which is quite easy to modify, so any time there was a problem that wouldn't satisfy Linux/Moose's requirements, we could patch it really quickly. It has a really nice emulated filesystem, easy to modify and customize: I basically unpacked a firmware, threw it into the tool that mimics the filesystem, it copies all the characteristics from the firmware, and then you deploy that in the honeypot. It was actively maintained, and it has machine-parsable logs in JSON format, which is good for mass analysis; when you have stuff like that running for months, you tend to have gigs of logs to look at. Unfortunately, there was no telnet support. So during the Christmas holidays I decided, let's write this telnet support. I wrote it; it took more than I anticipated, but I'm a researcher so I can afford spending that time. Still, it was maybe two weeks of work nonstop. Then I sent a pull request to the Cowrie project, and they merged it.
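Those JSON logs are what make months of honeypot data tractable. A minimal sketch of mining them for brute-force activity might look like this; the event and field names (`eventid`, `src_ip`) follow Cowrie's login events, but the sample lines below are made up for illustration:

```python
import json
from collections import Counter

# Made-up Cowrie-style log lines (one JSON object per line):
sample_log = [
    '{"eventid": "cowrie.login.failed", "src_ip": "203.0.113.5", "username": "root", "password": "admin"}',
    '{"eventid": "cowrie.login.failed", "src_ip": "203.0.113.5", "username": "root", "password": "1234"}',
    '{"eventid": "cowrie.login.success", "src_ip": "198.51.100.7", "username": "admin", "password": "admin"}',
]

def attempts_per_source(lines):
    """Count telnet login attempts (failed or successful) per source IP."""
    counts = Counter()
    for line in lines:
        event = json.loads(line)
        if event.get("eventid", "").startswith("cowrie.login."):
            counts[event["src_ip"]] += 1
    return counts

print(attempts_per_source(sample_log))
```

In practice you would iterate over the real `cowrie.json` files instead of an in-memory list.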
So since August last year, Cowrie is used to do all of this IoT tracking because of the telnet support we pushed a few months earlier, which is kind of cool: all the people tracking the Mirai botnet were able to do it because of our telnet support in Cowrie. I should have jumped on that Mirai bandwagon earlier, but I was busy with other projects, unfortunately. We deployed the honeypots worldwide, in several places, because from my malware experience I know there's a tendency for malware to be regional; for example, Brazilian malware is very specific, targeting Brazilian businesses. So I wanted to see if Linux/Moose had different payloads based on where it was deployed. It didn't happen. Once infected, the HTTPS traffic started flowing to the social networks. We had command and control server contacts every five minutes, and every 15 minutes for the other of the two types of payloads. Anyway, we were catching all of this. Before our attack, it was possible to tell that the traffic was social network bound based on the certificate names, the CN of the certificates, but that limited what we could study. We wanted access to the inner HTTP traffic. This would allow us to understand what they were really doing on the social networks: are they clicking on ads, or are they actively liking people and creating accounts? That's what we needed to find out, and in order to do that, we needed to attack it. So this is how the bots relay traffic without our attack. You see three layers, from the bottom to the top: TCP below, then a SOCKS proxy mounted end to end with the targeted social network, and on top of that HTTPS, encrypted and authenticated end to end; HTTPS does its job. Because of that, we knew we would need to break HTTPS, but we also knew that malware operators don't really care about it; all they want is for the traffic to reach the appropriate websites.
We kind of crossed our fingers and figured it might work. This is how we mounted it. We added a process, using iptables to redirect to the man-in-the-middle attack tool I mentioned earlier, mitmproxy. It terminated the HTTPS connections and recreated new ones with the targeted social network. This is why there are two shades of purple: since we terminated the connection, we had to send them our own certificate, crafted on the fly, and not the proper social network certificate, which means they would get certificate errors. But we still had to try it, you know, to see if it would work. We used mitmproxy in transparent proxy mode; it ran for months and it produces parsable logs. So instead of bettercap, which I tried and which failed us, for long-term tracking and long-term breaking of HTTPS I really recommend mitmproxy, for these reasons. So we mounted the attack, redirecting the proper flows to the proper destinations and avoiding trapping the C&C traffic: it was over HTTP, but the proxy would mess with it, so we needed to punch holes through our firewall for it. We crafted certificate authorities and we crossed our fingers. And with the opportunity to craft certificate authorities, we had an idea: why not do a little bit of social engineering? So we mimicked the certificate authorities of security appliances. I picked several of them: Websense, Fortinet FortiGuard, Check Point and so on. I mimicked vendor products, in other words, thinking that if a human ever saw the error, they might say, oh yeah, this Websense stuff, we get it all the time at universities, let's allow the traffic through. That was the idea behind the attack, but it succeeded so quickly that I'm sure they are just blindly ignoring errors, you know?
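For readers who want to replicate this, a minimal sketch of the logging side with mitmproxy's scripting mechanism could look like the following. mitmproxy loads a script with `-s` and calls a module-level `request()` hook for every decrypted flow; the record layout here is our own choice, and the script name is hypothetical:

```python
import json

# Sketch of a mitmproxy script: save as e.g. log_flows.py and run
#   mitmproxy --mode transparent -s log_flows.py
# mitmproxy calls request() for every decrypted HTTP request it proxies.

def request(flow):
    """Record the essentials of one decrypted request and return them."""
    record = {
        "host": flow.request.host,
        "method": flow.request.method,
        "path": flow.request.path,
    }
    print(json.dumps(record))  # one JSON object per line, easy to post-process
    return record
```

Because the hook only reads `flow.request.host`, `.method` and `.path`, any stub object exposing those attributes can exercise it without mitmproxy installed.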
So we had our success, and then we started having a lot of HTTP traffic that we needed to analyze and correlate, and this is where the fun begins. So where are we at now? At that point we had several infected hosts actively used by the operators to send traffic; we had the HTTPS traffic in plain text and the C&C traffic in HTTP. While we were doing that, I also gathered lots of information on the publicly available sellers' market, because we suspected it was doing social media fraud. So we started to look at the sellers' market and gather information to get an idea of the prices and of who was actually selling this. As I said earlier, our findings can be summarized into five large characteristics, which we'll go through now, and you're going to see why we think Linux/Moose is involved in a clever scheme that allows the operators to make money without attracting attention from law enforcement. That's pretty smart. Olivier, you're still on. Yes. The first characteristic is that it's stealthy. Why stealthy? There are several reasons. First, there was no x86 version of the malware. This might sound trivial, but if you've studied IoT or embedded Linux malware, you'll have noticed that a lot of those threats simply download the binaries for all architectures at the same time and then try to run all of them; it's just simpler for them to infect everyone like that. So most of these threats include an x86 version, and that helps analysts and antivirus companies, because all of their tooling and all the IDA licenses they purchase are tailored for x86. Carefully avoiding x86, for us, was an indicator of cleverness and of understanding what the industry is doing. Second, they do no ad fraud, DDoS or spam. We think that's because all those kinds of money you need to launder at some point.
You need money laundering techniques, or mules, otherwise you get caught. But the social media scheme has a very unique characteristic: it doesn't require money laundering, which makes it stealthy again from our perspective. Third, there is no persistence mechanism. We explain that to ourselves like this: imagine you infect a router with four megabytes of RAM, and you use it to proxy traffic and scan the whole internet. You can crash it or degrade its performance quite easily by mismanaging memory; notably, whenever they shipped an update, it was a memory management update. If you add a persistence mechanism to your malware, then that person's router will be shitty for weeks, and at some point she or he will throw it away, right? They got in via telnet and default credentials, so why put in a persistence mechanism in the first place? We think this is why they avoided persistence, which makes them kind of clever. Again, these are theories: maybe they just didn't care, or persistence was complicated because of the various environments they were targeting. But we think our theory is reasonable. Another interesting point is that they're constantly adapting. Through the years of tracking, we saw updates, and I'll cover a few of the important ones they made when they switched versions; it was always in reaction to antivirus companies' reports. First, they obfuscated the C&C IP address. Instead of being in the binary file itself, it was now passed as an argument, which means you needed to run a honeypot, or to actively monitor internet traffic, to see the IP address. They also XORed it with a static key, which means you needed someone on hand who could do ARM or MIPS reverse engineering.
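As a sketch of what that de-obfuscation step looks like: repeating-key XOR is its own inverse, so once the static key is recovered, decoding the argument is trivial. The key and the IP value below are made up, since recovering the real key is exactly the reverse-engineering work described here:

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"\x42\x13"                      # hypothetical static key
obfuscated = xor_bytes(b"203.0.113.5", key)
print(xor_bytes(obfuscated, key))      # round-trips to the original argument
```

The point of the scheme is not cryptographic strength but raising the bar: without the key, the argument is opaque on the wire.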
These two things combined make a clever upgrade, a significant improvement. They also changed the bot enrollment process. With the first version of the malware, we just had to run it, it would hook up to the C&C and quickly receive jobs. When we tried that with version two, the new sample we had, it didn't work: the C&C was not sending social media fraud traffic through us. So we wondered, how come? When we studied the bot infection protocol again, we realized there is a communication with the C&C on infection, which means that on the server side they can ensure there is a correlation between the infectee and the infector. So what we had to do was spawn bots and cross-infect our own instances; otherwise we would not see traffic going through them. They also updated their C&C protocol. The old protocol was binary data on port 80, which is kind of stupid: it's really easy to write a rule to catch that. It's like nonstandard++, right? So they updated their protocol to hide the same binary data, but encoded in an HTTP-like fashion. On the slides, I highlighted the Cookie and Set-Cookie headers; it uses those headers to carry the traffic. And the page served by the C&C server was Apache's default "It works!" page. So the first time I had the IP address of a C&C server, I did a wget on the page, saw the "It works!" page and said, oh, this is not the proper C&C server, there's something I'm missing, this is just a default server. Then I did some more work, came back to it two weeks later, and went: oh, there are cookies being set. And because I had done the reverse engineering to understand what was happening, I realized: oh my god, it was in my face all that time.
I had the right C&C server, but it didn't click, you know, that there were additional headers. Of course, my pentest friends would say that in Burp I would have seen it, that it would have looked different, but I didn't think of that; I was using curl and command-line tools. Anyway: now they encode the data in fake PHPSESSID cookies. We thought it would be Base64 or something safe for HTTP, but instead they did their own thing, what I call the bozo encoding: they just use the same binary values as their older protocol, shifted into the ASCII space by adding a constant to each byte, and then they simply undo it on the other end. You make the least change you need when you're lazy and you need stuff to work, right? So I call it the bozo encoding and we laugh, but still, someone unfamiliar with reverse engineering cannot figure it out quickly, so it's still a kind of obfuscation. So it's updated, yet it's still the same: it was still brute-forcing login attempts across the internet looking for weak credentials, it still uses the bots as proxies, and it leverages their clean IP addresses. The reason this whole thing exists is to infect DSL and cable lines and do social media interactions from these well-reputed, residential IP addresses. That is still the focus of this botnet, and it still targets social networks. With that, let me allow Masarah to continue with the rest of the analysis. Yes. So let's continue with the characteristics we were talking about. The third one is: no direct victims, because what it's doing doesn't create any direct victims. We keep saying it's doing social media fraud, so let's put that definition out there.
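A sketch of what such a shift-into-ASCII scheme could look like; the offset below is a made-up value, since the talk only describes adding a constant to each byte before stuffing the result into a cookie:

```python
OFFSET = 0x41  # hypothetical constant; the real one comes out of reverse engineering

def bozo_encode(data: bytes) -> bytes:
    """Shift each byte by a fixed offset (mod 256) so binary data rides in a cookie."""
    return bytes((b + OFFSET) & 0xFF for b in data)

def bozo_decode(data: bytes) -> bytes:
    """Undo the shift to recover the original binary protocol data."""
    return bytes((b - OFFSET) & 0xFF for b in data)

assert bozo_decode(bozo_encode(b"\x00\x01\xfe")) == b"\x00\x01\xfe"
```

Unlike Base64, this isn't a standard transform, so a casual observer sees only gibberish cookie values; but as the speakers note, it falls immediately once you see the constant in the disassembly.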
Social media fraud is the process of creating false endorsements on social network accounts in order to enhance a user's popularity and visibility. What the operators are doing is creating fake accounts, logging into those fake accounts, and liking and following other accounts held by people who want fake popularity. When we first got the traffic, we wanted to see which social networks were primarily targeted by the operators. We found that 86% of the traffic, consistently across all honeypots, was targeted at Instagram. So most of the traffic went to that social network. There was a bit of Twitter, but most of it was flagged really quickly, so those were blocked at entry. And we had a bit of Periscope, Flipagram and Kiwi. I don't know if you know these social networks; I'm not that old, but I didn't know about them either. They're quite new; just keep them in mind, because they become important later on. Then we looked mainly at Instagram, since 86% of the activity was there, and we looked at what it was doing. We found that 13% of the time it was making a like or a follow, and 87% of the time it was building up to those likes and follows. They developed scripts to mimic human interaction; we found about six or seven of them. Here's just a blob of text showing an example of such a script, aimed at mimicking human interaction, as I said. For instance: you log into your fake account and visit your own inbox; it's probably empty, right? Then you go and see your potential recipients for sending a message, then you go to your personal timeline, and so on.
You do all that to make sure you're not going to get flagged when you do follow the targeted account. And it works: by mimicking humans with those simple scripts, they succeeded about 89% of the time. So 89% of the time they created a like or a follow and it wasn't flagged by Instagram; 11% of the time it was flagged, as spam or signaled as a bot. That's interesting, because we don't know why Instagram spotted and blocked them in those 11% of cases but not in the other 89%. But even though they're that successful, one thing to know is that the follows don't last. They put a lot of effort into making a like or a follow, but no effort into making the fake account look legitimate, so that it won't be flagged later on by Instagram. Here's an example of a fake account. The picture is a building; it's usually a building, a plant or an animal, and the pixels are distorted, so we couldn't find where they took the pictures. But you can still see they have zero posts, one follower, and they're always following between 200 and 800 people. That makes it really easy for Instagram to say, within a few days or months: well, you're a bot, you're suspended. Over the whole analysis, we found about 1,700 fake accounts, and 72% of them were suspended by Instagram within six months. So they were flagged really quickly. What does that mean? It means that buyers are getting ripped off: they're paying for likes and follows, they get really popular really quickly, and then six months later they lose about 72% of them.
So they need to buy again and again and again if they want to keep their fake online fame and not look fraudulent, right? Let's talk about the buyers; that's the crunchy part. Who are they? Who's actually buying fake online fame? We had access to the accounts that the operators wanted the botnet to follow, and we defined a small methodology to flag potential buyers. First, it had to be a profile on which at least five follows were performed through the traffic; that way, if an account was only randomly followed, we wouldn't wrongly flag it as a buyer. Second, it had to be an active profile owned by an individual or a company, because there were some really weird accounts in there. Third, the profile had to have lots of followers and almost no reactions on the pictures, which would indicate that the followers are probably fake. Here's an example: it was a designer, and she was posting pictures; I say she, maybe it's he. She had about 150,000 followers and almost no reactions: like 26 people who actually liked the photo, and no comments. That's an indication that it's probably fake. We found roughly three large categories, which overlap somewhat but give you a good idea of who's buying the fraud. The first was business-related accounts. Here you have an electronic cigarette shop with 17,000 followers. These were mostly accounts where you could buy stuff, but only online; there was a link to the website, and it was a bit shady, so you never knew whether the product you bought would actually get shipped. It included watches, jewelry, clothes, shops, anything not too expensive to buy. We also found lots of small shops, like a pizza shop in Bali or a restaurant in Kuwait.
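The three criteria above can be sketched as a simple filter. The concrete thresholds for "lots of followers" and "almost no reactions" are our own placeholder numbers, and the profile dictionary layout is assumed for illustration; the talk gives only the qualitative criteria:

```python
def looks_like_buyer(profile: dict, follows_from_botnet: int) -> bool:
    """Flag a potential buyer per the methodology: at least five botnet follows,
    an active profile, and many followers but almost no engagement."""
    return (
        follows_from_botnet >= 5
        and profile["is_active"]
        and profile["followers"] >= 10_000        # placeholder for "lots of followers"
        and profile["avg_likes_per_post"] <= 50   # placeholder for "almost no reactions"
    )

# The designer example from the talk: ~150,000 followers, ~26 likes per photo.
designer = {"is_active": True, "followers": 150_000, "avg_likes_per_post": 26}
print(looks_like_buyer(designer, follows_from_botnet=7))  # -> True
```

The follower/engagement ratio is the heart of the heuristic: genuine accounts with six-figure followings rarely get only a couple dozen likes per post.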
So, small places, small restaurants that couldn't afford marketing campaigns and stuff like that, and just wanted to boost their page a bit. That we found too. Then we found business-related accounts that were centered around individuals, and that's probably a result of the fact that we're dealing with Instagram, where only pictures are posted. We found lots of people who were there for economic purposes, for business, for contracts, things like that, but in the end they were just posting pictures of themselves chilling out on a boat with girls and showing off their wealth. This guy is an example: he was a web designer advertising for work, and his web design website was linked, but you can see he's mostly showing his personal life instead; a million followers, yeah, and his dogs, which I didn't blur, sorry. So: lots of hair designers, web developers, TV presenters and the like. And finally, we had lots of aspiring celebrities. We did find the Kardashians, Bieber and those people in the traffic, but when you create an account on Instagram, it offers you the most popular accounts to follow right away. So we thought, well, maybe the fake accounts are following those people just to make themselves look legitimate, and we ruled them out, also because we were afraid of lawyers coming after us afterwards. But these ones are buying fake likes, and they're aspiring celebrities: mostly people trying to get into showbiz. You look at their Instagram account; here you've got the cowboy guy, he's got 30,000 followers, but then you look at his YouTube channel and he's got like 300 views, and you think: well, there's a balance missing here. So: lots of actors, models, singers, you know, showbiz people.
That was about 60% of the sample of people we looked at, falling into one of those categories. But what's special is that the remaining 40% of the sample was mostly common people. Those people were not there for economic purposes; they bought fake likes and followers for themselves. I had to blur lots of things here because it's quite naked. My favorite one is in the top left corner: you see the guy, he's got a million followers, and he's chilling in the ocean with a Mac, or sitting with a Kalashnikov and a girl, and you're like, why would you do that? But honestly, those people made up 40% of the sample, and that's when we came up with the whole idea of an ego market; I have to give the credit to Philip. And those people are not in our traffic; they're just examples of the people who encourage this, I'd say. I always find this guy funny, who's like, come on. So yeah. Let's go back to our characteristics. We said no direct victims. That's because the only people we could think of as indirect victims are those who get fooled by the false popularity. That would be a bar that hires a singer with lots of fake followers to come and sing, and no one shows up at the bar, right? Or an advertiser who pays a person for visibility and doesn't get it. But in the end, who will cry for advertisers, right? And we know the operators are making lots of money by selling to common people; that's something we have to assess as well. And they're probably not criminal partners in the underground forums dealing in credit cards and the like; in the end, they're just selling the service to ordinary people who buy it with legitimate credit cards.
It shows you how no money laundering is needed, just through the buyers we've assessed. The fourth characteristic is hiding in plain sight. That's it: they're not selling on underground forums; they're selling everything they do on the open web. As I said, we looked at the sellers' market to get an idea of the prices and to assess the profitability of the operators: how much they could actually make from fake likes and follows. We found lots of websites just like this one. They offer services on all the social networks: Twitter, Facebook, Google+ and the like. They make it look really legitimate; they call it social marketing. They have pages explaining how important it is to have popularity online and how it's a normal practice. They have customer feedback, they accept credit cards, PayPal and the rest, and they have customer support. So it looks really legitimate. If you look at the prices, it's mostly all in bundles, from 100 followers to millions. Here you have an example: 1,000 followers for $10 on Instagram. That's about the going price. While gathering the data, because we didn't know at first that the botnet was focusing on Instagram, we gathered information on all social networks, and since I spent lots of time doing that, we wanted to show you the prices. The most expensive social network for buying fake fame is LinkedIn, probably because it can get you a job, and also because beyond 500 connections LinkedIn stops showing the count, so maybe each connection is worth more since there's a limit. Then you have Google+, Facebook, Twitter, and the cheapest was Instagram, probably because, as we've seen, it's quite easy to do fake follows and likes on Instagram.
One important fact about Instagram is that you don't need a valid email address to create an account. I mean, that's just basic infosec stuff, come on. But we tried it: we created a fake account on Instagram with an email address like masarah@nordsec.com, and it just created the account; there's no email-validation loop. So yeah. Then we looked at the prices on Instagram and focused on that a bit. You can see here that 10,000 follows cost about $112 on average, but if you look at the standard deviation in the next column, it's a hundred bucks. That tells us the price varies a lot: you can buy 10,000 follows on Instagram for $2, and you can buy them for $200. In economics, we often say that if prices are really close to each other, the market is really competitive and sellers are just trying to gain market share. But if the price varies a lot, as here, it doesn't mean the market is uncompetitive, but rather that it's probably immature: sellers don't know the worth of the product they're selling, and buyers don't know exactly how much it's worth, so they're just shopping around, managing the risk that comes with buying in a fraudulent market. And because it's a fraudulent market, there are probably lots of scammers, and scammers don't price according to the cost of providing the service; they just set a price and hope you'll buy. So here's a little story: we had the sellers, we knew the service was sold on websites like that, but we couldn't link it to Linux/Moose, right?
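The maturity argument boils down to price dispersion, which is easy to quantify. A quick sketch with made-up per-seller quotes for 10,000 Instagram follows (chosen to land near the reported mean of roughly $112 and large standard deviation):

```python
import statistics

# Hypothetical per-seller quotes in USD for 10,000 Instagram follows:
prices = [2, 25, 60, 90, 112, 150, 200, 260]

mean = statistics.mean(prices)
std = statistics.stdev(prices)
cv = std / mean  # coefficient of variation: high relative dispersion hints at an immature market

print(f"mean=${mean:.0f} std=${std:.0f} cv={cv:.2f}")
```

In a mature, competitive market the coefficient of variation for a commodity service would be small; a value near 1 means buyers paying $2 and $200 for the same thing coexist, exactly the situation described here.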
And since we were actually assessing the traffic, we really wanted to find the website where the service was sold, to actually go from the malware sample all the way up to the last platform. So while I was gathering the data, I found a seller that sold, again, those weird social networks: Periscope, Kiwi and Flipagram services. It was a seller tied to one email address; he had about 12 websites. So he was selling social media fraud on Periscope, Flipagram and Kiwi, and on Instagram as well. And what was special about that seller is that he wasn't selling any Facebook, and Linux Moose doesn't go on Facebook; it just never connects there, it doesn't try anything. All the other websites that I gathered information on were selling Facebook, except for that one. So I went to see Olivier and I was like, it could be linked to Linux Moose, just because the services sold and the activity in the traffic are really similar, right? So we decided to go downstairs to a shady little grocery store, buy a prepaid credit card, then go back upstairs to the office and just buy fake likes from that website. The whole idea was that if we still had our honeypots running, and the fake account I created went through our honeypots, then we could actually say, well, that's related to Linux Moose; there's a link between them, right? So we went downstairs, bought some fake likes and followers, and I created a fake account saying I was a young photographer. I was really inspired at that moment, and I have to deal with that in every presentation now. Then we bought 6,000 followers and they were provided to me within the weekend, so my fake account became really popular really quickly. But then just as quickly, the fake followers were flagged by Instagram and I lost about 500.
And the whole idea was that we wanted to see it in our traffic, so we wanted as many followers as we could get. So I wrote to the email address: well, I bought 6,000 followers from your website last week, they were provided to me throughout the weekend, but since Monday I have lost 500 (at the time it was 500; it was more afterwards). Could you please provide them to me again? And I signed off with my account name and sort of built up my fake story, saying that I needed these followers for a contest in winter photography. He replied really quickly, saying sure, with a smiley. I'm like, yeah, nice; see, we call that customer relationship. And then he actually provided me 8,000 followers, so he gave me much more than I asked for, which is great. That lasted for a bit, a few days, then I lost them again. And as I said, we always wanted to ask for more, so we could raise our chances of seeing the account in the traffic. So I wrote again: hey, I lost followers a few days back, could you push me up again? And he said: well, followers are not forever, but I'm adding them when you ask. I was like, great, I'll keep asking and you'll just add me some, right? But that just didn't happen afterwards. I was like, hey, I lost so many followers, why can't I get them back? And he never replied; he stopped replying really quickly. And quickly, actually, within four months my account went from 8,000 followers to 1,100. And now, if you have a look, I have about 400. And I take those pictures and there are no reactions, so that makes me quite sad. But it did happen: the account did go through our honeypots. So we did find that that website was somehow related to the operators, and that was cool. And then the last characteristic is large potential profitability. And that's because we had real-world data; we had traffic in our honeypots.
We knew the prices. And at so many security conferences they tell you, you know, this is worth billions of dollars, and you're like, well, what's your data? You never know. And we felt like, well, we've got data, let's actually see how much it's worth, how much money they're making. And I was hoping they weren't making a lot, because I wanted to say, you know what, we've got real data and they're not making lots of money. But that didn't happen. So we asked ourselves: what is the potential revenue of Linux Moose? What we did is we took, on average, how many follows were performed by a honeypot per month. Then we took the average price of 10,000 follows, which is $112, so about 1.1 cents per follow. And then we asked how much a honeypot would actually bring in per month, and that ended up at $13 per month per honeypot, based on our own data, right? This is actually really good, because for all the honeypots we had around the world, we paid about $10 per server. So that would be a $3 profit per bot even if you ran the botnet on servers you legitimately paid for. Then we said, okay, we know it's $13 per honeypot; let's try to see how many bots the owner of Linux Moose would have. And although it's still a lot of money, we kept the estimate conservative: we didn't go over 50,000 bots, and they probably have much more than that. But if you look at it, at around 30,000 bots, if Linux Moose had that many and they monetized all their follows, they would make about $400,000. Per month, that is. And if, for example, we say they're not monetizing all their follows (we saw that they gave me more than I paid for, right, so they're not monetizing all of it), let's say it's half; then it's still $200,000 per month, which is quite a lot, right?
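Here is that back-of-the-envelope estimate as a short sketch. The follows-per-bot-per-month figure is not stated in the talk; it's back-derived so one bot lands on the roughly $13/month we measured, and the 30,000-bot count is likewise just an assumption under the conservative 50,000 cap:

```python
# Revenue sketch for Linux/Moose, using the average price from the seller survey.
PRICE_PER_10K = 112.0                      # avg price of 10,000 Instagram follows (USD)
price_per_follow = PRICE_PER_10K / 10_000  # = $0.0112 per follow

# Assumed: back-derived so one bot earns roughly the observed ~$13/month.
follows_per_bot_per_month = 1_160
revenue_per_bot = follows_per_bot_per_month * price_per_follow  # ~ $13

def monthly_revenue(bots: int, monetized: float = 1.0) -> float:
    """Estimated botnet revenue per month, scaled down for unsold follows."""
    return bots * revenue_per_bot * monetized

print(f"per bot: ${revenue_per_bot:.2f}/month")
print(f"30,000 bots, all follows sold:  ${monthly_revenue(30_000):,.0f}")   # ~ $390k
print(f"30,000 bots, half follows sold: ${monthly_revenue(30_000, 0.5):,.0f}")  # ~ $195k
```

With everything monetized this lands around the $400,000/month ballpark, and around $200,000/month if only half the follows are actually sold.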
But now I just want to say, we have some evidence that there's revenue sharing among the actors, because we found that the website selling the service was related to a reseller platform and then to the operators. So we're pretty sure they're sharing revenue together, and we're actually trying to assess how much the operators themselves are making from that botnet. One thing that's sure is that they're sitting on a pot of gold: there's lots of money to be made from fake fame online, which probably says something about our society today. Okay. So here are our findings, right? We went over the Linux Moose botnet and how it's involved in a clever scheme. We said it was stealthy: no x86 variants, it only runs on embedded systems. It's constantly adapting: from the first ESET report to the second one it changed its infrastructure and showed it's quite flexible. What it does doesn't create any direct victims, so no law enforcement attention, which is good for them. It's hiding in plain sight, selling to ordinary people on the clear web instead of underground forums, and it has a large potential profitability. So when we went through those five characteristics we were like, well, that's pretty much perfect, right? You've got a botnet running criminal activity, but everything around it is so peculiar that no one's ever going to come after you; it's some sort of perfect online crime, right? And we thought, well, that's a pretty good result, awesome, except that we had to go through the traffic for six months, which we called the shallowness of humanity, and way too much skin. And it was so perfect that during our whole investigation, and all the presentations we did, we could not raise any interest from law enforcement.
Like every time at a conference, we'd go, hey, we've got a botnet, we know the C&C servers, we know how it infects devices, would you like to take it down with us? And the police officer would be like, do you have any victims that we can actually, you know, start an investigation with? And we were like, well, not really. And then, just a really awkward moment. Then I tried emailing them and they just never replied. We tried talking to hosting providers as well, to take down the C&C, but we concluded that it's probably hosted on a bulletproof hosting service. So Olivier opened a ticket to tell them they were hosting a C&C server, and they just closed it without ever replying. And at the same time, yeah, we lost faith in humanity, until we found, in the traffic, dogs with more money than you, more popular than you and richer. So that's it. Thank you very much. If you have any questions.