Okay, we're going to be recording today's webinar, and it will be available to everyone after we're done. But good morning. Welcome, everyone. Thank you for joining us. Today we're going to talk about protecting Drupal sites from spam, and we're going to do a case study of Drupal.org. The Drupal Association has recently partnered with Distil Networks to come up with a modern solution for handling spam on the home of the community, and we've been really pleased with the results. So we'd like to talk to you today about how we put that solution together, how we've worked with Distil, and how some of their techniques can be extended to protect other Drupal sites in the future. With that said, let me do some introductions. My name is Tim Lennon. I'm the Director of Engineering for the Drupal Association. And I'll let my co-host, Edward, introduce himself.

Hey, I'm Edward Roberts, and I head up product marketing for Distil Networks, so pleased to be here.

Awesome, and thank you for joining, Edward. Let's go ahead and get started. Drupal.org faces some significant challenges when it comes to spam, comparable to any high-value, high-traffic website that would be a significant target for spammers. In particular, we're a PageRank 9 site, which means that any content that winds up on Drupal.org immediately hits the top of search engine results, no matter what part of our site it's posted on, and that makes us a really valuable target for anyone promoting SEO spam or link spam, things of that nature. We're also wide open to user-generated content. We have open user registration; anyone can create an account, because we're a community website. All of our content is generated by our users, and this means we can't just lock everything down to keep people from coming in and posting spam. And furthermore, because we're a nonprofit and a volunteer-driven community, we don't have much capacity for manual moderation, so we can't rely on human power in the spam fight.

So what's the cost of spam, and why is fighting it so important for a site like Drupal.org and for sites in general? For us in particular, we have to carefully shepherd the resources of the community, and constantly fighting spam results in volunteer burnout. When volunteers are fighting spam instead of making code contributions, we're not making the best use of their time. Furthermore, the noise of spammy content can overwhelm the signal of important news about the project. It can also degrade our search presence: as most of you may know, if a search engine finds consistent and overwhelming amounts of spam on a website, it will start to reduce your page rank and your presence in search results, and that kind of degradation could lead to a decline in adoption for the project. Besides that, there are also some technical challenges and problems. A high volume of spam account registrations and spam content increases the database size we have to manage, and it pollutes our community metrics. We can't measure the growth or decline of our community if the majority of traffic is coming from spam accounts; all of that noise hides the real information about our community.

So I'd like to give a brief history of Drupal.org and how we've fought spam in the past. Drupal.org is actually the longest-running Drupal site on the web and one of the most highly trafficked, and so our solutions have evolved as Drupal has. We've actually been online for more than a decade.
So in the early days of spam fighting, a couple of techniques emerged that I think people are widely familiar with, and I'll talk about why these are good techniques but how they've fallen behind in the constant arms race against spam. Those early techniques were behavior analysis and content analysis. As any Drupal site owner knows, there are two modules that were the go-to for Drupal users looking to fight spam. The first was the Honeypot module, which uses a hidden-field technique: bots that come to your site to post spam fill in a field that humans never see, and any submission that fills that field is discarded. It also offers some rate-limiting features and things like that. The problem with this approach is that as bots have grown more sophisticated, and as the humans who program those bots intervene to target specific high-value websites like Drupal.org, they can go in, identify those hidden fields, and write the bots to work around them. Similarly, content analysis techniques have always been part of spam fighting: in the traditional sense, text analytics that look for spammy content, either outright rejecting it, accepting it if it seems clean, or marking it as unsure and throwing up a CAPTCHA. Mollom is the traditional solution that most Drupal sites are used to. It's a tool we still use, but we've found that it's been falling behind the sheer volume of spam, and that in particular it's not a good solution in a multilingual environment. So while we still use both Honeypot and Mollom today, they haven't really been sufficient to meet the needs of protecting our community's home. As I said before, these are good tools, but spam fighting is an arms race, and the people who generate spam have a direct financial incentive to keep finding new techniques to beat our protections.

So let's talk a little bit about what modern spam looks like. Modern spam comes in two forms, and they're closely related to each other. First you have humans behind proxies: real human users who use automation tools, including bots, as part of their toolkit to identify specific websites to target, create mass account registrations using a different proxy each time to avoid IP detection, and then use those accounts as the basis for their spam attacks. In addition to that, you also have bots that are growing more and more sophisticated and finding new ways to get around the traditional protections that were designed for the old-fashioned, unsophisticated bots. I'm going to talk about humans behind proxies, because those attacks are the kind that have most recently had an impact on Drupal.org, and we used Distil's solution in an interesting way to resolve that problem. After that, I'll hand it over to Edward to talk about bad bot detection and how to handle these more sophisticated kinds of bots.

So, human spammers: again, they're still using extensive automation tools. They have a human-driven process, but they use a variety of tools, like automatic proxy switching and automated browser-driving tools, to speed up the process. And that extensive use of proxies helps to obscure that the multiple bad actions they take are all coming from the same source. Against Drupal.org, the pattern of attack is relatively straightforward: these spammers are building inventory for black hat SEO and link spam, primarily.
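Before going further, to make the hidden-field idea mentioned above concrete, here is a minimal Python sketch of the technique. It is a simplified illustration rather than the Drupal Honeypot module's actual code; the field name and time threshold are invented for the example, and the time gate is just one way to express the module's rate-limiting idea.

    # Minimal sketch of the honeypot technique: a hidden field plus a time gate.
    # Not the Drupal Honeypot module itself; names and thresholds are illustrative.
    import time

    HONEYPOT_FIELD = "website_url"   # rendered but hidden via CSS; humans never fill it
    MIN_FILL_SECONDS = 5             # humans need at least a few seconds to fill a form

    def looks_like_spam_bot(form_data: dict, form_rendered_at: float) -> bool:
        """Return True if the submission should be discarded as probable bot spam."""
        # 1. Hidden-field check: any value here means an automated form-filler.
        if form_data.get(HONEYPOT_FIELD, "").strip():
            return True
        # 2. Time gate: the submission arrived faster than a human could type.
        if time.time() - form_rendered_at < MIN_FILL_SECONDS:
            return True
        return False

As Tim describes, the weakness is that a human operator can inspect the page, spot the hidden field, and script around it, which is why this alone stopped being enough.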
So, back to the humans behind proxies: what we see is that what turns out to be just a few users, a few real human beings, are creating dozens or hundreds of accounts each, using proxies to obscure their identity, and then either immediately trying to post spam with those accounts or holding them as sleeper inventory to activate when they're paid by some company to provide black hat link spam.

So what's the solution we put together? With Distil Networks' help, we started running Drupal.org's account registration process through the Distil Networks cloud. Distil uses their proprietary high-def fingerprint technique to identify the users hitting whatever page or part of your website is being protected, in our case the registration process. So we can collect this high-def fingerprint for these users, and then even when they attempt to change proxies and disguise their identity, we can detect that it's the same person and prevent them from creating those additional dozens to hundreds of accounts that would then be activated as spam attacks.

That sounds nice in principle, but how has it actually worked for us? How can we demonstrate the results? There are a couple of metrics for success. The first is the rate of unconfirmed users being created on Drupal.org. The concept of a confirmed user is unique to the Drupal community, but essentially it means that when a user account is first created, it has relatively limited permissions to take action within our community website until another human user looks at their content, recognizes that they're a real human being, and confirms them. So we have this pool of unconfirmed users, which represents all the new users on Drupal.org who haven't been vouched for yet, but also all of the accounts that may have been created by spammers or bots, the black hat and malicious accounts. If this technique is successful, we should see the rate at which unconfirmed users are created go down. Similarly, the rate at which we have to do manual moderation should also drop if the technique is working: if we're preventing these dozens or hundreds of accounts from being created, the rate at which we have to block accounts should go down.

So let's take a look at some of the results. As I said before, when we implemented these techniques we were looking for that rate of unconfirmed users, the pool of potential spammers, to drop. This graph shows the weekly rolling average of accounts created per day. Prior to our relationship with Distil, we were seeing approximately 300 accounts created per day on average on Drupal.org, spiking to almost 600 in October of 2015. That was the signal to us that we needed to get in gear and really do something about this problem. You can see on the chart that when we first enabled Distil protection, there was an immediate drop-off in account registrations; we actually went to less than half the number of daily account registrations that had been occurring. From there, we found we had to do a little bit of tuning, because we wanted to make sure some legitimate registrations weren't getting blocked. So we tuned things a little and leveled off at a little over 150 account registrations on average per day, which was just slightly above half of what we had before we implemented protection and roughly a quarter of what it peaked at when this attack really hit us hard.
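To recap that mechanism in code form, here is a hypothetical sketch of the pattern just described: counting registrations per device fingerprint rather than per IP address, so rotating proxies doesn't help the spammer. The threshold and in-memory storage are invented for illustration and are not Distil's implementation.

    # Hypothetical sketch: limit account creation per device fingerprint, not per IP.
    from collections import defaultdict

    MAX_ACCOUNTS_PER_FINGERPRINT = 3        # assumed policy value for the example

    registrations_by_fingerprint = defaultdict(int)

    def allow_registration(device_fingerprint: str) -> bool:
        """Refuse further sign-ups once one device has created too many accounts,
        no matter how many proxies or IP addresses it rotates through."""
        registrations_by_fingerprint[device_fingerprint] += 1
        return registrations_by_fingerprint[device_fingerprint] <= MAX_ACCOUNTS_PER_FINGERPRINT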
From there, we brought in some of our findings about using Distil Networks' high-def fingerprinting, along with some of the new technologies they've developed, and enabled a new blacklisting process based on the next generation of their high-def fingerprint, which let us blacklist the bad actor patterns in an even more robust way. And so now we're seeing an even more significant decline in recent months, below that 150-account mark and actually steadily decreasing on average, which is a huge win.

Similarly, I talked about how we wanted to measure this through the amount of manual moderation we had to do. Looking at a similar period of time, you can see the spike in how many accounts we had to block as a rolling average every day. You can also tell from this graph, which shows a peak of about 200 blocks while nearly 600 accounts were being created, that there were more accounts coming through, slipping through our lines and remaining as sleeper accounts because they hadn't been activated right away. That made it even more important, because that evidence showed us we were missing some of these accounts, and we needed to stop them before they started. If I zoom in on the more recent segment of data, which represents when we implemented some of these techniques, you can see that the decline in the amount of manual moderation that we and the community volunteers have had to do is really dramatic. We've gotten to the point where we're at a rolling average of five or fewer accounts being blocked, either by staff or community members, in a given week. So it's really been a big, big improvement for us. The results have been dramatic, and one of the advantages of using Distil as our partner is that our solution evolves as they evolve their own techniques, as they recognize new malicious actions and update their tools to better detect new spam attacks in this ongoing arms race. But in addition to blocking these bad human actors, there are still the bad bots to address, and I'm going to hand it over to Edward to talk about bad bot detection and mitigation.

Thanks, Tim. Yeah, I think it's really interesting that you call it an arms race, because that's clearly the battle companies are in with this sort of malicious behavior. You do one thing and they change their behavior; you do another thing to stop them and they change their behavior again. So that graph, where you're seeing incremental improvements, is exactly the kind of arms race that happens in this sphere. I think one of the things that's interesting about the Drupal example is that it's using our fingerprint in a different way than many other customers use us, because most use us to clean out bad bot or automated traffic that's doing something on their site that's not beneficial to the business. Here, you're using the fingerprint to say, okay, there are humans on here doing malicious things, and we need to identify them, and the technology of Distil, this high-def fingerprint, is enabling that. So it's interesting that this is sort of the yin to the yang of the typical problem we see. If I give a little example of that and then talk a bit more about the fingerprint, so that people understand what you're taking advantage of, hopefully it will illuminate the picture.
So, from our Bad Bot Report, an annual report we put out every year, and this year's came out a few weeks ago, we see that almost 40% of traffic is some form of bot and only about 61% is actually human. There are two types of bots. There are bots that you actually want on your site, and those would be your search engines, because they enable you to get found. You want them crawling your site and indexing things, so you allow them; that's about 18.8% of traffic, the good bots. But the other portion, which is almost 20%, are bad bots, and they're doing things you don't want on your site. They could be scraping your content or your prices, they could be running vulnerability scanners, they could be doing what they did at Drupal, dropping spam and using that to gain advantage. They could be committing online fraud, trying to access credit cards, running credit card numbers, attempting card cracking. They could be defrauding your ads with fake clicks, making you pay for visitors who aren't actually clicking because it's a bot doing it. So there's a whole plethora of things that bad bots can be doing, and it's really understanding what your site does and what your site holds that tells you the scale of the problem. For Drupal, understanding that they've got this subsection of humans creating multiple accounts, and being able to identify those using the high-def fingerprint, is the same process: you're looking for malicious behavior, in this case the pattern of multiple account creation, and then using the fingerprint to block and prevent it.

The interesting thing about the evolving bot landscape is what bad bots can do. Exactly as Tim was saying, they're rotating IP addresses; about three quarters of them can do that. So if you start blocking on IP addresses, they're just going to rotate to another one, and now you're playing whack-a-mole trying to block the next IP address, which is obviously not a game you want to be playing if you're protecting a site. It's simply not an effective technique for handling this problem. Almost 40 percent of them are trying to be evasive: they make themselves look like a real user behind a real browser, mimicking human behavior, whether that's moving the mouse or delaying and pausing between clicks, and half of them can actually load JavaScript. So they're doing the things a browser, and a normal user behind a browser, would be expected to do; they're trying to hide in plain sight, basically. And then about 60 percent of them are hiding in data centers, so if you start blocking whole data centers, you're going to block a lot of real users. Blocking IP addresses or entire data centers is obviously a non-starter, so you need something else that allows you to block, and that's again where the high-def fingerprint comes in.
And so I just wanted to explain a little bit about what's behind the advantage of the high-def fingerprint. A lot of fingerprinting tools have the IP address, they have some header and user agent information, and they might even cookie the browser. Those first three items on the list, I think a lot of solutions have some form of, and they claim it to be a fingerprint because they've gone beyond the IP address. At Distil, we've added another 200-plus attributes of data, which makes it far more granular and far more difficult to hide among those attributes. So if you change your IP address and all of those other 200-plus attributes stay the same, we can identify you based on the fingerprint, not just on a rotating list of IP addresses. We've also added a whole tamper-proofing layer, so people can't manipulate the fingerprint, altering it and stuffing it with different pieces of information or values. This is the fingerprint that Drupal is using: they're using it to identify the devices of the humans actually logging in and creating accounts, but it can also be used, obviously, to detect bad bots or automated threats hitting your site.

I wanted to explain this with a real-world example that's a bit different from the Drupal example. Recently we found a bot that was attacking gift cards, and we called it GiftGhostBot. We found it on almost a thousand domains that Distil protects, and what it was doing was hitting the gift card balance-check process on all of those sites, whether they were coffee merchants, department stores, or online retailers. If they had a gift card process, that process was getting hit with requests saying: does this gift card number exist, and if it does, give me the balance. That way the attackers knew whether a gift card number had a balance they could then go and use to purchase an item, or sell on the dark web. We saw this attack start on our systems at the end of February, and it peaked during March. Interestingly, we found there were three profiles of GiftGhostBot. It started off as a Linux profile with 30 user agents attached to it; we had another with 500 user agents on a Mac system, and another with 210 user agents on a Win32 system. We saw this across multiple sites. When we started blocking each of them, it was amazing to see that the bot operator changed and suddenly made their profiles iPhones or Androids and used another bunch of user agents. So the real cat-and-mouse game Tim was talking about, the arms race, is exactly what happened: these bots changed their appearance to look different, and the fingerprint was still identifying them. When you look at some of the numbers for a typical retailer during this attack, we were detecting 6,400 device fingerprints per hour on an average retailer, even more user agents than that, and up to 25,000 IP addresses detected per hour. Now, I know Tim likes to block IP addresses, but I don't think even he could handle 25,000 per hour.
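To give a feel for what a device fingerprint is, here is a toy Python illustration of the general idea of folding many client attributes into one identifier. Distil's actual 200-plus attributes and tamper-proofing layer are proprietary and not represented here; the attribute names below are made up for the example.

    # Toy fingerprint: hash a canonical serialization of observable client attributes.
    import hashlib
    import json

    def device_fingerprint(attributes: dict) -> str:
        """Combine whatever client attributes you can observe (headers, screen size,
        fonts, plugins, ...) into a single stable identifier."""
        canonical = json.dumps(attributes, sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    fp = device_fingerprint({
        "user_agent": "Mozilla/5.0 ...",
        "accept_language": "en-US,en;q=0.9",
        "screen": "1920x1080x24",
        "timezone_offset": -300,
        "fonts_hash": "3f2a...",
    })
    # If the client rotates its IP address but these attributes stay the same,
    # the fingerprint stays the same, so the actor remains recognizable.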
So again, you need something that handles this in an automated fashion, not something that relies on a human to intervene and start trying to manually block these things. I think this gives an example of what happens if a bot operator suddenly starts attacking you: you've now got a significant automated problem, and you need some automation to stop it.

Moving on, in the Bad Bot Report we try to summarize, to give you a little more education about what bots do and what they go after, the four key website attributes that are attractive to bad bots. First, if you have pricing information or proprietary content, whether that's profiles of people's work history, or product listings, reviews, and product information, or flight and hotel listings, information like that that somebody would want to use is attractive, so they'll want to go after it and maybe scrape it. Number two, if you have a sign-up, login, or registration process, which is the Drupal example here, you're attractive to bad bots. Number three, if you have web forms, i.e. somebody can submit something and maybe put comment spam in there, you are again attractive to bad bots. And the final one is, obviously, if you have a payment process, whether it's a gift card or processing a credit card, you are liable to be looked at by bots.

I just want to jump in on the web form example, because the Drupal Association recently built out a new web form for some of our industry page content, and it followed no existing pattern; it was not a form that any bot designer would have seen on Drupal.org before. And nevertheless, as soon as we deployed it, we had spam submissions in less than five minutes. It's kind of incredible how quickly you can be made a target.

It is. I mean, it's automated, right? They're looking for these kinds of things, and if they find them, they're going to go after them. The next slide shows you the likelihood of attack if you have those four attributes. If you have unique content or pricing, 97% of websites that have that will be attacked; we see scrapers on them. If you have a login page or a sign-up page, 96% of those sites will have bad bots on them. If you have a form that can be filled in, a third of all sites that have forms will have spam bots on them. And here's the one that's perhaps most interesting: if you have login pages, 90% of websites will have bad bots behind the login page. That means they've logged in, they've gotten through, and they're actually doing something behind the login page. So protecting the login page alone is not enough.

Now, going through a little more detail about each one: if you haven't looked at the automated threat space or bad bots before, I'd recommend looking up the OWASP Automated Threats handbook. It itemizes all of the threats currently being tracked by that organization and explains what they are and what they do. So for the first one, when they're scraping your pricing or something like that, the automated threat is called scraping, and we would deem this a very simple bot.
It doesn't have a significant amount of technical sophistication, but it's very widely distributed and seen on lots of sites. And what does that bot do? It scrapes for data; it scrapes prices for competitive intelligence. One of the biggest examples is that lots of travel sites, for instance, scrape prices against each other so they can compete. Then you've got aggregators, who pull together different sources of information in one spot; they're scraping your content and putting it on their site, maybe gaining benefit from it. So an amazing amount of material gets scraped, but it's not that technically hard to do.

On the next slide: bad bots love login pages. This is a more sophisticated bot, and the threats you see here, the OWASP threats from the automated threats handbook, are called credential cracking and credential stuffing. These are far more sophisticated because they're basically trying to get into the account. So let me show you a little bit about how credential stuffing works. When you hear about a public breach, a business has been breached and millions of credentials have been made available, then if you have a login page, you are liable to have that list run against your login or registration page to see if those accounts exist on it, because the attackers are working on the human premise that many of us use the same login and password across multiple sites. So even though you may change your password on the site that announced the breach, if you used that password on another site, a bot operator can now run it against that other site: yes, I see that account there, I've logged in, now I'm in, I can change things, I can log you out, I can commit some fraud with it, I can buy something. And that creates a whole bunch of downstream effects for the business itself, which now has to deal with unlocking a hijacked account or handling customer service issues around fraud on that account. So stopping people from getting into accounts using this credential stuffing technique is very, very important.

But again, as I said earlier, protecting your login page is not enough. When we look at the volume of bad bot login-related requests, only a quarter of them are actually login requests; three quarters are requests made once they're already logged in. So you have to do something behind, or beyond, the login page. For a website operator, or anybody running a business online, it's important to know that you must protect the pages behind the login, not just the login page itself. That's just the entry point, not where most of the damage happens.

So what happens behind the login page? What you have here is a more sophisticated attack, and the OWASP threats most applicable are carding, card cracking, and cashing out. This is where the fraud happens, behind the login. Maybe there's a payment process they can go and attack; it's a place where fraud can occur. And if you think about it, there are financial frauds, and there's spam that can happen behind the accounts. That was obviously the Drupal example: somebody was logging in as a real user and then putting spam behind that. So that's where the behavior happens.
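Going back to credential stuffing for a moment, here is a hypothetical sketch of one signal a defender can watch for: a single device fingerprint attempting logins for many different usernames in a short window. The window and threshold are invented for the example and are not taken from Distil's product.

    # Hypothetical credential-stuffing signal: many distinct usernames tried
    # from one fingerprint inside a sliding time window.
    import time
    from collections import defaultdict

    WINDOW_SECONDS = 600
    MAX_DISTINCT_USERNAMES = 5

    attempts = defaultdict(list)    # fingerprint -> [(timestamp, username), ...]

    def looks_like_credential_stuffing(fingerprint: str, username: str) -> bool:
        """Return True once this fingerprint starts to look like a stuffing tool."""
        now = time.time()
        attempts[fingerprint].append((now, username))
        # keep only the attempts inside the sliding window
        attempts[fingerprint] = [(t, u) for t, u in attempts[fingerprint]
                                 if now - t <= WINDOW_SECONDS]
        distinct_users = {u for _, u in attempts[fingerprint]}
        return len(distinct_users) > MAX_DISTINCT_USERNAMES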
Beyond that, there are phishing attacks. If you can take over an account, you can do a whole bunch of things with other users' accounts that are to your advantage if you're a bad bot operator. Now, the spamming one is obviously very dear to Tim's heart and dear to the Drupal community. We would call this a moderately sophisticated bot, because sometimes it's a bot doing it, but sometimes, as in this case for Drupal, it was humans opening the accounts and then pasting the spam in and using it that way. But it's an interesting one, and a third of sites deal with that problem.

Now, beyond those four, there are some automated threats that are collateral damage. The more of this kind of behavior you get, whether it's somebody attacking your login page, scraping your content, or trying to post spam content, the more bad bot traffic you get, and you can get spikes, and those spikes can lead to application denial of service. It's not anything that volumetric DDoS protection is going to prevent, because it doesn't look like that; it looks like normal requests, normal users doing things, but really it's a bad bot doing these things against your site. We see a lot of application denial of service happening on a third of the sites we sit in front of, just because of bad bots, not because of spikes in human traffic. This is just the spike from a bad bot doing something.

And the final bit here is the other collateral damage, which Tim mentioned earlier: all your web analytics are wrong, because these bots can load JavaScript, and they're sophisticated enough to skew all your analytics. So if you're using Google Analytics and making decisions based on where you place ads or where you should place bets and investments, you have to realize that maybe 20 to 25% of your traffic could be completely bogus, and your metrics and your conversion tracking are completely wrong. We have a customer, StubHub, and one of the things they used Distil for was to clean bad bot traffic out of their analytics so they could understand their conversion rates with more accuracy. Once we'd cleaned out the bad bots that were doing things on their site, they were able to make better decisions about where to invest and how to improve conversions.

One thing I wanted to do before we close out is give you a quick overview of what the Distil product does technically and how it gets used, so I'll take you through this chart. From the left, you see the human, bad bot, and good bot traffic coming in on the arrows into the blue box. The first thing the blue box does is identify the traffic and ask, using the high-def fingerprint: are you a real user behind a browser, or are you a bot? We run a bunch of tests and we fingerprint the device, and this is where the high-def fingerprint comes in, using the 200-plus attributes we pull from the device. Then, once we've got the fingerprint, we check the largest known violators database in the industry, which is a shared device fingerprint database built from all of our sites, so you get the wisdom of the crowd.
So the fingerprints from every other Distil deployment are shared in that known violators database, and if we've seen you already, we block you immediately; you never even get access to the web infrastructure, because we've already identified you. If we haven't already identified you and it's a new bot coming in, then we distil the traffic; that's the next layer there. We run a bunch of traps and challenges. Those traps can be geofencing, blocking certain IP ranges, rate limiting, injecting JavaScript tests, a whole bunch of things. Then there's also a set of challenges where we say: can you prove to me that you're a browser that actually works? We put puzzles in front of the browser and say, if you can solve this puzzle, you're a real browser; if you can't, we know you're not, because a real browser would be able to do this. There's a whole bunch of checks and challenges, there are honeypot links thrown in there, there are many challenges. And if you start to fail those traps and challenges, we've identified you as a bad bot, and you don't get any further. All of this is obviously happening before you even get to the actual infrastructure being protected: we're sitting in front of the site, in line, and doing this before any damage can be done.

Beyond that first request, we've got a machine learning module where we learn what typical good behavior and bad behavior look like around the network, around all the sites we protect. We have two machine learning models. One is the global network: what looks normal on the global network, learning from all the other deployments. The other is what we learn from your domain and your specific users: a typical user does these types of behavior and they get clustered together, and anybody outside that normal cluster of human users gets flagged as something possibly going wrong. The machine learning is constantly looking at the traffic and improving the detection methods. And then we have the ability to respond, using universal ACLs, throwing up CAPTCHAs, throwing up blocks, silently monitoring; there's a whole bunch of granular responses we can apply per page, per path. And all of this happens automatically once it's set. So when it's necessary to block those 21,000 IP addresses that Tim can't manage to block manually, the machine will do that for you. And the final piece is that we also have an analyst-managed service that will manage the whole program for you, really take care of the problem and fight this arms race battle for you, so you can rest easy doing the rest of your job rather than worrying about cleaning your traffic; Distil can take care of that, and the analyst-managed service does that.

So I just wanted to give you an overview of what the product does and how it works so you have some context. And finally, there's a lot more data in the Bad Bot Report we recently published, so please go to distilnetworks.com and download it. It's full of 40 pages of statistics about bad bots and what they're doing on websites.
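To tie that product walkthrough together, here is a purely illustrative Python sketch of that kind of layered decision flow: known-violator lookup first, then active browser challenges, then an anomaly score from the learned behavior models, then a graded response. It is not Distil's code; the function, thresholds, and response names are all invented for the example.

    # Illustrative layered decision flow; nothing here is Distil's actual implementation.
    KNOWN_VIOLATORS = set()     # shared fingerprint database ("wisdom of the crowd")

    def handle_request(fingerprint: str, passes_browser_challenges: bool,
                       anomaly_score: float) -> str:
        """Return one of: 'block', 'captcha', 'monitor', 'allow'."""
        if fingerprint in KNOWN_VIOLATORS:
            return "block"                        # already seen misbehaving elsewhere
        if not passes_browser_challenges:         # failed JavaScript puzzles / honeypot links
            KNOWN_VIOLATORS.add(fingerprint)
            return "block"
        if anomaly_score > 0.9:                   # far outside the learned human cluster
            return "captcha"
        if anomaly_score > 0.7:
            return "monitor"                      # log silently, don't disturb the user
        return "allow"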
That report really is, I think, an interesting read for people who want to learn about how websites can get abused by bad traffic.

Awesome, thanks very much, Edward. I know that on the Association side, we've found the tools you provide to be tremendously helpful. They've really helped protect the volunteer effort that had previously been so absorbed in spam fighting on Drupal.org, and let us turn that effort toward more productive contributions to the project, which has been really great. I wanted to invite everyone who's listening live or watching this recording to join us at DrupalCon Baltimore. Distil Networks is going to be giving a presentation at DrupalCon, from 3:45 to 4:15 p.m. Eastern time on Wednesday the 26th, so I hope you'll be able to join us there and learn more. Sessions are also recorded at DrupalCon, so that should be available as well if you're not able to attend the con directly. And Edward, can you tell us a little bit more about this Distil offer?

Yeah, for anybody who's watching this, we've got an offer of two free months of Distil just for Drupal community users, and that offer is valid until the end of the month. Just go to distilnetworks.com, fill out the contact-us page, and say this is where you saw us, that you heard about us on this webinar, and we'll be happy to honor the two free months of Distil for you. Hopefully that's an incentive to give you a chance to help clean your traffic, and we'd be delighted to chat with you further about the problems you're dealing with, because across the community we see a lot of different use cases where bots are doing these sorts of malicious things, and it's only the creativity of the human mind that sets the limits here. It's an amazing thing when you see the different things people are trying to do and exploit. So if you want to have a chat about it, please contact us and we'd be more than willing to help you out.

Wonderful. All right, at this time I'd like to open it up to any questions there may be. If you're joining us live on the call, there's a Q&A button in your control bar for the webinar where you can ask any questions you might have, so feel free to put them there. I have a couple to start, actually. The first one is: what if I'm afraid of blocking real users, and of false positives in general? How do you know whether the measures you put in place are going to keep real users away versus just the bots?

Yeah, I think that's the holy grail for many businesses: they're so worried about converting a user to a customer that any number of false positives is deemed an unacceptable negative user experience, so it's something they definitely don't want. When we're talking to companies about putting processes in place, first off, we obviously stress the importance of the high-def fingerprint and identifying the device. Two, we point out that there are monitor modes we can put things in, so you can start to see what would be detected before we actually act on it. It will show you: here's the list of people we would have blocked if you'd wanted it. Are they real customers? So you can learn on the job and say, okay, if that traffic had been cleaned out, would it have affected a real customer? And many times, companies find they don't have any false positives in there.
We also have one of the only tools in cybersecurity that actually monitors its own false positive rate, which is an interesting one, in the form of a CAPTCHA solve rate. If we're unsure and we serve a CAPTCHA to a user, say we've identified a spike in traffic and served out a bunch of CAPTCHAs, we look at how many of them were actually solved. If a CAPTCHA is solved, we've probably got a false positive involved, and then we look at the behavior behind that false positive. So we're actually looking at our own performance; it's not a black box. This is a report you can see yourselves: how many solved CAPTCHAs there were. I think there are many ways of looking at this problem, because in the world of retail and e-commerce, blocking real users is real money being stopped. We understand that it's a key concern, and that's why we put all these steps in place, including the ability to audit whether you have any solved CAPTCHAs.

Awesome, okay. I've got one more question here, I think, before we wrap up. What if I'm a site owner and I know I have at least one or more of those attributes you listed as things that make me a target for these bad bots? How do I know if I'm already getting hit by bots?

I guess the glib response would be that, from our data, we can pretty much guarantee you are being attacked and the bots are on your site. How successful they're being is another matter; you'd need to look at other things, like account lockouts. Are you seeing a lot of people who are locked out and having to call customer service to get access back into their accounts? Are you seeing increases in fraud on your site? Are you seeing increased amounts of spam, or customer service complaints about spam? If you start hearing about those things in the business, it's pretty much bots doing that against your site. So if you're starting to see those kinds of behaviors and those kinds of outcomes for customers, that's probably an indicator. Then obviously you can start to look at your logs and do some investigation, and ultimately a solution that does bot mitigation and automates cleaning your traffic will probably be something you're interested in once you've done those evaluations. Ultimately, this is an arms race and you need somebody who's going to help you defend against it, because we find that IT teams today don't have the time to sit there, work it out, and clean their traffic themselves. If they can have an expert do it for them, it frees them up to do other things that are more productive for the business, and they get the peace of mind that somebody's taking care of it for them.

Awesome, okay. Well, that's our last question. So I want to go ahead and sincerely thank Distil Networks for joining us in partnership to help protect Drupal.org, the home of our community. I want to thank Edward for joining me as a co-presenter on this presentation, and I want to thank everyone who joined us, whether you're viewing this live or after the fact, and I hope we'll see you at DrupalCon.
We'd love to have more conversations about our specific solution on Drupal.org, and I'm sure the Distil team would love to talk to you about your site and the particular challenges you might face. All right, thanks again everyone, and we'll see you on our next webinar.