OK. My name is Ben Tullis, and I've spent most of my time working in the SME sector. I've dealt with really my fair share of systems that have been compromised and, if I'm honest, also systems that are yet to be compromised. Some of the systems I've helped to build, and tried to make secure, and tried to teach the people who manage them to look after them. Some of the systems I have managed have been built by other people and are less secure and less visible. That's what we've got.
So, the plan for the talk: we're going to look at increasing network visibility, then host-based visibility, then good log file management; then the SIEM side, extracting from those log sources and log sinks the relevant security information, and how to present it. And then lastly, some focused distributions: if you are interested in network security monitoring or host security, you may want to look at these distributions, as they contain a lot of the tools that we will be talking about.
So, a brief definition: what is information warfare? It's just a model, but it is a helpful one, and it has four elements. The first element is the information resource itself, the thing that has value and that we are trying to protect. The second element is the players of the game: the offensive team and the defensive team. The third element is the offensive operation, so this is your offensive team launching an attack, and the aims here are to increase the value of the information resource to them and to decrease its value to the defensive team. We've got three classes of attack, and that is increased availability for the offensive team, and decreased integrity and decreased availability of information for the defensive team, and I've given there some examples of the type of attack that can take place. Finally, the last element in our defensive information warfare model is the defensive operation. So we've got the information resource, the players of the game, what they do and what we do.
They're designed to protect the information resources, and they must cost less than the losses that would occur if we were not carrying out these operations. We've got six classes of defensive operations, so all of the tools and techniques that we're going to be talking about today fit somewhere into these classes of prevention, deterrence, indications and warnings, detection, emergency preparedness and response. Those are the classes of defensive operation that we're talking about. To classify the threats, we're just basically going to be talking about random threats and targeted threats. There are many ways of actually doing the classification, but we're just really keeping it simple here. We've got random threats, and that would include malware distribution, so people can download infected files, or the ubiquitous USB sticks: anyone can pick them up, put them in the computer, and information warfare is on the cards. We've also got IP address scanning techniques, so anyone who has just put a server on the internet, a honeypot or anything else, will know that very quickly, if you leave an SSH port open to the world, people will find it and people will try to authenticate against it. That is IP address scanning. I make mention here of the Carna botnet, which was also briefly highlighted in Mika's presentation just prior to this. That is a very interesting topic, and it's worth a look, really, because it was a botnet of some 420,000 hosts created just by using default passwords. I think it was created in 2012; most of the hosts were routers, with admin/admin default accounts on network hardware and various other bits of hardware. It was used to create one of the most detailed maps of the internet so far.
It was used in a non-nefarious application, and in fact the botnet was deleted at the end of its useful life, with a little note saying how it was created, what it was used for, and how you might want to secure these hosts in the future. That's an aside, really, but well worth reading up about. Some more of the random threats: war driving and session hijacking. Anyone can be affected by these kinds of attacks. With focused threats, we hope that we're not targeted by these, but if someone's got a grudge against anyone in particular, and they really want your information resource, they want to get into your network, they want your passwords, whatever it is, there are many resources available to them. The techniques are traditional network penetration, so just trying to guess passwords. Or if you can capture some information, if it's encrypted information, you can then take that offline and use whatever resources you want to try to do cryptanalysis on it. CloudCracker is a good example of that now, so I hope that nobody out there is relying on a PPTP VPN, for instance, because really, these days, that kind of thing is not secure: capture the traffic, upload it to CloudCracker, pay a few dollars, and it'll be cracked with cloud resources. We've also got known exploits, so vulnerabilities; privilege escalation, so if you've got a trusted member of staff who has some access, can that member of staff elevate their privileges and gain access they shouldn't have?; and social engineering, can you just lend me your password? All of these kinds of things can be used to enable information warfare in a focused manner. We only really need to look at the things that have been coming out of the Black Hat conferences and DEF CON and various other places to know that actually the offensive team doesn't even need to be well resourced these days.
We've got some pictures here of a Pwnie Express device and a MiniPwner, so for less than 40 quid you can now build yourself a drop box like this, power it from anything, a battery; capture WPA traffic, capture any old network traffic; plug it into the physical network, hoover up enough information, take it offline and decrypt it. So it's frightening to think that, if we are the focus of an information warfare attack, it's easy for anyone to do. So, just briefly looking at the targets. Don't try to read all of this; it's just three relevant news articles from July here. We've got two from the home: one is how someone on the internet hacked into a baby monitor and was able to speak to a child through the internet from somewhere else in the world. The one on the right is the smart TVs, where you now wave at your TV; they are Linux-based hosts sitting on the network and they can be the target of information warfare. You wouldn't like to think that people can just turn on the camera on your TV or laptop and see what you're doing, but it's there. And the centre one is a news article that came out from GCHQ, just saying how prevalent information warfare is at the sort of nation scale and, I suppose, in industry. So there's an awful lot of information out there about the scale of it. So moving on, just the basics of defensive information warfare. I'm not going to go through all these. I'm not going to sit here and tell you how to make good passwords or how to manage them; you haven't come for that. I'm not going to tell you about how to lock doors and physical security. But it doesn't go without saying: if you don't have good documentation about your systems, if your team can't communicate, you need to do something about the documentation and the communication. If you're not backing up, all bets are off. So it needs to be said, but I'm not going to harp on about it.
And similarly, anyone out there developing an information system, building an information system, really needs to have a monitoring system in place. I'm not going to try and sell you one. Oh, I see, sorry about that. Any monitoring system, these are the basics. Record all the metrics. Monitor everything you can. Keep all of those, because they're all useful in analysis and response. And some people just say, oh well, you know, we've got a monitoring system, we use Nagios, we've got a bank of green lights. That's not really what it's about. You need to review it, keep it current. Update the configuration based on any kind of changes or incidents, security incidents, anything like that. And use multiple systems, use parallel systems. Where people just say, well, I like green lights and I want to know that everything's working, they're not using a network security monitoring system, they're not using a performance monitoring system. So you really need to get all of that information, and then you can present it. So the first thing we're going to be doing is increasing network visibility. That's the first of the core parts of what we're talking about with defensive information warfare. So it's about finding needles in haystacks. You know, there's so much information, there's so much data passing. An information warfare attack might be tiny, might be a few bytes, might be one transaction with one web server, and it can be buried in log files, it can be buried in network traffic. So we're going to be taking the network traffic and scanning it for known patterns, known attack patterns, rule-based matches. We're going to take a sidestep and look at wireless intrusion detection systems as well, because many, many things these days are built on Wi-Fi, and we need to know how to make sure, as far as possible, that we can build secure wireless systems as well. And we'll look at profiling the network traffic.
So, yeah, we might know that, you know, this network segment is a storage network, so I'm only expecting to see iSCSI traffic on this network. That's great. If you only see iSCSI traffic on that network, things are good. If you see anything else on that network, that literally should be ringing alarm bells. So this is sort of how to go about profiling network traffic, filtering out the legitimate traffic, seeing what else is there that shouldn't be, and implementing anomaly detection. So if you've got a network which is supposed to be fixed, the number of hosts is not changing, the IP addresses are not changing; if IP addresses change, you need to know about it. So that's network anomaly detection. So, again, this is kind of the basics for capturing Ethernet traffic. One of the most core tools is using switch mirror ports, also known as span ports or monitor ports on different hardware. I'm sure many of you will have used this kind of thing, so I'm just going to step through it basically. One port receives the traffic that's sent over one or more of the other ports. So here's a diagram of how it would be configured in a managed or smart switch: traffic going over ports two, three and four is also sent to port one. OK? And in terms of the network infrastructure, this is how it looks. You have a protective monitoring server, which in our case is running Linux. That passively receives all of the traffic sent over the other ports. That's about achieving network visibility. It doesn't send anything over there, it just receives that information. When you're building a redundant system, you would simply double up the switches and the monitoring ports, so you've got two capture interfaces because you've got two switches and redundant routes to the internet.
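As a concrete sketch of that mirror-port setup: on a Cisco-style managed switch, the configuration might look something like this. The interface numbers here are hypothetical, and the exact syntax varies between vendors and models.

```
! Mirror everything seen on ports Gi0/2-4 to Gi0/1, where the
! protective monitoring server is plugged in.
monitor session 1 source interface Gi0/2 - 4 both
monitor session 1 destination interface Gi0/1
```

On the Linux monitoring server itself, the capture interface can simply be brought up without an IP address and read passively, for example with tcpdump -i eth1 -n.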
So when you're dealing with larger infrastructure, sort of tree-based, with switches that cascade down in different levels, you've got various different options. The first option here is the higher end, so it's really only available in high-end switches, Ciscos, HPs, Alcatels and that kind of thing. It's actually not where most of my experience is because, as I say, I've worked in the SME sector and I've worked with low to mid-end equipment. I have tended to use option two, distributed monitoring. So you've got one central monitoring server, and then in each of your locations, and these might be branch offices or they might be separate buildings within a campus or whatever else, you have remote monitoring servers. They will scan the traffic and act on the traffic that they can see, and they'll report back to the central monitoring server all of the events, all of the alerts, statistical network traffic information and the system log files. OK, all right so far? Great. So one of the first tools that we're going to be talking about is SNORT, and this I'm sure many of you will be familiar with. I just want to talk about it briefly because this is one of the key network intrusion detection system tools out there. We've been talking about using it in passive mode, where a protective monitoring server listens and can see all of the network traffic on all these switch segments. It also offers an inline mode, which we'll come back to a little bit later, and that can do intrusion prevention: if it sees malicious or nefarious traffic, it will block it or reject it. And it searches the network traffic based on rules, so rule matches, and I've listed here how the various rules are updated. Sourcefire, who are the original authors, I think, or maintainers of SNORT, sell a rule set to commercial subscribers which is updated daily.
That rule set is made available to registered users free 30 days later, and there's also a community rule set and third-party rule sets. So which rules you choose to use, and where you get them from, that's up to you. But first of all, building the intrusion detection system and making sure that traffic is visible is a key step. So here are just some key pointers, really. We need to make sure that the network interfaces that are capturing the traffic don't do packet reassembly or offloading. So there are some settings here for generic receive offload (GRO) and large receive offload (LRO); that turns off some of the features on the network interface card which can otherwise give spurious results when you're doing packet sniffing and network capture. So you'll be updating the rules with Oinkmaster or PulledPork, or just manually if need be, if that's appropriate. And we have here some of the ways SNORT can be configured. It is often, or has been, considered difficult to configure SNORT and get it right: get the alerts to be correct, get the performance to be correct, get the actions that result from pattern matches to be correct. So there are many ways of doing this, and there will be lots of different ways in your environments. You might want syslog alerts that get incorporated later. You might want to output to unified2 files: if you're in a high-performance or large-capacity network, it is kind of recommended these days that you output to unified2 format and use another tool, typically Barnyard2, to take that output and then act on it, forwarding it to whatever other systems you have in place. It's also worth mentioning Suricata. This is another network intrusion detection system. Started in 2009, multithreaded by default, it was intended to tackle some of the threading and performance problems that SNORT has been dogged by. You can use it at the same time, and it can use the same SNORT rules. Choose whatever you wish, really.
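Pulling those pointers together, a minimal capture-host setup might look like the following sketch; eth1 as the mirror-port interface and the file paths are assumptions, not prescriptions.

```shell
# Disable NIC offloading so SNORT sees packets as they were on the wire,
# not reassembled by the card:
ethtool -K eth1 gro off lro off

# Run SNORT as a daemon in passive NIDS mode on the capture interface:
snort -c /etc/snort/snort.conf -i eth1 -D

# If snort.conf writes unified2 spool files, Barnyard2 reads them and
# forwards the events to syslog, a database, or whatever else you use:
barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f merged.log
```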
Another one that's worthy of mention is Bro. Bro is a very interesting platform: a passive network analysis platform. So it wants to see all of that traffic, and it's got a full scripting engine to say what should happen, what can happen, what you want to happen, based on what traffic it sees. So you can make it into a network intrusion detection system, you can make it into an intrusion prevention system, but you kind of have to script it, or use scripts that other people have created. One that I've mentioned at the end there, which I found out about recently and which is quite good: in network traffic it can see files. So if you've got HTTP traffic, for instance, it will see those files, isolate them, MD5-hash them, and it can then compare those MD5 hashes against online malware databases run by Team Cymru and others. So that's one example of how a network analysis platform can be scripted to behave in certain ways. Okay, so we're going to take a sidestep briefly to wireless intrusion detection systems and one of the key tools that's in use. Now, best practice at the moment in Wi-Fi is to implement WPA2 Enterprise. It's the most difficult to decrypt: with WPA2-PSK, if you capture the four-way handshake, you end up with a file that you can send off for offline cryptanalysis and break without too much difficulty, depending on what the passphrase is and so on. But WPA2 Enterprise is still difficult to crack. You can create that kind of system with hostapd, wpa_supplicant and the right SSL certificates and so on. The other key thing is 802.11w, whereby all of the wireless network management frames are themselves protected. If you don't have that best-practice system at the moment, it is vulnerable to attack, it's already vulnerable to attack, and we're going to consider briefly two kinds of attack and how we might detect and respond to them.
The first is a rogue access point: an access point that pretends to be a legitimate access point for a company. You turn on your laptop or your phone, whatever it is, and it connects to the rogue access point instead of a legitimate one, and that can then be used for man-in-the-middle attacks. And the second is a de-authentication attack; that can be used to capture a four-way handshake, or it can be used to cause a denial of service. So briefly, we're going to be looking at Kismet and how one can use Kismet as the tool to detect this kind of attack. Here's some information about it. I haven't got time to read it all out; I'm sure you can take it in. Some of you may have used Kismet before. It's also useful for wireless network reconnaissance and capture. So here's a diagram of our first example, which is detecting a rogue access point. We've got a legitimate wireless access client, and we've got a physical network with two legitimate access points and two Kismet drones. These are capturing network traffic, or sniffing for wireless network traffic, and we've got a rogue access point there. So in this case, the Kismet server connects to the Kismet drones and receives the information from them. Here's the configuration element which allows us to define the legitimate access points for our network. So here's the Kismet configuration file. The alert that we're interested in is APSPOOF, and I give some information there about the thresholds and the time limits for sending these alerts. There's our AP name, Tullix, and here is a list of two MAC addresses for the valid access points. If the Kismet drones see any other access point which is broadcasting that SSID, we know that we've got a rogue access point, and we generate an alert. The second type of wireless network attack is a de-authentication attack. So we've taken, hypothetically, some Wi-Fi cameras that are operating over WPA2 or something like that, but don't have the protected management frames.
An offensive player can mount a de-authentication attack: they continually send de-authentication packets, essentially knocking those cameras offline. Again, it's the same network configuration; the Kismet drones report back to the Kismet server. We are seeing a de-authentication flood, or broadcast disconnection packets. If a disconnection packet is sent to a broadcast address, that will take down all of the wireless access clients for that access point, so we need to know about it. We can't, in this case, prevent the systems being taken down, but we can detect and respond. We generate an alert, which goes into our security information and event management system. We'll come back to that. So that's it: we've done a brief overview of network intrusion detection systems and scanning network traffic, and we've looked at wireless intrusion detection systems. Moving on to network traffic profiling, I'm just going to highlight a couple of tools. These are not the only tools, and they may not be suitable for every environment, but: ntop and ntop next generation, ntopng. These are very useful because they give you a rich graphical interface describing, and allowing you to drill down into, the type of network traffic that you can see, the volume of network traffic, who's talking to whom on the network. There's a technique there, PF_RING, for increasing the number of threads and cores that can be used for this kind of capture and analysis. So here, again, don't try to look at the details of this, but it's just an example of the rich graphical interface of ntopng. It's just a few minutes of capture on my own home network, and it shows you application protocols, who's talking to whom, top hosts, top application protocols, various other things. You can drill down into that. And if you've incorporated NetFlow information, you can see many more sites than you can see from one host, generally.
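Stepping back to the Kismet example for a second: the kismet.conf entries described a moment ago might look roughly like this sketch. The MAC addresses are placeholders; Tullix is the SSID from the slide.

```
# Throttle APSPOOF alerts to 10 a minute, bursting at 1 a second:
alert=APSPOOF,10/min,1/sec
# Any beacon advertising SSID "Tullix" from a BSSID not in this list
# raises an APSPOOF alert from the drones:
apspoof=Tullix1:ssid="Tullix",validmacs="00:11:22:33:44:55,00:11:22:33:44:66"
```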
Here's the older version, ntop. It's still a very useful interface, with a great deal of information about all the hosts. You can really drill down into what's there and keep that information persistently for whatever period you define. Okay, I'm going to move on briefly to NetFlow. It is very useful, but especially in the SME space, in smaller companies and places without dedicated network resources, I've seldom seen it used, or used effectively. So I just want to give a brief overview. It is itself a network protocol and a format for describing network traffic. If you've got vast quantities of network traffic, unless you've got a data centre that's intended for storing that traffic, all we really want to keep is the statistical information. That's what we can keep. So NetFlow exporters emit UDP packets which describe the rest of the network traffic; they're sent from one or more exporters to one or more NetFlow collectors, and then they're stored for long-term analysis. Okay, so on the exporter side, we want to create NetFlow streams. It's already built into high-end routers. There are a lot of Cisco houses out there; they already have NetFlow-capable switches which will just emit UDP packets describing the rest of the traffic. Juniper and various other switch and hardware manufacturers will support it in hardware. But that's not necessary. If you've got your D-Link switches or your Netgear or whatever, all kinds of network switches, and you are using mirror ports in this way, you can then use open source exporters to look at that traffic, describe it, and send that statistical information off to a collector. So we've got some here: nProbe, fprobe, softflowd, DNRflow. They're all useful tools, and they can all do this in various different ways. It's also built into Open vSwitch, if you're using a virtual switch infrastructure or you've got cloud projects underway. Open vSwitch has got it.
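As a sketch of that open source exporter route, assuming eth1 is the mirror-port capture interface and a collector listening at 192.0.2.10:9995 (both placeholder values):

```shell
# softflowd watches the capture interface and emits NetFlow v9 records
# describing the traffic it sees:
softflowd -i eth1 -v 9 -n 192.0.2.10:9995

# On an Open vSwitch bridge (br0 here), the switch itself can export
# to the same collector:
ovs-vsctl -- set Bridge br0 netflow=@nf \
          -- --id=@nf create NetFlow targets=\"192.0.2.10:9995\"
```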
Other virtualisation providers and cloud providers will also have their own implementations. Okay, so that's the exporter side. On the collector side, I already mentioned that ntop and ntopng can be used as NetFlow collectors and can incorporate that statistical information. There's another one which I am familiar with, which I use and have a great deal of respect for, which is nfdump and NfSen. This is a set of tools that collect NetFlow streams and store them for long-term analysis; that's the nfdump tool set. And NfSen is a web interface on top of that which allows detailed data extraction. So here are some screenshots. Again, don't try to read the details, but these are screenshots of NfSen allowing you to see a great deal of information about the number of packets, the number of TCP connections, the number of ICMP flows. You can apply filters, so you can say, well, I only want to know about the web traffic, or about traffic entering the network, or traffic leaving the network, or traffic within the network. So you can dynamically apply filters and profiles to allow you to look at that information. And then it also facilitates creating nfdump filter-format commands so that you can extract the detailed information out. It's an extremely useful tool. Okay, finally on the network visibility side, we've got anomaly detection tools. Now again, these may be useful for your networks, to different extents for different networks. arpwatch and ARPalert: their function is to look at a network segment and build a relationship between the IP addresses and the MAC addresses that are on the network, and if something changes, send an alert. So if you know that you've got a fixed or highly secure network, and someone goes and plugs something into a port and gets past the NAC system that's in place, we want an alert sent about that. PRADS, the passive real-time asset detection system, again listens to network traffic.
It builds a database of hosts and services, both in terms of what's being requested and what's being served. You can then query that at any time and use it to build an inventory. You can actually use that also to inform your configuration of SNORT, by telling SNORT: my network looks like this, these are the servers that I have, these are the clients that I have, this is what you should be worried about. And lastly, PBNJ. I think it might stand for peanut butter and jelly, I'm not sure. That is an active network detection system. It uses Nmap to enquire what's on the network, gets that information back and puts it into a database, and then at any time later on you can re-run it and see what has changed on the network. So, different techniques, different tools; they may be useful for you. We're now going on to some host-based visibility tools. I'm running short on time, so I'm going to have to press on. First of all, I'm just going to go through some useful tools that might be useful to anyone running Linux systems, then look at host-based intrusion detection systems. Sorry I'm having to go at this pace; I just tried to put too much stuff into it. A useful tool is etckeeper. Just keep the configuration on your Linux hosts in version control. It doesn't necessarily prevent people from changing it, but if someone is trying to change it, it makes it more difficult for them to hide what they're doing, and it makes it easier for you, in response, to analyse those changes and to make them visible. So I've shown a screenshot here of hg serve. I use Mercurial for configuration management of /etc directories. It means I can fire up a web browser and look through what's changed about my configuration. Okay, atop, another useful tool. There's another top, but this one is very useful because it periodically records the state of the host.
So very often in an operations team, you'd have a request come in saying, something happened last night on our cluster, or a query from the sales team: we noticed a funny drop in sales, can you tell us what happened at half past four in the morning? Very often you would have to say, we just don't have that information. We can't tell you what was happening on your system at half past four, because you haven't kept any historical record of what the process list was and what was actually happening. atop is useful because, by default, it takes a ten-minute snapshot of what's going on. You can step back through the time periods and say, oh, I see, this process ran here, it consumed this much memory, that looks a bit suspicious. auditd: this is really the key tool that people building secure systems ought to be looking at for host auditing. It is widely used, but it's also often ignored. You can create rules to say that any particular type of file should be audited: access, deletion, rename, move, all the rest of it. It works with the kernel audit subsystem. You've got the daemon there, and then the end-user tools that we would use for reporting, aureport and ausearch; they allow you to look back through the audit system and pull back reports. We've also got audispd, which is a dispatcher, so that you can integrate the audit system into other parts of your security system: you can trigger alerts, you can make things appear on analysis consoles, based on real-time actions of the audit system. Again, don't try to read the small text on here, but Lynis is a very useful scripted tool. It's in active development, and it's a tool that you can use to audit your systems for security. It will come back to you with hardening suggestions. I think this is actually one to keep an eye on. It's a very useful tool, and you can use it just to see, you know, has anything changed since the last time I ran this, and what is my host actually doing?
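To make the auditd piece concrete, a couple of watch rules and report queries of the kind described might look like this sketch; the watched paths and key names are examples, not recommendations.

```shell
# Audit writes and attribute changes to /etc/passwd, tagged with a
# searchable key:
auditctl -w /etc/passwd -p wa -k identity
# Audit any access to the SSH server configuration:
auditctl -w /etc/ssh/ -p rwxa -k sshd-config

# Later, pull the events back out of the audit log:
ausearch -k identity --start today
aureport -f --summary
```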
Moving on: host-based intrusion detection systems. There are many, and there are many different techniques that you can use, but one of the key ones, required by things like PCI DSS and GPG 13, is file integrity monitoring. You want to be able to say: tell me that nothing has changed about this set of files. OSSEC is a multi-platform, distributed, host-based intrusion detection system. You've got a server, and you've got OSSEC agents running on various platforms and systems, collecting log files from other sources and checking them for particular pattern matches. You can generate email or syslog alerts, and you can output the alerts into a database for collation and inclusion in anything else. OSSEC is a very useful tool. On the file integrity monitoring side there's Samhain; I'm not quite sure how that's supposed to be pronounced, it's an Irish word. That can run as a standalone file integrity monitor, so you install it on a server, build a database of files that says this is how they should be, and it will alert you to any changes. But you can also use it in a distributed mode, so if you've got a cluster, or a cloud-based system, you can run Samhain in a client-server model and use the Beltane web front-end to administer any changes. If you know that a new version has been pushed out, you can use the web front-end to check that everything is as it should be: you're expecting to see certain changes, you commit those changes so that becomes the new known-good state, and it alerts you if anything changes again. Tripwire is quite an old tool now, but it's lightweight and still very useful. You use it for file integrity monitoring: you build a database of files and the checksums they should have, you place the database on read-only media or a read-only network share, and again it will alert you if anything about those files changes.
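The checksum-database technique behind these tools can be sketched in a few lines of shell. This shows only the core idea, with sha256sum standing in for the richer attribute checks and signed databases that Tripwire, AIDE and Samhain provide:

```shell
# Minimal file-integrity sketch: baseline, then verify.
set -e
dir=$(mktemp -d); cd "$dir"
mkdir files
echo "server config v1" > files/app.conf
# 1. Build the baseline database; in real life, store this on
#    read-only media or a read-only network share.
find files -type f -exec sha256sum {} + > baseline.db
# 2. Later, verify: sha256sum -c reports any file whose hash changed.
sha256sum -c baseline.db           # prints "files/app.conf: OK"
echo "tampered" >> files/app.conf
sha256sum -c baseline.db || true   # now reports "files/app.conf: FAILED"
```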
It's interesting because its own configuration, policies and databases are digitally signed, so it's difficult for an attacker to update them unless they have the passphrase; just another technique. Similarly, there are other tools for file integrity monitoring: AIDE, FCheck and Stealth. They all have their own key features, and they may be useful to you in certain environments when you're building a system. Okay, now we move on to good log file management, and I'm going to break it down into syslog, alerting and application logs. I'm not going to talk much about application logs, because that's really a matter for application designers; I'll touch on it here and there. But syslog: in the SME space a lot of servers, a lot of systems, just don't centralise their logs. I've been asked to look at household-name websites (oh, something happened, can you go and have a look?), and you're expected, in an operations team, to log on to 20 servers and use grep and awk and all the rest of it to extract information about what happened, because the client, the owners of the website, haven't thought that centralising those logs is a good idea. It seems basic, bread-and-butter stuff, but get your logs in one place. If you're looking after Windows systems or others, there are ways to get Windows event logs into syslog too. Generally we'll be using rsyslog or syslog-ng now, and we can use TCP or RELP to make sure those connections are secure and reliable. If you really want to get a message through, RELP provides a back-channel: I'm sending you a message; I've received it, thanks very much; or, I didn't receive it, please send it again. Here's another technique that is quite new and is a requirement of GPG 13 (that's GCHQ's good practice guide for protective monitoring systems, which a lot of public bodies are having to adhere to at the moment): cryptographic log signing.
So you get your logs in one place and you've got proof of custody: you know it was sent by this machine, you know it was received by this machine, and this machine signs it. It creates a signed file, so if someone comes in and tries to modify that log file, you can use rsgtutil or similar to verify whether the log file has changed or not. That is a feature of newer rsyslog versions and a useful technique to have. So, you've got all the syslog data in one place; now find some way of analysing and searching it. There are free and open source tools to do this. One example is Adiscon LogAnalyzer, from the company that does a lot of the rsyslog development. It's a simple PHP-based web front-end to syslog data, and you can search it by host, by priority, by all sorts of fields; it just helps to extract the information you want. There are loads of alternatives: Elasticsearch, Graylog2, and ELSA is another interesting one. Some of these may already be in use for your application logs; a lot of people will say, right, we put our application logs in here for analysis, and there's no reason why you can't put syslog data in there as well. So all sorts of tools; I'm not trying to push you towards one. On the application logging side I'll just mention one thing, which is that a lot of clients, a lot of business owners, think that because they've got Google Analytics running on their website, that's enough for them. They can see all this rich information about where people are coming from, what they're doing, what the spikes are, all the traffic. But actually it is incomplete, and getting that message across to the owners of websites and web applications is quite important. You really do need to be logging and analysing the log files themselves, because your distributed denial of service attacker, your anonymous user, isn't going to be submitting their information to Google Analytics.
So you're going to have people thinking, well, the website looks normal in Google Analytics, but the web servers are absolutely falling over themselves; what is going wrong? They don't see all of the errors that never get returned, and they don't see all of the distributed users who are not submitting that information to Google Analytics. One tool that can help is Piwik, a nice open source tool which has a log analytics mode. You can feed in web server logs and get out the same kind of rich graphical information that Google Analytics gives you, and you can use it in conjunction with other systems. So, briefly moving on to active response: what can we get our systems to do when they're attacked? I mentioned that we can run Snort in inline mode, where if it sees malicious traffic it will drop or reject it. You can also use SnortSam, a plug-in module for Snort which will enact changes on one or more firewalls. You have SnortSam agents running on or near the firewall machines, and SnortSam itself has plug-ins for lots of different firewalls. So if Snort sees that a particular host is sending malicious traffic, it tells SnortSam to go and block that host on the firewall. You have to be careful when designing any kind of active response or intrusion prevention system that it cannot itself be subverted. If someone within a company wants to do something malicious, and the company is funnelling out through a network address translation device, they can make it look as if the whole company is doing something malicious; you could inadvertently firewall away the whole company and cause a denial of service. So you need to be very careful with intrusion prevention systems, but they are useful. Secondly on here, Fail2ban. Another great tool, sometimes overlooked. You can get it to scan through log files, again with regular expression pattern matching.
When it sees certain things in the log files, such as authentication failures, it will modify the firewall rules to block the offender. And if you've got repeat offenders who keep coming back, you can get it to monitor its own log files: if the same offender gets blocked three times in 24 hours, block them for a year. Okay, so we've got all these log files in one place, and we've got intrusion prevention systems. Another way of getting security information out of the combined log files is a tool called Sagan, which applies Snort-like pattern matching to log files, generates alerts, and integrates with security information and event management systems. Here I've given an example which will pick out of a log file the output of the wireless intrusion detection system that we saw earlier. So here's our broadcast disconnection and de-authentication flood message that's appeared on our syslog server; Sagan can find that in the log file and then raise an alert about it as an intrusion detection response. Sagan can itself work with SnortSam to modify firewalls based on log file matches. So, a very flexible tool, open source and under active development. We're now moving on to web consoles: if you're going to have something on a video monitor on the wall, what is a good thing to have for a top-down view of your network? One such tool is Snorby. This is a nice web-based tool, a Ruby on Rails application, for collating intrusion detection system alerts with Sagan pattern matches, and you can see here the different severities of match, which come from the various rules that you define. From this interface, if you've got a network analyst console, you can integrate with OpenFPC for full packet capture. So we might be sitting in a central control room and want to initiate full packet capture on a remote interface, for a day or an hour, some kind of capture; OpenFPC is one way of doing that.
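Going back to Fail2ban for a moment: the repeat-offender setup described above is usually done with the stock "recidive" jail, which watches Fail2ban's own log. A hedged jail.local sketch (jail names, times and paths are illustrative and vary by distribution):

```shell
# /etc/fail2ban/jail.local fragment.
[sshd]
enabled  = true
maxretry = 5
# one hour for a first offence
bantime  = 3600

[recidive]
enabled  = true
# Fail2ban watching its own log for repeat offenders
logpath  = /var/log/fail2ban.log
# banned 3 times within 24 hours...
findtime = 86400
maxretry = 3
# ...gets blocked for a year
bantime  = 31536000
```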
There are other consoles available, including the improbably named Sguil and Squert: Sguil is a client-server application, Squert is a web interface to it, and BASE is a slightly older web interface to Snort databases. All very useful. Last couple of slides now. OSSIM: we're now moving on to focused distributions. If you're not doing any network security monitoring, or you know that you want to get started with it in some way and it doesn't fit in with your existing distribution methods, you can look at these focused distributions. OSSIM, by AlienVault, is a very interesting product, and you can see here that it includes many of the open source components we've mentioned today. Be aware that it is an open-core product, so they will try to upsell you to their USM product. For certain industries, certain environments, that might be a good product, but be aware that you'll click on buttons and it says upgrade. They have their own custom web framework and custom correlation engine, so it's a useful tool. The next one is Security Onion, and I think this is one of the most interesting projects in terms of a focused distribution. It is an Ubuntu-based distribution, and it pulls together a great number of the tools that we've spoken about today and allows you, quickly and easily, to build a distributed network security monitoring platform. It uses ELSA, which I touched on briefly, an enterprise log search and archive system, to collate all of your syslog and application log data, letting you query it and incorporate it alongside the intrusion detection system databases. And there are other tools we haven't even spoken about: Xplico, NetworkMiner. Xplico allows you to reconstruct files seen in passing network traffic, so it can fish out video files, or emails, or instant messages, or all the rest of it. CapMe is the tool used in Security Onion for pulling up full packet captures.
Argus is a tool that gives you more statistical information about the kind of network traffic you're seeing, so it sits alongside NetFlow-like data. You can see here that it lets you do very interesting things with other tools, like geo-mapping the network traffic; here is an output to Google Earth. Those are just representative screenshots. In summary: when you're conducting defensive information warfare and building systems, we need maximum network visibility, maximum host visibility and rigorous log file management. Get all the logs going where they should be going, get them scanned for what they should be scanned for, and get the right rule and pattern matches in place; that, with the tools, will allow rapid analysis and response. And you need to make sure that people are actually able to respond. Okay, thanks very much. That's the end of the presentation, if anyone's got any questions. Do you mean which of the distributions I would choose, or which intrusion detection system? It really depends on the environment, and I wouldn't necessarily say that only one is right. Essentially, Suricata and Snort are engines; the key thing they're doing is pattern matching based on the rules. So the rule sets that you have, whether you have the VRT rules from Sourcefire, whether you're a subscriber to those, whether you have the Emerging Threats rule set, whether you create your own custom rules, those will be used by whichever intrusion detection engine you've got; or you may have Bro scripts to pick the relevant data out of the traffic. That is the key thing. As to which engine I would use for a particular task, it would depend on the performance requirements, how many sniffing interfaces we're using, and what we want to do with the alerts: whether we're doing unified2 output, whether we're writing to an SQL database. I don't know.
Probably Suricata's native multi-threading would give it the edge over Snort in terms of out-of-the-box performance, because you wouldn't need to set up the PF_RING side of things; but then you may already be setting up PF_RING for ntop. So you can choose whatever you wish, really. Any other questions? Ah, a gentleman there. That's right. Generally, the technique when signing log messages is not to sign individual messages, because the signature will usually end up larger than the log message itself, and there's the performance overhead of signing each one. So log messages are often signed in a block, and that's the technique taken by the AlienVault USM product and by rsyslog, I believe. They sign a block of log messages to say that the logs received between this timestamp and this timestamp appear here in the log file, and we can tell that that block is unchanged because its signature is shown here. So yes, that is certainly a key part of the log signing requirement and, to be honest, within the open source tools I've come across it's quite a new component. With AlienVault USM, when they're talking about the logger and the enterprise side, they will talk about large-scale SAN storage and high-speed disk access for the logs, because they're expecting vast numbers of them; in fact, you can send every single permit and deny from your firewall rules and your switches, and generate terabytes of logs. So yes, that is something you will need to assess in your environment: what is the events-per-second rate of logs coming in, plus the other messages from the passive scanners and so on? Log signing and its performance overhead are worth considering at an early stage. Okay, and your question? That's right. Yes, yes, that is very interesting.
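The block-signing approach described in that answer can be sketched as follows, with a plain hash standing in for the real signature (actual implementations sign the hash with a key and chain the blocks together):

```shell
# Hash a whole block of log messages at once, so tampering with any
# line in the block is detectable at low per-message cost.
set -e
dir=$(mktemp -d); cd "$dir"
printf '%s\n' \
  "10:00:01 sshd: accepted password for alice" \
  "10:04:59 sshd: failed password for root" > block1.log
sha256sum block1.log | awk '{print $1}' > block1.sig  # stand-in for a signature
# Verification: recompute and compare; any edit to any line fails it.
[ "$(sha256sum block1.log | awk '{print $1}')" = "$(cat block1.sig)" ] \
  && echo "block intact"
```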
A truly passive network capture system will not be easy to detect; it's nigh on impossible to detect. If you've got a network interface or a switch port which never transmits packets, that is very difficult to detect. It was mentioned on the slide, but I didn't really have time to cover it: switch mirror ports are one way of doing it, and the other way is network taps. A network tap is a device that can only listen to a network and cannot modify the traffic en route. Network taps at 10 or 100 megabits are cheap and cheerful; when you get up to gigabit network taps, they become expensive; and when you get up to what the NSA and GCHQ allegedly use, fibre optic taps, they are very expensive. But they are truly passive and impossible to detect. There are some techniques, which I wouldn't be able to recall off the top of my head, for detecting a host that just has tcpdump running on a network. But yes, a very interesting point. Okay. Oh, yeah. I think the thing to get right, in a smaller environment with that requirement of keeping the workload down, is responsibility: making sure that it is someone's continual task to be in charge of that network monitoring. So yes, when you first put it in place there might be a high volume of alerts, and you will need to tweak which rules are applicable to your systems. It's not a vast amount of work, because you just look at the requirement and, if you're starting from a low level, you say: okay, well, I know I haven't got any Windows systems in this network segment, so I can take all of these rules out. It's just about having that ongoing responsibility assigned to someone. Because otherwise someone will be given the task of installing a network intrusion detection system, and the next week they'll be fired or moved on to something else.
And because there's an overload of log messages and security monitoring data, it will never get done, it will never get looked at, and it never becomes a key part of that organisation's information security infrastructure. So it will vary very much with the environment, whether it's a corporate LAN, a cloud-based system or some co-located service, and with the nature of the services provided; a server has to provide some services, and they will be attacked in some way. That load can fall to one person or be shared across a department. As long as the responsibility sits with someone, or some function within the organisation, that's the important part. Okay, oh yeah, finally. Oh, is it? Okay. Okay, I stand corrected. Thanks very much. Thanks everyone.