Thank you. Good, my name is Wim Remes. I'm a security consultant working in Belgium, and I'm going to talk about OSSEC today. There's a lot of information about me on the internet, and there's a great tool called Maltego; I don't know if everybody has ever used it, but if you find my phone number you can call me and we can go and have a beer.

There are three things I want to show you before we go into OSSEC. First, I was a volunteer at the BruCON conference in Brussels last year; I don't know if anybody here has visited BruCON. We're going to have another event this year in September, and if you have something interesting to talk about, the CFP will open, I think, somewhere this month or next month. Anyway, it's a very good conference; it was awesome last year, and I'm sure it will be awesome this year too. Then I was lucky to be invited to ExcaliburCon, which was a security conference in China that is also going to have another edition this year. I know it's a stupid picture, and I'm very happy that I'm not in it, but if you ever want to combine security and China, this is the place to be. The third thing I'm involved in is the Eurotrash Security podcast: it's me, Craig Balding, Chris John Riley and Dale Pearson. We met at BruCON, and we were all listening to podcasts. The problem was that all the podcasts we listened to were hosted by US people, and there was no Eurocentric podcast, so we are really trying to make a European podcast using very basic tools. It's fun to do, and I think we provide some information that you might find interesting. If there's something we're doing wrong, don't hesitate to give us feedback after listening to the first episodes.

So, on to OSSEC. I'm not a developer of OSSEC; I have to make that clear.
I'm not a developer at all. I started as a user of OSSEC, and I've talked with Daniel Cid, the lead developer of OSSEC, a few times. My job is merely to introduce OSSEC to the community and to get more people using it for their log management and system protection.

The tool was developed by Daniel Cid. Somewhere before 2005 he was using Tripwire, which you probably know, on a lot of systems, and he had a lot of problems managing those logs. So he started developing, starting from a problem he had himself, and that grew into syscheck, which was the first name of OSSEC. Then he started building tools around it, and in 2005 he released the first open source version. It's licensed under the GPLv3.

Now, the tool itself was very good, but there was no support for it, and Daniel Cid was doing most of the development in his free time. Then he had the opportunity to join Third Brigade, a small company involved in security projects. The project stayed open source and they provided support for it, so you could get commercial support while the product remained open source. Then Third Brigade was acquired by Trend Micro, which you know as a big security company, and there have been doubts about the product staying open source. But I gave a talk at ESA a few weeks ago, and there was a guy from Trend Micro in the audience, and he said it was still open source and would remain open source.

The agenda for today: first I have to introduce you to the boring part of log management, the theory behind it. Then we're going to dig into the OSSEC features and a little bit of the OSSEC architecture, and I'm going to show you how log analysis works and how you can do your own log analysis on any log that you have, from any application, on any system that you might want to install OSSEC on. So: log management.
It's so easy that even babies can do it. Actually, it isn't.

There are a lot of sources that logs can come from. The first and biggest problem on all systems, where security is involved, is users interacting with our applications: if there weren't any users, there wouldn't be any need to do log management. There wouldn't be any need for applications either, and we'd be fine. Then we have the applications, the databases behind them, and the systems themselves, and they all generate enormous amounts of logs. What I've learned is that we look at logs only when there's a problem, to see what has happened. If we could do that proactively, it would be very nice.

The reasons why we do log management: I think there are only two. We do it because we have to, because there are requirements: regulatory requirements, an internal policy, somebody requiring you to do log management. And then there are the very few who do it because they want to; I haven't met any yet.

If you look at logs in any corporate system, you're not going to have only things logging to syslog: you're going to have your Windows logs, and your network appliances will have their own proprietary log formats. It's a big mess.

If we talk about log standards, the first standard that comes up is syslog; I think you all know syslog. The problem is that it has been abused a lot. By abuse I mean: there are developers (I don't have anything against developers, otherwise I wouldn't be here, or I would be wearing armor), but we're dumping chunks of source code into syslog; we're dumping usernames, passwords, even credit card numbers into syslog messages. And when we do look at application logging, most of the time we only find cryptic strings that mean nothing to somebody analyzing the log. They might mean something to the developer behind the application, but once it's in production, it's going to be a big problem.

Then there's a second type of logs, which are the proprietary logs.
There have been efforts by WebTrends and by IBM, and the most recent one is CEF, from ArcSight. They thought that since they were the biggest, everybody would adopt their log standard. It hasn't happened. ArcSight is one of the big SIEM (security information and event management) solutions, and some applications are already moving to CEF, but it's not too big. So we know what happens when a proprietary standard tries to become the big standard: it never happens.

And then we have IDMEF, which was an awesome initiative by some academics. It was very complex, and it wasn't in touch with what you see in a production environment, so it didn't materialize either. Most recently... oh, that was the next slide. Good.

What do we need if we want to talk about log management? We need a language that everybody agrees upon. We also need a syntax, so that every log message looks the same and everybody can understand it. We talked about syslog first: syslog, by the standard in the RFC, is UDP. If you're going to roll out syslog in a production environment, the first engineer that you meet is going to try to move you to syslog-ng.
syslog-ng is an awesome tool; it's very flexible for your log management, and it supports TCP, but TCP is not in the standard. You need to be able to use a transport suited to the message that you want to send: some messages might be fine over UDP, but if you need more reliable logging, in case of an incident, you might want to move to TCP.

Then we need recommendations, guidelines that everybody can use to do their log management, and there aren't many of those either. There is one initiative from NIST, which is a US organization, that you might want to check out, but it's about the only one that gives proper recommendations for log management.

More recently there is the Common Event Expression (CEE), a standard that might make it because it's very flexible: you can do binary logs, you can do plain text logs, you can use XML for logs, depending on what logs you want and in which situation a log event happens.

Now, for OSSEC. OSSEC is defined on the website as a host intrusion detection system, which means it's something you install on a system to detect intrusions. It's much more flexible than just that description. The three main features of OSSEC are log analysis, integrity control, and rootkit detection. With log analysis you can have it consume logs, interpret them, and have alerts thrown, or a reaction triggered, for a log message or a set of log messages. With integrity control you can monitor several folders on your system, and when a file changes, an alert can be thrown; with active response, which is part of OSSEC, you can have the original file put back, so the integrity of your system remains controlled. And then there's rootkit detection. It's not a replacement for any anti-malware solution, but it's a basic set of signatures for configurations that are interpreted as rootkits.

Let's dig into the architecture. First I have to touch upon the install modes. You can install OSSEC in three modes. For a web server, or a server that you have in a DMZ that you want intrusion detection on, you can just install it in standalone mode. Once you have multiple servers or clients, you're going to want to install the agent version on the clients and have a central server for the log management. The good thing about that setup is that the agent runs on the system but doesn't do any analysis on the system itself, which means it has a very low footprint on your server; all the computation is done on the OSSEC server. And because it happens on the OSSEC server, you're also able to do correlation of events: if somebody is scanning multiple servers in your environment with nmap, like we learned before, you're going to be able to correlate those events.

The two main processes running on the clients are the log collector and the agent. The log collector is in fact the only process that runs as root, because you need root access to read most of the system logs, and that's the only reason; the only thing it does is read the system logs and forward any new messages to the agent. The agent is responsible for communicating with the server, and all communication is encrypted and compressed. The standard port for OSSEC is UDP 1514; you know 514 from syslog, so they just put a one in front of it.

The server receives all the communication from the agents and forwards the messages to the analysis daemon. If there is an event, you can have two actions. You can have a mail sent, either to the system manager, or you can configure it so that for a certain application a mail is sent to the application owner or the application developer, so they can act upon that message. And then a very nifty feature, for me, is execd, which allows you to run a script in reaction to an event. A very good example comes from Defcon, a security conference in the US, where they usually have what they call a pwn-to-own contest: there is a box that they put there, and if you can hack it, the box is yours. In 2007 there was a guy who configured the system to do an ARP poisoning attack once somebody tried to intrude into it. It was never hacked, and he used OSSEC and the Scapy libraries to do that.

So if you have a complex architecture, you're going to have all your clients running the OSSEC client, and you're going to have your central server. That's interesting, but you still don't get a good overview of your complete infrastructure. Your firewalls, your switches, your routers: you cannot install an agent on them, because they're closed, but you can have them report to your OSSEC server using syslog, and you can interpret those logs as well. So we have servers and we have our network infrastructure, but maybe we have installed an intrusion detection system on our network as well; we can have that report into OSSEC too. Snort is supported by default, so OSSEC already has a whole set of rules to read Snort messages. Then of course we have applications and databases. We can monitor those logs as well.
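On the server side, accepting syslog from network devices that cannot run an agent is configured in the `remote` section of ossec.conf. As a hedged sketch (the port and network range here are example values, not from the talk):

```xml
<ossec_config>
  <!-- Accept plain syslog from network devices (firewalls, switches,
       routers) that cannot run an OSSEC agent; example address range. -->
  <remote>
    <connection>syslog</connection>
    <port>514</port>
    <protocol>udp</protocol>
    <allowed-ips>192.168.1.0/24</allowed-ips>
  </remote>
</ossec_config>
```

Agent traffic uses a separate `secure` connection type; the `syslog` connection shown here is only for plain syslog senders.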
Because the applications and the databases are running on our servers, we just point the OSSEC agent to those log files, and the agent will consume those as well. And if you're doing virtualization: you can install the agent on any Linux or Unix based system, even on VMware ESX. With that, you have complete centralization of your infrastructure logs.

These are the rules that are included in the OSSEC package by default. You see there are Solaris rules, SonicWall, Cisco, Asterisk, Apache, and using the local rules you can create exceptions to, or extensions of, the existing rules. One thing to note: if you change a rule in one of the application-specific rule files, those changes are going to be overwritten during an upgrade, but the local_rules.xml file will never be overwritten.

How are we going to do log analysis? I already told you that everything happens on the server. The first thing it does is pre-decoding: at that moment it extracts basic information from the log message, but it does not do in-depth analysis on it. Then, in the decoding part, we extract a lot more information, like IP addresses, user names, host names and specific strings. In the analysis phase we give meaning to that information: we interpret it and make it clear.

So this is an example of pre-decoding. It's a default syslog message, so we can extract the time and date, the hostname is there as well, and then we extract the application name, the program name, and record the rest as the log message. Now, we might have a tool monitoring the same application where the application name doesn't match exactly, but we want to catch that log message for our application as well; so we can use a basic regular expression to extract the application name and still have it handled by the same rule set.

Then, in the decoding phase, more information gets extracted. This is just a basic login message, and again you see we use regular expressions to define which fields we want, and with the order tag we give meaning to that information.

In the analysis phase we create rules. We know the message was decoded, in the pre-decoding phase, as coming from a certain daemon in this case, and we look for a string "logged in", which means a user has correctly logged in. And based on the first rule, you see that every rule has a rule ID, and by reusing that rule ID in the next rule you can create a chain of rules. So here we have: if the user is not John, we throw the alert "okay, this was not John".
Okay, this was not John Based on the source IP we can match it to a cedar notation of the of the network if In our current policy is not allowed to access a certain host from a certain network We can throw alert on that as well By building rules like that you you can really create a flexible rule tree and Have actions taken on certain events and that really makes it a really flexible tool to To manage all the log that is coming in instead of looking at edit When the event has already happened if you're gonna build rules You're gonna have regular expressions problem with the The real regular expressions on in an IDS situation you may not want The extensive extensive features you want to have speed because you want to have the the messages interpreted as as fast as possible, so The OSCEC team has decided to build a regular library For the decoders and the rules. There is two libraries. This is the first one, which is a very extensive a more extensive one And you can build your rules very flexible that way Then there is a second one, which is actually Only used in the rules and it's the simplest that you can have You can just look for a string and then you can have multiple strings chained with the with the pipe And this is the fastest one is the best to use for integrity checking OSSEC.conf is the basic configuration file where you do all the configuration for your host In the in the cys checks section you just define which Which directories you want to include to have the integrity checking? It's very important I for me if I configure it on systems. 
I always take the highest level in the hierarchy and then exclude lower down, because if you start the monitoring too deep in the tree you might miss something, while starting high you can still exclude the files or directories that you don't want to monitor; those are mainly files and directories that you know change a lot, where you don't want alerts thrown.

Since version 2.2 there is also a real-time check. In the past it was run like you see at the top, on a defined schedule, so you could have it run every few hours; now you can have certain high-risk directories also monitored in real time. Then, in the rules, you configure rules for your applications and your files, and these are the basic rules in the OSSEC configuration to monitor file changes on your systems. Like I said before, you can use those to create actions: maybe to block a user if he created or changed the file, or to just put the original file back, so you maintain integrity.
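Putting those pieces together, a syscheck section in ossec.conf combining scheduled scans, real-time monitoring of a high-risk directory, and exclusions for files that change constantly could look like this sketch (the directory choices are examples, not the speaker's):

```xml
<ossec_config>
  <syscheck>
    <!-- Scheduled scan interval in seconds (here: every 2 hours). -->
    <frequency>7200</frequency>
    <!-- Start high in the hierarchy and monitor everything below. -->
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
    <!-- High-risk directory monitored in real time (since 2.2). -->
    <directories check_all="yes" realtime="yes">/var/www</directories>
    <!-- Exclude files that change all the time. -->
    <ignore>/etc/mtab</ignore>
    <ignore>/etc/adjtime</ignore>
  </syscheck>
</ossec_config>
```

`check_all` turns on all the file checks (hashes, size, ownership, permissions) for the listed directories.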
So you maintain integrity integrity There's a lot of commands that you can use on the on the server to check Integrity The c-check update is just updating the database That's not that's done automatically, but every every few hours if you want to have the situation updated now You can use that command With the c-check control minus L you're going to get a list of the of the agents My mind minus I is just going to give you from a certain Clients all the files that have changed after after short while it will already be along a very long list If you're looking for a specific file, you can use that last command then the management of OSSEC At this moment it all happened happens from the command line There have been I did there is a basic web user interface that you can use if you're gonna have a lot of a lot of server, it's not very I It's not very very well developed at this moment I Mainly use OSSEC to create meaning to the to the messages, but then they're going to be forwarded to something like Splunk to really create dashboards or information May make it visual so on the command line you have Managed agents and there's two versions on it if you compile it in the server version You're going to be able to create the central keys And you're going to have a central database of keys then on the agent side The basic functionality is there to import the key and that that's it Agent control that's your main tool to control all the behavior of the agents minus LC is going to give you the the list of the clients that are currently connected then minus agent ID is going to give you the information on the client which is the OS version IP address and if you're going to use central configuration Which I will explain a little later You will also see the MD5 hash of the central configuration file so you can compare it and see that it's up to date Mine minus small r a is gonna do the System check and the by the integrity check and the root key detection check on the On all the 
agents. -R followed by an agent ID restarts that agent, and -r -u with an agent ID runs the system check on one specific agent.

I didn't include centralized configuration in my slides, but I want to explain it a little. You have the possibility to create a central configuration file for all your agents, and it's pushed to the agents every two or three hours. In that configuration file you can either specify configurations based on the IDs of the agents (so if you have three web servers, you include the IDs of those agents in the rules specific to those clients), or you can create a configuration based on the operating system, so for Solaris, Linux or AIX you can create different rule sets.

The basic conclusion: if you do log management from a corporate point of view, you're going to get a reseller coming in with a proprietary solution, and you're going to have two or three consultants at your office for a few months. The problem is that they don't know your applications and they don't know your environment; the only one who knows your environment and your applications is you. If you start with OSSEC, you will have the time to create rules based on your environment, and not on what the proprietary solution offers. OSSEC, in development since before 2005, is a very mature solution; it's very stable and offers a lot of functionality for you to start your log management. As I said, log management is something that has become necessary due to regulatory compliance, and if you're going to do log management, you'd better start by understanding your logs yourself. One thing to remember is that tuning the rules of any log management solution never stops: you can create your rules now, but in a few months you will have to revisit them and see whether they still fit your environment. It's a lot of work, but it's worth it.
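The centralized configuration just described lives in a shared agent.conf file on the server. As a hedged sketch (the agent name, paths and directory choices are invented for illustration):

```xml
<!-- agent.conf on the server, pushed out to the agents periodically. -->

<!-- Applies only to the agent registered with this name. -->
<agent_config name="web01">
  <localfile>
    <log_format>apache</log_format>
    <location>/var/log/apache2/access.log</location>
  </localfile>
</agent_config>

<!-- Applies to every Solaris agent. -->
<agent_config os="Solaris">
  <syscheck>
    <directories check_all="yes">/usr/sbin</directories>
  </syscheck>
</agent_config>
```

Each agent applies only the blocks whose name or os attribute matches it, which is how one central file carries per-server and per-platform rule sets at the same time.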
So, that was my overview of OSSEC. I hope you found it a little bit interesting. Are there any questions?

Yes. I understand that the problem with UDP is reliability, because you have no confirmation. The intelligence is built into the agent and the server: when your server fails, your agent will queue the messages and send them when the server is available again. I'm going to walk up to you.

So you're asking if there's redundancy built in. That is not really there as such. What you can do is have multiple servers: you can have one agent with keys for several servers, but it needs some extensive configuration. Does this still work? Not from here... wait... better. I think there are two main routes to redundancy. You can have two servers and have the agent point to both, and then you will have to synchronize the keys between the two servers; that's one solution. The thing is, since there is already reliability on the agent side, if your server goes down, the agent will keep the messages until the server is available again, and building a server is really very easy: you basically have a server installed in about 20 minutes. It's better to have a backup of your keys and rebuild than to have one server running doing nothing in case the other crashes. But you do have the possibility to make redundant servers; I haven't used that possibility yet.

Either you yell or I come to you. Yeah, the agent is currently supported on Windows from 95 to Windows 7; it's supported on AIX, on Solaris, and on most flavors of Linux. Excuse me. Yes. Yes. One second.
There is the MS... regarding authentication, there are the DS DACP rules at this moment. For the installation of the agent: basically, it's a simple install. In most cases there isn't a distribution package for all OSes, so it's a basic install, and normally I create a package for the customer based on the OS he's using. Right now I'm busy with a project involving Solaris and AIX, so I created basic packages for those.

We have 50 minutes left. No more questions? 20 minutes... I talked too fast.

Yes. One OSSEC server can handle about 250 clients. Yeah, it depends on the memory you have, the storage you have, and on your network card and the speed of your network, so that's different in every environment.

Excuse me. Yeah. For the file integrity, it records the MD5 and the SHA1 hash of the file.

No, at this moment, no. But it's an open source application, so if you want to contribute, you can do that.

Yeah, sorry. Yeah. No, you're going to use a tiered approach if you have that many clients or that many log sources: you're going to have your basic OSSEC layer, then another layer of syslog servers, and at the end, for your storage, you're going to have central storage. Thank you.