 edition of DEFCON. Those are the hours that start at 8 and never end. My name is Ofir Arkin and I will be speaking today about infrastructure discovery. Usually a lot of people at DEFCON talk about how to break things, how to find cool stuff, how to take something apart. Most of the people that I know from the world who actually do network security are in charge of maintaining their stuff. This talk is basically going to be about what kind of knowledge we need about an enterprise network and how we get it: how do we acquaint ourselves with our users, how do we acquaint ourselves with the machines that operate on the network, and with what is actually being done on the network. Usually if you look at the DMZ, the DMZ is well fortified. Most of the things are already there. Most of the firewalls that we have can block 90%, 95% of all attacks. We can put in another system so we're like 99% protected, so the guys who understand how to defend themselves are actually pretty good at doing that. Usually it's not that hard to do. You just need to understand what you need to do and what you need to defend, and that is sometimes the more interesting question. I'll try to go through this really fast. We're going to talk about why we need infrastructure discovery, why active network discovery and passive network discovery are not that good, and what we can do about it, and I will demonstrate a new agentless, real-time approach that gives us real-time information about our infrastructure and allows us to do proper auditing. A bit about myself, and I'll do this pretty fast. I'm the chief technology officer of Insightix. I'm also the founder of the Sys-Security Group. Most of you know me from my previous work: "ICMP Usage in Scanning", Xprobe2 and other things that I've published over the years. I'm also a member of the Voice over IP Security Alliance. Let's go. Okay, so you have an enterprise network. It's large, it's complex, a lot of things operate there. Usually it may be combined from multiple networks in different locations, and basically there are a lot of things that operate on the network. Some of those are mission critical. "Mission critical" is a nice definition that means different things to different people. At the end of the day we need those systems to work in order not to suffer financial damages, another nice definition. But at the end of the day, mission critical means that if those systems are down, or if those systems cause some kind of problem, other systems suffer. For some, mission critical means the servers and the networking gear, and for others it also means the desktops. If you suffered a worm outbreak that hit a desktop, and that desktop brought down your access switch or other things on your network, then that desktop is mission critical to you. IT basically needs to take care of a lot of things that are important for the regular operation of the network. We need the network to be available. We need the elements to be available. We need to maintain the security of the network and, of course, the availability of whatever is on the network. So basically IT has a ton of work. Usually the CIO will come to IT and say: well, you have enough people to maintain what we have, just invest in smarter management systems.
So IT needs to identify what the assets are, what they mean to the organization, their properties, their roles, the different interdependencies, what will happen if one of them fails, and how that will affect the IT organization and the business processes, in order to understand the importance of those assets. A lot of boring stuff which at the end of the day is very critical, because if we're not doing this we don't stand a chance of actually running an IT organization. And we have here a lot of things that relate to provisioning the network, provisioning the elements, knowing what we're defending, knowing what we're managing. Detecting and troubleshooting issues, defending our assets, eliminating those systems that actually pose risks to some of our assets, and of course other things like preparing for the worst: disaster recovery, disaster recovery management and so on. The problem is that in order to do all of those things we need information. We need information about the network layout. What does our network look like? We need to know the topology. We need to understand what our resources are, and we also need to understand what the elements are that we actually need to manage and to secure. So at the end of the day we need intimate, complete and accurate knowledge about our infrastructure, and without this kind of knowledge we can't actually do management and security. The problem is that the information we need is either unavailable, partial or incomplete. This is because this kind of information is not easily produced. Some people in our organization may have had that information, but they left. Some of this information is dynamic, and some of the systems that we use to maintain that information are insufficient. So at the end of the day, if we don't know the network, we can't actually manage or secure it. This brings us to this slide, the result. The result is that we always work at the 80-20 rule, and you know what? The 80-20 rule is really bad when you do management for networking and for security. The 80% that I know of? I'm managing them right. I'm doing all the things that I need to do for them right. But it's the 20% that I'm unable to uncover, the ones I don't know about in my organization, that actually pose the highest risk for me. The highest risk can come from a compromise. The highest risk may come from an auditing perspective. The highest risk may come from a lot of things, but at the end of the day, I don't have control. I cannot understand what I have, and yet I have a false sense of control and a false sense of security. And when I have those, it's all downhill from there. So over the years, there were several technologies that tried to solve the issue of knowing the network. Basically, what they try to do is map the topology of the network, understand what elements reside on the network and what their properties are, and maintain this information up to date. It seems like a simple definition, right? It contains everything, but it has a lot of problems in it. Doing network topology, if some of you are using network management systems that claim to do this, is a hard thing to do. It's not that easy. It's not that easy to say which switch is connected to which if you do not use CDP or another proprietary protocol.
It's also not easy to produce information about what actually lives on the network. So let's start with one of my favorites, active network discovery. With active network discovery, basically, we send some packets to the network and we hope that we'll see some responses from the elements on the network, and according to those responses, or the lack of them, we draw conclusions about those elements. An active network discovery system will try to get information about the inventory, meaning the elements and their properties, information about network services, the configuration, the topology, and, if the system is able to, it will perform vulnerability assessment. Usually it will be installed at a single location on the network, although you can install it at multiple points, and it will scan the network from that point. The way this can work is that an active tool is fed a list of IP addresses or networks it needs to scan, or the active network discovery tool tries to do that automatically by querying local elements and drawing that information from them. For example, HP OpenView, Tivoli and other management systems go to the router, read the routing table, understand that there are other networks they might need to operate against, and keep trying to detect other routers on the network, taking more and more IP address ranges and from there trying to enumerate all the information about the network. The strengths of active network discovery mainly relate to the ability of active tools to completely control what they're asking for. They send the stimulus out, so they can decide: we need this information from that element, we need that information from another. They can also control the initiation of the scan and the pace at which the queries are sent. Technically speaking, if there are no obstacles between the active network discovery system and the target, an active network discovery system can scan entire IP address ranges. The weaknesses are many. (I'll try to do something about the refresh rate pretty fast. Sorry, I can't fix this.) The first weakness is that the discovery will be incomplete. This is because network obstacles will prevent packets from reaching their targets: network-based firewalls, host-based firewalls (those Windows XP SP2 machines that you installed and forgot have a firewall enabled by default, so now you're unable to manage them), NAT-enabled devices, load balancers and other things you put on a network will prevent the packets from reaching their targets, and basically this creates black holes of information. So from a network that has a lot of elements, you'll "discover" that you have a very small network. Well, it's not true. Sometimes you'll need some type of information to be enumerated from those elements in order to draw your conclusions, and you know what, sometimes those services will not be running on those elements, and sometimes they will not be willing to speak to you. For example, if you need information from the Windows remote registry and you have administrative rights for the domain, but the system is not a member of the domain, guess what, it will not help you. The same with SNMP: if you don't know the community strings of the routers you want to enumerate, at the end of the day you can't get the information from there either. The process is very slow. It's time consuming.
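To make the active approach concrete, here is a minimal sketch of such a sweep in Python with Scapy. It is illustrative only, not one of the tools named above, and the address range is a placeholder.

```python
# A minimal sketch of the active approach (illustrative only, not the
# tools named above). Send a stimulus, draw a conclusion from the
# response or from silence. Requires Scapy and root privileges.
from scapy.all import IP, ICMP, TCP, sr1, conf

conf.verb = 0  # keep Scapy quiet

def probe_host(ip):
    # ICMP echo: a reply proves the host is up and yields its TTL.
    ans = sr1(IP(dst=ip) / ICMP(), timeout=2)
    if ans is None:
        # Silence is ambiguous: the host may be down, or a firewall or
        # NAT device may be eating the probe. Try a TCP SYN as well.
        ans = sr1(IP(dst=ip) / TCP(dport=80, flags="S"), timeout=2)
    if ans is None:
        return {"ip": ip, "alive": False}  # ...or a black hole
    return {"ip": ip, "alive": True, "ttl": ans[IP].ttl}

# Sweeping a range this way costs up to four seconds per silent
# address: exactly the slowness (and staleness) discussed next.
for host in (f"192.168.1.{i}" for i in range(1, 255)):
    print(probe_host(host))
```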
By the time you get the information, the information you get is obsolete. This is because it's not real time. It also consumes a lot of network bandwidth, and it might take some elements down while it's doing its work. This is because sometimes it will overwhelm the networking elements between the scanning system and the target with the amount of traffic it produces and will cause a denial of service condition. So some folks went and said: well, you know what, we can do the scanning at off-peak hours; we'll run our active network discovery systems or management systems at off-peak hours and that will do it. Well, wrong, because you can't actually define what off-peak hours are, and sometimes during off-peak hours there are crucial processes running, like backups, restores, or reporting on your databases. The process of probing those elements might not fit into a fixed window, and sometimes it may take longer than you expect. A telecom company that I know runs HP OpenView against 4,000 devices, and it takes them 24 hours to complete the scan. This is an absolute no-no if you're doing it against your operational network. So there were others who said: well, you know what, we'll do the scan faster. We'll narrow the time the scan needs and finish it faster. Well, it's a good thing in theory. How many of you know Nmap? How many of you have used Nmap? Cool. So can someone tell me the usual time that Nmap takes to scan an element on a local network? Three seconds. How many packets does it send? 1,500. So let's see: if I scan my network at three seconds per element, 1,500 packets each, that's a very bad thing to do. Try that on your local printer. Believe me, my printer cannot withstand that kind of fast scan. So whenever someone from my engineering team uses Nmap by mistake, I know, because I can't print; I need to go to the printer, power it off, power it on, use it, and then go see which engineer was playing with network discovery on my network. So at the end of the day this will cause disruptions on your network. Scanning faster is not the solution. And that brings us to the stability issues, which are the major ones. They will cause your machines to become unstable: print servers, printers, other elements that cannot withstand a lot of traffic, and all of those will at the end of the day end up in denial of service. (It doesn't help if I play with it, even. Sorry about that.) This happens because of the pace of the scan, the usage of non-RFC-compliant packets, or the type of information enumeration you're using. If you're thinking that you can do network topology by pulling the ARP tables from the routers, believe me, this is something that you do not want to do. You make the routers choke, you lose connectivity to everywhere, and this is the last time that you will run an active tool on your network, because your CIO will either fire you or tell you this tool does not run on my network anymore. Some folks just forget that when they do a vulnerability assessment they shouldn't use the tests that actually cause denial of service. It's a bad thing. Against production machines it's not a good thing either. So sometimes the stability issues arise just because an element between the scanner and the scan target cannot withstand what you're doing.
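If you do have to probe actively, the pacing problem can at least be made explicit. A minimal sketch, assuming a rate that the slowest device on the segment can tolerate; the number here is an assumption, not a recommendation.

```python
# A hedged sketch of pacing: spacing probes out so fragile devices
# (printers, old print servers) are not overwhelmed. The rate is an
# assumption; pick whatever the slowest element on the segment tolerates.
import time
from scapy.all import IP, ICMP, sr1, conf

conf.verb = 0
PACKETS_PER_SECOND = 5  # illustrative ceiling, not a recommendation

def paced_sweep(hosts):
    interval = 1.0 / PACKETS_PER_SECOND
    for ip in hosts:
        sr1(IP(dst=ip) / ICMP(), timeout=1)
        time.sleep(interval)  # trade scan duration for device stability

# The trade-off is exactly the one described above: at 5 packets per
# second, a single /16 takes several hours for one probe per host.
```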
If you remember some of the old Check Point denial of service conditions: someone scans from the internal network, and what happens is that the firewall's state tables fill up with entries waiting for connections to time out; the state table is full, it cannot handle other legitimate connections, so what you have is a perfect denial of service from someone on the inside doing a pen test. Also, switches will be overwhelmed by the number of connections and they will die. Believe me, and I'll mention this again later, if you SPAN too much on a Cisco 3550, it's quite a party. It will send packets all over the place; you'll never make sense of your network again. You should see it. So at the end of the day, the result is that some of your networks may be declared active-scan free. You'll never be able to scan them, because all of the other admins, the application admins, the network admins, will come and say: we can't do this, because at the end of the day I'll have denial of service, and I don't want that on my production machines. So there are a lot of production networks that cannot be scanned because of these things. (Nothing I do helps me with this.) So you have parts of your network that cannot be scanned, because people are afraid of active network discovery tools, or previous history has proven them right about that. Some people suggested that active scanning can be done in a different way. Let's have multiple machines scanning from the same place; it's actually called a cluster. Guess what? This doesn't help you, because the scan is now distributed among several boxes; they scan the network at the same time and they actually send more traffic to the network. So all of your denial of service conditions, all of your other problems, are still there, and nothing is really gained. Others suggested putting a proxy in each IP subnet or broadcast domain. This is actually a better idea, because it can scan faster and you do not need to go through routers. But at the end of the day it doesn't give you any other advantage: you still need prior knowledge about what you have, the scan packets can still cause instabilities with the elements on the network, and if you have firewalled devices, NAT-enabled devices or any other obstacles on your network, those packets will still not get through them. So basically, active network discovery at the end of the day provides you with an incomplete, and in some cases inaccurate, map of what you have on your infrastructure. Questions up to this point? [Audience question.] Because you have to understand what you have. The idea is that I don't want you to tell me what you have; I want my tool to discover it automatically. So if you have something that you are not aware of, the tool will understand what it is. [Audience question.] It depends what you want to get out of it. I mean, a snapshot of the day is nice, but do you need the information to be current? If you need to manage something, if you need to be able to do something useful with the information that you uncover, you need this information to be real time, and active network discovery cannot be real time. Yes. Exactly. Yes. This is absolutely true. So, moving quickly to another area. If you didn't know, passive network discovery systems were first introduced in the mid-1990s.
Don't let other people tell you that they were invented last year. Basically, a passive network discovery system is merely a sniffer installed at a choke point of the network, processing packets that were sent by active network elements. That's actually one of its biggest problems. So how does the deployment look? The system can be connected to the network using a tap, like this one; a SPAN port, like this one; a SPAN port at the access layer; or SPAN ports that are all fed to one element. So what kind of information can be harvested? Basically, information about active elements: active elements that show up in the inventory, active network services, the distances of those active elements from the monitoring point, the client-based software, network utilization information and, if the passive network discovery tool supports it, vulnerability information. The purpose of collecting the information would be to build the layer-3-based topology, which is fairly problematic, because what you need is actually the physical network topology, including the switches, including the hosts, to know exactly what you have, and not just the layer 3 view; plus network utilization information, some network forensics if the tool is able to do that, vulnerability discovery, creating some context about the network, and feeding that information to other systems. The strengths of the system are mainly its ability to be real time and to have no impact on the monitored network, in the sense that it doesn't send any traffic to the network and therefore doesn't pose any risk to the network it runs on. It can process packets and information from all TCP/IP layers, and it can detect anything that is active on the network, even if only for a short time period. It can also detect elements behind network obstacles such as NAT-enabled devices and firewalled elements, it can provide you with network utilization information, and it can be useful for detecting network-related anomalies. Who here is using a system that is actually based on anomaly detection? Cool. Are you happy with it? No, right? Because my anomaly is not your anomaly, and it's not his anomaly, and that's the main problem. I'm not going to go into it, but I don't like it. Weaknesses. The first and biggest weakness of a passive network discovery system is that what you see is all you get. So if you have an idle service, an idle system, an idle whatever, that kind of element, that kind of information, will not be uncovered. If the information does not go through the monitoring point, it will not be uncovered. It means that at the end of the day the discovery will be incomplete, inaccurate and non-granular, because we cannot control what passes through the monitoring point. We cannot go to the client and say: you know what, we need you to browse now, because we need the type of information that your browser sends to the network in order to conclude something. We don't have any kind of control. We don't control the stimulus. So passive network discovery cannot detect all assets, cannot detect all protocols and services, and cannot detect all ports. We cannot control what we see, and in order for us to detect something, well, sometimes we simply cannot, because the information that discloses what we need may never pass the monitoring point.
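The "merely a sniffer" description translates almost directly into code. A minimal passive-discovery sketch with Scapy, recording only whatever happens to pass the monitoring point; run it as root on the interface attached to the tap or SPAN port. The inventory schema is an illustration.

```python
# A minimal passive-discovery sketch: sniff at a monitoring point and
# record whatever happens to pass by. Built with Scapy; run as root on
# the interface attached to the tap/SPAN port.
from scapy.all import sniff, Ether, IP, TCP

inventory = {}  # ip -> {"mac": ..., "ttl": ..., "ports": set()}

def learn(pkt):
    if not pkt.haslayer(IP):
        return
    ip = pkt[IP].src
    entry = inventory.setdefault(ip, {"mac": None, "ttl": None, "ports": set()})
    if pkt.haslayer(Ether):
        entry["mac"] = pkt[Ether].src
    entry["ttl"] = pkt[IP].ttl  # distance hint, see the heuristic below
    # A SYN/ACK from a host reveals a listening service, but only if a
    # client happened to connect while we were watching: no stimulus
    # control, exactly the weakness described above.
    if pkt.haslayer(TCP) and (pkt[TCP].flags & 0x12) == 0x12:
        entry["ports"].add(pkt[TCP].sport)

sniff(prn=learn, store=False)  # runs until interrupted
```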
So, another interesting subject, and this is actually where it gets worse. (Sorry, we're having some projector trouble here; we had to swap machines. Thanks for your help, and a round of applause for the guy who donated his laptop. See, I got an upgrade: now I'm using Windows. Okay, here it is. You can see it now, right? Let's start from the beginning.) I have some modifications to the presentation of the problem. Basically, when a passive system tries to build the topology, it will look at the distances of the elements from the monitoring point, using the time-to-live field value of the packets it receives, and that is the information it will use in order to build the layer 3 topology. And you know what? Guess what? That actually does not help me, because even if I see where the elements are and what their position is relative to my system, I cannot build everything: I will have islands of information and I will not be able to uncover the exact routers that are actually operating on my network. And the only thing I will be able to detect is the systems that actually send traffic through the monitoring point; if routing is not done through the monitoring point, I miss it. So if this is my network, and I want the passive network discovery system to draw it, this would be the result. Similar, right? I had this, and now I got this. The problem is that I'm not able to uncover the switches. I'm not able to uncover which machine is connected to which switch. And at the end of the day, I get this thing: I have a router, I have systems, but it's not my network. The problem is that we need to be deployed as close as possible to the access layer; we need to see layer 2 traffic in order to do this. It means a lot of monitoring boxes to get complete coverage: a lot of boxes and a lot of monitoring ports. Another weakness is that I'm unable to monitor the state of network services. If a service is idle, I cannot monitor it. If the service goes down, I cannot see it. At the end of the day, no service monitoring. For example, I might see traffic for a certain port and declare it open; the guy then closes the port, and I will never know it. The less obvious weakness is the fact that I can inject whatever I want into the network, and guess what, the passive network discovery system will take it. The problem is that there is no way for the passive network discovery system to validate the information, since the only traffic it sees is what goes through the monitoring point. It can't do anything. So at the end of the day it is left with no ability to validate what it sees. And that doesn't only affect the passive network discovery system itself.
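Before going further, the distance estimation just described is simple enough to sketch. This is the heuristic only, not any particular product's logic: assume the sender used one of the common default initial TTLs and subtract the value observed at the monitoring point.

```python
# The distance estimation just described, as a function: assume the
# sender used a common default initial TTL and subtract what we observed
# at the monitoring point. A sketch of the heuristic only.
COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def hop_distance(observed_ttl):
    # Pick the nearest common default at or above the observed value.
    for initial in COMMON_INITIAL_TTLS:
        if observed_ttl <= initial:
            return initial - observed_ttl
    return None  # above 255 cannot happen in IPv4

assert hop_distance(64) == 0   # a local sender using a default of 64
assert hop_distance(61) == 3   # the same sender seen three hops away
```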
If there are other systems that rely on the information the passive network discovery system provides in order to draw further conclusions or do other things, like network intrusion detection systems or even network intrusion prevention systems, that might be something we don't want to happen. The information taken from a passive network discovery system will cause the management systems consuming it to produce even more inaccurate and incomplete information when they process it. A very simple example to show you how this works. Let's say we have a Windows-based machine, like the one here, and the passive network discovery system has some logic that says: until I've detected the operating system itself, I'm not trusting the information carried in the different field values. So unless I understand that this is a Windows machine, I'm not going to trust the time-to-live field value. So what I've done is play with only one field. I changed the time-to-live field value from the default of 128 to a default of 126. All in all, it is still a valid Microsoft Windows machine, but for the passive network discovery system it is now located two hops further away. I didn't actually do anything: I didn't move it, I didn't have to physically disconnect it from where it is connected. This is because a passive network discovery system is in these cases unable to go to the switch and ask it: okay, who's connected to you? A simple example to show you that it is very easy to trick the system. All you need to do is understand the methodology and the way the system works, and then you can put in decoys and deception and hide your system. Other weaknesses are in the parsing of what you capture, mainly because you need to take the information, open it up and understand what you're seeing. There are even examples of remote code execution; usually we see this against Ethereal, which is a great tool, one of the best network analyzers out there, but remote code execution has been possible against its different versions. So, as someone who was in charge of a network, I had a problem: I couldn't understand what I have, couldn't understand what I need to defend. I couldn't understand where my elements are, so I couldn't understand what defenses I need to build. I cannot build walls without first understanding what I need to defend. I've been working on this for the past several years, and I analyzed whatever there is to analyze about active network discovery and passive network discovery, and figured out that both of them will never help you if you want real intimate, complete and accurate knowledge about your infrastructure. Another interesting point is that there is a lot of theoretical research out there that simply cannot work in live, real-world environments. If you remember the NAC research: it's good on paper, but unfortunately it doesn't really work in the real world.
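The Windows TTL decoy from the example above is a one-field change. A Scapy sketch of the same trick, with a placeholder destination address:

```python
# Reproducing the decoy just described: traffic that is normal in every
# way except that its initial TTL is 126 instead of Windows' default of
# 128. Under the distance heuristic sketched earlier, the host now
# appears two hops further away than it really is.
from scapy.all import IP, TCP, send, conf

conf.verb = 0

# A SYN that will cross the monitoring point with a doctored TTL.
send(IP(dst="192.0.2.10", ttl=126) / TCP(dport=80, flags="S"))

# On a real Windows machine the equivalent is a persistent one-value
# change to the stack's default initial TTL; nothing else about the
# machine changes, which is why a purely passive system cannot detect
# or validate the deception.
```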
So the goals of the research were: to have this kind of complete and accurate information; to have it in real time; to not use agents; to detect and react to changes in real time; to not accept decoys that someone sends, but to recognize them and ignore them; to cover the entire IP address range of a network; and of course to have other things like topology, which is really important. So what I've come up with is a technology called dynamic infrastructure discovery. Basically, this is a technology that combines, tightly integrates actually, active and passive network discovery, and by tightly integrating them it gains discovery abilities that cannot be achieved with either of them used alone. This technology is adjustable to any type of network, because it is able to balance the information harvested from a passive link against the amount of traffic it needs to send through an active link. It works by listening to traffic that goes through a certain monitoring point on the network. It starts to build profiles of the elements operating on the network, and after a certain time has passed, if some pieces of data were not collected, or some pieces of data cannot be collected through passive means, it calls the active side and surgically inserts packets into the network in order to get the missing pieces of information. For example, if you want to know where your elements are, you need to talk to the switch, and you cannot do that passively; or if you want to validate the information that you have seen passively, you do that actively. So at the end of the day it balances against the type of information it sees going through the monitoring point, which can be good, can be bad and can be great. It all depends on the time of day. For example, when your users come to work at 8 o'clock, 9 o'clock, everything is working: they do email, they browse the web, everything is fine. 12 o'clock, everybody's at lunch, nothing happens. 6 o'clock, everybody goes home; a lot of elements basically disappear from the network because people shut down their machines. 2 a.m., you might see a lot of backup activity. So at different times of the day you see different things on the network and can learn different things about the elements, but at the end of the day you need that information to be complete. So the balance comes from using the active side to probe surgically. Other abilities are unique: for example, you can detect which guest machines belong to a certain VMware host machine, or any virtual machine; you're able to tie them together. It's easier to detect NAT-enabled devices and wireless access points using this technology. Basically, any piece of information that can be gathered either passively or actively can be gathered using this technology. You can build your asset management. You can do physical network topology, because you are able to take the information that comes from the passive side, combine it with the information you take from the switches, and produce a physical network topology that shows where each host is actually connected, the switch connectivity and the routers. This is something that you can actually work with.
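The core loop, passive profiling with a surgical active fallback, can be sketched as follows. This is a toy illustration of the idea as described, not Insightix's implementation; the grace period and the profile schema are assumptions.

```python
# A toy sketch of the dynamic idea: build profiles passively, and only
# when a field is still missing after a grace period send one surgical,
# RFC-compliant probe to fill the gap.
import time
from scapy.all import IP, ICMP, sr1, conf

conf.verb = 0
GRACE_PERIOD = 300  # assumed: seconds to wait for passive data first

# ip -> profile; "first_seen" would be stamped by a passive listener
# like the sniffer sketched earlier.
profiles = {}

def maybe_probe(ip):
    p = profiles[ip]
    missing = p.get("ttl") is None
    waited = time.time() - p["first_seen"] > GRACE_PERIOD
    if missing and waited and not p.get("probed"):
        p["probed"] = True           # one targeted packet, not a sweep
        ans = sr1(IP(dst=ip) / ICMP(), timeout=2)
        if ans is not None:
            p["ttl"] = ans[IP].ttl   # the missing piece, fetched actively
```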
You have real-time change detection. You operate in real time, because you have that from the passive side. You don't require any agents to be installed on the machines, because your monitoring point provides you with all the information you need. I've already talked about this. One of the things that I like about this is that you are able to triangulate between user, host and location. One of the biggest problems of managing and securing an enterprise network is the inability to say who the exact user is that is actually doing something, and where his machine is actually located. When you have the physical location information, the host name and the username of the actual user, it's very easy to come to someone and say: you are the one that did this and that, because you have all the information that ties that user to his machine. By using only surgical probing, this technology does not pose any kind of risk to production environments, because you do not need to use lots of packets to probe elements on the network for information; all the information is either harvested passively or obtained with surgical active packets, so the number of packets actually inserted into the network is very low. The packets are RFC compliant. There is no risk in using those against production environments. Some limitations: the only real limitation this technology has is that you can't really scan a network for 65,000-something ports, so you might not uncover all the services that are actually running on a system. Some usages, and I'm being brief because I need to finish. You have clear visibility into your infrastructure. Everything that is on the network is detected: storage devices, host machines, network-enabled devices, wireless access points, everything. The accuracy level of the technology, let's say when we're doing OS detection, is at most 5% false positives. The ability to locate any device on the network, see its properties, see its whereabouts and understand what it has been doing, all in real time, is something that you need as context. When you have a physical network topology in which you can locate everything, and when you have the audit information that lets you see what everybody, or one particular element, is doing, you're able to understand what is happening, and if there is a problem on the network you're able to locate it easily. So let's say that you're an admin and there is a new patch for a certain service. Usually what you'll do is scan your network to understand where that service is and which instances need to be patched first. When you have current, real-time information, all you need to do is search your inventory for the open port; you get the list instantly, so you save IT the hassle of going and looking for that information. You instantly have what you need to work with, and that saves you a considerable amount of time. You can take the information and feed it to other systems. For example, you can do target-based vulnerability assessment; one of the biggest problems of vulnerability assessment tools is that they first need to do active network discovery.
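That kind of search is just a query over an already-current inventory. A tiny sketch, with a hypothetical schema in the spirit of the sniffer example earlier:

```python
# The patching scenario above as a query over an already-current
# inventory instead of a fresh scan. The schema is hypothetical.
def hosts_to_patch(inventory, port, os_prefix):
    return [ip for ip, entry in inventory.items()
            if port in entry["ports"]
            and entry.get("os", "").startswith(os_prefix)]

# e.g. every Windows 2000 box with TCP/445 open, ready to hand to a
# vulnerability assessment tool as its target list:
# targets = hosts_to_patch(inventory, 445, "Windows 2000")
```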
So what you do is take a list of your Windows machines, or Windows 2000 machines, and instantly feed it to a vulnerability assessment tool, and that vulnerability assessment tool may need to run only the Windows 2000 based tests against those machines, and that's it. So you are actually able to make those tools work better and solve the 80-20 problem for them by feeding them with contextual information about the network. There's also the ability to baseline the knowledge about the elements that operate on the network. Because you see them all the time, you're able to say what's authorized and what's unauthorized, because you see everything. Up to this point, if you're using an active tool, as this gentleman said, if I do a scan once every 24 hours I can't do anything with the information, because I don't see the changes. There are users who might bring their laptops from home and take them away again between scans, and I'll never know they existed. I think the most important thing about the dynamic infrastructure discovery technology is that you finally have control. You can achieve control over what you have, because you understand what you have: you understand the layout, you have strong auditing capabilities, and for the first time you're not saying, okay, I have partial information. You have complete information, you have control, you have everything that you need. Questions? [Audience question.] Yes, this is working; if you want more details I will give them to you at the end of the presentation. Other questions? If you talk to me after the presentation I will answer the question. [Audience question about switches.] Yes, the only switches that you actually know exist are the core switches. I mean, in all the companies that I was talking to, or working at, or consulting for, what they know is: they have a firewall that connects them to the internet, and they have those nice core switches, because those cost them a lot of money. Usually, when you hook this up to the core switch, you get all the information that you need from the core switch and you go from there. [Audience question about deployment.] Sure, it's something that you need to install in X number of places, in a distributed manner, in order to get complete visibility, because it needs to see traffic at layers 2 and 3. But at the end of the day you still get complete visibility. Say you want to deploy an IDS or an IPS; they all have the same problem, but this system can actually take traffic from multiple networks into one place, not just see one network at a time. If you have a layer 3 barrier that you cannot cross, you need to install another system. When you have a management server, you're able to connect everything, all the dots, and you have complete information about everything. [Audience question about SNMP.] Yes, you must use SNMP to take information from the switches, because there is no other way to understand where the elements are if you're not using SNMP. The idea is that you don't need to know the connectivity in advance; you build the connectivity automatically using other algorithms. There are several research papers about building your topology by taking the information from the switches and trying to connect the switches without using proprietary protocols. Some of them have holes in them, some of them are incomplete, but at the end of the day you can learn a lot from that research.
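Asking the switch where things are usually means walking the BRIDGE-MIB forwarding table, where the last six OID components of each entry are a learned MAC address and the value is the bridge port it was learned on. A sketch that shells out to net-snmp's snmpwalk; the switch address and community string are placeholders, and the same table feeds the hub and VMware questions that come up below.

```python
# Asking the switch where things are: walk the BRIDGE-MIB forwarding
# table (dot1dTpFdbPort) to map learned MAC addresses to bridge ports.
# Uses net-snmp's snmpwalk via subprocess; address/community are
# placeholders.
import subprocess

DOT1D_TP_FDB_PORT = "1.3.6.1.2.1.17.4.3.1.2"

def mac_to_port(switch_ip, community):
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", community, "-On", switch_ip,
         DOT1D_TP_FDB_PORT],
        capture_output=True, text=True, check=True).stdout
    table = {}
    for line in out.splitlines():
        oid, _, value = line.partition(" = ")
        if not value:
            continue
        # e.g. ".1.3.6.1.2.1.17.4.3.1.2.0.12.41.1.2.3 = INTEGER: 7"
        # The last six OID components are the MAC address bytes.
        mac = ":".join("%02x" % int(x) for x in oid.split(".")[-6:])
        table[mac] = int(value.split()[-1])
    return table

# A port holding many MACs with no downstream switch is a hub candidate;
# a MAC starting with a VMware OUI such as 00:0c:29 suggests a VM guest.
```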
What I did with these technologies: you must understand where the elements are actually connected, and what you are able to do is say, okay, if this element is, for example, unknown, I am able to say which switch and which switch port it is actually connected to. You can use that information and say: okay, I'll now go to that switch and kill this port. The idea is to have the visibility. [Audience question about switch notifications.] There are some switches that can send you an SNMP trap, for example, when the source MAC registered to a port changes. Not all of them are able to do that; the new Cisco 2950s are. But at the end of the day, if you want to start doing log aggregation, be my guest. I mean, there are other products that do log aggregation, and other open source tools, but if you don't know what to do with the information, then it's just a big pile of logs. Yes? [Audience question: can it detect hubs?] Yes, it is possible in several cases to detect hubs. Basically, if you have a port on which you see multiple MAC addresses and it is not connected to any other switch, then guess what, you have a hub. Of course, it might be a four-port, or a six-port, or a sixteen-port hub, but no more than that. A hub is merely a guess, but if you don't have any other switch connected to that port, you basically have a hub there. Of course, you must be connected as close as possible to the access layer in order to do anything in networking and security these days. [Audience question about detection.] Yes, if you're using TTL it means that you're going to fail. That's something very easy to change, and you don't want to use that. Look at the example that I put up here. With VMware you have two modes, bridged mode and NAT mode, and you need different ways of understanding that you have VMware in each. I mean, it's very easy to understand that an element is a VMware machine just by looking at the MAC address, it's VMware's, right? But what you need to do is actually understand where it is connected, and those are other algorithms that I'm using. Okay, one other question. [Audience question about VLANs.] No, this works against whatever you put in, and if you have VLANs, that's not a problem. If this didn't work with VLANs, it would be a major problem. It works with VLANs, with everything you have on the network. So I would like to thank you for staying with me through all of the problems that we had today: starting late, replacing laptops, and staying until five minutes to 9 p.m. Thank you very much for having me here. I enjoyed it.