A warm welcome to all, and especially to Marius, who will now talk about the Avatar 2 reverse engineering framework for firmware. Thank you. All right, thanks for the introduction. As stated, I'm Marius, and I'm here today to talk about Avatar 2. I develop it as part of my PhD studies over at Eurecom, and when I say Avatar, I don't refer to the movie by James Cameron or the metal band; I refer to the actual Avatar 2 framework. Let's see how I'll try to communicate with you about the framework. First, I want to tell you a little bit about binary firmware analysis in general. Then I want to shortly discuss the tooling landscape, to see what other people have done and are doing. Then I will introduce the high-level concepts of the Avatar 2 framework itself, and in the end I'm going to give you a couple of examples to show how the tool can be used, and is in fact used by us. So let's start: binary firmware analysis. Why are we interested in analyzing the firmware of embedded devices? Well, as we know, the number of embedded devices is steadily increasing day by day; buzzwords like Internet of Things and so on are around, and in the end these are just interconnected embedded devices. Misconfigurations, bugs, and vulnerabilities are common on those devices, and I would say that the majority of the vulnerabilities reported so far on those devices are mainly misconfigurations or low-hanging fruit, like disclosed private SSH keys, misconfigurations in the web server, or just simple bugs in the web server itself. However, we hope that in the future something is going to change and vendors will secure their software better, and then we will need to actually hunt for more complex bugs, which are also still in there in firmware. And when we want to find more complex bugs, we need more sophisticated tooling to succeed. Of course, we can sit down and reverse engineer for a long time, but at some point tooling will greatly benefit us.
However, especially compared to desktop systems, there are a lot of challenges present for firmware analysis. Most of all, there is a variety of platforms, a variety of different boards and systems-on-chip, which all come with their own memory layout and their own hardware peripherals, which may be mapped at certain addresses and may behave completely differently on other devices. Furthermore, there is often no operating-system-level abstraction. Some firmware is based on Linux; however, there is also a lot of monolithic firmware around which uses no kernel at all, or only some small, tiny kernel for embedded systems. In both cases, the hardware interactions will be embedded in the firmware code itself and not be part of a kernel. This is a problem because when the firmware accesses hardware, it might do so via memory-mapped or port-mapped I/O, or it might receive interrupts from the hardware, for instance when new data is available on a bus, and we need to somehow handle this in our analyses and our tooling. On top of that, there is a variety of architectures: not only a lot of platforms, but also a lot of vendors and architectures are around. On desktop systems we have mainly x86 and x86-64. On embedded devices, we can have all the different architectures, from ARM and MIPS to PowerPC, even sometimes SPARC. And just to give you one example, please don't attempt to read the next slide. This is just the list of the microarchitectures defined by ARM, and this is just ARM itself, not the third-party vendors making ARM-based systems-on-chip. These are around 30 different microarchitectures, all with tiny differences in the architecture, which is quite challenging to grasp in a generic tool. And there are even more challenges we are facing in comparison to desktop systems. Binary analysis on desktop systems normally makes great use of instrumentation.
That is, it instruments the software under test or under analysis to add certain hooks, or address sanitizer checks, for instance, to verify during runtime that everything is going fine. On embedded devices, this is challenging for two reasons: once again, the missing abstraction of the operating system, and furthermore, quite often the code resides only inside the read-only memory of an embedded device. What does read-only memory mean here? We would need to reflash it to change its contents; however, the firmware might be encrypted or signed by the vendor. So instrumentation is harder than on a desktop system. Likewise, emulation is challenging. While on modern desktop systems all hardware interaction is handled by the kernel, which can be easily abstracted by an emulator, we don't have these comprehensive possibilities for embedded devices. The reason for that is that there are a lot of peripherals around which interact differently with the hardware, and as a result, being able to emulate all of the underlying hardware of an embedded device is a lot of implementation effort. Likewise, fault detection: when we fuzz test desktop systems, for instance, we most of the time rely on observable crashes, like segmentation faults, or on the error handling of glibc for heap corruptions and so on. So we get a noticeable output when we corrupt memory. On firmware, this is different. Firstly, even Linux-based embedded devices are most of the time not utilizing glibc, so heap protections are much weaker, if present at all, and some devices may not even contain a memory management unit, which is what enables the notion of segmentation faults or invalid memory accesses in the first place. In this case, the firmware might just continue to execute, although we corrupted the state of the program.
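To make the fault-detection problem concrete, here is a toy Python model (not real firmware, all names made up): device RAM is one flat bytearray with no MMU behind it, so an out-of-bounds write silently corrupts an adjacent variable and execution just continues, with no segfault to observe.

```python
# Toy model of an MMU-less device: all state lives in one flat RAM region.
RAM = bytearray(64)

BUF_ADDR = 0    # 16-byte input buffer starts here
FLAG_ADDR = 16  # adjacent state variable; 0 means "no input present"

def handle_input(data):
    """Copies attacker-controlled data with no bounds check (the bug)."""
    for i, b in enumerate(data):
        RAM[BUF_ADDR + i] = b  # any write past offset 15 hits FLAG_ADDR

handle_input(b"A" * 17)          # one byte too many
corrupted = RAM[FLAG_ADDR] != 0  # flag was silently overwritten with 0x41
```

There is no crash anywhere in this run; the corruption only becomes visible if some later code path happens to consume the flag, which is exactly why fuzzing firmware without extra fault detection misses bugs.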
Another big issue is interrupt handling, because a lot of firmware is basically designed in a way that it runs continuously inside a single main loop and just checks memory contents. Those memory contents are updated by interrupt handlers and, once triggered, will drive the execution path of the main loop. If we go with static analysis, we would need to define where those interrupts are triggered. Furthermore, as we saw before, there are a lot of different microarchitectures around, and microarchitectures have a lot of small, tiny changes and single instructions only present in that microarchitecture. For instance, coprocessor accesses on ARM cores vary from core to core, or from microarchitecture to microarchitecture. So this showed a little bit the challenges we have in the field of dynamic firmware analysis. Let's look at the tooling landscape. Compared to desktop systems, the tooling landscape is, due to these challenges, way smaller, and especially smaller when only considering open-source tools. Furthermore, while a lot of static analysis systems for desktop software exist, they may exceed their bounds when applied to embedded firmware, because they need to approximate the environment, which is not always possible in the embedded case, and it is also not always possible to infer the behavior of peripherals and interrupts. In the following, I will show you four open-source tools which aim to analyze firmware. Obviously, this is not a comprehensive list, but it gives a glimpse of what has been done and what kinds of different approaches are out there. So let's start with FIE. FIE is a symbolic execution engine for MSP430 firmware, which is based on KLEE. KLEE is the main symbolic execution framework here, which basically operates on the LLVM intermediate representation. In order to have FIE working, the analyst needs to specify explicit analysis, memory, and interrupt specifications.
The analysis specification hereby defines, among others, the memory layout of the firmware under analysis. The memory specification specifies how memory should react when it is read from and written to. This is basically a way to abstract memory-mapped I/O, so that when particular memory cells are accessed, symbolic values or specific concrete values can be injected into the analysis. The interrupt specification defines at which points interrupts can occur and which interrupt handler should be executed. While this is great work which doesn't need the presence of a physical device and could successfully analyze quite some MSP430 firmware, it requires the presence of the source code of the firmware, because that's basically the way KLEE works. Unfortunately, source code is not that often available when we are analyzing firmware. So let's have a look at binary analysis tools. First there is Firmadyne, which is a binary analysis framework based on QEMU, a popular full-system emulator which also enables user-space emulation of single processes. In this context, however, QEMU is used as a full-system emulator and brings a lot of architectures which can be emulated, and additionally a lot of hardware boards or hardware layouts. Firmadyne targets ARM and MIPS firmware specifically and uses an instrumented Linux kernel. So basically it takes the extracted Linux-based firmware, puts it inside the QEMU emulator, and runs it with their own instrumented kernel. This allows automated analyses, or plugins for automated analysis, of web pages, network monitoring, or protocol implementations.
Additionally, it is interesting that this framework has capabilities to automatically throw known exploits, mainly from Metasploit, against the emulated firmware, and quite interestingly, a lot of exploits found on one device can be propagated to other devices, which basically means that there is a huge code base shared among different kinds of embedded devices, at least in the Linux-based world. Unfortunately, the downside here is that it only works for Linux-based firmware, and only if no device-specific kernel modules are required: if the embedded device needs to do hardware interactions with specific hardware peripherals, this will most likely be done via specific kernel modules, and if those can't be emulated, Firmadyne fails to succeed. Another interesting project, which was released this year, is LuaQEMU, which is, as the name suggests, also based on QEMU. It is considered work in progress, and the example released together with the tool was targeting a BCM4358 chip firmware. These chips are Wi-Fi chips used, for instance, in a lot of smartphones. LuaQEMU enables the prototyping of custom hardware platforms or boards in QEMU using Lua, and also adds Lua-based instrumentation capabilities for different events inside QEMU. Unfortunately, as this emulates the firmware alone and there is a lot of hardware interaction going on, especially during initialization functions, this requires either a lot of modeling or trial and error to prune out execution paths which are not relevant for the analysis or for the analyst. The last tool I want to talk about is Avatar, the first one. As some of you may have guessed, if I'm talking about Avatar 2, there must have been a first Avatar. This tool was based on S2E, which basically is again a combination of QEMU and KLEE, and which allows symbolic execution of QEMU-emulated firmware.
Additionally, Avatar utilizes OpenOCD and GDB and allows partial emulation of ARM firmware. Partial emulation basically means that the firmware itself, or parts of it, run inside the emulator, while specific hardware requests like memory-mapped I/O are forwarded to the actual physical device via the connection to OpenOCD and GDB. Additionally, Avatar provided ways for orchestration, so that you can, for instance, start executing on the device, then transfer the important state (the important memory regions and the registers) into the emulator, and continue execution inside the emulator, inside S2E. This basically allows skipping all the initialization functions of a board which are not interesting for the analysis. Additionally, and quite obviously, as S2E is using KLEE, it also brings symbolic execution, and selective symbolic execution, for firmware. Unfortunately, Avatar 1 was heavily tied to the S2E infrastructure, and it required in every setup the presence of the physical device to succeed with the partial emulation. So what did we learn from looking at those four tools? First of all, there is a lot of focus on the ARM architecture. Then, the majority of tools are utilizing QEMU's emulation capabilities as a basic building block for their framework. Unfortunately, the resulting frameworks are then heavily bound to QEMU, and they define no way to get the analysis state out of the emulator into another tool. This missing way of transferring analysis states is at the same time part of the motivation for the Avatar 2 framework. In a very big picture, it is a framework for dynamic multi-target orchestration and instrumentation. We will see what this means with live demos later on. The focus of Avatar is on firmware analysis, and the whole thing is an open-source, Python-based framework which we released in June this year, so it's quite new, and it's a research project.
So we try to have a clean and usable code base, but sometimes some things may be a bit fragile. In comparison to Avatar 1, Avatar 2 was redesigned and re-implemented from scratch, with a special focus on better usability and a better abstraction of targets. It is developed by the software and system security group at Eurecom; specifically, next to me, the main developer, by Dario Nisi, Aurélien Francillon, and Davide Balzarotti. The main goals when we designed and started to write Avatar were to have the possibility of target orchestration, separation of execution and memory, and state transfer and synchronization capabilities. Target orchestration means that we orchestrate different kinds of frameworks with abstractions inside Python. Those targets could be anything: debuggers, emulators, other frameworks, and we want to be able to easily add new targets to the Avatar 2 ecosystem. Furthermore, we want a clean separation between execution and memory, because this is basically the core concept, or the main requirement, to allow I/O forwarding and remote memory, so that the analysis runs inside one target and operates on the memory of another target. Furthermore, state transfer and synchronization is important to us because once we start the analysis in one specific target, we don't want to keep the analysis local to that target. We may want, at a later point in time, to switch the execution, for instance, from an embedded device to an emulator, and for doing so we need easy ways to transfer the state. In the end, we came up with a framework which basically consists of four components. There is the Avatar 2 core, which is a Python library and the main interface from the analyst to the analysis. Inside the framework, there are the so-called targets, which are the Python abstractions of so-called endpoints. Endpoints hereby are all the things you want to interact with: emulators, frameworks, debuggers.
However, targets and endpoints do not talk directly to each other. They are decoupled by an additional layer of so-called protocols, which we can also see in this picture, where we have the Avatar 2 core at the top, which defines and orchestrates a set of targets, which all talk via an execution protocol, a memory protocol, and a register protocol to the distinct endpoints. So why did we add the abstraction of protocols? The idea is quite simple: a lot of tools actually have similar ways to communicate. For instance, both QEMU and OpenOCD offer a GDB server to talk to the software under analysis. And by separating the protocols by purpose, like execution or memory, we allow a clean separation of those different concepts during the execution. Oops. So let's move on to the implemented targets, which could also be a small quiz on open-source mascots. On the top left, we have the archer fish, which is actually the mascot of GDB. That's quite fitting, because this fish spits water from under the water surface to shoot down single bugs. On the bottom left, we have QEMU, the full-system emulator that we just talked about. On the top right, there is the PANDA framework, which is a reverse engineering framework based on QEMU and aims to allow repeatable reverse engineering. It does so by basically recording all the non-deterministic I/O occurring to the software under emulation; later on, this non-deterministic I/O can be replayed against the very same software from the same initial state, which will result in the same execution. The advantage of doing so is that the resulting memory footprint of a recording is way smaller than an instruction or memory trace.
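The target/protocol split described above can be sketched in a few lines of plain Python. This is a simplified, hypothetical model, not the real avatar2 class hierarchy: the point is only that a target never talks to its endpoint directly, but delegates each concern (memory, execution) to a separate protocol object, so protocols can be mixed and matched per endpoint.

```python
class DictMemoryProtocol:
    """Stand-in memory endpoint: backs memory with a plain dict."""
    def __init__(self):
        self.mem = {}
    def read_memory(self, addr):
        return self.mem.get(addr, 0)
    def write_memory(self, addr, value):
        self.mem[addr] = value

class LogExecutionProtocol:
    """Stand-in execution endpoint: records commands instead of running them."""
    def __init__(self):
        self.log = []
    def cont(self):
        self.log.append("cont")

class Target:
    """A target is just a facade over independent protocol objects."""
    def __init__(self, memory_protocol, execution_protocol):
        self.mp = memory_protocol
        self.ep = execution_protocol
    def read_memory(self, addr):
        return self.mp.read_memory(addr)
    def write_memory(self, addr, value):
        self.mp.write_memory(addr, value)
    def cont(self):
        self.ep.cont()

t = Target(DictMemoryProtocol(), LogExecutionProtocol())
t.write_memory(0x20000000, 0x42)  # goes through the memory protocol only
t.cont()                          # goes through the execution protocol only
```

Because the facade only depends on the protocol interfaces, a GDB-server-based memory protocol could be swapped in for `DictMemoryProtocol` without touching the execution side, which is exactly the flexibility the separation buys.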
Additionally, PANDA has a plugin system which allows hooking different functions or different events inside QEMU to add further analyses. The last tool on the slide is the angr framework, which as of now is still under development as a target and will be merged into the public branch soonish. angr is basically a binary analysis framework which provides quite powerful symbolic execution capabilities. Oh, one thing I forgot: we also support a fifth target, which is not represented on this slide, which is OpenOCD, a tool to talk to JTAG interfaces, which in turn can talk via the JTAG protocol to embedded devices. Just a little bit of background knowledge: JTAG is a debugging port present on some embedded devices, and if it is available, we can use OpenOCD to dynamically debug firmware on the target device. As we've seen before, a lot of tools are based on QEMU. So if we want to have them easily integrated into the avatar ecosystem, we need to keep our changes more or less local. That's what we did: we changed QEMU a little bit to work with avatar and to forward state and memory and so on, and all of the changes are located in one single subfolder, which will make it straightforward to implement new QEMU-based targets for Avatar 2. More specifically, the most notable change is the addition of a configurable machine, which is similar to the Lua-based board description present in LuaQEMU. Hereby, however, the configuration of the hardware we want to emulate is defined in a JSON file, which is automatically generated by Avatar 2 based on the specifications the analyst wrote in Python, and it allows, in general, a flexible configuration of the different hardware you may want to emulate. Additionally, we added a new peripheral, the avatar peripheral, which communicates with Avatar 2 via POSIX message queues and basically enables remote memory from inside QEMU.
So the idea is that if there is a peripheral with memory-mapped I/O which you cannot emulate, you use the avatar peripheral, which will forward all memory reads and writes to the Avatar 2 framework, which will then dispatch them, for instance, to the physical device. A couple of other features I want to highlight about the framework: we aim to design it architecture-independent. This basically means that we have a subfolder inside the framework which just deals with architecture abstractions, so that the framework can work with those abstractions independently of the concrete analysis. As of now, we have abstractions for ARM, x86, and x86-64, and we are currently developing another one for MIPS. Avatar 2 uses an internal memory layout representation, so just the layout, not the memory contents itself, in order to be able to push it to different targets or to generate the JSON file needed for the QEMU configurable machine. Furthermore, avatar allows modeling of peripheral reads directly in Python, so you can go ahead and script your peripheral directly in Python if you know how it has to behave, or if you just want something which statically returns the same values because you don't care about the specific peripheral. Additionally, we want to keep the Avatar 2 core as small and maintainable as possible, but on the other side, there are a lot of tasks which have to be repeated during an analysis. For instance, we frequently want to assemble or disassemble instructions. To enable this, we added a flexible plugin system, which already has a couple of example plugins, for instance an orchestration plugin which automatically orchestrates the execution of targets. Normally, you would write an avatar script where you explicitly define when you do what, while in the orchestration setting, you just define a set of transitions and Avatar 2 will automatically move execution between targets according to your defined transitions.
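The "script your peripheral directly in Python" idea above boils down to something like the following sketch. The class and method names here are illustrative, not the real avatar2 API: a peripheral model is registered for an MMIO range, and a dispatcher invokes its read/write handlers on access. The model here is the trivial "status register that always says data ready" case mentioned in the talk, and the UART base address is made up.

```python
class ConstantStatusPeripheral:
    """Peripheral model that statically returns the same status value."""
    def __init__(self, base, size, status=0x01):
        self.base, self.size, self.status = base, size, status
    def read(self, offset):
        return self.status  # always report "data ready"
    def write(self, offset, value):
        pass                # ignore writes; we don't care about this device

class MMIODispatcher:
    """Routes memory accesses in registered ranges to peripheral models."""
    def __init__(self):
        self.peripherals = []
    def register(self, peripheral):
        self.peripherals.append(peripheral)
    def read(self, addr):
        for p in self.peripherals:
            if p.base <= addr < p.base + p.size:
                return p.read(addr - p.base)
        raise ValueError("unmapped address: %#x" % addr)

mmio = MMIODispatcher()
# Hypothetical UART status register range (address chosen for illustration).
mmio.register(ConstantStatusPeripheral(0x40004400, 0x100))
```

With such a model in place, firmware polling the status register inside the emulator always proceeds past its "wait for data" loop, which is often all you need when the specific peripheral is irrelevant to the analysis.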
Likewise, there is an instruction forwarder, which basically aims to deal with those unemulated, small, microarchitecture-dependent instructions. Once avatar encounters one of those instructions, it will not execute it inside the emulator, but on the embedded device, so that the state at least changes there accordingly. So after this high-level talk about the framework, let's go directly to the examples. In the following, I will show two use cases of how to use avatar as a dynamic instrumentation framework, and three of how to use it as a dynamic orchestration framework. If you want to write an avatar script, you normally need to do three or four things. First, you need to create the main avatar object; then you need to define the set of targets you want to deal with in your analysis. Optionally, if required, if you have more than one target or a QEMU-based target, you need to define a memory layout. And last but not least, you need to specify an execution plan. So let's start with a simple demo, which is basically a Hello World demonstration. We have here on the left, I hope the font is big enough for everyone to read, an executable file, a.out, and a Python script, HelloWorld.py, which we also see here on the right. If we execute a.out, basically nothing happens; it just exits with the error code 42. On the right side, we have our full analysis, well, analysis and instrumentation, in one script. At step zero, we create the avatar object and define the architecture for this analysis. We add the concrete target, which is in this case a GDB target. Then we not only add the target, but also spawn as a subprocess the endpoint, the GDB server our avatar target connects to in the end, which is basically just executing this a.out file. We initialize the GDB target, which will connect to the GDB server and do all the initialization. Down here, we have some shellcode, which we want to inject into the target.
This shellcode is basically just shellcode for a simple Hello World output on stdout; it just does a write syscall with "Hello World". Here is the interesting part of the framework: we instrument GDB from the outside. We tell it to write memory at the current location of the instruction pointer; the memory we write has the length of our shellcode, is our shellcode, and is written as raw memory. After we wrote this, we want to continue our execution. So let's see if the demo gods are with us. And here we go: we got Hello World as output. While this is just a very simple demo, it directly demonstrates the instrumentation capabilities of Avatar 2, and what I especially like is the possibility to script GDB from the outside, without having to execute your Python script from inside GDB. So you can see here on the right side that the full analysis you're doing is centralized in one place. Let's continue with binary instrumentation on a real target. As a real target, we chose HARVEY, which is a PLC rootkit presented last year at NDSS, and it basically works based on code injection on a normal commercial off-the-shelf PLC. The PLC itself has multiple boards. We can have a look if everything works and we can see it. So, here we go. This down here is the opened PLC, which we can power-cycle very shortly. Well, which is the right one? Okay, sorry for that, I forgot to do it beforehand. Okay, so this demo is very fragile, which is going to show. But here we basically have our PLC starting to boot. We see several boards here. Here on the side, we have a human-machine interface board, which basically deals with all interactions with the exterior world, like the SD card, the physical switch, the network interface, the USB interface. On the top here, we have the I/Os for this programmable logic controller; here you can connect the different I/Os. Right now the PLC has booted, everything is fine, and it has no I/Os detected.
All the status LEDs here for the different I/Os are disabled. What's special here is that the lower board, which we can see here, has a Cortex-M3 MCU, which is mainly responsible for dealing with updates of the exterior world, so updates of the GPIO state. And this Cortex-M3 MCU, interestingly, also has an enabled JTAG debug port. So we can easily solder some things and have here on the side our JTAG interface connected to this PLC, which leads us to the demo. This device is particularly interesting because parts of the firmware reside inside SRAM. So the board initializes, and there we go, perfect, and the firmware is loaded into the SRAM. This basically means we can instrument those parts of the firmware, which we did by re-implementing the proof-of-concept implementation of HARVEY. So here we basically do the same: create the avatar object, load the assembler plugin, add an OpenOCD target, set a breakpoint at the main loop because we want to skip all the initialization functions, and continue our execution until we eventually hit this breakpoint. This is done, by the way. And once we are here, we are going to inject some assembly code. This assembly code is rather simple. As stated, it's just the proof-of-concept implementation of the HARVEY malware, not the full implementation, but it already shows, or will show, that we can divert the execution so that the PLC's human-machine interface thinks certain inputs are enabled. We do so by hooking an interrupt handler, which is executed frequently to check the state of the IO board, and modifying the state manually. So let's see if this works. I forgot to mention that we tried to have code compatible with both Python 2 and Python 3, so let's use legacy Python in this example. And here we go.
So the camera is now off, so I cannot show you, but an LED here started to blink, which basically symbolizes that input is present, although clearly no input is connected to this PLC. I added a picture of it, just to be safe, so that in case this demo doesn't work, we can see it. Let's move on to the next example, which aims to improve fault detection on embedded devices. This work is part of the "What You Corrupt Is Not What You Crash" paper by our research group, which will be presented at NDSS next year and is joint work with Siemens. In parallel to this talk, we uploaded the slides, so you can go and check out the paper if you're interested in more details about what I'm going to say here. In a nutshell, this paper investigates the challenges specific to fuzz testing embedded devices, which are on the one hand fault detection and instrumentation, which we already talked about; an additional problem is scalability. Fuzz testing greatly benefits from the possibility of running multiple instances of the same software under test in parallel. In the embedded case, this would traditionally mean that you need a lot of identical embedded devices. Furthermore, in the paper, we evaluate different strategies for fuzz testing embedded devices, for instance physical rehosting, static instrumentation, or binary rewriting. In the end, we try to give some direction by utilizing partial and full emulation of firmware using the Avatar 2 framework. For this paper, the setup has two targets. On the one hand, an STM32 Nucleo-L152RE development board, which we have here. It's a nice board which has nice features, like directly having a JTAG interface embedded and even providing serial access to it over USB. The other target we are using is PANDA, the reverse engineering framework. As target software for our tests, we used Expat, or rather an instrumented version of Expat with artificial vulnerabilities.
The analysis itself is orchestrated in the sense that the initialization of the embedded device is run on the physical board, and the emulation of the main loop, the main part of the firmware, is done inside PANDA. For the analysis, we wrote five PANDA plugins which check or verify the state of the firmware during emulation by mimicking already existing techniques used for analyzing desktop software. For instance, we have something similar to a shadow stack implementation, and one plugin which tries to check the consistency of the heap by tracking malloc'd, freed, and realloc'd objects. The big advantage of these approaches is that there is no need to modify the firmware. For evaluating this, we did 100 fuzzing sessions of one hour each, in quite different setups. We fuzzed the baseline against the native board, then we used partial emulation with forwarding of I/O to the board, partial emulation without the board, and full emulation. With the plugins, we could detect previously undetected faults, and quite interestingly, the full emulation provided better performance than the native fuzzing, due to the fact that inside the emulator the effective clock speed of the emulated firmware is higher than on the actual device. Now, the next demo is actually a subset of this work. It shows the record-and-replay features which we get when using PANDA. This is especially cool because normally, if you dynamically analyze an embedded device, you need the device physically present. However, by utilizing PANDA, we can record one execution and replay it later inside the emulator without the need of having the device present. So let's look at this in a demo. First of all, so that you believe me: the software running on this board is actually the XML parser, as I stated.
So we are just looking at the serial output and writing an XML file to the serial input, and here we have echoed the XML file itself; the main loop of the firmware just sends us back the document. Fine, this works so far. So let's look at the avatar2 script for recording this. This is a little bit bigger than the avatar scripts we saw before. We define two targets here, the PANDA target and the OpenOCD target. We add different memory ranges: one for the read-only memory with the firmware sample we extracted, one for the RAM, 14K in size, and several ones for memory-mapped I/O, whereby we want to emulate the serial interface with an avatar peripheral. Down here is an example of how to use the orchestration plugin: we basically define a starting target, add a transition, and start the orchestration. This orchestration will automatically transfer the state from the board to the PANDA target once this specific address is hit during execution, and will synchronize the RAM range. Once we are there, we begin the recording and go into an IPython shell, or continue the execution inside the emulator and go into an IPython shell for dynamic or further analysis. So, we need to specify a trace name. Okay. And the demo gods are not with us this time; the GDB protocol was unable to connect. Too bad, I don't have time to debug it right now. However, trust me, this works, and on top of that, I prepared some recordings beforehand, which are impressively unspectacular. So, we have a runreplay.sh script, which basically just executes PANDA with the configurable machine, with the configuration automatically generated by avatar. Let's see if at least the replay of a previously recorded execution works, and here we go: the replay completed successfully, with a lot of debug output about the configurable machine, the number of executed instructions, and the number of replayed non-deterministic I/Os. Okay, let's move on.
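Conceptually, the transition-based orchestration used in this demo reduces to a small state machine. The following is a toy model, not the real avatar2 orchestration plugin API: execution starts on one fake target, and when a configured address is hit, the registers and the synchronized RAM range are copied over to the next target, which becomes the current one.

```python
class FakeTarget:
    """Stand-in for a board or emulator target: just registers plus RAM."""
    def __init__(self, name):
        self.name = name
        self.regs = {}
        self.ram = bytearray(16)

class Orchestrator:
    def __init__(self, start, transitions):
        self.current = start
        self.transitions = transitions  # {address: next_target}
    def hit(self, address):
        """Called when execution reaches `address` on the current target."""
        nxt = self.transitions.get(address)
        if nxt is not None:
            # State transfer: copy registers and the synced RAM range,
            # then continue on the next target.
            nxt.regs = dict(self.current.regs)
            nxt.ram[:] = self.current.ram
            self.current = nxt

board = FakeTarget("board")
panda = FakeTarget("panda")
board.regs = {"pc": 0x08000100, "sp": 0x20003FF0}
board.ram[0] = 0xAA  # some state produced by the board-side initialization

# Transition: once the board reaches the end of initialization, move to PANDA.
orch = Orchestrator(board, {0x08000100: panda})
orch.hit(0x08000100)
```

After the transition, the PANDA-side target holds the board's registers and RAM contents, which is what lets the recorded emulation continue from exactly the state the physical device produced.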
The last example I want to show you is some work in progress where we basically want to leverage symbolic execution on complex software using avatar2. For this, we inserted an artificial bug, because we are still in a testing phase, inside Firefox, and executed Firefox concretely inside GDB until the function of interest. This is particularly interesting because angr itself won't be able to run software as complex as Firefox, or would need to be provided with a state. We analyze only one thread, and once we hit the interesting function, we automatically extract the memory layout from GDB, sorry, not the memory, and copy just the layout into angr, while the memory contents themselves are copied on read. So if angr accesses memory, it actually copies it from GDB to angr. The reason for that is that angr itself associates a lot of meta information with the data, and so if we would dump the full memory contents into angr, this would exceed the amount of RAM we have present, at least on this machine. Furthermore, we symbolize the arguments and start our symbolic exploration. Our preliminary results here are that we had approximately 10 minutes of runtime in the script for just executing 36 basic blocks, accessing 21 pages uniquely, and we found the bug. Let's recap the examples we saw. We saw five examples: dynamic instrumentation with GDB, dynamic instrumentation of a PLC, fault detection on a development board together with Panda, then, not the recording, but the replay of the development board in the Panda setting, and, very briefly, symbolic execution with Firefox and GDB. Note that some of those examples are already available open source, and the ones which are not will most likely be made available within the next months. So let's wrap it up. Dynamic firmware analysis is still a very challenging topic, and I don't claim that we have solved it completely. However, Avatar 2 tackles some of the challenges and tries to improve the state of the art.
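The copy-on-read scheme described above can be sketched as a lazy page cache: a page is pulled from the concrete debugger only the first time the symbolic engine touches it. A self-contained model in plain Python, with hypothetical names standing in for the real avatar2/angr integration:

```python
# Lazy copy-on-read memory: pages are fetched from a concrete backend
# (standing in for GDB) only on first access, so the whole address space
# is never dumped into the symbolic engine.

PAGE_SIZE = 0x1000

class CopyOnReadMemory:
    def __init__(self, backend_read_page):
        self._backend = backend_read_page  # e.g. would read a page via GDB
        self._pages = {}                   # page base address -> bytes
        self.fetched = 0                   # number of pages copied so far

    def read_byte(self, addr):
        base = addr & ~(PAGE_SIZE - 1)
        if base not in self._pages:
            # First touch of this page: copy it from the concrete target.
            self._pages[base] = self._backend(base)
            self.fetched += 1
        return self._pages[base][addr - base]

# Fake concrete backend: every byte of a page equals its page number.
def fake_gdb_read_page(base):
    return bytes([base // PAGE_SIZE % 256]) * PAGE_SIZE

mem = CopyOnReadMemory(fake_gdb_read_page)
assert mem.read_byte(0x3000) == 3
assert mem.read_byte(0x3FFF) == 3   # same page, no extra fetch
assert mem.read_byte(0x5000) == 5
assert mem.fetched == 2             # only two pages were copied
```

This matches the numbers from the experiment: exploring 36 basic blocks only required copying 21 unique pages rather than Firefox's entire address space.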
Additionally, one interesting thing which we recognized is multi-target orchestration. So, the concept of having different emulators and frameworks interacting with each other during the same analysis is a concept which is not limited to firmware only; desktop software analysis can benefit from it as well. As it is almost the end of the year, we also made some plans for the new year, for the next year. We basically want to move our main development to GitHub. Currently we develop in a private repo, which is a little bit sad because keeping it in sync is a little bit hard. Then we want to introduce proper versioning of the Avatar tool and, of course, add more and exciting targets to enable more and exciting analyses. So if you're interested in helping us or just want to ask some questions, feel free to contact us: either on IRC, by mail, or just talk directly to me. And one small disclaimer, we may be looking for people to join our group in the near future. As I'm running out of time, I'll just briefly show the acknowledgements, and I guess we can move over to the Q&A. Thank you. And the first question goes to the internet. So, does the framework support complex x86 systems like the Intel ME? So, the framework itself is not executing any software itself. Instead, it uses underlying tools, or other tools and targets, to execute the software. So if you can execute it concretely with GDB on your machine, the tool is just fine. What you probably need to do is to augment a little bit the register definitions inside the architecture abstractions. Next question goes to microphone 4. I haven't heard about Panda before, but as I understand, you can record and replay executions, and that includes executions partly on the real hardware and partly in QEMU. Could you use it for debugging as well, like reversible debugging, for example you step in the code and then you jump back to a certain point of the recording? Actually, yes.
The original purpose of Panda is just reverse engineering of software executed inside the emulator, and it also has, or, I don't know how far the development state is for stepping back, but I think they were working on it. And in general, while replaying, you can always attach to the replayed execution with GDB or another tool and start working on it. So, that should work for your reversing purposes. Thank you. Does the internet have additional questions? Then we go back to microphone 4 again. Thanks. First of all, I have two questions. Is Panda released now, because I guess it's part of the source? We instrumented and have a modified version of Panda inside the Avatar 2 framework, but you can also get it under github.com/panda. So, second, I remember the problem with Avatar 1, which was that it was slow, and I want to know what the improvements in Avatar 2 are regarding the speed. Yeah, those are excellent questions, and we improved the speed quite a lot. Unfortunately, we couldn't see it in the demo of recording the execution here, but let's say in Avatar 1 the main bottleneck was memory interactions with the physical device, and we did some benchmarks a while ago: transferring the 14K pages which we needed for this example took in Avatar 1 something around 2 to 5 minutes, while here we are done in 1 to 5 seconds. So, it's a significant speed-up, but still not fast enough to cope with real-time requirements. Thanks. Okay, next question goes to microphone 2. Hi. Hi. Embedded systems rather often have real-time components. Could you then, for example, just hook into a FreeRTOS thread to analyze just the non-time-critical parts? Yes, this should be possible. I mean, in general we have to investigate real-time dependent embedded systems a little bit more. We currently put it a little bit out of scope, but it's for sure one area we will look into in the future, and I think hooking into non-time-critical parts may just work. Thanks. Okay, yes, Internet.
Hedions would like to know if there is a roadmap for MIPS support in Avatar 2? No, there's no roadmap currently available. We started implementing it and it gets improved a little bit on the side. If someone wants to step in and help with it, we are happy. I can tell what has to be done to enable MIPS support, and, well, in general we are working on it, but sorry, I can't tell a specific time when it's going to be ready. Okay, any further questions? Nope. Thank you all. When you leave, please take your trash and other belongings with you, and wash your hands.