So hello, everyone. My name is Anna Dushistova, I've been working on tools since 2004, and today I'd like to talk about the Target Communication Framework.

So the problem these days is that almost every device software development tool has its own method of communication with the target system. And when you try to integrate them into a single solution, it usually becomes an enormous problem to set up the target right, and you end up with a bunch of separate tools that do not work together well, that impose unnecessary restrictions on the target system, and that are quite difficult to configure. That was a problem that a lot of tool vendors had, and probably almost every vendor had its own solution for it, but at some point there had to be an open source one, and this is how the Target Communication Framework appeared.

So the Target Communication Framework is a universal, extensible, simple, lightweight, vendor-agnostic framework for tools and targets to communicate, for the needs of all kinds of embedded software development tooling. It allows a single configuration per target, or sometimes even no configuration at all. It has a small overhead and footprint on the target side; the agent can actually be made very small. It has a transport-agnostic channel abstraction, and it allows auto-discovery of targets and services.

Here's some history of the project. The framework was developed by Wind River, and it was donated to the Eclipse Foundation in 2007. In 2008 it had its initial release, which included the core protocol specification, the initial Java framework, a C agent that worked on VxWorks, Linux and Windows, examples, the Eclipse Remote System Explorer integration, and debugger integration for CDT. In 2011 there was a major release in terms of functionality: we got a terminal service (that one was donated by the Yocto people), disassembly, watchpoints, initial support for the Python binding, and Target Explorer was added. This year TCF reached its 1.0 release, and it now includes the Python binding and Target Explorer, and we also added a Lua shell to the agent.

Unfortunately, the community didn't quite grow. Right now the main development is still done by Wind River. There's also Xilinx, which employs two of the initial authors of the Target Communication Framework, former Wind River employees, and MontaVista, that's me. TCF code is licensed under the Eclipse Public License, and the C agent is additionally licensed under the Eclipse Distribution License. That's a BSD license used by some Eclipse projects which require dual licensing along with the EPL; for more details on that license you can check the legal resources page on eclipse.org.

So, the overall architecture of the framework. All communication links can share the same protocol, which simplifies connection setup and allows transparent tunneling without unnecessary protocol conversions. The protocol has a transport-agnostic channel abstraction, so it doesn't depend on any specific transport such as TCP/IP, serial, or SSH. In fact, any third-party vendor can contribute a value-add server to do a transport conversion from standard TCP/IP channels to custom channels such as JTAG or even proprietary hardware connections. All services can immediately route through the new transport and take immediate advantage of the value-add. Currently the reference implementation supports only TCP/IP, but other communication and addressing schemes can be added easily. All high-level services operate on the channel abstraction.
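To make that channel abstraction concrete, here is a minimal sketch in Python of what a transport-agnostic channel might look like. The names are hypothetical, not the actual TCF API; the point is that services only ever see the abstract interface, so a new transport (serial, JTAG, a vendor value-add) can be plugged in without touching the services.

    from abc import ABC, abstractmethod
    import socket

    class Channel(ABC):
        """Hypothetical transport-agnostic channel; services depend only on this."""

        @abstractmethod
        def send(self, packet: bytes) -> None: ...

        @abstractmethod
        def receive(self, max_size: int = 4096) -> bytes: ...

        @abstractmethod
        def close(self) -> None: ...

    class TCPChannel(Channel):
        """One possible transport binding; a serial or JTAG channel would
        implement the same three methods."""

        def __init__(self, host: str, port: int) -> None:
            self.sock = socket.create_connection((host, port))

        def send(self, packet: bytes) -> None:
            self.sock.sendall(packet)

        def receive(self, max_size: int = 4096) -> bytes:
            return self.sock.recv(max_size)

        def close(self) -> None:
            self.sock.close()

A value-add server in this picture is just another Channel implementation that forwards packets onto some custom medium.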
So, some definitions. A peer is a communication endpoint; both hosts and targets are called peers. A peer can act as a client or a server depending on the services it implements. A group of related commands and events together with their semantics defines a service; a service can be discovered, added, or removed as a group at a communication endpoint. And a channel represents a communication link connecting two endpoints, the peers. A single channel may be used to communicate with multiple services. Multiple channels can be used to connect the same pair of peers, but no command or event ordering is guaranteed across channels.

So the TCF communication protocol defines data packet properties and rules common to all services. It also defines the contents of one part of a packet; the rest of the packet is treated as an array of bytes at that level. It provides multiplexing, so you can open multiple channels per peer, and also proxying, that is, packet forwarding on behalf of other hosts. The protocol defines three packet types: commands (requests), results (responses), and events. Each packet consists of several protocol-defined control fields followed by a byte array of data. The binary representation of the control fields is a sequence of zero-terminated strings. The format of the data depends on the service, but the framework's preferred marshalling for the data is JSON.

So, commands. In a command, the token is a unique string generated by the framework for each command; it is used to match results to commands. The service name identifies the service that handles the command. A command should always be answered with a result. The result doesn't have to be positive: it can include an error code, or it can be a special result indicating that the command was not recognized, but there always must be one. Since the client cannot detect that a response is missing, if for some reason a peer is not able to answer a command, it should consider that situation a fatal communication error and it must shut down the communication channel. It's not necessary to wait for a result before sending the next command. In fact, sending multiple commands in a burst can greatly improve performance, especially when the connection has high latency. But at the same time, clients should be carefully designed to avoid flooding the communication channel with an unlimited number of requests, since that consumes resources, in the form of memory to store the requests and time to process them.

The next packet type is the event. Well, this whole framework is event-driven, so that's probably the most important packet type here. In an event, the service name identifies the service that fired the event. Events are used to notify clients about changes in peer state. Services should provide a sufficient variety of events for clients to track the remote peer's state without too much polling. Clients interested in a particular aspect of the target's state should keep a model of that state and update it by listening for the relevant events. If a service implements a command that changes a particular aspect of peer state, then normally it should also generate notification events when that same part of the state changes, and it should provide a command to retrieve the current value of the state, to be used by clients to initialize the model. Service events are defined statically, together with commands. The framework does not do any event processing besides delivering events to clients; however, a service can define additional event-related functionality if necessary, for example commands for event filtering, enabling, disabling, registration, et cetera.
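As a rough illustration of the packet layout just described, here is a sketch that encodes the three packet types as sequences of zero-terminated strings with JSON-marshalled data. The exact end-of-message marker (0x03 0x01, taken from the description of the TCP/IP transport in the TCF spec) and the example service and command names are assumptions; consult the protocol specification for the authoritative format.

    import json

    ZERO = b"\x00"
    # In the TCP/IP transport, 0x03 is an escape byte and ESC 1 marks the
    # end of a message (an assumption taken from the TCF spec; verify there).
    EOM = b"\x03\x01"

    def _fields(*parts: str) -> bytes:
        """Join protocol control fields as zero-terminated strings."""
        return b"".join(p.encode() + ZERO for p in parts)

    def _data(*values) -> bytes:
        """Marshal the service-specific data as zero-terminated JSON values."""
        return b"".join(json.dumps(v).encode() + ZERO for v in values)

    def command(token: str, service: str, name: str, *args) -> bytes:
        """C <token> <service> <command> <arguments> -- a request packet."""
        return _fields("C", token, service, name) + _data(*args) + EOM

    def result(token: str, *values) -> bytes:
        """R <token> <result data> -- the mandatory answer to a command."""
        return _fields("R", token) + _data(*values) + EOM

    def event(service: str, name: str, *args) -> bytes:
        """E <service> <event> <arguments> -- an unsolicited notification."""
        return _fields("E", service, name) + _data(*args) + EOM

    # For example, a hypothetical suspend request for a run-control context:
    print(command("1", "RunControl", "suspend", "ctx0"))

Note how the token generated for the command is echoed in the matching result; that is what lets a client fire off a burst of commands and pair up the responses later.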
If events are sent too frequently, they will flood the communication channel and degrade performance, so some care should be taken when designing the events for a service. However, too few events will force clients to poll for changes, which can also degrade performance. There is also a special packet type called flow control. It can happen that one side of a communication channel produces messages faster than they can be transmitted, which causes traffic congestion. A flow control event lets that situation be reported, so clients can react to it.

The next feature, which is personally my favorite, is auto-discovery. Auto-discovery is done by the Locator service, which uses the transport layer to search for peers and to collect data about peer attributes and capabilities, that is, services. The discovery mechanism of course depends on the transport protocol and is part of that protocol's handler. Peers known by other hosts are added to the local list of peers. Automatically discovered targets require no further configuration; additional targets can be configured manually. All TCF peers must implement the Locator service. That's the only required service; all other services are optional, and formally they are not part of the framework. The current implementation is based on UDP broadcasts, and that implies some limitations, for example targets in different networks cannot be discovered.
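As a small experiment with the UDP-based discovery, here's a sketch that just listens for discovery datagrams and prints where they come from. The port number 1534 is the usual TCF default, but treat both it and the packet contents as assumptions to check against your agent; the sketch makes no attempt to parse the announcement format.

    import socket

    TCF_DISCOVERY_PORT = 1534  # assumed default TCF port; check your agent's config

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", TCF_DISCOVERY_PORT))  # receive broadcasts on all interfaces

    print("listening for TCF discovery packets...")
    while True:
        data, (host, port) = sock.recvfrom(4096)
        # Discovery datagrams carry peer attributes; here we just show raw bytes.
        print("peer announcement from %s:%d: %r" % (host, port, data[:80]))

Running this on the same subnet as an agent should show the periodic peer announcements that make the "no configuration" case possible.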
So what's already there? Today one can download a plain C implementation of a lightweight, extensible target agent. There is a Java client API, usable standalone or on top of Eclipse. There are Python and Lua client APIs. There is a complete debugger UI implementation in Eclipse, with CDT integration for debugger launching. There is a Target Management / Remote System Explorer integration with file system and process browsing. And there is Target Explorer, which is a lightweight UI for remote file system and process browsing, terminal access, and debugger launch. We also provide documentation and usage examples.

So the current TCF agent has the following services available.

The Locator service. The Memory service, which provides basic operations to read and write memory on a target. Then the Processes service: it provides access to the target OS's process information, allows you to start and terminate processes, and allows you to attach and detach a process for debugging. Debug services like Memory and Run Control require a process to be attached before they can access it. If a process is started by the service, its standard input and output streams are available for clients to read and write using the Streams service; the stream type of such streams is set to "Processes".

The Run Control service provides basic run control operations for execution contexts on the target. The Registers service provides basic operations to read and write CPU and hardware registers. In addition to commands that get and set individual register context values, the service defines commands to get and set values at multiple locations. This allows getting and setting multiple register contexts in one command, specifying an offset and size for a get or set on large register groups, and getting and setting truncated register values, for instance only the low 32 bits of a 64-bit register.

Then we have the Stack Trace service, which basically implements stack back-tracing. The Breakpoints service speaks for itself, I think: it allows you to set breakpoints. The Memory Map service provides basic operations to get and set the memory map on the target. The Path Map service manages file path translation across systems. The File System service allows operations on the target file system (there's a small sketch of its command flow after this overview). The System Monitor service can be used for monitoring system activity and utilization. It provides the list of running processes and various process attributes like command line, environment, et cetera, so it can provide functionality similar to Unix top or the Windows task manager.

The Terminals service provides access to the target operating system's terminal login: it allows you to start and exit a terminal login and to set the terminal window size. If a terminal is launched by the service, its standard input and output streams are available for clients to read and write using the Streams service; the stream type of such streams is set to "Terminals".

The Streams service is a generic interface to support streaming of data between the host and remote targets. The earlier LTTng integration used it for streaming data back to the host to show in Eclipse. This service supports asynchronous, overlapped data streaming: multiple read or write commands can be issued at the same time, and both peers can continue data processing concurrently with data transmission. Multiple clients can also receive data from the same stream; clients are required to express interest in particular streams by subscribing to them. And it supports flow control as well.

The Disassembly service, well, provides disassembly. And the Context Query service allows you to search for contexts that match a pattern.

Unfortunately, all the debugger-related services are architecture-specific and right now only work on x86 systems. The other services, those responsible for transport and the like, can be cross-compiled and used on different targets, but additional work is required to port the debugger implementation to a different architecture. There is a porting guide in the documentation that briefly describes how to do that, but it has not been done by anybody from the community yet.
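Before the demo, here is a rough sketch of one round trip to the agent from a client's point of view: open a TCP connection and send the File System service's "roots" command using the packet layout sketched earlier. The host address is hypothetical, the framing bytes are the same assumptions as before, and a real channel first exchanges "Hello" events announcing the available services, which is omitted here; real clients would go through the Java, Python or Lua APIs rather than hand-rolling packets.

    import json
    import socket

    ZERO, EOM = b"\x00", b"\x03\x01"  # framing assumptions, as in the earlier sketch

    def command(token: str, service: str, name: str, *args) -> bytes:
        head = b"".join(p.encode() + ZERO for p in ("C", token, service, name))
        data = b"".join(json.dumps(a).encode() + ZERO for a in args)
        return head + data + EOM

    # Hypothetical agent address; TCF agents usually listen on port 1534.
    sock = socket.create_connection(("192.168.7.2", 1534))
    try:
        # NOTE: a real channel begins with a Locator "Hello" event exchange;
        # this sketch skips straight to the interesting part.
        # Ask the File System service for its root directories -- the same
        # first step as in the command line demo that follows.
        sock.sendall(command("1", "FileSystem", "roots"))
        reply = sock.recv(4096)  # expect an R packet carrying the same token
        print(reply)
    finally:
        sock.close()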
Now I've prepared a little demo. Here's my setup; let me switch to my system. So that's an openSUSE 12.1, and right now I'm running a QEMU x86 image. Let me find the console in here. There it is. So there is an agent running there in daemon mode, and I'll start an agent on my host in interactive mode. Now I want to see the peers. So there are four peers: I have two network interfaces on this machine, and there is also one from the QEMU. We can look at the peer info for this one; as you can see, it shows the information about the peer. Now let's try to connect to it. All right, the connection is established, so now let's send some commands. So, roots for the file system, I got it. Now I want to try to open the root directory. And now let's see if I can read it. So it returned the list of files with their attributes. That's the command line.

Now let's see if I can show you the Eclipse debug integration. It might take a while; let's see what's going on. While I'm trying to bring this up inside my virtual machine, which is quite slow, I can take any questions if you have some at this point.

[Audience] Is it feasible for TCF to be a debug agent for the kernel itself? Because that would be quite a useful thing, a KGDB agent or some equivalent of that; debugging the kernel is hard.

Well, I think it might be possible to do this. The Wind River commercial solution is based on TCF, and they do not ship a GDB-based debugger.

[Audience, paraphrasing a partly inaudible question] Is there anything in the framework for handling vendor-specific debug hardware? Every silicon vendor has got its own JTAG cable, and they don't have much in common.

So TCF is designed with that in mind, but there is no such implementation in the open source, because nobody is willing to donate their resources to do that. We would be happy to take your contributions. Right now the community is quite small, though pretty active; actually, I think it's one of the friendliest communities that I know of personally, and they're always willing to help. It's just that they happen to work for vendors that see that kind of open source implementation as a commercial disadvantage.

I wish I could have launched this beforehand, but that would have interfered with the command line demo, so I decided not to. I had never demoed the command line part before; the Eclipse part I showed at ELC Europe in Barcelona.

[Answering an audience question] Yes, all clients are equal, and the auto-discovery is actually based on that fact.

[Answering an audience question] Well, we include OpenSSL in the Linux implementation. Yes.

[Audience] Can you trace a process, or is the remote system support more about the file system?

It's service-based, so the trace part is separate from the file system part.

[Audience] I'm just trying to understand: when it's running on the target, is the agent reporting on the state of the whole system, or is it attaching to a specific process?

When the agent gets a command, it looks for a service that can handle that command, and then it invokes that service. So if, say, you're interested in browsing files in the file system, it will start that service and get the data back to the host. And if you're debugging, it will do that through different services. Each service is responsible for its own set of functionality.

OK, finally. So Target Explorer is going to discover my target; I hope, it has never failed to do that before today. OK, yes, that's my target, and here are the processes on the target. So I started top in QEMU. Now I do a refresh, now I find it, and we want to attach. As you can see, we're attached to top, and you can see the disassembly view here. Now I want my debug view back. All right, so if you do a suspend, you can see the registers here; since they changed, they're marked in yellow. You can step over here in the disassembly as well. Here we don't have the sources, but I recompiled top, so let me start my top and detach, do a refresh again, and attach. Now we've opened the source file, and you can see variables and registers, and we can single-step. That's something new, because I only updated this yesterday from the latest Git snapshot. Breakpoints also work here. But then again, that's an x86 target.

[Answering an audience question] No, it's not; that's all over the TCF protocol.

[Audience] Debugging most embedded systems is a two-machine process, right? The complicated machine is connected to the serial port of the embedded machine. So you'd have a TCF agent running on the workstation exposing that serial port, and the target behind it with some functionality of its own. Can you proxy through that connection to the target?

Yes.
There is a value-add server, so there is the possibility to do that; yes, that's in the design. We don't have converters out there for, say, JTAG to TCP/IP; that's not implemented, but it's possible. And there is a proxy that can do that, where you plug in your value-add.

[Audience] And the target itself may have no TCP/IP?

Yes. Yes, you can do that.

So that's all I had for the demo, because I'm not sure you're interested in seeing how it can browse remote files. So, any more questions? No? Okay.

[Audience, paraphrasing a partly inaudible question] What would you need on the target, other than JTAG?

No more than what you'd need to debug with GDB; nothing more.

So, some references: well, our wiki and the documentation, and that's also where the source code lives. And the mailing list is a great place to ask questions; they usually get answered. So that's all from me. Thank you.