Welcome to the last presentation of the day. We will talk about VZLUSAT-2, which is a CubeSat with two Linux payload computers on board. I'm Tomas Novotny, and here is Martin Sabol. I work mainly on the Linux side in our research institute, and Martin works on space hardware and satellite operations. We are both from the Czech Aerospace Research Center. We will start with an introduction, so we have the context, and then focus mainly on VZLUSAT-2: its hardware, briefly its software, and then we will spend some time on operating the satellite. There will be questions at the end of the session. And what should you take away from this presentation? You should know the specifics of commercial off-the-shelf components used in space, especially with Linux computers, and how a Linux computer may be operated in space; we will talk about our future plans as well. The Czech Aerospace Research Center has a nice acronym in Czech, which is VZLU. It is a research organization which was established more than 100 years ago for aviation, and we have worked in space for roughly 20 years. We have 200 employees, and we are based here in Prague and in Brno, both in the Czech Republic. We mainly work as subcontractors for ESA, as you can see, but we also have our own missions, and one of these missions is VZLUSAT-2. So this is a photo of the satellite. It's a 3U CubeSat, approximately the size of a shoebox. At VZLU, 28 people worked on it, not only engineers, and about 70 external people, because there are six very nice and interesting scientific payloads on board from external institutions. Today we will talk only about the Earth observation mission, because it's the one which uses the Linux computer, so we can talk about it, and it's our primary mission, by the way. We have two cameras for that purpose.
Both cameras are connected to the same computer. One camera is black and white with high resolution, which means 25 meters per pixel. The second one is color with a wide field of view. We have a UHF radio, which is used both for transmitting and receiving data. A key point is the attitude determination and control system (ADCS), because it's a crucial part for Earth observation; we will talk about it later. It requires quite a lot of sensors and actuators, which are connected to the attitude control system. We operate in a sun-synchronous low Earth orbit, which is nice for Earth observation. And the last information from my side is that the satellite was launched at the beginning of 2022, and it's still operating. Martin will talk more about the details. OK, thank you, Tomas. As Tomas mentioned, we have several instruments on board; I will briefly introduce them. We have a gamma-ray burst detector and a Timepix detector, which is a radiation detector using the same computer as our camera payload. So we have two identical computers on board, the Linux computers. Then there is an orbital monitor, which is just another radiation monitor, but the sensor was developed here in the Czech Republic, at the Faculty of Nuclear Sciences here in Prague. We also have our own in-house radiation detector, which is called SXD. Radiation is quite an interesting thing in space, so there are a lot of radiation detectors on board. All the instruments are interconnected via a common bus, which uses CAN as the physical layer and the CubeSat Space Protocol (CSP), which adopted some features from TCP/IP. It's a distributed protocol: each instrument has its own address, and they share the common bus. We also use two separate I2C interfaces for the connection with our power system and other payloads, like our attitude control. The primary mission objective is Earth observation, and not only Earth, but to take an image of the Czech Republic.
That is the main goal. So there are some requirements based on this goal, and some limitations, mainly the limited power budget: the total input power is no more than approximately 3 watts on average. Also, we have only the UHF radio, which has very limited bandwidth, up to 9.6 kilobits per second. So the speed is not very high for uploading and downloading all the scientific data we are able to generate. Another crucial constraint was the delivery time, I mean the total time available for the development of the hardware: it was less than eight months. So the decision, based on these requirements, was to use a Linux computer, to connect the camera directly to the computer with a high-resolution chip, and to provide all the compression features and so on to make the most of our limited link budget. Finally, we started cooperation with a small tech company called L4, and they developed for us a computer, which is called the VCBS2 computer. You can see it here in the picture. We have two identical computers, as I mentioned. One is used for our primary mission, which is Earth observation. The second one is used to connect the X-ray detector over USB 2.0; that was the requirement on the second computer, the availability of a USB interface. Here you can see the computers integrated in the satellite. This is the integration phase. You can see all the instruments here: this is the optics of the main camera, the GRB detectors here, the orbital monitor, the radiation detector, and so on. This is our attitude control, developed in-house at VZLU, so this is our own hardware. The main benefit of the computer used is that L4 directly implemented the driver to read out camera data from the optical sensors using LVDS. The sensors, each 1.3 megapixels, are directly connected to the computer via LVDS channels, and the images are provided via an API directly to user space, so we can simply use the images.
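To put the link budget in perspective, here is a back-of-the-envelope calculation, assuming an uncompressed 8-bit 1.3-megapixel frame and the ideal sustained 9.6 kbit/s rate (both figures are illustrative, not flight measurements), which shows why on-board compression is essential:

```python
# Rough downlink-time estimate for one raw camera frame.
# Assumptions (illustrative, not flight figures): 1.3 Mpx sensor,
# 8 bits per pixel, ideal sustained 9.6 kbit/s UHF link.
PIXELS = 1_300_000          # ~1.3 megapixel sensor
BITS_PER_PIXEL = 8          # assumed 8-bit raw samples
LINK_BPS = 9_600            # UHF link rate from the talk

raw_bits = PIXELS * BITS_PER_PIXEL
seconds = raw_bits / LINK_BPS
minutes = seconds / 60
print(f"raw frame: {raw_bits // 8 // 1000} kB, "
      f"ideal downlink time: {minutes:.0f} min")
```

Roughly 18 minutes of continuous, error-free link time for a single raw frame, which is far more than a ground-station pass provides, and passes are not error-free either.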
As Lenka said, the space environment is not very friendly. Quite critical in this environment is the radiation, so we should consider this. There are at least two types of radiation effects. One is cumulative: the radiation tends to degrade your electronics on board over time. Typically it leads to significantly increased leakage currents in the electronics, and possibly to some misbehavior, and so on. A good mitigation technique is to use some aluminum shielding, as shown here in the picture. This is our camera, shielded by this piece of aluminum; it's quite a common technique. You can also do some testing of your electronics, like we did on VZLUSAT-2. This is our electronics being tested in a laboratory in Seibersdorf, Austria, with a Cobalt-60 source, so you can observe the behavior during irradiation. Energetic protons might also cause single event effects, which can lead to soft errors, for instance single event upsets, which typically means bit flips in memory and so on. There are mitigation techniques, like using ECC memories or some other logic to prevent bit flips, or at least detect them and, for instance, reset the device. Quite serious and possibly destructive events are, for instance, single event latch-ups, which activate parasitic thyristor-like structures in the integrated circuit, leading simply to a short circuit. This should be handled by a power cycle within a very short time, so a good technique is to use latch-up protections like fast circuit breakers, which we also implemented directly on the camera. Then there is the low pressure, near vacuum, in low Earth orbit, which means gases are simply liberated from various types of materials. Materials like common plastics, PVC and so on shall be avoided as much as possible, as all these freed gases might, for instance, condense on your other surfaces, like sensors or optics. So you should use low-outgassing materials.
For instance, all the PCBs and so on shall be cleaned, and a good technique is not to use common plastics, as I said. There is also the tin whisker issue; whiskers tend to grow more in a low-pressure environment. A good technique is to use tin-lead alloys during the assembly process, which significantly lowers the probability of tin whiskers growing. Tin whiskers lead to short circuits, of course, for instance between the balls of BGA components. Then there is thermal cycling: during the orbit, the temperature varies from minus 40 to 50 degrees, for instance. We also do some testing in-house; we have a thermal vacuum chamber, and you can see the whole satellite here inside the chamber. This is mostly the temperature on the surface of the satellite. Inside the satellite, the temperature does not vary so much; it stays between about 5 and 10 degrees during the whole orbit, so it's not so critical. But some parts, like the PV panels and so on, are exposed to such thermal cycling, so you should consider this too and test it if possible. Mechanical testing is also one of the crucial parts, because before you reach orbit, you will probably use a rocket, and on the rocket you are exposed to random vibration; you have to withstand quite a high g-force. We also have our own facility to do such vibration testing, so we are pretty sure that our payloads withstand the launch phase. COTS components: using COTS components in such a small, low-budget project like these CubeSats is probably the first choice. But of course, this brings some risks, as these components are not qualified for space. A good starting point is to use automotive-grade parts, which are already qualified for some mechanical shocks and vibrations and have an extended operating temperature range. So this is a good starting point for selecting such a component.
And of course, doing some qualification testing, like TID and so on, is beneficial. That was about the hardware; I will kindly ask Tomas to continue with the software. OK, so something about the BSP and the software. The whole BSP is built with the Enclustra build environment, which is basically Buildroot with some tooling around it. The boot is quite simple: there is a predefined sequence which verifies the checksums of the images, and the first valid one is then booted by U-Boot. I would like to mention that there is no supervisor, so U-Boot needs to take care of the boot, and there is no console in U-Boot. U-Boot is really the responsible part for booting the Linux computer. When Linux starts, the root filesystem is in RAM, and the non-volatile memory is mounted by a script only when it's needed. Regarding debugging, the satellite itself is only roughly 500 kilometers away, but it's quite difficult to reach. We have no JTAG, we have no serial console; we can only debug by power cycling and a shell over the CubeSat Space Protocol. There is nothing more, so it's quite difficult from that point of view. This is, of course, mission specific: if your mission requires it, you may have a serial console, if it's implemented on your onboard computer. So it depends on the mission. As you can see, simplicity is the key. There is no fancy stuff yet; we will see, maybe, after some tests. But as a starting point, very simple, old, common stuff is used there. Something about the software. As Martin said, we have CAN as the physical layer, so of course we use SocketCAN on Linux. I'm mentioning it because the socket API and SocketCAN were received very well by the programmers who had been working on microcontrollers, because this API is really convenient, and they were able to operate the CAN bus very quickly. On top of CAN, we use libcsp, which is the official library for the CubeSat Space Protocol. It may run over I2C, CAN, et cetera.
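To illustrate why the programmers found the socket API so convenient, here is a minimal SocketCAN sketch. The frame layout follows the `can_frame` packing documented for Python's `socket` module; the interface name `can0` and the CAN IDs are illustrative, not the flight configuration:

```python
import struct

# Layout of struct can_frame as used by SocketCAN (see the Python
# socket module docs): 32-bit CAN ID, 8-bit data length, 3 pad
# bytes, then up to 8 data bytes.
CAN_FRAME_FMT = "=IB3x8s"

def build_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame (max 8 data bytes)."""
    return struct.pack(CAN_FRAME_FMT, can_id, len(data),
                       data.ljust(8, b"\x00"))

def parse_frame(frame: bytes):
    """Unpack a frame back into (can_id, payload)."""
    can_id, length, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:length]

# On the real system you would open a raw CAN socket and send:
#   import socket
#   s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
#   s.bind(("can0",))                  # hypothetical interface name
#   s.send(build_frame(0x1A, b"ping"))
frame = build_frame(0x1A, b"ping")
print(parse_frame(frame))  # → (26, b'ping')
```

The whole bus access is just `socket()`, `bind()`, `send()`, and `recv()`, which is why it felt familiar even to people coming from microcontroller firmware.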
But we use it only over CAN, because the computer is connected only via CAN. For image compression, we simply use OpenJPEG. And if you are interested, the X-ray optical payload control software and some parts of the BSP configuration for the respective computer are available on GitLab; the link is in the presentation. For communication with the computer on board, we use VCOM, which is our terminal client; it uses CSP packets to communicate with libcsp on board the computer. This terminal will hopefully be published on our GitHub soon; it's now being rewritten for our partners, so we will publish it hopefully soon. I would like to talk about an unplanned in-orbit upgrade, because it's a nice showcase of Linux versatility. After the launch, a company called Zaitra came to us and said: OK, we would like to test our artificial intelligence classifier in space; we would like to have flight heritage for our software. And we said, OK, it was not planned, there is no update mechanism, we can just change the planner files and scripts, but we will try. So we uploaded the binary and ran it on the camera computer. By the way, uploading those 100 kilobytes required 10 passes spread over 27 hours; during the best pass we uploaded 21 kilobytes. So you can see that UHF is really, really slow. And why have cloud detection on board? You can see the real situation on the planet; this is the trajectory of the satellite, so it would be best to automatically pick the cloud-free part. And the detector was successful: it really picked the part which is literally without clouds. It's very near Greenland, and it's taken at an angle, but yes, it was successful, so the classifier works. And now we are moving to operations, which is Martin's part. OK, operations. You need a ground station, some antenna, to communicate with your satellite. We have our own ground station, which is maintained by the University of West Bohemia in Pilsen.
Here, this is the antenna system on the roof of the university. We have quite limited access time, no more than one hour per day. During that time, we have to check the satellite status, which means getting some telemetry data, uploading future plans and planning jobs for the satellite, and also downloading the available scientific data, which will be processed later. So we tend to automate these processes as much as possible. For that purpose we have a dashboard, which is publicly available; you can see the telemetry data on it. We also use that dashboard to plan the uploading of the planner files, to download data, and to command our satellite. This is the automated process, but of course the operator may also access the satellite directly, manually. As Tomas mentioned, we use the VCOM utility, which simply encapsulates the user commands into CubeSat Space Protocol packets, and vice versa. This is how it looks: it's a terminal with a set of commands you can execute and send to the satellite directly from the ground station, and get the reply, of course. But this is limited to the access time during the pass over the ground station. What we also do is execute the same set of commands directly on board the satellite, in our onboard computer. We can simply upload a planner file, where the commands are executed at the given times. Then you can generate your scientific data, or start a measurement while passing through the South Atlantic Anomaly, typically for the radiation detectors and so on. The satellite itself also broadcasts some basic telemetry data every 10 seconds. We are connected to the SatNOGS network, which is a global ground station network, so you can also view a dashboard there. We created one for our satellite with very basic information, like the temperature and some radio statistics over the lifetime.
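The planner mechanism described above can be sketched as a tiny interpreter that executes timestamped commands on board; the line format and command names here are invented for illustration and are not the actual flight format:

```python
import time

def parse_planner(text: str):
    """Parse lines of the form '<unix_time> <command...>' into a
    schedule sorted by execution time. Blank lines and '#' comments
    are skipped. (Hypothetical format, for illustration only.)"""
    schedule = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        stamp, command = line.split(maxsplit=1)
        schedule.append((int(stamp), command))
    return sorted(schedule)

def run_planner(schedule, execute, now=time.time):
    """Execute each command once its timestamp has passed."""
    for stamp, command in schedule:
        while now() < stamp:
            time.sleep(0.1)
        execute(command)

plan = """
# start a measurement, then power the camera down
1700000300 camera capture
1700000000 adcs start pointing
1700000310 camera poweroff
"""
print(parse_planner(plan)[0])  # → (1700000000, 'adcs start pointing')
```

The key property is that the schedule is uploaded during the short ground-station pass, but the commands fire later, anywhere on the orbit, with no contact required.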
Earth observation means you have to somehow trigger the camera and the attitude control itself. How do we do that? We have our own attitude control hardware, which, as I mentioned, was developed here at VZLU. But there was no control algorithm ready to be used at launch. So, as a demonstration mission, we uploaded our algorithm in orbit during regular operations. Once you build a camera, put it in orbit and run it, you take an image and you will probably see something like this: very nice images of the Earth, but you can hardly recognize the Czech Republic here. That is not exactly what we want. In this case, the attitude control is a very, very crucial part, and it is the most difficult part of the mission. So what we did is we embedded MicroPython into our onboard computer; it's just one of the threads running there. Since it's MicroPython, we can simply upload the Python files and get the attitude control running and working. And that's it. It seems to work, so we are happy. This is how we start our attitude control: we simply run a Python script; we have a CSP command prepared for it. The camera: as Tomas mentioned, we did some unplanned experiments with Zaitra, so we implemented an FTP-like service encapsulated in CSP. We can upload files to the camera directly from the ground station, without the onboard computer being involved; the files go through the radio, over the CAN bus, directly to the camera. We also implemented something like a remote shell over CSP, which enables direct access to the Linux console. So we can upload a script, run it, and so on, and do some optimization; it's a pretty good feature. With that, we can put all the orchestration in place and run a photo campaign, which means running attitude control and starting the capture.
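An FTP-like service over a packet link like CSP can be sketched as splitting a file into numbered chunks that each fit in one packet, so the receiver can verify each chunk and re-request any that are lost. The chunk size and header layout below are assumptions for illustration, not the flight protocol:

```python
import struct
import zlib

CHUNK_SIZE = 180  # assumed payload per packet, not the real value
HEADER = struct.Struct(">HHI")  # seq number, total chunks, CRC32 of chunk

def split_file(data: bytes):
    """Split a file into self-describing chunks for an FTP-like transfer."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    total = len(chunks)
    return [HEADER.pack(seq, total, zlib.crc32(chunk)) + chunk
            for seq, chunk in enumerate(chunks)]

def reassemble(packets):
    """Verify each chunk's CRC and stitch the file back together."""
    parts = {}
    total = 0
    for packet in packets:
        seq, total, crc = HEADER.unpack(packet[:HEADER.size])
        chunk = packet[HEADER.size:]
        assert zlib.crc32(chunk) == crc, f"chunk {seq} corrupted"
        parts[seq] = chunk
    assert len(parts) == total, "missing chunks, re-request them"
    return b"".join(parts[i] for i in range(total))

payload = bytes(range(256)) * 4          # 1 kB dummy "file"
packets = split_file(payload)
assert reassemble(packets) == payload    # round-trip survives reordering too
```

Because each chunk carries its own sequence number and checksum, chunks may arrive out of order or across several passes and the file still reassembles correctly, which matters on a link this slow.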
With all these things ready, we generate some ADCS data, which we plan to download. The download itself might take one to two passes over the ground station, and it results in a report from which we can easily check whether the attitude control worked as expected. Here you can see the satellite trajectory, which is the blue line, and the green line is the pointing direction, so you can see we pointed in the nadir direction. What we do next is select an image we would like to download, to see a preview of how it looks. It took one pass to upload the planner I mentioned before and to download some raw thumbnail data, which is copied from the camera to the onboard computer; the main storage is on the onboard computer, from which we then download the data in small chunks. That also takes some passes, typically one or two over the ground station. As a result, we get a compressed JPEG preview image. If we are happy with the image and decide to download the full image data, we also have to copy the full image to the onboard computer and then download all the data to the ground station, which might take quite a lot of passes, up to 40. So we select very carefully which image to download at full resolution. But finally, we can get something like this, which is what we wanted to get. This means the mission was successfully completed, and we fulfilled our goals. That's the result, and now we are happy to produce more images; this is the main output from our payload. OK, no more from my side, so I'll ask Tomas to finish the session. OK, so the last slide: conclusions and future plans. As you may anticipate, we plan to use Linux in future missions because, as Martin said, all the goals were met, and Linux is still working well in space in our particular use case. So it was a great success, I would say.
There are many features readily available, which accelerated the development, and we were really able to finish all the hardware and software in eight months, which is, I would say, quite nice for a space mission. We continue with the development of a new computer, which extends the functionality of the old one. There are more interfaces and bigger storage, but not just a bigger eMMC: we are pre-selecting and qualifying an SSD which will be used in space, so the storage will hopefully be 100 times bigger. We are also implementing a supervisor which will take care of the boot. It will help U-Boot switch to a different memory, or load U-Boot itself from a different memory, to have more opportunities and chances to survive the mission. The basic bring-up of the hardware is done. And, as was said in the previous session, we have cooperated on Linux for space since the beginning, so we will run the space distribution there. OK, that's everything from our side; thank you to the organizers. Just a final remark: we have the opportunity to present the space hardware and the model of VZLUSAT-2 tomorrow at the poster session, which starts at about 5:30, if I'm not mistaken. We will be happy to chat about Linux, space hardware, and general stuff, so please come and we can talk. Thank you very much. OK, and the questions? I will repeat the questions, yes, of course. The first question was: was there anything especially handy about Linux on orbit? For me, for instance, very handy were just simple bash scripts, a very, very helpful tool; that was something I heavily used in space. The next question was whether it's possible to see the board tomorrow: yes, it will be possible. And the next question was whether we can get more images from our satellite via some other ground stations, for instance.
We are prepared to send chunks of the data to other ground stations too, like in the SatNOGS network. It is not fully implemented; it is ready to use, but it is not working the way we would like to see it yet. There is at least room for improvement, because this is really the bottleneck to getting more and more images. But even with the compressed images, we are able to get one image per pass, so it's not that bad. It's not a full image, that's right. The next question was whether we see reboots in orbit: yes, we do. First of all, we were testing with protons and these kinds of things on the ground, and yes, we saw reboots caused by radiation. And yes, there are some reboots in orbit, but we are not tracking them for the camera computer. We are tracking the reboots of the OBC; this is part of the SatNOGS telemetry, so you can see the number of reboots and the cause of the last reboot, and you can check it online. I'm not sure if you would really see a reboot of the camera computer during a pass. For a reboot of the onboard computer, the question is whether it is caused by the radiation or by the software itself, and there is a lot of speculation. To be honest, the camera is operated in very short duty cycles: we switch on the power to the camera only during a photo campaign, so for a few minutes, let's say, and then power it off. Also, as I mentioned, we have limited power income, so we are not able to keep the camera running during the whole orbit. Could you repeat the question? Yes, the question was whether we know the time between reboots. You can check the SatNOGS dashboard, because that plot is not here on the screenshot, but on the bottom part of the page there is a cumulative count of OBC reboots. You can see the number there, and the cause of the last reboot, and there is a plot at the bottom of the page, so you can see the reboots over time.
As you may anticipate, there were a lot of reboots during the commissioning phase, because there were a lot of tests and there were some problems, let's say. After that, the number of reboots stabilized, and it's quite steady now, usually up to three reboots per month. We have been operating roughly 500 days now, and you can see we have 135 reboots. That is for the OBC; to make it clear, this is not the Linux computer, this is the control computer, not the payload computer. Which memory? The question is which kind of memory we are using. There is an SPI flash memory directly on the COTS module, and the Linux image is stored there, if I'm not mistaken. And there is a 4-gigabyte eMMC, which is used for data. No, there is just aluminum on top of it, which protects the memory. But the eMMC is the best automotive grade, and it was pre-selected in tests, so we are pretty sure that it will survive two years in orbit. It was tested for TID. Also, it's a BGA component, and we re-balled the BGA: we replaced the pure tin with a tin-lead alloy. OK, we are sorry, there is a stop sign; we are running over quite a lot, so we have to stop. We can continue in the chat. OK.