Professional open-source radio astronomy, by Jason Manley.

Yeah, thanks very much. So I think I've made a bit of a blasphemous move bringing my Mac here today, but the other seven computers that I run all run Debian or Debian derivatives, so I don't feel like I'm in bad company. And I've managed to convert my parents as well, so it's looking pretty good, I think. So I'm here to talk about radio astronomy today. The talk's going to be kept pretty informal and not very technical. It's a high-level overview, just to give you a flavor, so lots of pictures and things. This is what we're talking about: we're building these big dishes that look like giant satellite TV dishes. This is actually a picture of MeerKAT, the array that we're currently building out in the Karoo. And we don't just build one of these things; we're building an array of them. That's a picture taken very recently on our site. MeerKAT has 64 of these dishes, each about 13 and a half meters across. And the whole thing is based on a very flexible, reprogrammable software-defined radio that's all open source. There's about two petaops per second of processing capacity in there. That doesn't sound very high compared to today's supercomputers, but we're only burning 26 kilowatts to do it, so it's very power efficient. And we're dominated by IO: for two petaops per second, we've got 15 terabits per second of data being exchanged, and that drives us towards building custom hardware rather than buying off-the-shelf things. Interestingly, we do direct digitization, so there's no analog down-conversion, for those of you familiar with radio astronomy and analog radio front ends. And it will be the world's most sensitive L-band instrument when we finish building it next year. So why are we building this thing? Well, it's a flagship project for the country. We want to show off our capability, as it were, so we're building this world-class telescope.
But we also want to attract and retain scientists and engineers. A large part of our project is human capital development: we invest about a third of our budget into students, and we fund about 400 students at university. We want to inspire a new generation to pursue careers in science and engineering. We also try to stimulate the local economy, so we use local industry wherever possible to build this thing. And it's actually a pathfinder for a future telescope called the Square Kilometre Array that will be built throughout the world. Those of you who follow these sorts of things in the news may have heard the announcement a couple of years back: we won the right to host the Square Kilometre Array on our site, right next to MeerKAT. So that's what we're working towards. And the site where we're building this thing is protected. We have legislation in place that prevents you from putting radio transmitters up there. RFI, radio frequency interference, is a very big concern. You can't come near us with a cell phone or a radio or anything, and I'll try to explain a little bit about that in a minute. And where are we talking about? There's a map of our country there, and there's a section towards the northwest that's circled in blue, for those of you following online. That is the protected region of our country. This is a map of population density. Down here in Cape Town, in the bottom-left corner, it's red; it's very densely populated. But out in the Northern Cape where we're building this thing, the color is gray, and that's naught to two people per square kilometer. And I say people, but actually I think it's just sheep. There's like nothing out there. And that's very important, because we don't want people polluting the radio spectrum. It looks something like this: mostly rocks and scrub vegetation. It's a semi-desert region.
And there are these little hills that actually provide a bit of shielding from terrestrial transmitters that are a few hundred kilometers away. So it's kind of the ideal place to build this, and there are very few places in the world that offer this kind of environment. So this is one of the arrays that's currently on our site, and if you zoom out a little bit, you'll see seven of these things standing there. But we hope by 2020 it might look something more like this, if we start building the Square Kilometre Array there. And in fact, these dishes would not just be based in South Africa; they'd be spread out throughout the continent, all the way up to Ghana. So we have lots of other countries involved in this as well. Now, radio astronomy has a fairly long history. Or rather, astronomy does; radio astronomy's is a bit shorter. Karl Jansky discovered the first radio waves from space in 1932. But Grote Reber, in 1937, was the first person to actually build a telescope for the purpose of listening to signals from space. Jansky sort of stumbled across them while trying to solve the problem of radio telephone calls across the Atlantic, and he discovered this interfering source coming from space. So we sort of happened upon this science. But you'll see the dish on the right looks remarkably similar to the way we build dishes now, and that was the first one ever made. So the basics have been pretty much the same ever since it started. In the early days, things were pretty simple: you built the dish as big as possible to make it as sensitive as possible, the processing was all done in analog, and the computing back end was a guy with pencil and paper. The problem with this sort of approach is you can only build these things so big, so you eventually run into a sensitivity problem. The largest fully steerable telescope in the world is the Green Bank Telescope. It's about 100 meters across and weighs 7,300 tons.
To put this in perspective, imagine picking up a football stadium and pointing it around the sky. That's the size of these things. And in fact, it's a problem: the thing weighs so much that when you move it around, its own weight deforms the surface, so you have little motors that actively correct for the surface deformation. But what happens if that's not big enough? If you still want it more sensitive, well, you have to build it into the ground. James Bond fans who've seen GoldenEye might recognize this one. It doesn't fill with water, and there's no villain's lair under there. It is kind of a spectacular bit of engineering, and this was in the early 1960s. This one is 300 meters across. And at the moment, there's a bigger one under construction in China called FAST. That one's 500 meters across, and they reckon they're going to finish it by 2017; today it's pretty far along already. But when you get to these sorts of scales, there are all sorts of limitations. You have to point it straight up at the sky. You can move the focal point around a bit so you can steer it a little, but you can't look at anything you want; you kind of have to wait for the Earth to rotate back around to see that thing again. And also, the bigger you go, the narrower your beam gets on the sky, so you see a smaller part of the sky. If you want to do a survey, if you're looking for something, it takes you a lot longer to do that. Now, why are we trying to build these things so big? The reason is to collect more energy. To put this in perspective, imagine a 100-watt light bulb that you turn on and off. On for a second, it burns about 28 milliwatt-hours of energy. If we run a 26-meter dish and look at the brightest source in the Southern Hemisphere, Virgo A, for 12 hours every night for 10 years, we'll only collect about a thousandth of that energy. So these things are really, really sensitive.
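To get a feel for those numbers, here's a rough back-of-the-envelope sketch. The source flux density (~200 Jy, roughly Virgo A at L-band) and the 400 MHz processed bandwidth are my own assumed values, not figures from the talk, and the exact fraction you end up with depends heavily on them; the point is only that the collected energy is vanishingly small next to one second of a light bulb.

```python
import math

# Rough check of the light-bulb comparison. Assumed values (not from
# the talk): the source is ~200 Jy, and we process 400 MHz of
# bandwidth with a 26 m dish.
JY = 1e-26  # 1 jansky in W / m^2 / Hz

# A 100 W bulb switched on for one second:
bulb_energy_J = 100.0 * 1.0              # 100 joules
bulb_energy_mWh = bulb_energy_J / 3.6    # ~27.8 mWh, the talk's "about 28"

# Power collected by a 26 m dish pointed at the source:
area_m2 = math.pi * (26.0 / 2) ** 2      # geometric collecting area
power_W = 200 * JY * area_m2 * 400e6     # flux * area * bandwidth

# Observing 12 hours a night for 10 years:
seconds = 12 * 3600 * 365 * 10
collected_J = power_W * seconds          # a minuscule fraction of the bulb's 100 J
```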
The signals that we try to detect are really, really weak. What happens if you put a baby monitor on the Moon? Could we detect that? Not only can we detect it, it's 10 times brighter than the brightest source we observe. So if you walk next to one of these dishes with a cell phone, you blow up the receivers, and they're really expensive. So we don't let people on site with cell phones. Rather than trying to build these things bigger, we build more of them. The start of this was the Very Large Array in 1980. This one's still very popular; if you've seen the movie Contact, you might recognize it. These are, I think, 20-odd-meter dishes, and there are 20-odd of them, something like that. And the fundamental idea is this: imagine a source up there somewhere in space, with signals coming down from it. They arrive at the surface, and because these dishes are in slightly different positions, the signal hits them at slightly different times. What we try to measure is that delay, and from it you can work out where in the sky the source was. That's the basic concept. So in South Africa, we're kind of new to this field; we haven't been doing this very long. And we've gone through a few different designs on the way to this MeerKAT array that we're building. We started out with an experimental development model, which was just a single antenna. Then we went to KAT-7; that's seven of these things, and it's been running since about 2010. And now we're busy building MeerKAT. Those of you with a sharp eye might notice that these things look subtly different. The first two on the left are what we call prime focus: there are legs supporting a feed structure in the middle of the dish. But MeerKAT is what's called an offset Gregorian, and the reason for this is to avoid anything sitting between the sky and your dish.
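The delay measurement he describes can be written down in a couple of lines. This is a minimal sketch with made-up numbers (a 100 m baseline, a source 30 degrees off zenith), not MeerKAT's actual geometry:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def geometric_delay(baseline_m: float, angle_deg: float) -> float:
    """Extra time the wavefront takes to reach the second dish.

    angle_deg is the source's angle away from zenith along the
    baseline; a source straight overhead (0 degrees) hits both
    dishes at the same instant, so the delay is zero.
    """
    return baseline_m * math.sin(math.radians(angle_deg)) / C

# Two dishes 100 m apart, source 30 degrees off zenith:
tau = geometric_delay(100.0, 30.0)
print(tau * 1e9)  # ~166.8 nanoseconds
```

Inverting this — measuring the delay and solving for the angle — is how the array works out where the source is.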
So these lines are a little bit faint with the lights on, but essentially the signal comes into this main reflector, the big dish here, bounces off to a sub-reflector, and then hits the receiver. There's nothing sitting in the aperture of the main reflector, so you get a very clean signal: no multipathing. And this is a stepping stone towards the big international project called the Square Kilometre Array, which consists of three parts: dense aperture arrays, dishes like the ones in the pictures you've been seeing, and something called sparse aperture arrays. The sparse aperture arrays are going to be built in Australia, but the dense aperture arrays and the dishes are coming to Africa. And they might look something like this; these are artist's impressions. These are the dishes; you can have lots of them. Dense aperture arrays look like these tiles on the ground, and the sparse aperture arrays are dipoles staring at the sky. Now, the focus of this talk is the processing side of this. My role in all of this is the initial up-front signal processing of these systems, which is now all digital. In the early days of digital instrumentation, it was done with discrete logic: 74-series ICs. And those were actually still in use up until just a few years ago, so these things ran over a 30-year lifespan. You'd build these cards with all the logic on them, stack them in racks, and you'd need a room full of them to build a correlator this size. The current state of the art is to do this using ASICs. This is the EVLA, an upgraded version of that Very Large Array in New Mexico. You basically take a PCB and build it as big as you can.
Some of them are two meters square, and you just pack them with ASICs, slot them into these big chassis cabinets, put lots and lots of cabinets in a room, and connect them all with ribbon cables, hoping you haven't miswired something. A lot of people ask me, can't you just do this with software now? I mean, that was 74-series logic in the 80s; surely by now we can just put an Intel CPU there and be done. And it turns out you kind of can. You can replace that old 74-series logic with about fifteen 3-gigahertz CPUs. But for today's cutting-edge correlators, the ones built using those ASICs, we'd need something like 200,000 CPUs. These systems are pretty big; we crunch big numbers. You can improve this by using accelerators, GPUs or Cell processors or whatever the flavor of the month is, but these things are all pretty power hungry, and it's difficult to get the data into them. Computers aren't really good at high-bandwidth applications. So when we came to building KAT-7, which was deployed in 2010, we started with a bit of a clean sheet. We said, okay, there are all these ways of doing this. If we were to do it now from scratch, given our current technology, what's the best way to go about it? And one of the concepts was to have building blocks, if you will, that you can clip together to build different types of instruments, because you do different science with these telescopes, and to scale the system dynamically by just adding more and more hardware. So you could build pocket spectrometers and channelizers and all sorts of different things, all out of the same building blocks, just by connecting them up differently. And when you run out of space on one board, well, you can connect multiple boards together.
And when you can't connect multiple boards together anymore, you plug them all into a big network switch and just keep scaling the thing out. So we came up with a concept diagram where we have dishes here on the left, plugged into things called F-engines (that's a channelizer), then into a big network switch, and then a whole bunch of processing nodes hanging off that. And it turns out we weren't the only ones thinking along these lines. The Collaboration for Astronomy Signal Processing and Electronics Research, or CASPER, which started at UC Berkeley, tried to create the PC of radio astronomy. The idea is to accelerate and streamline the development of these instruments, which can take over 10 years sometimes, and in so doing better track Moore's law. We want to use low-cost commodity hardware. You don't want to have to redevelop the system every time you upgrade it; you want to reuse what you can. So can't you come up with some sort of platform-independent DSP library? You want to use standard communication protocols so that it's easy to interface with other equipment. And the idea is to develop quickly, deploy early, and upgrade often, much more like what you would do with a laptop. And critically, if we can get lots of people behind this, then the non-recurring engineering cost becomes really low, because you can share it across the community. Whatever you develop, whatever I develop, we can share, and if we develop different things, then we both gain what the other person's done. Traditionally, as I say, it takes over five years to develop one of these instruments. To put this in perspective, CASPER has been really successful: in their first two years, they built eight instruments. And the cost is a whole lot lower than these things normally would be. It's normally tens of millions of dollars.
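The dishes / F-engines / switch / processing-nodes diagram is essentially an FX correlator: F-engines channelize each antenna's time stream, and the back end cross-multiplies and accumulates spectra for each pair of antennas. Here's a toy NumPy sketch of that split, using simulated noise rather than real antenna data; the real system does this on FPGA boards with integer arithmetic, not floats in one process.

```python
import numpy as np

def f_engine(stream, nchan):
    """F-engine: chop the time stream into blocks and FFT each block."""
    nspec = len(stream) // (2 * nchan)
    blocks = stream[: nspec * 2 * nchan].reshape(nspec, 2 * nchan)
    return np.fft.rfft(blocks, axis=1)[:, :nchan]  # nspec spectra of nchan channels

def x_engine(spec_a, spec_b):
    """X-engine: cross-multiply and accumulate, one visibility per channel."""
    return np.sum(spec_a * np.conj(spec_b), axis=0)

# Two antennas seeing the same sky noise, antenna B one sample later:
rng = np.random.default_rng(0)
sky = rng.standard_normal(8192)
ant_a, ant_b = sky[1:], sky[:-1]

vis = x_engine(f_engine(ant_a, 64), f_engine(ant_b, 64))
phase = np.angle(vis)  # the phase ramps across the band, encoding the delay
```

In the deployed architecture each F-engine streams its channels onto the switch and the processing nodes subscribe to what they need; here it's all just function calls.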
And these CASPER-style instruments are typically under a million. So we use something called Simulink as the programming environment for these things. It's actually graphical: Simulink is a MathWorks MATLAB add-on, and it's the only non-free part of our tool flow at the moment. You drag and drop these modules out of a library, connect them up with little arrows, and hit a compile button. This was pretty novel when we started doing it, circa 2008, 2009, but now it's fairly common. And this map in the bottom right shows the collaborators at the CASPER workshop a few years ago. So we use these open-source hardware platforms. This was the first-generation one, called the BEE2, which stands for Berkeley Emulation Engine 2. The board was actually designed to simulate multicore processors, so Microsoft and Intel were some of the big customers for these boards. And back then it had incredible specs: 24 gigs of memory and 180 gigabits of IO. It sounded amazing. Now it's like, yeah, whatever. But the problem with that thing was there was no way to get analog signals into it, so you couldn't digitize your antenna data. So then they built this other thing called the iBOB, the Internet Break-out Board, whose sole purpose was to digitize signals and get them into the BEE2s for processing. And that worked well enough. But we thought... oh, my animations aren't working so well, sorry. So we came up with something called the ROACH. In America, a roach means marijuana; that's not what this means. It stands for Reconfigurable Open Architecture Computing Hardware, and it was kind of the one platform to rule them all. It rolled the functions of the iBOB and the BEE2 into a single platform: you could digitize signals with it, and you could do your processing on it. And this has been really successful. We designed this thing for KAT-7, but we open-sourced it and it's now in use throughout the world.
Most radio astronomy facilities have one of these boards in their pipeline somewhere. Typically, we use the biggest FPGA device available at the time. That's a field-programmable gate array, for those of you not familiar with it. The idea is it's like a CPU that you can rewire the innards of, so you can implement your algorithms in hardware rather than in software. So we have this big FPGA device, and then we put a lot of memory on the board and lots of IO. And there's a little co-processor on these boards as well that runs Debian and does some out-of-band hardware monitoring; this predates iDRACs and things. And we've tried to keep it backwards and forwards compatible. So we use Ethernet as the communication standard. It's been around for 20, 30 years, and we hope it's going to be around for another 20, 30 years, because our telescope has to last 30 years. So you take this one building block, stick a whole bunch of them in a rack, wire them all up together in a network, and you get a cluster of these things, just as you would with ordinary computers. And it seems to work pretty well. Then we built the second-generation board, called ROACH-2. Not very original. It's just bigger, better, faster, and unfortunately also more expensive. These boards have become the de facto standard CASPER building blocks. And then when it came time to build the third-generation board, which you'd think would be called ROACH-3, it's not; it's called SKARAB, but it's the same idea. We renamed it to bring it in line with the upcoming SKA; we wanted SKA in the name somewhere. But it's the same thing, a bigger, better, faster version. And it looks something like this. Those of you who were here yesterday afternoon may have seen the demo I was giving. This is just a slide showing some of the capabilities of the board.
I guess the only thing to highlight here is that even the ROACH-2, which is many years old now, already had memory bandwidths of over 200 gigabits per second, many factors over what was available on typical computers at the time. And these things are now in use all over: at the GMRT in India, at the ATA in the States, and there are a whole bunch of other arrays coming online that will also be using this stuff. And we have some strange requirements. These things operate out in the desert; they stand alone. They have to run remotely, very reliably. We have to design the system to cope with failures, and we need to design it with a long lifetime in mind. No single hardware platform is going to last 30 years; at some point you're going to have to service it and upgrade it. How do you go about doing that? You have to pick your hardware and software very carefully. This is what the development environment looks like. It's not your average text-based IDE; you have blocks representing different things in the system. Some of these represent IO parts, ADCs and DACs and things; some of them represent signal processing tasks. You wire it all together, hit compile, and it programs your board and runs your function. So we have a pretty big DSP library: digital down-converters, fast Fourier transforms, polyphase filter banks, multiply-and-accumulates, data reorders, all the typical radio astronomy building blocks. And we try to make these things configurable. It's a library of components that you can adjust parameters on, and they will consume more or less hardware resources depending on what you're trying to do. You can tweak things like the amount of bandwidth you're processing, whether it's real or complex numbers, how many streams you want to process in parallel, bit widths, and so on. Most of the stuff we do is integer, not floating point.
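Of the blocks in that library, the polyphase filter bank is the signature radio astronomy one, so here's a minimal NumPy sketch of how one works: a windowed multi-tap FIR front end folded down onto an FFT. The parameters (`nchan`, `ntaps`) are the kind of compile-time knobs being described; this is purely illustrative and is not the CASPER implementation, which runs on integer samples in FPGA fabric.

```python
import numpy as np

def pfb_channelizer(stream, nchan=64, ntaps=4):
    """Critically sampled polyphase filter bank for a real-valued stream.

    Compared to a bare FFT, the FIR front end gives flatter channels
    and far less leakage between them.
    """
    M = 2 * nchan                                 # real samples per output spectrum
    n = np.arange(ntaps * M)
    win = np.sinc(n / M - ntaps / 2) * np.hamming(ntaps * M)  # prototype filter
    nspec = len(stream) // M - (ntaps - 1)
    out = np.empty((nspec, nchan), dtype=complex)
    for i in range(nspec):
        seg = stream[i * M:(i + ntaps) * M] * win       # apply the FIR taps
        summed = seg.reshape(ntaps, M).sum(axis=0)      # polyphase summation
        out[i] = np.fft.rfft(summed)[:nchan]
    return out

# A tone placed between channels 10 and 11 stays confined to those bins:
tone = np.sin(2 * np.pi * 10.25 / 128 * np.arange(16384))
spectra = pfb_channelizer(tone, nchan=64)
```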
And this has worked really well for us. Instead of taking years to develop these systems, we built a dish monitoring system in a month, and an RFI monitoring spectrometer, the same one I was showing off yesterday, in one week. It took us six months to build the KAT-7 correlator, a whole lot less than the 10 years people were estimating. And this is the first science that came out of KAT-7. We ran it through a standard set of observations and compared them to existing known observations to see if the system actually worked, and the results looked pretty similar. It was quite good. That was one dish. Then, when we first connected two dishes, we tried to see if we could measure this time difference, and you get something called fringes. This is the wrapping of the phase of the difference between the two signals. And then when we had a whole bunch of these antennas together, the first four, we tried to make an image, and we managed to. That was what the image looked like with a single dish, and with multiple dishes you can actually zoom in on that piece and see that there are actually two things there, or you would if the light weren't shining on the screen. Then we tried to do a little bit of science with it, and it turns out we could do science from about 1982. So we weren't quite where we needed to be, but it demonstrated that the system worked. And you could actually pick out individual parts of, what was this, 1977. So we were not quite cutting edge yet. But that's where MeerKAT comes in. MeerKAT is the final telescope that our government is funding, our last step towards the Square Kilometre Array. We learned a lot from XDM and KAT-7, and we've implemented all of those fixes in MeerKAT. This will be the world's best telescope. We digitize at the feed now, so the dishes are no longer analog; they're digital, and they bring back Ethernet data straight from there.
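To illustrate what a fringe is: as the Earth rotates, the geometric delay between two dishes slowly drifts, and the phase difference measured at the observing frequency ramps and wraps around over and over. A tiny sketch with hypothetical numbers (a 1 km east-west baseline observed at 1.4 GHz; these are not KAT-7's actual figures):

```python
import numpy as np

C = 299_792_458.0            # speed of light, m/s
freq = 1.4e9                 # observing frequency, Hz (assumed)
baseline = 1000.0            # baseline length, m (assumed)
omega = 2 * np.pi / 86164.0  # Earth's sidereal rotation rate, rad/s

t = np.arange(0, 600.0, 1.0)                 # ten minutes, one-second samples
delay = (baseline / C) * np.sin(omega * t)   # slowly drifting geometric delay
phase = np.angle(np.exp(2j * np.pi * freq * delay))  # wrapped to (-pi, pi]
# Plotted against time, `phase` ramps and wraps repeatedly: the fringes.
```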
So the analog paths have been kept really short, and that gives you much better fidelity in your data. This time-domain data comes straight into a giant network switch, and we hang a whole bunch of processors off it. Those could be ROACHes or SKARABs or any CPU or GPU, whatever processor you want; anything you can plug into a network switch, you can use to process this data. And what's interesting is that this switch in the middle is multicast, so you can now have multiple devices processing the same set of data. That means you can have multiple science projects running at the same time, which will be a world first. I can't go into too much detail on MeerKAT because I'm kind of under gag orders. Those of you interested in this should keep an eye on the papers: in two weeks' time there's going to be a really big announcement. That's about all I can say, I'm afraid. No, no, no, nothing like that. We have not finished building MeerKAT; only the first 16 antennas are in place. There's not that much we can do with 16 antennas, but there's already been some new stuff there. Someone asked me yesterday how we time these things, and the answer is with hydrogen masers. Timing is critical: the thing we're trying to determine is the difference in the time of arrival of the signals at the various dishes, so we have very accurate clocks to do that. And as I say, all this stuff is open: the hardware, the software, everything. There are about 150 repositories in our GitHub account, lots of software projects and some hardware projects. The CAD designs, schematics, PCB layouts, everything is in there if you want to make your own hardware. All the supporting software is there as well. So that's the overview. Some time for questions.

Great talk. So yeah, I think you already raised your hand during the talk. The switch that you have there, is that a commercial switch, or do you make it yourselves? No, it's commercial.
We would kind of prefer to just buy everything off the shelf if we could, to be honest. The only reason we go and develop this custom hardware is that we can't find any other products out there that can do what we want. But in terms of switches and the computers that monitor the network and control these things, it's all off-the-shelf stuff wherever we can.

I'm very curious about your digitizer architecture. You said you're directly digitizing at L-band, and yet your sampling frequency looks like it's below Nyquist for that band. What are you doing? I'm just curious. And what's the bandwidth, I guess, is the other interesting question. So MeerKAT digitizes 856 megahertz at L-band. There are four different bands: UHF, L-band, S-band, X-band. X-band is the widest one; that's about six gigahertz of bandwidth. With Nyquist, what's important is not the highest operating frequency but the bandwidth. So for an 856 megahertz band, you sample at 1712 megahertz.

Any other questions? Thank you. For a non-radio-astronomer, what do you lose when you move from one very big dish of a certain size to multiple little dishes? I realize there's the practical side, that you cannot get the one and you need the other, but do you lose resolution or time? It's a good question, and I should have covered that. The answer is you're trading off mechanical cost for electronic cost. Essentially, with an array you have to multiply every baseline, and a baseline is a pair of antennas, so it's an order-n-squared processing problem. To build a small array of seven dishes is trivial. To build an array of 64 starts becoming interesting. To build an array like the Square Kilometre Array, with thousands of dishes, at the moment looks pretty impossible. We think we can do it, but it's a really hard problem. It will generate more data than the entire internet combined, something like 50 times over, that we have to process in real time.
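Two bits of arithmetic from those answers, spelled out: the Nyquist rate follows the processed bandwidth rather than the sky frequency, and the correlator workload grows with the number of antenna pairs. The 3000-dish figure below is just an assumed SKA-scale number for illustration:

```python
# Nyquist: the sample rate is set by the processed bandwidth, not by
# the sky frequency, so an 856 MHz band needs 2 x 856 = 1712 Msamples/s.
sample_rate_hz = 2 * 856e6

def baselines(n_antennas: int) -> int:
    """Number of antenna pairs the correlator must multiply."""
    return n_antennas * (n_antennas - 1) // 2

print(baselines(7))     # KAT-7: 21 pairs, trivial
print(baselines(64))    # MeerKAT: 2016 pairs, "interesting"
print(baselines(3000))  # SKA-scale (assumed count): 4,498,500 pairs
```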
So it's a proper challenge, and the data is streaming; there's no dead time. You can't record it and then go off and crunch on it later. But what you gain is being able to observe a bigger part of the sky: if you make a single big dish, the beam narrows, while with lots of little dishes the beam's quite wide. The cost is more electronics to process the signal.

Hi, maybe you can explain a bit about the openness of the data. So MeerKAT has a vision: we would like to make all of the data available to anyone and everyone. The idea is that there will be a web portal where you can see the last six months of observations, click on one of them, and it will ask you what format you want to download it in. It will even do on-the-fly conversion to put it in the appropriate package for you, and you can then sit and crunch on it on your laptop if you want. Yeah, that might be a bit of a tough task given the data rates, but if you're at a university or something and you've got access to a big cluster, you could do science, and we'll give you the data for free. The data will be broken up into chunks, maybe 20-minute observations or something, so they'll be a bit smaller, yeah.

Yes, you said that you couldn't use any cell phones or radios near the actual antennas, so what do you use on site to communicate with your teams? It's a real problem, actually. In many ways, we're our own worst enemy. We have little metal boxes with IP phones in them, with fiber going in, like telephone booths back in the day. For emergency communications, there are two-way radios, and we've selected radios that operate at a frequency outside of our band, but even so, just having them there is a problem, and you can't do science while someone's talking on one.
To try and put this in perspective, there's a great picture outside Green Bank where they show the signal coming in, and then there's all this garbage, and the signal carries on. When you're on the tour, you ask, what is this blip here? What happened? It all went screwy. They say, oh, that's when some tourist turned on their digital camera. So you really have to be careful, and the reason we're out there in the middle of nowhere is to discourage people from doing things like that. But the computers you use to do all this, are those shielded, or how does that work? They're buried underground in a bunkered facility, very James Bond-like.

There's another question there. The FPGAs: what do you like about them, and what would you change about them? Because presumably the commercial FPGAs are just general purpose. What would you like? So the vendors make lots of different families of FPGAs. They might have one that has lots of IO, targeting the networking market; Cisco and these big guys use a lot of those FPGAs. Then they have other kinds of FPGAs that are DSP-targeted, and those are the ones we use, since we do a lot of digital signal processing. The ratios of the primitives inside are different: they might have a bit more memory and some more multipliers, but less IO, for example. Historically, we've always just gone for the biggest one that had everything. But they're also starting to run into the problem the GPU guys hit a few years ago, where you're now limited by how much heat you can get out of the chip and how big you can physically make the packages. So we're starting to run into this issue where we're having to trade one thing off against another. We can't just have the biggest and most of everything, so now we have to select the FPGA very, very carefully. It turns out that they're still a good fit for us.
Obviously, DSP is a big target market for these guys; the military also use them a lot for radar systems and tracking stations and things, and their application is computationally very similar to ours. So we use parts that are designed for that kind of thing.

Are there any more questions? Yeah. So you've got a custom hardware platform, and you're talking about these things being around for a 30-plus-year design life. How do you keep up with the change? Your favorite FPGA is going to be obsolete next month, and rinse and repeat that for most of the other semiconductors. That is absolutely true. And it's a problem that other telescope facilities have solved by buying a lifetime supply of all the parts and just keeping them in a big storeroom. We don't like that. It's a valid approach, but it's very expensive. So what we're trying to do, and time will tell if this works, is keep industry-standard interfaces, with the idea of replacing what we call line-replaceable units. We're talking about ROACH-1, ROACH-2 and SKARAB boards today; in 10 years' time, it could be something completely different. But as long as you can still plug it into the network and still compile the design for that platform, in theory it's the same thing. If my laptop fails today, I might not buy this exact model, I might buy a different model, but if it can run all the same software, in our minds it's the same thing. As long as it still accepts the data through a network port, it's the same thing. So that's the vision.

Your last comment made me remember a medical imaging processing company, which will remain nameless, that has clinical trials that go on for five, 10, 15-plus years, but found that a lot of the software bugs turned out to be machine-dependent and architecture-dependent, which is unfortunate, but apparently happens.
And so when they tried to change to a newer generation of things and reproduce the software analysis, it turned out they got different results. So I don't know if that will affect you, or if you have test cases to check that your results are actually consistent across 10-plus years. We're the first telescope facility to do something called systems engineering, a very robust process to validate your system; a lot of military institutions use the same kind of approach. That gets us part way. The other way we're trying to deal with it is that by open-sourcing all of this, we have a lot of other people around the world using it, sometimes on different hardware platforms, and sometimes in very different systems, and they uncover bugs. No one's perfect; there are bugs and things. But the hope is that by diversifying as much as possible, you uncover these things more quickly.

So who is using the tools you are developing, the open-source tools? Is it other institutes, or amateur astronomers? Them as well. I titled the talk professional radio astronomy because one of these boards typically costs around $10,000 US. They're pretty pricey, so for an amateur astronomer to get hold of one and start using it is hard; they mostly can't afford it. But most of the big radio telescope facilities around the world have one of these boards somewhere in their system now, and so they use these tools. If you were to ask me to name some of them, I probably could, but I'd just embarrass myself, because I'd leave someone off the list and they'd be unhappy with me. There are probably about five or six dozen institutions around the world using them now, with various generations of the hardware deployed. A lot of the big universities also use them; in the States, students play with these things and develop algorithms on them. So it's even started to be used a little bit outside of radio astronomy.
So the boards are pretty generic, and they're very cheap compared to other FPGA boards like this.

Hi, thank you. I'd just like to know, you're growing a pool of talent and intellectual skills, et cetera. Where are you sourcing people from, and who are you cross-pollinating with? Where are people moving on to from you? That's a good question. As I say, about a third of our budget goes into developing people. We can't employ all of them: we fund about 400 students, but our engineering offices only employ about 200, so a lot of those students go into other fields. And we're not very picky about the students we fund in terms of what they research. Obviously, we prefer them to research something related to us, but we would fund students doing any science and technology or engineering-based project. So some of them work in completely different fields, and that's OK. As long as people are learning and excited about this stuff, we feel we're doing our job.

So, any more questions? I also don't see any questions from IRC. So let's thank the speaker again.