So welcome. Let me tell you something about Perl and the physics lab. First, maybe I should introduce myself a little bit. I'm mostly here at FOSDEM because I'm heavily involved in Gentoo Linux. Most of the time you will find me down at the Gentoo booth, where I'm involved in packaging all sorts of stuff, including Perl. As a Perl packager, I'm a little bit horrified at all these ideas about Carton and Docker and so on; I think specifying good dependencies should do the job. But anyway, in my main occupation, in my day job, I'm an experimental physicist working at a university, leading a research group. And we are using Perl to measure. That's what I'm going to tell you about today: this package here, Lab::Measurement.

So what are we working on? We are researching carbon nanotubes. I don't know if you've seen these pictures here before. There are many different forms of carbon. You can have diamond, where you have a three-dimensional lattice of carbon atoms. You can have these fullerenes. You can have graphite or graphene with these layer structures. And you can have carbon nanotubes. In a carbon nanotube, you basically have one of these graphene sheets wrapped up into a tube, and it's really one big macromolecule. You can have many different types of these. It's a bit hard to see: you can have multi-walled ones, where you have many such shells inside each other, and you have single-walled nanotubes, where you have just one. These are transmission electron microscope images. And you can describe them with different structures here; you can see that essentially this carbon sheet is wrapped up and glued together in different ways.

Why is this interesting? Well, you can do a lot of different things with them. On one hand, they are very light, and they are mechanically very stable, very tension-resistant.
And this is more or less the only material where people find that the numbers are right for building this hypothetical space elevator: basically, you need something that is strong enough to carry its own weight over such a long length. The other side is that it's a very good electrical conductor, so people have been thinking about using it for chip vias, for example, and it can be used for very fast transistor elements. The problem is the handling and building the devices, but there too, people are making good steps forward. This is kind of the first nanotube computer that was built. It's still far below anything x86-like, but anyway, the logic is essentially working.

What are we doing? We are looking at the mechanics and electronics of single such molecules. You can see here that you have two metal electrodes and one molecule that is lying across them in this electron microscope picture. The scale bar here is half a micrometer, 500 nanometers. So we have a single macromolecule, we have metallic contacts, and we have a gate electrode that is somewhere on the back of the chip substrate. You can draw this, for example, like a small circuit here. You have two voltages that are applied: one source voltage that is actually driving the current that flows through the tube, and one gate voltage that is just changing the electrostatic potential. And you measure the current that flows. That looks pretty simple. The challenges are: you want to measure at very small voltages, you want to measure very small currents, you probably want to measure at high frequencies, and you want to see tiny details. And that's why you go to low temperature. Temperature is for us a little bit like noise: the higher the temperature, the more smeared out the effects that we want to observe are. This is a logarithmic scale; physicists love to compare temperature scales and energy scales. So here is a temperature scale in Kelvin.
Each division here is a factor of 1,000. So we go from 1,000 Kelvin, which is about three times room temperature, down to 1 Kelvin. Somewhere in between, helium becomes liquid. And this is where our experiment actually sits, at 10 millikelvin, which is 0.01 degrees above absolute zero. That's something you can buy: the equipment to cool down this far is commercially available. It's called a dilution refrigerator; that's actually where part of my IRC nickname comes from. It uses an isotope mixture of helium-3 and helium-4 in liquid state as a coolant and goes down to, in principle, arbitrarily low temperature. It's just that at some point you reach the equilibrium between the thermal isolation from the outside and the cooling power, and then you have a minimum. But it's still a quite complex piece of equipment, and you need the knowledge and the personnel to handle it.

And with that, we end up — as soon as my laptop manages to draw the picture — in our millikelvin lab, where you can see all these equipment pieces assembled together. You have the actual cryostats; they are sunk into the floor here and actually go about three meters down. And you have all the control electronics, which in part controls the cooling, and in part controls the actual experiment.

What type of equipment? Well, let's look at a simple measurement. I've shown you this picture here already: we apply two voltages and measure one current. So what do we need? We need two highly stable voltage sources, this one here from a company called Yokogawa. We need to amplify the current that comes out of the device; that's done by some analog piece of equipment, so I haven't even made a picture, because it's not interesting for us here. And we need a digitizing multimeter that basically reads out and digitizes the values and delivers them to the computer; that's what we have down here. In general, we can divide the equipment, more or less by rule of thumb, into three classes. There's the fairly simple part.
These voltage sources, for example. Fairly simple is relative; my personal classification is that the manual is smaller than 200 pages, and the programming part of the manual is smaller than 50 pages. Typically they have one main function: for example, they apply a voltage, or they measure the level of a liquid. Examples would be some cheap stepper motor for 20 euros that you buy at Conrad, or one of these voltage sources, which costs about two and a half thousand euros, or a power meter, or a liquid level meter. You can't actually read what's here; this is an excerpt from a manual where the command syntax for this thing is described.

Level two would be complex. Complex usually means there's an ARM Linux system somewhere in there that's controlling it. The manual is somewhere below 1,000 pages, the programming part somewhere below 250 pages. And you have one main function with a lot of details. For example, you have something like a radio-frequency source that can do amplitude modulation, pulse modulation, frequency modulation, anything like that. Or you have some temperature measurement equipment that measures five or six temperature channels and can regulate. Or a magnet power supply for a big fat superconducting magnet, or a spectrum analyzer like this one. That's level two.

And then there's also level three: manual bigger than 1,000 pages, programming part larger than 250 pages, price insane. One typical example is this cooling equipment, which does not really have a 1,000-page manual, but it should have; if you count every page with a mistake three times, then you get pretty close to the 1,000 pages. Another is a vector network analyzer from Rohde & Schwarz here, which basically measures transmission properties in the microwave regime. So you send a high-frequency signal in.
You get a high-frequency signal out, and you see what transmission amplitudes and phases you get. This is, again, a piece of the manual, and what you can't read is that the programming part here is somewhere around 500 pages.

How do you program this stuff? Well, there are different ways to get access. You can often connect the devices directly to some network. It's not always a good idea to connect them to the internet: at least one piece of equipment that I saw once, which cost somewhere around 40,000 euros, had an open telnet port where you immediately got a root shell on the internal ARM Linux system without any password. That was luckily fixed in the next firmware release. So use a separate network for that. You can control some of the older equipment with serial cables. Some devices have a USB port that acts as a USB serial interface. Then there's GPIB, the General Purpose Interface Bus, or IEEE 488. That's something people outside the lab usually don't know. It's basically a very fat cable, about a centimeter in diameter, with a fat plug here: a parallel bus, 8 bit. Depending on how old your equipment is, it can be pretty slow or pretty fast. Up to 15 devices on 20 meters of cable, with arbitrary topology, so you can make a bunch of spaghetti out of it. But it's pretty useful because it handles part of the equipment addressing and part of the communication protocol. And more modern variants are also built on top of that: you have USB Test & Measurement (USBTMC), which is basically just another class of USB device that more or less emulates these protocols over USB, and you have VXI-11, which emulates the same protocols over TCP. So they are, in everything but the hardware, pretty much similar.

What do the commands look like? Well, luckily, most of the equipment just works with ASCII, which makes life pretty simple. You can talk to the devices, essentially, with a terminal emulator.
And everything that conforms to IEEE 488.2 understands a few minimal, generic commands. For example, *IDN?, where you basically just tell the instrument: identify yourself. And you get a response like this here — this would be a Lake Shore temperature controller: the manufacturer, the model number, the serial number, and so on. Everything that goes beyond this primitive level of "identify yourself" or "reset yourself" can vary. You have non-standard stuff, which is basically either old or highly specialized equipment. You have the not-quite-standard stuff — some companies really try hard and still get it wrong. And you have a language called SCPI, Standard Commands for Programmable Instruments, which defines a rather complex syntax that is fairly readable. So you have something like "system communicate serial baud 2400" — you can guess what it does: it sets the serial communication to 2,400 baud. Or "measure voltage DC", which measures a DC voltage. Or "source frequency start 100, stop 200", which means sweep a frequency from 100 hertz to 200 hertz. That's actually doable.

So how do other people solve this? How do other people control the equipment? There are many different solutions; I'm not going into all of them, I'm mainly going to show you one, which is kind of the default solution: National Instruments LabVIEW. It exists even for Linux, which is astonishing in a way. And that's what every hardware vendor provides drivers for — though then, the "for Linux" part already becomes a bit more questionable. Programming in LabVIEW basically means you wire up the flow diagram of your code. So it's really graphical programming. And you can easily put together graphical interfaces where you have input fields and output fields and so on. Actually, I think LabVIEW is a very nice program for simple stuff. It's good for making a quick prototype and trying something out.
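As a small illustration of how uniform this identification step is, here is a sketch of splitting an IEEE 488.2 *IDN? response into its four standard fields (manufacturer, model, serial number, firmware version). The example response string is made up for illustration; function and field names are my own, not part of any library mentioned here.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Split an IEEE-488.2 "*IDN?" reply into its four comma-separated fields.
sub parse_idn {
    my ($response) = @_;
    chomp $response;
    my ($vendor, $model, $serial, $firmware) = split /,/, $response, 4;
    return {
        vendor   => $vendor,
        model    => $model,
        serial   => $serial,
        firmware => $firmware,
    };
}

# Example reply (invented) in the style a temperature controller might send:
my $info = parse_idn("LSCI,MODEL340,340A1B,061407");
print "model: $info->{model}\n";   # prints "model: MODEL340"
```

The same four-field structure comes back from every 488.2-conforming instrument, which is what makes a generic identification layer possible at all.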
The problem is, when things get a little more involved, you end up with something like this here. And the fact that this picture actually comes from the National Instruments website tells you that they have realized the problem — this is from a page called, I think, something like "LabVIEW rookie mistakes". The problem is, I've seen code like that coming from a company for some piece of equipment, and we had to work with it and modify it. So yeah, it's not so much fun. You can do subroutines and everything like that, so there are ways to do it much better. But why do it this way when you can actually write a script?

So what do we need? We want to be operating system agnostic: what we do needs to work with Windows and with Linux, because at least I want to work with Linux and some of my colleagues don't. It needs to be transport and driver agnostic. The idea is, if you have some piece of equipment, like these voltage sources, that can be plugged in via GPIB or via Ethernet or via USB or via serial, then it shouldn't matter which type of cable you use; it should work just the same. It should support background operation, for example cron jobs: we want to be able to set up some Linux box that monitors a piece of equipment, reads out values every 10 minutes, and sends an email if something goes wrong. And we want support for foreground operation: especially, we want to be able to run some long and complex measurement, watch the result live to see when something goes wrong, and hit Ctrl-C. Of course, we want to be able to control our equipment in as much detail as possible, and on the other hand, we also want it to be pretty easy to set up some sort of default measurement.
And especially since not everybody who works in our lab is a programmer and a Perl expert, we want it to be fairly easy to extend an instrument driver and to create a new measurement just from a template. Of course, what we wrote didn't happen overnight. This project started over 10 years ago and has grown and evolved a lot in the meantime. How it looks right now is more or less like this. We ended up with some sort of layer structure. Everything here in this gray box is the actual Lab::Measurement distribution. We start up here with the actual hardware. Then we have hardware drivers or library bindings: the serial port, for example, that's just the Linux kernel, the operating system; Linux-GPIB is a set of kernel modules that is also a separate distribution; and this one is a commercial library. Then we have a bus layer, a connection layer, an instrument layer, and the so-called express layer — I'm going to go through these in detail on the next slides. And at the very bottom, your actual measurement script. Everything in this box here is pure Perl, so no compilation is involved anymore.

Let's look at this in detail, starting with the upper part. We have these hardware drivers; as I mentioned, this is outside the actual distribution. Part of it is the operating system, for example the serial port. Linux-GPIB is a distribution of Linux kernel modules that comes with a library to do the communication and with bindings for many different languages — PHP, Python, but also Perl. Some bindings we also write ourselves. There's a Lab::VISA package which binds to the National Instruments VISA library. There's Lab::VXI11, which basically implements that protocol on top of TCP. And there's USB::TMC, which uses libusb to implement the USB Test & Measurement protocol. There is also a Linux kernel driver for that protocol, but it's a bit inflexible.
The bus and the connection level encapsulate the transport, more or less — the command transport. Bus means we have one object per host adapter. For GPIB, for example, you have one object corresponding to the adapter card that goes into your PCI Express slot and connects to the cables. That object also has global properties which are not specific to the instruments connected to the bus, but to the entire connection system. The connection layer, on the other hand, is one object per attached instrument. It's like a TCP connection that goes from point A to point B: you have one connection per instrument that is attached to the cable. The connection layer is actually only a very thin interface; if you look at the code in the legacy version, most of the stuff is implemented on the bus layer.

Because nobody here is familiar with the specialized hardware, I picked the socket connection, for TCP, as an example. Here we have the bus, and the bus defines all the properties that correspond to everything you can do with a TCP socket — remote port, remote address, local port, local address, and so on, but also things like timeouts. The bus creates the connection objects and keeps track of them, and the bus also handles the actual communication. So when a connection wants to talk to somewhere, it hands the text over to the bus, and the bus does the actual hardware access. The connection layer itself is rather simple. OK, I've shortened this a little, but the socket connection module is really not much longer than this. It uses a generic socket, it uses the bus, and it uses the generic connection package, but otherwise it just uses the inherited subs.
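The bus/connection split described above can be sketched in a few lines of plain Perl. These are made-up class names, not the real Lab::Measurement classes: the bus owns the transport and does the actual I/O, while a connection is one thin per-instrument handle that delegates to its bus. A loopback bus stands in for real hardware here.

```perl
#!/usr/bin/perl
use strict;
use warnings;

package My::Bus::Loopback;
# One object per host adapter; global settings (e.g. timeouts) live here.
sub new { my ($class, %args) = @_; return bless { %args, log => [] }, $class }

# The bus creates the connection objects and keeps track of the addressing.
sub connection {
    my ($self, %args) = @_;
    return My::Connection->new(bus => $self, %args);
}

# The bus does the actual "hardware" access; this loopback just echoes.
sub query {
    my ($self, $conn, $cmd) = @_;
    push @{ $self->{log} }, [ $conn->{address}, $cmd ];
    return "echo: $cmd";
}

package My::Connection;
# One object per attached instrument; only a thin delegation layer.
sub new { my ($class, %args) = @_; return bless {%args}, $class }

sub query {
    my ($self, $cmd) = @_;
    return $self->{bus}->query($self, $cmd);
}

package main;
my $bus  = My::Bus::Loopback->new(timeout => 1);
my $conn = $bus->connection(address => 12);
print $conn->query('*IDN?'), "\n";   # prints "echo: *IDN?"
```

The point of the split is that a second connection on the same bus shares the adapter object and its global settings, while all per-instrument state stays in the thin connection.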
Things get more interesting when you come to the instrument layer, because the instrument layer is the first layer where you, as a user, can actually do something. We have a generic Lab::Instrument class, and then subclasses of it which correspond to specific types of equipment. Specific types of equipment means, in this case, that the subclass implements the language the instrument speaks: it generates the specific queries to the instrument, it parses the specific responses that the instrument sends, and it can implement a nice interface for your end users. Generating the instrument object is rather easy. There's a function in the Lab::Measurement package where you can generate such an instrument object by passing the class name and then passing further parameters. In this shorthand here, we get, for example, an instrument of the type OI_Triton that is automatically generated with a connection of the type socket. That calls up the whole underlying stack: you get a bus of the type socket, and you get a connection of the type socket. In this case it's generated with default parameters, like the default remote address and the default port, which are already adapted for this specific type of instrument. Or here, a network analyzer: the module name of the instrument, RS_ZVA, the connection type, LinuxGPIB, and here the address on the GPIB bus of the particular device that we want to talk to. It generates a connection, and a bus if the bus doesn't exist yet, and we are ready to talk to it.

How does working with an instrument look? Well, this is actually a fully functional script. We pull in the instrument module. We need an address on the GPIB bus, where this instrument is located, so we get that from the command line. And we generate the object corresponding to the instrument.
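The "pass a class name, get an instrument object" pattern described above can be sketched as a small factory function. This is hypothetical code, much simplified compared to the real Lab::Measurement function: it builds the full class name, loads the driver module at run time if needed, and constructs it with the given parameters. The `My::Instrument::Dummy` driver is defined inline so the sketch is self-contained.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Factory: short driver name in, constructed instrument object out.
sub make_instrument {
    my ($short_name, %args) = @_;
    my $class = "My::Instrument::$short_name";
    unless ($class->can('new')) {
        # Driver not in memory yet: translate Foo::Bar to Foo/Bar.pm and load it.
        (my $file = "$class.pm") =~ s{::}{/}g;
        require $file;
    }
    return $class->new(%args);
}

# A dummy driver, defined inline so no external module is needed:
package My::Instrument::Dummy;
sub new     { my ($class, %args) = @_; return bless {%args}, $class }
sub address { return $_[0]->{gpib_address} }

package main;
my $inst = make_instrument('Dummy', gpib_address => 17);
print $inst->address, "\n";   # prints 17
```

In a real stack, `new` would additionally create the connection (and the bus, if it does not exist yet) from the same parameter hash, which is exactly the cascade described above.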
This is, in this case, a Stanford Research Systems lock-in amplifier, which is connected via GPIB, with the address as parameter, at the first GPIB board in the PC. And then we read out from there the output amplitude and the output frequency, and we get the input amplitude and the input phase. It doesn't matter now exactly what these values mean; the point is that you get ready-made functions for getting the various device parameters. Obviously, you can do this in such a small program, but you can also write a big and complex Perl script, a big package that does much more complex things with the equipment.

What do these functions do internally? Well, they are not really complicated, because this is the level where people may actually want to modify and enhance things, so we want to keep it as simple as possible. If you look at this get_frequency function here, it does nothing but call the query function of the Lab::Instrument class. That sends this string here to the instrument, asking for the frequency, gets a response, and that response is returned. The query function, which lives in the Lab::Instrument class, passes the string to the connection with the proper parameters; the connection passes it to the bus with the proper parameters, and that goes to the hardware. Same with get_amplitude: also just one command string, documented in the hardware manual of the equipment. And similarly the R-phi readout here, where we get two values at the same time and then return them as an array. As you can see, this is kept deliberately simple, so people who are not Perl specialists can improve it and extend it by simple copy and paste. Of course, there are more complicated cases, and there are also cases where we can use more language features or more programming features.
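The thin driver methods described above can be sketched like this. This is simplified, hypothetical code, not the real Lab::Instrument classes: each instrument method sends one command string from the hardware manual and parses the reply, and a stub connection stands in for the hardware (the command strings follow the style of a lock-in amplifier manual, and the fake replies are invented).

```perl
#!/usr/bin/perl
use strict;
use warnings;

package My::StubConnection;
# Pretend hardware: returns canned replies for two known commands.
sub new { return bless {}, shift }
sub query {
    my ($self, $cmd) = @_;
    return "1234.5"    if $cmd eq 'FREQ?';
    return "0.02,45.0" if $cmd eq 'SNAP?3,4';
    die "unknown command $cmd";
}

package My::Instrument::Lockin;
sub new { my ($class, %args) = @_; return bless {%args}, $class }

# Generic query: hand the string down to the connection (and thus the bus).
sub query { my ($self, $cmd) = @_; return $self->{connection}->query($cmd) }

# One documented command string, one value back:
sub get_frequency { my $self = shift; return $self->query('FREQ?') }

# Two values in one query, returned as a list:
sub get_r_phi {
    my $self = shift;
    return split /,/, $self->query('SNAP?3,4');
}

package main;
my $li = My::Instrument::Lockin->new(connection => My::StubConnection->new);
my ($r, $phi) = $li->get_r_phi;
print "f=", $li->get_frequency, " r=$r phi=$phi\n";
```

Keeping each method down to "one command string, one parse" is what makes the copy-and-paste extension by non-specialists workable.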
For example, there are many different types of voltage sources from different companies, and they all have roughly the same functions. So we can make a class called Source that provides a common interface, and then we can automatically provide more complex functions. For example, we could say we have a very sensitive chip, and that chip doesn't survive if the voltage suddenly changes by more than a certain step size, so we enforce that the voltage only changes in a smooth way, with a maximum speed and a maximum step size. All this is implemented once in the source package, and our actual hardware addressing modules don't have to take care of it anymore. Similar modules are in preparation for other equipment types, for example superconducting magnet power supplies, but there things can easily get rather complex, and we do part of this only when we really have the need for it.

And now for something more complicated: now we want to do a real physics lab measurement. Let's get back to the simple circuit that we had at the beginning. We apply two voltages, we measure one current. We could also measure more things, so let's generalize this a little. We have nested loops over two or more control parameters — in this case the two voltages — and at each point that corresponds to a pair of these values, we want to measure one or more parameters: for example, the current, or the noise spectral density, or something like that. We want to watch the progress of the whole thing, so we get some output that you can't see here, and we want to plot the data in real time, like this here. How do we do that? For this, we have the express layer. Let's just walk through one of the measurement scripts rather quickly. Part of it you already know from the previous slides; we pull in Lab::Measurement.
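The protected voltage stepping that the common source class enforces, as described above, could be sketched like this. This is a hypothetical helper, not the real source-class code: instead of jumping straight to the target voltage, it generates intermediate values so that no single step exceeds a maximum step size.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(ceil);

# Return the list of intermediate voltages from $from to $to such that
# no step is larger than $max_step (the last value is exactly $to).
sub ramp_steps {
    my ($from, $to, $max_step) = @_;
    my $n = ceil(abs($to - $from) / $max_step);   # number of steps needed
    return ($to) if $n <= 1;
    return map { $from + ($to - $from) * $_ / $n } 1 .. $n;
}

# A real driver would also sleep between steps to limit the sweep rate.
my @steps = ramp_steps(0, 1.0, 0.3);
print join(', ', map { sprintf '%.3f', $_ } @steps), "\n";
# prints "0.250, 0.500, 0.750, 1.000"
```

Because this lives in the shared base class, every hardware driver that inherits from it gets the protection for free.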
Here we just define some constants that correspond to our hardware and our equipment. This here you've already seen as well: here we generate the instruments. We have two voltage sources of the type Yokogawa GS200, and we have one multimeter from HP — or Agilent, or Keysight now. And then comes the new stuff: we define sweeps. A sweep basically means we go from voltage A to voltage B in steps of this size, with this speed. Here, for example, we go stepwise from minus one volt to plus one volt with this step width. There are some more parameters: we jump from point to point, so there is no actual smooth sweep between the points, and before we start, we wait three seconds. This is a similar declaration: we go from minus 0.5 volts to plus 0.5 volts with another step width and another speed. Then we define a data file — for example, tab-separated columns with one voltage, a second voltage, and the current: three columns. And then we define a plot. That plot has on the x-axis the one voltage, on the y-axis the other voltage, and the color bar is the current. After every block — meaning after every trace of one of the voltages — the picture is refreshed.

Next, we need to provide measurement instructions: what should be done at every point? For that, we define a sub that is called at every point. What does it do? It reads out the value from the first voltage source, it reads out the value from the second voltage source, and it reads out the multimeter. About this caching stuff here: sometimes hardware access can be slow. These voltage sources especially can set the voltage very fast, but reading the voltage back out is unfortunately a very slow operation. So we provide an option where the software keeps track of the last set value, and when you want to read it back, you just read it from a cache in the PC.
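The nested-sweep structure that the express layer wires up can be sketched in plain Perl. This is hypothetical code, much simpler than the real express layer: an outer and an inner sweep, a per-point measurement step, and tab-separated output lines; the fake `measure_current` stands in for reading the multimeter.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Generate the list of sweep points from $from to $to with width $step.
sub sweep_points {
    my ($from, $to, $step) = @_;
    my @pts;
    for (my $v = $from; $v <= $to + 1e-9; $v += $step) { push @pts, $v }
    return @pts;
}

# Stand-in for reading the multimeter; here just a deterministic fake.
sub measure_current { my ($bias, $gate) = @_; return $bias * 1e-6 + $gate * 1e-7 }

my @lines;
for my $gate (sweep_points(-1.0, 1.0, 0.5)) {          # outer sweep
    for my $bias (sweep_points(-0.5, 0.5, 0.25)) {     # inner sweep
        my $i = measure_current($bias, $gate);
        push @lines, join("\t", $gate, $bias, $i);     # three tab-separated columns
    }
    # after each inner trace ("block"), the live plot would be refreshed
}
print scalar(@lines), " data points\n";   # prints "25 data points"
```

The real express layer adds the waiting times, the instrument caching, and the live plotting around exactly this loop structure.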
You don't actually access the hardware anymore. Each of these triples — two voltages, one current — is logged into the file. In the end, we wire this up: we connect the data file with the sweeps, so you get these nested loops, and then we start it. It starts to run, and you get a plot like this. Well, precisely, you get a plot like this when nothing is connected, because this is just noise, but never mind. The plotting itself is done by calling gnuplot as an external program, piping the commands to it and having it read the data file, which works in a fast and robust way. What do we get in the end? You can't see this here: you get a copy of the measurement script for archival purposes, you get the actual data file in a subdirectory, you get a summary of the device configuration, and you get the last state of the live plot as a file.

As I said, this code is, at least in its very beginnings, over 10 years old, so right now we are looking into modernizing it, and that turns out to mean rewriting it with Moose. At the same time, we need to keep it working, because we have it running all the time, so we need to rewrite it as stepwise as possible. Parts are already included in the current release, but we need to keep compatibility as good as possible. The bus and connection functionality is mostly done — this is now one combined layer. The generic instrument functionality is also mostly done. The instrument porting and subtyping is partly done, and in some parts there is already more code in the new version than in the old one. The express layer is work in progress at the moment. Of course, once we have Moose, we can use the Moose functionality: we can, for example, use roles for types of instruments. What we did before with the source class, we can now do with Moose roles. Or we can use roles for language standards, so that we have one role that implements part of the SCPI standard for addressing the devices.
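Driving gnuplot as an external program, as described above, can be sketched like this. This is hypothetical, much simplified code; the file name and plot style are made up. The command text is built in a separate function so it can be inspected without a running gnuplot.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build the gnuplot command block for a three-column data file:
# x = first voltage, y = second voltage, color = current.
sub plot_commands {
    my ($datafile) = @_;
    return <<"END";
set view map
set xlabel 'gate voltage (V)'
set ylabel 'bias voltage (V)'
splot '$datafile' using 1:2:3 with points pointtype 5 palette
END
}

# Pipe the commands to gnuplot; this requires gnuplot to be installed,
# so the sub is only defined here, not called.
sub live_plot {
    my ($datafile) = @_;
    open my $gp, '|-', 'gnuplot', '-persist'
        or die "cannot start gnuplot: $!";
    print $gp plot_commands($datafile);
    close $gp;
}

print plot_commands('measurement.tsv');
```

Refreshing the live plot after every block then just means sending a fresh command block down the same pipe while the data file grows.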
And then, what we are also doing: the plotting and the data handling now go via PDL, so this gets a bit better encapsulated. And with that, I'm getting to the end. I'm not the only person involved in this. Daniel started it like 11 years ago. Florian introduced the layer model. Christian Butschkow and Stefan Geissler made the express layer for the fast creation of scripts. Charles Lane contributed a lot of code for oscilloscopes; they use this at a particle accelerator. Simon is the guy who is now doing all the work of porting it to Moose. And of course, some more people were involved; all the details are in the Git log. And with that, I'm finished. Thanks for your attention.