Welcome to Building Maintainable Command Line Tools with mruby. I'm Eric Hodel, and this talk is about working with mruby and mruby-cli. About me: I'm a Ruby committer, a RubyGems committer, an RDoc committer, and the author of many gems. I'm also on vacation from most of my open source work; instead of working on Ruby in my spare time, I've spent a lot of time making shelves like these. The far one there is still a work in progress; I'm still trying to put on a varnish to fill in those gaps so it's smooth. And I work for Fastly, a real-time content delivery network. This is our amazing website, Fastly.com, with Gordo, one of the dogs of Fastly. Fastly supports many open source projects, including hosting the ruby-lang.org website and Ruby version downloads, and it supports RubyGems. Last month, Fastly served 425 terabytes of data and 8 billion requests supporting RubyGems, for free. I'd also like to highlight the work of David Radcliffe and the rest of the RubyGems.org team on speeding up gem downloads, especially for users outside of the United States; they should have a blog post about that work real soon now. And of course, you can follow Fastly on Twitter.

So, part one. My exploration of mruby and mruby-cli started with a web API wrapper. We're building a new product at Fastly with new APIs, and I wanted an easy way to interact with those APIs so that I could demonstrate that they were usable and that they behaved in a way that made sense. Why not choose CRuby to write the wrapper? Well, with CRuby, you have to install the correct Ruby version depending on which Ruby features you want to use. Then you need to install the gems you're going to depend on. And then you have to keep both the Ruby version and the gems up to date for as long as you're supporting the tool. And mruby is very similar to CRuby.
So we can start with a prototype in CRuby, port it to mruby, and then continue our maintenance in mruby. What about mruby makes it more suitable than CRuby? mruby is a lightweight Ruby implementation: you only use as many of the features of Ruby as you need, which keeps the language and the binary size smaller. It's also designed for embedding; you can run Ruby on low-resource hardware or embed it into other software without all the hassles of embedding CRuby. Like CRuby, mruby has libraries, and it calls them gems. mruby ships with a limited standard library, so you can use a gem to get an Array with all of the features of CRuby's. There are also standalone mruby C libraries, some of which are C extensions that wrap C libraries, like the mruby-curl gem. At past RubyKaigis, there have been many other talks on mruby if you want to see other applications, and you can find recordings of these on the RubyKaigi website. An especially interesting one is the one at the top, about making a high-tech seat in mruby; it's about a bidet built with Lego Mindstorms. There are also two talks on mruby at this year's RubyConf. Both are in ballroom A: one is in the first session after lunch today, on evaluating Ruby without Ruby, and one is the very first session tomorrow, about mruby on small devices.

Since mruby is compiled, unlike CRuby, how do you use it? The first thing you do is edit your build_config.rb. Unlike CRuby, where you can use any gem at any time, in mruby you must pick in advance: in the build config, you pick which mruby gems you want to use. You also choose which compiler to use; mruby-cli uses this for cross-compiling to all the different platforms. And you can also specify library and include paths in case you need to link with other C libraries. If you're writing an mruby gem instead of embedding mruby, you'll fill out an mruby gem specification.
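As a rough illustration of the build configuration described above, a minimal build_config.rb might look like this sketch; the toolchain, gem list, and paths here are illustrative, not a complete configuration.

```ruby
# build_config.rb -- minimal sketch; toolchain, gems, and paths are
# placeholders, adjust for your own project.
MRuby::Build.new do |conf|
  conf.toolchain :gcc                  # choose which compiler to use

  # unlike CRuby, you pick the gems you need in advance, here
  conf.gem core: 'mruby-print'
  conf.gem mgem: 'mruby-curl'          # a C extension wrapping libcurl

  # library and include paths, in case you link with other C libraries
  conf.cc.include_paths   << '/usr/local/include'
  conf.linker.library_paths << '/usr/local/lib'
end
```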
mruby-cli applications use the mruby gem specification for specifying the executable name and the test files. The mrbgem.rake file is where this mruby gem specification lives. Like RubyGems' Gem::Specification, it contains the gem metadata, the dependencies, and the list of binaries: information like the name, version, license, test files, and so on. You can also specify the gems you depend on and the binaries that get created. After you have these files configured, you run rake to build mruby and all the executables you need. Once rake is complete, you'll see several executables in the bin directory. If you're using mruby-cli, you'll see the command line tool you named in the gem specification. You'll also have mruby and mirb executables with all the gem dependencies you need compiled in, which you can use to run one-off tests. And you'll have an mruby test runner if you've enabled testing for this build.

The goal of mruby-cli is to allow Rubyists to build single-file executables in a familiar environment. Last year, Terence Lee and Zachary Scott gave a talk about mruby-cli that showed the basics of the tool and how it works. The very basic description of mruby-cli is that you can write Ruby and release software on Linux, on OS X, and on Windows. The typical development loop with mruby-cli is: write some mruby, compile the code, run the tests, and go back to the start if your tests don't pass. Once you have enough of your features built and the tests passing, you release the executables. When I first heard about mruby and mruby-cli, I thought it sounded pretty easy, but I was sure there would be some challenges involved in using a different Ruby implementation. The first is that mruby is not the same as CRuby; the implementation doesn't support everything I'm familiar with doing in CRuby.
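The mrbgem.rake described above might look like this sketch; the name, author, and dependency list are placeholders (the dependency syntax follows the mruby-cli generated template).

```ruby
# mrbgem.rake -- sketch of a gem specification for an mruby-cli tool;
# names and metadata here are placeholders.
MRuby::Gem::Specification.new('my-tool') do |spec|
  spec.license = 'MIT'
  spec.author  = 'Your Name'
  spec.summary = 'Example command line tool'
  spec.bins    = ['my-tool']     # the executable that gets built in bin/

  # gems you depend on; easy to forget the stdlib extension gems
  spec.add_dependency 'mruby-print',     core: 'mruby-print'
  spec.add_dependency 'mruby-array-ext', core: 'mruby-array-ext'
end
```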
The first problem I had was the modular standard library. mruby's core library has classes like String and Array and Hash, but they only support a minimal set of methods. For the full CRuby experience, you have to add extension gems like mruby-array-ext. And it's very easy to forget to do this if you're developing your own mruby gem: if you forget to add these extension gem dependencies, the gem won't work in another project, so it might work in one spot and not another. mruby is also a smaller language outside the standard library. There are no predefined globals, such as the load path, since everything is compiled in, nor any of the globals inherited from Perl or shell scripts. Some libraries you might want to port from CRuby may depend on these, and you'll have to work around them. I also had a problem with heredoc support when porting a library from CRuby, and I haven't yet reproduced it on its own, so I still need to file a bug for it. And while you can use keyword arguments to call methods, you can't define methods that automatically check keyword arguments like you can in CRuby. If you haven't explored keyword arguments yet, there's a talk today in the second session after lunch in ballroom C on keyword arguments; check that out. Also, some gsub replacements don't work in mruby, so you may need to change your regular expressions. Backtraces in mruby also seem to be less useful than in CRuby; I feel like I did more exploring in the C source to determine why an exception was raised, though I'm not sure whether that's due to unfamiliarity with mruby or due to other reasons. There are many pure-Ruby libraries already written for CRuby that you'll want to use in mruby to make your life easier, and overall porting them is a pretty straightforward process, provided you're aware of the restricted syntax.
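One porting adjustment this implies: since mruby (at the time of this talk) lets you call methods with keyword arguments but not define automatically checked keywords like `def list(service:, version: 1)`, a port may need to collect `**opts` and check them by hand. A hypothetical sketch, with invented names:

```ruby
# Manual keyword-argument checking, as a port to mruby might do it.
# The method name and keys are invented for illustration.
def list_conditions(**opts)
  # manual presence check instead of a required keyword
  service = opts.fetch(:service) { raise ArgumentError, 'missing keyword: service' }
  # manual default instead of `version: 1` in the signature
  version = opts.fetch(:version, 1)
  "conditions for #{service} v#{version}"
end
```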
Unfortunately, though, porting the tests for these libraries is much harder due to differences in the way tests run, and I'll cover that later in this talk. The existing gems for mruby feel a lot like working with libraries from the Ruby 1.6 days, and I think this is because there just haven't been enough people working in mruby to build up libraries that apply well to many different types of problems. An mruby library often has a very narrow focus, so you'll either have to do a bunch of hunting to find the library that fits your use case, or you'll need to submit patches to make the library work for you. So far I've submitted patches to all of these libraries; for some libraries I've sent multiple patches. They're usually pretty small, which is nice.

The last challenge I'll cover for mruby is getting the build information out of mruby. Since mruby is compiled, it needs to link in C libraries to compile successfully. With mruby-cli, you're also cross-compiling these C libraries to many different platforms, so you need to use the correct C cross-compiler. Unfortunately, it's hard to extract the cross-compiler information out of the mruby build system without loading all of its rake files, which is pretty slow. I think improving this would make mruby's build system more flexible, but I haven't done any work on it yet.

mruby-cli, like mruby, was a new tool for me, so it took some time to learn to use it the right way, and here are some of the larger challenges I had with it. The first one was Docker. The mruby-cli build system is built on top of Docker, and one of the really great things about this is that it's ready to cross-compile to Linux, to OS X, and to Windows without any extra setup. Unfortunately, Docker is kind of clumsy in the way mruby-cli uses it.
Typically, Docker is used to run a long-lived service, but mruby-cli only needs to keep the container running just long enough to finish compiling or to run the tests. Still, I think out of all the options, Docker is the best one available, because there really is no extra work to get started with building command line tools. To start a build of your command line tool, you run `docker-compose run compile`, and this is far too long to type, even if you use your shell history well. When there's a problem with the compile, it's hard to debug from the outside: you have to start a shell inside of Docker, figure out which commands were being run, and then run those to reproduce the problem before you can start to fix it. When you're working in this Docker shell, you don't have your familiar development environment, and there's no shell history from the last time you were doing this, so it's harder than usual to get things fixed.

The biggest challenge I've had with mruby-cli was its build system. Most of these problems came from the way the tasks were organized: the tasks for building your executable and the tasks for building mruby were all mixed together into a single rake command. This made it hard to add your own tasks and have them run in the right spot. For example, before compiling the mruby-curl extension, you need to have the curl C library installed for every platform you're cross-compiling to, and by default that's very difficult. Due to these difficulties, I made some improvements to the build tasks. The first was to build mruby separately from all the other work that must be performed inside Docker; this means I can cross-compile the C libraries ahead of time. The other improvement was to have a rake task that exists outside Docker and starts Docker for me. This allows me to perform extra tasks that don't need Docker to run and are easier for me to debug. So the new build system runs rake three times.
First the outer rake tasks run, and once they're done, they start up the inner rake tasks inside of Docker. The inner rake tasks perform things like cross-compilation or any other setup that can only occur inside Docker, and when these are done, mruby's own rake tasks run, which build your command line tool. Since the outer tasks don't need Docker, I use them to do things like download and unpack mruby to get it ready to compile. There are also some global test setup tasks, for reasons I'll cover later. And finally, there are release tasks to publish your finished executable. The inner tasks that run inside Docker cross-compile the C libraries you have configured and then invoke rake again to compile mruby and run the tests. There are some additional benefits to my changes to the build system. It's now much easier to add hooks for custom tasks, since there are well-defined tasks for the different phases of the build. I also made `rake test` faster by only compiling for the host platform when running tests; there's no need to build every platform when you can only run tests on the host Docker platform.

It's also pretty likely you'll need a C library as a dependency of an mruby gem in your project, and there are two options for including it in your command line tool. The first is static linking, which embeds the library inside your command line tool and keeps you with a single-file executable. The advantage is that you don't have to provide any separate instructions for installing the correct C library for your tool. The downside, though, is that you need to re-release your command if there are any vulnerabilities in those C libraries. This also makes your executable a little larger. The second option is to dynamically link your C libraries. The advantage here is that vulnerability management is much easier: the user should be upgrading their own libraries for security vulnerabilities, so there's less for you to do.
The disadvantage is that this adds an external dependency, so you're no longer truly a single file. And the final challenge I had with mruby-cli was cross-compiling libraries. mruby-cli can cross-compile your executable for up to six different platforms, which means your C libraries also need to be compiled for all six of those platforms. For example, for libcurl, you can't just install the libcurl development package inside Docker and be done, since that library won't work on OS X or Windows. Instead, you have to download the source, unpack it, cross-compile it for each platform you're going to be releasing on, and then set up mruby's compiler to use the libraries you've built. To help myself out, I wrote a tool to automate cross-compiling dependencies. You configure a library to cross-compile with a few values. The first is the name, curl here. The release name is used to create the source directory, since libraries don't necessarily use consistent naming styles. The URL is where the source is downloaded from, and then, optionally, you can set extra configure flags; for example, I don't yet have OpenSSL cross-compiled, so I disabled it for now.

In part two, I'll discuss the origins and the structure of the command line tool. My exploration of mruby-cli started with a tool that our customer support team could use to configure a new product we're developing. Our service was, and still is, API-only, so there's not yet any web UI for anyone to use. The service has a complicated setup, so I wanted to automate common use cases to make it easy to avoid errors. I also wanted a tool that customer support could use, and could also send to users, to reduce the burden of supporting this new product. The solution to these problems was to write a tool that could automate the setup, validate any inputs, and reduce the number of errors the users would encounter.
It would also have the benefit of giving design feedback on the API, and we could use that to make further improvements. Before exploring mruby-cli, I started with some prototypes in CRuby. The first was a single-file script that primarily served as documentation. The script was very easy for me to write, and customer support could modify it for their needs, but it wasn't flexible enough to handle all the ways they needed to interact with the API. As the needs of customer support grew, I split this single script into multiple scripts. Each script had its own purpose, for example listing options, performing initial setup, or enabling or disabling features. This was more flexible, and I made a point of keeping the scripts well documented so they would be easy to modify when writing future scripts. And finally, I extracted a couple of classes from the scripts for reuse. This included a basic implementation of JSON API, which is our API transport specification, and one for argument parsing using Ruby's OptionParser, as we have many commands using the same arguments.

So what lessons did I learn from the prototypes? The first was that documentation in the tool was very important. We had not yet budgeted time for API documentation of the service, so documentation via the command line tool was very useful; roughly half the lines in the scripts ended up being comments. This also meant our support staff could figure out what the tool was doing and suggest changes that would improve their lives. Keeping the scripts simple made everything more understandable. None of the scripts had any external dependencies, so they were easy to install just by checking out the repository they lived in. Most scripts were small and had a single well-defined purpose, and it was okay to have one or two larger scripts that automated an important action like initial setup. Ultimately, though, they were hard to maintain.
Our development environments install an old version of Ruby, without keyword arguments, by default, so we needed to install a modern Ruby version separately. Customer support would also add to or edit some of the existing tools to make their lives easier, but then they would end up with multiple independent versions that were hard for both of us to support. I decided to investigate a switch to mruby-cli to address the maintenance burden while trying to keep the maintainability high.

I decided to model my command line tool on the pattern that RubyGems uses. It has a set of subcommands, with one subclass per subcommand, each using OptionParser to parse arguments. Each has a common setup method that picks which options are used for the command, and then, after parsing arguments, the execute method runs the command and handles any errors. For example, for a command that lists conditions, the setup method would require a service and a version that the conditions are attached to, and then the execute method would perform the API operations to fetch the conditions from the service and display them. I decided to port OptionParser from the CRuby standard library to mruby for option parsing. I chose this because it was familiar to me and to other Rubyists, and it has good documentation. OptionParser performs automatic type checking and argument completion for you, which means less code to write to check these by hand. OptionParser even offers shell completion as a standard feature, which is very nice, and all of these make parsing arguments easy and straightforward. For API requests, I chose mruby-curl. I evaluated several other HTTP libraries for mruby, and this one seemed to be the most mature, mostly because it uses curl. It supports both HTTP, which we use in our development environment, and HTTPS, for accessing our production systems. The library also supports persistent connections, which is especially important for HTTPS requests.
The only downside is the libcurl dependency, which must be cross-compiled and linked into the tool. For testing, there are two mruby gems that seem to be the most popular: the mruby-mtest gem provides a library for assertions, while the mruby-test gem provides the tool that runs the test cases. While using these gems is pretty easy, setting up the test cases can be pretty painful. Testing gave me the most frustration when building the command line tool. It's the most important piece of writing maintainable software, but due to the unfamiliar environment, there were some adjustments I had to make in order to understand testing with mruby-cli. The mruby-mtest gem is very much like the minitest library that ships with CRuby: you make a subclass of the test class, you write test methods the same as in minitest, and setup and teardown methods are used for common setup within a test case. But you have to remember to start the tests with MTest::Unit.new.run at the bottom of your test case, since mruby doesn't have at_exit without an extra gem. Since mruby is compiled, you can't just run the test directly; instead, you use the mruby-test gem, which provides the test runner. When the tests run, a new mruby VM is created for each test file, which gives complete isolation between tests.

The design of the test runner has some complications, though. The first is that creating a global test setup or global test helper requires setting test_preload in your mruby gem specification. Without this, the helper won't be available to any of your tests, because mruby just thinks it's a separate test file that it's supposed to run. And I'm still trying to get this to work, because I only discovered it recently. The second complication was Docker. mruby-cli uses Docker Compose, but most of the documentation out there is written for the docker command, so it took some time for me to figure out how to set up DNS records so that tests could use other services on my local machine.
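The test-case shape described above might look like this sketch. It only runs under mruby with mruby-mtest and mruby-test compiled in, not under CRuby, and the class under test is invented for illustration.

```ruby
# test/test_greeting.rb -- sketch of an mruby-mtest test case.
class TestGreeting < MTest::Unit::TestCase
  def setup
    @greeting = 'hello'        # common setup shared within this test case
  end

  def test_greeting
    assert_equal 'hello', @greeting
  end
end

# mruby has no at_exit without an extra gem, so you must start the
# runner yourself at the bottom of every test file:
MTest::Unit.new.run
```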
I would have preferred to mock all the interactions, but without an easy test helper and a mocking library, it was just easier for me to use the development services directly. The final complication was that mruby-cli was slow to start, because it compiled all platforms before starting the tests. This made the feedback loop very long, even when you might be making a very small change. To work around the lack of global test setup, I chose to perform some of the test setup before starting any of the tests. I used the improved build structure to do this setup outside of Docker, before starting any of the tests, because I have better and more capable libraries there. Since I only learned about test_preload recently, I'm passing the data needed for global setup into the tests using environment variables that each test can look up when it needs them. And to improve test start-up speed, I adjusted the rake tasks to only compile for the host platform, which is what the tests use. This meant I only had to compile one platform instead of seven. mruby-cli also supports bintests, which are integration tests. Since I prefer strong unit tests, I haven't explored these much yet. For an integration test, you run your command, it captures the output, and then you can assert what the output looks like.

Despite all these difficulties, though, why would you choose to use mruby-cli, and why not use Go or one of the other tools with the same capability? I'm primarily a Rubyist and I work with a team of Rubyists, and while we aren't opposed to learning new things, mruby keeps us in our comfort zone. For example, when I had problems with the test setup, I could use CRuby, which I was more comfortable with, to get the job done. And remember that I want a tool I can deliver to Fastly customers, so the ability to work where I'm most comfortable means that it's easier to ship quality software.
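The environment-variable workaround described earlier might look like this sketch: the setup that runs outside Docker (in CRuby) exports values, and each mruby test looks them up on demand. The variable names and helper are invented for illustration.

```ruby
# Look up a piece of global test setup passed in via the environment;
# fail loudly if the outer setup hasn't provided it.
def test_fixture(name)
  ENV.fetch("TEST_#{name.upcase}") do
    raise "missing test fixture #{name}; run the outer setup task first"
  end
end

# the outer (CRuby, outside-Docker) setup task would export something like:
ENV['TEST_SERVICE_ID'] = 'abc123'
```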
By limiting the number of things I need to learn at once, my stress level is lower and my ability to deliver is higher. And now we approach the end: what would I like to see that would make mruby and mruby-cli easier to use? For mruby-cli, I'm going to contribute back the improvements to the build system I've made. I've already had some discussion with the maintainers about the improvements and the benefits they bring, and they seem positive on them so far. The documentation for mruby-cli could also use some improvement, as it isn't necessarily obvious where to start with a new command line tool, what complications you might run into, or which libraries you might want to check out. I'd also like to make upgrades to new versions of mruby-cli easier, but I'm not yet sure how that will work.

For mruby, accurate backtraces would be very helpful. Right now you have to pay extra close attention to figure out which line mruby meant the error occurred on. I also want to separate the loading of the build configuration from the generation of the rake tasks; this will make it easier to integrate mruby's build system with mruby-cli's build system. And finally, I'd like to see keyword arguments in method definitions. mruby's C API has a fancy implementation of argument checking, so I'm unsure how this will work with keyword arguments. In CRuby, I'd like to see mrb_get_args ported over from mruby. This is a function of the C API that extracts method arguments when a Ruby method calls a C function. In mruby, it allows you to specify the types of the arguments, so you get automatic type checks and automatic conversion. The equivalent in CRuby is rb_scan_args, but it's much less expressive, since it only checks the number of arguments, so you have to do all of your type conversion by hand. Also, finding test_preload is difficult, so I'd like to make it automatic so it's easier to write tests.
Also, mruby-test runs all the tests and doesn't have an option to run just one test, or a subset of tests. Without this, it's much harder to make large changes, since you have to filter your test failures by hand to figure out which ones are important. Thank you. Are there any questions?

So the question was: what data was I passing out of band for the testing? Since you can't pass data into Docker without it going through a file, I had to do some setup, write a file, and then load that file inside of mruby to get the test setup data that I needed. And I passed it in through environment variables.

So the question was: how robust is the support for linking dynamic libraries? Can you specify paths and all that? Yes, you can specify all those paths; there's support for all of that through the mruby gem specification. For example, for all of the cross-compiling work I did, it builds into per-platform directories, and then I can say: go look at this x86 Linux directory, and that's where curl will be. So that's all very easy. Can the user pass something in if it's not in the expected place? I think you have to edit the build config. I don't know if you can override it with CC, or LDFLAGS, or LD_LIBRARY_PATH, or so forth.

Did I notice much memory usage difference between CRuby and mruby? I did not measure that, because the CRuby prototype only runs for a few seconds and does very little. Although I did do a test: one of my mruby-curl patches was to add persistent connection support by reusing the curl connection, and I believe I ran it for a couple of days, repeatedly checking a local server, and I don't believe the memory usage grew beyond a couple of megabytes. So it's very nice.

How long did it take to iterate from the first solution to the last one? So, I'm familiar with building command line tools in CRuby.
A lot of the difficulty was figuring out how to use mruby: how does this piece work, how does that piece work. In terms of writing the implementation in mruby, it was very straightforward once I had figured out how to use curl and how to get optparse going; everything else after that was very straightforward. I probably spent three or four days writing the implementation of the command line tool, or at least writing one command, which was maybe a couple of days.

The question is: what does an upgrade path look like for a customer? How do you mean exactly, upgrading? So, how would you release a new version of the command line tool? Yes. Since it's a single file, you can upload the binary and say, download this. You can also do things like, if you're releasing on Linux, wrap it up in a package, which would be great if you're using dynamic linking, because you could say: I need curl and OpenSSL and whatever else. So it will depend on things like how exactly you want to release it, but the easiest way is just: hey, here, download my new version. I suppose you could integrate something like Sparkle, which OS X apps use, but I haven't explored any of that yet.

So, what are some notable command line tools built with mruby-cli? There was a talk at RubyKaigi about Haconiwa, which is a tool for using containers; setting up containers, I believe. I don't have any other examples off the top of my head. Maybe Terence might know. Terence says there is a lot of stuff in Japan, less in the US, and one of the more notable ones is a system monitoring tool; I believe there was a RubyKaigi talk on that one.

What did I do to cross-compile libcurl? The only thing I had to do was extract the compiler configuration; mruby-cli provided that by default. So I extracted the compiler information and then plugged it into the call to the configure script that came with the libcurl source.
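As a hypothetical sketch of that last step, feeding extracted cross-compiler settings into libcurl's configure script might look something like this; the compiler names, paths, and flags are invented, and the real values would come out of mruby's build configuration.

```ruby
# Build the `./configure` invocation for one cross-compile target.
# All values here are illustrative placeholders.
def configure_command(cc:, host:, prefix:, extra_flags: [])
  ['./configure',
   "CC=#{cc}",             # the cross-compiler extracted from mruby's build
   "--host=#{host}",       # the target platform triple
   "--prefix=#{prefix}",   # per-platform install directory
   *extra_flags].join(' ')
end

cmd = configure_command(
  cc:          'x86_64-apple-darwin14-clang',
  host:        'x86_64-apple-darwin14',
  prefix:      '/build/x86_64-apple-darwin14/curl',
  extra_flags: %w[--without-ssl]   # e.g. OpenSSL isn't cross-compiled yet
)
```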
I can show you after the talk if you'd like to see. So, how different is mruby from CRuby, and how different is CRuby from Ruby? So, CRuby is synonymous with MRI. I originally gave this talk at RubyKaigi, so they're familiar with the term CRuby there. CRuby and mruby are, syntax-wise, very similar. There are a few differences that are probably bugs. There's an ISO specification for the Ruby language, and mruby is built off of that specification. Like I said, there are a few things that are missing, like automatic argument checking for keyword arguments: you can just do star-star args, and that's about it. Global variables are different. Some libraries may not support a thing you need: one of my patches was to a gem that didn't expose optional captures properly, so it had empty strings instead of nil. So there are a few things you're going to run into where, oh, somebody didn't fill out all of the features yet. But overall, it's going to be very familiar.

So, how did I handle the non-availability of require? Part of it is that you just list your mruby gems in your specification, and those get compiled in automatically. There are some metaprogramming tricks you can do with require in CRuby that you can't do in mruby. The RubyGems command library, for example, will opportunistically load just the subcommand that you need, but since it's all compiled in and it's all loaded at once, you don't need to worry about that. It does require you to adjust your thinking about some of the metaprogramming things you do: instead of "this thing was loaded and it automatically gets registered," you have to register it manually.
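The manual-registration pattern described above can be sketched like this; the module, class, and command names are invented for illustration, not the real tool's API.

```ruby
# With everything compiled in and no require, subcommands are registered
# explicitly in a registry instead of being discovered by lazy loading.
module Commands
  REGISTRY = {}

  def self.register(name, klass)
    REGISTRY[name] = klass
  end

  def self.lookup(name)
    REGISTRY.fetch(name) { raise ArgumentError, "unknown command: #{name}" }
  end
end

class ListCommand
  def execute
    'listing...'
  end
end

# in CRuby a RubyGems-style trick would lazily require this subcommand;
# in mruby it's already compiled in, so register it up front:
Commands.register('list', ListCommand)
```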
As far as regular programming goes, it's fine. If you want to do some of those metaprogramming tricks that sit on top of require, doing some stuff and having callbacks like "this subclass was created," you're kind of like, well, I don't need to worry about that anymore, because it's just going to be there, and I'll just have a thing that pre-registers it manually.

How many platforms do I build for, and why do all this cross-compilation? So, Fastly's customers operate on many different operating systems, and mruby-cli provides this out of the box. I disabled Windows support just because I wanted to save time, and I'll come back to it later. So right now it's building for four platforms: Linux and OS X, 64- and 32-bit. There's really no reason to turn any of it off. I wanted to test it on OS X and the Docker platform, and I had a development environment that used Linux, so it was just easy to leave those on, and then I can make sure I don't have a libcurl bug where something didn't compile correctly or whatever. mruby-cli sets all that up for you, so there's no reason to disable it unless you want to save some time.

So the question was: based on all the things I want to add to mruby, how easy is it to contribute to mruby compared to CRuby? Even mruby's internals are broken up into a bunch of little gems, like the mruby array extension and so on, and this makes it much easier to figure out where to make a change. If I wanted to fix a bug in some Array method, I would be able to find the spot to make the change much more easily, which means there's less of a burden of "how do I make this change, and where does it go?" With CRuby, it's one big project with all the files mixed together, so it's much harder to figure out how to make the change. So answering "I need to make this change, where does it go?" is much easier.
And Terence pointed out that mruby is all on Git and GitHub with a pull-request workflow, so there's not the complication of CRuby's hybrid Git/Subversion issue tracking. The community is smaller but reasonably active, and some other stuff that I forgot. I believe there are no more hands. So thank you for your time.