Okay, so we're moving on to the last speaker before lunch, so I'm guessing everybody's hungry, but please refrain from storming the doors two minutes before lunch. Please welcome Kyle Knapp, who's going to talk about dynamic class generation in Python. My name is Kyle Knapp and I'm a software developer at AWS, where I primarily focus on developing the AWS SDK for Python, also known as Boto3, and the AWS CLI, which is a Python-based command line tool for managing your AWS resources. I'm going to talk to you today about dynamic class generation in Python. So let me first go over what you should expect from this talk. First, I'm going to cover the basics of dynamic class generation, mainly what it is and how you'd go about doing it. Then I'll talk about the benefits of doing it and when it's appropriate to use it. And finally, I'm going to build an application from the ground up that uses dynamic class generation and really shows you all the benefits you gain from it. So let me start by answering the question: what is dynamic class generation? The pattern is to generate classes at runtime, so the definition and functionality of a specific class don't exist in the source code and instead get generated as the program runs. And because the class doesn't exist in source code, the functionality needs to be generated from somewhere else; in most cases that's some other data source such as a schema or API model. So let me show you a simple example of how you can dynamically generate a class. Here I have a hello_world function that takes an object and prints hello world from that object. I also have a class that I call BaseClass that takes an instance name as a parameter and sets it as a property.
Now I call type. If you're familiar with type, it usually takes only one argument and tells you the type of an object, but type can also accept three arguments: the first is the name of my class, the second is a tuple representing the inheritance I want this new class to have, and the third is a dictionary of the additional properties I want to add to this class. Once I call type, it will actually create my new class, such that I can instantiate it. Now if I introspect some of these variables: if I look at the dunder name of my class, it will appropriately show MyClass. If I access one of the properties, like instance_name, it will appropriately show my instance name. And I can also call hello_world from this instance to get that additional functionality of printing hello world from the specific object. So you're probably thinking: okay, that's cool, but why should I care about dynamic class generation? First of all, it can improve your workflow, in the sense that you'll have to write a minimal amount of Python code to add new functionality; you may not even have to write any Python code at all, which is cool. Second, it can improve the reliability of your code. A lot of times with dynamic class generation you have a generalized code path, and most of your logic flows through it; as a result, that path becomes more heavily used and better tested. Third, it can actually reduce the physical size of your code, because the specific classes don't exist in the source code and therefore don't count against its size. And finally, this is a production-level pattern; I'll go into an application that uses it a little later in the talk. So now let me talk about some of the downsides.
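The example just described can be sketched as follows. This is a minimal, self-contained version; the function and class names mirror the ones described in the talk:

```python
# A helper function that will become a method on the generated class.
def hello_world(self):
    print('Hello world from %s!' % self.instance_name)

# A base class the generated class will inherit from.
class BaseClass:
    def __init__(self, instance_name):
        self.instance_name = instance_name

# type(name, bases, namespace) creates a brand-new class at runtime:
# the first argument is the class name, the second a tuple of base
# classes, and the third a dict of additional properties.
MyClass = type('MyClass', (BaseClass,), {'hello_world': hello_world})

instance = MyClass('my-instance')
print(MyClass.__name__)        # MyClass
print(instance.instance_name)  # my-instance
instance.hello_world()         # Hello world from my-instance!
```

Even though MyClass never appears as a class statement in the source, it behaves exactly like a hand-written class.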
First of all, tracebacks are a little difficult to follow, mainly because a lot of the time you're going through a generalized code path, and as a result you have to look through the data source you're pulling this functionality from to really identify where it's going wrong. Secondly, IDE support doesn't come right out of the box, mainly because IDEs need the source code to be able to autosuggest, and it simply isn't there. So now let me go into a production-level application: Boto3. Boto3 is the AWS SDK for Python, so it allows you to interact with all the different AWS APIs available. Boto3 is dynamic and data-driven: dynamic in the sense that all of its clients and all of its resources are generated at runtime, and data-driven in the sense that the functionality for these dynamically generated classes is pulled from JSON models representing an AWS API. What those two qualities allow for me as a maintainer of the SDK is an efficient workflow, because AWS is constantly innovating, constantly adding new features and services. Being able to stay up to date and provide all these new features through the SDK is very important, but it's very difficult when there are a lot of APIs to work with. With Boto3 being dynamic and data-driven, all it takes for me is to simply update a JSON model that's packaged in the library. I don't have to write any Python code at all, I immediately pick up that functionality, and I can spend my time on other work that can't really be handled by dynamic class generation. So now what I'm going to do is show you a demo of what this would look like. I'll open up IPython here, and let's say AWS came out with a new service called MyService. If I try to create a client for MyService, I'll get a runtime error saying it's not available. So let's actually go fix this.
So let's say we got a new API model for the service, and I'm going to open up this model to show you what it looks like. If you look at it right here, it's just Amazon EC2's API model; nothing new here. Now I can use the AWS CLI, which shares the same data path as Boto3, to add this model to that shared data path. So I provide, for the service model, the file that I just opened, and then I say I want to name the service MyService. What that does is just copy the file into the correct data path for Boto3 to search, such that if I open up IPython again, I can then use it. So if I import boto3 and try to create the new client, it's created appropriately, and now I can actually call one of EC2's operations. I'm just going to do the DescribeRegions call. And it was able to make the call appropriately as well. This was really cool, because I did not have to write a single line of Python code and was able to add new functionality. That being said, you're probably now wondering: when should I actually consider dynamically generating these classes? The one big thing you want to look for is whether there exists some canonical source of data you can pull functionality from, and better yet, whether there's more than one application possibly using that source of data. A classic example is web APIs. A lot of times you have to update a model anyway in order to generate some of the server code, so by using that model for the client code you get to pick it up for free, which is awesome. Then there's the case of databases, where you have a defined schema, and libraries like sandman are able to use that schema to create dynamic APIs to interact with those databases. So now let's actually build an application from the ground up that uses dynamic class generation. The application we're going to build is a lightweight Boto3.
It involves having a client interface whose methods are one-to-one mappings to the various API operations. This application is going to be entirely model-driven: there's not going to be any code that's specific to a particular API method. It's going to be able to validate inputs based off models, and to parse responses based off models. And for now, it's only going to support the JSON-RPC protocol. So let me quickly go over the steps of what we're going to do. First, I'm going to show you how to write a simple JSON-RPC client. Then I'm going to integrate API models to pick up input validation and response parsing. Then I'm going to add API-specific methods to the client interface. Next, I'm going to make sure it's extensible, so I can inject my own classes. And finally, I'm going to add more APIs. For those of you not familiar with JSON-RPC, let me briefly go over it. JSON-RPC relies on using POST against a single URI; there aren't any additional paths or query strings to worry about, and most of the data is in the JSON bodies of the request and response. In the request, you specify a method and the parameters you want to provide to that method. In the response, it returns you the result of that method call. So now let me go over some sample code for how the JSON-RPC client works. Don't worry about reading through it too deeply right now; I'll walk through it with you. If I instantiate this client with an endpoint URL, it'll save that endpoint URL for later, when I try to make an API call. For make_api_call, I'll have to specify a method and parameters, such that the method gets mapped to the method element of the JSON payload I'll send, and the parameters get mapped to the params element. Then, once I have that payload set, I can just use requests to send a POST to that URL with the JSON document.
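A client along these lines might look like the sketch below. The talk's version uses the requests library; to keep this sketch self-contained it uses only the standard library's urllib, and the class and method names are illustrative:

```python
import json
import urllib.request

class Client:
    """A minimal JSON-RPC client (a sketch; names are illustrative)."""

    def __init__(self, endpoint_url):
        # Save the endpoint URL for later API calls.
        self.endpoint_url = endpoint_url

    def _build_payload(self, method, params):
        # The method name maps to the 'method' element of the JSON
        # document, and the arguments map to the 'params' element.
        return {'jsonrpc': '2.0', 'id': 1,
                'method': method, 'params': params}

    def make_api_call(self, method, params):
        # POST the JSON document against the single URI; the response
        # body is a JSON document containing a 'result' element.
        data = json.dumps(self._build_payload(method, params)).encode('utf-8')
        request = urllib.request.Request(
            self.endpoint_url, data=data,
            headers={'Content-Type': 'application/json'})
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode('utf-8'))
```

Against a hypothetical server, `Client('http://localhost:8000/').make_api_call('multiply', [1, 2, 3])` would return a JSON document whose result element is 6.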
And what it returns to me is a JSON document as well, where you can see it has the result of multiplying one, two, and three together, which as you would expect is six. So let's talk about what needs to be expanded here. This still feels very general. Honestly, you don't even need this client class right now; you could probably just call requests directly. You don't have any input validation. You have no idea what methods you can use, what parameters you can provide, or even their types. And also, it returns the entire response, so if you notice, there are elements in the response describing the ID of the request and response and the version of JSON-RPC being used. So now let's talk about step two, which is integrating the API model into this client. Currently, we can create a client with an endpoint URL and make a generalized API call. By the end of this step, we'll be able to take a JSON model, load it, and have our new client class consume this model, such that I can make a modeled API call, which will know whether it needs to accept positional or keyword arguments, and appropriately print out just the result I wanted. And similarly, if I pass an incorrect parameter type, say the string 'foo' (I can't multiply one by 'foo'), it will throw me a runtime error saying: this is of type string, I expected type integer. So in order to get this working, let's talk about an API model schema. Here's a sample API schema that we're going to use. This is a lot simpler than some of the other schemas you might see out there, like what Boto3 uses, or something like Swagger, but for the sake of this presentation I'm trying to keep it simple. In this schema, you can identify what the endpoint URL is for your API, and you can also provide the operations you want to offer. In our case, we only have multiply for now.
For each operation, you can specify overarching documentation for that method, what the input is supposed to look like, and what the output is supposed to look like as well. In terms of the input, you can say what the type is; in our case, it's going to be a list, and therefore we want it to be positional arguments. You can also describe this input, and model what each element in the list will be; in this case, it's an integer. You can do the same for the output, giving it documentation and specifying that it's an integer as well. So now what we're going to do is take this model and run it through the client. Don't worry about trying to read through this right now; I'll go over it as well. What we do is load the api.json file into a model, and we can then have this modeled client class consume the model. In the initializer, I'll create a param validator and a response parser, which will use the model to validate the provided parameters and to parse the API response given back. Then I'll continue on and call the inherited client initializer. Then I'll be able to use make_modeled_api_call, in which case I'll first try to validate the provided parameters using the validator and the input model. Once we know these parameters are valid, we'll use make_api_call, which is inherited from the client class, to actually make a request against the API. Then we'll parse the response based off the model and the response given back to us. Now let's talk about what needs to be expanded in this step. It still feels too general, mainly because the API is still completely undocumented. We have no idea what methods are available, what parameters to provide, or what the output is going to be.
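The model-driven validation and parsing just described might look like the following sketch. The schema layout, type names, and class names here are assumptions modeled on the description above, and the network call is stubbed out to keep the sketch self-contained:

```python
# A hypothetical model in the shape of the schema described above.
API_MODEL = {
    'endpoint_url': 'http://localhost:8000/',
    'operations': {
        'multiply': {
            'documentation': 'Multiplies a list of integers together.',
            'input': {'type': 'list',
                      'documentation': 'The integers to multiply.',
                      'members': {'type': 'integer'}},
            'output': {'type': 'integer',
                       'documentation': 'The product of the integers.'},
        },
    },
}

# Map model type names to Python types for validation.
_TYPES = {'integer': int, 'list': list}

class ParamValidator:
    def validate(self, input_model, params):
        # Check the overall type from the model, then each member.
        if not isinstance(params, _TYPES[input_model['type']]):
            raise RuntimeError('Invalid type %s; expected %s' % (
                type(params).__name__, input_model['type']))
        if input_model['type'] == 'list':
            member_type = _TYPES[input_model['members']['type']]
            for value in params:
                if not isinstance(value, member_type):
                    raise RuntimeError('Invalid type %s; expected %s' % (
                        type(value).__name__,
                        input_model['members']['type']))

class ResponseParser:
    def parse(self, output_model, response):
        # Return just the 'result', dropping the JSON-RPC envelope
        # elements (request id, protocol version) from the response.
        return response['result']

class ModeledClient:
    def __init__(self, model):
        self.endpoint_url = model['endpoint_url']
        self._model = model
        self._validator = ParamValidator()
        self._parser = ResponseParser()

    def make_api_call(self, method, params):
        # Stubbed out here; the real method POSTs the JSON-RPC
        # request to the endpoint as described earlier.
        raise NotImplementedError

    def make_modeled_api_call(self, method, params):
        operation = self._model['operations'][method]
        # Validate against the input model before making the call,
        # then parse the raw response against the output model.
        self._validator.validate(operation['input'], params)
        response = self.make_api_call(method, params)
        return self._parser.parse(operation['output'], response)
```

All of the type information comes from the model, so adding a new operation requires no validation or parsing code.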
So let's go fix this by adding API-specific methods, by dynamically creating them. Currently, we can open up a model, instantiate a new client, and make a modeled API call. By the end of this step, we'll be able to use a factory function to create a new client that actually has these new methods we want on it. And similarly, if we call help on the client, we can also get documentation on what it actually looks like. So let's go over some code on how this would look. Once we've loaded the JSON model to create the client, we start initializing some of the variables needed for the type call later down the road. We call the class name MyClient and set the class bases: we want to inherit from ModeledClient in this case. We also create an empty dictionary for the class properties we want to dangle off that class. Then we look through all the different operations available in the model and call this helper function, get_client_method. What get_client_method does is define a new function called _api_call, which will be used by the instantiated client. Once we return that defined function, we add it to the class properties that we'll provide to type, which will actually create this new client class. And then with this new client class, we'll instantiate an instance of it. Such that now, if I call multiply(1, 2, 3), it's actually proxying out to this newly defined _api_call here, where self now refers to the client the method got attached to. And make_modeled_api_call, which is what's called underneath this function, is inherited from the ModeledClient class. But there's one big issue: the docstrings are still not specific.
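A sketch of that factory is below. The modeled client base is stubbed out so the example stands alone; in the real application it would be the model-consuming client described above, and the names here are illustrative:

```python
class ModeledClient:
    """Stand-in for the modeled client (a self-contained stub)."""

    def __init__(self, model):
        self._model = model

    def make_modeled_api_call(self, method, params):
        # Stubbed out: the real client validates the params, POSTs the
        # JSON-RPC request, and parses the response.
        return (method, params)

def get_client_method(operation_name):
    # Each generated method closes over its own operation name and
    # proxies out to the generalized make_modeled_api_call code path.
    def _api_call(self, *args):
        return self.make_modeled_api_call(operation_name, list(args))
    return _api_call

def create_client(model):
    class_name = 'MyClient'
    class_bases = (ModeledClient,)
    class_properties = {}
    # Dangle one method off the class for every modeled operation.
    for operation_name in model['operations']:
        class_properties[operation_name] = get_client_method(operation_name)
    # type() generates the API-specific client class at runtime.
    client_cls = type(class_name, class_bases, class_properties)
    return client_cls(model)

model = {'operations': {'multiply': {}}}
client = create_client(model)
print(client.multiply(1, 2, 3))  # ('multiply', [1, 2, 3])
```

Note that no code anywhere mentions multiply by name except the model itself; the method exists only because the model lists that operation.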
So if I call help on this client right now, you'll see exactly what I described: multiply is just a proxy out to this _api_call. So let's figure out how to fix that. It's actually not too difficult. You just add these two lines here, which set the dunder name and the dunder doc for the method we add to the class properties, such that when we call help now, we actually pick up this new documentation and the correct name for the method. By setting the dunder name to the operation name string, you override the fact that it looks like a proxy when you call help. And by setting the dunder doc, you're able to set the documentation for that method. What the get_doc_string function does is take the operation model and look through all the different documentation elements (if you remember them from the API schema earlier) to concatenate together a string that has the operation's description, parameter types, and return types for you. So let's talk about what needs to be expanded off this. First of all, it's not extensible, in the sense that we need to be able to support things like custom class names or custom inheritance; we don't want to rely only on these modeled API methods, we want to be able to add new functionality on top of them. So let's talk about step four, in which we make the client extensible. Let's start with this sample application, where I create a cache client. We know that, given a method and a set of parameters, we're going to get the same result every single time, so it doesn't make sense to hit the server every single time. Why not just store the result in memory and return it when needed? So with this new client class, I'll create a new dictionary representing the cache.
Then I override make_modeled_api_call such that I create a new cache key consisting of the method, the arguments, and the keyword arguments that I provide, and check whether this key is in the operation cache. If it is, I'll print out where I'm retrieving it from and return the result from the cache. If it's not in the cache, I'll actually make a call to the server, get the result, and store it for later. So currently, we have this logic, but there's not really a great way to hook in our cache client class; there's no option for us right now. By the end of this step, we'll be able to take this new model and create a client such that we can override the name of the class we want to use and its inheritance. Such that now, if I call client.multiply, you can see where it's retrieving the result from. And if I call it again, it will have saved that result and say it's retrieving it from the cache. I can also look at the class name and see that it returns MySuperSmartClient, as I defined before. How to do this is actually pretty straightforward: other than updating the signature for this function, I just added these two lines, which set a default if no class bases were provided. Now if I walk through this and create a new client with the model, the customized string name, and the class bases, you can see that MySuperSmartClient gets mapped to the class name when type is called, and the cache client tuple gets mapped to the class bases when type is called. So now when I call multiply, you can see that it's appropriately using the cache client. And when I look at the dunder name, it appropriately shows MySuperSmartClient. So let's talk about what needs to be expanded here. To be honest, we're actually at a good base.
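Putting step four together might look like the sketch below. The base client is again a self-contained stub standing in for the modeled client, and the cache-key and default-bases details follow the description above:

```python
class ModeledClient:
    """Stand-in base (stub); the real one validates and parses."""

    def __init__(self, model):
        self._model = model

    def make_modeled_api_call(self, method, params):
        # Stubbed "server" call so the sketch runs on its own.
        return 'server result for %s%r' % (method, tuple(params))

class CacheClient(ModeledClient):
    """A method plus parameters always gives the same result, so
    cache results in memory instead of hitting the server again."""

    def __init__(self, model):
        super().__init__(model)
        self._operation_cache = {}

    def make_modeled_api_call(self, method, params):
        cache_key = (method, tuple(params))
        if cache_key in self._operation_cache:
            print('Retrieving %r from cache' % (cache_key,))
            return self._operation_cache[cache_key]
        result = super().make_modeled_api_call(method, params)
        self._operation_cache[cache_key] = result
        return result

def get_client_method(operation_name):
    def _api_call(self, *args):
        return self.make_modeled_api_call(operation_name, list(args))
    return _api_call

def create_client(model, class_name='MyClient', class_bases=None):
    # Default the class bases if none were provided, keeping the
    # old behavior; callers can now inject their own subclasses.
    if class_bases is None:
        class_bases = (ModeledClient,)
    class_properties = {name: get_client_method(name)
                        for name in model['operations']}
    client_cls = type(class_name, class_bases, class_properties)
    return client_cls(model)

model = {'operations': {'multiply': {}}}
client = create_client(model, 'MySuperSmartClient', (CacheClient,))
client.multiply(1, 2, 3)      # first call hits the stub "server"
client.multiply(1, 2, 3)      # second call is retrieved from cache
print(type(client).__name__)  # MySuperSmartClient
```

Because type accepts the name and bases as plain arguments, making the factory extensible is just a matter of threading those two parameters through.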
There aren't really any glaring holes, except for the big elephant in the room: there's only one API method. So let's go fix that. In this final step, we're going to add more APIs. Say our engineers working on the server end were hard at work and added two new APIs, add and subtract. In order to update the API, they updated the models as well, to generate some of the server code, and as a result we got these new API models for free. Looking at these API models, you can see that add is very similar to multiply: it takes a list of integers, and it returns the sum of those integers as a single integer. Subtract is pretty much the same, where it takes a list of integers and returns the difference of those integers as an integer. So now let's do a demo of what this looks like. If I open up IPython here, I'm going to import a couple of helper functions. First, I'll import one that lets me get my models quickly without having to open up the files and load them, and I'll import the create_client function. Calling get_model, the first thing I'll do is open up the old model, the one that only has multiply in it. When I create the client now, you can see that it only has the three methods: the generalized make_api_call, the modeled one, and multiply. If I call multiply, it prints out the correct answer of six. But if I try to call add, I won't be able to, because the API model isn't up to date here. So let me go fix that and use the new, updated API models that I got for free. If I create this client again and look at all the different methods available to me, now you can see that I have add. So if I add one and one, it gives me two.
And if I call subtract instead, it appropriately shows zero. So this is really cool, because I didn't have to write any more Python code once the application was built from the ground up, and adding new features in the future is not difficult at all. So now let me get to the conclusion of this talk. With dynamic class generation, I realize my sample application was super simple; it was just adding, subtracting, and multiplying numbers together, and there doesn't really even need to be a web API for that. But what I want to get at is that we started with an application and built it from the ground up such that, in order to add new functionality, all it required was updating a model, or getting a new model. Ideally you weren't even the one who had to update this data source in order to pick up the new functionality. And that's really powerful, especially when you have a bunch of other applications possibly using this same model, such that one update to the model can update all the different applications consuming it. Dynamic class generation produces really robust code in the end, because there's a generalized code path and all your logic flows down it; as a result, you get very well-tested and heavily used code. And one thing I haven't really talked about yet: features and bug fixes have a much wider-spread impact. Right now, the client application I wrote only supports the JSON-RPC protocol. If I were to add something like a REST-JSON, REST-XML, or query protocol, that opens it up for me to support a bunch of different APIs that may only be running on that protocol. And in terms of bug fixes, if I fix a bug in a certain code path, chances are that even though I'm targeting one piece of functionality, if another function uses that same code path, I've probably fixed a bug there, too.
So I really hope you got a lot of ideas about how dynamic class generation works, how to use it, where you would use it, and maybe some ideas about using it in your next big project, or even a current project today. I'd really like to thank you all for coming. If you want to look at some of the presentation code, here is a GitHub repository for that. Here is the Boto3 repository, if you want to look at the more nitty-gritty stuff on resources in dynamic client generation. And also botocore: that's where most of the dynamic client generation happens; Boto3 just kind of proxies out to it. So if you want to actually see how clients get created, I'd recommend checking that out. Otherwise, I'll be here all week, so if you have any questions about Boto3, botocore, the CLI, or AWS, come find me; I'll be happy to talk about it. But that's about it. Are there any questions? Thank you very much. [Audience] Could you contrast this technique that you showed us with two other ways of doing a similar thing, that is, using the __getattr__ attribute and using metaclasses? Yeah, so metaclasses are definitely something you could use for this, because in reality, type is just a metaclass, right? So you could define your own metaclass if you want some specific functionality out of it as well. There are a bunch of other approaches to dynamically generating classes; type is just one that's built for you right out of the box. Is that good? Anyone else? More questions? OK. And that's lunch. Let's thank Kyle. Thank you.