Software automation is transforming numerous industries, and in networking in particular we have seen more and more adoption in recent years. What is driving this interest in programmability and network automation in general? It has to do with growing demands on the network. Higher requirements in terms of speed and scale, a growing number of devices and individuals connecting to the network, and expectations of faster deployment of services have all led to a growing interest in software automation. It's just not feasible to manage networks the way they were managed in the past, with a fair amount of human intervention. Software automation has become critical not just to keep up with speed and scale, but also as a tool for innovation. Quite a few operators see software automation as a way to differentiate their services from those offered by the competition, for instance. Ultimately, the goal is to have a network operator that manages not just tens of devices, but thousands or tens of thousands of complex networking devices, through the heavy use of network automation through software. All right. So how was this automation done in the past? What kind of mechanisms were available? This is a view of what was common and popular, let's say, five years ago. It is used less and less these days, but there is still a fair number of networks using this legacy manageability framework. In the past, networking devices were managed through a combination of the command-line interface, syslog, and SNMP. These frameworks have multiple limitations. In general, all three components were completely siloed; they shared little, if anything, in common. There was very limited availability of structured data. In particular, the CLI and syslog did not use structured data.
The lack of structured data made the data difficult to parse, error-prone, and very brittle. Any change that happened on the router was very likely to break the implementation of the automation software. There was an overall lack of schemas to define the data, so it was not always clear what the formal definition of the data was, what the types were, or what was considered valid versus invalid data. That's where schemas become critical. There was also limited standardization. SNMP had some: it was standards-based, and MIBs were standards. But when we look at the CLI and syslog, there was no clear standardization of those interfaces. And overall, there was limited tooling. SNMP, even though it had standards and structured data, didn't offer the capabilities to manipulate configuration; the industry never adopted SNMP as a protocol to manage configuration. And even though it was heavily used for operational data, it relied quite often on polling, which was a very inefficient way to retrieve data from the routers. So the industry overall is making an effort to move away from this legacy manageability framework toward a model-driven manageability framework. In this framework, everything that can be configured and monitored on the device is defined using a data model. These data models are common regardless of the programmatic interface that is used to interact with the device. The data models are defined in the YANG modeling language, and I'm going to give some of the specifics of that language for those who are not familiar with it. And you interact programmatically using different interfaces; the most common ones are NETCONF and gRPC. All data now gets exchanged in a structured format, so it's machine-readable, whether it's XML, JSON, or Google Protocol Buffers. What we want to focus on in this presentation is how we can simplify the use of this new framework.
What is an SDK that can facilitate the use of these data models, the use of NETCONF and gRPC, and be able to use multiple encodings in a simple fashion? This framework is very rich, flexible, and loosely coupled, but there are a lot of moving pieces, and it would be good to have a library or an SDK that makes it as easy as possible. We're going to see what kind of SDK can be used. But before we look at the details of the SDK, let's take a quick look at YANG as a modeling language, because again, the entire new manageability framework is based on data models, and YANG is the language used to define them. The data models are tree structures that define, again, everything that can be configured on the box and everything that can be monitored. These trees are built out of four basic components: leaf nodes, leaf-lists, containers, and lists. A leaf is basically a terminating node on the tree that has a name and some kind of type and value associated with it. A leaf-list represents multiple instances of data of the same type; it is used whenever you need a set in the mathematical sense, that is, multiple pieces of data that have the same type and have to be unique. Containers are nodes that group other nodes in the data model. And finally you have lists, which also group other nodes in the data model, but do so within the context of a key, so you can think of a list as a record in a database. So again: a leaf is a terminating node with a name, a type, and a value; a leaf-list is multiple leaves, basically multiple unique values of the same type, a set in the mathematical sense; a container groups other nodes; and a list groups other nodes under a key. These four components are the Legos that are used to build the overall tree structure.
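To make the four building blocks concrete, here is a small illustrative YANG module (the module and node names are made up for this example; they don't come from any real model):

```yang
// Hypothetical module illustrating the four basic YANG node types.
module example-system {
  namespace "http://example.com/example-system";
  prefix exsys;

  container system {                // container: groups other nodes
    leaf hostname {                 // leaf: terminating node with a name,
      type string;                  //       a type, and a single value
    }
    leaf-list dns-server {          // leaf-list: a set of unique values
      type string;                  //            of the same type
    }
    list interface {                // list: groups nodes under a key,
      key "name";                   //       like a record in a database
      leaf name { type string; }
      leaf enabled { type boolean; }
    }
  }
}
```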
All right, so that is a very brief overview of YANG. We're going to see later how the SDK simplifies the use of YANG, so we don't need to worry too much about the details of the modeling language per se. Now, if we use NETCONF as the protocol to interface with the device programmatically: NETCONF is an RPC-based protocol, so basically you have a remote procedure call, a request sent in XML format to the router, and the router sends a response. All the messages and all the data embedded in the messages are encoded in XML, and we see here the list of the RPCs that are available: get-config, edit-config, copy-config, etc. They allow you to manipulate not only configuration, but also operational data. All data is exchanged in XML. Obviously I'm just listing the names of the RPCs here, but each RPC has multiple arguments available. So if you want to interact with the device, in theory you have to be familiar not only with the data models, but also with all these RPCs and their parameters. The other programmatic interface that can be used to interact with the data models is gRPC, and in particular the gRPC Network Management Interface (gNMI). It's also an RPC-based protocol, and we have four main RPCs that allow us to interface with the device: a Capabilities RPC to discover the capabilities of the device; a Get RPC that allows me to read configuration or operational data from the device; a Set RPC that allows me to write configuration on the device; and a Subscribe RPC that allows me to program the device to stream data back to me. So those are the four base RPCs, and as was the case with NETCONF, each of these RPCs has a set of attributes and parameters that you need to specify to get different behaviors from the RPC.
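As an illustration of the XML framing just described, a NETCONF edit-config request carrying OpenConfig system data might look roughly like this (a sketch: the message-id, target datastore, and payload details depend on the device and model):

```xml
<!-- Hypothetical NETCONF <edit-config> request; the payload is
     XML-encoded data following the openconfig-system model. -->
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target>
      <candidate/>
    </target>
    <config>
      <system xmlns="http://openconfig.net/yang/system">
        <config>
          <hostname>Europe</hostname>
        </config>
      </system>
    </config>
  </edit-config>
</rpc>
```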
So if you wanted, in principle, to take advantage of this new framework, you would have to understand YANG, or have a very good understanding of it, and you would have to have a very good understanding of either NETCONF or gNMI to be able to use that framework. What we're trying to do with an SDK is see how we can simplify the use of these technologies to make sure that people can drive automation as fast as possible. That's why we developed the YANG Development Kit. With the YANG Development Kit, you have an SDK to develop automation applications for devices that support YANG. YANG by now has become pretty much the de facto modeling language for networking devices; if you're trying to manage a relatively new networking device, you're going to find that it supports YANG as a modeling language. This development kit, this SDK, is auto-generated from the YANG data models. And it has two very important things. The first one is that it provides a lot of abstractions. It provides an abstraction for the data models, for YANG in particular, so you're able to use the data models without having to worry about learning all the details of YANG. It provides an abstraction for the management protocols: it gives you direct access to NETCONF, gNMI, and RESTCONF if you choose, but it can also hide the details of NETCONF, gNMI, and RESTCONF. We're going to see that I can write all my automation without having to worry about the underlying protocol I want to use. And it also provides an abstraction for the data encoding. Using this SDK, I'm going to build client applications for automation, and all the data ultimately goes to the device in XML, JSON, or Google Protocol Buffers format. The SDK allows me to just manipulate objects in the native programming language of my choice without having to worry about how the data is ultimately encoded and decoded. All that happens automatically.
In addition to these abstractions, the SDK provides built-in data validation. As I mentioned before, the data model defines everything that can be configured and monitored on the box in a tree format; it specifies all the nodes and what name and type each node has. So if I create an object and deviate from the definition of the model, any software written with the SDK is automatically going to perform the validation. I have a couple of examples coming up where that automatic validation will be more obvious. The SDK is available in multiple languages — Python, Go, and C++ — and it's completely open source. It's available on GitHub, and I have the pointers at the end of the presentation so you can take a look at it. The simplest way to get access to all the resources is to just go to ydk.io. So let's take a closer look at the structure of the SDK and the packaging that is used. In general, you're going to find what we consider to be the core package, shown here in the bottom part. The core package brings a collection of services that have implementations for the different protocols: implementations for gNMI, implementations for NETCONF, implementations for RESTCONF. In addition, it performs other functions on data models that are not related to protocols. For instance, there's a service to encode and decode, in case I want to take a JSON string and convert it, or validate it against a model. So a service, in general, is an implementation of a protocol — that's probably the simplest way to put it — or an implementation of an abstraction of a protocol. The services make use of providers, so you can have an instance of a service with different providers, that is, different implementations. An example of this is the CRUD service. CRUD stands for create, read, update, and delete.
It allows me to manage a box — a device, a router, a switch — using create, read, update, and delete operations, without having to worry about the underlying protocol. I just specify a provider, and that provider can be either a NETCONF session or a gNMI session, and the SDK automatically takes care of issuing the correct RPC calls to match the semantics of the CRUD service. At the lower level of the core package, you have the path API, which is a low-level API that allows you to make use of models using path notation. So services and providers are part of the core. But where are the data models? The data models are packaged as model bundles. In these model bundles, you have a hierarchy of classes that completely mirrors the structure of the data models. This packaging allows you to install different model bundles depending on the capabilities of the device: a device may support different model families, and different vendors have different model families. So you install the core package, and those services act on models that are part of the model bundles, and you decide which model bundles to install. And you have this structure in Python, in C++, and in the Go programming language. So let's take a look at what's inside those model bundles. The model bundles are basically nested classes that have been auto-generated from the YANG models to match, one by one, the structure of the data model. That allows me to instantiate these objects and invoke services on these objects. Lists and containers become classes, and a leaf node becomes an attribute of a class. Again, the way these classes are nested is completely identical to how the data model is defined. So these model bundles allow me to just instantiate, for instance, the root of a model.
And recursively, the model is instantiated, and I end up with an empty object that I can populate with the data I'm interested in. For instance, let's say I'm trying to configure something on my device. I can instantiate a configuration object; recursively, the model is instantiated with no data; I can plug in the data I'm interested in; and then I can invoke a service to send that object, for instance, to the router. I have a couple of examples that will make this more obvious. But the key point here is that these model bundles and this definition of classes allow me to focus on what's important: understanding the structure of the data in the data model. I can completely forget about the details of YANG. I'm talking here about the four key abstractions that YANG has, but if you read the formal specification of YANG, it can actually be a rather complex modeling language. It has the notion of modules, submodules, augmentations, deviations, groupings — there are all kinds of different aspects present in data models. With this SDK, I can completely forget about learning those details and focus on understanding the hierarchy of the data, which is what matters at the end of the day to develop any automation software. Let's take a look at the validation. I mentioned before that one of the key benefits of the SDK is the validation of the data. Because the SDK and these model bundles are generated from the YANG files, any software written with the SDK gets automatic client-side validation — validation that is very fast and gives you precise error reporting. So if I create an object for configuring, for instance, the interfaces of a device, I start plugging in that data.
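To make the class-mirroring idea concrete, here is a minimal plain-Python sketch of the mapping. This is not actual YDK code — real bundles are auto-generated from the YANG files and far richer — it only illustrates how containers become nested classes and leaves become attributes:

```python
# Plain-Python sketch of the YANG-to-class mapping used by model bundles.
# Hypothetical illustration, not real auto-generated YDK code.

class System:                      # YANG container "system" -> class
    class Config:                  # nested container "config" -> nested class
        def __init__(self):
            self.hostname = None   # YANG leaf "hostname" -> attribute

    def __init__(self):
        self.config = System.Config()

# Instantiating the root recursively creates an empty object tree that
# mirrors the model hierarchy; we then populate only what we need.
system = System()
system.config.hostname = "Europe"
```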
If I then try to send that configuration to the router by invoking a service that allows me to do that, a validation step takes place automatically, and it will tell me if there is any error in my object. What kind of errors can I have in my data? For instance, you can have an object that corresponds to a data model for operational data. Operational data cannot be written to a device; it can only be read. So if you create that object and invoke a service to write that information to the router, you will get an exception saying you cannot write that information to the router. That's a simple level of validation that happens automatically for you. There's also type validation. If the data model indicates that a particular node is a string, an integer, etc., and you use a different data type, you get an exception when you execute, indicating a mismatch with the data type expected according to the data model. The data model can also define constraints on the values. Even though a node is an integer, there may be a valid range that is accepted for that integer; or if it's a string, there may be a regular expression associated with that node according to the data model. So the value gets validated, and if it does not match the constraints defined in the data model, you will get an exception. There's also semantic validation that takes place. Let's take the example of configuring the interfaces of the device. The interfaces are typically defined using a list, and the key is the name of the interface, so each interface has a unique name.
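The kinds of client-side checks described here — type validation, value constraints, and list-key uniqueness — can be sketched in plain Python. This is only an illustration of the idea; YDK's real validation is generated from the YANG files, and the pattern constraint below is made up for the example:

```python
import re

# Sketch of client-side model validation; not YDK's actual engine.

def validate_hostname(value):
    """Type check plus a (hypothetical) pattern constraint on a leaf."""
    if not isinstance(value, str):
        raise TypeError(f"invalid value {value!r} for hostname: expected string")
    if not re.fullmatch(r"[A-Za-z0-9._-]+", value):
        raise ValueError(f"hostname {value!r} violates the pattern constraint")

def validate_interface_list(interfaces):
    """Semantic check: list keys (interface names) must be unique."""
    names = [intf["name"] for intf in interfaces]
    duplicates = {n for n in names if names.count(n) > 1}
    if duplicates:
        raise ValueError(f"duplicate list key(s): {sorted(duplicates)}")

validate_hostname("Europe")                                   # passes
validate_interface_list([{"name": "Gi0/0/0/0"},
                         {"name": "Gi0/0/0/1"}])              # passes
```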
If you create an object for configuring interfaces, create two interfaces in that object, and give them the same key name, then when you try to send that object to the router, your script will automatically throw an exception indicating that you have a duplicate key name, allowing you to fix the bug immediately. It also takes care of deviations. What are deviations? A deviation is a construct in YANG that allows a device to indicate exceptions. Even though a device may support a data model, maybe it doesn't support 100% of it — maybe it supports 90%. When the connection is established between the client and the router, the router can indicate what exceptions it has in its coverage. And if you create an object and assign a value that you try to send to the router, when the router has already indicated with a deviation that it doesn't support that, you will get an exception making you aware that the device has declared it doesn't support the data you're trying to send. So all these validations — again, thousands and thousands of lines of complex code that you would otherwise have to write — happen automatically for you, just because the SDK is auto-generated from the YANG files. Let's take a look at a very simple example in Python. This is an example that you can probably run without any change whatsoever. What it does is change the hostname of the router, and it does that using a data model: the system data model as defined by the OpenConfig group. So let's take a look at the packages that get imported. We see here that we're making use of the CRUD service; I mentioned before that the CRUD service is a create, read, update, and delete service that abstracts away the protocol we use to talk to a device. We're also importing the NETCONF service provider.
This indicates that even though all my code is going to be written in terms of CRUD operations, ultimately I want to connect to the box using NETCONF. And I import the OpenConfig system data model. So let's see here. We instantiate the provider — that's the first step we take in the script after importing the modules. We instantiate a NETCONF session, providing the address and the credentials to connect to the device. Then we instantiate a CRUD service. All right, and now we get into the data manipulation. We create the system object using the OpenConfig system model, and I end up with an empty representation of the model. Then I go to system.config.hostname and indicate the hostname that I want. This path is exactly the hierarchy defined in the data model: system.config.hostname. For those of you who may be interested, in YANG terms system is a container, config is also a container, and hostname is a leaf. So system maps to a class that has another class, config, nested inside it, and hostname is an attribute of the config class. And we're setting that to "Europe". Then, once we have the data ready, we call the create operation of the CRUD service, making use of the provider we defined before — the NETCONF session — and passing the system object that has the data I want to send. A number of steps happen at this point. The first thing is that the script will validate the data: it has to assess whether assigning a string to system.config.hostname is valid or not. In this case, hostname is defined as a string in the data model, so this is valid. Then, because the provider is NETCONF, and all communication in NETCONF is done with XML, that system object will be converted to an XML string.
Then, because we're trying to perform the create operation with the NETCONF provider, the create operation is going to be converted to an edit-config RPC in NETCONF, and the system data that was already converted to XML is going to be inserted into that RPC. The edit-config RPC will be sent, and the script will wait for the OK to come back. At that point, it decides whether the device it is talking to requires a commit in NETCONF. Some devices require a commit to make changes effective; some do not. The script automatically discovers that based on the capabilities exchanged when the provider was instantiated: when the session was established, the router would have indicated whether it needed a commit. So if the router needs a commit, this create call also takes care of sending the commit and waiting for an OK to come back. So if you look at this script: it's making use of NETCONF, it's making heavy use of XML, it's making use of YANG. And in none of this code did I have to worry about the specifics of YANG, the specifics of NETCONF, or the specifics of XML. I just need to understand the structure of the data model and the data that I want; I create my object, then I invoke the operation that I want on that object. So this dramatically simplifies writing the code, plus we get all the data validation that I mentioned before. So how can you get started with the Python flavor of YDK, which is the most popular one? The first option you have available is to install it on your system. You can install it on macOS, or you can install it on Linux; for Linux, we have installations available for Ubuntu and for CentOS. You can install it from the Python Package Index, and all the software is also available on GitHub, so you can install from source using GitHub.
But the recommended way is actually to use PyPI. We have a repository, ydk-py-samples, that has a lot of examples, so if you install YDK on your system, I would recommend cloning that repo to get a lot of examples and get familiar with how YDK works. If you don't want to install it on your system, you can use a virtual environment. We have two flavors of them: one based on virtual machines and one based on containers. For virtual machines, you can use Vagrant with VirtualBox: if you have those on your system and you go to the ydk-py-samples repository, you will find the Vagrantfile needed to initiate a virtual machine. With this approach, you get an Ubuntu virtual machine that has YDK pre-installed. If you don't want to use a virtual machine, you can use a container: you go to Docker Hub, look for the YDK Docker containers, and get started with that approach. The last option is a hosted, cloud-based alternative. If you have access to dcloud.cisco.com — you need a cisco.com user ID, but those should be available to pretty much any user with registration — you can make use of the YANG Development Kit sandbox that is available in the catalog. That will give you an Ubuntu box and two Cisco IOS XR devices, so you can configure or read data programmatically using YDK from the Ubuntu machine. So those are the three options you have: you can install it on your system natively, you can use either a virtual machine or a Docker container, or you can use a hosted setup on the cloud at dcloud.cisco.com. All right. So now it's demo time. Let's see it in action. I actually made some changes to the code that I showed before to make it more complete. So I modified the script I showed a few slides back.
Instead of having a hardcoded hostname that is going to be written to the box, the script takes it as an argument, and it also has the option to either read the hostname or write the hostname. And you can talk to the box using either NETCONF or gNMI. The example I had before was always writing the hostname, the hostname was hardcoded, and it was always using NETCONF. With this script, I can choose whether I want to read or write the hostname, and I can choose whether I want to use NETCONF or gNMI. It is vendor-neutral because it uses the OpenConfig system data model; it is protocol-neutral, excuse me, because it uses the CRUD service as an abstraction of the protocol. And it's also encoding-neutral: in my script I don't need to deal with the details of XML, JSON, or Google Protocol Buffers — all that happens automatically. So let's take a look at that. Let's go to the box. Okay. First of all, let's take a look at what I have installed here. All right. We're going to be using version 8.5, and we're going to be using the OpenConfig data model. So we have this bundle — this is the model bundle for OpenConfig — and this is the core package that we're going to be using. Okay. The name of the script is hostname. As I mentioned before, the script takes some arguments from the command line. The first argument is the device you want to talk to — the device you want to read the hostname from, or the device you want to write the hostname to — specified as a URL. By default, if no other argument is specified, the script is going to read the hostname, and it's going to read it using NETCONF. If you want to write the hostname, you need to provide the write argument, and if you want to change the protocol, you need to specify the gNMI option. Okay.
In addition to these two arguments, we have a verbose option to see the logs, the details of what's going on. Okay. So let's run it first, just reading the current hostname of the device. This is a device I had available at the moment, so I'm just going to specify the device. Again, by default it's going to use NETCONF, and by default it's just going to read the hostname. Okay. It comes back and tells me the hostname is ASBR1. Okay. Before we go into more detail and try to execute it with different arguments, I'm going to take a quick look at the script, and you're going to see a lot of commonality with the example that I provided a few slides ago. Okay. This is the bulk of the script; let me go to the beginning — these are just comments. The key things we're doing here: we're importing the CRUD service, we're importing the gNMI provider, and we're importing the NETCONF provider. Okay. Now let's take a look at some of the logic. If the user specified the gNMI option, we make use of the gNMI provider: we instantiate a gNMI provider and store it in provider. Otherwise, by default, we use the NETCONF provider. Okay. This is all the logic I need for the rest of the script to function with both gNMI and NETCONF. After we decide which provider we want, we instantiate the CRUD service, which again gives me that abstraction on top of the protocol. After I create the CRUD service, I check whether the user specified the write argument, meaning they want to write a new hostname on the router. If they want to write a new hostname, I create the object from the system data model, go to system.config.hostname, and assign the value specified by the user. Okay. After I populate the data, I call the create operation with my provider.
At this point, I don't care whether that provider is gNMI or NETCONF; the logic is the same. And I pass my system object. Okay. This create call implements the logic: first, it triggers the validation; then it creates the right RPC, depending on the provider; and it triggers the proper encoding of the data, again depending on the provider. If the provider is NETCONF, it creates an edit-config XML RPC and converts the system object to XML. If the provider is gNMI, the create operation creates a gNMI Set RPC and converts the system object to a JSON string. All that happens automatically. Nowhere in this code do I have to worry about converting data to JSON or manipulating JSON; I don't need to worry about protocol buffers, about XML, or about the details of how these RPCs are formatted or structured. All that is handled automatically. I don't need to worry about the validation of the data either. If the user did not specify the write argument, meaning we just want to read the hostname, then we instantiate the data model and invoke the read operation with the provider. Again, the CRUD read is translated to a NETCONF get RPC if the provider is NETCONF, or to a gNMI Get RPC if the provider is gNMI, and the data is stored in this object. That includes the validation of the data, and also converting the data from XML to the Python object, or from JSON to the Python object, depending on the type of provider. All that happens automatically. After the data is read, we basically invoke this path in the model to print the hostname that came back. So again: this is the write operation, this is the read operation, and here, before all of this, I decide whether the provider is going to be gNMI or NETCONF based on the argument passed by the user. So let's go back.
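The logic just walked through can be sketched roughly as follows. This is a sketch, not the presenter's exact script: it assumes the ydk core package and the OpenConfig model bundle are installed and a NETCONF-enabled device is reachable (so it cannot run standalone), the argument handling is trimmed, and for brevity only the NETCONF provider path is shown — the real script swaps in a gNMI provider when the gNMI option is passed:

```python
# Sketch of the demo script's read/write logic; assumes ydk core plus the
# OpenConfig bundle are installed and a device is reachable. Address and
# credentials are placeholders.
from ydk.services import CRUDService
from ydk.providers import NetconfServiceProvider
from ydk.models.openconfig import openconfig_system

def run(address, username, password, new_hostname=None):
    # Provider: the protocol session (the script can instantiate a gNMI
    # provider here instead when the gNMI option is given).
    provider = NetconfServiceProvider(address=address, port=830,
                                      username=username, password=password)
    crud = CRUDService()  # protocol-neutral create/read/update/delete

    if new_hostname is not None:
        # Write path: populate the model object; create() validates it,
        # encodes it, builds the edit-config (or gNMI Set) RPC, and sends
        # a commit if the device requires one.
        system = openconfig_system.System()
        system.config.hostname = new_hostname
        crud.create(provider, system)
    else:
        # Read path: read() returns a populated Python object, decoded
        # automatically from XML (NETCONF) or JSON (gNMI).
        system = crud.read(provider, openconfig_system.System())
        print(system.config.hostname)
```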
Again, we run this version of the script, which is basically the default behavior: read the hostname using NETCONF. But this time, let's do it with a variation: we're going to pass the verbose flag so we can see what's happening in the background. So we see all the logging. We see that the read operation of the CRUD service got translated to a get RPC, and we see here the reply that came back from the router with the data. We see all the data that was read back, and this data in XML is being converted to the Python object. We see here the hostname data coming back; all of this is validated and automatically converted to the Python object. And this is the output of the script. Okay. So let's now provide the write argument, so we don't just read. In this case, we want to rename the hostname — change it from ASBR1 to something more meaningful. Let's call it Europe. So let's specify Europe. I still have the verbose flag, so we're going to see how the script, instead of doing a CRUD read, now does a CRUD create, and how that gets converted to a completely different NETCONF RPC. Okay, so let's take a look at this. Here we called it with the hostname, and we see that now it's an edit-config, whereas before it was a get RPC. And here we have our object, converted to an XML string and embedded in an edit-config RPC. This object was validated; the fact that we made it this far means the object was validated successfully — there were no errors. Okay. Here's the RPC sent to the router. The router took that data and is replying that the data is okay, but the change is still not effective: this particular device requires a commit. So here's the script sending a second RPC with a commit message, asking the router to make the change effective. And then we get the OK from the router. Okay.
So let's remove the write argument and invoke it with gNMI, and see what happens. We read the hostname, this time with gNMI, and we got "europe". So we effectively modified the hostname. Let's verify and see how the messages look in the background. So now we're reading with gNMI, and we see that the CRUD read got converted to a gNMI Get request. We see the encoding here: gNMI uses protocol buffer encoding for the messages, so this is the decoded protocol buffer message. And we see here the response that comes back, the notification with all the data, the data of the data model encoded in JSON. So gNMI is a little bit different from NetConf. With NetConf, the RPC is XML and the data is XML also. In the case of gNMI, the RPC is a protocol buffer and the data is JSON. And we see here that the hostname of the router was effectively changed to "europe"; we can see it in the embedded JSON too.

The last thing I wanted to show, and it's a little bit of a forced hack with how the script is written, is the validation. I want to force a validation error. So let's say I have a bug in my script: instead of passing the argument provided by the user, I made a mistake and, oops, I'm setting the hostname to 100, even though the data model says it's a string, so giving it an integer value should be invalid. So let's execute the script, say we want to take the router back to ASBR1, and ask to show all the RPCs, all the messages, with the verbose flag, against the same router. Let's see what happens. We see here there's no RPC; we don't see any XML messages sent to the router.
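The "protobuf RPC, JSON data" split can be illustrated with a small sketch. The dict below loosely mimics the shape of a gNMI GetResponse notification; the field names are illustrative, since real gNMI exchanges protobuf messages, but the key point survives: the model data itself travels as an encoded JSON payload inside the RPC.

```python
import base64
import json

# Shape loosely based on a gNMI GetResponse rendered as a dict
# (field names illustrative; real gNMI uses protobuf messages).
notification = {
    "timestamp": 1234567890,
    "update": [
        {
            "path": "system/config",
            "val": {"json_ietf_val": base64.b64encode(
                json.dumps({"hostname": "europe"}).encode()).decode()},
        }
    ],
}

def extract_hostname(notification):
    """Pull the JSON payload out of the first update and decode it."""
    raw = notification["update"][0]["val"]["json_ietf_val"]
    data = json.loads(base64.b64decode(raw))
    return data["hostname"]

print(extract_hostname(notification))  # the hostname carried as JSON inside the RPC
```

With NetConf, by contrast, both the envelope and the data are XML, so no second decoding step is needed; YDK hides that difference behind the same Python object either way.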
What we see instead is an exception that tells me there's a model error: invalid value 100 for hostname, got integer and expected string. So the automatic validation kicked in and validated the data locally, without sending anything to the router. If we go back, remove that line, and run it again, now we see the messages, we see that we're setting the hostname back to ASBR1, and we get the okay back from the router.

All right, so after the demo let's go back really quick and review some of the resources that are available. All the resources I'm listing here are accessible from ydk.io, so if there's one URL you want to remember, ydk.io is probably the one. If you're looking for samples, you can go to the ydk-py-samples repository on GitHub that I mentioned before; there are over 700 examples. For sandboxes, on dcloud.cisco.com, I mentioned already that you have a pre-installed Ubuntu machine with all of YDK plus two Cisco IOS XR routers. You can also make use of the Vagrant box that is in the ydk-py-samples repo to get a virtual machine, and there are also the Docker Hub containers, which I forgot to enumerate here, sorry about that; those are additional options you can use. For support, I would encourage you to go to the YDK community; there's a number of users you're going to find there, plus all the contributors also join the support community. For documentation, these are the URLs where you can find the Python, the Go and the C++ documentation. This is the list of source repositories that you can check for Python, for Go and for C++. As I mentioned before, you can install from source, but in the case of Python it's easiest to just make use of the Python Package Index. And here are some additional recordings; there are some longer demos that I've done in the past that are also available on YouTube.
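The validation behavior in the demo can be sketched like this: the type check runs locally, against what the YANG model declares, and raises before any RPC is built or sent. The `YModelError` and `validate_hostname` names here are illustrative stand-ins, not the real YDK API.

```python
class YModelError(Exception):
    """Stand-in for the model-validation error YDK raises (name illustrative)."""

def validate_hostname(value):
    """Check the value against the model: hostname is typed as a string in YANG.
    This runs locally, before any RPC is built or sent to the device."""
    if not isinstance(value, str):
        raise YModelError(
            f"invalid value {value!r} for hostname: "
            f"got {type(value).__name__} and expected string")
    return value

validate_hostname("ASBR1")   # fine, a string
try:
    validate_hostname(100)   # the bug from the demo: an integer
except YModelError as e:
    print(e)
```

Because the check fails client-side, the bad value never reaches the router, which is why the verbose log in the demo shows an exception but no XML messages on the wire.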
So here's some additional material related to YDK that you might find useful; you can go and take a look at that. And we have arrived at our destination. A really quick summary: we talked at the very beginning about how software automation is becoming critical for networks today to keep up with the speed and scale that is needed. We saw how networks are evolving to a model-driven programmability framework instead of the old legacy manageability framework that relied on CLI, syslog and SNMP. And we saw the details of how YDK provides an SDK that dramatically simplifies the development of automation applications, as long as the device supports YANG data models. The SDK provides strong abstractions for YANG and its data models, abstractions for the protocol, and abstractions for the encoding and decoding functions needed to manipulate data. You have built-in data validation and multi-language support, with packages in Python, Go and C++, and it's completely open source. We do welcome contributions, so please get engaged in the community; if you have any suggestions, issues in GitHub are welcome, and contributions are also more than welcome. If anybody has questions, again, go to the community, or you can reach me directly at 111pontes on both Twitter and GitHub. So thanks for joining today. I hope this information was useful, and I hope to hear from you directly or through the community. Thank you very much.