Welcome to my talk. I'm Michele Desimone, and this talk is "TensorFlow Strikes Back", because many of you may have tried TensorFlow, PyTorch, or Keras. Raise your hand if you've tried TensorFlow in the past. Okay, many of you. Raise your hand if you've tried PyTorch — also many of you. And how many of you have tried Keras? Okay, and how many of you like Keras? Perfect, then you'll probably like this talk too. Just a quick word about me: my name is Michele Desimone; online I usually go by Ubik. You can find me on Twitter, Reddit, GitHub, and elsewhere as mr_ubik. I've done a couple of things in the past: I work as a machine learning engineer and researcher at Zurutec, a Chinese company that also does a lot of R&D in deep learning, computer vision, and related fields. I also work as a freelancer through my own company, Ubik.tech — you'll find the website if you're interested in contacting me for anything. I'm also the founder and organizer of the PyData chapter for the Emilia-Romagna region in Italy — we'll be having talks and meetups starting in September, so yay — and I'm the manager of GDG Bologna, the Google Developer Group in Bologna. If you want to reach me, my contacts will also be on the conference site. These are some other contacts; the personal sites are still under construction and will be up by the end of the conference. Follow them — I sometimes post interesting stuff, especially conference recaps and things like that. So, let's dive into the talk now. First, TensorFlow 1.x. It was amazing. It had a lot of very nice things: phenomenal computational power, it was very fast, and it had a nice Python layer on top of a very performant C++ core. Amazing performance.
It was very easy to deploy in production, and it still is to this day. If you've tried PyTorch, I think one of its main drawbacks is still the production story: you can export a Caffe2 model via ONNX, but I believe the deployment support for TensorFlow is still much, much better. Luckily, things are improving on the PyTorch side too. TensorFlow also had a beautiful static graph, which let you export your model and then do all sorts of cool things with it: you could save your trained model from Python and serve it with any other language you wanted, you could run optimizations, you could do a lot of things. It was very nice. And it had a very high-performance input pipeline in the form of the tf.data module. It was built by Google themselves to manage input throughput for the TPU, if I remember correctly, because one of their problems was that they had this amazing hardware but couldn't use it properly without a high-performance input pipeline. So now we have one. But TensorFlow 1.x had a very ugly and clunky API. Yes, there was Python on top of the C++ code, but it was almost not really Python — you had to forget everything you knew about Python when you came to the TensorFlow 1.x experience. The graph was statically defined: you first described the computation and only later ran it, which, if you're familiar with Python, is not at all Pythonic. Together with that, you had the problem of variables being added to a global namespace that was never cleaned nor garbage collected. If you lost the Python reference to a variable, it stayed there, and you had to retrieve it manually. You had to work with scopes explicitly. It was really not a Pythonic experience.
And that, to me, is probably one of the main reasons why PyTorch grew so much in popularity: it actually offered a proper Python experience for deep learning. So it was nicer. If you tried both TensorFlow 1.x and PyTorch, you saw that PyTorch was really usable. You had problems scaling it in production, because there was nothing like TF Serving for it, and in the beginning the cloud providers only supported TensorFlow models and the like. Over time, the PyTorch production story got a lot better, while TensorFlow lagged behind in user experience and usability. It was a mess, and it was a mess until very recently. This slide is green text — impossible to read on this screen, unfortunately, but you'll find it online. It's a joke about the usual experience of a TensorFlow developer over the years: you start out really loving it — ML, crazy, very nice. Then PyTorch comes along and you start thinking, oh my God, I can actually program in this thing without spending hours and hours learning the special syntax required to work with variables, graphs, and so on. And you see a lot of people migrating towards PyTorch. So you start thinking, maybe I should try PyTorch too — but there's a lot of code in production. How the hell do I port everything? You start panicking; you don't want to port everything. Maybe you decide to wait. Summer 2018 comes along and you try eager mode, which should have been the first sign — the advance party of this new era of TensorFlow. You try it, but no, it doesn't work very well yet: there are a lot of bugs, and there isn't much support for it.
So you panic even more, and you wait some more. PyTorch 1.0 is released, and it's amazing: it's blazingly fast, the production story got really better, and now there's also the possibility of interacting with a form of static graph. You really wonder whether you should stay on the TensorFlow boat. But then the announcement came, and TF 2.0 was a reality. So maybe you did the right thing waiting before jumping ship. So what is this 2.0? Well, 2.0 has a new logo — it's sleek, more modern-looking, beautiful. And maybe the same thing happened to the API, because the API is actually sleeker, friendlier, and overall less in-your-face about "you are not comprehending the graph, you cannot touch the global namespace", and so on. So let's see what is really changing. First of all: you either die a static graph, or you live long enough to become eager by default. In TensorFlow 2.0, you no longer have the static graph. Actually, you still have it, but you don't see it — there will be a presentation later today from my colleague, sitting in the first row, who will show you why there is still a static graph powering everything in TF 2.0. The idea is that in TensorFlow, eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. What this means is that you don't need a PhD anymore to use TensorFlow. You can simply do computations as if they were Python. If you have a variable a, a float with value 1.0, and a variable b, also 1.0, and you sum them together, you get 2.0. If you did the same thing in TensorFlow 1.x, it didn't work that way: what you got was a TensorFlow operation node, and you still had to run a session to evaluate it. Well, that is gone.
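A minimal sketch of what that looks like (assuming a TensorFlow 2.x installation):

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# In TF 2.x eager execution is on by default: operations run
# immediately and return concrete values, no session needed.
a = tf.constant(1.0)
b = tf.constant(1.0)
c = a + b

print(c)         # tf.Tensor(2.0, shape=(), dtype=float32)
print(float(c))  # 2.0 -- no graph building, no session.run
```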
Unluckily, in TF 1.x you were required to manually gather your operations into a graph; you had to construct your dataflow graph. In 2.0, you don't need that anymore. Eager is enabled by default — it is now the default behavior. So it is, basically, PyTorch. What you also don't need anymore, especially with the stuff I'll show you later, is things like tf.control_dependencies or the special TensorFlow versions of if, for, and while, because you can just use Python now. I mean, if you're in Python, use the Python constructs. So, yay. And yes, we can now debug — which matters, because the TF 1.x debugging experience was really a pain — and write vanilla Python code. Finally, after, I don't know, four years. Second thing: no more global namespace that sucks in all your variables and never recycles them. In a decision I believe was due to environmental awareness and responsibility, the TensorFlow team decided it was time to start recycling variables. In TF 2.0, whenever you lose track of a variable — whenever you lose your Python reference to it — the garbage collector comes in and recycles it. So you'd better keep track of your variables. This may seem like a bit of a pain for the end user, but it's actually really simple: on one hand it forces you to really know what's going on and keep track of everything; on the other, if you use Keras as the default API — which is now the default API — you don't need to track anything manually, unless you're doing really custom things. And even then, it's not difficult. If you use Keras you basically don't notice, and you can delete things as if they were plain Python objects, and the object is actually gone.
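A small illustration of that behavior (a sketch; the variable name is made up):

```python
import gc
import tensorflow as tf

# In TF 2.x a tf.Variable is an ordinary Python object:
# no hidden global collection keeps it alive.
v = tf.Variable([1.0, 2.0, 3.0], name="my_var")
vals = v.numpy().tolist()
print(vals)  # [1.0, 2.0, 3.0]

# Drop the last Python reference and the variable becomes
# eligible for garbage collection, like any other object.
del v
gc.collect()
```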
So once again, it feels like using Python, not some alien domain-specific language that happens to run in a Python file. And then: make functions, not sessions. We no longer have session.run, the construct you used in TF 1.x to run a particular graph. If you remember what I told you earlier, when you summed two things together in TensorFlow 1.x you didn't get the result; what you got was a graph with an end node representing your result, and to get the value you had to run a session. That is no longer the case: you can simply write a normal Python function. And not only that — as I told you, the static graph is not gone. TensorFlow is not really dynamic-first; the static graph is still there, and you can use it to leverage its performance, because the static graph is faster: it benefits from better optimizations, things like kernel fusion and GPU optimizations, and runs faster. So you can still use it. And to use it, you just need a decorator: the tf.function decorator. You put it on top of a Python function, and the code inside it is parsed and converted into a static graph definition; your function is then run as a static graph, so it's faster. And of course, you can also export it, as I'll show you in the last part of the talk. The nice thing is that you can now define, for instance, a custom training step — which I'll show you later — as if it were pure eager code, and when you're done, you slap the decorator on it, and everything you defined eagerly is turned into a static graph. And the static graph runs very fast. So it's amazing.
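A quick sketch of the decorator in action (the function here is illustrative):

```python
import tensorflow as tf

@tf.function  # traces this Python function into a static graph
def scaled_sum(x, y):
    # Plain Python control flow: AutoGraph converts data-dependent
    # branches like this one into graph ops when tracing.
    s = x + y
    if tf.reduce_sum(s) > 0:
        s = s * 2.0
    return s

out = scaled_sum(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0]))
print(out.numpy())  # [ 8. 12.]
```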
This uses something called AutoGraph — tf.autograph — which is basically a magic black box that eats pure Python functions and spits out equivalent TensorFlow static-graph code. If you want to know more about how it works, as I said, see my colleague's talk this afternoon — probably the last talk of the day, if I remember correctly. It's very interesting; it does a lot of magic underneath, manipulating the Python syntax tree, and it's nice even from a pure programming-language-theory perspective. Next: the death of some APIs. tf.layers, the package you used to create neural network layers in TensorFlow, is dead. tf.Graph has gone into hiding, as I told you earlier. tf.contrib — the huge module containing every third-party addition, half implementation, and cutting-edge experiment built on top of TensorFlow — is gone. And it was about time this API polishing happened, because there was a lot of redundancy, and contrib was a mess. So what do we have now? Well, now we have Keras. Long live Keras. So what is Keras? At a glance, Keras is not a framework in itself. Even though it has now basically become a synonym for TensorFlow, Keras is a set of API specifications for deep learning libraries. In the beginning, Keras was framework-agnostic, meaning you could plug and play different frameworks as its backend. However, those frameworks were TensorFlow, CNTK, and Theano. Theano is deprecated; CNTK was deprecated by Microsoft very recently. So we're now stuck with TensorFlow if you want the cutting-edge stuff. Basically, Keras now lives only on top of TensorFlow; unless someone ports it to PyTorch, I don't think we'll see any other backend for the time being. It is very high-level. Sometimes it can feel a little too magic-y.
You don't really know what's going on unless you open up the engine and look under the hood. But it's very, very simple to use. It offers basically three sets of APIs. For the beginner — let's put it like that — everything you do probably falls into the first category: the Keras layers and models, with the Sequential and functional APIs, which I'll show you later on. Then it also has the training APIs, which are more for experts or for custom models. There's a nice website, and you should also look at the Keras section of the TensorFlow docs. One clarification: if you pip install keras without any TensorFlow, you only install Keras as the specification library, let's call it that. If you want Keras optimized for TensorFlow, you simply install TensorFlow, and Keras for TensorFlow comes with it; you access it through the tensorflow.keras module when you program. Perfect. First things first: before we do anything with models and training, let's see what layers are. Keras layers are the most basic building block you use when defining a model with Keras. They are now the one and only layer API — this is the API to use to define layers. If you want to create a custom layer, you subclass from a Keras layer, and there's a whole guide on how to implement even a custom layer, but out of the box you get pretty much anything you might want to use. It's available under tensorflow.keras.layers, and since we usually import tensorflow as tf, that's tf.keras.layers. Layers are Pythonic objects, meaning they behave like sane Python, not like the computational operations of TensorFlow 1.x. And they're very simple to use: they're basically class constructors, so you have to initialize them, and you initialize them with a configuration.
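For example, a layer configured at construction time and then applied to data (a minimal sketch):

```python
import tensorflow as tf

# Construct the layer with its configuration...
dense = tf.keras.layers.Dense(units=4, activation="relu")

# ...then use it as a callable on actual inputs.
x = tf.ones((2, 3))        # batch of 2 samples, 3 features each
y = dense(x)               # weights are created on this first call

print(y.shape)             # (2, 4)
print(len(dense.weights))  # 2 -- the kernel and the bias
```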
Once they're initialized, they expose a call method — you can use them as callables, passing them other parameters, like the inputs. So, very simple to use. Then we have Keras losses. There are no more separate TensorFlow losses; everything now lives under the tf.keras.losses module. With TF 2.0 you already get a very large selection out of the box, and implementing custom losses is really simple: you subclass tf.keras.losses.Loss, the base interface for everything loss-related, and to have a valid loss you just need to define a call method that accepts y_true and y_pred — the values used to compute the loss, which will be passed to it when it's invoked inside a model. Then we have Keras optimizers. The old TensorFlow optimizers — I can't remember if they lived under tf.optimizer or tf.train — are gone. You now have only Keras optimizers, and they live, of course, inside tf.keras.optimizers. There are a bunch of them, and once again, if you want to create a custom one, there are guides on how to roll your own. So let's dive into the core of the really interesting part: the model APIs. We now have three model APIs. The first is Sequential. Sequential is the most straightforward: you create a model by stacking layers on top of each other, each feeding into the next in a straight line. So it's the simplest one. You can either specify the layers you want by passing them as a list when you construct the Sequential model, or you can instantiate the model and then repeatedly call the .add method to pass them to it. Here are two examples.
In the first, we pass a list of layers; in the second, we create the model and then repeatedly call .add to append the layers we want. Also note that activations, and things like batch normalization, can be passed explicitly as layers, or they can sometimes be configured as properties of a layer — you'll see examples of this later on. Then we have the functional API. Now, the Sequential API has a limit: your stack of layers has to be linear, so you cannot have, for instance, multiple inputs, or an input that joins the computation later on. It's somewhat limited in scope and use, and whenever Sequential feels too restrictive, the functional API is what you should use instead. It's called functional because you initialize a layer and then call it on something, and the graph of the model is created by this chaining of calls to the various layers. It's actually pretty simple to use. In the words of the documentation: the functional API can handle models with non-linear topology, models with shared layers, and models with multiple inputs or outputs. It's based on the idea that a deep learning model is usually a directed acyclic graph — a DAG — of layers; the functional API is a set of tools for building graphs of layers. So it's basically a more advanced set of tools on top of Sequential. Here's how it works. We start by creating the input node — note that we usually never specify the batch size. What gets returned, inputs, contains information about the shape and type of the input you expect to feed to your model. You can inspect this information by calling several methods on the object you get back, which is the result of tf.keras.Input.
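The steps just described can be sketched as follows (a minimal illustrative model, not the one from the slides):

```python
import tensorflow as tf

# Input node: the shape of one sample; the batch size is left out.
inputs = tf.keras.Input(shape=(784,))
print(inputs.shape)  # (None, 784) -- None is the batch dimension

# Chain layers by calling each one on the previous node.
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)

# Package the graph between input and output nodes as a Model.
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.summary()  # prints a per-layer summary table
```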
So you can inspect it and check that everything you're passing is correct. Then we add nodes by simply calling the various layers — or even models themselves: you can plug and play different models as if they were layers, by using the functional API — on the inputs we defined earlier. Layers and models, as I told you, have to be initialized before they can be called; first you initialize them, then you call them to build the graph. Once everything is done, you package it all together using the keras.Model object. With keras.Model, you basically construct it so that inputs are the input nodes — in this case, I don't know if you can see my pointer, but here we have our inputs node, and our output node down there. So with keras.Model, all we need to do in the end is specify which inputs are the first nodes of the computational graph we defined by calling the various layers, and which are the output nodes of that graph. If we have multiple inputs or multiple outputs, the only difference is that instead of a single object, we pass a list. It's really that simple. Once you have the model assembled, you can inspect it by calling the model.summary method, which prints a very nice-looking summary in your console. There are also ways to generate visualizations of your model's graph, so there are plenty of tools already baked into Keras for exploring your model. Then we have the final API, which is the Chainer API, as I like to call it — the subclassing API, which is its official name. Why do I call it the Chainer API? Because Chainer was the framework that popularized this sort of API.
The idea is that you subclass from an interface — a primitive model — and then handle everything yourself: initializing your layers, then defining your forward pass. It basically gives you the most power in terms of customizing your model, but it comes at a cost: you don't have all the checks that are built into the functional and Sequential APIs, and you usually tend to have more bugs this way, because you're defining your own forward pass, so if you do something strange, bugs may arise. It's more error-prone, so the advice is: use it only when strictly necessary. And as I said, it's really that simple to use: in this case, you subclass tf.keras.Model, you define your own initialization method — which calls the superclass's init, and can take a lot of parameters to construct the model properly — and there you define the pieces that will be available to the forward pass; then you define your own forward pass. This is useful when you have unusual models — GANs, for instance — which are usually easier to express this way than with the functional or Sequential APIs. You can also use it to define a small custom model and then plug that model into the functional API, because, as I told you earlier, you can mix and match layers and models, which basically operate the same way: a model is, imagine, just a collection of layers in this case, so you can use them interchangeably. So — we also have the beautiful input pipeline from Google, the high-performance input pipeline. We've seen the layers, the losses, the optimizers, the model — what do we still need for a proper training? Well, we still need the data. How do we fetch data?
Well, we use the tf.data package. Not a lot has changed since TF 1.x, except that, as with the overall TensorFlow experience, it's now more usable and more intuitive. In older TensorFlow you needed to manually create your initializers and wire them up, and so on; now everything you create with tf.data is Pythonic, meaning you can iterate over it in a simple way. The cool thing about the module is the object it introduces, tf.data.Dataset, which is an abstraction you can see as a sequence of elements; you can also use it to define a computational pipeline out of reusable elements and transformations, and everything is optimized under the hood to be extremely fast — and you can tune it even further to squeeze the most out of your hardware. The basic idea is that everything starts from a source, and there are two kinds of sources: in memory, or from files. If you're working with in-memory data, you use tf.data.Dataset.from_tensors or tf.data.Dataset.from_tensor_slices. If the input is stored in the recommended TFRecord format — a file format devised by Google to be extremely performant when used together with tf.data — you use tf.data.TFRecordDataset to construct everything. Once you have it, the Dataset object exposes a series of methods you can chain to create this computational pipeline of transformations, mappings — basically anything you may want to do, you can do. There are pre-built functions and transformations you can apply, or you can define your own, for instance with the map function: you use map together with Python callables, like lambda functions, to define your own conversions and so on. It's very nice to use.
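A small sketch of such a pipeline on in-memory data (the values are made up for illustration):

```python
import tensorflow as tf

# Source: in-memory tensors, one element per slice.
features = tf.constant([[1.0], [2.0], [3.0], [4.0]])
labels = tf.constant([0, 1, 0, 1])
dataset = tf.data.Dataset.from_tensor_slices((features, labels))

# Chain transformations into a pipeline.
dataset = (
    dataset
    .map(lambda x, y: (x * 2.0, y))  # custom per-element transform
    .shuffle(buffer_size=4)
    .batch(2)
)

# The dataset is a Python iterable: just loop over it.
for batch_x, batch_y in dataset:
    print(batch_x.shape, batch_y.shape)  # (2, 1) (2,)
```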
There is good documentation, because, as I said, there are a lot of methods already implemented: you can batch, you can repeat, you can shuffle — everything you need in a proper deep learning input pipeline. Also, as I told you, in TF 2.0 this dataset is a Python iterable, which means you can consume its elements in a for loop, or create a proper Python iterator from it and consume it with the next function. So now we have the input too — time to do some training. Before we can train, there's one more step. There are two ways of training a model in this new API. You can use the pure Keras approach, which is very performant, not so customizable, but if you use the model API that comes out of the box, without tinkering, you're basically safe that no bugs should arise. If you want this pure Keras approach, you have to know what model.compile does: after you create your model, you call this magic method compile, you give it a loss function and an optimizer, and it configures your model for training — it's basically the preparation step before training. As you see, you can pass it several arguments, and there are three very important ones. The optimizer, as I told you, is an instance of a Keras optimizer: you can either pass it the object, or a string with the optimizer's name — a sort of Keras magic that I don't particularly like, because you end up mixing strings and Python objects. I don't really like it personally; I strongly advise against it, but you can do it.
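For instance, a compile call that passes explicit objects instead of strings might look like this (a sketch on a made-up model):

```python
import tensorflow as tf

# A tiny made-up model, just to have something to compile.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Explicit objects everywhere -- no string-based magic.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[tf.keras.metrics.BinaryAccuracy()],
)
```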
Same thing with the loss function: you can pass it as an object, or as a string if it's one of the pre-built ones. Again, I don't like passing strings; I prefer passing the object directly, because I like to see exactly what I'm passing rather than trusting the magic under the hood. Your mileage may vary, but the option is there. Then you also have metrics, which are used for logging and monitoring purposes: they're simply the metrics you want to track and care about. Additionally, you can pass run_eagerly if you want to force the model to run eagerly; otherwise Keras does all its optimizations and it becomes a static graph. This is how you compile a model. For instance, here we have a Sequential model. As you can see, as I told you earlier, even activations can either be expressed as layers — so your model would have more layers in it — or as strings in the layer definition. Again, I don't like using strings; I prefer passing things explicitly. This example is taken from the TensorFlow website. After that, we compile it, as you see here: we pass, for instance, a proper object — an instance of the Adam optimizer — while the loss and the metrics are defined in the stringy way. I mean, it's okay, and it's very simple. Now that we've made this call, the model is ready to be trained. How do we train it? Well, we fit it. If you're somewhat familiar with scikit-learn, it's not that different — Keras has a proper scikit-learn-like API. Unless you have a particular use case, this is the function to use to train your model, and it's vastly preferable to the custom training loops I'll show you later on: it is fast, it is optimized, and it is reliable, in the sense that it's much less buggy than defining your own training loop. Now, model.fit has a ton of arguments.
The most important ones are x and y — the input data and the target data — epochs, how many epochs we want the model to run for, and the batch size. There are two different ways to pass data. If we have data in NumPy form, we have to specify the batch size, and we can also give it validation data, and it will run validation using the metrics we specified. This is pretty simple, as you see. Or we can use model.fit together with a tf.data Dataset, the high-performance input pipeline. If we do it that way, we don't need a batch size, but we do need, for instance, steps_per_epoch, which lets the model know how many steps make up one pass over the dataset. Actually, the story is a bit more complicated, but not that much; you may have other requirements, but it's really easy to debug if you run into errors this way, and I personally recommend always using a tf.data Dataset. So, the last part: what I call the dark power, which is custom training. Custom training is probably, in my opinion, one of the nicest things about TF 2.0. The idea is that beyond the safety of model.fit and model.evaluate lies the dark power of the gradient tape; those practitioners who embrace this power forgo their sanity in exchange for perfect control over the training. What this means, jokes aside, is that you have this object, the GradientTape, a TensorFlow object that records operations; you use it inside a scope, which you open with, for instance, `with tf.GradientTape() as tape:`.
Every operation you perform inside it is recorded, so you can later extract gradients and apply them to variables. In this way you can basically define your own backward pass and everything. The trade-off is that it's more bug-prone, but it gives you a lot of power. If you want to train a generative adversarial network, for instance, this is the way to do it: doing it with Keras and model.fit is a pain — you have to do a lot of tricks — but with custom training it becomes dead simple. Of course, it's easier to get wrong. And here is an example. I don't know if I can scroll it — sorry, it doesn't fit in the slide because there are a lot of comments, but you can find this example on the TensorFlow site. The important thing to see is this part: we open the gradient tape and invoke the model inside it, and all the operations done inside the tape are recorded and will be used to compute the gradients later on. Once we're done — we've called all the models and all the layers we need to call — we exit the scope, and after that, the only thing left to do is to extract the gradients and apply them to our variables using the optimizer: it's this call over here, if you can see it. We extract the gradients, then we use the optimizer to apply them to the trainable weights. This is how you define your own custom training loop.
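The pattern just described, sketched end to end on toy data (the model and data here are made up; only the tape and gradient calls mirror the official example):

```python
import tensorflow as tf

# Toy regression data: y = 3x + 1, and a model to fit it.
x = tf.random.normal((64, 1))
y = 3.0 * x + 1.0
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # optional: compiles the eager step into a static graph
def train_step(x, y):
    with tf.GradientTape() as tape:
        # Everything in this scope is recorded on the tape.
        predictions = model(x, training=True)
        loss = loss_fn(y, predictions)
    # Outside the scope: extract gradients, then apply them.
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(100):
    loss = train_step(x, y)
print(float(loss))  # close to 0 after training
```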
This is usually used together with things like the subclassing API, because whenever you have a model peculiar enough to need the subclassing API, more often than not you also need very advanced training techniques, and maybe model.fit doesn't cut it, so you specify the loop this way.

The very last thing is exporting models. As I told you, I personally think the power of TensorFlow is not in the library itself but in the ecosystem of projects that surrounds it. I wanted to show you some of them, but there won't be time, so reach out to me during the conference and I can show you some very neat third-party libraries, extensions, and tools built on top of it; there are really many of them, from differential privacy to probabilistic programming. So come and see. On exporting models, we are running out of time; these are links that you will be able to click once I release the slides, and they point to the TensorFlow documentation. The TL;DR is very simple. If you trained with Keras and your model is not subclassed, so it is either Sequential or built with the functional API, you can simply call model.save, and you have a lot of options. You can save it either as an HDF5 file or in the TensorFlow-specific SavedModel format, which I usually prefer to work with (let's call it proprietary, but it's not really proprietary, it's open), although with some third-party libraries it can occasionally be a problem. So you save in one of those two formats, and you can save the whole model, save only the architecture of your model, or save only the weights, either as HDF5 or in the TensorFlow checkpoint format.
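The export options listed above can be sketched like this. Note that the exact keyword arguments of the Keras saving APIs have shifted a little between TensorFlow releases, so treat this as an illustrative sketch, with file names of my own choosing:

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

# A tiny model to export; built explicitly so it has weights.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.build((None, 3))

tmp = tempfile.mkdtemp()

# Whole model as a single HDF5 file:
h5_path = os.path.join(tmp, "model.h5")
model.save(h5_path)
restored = tf.keras.models.load_model(h5_path)

# Whole model in the SavedModel format (a directory), the format
# tools like TensorFlow Serving consume:
sm_path = os.path.join(tmp, "saved_model")
tf.saved_model.save(model, sm_path)

# Architecture only (as JSON), and weights only:
architecture_json = model.to_json()
model.save_weights(os.path.join(tmp, "model.weights.h5"))
```

The restored HDF5 model and the original produce identical outputs, which is the point of a full-model export.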
So you have a lot of options. For subclassed models the story is more complicated, and I won't show it here; I'll point you to the link in the documentation. Usually you need the original Python object, so it's more like restoring the model to a previous state, and subclassed models are a bit harder to fully export in a standalone way. It is doable, just a little bit more complicated, and it would require additional time that we don't have.

So, to conclude: if you're using PyTorch, try TF 2.0; if you're not using PyTorch, try PyTorch too, I really recommend it, but I also highly recommend trying TF 2.0 now. Remember that TF 2.0 is not yet fully stable, in the sense that it is still in beta. It's actually a lot more stable than it was in alpha, but there may still be breaking changes. That said, we use it in production every day and it is usable; sometimes you may bang your head against the wall, but more often than not it works really, really well and it's a beautiful experience. If you love Keras, try TF 2.0, because it's basically Keras on steroids. If TensorFlow 1.x scared you away, if it made you feel stupid with the graph and everything, and I totally understand the feeling, I had it too, then come back for TensorFlow 2.0, because everything that scared you is gone. But most importantly, follow me on Twitter: I am mr_ubik, and I tweet a lot about deep learning, TensorFlow, and Python. And with that, I have finished, so thank you for listening. If you have any questions, I don't know if we still have time; otherwise reach me on the conference Telegram group or find me around the conference. Happy EuroPython!

Thank you, Michele, a very interesting talk. So we have time for one question, if somebody has one.
Hi, thank you for your presentation. My question is: we can enable eager execution on a Keras model to be sure that it will be executed eagerly, but what is the benefit if we are not using anything from TensorFlow, just the pure Keras API? — Oh, if you use pure Keras, I don't think you need it. There are some applications, for instance if you have a layer that is constructed in an imperative way, which is usually done with subclassing and things like that. Personally I've never had to use it, but if you do need it, it's nice to know that you can force Keras to behave in an eager way. — Okay, thank you. — Maybe one more very quick question? Anybody? Okay, so thank you, Michele.
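The switch discussed in this question can be sketched as follows. In recent TensorFlow releases it is the run_eagerly flag on compile (around the TF 2.0 timeframe the equivalent was setting the model.run_eagerly attribute); the model and data here are illustrative:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
# run_eagerly=True asks Keras to skip graph compilation of the
# train/predict steps, so Python-side debugging (print statements,
# breakpoints) works inside imperatively written layer code.
model.compile(optimizer="adam", loss="mse", run_eagerly=True)

x = np.random.rand(8, 3).astype("float32")
y = np.random.rand(8, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```

As the answer notes, this costs performance and is mainly useful for debugging subclassed, imperative layers rather than for plain Keras usage.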