So, the topic we'll be talking about is deep learning in the browser. I think I should probably stand on this side, Bargava. Yeah. I'll do a quick introduction. My name is Amit Kapoor, and I work primarily at the intersection of data, visuals, and stories. I'm interested in looking at the world through a data lens, trying to understand it through visualizations, and hopefully telling good stories about it. I do this by teaching people in this domain, as well as by helping startups and consulting on projects for companies that want to adopt a data-driven lens. So that's a little bit about myself.

Hi, everyone. Welcome to today's session. This is Bargava. My background is in machine learning and deep learning. I am currently building a product focused on personalization, targeted at content-rich businesses like media and content marketing.

Okay. So this topic is really something that's developing right now: how do these two domains fit together? Deep learning, and doing it in the browser — both are fairly fast-moving targets at the moment, and this talk is just some thoughts and ideas on how we can use them in our own work. I teach, so one of the things I think about is the learning paradigm people can adopt when they approach new topics: how can they think about these topics and learn something about them? In that learning paradigm, how does doing deep learning in the browser really help us? And I know we talk about AI a lot, but I don't use the word AI too much; I really think of it as augmented intelligence — helping people do something they're already doing. In one way, I'm asking where the human comes into the loop of this whole data science, AI, or deep learning process, whatever you want to call it, and how we can help the human in that loop to understand better, to create better, or to build something better.

The talk is going to use three different lenses. You're a user and you want to use this to understand something; you're a creator, a creative person, and you want to use deep learning to create something; or you really want to build something — make a tool that other people can use. So the three lenses we're going to adopt are users, creators, and builders. Or, to simplify: users are trying to learn — I want to understand what's really happening. Creators are trying to play — how can we play and create something with this? And builders are really trying to make something. So: learn, play, create.

How many here are familiar with deep learning at that conceptual level? Okay, not bad. How many are familiar with doing deep learning in the browser? One, okay. So very few people. And how many are familiar with, let's say, one of the tools to access the browser — the JavaScript ecosystem? Okay, a few more hands go up there. So that's one of the challenges with the browser, really, right?
I teach deep learning, and to a large extent we use an ecosystem that is heavily built on scripting languages like Python or R — or, for larger data, Scala or the Java ecosystem. So the browser really raises this question, when I talk to practitioner friends who are trying to do this: is it even possible? The other pushback I get when I've given this talk before is people asking, why are we talking about UI? The association with the JavaScript ecosystem is that it's really the front end — and what we're trying to do here is connect these two worlds and show how that's possible. Does that make sense? Okay.

So why do it in the browser? These are my three reasons. One, it allows everyone immediate access. If it's really running in the browser, even without a server at the end, I can immediately access what I'm building. I can reduce the friction between the end product and what the user is doing, and hopefully reach a wider audience than the coding-driven, set-up-your-own-stack world we're in at the moment. So how do we expand the audience that has access to this? Is that context helpful? Yeah?

Historically, it's been really hard to do this. If anybody has tried doing machine learning or deep learning, or just playing with data, in the browser, it's been hard. Most of the machine learning libraries there are not that well developed. They don't access the GPU on your computer — the only GPU you have is the one driving your display, and we need some way to access it. Historically these have all been CPU-based libraries, and they're not good at the numerical operations that are key to all the matrix work we do. There's also an ecosystem issue: we don't just need matrix multiplication in the browser; we also need the supporting ecosystem — to get data in, to visualize it in an easy way, to create things, and to run reactive environments like notebooks the way we're used to. A lot of ecosystem is required before we can really create here. So this has historically been a big challenge, and it's now changing.

The biggest change at the moment is WebGL-accelerated learning frameworks. The way to access the GPU in the browser — if anyone has seen 3D models being rendered there, they all use what is called WebGL — is that the same WebGL 3D graphics engine that drives your rendering is now available for the kind of matrix computation we would otherwise do on a server GPU. These WebGL-accelerated frameworks are really what allow us to start doing this, and it's happening right now. I teach Keras, for example, and you can right now use a library like Keras.js, which will take the model you've trained and use it for inference in the browser. So if you're using Keras, there's an option. Even MXNet, I think, has a JS option, right?
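To give a flavor of what in-browser inference looks like, here is a minimal sketch using TensorFlow.js (which comes up in a moment); the model URL and the input values are made up for illustration:

```js
import * as tf from '@tensorflow/tfjs';

// Load a model trained in Keras and converted for the web, then run
// inference entirely in the browser. (URL and input shape are hypothetical.)
const model = await tf.loadLayersModel('https://example.com/model/model.json');
const input = tf.tensor2d([[5.1, 3.5, 1.4, 0.2]]);  // one illustrative sample
const prediction = model.predict(input);
prediction.print();  // probabilities, computed on the user's GPU via WebGL
```

The point is simply that the forward pass runs on the user's machine; no request goes back to a server.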
So some of these libraries already have JS equivalents that you can use to drive inference. There are also framework-agnostic libraries like WebDNN, which are faster and can support a wider set of frameworks. If you're only looking at model inference, those are your options. But hopefully we want to go beyond model inference: at some point you may want to actually train, if you can, or at least do transfer learning, in the browser itself. Then you need a library with real access to the GPU. TensorFlow has a JS equivalent, TensorFlow.js, which emerged recently — really, really new — and there is also a project by MIT students called TensorFire, which is trying to do the same thing. So we have a set of WebGL-accelerated learning frameworks that we can start to use in the browser itself. If you're not familiar with this landscape, that's what it currently looks like.

So let's go back to our three questions: how do we learn, play, and create in the browser? How do we learn? We help a user learn by letting them do something — by letting them actually explore. I want to give the user a way to explore something and understand what's happening. This whole concept of explorable explanations is based on the idea that I can provide an interface for someone to play with the data itself, with the algorithm itself, or with the data model itself. It's really about getting people to learn through active learning — and this is not the active learning of machine learning; this is the user actively engaging with the model and learning from it. Makes sense?

Okay, so let's look at how we can help. I'm a teacher, or an explainer — and all of us are explainers in this AI context, when somebody comes to you and asks why the model is doing what it's doing, or why the algorithm is giving this output. I want a way to explain to people how this thing works — to help them build intuition about what's really happening with my deep learning models. I can build intuition at three levels. I can help them build intuition at the algorithmic level: what is really happening in the algorithm. I can help them build intuition at the data level. And I can help them build intuition at the interaction of the two — the algorithm and the data, or let's say the data model: once trained on the data, the algorithm gives me a model, and I want to see how the data and the model interact. So there are three layers of abstraction I'm trying to help people build intuition on. Yeah, does that make sense?

Okay, here's one example — not so much in the deep learning context, but a beautiful article by Mike Bostock on understanding and visualizing algorithms. These algorithms could be as simple as explaining what quicksort or merge sort is, or more complex, like how a depth-first search or a breadth-first search reaches a solution, right?
And it's a very good analogy for deep learning, because when we do randomized search, or grid search, or Bayesian optimization, we are really trying to reach an optimal solution without exploring the entire space. So thinking about how we can visualize the search space is one way to think about visualizing algorithms: what surface of my possible space has the algorithm actually searched? So this is a good metaphor — and these are all examples that run in the browser. The links are there; I'm just walking through some of them to show the possibilities.

Some of the initial examples: Andrej Karpathy, who created ConvNetJS — has anybody seen or played with ConvNetJS? Okay. Andrej Karpathy is an image researcher from Stanford — he wrote the very nice notes for the CS231n Stanford course, if anybody has done it — and he's now director of AI at Tesla. In the early part of this decade he created some of the initial libraries for looking at what the space is, and what the different layers are really learning at each level. It's a simple visualization: can I look at different activation functions, different initialization functions, different architectures? It was a very small library, and it came out around 2014. So those are nice demos if you want to look at how the algorithms really work.

Then in 2016 we got the TensorFlow Playground — if anybody plays with TensorFlow, they may have seen it. It's a small port of the ConvNetJS idea, written to visualize pretty much everything we do in deep learning, at least in the solution space: changing the learning rate, the activation, the regularization, the number of hidden layers — and, given some simple input data, seeing what the output space looks like. A really helpful tool to help people understand the basic algorithmic design. Is it just called Playground? Yes — it's the TensorFlow Playground, at playground.tensorflow.org; if you just search "playground TensorFlow" you'll get it, and the link will be in the deck. All of these are links you can reach, experiment with, and play with on your own; I'll show a few live when we start looking at the code. So this is what really happens: you change the parameters, and as the epochs go on, you start to see, given this architecture, what the output looks like.
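Under the hood, the Playground tinkers with exactly the kind of tiny network you could write yourself in a few lines of TensorFlow.js. Here is a minimal sketch, where the toy data, layer sizes, and learning rate are all illustrative assumptions:

```js
import * as tf from '@tensorflow/tfjs';

// A toy two-class dataset (illustrative values only).
const xs = tf.tensor2d([[0, 0], [0, 1], [1, 0], [1, 1]]);
const ys = tf.tensor2d([[0], [1], [1], [0]]);

// A tiny network exposing the same knobs the Playground does:
// number of layers, units per layer, activation, learning rate.
const model = tf.sequential();
model.add(tf.layers.dense({inputShape: [2], units: 4, activation: 'tanh'}));
model.add(tf.layers.dense({units: 1, activation: 'sigmoid'}));
model.compile({optimizer: tf.train.sgd(0.03), loss: 'binaryCrossentropy'});

// Watch the loss fall epoch by epoch, just as the Playground animates it.
await model.fit(xs, ys, {
  epochs: 200,
  callbacks: {onEpochEnd: (epoch, logs) => console.log(epoch, logs.loss)},
});
```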
So it's a very simple neural network that you can make: you can add layers to it, and you're constrained in the type of data, but you can play around with the learning rate, the activation, the regularization and regularization rate, and the type of problem — regression or classification. If I think about the two days in which I teach TensorFlow or deep learning, this is pretty much the search space we try to cover — obviously going deeper on each of these questions — but we're really trying to get people to build intuition on how these algorithms work. So that's Tinker With a Neural Network, from 2016; it isn't using any of the libraries I just talked about — it was a custom library.

This next one is using TensorFlow.js. It's actually a live notebook — we'll come back to that when we get to creating — and it's trying to show the same concept of what's really happening in a neural network. I'm fitting a curve to an arbitrary set of points: the whole concept of universal approximation, the idea that a neural network can approximate any function, which is the basis of why neural networks are so powerful. It's just a GIF right now, but you can change the number of iterations and watch the fit improve — very much the kind of code in the sketch above. So what was earlier done in a browser-based UI is now done in a notebook-like environment, which we'll see later on, and you can code it pretty simply, pretty quickly.

Okay, so that's the algorithmic level. You also need tools that run in the browser to look at the data. A few things really help in the data space. If you have multi-dimensional data, how can I facet it and look at all possible visualizations of how it's distributed, especially if it's tabular? Facets Dive is one way to do that. It's a unit visualization, visualizing each possible element — the element here is a data row, but if you had an image dataset you could visualize each image in the same space, see how the images vary by class, or, once you've done the prediction, map them back into, say, a confusion matrix.

We also sometimes want to look at data in a dimensionality-reduced way, and the embedding projector is again a standalone, WebGL-driven tool, recently improved to run t-SNE with a linear-time approximation, so it's really fast now, and you can throw a lot of samples at it. These are MNIST images — no deep learning talk is complete without an MNIST example — shown in a three-dimensional t-SNE mapping, and you can do the same thing for your word vectors or image vectors.

So: I want to help people understand the data, I want to help people understand the algorithm behind it, and eventually I want them to understand the model that's generated — how the data and the model interact. How can we do that? How many of you are familiar with activation maps? A few, okay. Explainability is a hard problem in machine learning: the more complex the algorithms, the harder the space becomes to actually understand. It's really hard to translate, right?
So one of the dominant techniques these days for learning about activations is optimizing for a particular image or a particular activation. You can think of this as feature visualization. I have an image on the left with a dog and a cat, and I'm looking at which neurons get activated for each classification: when it picks dog, which neurons fire, and which part of the image drives them? The same for the cat part. What I really want to understand through this feature visualization is whether my network has learned the right features — is it looking at the right part of the image as it learns? There are a few wonderful examples on distill.pub — if anybody doesn't know it, go and look it up — on the building blocks of interpretability, really showing how these activations light up for different parts and features. This matters both for communicating whether I've learned the right things and for my own understanding when I'm debugging the model.

I'll touch on one more interaction, where the data-model interaction becomes even clearer. This is a far simpler example, but probably one of my favorite examples for explaining what's happening. Many of us build models, and the models then get translated into business objectives, so you want to see what the different decisions you take on the model result in. This is a loan classification model: I have two different populations, blue and orange, and the two unit histograms are showing the default and non-default rates for those loans. Depending on the threshold I choose, I can make decisions about what my expected income is going to be. This is the part we as data scientists would do in a notebook — but getting the business to make different decisions at these cut points, to take the different strategies listed on the left, lets them see what the business impact is. Now, if my model is really running in the browser — if I can at least do inference in the browser — I can run this and get people to understand that the model is not just a set of numbers that comes out as the ground truth: there's a business translation on top of it, the strategy you take, the cutoff you pick. In this case the focus is really on fairness — I don't want to discriminate between two classes of people — and I can make different decisions about what happens.

So I think this is really the final view, where the data-model interaction reaches the end user. The previous one is much more for me to debug, though it could potentially be used by an end user too; here, I'm connecting my data science or AI problem back to the business problem and really helping them make these decisions. That's where running things in the browser can be interesting: I can get them to take decisions, do inference in real time, and make these judgments — rather than data science being a separate silo of coders where an answer comes out and we never connect with the business. So this connection with the business, or with the user in this case, is really key.
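To make the threshold arithmetic concrete, here is a hedged sketch of the calculation behind such a demo; the scores, outcomes, and profit-and-loss figures are invented for illustration:

```js
// Expected profit of a loan portfolio at a given approval threshold.
// Each applicant has a model score and a true outcome (illustrative data).
const applicants = [
  {score: 0.92, repays: true},  {score: 0.35, repays: false},
  {score: 0.71, repays: true},  {score: 0.64, repays: false},
];

const PROFIT_IF_REPAID = 300;   // assumed gain on a good loan
const LOSS_IF_DEFAULT = -700;   // assumed loss on a bad loan

function expectedProfit(threshold) {
  return applicants
    .filter(a => a.score >= threshold)  // approve only above the threshold
    .reduce((sum, a) => sum + (a.repays ? PROFIT_IF_REPAID : LOSS_IF_DEFAULT), 0);
}

// Wire this to a slider and the business can drag the cutoff
// and watch the expected income change live.
for (const t of [0.3, 0.5, 0.7]) console.log(t, expectedProfit(t));
```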
And unfortunately, building this kind of thing is still very hard to do. This is really hard. I've had to create communication pieces asking: how do I communicate my model? This is one thing I think we really struggle with in deep learning, and this approach really lets us link it back to the business problem and say, here's how it translates to what you're doing. Does that make sense? Any questions?

So, that's a good question. Processing power comes in in two different contexts. When I cover the next part, on model inference, we can discuss the processing power for running the model in the browser. The other question is: can I load data into my model in the browser, and how do I do it? That side is getting more and more simplified. There are emerging standards: just as we have columnar data storage on disk, like Parquet and ORC, we have something similar coming for interchange — the Apache Arrow project — which does columnar data exchange across environments: build your data in Python, export it to R, to Scala, to anything. And there is a JS equivalent of it, which lets me export my data not just as CSV or JSON, which carry no metadata, but in a very efficient buffer-driven way, and lets me play with the data and load it incrementally — all the things I do on my server-based stack, I can start to do in a similar fashion. There will obviously be inherent limits to doing things in the browser — the bandwidth you have, the pipeline — and you'll have to make trade-offs: do I load the entire dataset, or, even loading it as buffers, do I need to reduce it? There's also a visualization limit, but I'll talk about that in the second talk, on data visualization architecture.

So the three things this really brings you. Visualization: all of us use visualization libraries, and the eventual abstraction ends up being HTML, JS, or CSS if you're putting anything on the web, no matter how you access it — so there's a possibility to do much more interactive things here. It is reactive, in the sense that people can react to it. And there is an immediacy to it that is really hard to get otherwise — even if you have a notebook, you need somebody to start it up to see your visualizations. Building in the browser and letting people engage directly can really reduce that friction, and that's important for many users.

Okay, the challenge here is that it's multi-disciplinary. Every time I talk about this, people say, I don't really want to touch JavaScript or any other tool. But it's not just about JavaScript — it's not really about the tool itself. It's also: how do I communicate the idea? Do I know how people will interact with it? So when I say multi-disciplinary, I don't just mean the coding; I mean thinking about how the end user will interact with it, how they're trying to learn, and whether I can enhance that learning. And that learning is not just presenting my data, but thinking about what the interaction model will be. That helps in democratizing — getting both users and learners on board with something that is still very hard for most people to do.
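One aside on the data-loading point above: here is a minimal sketch of reading an Arrow table in the browser, assuming the apache-arrow npm package and a hypothetical URL; exact function names vary across versions of the library.

```js
import { tableFromIPC } from 'apache-arrow';

// Fetch an Arrow IPC file and read it as a typed, columnar table —
// no CSV/JSON parsing, metadata included. (URL and column are hypothetical.)
const response = await fetch('https://example.com/loans.arrow');
const table = tableFromIPC(await response.arrayBuffer());

console.log(table.numRows);
console.log(table.getChild('loan_amount').toArray());  // one column as a typed array
```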
Okay — so how do we create? Some examples. The easiest one is model inference — which was your question: can I load my model? Let's forget about training; I just want to load my model and start using it to build things. Model inference in the browser world kind of reverses the normal equation. Instead of sending my data to the server — the typical architecture: send the data up, the inference comes back — I'm doing the reverse: I'm sending the model to the browser and doing the inference there. There are issues with this that many people will raise, both in the cost of sending it and in the privacy of my model or IP. But understand that in the browser you're reversing the traditional architecture: instead of sending data, you're sending the model to the user.

So let's look at some examples, both abstract and perceptual. Deep learning works really well for perceptual data at the moment, less so for tabular, abstract data, so most of the examples will be perceptual. This is sentiment analysis — the classic example on IMDB data — and as you type, I can run sentiment analysis and give that feedback to the user. Really interesting applications: classification as you type a comment — I can sense whether it's going to be helpful or harmful to the conversation, flag it immediately, give you feedback, and help you change it, if that's your domain. I have two MNIST examples. The second one is obviously image inference: as I draw, I can start to infer what the image is.

The bulk of the most interesting inference work right now is in the art and creative domain, because businesses haven't really picked this up that much yet. Neural style transfer — the very common example of combining two images — is now becoming easier. The second example, which is really interesting: I can also do image augmentation in the browser. If I'm training with a very small amount of data, I can augment images right in the browser and see whether the inference still holds — if I make some decisions on my model, I can augment an image in the browser itself and check whether the model still performs well when it sees the same image in a different context. That's really important for a lot of practical business cases, where we don't have huge ImageNet-scale datasets but a small, curated, three-class, thousand-sample dataset, and we want to run this stuff in the browser.

The second thing that's underappreciated is how we collect data. When we're teaching this stuff, downloading a dataset and building a model is one challenge, but the moment you move up the pipeline, the fundamental problem becomes: how do I collect more data from my users? And we need to think smartly about this. Yes, Mechanical Turk or crowdsourcing is one option. But what if I have a small model, I get users to give me more input, and at the time of input itself I get them to validate whether the guess is right? That's the Quick, Draw! dataset, if anybody has played with it — scribbles where, as you draw, the computer tries to guess what it is.
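The pattern being described — infer as the user draws, then ask them to confirm — is easy to sketch. In this hedged outline, the classifier, the canvas, and the askUser prompt are all assumed or hypothetical:

```js
// Every confirmed guess becomes a new labeled training example.
const labeled = [];

async function onStrokeEnd(classifier, canvas) {
  const guesses = await classifier.classify(canvas);    // e.g. an ml5-style classifier
  const top = guesses[0];
  const ok = await askUser(`Is this a ${top.label}?`);  // hypothetical UI prompt
  if (ok) {
    labeled.push({image: canvas.toDataURL(), label: top.label});
  }
}
```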
Or, in reverse: the way they collected the data was to get people to draw simple things — a scissor, a circle, an animal — and the computer would guess at that point and validate it. The entire Quick, Draw! dataset was basically this exercise: somebody draws for 20 seconds, the computer guesses, and then it comes back and gives you feedback. I think thinking smartly about using model inference to give hints to the user — helping in the process of creating or validating data, augmenting my data — is the real benefit here. It's what we don't think about enough when we just do Kaggle problems: we solve the easier part, not the upstream part of how we'll get the data and validate it, and that's where model inference can really help. This is kind of self-supervised or semi-supervised learning.

I haven't covered more examples, but there are two specific libraries built on top of TensorFlow.js — Magenta.js and ml5.js — which are really focused on creative coding, and if you want to see the exciting work on GANs and all of that, that's where you should go and look. People are doing things with music, beat detection, art. There's a wonderful example of somebody running t-SNE on font data to identify which two fonts go together. As a programmer, I feel we don't really have a good sense of fonts — we don't understand what goes together beyond the standard fonts — and this just takes font data, runs t-SNE on it, finds similarity indexes, and gives you that feedback; you could run it as part of any application you have on the creative side. That may not be this conference audience's lens, but if you're keen on seeing interesting work that will eventually trickle down to business, look at Magenta.js and ml5.js.

Okay, so two questions I raised already. One is obviously data versus model privacy. The biggest problem with doing anything in the browser is: I spent so much time collecting my data and fine-tuning my weights — I don't want to share those weights with everyone in the world, who can download them from the browser itself. If that is your concern, it may be a valid reason not to go down this path. There is no easy solution to it — even though the blockchain people may be thinking about one, there is mostly no easy way to protect my model from somebody taking the weights, because I am sending the weights to the browser. The second part is: how do I make sending the weights more efficient? I need to think about sending a smaller model, and there are things we can do — quantization, to make the model size smaller, and strategies for reducing the model both in terms of word embeddings and in terms of model size. If you do that, you can build deployments that work better for the web and mobile.
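As a concrete flavor of quantization at conversion time: the TensorFlow.js converter can shrink weights as it converts a Keras model. A hedged sketch — the paths are illustrative, and the flag is from memory of the tool:

```sh
# Convert a Keras model for the browser, quantizing float32 weights
# down to 1 byte per weight to cut the download size.
tensorflowjs_converter \
    --input_format=keras \
    --quantization_bytes=1 \
    model.h5 web_model/
```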
Let's say I've now managed to convince you a bit that this is interesting enough to experiment with in the browser — to experiment and see what the possibilities could be. How do you do that? How do we build? Is there an environment for it? The idea is really: can I rapidly prototype? I want to experiment with this in an easy way — how do I go about it? There are a few options. If you like graphical interfaces, there is a model builder in deeplearn.js, the older incarnation of TensorFlow.js, where you can drag and drop and start building. But I'm guessing a lot of people here would be interested in doing it in code, so the other option is: can I write code immediately in my browser and run all of this? The equivalent of Jupyter notebooks — reactive notebooks on the web — is Observable, which basically lets me access the entire JavaScript npm ecosystem and run things in the browser in a very reactive way.

I'll show you one example of that. Is it running? No? Okay, sorry — I think the screen isn't mirrored... there, I've mirrored it. So this is literally an interactive notebook running in the browser itself. You write code; you can write markdown as you would, and you get this. The interesting thing here, for example, is that I'm running a MobileNet model. I get an image from Unsplash — there's a little boilerplate code just to hit the Unsplash API and fetch the image, plus the code for the slider — and I'm trying to predict what the classes are: the very simple model-inference example, running here. How does it really work? As I select the image, I'm just using ml5.js: I load the library, which is literally one line of code; I load the model — the MobileNet classifier; and I get the predictions for that image. What would probably be five lines of code in Python is three lines doing the same thing here. I'm loading a model that's already pre-trained, and now I can do model inference right in the browser. I haven't downloaded anything, and the whole thing is reactive.

The one key difference for people who use Jupyter notebooks is that all the cells are linked — it's a topological order rather than a linear order. Every cell is connected, so if you update one cell, the dependent cells automatically update. That's the key difference. But in three lines of code I can start to play with a deep learning model, just for image inference. And if you're excited about this, then like anything else on the web you can just fork it, make changes, and build on the same simple example — say, I want to show feature visualization on top of this; how do I do that? It's not really hard to do anything with the web.

Let me show another example, on PoseNet, which has been very popular these last couple of weeks — people identifying poses. Is the internet okay? In this case the image is Roger Federer, for the tennis fans, and these are my PoseNet keypoints; I can just play with the image. And this is the example of what I was saying: this was not created by me — I'm just forking somebody's work, and I can play around with it, interact with it, change it, publish it, and make the link available to everyone. So you can just go to Observable, at @amitkaps, and do it — or the links will be there on the speaker deck.
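The three lines in question look roughly like this — a minimal sketch assuming the ml5.js script is already loaded on the page and that an image element with id "photo" exists (both are assumptions):

```js
// Load a pre-trained MobileNet classifier and classify an image element.
const img = document.getElementById('photo');              // hypothetical element id
const classifier = await ml5.imageClassifier('MobileNet'); // one line to load the model
const results = await classifier.classify(img);            // one line to predict
console.log(results);  // e.g. [{label: 'tabby cat', confidence: 0.83}, ...]
```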
You can just go and click — it's really easy for people to start doing and creating these things, and exploring these ideas: how do I build something? How do I help people learn this? How do I help people make something? So that's the one creative option for playing around with this.

Taking it further than the browser, you can obviously use Node and take the same stuff to the server — run on Node what you'd run in Python or R on the server — so if you really need GPU access, you can use the same code there. It's still under development, so you may not have full parity with the custom layers or features you're using in Python and C, but that will improve as the ecosystem improves. I'd also suggest trying the other libraries I showed — for fast visualization at runtime, and the columnar in-memory format, if you want to work with in-memory data.

So, the way forward. I hope I've managed to convince a few of you to go experiment with this and see the potential: how you can help people learn, help people build something, help people create something — and start yourself as a creator. Hopefully we'll have a stronger ecosystem, more libraries in this domain, and deeper use cases to show once many of you come back with them. Hopefully many of you will come back and share that with us. We're at @amitkaps — on Twitter, and at amitkaps.com — and Bargava is at @bargava on the popular social media.

Right — I mean, if you are interested in those setups, and you're running it within your own network, you can basically do that: if you're on HTTPS and using a local network, you could do the same thing there, so these could be applications that run within your own systems while still playing with this. If you want to build production stuff — like what we're doing, serving through APIs — it's very similar to how you'd serve Python code: you'd use the Node part of the JavaScript ecosystem. But then it really depends on what you're comfortable with as tooling — whether you want to approach it through Python or R, or through Node if that's your developer ecosystem and it supports what you need. The thing I'm actually excited about is not having to do a lot of that, and getting people to immediately play with it and create — which has its own challenges in a production environment, where I think the biggest challenge is that people don't want to share their model with the world: I don't want to give out the model; I want the data to come to me. As for real-world examples — well, I'm using it to teach people how to do this. We are at the early stages of deep learning, and there is a very small community that really does this stuff, so how do we reach out to a larger community to explain what we are doing? For me that's not just academic in one way; it's really helping people understand what we are doing, and I'm building things on top of it. On ml5.js, creative-coding people are doing real work. In your domain — if, like you're saying, in banking and production I have more security regulations — will I go there?
Probably not — but you'll have to take a call on that. I mean, for me this is as real-world, in terms of reaching out and explaining things, as anything hidden in a Jupyter notebook and accessible to maybe a smaller set of people. But your mileage may vary. Okay.

Thanks so much, Amit and Bargava. I'm afraid that's all the time we have for today — if there are any questions, I'm sure they'll be around. Let's have a round of applause for both of them. Thank you.