Hello everyone. I am Chetan Khatwik from X1 Labs in Bangalore, India, and we are speaking right now on the topic of AI technologies and applications for the enterprise. Mainly I will speak about something that is not on the slides: when we create a model in machine learning and AI and want to run that model on a mobile device, it is a little cumbersome, because the model requires training on a GPU but you want to do inference on the phone; you train on the GPU and infer on mobile. So I will discuss what tools are available and what techniques exist for that.

As you know, software is eating the world and AI is eating software. GPUs and TPUs are eating linear algebra, linear algebra is eating deep learning, deep learning is eating machine learning, machine learning is eating artificial intelligence, AI is eating software, and software is eating the world.

When we define machine learning, it has several components: supervised, unsupervised, and reinforcement learning, and you can do deep learning with both supervised and unsupervised learning. I would say around 80% of the economic value in the market today comes from supervised learning: you provide a labeled data set, you train on it, and you do inference. And I still believe deep learning is not capable of the way human beings think; we are nowhere near the human brain yet, at least in my view.

The difference between traditional machine learning and deep learning, as I see it, is this: in traditional machine learning you need to define the features yourself and then train a statistical model on them, whereas in deep learning the model understands the features automatically. For example, if you provide house prices, it will figure out that the schooling must be good if people in that area are paying high house prices. Likewise for images: if you provide an image, a deep model will understand which objects it contains, and for this we use a pre-trained model such as YOLOv2, which recognizes the objects and gives you the outcome.
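To make that contrast concrete, here is a minimal toy sketch, not from the talk itself: the data is synthetic and the attribute meanings are invented. The traditional model is given only the features we hand-picked, while the small network receives every raw column and learns its own intermediate representation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from tensorflow import keras

# Synthetic "house price" data: 1000 houses, 8 raw attributes each.
X = np.random.rand(1000, 8).astype("float32")
y = X @ np.arange(1, 9, dtype="float32")      # made-up linear ground truth

# Traditional ML: we decide which features matter (say the first three columns,
# imagined as area, school rating, neighborhood income) and feed only those.
hand_picked = X[:, :3]
linear = LinearRegression().fit(hand_picked, y)

# Deep learning: feed every raw column; the hidden layer learns its own features.
net = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1),
])
net.compile(optimizer="adam", loss="mse")
net.fit(X, y, epochs=5, verbose=0)
```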
So we have this kind of demo: it detects the emotion of a person. Everything here is pre-trained; nothing depends on me. What is the technology behind it? A convolutional neural network, a recurrent network, and the camera for vision. It really is all about vision: if we can make a camera as smart as our eye, it can do everything. And this is the one I was talking about, YOLO: it detects every object in the video from the webcam, running right now on a CPU. It detects the bottle, and there are a table, a phone, a device, a keyboard, a cup, and a person, and it is not using any GPU, not even a discrete NVIDIA card. It is pure inference. The goal of this talk is exactly that: you do not need a GPU for inference. Either you understand the architecture of the network, with its input layer, middle layers, output layer, and fully connected layers, or you just treat it as a black box and make inference work (a minimal CPU-only sketch of this kind of YOLO setup follows below).

As you know, AI and neural networks have existed for a very long time, so why did it take so long for them to arrive at the interface or application level? Because at that time there was no web-scale data. Every year we are doubling the volume of data, and the industry got massive GPUs and TPUs, so the innovation was not only on the software side; on the hardware side there was good innovation too. And we got different architectures from different labs: Google Brain and DeepMind; Geoffrey Hinton from Toronto, Yoshua Bengio, and Ian Goodfellow; Facebook with Yann LeCun and Soumith Chintala. These R&D fellows published papers and, just as importantly, published their source code, so they helped the community and open-source society learn from that work and change the architecture and depth of the networks at the program level.

As I said, supervised learning just requires an input and a response; if you map input to response well, it works. For example, you provide an email as input and the response is spam or not spam; you provide an image and the response is which digit or object it shows; you provide audio and get text; or you do machine translation, which is still not fully accurate and remains active research.

What can be done with machine learning and AI? For use cases, you can do character recognition (OCR), computer vision, and conversational dialogue agents, and a new thing that Andrew Trask from DeepMind is trying: decentralized artificial intelligence with the help of blockchain. In a couple of these use cases AI does not take care of data security, because the data is public, and that is where the concept of federated learning comes in.

As for tools and technologies, we can use R, Scala, and Python; for data quality, Pandas, NumPy, and Spark (there is a package in Apache Spark for data cleaning); for predictive modeling, scikit-learn, scikit-image, statsmodels, and SparklingML.
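Here is that black-box inference path in code: a minimal sketch of CPU-only YOLO detection using OpenCV's DNN module. The file names (yolov3.cfg, yolov3.weights, coco.names) are assumptions referring to the standard Darknet release files you would download separately, and the single-frame read stands in for a real webcam loop.

```python
import cv2
import numpy as np

# Minimal sketch of CPU-only YOLO inference with OpenCV's DNN module.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)  # plain CPU, no GPU needed
classes = open("coco.names").read().splitlines()

cap = cv2.VideoCapture(0)                 # default webcam
ok, frame = cap.read()                    # a real app would loop over frames
cap.release()
assert ok, "could not read a frame from the webcam"

# Normalize the frame into the 416x416 blob the network expects.
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Each detection row holds box coordinates, objectness, then class scores.
for out in outputs:
    for det in out:
        scores = det[5:]
        cls = int(np.argmax(scores))
        if scores[cls] > 0.5:             # report confident detections only
            print("detected:", classes[cls])
```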
For deep learning there are various frameworks from different open-source companies and communities: TensorFlow from Google, Caffe and Caffe2 from Facebook, Keras, Neon from Intel, PyTorch from Facebook, PaddlePaddle from Baidu, MXNet, and CNTK from Microsoft. The problem is that you create a model in one framework, but if you want to exchange it with any other framework it takes a lot of time. So Amazon, Facebook, and Microsoft started the ONNX project, which does the interchange of formats: if you create and train your model in PyTorch and want to transfer the saved model file so that it is compatible with TensorFlow, you can use the ONNX open-source toolkit (a minimal export sketch follows at the end of this section). With this contribution from Facebook, Amazon, and Microsoft you can really streamline the machine-learning workload, so it takes less time to spin up an instance and less time to shut it down.

For data visualization there are Plotly, Seaborn, and other tools. You can schedule everything as cron jobs with Ansible or Airflow in open source, and for GPU work you can use CUDA or PyCUDA.

The machine learning process is: collect and prepare the training data; choose and optimize your machine learning model; set up and manage the environment for training; train and improve the model; deploy the model to production; and scale and manage the production environment. At a high level you can either use APIs from AWS, Azure, or Google, or set up the environment yourself. These are some of the algorithms we use; XGBoost, for example, also supports Apache Spark as a backend. As of now, TensorFlow and PyTorch are not as concurrent and parallelized as Apache Spark and similar frameworks. In scikit-learn your data has to fit on a single node; you cannot distribute the data across different nodes. For that, the Apache Spark community is working on new packages called TensorFrames and GraphFrames, so in the upcoming 2.4 release you should be able to train and deploy models with Apache Spark.

This is one of the use cases: not only personalizing and providing recommendations using text data, but visual search and visual recommendation. In this case I provided my own image; it understood the pattern, the color, and my gender, which is why you can see everything shown is for males, and it matched the pattern as well. It is not very clear here, but the blazer matched a blazer from one of the shopping websites. As you know, right now about 80% of traffic on the internet is visual data. So we can think: given any user, can we predict whether that user will click on this ad or not? Or, if we provide a wireframe, can it generate the screen? If we provide a screenshot, can we generate the source code in the form of HTML? Yes, that is possible with one of the open-source libraries from Airbnb: you just draw a wireframe on pen and paper and it will generate the screen for you, because this only needs to map input to response; if you provide the sketch as input and the components as responses, it will map them and generate the screen for you in real time at inference.

NLP is all about extracting structure from unstructured data, and its applications include entity recognition, key-phrase extraction, language detection, sentiment analysis, and topic modeling. Those are the techniques and the business applications that you can use.
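Here is the export sketch referenced above: a minimal, hypothetical example of saving a PyTorch model in the ONNX format so another framework can pick it up. The tiny two-layer network and its input shape are placeholders, not the talk's actual model.

```python
import torch
import torch.nn as nn

# Minimal sketch: export a (placeholder) trained PyTorch model to ONNX so a
# different framework, e.g. TensorFlow via the onnx-tf converter, can load it.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()                                  # inference mode before exporting

dummy_input = torch.randn(1, 10)              # one example with the input shape
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])
```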
So that's it from us. Any questions? Actually, there is something that is not on these slides that I would like to talk about. What happens if you train the model in TensorFlow or PyTorch and you want to deploy it to mobile? There are two tools. One is TensorFlow Lite, which uses the .tflite extension: you provide a .pb or .h5 pre-trained TensorFlow model and pass it to TensorFlow Lite; it will compress the matrices and vectors inside the pre-trained model and give it back to you as a lightweight .tflite file. Then you can use Android and do inference with your phone camera through the Android APIs. On the other side, for iOS, there is the Core ML library: you can use Keras, import coremltools, and pass the model with its fully connected layers to Core ML; it will generate the .mlmodel file for you, and then you will be able to use that on your iOS device. So if you train your model on a GPU or CPU and want to showcase it in any product or mobile application, you can use Core ML for iOS, and for Android you can use TensorFlow Lite; both are quite flexible and compatible with the camera APIs on iOS and Android, so you can do ingestion with the camera and supply data to your model. There are also pre-trained models available for Core ML, like YOLO, facial recognition, or gender detection, that you can utilize. (A minimal sketch of both conversion paths appears at the very end of this transcript.) Yeah, that's it from my side. Any questions?

Question from the audience: In terms of the full life cycle of such a project, going from collecting data to training and then operationalizing, what is the pain point for you right now?

The pain point, I believe, is the pre-processing of data. All of these models require data in some format; if you do not process it and you feed noise to the model, it will not automatically understand that it is noise. So you need to process the data and set a format, otherwise the model will be biased. That is the only pain point; otherwise TensorFlow, PyTorch, and Caffe2 have very good high-level APIs. Right now they mostly use Python, and Python is not type-safe like Scala, so you do not get compile-time safety: you train your model for two days, and if you then get an error, it feels a little painful. That is why my point was that the Scala Dataset API from Spark 2.3 is type-safe, in the sense that it checks at compile time; if you make a spelling mistake in a select, it gives you an error on the spot, so you do not wait eight hours running on the cluster only to hit a failure, which is really painful, because debugging in distributed computing is really cumbersome. When I was working with Akka actors and actor systems it was really cumbersome, and Spark actually used Akka actors; an actor is a lightweight mechanism for passing messages, internal and external, rather than using threads, and you can utilize the Akka library, and because it is the Scala way it works well. But because of the research momentum in academia, Python is by far the most popular for deep learning and machine learning. Any other questions? Thank you so much.
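As the appendix promised above, here is a minimal sketch of both mobile conversion paths. The file names (model.h5, model.tflite, model.mlmodel) are placeholders, and the Core ML call assumes the coremltools unified converter (version 4 or later) rather than anything shown in the talk.

```python
import tensorflow as tf
import coremltools as ct  # assumes coremltools >= 4, the unified converter

# Load a Keras model trained on a workstation; "model.h5" is a placeholder.
keras_model = tf.keras.models.load_model("model.h5")

# Android path: TensorFlow Lite shrinks the graph into a .tflite file.
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_bytes = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)

# iOS path: coremltools converts the same Keras model to a Core ML .mlmodel.
mlmodel = ct.convert(keras_model)
mlmodel.save("model.mlmodel")
```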