Hello, everyone. My name is Minko Gechev. I'm working on Angular at Google. Today, David and I will share with you how to build future-proof web applications with Angular and TensorFlow.js.

Google has been pioneering the machine learning space with different products: for example, voice recognition in Google Assistant, image recognition in Google Photos, and Smart Compose in Gmail. To take advantage of machine learning with JavaScript, we've been using TensorFlow.js. TensorFlow.js is an open source project which allows anyone to run existing ML models, retrain them, or develop custom models for more specific use cases. One of the strengths of TensorFlow.js is that it sets the foundation for many successful libraries and tools. A few popular examples are Danfo.js, face-api.js, and many others.

Using TensorFlow.js in the browser usually involves installing it or referencing it with script tags, then configuring it, and finally querying the model. You first have to install TensorFlow.js, the model, and the backend that will execute this model. After that, you can configure the model and integrate it with your existing front-end tech stack, for example with the framework that you're using. Finally, you'd query the model, potentially multiple times, getting a stream of results.

Angular is a platform which comes with batteries included. Let us see how, using the framework, the Angular CLI, and RxJS, we can make this process even simpler. The Angular CLI can automatically install the packages you need and integrate them with your application. Angular has good integration with RxJS, which knows how to handle a stream of results and can help you query the model. The Angular framework offers an expressive, optimizable template syntax which can hide all the complexity of interacting with machine learning.

And now let us look at an example. Let's say that we're building a chat application where we can send messages to other users. When we type something into the chat box, the app gives us a heads-up if the message could be considered offensive by the people we're chatting with. To do this, when the user starts typing something into the text box, we need to check whether their message is toxic or not. A naive approach would be to take the words in the sentence and check if any of them belongs to a dictionary of toxic words. This, however, can return some false positives. For example, calling a person horrible is toxic, but expressing our opinion that the weather is horrible will probably not hurt anyone's feelings and should be considered non-toxic. This means that our basic heuristic is not going to work well in the general case.

Luckily, TensorFlow.js offers a toxicity model, which uses natural language processing to categorize a sentence as toxic based on different criteria. You can find this model in the npm package registry. To use it in our application, we'll first have to install it. After that, we'll have to import it together with TensorFlow.js in the application. Next, we'll have to configure the model and load it. After that, we'll have to add a change event listener to the chat box. And finally, we'll have to query the model on each change of the input, updating the label manually to indicate whether the message is toxic or not. Handling edge cases and race conditions would make this naive code even more complicated and tangled. Although TensorFlow.js is fast and well optimized, running the model on a large input could also block the main thread and cause frame drops.
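To make that manual wiring concrete, here is a minimal sketch, assuming the @tensorflow-models/toxicity package and a plain text input; the element IDs are made up for illustration.

```typescript
import '@tensorflow/tfjs';
import * as toxicity from '@tensorflow-models/toxicity';

const input = document.querySelector<HTMLInputElement>('#message')!;
const warning = document.querySelector<HTMLElement>('#toxicity-warning')!;

// Load the model once up front; 0.9 is the confidence threshold for a match,
// and we only ask for the overall 'toxicity' label here.
const modelPromise = toxicity.load(0.9, ['toxicity']);

input.addEventListener('input', async () => {
  const model = await modelPromise;
  // classify() returns one prediction per requested label.
  const predictions = await model.classify([input.value]);
  const toxic = predictions.some(p => p.results[0].match === true);
  warning.textContent = toxic
    ? 'This message could be considered offensive.'
    : '';
  // Note: for fast typists, responses can arrive out of order. Handling that
  // race condition (plus empty input, errors, and so on) is exactly the kind
  // of bookkeeping that makes this manual approach grow tangled.
});
```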
Let us see how we can implement this with Angular and its TensorFlow.js bindings. All we need to do to install and configure the app is run ng add ngx-tfjs. This will automatically install all the required modules and integrate TensorFlow.js with our application following best practices. It will also export a pipe that allows us to use the toxicity model and query it outside of the main thread, so that we don't drop frames and can provide a 60-frames-per-second experience.

Now, let us see how to implement the entire app by using Angular. First, to show all the messages, we'll use the message list component. After that, we'll provide an input, binding it to a text property. This way, when the input changes, we update the value of the property. Now, we just need to categorize the text as toxic or not. We'll start by showing the value of the text property on the screen. We'll pipe it to the toxicity model and, right after that, unwrap the prediction by using the async pipe. Finally, we'll just use the result to give the user a visual heads-up if they need to give their message a second thought before sending it. And here is the final result: saying that the weather is great is not considered toxic, but calling a person horrible is.
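For reference, here is a rough sketch of the chat component just described. The `toxicity` pipe and the message list component are assumptions here; the actual names exported by the schematic may differ. The pipe is assumed to emit a boolean observable for the latest value of `text`, which the async pipe then unwraps.

```typescript
import { Component } from '@angular/core';

// Sketch only: `toxicity` is the pipe assumed to be provided by the ng add
// schematic; <app-message-list> simply renders the messages already sent.
@Component({
  selector: 'app-chat',
  template: `
    <app-message-list [messages]="messages"></app-message-list>

    <input #box (input)="text = box.value" placeholder="Type a message..." />

    <p class="warning" *ngIf="text | toxicity | async">
      This message could be considered offensive.
    </p>
  `,
})
export class ChatComponent {
  text = '';
  messages: string[] = [];
}
```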
There are many applications of machine learning in web development, ranging from natural language processing and image processing to recommender systems and beyond. In this example, we looked at one sample use case. Now, David will share a sample case study on how we can improve the performance and user experience in e-commerce by using predictive prefetching.

Thanks, Minko. Now, let's take a look at the e-commerce use case. E-commerce sites provide many opportunities to leverage ML to improve the user experience. We will make our example concrete by using data from the Google merchandise store. In this case, we're going to focus on how we can use machine learning to drive down user latency. Latency is highly important to e-commerce sites because it has been directly linked to user satisfaction and can even increase the rate of conversion. For example, Newegg shared that reducing latency through their prefetching solution improved their conversions by 50% by making transitions four times as fast.

Here is an example of how a store would look without any latency optimization. As you can see, whenever the user clicks through to a different page, they suffer an irritating delay while the images load. This harms interactivity and prevents users from having a smooth experience. So how do we prefetch the next page before the user clicks on it? This is inherently challenging because every site has different usage patterns; they have different layouts and varying content. As a result, it is nontrivial to predict the next page that the user will visit. At the same time, the devices that are likely to benefit most from prefetching are bandwidth-constrained, and we have to be careful not to overwhelm them by prefetching too many pages. So the trivial solution of prefetching the whole site won't work.

We are going to address this problem by creating a custom ML model for every site that learns its traffic patterns to provide tailored, high-quality predictions. The model will be trained on site data from Google Analytics and run by Angular. The steps look as follows. We will export Google Analytics data to BigQuery, which is a large-scale data store. We will then use Dataflow, our managed service for data processing, to convert the data into features that we wish to train on. Next, we will run TFX, which is our machine learning pipeline, to create a TFJS model. Finally, we will integrate the TFJS model with Angular. To make all of this happen, we simply configure Google Analytics to export to BigQuery and use an AI Platform notebook to work with Dataflow and TFX.

Now, let's talk about how we can use Dataflow to obtain the features we wish to train on. This code highlights how we obtain the training features for every user session using Dataflow. We take as input a sorted list of Google Analytics events. We traverse through the list of events, taking both the index and the page path associated with the current event as input features. We then take the page path associated with the following event as the label. Please note that every site is different, and any other features that would be useful to the model can be added simply by extending the features dictionary (a small sketch of this transformation appears a bit further below). There are other steps we need to take in Dataflow, such as reading data from BigQuery or storing training data, but these should stay consistent across pipelines, and we've provided examples to help you get started.

TFX is our machine learning pipeline, and we're using it to take the training data obtained from Dataflow and train and evaluate machine learning models. In this case, we will use TFX to create a TFJS model. As you can see from the code here, to run TFX you will have to modify two functions: the preprocessing function and the run function. The preprocessing function is used to specify the operations that you want to perform on the data before it is used by the model. This can include actions such as normalization. The run function is where you define and train your TensorFlow model and convert it to TFJS. We provide examples of how to use these functions to create a TFJS model for prefetching the next page. In addition, TFX provides a rich set of examples for creating models for a wide range of goals.

Now, why use TFX? There are many benefits to using TFX, including end-to-end integration. TFX can directly import data from BigQuery, Cloud Storage, or many other locations. It automatically inspects the training data for abnormalities that can harm model quality. It then trains the model at scale, using GPUs and TPUs if needed. Finally, it evaluates and validates the models to ensure they are high quality before they are made available to the user. TFX natively supports TFJS, which means that the exported TFJS models are ready to use with Angular. Finally, TFX is a general-purpose pipeline. It allows you to design your own model architecture or use one of the latest state-of-the-art models, whether it be for NLP, image processing, or many other domains. And these models can be deployed in many places, including servers, mobile devices, or browsers.

Now, why use TFJS? TFJS works in any JavaScript environment, whether it be browsers, server-side with Node.js, mobile devices, or IoT. TFJS is optimized for client-side inference. It has small, lightweight bundle sizes, and it also supports model quantization, which can reduce model size by up to 4x. There are many benefits to client-side inference: it both preserves user privacy and saves server cost. Finally, TFJS is fast. It can use client-side acceleration, such as GPUs or WebAssembly, to speed up inference, and it is specifically optimized for a variety of devices, including iOS and Android.
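The Dataflow step described earlier is written in Python in the real pipeline; purely to illustrate the shape of that transformation, here is the same idea sketched in TypeScript. Each session's sorted page views become feature/label pairs, with the position in the session and the current page path as features and the next page path as the label.

```typescript
// Illustration only: the actual pipeline does this with Dataflow in Python.
interface AnalyticsEvent {
  pagePath: string; // e.g. '/home', '/shop/apparel'
}

interface TrainingExample {
  features: { sessionIndex: number; pagePath: string };
  label: string; // page path of the next event in the session
}

// Turn one session's sorted events into training examples.
function toTrainingExamples(events: AnalyticsEvent[]): TrainingExample[] {
  const examples: TrainingExample[] = [];
  for (let i = 0; i < events.length - 1; i++) {
    examples.push({
      // Any other useful signals could be added here, mirroring how the
      // Dataflow version extends its features dictionary.
      features: { sessionIndex: i, pagePath: events[i].pagePath },
      label: events[i + 1].pagePath,
    });
  }
  return examples;
}
```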
Now, let's talk about how we would integrate our TFJS model with Angular. Our Angular and TFJS integration can lazily load the model to avoid impacting load-time performance, which would harm the user experience. Similarly, it can run the model off the main thread, on a worker thread, to ensure that frames are not dropped, negatively affecting the user. Finally, it can use the model to adaptively prefetch pages, striking the right balance based on the user's connection (a small sketch of this follows at the end).

Our approach has many benefits. We are able to build a custom machine learning model that is tailored to every site by leveraging existing automation that connects Google Analytics, BigQuery, Dataflow, TFX, TFJS, and Angular. Our approach is turnkey: we can leverage the services and data that are readily available to provide a high-quality solution. Finally, this approach opens the door to creating a personalized experience for every user. We can use custom machine learning models to provide an experience that is tailored to every user's individual needs. You can think of this predictive experience as a progressive enhancement over your existing e-commerce platform.

Thank you. We hope that you will visit the examples to learn more.
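To make that integration concrete, here is a minimal sketch, assuming a hypothetical model URL: it lazily loads the converted TFJS model and uses the Network Information API, where available, to decide how many of the predicted pages to prefetch. Running inference in a worker thread and encoding page paths for the model are left out, since they depend on the trained model and the site.

```typescript
import type { GraphModel } from '@tensorflow/tfjs';

let model: GraphModel | null = null;

// Lazily load TensorFlow.js and the converted model so that neither one
// affects the initial load-time performance of the application.
async function getModel(): Promise<GraphModel> {
  if (!model) {
    const tf = await import('@tensorflow/tfjs');
    // Hypothetical URL where the TFX-exported TFJS model is hosted.
    model = await tf.loadGraphModel('/assets/prefetch/model.json');
  }
  return model;
}

// Decide how many predicted pages to prefetch based on connection quality.
function prefetchBudget(): number {
  const connection = (navigator as any).connection;
  if (!connection) return 2;          // API unavailable: conservative default
  if (connection.saveData) return 0;  // respect Data Saver
  switch (connection.effectiveType) {
    case '4g': return 4;
    case '3g': return 2;
    default:   return 0;              // '2g' / 'slow-2g': skip prefetching
  }
}

// Prefetch a page with a <link rel="prefetch"> hint.
function prefetch(url: string): void {
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  document.head.appendChild(link);
}

// Usage sketch: rank candidate pages with the model (the encoding of page
// paths depends on how the model was trained), then prefetch the top ones.
function prefetchLikelyPages(rankedPages: string[]): void {
  rankedPages.slice(0, prefetchBudget()).forEach(prefetch);
}
```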