Hi, my name is Jeffrey. I'm a product manager at Google working on AMP. And I'm joined by Sandeep Gupta, a product manager on TensorFlow.js at Google. Our keynote is divided into two sections. First, we'll take a look at how page experience and AMP work together. And then Sandeep will walk us through using TensorFlow.js to harness the power of machine learning to build novel experiences for the web. Speaking of the web, the web is the most truly open and distributed system we have. There are a few administrative things, such as getting a domain name or hosting, that you need to worry about. But otherwise, nobody stands between you and what you want to tell the world. From the early days of the internet, web developers have used this open nature to build compelling web experiences, sometimes for profit, and sometimes just for fun. As user expectations have evolved, we as web developers are tasked with a great responsibility: to build truly unique web experiences that capture people's imaginations, delight users, and ultimately persuade them to form an authentic connection with you. This might be in the form of consuming content you produce or buying a service you provide. A useful way to think about user experience is through the four pillars of UX. They are: loading, which signifies how fast or slow the resources of the page are downloaded and displayed in the user's browser; user annoyance, an important pillar that quantifies web page behaviors that get in the way of the user accomplishing a task; security and privacy, a critical aspect of how safe, secure, and privacy-friendly a web page is; and accessibility. The World Health Organization's disability health fact sheet finds that over a billion people, about 15% of the world's population, have some form of disability. We have a huge responsibility as web developers to build web pages that are inclusive to all users.
This framework lines up nicely with the Page Experience ranking update that Google is launching mid-June. Page Experience is a set of signals that measure how users perceive the experience of interacting with a web page, going beyond just its informational value. The Page Experience signal breaks apart into the four aforementioned pillars as follows. For loading, we have largest contentful paint and first input delay. Largest contentful paint, LCP for short, is a metric that reports the render time of the largest image or text block visible within the viewport, relative to when the page first started loading. First input delay, FID for short, measures the time from when a user first interacts with a page (that is, when they click a link, tap a button, or use a custom JavaScript-powered control) to the time when the browser is actually able to process event handlers in response to that interaction. For the user annoyance pillar, we have cumulative layout shift, CLS for short. CLS measures the sum total of all the individual layout shift scores for every unexpected layout shift that occurs on a page. A layout shift occurs any time a visible element changes its position from one rendered frame to the next. No intrusive interstitials is an existing search ranking policy, and an associated signal used in Search detects the presence and use of interstitials that are user-hostile. Such interstitials are often used to trick users into doing something they do not want to do, preventing them from reading or interacting with the page they landed on from Google Search. There are also a lot of great uses of interstitials, such as the ones required by law (GDPR, for instance) or an interstitial that provides updated business hours during the coronavirus pandemic. These are not affected by the signal. For security and privacy, we have the HTTPS protocol signal.
Users should be able to confidently browse the internet without having to worry about man-in-the-middle attacks or improper impersonations. And lastly, the mobile-friendliness signal covers the accessibility pillar, measuring how effective pages are on small screens, such as those of mobile phones. The first three metrics, largest contentful paint, first input delay, and cumulative layout shift, are the Core Web Vitals: a set of metrics that apply to all web pages, should be measured by all site owners, and will be surfaced across all the Google tools. Each of the Core Web Vitals represents a distinct facet of user experience, is measurable in the field, and reflects the real-world experience of a critical user-centric outcome. They can be measured with JavaScript using standard web APIs in your browser, which provides an excellent product-agnostic path for instrumentation. The Core Web Vitals are not just a set of metrics, but also a robust set of threshold guidance that maps to user expectations. The Chrome team has done extensive research and come up with guidance for what it means to be performing well, poorly, or somewhere in between. Let's take the example of LCP, where we measure the loading performance of the most meaningful content of the page. Values less than 2.5 seconds mean that the page is delivering a good page experience. Anything greater than 4 seconds means that the page is performing very poorly. Similarly, for first input delay, 100 milliseconds is the maximum delay users should encounter between their initial input and its response. Anything greater than 300 milliseconds starts to feel like the page is frozen and leads to a bad user experience. Cumulative layout shift is a unitless metric. Anything less than 0.1 is considered good, and anything greater than 0.25 is considered poor.
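To make these thresholds concrete, here is a minimal sketch in JavaScript. The `rateVital` helper and its threshold table are hypothetical names invented for this illustration, with the values taken from the guidance above; the `PerformanceObserver` call uses the standard web API mentioned earlier and only runs in a browser.

```javascript
// Threshold guidance for the Core Web Vitals described above.
// VITAL_THRESHOLDS and rateVital are hypothetical names for illustration.
const VITAL_THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // milliseconds
  FID: { good: 100,  poor: 300  }, // milliseconds
  CLS: { good: 0.1,  poor: 0.25 }, // unitless
};

// Classify a measured value as good, needs improvement, or poor.
function rateVital(metric, value) {
  const t = VITAL_THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}

// In a browser, LCP can be observed via the standard PerformanceObserver API.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    // The last reported entry is the current LCP candidate.
    const last = entries[entries.length - 1];
    console.log('LCP:', last.startTime, rateVital('LCP', last.startTime));
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

Libraries such as the `web-vitals` npm package wrap these observers for you; the sketch above only shows the underlying idea.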
AMP puts developers on a path to success not just by helping them create great pages, but also by helping them maintain that great page experience over a longer period of time. This means the AMP runtime provides developers with the constraints needed to create great-performing pages. And the use of the AMP cache further improves this, because it allows your pages to be experienced by your users near-instantly. However, like many other frameworks, AMP can't implement every web development best practice into the runtime. This is why, in the run-up to the page experience ranking launch, we've been encouraging developers to take a look at their AMP pages and make sure those pages are performing very well on their own domain. So the first step in implementing a great page experience for your site starts with the AMP core. Luckily, the AMP core that the AMP project builds, which is comprised of the extensions and the runtime, has a robust set of constraints implemented to make sure that AMP performs very well. Over the past several months, we've invested heavily in making sure the core is super lightweight and extremely fast. If you're using the AMP framework, you are taking advantage of this automatically. The next building block is how the page is served. Core Web Vitals, as I mentioned, are determined from real-world outcomes; the signal powering the page experience signal comes from real users interacting with the page. In the case of AMP, pages can be served either from the publisher's domain or from an AMP cache, depending on how users encounter the content. Many sites will see a significant portion of their AMP visits actually happening on their own domain. So to get the strongest possible user experience on AMP pages, we encourage you to take a look at how your pages are performing. The best way to get started is to use AMP Optimizer.
This is the set of tools that we use on the AMP cache, and we believe it provides great optimizations for your AMP pages. You can learn more about AMP Optimizer at the link you see on the screen right now. And that's how the AMP project thinks about page experience. We want to make sure that AMP allows you to create a great page experience, and at the same time continues to improve page experience without you having to do a lot of work. I'll now turn it over to Sandeep to talk more about TensorFlow.js. Thank you, Jeffrey. My name is Sandeep Gupta, and I'm the product manager for TensorFlow.js at Google. So Jeffrey talked about some key concepts in UX that are driving next-generation web experiences. I want to talk about another technology which has the ability to transform web experiences, and that is machine learning. Machine learning touches our lives daily with applications across many fields, such as healthcare, education, energy, transportation, sustainability, and accessibility. Thanks to publicly available large data sets, more powerful computing at our fingertips, and research into new methods, we are seeing improvements in all kinds of products and services powered by machine learning. You see some examples of that here on this slide, such as machine learning being used to power platforms for education in classrooms, helping detect diseases or predict natural disasters, and giving people new ways of communicating and interacting. And we're beginning to see how machine learning can help improve web experiences as well, whether it is to interactively experience products, such as this virtual makeup try-on web app by L'Oreal, or accessibility tools, such as the one shown here in the middle, where a person is playing a keyboard by moving their head, or apps that can detect body pose and use that in many useful ways. Machine learning has a lot of uses for the web. Let's take a look at some specific examples of how web developers have been using it.
InSpace is a virtual learning and collaboration platform, and they use real-time toxicity filters in their web conferencing app. When a user types something bad, it's flagged before it's even sent to the server for processing. It alerts the user that they may want to reconsider what they're about to send, creating a more pleasant and safer conversational experience on the platform. Another example is from IncludeHealth, a musculoskeletal care delivery tech company. They're using body pose estimation models to deliver guided physiotherapy at scale. With many people unable to leave their homes or travel these days, this technology allows for remote diagnosis and treatment from the comfort of their own home, using just a web browser and a standard webcam that almost everyone has easy access to. Here's an example from creative design. You can bring a character to life by using pre-made machine learning models that can estimate body pose and facial gestures. This pose animator tool was created by the partner innovation team at Google, and it allows you to draw any SVG character you like and then use your body to control it in real-time, giving animators a motion capture solution to drive 2D character animation that anyone can use with just a webcam. And yes, all of this is running entirely in a web browser. Here's a core enterprise use case. Page load time is a very important factor for user experience on a website, and studies have shown that it can dramatically impact page views, time spent on the site, and even conversions and clicks on an e-commerce site. Machine learning can help predict a user's navigation patterns on your site, and by selectively pre-fetching the page assets, it can significantly improve page load time. Here you can see that the site on the right, with predictive pre-fetching, loads twice as fast as the unoptimized one on the left. So hopefully I've convinced you that machine learning can be a very powerful and useful tool for you.
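As a rough illustration of the predictive pre-fetching idea, here is a sketch that picks the most likely next page from some navigation data and hints it to the browser with a `<link rel="prefetch">` element. The `navCounts` data and the `predictNextPage` and `prefetch` helpers are invented for this example; a real system (such as one built with Guess.js) would use a model trained on analytics data rather than raw counts.

```javascript
// Hypothetical navigation data: how often users went from one page to another.
// In practice this would come from your site's analytics.
const navCounts = {
  '/home':     { '/products': 120, '/about': 10 },
  '/products': { '/checkout': 80,  '/home': 30 },
};

// Return the most likely next page from the current one, or null if unknown.
function predictNextPage(counts, currentPage) {
  const next = counts[currentPage];
  if (!next) return null;
  // Sort candidate pages by visit count, descending, and take the top one.
  return Object.entries(next).sort((a, b) => b[1] - a[1])[0][0];
}

// In a browser, hint the predicted page with a <link rel="prefetch"> element.
function prefetch(url) {
  if (typeof document === 'undefined' || !url) return;
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  document.head.appendChild(link);
}

prefetch(predictNextPage(navCounts, '/home')); // would prefetch '/products'
```

The browser fetches prefetched assets at low priority during idle time, so the predicted page loads from cache when the user actually navigates to it.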
So now you may be wondering: is it easy to use? Do I first need to learn Python? Well, the answer is no, or I wouldn't be here speaking at OpenJS World. TensorFlow.js is a library for machine learning in JavaScript that can run in the browser on the client side, or on servers with Node.js. It provides an easy JavaScript API through which you can use machine learning with just a few lines of code. By running in the browser, it enables low-latency execution and lower server costs for your applications, and by keeping user data on the client side, it enables privacy-sensitive applications. Lastly, it is GPU-accelerated, so you get great performance. With TensorFlow.js, you can write your application code once and use it anywhere. And running ML on the web has some unique advantages over running it in native apps. For example, there is zero install needed. You can reach an audience of billions of users instantly, simply by sharing a URL, with no complex environment setup for your users. Since JavaScript is a very versatile language that can run on a wide variety of platforms, there's a large list of environments where TensorFlow.js can be used. You can use it client side in all the popular web browsers, as we mentioned, or on the server side via Node.js, taking advantage of the huge npm package ecosystem. You can run it natively on mobile platforms via React Native, Angular, or PWAs, and even on IoT devices, such as the Raspberry Pi, via Node. With TensorFlow.js, you can run existing pre-trained machine learning models, customize models for your use case by retraining them on your own data, or write your own models completely from scratch, just like you may already be doing in Python, but now in JavaScript. We have released many pre-made models which are ready to use out of the box with an easy high-level API. These models range across many categories, such as vision, body, text, and sound.
You can check them out at tensorflow.org/js/models, where you can also find demos and documentation, and we are constantly adding to this collection. So now let's take a quick look at what this looks like in practice. We will look at a text example built with our question and answer model. Let's say you want to build an app to help find the answer to a question in any piece of text that you present to it. Here we have a Chrome extension that does this on the text of any web page. Just type your question, and the model provides an answer and scrolls to the part of the page that most likely answers the question. Examples like this are now possible with the new BERT-based question and answer model in TensorFlow.js. And to do this, you just need a few lines of code. So let's take a look. First, we import the TensorFlow.js library and the pre-made Q&A model that we want to use. These are conveniently loaded from our hosted scripts, so you don't have to install anything. Next, we define the text we wish to search. This could be some text on a website; here we are just using a simple string. We then also define the question the user wants to ask, which could come from some input query in a real app. Now we load the question and answer model itself with the qna.load method. As this may take a few seconds, it's performed as an asynchronous operation, so we are using the then method here to wait for it to be ready. Once the model is available, our function will be called with the loaded model passed in. Finally, we can call the model.findAnswers method. We pass to this function the question we want to answer, along with our search text. When this completes, it returns an answers object from which we can get the most likely answer in the given passage of text. In this example, the model predicted cats as the answer to our question, which is correct given the text we had to search.
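The steps just described can be sketched as follows. The script URLs are the library's hosted builds; the passage and question strings are made up for illustration, and `bestAnswer` is a small hypothetical helper for picking the top-scoring result.

```javascript
// In an HTML page, load the hosted scripts first (no install needed):
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/qna"></script>

// Hypothetical helper: pick the highest-scoring answer from the returned array.
function bestAnswer(answers) {
  if (!answers || answers.length === 0) return null;
  return answers.reduce((a, b) => (b.score > a.score ? b : a)).text;
}

// The text to search (could come from the page) and the user's question.
const passage = 'Cats are small, furry mammals that are often kept as pets.';
const question = 'What animals are often kept as pets?';

if (typeof qna !== 'undefined') {
  // Loading the BERT-based model is asynchronous, so we wait with .then().
  qna.load().then((model) => {
    // findAnswers returns candidate answers, each with text, score,
    // and the start/end indices of the answer within the passage.
    model.findAnswers(question, passage).then((answers) => {
      console.log(bestAnswer(answers));
    });
  });
}
```

With a passage like the one above, the model would be expected to answer "cats", matching the walkthrough.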
And that's all there is to it, pretty simple. So give it a try. The exact same workflow works for any of our pre-trained models. Here are some of our most popular models. COCO-SSD is an object detection model which can identify 90 different object classes and show you the bounding boxes in the image where the detected objects are. MediaPipe FaceMesh is a high-resolution face tracking model which can recognize 468 key points on the face. It's only three megabytes in size, and it provides real-time face analysis for detecting facial gestures, lip movements, and eye movements. This has many applications in retail, entertainment, and accessibility. For human pose estimation, we have a powerful and easy-to-use pose detection API which supports three different models for a variety of performance and accuracy needs. We recently added two powerful new models to this API: MoveNet, an ultra-fast and accurate model that tracks 17 key points, optimized for diverse poses and fast actions; and MediaPipe's BlazePose, which gives 33 key points, and this extra granularity may allow better tracking for certain applications. So you can do a lot with the pre-trained models I've shown so far, but sometimes you need to train a custom model for your use case. Google Cloud's AutoML service lets you train powerful custom models on large amounts of data in the cloud, no machine learning expertise necessary. AutoML takes care of creating the best model for your training data and shows you how your model is performing on various evaluation criteria. You can also choose whether you want a model with higher accuracy or faster prediction, or a trade-off between the two. Once your model has trained, you can either create a cloud endpoint or conveniently export the model in TensorFlow.js format and deploy it in your web application. Another way to train a model easily and interactively in the browser is by using Teachable Machine.
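As a quick sketch of the same workflow with COCO-SSD, assuming the hosted `@tensorflow-models/coco-ssd` script is loaded alongside TensorFlow.js and an `<img id="my-image">` element exists on the page (`confidentDetections` is a hypothetical helper invented here):

```javascript
// Hypothetical helper: keep only detections above a confidence threshold
// and return their class labels.
function confidentDetections(predictions, minScore = 0.5) {
  return predictions.filter((p) => p.score >= minScore).map((p) => p.class);
}

if (typeof cocoSsd !== 'undefined' && typeof document !== 'undefined') {
  // Assumed to exist on the page for this example.
  const img = document.getElementById('my-image');
  cocoSsd.load().then((model) =>
    model.detect(img).then((predictions) => {
      // Each prediction has the shape:
      //   { bbox: [x, y, width, height], class: 'cat', score: 0.97 }
      console.log(confidentDetections(predictions));
    })
  );
}
```

The same load-then-predict pattern applies to the other pre-made models, with only the model-specific method (detect, estimatePoses, findAnswers, and so on) changing.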
This lets you train some common ML models directly in the browser and then export the trained model for your use. I highly recommend playing around with this tool. So this was a quick introduction to how you can use machine learning to give your web app superpowers. For more resources and getting-started material, visit our website shown here and join the community discussion group at discuss.tensorflow.org. We are curious to see what you build. Check out other users' creations and share your own work with the hashtag #MadeWithTFJS on Twitter. Thank you so much for listening, and enjoy the rest of OpenJS World.