Hello, everyone. I'm Ying, product lead of AI and industry solutions at Google Cloud. Today, I will be talking about how to build enterprise AI products. I will share a basic design framework to help you figure out the required components when you design an enterprise AI product system. First, let me do a quick self-introduction. I've been working at Google for about eight years. I started as a software engineer on the Google Ads and Payments teams, and then transferred to become a product manager on the Cloud AI team about three years ago. Since then, I've been focusing on building AI products and solutions that help enterprise customers transform their business. A disclaimer at the beginning: this talk summarizes my personal learnings about product management. It does not represent Google or any of my past employers. With that, let's get started with today's presentation. The adoption of enterprise AI products has been growing significantly in the past four years. We've seen about three times growth in the enterprise AI product market, and it is predicted that by 2026 the total addressable market will reach about $53 billion. The contribution of digital AI workers grew by 50% in about two years. All of these numbers show that enterprise AI products will transform traditional work processes. On the other hand, we see that the technology is ready. AI is becoming more and more intelligent. For example, Google's AlphaGo won a three-match series against the world's best Go player, and recently we've seen that GPT-3 can create poems and recipes that are almost indistinguishable from human writing in certain situations. But it is still not an easy task for many enterprise customers to really transform their work processes, for various reasons. For example, most of them have a legacy work environment with no or limited IT infrastructure set up. It usually costs them a lot to purchase new devices or systems to get ready for the IT transformation. 
For most of their work processes, they may take a manual approach that is still labor intensive these days, and that means there can be a lot of dependencies to manage when they try to improve automation. Besides that, the industry ecosystem is not ready either. Even if some companies want to transform their work processes, they find it hard to find the right downstream or upstream processing units and products that can support their new way of production. So overall, designing an automated or AI-empowered system for enterprise use is not easy. The machine learning code itself is really a small piece; there are many other pieces required to build a successful system: for example, data collection, data verification, machine resource management, and process management tools. So when thinking about designing a new AI product for enterprise customers, it's helpful to think about the three different perspectives shown on the slide: analyzing the requirements, figuring out the product design, and then fleshing out the go-to-market strategy. When trying to understand the users' requirements, we can start by understanding the business process: thinking about who the user personas are that we want to support when building the new product, what their production environment is, and whether they have enough data to get started using machine learning products. From there, we can start designing the major pieces of the machine learning system. That usually includes helping users manage the data, train the model, evaluate the model, and think about how they can deploy the model and scale up adoption. After the product design is done, we want to flesh out the go-to-market strategy, understanding who the partners can be, or whether we want to take a direct-to-customer approach to make the product available to our customers. 
So with all these key elements in mind, let's take a look at each of these steps to learn how we can put together an AI product system by following this framework. Let's start with understanding the business workflow. When getting started with a customer and trying to understand their business process, it's helpful to figure out the following questions. For example: How does the business process generate value for the team? What are the different steps required in this process? What are the preconditions and dependencies for each of these steps? And what challenges may the users face in each of these steps? For each of the different steps in a business process, there might be different roles, and usually they have different responsibilities. That means the way they evaluate the outcome of their product or their work process can be different. So understanding their challenges at work and how they want to change the process to do a better job is a key question we need to figure out for each of the user groups. After that, we also want to learn about their production environment. Some of them might already be using tools that they don't want to get rid of even if they change their software system. And some may want to try new AI solutions, but their job functions may need to change as well when they get started using those solutions. That means for their overall work environment, they may need to learn to manage different upstream and downstream systems. They may need to purchase new devices, like servers or robotic systems, to improve automation. And there might be limitations because of hardware, budget, and various other reasons. All of these questions may limit the adoption of AI products in their real work environment. 
After understanding their production environment, we want to help the users figure out what data is available to train a machine learning model that can meet their expectations for automation. This is a really important question to figure out before we start designing the product, because the amount of data really impacts the performance of a machine learning product. We want to help users understand whether there are ways to collect the data more efficiently, or whether some of these use cases are not a good fit for a machine learning product because of a lack of data. So in all these different situations, we need to really understand the availability of the data and the challenges of managing it. After going through each of the steps for analyzing the requirements, the output could be something like this: a quick summary of the user personas, the problem statement, and the production environment, as well as a flow diagram showing the different steps of the business work processes. Let's use logistics management AI as an example. The different user personas we want to support include the shopper, requester, approver, buyer, and supplier. Each of them wants to perform a different task using our AI product. For example, the shopper wants to enter the products they want to buy into the system. The requester wants to submit these requests after reviewing the information entered by the shopper. The approver will consider the needs of the company and then approve the request if it's reasonable. The buyer will help find a supplier who can fulfill the request. And the supplier will actually provide the products the team needs. After learning about the different steps in this process, we realized that the amount of manual input required from each of these user personas is significant, and it is an error-prone process to rely on the users to directly enter all the required information into the system. 
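To make the hand-offs between personas concrete, the workflow described above can be sketched as a simple ordered state machine. This is only an illustrative model of the talk's example; the names and fields are assumptions, not any real product's data model:

```python
from dataclasses import dataclass, field

# Ordered hand-offs in the hypothetical procurement workflow:
# shopper -> requester -> approver -> buyer -> supplier
STAGES = ["shopper", "requester", "approver", "buyer", "supplier"]

@dataclass
class PurchaseRequest:
    item: str
    quantity: int
    stage_index: int = 0                       # position in STAGES
    history: list = field(default_factory=list)

    @property
    def stage(self) -> str:
        return STAGES[self.stage_index]

    def advance(self, note: str = "") -> None:
        """Record the current persona's action and hand off to the next one."""
        self.history.append((self.stage, note))
        if self.stage_index < len(STAGES) - 1:
            self.stage_index += 1

req = PurchaseRequest(item="laptop", quantity=3)
req.advance("entered product details")   # shopper
req.advance("reviewed and submitted")    # requester
req.advance("approved: within budget")   # approver
print(req.stage)  # buyer
```

A model like this makes the manual-input problem visible: every `advance` call represents data a person has to type in, which is exactly where AI-assisted information collection can cut work.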
So the goal of the AI system would be to improve the efficiency of this order fulfillment process by reducing the amount of manual input the users need to submit to the system. AI can help automate the information collection process in this case. We also understand that many of the users have a legacy setup: they may have limited network connectivity in their work environment, and the computation power of their machines may not be great. After learning about all these requirements, we are ready to move to the next step and figure out the major pieces of the machine learning system design. The first component we want to design is a data annotation tool that helps users collect and label data before they start training a machine learning model. Sample features of this component could include enabling users to hire human labelers if they don't have enough labeling resources, or providing manual or automatic labeling tools. All these different types of tools can help users label the data more efficiently, or even suggest or assign labels automatically in some cases. The ideal user scenario would be that users can use these data annotation tools to generate enough model training samples to help ensure the machine learning model performs well in the next step. When trying to understand the users' requirements on model training, we usually need to understand how many of the hyperparameters users may want to control. More sophisticated users may want to tune the hyperparameters themselves, but other users may prefer a black-box service. Some of them may want to manage the model training resources and machines, and some may want to limit the time they spend on model training. 
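One common way to serve both the black-box users and the hyperparameter tuners is a training configuration where every knob has a sensible default, so casual users can omit everything while sophisticated users override what they care about, including time and resource budgets. A minimal sketch; the field names and defaults here are illustrative assumptions, not any specific product's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingConfig:
    # Black-box users accept the defaults; advanced users override them.
    learning_rate: float = 1e-3
    batch_size: int = 32
    max_epochs: int = 10
    # Resource controls: cap wall-clock budget and machine count.
    max_train_hours: Optional[float] = None   # None means no time limit
    num_workers: int = 1

# A black-box user takes the service defaults as-is:
simple = TrainingConfig()

# A sophisticated user tunes everything, including a time budget:
tuned = TrainingConfig(learning_rate=3e-4, batch_size=128,
                       max_epochs=50, max_train_hours=8, num_workers=4)

print(simple.learning_rate, tuned.max_train_hours)
```

The design choice is that the same config type backs both experiences, so the product can expose a simple UI on top without maintaining a separate code path for advanced users.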
So to figure out all these questions, we need to tie this back to the user personas we want to support, and see what their preferences are for each of these steps when they try to build a machine learning system that can help them automate their work processes. The next step would be to help the users understand the model performance. Metrics such as precision, recall, and area under the curve can be really helpful. Besides that, showing a confusion matrix helps users understand how the model decides whether an example is a true or false case, and where it may make mistakes when generating predictions. This can be very helpful for further improving the data quality and preparing for the next round of model training to improve the results. We can also let users upload some test samples so they can see how the models perform on different inputs. After that, we want to help the users figure out how to deploy and manage the machine learning models, and how to generate predictions that help automate their work processes. Managing deployment can be a non-trivial task, because the machines that the models are deployed on may vary a lot. So providing a single page where the different deployments can be monitored, and simplifying the deployment process for the user, are popular features for these deployment components. We can also further help the users figure out how to scale up adoption. Things like continuous testing and monitoring of model performance after deployment can be really helpful, along with other MLOps features like running online tests to catch drops in model performance, or sending alerts when the prediction results show unusual patterns. Features like this can be very helpful. 
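For concreteness, precision, recall, and a binary confusion matrix all come from the same four counts, and can be computed in a few lines of plain Python (no particular ML library assumed; the labels here are made-up sample data):

```python
def confusion_counts(y_true, y_pred):
    """Count true/false positives/negatives for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

y_true = [1, 1, 0, 1, 0, 0, 1, 0]   # ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # model predictions
tp, fp, fn, tn = confusion_counts(y_true, y_pred)
precision = tp / (tp + fp)   # of predicted positives, how many were right
recall = tp / (tp + fn)      # of actual positives, how many were found
print(f"precision={precision:.2f} recall={recall:.2f}")  # both 0.75 here
```

Surfacing the four raw counts alongside the ratios is what lets users see *which kind* of mistake the model makes, which in turn tells them what additional training data to collect.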
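The alerting idea at the end can be sketched as a check that compares the live prediction distribution against a baseline captured at launch and fires when they diverge. The tolerance and window here are illustrative assumptions; a production system would use a proper statistical test and per-model tuning:

```python
def positive_rate(predictions):
    """Fraction of positive (1) predictions in a window."""
    return sum(predictions) / len(predictions)

def drift_alert(baseline_preds, live_preds, tolerance=0.15):
    """Alert when the live positive rate drifts too far from the baseline.

    `tolerance` is an illustrative threshold, not a recommended value.
    """
    drift = abs(positive_rate(live_preds) - positive_rate(baseline_preds))
    return drift > tolerance

baseline = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% positive at launch
live = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]       # 80% positive this hour
print(drift_alert(baseline, live))  # True: the pattern has shifted
```

A check like this does not say *why* performance changed, only that the prediction pattern looks unusual, which is exactly the signal the talk describes for triggering an alert to the user.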
After going through all these different steps of the product design phase, we may have some outputs like a product design doc, UX design mocks, or a product design prototype. Using the Google Cloud AutoML Vision website as an example, we can see that we may generate some UI mocks showing how users can import data, label the images, or train a machine learning model. Engineers can also build prototypes to try out some of the major components we designed, so that we can interview some early customers and get their feedback before moving on to build the complete product suite. After designing the product, we can think about how to build the partner ecosystem and bring the product to market. There are different types of partners we can collaborate with when launching an enterprise AI product. For example, system integration partners may help put the different pieces together and build an AI pipeline to help users completely automate their workflow. Independent software vendors can integrate AI solutions into their existing products, so the product gets adopted by end users more easily. Some specialized partners have lots of insight into certain domains, so they can help us understand the industry experts' requirements and then customize the AI product for their usage. Choosing the right type of partner can help us scale the product more efficiently. Besides working with partners, we can also support customers directly. In that case, we want to figure out who the target customers are and, when trying to get them on board, what support we need to provide. Maybe we need to train them to use the new AI product, which they are not familiar with. Maybe we need to help them design the architecture of their end-to-end system so they can understand how AI can work together with some of the existing tools they've been using. 
We'll also need to think about the pricing model: whether it should be consumption-based, value-based, or whether we want to include some other components in the pricing framework. After figuring out all these different questions, we will be ready to launch the product. So, to recap the different components we've gone through in this presentation. At the requirements level, we want to understand the business workflow, the user personas, the production environment, and the data availability. When designing the product, we want to figure out how users can annotate the data, train the model, evaluate the model, do the deployment, and scale up adoption. And finally, we want to figure out how to bring the product to market, either through partners or via a direct-to-customer route. With all this in mind, we will be ready to launch our new enterprise AI products. And that is all for the presentation. Hopefully this is helpful to you. Thank you.