Hi, I'm Karthik Ramachandran. I'm a product manager at Google Cloud, and I'm here to talk to you about Kubeflow Pipelines version 2.

Before we get into what Kubeflow Pipelines is, let me start with what I mean by "pipeline," because the term can be a bit overloaded. Here, we're talking about machine learning pipelines, because machine learning models require complex, multi-step workflows. When building a model, for example, you may have to clean and transform data, create features, train multiple models, and evaluate those models. Managing and executing these workflows can be difficult, especially if you are trying to run them in a reproducible, portable, cost-effective, and scalable way. So when I say "pipeline," I mean a way of modeling one of these workflows as a set of connected steps, where each step takes as input the outputs of previous steps, performs some additional computation, and produces outputs that can be used by later steps. What I have on the screen here is a canonical machine learning pipeline, one that you might see in many, many applications: it extracts data, validates it, prepares it, and feeds it into a training step. Once the model has been trained, it is evaluated, validated, and deployed.

Kubeflow Pipelines is an open-source, Kubernetes-native framework for building and deploying scalable, portable machine learning workflows. It allows you to model ML workflows as a series of container executions, where each container can take input from a previous execution and produce outputs that can be used in subsequent steps. You can author your pipeline using either the Kubeflow Pipelines SDK or the TensorFlow Extended (TFX) SDK. The SDKs make it straightforward to convert the Python code you would normally write for training a model into a reproducible pipeline. They also make it easy for you to orchestrate non-Python code, because in Kubeflow Pipelines, every step of the pipeline is simply a container execution.
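To make the "connected steps" idea concrete, here is a minimal, framework-free sketch of such a workflow in plain Python. This is purely illustrative, not the KFP API: the step names and toy data are invented, and each step's output simply becomes the next step's input.

```python
# A toy pipeline: each step is a function whose outputs feed later steps.
# Step names and data are illustrative only, not from any real framework.

def extract_data():
    # Pretend these are raw records pulled from some source system.
    return [1.0, 2.0, 3.0, 4.0]

def prepare_data(raw):
    # "Preparation" here is just normalizing values into [0, 1].
    top = max(raw)
    return [x / top for x in raw]

def train_model(features):
    # A stand-in "model": the mean of the prepared features.
    return sum(features) / len(features)

def evaluate_model(model, features):
    # A stand-in metric: mean absolute error against the "model".
    return sum(abs(x - model) for x in features) / len(features)

# Wire the steps together: each output is the next step's input.
raw = extract_data()
features = prepare_data(raw)
model = train_model(features)
error = evaluate_model(model, features)
print(f"model={model:.3f} error={error:.3f}")  # → model=0.625 error=0.250
```

In KFP, each of these functions would instead run as its own container, with the framework handling scheduling and passing the intermediate values between steps.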
In addition to containers, you can also use KFP to orchestrate other applications and services. Finally, a number of major clouds, including GCP, provide components that enable you to leverage their managed services within a machine learning pipeline. For example, Google Cloud provides components that allow you to take advantage of GCP's Dataflow or Vertex AI's distributed training service. Kubeflow Pipelines takes care of the work associated with scheduling each step, passing information between steps, and capturing metadata about the artifacts produced and consumed by each step of the pipeline, just to name a few of the capabilities of the system.

When we look at the next version, Kubeflow Pipelines V2, we have spread our effort across four key areas. The overall goal is, of course, to make the system easier to use and to make it much simpler for the community to build on top of KFP. The first major area of improvement is metadata and metrics. Right now, Kubeflow Pipelines automatically tracks the artifacts that are produced and consumed by each step in the pipeline. In V2, we're adding the ability for users to specify custom metadata for artifacts, enabling rich, queryable descriptions of those artifacts. We will provide simple, intuitive ways to capture ML metrics, along with a robust set of visualizations and the ability to compare ML metrics across pipeline runs. We're also streamlining the Kubeflow Pipelines DSL, making it much easier to use, removing redundant functionality, and adding tools that make it easy to go from Python code to containers. And the UI is getting a much-needed refresh: we're dramatically improving its scalability and performance, adding modern styling, and making improvements to support the new metadata and metrics features we are adding.
Finally, and perhaps most importantly, we're introducing a new platform-independent representation of ML pipelines to make it easier for the community to build tools based on Kubeflow Pipelines.

Let's go a bit deeper into each of these areas. In terms of metadata and metrics, we're making these first-class citizens in the platform. We'll now provide a standardized type ontology and tools for querying and updating metadata. This includes enhancements to the SDK that make it easier to attach custom metadata to artifacts, the ability to query and visualize artifact lineage, and a set of well-defined integration points for those building tooling on top of KFP. For example, you may want to build tooling that tracks which datasets were used to produce which models, or you may want to trace a single record all the way from training through to the model it produced and where that model was deployed, in case you need to take a model out of production because you no longer want to use a particular dataset, for example. These tools will enable users to build robust model governance on top of their machine learning pipelines.

In terms of the SDK, we're making three major sets of changes. First, we are streamlining the authoring of components and unifying the different mechanisms for authoring components under the @component annotation, which is shown on the slide. Second, we're adding a number of mechanisms for specifying and querying metadata. Third, we're introducing tools that simplify the process of building containers from Python functions. The goal here is to really shorten the distance between the experimental code that you write and the production code that you need. Overall, we think the new SDK will be smaller, more easily understandable, and as rich and powerful as the original.

The UI is also getting a refresh. The new UI will be able to support much larger, more complex workflows.
This includes support for all of the control-flow features, such as loops, conditionals, and exit handlers. It will also have the built-in ability to scale to pipelines consisting of hundreds of steps. And finally, we will support the visualization of artifacts and metadata natively within the UI.

Finally, let me talk a little bit about the intermediate representation. We have had a number of requests over the years from our community to produce an intermediate representation of machine learning pipelines which is both DSL-agnostic and execution-engine-agnostic. The goal here is to make it easy for users to build new tools for authoring pipelines and to enable KFP pipelines to run on alternative orchestrators and backends. Generally, people want to be able to move pipelines between different systems and have a high degree of interoperability. For example, one of the backends that will support the new intermediate representation is GCP's Vertex Pipelines, a fully managed ML orchestration system that is capable of running KFP pipelines today. With Vertex and the intermediate representation, you'll be able to write a pipeline or component against the open-source KFP implementation and run it on GCP's managed service with little or no change.

So in my final minute, let me suggest a few next steps. First, you can learn more by visiting the Kubeflow Pipelines site. We also have an initial version of Kubeflow Pipelines V2 out, with the goal of moving toward the final version in late October or early November; you can try it out and look at the documentation at the bit.ly link on this slide. And you can of course go to our GitHub repository, where you'll find tons of opportunities to contribute and join the community. I should note that the community is rich and ever-growing, and we would love to have your input and your contributions. With that, thank you all very much, and I hope you have a good day.