Hello and welcome. This is a quick peek at an upcoming workshop on OpenShift as a machine learning platform. We'll use OpenShift to help develop and deploy a computer vision model, using native and third-party tools to demonstrate a few use cases both on the cluster and at the edge.

For this workshop we'll be using YOLO, a pre-trained open-source computer vision model capable of detecting dozens of everyday objects in images, video files, or even live camera feeds. As you can see, it displays bounding boxes around each object, with the name of the object and a score indicating its confidence level. It can also detect multiple objects, track them independently, and give each an individual score. It's pretty useful right out of the box, but in this workshop we'll customize it to recognize different objects through retraining.

I'll detail the entire process of capture, annotation, training, deployment, and retraining to refine our custom model. The result will be a pipeline we can reuse to produce entirely different models for different scenarios. The models themselves can be integrated into intelligent applications, where they can interact with business logic and external systems and ultimately interact with the physical world.

OpenShift is the ideal platform, as its capabilities complement the data science and AI/ML principles of experimentation and continuous iteration while enabling workflow and best-practices management through automation and pipelines. Its flexibility to deploy prescriptive, curated tool sets and/or combinations of third-party, open-source, or downstream tools enables teams to work with the solutions that are right for them.

So stay tuned for this series, and be sure to let us know your thoughts on this or any other workshop ideas for the future. Thanks for watching!
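To make the detection output described above concrete, here's a minimal sketch of how an application typically consumes YOLO-style results: each detection carries a class label, a per-object confidence score, and a bounding box. The `Detection` type, field names, and `annotate` helper below are illustrative assumptions for this sketch, not the actual YOLO API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected object, as a YOLO-style model might report it."""
    label: str          # object class name, e.g. "person", "bicycle"
    confidence: float   # model's confidence score, 0.0 to 1.0
    box: tuple          # bounding box (x1, y1, x2, y2) in pixel coordinates

def annotate(detections, min_confidence=0.5):
    """Keep detections above a confidence threshold and build the
    caption that would be drawn above each bounding box."""
    return [
        (d.box, f"{d.label} {d.confidence:.2f}")
        for d in detections
        if d.confidence >= min_confidence
    ]

# Example frame: two objects detected and tracked independently,
# each with its own score; the low-confidence one is filtered out.
frame = [
    Detection("person", 0.91, (34, 20, 180, 310)),
    Detection("dog", 0.35, (200, 150, 260, 240)),
]
print(annotate(frame))  # only the "person" detection survives the threshold
```

Retraining the model on custom classes, as the workshop will do, changes only which labels appear here; the surrounding application logic stays the same.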