Hi, I'm David Adams, a Senior Principal Tech Marketing Engineer with Dell Technologies Cloud, and with me are Ashish Bhattwara, Engineering Head, Streaming Data Platform, and Andre Keady, Senior Principal Software Engineer. Today we'll be demoing a real-time object detection application that utilizes data streams from the Dell EMC Streaming Data Platform and tiers long-term storage to S3-compatible object storage with the recently announced Dell EMC ObjectScale, currently in Early Access, all deployed on the Dell Technologies Cloud Platform with vSphere with Tanzu enabled.

This demo solution is made up of four key technologies. Dell Technologies Cloud Platform (DTCP) delivers a turnkey experience that's easy to deploy and manage thanks to the tight integration between VMware Cloud Foundation and VxRail. With DTCP, organizations can develop, test, and run cloud-native applications alongside virtualized applications on a single platform. vSphere with Tanzu is the re-architecting of vSphere to embed Kubernetes into the vSphere control plane and unify access to compute, storage, and networking. It allows a user to deploy pods directly into vSphere, known as vSphere Pods, or to create developer-managed Tanzu Kubernetes Grid clusters that run in virtual machines; we will be deploying both in this demo. Dell EMC ObjectScale is a re-engineered object storage platform that takes advantage of Kubernetes-native automation for deployment, scaling, and management. With rich S3 compatibility and self-service APIs, developers can quickly spin up object storage buckets to fuel everything from big data and analytics applications to ephemeral dev/test sandboxes; in this case, we'll be using it as the long-term storage for our streaming application. Finally, the Dell EMC Streaming Data Platform takes the best of open-source streaming data software, in Pravega and Apache Flink, and pulls it together in a production-grade, supported platform.
SDP provides a reliable, repeatable platform for edge and core solutions.

Next, let's take a look at the basic architectural view of what we've deployed for this demo solution. First, we deployed DTCP, then enabled vSphere with Tanzu. Next, we deployed ObjectScale as vSphere Pods in its own namespace in the Supervisor Cluster and created an object store for Streaming Data Platform. Then we created a namespace and TKG clusters for Streaming Data Platform and JupyterHub. Finally, we deployed SDP, JupyterHub, and our streaming application for object detection.

Let's quickly review the infrastructure deployment in vSphere before we move on. Here we can see that our Dell EMC ObjectScale service has been enabled under Supervisor Services, and its vSphere plugin is enabled. Looking under Object Stores, we can see that we've already created an object store; we can either create new stores here or view our existing ones. This is the object store that we're using for our Streaming Data Platform long-term storage. You can see that we have several buckets, and we can also manage users, Kubernetes resources for this deployment, certificates, events, and health checks. On the left-hand side, you can see that it's all deployed in a Dell EMC ObjectScale system namespace as vSphere Pods.

Taking a look at our Streaming Data Platform namespace, where we've deployed Pravega, Apache Flink, and all the other resources, you'll see that we have multiple TKG clusters deployed here: a cluster for our JupyterHub deployment, as well as a Streaming Data Platform cluster with highly available master nodes and multiple worker nodes, where we've deployed the Streaming Data Platform application, which will be reviewed later in the demo.

Now I'm going to hand it over to Ashish to give some more detail on the Streaming Data Platform architecture. Thank you, David. Let me start with the Dell EMC Streaming Data Platform.
So, the Dell EMC Streaming Data Platform is a modern analytics platform that solves the problem of ingesting, storing, and analyzing real-time and historical streaming data, all with enterprise scale and production support. Streaming Data Platform is built from community-developed open-source software components; for example, it uses Kubernetes as the orchestration layer. Pravega, the core component of Streaming Data Platform, is a streaming storage system that simplifies the development of streaming applications by unifying the concepts of historical and real-time data, while providing powerful production capabilities such as exactly-once consistency and ingestion auto-scaling. The other main component is a plugin architecture that supports modern real-time analytics engines such as Apache Flink. Also available in the upcoming release are Apache Spark and Pravega Search, an Elasticsearch-like engine that allows real-time streaming queries of your unstructured log data, all on the same SDP platform, reducing the need for a separate pipeline and hardware. By taking the best of open source and pulling it together in a production-grade, supported platform, SDP provides a reliable, secure, manageable, and repeatable platform for edge and core solutions.

Streaming Data Platform has a few key features. Number one: unified data analytics for both real-time and historical data, so that data scientists and developers write code once that deals with all types of data, without worrying about independent batch and stream processing. In other words, developers don't have to code differently against live off-the-wire data versus historical files of data. Batch, IoT event, and pure byte-stream use cases can all coexist on the same platform.
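The "write code once" idea above means a reader can start at any position in a stream and roll seamlessly from stored historical data into real-time data. As a rough illustration (a toy in-memory stand-in, not the real Pravega client API), a unified reader looks like a single iterator over both:

```python
from collections import deque


def unified_read(historical, live_queue):
    """Yield a stream's historical events first, then continue with live
    events, through one iterator. Toy stand-in for the unified
    batch/stream reading model described above; the real Pravega
    client API differs.
    """
    for event in historical:      # replayed from long-term storage
        yield event
    while live_queue:             # then the live tail of the stream
        yield live_queue.popleft()
```

Application code consuming this iterator never has to know where the historical data ends and the live data begins.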
The second key feature is the DVR-like ingestion and playback capability of SDP, which allows ingestion of historical and real-time data through a single ingestion pipeline, unlike existing solutions that require one system for real-time data and a separate batch-processing system for historical data. With SDP, users can go back in time and play a historical stream alongside the real-time stream. The third key feature is support for two-tier long-term storage, an architecture that allows industry-standard storage systems such as Isilon and ECS to be used for long-term storage; the configuration lets users retain data for a specified period, or until the stream grows to a specific size limit. The fourth key feature is enterprise-grade security, which is of prime importance when multiple business units leverage the same platform instead of every business unit building its own independent solution. And last but not least is the secure, scalable, multi-tenant development platform: a platform that can be used by multiple independent business units, in contrast to existing solutions that require each business unit to stand up its own analytics system. That means SDP has the necessary access controls to ensure that data is secure.

With Streaming Data Platform in place, organizations can ingest real-time streaming data and work with it in ways they may never have imagined. We are seeing customers solve some amazing problems just by having access to the depth and breadth of their sea of data and being able to analyze it however they need, with ease. Let's take a look at our next section, where Andre demonstrates real-time object detection using Streaming Data Platform.

Hi everyone, today I'm going to show you the object detection demo using Streaming Data Platform running on vSphere with Tanzu. My persona in this demo is a data scientist; I'm part of a team developing an advanced driver assistance system (ADAS).
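The retention behavior described above (keep data for a period of time, or until a size limit is reached) can be modeled with two knobs. This is only an illustrative sketch; the field names `max_age_seconds` and `max_size_bytes` are hypothetical, not SDP's actual configuration keys:

```python
from dataclasses import dataclass


@dataclass
class RetentionPolicy:
    """Toy model of the two retention knobs described above: trim data
    older than max_age_seconds, or trim once the retained stream
    exceeds max_size_bytes. Names are hypothetical illustrations."""
    max_age_seconds: int
    max_size_bytes: int

    def should_trim(self, event_age_seconds: float, stream_size_bytes: int) -> bool:
        # Either limit being exceeded triggers trimming of the oldest data.
        return (event_age_seconds > self.max_age_seconds
                or stream_size_bytes > self.max_size_bytes)
```

Either condition alone is enough to trigger trimming, which matches the "period or size limit" wording above.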
We continuously collect video and other sensor data from a fleet of test vehicles. There are three stages to this experiment. First, we use the Pravega gRPC connector and a Jupyter notebook to push the video frames into a Pravega stream. Pravega provides Python and Java clients for the gRPC connector; here in JupyterHub we are using the Python client. The data will be stored in a Pravega stream that I called "raw video". Second, for object detection I created a Flink application. The object detection model runs as a Flink job in SDP. It uses the YOLO object detection model, which stands for You Only Look Once, and the Flink job uses the Java binding for TensorFlow. As each video frame is ingested, the object detection model detects objects such as cars, buses, and people, and adds labeled boxes to the video frames around the detected objects. It also enriches the metadata with the detected-object list and the detection confidence. And finally, using the same gRPC connector, I can pull the events from the "object detector output video" stream in Pravega and play the video in a Jupyter notebook.

Now let's move to the actual demo. We're starting at the SDP UI. First I'm going to log in, and here under the Analytics tab I can create a project; this creates a namespace for my Pravega streams and my Flink cluster. Note that for this example we are using the object store available on DTCP, which was reviewed earlier in this demo. On the left side I can create a Flink cluster for my application; this can be done through the UI, or with Helm, to deploy the Flink cluster size that we need. For the Flink application, SDP provides a local Maven repo where we can upload the application artifacts needed for this project. Now I can run the Flink application for object detection, which reads from the "raw video" stream, processes the data, and pushes the processed video to a new stream called "object detector output video". On our Pravega tab here, we can see all the streams built for this demo.
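The first stage above, pushing video frames into a Pravega stream as events, requires packaging each frame plus its metadata into a single event payload. A minimal sketch of one possible encoding (a JSON envelope with a base64 frame; the demo's actual wire format is not shown, so this is an assumption for illustration):

```python
import base64
import json


def encode_frame_event(frame_bytes: bytes, camera_id: str, ts: float) -> bytes:
    """Package one video frame plus metadata as a single event payload.
    Hypothetical encoding: JSON envelope carrying a base64 frame."""
    envelope = {
        "camera_id": camera_id,
        "timestamp": ts,
        "frame": base64.b64encode(frame_bytes).decode("ascii"),
    }
    return json.dumps(envelope).encode("utf-8")


def decode_frame_event(event: bytes):
    """Recover the frame bytes and metadata from an event payload."""
    envelope = json.loads(event.decode("utf-8"))
    frame = base64.b64decode(envelope["frame"])
    return frame, envelope["camera_id"], envelope["timestamp"]
```

The ingest notebook would call `encode_frame_event` for each frame and hand the bytes to the gRPC connector's writer; the playback notebook would run `decode_frame_event` on each event it reads back.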
First, we start with the raw video stream, and the output of the processing goes into the output stream. In JupyterHub I created two notebooks. One is called "ADAS ingest", and it pushes the data into the Pravega stream; I'm going to go ahead and start it. The second notebook uses gRPC to read the video and display it here with the object detection. Both notebooks are needed for this demo: one for ingestion, and one to read the data processed by our Flink application.

Now let's go back to the SDP UI. From here we'll look at the Pravega streams, and we should be able to see data starting to arrive in the stream. As the data comes into the Pravega stream, we see the Flink application reading and processing it. Below that is the Flink job reading from the stream, processing it, and writing the data to the "object detector output video" stream. If we go back to the scope, we should see the data arriving in this stream, as you can see on screen. Here is the enriched raw video after running the object detection job.

Back in JupyterHub, let's test the new object detection model on the collected video. That raw video has been continuously ingested into SDP, and now I'm going to play it back. As you can see in this picture, there are boxes around the detected objects: people, buses, cars, and bicycles, and it's running live as we ingest the data. The object detection job runs the model, detects the objects, and we can process and display the result, as you can see here.

With this, we conclude our demo of object detection using Streaming Data Platform running on vSphere with Tanzu. Thank you.
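The per-frame enrichment the Flink job performs (attaching the detected-object list and detection confidence to each frame's metadata) can be sketched as a simple map step. The actual job is written in Java against Flink's DataStream API; this Python toy only illustrates the shape of the enrichment, and the field names are hypothetical:

```python
def enrich_event(event: dict, detections: list) -> dict:
    """Attach the detected-object list and per-object confidence to a
    frame's metadata, as the Flink job does for each ingested frame.
    `detections` stands in for the YOLO model output; field names such
    as "detected_objects" are illustrative, not the demo's real schema.
    """
    out = dict(event)  # copy so the input event is left untouched
    out["detected_objects"] = [d["label"] for d in detections]
    out["detections"] = [
        {"label": d["label"], "confidence": d["confidence"]}
        for d in detections
    ]
    return out
```

In the real pipeline this step runs once per frame inside the Flink job, between reading from the raw stream and writing to the output stream.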