Hello and welcome to SensiML's presentation session on how to build smarter IoT products faster using the SensiML Analytics Toolkit and the ST SensorTile.box IoT evaluation kit. My name is Chris Rogers and I'm CEO of SensiML. In this session, we will provide an overview of new AI tools and methods that transform the process of building smart sensor algorithms, requiring a fraction of the time, cost, and expertise traditionally needed. We will step through an example application from start to finish, highlighting the advantages and best-fit applications for new AI development tools targeting the smallest of computing devices at the IoT edge.

Before we get started, I would like to provide a bit of background on who SensiML is and our history, expertise, and experience in AI development tools for the IoT edge. SensiML is focused on delivering innovative AI tools to the IoT developer community, enabling automated code generation for complex sensor algorithms. Specifically, we focus on algorithms designed to run on the smallest low-power cores that can adapt and learn with use. As powerful new microcontrollers, low-cost MEMS sensors, and 5G communications usher in the next big wave of AI at the edge, sophisticated new tools like SensiML's will be key to harnessing this technology, and thus instrumental to IoT innovators as they build and launch new AI-driven intelligence in IoT devices.

SensiML traces its start to 2012 as an intact software tools team within Intel Corporation. At that time, the SensiML founders were tasked with delivering groundbreaking AI tools for Intel's Curie and Quark SE microcontroller platforms. That tool, known as the Intel Knowledge Builder Toolkit, was received with much anticipation amongst the wearable developer community when it launched in 2016. By mid-2017, Intel elected to shut down its wearable computing division, setting the stage for SensiML as a spin-off and independent software vendor. From that point forward, SensiML has worked to expand its support and partner ecosystem, choosing to partner with market-leading hardware vendors such as ST to bring complete solutions to customers for rapid prototyping and development of smart IoT products.

With that, let's get into the actual process of building smart IoT sensor products. There are essentially four options available to a developer. The first is cloud-based AI frameworks, where raw IoT sensor data is shipped to cloud servers for processing and insight. Second is the class of solutions focused on providing deep learning at the edge, with specialized edge machine learning accelerators for neural network models. The time-tested route is hand-coded algorithms, relying on the wisdom and efforts of data scientists, signal processing experts, and embedded developers to devise and optimize algorithms in a manual coding environment. And lastly, and the focus for most of our discussion, is a new class of AI tools that brings machine learning automation to the smallest of computing nodes, known collectively as TinyML.

Let's start with the first of these, the cloud-based AI frameworks. You might know these by some of the familiar open-source platforms such as Google TensorFlow and Apache Spark, among others. Such platforms were conceived to address big data applications like web search, financial fraud detection, and marketing analytics. In the context of IoT sensor processing, such systems are adequate in some cases but problematic in others.
For instance, remote sensing applications may be constrained by network performance well below that needed to convey the high-bandwidth sensor information vital for a given application. Consider a remote oil and gas or renewable energy field site and its many process monitoring and predictive maintenance sensors. Here, pre-processing of rich sensor data at the edge can greatly reduce the network performance needs, as only the insights of interest need be relayed to the cloud for aggregation with other similar endpoints and for decision-making. As a rough illustration, a single 3-axis accelerometer sampled at 400 Hz with 16-bit resolution produces about 2.4 KB of raw data every second, while a once-per-second classification result can be conveyed in just a few bytes.

Latency-intolerant applications are another example where cloud AI doesn't fit well. Elderly fall detection devices need real-time response to work, as do many other real-time applications where the round-trip time for remote AI processing is just not practical. Next are mission-critical applications that simply must work irrespective of network outages. With remote cloud processing of local IoT data, the application only works to the extent of the weakest link in the network path from the IoT endpoint to the data center. Finally, data security and privacy are often factors driving a push for AI processing to the edge, whether for critical infrastructure sensor data or for privacy concerns as seen with home smart hubs. A partitioning of local AI versus cloud AI often points to the need for edge or hybrid cloud/edge architectures.

Next on our list is the recent introduction of purpose-built AI edge platforms for deep learning. Nearly all of these have been developed from the outset to address image classification and vision recognition problems. As such, they are exciting advances to be sure, but the compute requirements for real-time, full-frame-rate vision recognition necessarily drive multi-core processor and GPU solutions with power consumption profiles measured in watts, not microwatts. For a great many smart IoT device applications, the cost and power consumption of such solutions are prohibitive. A remote livestock wearable sensor or an industrial predictive maintenance monitor on a hard-to-access motor needs battery power to be practical, with battery life measured in months or years, not hours or days. Thus, the generalization and takeaway is that in many such applications, not only is the hardware inappropriate, but systems designed for deep learning and neural network inference models built with vision and imaging in mind are simply overkill.

Next, let's look at the traditional approach of hand-coded algorithm development. It's tempting to think of this as the ideal process for building intelligence into IoT devices, but practically speaking, it's the most arduous method, requires the greatest level of scarce expertise, and limits innovation and productivity. To illustrate, the typical flow is as follows. A new concept is devised and prototyped in hardware. A working hypothesis on the type of sensors, and the resulting data required to derive the desired application insight, is developed: first by collecting raw data, and then by exploring that data in mathematical tools like MATLAB or open-source data analysis software (a sketch of what such a hand-prototyped algorithm might look like follows below). Once an idealized algorithm is developed and tested against the initial data, the task is shared with the firmware team to assess the compromises and simplifications needed to hit the compute, memory, and I/O limits of the target hardware. In some cases, the model won't fit and requires an expensive rethink and/or re-spin of boards to accommodate components with greater headroom.
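To make that prototyping stage concrete, here is a minimal sketch of the kind of hand-coded first pass described above: a hand-tuned threshold on windowed vibration energy, written in Python. All of the numbers here, the window length, the threshold, and the synthetic data, are invented purely for illustration; a real hand-coded effort would iterate on them per device and per deployment, which is exactly the maintenance burden being described.

```python
import numpy as np

def detect_vibration(accel_xyz, window=400, threshold=0.15):
    """Return one True/False per window: does RMS deviation exceed the threshold?"""
    results = []
    for start in range(0, len(accel_xyz) - window + 1, window):
        seg = accel_xyz[start:start + window]
        # Remove per-axis DC offset (gravity), then measure residual energy.
        rms = np.sqrt(np.mean((seg - seg.mean(axis=0)) ** 2))
        results.append(bool(rms > threshold))
    return results

# Example: 10 seconds of synthetic 400 Hz data, quiet then vibrating.
rng = np.random.default_rng(0)
quiet = rng.normal(0.0, 0.02, size=(2000, 3))      # idle device
vibrating = rng.normal(0.0, 0.30, size=(2000, 3))  # loose mount
print(detect_vibration(np.vstack([quiet, vibrating])))
```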
Once passed, the device is subjected to more extensive testing and new data is collected, where often corner cases that were not conceived of or captured in the initial dataset are discovered. This is often where code gets complex, difficult to support, and brittle. The net result is a process that is time-intensive, requiring six to nine months on average for the algorithm and code implementation alone. It's resource-intensive, as domain experts, data scientists, signal processing engineers, and firmware developers each have an active and unique role critical to the success of the algorithm design. It's costly, not only in the upfront development phase, but also in the sustaining support and upkeep over time as the code grows more brittle. It's risky, as the unknowns and setbacks of a human-driven coding process rarely come in on schedule without some kind of surprise. And finally, the process overall doesn't scale. Case in point: if there's a reason wrist-worn wearables have struggled to date to gain momentum as real platforms, an important factor is arguably the lack of scalability in building smart applications that leverage the common hardware and sensors contained in these devices.

Which brings us to the final and most recent class of tools for developing smart IoT devices, namely those purpose-built for applying machine learning methodologies on the smallest of processors, known as TinyML. To put these TinyML applications in proper context, it's helpful to look at the following chart showing how various edge IoT machine learning workloads map to edge hardware processing power and performance requirements. As stated earlier, most of the press and attention has focused on the upper right quadrant, composed of hardware and tools targeting vision and image recognition using deep learning methods. Equally important, and with enormous market growth projections over the next five years, are those applications in the lower left quadrant: the realm of TinyML. These are distinctly different from vision and image classification, as they have markedly different power and performance requirements, for the reasons described earlier. Consequently, the tool's optimization process requires a broader and more comprehensive search across pre-processing transforms, features, and alternative machine learning classifiers to arrive at the most compact and power-efficient code possible that still achieves the desired performance.

Key applications within TinyML include motion sensing for consumer and wearable applications, gesture recognition, audio sensing and classification, industrial vibration monitoring, process control, and structural health monitoring, to name just a few. Expanding somewhat further on potential applications within the TinyML segment, we can think of broad classes of motion, audio, and environmental sensing as shown. While particular applications are virtually limitless, we have seen early traction from customers in consumer, commercial smart buildings and infrastructure, agricultural sensing, healthcare wearables, and industrial monitoring. Common across all of these is the need for one or more rich time-series sensors used as input for the local analysis of a particular application insight.
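To ground what such local time-series analysis looks like in code, here is a generic sketch, not SensiML-generated code, of the sense, window, featurize, classify loop that these TinyML applications share. The window size, features, and threshold are placeholder assumptions:

```python
from collections import deque

WINDOW = 400              # samples per decision (e.g., 1 second at 400 Hz)
buffer = deque(maxlen=WINDOW)

def extract_features(window):
    # Placeholder features; see the feature-transform sketch later in
    # this transcript for more representative examples.
    mean = sum(window) / len(window)
    energy = sum((x - mean) ** 2 for x in window) / len(window)
    return (mean, energy)

def classify(features):
    # Stand-in for a compact trained classifier (tree, SVM, small net).
    return "fan on" if features[1] > 0.01 else "fan off"

def on_sample(x):
    """Called per raw sensor sample; emits a label once per full window."""
    buffer.append(x)
    if len(buffer) == WINDOW:
        label = classify(extract_features(list(buffer)))
        buffer.clear()
        return label  # only this insight, not the raw stream, leaves the device
    return None
```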
Shifting gears: now that we've covered general development approaches and the pros and cons of various existing methods for building smart IoT devices, I'd like to introduce the unique capabilities that SensiML and ST bring to developers seeking a better approach to building intelligent devices for TinyML applications, and/or a practical entry point into AI for their smart product concepts. The upshot is that creating smart sensor algorithms for such time-series sensor use cases just got a lot easier.

The first thing to understand about the SensiML approach is that we provide a true end-to-end workflow for the user. This process starts at test methodology definition and dataset collection and ends with on-device testing. On the front end, all supervised machine learning tools rely fully on the accuracy of the initial train/test data to teach the system how to differentiate amongst the various states of interest. Most AI tools leave this task of dataset capture and labeling as a job for the often inexperienced user to figure out for themselves. By contrast, SensiML's years of experience building AI algorithms have taught us the importance of train/test datasets, such that we've devised over the years a unique tool we call the Data Capture Lab, which greatly reduces the risk of supplying flawed datasets that inevitably lead to flawed algorithms.

The core of the toolkit, responsible for model generation and code optimization, is what we call the Analytics Studio. This tool exists in two forms depending on your team's level of familiarity with AI. For teams without data science expertise on hand, the Analytics Studio provides a GUI-based interface where basic parameters and constraints can be specified to the auto code generation engine as it seeks to find optimal algorithm solutions based on your train and test data. The rich reporting and visualization tools contained within provide full insight into the resulting recommended code, along with bit-exact emulation of the embedded target device output to allow production-grade QA and validation. For AI power users, we offer a Python client interface variant that exposes all of the underlying model parameters and pre-processing stages with full transparency, in a familiar IPython notebook environment with SensiML Python extensions to support model creation and code generation. The last step in the process is testing on the actual target hardware, which we support with an easy-to-use PC or Android application we've creatively called the Test App.

In the end, the efficiency gains afforded by the Analytics Studio and the overall SensiML workflow cannot be overstated. We routinely see productivity and schedule improvements of 500% or more when compared with hand-coded development processes.

The resulting TinyML algorithm generated by the toolkit consists of a runtime data processing pipeline optimized for code size and efficiency. Starting with an input buffer collecting one or more channels of raw streaming sensor data, the sensor stream is pre-processed through a chosen set of available transforms, selected by the engine as best suited for filtering and for triggering segments for feature transformation and classification. The feature transform process, as determined by the Analytics Studio, uses configurable search methods to arrive at an ideal set of features from amongst a library of 80-plus feature transforms (a few representative examples appear in the sketch below).
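To give a flavor of what feature transforms in such a library compute, here are a few representative time-series features, common signal statistics chosen for illustration rather than a listing of SensiML's actual transform set:

```python
import numpy as np

def feature_vector(segment):
    """Reduce one segment of raw samples to a compact feature vector."""
    seg = np.asarray(segment, dtype=float)
    diffs = np.diff(seg)
    return np.array([
        seg.mean(),                               # DC level
        seg.std(),                                # overall variability
        np.abs(seg).max(),                        # peak amplitude
        np.mean(np.abs(diffs)),                   # mean absolute rate of change
        float(((seg[:-1] * seg[1:]) < 0).sum()),  # zero-crossing count
    ])

# Example: one 400-sample segment of a 10 Hz sine at a 400 Hz sample rate.
print(feature_vector(np.sin(np.linspace(0, 20 * np.pi, 400))))
```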
The resulting feature vector, as fed to the classifier stage, allows the simplest sufficient classifier to be chosen from amongst the many supported, ranging from simple support vector machines, to decision trees and hierarchical models, up through neural networks as provided by TensorFlow Lite, which is fully integrated into the SensiML workflow.

So with this explanation, I'd like to turn the presentation over to Chris Knorowski, SensiML CTO, to provide a walkthrough of the process using a real-world example application. Over to you, Chris.

Thanks, Chris. We start by opening the Predictive Maintenance Fan Demo project in the Data Capture Lab. First, we're going to collect some new sensor data using the SensorTile.box. We'll connect to the SensorTile.box, which will start streaming data over the serial port. When we hit Begin Record, it'll start storing high-fidelity data to the SD card on the SensorTile.box. We'll then create the different states that we'd like to be able to detect within our fan, such as tapping or mount vibration. The video we are recording will also be synced up later when we switch to label mode. Finally, once you're finished collecting the data, hit Stop Recording, which will transfer the data from the SensorTile.box to the DCL and upload it to the cloud server, along with any labels and metadata that you had attached.

You can then go into the project explorer, find the newly collected file, and begin annotating it. When the file opens, you'll see the accelerometer and gyroscope data plotted in the two graphs. The video is also synced up, so as we play the video, we can identify the corresponding regions in the sensor data. We will now annotate the dataset. To add a new segment, we right-click and drag across the region of the sensor data where the fan was on. Then we associate the label "fan on" with that segment. We continue this process to label the rest of the captured file. Once we finish labeling the file, the new segments are synced to the cloud and can now be used in training or testing your models. We've just shown you how you can quickly and easily create high-quality, curated datasets using the SensiML Data Capture Lab.

Now we will use the SensiML Analytics Studio to create a classifier capable of running directly on the SensorTile.box to recognize in real time the different states of the fan. Once you log in, you will see all the projects associated with your team. Let's start by opening the Predictive Maintenance Fan Demo project. In the Summary screen, you will see a list of all the captured files, queries, pipelines, and Knowledge Packs you have created. In Prepare Data, you are able to create a query to select the data that you would like to train your model against. We've created a filter to pull out the five classes we are interested in classifying. On the right, you can see the total number of samples or segments for each class that were selected by this query.

Now that we have our query, let us go build our model. Here, we use the query we just created as input. We set the window size to 400 so that a classification is computed every 400 samples, or once per second, since the sample rate is 400 Hz. Additionally, we'll set our ranking metric to the F1 score and keep the max classifier size set to 32k. When we click Optimize, SensiML AutoML runs through hundreds of different feature extractor and classification parameters. When the optimization is complete, it will return the top five candidate models based on the classifier size and training metric that you selected.
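To illustrate the idea behind that ranking step, here is a generic sketch, using scikit-learn stand-ins rather than the SensiML AutoML engine, of scoring candidate models by F1 under a size budget and keeping the top five:

```python
import pickle
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the five fan-state classes.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

MAX_SIZE = 32 * 1024   # stand-in for the 32k classifier-size cap
candidates = []
for depth in (2, 4, 8, 16):  # a tiny "search space" for brevity
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    size = len(pickle.dumps(model))  # crude proxy for on-device footprint
    if size <= MAX_SIZE:
        score = f1_score(y_te, model.predict(X_te), average="macro")
        candidates.append((score, size, depth))

# Report the surviving candidates, best F1 first.
for score, size, depth in sorted(candidates, reverse=True)[:5]:
    print(f"max_depth={depth:2d}  F1={score:.3f}  size={size} bytes")
```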
Now that we have our candidate models, let us go to the Explore Models tab to get a more in-depth view. In the Model Visualization tab, we can compare the different features that were selected by a model in a 2D representation. The Confusion Matrix tab shows us the confusion matrix for all the classes, as well as the sensitivity and positive predictivity scores (a short sketch at the end of this transcript shows how those two metrics derive from the confusion matrix). The Features Summary overview provides a more in-depth look at the features selected, along with their input sensors. And finally, the Model Summary shows us information about the classifier and the hyperparameters used for training.

Let us move on to validating our model against our test dataset. To do that, we will filter out only the data that we want to test against. When we hit Compute Summary, the model is compiled and the captures are classified using bit-accurate emulation, providing the same results you would see on the device. When it finishes, it will return the accuracy for each of the test files, along with the confusion matrix for all the files in your test set. When we click on Get Results, we will see the confusion matrix for this capture, as well as the locations of the predicted values and how those compare with the ground truth labeled in the Data Capture Lab.

Now that we have validated our model, we are ready to download it and test it on the device. Here, we can select the target device, which in this case is the SensorTile.box. We can select whether we want the model in binary, library, or source code format. And finally, when we click Download, it will generate the Knowledge Pack firmware that can be flashed directly to the device to begin recognizing events in real time. Now we have shown you how you can use the SensiML Analytics Studio to build machine learning models that run directly on the SensorTile.box.

Thanks, Chris, for that brief walkthrough of the SensiML workflow. With that, we conclude our presentation on how the SensiML Analytics Toolkit, together with the SensorTile.box, can enable you to build smarter IoT products faster. We encourage you to explore our virtual booth, where you can find a video showing a version of the smart fan application we just built. You can also find additional videos, white papers, blog articles, and links on how to get started quickly. The link and QR code below will take you directly to where you can sign up for our free trial version of the toolkit. Thank you, and we hope to hear from you soon to learn how SensiML can help you build your IoT products the smartest way possible.
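As referenced in the walkthrough above, here is a minimal sketch of how the per-class sensitivity (recall) and positive predictivity (precision) scores shown in the Confusion Matrix tab derive from a confusion matrix; the matrix values are made up for illustration:

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = ground truth, cols = predicted.
cm = np.array([
    [48,  2,  0],
    [ 3, 44,  3],
    [ 0,  1, 49],
])

for i in range(cm.shape[0]):
    tp = cm[i, i]
    sensitivity = tp / cm[i, :].sum()       # of all true class-i segments, how many were caught
    pos_predictivity = tp / cm[:, i].sum()  # of all class-i predictions, how many were right
    print(f"class {i}: sensitivity={sensitivity:.2f}, "
          f"positive predictivity={pos_predictivity:.2f}")
```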