Hello. In this demo, I'm going to show you for the first time how few-shot continual learning can be done on an in-memory compute chip. This chip was developed at the IBM Research Zurich lab. We are using images from the CIFAR-100 dataset.

The first thing we do is select classes; here we use animal classes. For example, we can choose elephant as our first class. Next, we feed a few training examples from the elephant class one by one through a neural network that has been pre-trained and meta-trained. The output of the neural network is a high-dimensional vector representation, and we send these vectors one by one to the in-memory compute chip. Inside the chip, one of the rows of the array is selected, and the devices on that row are programmed using partial-crystallization SET pulses. Then we select the second class, which can be something like wolf. Similarly, the wolf images are fed through the same neural network and the array is programmed. You can see how the conductance of the devices on the selected rows varies to represent the vector updates.

At this moment, we can query our learner with query examples from, let's say, the wolf class, and we can see that it achieves 100% accuracy. When a query image is given, it is converted to a high-dimensional vector just like the training images and applied as voltage pulses on one side of the in-memory compute array; the resulting similarity vector is read out on the other side. However, if we evaluate the learner on images from a different class, for example the leopard class, having seen no images from this class so far, the learner misclassifies them as either elephant or wolf.

To enable the continual learning capability, we just need to feed in a few training images from the leopard class. Once a row of the in-memory compute array is programmed to represent the leopard class, we are able to successfully classify the leopard query images. We are very excited to bring this new technology to a wider audience in the near future.
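To make the flow of the demo concrete, here is a minimal software sketch of the same idea: each class occupies one row of the array, a few support embeddings are accumulated into that row, and a query is a single matrix-vector product giving a similarity per stored class. The names here (`embed`, `InMemoryPrototypeLearner`, `DIM`) are hypothetical placeholders, the embedding function is a stand-in for the pre-trained, meta-trained network, and a NumPy matrix stands in for the chip, which performs the similarity computation in analog.

```python
import numpy as np

DIM = 512          # assumed embedding dimensionality
MAX_CLASSES = 100  # one array row per class (CIFAR-100 has 100 classes)

rng = np.random.default_rng(0)

def embed(image):
    """Placeholder feature extractor: image -> high-dimensional vector.

    In the demo this role is played by the pre-trained, meta-trained network.
    """
    return rng.standard_normal(DIM)

class InMemoryPrototypeLearner:
    """Software stand-in for the in-memory compute array.

    Learning a class programs one row with the (normalized) sum of a few
    support embeddings; a query is a dot-product similarity against every row.
    """
    def __init__(self):
        self.rows = np.zeros((MAX_CLASSES, DIM))
        self.labels = []  # label associated with each programmed row

    def learn_class(self, label, support_images):
        """Feed a few training examples of a new class, one by one."""
        row = len(self.labels)
        for img in support_images:
            self.rows[row] += embed(img)      # accumulate into the selected row
        self.rows[row] /= np.linalg.norm(self.rows[row])
        self.labels.append(label)

    def classify(self, query_image):
        """Apply the query vector to the array and pick the most similar row."""
        q = embed(query_image)
        similarities = self.rows[:len(self.labels)] @ q
        return self.labels[int(np.argmax(similarities))]

# Usage mirroring the demo: two classes first, then a third added later.
learner = InMemoryPrototypeLearner()
learner.learn_class("elephant", [None] * 5)   # None as placeholder "images"
learner.learn_class("wolf", [None] * 5)
print(learner.classify(None))                 # only elephant or wolf possible
learner.learn_class("leopard", [None] * 5)    # continual update: one new row
print(learner.classify(None))                 # leopard now a possible answer
```

With placeholder random embeddings the predictions are of course arbitrary; the point of the sketch is the structure: adding the leopard class touches only one new row, so previously learned classes are untouched, which is what makes the continual update cheap.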