In conventional computing systems, such as our laptops and mobile phones, there is a physical separation between where data is stored and where it is processed. So whenever we perform a computation, we need to shuttle data back and forth between the processing and memory units. This leads to significant inefficiency, and it is particularly bad for data-centric cognitive computing.

In in-memory computing, the idea is to perform certain computational tasks in the memory itself, in a specialized memory unit that we call computational memory. We achieve this by exploiting the physical attributes and state dynamics of the memory devices.

In-memory computing can be viewed as a first significant step towards non-von Neumann computing. Much of the computing system remains the same: what we are trying to do is incorporate a co-processor, or accelerator, that efficiently performs certain computational tasks in the space of machine learning and deep learning. We believe this is much easier to manufacture and commercialize, because we are not altering the entire computing system in the process.

In our paper, we have shown that we can use computational memory to find temporal correlations between event-based data streams. This is an application that arises in a wide range of fields, such as the Internet of Things, the life sciences, social media, and so on. That said, we can also use computational memory for a range of other applications, such as performing logical operations or matrix-vector multiplications, or even solving optimization problems. In fact, we have another paper coming up at IEDM later this year in which we show how computational memory can be used to solve an optimization problem.
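As a rough illustration of the temporal-correlation task mentioned above, the sketch below (plain NumPy, a software reference for the task rather than the in-memory device method; the streams and thresholds are invented for illustration) generates binary event streams, two of which share a common driver, and checks that the correlated pair stands out:

```python
import numpy as np

# Hypothetical example: binary event streams sampled over discrete time steps.
# Streams 'a' and 'b' share a common driving process, so they are temporally
# correlated; stream 'c' fires independently.
rng = np.random.default_rng(1)
T = 5000
common = rng.random(T) < 0.10            # shared driver for a and b
a = common | (rng.random(T) < 0.05)
b = common | (rng.random(T) < 0.05)
c = rng.random(T) < 0.15                 # independent stream

def corr(x, y):
    # Pearson correlation between two binary event streams
    return np.corrcoef(x.astype(float), y.astype(float))[0, 1]

# The correlated pair (a, b) should show a markedly higher correlation
# than either stream paired with the independent stream c.
print(corr(a, b), corr(a, c), corr(b, c))
```

In the computational-memory setting, this accumulation is carried out by the state dynamics of the memory devices themselves rather than by explicit arithmetic.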
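The matrix-vector multiplication mentioned above can be pictured as follows (a minimal numerical sketch, assuming the common crossbar formulation: matrix entries stored as device conductances, inputs applied as voltages, and outputs read as summed currents via Ohm's and Kirchhoff's laws; the array sizes here are arbitrary):

```python
import numpy as np

# Sketch of in-memory matrix-vector multiplication on a crossbar array.
# G[i, j] is the conductance of the device at row i, column j, encoding
# the matrix; V[i] is the voltage applied to row i. Each device passes
# current G[i, j] * V[i] (Ohm's law), and the currents on each column
# wire sum (Kirchhoff's current law), so the column currents I equal
# the matrix-vector product G^T V in a single read step.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # device conductances (the matrix)
V = rng.uniform(0.0, 1.0, size=4)        # input voltages on the rows

I = G.T @ V                              # column currents = the product

# Check against an explicit element-wise accumulation
expected = np.array([sum(G[i, j] * V[i] for i in range(4)) for j in range(3)])
assert np.allclose(I, expected)
print(I)
```

The point of the physical version is that the multiply-accumulate happens in place, in constant time, without moving the matrix out of memory.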