So, our first speaker is our keynote lecturer, Kenneth Harris, who is going to be talking to us about the high-dimensional geometry of the cortical population code, as revealed by ten-thousand-cell recordings.

Thanks, Simon, and thanks to the organisers for inviting me here. It's my second time in Montreal, but the only time it was safe to go outside, you know, without 20 layers of clothing. So, what I'd like to tell you about is some work we've done recording what now seems like quite a large number of neurons simultaneously, although I have to say I remember when a hundred neurons seemed like a lot, so this will probably be small potatoes in ten years. The difference this time is that we seem to have reached the point where, if we were to record even more neurons, the fundamental finding wouldn't change, because we've hit some sort of asymptote, and I'll get to what I mean by that in detail later. This is work done by Marius Pachitariu and Carsen Stringer, who now have their own groups at Janelia, done in the lab that Matteo Carandini and I run at UCL, University College London, and it came out as a preprint a couple of weeks ago.

So, when you think of what, say, the sensory systems in the brain do, there's an idea that has been around a very long time in neuroscience and is also seen in a lot of machine learning algorithms: you take an input coded by a small number of input fibers, say the retinal ganglion cell axons to the brain, and project it into a very high-dimensional space. You have a large number of processing neurons, say the cells of the visual cortex, and then the information is harvested back down to a small number of output neurons. You see this pattern all over the place: in the cerebellum, these would be the granule cells and these the Purkinje cells; in the cortex, these would be the IT neurons and these the PT neurons, a small number of cells that project the output out. So the idea is that you take an input and make a high-dimensional representation of it.

So what properties might you want that representation, the neural code, to have? And again, the ideas on this go way, way back in history. One of the first properties you might want of a neural code, by which I just mean the relationship between the stimulus and the firing pattern, is orthogonality. What does that mean? The idea is that if you have an image that you see with your eye, it causes a particular firing pattern in the visual cortex; if you have a different image, it causes a different firing pattern in the visual cortex. And downstream from this, maybe in a structure like the superior colliculus that is controlling behaviors, there are neurons with weights onto these visual cortical cells such that if this image of a tree comes into the eye, you will make the behavior to climb it, say, whereas if the image of a fish comes in, you will make the behavior to feed your fish. And the point of orthogonality is that if these populations are distinct, if they don't overlap, then you can learn these output weights in one trial. With simple Hebbian learning, if these orange neurons are active at the same time a training signal comes to this one, then in one trial, at least in principle, these weights could be learned, and they won't interfere with the weights to this other one.
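To make the orthogonality and one-trial Hebbian learning point concrete, here is a minimal sketch; it is not from the talk, and the stimulus names, readout units, and numbers are all made up for illustration. Two stimuli drive non-overlapping populations, so their population vectors have zero dot product, and a single Hebbian update to each readout does not interfere with the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000

def population_vector(active_idx, n=n_neurons):
    """Binary firing pattern with the given neurons active."""
    v = np.zeros(n)
    v[active_idx] = 1.0
    return v

# Two stimuli driving disjoint (orthogonal) populations of 50 cells each.
perm = rng.permutation(n_neurons)
tree = population_vector(perm[:50])
fish = population_vector(perm[50:100])
print("overlap (dot product):", tree @ fish)   # 0.0 -> orthogonal patterns

# One-shot Hebbian learning of two hypothetical readout units:
# weight change = presynaptic activity x postsynaptic training signal.
w_climb = np.zeros(n_neurons)
w_feed = np.zeros(n_neurons)
w_climb += tree   # training signal arrives while the 'tree' pattern is active
w_feed += fish    # training signal arrives while the 'fish' pattern is active

# Because the patterns don't overlap, each readout responds only to its stimulus.
print("climb unit:", w_climb @ tree, w_climb @ fish)   # large, 0
print("feed unit: ", w_feed @ tree, w_feed @ fish)     # 0, large
```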
So orthogonality is a great property for a neural code to have in principle, because it enables rapid learning and also efficient coding. And the reason it's called orthogonality is that you can think of these firing patterns of N neurons as defining vectors in an N-dimensional vector space, and orthogonality means that the dot product of those vectors is close to zero. So mathematically, that's orthogonal.

On the other hand, that can't be the whole story, because if every single pattern were orthogonal to every other pattern, then even the slightest change in the input would lead to a completely different representation and you'd have no generalization. So you also want the property that when a similar stimulus comes on, by which I mean behaviorally similar, it drives the same behavior, not that it's physically similar, it should cause a similar pattern. The patterns should overlap. So not everything is orthogonal; some patterns need to be related, and this leads to a concept of smoothness of the neural code. The code has to vary smoothly with the required behavior, and those patterns won't be orthogonal.

Putting these two together, you get what could be called a manifold hypothesis. The idea is that there are N neurons, which define an N-dimensional vector space, but not every possible combination of those N can happen, only a subset, and a manifold is a subset that has a smooth structure. In other words, the firing patterns caused by this image and this image might be very different, but behaviorally similar images will have representations that are not too far from the original ones, and that means the neural code needs to lie on a surface with this smoothness property: if this point is in the set of possible firing patterns, then it needs to have neighbors in the set as well. They can't all be isolated, atomic points scattered all over the place. And this idea that it's a manifold is something pretty much everyone takes for granted; it's an assumption. It actually doesn't have to be true, and we'll see later an alternative that could have been the case.

Question? I'd prefer to keep it to clarification questions for now, if that's all right. Okay. Yeah, clarifications, yeah.

So the third property you might want in a neural code is redundancy. This is a very different idea, and there's a great paper way back by John von Neumann, one of the inventors of this type of computer. One of the fascinating things about it is that he refers to logic gates, AND gates and NOT gates, as neurons, back in 1952. The other point, though, is that if your neurons or logic gates are unreliable components and you want to make a reliable organism, as he called it, out of them, you need a lot of them doing the same thing. In other words, there would be teams of neurons where it's one for all: if one of them is going to respond to this stimulus, they all are, or at least they're all going to try, and then between them, after you take into account the failures, there will be enough going on. This idea received experimental support not very long after, with the discovery of the cortical column, at least as it appeared then. There are now more questions about whether this is actually true, but as it appeared then, nearby neurons always encode the same thing, so you have this kind of redundancy. Okay, and there's a tension, though, between redundancy and orthogonality.
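Before unpacking that tension, here is a small, hypothetical illustration of the redundancy argument; it is not from the talk, and the failure probability and team sizes are arbitrary. If each neuron independently fails to respond on some fraction of trials, a single neuron is an unreliable reporter, but a redundant team voting by majority almost never misses.

```python
import numpy as np

rng = np.random.default_rng(1)
p_fail = 0.3          # assumed probability a given neuron fails to respond on a trial
n_trials = 10_000

def team_detects(team_size):
    """Fraction of trials on which more than half of the team responds."""
    responses = rng.random((n_trials, team_size)) > p_fail
    return (responses.sum(axis=1) > team_size / 2).mean()

for team in (1, 5, 25, 100):
    print(f"team of {team:3d}: detection rate ~ {team_detects(team):.3f}")

# A single unreliable neuron responds on only ~70% of trials; a redundant team
# of 100 essentially never misses a majority vote - von Neumann's reliable
# organism built from unreliable components.
```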
If you have redundancy, you have fewer effective neurons to play with, which means you can encode fewer stimuli in a fully orthogonal way. So the question of what the brain actually does is an experimental question that needs experimental data to answer. We're going to try to answer it here, and to do that we're going to use the concept of dimensionality. The idea is that orthogonal representations are high dimensional and redundant representations are low dimensional.

Dimensionality has many different definitions. We're going to use three on this slide; a fourth one will come up later. Imagine you had just three neurons; that defines a three-dimensional space of possible firing patterns. But also imagine that the firing patterns that can actually happen in reality were these circles here. Now, these lie on a plane of dimension two, so we say the planar dimension of this code is two. But they also lie on a curve, a curved line of dimension one. It's not a straight line, it's a curved line, so the planar dimension is still two; but because the dimension of this curved line is one, we say the intrinsic dimension, or the non-linear dimension, is one. So we've got three different definitions of dimension: the number of neurons, the size of the plane the code fits on, and the non-linear dimension of the manifold itself. And they have to come in this order: the number of neurons has to be at least as big as the planar dimension, which has to be at least as big as the intrinsic dimension.

Okay, so the question: is cortical activity really high dimensional? From the orthogonality theory, it ought to be, but it's an experimental question, and the data so far seem to say no. In this review by Gao and Ganguli, they collected together a whole lot of data from many different experiments that had done dimensionality reduction, and nearly all of them find that the activity can be represented in a low-dimensional space. On the other hand, as pointed out by these very same authors in the same paper, there's no way these experiments could have given a different result. If you have a small number of stimuli, or an experiment of limited complexity, you have to get a low-dimensional answer. For example, if you only show three stimuli, then the responses to those three stimuli have to lie on a two-dimensional plane; N points always lie on an (N minus 1)-dimensional plane. So these experiments don't really tell us whether the neural code can be high dimensional when you have a high-dimensional input.

So our strategy was two-pronged. First, we're going to present lots of sensory stimuli so we don't have that problem; in particular, we record responses to thousands of natural images, each one presented twice for a technical reason, because we need to do analyses based on cross-validation. And second, we're going to record lots of neurons. We do that using an essentially conventional two-photon microscope, the B-Scope. You can buy it, they'll come and set it up for you; it's not cheap, but it works. The key trick is to use GCaMP6s, the slow variant, which has a slow time course: the fluorescence hangs around for about a second. This allows us to image 11 planes at a slow scan rate, each plane at about 2.5 frames a second, and we'll still see all the spikes because the GCaMP6s fluorescence lasts that long. And we're recording in the primary visual cortex of passive, awake mice. Here's an example of some of the data.
These neurons are flashing away as they do, and there are 11 planes. This one here is the fly-back plane, so don't count it, but there are 11 different planes imaging down to about layer 4 of visual cortex, and when you add it all up you get 10,000 neurons.

The difficult part is not the experiment, it's the informatics, and Marius developed a quite amazing suite of software called Suite2p. Since this is an informatics conference, I'll just talk about it quickly. There are multiple steps in the pipeline: image registration, detecting the ROIs, and then a small step of manual curation, which isn't completely avoidable. Basically, of the ROIs that come out, many are cells, some are dendrites, and some are just errors, so there's a manual curation step where the operator has to go over them. But then there's a learning algorithm that, after you've done enough manual curation, learns what you're going to say and anticipates it, and by the time it's predicting everything you're going to do, you stop bothering with the manual curation. The final part is spike deconvolution to find the times of the spikes. Here you see some fluorescence traces, and these are the inferred spike times. They may not be 100% accurate; it doesn't really matter. The code is written in MATLAB, it's available at this GitHub page, and there's a preprint about it as well.

Okay, this is an example of some of the cells recorded in one experiment on these 11 planes, pseudo-colored by cell identity, and you can see there are about 10,000; I think in this case it was 14,000 neurons. This is an example of the population code. Here you see 110 or so stimuli, and here you see 300 of these 10,000 neurons. These are actually repeated stimuli, and this is the average response, so the pseudo-color map shows how much this cell responds to this stimulus. And you see exactly what you'd expect from a population code: some stimuli drive responses in some populations of cells, others drive responses in others. The code is generally fairly sparse. There are some cells that respond to a lot of stimuli, some stimuli that drive a lot of cells, and some of each that don't get much response in either direction.

One thing I should say, I think I cut the slide, is that if you want to play the game of reconstructing the stimulus, guessing from the neural activity which of the 3,000 stimuli was presented, you can do it with about 75% accuracy, compared to a chance level of one in 3,000. And you can do that with one-trial learning, by a simple nearest-neighbor algorithm. So the information is there in the population code; the question is what format it's in.

So what we'd like to do is answer this question of dimensionality, and we'll do that in pretty much the simplest way you can think of, which is principal component analysis. We're looking at the planar dimension here: what is the planar dimension, the dimension of the box this lives in? What principal component analysis does, by measuring variances, is measure the lengths of the sides of that box. So the sides of the box would be the variance of the first principal component, the second, the third, and so on. The question is whether these variances go to zero, and with cross-validation they really would go to zero after a certain point, rather than just getting small.
If they do go to zero, that means the neural code lies in a flat subspace; but if they never go to zero, it occupies a genuinely high-dimensional space. And we use a cross-validated method to do this that I'm not going to describe, in the interest of time, but I'd be happy to answer questions about it.

Okay, so that was the question. The result was so weird that it made us change the question. It was one of those results you're really not expecting to see, and it makes you realize that the question you were asking was actually the wrong question. The answer is that the variance follows a power law. We weren't expecting to see this. Some people love to see power laws everywhere; we're pretty indifferent to power laws. It's not that we were interested in power laws, but to paraphrase Trotsky: you may not be interested in power laws, but sometimes power laws are interested in you. This was one of those cases. So this is the variance as a function of dimension number for seven individual experiments, and you see that in each case, on this log-log plot, the variance is a power-law function of the dimension, with an exponent frighteningly close to one, just above one. There's some experimental error, there's some variability, but it's just above one in all cases.

Question of clarification: yeah, so if you sum up this part? This bit, yeah. Well, this is actually only about two or three dimensions. That can still be a fair percentage. Yeah, but the other thing about this is that there's a reason we think these first few are being mis-estimated: spontaneous activity contaminating the estimate. This is now getting beyond a question of clarification. Okay, all right.

So we have this power law. What on earth does it mean, and why might we have it? Well, first, what does it mean? It means the variance goes like one, a half, a third, and so on. This is a mathematical object called a Hilbert cube, for those who are interested. So the code is high dimensional, but that now seems like a fairly boring question to have answered. The real question is: why on earth does it have this power law?

The first possibility is that maybe it's just because we didn't record enough neurons. But actually, if we increase the fraction of neurons we analyze, going here from red to blue, we find that the power law, as measured by this correlation coefficient, gets more accurate, and the exponent gets closer and closer to one. So this is what I mean about reaching an asymptote: going up from a small fraction of the neurons to the full 10,000, the power law gets more and more accurate as you add more and more neurons. So we infer that if we were to record even more neurons, 100,000 say, it would continue to get more accurate. That's an inference from the data. The same with the stimuli: if we take subsets of stimuli and increase the size of the subset, the power law gets more and more accurate, the correlation coefficient closer to one, and the exponent converges towards one. So it seems like this is something that holds in the limit of as many neurons and as many stimuli as you want.

Another thing I've cut in the interest of time: you might think this is because of the statistics of the images themselves. The images themselves have a one-over-f power spectrum, but that's not what causes this, because if we filter the images, whiten them to get rid of that power law, and show these to the same mice, we see the same thing.
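For readers who want the flavor of the analysis behind these spectra, here is a rough sketch on made-up data of the two ingredients just described: using the two stimulus repeats so that only repeatable, stimulus-related variance survives in the spectrum (loosely in the spirit of the cross-validated method mentioned above, not the paper's exact estimator), and fitting the power-law exponent as a straight line on a log-log plot. The toy data, fit range, and variable names are all assumptions.

```python
import numpy as np

def cross_validated_spectrum(rep1, rep2):
    """Sketch of a two-repeat variance spectrum (not the paper's exact estimator).

    rep1, rep2: (n_stimuli, n_neurons) responses to the same stimuli on two
    repeats. Principal directions are taken from repeat 1, and the variance
    assigned to each direction is the covariance of the two repeats'
    projections onto it, so trial-to-trial noise, which is not repeated,
    tends to average out and mostly stimulus-related variance survives.
    """
    rep1 = rep1 - rep1.mean(axis=0)
    rep2 = rep2 - rep2.mean(axis=0)
    _, _, pcs = np.linalg.svd(rep1, full_matrices=False)  # principal directions of repeat 1
    return np.mean((rep1 @ pcs.T) * (rep2 @ pcs.T), axis=0)

def power_law_exponent(variances, first=2, last=50):
    """Exponent alpha in variance ~ n**(-alpha), from a log-log straight-line fit."""
    n = np.arange(1, len(variances) + 1)
    keep = (n >= first) & (n <= last) & (variances > 0)
    slope, _ = np.polyfit(np.log(n[keep]), np.log(variances[keep]), 1)
    return -slope

# Toy data whose signal spectrum decays as 1/n, plus unrepeatable trial noise.
rng = np.random.default_rng(0)
n_stim, n_neur = 10_000, 100
signal = rng.standard_normal((n_stim, n_neur)) / np.sqrt(np.arange(1, n_neur + 1))
rep1 = signal + 0.1 * rng.standard_normal((n_stim, n_neur))
rep2 = signal + 0.1 * rng.standard_normal((n_stim, n_neur))
spectrum = cross_validated_spectrum(rep1, rep2)
print("recovered exponent:", power_law_exponent(spectrum),
      "(exponent of the toy signal spectrum: 1)")
```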
We still see a one-over-n power law in the variances of neural activity even though it's gone from the stimuli themselves. So it wasn't that explanation. We investigated all sorts of things to see whether it might be an artifact; none of them made sense. So what might it be?

Well, remember from the beginning that the idea that the code has to lie on a manifold was an assumption, and that there are alternatives. So what might it be, if it's not a manifold? One thing it might be is a fractal. Fractals are objects that show more and more detailed structure at finer and finer scales, and a good example is coastlines. In this now very famous paper from the 60s, they measured the length of the coastline of Britain, and the west coast of Scotland in particular has the property that the smaller the ruler you use to measure it, the longer it appears to be: as you go to finer and finer resolution, you see more and more detail at smaller and smaller scales. This is measured by a quantity called the fractal dimension, which measures how the length you measure scales with the length of your ruler. And this is the fourth measure of dimensionality we're going to talk about today.

The relationship to what we've been discussing is this: if the neural code lies on, technically, a differentiable manifold, that means a smooth manifold, then it has to have a fractal dimension equal to its intrinsic dimension. On the other hand, if it's rough, if it's one of these fractal objects, the fractal dimension can be more than the intrinsic dimension: as you go to finer and finer scales, you see more and more detail. That's not the sort of neural code you would want, because it would be susceptible to noise, and I'll give you an example later.

The reason this is connected is that we were able to prove a theorem. I've written the theorem in gray because I don't want you to read it unless you actually are a pure mathematician, in which case all of these fine details will be appropriate. For everyone else, just look at the approximate translation, which is this: a differentiable manifold of intrinsic dimension D, that means not a fractal, has to have variances decaying at least as fast as a power law with exponent one plus two over D. This is the possible connection to the power-law result we saw. We saw a power law with exponent one in response to a very high-dimensional set of natural image stimuli, for which D is basically very, very large, because there are so many different stimuli that it's a very high-dimensional space. So an exponent of one is pretty much the limit you'd expect for a very high-dimensional stimulus space. The eigenvalues, the variances, could decay faster; the theorem doesn't say they cannot decay faster. But they can't decay slower, unless the set they live on is actually a fractal, and that would be a bad sort of neural code.

So we can make an experimental prediction. If this idea is right, that this is why we see the power law with exponent one, then if we were to present stimuli of lower dimensionality, the variances should decay faster: if D, for example, were eight, then two over D is 0.25 and the variances would need to decay at least as fast as a power law with exponent 1.25, and so on.
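As a tiny worked example of that bound (illustrative only; the functions and the numbers below are not from the paper), the critical exponent is 1 + 2/D, so a measured power-law exponent can be compared directly against it for a stimulus ensemble of known dimension.

```python
def critical_exponent(stim_dim):
    """Smallest power-law exponent a smooth (non-fractal) code can have for a
    stimulus ensemble of intrinsic dimension D, per the stated bound: 1 + 2/D."""
    return 1.0 + 2.0 / stim_dim

def on_safe_side(measured_exponent, stim_dim):
    """True if the variances decay fast enough to be consistent with a smooth manifold."""
    return measured_exponent >= critical_exponent(stim_dim)

# Illustrative (made-up) exponents for a few ensemble dimensions:
for d, alpha in [(8, 1.5), (4, 1.7), (1, 3.0), (10**6, 1.04)]:
    print(f"D = {d:>7}: critical = {critical_exponent(d):.2f}, "
          f"measured = {alpha:.2f}, safe side: {on_safe_side(alpha, d)}")

# As D grows, the critical value approaches 1, which is why an exponent just
# above 1 is what you'd expect for a very high-dimensional ensemble like natural images.
```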
So, as a mathematician turned neuroscientist, I always get a kick out of having the word 'theorem' and an experimental prediction on the same slide, which I never thought would happen, but this time it has. To test it, what we did is take the image stimuli and filter them by projecting them onto a set of basis functions. In this case, eight basis functions means that this high-dimensional stimulus set becomes a low-dimensional stimulus set that just looks like these sorts of blobs: an eight-dimensional stimulus ensemble. And if we do that, we again see a power law, but this time the exponent is larger. Remember that the critical value of the exponent was 1.25; in this case we have 1.5. So it's more than the value of one we saw with the full stimulus ensemble, and it's safely above the critical value beneath which the neural code would have to be a fractal. For four-dimensional stimuli, the critical value is 1.5, and we're on the safe side of that again. In fact, when we take various different types of stimuli, we're always on the safe side of this line, which is the one-plus-two-over-D exponent.

So to summarize: the visual cortical population code has a high planar dimension, basically the full planar dimension. The unexpected thing was that the variances of its dimensions follow a power law, and the exponent appears to be a bit more than one plus two over D, where D is the dimension of the stimulus ensemble. Now, the intrinsic dimension of the manifold cannot be any more than the stimulus dimension; it could actually be less, but it can't be any more. So assuming it is the same, the exponent is just above one plus two over D, where D is the dimension of the stimulus set. If the variances decayed any slower than this in the limit, then the representation would have to lie on a fractal, which means that nearby stimuli could have very different responses.

Let me give an illustration of this. This is some synthetic data we created as responses to a one-dimensional stimulus that lies on a circle; you can imagine the orientation of a grating, some one-dimensional circular variable. We've simulated four cases with different variance spectra. The first case is a low-dimensional code where you've got two eigenvalues: we take a 10,000-dimensional representation with these eigenvalues and then make random projections into 2D. What you can see across these different random projections is that this one always looks like a circle or an ellipse. It's low dimensional, which means it can't do any interesting processing on the data; there's nothing really coming out of it that didn't go in. On the other hand, if you had a high-dimensional representation, in this case 100 eigenvalues that are all equal before dropping to zero, you get something that looks like a ball of string that you threw into your backpack. It's actually smooth, it's not a fractal, because the eigenvalues all eventually go to zero, so in the limit they decay faster than the power law. But the problem with this is that it doesn't respect distances: if you have two stimuli that differ by one degree on this circle, their representations are as different as those of stimuli that are completely opposite. So this is a high-dimensional code, close to the orthogonal code that was supposedly optimal, but it won't have good generalization properties. If we now take a power law, but with an exponent below the critical value, we get a fractal, and it looks like a kind of fuzzy mess.
It's better than the previous one in terms of preserving distances, but it's still not very good, because a lot of the variance is devoted to very fine details that probably wouldn't be behaviorally relevant. On the other hand, if we're at the critical value, which in this case is three because D equals one, we get this borderline fractal. It's able to make complicated shapes, in other words to do more sophisticated information processing, but it still respects distances in a way the previous one does not.

And the reason this might actually matter for information processing is illustrated by something that happens in deep networks. Deep networks most of the time have this problem of adversarial attack, which means that if you train a deep network and an adversary comes along and wants to fool you, they can take an image, corrupt it by a tiny amount of noise, and the network produces a completely different answer. In this case, this is a network trained to classify images; it gives this image a 57% chance of being a panda. You add a tiny amount of noise and get an image that looks to the naked eye exactly the same, and the network gives it a 99% chance of being a gibbon. The point is that this network is susceptible to noise, and that's exactly what a fractal code would do: because it's not differentiable, there is a direction, a type of noise, such that adding a tiny amount of it can give you an arbitrarily large change in the representation, which would then fool the output layers in this way. So it seems that the brain may have a neural code that is not susceptible to this, in the sense that it has this smooth structure.

Okay, so to conclude: the covariances in the visual cortex have power-law eigenvalues, with exponents close to one plus two over D. This is close to the critical value at which the representation becomes fractal, and this may be giving the brain the highest-dimensional, most orthogonal representation it can have while still being smooth and therefore able to generalize correctly. And that's it. So thank you to the people who did it, to the funders, and of course to you.

So, we have time for questions. For people asking questions, can you make sure to use the mic on your desk and press the button?

Hi, really interesting work. I was very interested in your last comment regarding the adversarial images. There was an interesting paper that came out from some researchers where they showed that if you take adversarial images that can fool large numbers of deep networks and then show them to time-limited humans, it degrades the humans' performance. But I also find compelling the idea you're articulating, that the brain might have smoother manifolds than some of these deep networks. There's actually one way to test that, and I wonder if you're pursuing this at all: impose some kind of smoothness term in the cost function of a deep neural network and see if you can protect against adversarial images and get closer to human behavior on them.

That's a great idea, and if we knew how to do it, we would pursue it. If you know how to do it, I'd love to talk to you. I think I might, so we can talk. We'll talk right afterwards then, great.

So actually I have a couple of questions. Firstly, on the adversarial images: is camouflage not an adversarial attack on a natural visual system? Yes, but the point is that's a large change.
I mean, if something were camouflaged, that would be many, many pixels different; this is only a few pixels, a tiny amount different.

The second question I had is on the dimensionality question: there are a number of different formalisms you can look at, complexity, various types of entropies, compressibility. Have you compared with any of these other formal approaches, to see whether they give the same answer as the manifold-type approach?

Well, our theorem really only, okay, the question we were asking was about the planar dimension, and I think we answered that: it's high, it's as high as it can be. The question about the non-linear dimension, the intrinsic dimension, is much harder to answer. In principle, you can do it with the correlation dimension, and we tried; the results were somewhat unreliable, so we didn't pursue it much further. I think it's just a much harder question to ask, so we haven't really tried yet.

I mean, I think you're right, it's much harder to do with finite amounts of data, basically, to estimate these things. I've played a little bit, actually, with one of the other large data sets, with compressibility notions, but I haven't got that far with it yet. All right.

So here you analyzed early visual cortex cells in the mice, right. Would you expect the same thing as you go up the hierarchy, or would it differ because the mechanisms are different?

Great question, great question. I can only speculate, because we haven't done it, but I would actually think that this one-plus-two-over-D bound won't be exceeded anywhere in the nervous system. That's a guess, it's a speculation, but that's what I would guess. Really there are two questions. The first is whether the bound is exceeded and you get a fractal. The second is whether it's fully used, right? It didn't need to be a power law at all, and it could have been a power law that decayed faster. And when we make very simple models, like Gabor linear filters, they don't have a power law, and if you try to fit a power law to them, it decays much faster. So it seems like there's something in V1 that's using the maximum it can, going right up to that limit. If you now say, I've got a behavioral task and my job is to classify these images into two types, then in the muscles you're not going to have that power law; you're going to have a binary yes or no. So maybe as you go further down towards motor production, the full power law is no longer used, because some information has been thrown away. This seems to me like a way of keeping the maximum amount of information you can while remaining smooth.

Okay, really great talk, thank you very much. Over here, over here, thanks.

A brief clarification, then a question. The method you described at the start: it's the covariance of the spiking responses with the stimulus that you're decomposing, is that right? Well, the covariance of the responses, yeah. Covariance of the responses, right. And maybe a naive question, because I'm not super familiar with this type of methodology, but since this is kind of an impulse response: would you expect some of the same structure to also be present in the intrinsic activity?

Ah, great, you mean without a stimulus? Yeah. That's a great question, a very good question. And we've done everything we can here to ignore the intrinsic activity and not include it.
However, it accounts for a large fraction of the variance, maybe as much as half. This is why we need the cross-validation, to make sure we're looking only at the stimulus responses; if you just did PCA on the raw data, you would be including those spontaneous dimensions. The reason we think the spectrum has that little fall-off at low dimensions is that some of the intrinsic dimensions are shared with the stimulus responses, but in a whole other stream of work I haven't talked about, it seems there's really only one dimension of large overlap between the two, and that's why we think it's only a few dimensions that are underestimated there in the power law. So anyway, fantastic question, and a whole lot of technical answers which I'll have to tell you about later.

Thank you. That was a really good talk, thank you. So I'm curious: you mentioned these principles at the beginning, and there are two things, so I'll say one thing and then ask a question related to it. These properties don't have to hold in the same area, right? You could do total orthogonalization in one layer and then get your second principle, the smoothness, by learning in the next layer. You see what I mean? The smoothness doesn't have to be in the same group of neurons.

I think I disagree, because if you had one stage, a bottleneck, that completely orthogonalized everything, that means it orthogonalizes the representations of this image and of this image corrupted by one pixel. And you can do whatever you want after that; you're never going to recover smoothness.

Yeah, so this brings me to the second question. It seems like what you're describing, in some ways, is that you want a more hierarchical code, in the sense that of course there is structure in these images: the fact that you chose, say, two different trees, as opposed to just changing one pixel, which doesn't change anything. I can't work this out off the top of my head, but it seems like when you have something like a hierarchy of images, it would generate differences in variance across finer and finer dimensions. So I think there's some overlap, I just can't frame it well enough, but you've probably thought about it already, so I thought I'd ask.

Yeah, no, I think you're exactly right. I mean, that's the idea, right? We talk about this manifold, and the point is: what's the metric on that manifold? It's not the metric in pixel space; it's the metric in what I would call behavior space. When two images have the same behavioral consequence, they ought to be similar in that space, and more of the variance will go to representing the large differences in that space than the fine distances in that space.

And so have you tried to play with that? I know people in the monkey literature have tried to play with that a little bit, but have you done anything yet? Not yet, not yet, but we have the data, and the data are also online; you can find a reference to them in that paper. So I would encourage anyone who wants to.

And with that note, I think we'd better move on to the next talk. So thank you. Okay, thanks a lot.