Good afternoon. I'm sorry that we have to sit in here although the weather is so nice; I would also like to sit outside. So, what do we want to do? After the introduction I want to tell you in what sense my lecture relates to the data challenge you will get tomorrow. Then I will start with pairwise correlation analysis, that is, cross-correlation and unitary events, and I will also tell you quite a bit about analysis pitfalls, what you have to consider. The next bigger part is about higher-order correlations: how you can approach them or extract them from population measures, and a method called SPADE for the detection of higher-order synchrony in spatio-temporal patterns. I will emphasize higher-order synchrony because this is something you may also need for your data challenge.

So what is the relation to the data challenge? I will tell you what will happen, and this is kind of a warning that it's better that you listen. In the data challenge you will get six data sets of massively parallel spike trains. One of them is real data, while the five others are simulated, and we will actually tell you which features the models contained. We want to make it a bit simpler than in previous years, where the challenge was only to detect the real data set; your challenge is to identify which data set is which. We will give you a longer introduction tomorrow so that you understand what we are talking about, but I just want to let you know that the challenge requires some of the methods introduced here.

Okay, the introduction. If you consider everyday life, you notice that natural behavior is dynamic and fast, for example free viewing, which is what you basically do all the time.
You have one sequence of saccade and fixation within about 250 milliseconds. This is a pretty short time in which you get input, have to digest it somehow, and direct your next saccade. So if you think about how much time you have left per computational stage, you have approximately less than 20 milliseconds for each stage involved, and for reasonable firing rates this is on the order of one spike per area. This is not my calculation; Simon Thorpe did it at some point. This leads me to the hypothesis that neurons coordinate their activities on short time scales, in a dynamic fashion, for the computation of all that.

This is also supported, and not just a little, by some connectivity numbers. This is a list I took from a book by Moshe Abeles, who is sitting in the back. For example, the neuronal density: you have about 40,000 neurons in a cubic millimeter of cortex (this is a mix across species), and, a number that is quite high, on average about ten to twenty thousand synapses impinging per neuron. These I find the most important numbers here. So this means that you have a high density of neurons and a lot of input per neuron, and what is not said on this slide is that each neuron also projects to about the same number of other neurons. So what you basically have, at every stage, is a high convergence onto an individual neuron and a high divergence to other neurons.
I find this is one of the arguments for network processing: it is not the individual neuron that plays the main role, but neurons together. Now, there were people much earlier who thought about cell assemblies and how neurons could coordinate in order to process information; I just took this figure from Ken Harris because I liked it. It was actually Hebb, in 1949, who first proposed this idea of cell assemblies, realized by interactions of neurons, and cross-neuron correlations, for example, are indicators of such assembly membership. So think of a sensory input eliciting spiking activity in a subnetwork, which propagates to another subnetwork, and to yet another. Think for a moment about how you could detect this. If I were able to record from the neurons involved, then I would, for example, see synchronous, or almost synchronous, activity in the first group, then in the next, and in the next. And when the same assembly is activated again and again, meaning the same behavior is requested, the assumption is that the same assembly is activated, and then you would see this kind of activity repeatedly.

Now, is this what we actually expect to see all the time? Well, as I said, for that you would have to have caught and recorded all the neurons involved. More likely, we are sub-sampling: we may have maybe one neuron recorded in each of these groups, and then you would see just a spatio-temporal pattern, one neuron firing, then another neuron a given time interval after it, then another, and yet another. Or, if your electrodes sample only locally, you may see only the synchronous activity whenever this group, or that one, is active. But these are, I think, sub-sampling issues. Now, what would I then expect?
How would I imagine this looks? We have, for example, a number of neurons recorded as a function of time. You need to observe the neurons in parallel, otherwise you wouldn't see this correlation; we can discuss later why this is necessary. Then you could imagine, for example, that when a particular cell assembly is activated, a subset of the neurons you observe fires synchronously (I am talking about millisecond precision), or that for another behavioral request another set of neurons is activated. But you could also easily imagine something like this: an individual neuron participating at different instances in time in different assemblies. You have, so to say, time as an additional dimension; Singer called this multiplexing, but it can easily be imagined here.

So what we want to do is to identify correlated spiking in multi-channel data. What I started to do many years ago is to develop statistical methods to make this possible, to analyze experimental data, and to see how relevant this network interaction is in relation to function; otherwise it is kind of useless. This already speaks for looking at recordings from awake, behaving animals, where you can relate the correlation to behavior. There was a time when the debate was much more intense and it was hard to survive in this field, because people like firing rates so much; they are much more obvious. There was one seminal work which I like to cite in this context, by Mainen and Sejnowski in 1995, where the question was: are neurons precise enough? Would they support
fine temporal precision? What Mainen and Sejnowski did, in a dish, was to record intracellularly from a neuron upon current injection, a constant current injection in this case, and they repeated this experiment several times. You see here the spiking activity from the first, the second, the third trial and so on, and what you see is that the first spikes are aligned, but then the actual spiking activity diverges quite strongly. However, when they injected a frozen noise current into the cell and recorded the spiking activity, in this overlaid picture of the 25 trials, and here in the extraction onto the dot display, you see that the spikes occur very much at the same time. Nowadays, if you are involved in modeling and so on, this is kind of trivial, but at that time this was an argument: here you see that neurons can be very precise.

So what makes a neuron fire? This was also a long debate, and I only put part of the work onto this slide. For example Moshe, sitting in the back, had a paper in '82 asking: integrator or coincidence detector? Does an individual neuron integrate its input in time, with a rather long time constant, to be able to add all this up? Or is it a detector which responds preferentially to coincident or synchronous input? Basically, what you require to make a neuron fire is to reach a threshold. However, the threshold in cortex is on average about 30 to 40 times above the amplitude of an individual EPSP. Now you can consider questions like: if all these input neurons fire independently with a certain rate, what is the probability that I have
coincident firing of these incoming spikes that makes the neuron fire? If, however, they arrive synchronously within a certain time window of a few milliseconds, the EPSPs can directly add up and cross threshold. This is accepted by everybody in the meantime; at that time it was still a big question whether this is true or not, but now everybody agrees that a neuron preferentially elicits a spike upon synchronous input.

Now, how may this be used in the system, and how can we detect that there is a correlation between two neurons, such that when one neuron fires, the other one preferentially fires as well, either with a short delay or synchronously? In 1967, Gerstein and his group developed, besides the PSTH, the cross-correlogram, or rather brought it into neuroscience: he was a physicist, as many computational neuroscientists are, knew this from physics, and brought it into the neurosciences. I just want to give you an idea of what it does. You have two spike trains, s1 and s2, with spikes from one neuron and from the other. In the end, it represents the probability of finding any spike in spike train s2 as a function of the time before or after a spike in train s1. Meaning: you sit on a reference neuron, in this case the upper one, and register all the spikes relative to this one.
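The registration procedure just described (sit on each spike of the reference neuron, collect the relative times of the other neuron's spikes, and bin them into a histogram) can be sketched in a few lines of Python. This is a minimal illustration on made-up spike times; the function name and parameter values are my own, not from the lecture.

```python
import numpy as np

def cross_correlogram(s1, s2, max_lag=0.05, bin_width=0.001):
    """Histogram of spike-time differences t2 - t1 within +/- max_lag seconds.

    s1: spike times of the reference neuron; s2: spike times of the other neuron.
    """
    diffs = []
    for t in s1:                                   # sit on every reference spike
        near = s2[(s2 >= t - max_lag) & (s2 <= t + max_lag)]
        diffs.extend(near - t)                     # register relative spike times
    bins = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts, edges = np.histogram(diffs, bins=bins)
    lags = edges[:-1] + bin_width / 2              # bin centers
    return lags, counts

# two made-up spike trains: s2 tends to follow s1 by about 5 ms
rng = np.random.default_rng(0)
s1 = np.sort(rng.uniform(0, 10, 200))
s2 = np.sort(np.concatenate([s1 + 0.005 + rng.normal(0, 0.001, 200),
                             rng.uniform(0, 10, 100)]))
lags, counts = cross_correlogram(s1, s2)
print(lags[np.argmax(counts)])                     # peak near +5 ms
```

With real, rate-modulated data this raw histogram is not interpretable on its own; it has to be compared against a predictor, which is exactly the problem discussed later in this lecture.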
So you register, so to say, the time interval at which a spike occurs after this one, and here, and there; all these intervals are registered, and at these points in time you make an entry in this histogram. You continue like this: you sit on every spike of the reference neuron, register the spikes of the other neuron, and this accumulates in the histogram. At some point, when you bin this (a histogram actually involves binning), you see a time axis relative to the spiking of the first neuron, telling you when the other neuron fired before or after, and you may see, for example, such a peak, which may indicate to you that these two neurons tend to fire simultaneously or within a certain time interval. Okay, this is the simple story. You can also treat this analytically: typically you express the cross-correlation as a function of the time shift delta t and sum over the spike trains. In the case of parallel stationary Poisson processes there is an analytical description, you can actually write it down, but beyond that it becomes immediately difficult.

Now, what do people observe with this? For example, this was from Peyrache, 2012, in PNAS: he was interested in identifying whether excitatory and inhibitory neurons are correlated with each other, and he observed different types of cross-correlations. Here, for example, you can see that there was a spike first and then a dip, or here vice versa; so you see different types, exciting or suppressing, depending on the combination of neurons. There is also a paper which I can recommend to you and which I like very much; this was a paper by
Zandvakili and Kohn, 2015, in Neuron. He was interested in cross-area correlation, and what he did, basically, was to record neurons from V1 and V2, calculate the cross-correlation, and extract the position of the peak: are they synchronous, or is there a certain delay? This is his statistics: when he correlated pairs of neurons both recorded in V1, he found that they are basically synchronously active, but when he looked at V1-V2 pairs, he saw that the neurons in V2 actually fire some milliseconds later. So basically the statistic here is that this peak is a bit later than zero; this is then the assumed delay between the two neurons.

There are also older studies which I find quite impressive. This is work by Winrich Freiwald, who recorded two neurons from visual cortex, area 17 of cat in this case, and what he was interested in is: do we have a reflection of gestalt perception in the correlation of the neuronal activities? What you see here are the receptive fields of neuron one and neuron two. In one condition he stimulated the neurons with a common bar, moving this bar across the two receptive fields together; in the other condition he cut the bar into two and moved one bar over the one receptive field from this side and the other bar from the other side. What you see here are the cross-correlations between the same two neurons, one and two: here the one-bar situation, here the same neurons when two different bars are moving. And you see that here you have a clear peak, multiple peaks actually, in the cross-correlogram. What would this indicate to us?
That there are multiple peaks indicates oscillation. Typically (I come back to this) when you see this, you may think that the neurons fire in an oscillatory fashion and also coincide in an oscillatory way, with corresponding delays. This is why you should look at the autocorrelation at the same time, so that you can check whether they are actually firing in an oscillatory way or not. But the most interesting part here, for me, is that when these two neurons get the two-bar stimulation, they are not correlated; you don't see this anymore. This is what I would call functional correlation: neurons can, depending on the request or the stimulation, either be correlated or not. This means that connectivity is not everything, because we are talking about the same two neurons, just in a different context given the sensory input.

A similar picture is here, which is maybe not so intuitive: two simultaneously recorded neurons in CA1 while the rat was performing an auditory or a visual discrimination task. The very same two neurons exhibit a clear central peak for the visual task, but if you present an auditory task, they are not correlated. The interpretation of Sakurai was that these two neurons belong to a cell assembly that enables visual discrimination. So this is the line of thought; you see the argument now. [A question from the audience.] Now, if you want to know whether there are coincidences or synchronous events beyond what you would get just from the firing rates: why is this of interest?
Yes: it could be that one neuron fires in an oscillatory way and the other one does not, and suddenly you see a modulation in your cross-correlation. You want to know: does this indicate an interaction between these neurons, or is it just their base firing rates? This is typically what we want to know. I cheated a little bit; you probably saw that there were some lines in these plots. These are different types of predictors. However, this was in early times, and they were maybe a bit too simple. What I want to show you is that we compare the empirical cross-correlation to the cross-correlation predicted by the firing rates. My conclusion from years of working with real data, which are typically non-stationary, inhomogeneous across trials, and may even deviate from Poisson, is that the classical analytical predictors do not work at all: you cannot capture all these features in your predictor, and many of the predictors and bootstrap-like methods also violate some of the assumptions. So I want to show you what you can do, and what you can do wrong. I show this on the example of the cross-correlation, but it holds for all the upcoming analyses as well.

So, as I just said: your neurons fire, and you put a lot of effort into making them change their firing rate; you want them to respond to a stimulus and so on, so the firing rates change over time. But there is also often a DC change across trials, which may be modulated by a changing attention level or so. The data also often deviate from Poisson; at least in the data I mostly work with, from motor cortex, they are more like gamma processes, but there are also other kinds. And you often have latency variability in when the neuron responds, and if you then want to do statistics across trials, you need to deal with this as well.
This is then a non-stationarity across trials. Here you see an example of a real data set; I took this figure from a paper, it could be any paper, from Vaadia. What you see here is time within a trial, for one neuron across trials, and I would like you to tell me what you see in this display. [Audience response.] Yes, you see here what you could call an onset latency, but you see that it changes quite considerably across trials. Is it then okay to make such a PSTH? What does it tell me here? [Audience response.] Yes: you have a non-homogeneous data set, and the conclusion that the firing rate is changing slowly is not true for the individual trial. And because you have to do your correlation analysis in a trial-by-trial manner (two neurons, one trial; the same two neurons, the next trial; and so on), you need to consider this aspect, otherwise you get false positives. I think you also have some exercises on this.

These are just illustrations on simulated data. In this case these are simply Poisson processes across different trials, and the firing-rate profile we used was, I think, 20 Hz, then it jumped to 100, and then it went back. I would strongly suggest that even when you just do a cross-correlation analysis, you still look at these aspects of the data. Don't run a sophisticated analysis directly on your data; you need to get a feeling for your data, you need to know what they are doing. So what is illustrated here: the PSTH, the dot display, and here, as a rough estimate, the spike counts across trials. What do I want to learn from this? So what do you say to this example? What do you see here?
You may see, that's right, this stripy appearance. The underlying algorithm to generate these data was that in some trials, randomly, the rate jumped up to 150 Hz and in others only to 50, or something like this, but on average you again get 100 Hz in the PSTH. So if you just compute the PSTH right away, you would draw a wrong conclusion. And here you see that the spike counts are much more variable. For Poisson processes you could still calculate what count variability you would expect here, even for such a mixture, but it is a rough indication. Now, what is going on here? This is the Vaadia example we saw before: the trials change at different points in time, which appears as a ramping firing rate in the PSTH, and the count variability is also larger than in this case. Typically you should also (I didn't show it here) look at the inter-spike interval distribution at the same time: are they Poisson, or is there something else weird? It is important that you know this.

Related to this, I just want to train your eye a bit. What do you see in these dot displays, and how far do they deviate, although their PSTHs are very much alike? Not reading the title, that doesn't help. What else? In this case it is more the offset, right? You see the increase of the firing rate, you even see it systematically here because we aligned the trials like this, but when you estimate the firing rate by a PSTH, as we did here, and then use it for all the trials, you miss this change and offset in time.
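The "stripy" raster above, where each trial is drawn at either a high or a low rate but the trial-averaged PSTH looks flat, is easy to reproduce. A small sketch with assumed rates of 150 Hz and 50 Hz (my numbers, matching the example only roughly) shows that the trial-wise spike counts expose what the PSTH hides:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1.0              # trial duration in seconds
n_trials = 100

# each trial fires at 150 Hz or 50 Hz with equal probability,
# so the trial-averaged rate (the PSTH level) is 100 Hz
rates = rng.choice([150.0, 50.0], size=n_trials)
counts = rng.poisson(rates * T)

print(counts.mean())                  # close to 100: the PSTH looks innocent
print(counts.var() / counts.mean())   # Fano factor far above 1, unlike Poisson
```

A homogeneous Poisson process would give a Fano factor near 1; the rate mixture inflates it strongly, which is why looking at the spike counts across trials (and at the ISI distribution) is worth the effort before any correlation analysis.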
Yes, that's true, but this is also a function of the firing rate: here the intervals are denser than there. In other words, if you calculate the ISI distribution of a process with a changing firing rate, you get a strong mixture, and you should be aware of that. This is how I see them, I see these little snakes, but this is, I think, a mere visual impression; you can try it. And it depends on what you want to do: this is only looking at individual spike trains, and whether you want to make this effort depends on the method you want to continue with, on what you want to do with your data.

What I want to say is that an analytical description of the cross-correlation, I mean of the predictor for the cross-correlation, that includes all these features is basically not doable. Even people who are analytically much more capable than I am have tried; it is simply very difficult, I can tell you. And why do I tell you this? Because of course you would like to have an analytical description, since it is much faster than what I will tell you now. So what we then started to do at some point, and this I actually learned at a conference which was mostly on EEG, where I heard the word 'surrogate' for the first time: a surrogate is an expression for an artificial data set that is, however, generated from your original data set.
What people did early on already are things like trial shuffling, which you may have heard of: you look at the correlation in the empirical data trial by trial, then you mix the trial IDs and see what is left over when the neurons are actually independent. But as you will see in a moment, this is only of limited use. What people also like to do is to randomize the spike trains, which also destroys the correlation: you keep the number of spikes but place them randomly. You can shift a whole spike train to destroy the correlation, or, as Ikegaya did with a very weird surrogate, exchange individual spikes across trials. What you can also try, but I don't want to emphasize it too much here, is to keep the inter-spike intervals and displace the spikes according to a certain probability distribution. But what we nowadays mostly do, also in more sophisticated methods, is the following: we take each individual spike of the original data and dither it a little bit in time, in order to destroy the potential correlations. I will show you the effects of a number of these methods, illustrated on artificial data which we have completely under control. This is also in one of my chapters in this book; by the way, I brought a box with books, which I put here so that you can have a look at them or use them.
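Two of the surrogate constructions just listed, uniform spike dithering and trial shuffling, can be sketched as follows. This is a minimal sketch with hypothetical parameter values; mature implementations (for example in the Elephant toolbox) handle boundary effects and offer more variants.

```python
import numpy as np

def dither_spikes(spiketrain, dither=0.015, rng=None):
    """Displace each spike independently by up to +/- dither seconds.

    Destroys fine temporal correlation but roughly preserves the rate profile.
    """
    rng = rng or np.random.default_rng()
    return np.sort(spiketrain + rng.uniform(-dither, dither, len(spiketrain)))

def trial_shuffle(trials, rng=None):
    """Permute the trial order of one neuron, breaking trial-by-trial pairing."""
    rng = rng or np.random.default_rng()
    return [trials[i] for i in rng.permutation(len(trials))]

rng = np.random.default_rng(2)
train = np.sort(rng.uniform(0, 1, 50))
surrogate = dither_spikes(train, rng=rng)
shuffled = trial_shuffle([train[:20], train[20:35], train[35:]], rng=rng)
print(len(surrogate) == len(train))    # spike count is preserved
```

The point made in the lecture applies here too: each surrogate preserves some features of the data and destroys others, and choosing one means deciding which features your null hypothesis is allowed to violate.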
The important thing is to know the ground truth, what is in the data. If you use experimental data, you don't know what features they actually have; you can try to extract them, but you cannot say whether the result is correct or not. So what we did here is to simulate two neurons with a given firing-rate change as a function of time, which they follow coherently to a certain degree. What other features do they have? I can tell you: we put in whatever we knew can generate false positives. We made them inhomogeneous across trials; they are obviously not Poisson but have a preferred inter-spike interval; and they have a trial-by-trial spike-count correlation, which is also a strong generator of false positives. This was our data set, and we analyzed it with different kinds of surrogates. I want to illustrate with this how you can try to account for the features of the data as well as possible. In black you see the original simulated data.

When we had written a paper on these surrogates, a reviewer said: well, you need to illustrate on real data that this works. I thought: how should I illustrate this on real data if I don't know the ground truth? But then I had an idea. I took spike trains from two neurons from a data repository, from which I knew they are independent: one was recorded on one day and the other on another day. Then you can again look at the cross-correlogram, and at coincidence counts trial by trial, and illustrate what happens depending on the surrogate; here we tested more of these surrogates.
So this was the real, empirical count: for some surrogates you would say it is highly significant, and for some you would say it is not significant at all. The ground truth was that they are not correlated, but some of the analyses resulted in "highly significant". So now you know all the pitfalls. Keep them in the back of your head, because this will come up all the time when you analyze data in the tutorial and the exercises; you need to learn to account for it appropriately.

So, an intermediate summary. Cross-correlation is a standard tool that allows us to observe synchronized or delayed spike correlation. Even if neurons fire independently of each other, we find delayed coincidences; they occur by chance. Differentiate chance correlation from excess correlation to distinguish neural processing schemes, for example by comparison to independent data, calculating a predictor. And we learned about factors that influence the correlation; no, you didn't see that yet, this comes later. There are also extensions of the cross-correlogram that make it possible to observe the temporal modulation of correlation, and this is our next goal. The point is that for a cross-correlation you need quite a big piece of data, well beyond the extent of your correlogram; this prevents you, in some sense, from following the correlation as a function of time, the dynamics of the correlation. There were extensions made for this, and I mainly want to talk about one of them, but I want to at least mention the joint PSTH by Aertsen et al., 1989, which in a way lets you observe the cross-correlation, trial by trial, in a matrix, where along the diagonal you can observe the modulation of the peak and maybe see that there are also side bands. But in my doctoral thesis I wanted to go beyond pairs of neurons, which partly succeeded.
So I want to tell you about the unitary event analysis, which serves to uncover time-dependent changes of correlation. As I said, this method was originally developed to analyze more than two neurons at a time. What you basically do is represent your spike trains as sequences of zeros and ones, and then, for each point in time, for each bin, you look at the composition across neurons as a vector and ask yourself: is this synchrony pattern, say one zero zero zero one one zero, occurring above chance or not? As we learned before, we can calculate the predictor, derive the distribution, estimate a p-value, and typically convert it to the surprise.

That was a bit quick, so again. What we do is count the empirical number of coincidences across the trials in a certain window, which we slide along the data. For the null hypothesis you can, in a first approach, use the assumption of Poisson processes: you take the firing rate of each neuron within this window and calculate the product of the per-neuron probabilities of contributing a spike or a non-spike; this gives you the expected probability of the pattern. What you can then do, in order to account for the cross-trial non-stationarity, is to calculate this expected probability per trial: you estimate the spike counts trial by trial, multiply correspondingly to get the expected probability, and sum this up across the trials; thereby you correct for the cross-trial non-stationarity. How does it go on? Then, from your mean expected number of coincidences, assuming a Poisson distribution (but we also have other ways of generating this distribution), you compare the empirical count and get a p-value. And because p-values are typically interesting when they are low, we transform the p-value into a surprise measure, so that you then have a high
surprise if you have a low p-value, and a negative surprise when you have anti-correlated data. [Question from the audience.] Yes, exactly, both can happen; I'm not sure I exactly understood your question. We played around with whether you should ignore the neurons which do not contribute a spike, because the more of those you include in the null hypothesis, the less you approximate a Poisson distribution. This is one reason why I prefer to use surrogates here as well to generate the distribution, to get away from all these assumptions. In addition, I learned that it is probably not so good to use unitary events for many neurons, because it becomes very sensitive. [Further discussion with the audience.] Where are we in time? When did I start? Okay. I need to drink something.

So what do we define as unitary events? Just to get used to the way we visualize this: if we have three neurons (this is again a simulation) recorded simultaneously, the analysis unfolds into the individual trials, and you look for a particular constellation across the neurons and count these events. If you don't do a sliding window, you count all the coincident events here, the triplets, and then you ask yourself: is the empirical count significantly larger than the expected one? Then we circle the spikes which are involved in such an event; you will see in a second why this may be helpful. We do this in a sliding-window analysis of approximately 100 milliseconds and then treat these individual windows together. This I skip for now. And here is, for example, one of the early results we got, and we found this again and again. So here you see, this is a study
I did together with Alexa Riehle, Markus Diesmann, and Ad Aertsen. These are now real data: two neurons recorded in motor cortex of a monkey doing a task where the monkey gets a signal here, and then, randomly, he may get a go signal here, or here, or here, or here. I pooled the trials which had the maximal waiting time, so I only took those trials, and now you see part of the result of the unitary event analysis. First of all, in dark blue you see the expected number of coincidences as a function of time; this is the result of the sliding-window analysis. At the same time, per window, we count the empirical number of coincidences, which is the light blue curve, and you see already by eye that the two deviate quite strongly from each other at certain points in time. It turned out, when you do the significance analysis, that at certain points in time the deviation becomes highly significant. If you project this back onto the original data, these squares mark the spikes involved in excess synchrony. Here there is no excess, and we don't know why. But what I want to say is that you have here a classical example: the excess synchrony is modulated very strongly as a function of time, it is not constant, and it is related to behavior, to when the monkey expects the go signal. We also thought about whether there is some rhythmicity, whether the monkey is kind of counting, with his leg perhaps, because this is so regular. This is why Alexa then did another experiment. No, it is not the case.
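To make the computation behind such significance curves concrete, here is a sketch of the expected-versus-empirical coincidence count and the surprise transform for two binned, binary spike trains, with the expectation estimated per trial as described above. This is a simplified two-neuron version on made-up data; the actual unitary event method adds the sliding window and the bookkeeping over patterns.

```python
import math
import numpy as np

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), summed over the upper tail in log space."""
    return sum(math.exp(-mu + i * math.log(mu) - math.lgamma(i + 1))
               for i in range(k, k + 400))

def ue_surprise(bins1, bins2):
    """bins1, bins2: (n_trials, n_bins) binary arrays for two neurons."""
    n_trials, n_bins = bins1.shape
    n_emp = int(np.sum(bins1 * bins2))       # empirical coincidence count
    # expected count estimated trial by trial, which corrects for
    # cross-trial non-stationarity of the firing rates
    p1 = bins1.mean(axis=1)                  # per-trial spike probability, neuron 1
    p2 = bins2.mean(axis=1)                  # per-trial spike probability, neuron 2
    n_exp = float(np.sum(p1 * p2 * n_bins))
    p = min(max(poisson_sf(n_emp, n_exp), 1e-300), 1 - 1e-12)
    surprise = math.log10((1 - p) / p)       # high for low p, negative for high p
    return n_emp, n_exp, surprise

# made-up example: 20 trials, 100 bins, with injected synchrony
rng = np.random.default_rng(3)
common = rng.random((20, 100)) < 0.02        # shared "assembly" spikes
a = ((rng.random((20, 100)) < 0.05) | common).astype(int)
b = ((rng.random((20, 100)) < 0.05) | common).astype(int)
n_emp, n_exp, surprise = ue_surprise(a, b)
print(n_emp, round(n_exp, 1), round(surprise, 1))
```

Removing the `common` component drives the surprise back toward zero, which is the signature of chance coincidences.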
So what she did then: she retrained the monkey. Originally the monkey could get his go signal at one point in the trial, here at 600 milliseconds, or here. After the time when she had recorded these data, she started to retrain the monkey to 900 milliseconds. This here is an early period, when he was just able to generate the new behavior and did not make too many mistakes anymore, and this is a later one. What you see is that while the task was still new to the monkey, the unitary events still occurred at the former time, at 600 milliseconds, and after some training they occurred at 900 milliseconds. So this was a sign for me that, yes, they can also be relearned; it's not just counting. For me that is a very strong argument that this is not chance.

These slides are basically summaries of how you should do a unitary event computation; I think you have heard this by now. And this is a reminder of the various violations of the null hypothesis which we need to consider.

Now a short summary of what we found in data with the unitary events. In motor cortex we found that unitary events occur at behaviorally relevant points in time; when the timing of the task changed, the occurrence time of the unitary events changed as well; and unitary events lock more strongly to the phase of the LFP than the chance coincidences do. In visual cortex: we also applied the method to visual cortex data recorded during free viewing. Unitary events occur at the beginning of the rate increase in V1; the first spikes after fixation onset lock to the LFP modulation there as well, a beta oscillation; this beta oscillation is related to saccade onset; and the spikes which are locked to the LFP beta oscillation are synchronized and then exhibit unitary events.

Okay. A lot of these things you find in this book, which I also brought. By the way, this I wanted to tell you: this book was originally printed as hardcover and cost an enormous
amount of money, which I could not influence. What they did then, two or three years ago, because people use this book a lot: you can either get it as an e-book, if your affiliation has a corresponding contract with Springer, or you can get a printed copy of the book for 25 euros. This is called MyCopy, again provided your institution has this Springer subscription. I have it with me as well; unfortunately it is a softcover and not in color, but it's cheap.

Okay. For the last quarter of an hour I thought I add a first approach to massively parallel spike trains and what you can do with them, what we did up to now, which was also quite interesting. Then you can maybe also start playing in your exercises and go beyond pairwise analysis, just so that you get a feeling; some of you may have such data anyway. These are data which Alexa Riehle recorded with the Utah array, because I had asked her at some point: couldn't you record more neurons than just five or so? But I must admit I was super naive, because I did not know what was coming in terms of the statistical analysis you then have to deal with. This is basically a movie of several trials. What you see here is the dot display of different neurons, about 100; you see the individual spiking, and here you see when the monkey is moving his arm to the object and back. This is trial start.
This is the go signal. Basically, you see that these data have all the ugly features we saw before: across the neurons they are extremely inhomogeneous in terms of the firing rates and in terms of the coefficient of variation, meaning the regularity. Here is basically a population histogram, where the most dominant feature is that the firing rates change due to the movement. I wanted to give you an idea of how such data look; the real data set among the data sets you are dealing with in the data challenge also looks like this.

We had started to look into such massively parallel data already earlier. We got data — I forget his name — from Utah; he recorded multi-unit activity, no, actually he recorded single-unit activity, with a Utah array in the visual cortex of cat. However, this cat was anesthetized. The paper was from 2007, so we began around 2005 to think about these data, and we were kind of overwhelmed, because we had 280 single units. I thought, gosh, what do I do with this? So I said: first, let's merge the spike trains back to multi-unit activity, that is enough for now; this left about 80 or 85 multi-units. What we did as a first step was to correlate these data pairwise with the cross-correlation. You see here a cross-correlogram of two of these channels. We compared this cross-correlation to surrogates, dither surrogates, and considered two channels as correlated when the correlogram exceeded two or three standard deviations of the surrogate distribution and the peak was relatively close to zero lag. So we went through all the pairs, calculated the cross-correlations, and decided for each pair whether it was correlated or not. As a next step we said: let's merge these. Many, many pairs were correlated, and we thought about how to continue with this mess of data.
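The pair screening just described can be sketched roughly as follows. The correlogram, the uniform spike dithering, and the criterion of exceeding a few surrogate standard deviations near zero lag follow the description above, but all parameter values (lag range, dither width, number of surrogates) are illustrative assumptions, not the ones from the study.

```python
import numpy as np

def crosscorr_hist(t1, t2, max_lag=50.0, bin_ms=1.0):
    """Cross-correlogram: histogram of spike-time differences t2 - t1 (ms)."""
    diffs = (t2[None, :] - t1[:, None]).ravel()
    diffs = diffs[np.abs(diffs) <= max_lag]
    edges = np.arange(-max_lag, max_lag + bin_ms, bin_ms)
    counts, _ = np.histogram(diffs, bins=edges)
    return counts, edges

def pair_is_correlated(t1, t2, n_surr=100, dither_ms=20.0,
                       near_zero_ms=5.0, n_sd=2.0, seed=0):
    """Flag a pair as correlated if the empirical correlogram exceeds
    mean + n_sd * SD of dither surrogates in some bin near zero lag."""
    rng = np.random.default_rng(seed)
    cch, edges = crosscorr_hist(t1, t2)
    # surrogates: each spike of train 1 shifted by a uniform random offset,
    # which destroys fine temporal correlation but roughly keeps the rates
    surr = np.array([
        crosscorr_hist(t1 + rng.uniform(-dither_ms, dither_ms, t1.size), t2)[0]
        for _ in range(n_surr)
    ])
    thresh = surr.mean(axis=0) + n_sd * surr.std(axis=0)
    centers = (edges[:-1] + edges[1:]) / 2.0
    central = np.abs(centers) <= near_zero_ms
    return bool(np.any(cch[central] > thresh[central]))
```

For example, a spike train and a copy of it shifted by 1 ms come out as correlated, while two trains that never overlap within the lag window do not.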
What we then did is: we considered cliques. A clique is, so to say: if, for example, three neurons were all-to-all correlated, that is, pairwise correlated, they form a clique. We formed cliques which had at least three multi-unit activities involved, and they went up to cliques of about seven or eight; so there were groups of neurons composed of up to seven or eight spike trains which were all mutually pairwise correlated. I found that already amazing, but it was still too much, too messy. So what we then did is we formed, so to say, clusters of cliques, and the requirement was that the cliques overlap in at least one multi-unit activity. The result of this you see here. These are the IDs of the electrodes, and each line indicates a significant correlation according to the definition I gave before. Interestingly, when you now cluster these, you immediately find four groups of inter-correlated cliques: this was a highly inter-correlated set of multi-units, and this one, this one, and this one. These individual ones here are correlations which were also significant, so above chance, but were not involved in cliques. I was very surprised about this, found it interesting, and wanted to know how it maps onto the cortex. This is now, so to say, an activity-correlation map: as you see, we colored these in red, these in green, these in cyan, and these in dark blue, and mapped them back onto the cortex. Oh, this is a terrible figure.
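The two grouping steps can be sketched with a toy example. The correlated-pair list here is made up, and the brute-force clique search is only meant to show the definitions; for ~80 real units one would instead enumerate maximal cliques on the correlation graph, e.g. with the Bron–Kerbosch algorithm.

```python
from itertools import combinations

def find_cliques(units, corr_pairs, min_size=3, max_size=8):
    """All maximal all-to-all correlated groups of min_size..max_size units,
    by brute force over candidate subsets (fine only for toy-sized input)."""
    corr = set(map(frozenset, corr_pairs))
    cliques = []
    for k in range(min_size, max_size + 1):
        for group in combinations(units, k):
            if all(frozenset(p) in corr for p in combinations(group, 2)):
                cliques.append(frozenset(group))
    # keep only maximal cliques, i.e. those not contained in a larger one
    return [c for c in cliques if not any(c < d for d in cliques)]

def cluster_cliques(cliques):
    """Merge cliques into clusters whenever they share at least one unit."""
    clusters = [set(c) for c in cliques]
    merged = True
    while merged:
        merged = False
        for i, j in combinations(range(len(clusters)), 2):
            if clusters[i] & clusters[j]:
                clusters[i] |= clusters.pop(j)
                merged = True
                break
    return clusters

# made-up correlated pairs: two overlapping triplet cliques, one separate
# clique, and one correlated pair that belongs to no clique
pairs = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4),
         (5, 6), (5, 7), (6, 7),
         (8, 9)]
cliques = find_cliques(range(1, 10), pairs)    # {1,2,3}, {2,3,4}, {5,6,7}
clusters = cluster_cliques(cliques)            # {1,2,3,4} and {5,6,7}
```

The pair (8, 9) is significant but never enters a clique, just like the isolated correlations in the figure.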
Okay, but I think you can recognize it. What you see here now is the Utah array; these are the individual electrodes. We marked an electrode when its activity was involved in at least one significant correlation within these clusters, and the radius indicates in how many significant correlations this multi-unit was involved; the larger the radius, the more correlations, so you can just count, also from here, in how many they were involved. But in addition, what I found even more surprising is that these activity correlations cluster in space, in cortical space: the reds are together, the greens are together, the cyan ones are more or less together, and we have one thingy here and one there. This actually makes sense if these were, so to say, areas of common preferred orientation: there is some distance to the next place where the same orientation is represented, and so on. So there is at least an explanation of why this one is also red, why it is involved in this activity cluster. This grouping I found extremely interesting; I never saw it again afterwards in another kind of cortex. I guess it is related to the fact that this was visual cortex in an anesthetized cat. One important thing which you should also know: we also looked at the distance-dependent probability to find correlated pairs, because on the Utah array we can calculate the different distances between the recording sites. For relatively close distances there is a high probability that a pair is correlated; then it decayed, but then it came up again, which also speaks, in a way, for these correlated orientation domains.

Okay, so I summarize now this last part and then give a general summary. What we did: we detected pairwise correlations by cross-correlation; the significance was evaluated by dithering; many pairs were significantly
correlated. We formed cliques and clusters of cliques; four distinct clusters of correlated activity appeared, and these clusters also cluster in cortical space. The interpretation was very much in line with what people found on orientation domains, like Kenet et al. or Tsodyks et al., who found things like that: even during spontaneous activity, orientation domains are regularly visited, this orientation domain, then that one, then this one, which could be the reason that we observe these correlations. I can highly recommend these papers; this is the modeling part of it, and these were the experimental data.

In general: fine temporal correlation may be overlooked, you just don't see it when you look at the data; you need to use tools, and one standard tool is the cross-correlation histogram. You need to correct for the expectation given by the firing rates to get the excess synchrony. It also enables you to observe delayed correlations, but it requires long data stretches, and it is hard to follow the dynamics of correlation. This is why we invented, at that time, the unitary event analysis, which is not restricted to pairwise analysis. And we took a pairwise approach to massively parallel spike trains, based on pairwise cross-correlations and clustering.

So I think I stop for today and continue tomorrow with the last part, on higher-order correlations. But maybe you can already work in this direction in your exercises, so that you can ask me a lot of questions tomorrow as well. Thank you very much.