This is what Sonja formally defined as a unitary event: a coincidence of two neurons firing together beyond what is expected by chance. Here is an example from the work of Alexa and her colleagues together with Sonja: two neurons recorded simultaneously, 30 trials, and in each trial the monkey had to wait; success came only from cases where he had waited all the way up to there. Sonja and Alexa looked at the data, and they could see many instances of coincidence between the two spike trains. They computed the firing rate of each of the neurons. From this, they estimated the expected coincidence rate, in black here, and they measured the actual coincidence rate, in cyan here. And they computed what's the probability of seeing so many coincidences when the expected number is so many. From this they got the surprise, which is essentially minus the log of this probability: when the probability is small, the surprise is large. You can mark the places where this probability crosses the significance level, I think it is 5% here, and they marked these unitary events in all the trials in which they occurred. This is very nice work, and it suggests that these coincidences are behaviorally meaningful, but I want to point out two difficulties. Let's take this case! The number of coincidences is roughly twice the number expected. That means that roughly half of these things are chance events, and we don't know which of them is chance and which of them is not. If we wanted to use these coincidences to further analyze the circuits in the brain, we don't know which of them we should use. That is the first problem.
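The expected-coincidence and surprise computation described above can be sketched numerically. This is a minimal illustration, assuming independent Poisson-like firing and a fixed coincidence bin; the rates, bin size and counts below are made-up placeholders, not the values from the recordings.

```python
import math

def expected_coincidences(rate1_hz, rate2_hz, bin_s, duration_s):
    """Chance coincidences for two independent neurons: the probability
    that both fire within the same bin, summed over all bins."""
    return (rate1_hz * bin_s) * (rate2_hz * bin_s) * (duration_s / bin_s)

def surprise(observed, expected):
    """Joint surprise: -log10 of P(N >= observed) for N ~ Poisson(expected)."""
    tail = 1.0 - sum(math.exp(-expected) * expected ** k / math.factorial(k)
                     for k in range(observed))
    return -math.log10(max(tail, 1e-300))

# Illustrative numbers: two 10 Hz neurons, 5 ms bins, 30 one-second trials.
expected = expected_coincidences(10.0, 10.0, 0.005, 30.0)   # about 15 by chance
print(surprise(30, expected))   # twice the expected count: large surprise
print(surprise(15, expected))   # exactly the expected count: small surprise
```

With twice the expected count the surprise is well past the 5% level, but, as noted above, significance alone does not tell you which individual coincidences are the chance ones.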
The second problem is that not every trial contains such an event. There are 30 trials here, and only in 13 of them is there a coincidence, which means that for 17 trials there is nothing to go by. So imagine the following game: Sonja shows you the data from the two conditions, here with more coincidences and here with fewer, then hands you single trials one at a time and asks you to tell her which condition each one came from. You see that you would have a serious problem, because here there are 30 trials and only in 13 was there a coincidence, so for 17 of them you would not be able to tell that they come from this condition. Here there are about five coincidences, so for five cases you see a coincidence and you would think that they came from this condition. So your overall success in sorting single trials by these unitary events would be something like 70%. Now, this is considered fairly good in computer science, but I think it is not really exciting. Now, you may think that I am being harsh on Sonja and Alexa, and this is not true, because my own data has had exactly the same problem for tens of years. So here is an example of a spatio-temporal firing pattern.
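The mediocre sorting figure from this game can be made concrete with a toy calculation. The counts here are illustrative, in the spirit of the example above (30 trials per condition, a coincidence in 13 of one and about 5 of the other), not the actual data, and the exact percentage depends on the counts.

```python
def sorting_accuracy(n_trials_a, n_coinc_a, n_trials_b, n_coinc_b):
    """Accuracy of the naive rule 'coincidence present -> condition A,
    absent -> condition B', applied to every trial of both conditions."""
    correct = n_coinc_a + (n_trials_b - n_coinc_b)
    return correct / (n_trials_a + n_trials_b)

acc = sorting_accuracy(30, 13, 30, 5)
print(acc)   # 0.6333...: in the mediocre 60-70% range discussed above
```

The 17 coincidence-free trials of the first condition are all misassigned, which is exactly why sorting by unitary events alone cannot get much past this range.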
I think it is taken from the PhD of Yifat Prut: recordings from the prefrontal cortex of a monkey doing a localization-in-space task, recorded with six electrodes, from which there were something like 13 or 14 single units. In all this data we found many cases where this neuron fired, then after 104 milliseconds plus or minus one this neuron fired, and after 402 milliseconds, I think it is, plus or minus one, this neuron fired. This happened 30 times; chance was something like 10 times. So it is the same issue: about 10 of these cases may be random, and only 20 are really something that you want to investigate further. Furthermore, if we look at all the events that we know of around the monkey, we see that a "go to the left" happened here, here, here, here, here, here, here and there. So this pattern tends to appear after this go signal, but it is not time-locked to the signal. And again, it happened for 13 trials of this type, and there were something like 50 in the experiment, so again I would not be able to say, for a single piece of data from one trial, whether it comes from following this go signal or from somewhere else. So we have very similar problems in our own data, and I admit it has always bugged me a lot. Okay, now the aim of what I am going to talk about is to show that by extending the concept of unitary events, one may find a set of such events that can sort single trials at 100% accuracy. Anybody who has ever tried, in computer science or anywhere else, to sort single trials from brain data knows that it is amazing to be able to do it 100% correct, and for me it says that when this is possible, I have really put my finger on the essential property of this data. I will show this with data from magnetoencephalography, MEG, and I will spend a few minutes explaining what MEG is, although you may know it. So MEG is a big thing like this; the subject is either supine or sitting with this around his head. There are many coils inside, and you record the magnetic fields that come out of the head. You can record at a rate of 500 or 1000
hertz, so we can get fairly good time resolution; you can go even higher, but I never did. Okay, how does this thing work? This big hood has in it a container with liquid helium that keeps everything at about four or five degrees Kelvin, near absolute zero. At this temperature the metal of these coils, and the wires that lead from them, are all superconducting: they have zero ohmic resistance. Now, whenever there is a wire in a changing magnetic field, the field induces a current in the wire. This here is the view from the inside: these coils see the magnetic fields around the head, the changing magnetic field causes currents in the coils, and this current is led by bundles of superconducting wires up here. Here, just at the bottom of this dewar with the helium, there are little transformers called SQUIDs, because in the SQUID current can pass from one coil to the other, and the amount that passes depends on the magnetic field. So what happens is the following: these coils pick up the magnetic field around the head and lead the currents up there, and here there is a repeating coil that generates again the magnetic field that was recorded down here. Inside this magnetic field sits this little transformer, and its resonant frequency depends on the strength of the magnetic field. Electronics outside, not in here, drive the transformers to find their resonant frequency, and the resonant frequency is proportional to the magnetic field around the SQUID. So what we measure is the control signal that sets the frequency of the oscillator driving this little transformer, and it is proportional to the magnetic field that was here, which is the same as the magnetic field that was here, and this is the recording. Now, why is this worth doing? You have to remember your physics. Around any current there is a magnetic field: if the current goes in the direction of my thumb on the right hand, the magnetic
field is in the direction of the fingers. So if there is a current in this dendrite, there will be a circular magnetic field around it, and this is true for any dendrite; but the apical dendrites of pyramidal cells collect most of the current, so we believe they are the major contribution to what we record, although others contribute as well. Now, if the coil were here it would see nothing; only if the coil were up here, or down here, would it see the magnetic field coming out of the brain. So the place of the coil that sees some big thing is not the place of the activity in the brain, unlike EEG; it is to the side of it. However, there is a huge advantage to magnetic field recording: below a frequency of one megahertz, the brain itself, the cerebrospinal fluid, the skull, the hair and the air are all completely transparent to magnetic fields, and all the electrophysiological range is well below one megahertz. Therefore whatever is generated here goes freely out of the brain into the air, and a coil somewhere up there can pick it up. Now, one neuron generates very weak magnetic fields and you would not be able to record it, but if there are many neurons with the same dendritic pieces aligned, and with some synchrony in their activity, they add up and you can see it outside the brain. Okay, so one advantage is that everything is transparent; the other is that you do not need a reference electrode. Anybody who has tried to measure EEG has seen this terrible problem: you cannot measure the voltage here; you can measure only the difference between here and somewhere else, maybe the ear, maybe another electrode, maybe something like this. And when you see something, you cannot be sure whether it came from this electrode or from that electrode, so there is a major problem in interpreting EEG. I have seen many psychologist friends who measure EEG and are very happy: they take the average of all the electrodes and use it as their
reference, so they subtract the average from every electrode. Now, you can see that if there was something in many electrodes, it would show up also in the average, and when you subtract the average from everything else, you see it with a minus sign in everything else; and then you measure coherence and correlations among many electrodes, and it is all because of the way you treated your reference line. So this "no need for a reference electrode" is a major advantage. But we want to know what happens in the brain, not around the skull, and there is a way to try to reconstruct the current dipole inside the brain from the measurements outside the head. This method is called synthetic aperture magnetometry, SAM for short (Robinson and Vrba, 1999), and there is a fantastic book by Sekihara and Nagarajan from 2008 that shows the physics and mathematics of how you may reconstruct the current dipole at one point from the measurements all around the skull. This method is nice in the sense that it tries to compensate for correlation between different points, but it is not perfect. Some years later, in 2011, Moiseev, a smart guy, said: instead of reconstructing the current at one point, I have a method in which you choose several points and reconstruct the currents at all of them such that there is minimum crosstalk between the points. So we thought we could use this method to improve our recording very much. We do the following: we are interested in this green spot on the cortex. We put around it a cube of two by two by two centimeters; you see here the corner of the cube, and the rest of the cube is inside the brain, where you cannot see it. So we have the eight corners of a cube plus the central point, nine points. We solve the sources simultaneously, by the method of Moiseev, for all nine points, the corners and the center. Now, here we take this point and its three nearest neighbors and show the current dipole amplitude, reconstructed without this box around them, for these three
points. You can see that there is a lot of similarity between the channels, like in these spindles here, and in this wave here, and so on; they are highly correlated. But if we use this cube, rotate it in several directions, and find the direction in which the center is least correlated with the corners, we get these signals, which are much less synchronized with each other. So by this trick of putting the point of interest inside a cube, rotating it in several directions, and finding where the center is least correlated with the surround, we can get cortical current dipoles which are only weakly correlated, even for neighboring points. So we get this recording, and now we want to analyze it. Now, I admit that I felt like the student in biology who had an examination and was a little bit lazy, so he decided they would probably ask about the elephant, and he studied everything about the elephant. He comes to the examination, and the question is: what do you know about the biology of the fly? So the poor student writes: the fly is not like an elephant, because the elephant... and goes on with everything he knows about the elephant. Okay, so I am a little bit like this student: I do not know how to deal with such analog signals, but I know how to deal with point processes, because spikes are point processes and this is what I did all my life. So I looked at the data, and I saw that from time to time there is a somewhat sharp peak, sometimes up, sometimes down, and I said: okay, these are interesting points. We pick up the points where something like this exists by matching the signal to a template that looks like that, and then the times at which this occurs form a point process. So now, from the recording over here, we have many parallel point processes, and this I know how to work with. Okay, now, in EEG, when there is activity under an electrode, the electrode becomes negative. Here in MEG, the direction of the current vector may be this way or that way; if it is this way the magnetic field may be negative, and if it is that
way, it may be positive, and we do not know which way it is. So we separate the recording into peaks that are going up and peaks that are going down. Ohad did many points, 550; we are less ambitious, we have 130 points over the cortex and the cerebellum. For each of them we have two point processes, one of times of peaks going up and one of times of peaks going down, so we have 260 parallel point processes, and in these point processes we look for patterns. But maybe before I go to that, I have to justify what the meaning of these sharp transients is. At the beginning I thought: well, we see these things, we work on them, we find very interesting results, fine. But then many people said: why this sharp little thing? It may mean nothing, and so on; the fact that you find something interesting may not mean anything. So I thought I must try to see what these transients are by going back to microelectrode recordings in monkeys. So I am trying to find out the meaning of these sharp transients in analog data, and here is a raw recording, unfiltered, or rather filtered between 1 hertz and 5 kilohertz, from a single metal microelectrode. Filter it below 80 hertz and we get what is called the LFP, which follows the slow excursions in this trace. Filter it above 300 hertz and we see the high-frequency component; you can see that it fluctuates a lot, but from time to time there is a sharp peak, like this one, or this one, or this one. If you look at them, they look like spikes of single neurons, so we can isolate single neurons; but we can also put the threshold somewhere down here and get many of the very small peaks as well. We set the threshold such that the average rate of small peaks is around 80 per second, and we call this multi-unit activity; it is a sum of many neurons. In our recordings, the average firing rate of a neuron in the prefrontal cortex is about 2 per second; we see here 80 per second, so it is about 40 neurons. But this is not exact, because some of
the peaks are smaller and we do not detect them; still, it is the order of magnitude. Now we convolve these times of many small peaks with a small Gaussian, and we get the rate of the multi-unit activity. So now we are interested in two analog signals: the LFP, a slow fluctuation with sharp peaks from time to time, and the multi-unit activity rate, which also has some peaks from time to time, and we want to understand how they are related. So we detect sharp down-going peaks in the LFP, because this is brain activity only and the negative peaks may be meaningful here, and we catch sharp transients of rate, like this one, and we try to see what the relation between these two is. This is the mean LFP around a peak in the LFP: you see the peak, it is about 16 microvolts in the LFP, and there are some side events around it. This is the population firing rate around a peak in the firing rate: the average firing rate here is around 80 and the peak goes up to 240; it is also a sharp peak with something in front of it, and these also have something in front of them. Now we can ask another question: what is the average LFP around times of peaks in the population activity, and also, what is the average population activity around times of peaks in the LFP? There should be some similarity if they are related. So here it is: the blue is the LFP around a peak in the LFP; the red is the LFP around a peak in the multi-unit rate. Same here: the blue is what you have seen before, the rate around a peak in the multi-unit rate, and the red is the rate around a peak in the LFP. The blue and the red are not equal: you see, the peak of the LFP around the LFP was minus 16 microvolts, and the peak of the LFP around the multi-unit rate is only minus 4 microvolts; also, the rate here went up to 240, and here only to 140. But the shapes are very similar. A similar analysis was done for microelectrode, EEG and MEG data by a lady called Stephanie Jones. She did it in a different way: she did time-frequency analysis and found that from time to time there is
a burst in the beta range. So she picked up the times of peak power in the beta range as times of interest, she averaged the LFP around these times, and she saw something very similar. She called these beta events, so if you see "beta events" in the literature, it is the same thing as this. Now, we think that this sharp peak in the LFP and in the MEG is similar to the process that generates evoked responses to a stimulus. If you measure LFP or EEG from primary auditory cortex and give a sharp stimulus, like a click, or a flash, or a touch on the skin, you see a sharp negative transient called the N100, because the peak is about 100 milliseconds after the stimulus. These things are similar to this N100 evoked response, so we call them, since everybody wants to have his own name, not beta events but mini evoked responses, something like the miniature end-plate potentials of the neuromuscular junction, a name that in its time was very effective. Okay, now here is the same for the MEG; everything before was from microelectrodes, and now I am going back to the MEG. Here is the shape of these sharp peaks in the MEG, or rather in the CCD, the cortical current dipoles recovered from the MEG, averaged around the detection times of such peaks, and on top of it I put the shape from the microelectrode. They are very similar: the main thing is a sharp peak, but there are some undulations around it. So these MEG transients are very similar to those detected by time-frequency analysis in LFP and MEG by Stephanie Jones; you can see the paper. She called them beta events; we call them mini evoked responses; these are the same things. Okay, now, the experiment I want to talk about: until now this was the introduction, and now we start the lecture. The experiment was done by a student of mine who is a professional drummer, and he wanted to study music perception through MEG. He designed the following experiment: he generates drum beats that go boom tick boom tick boom tick boom, and so on, and from time to time the
meter changed from two quarters, boom tick boom tick, to three quarters, boom tick tick boom tick tick, and then back: boom tick tick, boom tick, boom tick. The subject had to tap with his fingers to these drum beats: second finger for the primary beat, third finger for the secondary beat. So it goes: boom, primary, second finger; tick, third finger; and so on. Now all of a sudden the rhythm, or the meter, changes, but the subject continues in what he thinks it is: he tapped with his second finger, but he hears a secondary tick, so he understands that the meter changed. He is well trained that there are changes; usually he would stop for a while and then come back in synchrony with the new meter. And again here: he pressed with his third finger but heard the primary tick, so he stopped and then resynchronized. So we study two pieces of the data: what happens before the change, and what happens after. Now, the change was actually here, but the subject knows that there was a change only now. So we take what happened just before the point at which the subject realized the change, and what happened just after he realized it. This piece we call the last piece of congruent tapping, and this is the first piece after the change, the first piece of incongruent tapping, which we sometimes call "change". Now, routine tapping is very easy, you all know it, and, I do not know if you know this, after a few beats the movement actually comes before the sound, something like 50 milliseconds before the sound, because you already know what is going to happen. Here, of course, everything stops: the subject has to realize that something is different, has to stop the ongoing motor plan, has to recall from memory the alternative motor plan, has to find the next expected sound and in which phase of the meter it will fall, so that he knows that he now has to tap with the third finger and not with the second finger, and so on. So there are a lot of processes occurring in the brain after a change; we think that there are fewer, but we do not know, before
the change, during regular tapping. So the 0.25 second before the change is what we call "last", and the 0.25 second after the change we call "changed". In an experiment we would have between 85 and 95 cases; the one I am showing you here had 93 pieces of "last" and 93 pieces of "change". In each piece, and in each of the CCD channels, we set the detection threshold so as to obtain four mini evoked responses. So we had four during this one quarter second and four during that one quarter second; some of them were up and some were down, usually half and half, but not in every single trial. And then we looked for repeating spatio-temporal patterns with an interval accuracy of plus or minus five milliseconds. So now the pattern is precise: A, then B after a certain delay, and then C after a certain delay, again and again and again, with the same delays up to plus or minus five milliseconds, with lengths that can go up to 40 milliseconds. We find a lot of patterns in these parallel point processes. The complexity may go from three events, we did not look for fewer, up to 27 events, A, B, C, D and so on, repeating twice with the same intervals; most repeated only twice. We had hoped that if the same brain process runs every time there is a change, we would see a pattern happening almost every time there is a change; but we saw patterns repeating only twice. Very disappointing. So we decided to look only at triplets. We took all the patterns and decomposed them into sub-triplets: if we have a pattern A B C D, it gives the triplets A B C, A B D, A C D and B C D. And if there are 27 events, it is 27 choose 3, which is 2,925 triplets. So we decomposed all of this into sub-triplets, keeping them ordered so that we do not count the same thing twice, and we get about 3 million triplets, most of them repeating only twice, but many repeating several times, some even 25 times. Okay, so we still do not have a repeating triplet for every trial, but we have many triplets, and we thought we could now try to
differentiate between the trials based on these triplets. However, we had only mediocre success. We tried jitters of plus or minus two milliseconds, four milliseconds, five milliseconds; with all of them we were in this range, and we noticed that with bigger jitter we had a little bit better success. Now, this caused me to sit down, which I will do now because my back is not too good, and think about what it means. Maybe it means that the precise timing is not that important; maybe I should try to abandon completely the attempt to see repetitions with precise timing. Okay, so I tried this. What do I do? I take a small time window, say of 20 milliseconds, slide it over the data, and in every place I look at which triplets happened. Suppose that in this window channels number one, two, three and four happened: I write down one-two-three, one-two-four, one-three-four, two-three-four, that is, all the triplets within the window, in the order in which they appeared, but I ignore whether the delays in between were short or long. So now we can try to see if we can sort single trials with these triplets. We take one trial out at random; all the others are the training set. In the training set we find triplets that repeated many times, n times for one behavior and zero times for the other. This n is the specificity of the triplet: if a triplet has a specificity of four, it means that it happened four or more times in one behavior and zero in the other. And we found that if we choose triplets with a specificity of three, four, five or six, with all of them we could sort 100% of the trials correctly. I thought: wow, this is amazing, because nobody ever reported that he could do such sorting based on neural data. But I said, okay, this is all within 20 milliseconds; maybe if I use larger windows I get a whole mess of things and they would be useless. So I tried windows of 40 milliseconds, 60, and so on, and this is what I found. This is the window width: 20, 40, 60, 70, 80, 100, 120. And this is the maximal specificity at which I could sort 100% correctly. Now, this is not exact: the lower bound is the
highest specificity for which I got 100% correct, and then I went one above it. Say, for 20 milliseconds, with a specificity of six I could do it 100% correctly; I went to a specificity of seven and there were something like two errors out of the 186 trials, so I put the point near the top. If there were more errors, the point would be lower; if there were 50% errors, it would be down here. So the dots are an estimate of where the good specificity lies, and the truly perfect specificity is just below them. Now, if we can do so well with windows of 20, 40, up to 120 milliseconds, maybe the order also does not matter; maybe we can just look at the names of the point processes that made up the triplet. So if we have one-two-three or one-three-two, we call both of them one-two-three, and so on. And you see that it is much better when we do not keep the order: if we keep the order, we can get up only to seven; without the order, up to nine; for a bigger window even higher, and for a bigger window even higher still. But then everything falls down: above 60 or 70 milliseconds the order becomes very important, because up to there it was still improving, and here it fell down to almost nothing. How could this be? It looks very strange. The only explanation I could come up with is something like this. Suppose that in the brain there are some slow waves going up and down, and the points at which processes start, which is what we detect, happen only at some phase of this cycle. So there would be triplets here and not here. It does not mean that the brain does not work here; it means that things start here, this is the mark of the start of something, but the process may continue for a long time. So things start at one phase and not at the other, again at one phase of the next cycle and not at the other, and so on. Now, if the window is small, you see only things that happen within the same phase, and there, apparently, the order is not important. Once you have a long window, you see events some of which started in this phase and some of which continued in
the other phase, and there the order is important, because the ability to sort the trials fell down steeply when the window was that long. Now, if this is true, I would expect that the lengths of the good triplets, those that happen many times in one behavior and zero in the other, would be either short, because they fall within cycles, or long, because they span across cycles. So you can make a histogram of the durations of the patterns that were specific, that is, occurred many times, 12 or more, in one behavior and zero in the other, and this is how it looks. Now, this is not very strong, but still there are three peaks. The axis is in samples, so you should multiply by two to get milliseconds. So there is a lot around here, 20 to 30 milliseconds, then a peak here around 60 milliseconds, and then a peak here around 95 milliseconds. This is consistent with the idea that this cycling of starting and not starting is either in the beta range, since 60 milliseconds is about 16 per second, or in the alpha range, 11 or 12 per second, maybe what was called here the mu frequency and not alpha. Okay, so this histogram is not very strong, but it is consistent with the idea that some of the innate rhythms, beta or alpha or mu, are timing the moments at which new processes can start. So what happens in between? I would think that things start here, then something processes them, maybe hippocampus, maybe thalamus, maybe I do not know what, and then, based on what was going on here, other things start, and they are processed, and after a while other things start. I think this is a strong finding about the dynamics of the organization of circuits that start to operate in the cortex. Of course, the fact that we have some specific triplets for one behavior and other specific triplets for the other can also tell us things about the places in the brain where things happen; but it is not one place in the brain, it is the combination of places that matters here. It is not that this alone is important; it is
only this and this and this, starting together within some interval, that is important. So, with triplets of events, the type of behavior could be sorted with 100% accuracy. Such sorting required many, about 50, types of triplets; so again we have the problem that one process does not always have the same structure. There was no triplet that appeared in all, or even most, of the trials of a given type; the maximum was 20 out of 93. But using this assembly of triplets we could sort at 100 percent. Within a short window, up to 60 milliseconds, the order within a triplet was not crucial, but for long windows, 80 milliseconds and more, it is crucial. I think that this approach to analysis, which is just starting now, shows us not only where the relevant activity is, but also the dynamics of organization in the brain, in time.
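The sliding-window triplet extraction and the leave-one-out sorting by triplet specificity described in this lecture can be sketched roughly as follows. This is a schematic reconstruction under stated assumptions, not the actual analysis code: events are (time, channel) pairs, a trial is reduced to the set of channel triplets whose events fit inside the window, and a triplet is "specific at level n" if it occurs in at least n training trials of one behavior and zero of the other. Dropping the order, as in the order-insensitive variant, amounts to sorting each triplet's labels.

```python
from collections import Counter
from itertools import combinations

def window_triplets(events, win, keep_order=True):
    """events: time-sorted (time, channel) pairs for one trial.
    Returns the set of channel triplets whose three events fit within
    `win` seconds (repeated channel labels are kept as-is in this sketch)."""
    triplets = set()
    for i, (t0, _) in enumerate(events):
        inside = [ch for t, ch in events[i:] if t - t0 <= win]
        for trip in combinations(inside, 3):      # preserves temporal order
            triplets.add(trip if keep_order else tuple(sorted(trip)))
    return triplets

def specific_triplets(trials_a, trials_b, n):
    """Triplets occurring in >= n trials of one behavior and none of the other."""
    count_a = Counter(t for trial in trials_a for t in trial)
    count_b = Counter(t for trial in trials_b for t in trial)
    spec_a = {t for t, c in count_a.items() if c >= n and count_b[t] == 0}
    spec_b = {t for t, c in count_b.items() if c >= n and count_a[t] == 0}
    return spec_a, spec_b

def classify(trial, spec_a, spec_b):
    """Assign a held-out trial to whichever behavior its specific triplets favor."""
    return "A" if len(trial & spec_a) >= len(trial & spec_b) else "B"
```

In a leave-one-out loop, `specific_triplets` would be recomputed on the training trials and `classify` applied to the held-out one; the specificity level n plays the role of the three-to-six range mentioned above.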