Thank you. Let me briefly outline my talk. I spent about 20 years at RIKEN, and I am now also working in Moscow. First I will talk about brain-computer interfaces, and I will show some computer simulations and demos of what this looks like and how it works. I will also say a few words about the interaction between brains, so-called hyperscanning, and about the mathematical tools behind all of this — the tensor methods — which in my opinion are very powerful, especially for data mining, for very large data sets, and especially for brain data. And I will conclude my talk with potential applications of this technology, or this approach, and also open problems and challenges.

So, OK, I am not a neuroscientist. My background is machine learning, signal processing, mathematics, and in fact electrical engineering. So my approach is not to understand the brain as such, but rather to find tools which allow us to better analyze brain data. The assumption is that signal processing and machine learning are necessary to analyze complex human brain and behavior data and to extract information. I don't say the raw data are useless — they are necessary — but we need some kind of preprocessing and smart analysis of these data to extract interesting features.

So, OK, my recent projects are all related to applications of multi-way component analysis, or tensors. One project I was involved in is big data and data mining, especially analysis of EEG and MEG data and near-infrared spectroscopy, but also the electrocorticogram, electromyography, and fMRI data. Another one is aging of the brain: I am very much interested in how the brain develops and also in diseases of the brain — early prediction of Alzheimer's disease, for example. Another hot topic of our research is human-computer interaction and brain-computer interfaces. And recently also deep learning, because this is a hot topic and everybody is interested in deep learning and its application to analyzing brain data.

So, OK, the goal in BCI, as you know, is to develop safe, non-invasive communication between a human and a computer or machine, like a robot, a robot arm, or a wheelchair. Usually this is a multi-stage process: first we must do data acquisition, after this preprocessing, removing artifacts, and feature extraction; after this feature selection, where we must reduce the features; and after this we must develop a classifier which classifies the commands.
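To make this multi-stage pipeline concrete, here is a minimal sketch with stand-in data: band-pass preprocessing, band-power feature extraction, feature selection, and a linear classifier. All shapes, frequencies, and parameter values are hypothetical, not the settings used in the talk.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

fs = 250                                   # sampling rate in Hz (assumed)
X = np.random.randn(120, 32, 2 * fs)       # trials x channels x samples (stand-in EEG)
y = np.random.randint(0, 2, 120)           # labels, e.g. left- vs right-hand imagery

# 1) Preprocessing: band-pass filter to the mu/beta band to suppress drifts and artifacts.
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
X_filt = filtfilt(b, a, X, axis=-1)

# 2) Feature extraction: log band-power per channel and trial.
features = np.log(np.var(X_filt, axis=-1))         # trials x channels

# 3) Feature selection and 4) classification of the commands.
clf = make_pipeline(SelectKBest(f_classif, k=10), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(clf, features, y, cv=5).mean())
```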
So, what is new in our approach? In our approach we perform data acquisition in the form of a tensor, a multidimensional tensor. I will explain the details later, but an alternative and much more promising approach now is deep neural networks. With deep neural networks we don't need all these stages: the network performs feature extraction, feature selection, and even artifact removal automatically. The problem with deep neural networks is that they need a lot of training data to find a set of millions of parameters. So we now try to apply tensors to modify the architecture and reduce the required training set. In fact, the data here are represented in the form of a tensor, and every convolutional layer is tensorized, or represented by a tensor. Why by a tensor? I will explain later, but roughly speaking it is because we can perform some kind of super-compression of the data.

So, what is a tensor? It is a multidimensional matrix. In brain science a tensor can arise naturally: for example, MRI data form a three-dimensional tensor, but EEG data are not a tensor, so we must tensorize them. How do we tensorize these data? We usually divide the EEG into blocks, or trials, or windows, and after this we can transform them by a time-frequency representation into spectrograms, so we obtain a four-dimensional tensor. But this tensorization by itself changes nothing; we must do something with this new representation. And what do we do? We decompose this tensor into factors. We call this tensor decomposition, or tensor factorization. So, if our data are represented like here — by trial, time, frequency, and channel (space) — then by decomposing we obtain components, or factor matrices, which have physiological meaning. In this case we have a spatial (channel) component, we have a time-frequency component, and if the data are related to specific mental tasks, we have classes of different kinds of mental task.

OK, so in physics there is a very different definition of a tensor, yes? But I consider a tensor to be just a multidimensional matrix, where a natural representation is space, time, frequency. Of course, the number of modes can be larger; a tensor can have any order. So, OK, in tensor networks we use a special notation, which is illustrated in this slide: we represent a tensor by some shape, and the number of branches tells us the dimension of each mode and the order of the tensor. A scalar is a zeroth-order tensor, a vector is a first-order tensor, a matrix is a second-order tensor, but generally a tensor can have any order.

So, what is the difference between tensor decomposition and a tensor network? A tensor network represents a high-dimensional tensor in distributed form, as a network of interconnected tensors. Tensor factorization, or decomposition, just decomposes or factorizes a tensor into matrices; in fact, tensor decomposition is a special case of a tensor network. OK, so why tensor decomposition? Well, real-world brain data are affected by factors in multiple modes. For example, if we take EEG or MEG, we have stimuli, we have conditions, we have channels and their locations, we have subjects — we usually perform the experiment for many different subjects and many times, so many trials. In this case we naturally obtain a tensor. Very important is that we usually have not one tensor but a block of tensors, and each block represents one kind of data.
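Before moving to data fusion, here is a minimal sketch of the tensorization and decomposition steps described above: a trials x channels x samples array is turned into a fourth-order trials x channels x frequencies x times tensor via spectrograms and then factorized with a CP (PARAFAC) model. The data and shapes are stand-ins, and the tensorly library is used as one possible implementation.

```python
import numpy as np
from scipy.signal import spectrogram
import tensorly as tl
from tensorly.decomposition import parafac

fs = 250
eeg = np.random.randn(40, 16, 4 * fs)            # trials x channels x samples (stand-in EEG)

# Tensorization: a time-frequency transform of every channel in every trial gives a
# fourth-order tensor: trials x channels x frequencies x time windows.
f, t, S = spectrogram(eeg, fs=fs, nperseg=fs // 2)
X = tl.tensor(np.log(S + 1e-12))                 # shape (40, 16, len(f), len(t))

# CP / PARAFAC decomposition into rank-R factor matrices: each component has a trial,
# spatial (channel), spectral (frequency) and temporal signature.
weights, (trial_f, chan_f, freq_f, time_f) = parafac(X, rank=3, n_iter_max=200)
print(chan_f.shape, freq_f.shape)                # (16, 3) and (len(f), 3)
```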
Coming back to the block structure: one block can represent EEG data, for example, and another MEG data or fMRI data. So tensors are very useful for data fusion; in other words, we can simultaneously analyze data from different modalities. And the most important application and perspective for tensors is big data processing.

The usual opinion is that big data means big volume, but as you know the definition of big data is a little more sophisticated, because it involves not only volume but also variety, veracity, and velocity. So what is this? Volume: OK, a few years ago it was sufficient to have memory on the order of megabytes or gigabytes; now we use terabytes or petabytes. The most important issue in neuroscience is that the data have veracity problems. What does veracity mean? Real-life data are corrupted by noise and are incomplete. They are corrupted not only by noise but also by outliers, so we can say the data are inconsistent, with anomalies. Tensor technology is very convenient for such data. The next feature of brain data is variety: we have different kinds of data. Time series — for example EEG, MEG, and ECoG are represented by time series — but we also have neural images like MRI and PET, we have gene expression and behavioral data, and we can also have logistic data. So how to fuse, or simultaneously analyze, all these multimodal data is important. And the most challenging, still unsolved, problem is velocity: in other words, if the data are really large scale, how do we analyze them online? This is a big problem, but again, a tensor can be represented by smaller blocks, and we try to develop new parallel algorithms which can accelerate the computation and work maybe not in real time, but near real time. Still, this is a big challenge. Analyzing data online is very important, especially for BCI, because as you know the definition of a brain-computer interface is not that we collect data today and make the classification tomorrow; we must have a pipeline of data, and from these data we must immediately decide what the subject is doing, what the intention of the subject is.

The second myth about big data is that bigger data automatically makes us better off — that bigger insight comes from bigger data. As was already discussed yesterday, bigger insight comes not just from big data but from the quality of the data: meaningful feature extraction and selection, dimensionality reduction, and data selection are the key factors in analyzing big data. And again, tensor technology — of course it is not only me working on tensors; I think around 1,000 researchers around the world are working on tensors now — is very useful, at least potentially, for big data.

So, machine learning is part of artificial intelligence, and many researchers around the world now try to generalize existing methods from standard machine learning to tensors. For example, the classical method PCA, principal component analysis, was generalized to multilinear PCA, or tensor PCA. The singular value decomposition, which is the workhorse of dimensionality reduction, has been generalized to the higher-order SVD. The support vector machine, which is used for classification, has been generalized to the support tensor machine. Canonical correlation analysis, which is very useful in BCI, has been generalized to multiset or tensor CCA. And for prediction, partial least squares regression has been generalized to higher-order partial least squares.
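As one concrete instance of these generalizations, here is a plain-numpy sketch of a higher-order SVD: each mode of the tensor is unfolded into a matrix, the leading left singular vectors of each unfolding become orthogonal factor matrices, and projecting the data onto them yields a small core tensor. Shapes and ranks are arbitrary stand-ins.

```python
import numpy as np

X = np.random.randn(16, 64, 40)          # e.g. channels x frequencies x trials (stand-in data)
ranks = (8, 20, 10)                       # multilinear ranks to keep per mode

def unfold(T, mode):
    """Mode-n unfolding: move the chosen mode to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Orthogonal factor matrices: leading left singular vectors of each unfolding.
U = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
     for m, r in enumerate(ranks)]

# Core tensor: project the data onto the factor subspaces (mode products with U_n^T).
G = X.copy()
for m, Um in enumerate(U):
    G = np.moveaxis(np.tensordot(Um.T, G, axes=(1, m)), 0, m)

# The pair (G, U) is the compressed representation; reconstruct to check the error.
Xhat = G.copy()
for m, Um in enumerate(U):
    Xhat = np.moveaxis(np.tensordot(Um, Xhat, axes=(1, m)), 0, m)
print(G.shape, np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```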
So, OK, how can we generalize independent component analysis, or group component analysis, using tensors? Maybe I illustrate this again. If we have a tensor, it can be represented by slices, or matrices. These matrices can be unfolded, so the tensor can be represented by three different matrices, one unfolding for each mode. After this we can perform matrix factorization with different constraints. For example, if we impose statistical independence, we have ICA, and in this way we obtain multilinear ICA. If we impose non-negativity — for example, spectra, or power with respect to frequency, are non-negative — and since many components are also sparse and smooth, we can apply sparse component analysis, non-negative matrix factorization, PCA, or ICA. This is the trivial, or naive, approach.

So, OK, a few words about tensor decomposition. The first work on tensor decomposition in mathematics was in 1927. Maybe many of you wonder: if this topic was solved so long ago, why do tensors have a future now? Well, in those times there were no computers, or no good computers, no high-memory systems, no graphics processors, and the algorithms were really slow. So, this is the simplest model, which is called PARAFAC, or the canonical decomposition: it decomposes a tensor into factor matrices, and the core tensor is diagonal. In fact this diagonal tensor is just a scaling, so we have only factor matrices. In 1966, in psychology, Tucker proposed a model which decomposes a tensor into factor matrices and a small core tensor, where E represents the error. This concept was later generalized to the higher-order SVD. What is the difference between Tucker and the higher-order SVD? In the higher-order SVD we impose orthogonality constraints on the factor matrices. So, what is the problem here? The problem is that we have data and we want to represent them in a different form, as factors, or factor matrices, which have the orthogonality property. And why do we do this? Because we can extract hidden information and we can compress the original data — this is compression, or dimensionality reduction. Of course, the more we compress the data, the less precise our approximation becomes. OK, so we have these two kinds of decomposition.

OK, now I would like to focus on BCI. Before BCI, I would like to show you an experiment which we did at RIKEN. This is my Japanese colleague, who controls this wheelchair — we call it in Japanese kuruma-isu. He controls this wheelchair by brain signals, directly from EEG, so the system must work in real time. And what is he doing? In fact he is doing nothing, he is just thinking: he imagines rhythmic movement of the right hand, the left hand, and the feet. If he imagines the feet, he accelerates; if he imagines the right hand, the wheelchair turns to the left. OK, he trained for more than six months, so this works very well. OK, sorry. This is another of my colleagues, who is still working at RIKEN, and he did this after one week of training; as you see, his movement is not as smooth as the previous colleague's. This is another Japanese colleague, who is trying to develop, for paralyzed people, a wheelchair and robot arm working together. It can deliver, for example, drinks or medicine or food when the person is alone.
This is very important in Japan, because elderly parents often live alone, and in this case they need some kind of help, and this help can be provided by robot arms.

So, we developed some kind of training to accelerate the process and to improve the performance of BCI, and we call this neurofeedback: we can see the activity — if we imagine movement of the feet, we see this on the screen. This is just a toy example: the subject controls a car on the screen, and in this case the training is that we can visualize the brain activity and see how the brain responds to a specific task.

OK, the next step was to develop the so-called affective BMI. What is an affective BMI? The idea is to give some visual or auditory stimulus which has emotional content. We used these strange faces as our stimuli — this is an English actor representing different kinds of emotion. And why do we use this? Because — OK, oh, this doesn't work. OK, so this is the so-called steady-state visually evoked potential. Usually researchers use checkerboards flickering at different frequencies, but this causes a lot of fatigue, and the performance is also not perfect. We also developed such a game, where we control a small car. How is it controlled? The subject shifts attention to one of the checkerboards, each checkerboard flickers at a different frequency, and the method is quite primitive, because the brain responds at the same frequency as the flickering. Interestingly, the brain also responds at the second and third harmonics, so the processing performed by the brain is nonlinear.

So, what is our modification? OK, sorry. Instead of checkerboards we use flickering faces. This is still annoying and irritating, but it gives slightly better performance than the checkerboards. OK, here we have eight commands, and depending on which of these small face images the subject attends to — each flickering at a different frequency — it is relatively easy to extract this signal from the EEG, and to extract it efficiently we use tensors, as I will show later. OK.

Another approach is the so-called P300 paradigm. In the P300 paradigm we use the oddball paradigm, so some kind of surprise: the peak appears about 300 milliseconds after the onset of the stimulus. In this case we use different kinds of images, and again, by experiment we showed that if we use human faces as stimuli, we obtain stronger and more consistent brain responses. OK, maybe I skip this slide. This is one of the paradigms which we developed and experimented with. Two faces are shown on the screen, and the subject must not count the flashes but must recognize the emotion and the gender of the faces, and in this case we focus not only on the P300 but also on other event-related peaks like the N170 and so on. This is the paradigm we use, and one of my Japanese colleagues found that if we use inverted faces, the cognitive task is more complex, and in this case the ERP is stronger. I will show this. OK, we call this the vertex positive potential, and there is also the so-called late positive potential, LPP. This LPP is stronger for inverted faces than for normal faces. Also the so-called N170 peak — the peak 170 milliseconds after the stimulus — is stronger when we use inverted faces instead of normal faces. OK.
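For the steady-state visually evoked potential paradigm described a moment ago, a common detector correlates the multichannel EEG with reference sinusoids at each candidate flicker frequency and its harmonics; the attended command is the frequency with the highest canonical correlation. The sketch below is a generic CCA detector with hypothetical frequencies and window length, not the tensor-based extraction used in the talk.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

fs = 250
t = np.arange(2 * fs) / fs                       # one 2-second EEG window
eeg = np.random.randn(len(t), 8)                 # samples x channels (stand-in trial)
flicker_freqs = [6.0, 7.5, 10.0, 12.0]           # one flicker frequency per command

def reference(freq, t, n_harmonics=3):
    """Sine/cosine references at the flicker frequency and its harmonics."""
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for f in (np.sin, np.cos)])

def cca_score(eeg, ref):
    """Correlation of the first pair of canonical variates."""
    cca = CCA(n_components=1)
    cca.fit(eeg, ref)
    u, v = cca.transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

scores = [cca_score(eeg, reference(f0, t)) for f0 in flicker_freqs]
print("detected command frequency:", flicker_freqs[int(np.argmax(scores))])
```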
OK, but we found that all these paradigms work only to some extent: some subjects even achieve 100% performance, but subjects don't want to use them, because they get tired of watching such annoying flickering. So we tried to use artificial faces instead of natural faces; we call these emoji in Japanese, and in Polish also emoji. These are simple characters, and we try to make minimal changes on the screen. In this case we have only six commands, and the subject must focus on one of these emoji. We used three different variants: one is only a flickering light, another is natural faces, and another changes from an unhappy to a happy face. We ran hundreds of experiments with 20 subjects, and the conclusion is that we achieved three goals: one, we improved performance by 6%, which in BCI is a lot; two, we reduced fatigue; and three, the interference between the stimuli is much smaller. We published this in several papers, and this method really works well in practice and does not fatigue the subject like the previous methods. So the objective was to reduce the strong interference between stimuli which must be presented simultaneously, and second, to reduce fatigue and annoyance. We achieved, for 10 subjects, a performance of around 95%, so in comparison to other methods the improvement is something like 5%. OK.

We also use this kind of spelling machine: instead of flickering letters we use such symbols. This was developed by my colleague, Professor Jin, and what is the novelty here? Usually researchers use just a rotating symbol, but we use rotation and zooming, so the symbol also shrinks over time. And what we found — the difference is very tiny: the red curve is the brain response when the object in the spelling machine not only rotates but also shrinks, a kind of vanishing or zooming, and the black curve is the response when we use only the rotating symbol. As you see, these are very tiny differences, but in performance we again achieve an improvement of around 4-5% in comparison to other competitive methods. OK.
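For the P300/oddball paradigms above, a typical offline analysis epochs the EEG around each stimulus, averages target against non-target epochs to visualize the ERP, and classifies single epochs with a regularized linear discriminant. The sketch below uses stand-in data; in a real speller the epochs would be cut around each flash, rotation, or expression change of a symbol.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250
epochs = np.random.randn(300, 8, int(0.8 * fs))   # epochs x channels x samples (0-800 ms)
is_target = np.random.rand(300) < 1 / 6           # roughly one in six stimuli is attended

# Grand averages: the target average should show the P300 (and N170/VPP for face stimuli).
target_erp = epochs[is_target].mean(axis=0)
nontarget_erp = epochs[~is_target].mean(axis=0)
print("max target/non-target difference:", np.abs(target_erp - nontarget_erp).max())

# Single-epoch classification: downsample each epoch and feed the amplitudes to shrinkage LDA.
feat = epochs[:, :, ::10].reshape(len(epochs), -1)
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("AUC:", cross_val_score(lda, feat, is_target, cv=5, scoring="roc_auc").mean())
```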
OK, so a brain-computer interface is interesting from an engineering point of view, but it is also important not just to achieve high BCI performance, but to understand what information we are really able to extract from the human brain using EEG, and how the brain, or the motor cortex, works. So we make a lot of effort to analyze brain responses, to understand the limitations of BCI, and to understand how the brain really works. Of course, the general opinion is that the human brain is so complex — the most complex system in the whole universe — that we will probably need the next 200 years to understand it well. But at least understanding how the sensorimotor or motor cortex works would already be exciting.

OK, so neurofeedback is very important: if a person imagines some kind of movement, it is good to give him a cue on the screen and to dictate the speed and how many times he must repeat the specific movement. This is very individual, and finding an optimal movement strategy for each person allows us to improve performance.

OK, this is an important slide, because — I'm sorry — so many researchers work on BCI because they believe that BCI could be useful for different kinds of rehabilitation, especially rehabilitation after stroke. We also tried this experiment — unfortunately with young, healthy subjects, not stroke patients. So what is the point? The point is that we use robot arms which help to perform the movement if we can extract the appropriate signal from the brain. This is something like encouragement, or reinforcement: if the subject cannot make the movement, but the robot helps him make the movement which he imagines, this gives something like an improvement of the rehabilitation procedure. OK.

OK, so I am very interested in a very challenging problem, which is neurofeedback in a broader sense. Usually most researchers use only visual neurofeedback; in other words, they represent brain activity in the form of a visualization. But we also try to apply sonification, or audification: the brain data are converted to sound, and the subject hears his own brain. Also very interesting is the haptic approach: we convert the brain data into some kind of vibration, a kind of massage, which tells us how the brain responds to a specific stimulus or a specific mental task.

OK, maybe I skip this. Let's return to tensors: how do tensors allow us to improve the performance of BCI? Well, we decompose the data. On the left side we see the activity of the brain measured by EEG, as a spectrogram — frequency by time — and we can decompose this into components: a spatial (channel) component and a time-frequency component. We have only two classes, movement of the left hand and the right hand, so this allows us to find which components are related to the specific mental task, in this case imagined movement of the left or right hand. This is a different decomposition using tensors, where we have a spatial component but also a frequency component and a time component.

OK, so another very hot topic of research which I am actually working on is tensor completion. We usually assume that our brain data are incomplete — there are missing data — and that they are corrupted by some artifacts which must be removed. So what do we do? We represent the incomplete tensor by factor matrices: OK, sorry — we sample the tensor along what we call fibers (rows, columns, and tubes), we project them, we compute a small core tensor, and after this we apply some transformation.
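As a simplified illustration of tensor completion (leaving out the fiber sampling and the transform discussed next), the sketch below fits a low-rank CP model only to the observed entries of a synthetic tensor and uses the model to fill in the missing ones. It assumes a recent tensorly version in which parafac accepts a mask of observed entries; everything else is made-up toy data.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
shape, rank = (30, 30, 30), 3

# Ground-truth low-rank tensor with 70% of its entries missing.
factors = [rng.standard_normal((n, rank)) for n in shape]
X_true = tl.cp_to_tensor((np.ones(rank), factors))
observed = rng.random(shape) < 0.3                      # True where the entry is observed

# Fit CP only to observed entries; unobserved values do not influence the factors.
X_obs = tl.tensor(np.where(observed, X_true, 0.0))
cp = parafac(X_obs, rank=rank, mask=tl.tensor(observed.astype(float)),
             init="random", n_iter_max=500)
X_hat = tl.cp_to_tensor(cp)

missing = ~observed
err = np.linalg.norm((X_hat - X_true)[missing]) / np.linalg.norm(X_true[missing])
print("relative error on missing entries:", err)
```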
Recently we discovered that a very good transformation of the original data is the so-called Hankel matrix, or time-delay embedding transform. This slide illustrates how to do this: this is the original data, and the Hankel matrix is the transformed data, with redundant information, and from these blocks we build a tensor. This allows us to find the mutual correlations between different blocks, and when we apply tensor completion instead of matrix completion, we obtain quite good performance.

We applied this to MRI data in which 25% of the voxels are missing. On the left is the available data, and we reconstruct the full data from it; this is the comparison with competitive methods, and the gain comes from this idea of Hankelization. Hankelization means representing the original tensor as a higher-order tensor, and after this we perform tensor completion. Here is a toy problem — just a small example, but the same can be applied to images and neural images. This is the original image, this is the available image — in this case only 10% of the pixels or voxels are available — these are other methods from the literature, and with the Hankelization method we obtain the best performance. The same can be done for neural images. We also tried a much more difficult case: please imagine that some of the images are corrupted by horizontal and vertical lines, and some of the frames are completely missing. OK, this is the original video clip, which is not available to the algorithm; this is the available data — the black blinks mean that the particular frame is missing — and as you see, this method, for example, is not able to reconstruct some of the frames, this one has some distortion, this one also has distortion, and this is our method, which has slightly better performance.

OK, how many minutes do I have? I should finish in 5 minutes. OK.

So, this is exciting research related to what I call brain-to-brain interfaces. Fabio Babiloni calls this hyperscanning — and in fact it is hyperscanning: simultaneous recording of the brain activity of two or more subjects. The objective here is also to find some kind of connectivity, but not inside one brain — between brains. And how can tensor technology be useful for this application? So, what is EEG hyperscanning? EEG hyperscanning means measuring or recording the electroencephalogram simultaneously for two subjects while they perform a mental task, or have a discussion, and so on. Why is this important? Because social interactions are extremely important for humans, and we would like to understand how two persons "click" with each other, which persons can collaborate, how people compete, and so on. So we performed a series of experiments using EEG, and we collected data simultaneously for the two subjects. And what do we do with these data? Of course we can analyze them in many different ways: we can, for example, analyze Granger causality or partial directed coherence; we also exploited the directed transfer function — by the way, the directed transfer function, DTF, was developed here at this university by Professors Blinowska and Kaminski. To find this connectivity, this "click" between two brains, or some kind of synchronization, we use tensors. These green blocks represent our raw measurements, and we decompose each of these tensors into factor matrices, and the factor matrices give us hidden information from the original data. Generally, by joining — concatenating — these matrices, we obtain a tensor, and this tensor can be represented by factor matrices like this: some common factors A1 and A2, and a core tensor which gives the links between them. This is in fact the Tucker model which I mentioned.
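A toy surrogate for this kind of joint analysis — not the linked method itself — is simply to stack the two subjects' simultaneously recorded tensors along a new subject mode and decompose them jointly; components on which both subjects load with comparable strength can then be read as shared activity. All names, shapes, and the simulated "shared process" below are hypothetical.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(1)
n_ch, n_freq, n_time = 16, 32, 200

# One genuinely shared time-frequency process plus subject-specific background activity.
shared = np.outer(rng.standard_normal(n_freq), rng.standard_normal(n_time))
subjects = []
for _ in range(2):
    spatial = rng.standard_normal((n_ch, 1, 1))
    subjects.append(spatial * shared[None] + 0.5 * rng.standard_normal((n_ch, n_freq, n_time)))

# Joint decomposition of the stacked data: subject x channel x frequency x time.
X = tl.tensor(np.stack(subjects))
weights, (subj_f, chan_f, freq_f, time_f) = parafac(X, rank=3, n_iter_max=300)

# A component is a candidate "common" component if both subjects load on it strongly.
for r in range(3):
    print(f"component {r}: subject loadings = {np.round(subj_f[:, r], 2)}")
```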
But we developed a somewhat more sophisticated method, which is called linked multilinear ICA, or linked multiway analysis. What is this? These block tensors represent the data collected for the two subjects, and we would like to find common components. We impose specific constraints on the factor matrices, such as the spatial factor or the temporal factor: we assume that each has two parts, one statistically independent and one strongly correlated, and these strongly correlated, or identical, components tell us how the brains interact with each other and which components are common during the same mental task, or during the simultaneous measurement. So we have three kinds of components here — spatial, spectral, and temporal — and for each of them the highlighted factor matrix represents the common components: similar, identical, or highly correlated.

OK, this is another method, called higher-order partial least squares. Here the green blocks represent the original data; one is brain data and the other is behavioral data, and we decompose them so that some factor matrices are common or highly correlated. This is the idea of higher-order partial least squares.

OK, I will skip this, but this is an interesting experiment which we performed. This is a monkey, and from the monkey we record ECoG — the electrocorticogram — using only 32 channels, and these time series are represented by a tensor: time by frequency by channel. Simultaneously we place sensors on the monkey's arm, and using motion capture we record the movement of the hand: the monkey takes food from the experimenter, and we run hundreds or thousands of trials. This is the training data, and the task is this: from the ECoG data we want to predict exactly the movement of the monkey's hand. After training with thousands of such tensors, we found that we can decode the motor cortex of the monkey — in other words, given only new ECoG data, we can predict what the monkey is doing: the movement of the hand, not only the position along the three axes x, y, z, but also the speed and acceleration. Of course this reconstruction is not perfect. Now we are trying to apply deep learning to the same problem, and deep learning will probably give better performance, but already, to some extent — especially along the z axis — we can reconstruct the movement of the monkey with this technology.

Another topic is how to use EEG to classify emotions — very briefly, because I must finish in one minute. The problem is formulated like this: we elicit emotions in the subject by showing short video clips, and on the basis of five or ten seconds of EEG we would like to guess what kind of emotion the subject has. We found that EEG by itself does not give good performance, but if we analyze data simultaneously — if we make data fusion — and also use behavioral data, in this case eye movements, we can combine the two modalities, eye movements and the EEG signal. While the subject is watching the movie we have topographic maps here and also the eye movements, and if we analyze them together using an autoencoder deep neural network, we can classify four different emotions with an accuracy of about 95 percent.
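A greatly simplified sketch of this feature-level fusion: per-clip EEG band-power features and eye-movement statistics are concatenated and fed to a small neural network classifier. The talk uses a deeper autoencoder-based fusion; the shapes, feature names, and classifier below are stand-ins.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

n_clips = 200
eeg_feat = np.random.randn(n_clips, 62 * 5)     # e.g. 62 channels x 5 frequency bands
eye_feat = np.random.randn(n_clips, 30)         # e.g. fixation/saccade/pupil statistics
y = np.random.randint(0, 4, n_clips)            # four emotion classes

# Feature-level fusion of the two modalities, then a small neural network classifier.
X = np.hstack([eeg_feat, eye_feat])
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 16), max_iter=500))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```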
OK, the tensor network is another topic: we first tensorize the data into a higher-order tensor and represent it by a graph. Why is this important? It is important in deep learning, because tensor networks give us inspiration to create new kinds of deep neural networks: a tensor network can compress a deep neural network, so that in the future we can run deep neural networks on mobile devices or smartphones, not only on high-performance computers. OK, so unfortunately I must finish. The main idea, applied to a deep neural network, is to consider it layer by layer and to compress it — especially the convolutional layers and the fully connected layers — to achieve some kind of compression. I strongly believe that tensor networks are a useful tool in machine learning, and I hope that many researchers in the world will keep working in this area; it is a very promising tool with many potential applications, not only in BCI but also in prediction of diseases, detection of hidden components, detection of anomalies and trends — prediction of time series is also a very hot topic. OK, if you want more information, you can find software on my web page, and also some books which are freely available: we have a tutorial about tensor decompositions, and we wrote a monograph about tensor networks. These are my collaborators who work with me: Professor Amari in Japan, and colleagues from Vietnam, China, and Russia.

OK, I would like to finish with this slide — I have no voice, sorry. OK, so — sorry — OK. So now I am very fascinated by this small robot, a brain-to-robot interface. I apologize, for the same reason I cannot show it — anyway, I would like this robot to behave like a human, in the sense that it makes some mistakes and sometimes falls down, and so on. Maybe humanoid robots will be a new technology in brain science, because such a robot can be an agent and can help humans, especially the elderly, and so on. OK, so sorry, I apologize that some of my slides don't work. Anyway, thank you very much for your attention.

Thank you very much, Andrzej, for presenting very complex technical material in an understandable way. We are quite impressed with the number of applications, and, you know, the future of neuroinformatics is of course understanding the brain, but also developing new neurotechnologies, and definitely what you have shown shows us the way. We are a bit late, but maybe three short questions, if there are any — overwhelmed people keep quiet, all right. All right, left first.

Thanks for a very interesting and thought-provoking talk. I'm just going to see if I'm understanding what you're saying: could you take gene expression data and spike trains and fMRI and behavior and push all of these together in some way — could you use a tensor to analyze that and reveal some latent variable, even though they are across very different time scales, very different spatial scales?

This is a very good question. You know, all these data are very different worlds: they have different scales, different dimensions, and so on, so the problem is a challenge. I cannot guarantee that it works for every kind of data you mentioned, but the idea is like this: for each kind of data we build its own tensor, so each represents one block, and we have a block of such data. Because these data have different physical or physiological meanings, we cannot simply concatenate them. The idea is that, because these data are too complex to analyze directly, we analyze them in a different, transformed domain, so we decompose each of these tensors into factors, or components. But in general such a decomposition is not unique, so if we decompose them separately we will not find anything — the important thing is to impose specific constraints.
Domain expertise and knowledge about what kinds of components we expect — some working hypothesis about what components we have — is very important. In other words, we would like to represent these data by factor matrices, or components — independent components, principal components, or non-negative components — and we don't know how many, but we would like some of these components to be correlated across data sets, and this correlation gives us complementary information. For example, we would like to detect some biomarker for some disease, and this biomarker may exist in each of these data sets, so if we find a common component, this gives us an indication. We have tried many things, like fMRI data together with EEG data, but as you know these have different scales, because fMRI sampling is one second or half a second, while in EEG we use a sampling rate of ten kilohertz, or at least one kilohertz — milliseconds versus seconds, a difference of about three orders of magnitude. This is a challenge; it is difficult to analyze such data in the temporal domain, and also in the frequency domain, but the spatial components are similar — in the spatial domain some factors are common, and this gives additional information. So, to answer your question: yes, this is a potentially useful technology, but there is no guarantee that it always gives a solution. Thank you for your comments and question.

OK, we have another question, please.

Thank you very much for a very, very interesting talk.

Thank you — I apologize that my slides didn't work.

If I understood it correctly, your tensor decomposition will detect linear correlations between different things. Are there any nonlinear versions of this?

This is again a very good question. This is a big disadvantage of tensors: tensors are just an extension of linear algebra. In fact, tensors are based on multilinear algebra, which is a little more complicated and maybe richer, because of much more complex mathematical operations, but generally, if the underlying model is highly nonlinear, this is not applicable. However, tensors can be combined with nonlinear models. I have no time to show this, but we can also handle nonlinear models by using some nonlinear functions and merging: first there is a linear step, and after this, like in deep learning, we have pooling, we have nonlinearity, and so on, so the same techniques can be applied here as well. But this is still emerging: some probabilistic nonlinear tensor models exist and they are very promising, but to my understanding and knowledge, not much more than that. So, to answer your question: yes, this is a severe limitation of tensors, because the models most researchers consider are linear; extensions are possible, but there are no clear solutions so far.

OK, last chance for a question. Professor Blinowska, please.

In your brain-to-brain communication experiments, what kind of common components did you find? I can expect that there were some temporal components, because they are watching the same picture, but apart from that, did you find any similar components between the different subjects?

In brain-computer interfaces it depends on the paradigm, because as you know we have three basic paradigms. One is the steady-state visually evoked potential: the subject sees a flickering light and the brain responds at the same frequency, so the common component is a frequency component — if the flickering light is at 10 hertz, we should expect a peak, a spectral component, at that frequency. For the P300 it is very different, because we expect, as you mentioned, a temporal component, so in this case we want to extract the common component in the time domain.
But I didn't mean BCI — I meant brain-to-brain communication, people watching the same movie.

Mm-hmm, spatial correlations — yes. If people watch the same movie, then in my opinion the most important components are in the frequency domain, because of synchronization. But also, as you know, we should analyze not only the frequency and the magnitude of the oscillations but also the phase, so phase locking is very important information which we can extract with this approach. And you know, your colleague Professor Babiloni — or his brother, Fabio Babiloni — is one of the pioneers in this area; he has published several papers. What is new here is that I believe tensor decomposition, this linked component analysis, gives some new tools to analyze this kind of paradigm.

There are a number of experiments now showing what happens when we watch a video, looking at the fMRI distribution of activation over the whole brain, but nobody has done that in hyperscanning mode, so it would be very interesting to see how similar brain activations are when people look at the same thing. But that is something that needs to be done experimentally first.

Many people criticize this: why do we need hyperscanning if we still don't understand the individual human brain? Analyzing two brains is too much. Why do we need it? Because we are social, so we need to understand social interaction and our emotions. For example, if we want to build a team for a very difficult task, how do we find a partner who will work with me, whom I trust, and so on? That is my answer to this criticism — why should we study people talking to each other if we don't yet understand ourselves?

OK, we have to move on. Thank you, and see you.