Can I start whenever? Oh, yeah. OK. Yeah, you started me. OK. Hi, everybody. This is something completely different, and not very much computer science related, although it popped up on Hacker News and generated some discussion. The paper is about analysis of functional MRI data, and on Hacker News people started asking: is it a software bug, or is it some other kind of bug? The reason I want to talk about this is that I used to work in this field, from '97 till 2010. I had kind of stopped paying attention to it, but when it popped up on Hacker News and people started saying interesting things, I tried to take a look. So that was the Hacker News thread; this was early July. Then I went to read — well, this was actually not the paper, this was just the article on The Register. Unfortunately, I didn't take a screenshot of the original, but since I was so upset, Facebook helped me save the original title, which said something like "MRI software bugs could upend years of research". And this article was the best example of how not to write about science. It was a really, really good example of super bad journalism: come up with a sensational title, copy some things from a paper, and say all science is basically broken. I was really upset. What's wrong with that title? Well, first of all, there's MRI and there's fMRI — that's a big difference. And then there's "software", as if the whole field of MRI had a single piece of software to do everything, which it doesn't. The funny thing is that this happened again this weekend: about the same thing, the same paper, the same problem, with an illustration image that has nothing to do with the topic. It's just another, even worse, article about it. You see, concept art. Yeah, it's actually animated. You can Google this title. It's crazy. So, some of the quotes — these are the quotes that were made in The Register article.
Like: the most common use of the packages for functional MRI analysis results in false-positive rates of up to 70%. This questions the validity, basically, of all studies that have been done to date — and fMRI has existed for more than 25 years now. The next quote was: a bug has been sitting in a package for 15 years, and it produced bad results. OK. It produces bad results. Yeah, yeah, yeah. That's how you do science journalism. And people on Hacker News, of course, started talking about, yeah, fMRI and philosophy and politics and free will, and we're going to solve all of this. And of course somebody mentioned the salmon study. If you don't know fMRI at all: in 2009 somebody published a paper about a dead salmon. They put it into a scanner and scanned it while showing it some pictures, and they found three voxels active. It actually won an Ig Nobel Prize in neuroscience. But every time... It was rotting slowly. No, well, some people say there's a soul reincarnating into the salmon. How long was it there? Sorry? How long was it there? It's alive now. It's reincarnated. No, no, we don't know; those details were not in there — and totally irrelevant for this study. And then people started saying: this doesn't sound like a bug, it's actually a statistical problem. And: no, no, no, it cannot be statistical — it's implemented in all common software packages, therefore it's a software problem. So clearly — fMRI, again, has existed for 25 years — it's like black or white magic: either it works and everybody's happy, or it doesn't work and everybody's happy again because you can bash it. So bear with me, I'm going to do a five-minute intro to fMRI. fMRI 101, without the bashing. We're using magnetic resonance imaging. It's an imaging technology which, in the medical field, has existed since the mid-70s; in physics and chemistry it has existed since the late 40s.
For human applications, this is the device. We use a very strong magnetic field — it's one of the few real industrial applications of superconductivity, if you want. The patient or subject goes inside the magnet; it's quite large. Inside the magnet we also have three types of coils which can change the intensity of the field along three directions, and that allows spatial localization of the signal. The signal is generated by what is known as the nuclear magnetic resonance phenomenon: you send an RF wave of a specific frequency that hits a proton sitting in the magnetic field and destabilizes it, and when the proton comes back, it basically produces an electrical induction signal — that's what we measure. And because the frequency depends on the field, and the field is varied in space, we know from the frequency which point in space the signal is coming from. Some people talk about "fMRI scanners". There is no such thing — we use regular MRI machines. The only difference is that you need to add what we call a stimulation device and a response device. Functional MRI means we actually want to study the function of the brain, so we need to do either visual stimulation, auditory stimulation, or maybe tactile stimulation; there are different kinds of stimulation devices which are put inside the scanner. And during stimulation we use regular MRI sequences. The really cool thing about MR is that we are imaging soft tissues — we're basically imaging water or fat, that is, protons — and no other technique allows this. And we have tons of different ways of creating different contrasts, because it's not a single parameter like X-ray, which is only density. Here we rely on proton density and on magnetic properties like relaxation times — how quickly things change in time when you excite them. So there are hundreds of types of contrast you can generate in MR. And you are also not limited in how you make an image in space.
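The frequency-to-position trick described here follows from the Larmor relation: precession frequency is proportional to field strength, so a linear field gradient makes frequency a linear function of position. A small sketch — the proton gyromagnetic ratio is a known physical constant, but the 10 mT/m gradient and the positions are illustrative values, not numbers from the talk:

```python
# Larmor relation: f = gamma_bar * B, with gamma_bar ~ 42.58 MHz/T for protons.
GAMMA_BAR_MHZ_PER_T = 42.58

def larmor_mhz(b_tesla):
    """Precession frequency (MHz) of protons in a field of B tesla."""
    return GAMMA_BAR_MHZ_PER_T * b_tesla

# At 3 T (a typical clinical field strength) protons precess near 127.7 MHz.
f0 = larmor_mhz(3.0)

# A linear gradient G (tesla/meter) along x makes the field, and hence the
# frequency, position dependent: B(x) = B0 + G*x.
def freq_at_position_mhz(b0_tesla, g_tesla_per_m, x_m):
    return larmor_mhz(b0_tesla + g_tesla_per_m * x_m)

# Illustrative 10 mT/m gradient: two points 1 cm apart differ in frequency by
# gamma_bar * G * dx = 42.58 * 0.01 * 0.01 MHz ~ 4.26 kHz, which is how the
# scanner tells them apart.
df = freq_at_position_mhz(3.0, 0.01, 0.01) - f0
```

The same relation, applied with three orthogonal gradient coils, is what gives the full 3D localization described above.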
You can make your slice in any orientation, you can do any body part. You can also image things like vessels, because you can differentiate between static tissue and moving blood — that's one type of contrast we can use. And if you are fast enough, you can do what we call dynamic imaging, and then you can image things like heart motion. In daily clinical routine this is used in millions of scans per day. It's totally non-invasive; in terms of utility it has done more for medicine than X-ray did. So this is anatomy — routine daily work. And then we can also do physiology. These examples are only in the brain — we can do physiology studies in the body too — but in the brain there are basically three main types: functional MRI, perfusion MRI, and diffusion MRI. Functional means brain function — localization of some activity. Perfusion means local blood delivery into tissue, very useful in stroke and tumor work. And diffusion measures the relative diffusion of water molecules inside white matter, which allows you to map connectivity — where the white matter fibers are going. So it's quite useful, again, in presurgical planning and things like this. At what conceptual level does this categorization exist? Is this how the procedure is done? It's different types of imaging. To acquire an image, we have to send what we call a pulse sequence — pulses of gradients and RF pulses — and each of these uses a completely different sequence, though some parts may be the same. The big difference I wanted to show you: the anatomy is what comes straight out of the scanner, because that's built in. For the others, you usually take lots and lots of data and do a lot of post-processing, and the result is later overlaid on the anatomy as a representation of what you're trying to observe. The paper we're going to discuss is only about functional MRI, which is this one. So let's look at it.
So this is an example of what we call a visual activation. A person in the scanner sees a black screen versus, let's say, a colorful screen which basically flashes at you like a stroboscope. That makes a huge activation in the back of your head. When you say the activation is visualized this way, what does that mean? Is it just water in the tissue? Yeah, I'm coming to this. Activation just means that we see an increase in activity in an anatomically correct location. I'll explain on the next slide. fMRI works using this effect: when a neuron starts to fire, its metabolic demand increases — there's a signal saying, give me more oxygen. So you get a local extraction of oxygen from the blood vessels, which increases. But at the same time the brain compensates through a mechanism called vasodilation: it starts flooding the local area with more blood. Now, oxygen in blood is carried by the hemoglobin molecule. While hemoglobin carries oxygen it's called oxyhemoglobin, and once the oxygen is extracted it becomes deoxyhemoglobin — and the magnetic properties of these two hemoglobins differ. Because we are in a magnetic environment, this is what we are measuring: a localized increase in blood flow, due to increased neuronal activity, leads to a change in local magnetic properties. That's what we want to measure. But it's a very small change — depending on the main field strength of the scanner (normally now we're using 3 Tesla scanners), the signal change is around 5%. This is just to show you that there are a lot of vessels in the brain — the brain has more stuff in it than just neurons. And at a very small scale the vessels are very dynamic: they can change locally, especially on the arterial side. Not immediately — I will show later.
Within a few seconds they will respond to local demand from increased, sustained neuronal activity. That's what we're going to measure. So how do we do this in imaging? We need a very fast imaging technique. The one commonly used is called echo planar imaging, EPI. And normally we do multi-slice imaging, not full 3D imaging, because 3D is slower. Normally we do about 10 to 15 slices, each slice 5 millimeters thick, with an in-plane resolution of only 64 to 128 pixels for a 25 cm field of view — so you are at roughly 2 to 4 millimeters in space. If you think of this as a 3D pixel, which is called a voxel, these are pretty big voxels: there are actually millions of neurons inside a single voxel. Each slice takes about 50 to 100 milliseconds of acquisition time, so a brain volume takes about 1 to 1.5 seconds to acquire. And then we acquire 100 to 200 of those volumes over time, one every 2 to 3 seconds. While we are doing this, the person experiences some kind of activation paradigm — you are in one condition, then you change the condition — and we look at how the signal changes. For the analysis, there are normally some preprocessing steps. Maybe the patient is moving in the scanner, so you need motion correction, because otherwise over time you don't know where you are. There is slice timing correction, because those 50 milliseconds per slice mean different acquisition times for different parts of the brain. Normally people apply some spatial smoothing, usually Gaussian in 3D. And then the actual analysis, which is the subject of the paper, is the statistical analysis: you have this 3D collection of voxels, and you look at how the signal intensity changed in time depending on the paradigm that was used. You're trying to find signal changes that look similar to the paradigm, and in the end, in general, you get a statistical map.
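The detection scheme just outlined — convolve the stimulation paradigm with a hemodynamic response and look for voxels whose time series match the result — can be illustrated in a few lines. A toy sketch; the gamma-shaped HRF, timings, and amplitudes are made-up illustrative values, not any package's canonical model:

```python
import numpy as np

TR = 2.0                      # seconds per acquired volume
n_vols = 120
t = np.arange(n_vols) * TR

# Block paradigm: 20 s off / 20 s on, repeated (condition A vs condition B).
paradigm = ((t // 20) % 2).astype(float)

# A simple gamma-shaped hemodynamic response peaking a few seconds after
# onset (illustrative shape; real packages use calibrated models).
ht = np.arange(0, 30, TR)
hrf = (ht ** 5) * np.exp(-ht)
hrf /= hrf.sum()

# Expected BOLD signal: the paradigm convolved with the HRF, which delays
# and smooths the blocks the way the vascular response does.
model = np.convolve(paradigm, hrf)[:n_vols]

# A fake "active voxel": baseline + a few percent modulation + noise.
rng = np.random.default_rng(1)
voxel = 100 + 2 * model + rng.standard_normal(n_vols)

# Correlating the model against the voxel time series is the simplest
# version of the statistical test; real analyses use a GLM per voxel.
r = np.corrcoef(model, voxel)[0, 1]
```

Running this per voxel over the whole volume is what produces the statistical map described above.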
And you want to say: this is an active area versus this is a non-active area. That's what activation means in this context. You can either project it on a different anatomy, or do some fancy 3D reconstructions, but that's mostly cosmetic — usually you don't really need it. For the statistical analysis, people have tried more or less everything that's known in statistics, to see who can find a better way. I don't know 90% of it; it's too much. Just to mention something: again, fMRI does not measure neuronal activity directly. It's a metabolic imaging technique — we are measuring a vascular response to neuronal activation. And we actually know that this response happens 2 to 4 seconds after the onset of neuronal activation. These are things that need to be taken into account: either you model them, or you try to measure them, which is not easy. But if you do have a model — what we call a hemodynamic response function — you can convolve your paradigm with it and use that as the model to look for your activation areas. Any questions? So perfusion — is fMRI sort of a subset of perfusion, because you're measuring blood flow of sorts, and then you just add a stimulation? Perfusion in the MR context means measuring what's called cerebral blood flow over the whole brain, independent of any task. And here it's a change in... It's a change in perfusion slash oxygen extraction, because you can tell the difference between oxyhemoglobin and deoxyhemoglobin. People who do quantitative fMRI will separate all this into cerebral blood flow maps, oxygen extraction fractions, and cerebral blood volumes — those are the three main parameters. And what people use perfusion for clinically is just to know whether the rate of delivery of blood into the tissue is okay — versus: oh, this person had a stroke, therefore there is insufficient perfusion of the tissue. Okay? Okay, so, the paper.
So this is roughly what you need to know. Of course, there are tons of details of how people do statistics, and actually a lot goes into the preprocessing: how do you motion-correct, how do you normalize the data — there are lots of unsolved issues there. So what did this paper do? Well, first things first: the actual quote in The Register does come from the paper, and it's really bad, and the authors got really criticized for it. They've already submitted a correction to say, sorry, we didn't mean that. But again, the big problem with The Register piece is that it's a quote taken out of the context of something much more narrow. That's the biggest problem with that article. So what have they done? One thing first: people are now doing something called resting-state fMRI, where you put a subject into the scanner and scan him like a regular fMRI scan, but with no actual activation going on. People have tried to use it to take a portion of the brain and see which other areas of the brain respond just like it — natural patterns in the brain, if you want. Using this, you try to find what we call connectivity between brain regions: maybe motor cortex activity is correlated with some prefrontal cortex activity, maybe time-delayed, but otherwise a similar pattern. So it's potentially a kind of connectivity measure — but resting-state fMRI has no activation. So what they did: they took resting-state fMRI from 500 subjects and ran fake analyses — they pretended each subject had been experiencing one of four different activation paradigms. Then they tried a few variations of parameters, like the smoothing, or the type of statistical test to run, and whether you do voxel-by-voxel or cluster-based inference. The whole "cluster failure" thing is about these clusters.
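The resting-state idea mentioned above — pick a seed region and find which other voxels' time courses behave just like it — is essentially a seed-correlation map. A toy sketch; the sizes and the planted "network" of correlated voxels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n_t, n_vox = 150, 500

# Fake resting-state data: most voxels are pure noise, but a "network" of
# 20 voxels shares a common slow fluctuation with the seed.
shared = rng.standard_normal(n_t)
data = rng.standard_normal((n_vox, n_t))
data[:20] += shared

seed = data[0]  # pick one network voxel as the seed region

# Connectivity map: correlation of every voxel's time course with the seed.
seed_z = (seed - seed.mean()) / seed.std()
data_z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
conn = data_z @ seed_z / n_t

# Network voxels correlate strongly with the seed; the rest hover near zero.
```

The map `conn`, thresholded, is the kind of connectivity result the talk describes — with no activation paradigm involved, which is exactly why this data makes a good null testbed.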
So this is one of the main results. Forget about the titles, but basically: if you use a cluster-defining threshold at a p-value of 0.01, and — since we know it's null data, no activation is possible — set the significance level at 5%, then we expect to see 5% false positives. We expect them; these are the expected error bars. And instead we see anything between 20 and 40%. So something is going wrong. But if you make the cluster-defining threshold ten times more restrictive, it goes down, right? But there are still examples of 20%. Yeah, yeah — but that's not 70. So actually this is nowhere near "70% of all fMRI studies". But after you uncover something like this, isn't the reasonable thing to do to just take all the raw data from previously published studies and rerun it? Coming, coming. So, they tested all the standard statistical toolboxes which are available and which people use as standard tools. And the result producing the 70% figure comes from this figure, where they test the most common way of analyzing data without using a proper cluster-defining threshold. It basically means people do very simple statistics, get a noisy image out of it, threshold it at the voxel level at, say, an alpha of 0.005, and then say: I want five voxels together, and the rest I discard. They don't really calculate the probability of getting such an activation cluster by chance — there's no proper statistics involved. If you do this, you get your false-positive rate up to 70% or even higher. So that's where the 70% is coming from.
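The ad-hoc procedure just described — a voxelwise threshold plus an arbitrary "N voxels together" rule, with no accounting for the smoothness of the noise — can be shown to misbehave on pure noise. A toy 2D simulation, with smoothed white noise standing in for null fMRI maps; the parameters are illustrative, and this is an illustration of the effect, not a reproduction of the paper's analysis:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(42)
Z_THRESH = 2.576          # voxelwise one-sided p ~ 0.005
MIN_CLUSTER = 5           # the "five voxels together" folk rule
n_maps, hits = 200, 0

for _ in range(n_maps):
    # Null statistical map: white noise smoothed to mimic the spatial
    # correlation of real data, then rescaled back to unit-variance z-scores.
    noise = gaussian_filter(rng.standard_normal((64, 64)), sigma=3.0)
    zmap = noise / noise.std()
    # Any connected blob of >= MIN_CLUSTER suprathreshold voxels counts
    # as a (false) "activation".
    labels, n = label(zmap > Z_THRESH)
    if n and np.bincount(labels.ravel())[1:].max() >= MIN_CLUSTER:
        hits += 1

# Because smoothing makes suprathreshold voxels clump together, blobs of
# five-plus voxels appear by chance far more often than the nominal 5%.
fwe = hits / n_maps
print(f"family-wise false-positive rate: {fwe:.0%}")
```

This is the qualitative mechanism behind the 70% figure: the voxelwise threshold controls per-voxel error, but says nothing about how often smooth noise produces a blob of the required size.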
So what this paper is talking about actually only applies to certain analyses. You can do fMRI on a single subject, which is actually amazing: you can take a guy, put him into a scanner, have him finger-tap, and you know exactly where it's happening; or do a language task, or a visual task, and it's anatomically sound. We know it works. It starts breaking down when you start doing free-will and political-orientation stuff, and people are trying to do this: put people in the scanner, show them Trump and Hillary, and hopefully get a Nobel Peace Prize for the study. I really hope this happens. But you can do a lot of good fMRI work if you know what you're doing. You just need to be sure you have a statistician on board, as well as a physicist who knows a little bit about your noise distribution. The biggest problem raised in the paper: when you do smoothing as a preprocessing step, most people just apply Gaussian smoothing so they can run regular random field theory — regular Gaussian statistics. The assumption is: my spatial errors should be Gaussian, so I smooth, and of course I get them. But it turns out the spatial autocorrelation function has much heavier tails than was assumed before. The spatial noise is not Gaussian, and that's the main reason why the cluster statistics don't work as they were expected to work. That's the main message of this paper — but again, only if you do this kind of cluster-based statistical analysis to define clusters of activation. Regarding the software bug: the bug existed in a single package, in a part that, I would say, 99% of users never use. No, not software. No, no — but you said that most of the packages assume a Gaussian distribution. Yeah, yeah. It was a wrong assumption, and it was an assumption shared by all the packages. But it's not a software problem, it's a statistical problem.
There was a real software bug that was discovered, but it was so minor that it's insignificant — at least for the Hacker News crowd. Yeah, because it's in a very small part of the pipeline. The big problem with functional MRI is that people try to be creative with what they measure with this tool. Currently we barely know how it works at the physiology level, and we barely know how to apply it beyond rough things like motor or visual tasks. It's very useful clinically. It's not useful in political science. I think it's a wider problem: every few years I see a crop of papers saying that in this or that domain we have a lot of false positives. And that's normal now, because most of our scientific results are the product of software processing and heavy statistical processing. Yeah. As I understand it, MRI starts with recording a radio signal after the pulse, so you need to go through a lot of steps before you get any 3D image. Yeah, absolutely. And every one of these steps may be broken because somebody made a wrong statistical assumption. That's why. So, there's a very good blog post about this in Discover magazine, by a blogger called Neuroskeptic. He did a very nice summary of what went wrong and what didn't. And the good thing is that a lot of scientists came to comment on it — on what the actual issues are with the noise characteristics, and where work still needs to be done. It's a very nice blog post if you want to dig in. The big takeaway lesson of this research is that open data is of vital importance. In medical research, data sharing is almost non-existent. I cannot go and say: hey, I like your project, give me your data, I want to reanalyze it. It just doesn't happen. Only recently have people started creating repositories of datasets.
So this paper was only possible because of the Open Connectome project, which made data available for free to anybody — that's how these 500 subjects were available for this study. So does that mean they couldn't do the obvious thing, which is to rerun on raw data from previously published studies? That's the problem — people publish because that's what they need to do to get their salary from their university, and they don't much care what happens to the data afterwards. Yeah, but it's important to be outraged. Yes, it's good that more people realize this is not happening, so you can go and say: hey, I'm paying my money for this research. This actually happens not because of the researchers — it's normally blocked at the university level, because there are either no policies, or people freak out, or it's patient data; there are a lot of barriers to distributing data in an easy way. Only very few projects make data easily available to non-scientists, or even within the scientific community: just because I work in this field at this university doesn't mean I can easily go and request data from another place. It just doesn't exist; it's only happening just now. Some data does get around: I used to be in the UK, and I remember asking for a large amount of data from a website — they had maybe 12 studies — and I asked for the CD, and they sent me the CD for free. I was using SPM5 at the time. And over time, a lot of studies dealing with the... I think FPDC or TC... So it exists... So it either exists because of an existing agreement, or as a collaboration study, because it's really hard to scan hundreds of people at one site — you normally distribute it across sites, with an agreement that everyone will have access to all the data. That's one possibility.
So now more and more people are starting to share data, because more and more people say: we actually want others to replicate our results, otherwise we can't be sure. It gives extra weight to the study — whereas the default is still: no, no, we don't want to share the data, and even less the analysis methods. But it's slowly, slowly coming. Big centers — some place like Oxford or whatever — do provide some data. But working in the US, of course... Another huge problem is that scientists have no idea how to store and distribute this data, because most of it currently sits on USB drives on somebody's desk. Let's not call them scientists then, because basically what they're saying is that they're not really interested in sharing the data. They are not computer scientists; they don't know how to share terabytes. The point of sharing is that others can discover that your results are wrong — you should want to be proven wrong. That's the platonic ideal. Which really doesn't exist in reality, unfortunately. University-level friction is very common as well, even if the researchers themselves don't care. I mean, even before talking about data sharing, access to journals is still a huge problem. The open access movement has started... That it has. As a scientist, you can publish your... but it's not happening, because there are strange agreements between universities and publishers and things like this. Again, it's not directly in the hands of scientists. It sounds like what you're saying is that this is mostly a social slash political slash cultural issue? Yes. Isn't there also a feasibility problem? Because my understanding is that, particularly with this kind of medical data, there are enormous volumes of it, right? But YouTube solved that a long time ago, and CERN is producing something like ten times more data than YouTube. So we know the technology; it's not a big technology problem.
But it's expensive, so... But again, the example you're giving is one where there's a particular group of people working on this. Are they sharing all of their data? Currently, in our field, I would say maybe 2% of people share data. So there are a few projects that do — like the Open Connectome project, or the ADNI project, which is an Alzheimer's dataset project. This one is from China — Beijing, I mean. There's also a dataset from... Yep. So there are groups that share for specific projects, but it's not like: I published a paper, here's a GitHub repo for my data. That doesn't exist. Someday. Some results will probably not be replicable. Yep — but we won't know until we try. Another question: we may be open to sharing data, but what about people who are very concerned about privacy and the scans that... No — the anonymization part is, again, a solved problem. I know, but... We can solve this problem. When you go for a survey, how many people actually choose to opt out? It's a social issue, but you can... It could be an inclusion or exclusion criterion for participation. You introduce a bias of somebody not willing to share his... So, basically, before you get scanned, normally people give you a tick box: do you want... Yeah, but the problem is if 70% of people tick no... Then you don't scan them, or you... There is an issue with privacy, but it's a known problem and we know how to deal with it in terms of anonymization. That's not a big problem. You said anonymization is a solved problem — I'm quite curious about that. Sorry? You said anonymization is a solved problem. I'm curious about that, because my understanding is that de-anonymization is a significant research issue. De-anonymization.
De-anonymization of something at the level of location data is simple, because you can independently get a second set of that data. Now, if I were to scan your brain, I could potentially find your brain within a dataset of brains — but I can't scan your brain without access to you. Access to your brain data is harder. What I'm saying is that this de-anonymization argument is very specific; it's not a generalized argument. Okay. Let me also explain: whenever somebody goes into a scanner in a medical facility, you have all the details — the name, maybe the ID, the age, and things like this. When the data comes out of the scanner, there's a huge chunk of metadata attached. We know how to cut this chunk off, and then you just have raw data, which is just a collection of pixels of your brain. It's virtually impossible to identify a person based on axial scans of his brain. It's not like a fingerprint, right? I would have guessed that things like noise would act as a fingerprint of where a scan was done. Again, you can only fingerprint it if you can get that noise in another way. No, no — I'm saying, at the very least, you could do things like — this is completely hypothetical, of course — if I have a whole bunch of scans, I can do some sort of component analysis and find... But you can't identify distinct people that way. At most you can say: guess what, this guy had a brain scan twice. That's all you're going to get with component analysis. And even then it's not obvious. Maybe you identify the brand of MR machine, but it's not obvious that it's the same guy. Sorry? It's not obvious to find where the scanning was done.
If I have a multi-slice acquisition with slightly different angulation, and I don't know how to reorient the brain, I won't have enough data — I won't be able to say. I mean, maybe 50% of the time. I don't know — usually you take the whole volume of the whole study, and you can actually do some ICA, et cetera, and maybe study the noise of that particular scan. Yeah — but it's not easy even to identify the brand of the scanner. There are bigger problems with things like blood samples and genetic analysis, because those can reappear in completely different, unrelated studies, and then you can maybe trace them. If you start doing this, and then combine it with, say, anatomy images, and link those together — as you add more and more information, maybe at some point you will be able to associate an identity with this biomedical data. But isn't that an issue only with what we have already termed the junk part of it? Because if you do a study on my brain activation when I see some flashing light, so what if you can identify me? Rest assured? No. With the political stuff, I can see how... okay, this is your brain activation on various porn images. People have tried to use fMRI as a lie detector device, and failed miserably. No, no — I'm not saying there is no issue. Mine is the brain which lights up in this particular way if you show me a flashing checkerboard. There is an issue with privacy, but we mostly know how to deal with it; in a sensible way, within a scientific context, we can deal with it relatively well. If we want to advance the science, we need people to participate and be willing to share data, and we just need to reassure them that their direct personal information will never be shared — you can give fake names or fake IDs, randomize things like this.
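The "cut the metadata chunk off, keep a fake ID" workflow described here can be sketched as a toy metadata scrubber. This is not a real DICOM anonymizer — real pipelines operate on the scanner's DICOM tags with dedicated tools — and the field names and the keyed-hash pseudonym scheme are illustrative assumptions:

```python
import hashlib

# Hypothetical header fields; real DICOM headers carry hundreds of tags.
IDENTIFYING = {"PatientName", "PatientID", "PatientBirthDate", "InstitutionName"}

def anonymize(header: dict, site_secret: str) -> dict:
    """Strip identifying fields, but keep a stable pseudonym so repeat scans
    of the same subject can still be linked within a study."""
    pseudonym = hashlib.sha256(
        (site_secret + header.get("PatientID", "")).encode()
    ).hexdigest()[:12]
    clean = {k: v for k, v in header.items() if k not in IDENTIFYING}
    clean["SubjectCode"] = pseudonym   # same subject + same secret -> same code
    return clean

header = {"PatientName": "DOE^JANE", "PatientID": "12345",
          "PatientBirthDate": "19800101", "StudyDate": "20160701",
          "SliceThickness": "5.0"}
print(anonymize(header, site_secret="per-study random key"))
```

Keying the hash with a per-study secret is one way to stop dictionary attacks on short patient IDs, as long as the secret never leaves the site; the pixel data itself is untouched, which is the point made above — the pixels alone are not a fingerprint.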
But directly from the data, it's almost impossible to identify a person. Who would be in a position to push for more data sharing? Mostly — actually, high-level university administrators would be a good target. And publishers could also impose it... Well, publishers are becoming irrelevant, because people are already going the open-access way. There is a precedent in some domains: every study in a particular domain has to deposit its data somewhere. Yes, there are fields where journals are starting to require raw data, but they also provide the service. Some publishers already provide it — say, in bioinformatics, for 3D models of proteins, you can already upload in known formats, and it has to be there, because in your paper you have to provide the link to the dataset. But again, it's a minority of fields where this exists. Astronomy is doing well in data sharing — astronomy is a similar field in terms of data science. Particle physics is doing well; they share most of their data. Some biochemistry is happening, but that's it. Yeah, of course. There's also a problem of scale: you can't just say, okay, I want all the CERN data, give me 100 petabytes — at best I'll get 10% of the data. Yeah, but a couple of terabytes today is a couple of petabytes tomorrow, right? For most researchers it's really hard to manage data, because people are used to managing Excel files, and even then they have trouble. You have a postdoc who leaves after four years — and then who takes care of the data? Redo the study? No, really — we have research groups that have existed for decades. So what exactly is the gist of the paper? Is the reason for the false positives basically that the statistics were deficient, or was it... So the main reason is our assumptions about the spatial autocorrelation of the noise — the spatial noise that exists in our data.
Our Gaussian model is definitely incorrect, and few people are, how to say, actually trying to solve this problem. Most people just want to run a study, push it through a black-box analysis, get nice activation images, and make some claims about cognitive science. No — because you need to know what kinds of thresholds they were using and what preprocessing steps were actually done. When they published additional information for the paper, they of course said this does not affect all studies. The current estimate is that it affects something like 3,000 studies, because roughly that many used this kind of analysis technique. But still, the big problem is the assumptions about the noise in the data. And because we now have more open data, hopefully more people will start working on this and exposing it more. So the really good thing about the paper: it tells us that we have a problem. All right?
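To make the heavy-tail point concrete: the correction later adopted in AFNI, as I understand it, models the spatial autocorrelation not as a pure Gaussian but as a Gaussian-plus-monoexponential mixture, whose exponential term dominates at large distances. A numeric sketch with illustrative, not fitted, parameters:

```python
import math

def acf_gauss(r, b=2.0):
    """Squared-exponential (Gaussian) autocorrelation: the classic assumption
    behind random field theory cluster inference."""
    return math.exp(-r * r / (2 * b * b))

def acf_mixed(r, a=0.6, b=2.0, c=6.0):
    """Gaussian core plus a monoexponential tail (the shape of AFNI's later
    ACF fix); a, b, c are illustrative parameters, not fitted values."""
    return a * math.exp(-r * r / (2 * b * b)) + (1 - a) * math.exp(-r / c)

# Near r = 0 the two shapes are similar, but a few kernel widths out the
# exponential tail sits orders of magnitude above the Gaussian. Long-range
# correlation like this makes large noise clusters far more likely than
# Gaussian-tuned cluster statistics assume.
for r in (0.0, 2.0, 6.0, 10.0):
    print(f"r={r:4.1f}  gauss={acf_gauss(r):.2e}  mixed={acf_mixed(r):.2e}")
```

The mismatch between these two curves at large r is, in miniature, the talk's main message: the problem is the noise model, not the code.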