Hi, good afternoon, and thank you for coming to this lecture by a TSVP visitor. I'm very happy to have Professor Ikeda here, so let me introduce him briefly. He graduated from the Department of Information Physics of the University of Tokyo, which sounds quite new actually, in 1996. He immediately became a special postdoctoral researcher at RIKEN, where I was also working, from 1996 to 1997. Then, skipping the middle, he became associate professor at the Institute of Statistical Mathematics in 2003, and in 2016 he became a professor at the same institute. He is also a visiting scientist at the Kavli Institute of the University of Tokyo and a professor at the National Astronomical Observatory. According to his page, his expertise is in information geometry, astrostatistics, and signal processing, and he is one of the rather rare Japanese scientists willing to analyze real-world data. Today he is talking about his great achievement in black hole shadow imaging. Please welcome him.

Thank you, Fukai-san, for the kind introduction. Today I'm going to talk about black hole shadow imaging. I'm a data scientist, not an astronomer, but I have been involved in this project for roughly eight years, so I'll talk about how these images were taken. I understand there are some people here who are not scientists, so I'm trying to remove all the math while keeping it sensible. In case I have to wear a mask, I put my picture here on the slide. As Fukai-san kindly mentioned, I'm staying in lab E-13 just over there until the end of July. I work on different types of theory and different types of data: I have worked with brain-science people and also on fishery resources. Recently, though, most of my time goes to astronomical data analysis. So let's start my black hole talk with the introduction.
The image we are showing was first reported in 2019, and just one month ago we released another one. But the data were taken in April 2017, five years ago. After two years we released this image of the M87 black hole. M87 is the name of the galaxy; at its center there is a supermassive black hole. To distinguish the two, we put a star: M87* means the strong emission coming from that central object, since M87 alone can mean the galaxy itself. After that, we used another three years to announce this one [Sagittarius A*]. It took a long time, and people may wonder what these guys have been doing, but a lot is going on behind the scenes, and I will explain that during this one-hour talk.

First, the beautiful pictures. M87 lives in the northern-hemisphere sky, in this direction. If you zoom in (this combines many pictures from different telescopes), this is the galaxy M87, and we know there is a strong jet; this is a Hubble image. If you keep zooming into the center, finally you see the black hole. This is the result. What we know about M87* is that the actual size of this donut shape is enormous: the whole solar system fits inside the black part. It is so big that even light takes several days to travel from one side to the other. The mass is 6.5 billion times that of our Sun, and the distance is 55 million light years, so it is quite far. The mass is determined by our result; the distance was already known. The object emits a lot of light, so it is very bright across different wavelengths, and we observe a strong jet coming out from the center.
We say that a black hole sucks in everything, and yet a big jet is coming out of it. It's very strong: the speed of this jet is close to the speed of light, so something very energetic is going out. There must be some mechanism making this possible, but it's not fully understood; there are theories, but no conclusion yet. So we know there is a strong jet, and we don't know how it is created. Also, what we see is not the event horizon itself: the ring is roughly 2.5 times bigger than the event horizon. I will come back to that later.

Now the other one, Sagittarius A*. It lives at the center of our galaxy: the Milky Way has a supermassive black hole at its center, and it is a southern-hemisphere object. This image, too, combines results from many different telescopes. Maybe you have heard about LIGO. LIGO detects gravitational waves, including many black hole mergers. Those black holes are much smaller, like 10 or 100 times the mass of the Sun, and there are a lot of them. But we believe a single supermassive black hole sits at the center of each galaxy. It is super big and very active, meaning it emits a lot of light, so we can observe it with telescopes. The small merging ones do not emit strong light, so we cannot observe them with a telescope. Sagittarius A* is located at the center of our Milky Way, and it is very small compared to the previous one: Mercury's orbit is about this size. But of course it is still supermassive, four million times the mass of the Sun.
The distance is closer than the other one: 26,000 light years. Still quite far by everyday standards, but astronomers say it is close. From this, general relativity predicts the size of the shadow, which gives a direct estimate of the mass. And there are other observations, which I'm coming back to right now. This object is not sending out any strong jet, and it is a well-known object, so many observations have been done. A famous one uses infrared telescopes to watch the center of the galaxy: there must be something super heavy here. They have been doing very careful observations for many, many years, and from these stellar orbits they can estimate the mass. There must be something compact and super heavy here; otherwise they cannot explain the orbits. That also gives a clue about the mass of Sagittarius A*. This work is connected to the Nobel Prize in 2020: one of the three winners received it for finding a supermassive compact object at the center of our galaxy. Note that the Nobel committee did not use the words "black hole" for this result. They left room for us to show that, OK, this is a black hole. That is very good for us.

Comparing M87* and Sagittarius A*: Sagittarius A* actually looks a little bigger on the sky because it is closer. M87* is more than 1,600 times bigger in size, but it is also more than 2,000 times farther away, so on balance Sagittarius A* appears slightly larger. I think you can follow that math. So, what do we know about supermassive black holes, and what is that ring shape, the photon ring as we call it? Let me explain. (I'm not an astronomer; I learned this from the astronomers.)
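The "bigger but farther" balance can be checked with the numbers quoted above, since the ring size scales with the black hole mass. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the "bigger but farther" balance,
# using the masses and distances quoted in the talk.
M_m87, M_sgrA = 6.5e9, 4.0e6          # black hole masses [solar masses]
D_m87, D_sgrA = 55e6, 26e3            # distances [light years]

size_ratio = M_m87 / M_sgrA           # M87* is ~1600x bigger
dist_ratio = D_m87 / D_sgrA           # ...but ~2100x farther

# Apparent (angular) size is physical size divided by distance.
angular_ratio = size_ratio / dist_ratio
print(f"M87* is {size_ratio:.0f}x bigger and {dist_ratio:.0f}x farther,")
print(f"so Sgr A* appears about {1/angular_ratio:.1f}x larger on the sky")
```

This reproduces the statement in the talk: despite the enormous size difference, the two shadows have nearly the same angular size, with Sagittarius A* slightly larger.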
As I said, astronomers believe that at the center of every galaxy there is a supermassive black hole. In many galaxies there is something super bright at the center; they call it an active galactic nucleus, and they believe it should be, it must be, a black hole. But if you are not 100% sure, it's better to give it another name, so they call it an AGN. They believe there is something super heavy and super energetic at the center of each galaxy, and it should be a supermassive black hole. A lot of strange things happen there. One is, of course, the jet: how is it created? We don't know. This is just an illustration. There are many simulations and theories; with what they call GRMHD, general relativistic magnetohydrodynamics, they can create beautiful movies by changing some parameters. But nobody knows the true parameters describing the jet or the mechanism of the supermassive black hole. So of course we want to observe it and check what is true.

The other thing, answering Doya-san's question, is the event horizon. If you pass through that surface, nothing can come back. You cannot see the event horizon itself because it is completely black. What we can see is roughly 2.5 times bigger than that: there is a stable photon orbit, and we can see photons coming from that region. If there is an event horizon like this, light in the black region will not come back. Looking at the black hole from the right, you receive no photons from the black part, but you do receive photons from the blue part.
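The "roughly 2.5 times bigger" statement is a standard general relativity result: for a non-rotating black hole, the shadow radius is sqrt(27)·GM/c², about 2.6 Schwarzschild radii. A small sketch, plugging in the M87* mass and distance quoted earlier (constants rounded; the published measurement is about 42 microarcseconds):

```python
import math

# Predicted shadow size for a non-rotating (Schwarzschild) black hole:
# shadow radius = sqrt(27) * GM/c^2, i.e. about 2.6 Schwarzschild radii.
G, c = 6.674e-11, 2.998e8            # SI units
M_sun = 1.989e30                     # kg
ly = 9.461e15                        # meters per light year

M = 6.5e9 * M_sun                    # M87* mass from the talk
D = 55e6 * ly                        # M87* distance from the talk

r_shadow = math.sqrt(27) * G * M / c**2
theta_rad = 2 * r_shadow / D                       # angular diameter
theta_uas = theta_rad * (180 / math.pi) * 3600e6   # radians -> microarcsec
print(f"predicted shadow diameter: about {theta_uas:.0f} microarcseconds")
```

The result is around 40 microarcseconds, which is exactly the "donut on the Moon" scale discussed next.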
That is why you see a black hole shadow. It's difficult to keep saying the precise thing to everybody; we sometimes say "we saw the black hole", but strictly that's wrong. We are not seeing the black hole, we are seeing the black hole shadow. That is how this ring is created, and basically, from any viewing direction it should look like a ring. If it is rotating, the bright part may be different, but it should still look like a ring. That is what general relativity predicts.

The project started around 2009, they say. Some people said: let's make a telescope big enough to see, not the event horizon itself, but the scale of the event horizon. There were good estimates of the sizes of the supermassive black holes of our galaxy and of M87, so they thought that with a huge enough telescope, maybe we could see the ring. In particular, Shep Doeleman took up the idea and spent ten years to create the image, a great effort. The point is that this gives a visual proof of the black hole and a test of general relativity: is it really a ring? We can also probe the magnetic field around the black hole with polarized-light imaging, which we have done for M87. If we know these things, maybe we can attack problems like how the strong jet is produced. To answer such questions, they started the Event Horizon Telescope project.

But it is super difficult. The resolution we have to achieve is something like this: zoom in on the Moon, and keep zooming until you can see a donut on the Moon. This is the size of Sagittarius A*; the recent image is about this size, and M87* is slightly smaller.
We have to reach this resolution. Of course we could not actually see a donut on the Moon, because a donut does not emit radio waves; we can only see objects that do. But supermassive black holes are active and emit strong radio waves, so we should be able to see them, at this tiny size. So it is super difficult.

There are estimates of the supermassive black hole sizes: the Milky Way center is this size, and M87* is this size, more than 1,500 times different, and ours is much closer to us. Astronomers use a slightly odd unit here: a parsec is about 3.3 light years; you can forget about that. For the apparent sizes they again use a unit unfamiliar to most of you, but roughly, Sagittarius A* appears slightly bigger than M87*. The next biggest candidate is this one, and its apparent size is about one fifth, just 20%, so it is quite difficult to see. So the main targets where we can see the ring are these two. It's always a balance between size and distance: a small one close by, or a big one far away. These two are the ones we can possibly catch.

The angular resolution of a telescope depends on the wavelength it receives and on the diameter of the telescope. Compare an optical telescope (this one is a big telescope on a mountaintop in Hawaii) with the Event Horizon Telescope, shown schematically here. The wavelength of visible light is less than one micrometer, say 500 nanometers. The Event Horizon Telescope observes at 1.3 millimeters, so the wavelength is maybe 2,000 times longer. There we lose a lot: that's bad for angular resolution.
But the size of a real mirror cannot go much beyond a few tens of meters. A 30-meter telescope is now being built with Japanese participation, and even 30 meters is really hard and needs a lot of money. In our case, we combined telescopes that are more than 10,000 kilometers apart, so the effective diameter is vastly larger than any single dish. Computing the balance again, we get much better resolution than optical telescopes, and the calculation at the start of the project said that if we could combine them well, we could resolve the donut on the Moon. (Optical interferometry, by contrast, can only combine telescopes a few kilometers apart, and I don't think it has yet succeeded in producing images like these.)

In April 2017 there were six locations but eight telescopes, two in Chile and two in Hawaii, so in total eight telescopes participated in the April 2017 campaign. We didn't build any new telescope for this project; we just borrowed time on all eight telescopes during the same week in April. It's interesting: telescopes are usually run by different countries, but they keep some operation time open to the public. Anyone can write a proposal for the Hubble telescope or any of these telescopes, say, to search for aliens, and if the proposal is accepted, they allocate time for you, basically for free; it's included in each telescope's operating budget. The Event Horizon Telescope partly uses that scheme. I don't think it is entirely free, but many of the telescopes are happy to collaborate with the Event Horizon Telescope if it means seeing the black hole. We did need to install some new equipment, so we paid some money.
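The resolution comparison above follows from the diffraction limit, theta roughly wavelength over diameter. A quick sketch with the illustrative numbers from the talk (500 nm on a 10 m mirror versus 1.3 mm on a 10,000 km baseline):

```python
# Diffraction-limited angular resolution: theta ~ wavelength / diameter.
# Numbers are the illustrative values from the talk.
RAD_TO_UAS = (180 / 3.141592653589793) * 3600e6  # radians -> microarcsec

theta_optical = 500e-9 / 10.0       # 500 nm over a 10 m mirror
theta_eht = 1.3e-3 / 1.0e7          # 1.3 mm over a 10,000 km baseline

print(f"optical: {theta_optical * RAD_TO_UAS:.0f} microarcsec")
print(f"EHT    : {theta_eht * RAD_TO_UAS:.0f} microarcsec")
```

The 2,000-times-longer wavelength is more than repaid by the million-times-larger aperture, leaving the EHT a few hundred times sharper than a large optical telescope, and in the tens-of-microarcseconds regime needed for the shadow.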
I mean, the Event Horizon Telescope spent money to install a lot of new things, but it did not build a new telescope; that is a very good point about the project. In 2017 these telescopes participated. In 2016, in Boston, we had a meeting, I think the second meeting of the Event Horizon Telescope, with about 200 people and 13 stakeholder institutes. I'm here somewhere in the photo; you cannot find me. In 2019, after we released the M87 image, we were all relaxed and really happy, and we had a meeting in Hilo, Hawaii. The sky was not so clear, but Hilo is always like that. By then the collaboration had grown to 250 or 260 members, I think, and now we have more than 300 members from 80 institutes in 21 countries. It's getting quite big. Compared to LIGO and such it is not that big, but still, 300 people: some are operations people, some are radio interferometry people, some, like me, are data scientists (maybe three or four of us), and there are theory people, general relativity people, and simulation people. It's a mix of many different backgrounds.

Now let me talk a little about radio interferometry. Radio interferometry combines different telescopes: each pair of telescopes gives us some information about the image, and these plots show the telescope pairs. By combining more telescopes we accumulate more information about the source image. These are the sample points (I'll come back to what that means later), these for M87 and these for Sagittarius A*. There are samples, but the plane is not fully sampled; I'll come back to this.
Before that, I want to show you how difficult this is and why it takes so long to produce an image. During an observation, the telescopes store all the data on hard drives. We then ship the drives to two locations, one near Boston and one in Germany, where the correlators do the correlation; after that we do calibration. These steps take months, more than six months just to process the M87 or Sagittarius A* data. Sometimes we have to redo it: send it back and ask for a small modification, then wait another six months. Finally we do the imaging, which also takes time. That is part of why we needed five years to produce the Sagittarius A* image. Even the first step, shipping the hard drives, can take a long time, because one of the stations is at the South Pole: we observe in April, then the South Pole enters winter and we cannot fly there, so the drives sit there, literally frozen, for six months until we can retrieve them. Many steps take time, and we move in very rigid steps: every process is done in duplicate by fairly independent people or independent software, and we check every step before moving forward. That is how we proceeded.

A little math about radio interferometry: it is different from the interferometers you read about in physics textbooks. The math behind it is equivalent to Fraunhofer diffraction. In Fraunhofer diffraction, there is a plane with a hole of some shape, and you illuminate it with coherent light, usually a laser. What you see on the far screen is not proportional to the shape itself; it is the Fourier transform of the shape. So this is a light-speed Fourier transform, which is very neat, and the same trick is used for diffraction imaging.
X-ray diffraction is used to determine the 3D structure of proteins; maybe some of you know it. The same thing happens there: you hit a protein crystal with coherent X-rays, record the diffraction pattern, and use an inverse Fourier transform to recover the 3D structure. That is the same Fraunhofer diffraction. So the Fourier transform is the key here, and the Fourier transform is linear, which makes it beautiful and simple.

Q: What does "coherent" mean?
A: It means the same wavelength with the phases aligned when emitted; a laser is a very good example.

In the case of a telescope interferometer, of course we cannot send light to the black hole; it would take millions of years to come back, and you cannot wait for that. What we do instead is record, at each telescope, the signal coming from the same object at the same wavelength at exactly the same time. We carry a very precise (and heavy) clock to the different telescopes, align the timing exactly, record for some days, bring the data back, and take the correlations. That is exactly equivalent to diffraction; we compute the diffraction on a computer. That is what the interferometer does, and that is why processing takes so long: the original data is more than five petabytes, and after processing, when we do the imaging, it is less than a gigabyte. We compress everything by taking correlations and so on. The whole "diffraction" of the light is done on the computer. So we observe this Fourier information.

Q: Couldn't someone send the data electronically from the South Pole?
A: Well, there are some reasons, such as security: we want to be sure that nobody has seen any of the data.
So, to be secure, we send it physically, not electronically. Thank you.

What we observe is called the visibility. A good thing about radio interferometry is that we can observe the phase. In X-ray diffraction you cannot detect the phase, but here we can. The phases are super noisy, which is another story I could talk about for hours, but in radio interferometry you basically have the phase information; that is a good point. Now, if you knew the values at all points on this Fourier plane, then, since the Fourier transform is linear, you could just snap your fingers and get the image. But the plane is not fully observed, so we have to do something, and that is what we call imaging. The Fourier transform is a linear transformation: if you knew everything, you would just solve one linear equation and get the solution. Because it is sparsely sampled, you cannot solve it; or rather, you can, but the solution is not unique, because not enough conditions are provided. So you need additional information: if you know something more about the image, you can use it to solve the equation. Imaging usually assumes something about the image. What we used is that the image is smooth (not small dots scattered all over the plane) and sparse, meaning there are large dark regions. That is our assumption for the black hole, because we expect that sort of image, and theory predicts it. So that is the kind of solution we look for. Of course, imaging also has to worry about the algorithm: when the problem becomes non-linear and non-convex, the solution often depends on the algorithm; which local minimum you reach depends on the algorithm. Imaging deals with this kind of thing.
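The "sparsely sampled Fourier plane plus image assumptions" idea can be sketched in one dimension. This is a deliberately simplified illustration: a sparse "image" is recovered from incomplete Fourier samples by minimizing a data-fit term plus an l1 sparsity penalty, using the classic ISTA iteration. The real EHT pipelines (such as SMILI) also use smoothness/total-variation terms and handle noisy complex visibilities; nothing below is the actual EHT code.

```python
import numpy as np

# Toy 1-D sketch of regularized imaging from incomplete Fourier samples.
rng = np.random.default_rng(0)

n = 64
x_true = np.zeros(n)                   # mostly dark, a few bright pixels
support = [10, 25, 40, 55]
x_true[support] = [1.0, 0.8, 0.6, 0.9]

F = np.fft.fft(np.eye(n)) / np.sqrt(n)        # unitary DFT matrix
keep = rng.choice(n, size=20, replace=False)  # sparse "uv-coverage"
A = F[keep, :]                                # only 20 of 64 Fourier rows
y = A @ x_true                                # observed "visibilities"

# ISTA: gradient step on the data term, then soft-threshold for the
# l1 (sparsity) prior. Without the prior, the solution is not unique.
lam, step = 0.01, 1.0
x = np.zeros(n)
for _ in range(500):
    grad = (A.conj().T @ (A @ x - y)).real    # gradient of 0.5||Ax - y||^2
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("recovered bright pixels:", np.round(x[support], 2))
```

Even with fewer than a third of the Fourier coefficients, the sparsity assumption pins down essentially the right image, which is the core reason this kind of regularization works for black hole imaging.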
I am involved in the Event Horizon Telescope to deal with exactly these things. The Event Horizon Telescope itself developed new imaging methods, three of them in fact. Two were used for M87; another was developed during the three years we spent on Sagittarius A*, so in total we have three new imaging methods. I won't go into their details here; I can on another occasion, if you want.

Let me describe how we proceeded for M87. There is a traditional imaging method called CLEAN, which has been used for more than 40 years. We also created new methods, which we call regularized maximum likelihood estimation. One of them is eht-imaging, developed mainly by people in Boston; another is called SMILI, our algorithm, developed on the Japanese side. So we had three methods for producing the image. Then comes a very important question: we have the data, we have imaging methods, and we got a ring, but how do we trust it? We tested on synthetic data: if a method with a given parameter setting clearly retrieves the original shape, we say that parameter setting works, and we chose those as the optimal parameters. Roughly 2,000 parameter combinations survived this process. Then we fed them the real M87 data. We had four days of data, so this process produced 8,000 images, running on Google Cloud. Here are some examples: the three imaging methods give somewhat different morphologies, so it's not perfect, but this is the quality we can attain with that data. These four panels show that if the truth is a disk, we can say it's a disk; if it is a ring, we can say it's a ring. So using these algorithms and parameter sets on the M87 data, we can distinguish between the cases.
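The survey logic just described (test every parameter combination on synthetic sources with known shapes, and keep only the combinations that recover all of them) can be sketched as follows. Everything here is a toy stand-in: the `reconstruct` function fakes a pipeline whose quality degrades with its parameters, and the fidelity score is a simple normalized cross-correlation; the real EHT surveys used their own pipelines, synthetic models, and metrics.

```python
import itertools
import numpy as np

def nxcorr(a, b):
    """Normalized cross-correlation of two images (1.0 = identical)."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

def reconstruct(truth, blur, noise, rng):
    """Fake 'pipeline': heavier blur/noise degrade the reconstruction."""
    img = truth.copy()
    for _ in range(blur):                     # crude smoothing passes
        img = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3
    return img + noise * rng.standard_normal(img.shape)

rng = np.random.default_rng(1)
yy, xx = np.mgrid[:32, :32]
r = np.hypot(xx - 16, yy - 16)
truths = {                                    # known synthetic morphologies
    "ring": ((r > 6) & (r < 10)).astype(float),
    "disk": (r < 10).astype(float),
}

# Survey grid: every (blur, noise) combination is one "parameter set".
survivors = []
for blur, noise in itertools.product([0, 1, 5], [0.01, 0.5]):
    scores = [nxcorr(reconstruct(t, blur, noise, rng), t)
              for t in truths.values()]
    if min(scores) > 0.9:                     # must recover ALL shapes
        survivors.append((blur, noise))
print("surviving parameter sets:", survivors)
```

Only the surviving parameter sets are then applied to the real data, which is why the talk speaks of roughly 2,000 combinations "surviving" before producing the 8,000 M87 images.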
And this is the result on the real M87 data, where this is the old, traditional method and these two are the new ones. We believe the new ones do a slightly better job, but of course the traditional people don't believe that; it always happens. And of course the traditional method should work too. In the end (part of this is a social process) we had a discussion and did not pick one method; we just averaged everything and produced the result. That is what we did for M87.

Now let me talk a little about Sagittarius A*, the image we announced to the public three weeks ago. Sagittarius A* is the center of our galaxy; it appears a little bigger and it is closer, yet it took three more years to produce the image, which sounds a little odd. There are two reasons. One is that it is small, so it moves fast: theory predicts it rotates with a period of possibly a few minutes to 30 minutes, while we observe it for 10 hours. The rough idea is that your exposure time is 10 hours while what you are looking at changes every few to 30 minutes; that is the picture you are taking. If the source stood still, or your shutter were fast, you could take a sharp picture like this; if it is moving, all you get is something like this blur. That is the case for Sagittarius A*. The other reason: our solar system is somewhere around here (nobody has ever seen the galaxy from the top; this is just an illustration), and we are looking through the galaxy, through the interstellar medium, to its center. So there is an effect on the image we observe, which is called scattering. What simulations say is: maybe this kind of time variability (the clock here runs much faster than real time, of course), moving like this.
Add the scattering, and this is at most what we could get even if we could take a movie of the source. Still, we can see some sort of ring structure, and that is what we want to detect. For the variability, the idea is: there is a mean image, and the residual part is treated like noise. It is not really noise, but it is something we throw away, so we model it as noise. The other effect is scattering. Scattering is tricky, but we also treat it as noise, and we concentrate on the mean structure. If we know the noise structure in the Fourier domain, we can mitigate the effect of the variability as noise. That was our strategy. I told you we had three imaging methods for M87; for Sagittarius A* we had to add new parameters to mitigate the scattering and the variability, so the number of parameter combinations increased a lot. We also had a fourth algorithm, Themis, proposed by Canadian colleagues, which uses Bayesian statistics for the imaging.

The strategy was the same as for M87: we prepared these kinds of synthetic images and asked whether the methods could distinguish between them. The information in the Fourier domain is made to look like the real Sagittarius A* data, but the sources are moving and the morphologies differ; the averages look like this. If we can retrieve these, that is our goal. Again, the imaging process: prepare these synthetic movies, create data from them, run the four imaging methods over the enlarged parameter grid, test every combination, and if a combination works well on the synthetic data, use it on the real data. In this case we got 10,000 images. We did not use the whole campaign.
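The "treat variability as noise in the Fourier domain" strategy amounts to inflating each visibility's error bar with an extra variability term, added in quadrature, that is largest on short baselines (large-scale structure fluctuates most). A minimal sketch, where the power-law form and all constants are made up for illustration (the real EHT noise budget was fitted to the data):

```python
import numpy as np

# Sketch: total error bar = thermal noise + a variability "noise budget",
# combined in quadrature. The power law and constants are illustrative.
def inflated_sigma(uvdist, sigma_thermal, a=0.02, u0=2e9, b=2.0):
    """Hypothetical variability term that grows toward short baselines."""
    sigma_var = a * (uvdist / u0) ** (-b / 2)
    return np.sqrt(sigma_thermal**2 + sigma_var**2)

uv = np.array([0.5e9, 2e9, 8e9])     # baseline lengths [wavelengths]
sig = inflated_sigma(uv, sigma_thermal=0.005)
print(np.round(sig, 4))              # largest error bar on shortest baseline
```

With inflated error bars, a static-image fit no longer over-interprets the fluctuating part of the signal, which is what lets a mean image be estimated from a moving source.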
We used only a single day, because that was the quiet day for Sagittarius A* and the day with the largest Fourier-domain coverage. These are the results of the morphology check. I think it does a good job; it is not perfect, but it's a good job, and you should understand that this is the quality we expect of our result. We are not claiming that each pixel value is correct; we are claiming that if the truth looks like this, our result looks like that. That is what we are doing. (Yes, and the noise is really complicated, but that type of noise is also included.)

So we created 10,000 images and averaged them. There are many different kinds of images among them; you can take a look. The final image is the average of the 10,000. A big question is how we cluster the 10,000 images into four classes: one non-ring, and three ring classes whose bright part sits in different places. If you look at the quality of the reconstructions, you can expect the bright part to shift; to say anything strong about the bright part we need more stations and better analysis methods. The non-ring class is more problematic, because ring or non-ring is exactly our question. Of the 10,000 images, only about 3% are non-ring. Maybe there is something there, but we don't think it's a major drawback. Still, we posed the question: could the source truly be non-ring? Not likely, because we ran GRMHD image simulations, and when we plug in a non-ring source, most of the reconstructed images come out non-ring.
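The clustering step can be sketched with a generic k-means on per-image feature vectors. Everything below is hypothetical: each "image" is reduced to a made-up 2-D feature vector (think ring contrast and bright-spot position angle), and a minimal k-means separates the groups; the actual EHT clustering used its own features and scheme.

```python
import numpy as np

# Toy sketch: cluster many reconstructions into morphology classes.
rng = np.random.default_rng(0)

# Three synthetic groups of 2-D feature vectors (hypothetical features).
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((100, 2)) for c in centers])

def kmeans(X, k, iters=20):
    """Minimal k-means: alternate assignment and centroid update.
    Seeding is simplistic (one point per equal-sized block)."""
    mu = X[:: len(X) // k][:k].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        mu = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, mu

labels, mu = kmeans(X, k=3)
print("cluster sizes:", np.bincount(labels))
```

Averaging within each cluster then gives one representative image per morphology class, analogous to the one non-ring and three ring classes described above.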
So it is quite likely that the original image itself is a ring. Of course, we applied scattering mitigation, which could in principle turn a ring into a non-ring, but that is unlikely: if we remove the scattering mitigation, a ring stays a ring, so we think that explanation is not likely. Maybe the time variability made it non-ring; that is quite plausible, because when we run moving GRMHD simulations and plug in those data, some of the reconstructions come out non-ring. So that explains what is happening. It's difficult to verify, but we believe this is, say, the narrative we can offer.

Q: When you say you have 10,000 reconstructions, are those reconstructions of the same data with different parameters, or of different images?
A: The same data.
Q: Then the time variability would be the same for all of those reconstructions. Is it just that the mitigation of the time variability causes some of them to become non-ring?
A: There are different parameters: image assumptions and also mitigation parameters. The combinations result in different images. And because the data is not fully sampled, you cannot just discard those results, so we included everything.

And this is the result. In conclusion: this is a big and difficult project. Two years for M87, and two plus three, five years for Sagittarius A*; it took a long time. Every week we had a Zoom discussion starting at 11 or 12. Anyway, it's a huge project and very exciting. The big question is the visual test of general relativity, and so far general relativity has survived. Some people say it's not a good idea to fight against Einstein, and so far that is true. I participate in this project as a data scientist, and developing new imaging methods is very interesting.
Of course that is very nice, but discussing how to build a scientific claim from data is also a very important part of data science. So I am really happy to participate in this project and discuss this kind of thing. And it is a huge collaboration, so I am getting to know people whom I could not have imagined meeting ten years ago. It is really nice to see these people. Anyway, it is super exciting.

And the EHT is continuing. In 2018 the Greenland Telescope joined, so we got nine telescopes in seven locations. In 2021 and 2022 we got two more stations, one in Europe and one in the United States, and those participated in the data collection. So we have these data, but we have not analyzed the 2021 data yet, and we have just started on 2018.

The final thing is that I am working in data science, and data science is building a claim from data. There are a lot of methods, but we also have to design the experiments, and there are a lot of problems with big data and so on. Anyway, these are the exciting parts of our research. Of course I am going to analyze more EHT data, but I also welcome discussions with people from different fields. I will be here until the end of July, so if you have any question about the EHT, or about any kind of data of your own, I am happy to have a chat with you.

Finally, a fun part. I do not know if you know this, but we released the Sagittarius A* image on 12 May, about three weeks ago. The next day, Krispy Kreme announced that they were giving one free donut to everyone, just in the United States. Nothing happened in Tokyo, so I could not get anything, but it was a really fun part. Another thing is that there is a movie on Netflix, and one of the members of the EHT was the director. It is about two projects: one is our EHT project, and the other is Stephen Hawking's last-paper project. It is a very beautiful documentary movie.
You can find me on screen for just two seconds, but if you have Netflix, find time to watch it. So this is the end of my talk. Thank you very much.

I understand that taking a picture of the black hole is kind of a big stretch, but can you test your methods on something closer, something you already have a high-resolution image of, and use that to validate your methods?

So we have already tested that. We have five days of observations, and of course we allocate time for Sagittarius A* and M87, but a good thing about radio telescopes is that you can observe through the daytime, so we also collected data on different objects. And of course this instrument gets better resolution than the known results. We checked, and released, some results for those objects, and they are really consistent with the known results, at better resolution. Those are what we call calibrators, and we do some calibration imaging with them.

So do they just coincide with the same parameters for those two?

No, no, of course we have to check the parameters for those as well. The thing is that even if you plug in these parameters, the parameters which survive will be quite different, right? If the object is different, then the surviving parameters are different. That is what happens.

So you have to redo all the assumptions?

No, no, we did not do that type of thing, because this setup is tuned for the M87 data. We did not redo everything, but we changed the parameters, did the fidelity check, compared with the known results, and so on. And we released those results as well. Yes, because this setup does not work for different objects: this one works for M87, and in this case this one works for Sagittarius A*. So yes.

I have another question, which is somewhat related. You mentioned acquiring new data in 2021-2022, I think. Is it going to be with the added telescopes?
Is that going to be data from the same black holes, or others?

Okay, so as I said, there are calibrators and we also observe some of them, but the main two supermassive black holes are M87 and Sagittarius A*. Of course we are collecting data from them, and we have also expanded the wavelengths: if we have a shorter wavelength, then we can see finer structure. So we also collected some data at those wavelengths in these years. Yes.

Sorry, I have one more. If you use the orbit around the sun, can you theoretically create a supermassive telescope?

Yeah, yeah. There was a project for launching a satellite, but it did not succeed, so we do not have a satellite version of this radio interferometry. But the United States decided to go for the ngVLA, so they are trying to build this kind of telescope all over the United States. If it is working in, say, ten years, then we will have beautiful pictures for sure. And I think this EHT result contributed to the success of that proposal, so we are looking forward to having some good pictures from the United States telescopes.

Do you see some chance for temporal resolution, so that in the near future you can get some new view?

Yes, a very good question. Actually, for this Sagittarius A* data, we tried to reconstruct a movie. But at each time stamp the number of uv-plane components, I mean Fourier components, is so small that the reconstruction was not good. So the algorithm must be updated, and more telescopes are also crucial. Of course we are looking at that, but we really need to develop a better algorithm to do it.

So that, instead of two years... Can I just jump in? Yeah, thank you very much. We are now analyzing the 2018 data; we just started in, I think, January or February, and we are sure we will get some result by the end of this year.
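The "shorter wavelength, finer structure" point is just the diffraction limit, theta ~ lambda / D. With the EHT's 1.3 mm observing wavelength and an Earth-scale baseline, the numbers come out to a few tens of microarcseconds; the baseline length below is an illustrative round number, not an actual EHT baseline:

```python
import math

wavelength = 1.3e-3   # observing wavelength in meters (1.3 mm)
baseline = 1.0e7      # illustrative Earth-scale baseline in meters (~10,000 km)

# Diffraction-limited angular resolution, theta ~ lambda / D (radians).
theta_rad = wavelength / baseline
theta_microarcsec = math.degrees(theta_rad) * 3600 * 1e6  # roughly 27 microarcseconds

# Halving the wavelength (e.g. observing at ~0.87 mm) halves theta,
# so finer structure becomes visible with the same baselines.
```

The same formula shows why a satellite or an expanded array helps: a longer baseline D shrinks theta just as a shorter wavelength does.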
So of course, once you are experienced and are seeing the same object, you have better experience and can speed up the process. Thank you.

For the online audience: one question was, is the accuracy and precision of your atomic clocks limiting the spatial resolution?

The atomic clock? No, it is not. Yes, we carry a very good clock to each location, observe the data, and later calculate the correlations based on that clock, of course. But the thing is that the telescopes are looking through different sky, so the thickness of the atmosphere is totally different at each site. And we are observing at 1.3 millimeters, and the light path length easily changes by 1.3 millimeters within a few minutes. So we actually have to readjust the clock based on the data at every moment, and that is why we used more than six months to do the calibration. Of course we need a good clock at each location, but in the end the clock alone is not enough: even if the clock is correct, the light path changes, so we have to update the timing information later by calculation. Thank you.

Another question from online: after averaging the 10,000 reconstructions, what do the bright parts represent?

Yeah. For Sagittarius A*, there are bright parts, but we are not fully sure whether they are real. We are quite sure that it is ring-shaped, but about the bright part we are not fully sure. For the M87 case, however, we believe this is evidence that the black hole is rotating. For the people who know black holes: this is a Kerr black hole, not a Schwarzschild black hole. It is not still; it is rotating. And rotation is very important to produce the jet. If it were just still, not moving, there would be no hope of explaining why the jet comes out.
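The delay re-fitting described in the answer can be sketched as a cross-correlation search between two station recordings. Everything here is synthetic: the "wavefront" is random noise, and a fixed integer sample delay stands in for the drifting atmospheric path delay that the real calibration must track over time:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
wavefront = rng.standard_normal(n)

# Station B records the same wavefront `true_delay` samples later than
# station A; in reality this delay drifts as the atmospheric path changes,
# so it must be re-estimated at every moment.
true_delay = 37
station_a = wavefront
station_b = np.roll(wavefront, true_delay) + 0.1 * rng.standard_normal(n)

# Correlate the two recordings and take the peak lag as the delay estimate
# (circular cross-correlation via the FFT correlation theorem).
spectrum = np.fft.fft(station_b) * np.conj(np.fft.fft(station_a))
xcorr = np.fft.ifft(spectrum).real
estimated_delay = int(np.argmax(xcorr))  # recovers 37 here
```

Once the delay (and its drift) is fitted out, the aligned signals can be correlated to form the uv-plane components that the imaging step consumes; that alignment is what the months of calibration work amount to.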
But because this is rotating, and the bright part is a clue for rotation, we believe it marks the starting point of the jet. So this is an indication of that.

Maybe one last one: what are your thoughts on the James Webb telescope's data on rotating black holes?

Of course, super clear images are coming out from James Webb; they are beautiful. But I think it still cannot reach this resolution. Every telescope has its own territory, and ours is well designed for this kind of resolution. So it is slightly different, but of course there should be good collaboration on black hole science, I think.

From this kind of image, can you tell the direction of the jet, or the axis?

Yes. The GRMHD people produce a lot of GRMHD simulations of a black hole, including the inclination, and their prediction is that we are seeing almost from the top of the rotation axis, about 30 degrees off. Interestingly, it coincides for M87 and Sagittarius A*: both are seen almost from the top side. That is what is currently predicted. Yes.

Is there any other question? If there is no question, then I want to thank Professor Ikeda again. Thank you very much.