Let me press this button, go live. Perfect, I believe we are live. Welcome everyone, thank you for joining us for today's physics webinar. My name is Alejandro and I'm going to be your host. Today we're presenting the Black Hole Photon Ring by Alexandru Lupsasca. Alex completed his PhD in physics at Harvard University, and before moving to Princeton as a Gravity Initiative Fellow, he was a junior fellow in the Harvard Society of Fellows for three years. Alex is a theorist with training in high energy physics and general relativity, and he's broadly interested in understanding physical phenomena that arise at the experimental frontier, especially in relativistic astrophysics and gravity. He's currently working on black hole imaging to interpret the first black hole images released by the Event Horizon Telescope, and he's interested in predicting interesting features that will be targeted by future observations, such as the black hole photon ring, which is today's topic for our webinar. Remember that you can ask questions over email, through our YouTube channel, or on Twitter, and the questions will be read at the end of the talk. Now, without further ado, we will turn it over to Alex. Thanks for joining us. Thank you so much for the invitation. I'm very excited to try this format, which is new to me. So let me start by sharing my screen. Okay, and off we go. All right, so as Alejandro just said, the topic of my talk today will be the black hole photon ring, which is part of an exciting new story that's developing, and I hope to report the latest developments to you today. So the story begins a couple of years ago, actually in April 2019, when the Event Horizon Telescope released the very first image of a real black hole up in our sky, and we're about to zoom into the region of the sky where this black hole is located. It's a supermassive black hole weighing six billion times the mass of our own sun.
It's located at the heart of the galaxy M87, which has actually been imaged many times over the past decades with increasing levels of resolution. And in fact, the images that you see were taken by a series of experiments over the years that got deeper and deeper into the heart of this galaxy, which actually has a jet coming out of it. And we now understand that the jet is powered by the black hole at the center. And finally, in April 2019, the Event Horizon Telescope resolved this image. This black hole is 50 million light years away. It has roughly the size of our solar system. So it's very big, but it's so far away that it has roughly the same size in our sky as an orange on the surface of the moon as seen from the Earth. So, you know, of course this image is a little bit blurry, but if you think about it, it's a remarkable achievement that we're able to take such an up-close picture of a tiny object in the sky. It's like putting an orange on the surface of the moon and trying to take an up-close shot of it. And of course, now that we know that such a technological achievement is possible, there are already a lot of efforts underway to improve the technological capabilities of our telescopes, improve the experiment, and eventually get better and better pictures of this source, and eventually others like it. And so the big question becomes, what do we expect to see as the resolution improves, and what do we expect to learn from that? Okay, so this is a very beautiful image. The one thing that we see is that there's a dark patch at the center, which is the black hole, and it's surrounded by this sort of thick ring of light, which we think should actually be very thin.
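As a quick sanity check on the orange-on-the-moon comparison, here is a minimal sketch of the small-angle arithmetic; the orange size and Earth–Moon distance are assumed round numbers, not figures from the talk.

```python
import math

# Assumed illustrative numbers (not quoted in the talk):
orange_diameter_m = 0.08    # a typical orange, about 8 cm across
moon_distance_m = 3.844e8   # mean Earth-Moon distance in meters

# Small-angle approximation: angular size = diameter / distance
theta_rad = orange_diameter_m / moon_distance_m

# Convert radians to microarcseconds
theta_uas = theta_rad * (180 / math.pi) * 3600 * 1e6

print(f"orange on the Moon: about {theta_uas:.0f} microarcseconds")
```

This comes out to roughly 40 microarcseconds, close to the ~42 microarcsecond ring diameter the EHT measured for M87*, which is why the analogy works so well.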
And so we believe, based on state-of-the-art simulations, that if we had a higher resolution, say an infinite resolution in principle, the true image would look something like this, with the image that we actually got being sort of a blurred-out version of this true picture, resulting from the not-yet-perfect resolution that our telescopes have. So this image is the result of a GRMHD simulation. GRMHD is general relativistic magnetohydrodynamics: the general theory that describes fluids or plasmas in curved spacetime. And there are many people who are experts in these simulations, and they've produced many, many of them. The EHT has a huge data bank of such images. And what was striking about these images is that in 2019, people noticed that they all shared the striking feature that they're dominated by a very thin, narrow, bright ring. And the fact that this ring is always present, regardless of the astrophysical details of the plasma surrounding the black hole, regardless of the details of the source, really raised eyebrows. And what we've come to understand in the last couple of years is that this photon ring really doesn't depend on the astrophysical details of the plasma around the black hole and what's happening there, but rather is a consequence of the bending of spacetime and the associated lensing of light by the black hole. So this is an effect that follows purely from the fact that there's a black hole there. It's a gravitational effect which depends only on the geometry of spacetime around the black hole and generically leads to this lensed feature which looks like a ring. And this is the photon ring, and that's what I'll explain in detail for the rest of the talk. Okay, so the three questions I want to address today are: one, what is the origin of this ring of light? Two, how can we better resolve it going forward as our experiments improve in the future? And three, what is that gonna teach us? Why is that interesting scientifically?
And just to give you the punchline, I'm gonna answer all three questions right away. So as I started to explain, the photon ring is a universal effect of general relativity. That is to say, it follows purely from the geometry of the black hole spacetime and is matter independent. It doesn't depend on the details of the astrophysical source. And that's great, because it means that when you image the photon ring, you're actually directly probing gravity. And not only gravity, but gravity in the strong-field regime, where gravity is very strong. And what I'm gonna explain to you is that actually it's not just a single ring, but in fact it's a ring which has a very rich and intricate substructure. It's composed of multiple subrings. And these subrings have a very intricate relationship to one another that's governed by these critical parameters gamma, delta, tau that I'll explain, which are new fundamental quantities that have been derived in the last couple of years, that describe lensing by Kerr black holes, and that are in principle observable and that we hope to observe in the future. Now, the photon ring is present in the image, as you just saw, but it turns out that the way that we image black holes involves a method called interferometry, which I'll briefly review. And effectively, what an interferometer does is that it doesn't directly take an image the way you see with your eyes or with a camera, but rather it measures the Fourier transform of the image. So it sees the image in Fourier space. And that introduces all sorts of complications, but it's an important fact, which we've also understood within the last couple of years, that the photon ring and its subring structure produce strong and universal signatures.
So again, signatures that don't depend on the details of the source, on long interferometric baselines, which means large separations between your telescopes. And that means that we can detect the intricate structure of the photon ring using a space-based interferometry experiment that could target this source, the black hole in M87. And last year, together with Sam Gralla and Dan Marrone, who are at the University of Arizona, we proposed a very precise experiment that could measure the shape of the photon ring. It looks roughly circular, but it's not perfectly a circle. There's a deviation from that that's predicted by GR. And we showed, and I'm gonna explain this to you at the end of the talk, that the shape of the photon ring is actually, again, very insensitive to the astrophysical source profile, and measuring it could give us a very stringent test of strong-field gravity using space interferometry techniques. Okay, so before I launch into this, let me just mention that everything I'm gonna tell you has pretty much been understood in the last couple of years, and the study of the photon ring, I think, is a very exciting area of research which is now very active and growing. I'm gonna be telling you about some of these papers; there are many more by multiple people around the world, and I think there are only gonna be more and more papers. It's becoming hard to keep track of all of them. I think there are a lot of exciting aspects to this story and I hope you get a taste for it. So my own involvement with this started at Harvard a couple of years ago when I was at the Black Hole Initiative, and the initiative houses a number of Event Horizon Telescope observers. They had just taken the image and released it and were wondering about the presence of this ring in all their simulations, and we started to have these long discussions to try to understand why the ring was always present, and the result was this paper, which I think is quite nice.
It has a lot of the story that I'm about to tell you, with a lot of the technical details later worked out with Sam Gralla in these two papers. So if you want to see the technical details of what I'm about to tell you, you can check out these three papers. But the story begins with the photon shell. So you've probably all heard that a black hole has an event horizon, which is a region of spacetime from which nothing can escape, not even light. What is perhaps a little less well known is that around the black hole, slightly outside of it but not too far, there's a special region of spacetime that we've dubbed the photon shell, which has the feature that in this region it is possible for light rays to orbit at fixed radius on bound spherical photon orbits. Now, it doesn't mean that every light ray in this region is on a bound orbit. Of course, you have to be aimed in just the right way to stay on an orbit; otherwise you'll just pass through. But if you're in this region, the photon shell, it's possible, say if you're at this radius and you aim a light ray just so, so that it's aligned with this fixed-radius surface, for it to stay there. Now, these bound photon orbits are unstable. So if you perturb these light rays a little bit, they'll either fall into the black hole or escape to infinity. But in principle, there exist these unstably bound orbits. And Leo Stein, on his website duetosymmetry.com, has a very cool interactive 3D applet that lets you visualize these orbits in 3D and manipulate them and play with them. It's very fun and I highly recommend it. So these bound photon orbits are going to be very important to us for reasons which will become clear. And I want to introduce three parameters that characterize their behavior. The first two, delta and tau, are defined here; they're somewhat easier to describe. So I'm going to define an orbit of a photon to be the path it follows as it goes from one turning point in its polar motion in theta.
So as it goes from here to here and back. So that's one orbit. And so a half orbit just means the path it travels as it goes from the top to the bottom, or from the bottom to the top. That's one half orbit, with an orbit being a full period. And so these photons sweep these spheres around the black hole in the photon shell. Well, every time they go from the bottom to the top or back, in other words, every time they complete a half orbit, they sweep out some azimuth. They rotate in the angle phi by some amount, and that's delta. And that half orbit incurs a time lapse. It takes some time, tau. Now, delta and tau therefore characterize the phi and t behavior of photons that are bound in the photon shell. And these parameters depend on the mass of the black hole and its spin, which fix the geometry of the spacetime, and also on the radius in the photon shell. I should say, if you have a non-rotating black hole, like Schwarzschild, then the photon shell actually shrinks down to a single sphere, which is often known as the photon sphere, like this, just the white circle only. But as you spin up the black hole, the photon shell thickens and widens. And as you approach maximal spin, so for a black hole with maximal spin, its event horizon shrinks to the radius r equals M, and the photon shell extends from M to 4M. But in Schwarzschild, the horizon is at r equals 2M and the photon sphere is at r equals 3M. That's not gonna be too important; that's just for the experts in the audience. But anyway, so that's delta and tau. Those are two of the critical parameters that are important for our story. And now I can introduce the last one, which is a little bit more complicated. And that's the Lyapunov exponent, gamma. And so this exponent, what does it do? It characterizes the instability of these nearly bound orbits. So like I said, these bound orbits don't like to remain bound; they're unstable.
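The photon shell radii just quoted (photon sphere at r = 3M for Schwarzschild; r = M out to r = 4M at maximal spin) can be reproduced from Bardeen's closed-form radii for the equatorial circular photon orbits in Kerr, which bound the shell. A minimal sketch, in units G = c = 1:

```python
import math

def photon_shell_bounds(a, M=1.0):
    """Radial extent of the Kerr photon shell, bounded by the prograde
    and retrograde equatorial circular photon orbits (units G = c = 1):
    r = 2M * (1 + cos((2/3) * arccos(-/+ a/M)))."""
    r_prograde = 2 * M * (1 + math.cos((2 / 3) * math.acos(-a / M)))
    r_retrograde = 2 * M * (1 + math.cos((2 / 3) * math.acos(a / M)))
    return r_prograde, r_retrograde

# Schwarzschild (a = 0): the shell collapses to the photon sphere at r = 3M
print(photon_shell_bounds(0.0))

# Extremal Kerr (a = M): the shell extends from r = M out to r = 4M
print(photon_shell_bounds(1.0))
```

For intermediate spins the two radii straddle 3M, showing how the shell thickens and widens as the black hole spins up, just as described above.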
And so if you imagine having a light ray here which is not aimed exactly in the perfect way for it to stay at fixed radius forever, but instead you shift it a little bit, there's a tiny little geodesic deviation. Then by looking at the geodesic equation, or the Jacobi equation that describes geodesic deviation, you can show that a light ray that's not aimed perfectly well will have an exponential growth in its radial deviation from the photon shell. And so it's gonna move away very, very fast, exponentially fast. In fact, in each half orbit n, its radius will grow exponentially with an exponent gamma, which is called the critical exponent or Lyapunov exponent, that characterizes the instability of these orbits. Okay, so hopefully this was clear. There are these three parameters that characterize the behavior of bound orbits, and they're defined locally and intrinsically in the geometry of the photon shell. Of course, we're gonna be interested in images of black holes, and so we have to connect the behavior of these rays and their three critical parameters to properties of images. So that's what we're gonna start to do now. So first we have to introduce coordinates on the image plane. So imagine you're now an observer very far away from the black hole and you point your camera such that the center of your camera screen is aimed at the center of the black hole, and this can be made precise even in curved spacetime. The idea is that the black hole is spinning, but you can project the spin axis of the black hole onto the plane perpendicular to your line of sight; that defines the vertical axis, beta. So this is just the projection of the spin axis, and then the perpendicular direction we call the alpha axis. And the idea now is that you can imagine shooting light rays back from your camera into the geometry, and you can ask what happens. So let's think about this for a second.
If you aim a light ray, if you shine your flashlight directly at the black hole, then obviously the light rays eventually cross the event horizon; they get sucked into the hole. Now, imagine you aim slightly farther away, such that if space were flat, the light ray would actually miss the black hole. Well, in curved spacetime, of course, the black hole can bend light rays inwards; it can pull them in. So even light rays that would have avoided the black hole can get sucked in. But eventually, if you aim far enough away, the light rays might get deflected a little bit, but they can still escape back out to infinity. And so it turns out, and this was understood by Bardeen already in 1973, that there are two regions on your image screen. There's a region of photon capture, which corresponds to light rays which, when shot back towards the black hole, get sucked into the event horizon and captured by the hole. And there's another region of photon escape, where rays are deflected but escape to infinity. And the boundary between these two regions is the so-called critical curve that delineates the region of photon capture from that of photon escape. And it turns out that if you aim a light ray exactly on this critical curve, then what does it do? Well, it can't fall into the black hole and it can't escape back out to infinity. It's going to asymptote to one of these light rays that gets bound around the black hole and unstably orbits it forever. Now again, because these orbits are unstable, if you want it to be exactly bound forever, you have to aim infinitely well on this critical curve. But if you aim slightly away from the critical curve, so at some perpendicular distance d either outside or within the critical curve, then the light ray will not be trapped in the photon shell orbiting forever, but it will orbit multiple times before eventually falling into the black hole if it's a little bit inside the critical curve.
So at a negative distance d. Or it will orbit many times before eventually escaping back out to infinity if it's at a positive distance d, slightly outside the critical curve. Now, let me just mention a very odd feature of this story, which is that aiming at a different angle around this curve corresponds to shooting light rays that end up at different radii in the geometry, in the photon shell, okay? So a different angle on the image corresponds to a different radius in the geometry. And that's very weird, because if you were to look at a star, when you look around the star on an image, you're seeing around the star in its geometry; that's the normal intuition that we're used to. But with a black hole, when you look around this curve, you're actually seeing deeper into or farther away from the black hole, which is really the warped nature of spacetime in your face, brought to life on your image screen. Another funny feature of this story is that every point on this curve is actually a sphere, because every point on this curve corresponds to light rays that sweep out the corresponding sphere and orbit it forever, describing this kind of trajectory. And I think that has tantalizing connections to holography, which I won't have time to address, but I think that's a very interesting direction that people are actively exploring now. For the purpose of this talk and understanding astrophysical images, it's the near-critical light rays that are gonna be important. So not the critical rays that are exactly on the critical curve, but the ones that are slightly away from it and orbit multiple times around the black hole before eventually escaping back out to the telescope. So having set the stage, I'm gonna briefly describe to you lensing by Kerr black holes. And the key idea that you have to keep in mind is that because of the strong gravity of the black hole and the existence of this photon shell, it's possible...
In fact, it always happens, that a single light source will generically connect to a single fixed observer along multiple light rays. And that's because these light rays near the critical curve circumnavigate the black hole multiple times. And so, for instance, this little patch of light can send light rays up that go around the black hole like this, or down that go around the black hole like this. And these two light rays will eventually reach the same observer. So that means, let's look at this cartoon, and I'm gonna describe to you only the simplest case of an equatorial point source observed by an observer that's on the spin axis of the black hole, because that's the simplest case in which the math is easy, but you can refer to this paper to see the lensing behavior more generally. The takeaway is that a single light source will generically give you infinitely many images, because you'll have a direct image from a light ray that's coming directly to you and is only weakly lensed. But then you'll have a light ray that's actually shot down towards the black hole and then bent back towards you. So it describes an additional half orbit. So it's not the direct n equals zero, but the n equals one image, because it does one half orbit. And then you'll have yet another image that describes two half orbits. In other words, it describes a full orbit around the black hole; it circumnavigates it once. That's this n equals two, and then there's an n equals three, and four, and so on and so forth. And it turns out that the successive images of, say, some equatorial disc have a very simple relation between them, which is described by this formula. And let me explain it pictorially. So suppose that you're this observer looking down at the black hole and there's an equatorial disc which is painted in this manner, like this color wheel. And suppose that the nth image that you see, from light rays having done n half orbits, looks like this.
This black curve is the critical curve which I just described to you in the previous slide. Well, it's easy to describe what the n plus one image is, from light rays that execute an additional half orbit. And basically there are three effects. The first effect is that the image gets squeezed in towards the critical curve. It gets demagnified by an amount e to the minus gamma, where gamma was the Lyapunov exponent that I just introduced. And it turns out that for a Schwarzschild black hole, gamma is exactly pi. So e to the minus pi is about 4%. But if the black hole spins faster, the demagnification factor grows, and at maximal spin it's about 10%. So the images get squeezed a little bit less the faster the black hole spins. And that's described by this relation, which tells you that the distance from the critical curve is exponentially decreasing. So that's the first effect. The second effect is that the color wheel rotates, and that makes sense because, remember, these light rays get spun around with the black hole because of its frame-dragging effect. And so for a Schwarzschild black hole, the rotation is exactly pi, or 180 degrees. That's this delta parameter here. And as you spin up the black hole, there's more frame dragging, so the light rays spin around more. And so for each additional half orbit they get spun by some greater amount, and at maximal spin you can get almost exactly three quarters of a turn. And finally, each additional image arrives to you later, because it had to do an additional half orbit around the black hole, and this time delay tau is actually roughly spin independent. It's about 15 to 16 M. So for instance, for M87, M in units of time is about eight or nine hours. And so this is saying that whenever there's some flare of light around M87, you should expect to see a light echo from a light ray that went around an additional time, about 16 times eight hours later, which is about six days. And that's an effect that people are now actively looking for.
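Putting rough numbers to the values just quoted: the Schwarzschild demagnification e to the minus pi, and the M87 light-echo delay of roughly 16 M. The physical constants and the ~6.5 billion solar mass figure for M87* are assumed round values for this sketch, not the talk's exact slide numbers.

```python
import math

# Demagnification per extra half orbit: e^{-gamma}, with gamma = pi
# for Schwarzschild (quoted in the talk as "about 4%")
demag = math.exp(-math.pi)
print(f"demagnification per half orbit: {demag:.3f}")

# Light-echo delay for M87*: tau ~ 16 M, with M converted to time units
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8             # speed of light, m/s
M_sun = 1.989e30        # solar mass, kg
M_bh = 6.5e9 * M_sun    # assumed M87* mass, ~6.5 billion suns

M_seconds = G * M_bh / c**3       # geometric mass GM/c^3 in seconds
echo_days = 16 * M_seconds / 86400

print(f"M in time units: {M_seconds / 3600:.1f} hours")
print(f"expected echo delay: {echo_days:.1f} days")
```

This lands at roughly nine hours per M and about six days per echo, matching the estimate in the talk.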
So this equation is derived by solving the null geodesic equation that describes light rays around the black hole in the regime of large n, so many orbits, in which the equations really simplify. It's a beautiful mathematical theory that's worked out here, and I invite you to look at it if you're interested. But that's all I want to say for now about lensing, because we can put together everything that I've told you to understand the photon ring that we see in images. So how are those simulated images obtained? Well, you have your image plane with coordinates alpha, beta, and to simulate an image, you want to compute an intensity I of alpha, beta at every point on the screen. You wanna say how bright each pixel in your image is. And so to do that, you're gonna ray trace your image, which means you're gonna shoot light rays back from your screen, at every pixel, into the geometry, evolving them using the null geodesic equation in the curved spacetime. And eventually your light ray is gonna cross into the bright, hot, radiating matter source around the black hole. And whenever it intersects the matter, you're supposed to use this equation of radiative transfer, which tells you that in one little affine length of the light ray, d sigma, you're supposed to load a number of photons, dI. So this is the change in intensity, the number of photons you load onto that little patch of the ray. You're supposed to load photons onto the ray according to the local emissivity of the matter that the ray is crossing. So, how bright it is. In principle, you might also wanna subtract some photons if they get absorbed by the matter. But this turns out to not be very important for M87 at the frequency that we're observing it at, because at 230 gigahertz, or 1.3 millimeter wavelength, the plasma around the black hole is optically thin, which means the light rays don't really get absorbed. So this is actually negligible for us.
So putting everything together, I told you that the distance from the critical curve on your image goes like e to the minus gamma n, or, inverting this equation, it means that if you want to aim a light ray such that it does n turns around the black hole, you have to aim it at a distance d from the critical curve that goes like this. So if you aim your light ray closer and closer and closer to the critical curve, as you shrink this d, the number of turns that it executes in the photon shell before falling into the black hole or going to infinity diverges logarithmically, with a coefficient that's governed by this critical exponent gamma. All you need to remember here is that the number of turns executed by a light ray diverges logarithmically in the distance d from the critical curve. Now, for optically thin matter distributions, which don't absorb photons, this implies therefore a mild logarithmic divergence in the observed ring intensity near the critical curve. Why? Because roughly, a light ray that completes n half orbits through the emission region will collect n times more photons along its path, because every time the light ray goes through the matter region again, it collects the same number of photons again. So the number of photons loaded onto the ray, the intensity, should be proportional to n. And so the intensity should diverge logarithmically as you approach this curve. And so just like in high energy physics, where we define particles as bumps in scattering cross sections, we can define the photon ring as the bump in the photon intensity which contains this logarithmic divergence near the critical curve that's due to these near-critical orbits that go around the black hole multiple times.
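The logarithmic divergence just described can be sketched numerically. In this toy model (an assumption-laden caricature, not the exact lensing formula), a ray aimed at perpendicular distance d from the critical curve completes roughly n(d) = -(1/gamma) ln(d/d0) half orbits, with the reference scale d0 set to 1, and the optically thin intensity grows in proportion to n:

```python
import math

GAMMA = math.pi  # Lyapunov exponent, exactly pi for Schwarzschild

def half_orbits(d):
    """Half orbits completed by a ray aimed at perpendicular distance d
    from the critical curve, in units where the reference scale d0 = 1
    (toy model of the logarithmic divergence, not an exact formula)."""
    return -math.log(d) / GAMMA

# Each factor of e^{-pi} closer to the critical curve buys one more
# half orbit, so the collected intensity grows like log(1/d):
for d in (1e-1, 1e-2, 1e-4, 1e-8):
    print(f"d = {d:.0e}: about {half_orbits(d):.2f} half orbits")
```

Note how slowly n grows: even at d of one part in a hundred million, the ray has only done about six half orbits, consistent with the talk's remark that subrings beyond n of about six are invisible in practice.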
And although this divergence is cut off by a finite optical depth, meaning that in the real world the photons do eventually get absorbed, they can't orbit forever, the striking feature remains visually prominent in many ray-traced images of GRMHD simulations. And in fact, to illustrate this graphically, I'm gonna show you a nice animation that we produced, which summarizes everything I've told you. So you look at a black hole and you see light rays that come at you more or less directly. They're only weakly lensed, or slightly bent, and they show you roughly an image of everything that's around the black hole from light rays that come directly at you; that's this n equals zero image. But on top of that, you get yet another image of the entirety of the surrounding matter around the black hole from light rays that do a U-turn about the black hole. So they do n equals one half orbits. And because you have to aim them just so for that to happen, on your screen they end up lensed into a small ring. And then you have these n equals two light rays that do a full orbit around the black hole. But again, they have to be aimed exponentially closer to the critical curve for this to happen. So they appear even exponentially closer on your screen. And when you look at a black hole and you put together all these contributions, the resulting image will be a superposition of all these images. So you have the background n equals zero image, and superposed on top of that, you have yet again another image of the entirety of the source around the black hole from n equals one. And then within that, you have yet another image from n equals two, and so on and so forth. And this brightness enhancement is what gives rise to the photon ring, okay? And so in this paper that we wrote a couple of years ago, we looked at a state-of-the-art numerical simulation that was done by George Wong. He ray traced an image that looked like this.
And if you take a horizontal and vertical cross-section of this image and you look at the intensity, you'll get this kind of profile. And we can zoom in on this region or this region to obtain these plots. And what you see here is the background image, the photons that came from the n equals zero part of the image. So the photons that came at you directly, that's this background here. And then on top of that, you have the spike here, which is blown up here, which comes from the light rays that did an additional half orbit. And then yet again, on top of that, you have n equals two, and then n equals three. And these things get exponentially smaller every time; they're demagnified by e to the minus gamma. So it becomes impossible to see them pretty fast. But I think George saw them up to n equals six in the simulation. Okay, so let me now conclude the theory part of my discussion by summarizing everything that I've told you. So it's a prediction of general relativity that embedded within a black hole image there lies a thin photon ring, these spikes, which is itself composed of a sequence of self-similar subrings. We see them here in this image. Each subring is a lensed image of the main emission, of the stuff around the black hole, indexed by the number n of photon half orbits executed around the black hole. So you see the same thing over and over again. The n equals one is repeated in n equals two and repeated in n equals three, just squeezed, rotated and delayed. And this Kerr lensing is completely characterized by three critical parameters, gamma, delta, tau, that respectively control the demagnification, rotation and time delay of these successive images. And notice that the critical curve, which is a theoretical curve that I described before, itself is not directly observable, but these photon subrings converge to it because they have to; that's one of the formulas that I showed you.
But just remember for now that these subrings converge to the critical curve but are actually distinct from it, and in particular don't have the exact same shape. And that's gonna be important for the last part of the talk. Okay, so now let me switch gears and talk about a paper from last summer with Sam Gralla and Dan Marrone, in which we produced an experimental proposal for how to resolve the photon ring and learn something interesting from it. So this paper came out in Physical Review D, and there are a bunch more details that appear in these two papers if you're interested, but for now I'm just gonna quickly summarize the story. Before I can do that, I have to tell you a little bit about interferometry and how these black hole images are taken, because I'm not gonna assume that you all know how this works. So the idea is, how can we even resolve an image of a black hole? So remember, M87 is tiny in our sky; it has the same size in the sky as an orange on the surface of the moon. And if you do a back-of-the-envelope calculation, you'll find that the size of the dish that you need to resolve such a tiny object is roughly the size of the Earth. And we can't really build an Earth-sized telescope, but we can do the next best thing, which is to have a virtual Earth-sized telescope. And that's the basic idea behind interferometry, which I'm gonna illustrate with ducks. So this is a cute EHT animation, "duck-ferometry." And so what's the idea? Suppose you have a bunch of ducks that are swimming in a pond. And so in this analogy, the ducks are gonna be the bright sources of light around the black hole that radiate. The waves that the ducks produce in the pond are gonna be the electromagnetic radiation that the light sources produce in the electromagnetic field. And then the edge of the pond is the surface of the Earth, where we sit and we measure the radiation.
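That back-of-the-envelope dish size can be reproduced with the single-dish diffraction limit, theta of order lambda over D; the observing wavelength and target angular scale below are assumed round numbers.

```python
import math

# Assumed round numbers: the EHT observes at 1.3 mm wavelength, and the
# M87* ring spans roughly 40 microarcseconds on the sky.
wavelength_m = 1.3e-3
target_uas = 40.0
target_rad = target_uas / 1e6 / 3600 * math.pi / 180

# Diffraction limit of a single dish: theta ~ lambda / D  =>  D ~ lambda / theta
dish_m = wavelength_m / target_rad
print(f"required dish diameter: about {dish_m / 1e3:.0f} km")
```

This comes out to several thousand kilometers, comparable to the Earth's roughly 12,700 km diameter, which is why the next best thing is an interferometer with Earth-spanning baselines between separate telescopes.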
It's like sitting at the side of the pond with your feet in the water and feeling the waves lapping at your feet. You can see the water rise and fall. And the point is that depending on how the ducks are moving, how heavy they are, how fast they're moving and how they're spread out, you're gonna get different wave patterns in the water, which, when you're at the edge of the pond, lead to different waves lapping at your feet. And they'll differ in amplitude and wavelength. And in principle, by measuring this rippling pattern at the edge of the pond, you can infer where the ducks were, how they were moving and how fast. So the idea is you're on Earth, you have multiple telescopes on the surface of the Earth, and they measure the rippling in the pond, which is the electromagnetic field. And from that you can infer what the sources of electromagnetic radiation were. That's the basic idea. Of course, in practice, it's very complicated, but roughly this is the intuition for how it works. And the point is, every time you have two telescopes on Earth, they really let you measure one Fourier component of the image. So you see the Fourier transform of the image with your interferometer. And so using an interferometer to take an image is like listening to a song but only being able to hear a few notes. And in particular, every pair of telescopes lets you hear one Fourier component of the image, or lets you hear one note of the song. And what I wanna demonstrate to you with this cute little exercise is that even if you have very few telescopes, in other words, you can hear relatively few notes, it's still actually enough to recognize a song or to get a blurry image, which is what the EHT did. So let me just show you with some cute song how this works. So I'm playing a well-known song now, but I'm only letting you hear a few notes at a time. Okay, so if you're anything like me, you could recognize the song Happy by Pharrell at about 10 or 12 notes.
And the first image that we got from the Event Horizon Telescope two years ago, the one that I showed at the beginning of the talk, looked very blurry, and that's because we only had five telescopes, or 10 notes. And actually that doesn't really seem like enough; it's barely enough to start recognizing the song that I was just playing. But there's an additional trick that the Event Horizon Telescope used, which is the rotation of the Earth. Because which Fourier component of the image you see, which note you hear, is actually determined by the distance between the telescopes as seen from the source, so in the plane perpendicular to the line of sight, measured in units of the observation wavelength. And that's called the baseline length. And as the Earth rotates, the telescopes move, and so the distance between them as seen from the source changes. And so you sweep out, you therefore hear more notes. So you sweep out a greater proportion of the Fourier plane of the image, this so-called baseline plane. Okay, so for instance, if you had only two telescopes linked by one baseline and you saw only these few Fourier components, so this is the Fourier plane of the image, you would only be able to infer this kind of picture. But as you add another telescope, you get, using again the rotation of the Earth, more data points, more Fourier components. So when you take the inverse Fourier transform to get the image, you get something less circular. And then as you add yet another one, you start to see structure. And finally with five sites, you can see a ring. Okay, so I'm describing this because I wanna start talking about what it would mean to observe this theoretical structure in a black hole image that we're not seeing directly with our eyes but using an interferometer. And the point is that in the image, there's a separation between astrophysics and general relativity. What do I mean by this? If you ask me, what is the direct image, the n equals zero direct emission?
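The sparse-sampling idea can be sketched numerically. The following toy example (all numbers invented for illustration) builds a thin-ring image, keeps only a few hundred randomly sampled Fourier components, standing in for the baseline tracks, and inverse-transforms to get a blurry but recognizable ring:

```python
import numpy as np

# Toy "interferometer": image a thin ring, but keep only a few hundred
# sampled Fourier components (the "notes" each baseline lets you hear),
# then inverse-transform to see the blurry reconstruction.
N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)
image = np.exp(-((r - 40.0) / 2.0) ** 2)           # narrow ring, radius 40 px

vis = np.fft.fftshift(np.fft.fft2(image))          # full visibility (Fourier) plane

rng = np.random.default_rng(0)
mask = np.zeros(vis.shape)                         # unmeasured components stay zero
uv = rng.integers(-N // 2, N // 2, size=(200, 2))  # 200 sampled (u, v) points
mask[uv[:, 0] + N // 2, uv[:, 1] + N // 2] = 1.0

dirty = np.abs(np.fft.ifft2(np.fft.ifftshift(vis * mask)))
print("fraction of Fourier plane sampled:", mask.mean())
```

With a few percent of a percent of the plane sampled, the reconstruction already shows a ring-like blob, which is the "recognize the song from a few notes" point.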
Well, that depends on the nature of the source. You really need to do astrophysics to determine this. In other words, whatever you see in the background is whatever is around the black hole. That's not universal. That's astrophysics. But once you know what the background image is, once you see whatever is around the black hole directly, the photon ring images, which are just lensed images of this direct emission, are fixed by gravity, because they're just the same thing lensed again using the critical exponents. And so the photon ring is universal. And there's a separation therefore in the image between all the stuff outside the ring and then the ring itself, which is just a lensed image of the background. And so likewise, if you look at the Fourier transform of the image, it turns out that the short baselines, so the Fourier components that you get from telescopes at small separations, give you information about coarse features in the image, so the background. But the farther apart your telescopes are, the longer the baseline, the higher the Fourier component of the image that you're seeing, and therefore you're resolving finer features of the image. And the only very fine features that can survive for a very long time are these increasingly sharp photon subrings. So the idea is, if you take two telescopes and separate them very, very far, you're seeing very sharp features of the image. And the only ones that should subsist for long time scales, of course there can be flares, but those are transient, the only ones that should subsist for long time scales are these subrings. So the idea is that also in the Fourier plane, where we actually take black hole images, there's a separation between astrophysics, which is the small Fourier components, the part of the Fourier plane near the origin, and the part that's controlled by GR, which is long telescope separations, large baselines, large distance in the Fourier plane.
So making this a little bit more precise, when you have a radio interferometer, what you do is you connect a bunch of telescope dishes. Each telescope dish measures its local electric field. But then you look at cross-correlations of the electric field across telescope sites, so the two-point function of the electric field. And this quantity is called the radio visibility, and it's generically a complex quantity. And it's a theorem in optics, called the van Cittert-Zernike theorem, that this complex visibility is the Fourier transform of the image. So I of x, where x is a 2D vector, I of x is the image, and its Fourier transform V of u is the complex radio visibility that your interferometer measures, and u here, the point in Fourier space, is again this baseline, which is the projected separation between the telescopes as seen from the source, measured in units of the observation wavelength. So this u is dimensionless. And the key idea in that first paper, I talked about it in the beginning of my talk, is that if you have a very narrow ring in your image, then it's gonna have a very simple Fourier transform in this regime. In other words, it's gonna have a very clear signature when you're going to baselines that are large enough to resolve the diameter of the ring, so you can tell that there's a ring there, but not so large that you can resolve its width, so that it still looks like an infinitely thin ring. So if you were to go to even larger baselines, at some point you would tell that the ring has finite width, and then the details of what happens in the ring would really matter, but when you're in this regime, you can just tell that there's a ring, but it looks like a delta function ring. And it's a simple exercise in Fourier theory to show that in this regime, the ring has a simple Fourier transform, so a simple radio visibility, which in particular has a periodicity that encodes the diameter of the ring.
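The fundamental relation just mentioned, the van Cittert-Zernike theorem, in the notation of the talk:

```latex
% The complex visibility measured on dimensionless baseline \vec{u}
% (projected telescope separation in units of the observation wavelength)
% is the Fourier transform of the sky image I(\vec{x}):
V(\vec{u}) \;=\; \int I(\vec{x})\, e^{-2\pi i\, \vec{u}\cdot\vec{x}}\; d^{2}x
```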
And that's the key idea that I'm going to use to propose an experiment to measure the ring. Okay, so more precisely, this is a little bit more technical, but this is the general formula that tells you how all of this works. Suppose in your sky there's a very narrow ring that has some arbitrary shape. It doesn't have to be a perfect circle; it can be an arbitrary closed curve, but suppose it's very narrow. In other words, you haven't resolved its width. Then in the Fourier domain, if this is your image, you'll have a complex visibility V, which takes this form. And in particular, if you take the amplitude of this complex radio visibility, you get the so-called visibility amplitude, and in this regime, you can show, it has this oscillating pattern with a periodicity set by the diameter of the ring, or of your curve generally. And at each angle phi in the 2D Fourier plane, you'll have some ringing pattern with a periodicity one over d phi, where d phi is the diameter of your curve at the corresponding angle phi in the image plane. Okay, so you have an image plane; at each angle phi, your curve has a different diameter, and correspondingly, at the associated angle phi in the Fourier plane, you'll get a ringing pattern with a periodicity that's set by the diameter at the same angle, okay? So that's a general fact. And if you take the absolute value, so you look at only the visibility amplitude, you find that this pattern depends only on the projected diameter of the curve and not too much on the other details. And this is important for us because it turns out experimentally that measuring the full complex visibility is very hard, and particularly the visibility phase is very susceptible to noise, whereas the visibility amplitude is much easier to measure. So in practice, the easiest thing to do, by a lot, is to measure the visibility amplitude. And if that's the only thing you can see, then you can only measure the projected diameter of your curve in the sky.
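This periodicity is easy to check numerically in the simplest case. For an infinitesimally thin circular ring of diameter d, the visibility amplitude is |V(u)| = |J0(pi d u)|, whose nulls are spaced by 1/d at long baselines. The snippet below (the ring size is chosen arbitrarily at the M87 scale) verifies that the null spacing recovers the diameter:

```python
import numpy as np
from scipy.special import j0

# Visibility of an infinitesimally thin circular ring of diameter d:
# |V(u)| = |J0(pi * d * u)|.  Its nulls are spaced by ~1/d at large u,
# so the ringing period in the Fourier plane encodes the ring diameter.
d = 40e-6 / 206265.0                  # 40 micro-arcsec in radians (~M87 scale)
u = np.linspace(1e9, 5e11, 200000)    # baselines in wavelengths (1-500 Glambda)
amp = np.abs(j0(np.pi * d * u))

# Find nulls as local minima of the amplitude and measure their spacing.
mins = (amp[1:-1] < amp[:-2]) & (amp[1:-1] < amp[2:])
nulls = u[1:-1][mins]
spacing = np.diff(nulls).mean()
print(f"mean null spacing: {spacing:.3e}  vs  1/d = {1 / d:.3e}")
```

The measured spacing agrees with 1/d to well under a percent, which is the content of "the periodicity encodes the diameter."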
And it actually has some interesting consequences, because for instance, if you take the circle, which is a closed curve with constant diameter d equals two R and whose center is always the same, well, it turns out that it's not the unique shape with this property of having a constant diameter. There's a famous one called the Reuleaux triangle, but there are many more, studied already by Euler centuries ago. There are many other curves of constant width. So this is a very famous one that appears in architecture. And this Reuleaux triangle has the feature that it has constant diameter. So if you were to shine a light from any direction, you would find that this shape casts the same shadow as the perfect circle from any angle. So it has the same projected diameter at every angle. But the difference is that the center of this object, the centroid, varies with angle. And it's a funny feature of interferometry that if you have an interferometer that only measures the visibility amplitude, which is often called an intensity interferometer, then you can't actually tell these two shapes apart in principle. So when we talk about measuring the shape of the ring, at least for the near future, we're envisioning measuring the visibility amplitude of the ring, which means we wouldn't be able to tell apart these two shapes; we'd only be able to measure the projected diameter. That's kind of an interesting feature of interferometry. And there are a lot more details in this paper. There are some beautiful connections to math and algebraic geometry, where you can interpret this in terms of dual curves, but I'm not gonna go into that. The only thing I want you to remember from this is: what is the projected diameter of an ellipse? So it's a fairly simple exercise to show that if you have an ellipse with radii R1 and R2, then the projected diameter of the ellipse, how wide it is as a function of the angle phi, is given by this formula. That's the only basic fact we need.
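For reference, the formula on the slide, written here for an ellipse whose axes are aligned with the coordinate grid (an overall orientation angle can be absorbed into phi), is:

```latex
% Projected diameter of an ellipse with semi-axes R_1, R_2,
% as a function of the projection angle \varphi:
d_{\varphi} \;=\; 2\sqrt{R_1^{2}\cos^{2}\varphi \;+\; R_2^{2}\sin^{2}\varphi}
```

For R1 equal to R2 this reduces to the constant diameter of a circle, and the small difference between R1 and R2 is what the experiment aims to measure.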
And so now I'm getting to the actual experiment and the question we wanna address, which is: can we test general relativity by measuring the photon ring shape? And that's really two separate questions. The first one is: does this entire story that I've told you, the clean separation between astrophysics and general relativity, between these short and long baselines, actually hold in realistic models? That is, can we actually test general relativity in principle using the photon ring? And then if so, is it actually possible to achieve the experimental precision required to have an interesting test? That is, can we perform this test in practice? And the answer we gave to both of these questions in our first paper on this subject is tentatively yes. We looked at many models of M87 and we found that the diameter of the n equals two photon ring could always be inferred from this ringing on long baselines, that's its universal interferometric signature, and that it was always that of an ellipse, independent of the astrophysical details of the source. So we regard this as a prediction from GR for the diameter of the photon ring as a function of angle. And my colleague, Dan Marrone, who's a leading Event Horizon Telescope experimentalist and actually works on ALMA, gave us some basic requirements that he thinks can be met for how to do this experiment by putting a satellite in a far Earth orbit, roughly a lunar distance from the Earth. And we did an experimental forecast and we found a sub-sub-percent level of precision for a test of strong-field GR and the Kerr hypothesis, which I think is very exciting. So the details of the story are quite complicated. I'll just flash two slides very quickly to show you what we did, but I invite you to read the paper if you want more detail.
Basically, to answer the first question, to demonstrate that the photon ring diameter is measurable in principle, we looked at about a hundred source models, varying the spin of M87, which we don't really know for sure, and varying the inclination angle that we think we're observing it from. So the best guess is that the black hole is fairly rapidly spinning and that we're looking at it from 17 degrees relative to its spin axis. And that's based on the fact that it has a jet whose axis is about 17 degrees from our line of sight. And then we imagine that the matter around the black hole is radiating with different emission profiles. So this is one example, and we varied all these parameters. And so for every black hole spin, observer inclination and emission model, we ray traced an image; we obtained an image of a ring which had cross-sections that looked like this. So this is a vertical and horizontal cross-section, and we see this photon ring and sub-rings and so forth. We took the Fourier transform of this image to compute the visibility, and we looked in particular at the visibility amplitude on short baselines from zero to 10 giga-lambda, which correspond to Earth-sized separations in units of the 1.3 millimeter observation wavelength of the EHT. So on short baselines, we checked that these models all agreed with these data points, which are the actual data points released by the EHT that they observed in 2017. So all of these are good models. And then we went to much longer baselines, we went to 300 giga-lambda, which at this observation wavelength corresponds to the Earth-Moon distance. And you see that in the beginning it's quite messy, but by the time you get far away, you get this perfectly clear ringing signal, which is a consequence of the existence of this very sharp feature, this almost delta-function-thin ring. And we saw that we had this perfect ringing which exactly matched the predicted universal signature for a narrow curve.
And by fitting this formula to the ringing that we saw, we were able to infer the projected diameter, and we checked that it was always that of an ellipse. So this is a GR prediction for M87. And of course, the details of where you go to do this fit don't really matter. So you could have gone to 400 or 500; as long as you're far enough away that you're in the universal regime where you see these sharp features, this is gonna work. And then, to actually make an experimental forecast, we assumed some kind of realistic technology that we think we can either build now or in the near future. And just to give you a sense of what that would mean, we imagined that if this were the true signal, then with realistic technology the kinds of data points we could get from a realistic experiment would be something like this, which looks very, very messy. But if you know about MCMC and data analysis, it turns out that this is actually really quite good, and we showed that you can infer the projected diameter of the ring at every angle around the ring from these kinds of measurements, and we checked that it matched this functional form. Now, if you do this experiment in the real world, you'll get a projected diameter at every angle, d phi, and then you'll try to fit this functional form to it. And if you have a successful fit, as measured by a chi-square near one, you can report a test of GR with a precision given by the root mean square deviation, which for us was very small. And if you cannot fit this parametric form to your observation, then you exclude GR at the associated p-value. So this is the big picture. We want to test strong-field general relativity using black hole imaging.
We have in mind putting a telescope in orbit around the Earth, in the plane perpendicular to the line of sight to M87. As the satellite orbits around the Earth, it measures this complex visibility, and in particular the visibility amplitude, and it gets this ringing signal, and the periodicity of the ringing tells us the diameter of the ring at the corresponding angle. And as the satellite orbits around the Earth, we measure the diameter of the ring at every angle. These are the kinds of data points and error bars that we got in our forecast. Notice, by the way, that the ring in this image has a diameter of about 40 micro-arcseconds, and notice here how much smaller the scale is. So we have this tiny deviation from perfect circularity. This is the fact that it's an ellipse. And so the GR prediction is this elliptical functional form, which is plotted here in orange. And then these are the data points that we would imagine getting in our forecast. We can compare these two things and see if they agree or not. Okay, so that's the big picture. Now the big weakness in our analysis, which is a problem for the future that I'm very excited to tackle, is that this image here is what we expect to see once we've averaged, once we've let the satellite go around and observe the source for a long time and taken the time average, which allows us to ignore these flares and other fluctuations, which we can average away. But in practice, if you take a snapshot at a fixed time, you won't have those clear, perfect images. You'll have some additional noise due to flares and whatever other things can happen around the black hole. And so there's a question that we need to really understand quantitatively, which is: how long do you need to time average? How long do you need to observe the source for the fluctuations to go away?
And just as a starting point, we assumed that if you have a noisy snapshot with noise at about this level, well, then the signal on long baselines that you would get from your telescope would look like one of these dashed gray curves instead of the green curve, which is the true signal that you would get if there were no noise. But the point is that if you were to make multiple such snapshots, each obviously with a different noise realization, you would get these different dotted gray curves, but when you average them, you get this black curve. I think here we have 17 snapshots being averaged. And the only thing you care about is to see that the nulls of the true signal in green and of the averaged signal in black are aligned, so that they have the same periodicity, and therefore you can infer the correct diameter from the black curve. Of course, we need more detailed modeling to improve our understanding of the feasibility of this experiment, and in particular we need to understand the astrophysical noise. But let me just mention that if the source fluctuations are very large, then there will actually be autocorrelations in your image, because you'll see a fluctuation coming at you directly via n equals zero light, and then you'll see its light echo coming from n equals one light that did a half orbit around the black hole, a little bit later. And it turns out that the relation between the direct images of fluctuations and their lensed images, these photon ring autocorrelations, also encodes gamma, delta, tau and the entirety of the structure of the photon ring. And so if the fluctuations are very large, then we're not in trouble, because we can probably see the photon ring and its structure via the light echoes. I wrote a paper recently about this, but I think there's a lot more to do there.
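The snapshot-averaging argument can be illustrated with a toy signal (arbitrary units, not the forecast's numbers): each snapshot is the clean ringing curve plus independent noise, and averaging 17 of them suppresses the noise by roughly the square root of 17 without shifting the nulls, since the noise has zero mean:

```python
import numpy as np

# Toy snapshot-averaging: each snapshot is the clean long-baseline ringing
# signal plus independent noise; averaging 17 snapshots suppresses the
# noise by ~sqrt(17) while the null positions (the periodicity) stay put.
rng = np.random.default_rng(2)
u = np.linspace(1.0, 100.0, 5000)                      # baseline, arbitrary units
clean = np.abs(np.cos(np.pi * u / 4.0)) / np.sqrt(u)   # nulls every 4 units of u

snapshots = [clean + 0.05 * rng.standard_normal(u.size) for _ in range(17)]
avg = np.mean(snapshots, axis=0)

noise_single = float(np.std(snapshots[0] - clean))
noise_avg = float(np.std(avg - clean))
print(f"noise per snapshot: {noise_single:.4f}, after averaging: {noise_avg:.4f}")
```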
And of course, everything I've told you so far, this proposed mission, is single purpose, but if you have a baseline to space, you should be able to do lots of other cool things with it, and I'm interested in hearing people's thoughts on this. I think we should be able to do more. And lastly, I wanna conclude by asking this sort of more philosophical question, which is: do we really need to test general relativity? Because it's a question that I get asked a lot. So we know that general relativity has to fail deep in the black hole interior, because it predicts a singularity there, which is unphysical, but there's no reason to expect a breakdown of GR outside. That being said, the event horizon is a visible edge of the universe, and now, with experiments like the Event Horizon Telescope, we can peer all the way to a visible edge of the universe, and the photon ring is made up of light that probed that region in the strong gravity regime, right near the edge of the visible universe. And so I think it's our best shot to see any deviation. I think we should look there and we should test whether GR is correct. And note that this prediction of the Kerr geometry has not been directly tested yet, and nonetheless it underlies a huge amount of astrophysics, while also playing a driving role in theoretical physics, and it's just good science to directly test important assumptions that impact multiple fields. And the last thing I'll say is that in order to have a real test, you have to have the possibility of it failing, so it has to be possible for your mind to be changed. And this is a real test, because it's possible to be surprised, in the sense that if you see a ring and it's not the shape predicted by GR, then you know that GR is wrong. There's no wiggle room. So that's why I think it's a very interesting one. And I'm exactly out of time, so I'm gonna stop here. Thank you. Thank you, Alex, for this very nice talk. Let me check the YouTube channel.
So I think we still have time for a few questions. So someone is asking on the YouTube channel: quantum particles have basically three things, relativistic mass, charge, spin. Black holes also have three things: mass, charge, spin. Is this why we're curious to find relations between micro and macro? Let me just repeat this last part of the question: is this why we're curious to find relations between things that are microscopic and things that are macroscopic? Right, so at the level of this discussion, we're treating the black hole as a classical object. And so indeed, it's only described by three parameters: the mass, the spin and the electric charge. And moreover, for astrophysical black holes, we think that it's unrealistic for them to have electric charge, because if they did have one, they would quickly attract the opposite charge and neutralize. And so for the purpose of this discussion, I'm assuming that the black hole is described by the Kerr geometry, which we believe describes astrophysical black holes, and so it has only two parameters, the mass and the spin. And by measuring the shape of this photon ring, in principle you'd also be able to measure these two parameters, the mass and the spin, but that's all we're claiming. Of course, in principle we believe that the black hole is actually a quantum object with a huge number of microstates, as encoded by its entropy, which grows like the area of the horizon. But we don't really understand in detail how this works theoretically, and I think there's certainly no proposal on the table to observe any of these features experimentally, as far as I know. Thank you. There's another question in the YouTube channel that says: on the lensing by Kerr black hole slide, why is it that a single point gives rise to multiple light paths? Sure, let me call up that slide again. So imagine that you have a flare in your geometry, so you have a little point source that shines light in every direction.
And if you're in flat space, the only direction in which light will reach you is the straight line that connects the source to the observer. But in the curved spacetime of a Kerr black hole, because of the strong gravity around the black hole, light rays can be bent so much that they execute multiple orbits. So if you have this point source that shines light in every direction, there are actually multiple directions that will connect to the same observer, and this is illustrated here. So these two light rays actually converge very far away. So if you have an observer at infinity, so very, very far away, these two light rays will reach him or her. And that's the case even though one is emitted upwards and the other one downwards. This blue light ray is n equals one: it does a U-turn about the black hole, one half orbit. And this green light ray is n equals two: it does two half orbits, or a full circumnavigation of the black hole, before reaching you. And it's the same idea here: these are actual light rays in the geometry, while this is a cartoon, but it's the same idea. If you pick a single point source in the geometry, you'll get a light ray that's almost like in flat space, that comes at you directly, but also, in addition, a light ray that's shot away from you, downwards, but then gets captured by the gravity of the black hole and slingshot back at you. And then you'll have one that does this with two half turns, and three half turns, and so on and so forth. And that's why, in principle, you can have arbitrarily many turns around the black hole, and so that's why you can have arbitrarily many images of a single point source. In fact, it's a fact that if you have a black hole spacetime and you take any two spatial points, they're connected by infinitely many light rays. And these light rays just execute more and more turns around the black hole before reaching their endpoints. I hope that answers the question. Thank you.
And then there are two other questions that are a little bit more general, which you might or might not know about. Someone is asking if there is an introductory online course on interferometry, or maybe you can recall, off the top of your head, a reference or something to study interferometry. I can't claim to be an expert in interferometry. I'm still learning it. I think it's a very subtle subject, and I also would like to know if there's an easy standard reference. I've learned it from working with people who are experts and have shown me a lot along the way. There is a book by Jim Moran and other co-authors, and another book by somebody named Thompson, which are sort of big reference books that have a lot of results. I haven't spent too much time reading them. I would say this is the fundamental relation. Okay, it depends what you wanna do. If you wanna get into the nuts and bolts of what part of the electric field each telescope measures, et cetera, et cetera, it gets very complicated very fast. If you're interested in understanding the theoretical properties of interferometry, I think you can get very far just by looking at this relation, which is just Fourier theory. It's just the idea that you wanna form an image, and what your experimental apparatus measures is its Fourier transform. I will send Alejandro a short review paper from the 90s that has these basic equations and a few more that I found useful, but I can't recall it off the top of my head. I guess I'll email it out. Yeah, and we'll post that in the information of your talk. And then, before we turn to some people here, the low physics coordinators who have questions, let me just finish with this question, again a general one: which programming language was used in the simulations shown? Thanks. So, okay, so, all right, so the very fancy GRMHD simulations that you see here, the state-of-the-art ones, were produced using some code which is public, called ipole, I-P-O-L-E.
And I think, though I'm not sure, that it's written in C. If it's not C, it's Python, but it's probably C, because it has to be really, really fast. And in my own work, when I made the simple images which don't keep track of the details of the source, I just looked at some simple models. I did this in Mathematica. So you can get very far with Mathematica if you're not interested in really crazy state-of-the-art models. But in practice there's a whole suite of tools in between, and actually Alejandro is an expert in this, much more so than I am. I think he's used a code called Gyoto. So there are many, many codes. It depends on how fancy you want to get, but I think the state of the art uses very fast code written in C. Cool, I think George Wong just replied in the chat saying: yeah, for GRMHD, iharm3D, and for GRRT, ipole simulations, and we use C. So, cool. Thank you, George. Good, I think, thank you. So, Joel, I think you have a question. Yes, yes, thank you. So I'm wondering regarding your phrase of testing GR, right? So for instance, in the search for the Higgs boson, right, you had a null hypothesis, which was no Higgs boson, and the alternative hypothesis, which was that there is a Higgs boson. So here, what's your null hypothesis? Because, I mean, you're talking about a black hole, so GR is there, right? So for me, I don't really understand what you're comparing to, or is it just like, oh, we're seeing these patterns or we're not? Yeah, I think, okay. This is a philosophical question, I think. You have a theory which predicts that the diameter of this ring has to have a functional form which is given by this and corresponds to the shape of an ellipse. Now, if you do a test, okay, I think this is a very simple statement. If you measure this ring and you find that the shape doesn't follow this functional form, it doesn't look like this, then your theory is excluded. That's it.
Okay, that was what I was wondering. Now, of course, you can also frame this in the language of null hypothesis tests, but I think that's a little bit misleading, because then the precision that you report for your test depends on how far away the null hypothesis is. So let me, this is a little facetious, but suppose my null hypothesis is Newtonian gravity. Okay, you could phrase it like this: I'm taking an image and I'm checking whether it's Newtonian gravity or GR, and in Newtonian gravity there isn't a photon ring, and so the deviation is huge, and so then, once you see the ring, you say GR is confirmed with astonishing precision. We've rejected the null hypothesis with incredible accuracy, incredible precision, but it's a little fake. So I don't really like to phrase it in terms of null hypothesis tests, because then you can sort of play tricks with how good your test is. Instead, this is the way I like to think about it: there's a prediction from your theory. You measure a signal; it has to fit this functional form. You find the best fit parameters that make your data closest to the model. If you have a successful fit, which you can tell by a chi-square test, then the root mean square deviation of the best fit model from the data, I would say, is the precision of your test. And if you can't find a good fit of your model to the data, say the chi-square is far from one, then you've excluded GR at the associated p-value. I think that's the way of phrasing it which doesn't depend on a reference theory or your null hypothesis. Okay, okay. Thank you very much, thank you very much. Thank you. Okay, we have time for one last question, from Roberto. Yes, first of all, Alex, a very, very nice webinar, it was impressive. So I have, I mean, two questions; I'm just gonna ask the second one at the same time.
The two questions, in principle, are these. First, is it possible to obtain more information from the signal that you get? Like, for instance, what happens if polarization is measured in the light coming from the ring? And the second question: is it expected that we could measure some transient events? In the sense that, if there is some variation of the brightness that makes the ring shine more and less over time, is that kind of time variability also expected? Yeah, okay, these are excellent questions, and I think they've been partly addressed in the many papers that have already come out on this subject. So let me start with your last question, about time variability. There is one paper on the subject, and I think there need to be more. It's this one with Shahar Hadar, Michael Johnson, and George Wong, who is also on this call, called Photon Ring Autocorrelations. And actually, I think George Wong also has a single-author paper on Black Hole Glimmer which gets at the same idea. The idea is that your signal is varying in time because of random flares. If you were to look at a randomly fluctuating plasma in flat space, you would get an image of the plasma, and if the fluctuations are uncorrelated, then each pixel in your image would also be uncorrelated. But now, because of the lensing behavior, if you have random flares in the plasma which are uncorrelated, you'll get the direct image in your n = 0 image, and those pixels will again be uncorrelated. But then you'll get the same image lensed in the n = 1 image, and again in the n = 2 image, and those will appear later and later in time. And so now the pixels in your image are no longer uncorrelated.
They're actually autocorrelated, because one pixel will show you, a time delay later, what you saw at another pixel earlier. It will really be the same image of the same thing, just from light rays that orbited the black hole one extra time. And that autocorrelation pattern also has a rich and intricate structure which encodes these critical exponents. So if you're able to make a black hole movie, observe the time variability, and see these correlations, which are caused by the lensed images of flares, you should be able to retrieve these exponents and test the Kerr hypothesis. I think we've really only begun to explore this. I think it's the next frontier, and it's very exciting because it's likely to be possible. It may well be the first way that the photon ring is detected, because these autocorrelations don't necessarily require you to resolve the ring. You don't necessarily need the resolution required to see the thin ring; even with lower resolution you may still see the echoes. And so it's quite possible that we'll be able to see the photon ring from Earth, without going to space. I think with the ngEHT, which is supposed to turn on in five years or so, it's quite possible we'll have some signature of light echoes and photon ring autocorrelations. I think that's very exciting, and I'm thinking about it, and I think a lot more people should be thinking about it. Regarding the first question, there was also a paper about polarimetric signatures, and there's so much structure in this story, because at the end of the day we only have these three critical exponents, but the light that you see has not just an intensity but also the three Stokes parameters that describe linear and circular polarization. All of these things have to obey a multitude of relations, and this is really an over-constrained problem.
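The light-echo idea above can be sketched with a toy light curve: uncorrelated random flaring, plus the same signal repeated one photon orbit later and demagnified by e to the minus gamma. Everything here is an assumed toy input (the delay, the value of gamma, white-noise flaring), not a Kerr prediction; the point is only that the echo puts a secondary peak in the autocorrelation at the orbital delay.

```python
import numpy as np

rng = np.random.default_rng(1)
n, delay, gamma = 4096, 30, 2.0  # samples, echo lag (samples), toy demagnification exponent

flares = rng.normal(0.0, 1.0, n)  # uncorrelated flaring: the direct (n = 0) emission
echo = np.zeros(n)
echo[delay:] = np.exp(-gamma) * flares[:-delay]  # n = 1 image: delayed, fainter copy
light_curve = flares + echo

# Autocorrelation of the observed light curve, normalized to 1 at zero lag.
x = light_curve - light_curve.mean()
acf = np.correlate(x, x, mode="full")[n - 1:]
acf /= acf[0]

# For pure white noise the ACF would vanish (up to sampling noise) at all
# nonzero lags; the lensed copy produces a secondary peak at the delay,
# of height roughly exp(-gamma) / (1 + exp(-2 * gamma)).
peak_lag = int(np.argmax(acf[1:])) + 1
print(f"secondary ACF peak at lag {peak_lag}, height {acf[peak_lag]:.3f}")
```

This is also why resolution is not strictly required: the echo shows up in the time correlations of the total flux, not only in the image.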
I think right now, because our image is so blurry, we can't really see very much, but as soon as we get above a critical threshold where we have sharp data, it should be so over-constrained that we should eventually see signatures of these things all over the place. And the experiment that I was describing is really only the lowest-hanging fruit. A very exciting experiment, I think, would be, for instance, to try to see not just one ring but two successive subrings. Say you see the n = 1 and the n = 2 subrings, and you can measure their widths as a function of angle. The ratio of their widths should be e to the minus gamma, and gamma actually varies around the photon ring. So if you could see two successive subrings, you could test at every angle around the ring whether you get the correct demagnification factor. I think there are many, many relations in this data, essentially because all the images that you see are images of the same stuff, just demagnified. And that imposes many consistency relations that you should be able to look for as soon as you can start resolving these subrings. Cool. Okay. Thank you, and thank you, Alex, again for this wonderful low physics webinar. As you said in one of your slides, yes, we will invite you again next season to tell us how this is going. Thank you so much for the invitation. And thank you, everyone, for joining us today. See you in the next low physics webinar. Goodbye. Thank you very much. Good. We are...