Okay, I think we are live. Welcome, everyone, and thank you for joining us for today's physics webinar. My name is Alejandro and I'm going to be your host. Today we're presenting "Event Horizon Telescope Imaging and Models of Sagittarius A* in the Galactic Center" by George Wong. George got a bachelor's in physics, mathematics, and computer science from New York University, and then his master's and PhD from the University of Illinois at Urbana-Champaign, where he worked with Professor Charles Gammie. George is currently a member of the School of Natural Sciences at the Institute for Advanced Study and an associate research scholar at the Princeton Gravity Initiative in the Department of Physics at Princeton. George is also a member of the Event Horizon Telescope Collaboration and the Horizon Collaboration, and he's going to talk about some of the recent results there. George's main research is devoted to models of accretion onto supermassive black holes and the connection between black holes and relativistic jets, and he's an expert on all of these models, so we are going to hear a little bit about that today. We're delighted to have him in our webinar series, which is the last of this season, so stay tuned for the next ones, and remember that you can ask questions over email, through our YouTube channel, or on Twitter, and the questions will be read at the end of George's talk. So without further ado, we will turn it over to George. Thanks for joining us. Thank you everyone, thank you for having me. I'm going to try and share my screen now, and hopefully everyone can see this. Perfect, yes. Great.
So, as Alejandro said, I am a member of the Event Horizon Telescope Collaboration, and today I'll talk a little bit about the imaging and modeling efforts that went into the analysis of the data that the collaboration took in 2017. Again, I just want to thank the organizers and coordinators for giving me the opportunity to talk about this; I think it's really exciting. Okay, so here's a short outline of my talk. You can think of it as being in maybe two and a half sections. First I'll go over what the Event Horizon Telescope actually is, physically, and how it works, and we'll get into some detail later on about this, but the main thing that I want you to take away from this first section, which I hope I'll be able to motivate, is that there are parts of the data, parts of the image, that you can trust, and there are some parts that you should perhaps trust less, and so it's important to understand what goes into making the image so that we can understand what parts of the image to trust. That's the first part of the talk. Then, in the second part, I will talk about the astrophysical modeling that went into actually interpreting the data after we had taken it and understood the uncertainties, so I'll talk about the physics of accretion and then the precise constraints that we applied and what they meant. After that I'll talk about the future, especially in the context of these astrophysical models. One thing that I feel is important to point out is that the Sgr A* (Galactic Center) set of papers that we published were, in total, depending on exactly what you count, about 400 pages. There is no way that I can cover 400 pages worth of content here, so again, I'm just going to be talking about the imaging efforts and the theory, but that does not mean that this is all that we did; we had other tests, for example of general relativity, that I will not be discussing now.
Okay, so what is the Event Horizon Telescope? I think it's best to start with a bit of motivation for what the Event Horizon Telescope aims to do. What is its goal? In a sentence, it's to produce images (whatever that means) that are really, really high resolution, so that we can actually see the event horizons around nearby supermassive black holes. I have this animation, produced by the European Space Agency, that shows the scales involved in this question. The nearest supermassive black hole is at the center of our galaxy, and it's supermassive, it's very large, it's millions of times the mass of the Sun, but at the end of the day it's also quite far away from us, so it appears very small. What you just saw in that movie was scanning through what you would see if you were to look up into the sky at where the Galactic Center should be. I'll play that movie again. What would you see in optical? Here you can actually see the dust in the way. If you change over from optical to radio, you can see through some of this dust, and there are all these really interesting structures on larger scales that I will not be talking about at all, but again, this animation is just to give you a sense of scale, of how far in you have to zoom to see things. One of the typical size comparisons, which may be a little more grounded, more comprehensible, is this: if you were to place something like a quarter, a coin, on the surface of the Moon and look at it, you would want to try and resolve features of that size. This is a rather challenging task. So how do you do it? Well, you may remember that it's pretty hard to get high resolution if you don't have a very large instrument, so there's a natural Earth-bound limit for how large you might be
able to make your telescope, and that limit in size is of course the diameter of the Earth. So in principle the largest telescope you could make on Earth would be a telescope the size of the Earth. Of course there are issues; you can't actually make a telescope the size of the Earth, but what you can do is take a whole bunch of different telescopes that are all over the planet and link them together through this VLBI (very long baseline interferometry) method that I'll talk about on the next slide. But I thought it was a good idea to show this other animation, again made by the EHT, showing where all these telescopes are (that's what you see on this globe that's turning on the left side of the screen) and also showing you what all these telescopes look like. These are radio telescopes, so they don't have glass lenses; you're not going to actually look through the telescope. They're more like giant antennas: you point them up at the sky and you listen to radio frequencies. In particular, we are listening at 230 gigahertz or just below that, which, if you prefer wavelengths, is 1.3 millimeters. The way that interferometers work, the way that VLBI works in general, is that you take all of these different pairs of telescopes. In that animation of the globe, or in the image of the globe you see here, there are five telescopes, or five observatories, on screen. You take each pair of those observatories and you measure the distance between the two observatories, and you also measure the orientation. For example, there's this horizontal line between SMA and LMT, so you measure that orientation as being horizontal, or maybe diagonal, and you measure the distance between them, and then over time you correlate the radio signal that you're recording between those two observatories, and that
correlated signal ends up mapping to a Fourier coefficient in the Fourier domain, and the position of that coefficient, the position of that data point, corresponds exactly to the orientation between the two telescopes and the distance. To make that a little more clear, hopefully, what I'm showing you on the right here is just the locations of all of those different telescope separations, colored by pair of telescopes. If you have two telescopes that sit north-south, one on top of the other, at some distance, then they'll show up as a point along the north-south direction, the y-axis, of these two plots on the right. One thing you'll notice is that a single color in fact corresponds to many different points in these plots, and that's just because, over time, the Earth turns, and so the apparent distance and apparent orientation of every single pair of telescopes changes. This evolution will turn out to be very important later on, and also kind of annoying; so, useful and annoying, as most things are. The projected distance between each pair of telescopes corresponds to what we call a baseline, and you will notice in these two plots on the right, which are just two different days of looking at the Galactic Center, that the points do not in fact cover the entire domain. We call this a sparse sample, and one of the technical challenges associated with interpreting the data we take is how to take this sparse sampling of the Fourier domain and back it into some information about the full image. You may recall from general Fourier theory that if you want to learn about everything that's going on in an image via the Fourier transform of the image, you still need to, in principle, know information about
everything in the Fourier domain. We obviously don't have that, so we have to build in some assumptions, for example about the smoothness of the data, or what we expect in general about the overall structure or the size: is it possible that the image is very, very large, or not? We have to make all of these choices. There's a lot of history that goes into the best ways to solve this inversion problem, but at the end of the day there is an algorithm involved in translating all the data that we take into an image, or into a sequence of images, a movie. Before I talk about the actual images that we produced, I want to go back to this point about how, over the period of the observation, which is maybe eight to twelve hours, approximately a third to a half of a day, the points change. Some technical information: each data point, or scan, is about 10 minutes long, and this is just associated with the technical construction of the actual telescopes; it takes some amount of time to record data and accumulate enough signal-to-noise. You can lower or increase this depending on technical considerations, but 10 minutes is a good number to have in your head. So that's one timescale: 10 minutes for a single data point. The course of an observation is approximately eight or nine hours, which is significantly longer, and this raises an interesting problem in modeling the Galactic Center in particular. Again, I'm going to be talking about the imaging of the black hole at the center of the galaxy, which we know from various other measurements, or at least have strong evidence for from other measurements, has a mass of approximately four million times the mass of our Sun. So if you remember the speed of light and you remember Newton's constant, you can come up with a
characteristic time associated with approximately how long it should take light to cross an object that massive, and that turns out to be on the order of 20 seconds. You can then ask: suppose I have an object that's approximately this size, and maybe some gas that's swirling around it on a reasonable circular orbit; how long does it take for a blob of gas to go all the way around? There are some assumptions about the source that go into answering this question, but it turns out that the answer is somewhere on the order of five to 20 minutes. And this is why it's such a problem that each data point is 10 minutes long: it means that for a single data point you're not getting a snapshot of the evolution, you're getting a smeared-out time average of the evolution of the source over an entire orbital time. You have to keep that in mind; that's point one. Point two is that every single data point, which corresponds to this smeared-out average, is taken maybe many dynamical times apart. You take one data point at hour zero of your observation, then you wait dozens of dynamical times, and then you take another data point. Again, these data points correspond to different angles in the Fourier domain, because the Earth is rotating, so you're getting information about the spatial structure in your image, which is already this smeared-out average image in the first place, but you're getting information about the spatial structure along one orientation, one direction, maybe hours, dozens of dynamical times, before you're getting information about the spatial structure along another orientation. So you have to be really, really careful when you then take these
two data points, these two different orientations, and back them out into an image: what should you actually trust from the image at the end of the day? If you want to understand this from a more visual perspective, I think the plots on this slide are rather useful. You can ask what the visual correspondences of this process are, either as it happens purely in the image plane or purely in the Fourier domain. What I'm showing you on the top left, in this graphic produced by Andrew Chael and Avery Broderick, is a potential image; it's a simulated image. Maybe this is what we would see for the black hole accretion flow. It doesn't mean this is what we expect to see; it's just, what if we saw this? So that's the top left, and going from the top row to the bottom row, I'm just taking a Fourier transform: the Fourier transform of the top image is the bottom image, this kind of blobby thing. The exact details of that don't matter, but what does matter is that when we're actually sampling the Fourier domain (we have a VLBI instrument, so we're not taking an image, we're taking these Fourier amplitudes), we're not seeing that full image. What we're instead seeing is that image multiplied by a whole bunch of delta functions. In the bottom center, I'm just showing you the locations of where we're taking the data, and so the actual imaging process corresponds to taking the truth image, the Fourier data in the bottom left, and only sampling it at all of those data points. At the end of the day what you get is a list of numbers, because you're not getting this two-dimensional data anymore, and you can plot that list of numbers however you want. In the bottom right, what
I'm showing here is just the sampled amplitudes of the complex coefficients plotted against the displacement from zero in the Fourier plane, and that doesn't matter so much. But what I think is really interesting, and what can build intuition here, is that if you take the Fourier transform, or the inverse Fourier transform, of your sparse coverage, you get this blocky or blobby patterned structure that you're seeing in the top center. You have all these different blobs, these different splotches, and this corresponds roughly to the spread of all the different data points in the Fourier domain. I want to be very clear here: this top-center image is purely a product of the bottom-center image. The dirty beam is purely a product of where your telescopes are; it doesn't matter what your source image is. This is how well you can see your source image, in some sense. Of course, you'll remember that multiplying in the Fourier domain is the same as convolving in the image domain, so in principle the best that you can do, if you don't know anything else about the image, is to recover an image that looks like the top right: you multiply your Fourier-domain sampling by your UV coverage, which is the same thing as convolving your truth image with the dirty beam, and you get this blobby structure. Okay, so why did I take all this time to explain what's going on here? I want to again emphasize that there's this natural blobby, patterned structure that comes out of the dirty beam, which is purely a side effect of the fact that you are not sampling the Fourier domain evenly; you're getting a sparse sample of it. So again, the problem of imaging, of doing VLBI, is to somehow recover what you think the truth image at the top left should be, given information about the bottom right and the bottom center. And there are a whole
bunch of different algorithms that can do this: some algorithms that are decades old, like the CLEAN algorithm, and some that are relatively new, associated with machine learning. I'm happy to answer questions about these algorithms, to an extent, at the end of the talk, but I'm going to skip over the details and just show what the output of these algorithms is. Since we only have incomplete data, only a sparse sampling of the Fourier domain, there are many different possible images that are, in some sense, completely consistent with the data that we took. In the top left I'm just showing a randomly drawn sample from the so-called top set of images that are consistent with the sparse sampling of data. There are different assumptions, for example different parts of some hyper-dimensional parameter space, that you can explore, and you construct these different images. What you will notice, if you look closely at these images, is that they all kind of look like they have a ring, maybe, but some of them have more pronounced rings than others; some look particularly blobby; some have two predominant bright spots, some have three, some have one. I'll get into the consequences of this later, but I want to now compare the imaging reconstruction for the Galactic Center (this clustering question, this sampling question) to the reconstruction effort for M87. The EHT, the Event Horizon Telescope, also did this kind of procedure when it produced images of the black hole at the center of the M87 galaxy, and what you will notice here is that there's not nearly as much spread. This, as it turns out, corresponds to the fact that M87's black hole is about a thousand times larger than Sgr A*; it has about a thousand times more mass, which means it varies much more slowly. So nine hours, or you
know, approximately eight hours, for Sgr A* is like a year for M87. The amount of variation you expect to see in Sgr A*, in the Galactic Center, over the course of an observation is significantly greater than the variation you expect to see for M87, so M87 is overall just an easier source to image, because of this timescale question. Okay, so you've run these imaging algorithms and produced all these different images; now what do you do? Well, you can try to group all these images into what are called clusters. This is a very well-defined mathematical question; there are different ways you can pose it mathematically, but at the end of the day you're trying to identify groups of images that are similar in some sense. How much extended emission do they have? What is the radial profile of emission? And so on; you can think of a whole bunch of different characteristics for this question. You run this clustering algorithm, on the left now, on the Sagittarius A* potential images, and what I'm showing you down at the bottom is a whole bunch of different selections: if you iterate through all the different images that I'm showing in the small images on top and group them, you get these four different clusters. After identifying a cluster, what you can then do is take all of the images that belong to that cluster and average them together, just to get a kind of representative image of the cluster. To be clear, this average of the cluster does not have to correspond to an actual image that was reconstructed, but it is representative; hopefully it picks up morphological features that are representative of the cluster overall. Just for completeness, I'm going to show the same process happening for M87. You will be
unsurprised to see that the clusters for M87 all look very, very similar. Then, at the end of the day, you take these average images that you get from this clustering procedure and you average the average images together, and you get something like a representative image of the overall structure that you would expect to see, the structure of an image that's consistent with the data. In the case of M87 there are some really robust features: all of these clusters, all of these average images, have a bright side on the bottom and a very clear ring structure, and that's what you see for the M87 reconstruction. But that's not the case for the Sgr A* reconstruction. Some of the average images have what appears to be a very clear ring and three blobs, with a bright blob on the top, or maybe a bright blob on the bottom, or maybe no ring at all (or, depending on what counts as a ring, no very clear ring) but just two blobs, and so on. This is what I meant when I said I want to warn you about what you should interpret from the images: these reconstructed images may be rather misleading. To provide a concrete example of what I mean by misleading, I thought I would show a high-resolution simulation where we absolutely know the ground truth. I take a high-resolution simulation of an accretion flow and I look at it over time; I produce a time series, a movie, of the simulation, and then I run it through the imaging pipeline. You can then ask whether the pipeline reconstruction corresponds to particular features in the original simulation: we can actually compare the reconstruction to the truth and ask how similar or how different those two are. The upshot here is that the average image does not necessarily have to reproduce any individual frame, and moreover it can have features that are incredibly misleading. If you look at these four
snapshots from the top, these are again coming from a high-resolution GRMHD simulation, and I'll explain what I mean by that later (sorry to keep pushing things off to the future), but effectively this is a high-resolution simulation of what we think a reasonable accretion flow might look like. I've selected four snapshots from this overall movie, and you see this kind of spiral structure, this thin ring feature, and some bright spots, maybe corresponding to what look like filaments that are falling in toward the central black hole. That's characteristic of these four snapshots, but also of many of the snapshots that you pull from the overall time series. Again, since we have the truth, since we have the snapshots from the high-resolution simulation, we can just actually average the simulation, and that's what you're seeing in the bottom left. If we average all of the frames that go into the simulation, all the frames that we're going to simulate an observation of, you see something that is perhaps not so surprising given our knowledge of the snapshots: you see this dark region in the middle, you see this thin bright ring that looks particularly bright in the bottom right, maybe on the right-hand side more than on the bottom, and then you see this extended feature. Now, that's the high-resolution average. You can then say, okay, I certainly cannot hope to recover a high-resolution image, so what if I blur it to the best possible image I could get by constructing an instrument on the Earth? Again, remember that the size of the Earth limits the resolution you can achieve; the larger your telescope, the smaller the features you can see. So you blur the high-resolution weighted average to approximately the EHT resolution, where the EHT resolution is represented by that white circle in the bottom right, and again you
see something that I think is not particularly surprising: again, you see this bright ring of emission, particularly bright on the right and on the bottom, and you see this dark feature in the middle. Okay, so this is what we expect to see; there's nothing surprising. You then run this through the reconstruction procedure: you simulate an observation with a whole bunch of telescopes distributed exactly like the Event Horizon Telescope's telescopes are distributed, you run the imaging algorithms, you run the clustering, and so on, and then you can produce an image that looks like the reconstructed image here in the third panel at the bottom. I just want to stress that this reconstructed image doesn't really look like any snapshot. In the top right you see these two blobs; those two blobs don't correspond to two blobs in the EHT-resolution observation, they don't correspond to two clear blobs in any of the snapshots, and in fact one of the bright spots is on the top right, which you just don't see in any of the snapshots, and you certainly don't see in the average images without this imaging reconstruction procedure. So when you look at the image that the EHT published, or the set of representative images, and you see these three different features, and you see that one of the features in the top right is particularly bright, you should look at that with some skepticism and not draw any inappropriate conclusions from it, because those features might not be real. Now, to be clear, it is totally possible that there are three bright spots in a particular snapshot; those three bright spots can absolutely be real. There are reasons to believe you should see bright spots; in fact, in the simulations you do see that we produce bright spots. Bright spots totally could be real, but you should not yet infer that these features are real, and in fact you should never infer that the features in the images that we have published so far are real, in this sense.
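The sparse-sampling and dirty-beam picture from a few slides back can be sketched numerically. The following is a toy illustration, not EHT code: a synthetic ring image and a random, conjugate-symmetric Fourier mask stand in for a real source and real (u, v) coverage, so every number here is made up. It shows the two claims above: the dirty beam depends only on the coverage, and the recoverable "dirty image" is the truth image convolved with that beam.

```python
import numpy as np

# Toy source: a thin bright ring on a 128x128 grid.
N = 128
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)
truth = np.exp(-((r - 20.0) ** 2) / (2 * 3.0 ** 2))

# Full Fourier transform of the truth image (the "visibilities").
vis = np.fft.fft2(truth)

# Sparse coverage: mark a few hundred random Fourier points as sampled,
# then symmetrize, since visibilities obey V(-u, -v) = conj(V(u, v)).
rng = np.random.default_rng(0)
mask = np.zeros((N, N))
idx = rng.integers(0, N, size=(400, 2))
mask[idx[:, 0], idx[:, 1]] = 1.0
mask = np.maximum(mask, np.roll(mask[::-1, ::-1], shift=(1, 1), axis=(0, 1)))

# The "dirty beam" is the inverse transform of the coverage alone:
# it depends only on where the telescopes are, not on the source.
dirty_beam = np.fft.ifft2(mask).real

# The "dirty image" is the inverse transform of the sampled visibilities.
# By the convolution theorem it equals the truth image circularly
# convolved with the dirty beam: blobby structure, not the clean ring.
dirty_image = np.fft.ifft2(vis * mask).real
```

Real pipelines then go beyond the dirty image by adding assumptions (smoothness, compactness, and so on) to deconvolve this beam, which is exactly the inversion problem described above.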
So the problem then becomes: how do you identify which features are robust? At the end of the day we have these great data; you can either analyze them in the Fourier domain or you can try to gain intuition from the image plane, but if you want to use these data to learn something about the source, something about the black hole or the plasma around the black hole, to do that kind of astrophysics, then you have to identify what the robust features are, because you could completely get the wrong idea if you don't. I'll come back to what those robust features are. Now I'm going to interlude and go to the second part of the talk, which is about how we actually produce astrophysical models of what's going on, but while I'm talking about this, I want you to keep in the back of your mind: what are these robust features, or what could we possibly identify as a robust feature? Okay, so what's going on? We think that the black hole at the center of the galaxy is in fact a black hole; I shouldn't have led that way. We think that whatever is at the center of the galaxy is a black hole, a very large supermassive black hole. We expect that there is a bunch of gas nearby (in fact, we see evidence of gas nearby), so what we expect is going on is that there's a whole bunch of gas near the supermassive black hole; the gas is gravitationally bound and it's falling onto the black hole. And it's not just gas, it's plasma, and this plasma has magnetic fields in it, so the magnetic fields are being pulled along with the gas as the gas falls, or accretes, onto the central black hole. If you just have this picture in mind, it turns out there are a lot of different parameters that could go into satisfying this model, so I thought I'd show these four neat 3D animations, 3D representations, of what the gas could be doing. In each of these images there's a black hole
that's very, very small on your screen; it's probably about the size of the dark region in the "a" which shows up in each of the panels' labels at the top left (a = 0.94, for example), it's about that size. We're showing contours of the density of the gas that's falling onto the hole (it gets denser as it gets closer to the hole), and these white lines are showing the structure of the magnetic field. There are a whole bunch of different ways that you could drop gas and magnetic field onto the black hole, and ideally, if you want to build an astrophysical model, if you want to somehow compare the observations to these astrophysical models, you want to sample the space of different possible scenarios for the accretion. But before we get to that, because that's a really hard problem, let's try to come up with the simplest model we can think of, given that we know there's gas and we know there's maybe a black hole, or at the very least some object with some mass and some approximate size. The simplest model you can construct, or one of the simplest, is a so-called one-zone model, which just means that we assume there's a uniform ball of matter that has some size but is completely uniform throughout: it has the same number density everywhere, the same temperature everywhere, the same magnetic field strength everywhere. From this model we'll see if we can gain any intuition for what's going on in the source, and use its results to motivate what part of parameter space we should look in for the more complicated models. So what are the inputs to this one-zone model and procedure? Again, we know the approximate size of the object. How do we know the size? We know it from, for example, the GRAVITY observations and other
tracking observations of objects near the Galactic Center. We can estimate the paths of those objects, and from this you write down a potential; you use your Physics 1 knowledge to back out what the mass should be. Given this mass, you get an approximate size, and you know the approximate distance from the Galactic Center, wherever this object is, to the Earth. So you write down a very simple model. You say, okay, I have some object that's approximately, let's say, three times 10 to the 12 centimeters in radius, with uniform magnetic field strength, uniform number density, and uniform temperature, and you ask: what does this object need to do in order to produce the observations that we see? In particular, what does the temperature of the particles need to be in order to produce the observed flux at a particular frequency? If you literally just look up at the source and count the photons at approximately 230 gigahertz, you get a number; well, rather than a number of photons I'll give you an energy: it's approximately three janskys worth of flux at this 230 gigahertz frequency. So then you can ask: what could possibly produce that amount of energy? You work out all the math (there are some nice nonlinear equations), and it turns out that what you expect to be happening is that there are a bunch of ions, but the electrons are what matter here. You have a whole bunch of electrons and ions, and the electrons have some distribution of velocities, so they're going all different ways, but they have some average velocity in some sense. There are magnetic fields in this region, and what the electrons are doing is orbiting around the magnetic field lines. So again, you remember
your early physics knowledge of how charged particles move in the presence of a magnetic field: the charged particles will rotate around the magnetic field, and charged particles, as they are accelerated in circular orbits, will emit radiation. If they're moving particularly fast, they will emit what's called synchrotron radiation; it's like a relativistic cyclotron process. So there are certain properties of this radiation, you work all this out, and it turns out to be self-consistent. The average energy of the electrons in this uniform ball, which is kind of a measure of their speed, will be about 10 in units of m_e c², the rest-mass energy of the electron; the number density turns out to be about 10⁶ particles per cubic centimeter; and the corresponding magnetic field strength is about 30 gauss. Okay, great, so here's my simple model. But as we saw before, the simple model probably doesn't explain what's going on: the image has some source structure, it's not just a ball, and it's also changing over time; we know the movie, the time dependence, is interesting. So the simple model is not enough. What do you do? You take this one-zone model, you take these outputs, these approximate numbers, and you run a fluid simulation. You say: okay, it's not just a uniform ball falling onto the event horizon; in fact it's a large cloud of gas with some density and some magnetic field strength, and it falls in and does something. So what does it do? Well, it obeys conservation laws, certainly. You can write down conservation of particle number (you don't expect to be creating a whole bunch of electrons out of nowhere), it obeys conservation of stress-energy, so momentum and energy and so on in relativistic language, and of course it has to obey Maxwell's equations.
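As a rough sanity check on those one-zone numbers, you can verify that the characteristic synchrotron frequency lands near the observing band. A minimal sketch, not the collaboration's calculation: Θe = 10 and B = 30 G are the one-zone values quoted above, the CGS constants are standard, and the factor-of-three boost from the dimensionless temperature to the emitting Lorentz factor (the mean of a relativistic thermal distribution) is an assumption about where the emission peaks.

```python
import math

# One-zone values quoted above (assumptions for this sketch)
theta_e = 10.0   # dimensionless electron temperature, kT_e / (m_e c^2)
B = 30.0         # magnetic field strength [gauss]

# Electron gyrofrequency nu_B = e B / (2 pi m_e c), in CGS units
e_charge = 4.803e-10   # electron charge [esu]
m_e = 9.109e-28        # electron mass [g]
c = 2.998e10           # speed of light [cm/s]
nu_B = e_charge * B / (2 * math.pi * m_e * c)

# For a relativistic thermal distribution the mean Lorentz factor is ~3 theta_e
gamma = 3.0 * theta_e

# Characteristic synchrotron frequency ~ (3/2) gamma^2 nu_B
nu_c = 1.5 * gamma**2 * nu_B
print(f"nu_B = {nu_B:.2e} Hz, nu_c = {nu_c:.2e} Hz")  # nu_c ~ 1e11 Hz
```

With these numbers ν_c comes out around 10¹¹ Hz, the same order as the 230 GHz observing band, which is the kind of self-consistency the one-zone argument relies on.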
write that down and you get a set of conservation laws. It's very complicated, so you put these laws into a computer and run a computer simulation, a whole bunch of computer simulations in fact, and at the end of the day, from every single computer simulation, you get a time series describing what the density looks like, what the internal energy or the temperature of the gas looks like, what the velocity of the gas looks like, and what the magnetic field strength looks like, as a function of both time and space around the black hole. Then you take this fluid description, this time series, and you simulate the synchrotron process, and you ask: okay, all these different particles emit light; where does that light go, and what does that light look like to us? So this is the full simulation procedure; again, this is a computational question. To be a little bit more precise about the simulation procedure, I'll just talk about some of the inputs to it. In this fluid simulation step, the two main inputs are, first, the spin of the black hole: black holes have mass, and we know approximately what the mass is, but they can also have angular momentum, and this is a parameter we need to vary; we don't know a priori what the angular momentum is, so you choose a whole bunch of different possible angular momenta and run different simulations for each. You also don't know what the relative strength of the magnetic field is. It's possible that the magnetic field is very strong (the magnetic pressure builds up so much that it halts the accretion in some cases) or it could be very weak; you don't know. So when you're doing this forward-modeling procedure, you choose a whole bunch of different magnetic field configurations and a whole bunch of different black hole angular momenta, and for every single one of those you generate this fluid description.
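Written out, the conservation laws just described, for an ideal magnetized fluid in a fixed curved spacetime, take the standard GRMHD form:

```latex
\nabla_\mu \left( \rho u^\mu \right) = 0
    \quad \text{(rest-mass / particle-number conservation)} \\
\nabla_\mu T^{\mu\nu} = 0
    \quad \text{(energy and momentum conservation)} \\
\nabla_\mu \, {}^{*}F^{\mu\nu} = 0
    \quad \text{(source-free Maxwell equations)}
```

where ρ is the rest-mass density, u^μ the fluid four-velocity, T^{μν} the total (fluid plus electromagnetic) stress-energy tensor, and *F^{μν} the dual of the Faraday tensor.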
Again, the angular momentum I think is pretty clear (you can just spin up the hole), but for the magnetic field configuration I want to put in your mind the dichotomy that we care about. Of course there are an infinite number of ways you could set up a magnetic field, and you can't simulate an infinite number of configurations, but what you can do is try to find limiting cases. The two limiting cases are effectively differentiated by whether or not the magnetic field anywhere in the simulation domain, anywhere in your accretion flow, is strong enough to materially stop the accretion. If it is strong enough, if the magnetic pressure is approximately the same as the inward ram pressure, then the accretion is said to be magnetically arrested, and this forms a magnetically arrested disk, or MAD. MAD flows are kind of choppy, as you can see on the left, with these large regions of low density; so this is one side of the possible accretion scenarios. The other side is the so-called SANE side, SANE for "standard and normal evolution," which just means that the magnetic field strength, or the magnetic pressure, is not strong enough to halt the accretion flow. SANE flows are of course still turbulent, but they're much steadier than these MAD flows with their choppy behavior. And I realize I should have noted that what you're looking at here are profiles of the log of the density in the midplane of the simulation: you have an accretion disk that's falling in, I'm slicing straight through the accretion disk, and plotting how much matter there is as a function of time. Just to provide some units, I'm showing time marching forward in units of GM/c³, and one GM/c³ for the galactic center is about 20 seconds. Okay, so those are the inputs
to the GRMHD simulation, the fluid simulation. But then, as I said, you take the output of the fluid simulation and run it through radiative transfer codes, where you simulate what all the electrons are doing, the acceleration and how they emit photons and so on, and there are more parameters that go into this. One of them is just the inclination: if you have a 3D simulation of the fluid, that's great, but are you looking at it straight down, or from the side, or at some oblique angle in between? So you have the inclination. Another parameter that goes into this ray-tracing procedure, this image-generation procedure, is the thermodynamics of the fluid. At the end of the day, your conservation laws treat all of the matter as a fluid, at least in this prescription, with a single internal energy. But remember, what you care about is all of the electrons going around the magnetic field, so you have to translate this one number, the internal energy of the fluid, into a kind of distribution function of what all the electrons are doing: how many are going very fast, how many are going slow. So there's some question about the thermodynamics of the electrons that you also have to input, and again, a priori, we don't know what that is. So these are the different parameters you can vary when generating these potential forward models: starting from the left, you have the GRMHD simulation angular momenta; the magnetic field strength, in other words whether it's MAD or SANE; then you also vary the inclination, the orientation from which you're looking at the black hole system; and the different ways to translate the internal energy into a distribution function for your electrons. And I forgot I had this wonderful graphic representation of the different ways that you can do that modeling of the thermodynamics.
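A concrete toy version of that internal-energy-to-distribution-function translation: for the simplest (thermal) choice, you normalize a relativistic thermal shape dn/dγ ∝ γ²β exp(−γ/Θe) so the area under the curve matches a target number density, then read off the mean Lorentz factor. This is a minimal sketch, not the collaboration's pipeline: the thermal shape, the choice Θe = 10, the n ≈ 10⁶ cm⁻³ target from the one-zone estimate, and the crude midpoint integration are all illustrative assumptions.

```python
import math

def dn_dgamma(gamma, theta_e):
    """Unnormalized relativistic thermal shape, dn/dgamma ~ gamma^2 beta exp(-gamma/theta_e)."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma**2 * beta * math.exp(-gamma / theta_e)

def thermal_edf(theta_e, n_target, n_steps=200_000):
    """Normalization so that integral(dn/dgamma) = n_target, plus the mean Lorentz factor."""
    g_max = 1.0 + 50.0 * theta_e           # truncate the exponential tail
    dg = (g_max - 1.0) / n_steps
    total = mean = 0.0
    for i in range(1, n_steps + 1):        # simple midpoint rule
        g = 1.0 + (i - 0.5) * dg
        f = dn_dgamma(g, theta_e)
        total += f * dg
        mean += g * f * dg
    return n_target / total, mean / total

norm, mean_gamma = thermal_edf(theta_e=10.0, n_target=1e6)  # n ~ 1e6 cm^-3, one-zone estimate
print(f"<gamma> = {mean_gamma:.1f}")  # ~ 3 * theta_e for ultra-relativistic temperatures
```

The mean Lorentz factor comes out near 3Θe, the standard ultra-relativistic thermal result; non-thermal prescriptions change the shape of dn/dγ but must satisfy the same two matching conditions (number density and internal energy).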
just repeat myself here and say: when you're translating the fluid internal energy to an electron distribution function, what you're precisely doing is writing down some prescription for how many electrons you have at a particular Lorentz factor; that's in the bottom left here, dn/dγ, effectively how many electrons are going very, very quickly. And then your constraint is that you want the overall integrated area under the curve to give the right number density from the simulation, and also the appropriate amount of internal energy. It's also worthwhile to point out, I think, that it's very possible for the actual regions you light up when you do this thermodynamic translation to be very different. I'm showing you a snapshot from one of these MAD models with strong magnetic fields and a snapshot from one of the SANE models, showing where the actual photons would be coming from given some choice about your electron distribution function, your electron thermodynamics (and I'm just rotating the view around, so there's nothing interesting going on time-wise there). These different simulations, these different paradigms, can end up producing very different structures, which means that at the end of the day you may be looking at very different parts of the system around the black hole. This is worthwhile to keep in mind. Okay, so this is the slide I thought was following that. At the end of the day, what do you get? You get a whole bunch of images, potentially: you can look at the images at different frequencies, so 86 gigahertz, for example, and you can also look at 230 gigahertz, which is what the EHT observes at. But you don't only get images, you also get spectra, so you can imagine asking the
question of how many photons come out at this frequency versus another, going to higher and higher frequencies. At some point you have to start worrying not only about the synchrotron radiation but also about scattering: photons enter this region with a whole bunch of really hot electrons, so they can inverse-Compton scatter and gain more energy, and so on. So you produce these observables from every one of your simulations, varying all of your parameters: you get these images at different frequencies, you get these maps of energy versus frequency, and you get a whole bunch of different images, because the parameter space is quite large. For reference, there were many different people and efforts involved in this simulation procedure: there were six different fluid evolution codes, two different imaging codes, three other codes used for validation, and one Monte Carlo code to produce the spectra. The overall library ran over four different possible accretion states. I talked about MAD and SANE accretion, but it's also possible to feed the gas in from large distances at weird angles (well, maybe not weird in a cosmic sense, but at different angles), so you can change your boundary condition and get tilted disks, or wind-fed disks where you have blobs that fall in at certain times. We surveyed nine different possible angular momenta for the black hole, and we also varied the possible adiabatic index for the fluid simulation. So those are the parameters for the fluid simulation, but we also have these different prescriptions for the electron distribution function: we had seven thermal and six non-thermal, and we looked at nine different observer inclinations. At the end of the day we ended up producing about six million possible images and
about one and a quarter million possible spectra. If you add up how much actual binary data corresponds to just these electromagnetic observables, it turns out to be about 50 terabytes worth of data, and this is not including the fluid simulations themselves. So this is a pretty large amount of simulation data to compare against. Okay, so now I'll get into the constraints. There were 11 different constraints: five came directly from EHT image data, four came from other data that were taken at the same time but were not directly produced by this telescope array, and then two other constraints came from the variability, the time dependence. I'll treat each of these in turn, but since each of the constraints can be rather mathematically involved (I could spend five slides on each one), I'm not going to do that; I'm just going to give a broad overview, and of course I'll be happy to answer questions later on, or you can read the papers if you want to. Okay, so first the EHT constraints directly. The simplest thing you can do is just ask: what is the image size, approximately? It turns out that the extent of the emission in an image corresponds to something like the second derivative of the Fourier transform. And so you can say: okay, given the data, and again I want to find robust features, what will be robust? Maybe I don't know the Fourier data at every single point, but I do have a good sense of how smooth things are in a particular region, let's say. It turns out to be the case that you can measure, or get constraints on, what that second derivative is, and you can get constraints on the approximate image size, the compactness. So this is the first, so-called
pre-imaging constraint, the second-moment constraint, and it turns out not to be that discriminating, in the sense that 98 percent of the six million images you could produce are consistent with the data. Okay, so this is one constraint. Another constraint that again comes from the Fourier domain is just to ask: okay, given a simulated image, if I were to observe it and take the Fourier transform, is the location of the zeros consistent with the data? What I'm showing you here on the right is a set of two different possible simulated images and, roughly, their Fourier transforms, and what I want to draw your attention to in the bottom row are these two different gray regions. Effectively, the question we asked here was: given the simulated image, the Fourier transform of the simulated image, does the first place where these colored lines get low, where the visibility amplitude goes toward zero, lie within this first gray region? That's constraint one. And constraint two, for this visibility-amplitude morphology test, is: do the data points in the second gray region have sufficiently low power? On the left-hand side we're showing an image that passes, and you can contrast this with an image that doesn't pass on the right-hand side. So this is another constraint you can apply, and again, this is a robust feature of the data, because you don't need to know all of the data points across the entire x-axis; you're only looking at places where you actually have a fair amount of information. If you apply this constraint, it turns out that about 80 percent of possible images pass, so it's not particularly constraining, but it's better than 98 percent. And it's also interesting because it means you're actually constraining
models with just the pure EHT data (these boxes just label how the image on the right fails). Another thing you can do is model fitting. You can say: well, I know I don't have a perfect high-resolution image, but I do know that there are some geometric features that are reasonable. What are these geometric features? One is the approximate diameter of this dark region: if there is a ring, let's fit for its diameter, come up with some distribution over time, and ask if this distribution is consistent with the distribution from a potential model simulation. So this is the ring-diameter constraint. You can also say: okay, I know I not only have a dark region, but I have some donut around the dark region, and that donut has some width. Is that consistent? And not only is it consistent in general, but is the distribution of those widths, the distribution of the width of the donut as a function of angular location along the donut (maybe it's really wide on one side and really narrow on the other side), consistent between the data and a particular simulation with a particular set of parameters? You can also measure the asymmetry: if the image is brighter on one side than the other, is that consistent? This set of constraints turns out to be quite constraining, especially the ring-width-distribution constraint, which is in fact the most constraining of the constraints from the pure EHT data. So now I'm going to move on to some of the non-EHT constraints very quickly. Is the spectrum consistent? Do you have too much energy in places? Again, you can apply this constraint, and okay, maybe I should back up a little bit and give an example.
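As an aside, the asymmetry measure mentioned a moment ago is simple to sketch: one common choice is the magnitude of the first azimuthal Fourier mode of the ring brightness, normalized by the mean brightness. This is a toy version, not the collaboration's fitting code, and the test profile I(φ) = 1 + 0.5 cos φ is an illustrative assumption, not EHT data.

```python
import cmath, math

def ring_asymmetry(brightness):
    """First azimuthal Fourier mode of a ring brightness profile (sampled
    uniformly in angle), normalized by the mean brightness; 0 = symmetric ring."""
    n = len(brightness)
    mode1 = sum(I * cmath.exp(1j * 2 * math.pi * k / n)
                for k, I in enumerate(brightness)) / n
    mean = sum(brightness) / n
    return abs(mode1) / mean

# Toy profile: a ring that is brighter on one side, I(phi) = 1 + 0.5 cos(phi)
phis = [2 * math.pi * k / 360 for k in range(360)]
profile = [1.0 + 0.5 * math.cos(phi) for phi in phis]
print(f"asymmetry = {ring_asymmetry(profile):.3f}")  # -> 0.250 for this profile
```

As with the diameter and width, the point is not the single number but its distribution over time, compared between the data and each simulation.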
if you go to a high frequency, say (the X-ray constraint is the rightmost of these four data points in each frame), and you don't produce enough X-ray emission, is that a problem? You'll see that there are these arrows here, which imply that it isn't a problem, because it turns out there are a whole bunch of ways to play with the electron distribution function and produce more emission in those bands. So this is one constraint: are you producing too much emission in the X-ray, or in the near-infrared, which is the other data point? You can also apply size constraints, the second-derivative, second-moment constraint, at frequencies other than 230 gigahertz: I mentioned that 86 gigahertz was one of the constraints, and so we applied a second-moment size constraint from the 86 gigahertz data as well. Anyway, these are the non-EHT constraints, and at the end of the day you apply all these constraints to all the models. What you want to do is identify which models, which parameters, belong to models that are consistent with all of these constraints, the nine I've talked about so far, and this is what you get. I'm not going to walk through every single arc region in this plot, but I will point out that there are some general trends. The models that survive are MAD, in other words they have these strong magnetic fields, with a particular orientation for their accretion flow: the angular momentum of the accretion flow is oriented in the same direction as the angular momentum of the central black hole. So MAD, prograde spinning. Also, what is the imaging inclination? We seem to be looking down at the black hole. And we can also learn something about the electron thermodynamics: the electrons are probably cool. So these are
the surviving models. There are also some summary statistics for what's disfavored. Single-temperature fluids are disfavored; this is again a question of the electron thermodynamics. Looking at the system edge-on through the disk, instead of down onto it, is disfavored. Retrograde simulations are also disfavored; in other words, from these constraints in particular, and given the set of models we looked at, we think it's somewhat unlikely for the accretion flow to be rotating around the black hole in a direction opposite to the direction the black hole itself is spinning. Okay, those are the constraints summarized so far. But I said there were 11 constraints and I've only gone over nine of them. The other constraints are associated with variability, and you'll recall that in the beginning I talked about how variability is a big factor in the analysis. It has to be, because we're not actually taking snapshots, we're getting these kind of averaged quantities, so it's really worthwhile to consider variability. So how can you do that? I'm going to talk about just one of the potential variability constraints, the so-called modulation-index constraint. Mathematically, you take a light curve (it doesn't even need to be a resolved image, though we did have image-variability constraints too), the overall flux you're getting as a function of time, and you ask: what is the spread in the variation versus time, the standard deviation of that, the sigma, divided by the average signal, so sigma over mu, over some window? A three-hour window is what we chose, though you can vary this, and you can ask questions and confirm that it's a rigorous statement.
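That σ/μ statistic is straightforward to compute. Here is a minimal sketch that evaluates it over sliding three-hour windows of a light curve; the synthetic light curve (a 3 Jy mean with roughly 10% fluctuations at ten-second cadence) is an illustrative assumption, not Sgr A* data.

```python
import random, statistics

def modulation_index(flux, times, window=3 * 3600.0):
    """sigma/mu of the flux over sliding windows of the given length [seconds]."""
    out = []
    start = 0
    for end in range(len(times)):
        while times[end] - times[start] > window:   # shrink to the window length
            start += 1
        seg = flux[start:end + 1]
        if len(seg) > 2 and times[end] - times[start] >= 0.9 * window:
            out.append(statistics.pstdev(seg) / statistics.fmean(seg))
    return out

# Synthetic light curve: mean 3 Jy with ~10% fluctuations, 10 s cadence over 12 h
random.seed(0)
times = [10.0 * k for k in range(12 * 360)]
flux = [3.0 * (1.0 + 0.1 * random.gauss(0, 1)) for _ in times]

mi = modulation_index(flux, times)
print(f"median modulation index = {statistics.median(mi):.3f}")  # ~0.1 here
```

The real comparison is then between the distribution of this index in the historical Sgr A* light curves and the distribution produced by each simulation.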
So, again: the average variation in the signal over a three-hour window, divided by the average signal in that window. That's a number. You plot the distribution, because you have historical information about how much Sgr A* varies over time; you plot all of that, and you ask if the distribution from the true, observed data is consistent with the distribution from any of the simulations. And this is, I think, one of the most interesting results to come out of the analysis on the astrophysics side, because it turns out that almost all of the models we have, almost all the simulated fluid models, are too variable. This is, I think, rather surprising: if you had asked me, or asked us, five years ago what we would expect, we would actually have expected that they're not variable enough. You add in more physics, a more precise treatment of the electron thermodynamics, and you expect more variability; but in fact we saw that there's too much variability. And when you apply this constraint to the data with the other nine constraints, nothing that we did worked; no model passed all 11 constraints. Which again, I think, is kind of exciting, because it means we have a ways to go. I should very briefly comment that, perhaps unsurprisingly, the SANE models are less variable than the MAD models; this just ties back to the slide before, where I showed the very chaotic MAD inflow with all those bubbles. So MADs are more variable than SANEs, but still, there isn't any model overall that passes all the constraints. Okay, so what did we learn? We produced petabytes of fluid simulations and terabytes of simulated observables, and as part of this process we
learned that our fluid codes (again, I said there were six different fluid codes, and there were multiple imaging codes) all produce consistent predictions. I shouldn't say surprisingly; we are very happy that they are all consistent. So this was great. We compared our models against 11 different constraints, and the EHT image constraints, this visibility-amplitude morphology and the geometric fitting, turn out to reject a whole bunch of models, so the EHT image does tell us something about the astrophysics. We learned that the prograde MADs are favored; we learned something about the fluid thermodynamics (single-temperature models are disfavored), in the context of our models; but we also learned that most of our models are too variable, which again points us toward something to do in the future. And before I get to my summary-and-future-thoughts slide, I want to caution that our best-bet models (my bullet point number five here, and also the conclusions of bullet point six) are all with respect to our prior. We did not simulate every possible spin; we only simulated nine different angular momentum states. So we had some prior, and we only sampled the space of possibilities very sparsely. We believe the trends, so we believe that prograde configurations are more likely than retrograde ones, but the fact that, for example, a +15/16 spin model might do significantly better in one analysis than a +1/2 spin model: you shouldn't trust these things to a quantitatively precise extent, because we only sampled our prior very sparsely. And we didn't include all the possible physics; we didn't include, for example, viscosity, and viscosity could change things. So best-bet models are with respect to our prior: don't believe that a reported
model that does very well is definitely what's going on, or that it's unlikely to change when we include more physics. Okay, so here's my one-slide what's-next discussion. Evidently we need to improve our fluid model: we need to add something that will allow us to produce simulations with less variability, to be consistent. How do you do this? Well, you add conduction, you add viscosity, you change the fluid composition, for example. You could include radiation in the fluid model: I just talked about conservation laws for number density and stress-energy, plus Maxwell's equations, but you can also add a photon fluid that transfers momentum and so on, so maybe radiation matters. We can also include even more constraints: we didn't have any models that passed the constraints we have, but we can add more constraints and see what we learn from those, and we have data from different years. We probably want to consider alternatives to our boundary conditions; again, we just need to sample the parameter space a bit better, and, as I said, change the gas composition. In the future of the actual observations, what can we expect? At some point in the future we will actually have high-resolution black hole movies, and we'll be able to get far more information from those movies than we get from, you know, 16 blurry pixels. Hopefully at some point we'll be able to actually track fluid parcels and the magnetic field near the horizon; that might tell us something about GR and the connections between black holes and relativistic jets. We might be able to put satellites in space and overcome the limits of the size of our planet, which would be great: really, really high-resolution movies. And we can also look at time-dependent signatures in different parts of the images; there are reasons to believe that black hole movies should exhibit certain
echo periods in different regions, so we would also be able to look at that. So that's the future. Before I say thank you, I want to point out that the EHT Collaboration comprises hundreds of people, and none of this would be possible without everyone working very hard; I'm very much honored to be a part of this really large collaboration that's doing some really impressive work. The last time we all met in person was 2019, before COVID, but we're going to be meeting in person in Granada, Spain very soon, so that's also exciting; we'll be able to update the photo. Okay, so with that, thank you everyone for your time and your attention, and I'm happy to take any questions.

Thank you, George; indeed, it's a massive and very interesting work. Let me start with some questions. I see one question in the YouTube channel: how much does the distance uncertainty between the different EHT observation points affect observations? Are cartographic corrections relevant? Are atmospheric effects relevant? I'm going to copy this question into the chat in case you...

Yeah, so I think what you mean by distance uncertainty is that in this VLBI setup, when you have this plot of all the different baselines, there is some uncertainty in the distance, because we don't know, down to the nanometer, how far apart two different telescopes are. I am going to give a very hedged statement here, because I don't work on this particular thing, but I know that people in fact use VLBI to measure distances on Earth. So it's not like someone got out the map in the Mercator projection, or your favorite projection, and made some estimates; there are some very sophisticated procedures that go into ensuring that these distances are sufficiently accurate to do this kind of measurement. Also remember, this is at 1.3 millimeters, so you don't
need to go down to the nanometer scale at all; you just need accuracy at the scale associated with your wavelength. There's also a fringe-fitting step, so there's a whole bunch of technique that goes into tamping down all of those uncertainties. Does the atmosphere matter? Yes, the atmosphere absolutely matters. If you look at an actual track, which is a set of observations, you'll see that we point the telescopes at the galactic center, for example, or at M87, for about ten minutes, and then you point them somewhere else, where you know exactly what you expect to see, and this is your calibrating source, and so on. So you have to take all of these different effects into account in the model. I cannot actually open the chat, because I don't know where my cursor is, so if I miss something in the chat, tell me and I'll come back to it. I'm sorry.

Okay, thank you. If the coordinators have a question, they can unmute themselves. I'm going to read another question: what sets the number of clusters? Why do you show four, or why are there just four?

Yeah, that's a great question that I should totally be able to answer, and I cannot answer why four was chosen. I do know that if you change your clustering algorithm you get very similar results to what this analysis was based on, but I don't want to say something wrong; I don't know exactly why it was four.

Okay. George, thanks for a very interesting talk. At the beginning you showed the different distributions you get from our galactic center and from M87, and you said one of the differences was the timescale for the two black holes. But I was wondering if the fact that our galactic center accretes very little also makes a difference; in other words, can you get the accretion rate somehow from the images or from the data?

Yeah,
so, right, to add numbers to that statement: you can estimate the M87 accretion rate to be something like 10⁻⁶, with modeling uncertainties, in units of the characteristic Eddington accretion rate, and you compare that to the galactic center, which is something like 10⁻⁹, again with modeling uncertainty. So that's maybe four orders of magnitude of difference. One way you can constrain this is with these one-zone models: you do have an estimate of the approximate size of the source, you have estimates that the gas should be approximately virial, or maybe slightly sub-virial, for example, and from this you can estimate directly what the number density should be. I will also say that if you run a simulation and bump up the accretion rate, it just looks different, so maybe this is a more direct answer to your question. If these radiatively inefficient accretion flow models are correct, which we do believe, or we at least believe that they're representative, there are limits on how much material you can actually put there, because if you put too much stuff there it produces too much emission, or the disk gets too large. And it turns out, and I think this is absolutely incredible, that for M87 and for the galactic center, even though there are these very different accretion rates, they do end up corresponding to very similar-looking images. This didn't need to be the case, but I hope I've at least somewhat responded to the question.

Okay, thank you. There's another question in the YouTube channel: what kind of new algorithms would need to be developed to try to overcome the variability of the reconstructed images in order
to produce black hole movies?

Ah, so I think there are two parts to that question. One is what kind of algorithms would need to be developed to overcome the variability, and the second, which might seem like the same question but I think isn't, is what kind of algorithms you need to produce black hole movies. I'm going to start with the second question. I really think that for us to gain enough confidence to publish a movie and say that we think this movie has some robust features in it, you just need more data. If I can remember where those pictures are... right: in order to produce a movie, ideally you want some idea of a full snapshot every minute or so; let's be generous and say every ten minutes. That means you take all these different colors, choose one point from each of the colors, and try to produce a frame from that, and I think that's just very challenging. So I actually think the main technical issue with producing movies is that you want more baseline coverage. How do you get that? By adding more telescopes all over the world, so that you simply get more data points. So for movies, where you want confidence in the features, I do think the solution is not really algorithms; it's adding more telescopes.

The first question, though, is how you improve the algorithms to learn more given what you have: maybe you can't produce movies, but maybe you can learn more. I think that's a pretty challenging question, because at the end of the day you want to be sure that your algorithms are not missing something. For example, we could have developed algorithms, they would have
been horrible algorithms, but we could have said: come up with the algorithm that produces the best-fit circle for the data. You would have a lot of confidence in that circle, but if you didn't realize that the circle model is wrong, that it should be a donut, then you're missing something. You can improve the algorithm more and more, but if you don't know what you're looking for, at the end of the day you have to make some model assumption. I think the way you do this is actually through theory. Of course I'm biased, I produce simulations a lot, so maybe I'm too biased to be answering this question, but my answer is that you need to really understand all of the different possible model spaces in a very fundamental way, and then you can use that to feed back into the algorithms.

I'll also end by saying that there have been recent developments in algorithms that say: okay, we expect to see this kind of structure, and this allows us to get what's called super-resolution. So maybe the Earth only tells you that you can see something at 25 microarcseconds, but we expect that there is some overall structure in the Fourier domain, again trained on the theoretical movies, on the theoretical assumptions, that allows you to get down to something like 15 microarcseconds. So there is movement on the algorithm front, but I think you really want to motivate it with more theory.

Oh no, sorry, I'll actually say one last thing; it's still a theory question, I think: how do you build knowledge of the overall average structure into your algorithms? Maybe you only have a single data point at a single time, but you have some expectation from theory about how much the source can change over 20 minutes. Maybe it won't change significantly over 20 minutes, in which case you ask
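A quick aside on the numbers just quoted: the roughly 25 microarcsecond resolution is essentially the diffraction limit lambda over B for a 1.3 mm observing wavelength on an Earth-scale baseline. The baseline length below is an illustrative approximation of the longest Earth baseline, not the exact 2017 array geometry.

```python
# Sketch: EHT angular resolution as a diffraction limit, theta ~ lambda / B.
# The baseline length is an illustrative Earth-scale number.
RAD_TO_UAS = 180.0 / 3.141592653589793 * 3600.0 * 1e6  # radians -> microarcseconds

wavelength = 1.3e-3   # observing wavelength, m (about 230 GHz)
baseline = 1.07e7     # approximate longest Earth baseline, m

theta_uas = wavelength / baseline * RAD_TO_UAS
print(f"diffraction limit ~ {theta_uas:.0f} microarcseconds")  # ~25
```

Super-resolution techniques claim structure below this limit only by building in prior expectations from theory, which is the point being made above.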
the question: okay, how do you build that information into the algorithm, some time dependence, some smoothness in your time dependence, also multi-frequency information, and so on? Okay, now I'm done with that. Thank you.

I have a follow-up question, which connects to the other question in the YouTube channel. Like you said, these GRMHD simulations are perhaps too variable, and, connecting to what you were saying, with Sagittarius A* it seems like we are somehow seeing an average. So how does this time variability enter? Is there a way to easily understand why you think the models are too variable? Of course, the black hole is smaller, but in some sense, how can this averaged data tell us that our simulations are too variable?

I think the best way for me to respond is that you will notice that the second-to-last constraint, the light-curve variability constraint, I have not put in the "derived from EHT data" section. In fact, for this particular variability constraint I was talking about, where the simulations are too variable, this is not something that comes out of the average image. It comes out of ALMA light-curve data, well, not just ALMA light-curve data, but light-curve data where you just look at the source: you don't try to make a movie at all, you just count all the photons, you don't care exactly where they came from, you don't care about the image features, and you ask how many photons you are receiving as a function of time. That's where the light-curve variability constraint comes in; it's not from the image. It's a very good point. One constraint that I didn't mention is this 4-gigalambda variability constraint, and that does come from EHT data: you ask what the variability is at a particular place in the Fourier domain. You can
also ask questions about the structure, how much you expect small-scale features to change versus large-scale features, and there the constraints actually aren't as problematic as the overall light-curve variability constraints. So you can get variability information from the EHT data, but my statement about the variability in models overall being too much was purely an integrated light-curve statement; it wasn't from EHT data.

I'll follow that up by saying you could ask: well, why didn't we know this before, since I'm saying it doesn't have to do with EHT data? The precise statement about variability is that the models that match the EHT data constraints, that match those five constraints up above, are in general too variable. Of course, I could write down some other crazy model that doesn't look like a donut at all and reproduces the variability. So it's kind of a game where you want to match the EHT constraints, but you can't just match the EHT constraints; you need to match what's going on in the larger ecosystem of observations.

I think we have time for another question here in the YouTube channel: combining Newton's constant and the speed of light is not enough to construct a time scale; a mass scale is also needed. What mass was used, the black hole mass? And what does this characteristic time represent?

Yep, that's exactly right. The time scale is, in this box, the second row on the left: GM over c cubed. I guess the answer to this question depends on how much GR you know, but effectively there is a characteristic size of an object in GR, the event horizon, and the size of the event horizon scales with the mass of the black hole; as it turns out, it also cares about the angular momentum, but a good
general, intuitive picture to have is that the black hole event horizon has a radius of something like two times this GM over c squared number. So this characteristic time that I've written down here, if you multiply it by four, is roughly the amount of time it would take for light to pass through one side of the event horizon and come out the other side. That is a very non-GR statement; obviously that's not going to happen, because that's not what an event horizon is. And if you multiply it by something like 30, which is a better number for GR, it's about the time it would take for light to go around the black hole. So again, given a black hole mass you can figure out what the event horizon size would be, and then you can ask how much time it takes for light to go around the black hole. Precisely, this 20.4 seconds is just the light-crossing time for one gravitational radius, GM over c squared. If you want a more physical, intuitive picture, you multiply this number by anywhere between five and 60, or whatever (five and 60 in astronomy are the same number), and that's the amount of time it would take for light to travel around the black hole at a reasonable distance.

Okay, thank you, George. Oh, I see, Roberto, sorry.

No, go ahead. Very nice webinar, George. I was wondering: since the Earth is within the galactic plane, we are observing the black hole from a very particular position. Is there any other black hole candidate that could be observed, to learn more about black holes? Is it possible to reach them?

Right, so I don't have this plot; I should have included it as a backup slide. The direct answer to your question is that there may be another one. Well,
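Another numerical aside: the 20.4 seconds quoted above follows directly from GM/c^3 with the measured Sgr A* mass of roughly four million solar masses. A minimal check, where the exact mass value is an approximate, illustrative input:

```python
# Sketch: characteristic gravitational time GM/c^3 for Sgr A*.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

M = 4.14e6 * M_sun  # Sgr A* mass, approximate measured value
t_g = G * M / c**3  # characteristic time, s
r_g = G * M / c**2  # gravitational radius, m

print(f"GM/c^3 ~ {t_g:.1f} s")   # ~20.4 s, the light-crossing time of one r_g
# Multiplying t_g by a factor of a few tens gives the light "orbit" time:
# minutes, which is why Sgr A* can vary within a single observing night.
```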
so actually, the zeroth-order direct answer to your question is that we also looked at M87: we looked at the Galactic Center black hole and we looked at the M87 black hole, so that's a sample size of two. We expect, just from general geometric arguments, that maybe there would be another black hole that we could see; we don't know about it yet, and it's probably unlikely that we don't know about it, but anyway. There are two criteria for being able to see a black hole: one is the size, and the other is the flux; it does need to be bright enough. As of yet, the other objects that we know of are either too small or too dim to be seen. Again, by general geometric arguments maybe we would expect that there's another one that we somehow don't know about, but I don't know. So I think our best hope for getting these high-resolution images, or movies, probably lies with the Galactic Center or with M87, which we've already imaged. However, if you go to space, and you don't care about this high-resolution movie and just care about general features of the source, then we could see maybe hundreds more black holes. You won't make high-resolution movies of them, at least not in my lifetime; well, I guess I shouldn't say that, but I don't think we'll have high-resolution movies of hundreds of black holes in my lifetime. But you could make some measurements and learn something, maybe about their masses.

Okay, thank you. Thank you very much, and thank you everyone for attending this last webinar of the season. We will resume maybe in August or September, and I'm trying right now to convince Jorge Cuárez to be our first speaker, so now I make it public so there's more
pressure on that. Thank you everyone, and thank you, George, very much for this interesting webinar. I imagine people can go to your webpage if they have follow-up questions and things like that. So thank you very much, and have a nice break. Bye bye.

Thank you.

Okay, we are not live.