Hello. Welcome to our Latin American webinar on physics, number 151. We are very excited because this is the beginning of our season number 16. Our guest today is Professor Ken Van Tilburg, who will give an EPIC seminar, and we'll talk about that later. Professor Van Tilburg is an assistant professor at New York University and the Flatiron Institute. He got his PhD at Stanford University, after which he was a postdoc at the Institute for Advanced Study in Princeton and, at the same time, at New York University. He then spent a brief period as a postdoc at the Kavli Institute for Theoretical Physics in Santa Barbara before joining NYU as an assistant professor. Today he will give us a talk on his recent research on extended path intensity correlation, or EPIC for short. Thank you very much, Ken, for being with us; we are looking forward to your seminar. For all of you following the transmission on YouTube: you can ask your questions in the YouTube chat or send them via Twitter, and make sure you follow us on Twitter, Facebook, and YouTube so that you stay posted on upcoming webinars. We have one next week, for example. Alright, so take it away, Ken. Thank you very much and good luck. Thank you, Walter, for the invitation. Let me share my screen and start the presentation. Okay. Again, thank you so much for the invitation, and please feel free to ask questions; I'm happy to provide extra clarification during the presentation. I will be talking about work that I've been conducting with a postdoc at Perimeter Institute, Marios Galanis, and two faculty members, Masha Baryakhtar at the University of Washington and Neal Weiner at my own institution. What we propose is EPIC, a variant of intensity interferometry, the classic technique invented in the 1950s by Hanbury Brown and Twiss to measure stars very precisely.
Many of you might not have heard of it, and I will explain why the technique was abandoned and why I'm now advocating to reinvigorate it with a slight modification. The experimental diagram is shown on the slide, and if you want more details, there is a short letter as well as a much longer paper, which is the second arXiv number here. The reason for inventing EPIC was to use it for astrometry, so I'll give you a brief introduction to astrometry, which is measuring the positions and motions of stars and other celestial bodies. I will summarize the capabilities of standard imaging telescopes, like your eye and ordinary telescopes, and also the capabilities of amplitude interferometers, which are the current state of the art for astrometry. Then I will tell you about intensity interferometry, the old classic idea from the 1950s, and about its limitations, one of which we solve with our extended-path addition to intensity interferometry. Then I will outline the performance of this device based on realistic parameters and some technical details; I'll try to keep the technical details to a minimum, but feel free to ask me afterwards. The four authors of the paper are particle theorists, not observational astronomers, so our aim was mostly at the science: some of us had been thinking about how astrometry could benefit fundamental physics as well as astrophysics. In the long paper, we outline a myriad of applications in astrophysics and fundamental physics. The things you can do with EPIC include extreme-precision measurements of many systems, including binary orbits, measuring their masses and orbital parameters. EPIC can also be used for exoplanet detection; that's maybe the scientific case that's most of a home run.
One can perform stellar microlensing, possibly measure the galactic acceleration in the future, as well as more cosmological applications like calibrating the cosmic distance ladder and measuring gravitational lensing of very small dark matter halos in a strongly lensed quasar system. And there are a few other applications that I haven't listed here but might mention at the end. Okay, so the talk has three parts. Let me start with the introduction. Astrometry is by far the oldest science, right? We've been looking at the stars since, I don't know exactly when, humans first started looking up a long time ago. The Greeks first made it into a proper science. In particular Hipparchus, who invented trigonometry alongside measuring stars, measured the precession of the Earth's spin axis and made an astrolabe, that yellowish device depicted there, on which he depicted the stars. His global astrometric precision, meaning the precision at which he made tick marks on this astrolabe, was about 20 arcminutes, which is maybe 1% of a radian. The limiting angular resolution of an eye, just from the diffraction limit, is about one arcminute, a factor of 20 or so smaller than that absolute precision. So Hipparchus was basically the inventor of systematically cataloging stellar positions, although there are no written records; the first evidence of this is the Almagest by Ptolemy. This knowledge was then preserved by Al-Sufi and other Arab scientists. But really, most of the observations were done with just the eye. The first person to systematically improve upon it with instruments was Tycho Brahe, who built this mural quadrant: light entered through that small hole in the wall.
He could then measure the stellar position on the mural, and he achieved about 10^-4 radian precision. Of course, since then the telescope was invented; I'll skip Galileo and everyone else in between. Right now, the state of the art in ground-based imaging telescopes is the 10-meter class, such as the Keck Observatory, which is two 10-meter-diameter telescopes. These large telescopes are limited by astronomical seeing, which means the atmosphere acts like a lens with some characteristic size of 10 to 30 centimeters or so, limiting your diffraction-limited resolution to the wavelength of the light over this parameter set by the atmosphere. If you use adaptive optics, as the Keck telescopes do, you can improve upon this and get the minimum resolution set by the wavelength over the diameter of your telescope. In space you can do much better, or at least you don't have to use adaptive optics because you're not limited by the atmosphere. The first serious astrometric space mission was Hipparcos, the satellite, not the person, launched in the late 80s, and it cataloged about 10^5 stars with 10 milliarcsecond precision per measurement on each star. Right now Gaia is running, with a factor of 10^4 increase in catalogue size and about a factor of 100 in precision per measurement. It is still running to this day, and three years of data have already been publicly released. Gaia has roughly the same diffraction-limited resolution; however, it can do exquisite light centroiding. It can measure the position of the light centroid of a star to better than the resolution, roughly by the signal-to-noise ratio of the detection of the star.
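As a rough numerical check of the resolution figures quoted so far, here is a sketch with assumed values (500 nm light, a roughly 2 mm eye pupil, a 20 cm atmospheric seeing cell, a 10 m Keck-class aperture; these specific numbers are my illustrative choices, not quoted from the talk):

```python
import math

RAD_TO_ARCSEC = 180 / math.pi * 3600  # ~206265 arcseconds per radian

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Fiducial angular resolution lambda/D, in arcseconds."""
    return wavelength_m / aperture_m * RAD_TO_ARCSEC

def centroiding_arcsec(wavelength_m: float, aperture_m: float, snr: float) -> float:
    """Light-centroiding precision: the diffraction limit improved by the SNR."""
    return diffraction_limit_arcsec(wavelength_m, aperture_m) / snr

lam = 500e-9                                   # visible light
eye = diffraction_limit_arcsec(lam, 2e-3)      # ~2 mm pupil: about an arcminute
seeing = diffraction_limit_arcsec(lam, 0.2)    # ~20 cm seeing cell: ~0.5 arcsec
keck = diffraction_limit_arcsec(lam, 10.0)     # 10 m aperture with adaptive optics
print(f"eye ~{eye:.0f} as, seeing ~{seeing:.2f} as, Keck ~{keck:.3f} as")
```

The same helper shows the centroiding gain: at SNR of 1000, a 10 m telescope centroids to roughly a thousandth of its diffraction limit.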
As you can see, the light-centroiding precision for a relatively bright star in Gaia can be about 100 microarcseconds, whereas the fiducial resolution is only a little better than one arcsecond. Let me quickly mention spectrographs. Many ground- and space-based telescopes not only image the light but also split it spectrally: they let it impinge on a prism or a grating, or a combination thereof, split it into different wavelengths, and record the intensity as a function of wavelength or frequency, with the typical resolving power (wavelength over spread in wavelength) depicted here on the y axis. Resolving powers as high as 10^5 are achievable, and a factor of a few thousand is doable with a standard diffraction grating. These are all the instruments mounted on the Keck telescopes. Okay, so that was it for quote-unquote standard imaging telescopes. One can achieve even better light-centroiding precision and astrometric resolution using interferometry. EPIC is a variant of intensity interferometry, so let me contrast it with the standard interferometry you might be more familiar with, which, to be precise, is amplitude interferometry as opposed to intensity interferometry. What you do there is the following: say you have a source s. You record the electric fields at telescope one and telescope two, so you record both the amplitude E_s, which is the same at both sites, and the phase of the wave; here I'm using the complex representation. The phase of source s at position j oscillates proportionally to the frequency; it also knows about the propagation phase from s to your telescope j, and there might be some random phase at the emission site. I'm assuming this phase is random, so it's not a laser.
It's a chaotic light source. You can use this for astrometry because the path difference from s to one and s to two has knowledge of the position of the source; in particular, in the small-angle approximation, it's proportional to the unit vector pointing to the source dotted into d, the baseline separation. Okay, so that's a lot of setup, but what's the actual measurement? You record the electric fields at sites one and two as a function of time, so if you can keep track of this oscillating phase, then at the end you can measure the correlation of E_1 and E_2*. What you get is something proportional to the amplitude squared, which is typically difficult to calibrate, but in particular you measure this path-length difference in the relative phase. If you use the correct time offset, the random phases at the emission point are common to sites one and two, and after you take the expectation value, which you can think of as a time average, they cancel out to one, and you just measure this phase here, which encodes the position of the source. Doing so gives you a fiducial resolution that's the wavelength of the light divided by the baseline separation D: not the size of the telescopes, but the total baseline separation. Again, if you measure this quantity very accurately at high signal-to-noise ratio, then you can improve the light-centroiding precision by the SNR. Amplitude interferometry can be performed in the radio band, where electronics are fast enough to keep up with this rapidly oscillating phase at hundreds of gigahertz. The Event Horizon Telescope, a collection of radio observatories around the world, achieves a fiducial resolution of about 30 microarcseconds.
That's what you get by taking roughly one-millimeter light divided by approximately the diameter of the Earth. That's the state of the art in the radio. It's difficult to shorten the wavelength and measure this rapidly oscillating phase correctly while keeping atmospheric phases at bay, and of course it's hard to increase the baseline beyond the size of the Earth unless you go to space. One way to shorten the wavelength is to work in the optical. Electronics are not fast enough to keep track of this phase in the optical or infrared, but what you can do is physically recombine the light. Rather than recording the electric fields individually, you let them impinge on some apertures; then, if this is E_1 recorded here and E_2 recorded there, you physically recombine the light, interfere it, and look at the interference outputs and beat notes. This means you don't have to keep track of an optical phase oscillation. That's what optical Michelson interferometers do, such as the CHARA array and other state-of-the-art interferometers. They achieve 200 microarcsecond resolution at best, which is what you get by taking an optical wavelength over a few-hundred-meter separation; I believe CHARA is about 300 meters. And this is difficult to improve significantly, because making the baseline longer than 300 meters is prohibitively expensive: you need to know the relative distance between the telescopes to better than a wavelength, so a micron or better, ideally a tenth of a micron. Otherwise you lose this relative phase information, and that's just difficult to do over long distances, as we know from LIGO and other experiments. Okay, so that's the state of the art.
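To put numbers on the wavelength-over-baseline scaling just described, here is a sketch with assumed values (1.3 mm light and an Earth-diameter baseline for the EHT; 550 nm and 300 m for CHARA; the quoted figures of about 30 and 200 microarcseconds correspond to slightly different wavelength and baseline choices):

```python
import math

RAD_TO_MUAS = 180 / math.pi * 3600 * 1e6   # radians -> microarcseconds

def interferometer_resolution_muas(wavelength_m: float, baseline_m: float) -> float:
    """Fiducial resolution lambda/baseline of an interferometer, in muas."""
    return wavelength_m / baseline_m * RAD_TO_MUAS

eht = interferometer_resolution_muas(1.3e-3, 1.27e7)    # mm waves over Earth's diameter
chara = interferometer_resolution_muas(550e-9, 300.0)   # optical light over ~300 m
print(f"EHT ~{eht:.0f} muas, CHARA ~{chara:.0f} muas")
```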
So right now, with Gaia, with the Event Horizon Telescope, and with CHARA, GRAVITY, and other amplitude interferometers, we can achieve light-centroiding precision at roughly tens of microarcseconds after integrating for some time; that is the state of the art. Okay, that was my introduction to astrometry. I will now tell you where intensity interferometry comes into play, and then how astrometry can be used for specific scientific applications. So, the basic idea. The premise of intensity interferometry is that one can in principle achieve sub-microarcsecond resolution and possibly even better light-centroiding precision. Let me explain the basic idea. Just like in amplitude interferometry, you have two telescopes observing, let's say to start with, one source. But rather than correlating the electric fields, one correlates the intensities. You record the intensity at site one as a function of time and the intensity at site two as a function of time, I_1 and I_2. The actual observable in the end is this excess fractional intensity correlation: you take the product of the two intensities, time average it, divide by the product of the averages, and subtract one. If these intensities were completely independent, this final observable would be zero, but, somewhat counterintuitively, C is not zero; it is positive in general for photons. It's an amazing fact that if you look at the sky with two telescopes that may be quite distant, the intensities at both sites are correlated, which means that the times of arrival of photons in one telescope and in another, quite distant one are correlated. Okay, so let me explain why. Here in the top panels I'm plotting I_1 relative to its average and I_2 relative to its average.
If your source is some chaotic light source like a star, so not a laser, the intensity actually has order-one fractional variations, because it's a superposition of many wavelengths with random phases. If you put together a superposition of many, many wavelengths with random phases, you get something whose second moment, I squared averaged over I averaged squared, is two: order-one fractional variations. What that means is that the intensity fields at sites one and two are exactly the same, up to the different travel times from the source to sites one and two. Here these are actually the same plots, except I_2 is shifted by 50 picoseconds, I believe, to the right: here this peak is at 775 picoseconds and here it's at 825. This time variation is very rapid. It's unresolvable by the human eye, and also unresolvable by most CCD cameras or even the best, fastest photodetectors to date. Furthermore, you don't detect intensities, because light is quantized. If you have a single-photon detector, what you actually record is the time of arrival of each photon. Here I've just made some really, really bright source: the red sticks are the arrival times of the photons, and the rate of detection of photons is proportional to this intensity field. Based on these red sticks, the photon arrival times, you try to build an estimator for the instantaneous intensity field. And it's a terrible estimator: most of the time, for a realistic source, you get zero because you don't detect a photon. But the correlation between these fields does survive, at weak signal-to-noise ratio, in your reconstructed intensity field.
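A toy numerical version of this setup, with illustrative parameters of my own choosing (not instrument specs): build a chaotic intensity field as a sum of random-phase modes, Poisson-sample photon counts at two sites that see the same field, and check that the excess correlation survives in the sparse photon record.

```python
import numpy as np

rng = np.random.default_rng(1)

# Chaotic light: a superposition of many random-phase modes, so the intensity
# fluctuates by order one, with second moment <I^2>/<I>^2 close to 2.
n_modes, n_bins = 200, 100_000
t = np.arange(n_bins, dtype=float)
freqs = 0.10 + 0.01 * rng.random(n_modes)      # narrow band: coherence ~100 bins
phases = 2 * np.pi * rng.random(n_modes)

E = np.zeros(n_bins, dtype=complex)
for f, p in zip(freqs, phases):
    E += np.exp(1j * (2 * np.pi * f * t + p))
I = np.abs(E) ** 2

# Two nearby telescopes see the *same* intensity field; single-photon
# detection Poisson-samples it into sparse, noisy counts per time bin.
mean_rate = 0.05                                # photons per bin, on average
lam = mean_rate * I / I.mean()
n1 = rng.poisson(lam)                           # photon record, telescope 1
n2 = rng.poisson(lam)                           # telescope 2: independent shot noise

# The excess fractional intensity correlation survives in the photon record:
C = (n1 * n2).mean() / (n1.mean() * n2.mean()) - 1
print(f"C = {C:.2f}")                           # close to 1 for thermal light
```

Most bins contain zero photons, exactly as described above, yet the correlator built from the two sparse records still recovers the thermal-light excess correlation.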
If you take this reconstructed intensity field, the blue line for I_1 and the blue line for I_2, and you compute C, the fractional excess intensity correlation, this is what you get in green. Over 1000 picoseconds, a one-nanosecond observation in which you see these 20 to 30 photons, you basically just see noise: you get something consistent with zero. But the signal is this tiny bump here at 50 picoseconds. If you integrate a little longer, say a microsecond, 10^6 picoseconds, so you do this a thousand times, then you actually see this excess intensity correlation. Here I've assumed a typical wavelength of 500 nanometers, a spectral resolving power of 5000, and a timing precision of 10 picoseconds, which is roughly the uncertainty on this line here. This is a really, really bright source, unrealistically bright, but it's just to prove the point that intensities can be correlated. And it's just the fact that if you take telescopes one and two very close to each other, they see the same intensity field, and in that limit this correlator is I squared averaged over I average squared, which is the second moment of the light, which I said was two. So that's the origin of the intensity correlations. Okay. So you measure C, and if you set tau right, you can measure it to be nonzero. But how does that tell you about the position of a source? You do the same thing as in amplitude interferometry, except now you compute this intensity correlator, which is really a four-point function of the electric field, because the intensity is just E squared at each site. And if you go through the same math as before, you now get a number of terms.
If you remember, there was this random phase at the emission point, and these brackets signify time averaging, or if you wish, averaging over these random phases. If you take the random phase from E_1 here and cancel it against the negative of the random phase of this one, you just get the average intensity, and likewise for the contraction there. That just gives you average intensity at one divided by average intensity at one, the same at two, minus one, which is zero. But you can do another contraction as well: the random phase in the electric field at telescope one can be contracted with the random phase in the electric field at telescope two. This cross term between the blue terms, and likewise between the red ones, is the excess intensity correlation. If you go through that, you find an excess intensity correlation whose size is set by one over the product of the spread in frequencies recorded and the timing resolution; that's roughly the size of the excess correlation if you set tau to the correct value, commensurate with the longer time the light has to travel to site two compared to site one. That extra time then gives you some information on the global position of the source s. That's the global astrometric precision, and it's nothing too great, because this global astrometric precision is not the wavelength over the baseline; it's more like the uncertainty in the time of arrival, which is maybe 10,000 or 100,000 wavelengths, divided by the baseline. So, not unlike a device like Gaia, intensity interferometry has bad global astrometric precision. However, it has very good relative astrometric precision.
We cannot tell the global position of star A relative to some global celestial reference frame, but we can tell the angle between two sources A and B extremely accurately. If you go through the same math as for the one-source case, you record the total electric field at sites one and two, which is now a superposition of the electric field from A and the electric field from B, likewise at site two; you again get this four-point function of the electric field, and I've color coded the four electric fields depicted here. What you're measuring is the propagation phase of A to one plus B to two, so you measure the distance R_A1 plus R_B2 minus R_A2 minus R_B1. That's one of the terms that comes out of this four-point function. This doubly differential distance, if you do some simple trigonometry, is the wave number of the light, two pi over the wavelength, times the baseline dotted into the relative angle between A and B. So your correlation has really sharp information on the relative angle of B and A. That's what you get for two point sources; in general, for a more extended image, the excess correlation is the modulus squared of the Fourier transform of your image at an angular wave number given by the baseline over the wavelength of the light. That gives you a relative angular resolution of lambda over D, just like for an amplitude interferometer. The difference is that it combines the best of both worlds. For amplitude interferometers the resolution was also the wavelength over the baseline, but in the radio the wavelength was quite long while the baseline was maximal, and in the optical the wavelength is short
but the baseline could not be long, because you have to physically recombine the light. In intensity interferometry you get the best of both worlds, because you can measure the intensities at one and at two and then correlate the light offline, after your observation. Your baseline can therefore be kilometers, and you can improve upon the fiducial angular resolution by making the baseline longer while keeping an optical wavelength. This is how you can get microarcsecond precision with an intensity interferometer. Okay, so this is a depiction of what the actual correlation C looks like for two point sources. If you take the angle between the sources to be much less than a microarcsecond, so much less than 10^-11 radians, you see this excess correlation; but as the angle increases, you see these fringes. So by measuring C, you get information on this angle. Okay. This intensity interferometry was invented in 1954 by Hanbury Brown, and then Hanbury Brown and Twiss in 1956 demonstrated it in the lab and on a celestial source, namely Sirius. What they did was take an artificial source, a mercury lamp, and split the light with a beam splitter. What they found was that the light recorded in this PMT and that PMT was correlated: they demonstrated this photon bunching effect, these intensity correlations at two different photodetectors in the lab. I believe these numbers are the significances, the number of sigma, so seven sigma and six sigma in a couple of independent runs. Then, in the same year, instead of using a mercury lamp they used Sirius itself, with some leftover reflectors from World War II, and as they varied the baseline they actually saw this intensity correlation drop off.
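The two-point-source fringe pattern shown a moment ago can be sketched numerically. This is a sketch with assumed values (500 nm light, a 1 km baseline, two equal point sources, separation projected on the baseline):

```python
import math

RAD_TO_MUAS = 180 / math.pi * 3600 * 1e6   # radians -> microarcseconds

def fringe_spacing_muas(wavelength_m: float, baseline_m: float) -> float:
    """Angular period lambda/d of the intensity-correlation fringes, in muas."""
    return wavelength_m / baseline_m * RAD_TO_MUAS

def two_source_excess_correlation(dtheta_rad: float, wavelength_m: float,
                                  baseline_m: float) -> float:
    """Normalized excess correlation for two equal point sources:
    C ~ (1 + cos(k d dtheta)) / 2, with k = 2 pi / lambda."""
    k = 2 * math.pi / wavelength_m
    return 0.5 * (1 + math.cos(k * baseline_m * dtheta_rad))

spacing = fringe_spacing_muas(500e-9, 1000.0)
print(f"fringe period ~{spacing:.0f} muas at 1 km")
```

The fringe period of about 100 microarcseconds at a kilometer baseline is what underlies the microarcsecond-level relative astrometry claimed above.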
First of all, they saw excess correlation, which decreased with baseline, from which they could infer the finite source size of Sirius. They measured its angular diameter to be 6.3 milliarcseconds, as written in the caption here, which was the first measurement of that quantity and is in fact within 10% of the true value, which I think is 6.8 milliarcseconds. That was their first device. Then Hanbury Brown went to Australia, to Narrabri (I don't know exactly how you pronounce it), where, at sea level, he built these six-meter reflectors, basically on railroad cars that could drive into the shed when they were being stored, giving a 200-meter baseline, in the 60s and 70s. They would measure the intensity correlations at these two sites. This is a close-up of the device; as you might be able to tell, the tolerances on this device are not quite as tight as on current imaging telescopes, which is one of the advantages of intensity interferometry. What they were able to do is measure the angular diameters of 32 stars with exquisite precision: not just Sirius, but these classic bright stars in the sky. The typical angular diameter is milliarcseconds, but they got some to 30-microarcsecond precision, from the ground, without adaptive optics, with, shall I say, crappy telescopes mounted on railroad cars at sea level in Australia. And to remind you, this 30-microarcsecond precision (it's kind of an unfair comparison) is roughly the same precision at which the size of the light ring around the accretion disk of M87 was measured, much, much later. So why have most people not heard of this technique? Why was it abandoned after 1974? There are two reasons: the limited field of view and the limited signal-to-noise ratio.
What I showed you on one of the previous slides was this excess intensity correlation as a function of the angle between the two stars. You get these sharp fringes, but they die out after a while, because light from different wavelengths starts to destructively interfere and wash out the sharp fringes. So if the stars are too far apart, you don't get this sharp variation of the mutual coherence. If the stars are farther apart than 10^-7 radians, which is still a very small angle, and it's unlikely to find two stars within that separation, you don't get this sharp information. That's why Hanbury Brown and Twiss only used it to measure angular diameters of stars: it's really good at measuring the morphology of a single object, but not the relative position of two objects. Well, it is, but it's unlikely to find two stars that close together, because of the limited field of view. The second limitation was the signal-to-noise ratio. As I mentioned, the total correlation is set by one over the timing resolution, so this fractional correlation C gets very small if your timing resolution is slow. There are two things you can do: narrow the band in which you observe the light, by having a higher spectral resolving power, and shorten the timing resolution. The improvements in the signal-to-noise ratio come from ultrafast photon detectors that have now been developed, which can detect the time of arrival of single photons with picosecond-level precision; these are superconducting nanowire single-photon detectors, or SNSPDs. And you can now increase the spectral resolving power with spectroscopic techniques that were not as mature back when Hanbury Brown and Twiss developed intensity interferometry. Okay: the signal-to-noise ratio is overcome by technology; the field of view is what we solve.
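The two SNR levers just named, a narrower spectral band and faster timing, can be put into numbers. A sketch using the values quoted earlier in the talk (500 nm, resolving power 5000, 10 ps timing): the correlation lives for roughly one coherence time tau_c = R * lambda / c per spectral channel, and a detector slower than tau_c dilutes the measured C by roughly tau_c over sigma_t.

```python
C_LIGHT = 2.998e8   # speed of light, m/s

def coherence_time_s(wavelength_m: float, resolving_power: float) -> float:
    """Coherence time per spectral channel, tau_c = R * lambda / c."""
    return resolving_power * wavelength_m / C_LIGHT

def correlation_dilution(wavelength_m: float, resolving_power: float,
                         timing_res_s: float) -> float:
    """Rough suppression of the measured C when the detector timing
    resolution sigma_t exceeds the coherence time: ~tau_c / sigma_t."""
    tau_c = coherence_time_s(wavelength_m, resolving_power)
    return min(1.0, tau_c / timing_res_s)

tau_c = coherence_time_s(500e-9, 5000)
dilution = correlation_dilution(500e-9, 5000, 10e-12)
print(f"tau_c ~{tau_c * 1e12:.1f} ps, dilution at 10 ps timing: {dilution:.2f}")
```

With these numbers tau_c is about 8 ps, which is why picosecond-class SNSPDs and high resolving powers together keep the fractional correlation from being washed out.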
What we do in EPIC, where we come in, is basically take intensity interferometry but, as you collect the light at telescope one, split it into two paths of unequal length, one path longer than the other by L_1. Likewise at telescope two, one path is longer by an amount L_2. Then you spectrally split the light and do this correlation. The real reason this increases the field of view is the following: as the angle between the two stars increases, you are interfering the short, non-crossing paths against the crossing ones, and at large angle the crossed paths in the middle are longer than the straightest paths from A to one and B to two. The relative time delay basically lengthens the shorter paths by L_1 and L_2 to make this differential distance the same. So you split the light and create many more fringes, some of which have equal path lengths. One way you can do this: as you receive collimated light, use a 50-50 beam splitter into paths of unequal length, then recombine the light, spectrally split it, and detect it. What this does is take the main intensity fringe and split it into four. For example, this new fringe is where the light from the source is delayed, is extended, in telescope one; this fringe is where it's delayed in telescope two; this one is where it's delayed in both; and this is where it's not delayed at all. So you take the main intensity-correlation fringe and split it into four, based on the four possibilities: two possibilities at each telescope. And for two sources, as I said, you're comparing R_A1 plus R_B2 against R_B1 plus R_A2, and the red and orange paths are longer than the green and blue; this path extension just lengthens green and blue by a little bit.
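A quick number for this path-extension idea, as a sketch with assumed values of my own (centimeter-scale delays, a 1 km baseline): the extra paths compensate the differential light-travel distance of two sources separated by roughly (L_1 + L_2) over the baseline.

```python
import math

RAD_TO_ARCSEC = 180 / math.pi * 3600

def epic_reference_angle_arcsec(l1_m: float, l2_m: float, baseline_m: float) -> float:
    """Separation at which the ghost fringe is recentered: the two-source
    separation whose differential distance the extra paths compensate,
    roughly (L1 + L2) / d."""
    return (l1_m + l2_m) / baseline_m * RAD_TO_ARCSEC

theta_ref = epic_reference_angle_arcsec(0.005, 0.005, 1000.0)
print(f"~{theta_ref:.1f} arcsec")
```

So centimeter-scale optical delays on a kilometer baseline move the fringe out to arcsecond separations, vastly enlarging the usable field of view compared to the bare 10^-7 radian limit.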
What that does is take these sharp fringes and create another ghost fringe, this little red spike here; it looks like a spike just because of the log scale. In reality it's a copy of this fringe pattern, centered at some reference angle set by L_1 plus L_2 over the baseline distance. This effectively increases the angle at which you can get mutual coherence between the sources, and solves the limited field-of-view problem. So what are the capabilities? We have three phases with fiducial specs of the instrument: the aperture diameter, timing resolution, spectral resolving power, and number of telescopes per site. With two four-meter dishes, commercially available timing resolution, and commercially available spectroscopic gratings, you could achieve, with about 10^4 seconds of observation, roughly three hours, a 20-microarcsecond light-centroiding precision: a factor of four better than a single Gaia observation. The main figure of merit is this relative light-centroiding precision. For phase two, with 10-meter-class telescopes and better timing resolution (in fact, three picoseconds has already been demonstrated in the lab), you can get one-microarcsecond precision. And in phase three you can improve by another factor of 20 by increasing the light-collecting area, having many telescopes per site, and using higher resolving power and smaller timing resolution. In the interest of time I will skip this. The technique's performance depends on the brightness of the source; the numbers here were for a Sun-like star at 100 parsecs. Here I'm showing the typical signal-to-noise ratio in the phase two experiment: this is the error with which you can measure this correlator in any one spectral channel, for a main-sequence star.
So a Sun at 100 parsecs. You can get relatively good precision, about 1%, on this fractional intensity correlation in one channel, and the actual signal is 10%, so you get a factor-of-10 signal-to-noise ratio in each spectral channel. You then measure it over as many as 10,000 spectral channels, so your overall signal-to-noise ratio is very large. This is for a blue giant at five kiloparsecs, and I believe a white dwarf at 10 parsecs. For all of these stars you can measure this, and the light-centroiding precision is excellent. Now, naively you might ask: okay, Ken, why stop at 10 kilometers? The answer is that you don't gain arbitrarily. As you take the baseline larger, your relative light-centroiding precision keeps improving, but at some point it saturates and then gets worse again. That's where you start resolving the star itself. Remember, what you're really doing is taking the squared Fourier transform of your image; if your star has finite extent, that Fourier transform gets smaller at long baselines, and the signal becomes that much harder to detect. So there is some optimum distance, which depends on the parameters of your experiment, typically between one and 10 kilometers. But at 10 kilometers your resolution is already microarcseconds, and your light-centroiding precision can be better still: for phase two, picoradian precision, so fractions of a microarcsecond, and for phase three another factor of 20 better, simply by repeating the observation. For stars, the two main factors determining the precision of the observations are the temperature of the source, which sets the surface brightness, and its diameter. With phase two, and certainly phase three, one can even do light centroiding on quasars quite accurately. And with phase three you get better numbers across the board.
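Two of the scalings above can be checked on the back of an envelope. This is our own sketch, not the authors' calculation: we assume the per-channel SNRs add in quadrature, and we take 400 nm light for the baseline estimate:

```python
# (1) ~10 SNR per spectral channel over ~10,000 channels, in quadrature:
snr_channel = 10.0
n_channels = 10_000
snr_total = snr_channel * n_channels ** 0.5   # ~1000 overall

# (2) The correlation is the squared Fourier transform of the image; it
# washes out once B * theta_star / wavelength ~ 1, i.e. once the baseline
# resolves the stellar disk.
R_SUN = 6.96e8            # solar radius, m
PC = 3.086e16             # parsec, m
wavelength = 4.0e-7       # blue light, 400 nm (assumed)
distance = 100 * PC       # Sun-like star at 100 pc, as in the talk

theta_star = 2 * R_SUN / distance     # angular diameter, radians
B_resolve = wavelength / theta_star   # baseline that resolves the star, m

print(f"total SNR: {snr_total:.0f}")
print(f"resolving baseline: {B_resolve / 1e3:.1f} km")
# Comes out near 1 km for a Sun at 100 pc, consistent with the quoted
# 1-10 km optimum; smaller, hotter sources push the optimum to longer
# baselines.
```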
I will skip the detailed explanation of this in the interest of time. I just want to say that intensity interferometry can be performed on the ground, in principle at sea level, because it's entirely insensitive to atmospheric aberrations. As the atmosphere fluctuates, it imprints phases onto these paths, but at relatively small angles the phase imprinted on the blue path is the same as the phase imprinted on the red path; likewise at site two, orange and green get the same phase. And the correlator is doubly differential: it's the difference of these two paths minus the difference of those two paths, so the atmospheric phase cancels out completely as long as the angle is small. That basically tells you that the maximum field of view is set by the atmosphere, because this is no longer true if the angle is big enough, and that's at about a few arcseconds. Okay, so that's my introduction to astrometry and how EPIC can improve the field of view of intensity interferometry and ultimately its capabilities. I have no time left, but let me quickly flash what the science applications are. What can you do with these capabilities? One thing you can do is measure and characterize binary orbits very well. As two white dwarfs orbit each other, you can trace out their relative separation very accurately along some baseline direction. If you do this (I believe I took 40 nights of observations here), you can measure things like the semi-major axis, the inclination angle, and the other Keplerian orbital elements, as well as the component masses and the distance to the binary star, to sub-percent precision. Here the masses come out at half a percent precision with phase two parameters. You do need to break some degeneracies with spectroscopy; otherwise there's a degeneracy between the mass, the semi-major axis, and the distance. But I think the most home-run application is exoplanet detection.
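The doubly differential cancellation can be made concrete with a toy example. This is our own illustration of the argument, with made-up random turbulent phases: within one isoplanatic patch, both sources' light through a given site picks up the same atmospheric phase, and the correlator subtracts site differences, so the atmosphere drops out exactly:

```python
import random

TWO_PI = 2 * 3.141592653589793

# Random turbulent phase screens, one per site (arbitrary draws):
phi_site1 = random.uniform(0, TWO_PI)
phi_site2 = random.uniform(0, TWO_PI)

# At small separation angles, source A's and source B's paths through a
# given site see the SAME atmospheric phase:
phi_A1 = phi_B1 = phi_site1
phi_A2 = phi_B2 = phi_site2

# The intensity correlator depends only on this doubly differential phase:
residual = (phi_A1 - phi_B1) - (phi_A2 - phi_B2)
print(residual)  # exactly 0.0: the fringe is unaffected by the atmosphere
```

Once the two sources sit in different isoplanatic patches, phi_A1 and phi_B1 differ and the cancellation fails, which is why the field of view is capped at a few arcseconds.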
So you can look for the astrometric wobble that a planet imparts on its host star. Gaia has some capability here and has in fact discovered one previously unknown exoplanet this way so far, and it will see many more by the time DR5 is released. However, especially for nearby stars, we can do two to three orders of magnitude better, or with phase three even four orders of magnitude better on very nearby stars. By measuring the relative separation between the planet's host star and some reference star very precisely, we can measure much smaller masses, as low as an Earth mass around a Sun-like star at 20 parsecs, or, with phase three, around host stars much farther than 20 parsecs. The galactic acceleration field is about a nanoarcsecond per year squared and looks like this at about one kiloparsec. That's very difficult to measure, but with phase three and 10 years of observations I think it should be doable. We can also measure the relative separation between the two images during a stellar microlensing event, the main image here as well as the additional strongly lensed image there. By measuring that, you get very accurate measurements of the Einstein radius and therefore the mass of the microlens, again to sub-percent precision. You can do parallax measurements and help calibrate the cosmic distance ladder, which is helpful for determining the expansion rate of the universe; if you're interested, I can tell you about this. And there is work in progress on measuring the relative separation of quasar images, which I think will give you information about very small-scale dark matter substructure.
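The Earth-mass claim can be sanity-checked with the standard wobble formula, theta = (m_p / M_star) * a / d. This is our own back-of-envelope estimate, not a number from the paper, assuming a 1 au orbit:

```python
AU = 1.496e11       # astronomical unit, m
PC = 3.086e16       # parsec, m
MUAS = 4.848e-12    # one microarcsecond in radians

mass_ratio = 3.0e-6  # Earth mass / solar mass
a = 1.0 * AU         # orbital semi-major axis (assumed 1 au)
d = 20 * PC          # host star at 20 pc, as quoted in the talk

# The star orbits the barycenter at a * (m_p / M_star); divide by the
# distance to get the angular wobble:
wobble = mass_ratio * a / d
print(f"wobble: {wobble / MUAS:.2f} microarcsec")
# ~0.15 microarcsec: well below a single-epoch Gaia measurement, but within
# the sub-microarcsecond centroiding quoted for the later EPIC phases.
```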
So the bottom line (this is with David Kaplan) is that the relative position between these quasar images fluctuates stochastically by about one microarcsecond or so, or a bit less, 0.1 microarcsecond, due to the dark matter substructure in the lens galaxy, and I think this should be measurable with phase two or phase three. So, final slide. We are working with Nick Konidaris at the Carnegie Observatories to demonstrate this in the laboratory, with artificial light sources and a roughly 200-meter setup in a parking lot, just like Hanbury Brown and Twiss did in 1956 with a mercury lamp. We want to demonstrate this path-extension technique. I also have work in progress with a student at NYU, Calvin Chen, a postdoc at NYU, and Junwu Huang, faculty at the Perimeter Institute, on a variant of the expanding photosphere method, again to measure cosmic distances accurately, but now with intensity interferometry. So that's it for my presentation; I'm leaving you here with the picture of the instrument again. The bottom line is that I think you can achieve a light-centroiding precision, that is, measure the relative separation of stars, at the picoradian level, so sub-microarcsecond in the later phases of the experiment, as long as those sources are separated by less than a few arcseconds. So we've extended the field of view of the technique, and we've seen what you could do for certain scientific applications if you pull out all the stops in terms of spectroscopy and timing resolution of the light. So the bottom line is that one should not just look at the position of sources on your photodetector or measure the wavelength; there's a lot of information if you also record the arrival times of the photons and compare them between separate telescopes.
And I'll leave you with this quote from Hanbury Brown in 1974, from when the Narrabri observatory shut down, as a source of inspiration. Thank you. Thank you very much, Ken, for this very interesting and epic talk. So now we are open to questions. Remember that if you're watching on YouTube, you can ask your questions in the chat of the channel. While you do that, I'm going to open the floor to see if anybody in the Zoom meeting has a question. Okay, go ahead. Okay, so I do have a question, maybe a naive question, because of course I have no idea about astrometry. Most of these observations are done in the visible spectrum, right? Is there any use in thinking about extending this method to higher energies (I always think about high energies), to gamma rays or X-rays, going away from the visible? Right. In principle, you can do intensity interferometry at all wavelengths. You can do it at longer wavelengths in the infrared, which is sometimes useful if you want to see through dust, or you can even do it in the radio. But in the radio you might as well do amplitude interferometry; if you can do amplitude interferometry, there's no reason to measure some higher-point function of the electric field. You can go to higher frequencies and energies as well, as long as you have a source that emits them. The issue is that stars only get so bright; even quasars, I guess their inner regions emit in X-rays, but you typically run out of sources at very high energy, since non-thermal X-ray and gamma-ray emission is typically subdominant. For this technique to get enough signal-to-noise ratio, you need to collect a lot of photons.
And you need to collect them with high spectral resolution: you need to split the light into many bands and record it with good timing resolution. These single-photon detectors (sorry, my computer is taking some time to get through here) are primarily developed for optical and infrared wavelengths. They do work at X-ray energies, but with worse timing resolution; the picosecond demonstrations have mostly been done in the optical and infrared. I should also say that as you go to higher energies, the coherence time of, say, an X-ray light source is that much shorter, so it's that much harder to detect the temporal correlations of that field. So I wish it were possible; it is possible in principle, but in practice I think it's not achievable. Blue light is the ideal case: the blue light from a quasar, or from a hot star like a white dwarf or a blue giant, is the best-case scenario. Yeah. Thank you. Any questions, Roberto? Yes, I have two questions, or rather one question and a doubt that is maybe not related to the topic. The question I was wondering about is this: it seems that EPIC is very good at resolving these different sources, sources that could be affected by different situations, like the microlensing you mentioned or other phenomena. But given this time resolution, is it possible to see or measure fluctuations due to gravitational waves, something that makes a small shift between the paths of the two light beams? In principle, yes. There are two effects: the time of arrival and the angle of arrival of the photons. So you can measure gravitational waves in this way.
If, let's say, there were no atmosphere and you measured one source at the celestial north pole and one at the celestial equator, so at 90 degrees, then you would get a relative angle-of-arrival oscillation proportional to the strain. If you have a gravitational wave with a strain of 10^-15, you would get a relative angle of arrival of order 10^-15 radians, and EPIC could measure that. Except that 90-degree separations are not feasible, mainly because of atmospheric aberrations: the field of view is limited by the isoplanatic patch angle, so a few arcseconds. And because the angle is so narrow, the gravitational wave affects both light paths in the same way, so the phases unfortunately cancel out. If you do it in space, there's no limitation on the angle, but in space one may as well do amplitude interferometry. So I've tried thinking about it, but so far we have not identified a gravitational-wave detection use case. Okay. And the other thing: when you showed these two telescopes just now, that old image, those telescopes look very similar in design to the Cherenkov telescopes, the ones for gamma rays, that catch the light from the air shower. Yes, exactly, and it's something I didn't mention due to time constraints. Intensity interferometry didn't die completely in 1974; in fact, several researchers have tried to revive the technique. And as you said, Cherenkov telescopes, the cosmic-ray telescopes detecting Cherenkov emission from air showers, basically look like this picture: they care about very fast detection of light, but they don't care so much about imaging. So indeed, people have demonstrated intensity interferometry with present-day Cherenkov telescope arrays.
So the VERITAS, H.E.S.S., and MAGIC collaborations have all measured angular diameters of stars in this way, precisely because the tolerances on the telescope are not so stringent. If you want to do EPIC, the optical quality of the telescope does need to be better than what is depicted here or what is used in Cherenkov telescopes, because you need to do the spectral splitting: to get enough signal-to-noise ratio on distant stars you need high spectral resolving power, which means you need a tighter point spread function. So our idea is applicable only to telescopes with good optical quality. It need not be quite as good as Keck, but it needs to be better than what was done at Narrabri or what is currently implemented in Cherenkov telescopes. Yeah. Okay, thank you. All right, I think we are at the end of this webinar. Thank you all for joining us, and thank you, Ken, for giving this very interesting webinar. For all of you who are connected, remember that we have this every week on Wednesdays. We have an upcoming webinar next week, where Ren Seuss will be talking about the EWST, so stay posted for announcements of upcoming webinars. Thank you again; I hope you have a great week, and I'll see you all next week. Thank you.