Yes, we're live. All right, so let me introduce you. Hello, people. Welcome back to the Latin American webinars on physics. This is Joel Jones again, from the PUCP in Peru. I need to mute somebody, give me a second. Okay. I am having some feedback problems. Are you having feedback problems? No, it's working fine for us, I think. It's a local problem anyway. Sorry, everybody. Let me begin again. Our webinar today is given by Rameez, who is a postdoc at the Niels Bohr Institute in Copenhagen. This is webinar number 69, and Rameez will tell us about recent work of his and his collaborators in which the cosmological principle is tested. So let me tell you a bit about him. He completed his PhD at the University of Geneva as a member of the IceCube and CTA collaborations. He apparently did very well, as he received the prize for best PhD student in 2016 from the Swiss Institute of Particle Physics. His current postdoc is a distinguished postdoctoral fellowship from the Carlsberg Foundation, so I guess he's more into beer than coffee. Anyway, before we begin, let me remind the viewers that you can ask questions and make comments via the YouTube live chat, and these questions will be passed on to Rameez at the end of the webinar. So Rameez, feel free to begin the talk. So shall I share my screen now? Yes, you can share your screen. Yeah, thank you for having me. I'm very honored to be giving this webinar, and I hope you'll find it interesting. I'm going to share my screen now. Do you see the slides, everyone? Not yet. Do you see the slides now?
No. Okay, let me go back to Chrome. Screen share, application window, entire screen... wow, it's going into an infinite loop. Yeah, don't share the entire screen, just do that one window. Perfect, you're on. You're fine now. Do you see my title slide? Everything is great. I'm going to tell you about some mildly interesting work that we have done recently. I've done this with Roya, who is actually connected here and who would also be happy to answer your questions; she works at the Institut d'Astrophysique de Paris. Also Jacques Colin, and Subir Sarkar, who works at Oxford. The cosmological principle, which you see on slide two... well, I'm switching the slides, so I don't have to mention the slide numbers, I suppose. The cosmological principle is a semi-philosophical, more of an aesthetic, idea: that the universe is statistically isotropic and homogeneous. And once you start reconciling it with data, you have to add qualifiers like "statistically" and "on large scales". This idea has actually existed since the time of Newton, who mentioned it in his Principia Mathematica. Isotropy means that there is no preferred orientation to the universe, that all directions are equal: the universe should be invariant under rotations. And homogeneity is the idea that there is no preferred position in the universe: it should be invariant under translations. Or, in a more sophisticated way, one can say that any observer in the universe, on large scales, should measure roughly the same properties for the universe; there are no special positions or directions in the universe. This means that a stationary observer, and I'll clarify later why I say stationary, should see the same number of sources per solid angle in all directions, as long as he is looking at large angular scales and in deep enough surveys. Now, today we have a lot of observational evidence in support of the cosmological principle.
For example, if you go to Wikipedia and look up the cosmological principle, it will tell you that data from the Planck satellite show the universe to be highly isotropic. What you see on the left is a Mollweide projection, which is a map projection, like the Mercator projection you have seen, of the cosmic microwave background, the relic radiation from the early universe. Right after the Big Bang, the universe was a hot plasma, and photons, electrons and protons were continuously scattering off each other. As the universe expanded, it cooled below the temperature at which electrons and protons combine into atoms. At that point the photons decoupled and the universe became transparent, and ever since then those photons have been traveling towards us. As the universe expands and cools, the temperature of the photons becomes lower, but the spectrum remains a perfect black body. The temperature has been measured to be about 2.725 Kelvin, but the difference in temperature between different directions in the sky is of the order of just 1 in 10^5 of that temperature, so 100 microkelvin or so, and that is the color scale you see in the map. On the right-hand side, the figure shows how the correlations in these temperature fluctuations change over different angular scales. If you expand the map on the left in spherical harmonics and add up the squares of the coefficients for each multipole l, the data points you see correspond to that measurement, the line corresponds to a theoretical model, and the fit is essentially perfect. From this fit, we know that the density of the universe consists of roughly 26.8% dark matter and 4.9% ordinary matter; that is, everything you can touch, taste and see is just 4.9% of the universe.
And 68% of the universe seems to be made of a mysterious force, or a cosmological constant, that is causing the expansion of the universe to accelerate. And because the fluctuations in temperature are just 1 in 100,000 compared to the temperature of the CMB, this is considered to be evidence for the homogeneity and isotropy of the universe. However, the map you just saw is not actually what the Planck satellite measures. The map you saw has had a large feature subtracted from it, a feature that is a factor of 100 larger than these small-angle temperature fluctuations that you see: the CMB dipole. The dipole means that one half of the sky is warmer and the other half is colder, and the power at this scale is a factor of 100 larger than that of the small-scale temperature fluctuations; one half of the sky is warmer than the other by about a millikelvin or so. Because of the theoretical unpalatability of such a large dipole being of primordial origin, that is, of such a large dipole defining a preferred axis for the universe, we believe that this dipole is caused exclusively by our motion with respect to the rest frame of the universe. If we are moving with respect to the CMB, the photons coming from the direction towards which we are moving will be blueshifted, whereas the photons from the direction we are moving away from will be redshifted, and this causes a temperature difference. And by measuring this dipole accurately, we now know that we are moving with respect to the rest frame of the CMB at 369 kilometers per second, with a small error bar, in a specific direction; the right ascension and declination correspond to a direction in the sky.
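As a quick numerical illustration of the numbers just quoted, the dipole temperature pattern induced by our motion, to first order in v/c, can be sketched as follows (the values are the ones mentioned in the talk; this is an illustration, not the Planck pipeline):

```python
# Sketch: the dipole temperature pattern seen by an observer moving at
# velocity v with respect to the CMB rest frame, to first order in v/c:
#   Delta T(theta) = T0 * (v/c) * cos(theta)
import math

C_KM_S = 299_792.458   # speed of light in km/s
T0 = 2.725             # CMB monopole temperature in K
v = 369.0              # our speed w.r.t. the CMB rest frame in km/s

def dipole_delta_T(theta_rad: float) -> float:
    """Temperature shift (K) at angle theta from the direction of motion."""
    return T0 * (v / C_KM_S) * math.cos(theta_rad)

amplitude_mK = dipole_delta_T(0.0) * 1e3
print(f"dipole amplitude ~ {amplitude_mK:.2f} mK")  # a few millikelvin
```

The amplitude comes out at a few millikelvin, roughly a factor of 100 above the ~100 microkelvin small-scale fluctuations, as stated above.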
Now, we also know from parallax measurements and observations of nearby stars that the Sun moves around the Galaxy at 225 plus or minus 18 kilometers per second, and it turns out that this motion is actually counter-aligned with the CMB dipole motion. In total, the Local Group, which is the group of galaxies consisting of the Milky Way and some nearby galaxies, is moving at 627 plus or minus 22 kilometers per second in a specific direction, which falls within 30 degrees of the CMB dipole direction. What is the origin of this motion? If the universe were isotropic and homogeneous at every scale, obviously we shouldn't be moving. And this is where the fact that the universe has structure comes in. In the beginning, after the Big Bang, like I mentioned, the universe was very homogeneous, with over- and under-densities of very small magnitude, of the order of 1 in 10^5. But after decoupling, as the universe expanded and cooled, matter started collapsing under gravity towards the over-densities and away from the under-densities. As a result the first stars formed, the structure kept evolving, and 13.7 billion years later, as we believe it to be, we see all the structure around us: the galaxies, galaxy clusters, and so on. However, the metric expansion of the universe, the description of the universe at large scales, uses the FLRW metric, which explicitly has isotropy and homogeneity imposed on it, because that is the only way to come up with a clean solution to the field equations. The question of at what scale the volume average of the real universe actually becomes the FLRW metric is an open problem in cosmology, known as back-reaction, and we have no answer to it. So what does the real universe look like? On this slide, what you see is the real universe around us, and you can see that it looks neither isotropic nor, quite obviously, homogeneous.
In fact, the motion of the Local Group that I mentioned is caused by the dipole anisotropy of the local universe, via the equation that you see below. And this motion has actually been measured from redshift surveys and Tully–Fisher relations, and you can see a compilation of these measurements here. The dashed lines correspond to the confidence intervals, the distribution of velocities we expect around the average observer as you go out to larger and larger spheres, in the currently favored model of the universe, the Lambda-CDM model, which consists of dark matter and dark energy in the proportions I mentioned earlier. And as you can see, many notable measurements, including measurements that are used to correct supernova observables for supernova cosmology, show a significantly larger velocity, a bulk flow of the local universe, than is expected for a standard observer in the Lambda-CDM universe. On the next slide, what you see is a similar measurement from the 6dFGSv survey, which is currently the largest and deepest such survey available, and as you can see, the red dot is outside the 2 sigma confidence interval expected around the median observer in the Lambda-CDM universe. You will also see references to other measurements, which go as far out as 300 Mpc and still see a residual velocity of 159 plus or minus 23 km per second in the 2M++ compilation; this is a compilation that has been used to correct the velocities of the latest observed supernovae. So, what should a moving observer see? Since we now know that we are moving, and that a significant portion of the universe around us is moving with respect to the rest frame of the universe, what should the moving observer see? As he looks at the universe, the moving observer sees two additional features.
The first is known as special relativistic aberration, which is the idea that if you are moving in a specific direction, the angles that you measure are shifted towards that direction. But it's a very small shift; in fact, the shift is of the order of the velocity as a fraction of the speed of light, so a speed of 369 km per second causes a very small shift of the order of 10^-4 or so. On top of that, you see an additional dipolar modulation due to Doppler boosting. What this means is that if you use a survey, a telescope, to count the number of sources in the sky in some direction, you tend to be limited by a threshold flux: your telescope is able to resolve only sources that are above a threshold brightness. Astrophysical sources tend to have power-law spectra, which describe the number of photons that arrive at different energies, and if you are moving towards a source, the photons coming from it appear blueshifted, which causes lower-flux sources to cross your threshold. As a result, in a flux-limited or magnitude-limited catalog, you expect slightly more sources in the direction of your motion than in the opposite direction. This effect is quantified by the equation you see on top, where sigma_rest is the number of sources you expect per solid angle in the rest frame; it is modulated by a cos(theta) term with the coefficients that you see, where theta is the angle with respect to your direction of motion, alpha is the spectral index of the power-law emission of the sources, and x is a similar quantity that quantifies how many sources there are above different flux thresholds. In general, if you look at an all-sky catalog of galaxies, you expect to see multiple dipolar effects in the source number counts.
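The combined aberration-plus-Doppler amplitude just described can be sketched numerically. This follows the standard Ellis & Baldwin form, with alpha the spectral index (S ∝ nu^-alpha) and x the slope of the integral source counts (N(>S) ∝ S^-x); the particular values of alpha and x below are typical radio-source numbers used only for illustration:

```python
# Sketch of the kinematic-dipole amplitude for a flux-limited catalog:
# the observed source density is modulated as
#   sigma_obs(theta) = sigma_rest * [1 + (2 + x*(1 + alpha)) * (v/c) * cos(theta)]
C_KM_S = 299_792.458  # speed of light in km/s

def kinematic_dipole_amplitude(v_km_s: float, x: float, alpha: float) -> float:
    """Fractional dipole amplitude D such that sigma_obs ∝ 1 + D cos(theta)."""
    return (2.0 + x * (1.0 + alpha)) * (v_km_s / C_KM_S)

# Illustrative values: v = 369 km/s, counts slope x = 1, spectral index alpha = 0.75
D = kinematic_dipole_amplitude(369.0, x=1.0, alpha=0.75)
print(f"D ~ {D:.2e}")  # a few parts in a thousand
```

For the CMB velocity this gives an expected dipole of a few times 10^-3, which sets the scale against which the measurements described later should be compared.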
You see the kinematic dipole, which I described previously. But on top of that, if your catalog is limited in size, you expect a random dipole, which is basically shot noise, of the order of 1 over the square root of N, the size of the catalog. You also see a structured dipole, which depends on the redshift distribution of the sources: if many of the sources are nearby, where the universe on average is still not isotropic or homogeneous, then you see a structured, clustering dipole, which is not the same as the kinematic dipole and is a foreground. And you see additional foregrounds, which could be misclassified stars and other galactic contamination like dust, and any realistic analysis looking for the kinematic dipole has to account for all of this. In this talk, I'm going to tell you about three papers. The first paper examines a catalog of 600,000 galaxies for the kinematic dipole, and we find some quite interesting results; this is an extremely deep survey, where the median redshift of the sources is a redshift of one. Motivated by the results of that analysis, we examine a larger catalog of infrared galaxies, and motivated by the observations of both, we examine the latest catalog of Type Ia supernovae. I will be summarizing these results in this talk. So, the National Radio Astronomy Observatory Very Large Array Sky Survey, otherwise known as NVSS, is a 1.4 GHz survey of the sky down to a declination of minus 40.4 degrees. It has about 1.7 million sources. You can see the sky distribution in a Mollweide projection, and the figure below quantifies how many sources there are as a function of the flux threshold; as I mentioned earlier, the index of this curve is one of the parameters in the equation I showed when explaining the kinematic dipole. This is a northern-sky survey, and it obviously has no sources in the far southern sky.
But it has a twin survey in the southern sky, SUMSS, an 843 MHz survey of the southern sky by the Molonglo Observatory in Sydney, which has the same beam size as NVSS and about 200,000 sources, with similar sensitivity and resolution. There is a 10-degree band of declination where the two surveys overlap, and that band can be used to identify sources observed by both, for cross-calibration when you combine them. What we have done is simply combine the two catalogs, by cutting them at a given declination and patching them together. Obviously there are many systematics associated with doing that. The first is that they surveyed the sky at different frequencies, so once you know the spectral index of the sources, you have to rescale the SUMSS fluxes. You have to remove the galactic plane, which NVSS includes and SUMSS doesn't; but that does not bias any directional information, because the galactic plane is a great circle and is symmetric with respect to any dipole. In the overlapping region, you have to pick which sources to keep, from which catalog, and the fact that you can choose to do that at minus 30 or minus 40 degrees gives you an avenue for additional cross-checks. And you have to apply a common threshold cut on the flux of both samples. This sample is an astrometric catalog, in the sense that you have only directions and fluxes; you have no idea how far away the sources are, as there are no source-by-source redshift measurements. But after these cuts have been applied, you can cross-correlate the catalog with a spectroscopic catalog to estimate the redshift distribution, and we know that the median redshift is roughly one, with essentially no sources below a redshift of 0.3, effectively meaning we should expect no clustering dipole within the sample. So, how do we estimate the dipole? As I mentioned, the dipole is simply a hemispherical anisotropy.
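Before moving on to the estimator: the flux-rescaling step between the two survey frequencies mentioned above can be sketched as follows (the spectral index value of 0.75 is an assumption for illustration, not the value fitted in the analysis):

```python
# Sketch: rescaling a SUMSS flux at 843 MHz to the NVSS frequency of 1.4 GHz,
# assuming a power-law spectrum S ∝ nu^-alpha with a typical radio spectral
# index. Only after this rescaling can a common flux threshold be applied
# to both catalogs.
def rescale_flux(s_843_mjy: float, alpha: float = 0.75) -> float:
    """SUMSS flux (mJy) at 843 MHz extrapolated to 1400 MHz."""
    return s_843_mjy * (1400.0 / 843.0) ** (-alpha)

s_1400 = rescale_flux(100.0)
print(f"100 mJy at 843 MHz -> {s_1400:.1f} mJy at 1.4 GHz")
```

A steeper assumed spectral index pushes the rescaled fluxes lower, which is one reason the choice of alpha enters the systematics budget.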
And conceptually, the simplest way to estimate such a dipole is to just cut the sky in half, count the number of sources on each side, and see how different they are. However, this has a high bias, because a hemispherical count suffers from shot noise of the order of the square root of N. It is slightly cleaner to add up the unit vectors corresponding to the directions of all the sources in the sky. This gives you a less biased estimator, as suggested in this paper by Rubart and Schwarz in 2013, with the lowest bias and statistical error. The bias due to shot noise, from the finite catalog size and from removing specific parts of the sky, these great circles in the sky, can be quantified by studying Monte Carlo simulations, and we have quantified it: at different catalog sizes, you have to correct the measured average dipole for this bias before you claim a velocity measurement. And the result we find is that while the direction of the dipole in this catalog is always within 10 degrees of the CMB dipole direction, irrespective of the flux thresholds we use, the magnitude of the dipole corresponds to a velocity of about 1300 km per second, about a factor of 4 larger than the CMB dipole. Very conservatively, this observation has a statistical significance of 2.81 sigma. We calculated this by generating random Monte Carlo samples of isotropic catalogs that look like the NVSS-SUMSS patched catalog we created, applying the aberration and Doppler dipoles, and seeing how often we recover them. You have to think of it as a signal dipole of fixed size plus a shot-noise dipole of random size, oriented arbitrarily in the sky, and you have to count how many times the two add up to give a dipole as large as has been observed, within 10 degrees of the CMB dipole, as we have seen it. The statistical significance of this observation is constrained mainly by the catalog size.
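A minimal sketch of the linear estimator and the Monte Carlo exercise just described (an illustration of the idea, not the paper's pipeline, which also handles the masked sky regions):

```python
# Sum the unit vectors of all sources; for an isotropic sky the sum averages
# to zero, while a sky modulated as 1 + D cos(theta) leaves a residual vector
# aligned with the dipole. Shot noise biases the recovered amplitude high,
# which is why the estimator is calibrated on mock catalogs.
import math, random

def dipole_estimate(directions):
    """directions: list of (x, y, z) unit vectors. Returns (amplitude, axis)."""
    sx = sum(d[0] for d in directions)
    sy = sum(d[1] for d in directions)
    sz = sum(d[2] for d in directions)
    n = len(directions)
    norm = math.sqrt(sx * sx + sy * sy + sz * sz)
    # the factor 3 converts the mean vector into the amplitude D of 1 + D cos(theta)
    return 3.0 * norm / n, (sx / norm, sy / norm, sz / norm)

def mock_sky(n, d_amp, rng):
    """Mock catalog with a dipole of amplitude d_amp along +z (rejection sampling)."""
    pts = []
    while len(pts) < n:
        z = rng.uniform(-1.0, 1.0)
        if rng.random() < (1.0 + d_amp * z) / (1.0 + d_amp):
            phi = rng.uniform(0.0, 2.0 * math.pi)
            s = math.sqrt(1.0 - z * z)
            pts.append((s * math.cos(phi), s * math.sin(phi), z))
    return pts

rng = random.Random(42)
amp, axis = dipole_estimate(mock_sky(200_000, 0.01, rng))
print(f"recovered amplitude ~ {amp:.4f}, axis z-component {axis[2]:+.2f}")
```

With 200,000 sources the shot-noise dipole alone has an RMS amplitude of 3/sqrt(N) ≈ 0.007, comparable to the injected signal of 0.01, which is exactly why the significance of the real measurement is limited by catalog size.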
Could this be driven by contamination from local sources? One of the things we did was to try removing both the brightest and the dimmest, the lowest- and highest-flux, sources in the sky. On average, the lowest-flux sources tend to be the ones at high redshift and the highest-flux sources the ones at low redshift, but the evolution of this dipole with respect to these threshold cuts is negligible, suggesting that local sources are not driving the observation. We have also explicitly cross-correlated with spectroscopic surveys of the universe complete out to a redshift of 0.03, which is the horizon within which the average observer in the Lambda-CDM universe is expected to see anisotropy. We removed those sources, and that again has no impact on the magnitude of the dipole. At this point, because the observation is limited by the size of the catalog, we moved on to examine a significantly larger catalog of galaxies. This is provided by the Wide-field Infrared Survey Explorer (WISE) mission, a satellite that over the course of 10 months surveyed the entire sky in at least two epochs, meaning every point in the sky was seen at least twice, in four bands, 3.4, 4.6, 12 and 22 micrometers, using a 40-centimeter-diameter telescope in space. It generated a catalog of 746 million objects, amounting to hundreds of gigabytes of data. Most of these objects are actually stars, and as I said, stars are foregrounds for this sort of analysis, so we have to first get rid of them. This observatory had a very interesting, directionally unbiased survey strategy: at every point it looked in the direction perpendicular to the direction of the Sun, to keep foreground light contamination the same, so that the exposure to the sky is the same in all directions. Its advantages are its arcsecond angular resolution and multiband photometry, but the disadvantage is that, again, we do not know how far away any of these sources are.
WISE does not do spectroscopy to estimate redshifts. So how do we get rid of the stars? Elementary methods of removing stars, by cutting on magnitudes and so on, have already been explored in the paper you see under the title. We reproduced those results: we apply cuts that take us from 746 million objects down to a sample of just 2.46 million galaxies. By cross-correlating with spectroscopic catalogs like SDSS, in which stars and galaxies have been explicitly identified, you can quantify that this sample is still 76% complete in galaxies; that is, we have lost only 24% of the galaxies in the process of removing more than 700 million objects, but it still has a residual star contamination of 1.8%. The dipole at this stage is in the direction you see, 110 degrees away from the CMB direction, and it is very large: if we were to interpret it as a kinematic dipole, it would correspond to a very large velocity of around 6000 km per second or so. But obviously that cannot be the case, given the 1.8% star contamination. We have verified that our results at this stage are in agreement with people who have carried out this exercise in the past. And as you can obviously see, if you make a scatter plot of this catalog, there is a significantly higher density of sources along the galactic plane, suggesting a large star contamination, so we have to look at how to reduce the stars further. For this, we looked at the apparent motion of the sources. One of the advantages of the AllWISE catalog compiled by WISE is that it observed every point in the sky at least twice, so a fit can be done for the motion of each source, comparing how its direction in the sky changed between the two observations. On the left, the figure shows the distribution of this variable for stars and galaxies.
Whether they are stars or galaxies has been determined by cross-correlating the catalog against a small region where SDSS explicitly identifies them as stars or galaxies using spectroscopy. And as you can see, once you remove the galactic plane, the remaining stars are the ones at high galactic latitudes, and if they are at high galactic latitudes and still within your survey, they are obviously very close, so their parallax is large; as a result, the apparent motion of these stars tends to be high. If you apply a cut at 400 milliarcseconds per year, you can reduce the star contamination down to 0.1% while still keeping about 1.8 million galaxies. This is the first all-sky infrared galaxy catalog with more than a million galaxies and a star contamination as low as 0.1%, so that is one of the achievements of this analysis. At this point, the dipole magnitude reduces to 0.014, but the direction is still 50.1 degrees away from the CMB. What we then go on to do is selectively suppress local sources by cutting away sources that have been identified as extended. This is based on the logic that the satellite has a specific beam size; in fact, in the band from which we take most of the photometry, the beam size is 6.1 arcseconds, and a galaxy of the order of 20 to 30 kpc across, at a redshift of 0.03 or 0.04 or so, appears as an extended source, whereas a galaxy in the background appears as a point source. So you can get rid of extended sources, and this selectively keeps the higher-redshift sources. Even though we have no handle on redshift source by source, we can evaluate the redshift distribution by cross-correlating along a small region of the sky, and in the distribution you see, the brown histogram in the foreground is the final selection, and the purple and blue ones are those before the extension cut has been applied.
So once you selectively choose farther-away sources, the dipole now corresponds to a value of 0.0124, which again suggests a velocity of almost 1200 km per second if it is a fully kinematic dipole, and the direction of the dipole has now converged to just 4.5 degrees from the CMB. The total dipole at this stage is 4.2 sigma statistically significant in its strength, in the sense that it cannot be shot noise, but we do not yet know whether it is a clustering dipole or a kinematic dipole. What we can do is take this redshift distribution and calculate what a Copernican observer, the average observer in the Lambda-CDM universe, should see as a clustering dipole. We do that by applying a filter function to the matter power spectrum and integrating, using the expressions provided here, and on average you expect a clustering dipole of only 0.0018 or so, which means the average observer would interpret the observed dipole as a velocity of about 1000 km per second or so. However, there are already many hints that we are not Copernican observers, as I mentioned earlier: we go out to significantly larger volumes than allowed in the Lambda-CDM universe and still see residual velocities. This has also been noted by my collaborators in a publication from 2011, where they looked at Type Ia supernova data itself. As you can see, the black points with error bars correspond to the measured velocities of spheres as you go out in redshift, and the lines correspond to the median expectation for a Lambda-CDM observer; you should of course account for the cosmic-variance spread around the Lambda-CDM observer. This observation by itself is only in tension at 1.5 sigma or so, but it suggests that we are in an unusually large bulk flow within the universe.
At this point what we have to do is quantify how large the clustering dipole around such an observer is, and for that we have to use an N-body simulation of the universe. We looked at the first trillion-particle simulation of the Lambda-CDM universe and quantified the size of the dipole expected around observers satisfying the right properties in terms of large bulk flows, and we noticed that for an observer similar to the Milky Way, in a local universe like the one we see in the 2MASS redshift survey, the clustering dipole is significantly larger. If you subtract that from the observation, you get a velocity consistent with the CMB dipole velocity, but with a very large statistical error from cosmic variance. However, you should note that the clustering dipole is, on average, not aligned with the kinematic dipole, with the velocity of the observer: it is supposed to be 20 to 40 degrees away on average, at least for the average observer, whereas we see an alignment of just 4 degrees, and this is quite surprising. At this point we have to explore what an observer sitting inside a large bulk flow in an FLRW universe is expected to see, and this has been worked out by Christos Tsagas in the papers you see here. He argues that an observer with some mean peculiar velocity, sitting inside a bulk flow that might be moving faster or slower with respect to him, should see an additional dipolar modulation in the cosmic deceleration parameter; I will explain on the next slide what the cosmic deceleration parameter is. What this means is that even if the true, isotropic cosmic deceleration parameter is a positive quantity, just because you happen to be sitting in the middle of a bulk flow, and maybe looking in just one direction in the sky, you can estimate a deceleration parameter that is artificially negative, as a local-universe foreground.
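This tilted-observer effect can be made concrete with a tiny numerical sketch (the q values are hypothetical, chosen only to illustrate the sign flip; the deceleration parameter itself is defined on the next slide):

```python
# An observer in a bulk flow infers a dipole-modulated deceleration parameter
#   q(theta) = q_m + q_d * cos(theta)
# Even with a positive (decelerating) monopole q_m, a large enough dipole
# term q_d makes q appear negative, i.e. accelerating, along one direction.
import math

def q_observed(theta_rad: float, q_m: float, q_d: float) -> float:
    """Deceleration parameter inferred at angle theta from the bulk-flow axis."""
    return q_m + q_d * math.cos(theta_rad)

q_m, q_d = 0.5, -1.0                        # hypothetical values for illustration
toward = q_observed(0.0, q_m, q_d)          # looking along the flow
away = q_observed(math.pi, q_m, q_d)        # looking opposite to it
print(toward, away)  # negative one way, positive the other
```

The point is purely local and kinematic: no dark energy is needed to make q come out negative along one axis for such an observer.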
And to test this, we first have to understand what the cosmic deceleration parameter is, and here I would like to quote Allan Sandage, who first measured the Hubble parameter to reasonable precision. He described cosmology as a search for two numbers: the Hubble parameter and the deceleration parameter. Just as you can quantify the universe in terms of the amount of matter, dark energy and curvature in it, you can also quantify it in purely kinematic terms: how fast it is expanding, which is the Hubble parameter. As you go out in distance, how fast is the metric expanding? You quantify its first derivative, scaled by itself: the first derivative of the scale factor gives the Hubble parameter. The second derivative of the scale factor, appropriately scaled and with a minus sign, gives you the cosmic deceleration parameter. The minus sign has been put in explicitly to make this parameter positive for a decelerating universe, because in the 1950s, when these relations were first derived, no one had heard of dark energy, and it was believed that a universe with just matter would be decelerating; consequently, to make the deceleration parameter positive, the minus sign was put in by hand. If you go to the third derivative of the scale factor, you get a parameter called the jerk, defined analogously. And you can actually write down the luminosity distance, which describes how the universe is expanding, in terms of these kinematic parameters through a Taylor-series expansion like the one you see here. The deceleration parameter in a Lambda-CDM universe is expected to be given by this expression, but you can analyze all data, irrespective of the cosmological model, in purely kinematic terms. To account for the tilt of the universe described on the previous slide, the expected dipolar modulation, we perturb the local deceleration parameter into a purely monopole component, which is
the same everywhere in the sky, plus a scale-dependent dipolar term, where F is the scale-dependence function; we try different functional forms for the scale dependence. This is a cosine-modulated term about the direction of the CMB dipole, which is basically a component of the deceleration parameter that is positive in one half of the sky and negative in the other. And broadly, because all observed bulk flows have been within 30 to 40 degrees of the CMB dipole, roughly in one direction in the sky, we do not attempt to fit this direction; we just attempt to see how large a dipolar modulation we can find. To do that, we use a catalog of Type Ia supernovae. Type Ia supernovae are white dwarfs, stars that have reached the end of their life cycle but are below the mass required to collapse into neutron stars or black holes, which exist in binary systems and are accreting matter from a nearby star. As they accrete matter they gain in mass, and the moment they cross a threshold mass, about 1.44 solar masses, close to the absolute Chandrasekhar limit, they explode as supernovae. In their spectra they have no hydrogen, but they do have silicon, and they are identified as a unique class of supernovae in the sense that they are thermonuclear: the explosion is not because the star has reached the end of its life with the core collapsing inward. Because this explosion always happens at a fixed mass, it is believed that they could be standard candles, even though there is no theoretical understanding of this process. And indeed, if you look at their light curves, each supernova has a very different light curve in terms of its evolution in time and also in its spectra. However, it has been observed purely empirically that brighter supernovae on average tend to decline more slowly, so over longer periods of time they remain brighter for
longer. If you correct for that observation, the spread in the luminosities estimated for supernovae at fixed distance can be made smaller; this is a purely empirical observation. In practice, what is usually done in supernova cosmology is that the observed B-band magnitude is corrected by two factors, corresponding to the color and the stretch. The stretch describes how fast the supernova light curve declines, and the color describes in which band the peak of the spectrum was actually observed. Only after you account for these corrections, which is known as standardizing the supernovae, can you use them as standard candles. To do so, there is a template-fitting method applied to the light curves observed in multiple bands, which extracts these parameters, the color and stretch corrections, in addition to the peak B-band magnitude. Using this relation you get the distance modulus, which can be compared against expressions for the luminosity distance as a function of redshift, that is, against how the metric of the universe looks. There may well be other variables that the magnitude correlates with; in fact, you will find quite a lot of literature suggesting that the coefficients of these corrections, alpha and beta, also have redshift dependencies and magnitude correlations. But the current leading works in supernova cosmology use this method as described, with this 2014 compilation of observed supernovae, the largest publicly available complete compilation of supernovae that we have. It is constructed by patching together supernovae observed in a diverse set of surveys: at low redshift it puts together supernovae observed in four different surveys, and at intermediate redshift it takes supernovae seen in the Sloan Digital
Sky Survey's dedicated supernova survey, the survey known as the Supernova Legacy Survey fills in the intermediate redshifts, and at the highest redshifts you have supernovae from the Hubble Space Telescope. All of their data have been fit through one process, the Spectral Adaptive Light-curve Template, which allows them to extract the peak B-band magnitude and the color and stretch corrections. The sky map of these supernovae, which you will not see in most supernova cosmology papers, shows that the coverage of the sky is actually highly anisotropic: all the intermediate-redshift supernovae from SDSS lie in just one direction in the sky. And this hemispheric anisotropy is extremely severe if you look with respect to the directions of the bulk flows and the CMB dipole that I mentioned earlier. The star that you see is the CMB dipole, and something like 631 out of the 740 supernovae are in the direction opposite to the CMB dipole. As you can see, the CMB dipole is only some 30 to 45 degrees away from the two bulk-flow motions going out to the largest distances, and with respect to those directions, too, 9 out of 10 supernovae are in the opposite hemisphere. Effectively, the vast majority of the supernovae we see, 9 out of 10, are in the direction opposite to our motion. There is a newer compilation, known as the Pantheon compilation, which adds a further 300 or so supernovae, observed by Pan-STARRS, on top of the Joint Light-curve Analysis catalogue. We do not use this catalogue; we merely mention it because it suffers from the same problems: the Pan-STARRS coverage of the sky is again patchy, and 9 out of 10 supernovae are in the direction opposite to the bulk flows. What is the impact of peculiar velocity on supernova magnitudes? A supernova that is moving with respect to the Hubble flow
is expected to have its redshift corrected by this expression: the peculiar velocity of the supernova corrects the redshift, and our own velocity as observers adds a further correction. The luminosity distance is corrected similarly, with an additional factor due to the peculiar velocity of the supernova, which comes from boosting effects. All catalogues of supernovae since 2014 actually have their redshifts corrected to account for the local bulk flow, based on a flow model, which means that what you actually see in the catalogue, in the ZCMB column, is not the measured redshift but a redshift that has had corrections applied to it; the magnitudes have also been corrected to account for these deviations. We examine these corrections by simply subtracting out the expression labelled C here. What this means is that we correct only for our own known velocity with respect to the CMB, and what remains should be the peculiar velocity of the supernova. As you can see, all supernovae out to a redshift of about 0.06 have some non-trivial velocity correction applied to them, beyond which the corrections arbitrarily go to zero. Mostly because the effect we are trying to constrain is an observer-dependent effect, something that depends on the heliocentric redshifts of the supernovae, we subtract out these corrections; but we have other motivations to do so, because some of these corrections are completely inexplicable. This object that you see here, SDSS2308, is at a redshift of about 0.13. There are no surveys of the universe that are complete and go out that far, and there is no way to know what peculiar velocities objects at that redshift have, yet it has a non-trivial correction applied to it in the JLA catalogue. In the Pantheon catalogue we actually found, at these intermediate redshifts, more than 20 such sources with non-trivial and inexplicable velocity corrections applied to them, which are evidently mistakes, as we discovered from
contacting the authors. Subsequently we subtract out these corrections, and then we carry out a maximum likelihood analysis of the catalogue. What we do is construct a likelihood in which the probability of observing the data given a model is split into the probability of obtaining certain distributions for the color and stretch parameters given a cosmology, and the probability of obtaining the data given those distributions for the color and stretch parameters. One thing you can see from simply examining the data is that the color and stretch parameters are well described by Gaussians, and from the Spectral Adaptive Light-curve Template method it is known that the color and stretch corrections you see for each supernova are not the true values; they are dispersed around a central value, which we estimate as Gaussian according to a method introduced by my collaborator Sarkar back in 2016. The results of that work are themselves interesting, and I will briefly cover them later. So we can write down the individual probabilities that go into the likelihood as Gaussians in this way, and using this method, with a treatment that includes the full covariance matrix of the supernova sample provided by the supernova cosmology groups, we can carry out a likelihood analysis and simultaneously fit for the monopole component of the deceleration parameter. From the kinematic expression for the luminosity distance you will notice that the curvature term, the one involving kc squared, is indistinguishable from j0, the jerk parameter; in any fit these two are degenerate, essentially one parameter, and we fit for that. We also fit for the scale dependence and the dipolar component of the cosmic deceleration parameter, along with the other parameters that go into standardizing the light curves: the coefficients alpha and beta, which correct for the color and stretch, and
the mean and variance of the distributions that give you the color, the stretch, and the absolute magnitude of the supernovae. We can construct confidence intervals from this likelihood by integrating up to the observed maximum likelihood ratio, the familiar chi-square test statistic, and we can also maximize over the parameters we are not interested in, the non-cosmological parameters that go into standardizing the supernovae, to obtain the profile likelihood and examine the statistical significance of the observations. What you see is that in our best fit, the dipolar component of the local deceleration parameter is actually larger than the monopole component: the dipole is minus 0.3 while the monopole is only minus 0.24, and any fit has to include both components simultaneously. The hypothesis that qm, which corresponds to the acceleration of the universe due to a cosmological constant or dark energy, is non-zero is preferred only at a statistical significance of something like 2 sigma, that is, 95 percent. This is a very low statistical significance, a very low standard of evidence: the data are compatible with a non-accelerating universe. If you want to understand this a little more intuitively, what it means is that if you look in the direction opposite to the CMB dipole you see a clearly accelerating universe, whereas in the direction of the CMB dipole, where you have only one tenth of the sample, the dipole extends out to a redshift of 0.2 or so, a very large scale, implying a bulk flow much larger than is allowed in a Lambda-CDM universe. If you look at the 50 or so supernovae in this direction, going out to a redshift of 0.2, the universe actually appears to be decelerating, not accelerating. This is particularly surprising because the first analysis
which claimed that the universe was accelerating, and which eventually went on to win the Nobel Prize in Physics in 2011, used a catalogue of only about 50 supernovae, which all happened to be in this direction, the direction opposite to the CMB dipole. So the significance of q0 being negative, of the statement that the universe is accelerating, is only 2 sigma, and cosmic acceleration may simply be an artifact of us being located inside a local bulk flow. Indeed, it was already suggested back in 2016 by my collaborator Subir Sarkar and his students, in an analysis that used a very similar method in which they fit for the density parameters Omega_m and Omega_Lambda, that the data are compatible with no acceleration at slightly less than 3 sigma. This observation was confirmed by Rubin and Hayden, who however go on to argue that the color and stretch parametrizations should be redshift- and sample-dependent. In the process they introduce 10 or 12 new parameters into the fit, and they show that if you do that, the statistical significance of acceleration, of q0 being non-zero, goes up to 4.6 sigma. Even then it goes up to only 4.6 sigma, and you must understand that at this point they have introduced about 20 parameters to standardize 740 supernovae, while using only 2 parameters for the interesting thing, the cosmological parameters, and assuming a perfectly isotropic and homogeneous universe, which we do not see. In our paper we also deal with this argument by verifying their results: we carry out this method with 22 parameters and no dipole, and we confirm that in that specific fit the universe appears to be accelerating at a statistical significance of more than 4 sigma. However, if you add a dipolar component on top of it, the fit still allows for a very large dipole out to a redshift of 0.18 or so, and the universe is still compatible with no acceleration at 2.4 sigma.
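The dipole-modulated deceleration parameter discussed above can be sketched numerically. The following is a minimal illustration, not the paper's actual fit: it assumes an exponential scale function F(z) = exp(-z/S) for the dipole, the second-order kinematic Taylor expansion of the luminosity distance, and illustrative values S = 0.1 and H0 = 70 km/s/Mpc, with q_m = -0.24 and q_d = -0.3 roughly as quoted in the talk.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s
H0 = 70.0            # Hubble constant in km/s/Mpc (assumed for illustration)

def q_eff(z, cos_theta, q_m=-0.24, q_d=-0.30, scale=0.1):
    """Deceleration parameter with a dipolar modulation.

    cos_theta is the cosine of the angle between the line of sight and the
    dipole direction; F(z) = exp(-z/scale) damps the dipole with redshift.
    """
    return q_m + q_d * cos_theta * np.exp(-z / scale)

def dl_kinematic(z, q):
    """Second-order kinematic Taylor expansion of the luminosity distance (Mpc)."""
    return (C_KM_S * z / H0) * (1.0 + 0.5 * (1.0 - q) * z)

def distance_modulus(z, cos_theta):
    dl = dl_kinematic(z, q_eff(z, cos_theta))
    return 5.0 * np.log10(dl) + 25.0  # dl in Mpc

z = 0.1
mu_toward = distance_modulus(z, +1.0)  # looking along the dipole direction
mu_away = distance_modulus(z, -1.0)    # looking opposite to it
print(f"mu(toward) = {mu_toward:.4f}, mu(away) = {mu_away:.4f}")
```

With these numbers, supernovae along the dipole direction (where q is more negative) appear slightly fainter at fixed redshift than those in the opposite hemisphere, which is the kind of directional asymmetry the fit is sensitive to.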
Now, you must understand that if all your supernovae happen to be in one half of the sky, and you have an ontological reason to expect a dipolar modulation, which you do from kinematic effects, and we know that we are sitting in the middle of a large bulk flow; in fact, no matter how far out in redshift we have gone, we have seen no convergence with the CMB dipole and have not been able to recover its origin, then in any catalogue with such a large hemispherical anisotropy, the dipolar component of the deceleration parameter needs to be allowed for, simply to avoid biases. Doing so reduces the statistical significance of acceleration to about 2 sigma, or 2.4 sigma if you use the highly biased treatment of the nuisance parameters. On top of that, you should observe that the nuisance parameters in the Phillips relation, which describes how the supernovae are made into standard candles, have to be treated democratically. That is, if you treat the color and stretch parameters as redshift-dependent, when there is no theoretical reason to believe that they are redshift-dependent while the magnitude itself is not, then you should allow the absolute magnitude to be redshift-dependent as well. If you do that, as you can verify from the fit we present in the last row of the paper, you again see absolutely no evidence for acceleration. Of course this is a trivial statement, because allowing the absolute magnitude of supernovae to be redshift- or sample-dependent completely destroys any case for using them as standard candles. But one of the arguments used for allowing the color and stretch parameters to be redshift-dependent is that the quality of fit improves significantly, as seen by a much more negative value of the test statistic, which justifies these additions by information-theoretic criteria; and allowing the absolute magnitude of the supernovae to be sample-dependent also improves the
quality of fit by the same information-theoretic criteria, such as the Akaike information criterion. In summary: the dipolar modulation seen in high-redshift radio galaxies is a 2.81 sigma effect, and to anyone who is not a physicist I have to emphasize that 2.81 sigma is not a high standard of evidence. The fact that a 2.81 sigma tension exists between the dipole in radio galaxies and the CMB dipole is not, by itself, something I would worry about at all, but it is interesting nevertheless. If you examine infrared galaxies, suppressing local structure and getting rid of stars and so on, you see a similarly large dipole, which can be reconciled with the CMB value only for a non-Copernican observer, that is, an observer sitting in a region of a Lambda-CDM universe that is rare at the level of less than 2 percent, which again is not decisive. So I would say these two are slight tensions, properties to be investigated in the future, especially with surveys such as the Square Kilometre Array, which will have a much larger number of galaxies. However, given these two observations, given the fact that 9 out of 10 of the supernovae we see are in one half of the sky, which happens to align almost perfectly with the directions of these bulk flows, and because we have kinematic reasons to believe that q0 should then have a dipolar modulation; indeed, even a universe without Lambda, without dark energy, will have structure, and a modulation in q0 is ontologically more expected than a monopole in q0, any principled analysis of supernovae should allow for a dipole on top of the monopole. Once you do that, you get a dipole that is significantly larger than the monopole out to a redshift of 0.2, and the monopole component of the acceleration of the universe, which is often attributed to a cosmological constant, is favored over zero only at 2 sigma, which suggests that the universe is probably not accelerating, at least from
supernova data alone. I will not comment on independent data that suggest the universe is accelerating, because I don't know much about them. So thank you for listening; I hope you understood and enjoyed it, and I would be very happy to take questions. Should I switch back to the hangout? Yes, please, thank you. Let's see; I hope I did not go over time, I actually thought this would take only about 40 minutes. So thank you very much for the webinar; it's been quite intense, actually, but it's been good. I don't know if there are any questions from the audience right now. Maybe just a quick question, out of curiosity: you were describing how you applied and eliminated these corrections to get your results; could you comment a little on what the Planck collaboration does to get their data cleaner? Oh, Planck just subtracts the dipole. They expand the map in spherical harmonics and remove the l equals 1 component. It could be much more sophisticated than that, because they do not see the galactic plane, the galaxy is a zone of avoidance for them, and once you don't have the galactic plane, none of the l modes are orthonormal anymore; but they account for these effects, and in effect they just subtract the dipole component. So instead of the 1-in-10-cubed anisotropy of the dipole, they present only results from l equals 2 onwards; the Lambda-CDM fit is done only with l equals 2 onwards, and l equals 1 is subtracted out. Okay, thank you. I'm not an expert on CMB data, I've never written a CMB paper, so I do not know the exact technical details. That was my question; it's just good to know.
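The l = 1 subtraction described here can be sketched in plain numpy. Real CMB analyses work with spherical harmonic transforms on HEALPix maps; this toy version, under assumed synthetic numbers (a 2.725 monopole and a dipole of amplitude 1.23e-3 toward an arbitrary axis), just least-squares fits and removes a monopole plus dipole from values sampled at random sky directions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Random unit vectors (sky directions) and a synthetic "temperature" map:
# monopole 2.725, dipole of amplitude 1.23e-3 along the z-axis, plus noise.
n = rng.normal(size=(5000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
sky = 2.725 + 1.23e-3 * n[:, 2] + 1e-5 * rng.normal(size=len(n))

# Design matrix [1, nx, ny, nz]: the monopole plus the three l = 1 components.
A = np.column_stack([np.ones(len(n)), n])
coeffs, *_ = np.linalg.lstsq(A, sky, rcond=None)

residual = sky - A @ coeffs  # map with monopole and dipole subtracted
print("fitted dipole vector:", coeffs[1:])
print("residual rms:", residual.std())
```

The fitted dipole vector recovers the injected amplitude, and the residual map, the analogue of what enters the fit from l = 2 onwards, retains only the noise.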
Thank you. Any other questions? Yes, I have a couple of questions for Rameez. First of all, thank you, Rameez, your webinar was very interesting. I was wondering, which other observables can be used for this kind of test of the cosmological model, besides the CMB and type Ia supernovae, other types of observables to cross-check your results and really push the cosmological principle to a strong test? Well, future surveys of the universe, like the Square Kilometre Array, will allow us to go much deeper. There is a proposal to use Gaia data to actually measure the acceleration of the local group, and the structure growth factor has been proposed as another way to observe this. However, as far as I know, none of them are currently in significant tension with theoretical predictions. In the future, as we have more surveys such as Euclid and SKA, we should be able to test the cosmological principle to higher and higher precision. But the cosmological principle is not a very well stated idea: when we finally reconcile it with data, we have to account for the fact that structure is growing, and we need to identify the scale of homogenization of the universe, which as far as I know has not been done satisfactorily, especially for the local universe. Lambda-CDM with the current fit parameters says that the universe should reach isotropy on scales above 100 Mpc or so, whereas the local universe, even out to 300 Mpc, seems to be still flowing, suggesting that there is a large dipolar anisotropy beyond it. We will need much deeper surveys in the future, like the Dark Energy Survey, to know for sure whether we converge to the rest frame of the CMB, and even then it could turn out that we are just sitting in a very special place in an otherwise isotropic universe, but that would be a little hard to swallow, I think. I mean, a tension at the level
of two sigma between the local universe and the global expectation is probably okay, so people shouldn't take this too seriously. But if we go much deeper in redshift and it turns out we still do not converge with the CMB dipole, then we might have to start thinking about whether there is a primordial dipolar component, a remnant left over from inflation. I'm not a theorist, so I don't know the motivation for such things, but I've heard people use them to explain these anomalous dipoles seen in infrared and radio galaxies. Okay, thank you. You're welcome. All right, any other question from the audience? Okay, let's check the YouTube channel. No questions from the YouTube channel. Sorry, I was worried I'd have to answer things I don't know anything about. Don't worry. So, from my non-expert point of view: in your scenario, what is the future of the universe? Is it no longer ever-expanding, are we having a big crunch again, or what is the expectation from your best-fit point? Well, I don't know. If there are theoretical reasons to believe that dark energy exists, then our current best fit still suggests a non-zero value; it is just that it is statistically compatible with zero, at two sigma. I mostly look at the data, and all I can say is that the current data are completely inconclusive. I have been told that baryon acoustic oscillations provide unequivocal evidence that the expansion of the universe is accelerating, and the CMB fit itself suggests a large amount of dark energy. However, the CMB is not directly sensitive to dark energy: they fit for Omega_m, they fit for curvature, and they assume the sum rule, and if you have to argue that there is nothing else in the universe in order to get dark energy from the CMB, I'm not sure. So we just need a lot more supernovae to be certain, especially in the other half of the sky, where no one seems to have looked. Like I said, 9 out of 10 of them happen to be in one half of the sky
and if you just look in that other half of the sky, all the supernovae there suggest that the universe is decelerating; only if you ignore this direction, fit them all as one sample, and assume that directions don't matter at all do you get an accelerating universe. So it is possible that the universe still has dark energy, I don't know, but the current data don't establish it, and I think extraordinary claims require extraordinary evidence; this claim has not been proven to any such level of evidence so far, at least from supernova data. Okay, great. I think we've gone past the, I have a very short question, just because of the discussion, from what Rameez was saying. This is a doubt, in fact, and I don't know if it is related, but when you are testing the cosmological principle and you see this kind of anisotropy, do you know if it is compatible with some theoretical prediction, for instance from models of inflation or axions? I remember that in the case of axions as dark matter, in the early universe you have domains in which the value of the axion field is different in different patches of the universe. Could this kind of effect induce an anisotropic, non-homogeneous universe, in the sense of the cosmological principle? I don't really know at all; I'm not even familiar enough with axion physics to comment on whether there are domain walls. I would expect that it need not be a dipole; you should also see it in higher multipoles, and indeed there are anomalous observations in the CMB, like the fact that the quadrupole and the octopole of the CMB tend to align almost perfectly, and these have all been noted as curious inconsistencies, things that need to be explained. I think the most convincing theoretical argument people have made to try to explain these large dipoles is isocurvature perturbations causing a remnant dipole mode from the inflationary era, so that the universe has a primordial
dipole in it, and it is not spherically symmetric anymore but has more of a cylindrical symmetry, with a preferred direction. But I don't know how palatable that is. Like I said, I'm mostly an experimental physicist; I try to work with data and test hypotheses, and I don't know too much about, for example, axion physics or the beyond-Standard-Model things you can do to explain these observations. We need a lot more data to be able to test such models. These things are not being done in a statistically rigorous way at all; the cosmology community is much more susceptible to confirmation bias than particle physicists, and their standards of statistical evidence are significantly lower. Okay, thank you. There are papers noting that of some 500 apparently independent measurements of Omega_m or the dark energy equation-of-state parameter, only about three are beyond one sigma, and such central tendencies in what are supposed to be independent measurements suggest that a large confirmation bias is affecting the community. So I would say we just need a lot more data to be able to test anything in a rigorous way. Okay, thank you. All right, I think it's time to close. Before we do that, let me remind you that in a couple of weeks we'll be having a colloquium by Mariam Tórtola, and it should be on neutrinos. It has a non-standard date and time: it will be on the 11th of October, which is a Thursday, the first time in the long history of these webinars that we're having a presentation on a Thursday, so be aware of that, and from what I've seen it's one hour earlier than usual. Please check our web page for the details, and we'll be seeing you in a couple of weeks. That's it, see you next time. Thank you, Rameez, for the talk once again. Thanks, thanks for having me.