Thanks for having me. I will tell you a bit about multi-messenger data networks, and this will be a mix of science and technical aspects; I will try to use the science mainly as examples of what can be done with these data networks.

First of all, I would like to show you what I think was the first example of multi-messenger astronomy: in 1987, a supernova exploded in the Large Magellanic Cloud, and for the first time we saw a neutrino signal from an object outside the solar system. I think this really was the beginning of multi-messenger astronomy. We detected roughly 20 neutrinos from this supernova. If this happened today, we would be well prepared with the Supernova Early Warning System, SNEWS, which is basically a network of all the neutrino detectors around the globe, waiting for a huge burst of neutrinos from a potential galactic or nearby supernova. The goal of this network is to watch for such a neutrino burst and, if it happens, to provide the community with a prompt alert, so that everyone can point their telescope in that direction. We had roughly 20 neutrinos from the supernova in 1987; if this happened today, we would detect on the order of 10,000 neutrinos, depending on the detector. So it would really be spectacular. What we would expect to see, as I said, is this burst of neutrinos, and we would also expect a gravitational wave signal at the same time. All of this would happen before we could see the actual electromagnetic signal, which is first a shock breakout and then the typical supernova light curve that one can see in the optical. And with the setup of SNEWS, this could happen really fast: this is the estimated latency for the alert that would be sent out by SNEWS, and more than 50% of the triggers would go out well below one minute. So we are really well set up for the next supernova in our galaxy. I wanted to mention this as an example, although I am aware it is probably not directly relevant for CTA.

In the rest of this talk, I will focus on extragalactic transient or variable multi-messenger sources. A summary of the sources we could potentially look at is shown in this plot by Murase and Bartos, and the source types can be split into roughly three classes. First, there are multi-wavelength and multi-messenger sources related to supermassive black holes in the centers of galaxies. You can have active galaxies that accrete material and produce jets, and in those jets you can produce neutrinos and ultra-high-energy cosmic rays; here we could especially look for flaring blazars. Quiescent supermassive black holes could also give an interesting signature: if a star approaches too close to such a black hole, it is tidally disrupted, and part of the disrupted star is then accreted onto the black hole; in some cases a relativistic jet is also produced. These tidal disruption events have become really good candidates for high-energy neutrino production. So that is what can happen around supermassive black holes, but there are also interesting source classes on the stellar scale. The most famous one is probably the gamma-ray burst, illustrated here: a very massive star explodes and drives an extremely relativistic jet that can produce neutrinos and ultra-high-energy cosmic rays. But there might also be a milder version of such explosions in the form of engine-driven supernovae.
In this case you would also produce a jet, but the jet is only mildly relativistic and not energetic enough to penetrate the surface of the star; it gets stuck in the stellar envelope. So most of the gamma rays would be absorbed, or you would only see a low-luminosity gamma-ray burst. In the extreme case all the gamma rays are hidden, but this could still be a very good neutrino source, and such sources are much more abundant than gamma-ray bursts. There is another class of supernova that explodes in a very dense circumstellar medium. There you have interactions of the ejecta with this dense medium, and you get something like a supernova remnant on a short timescale: you could have efficient cosmic-ray acceleration on a timescale of months and produce neutrinos and maybe also gamma rays. And finally there is the class of compact object mergers, the most famous being neutron star mergers, where you produce the kilonova signature and potentially also jets that can produce neutrinos and gamma rays. More debated is the merger of two black holes: normally you would not expect any multi-wavelength or multi-messenger signature from that except the gravitational waves, but if the merger is embedded in an environment that still contains gas, then you could see a signature and produce neutrinos and gamma rays in that environment.

So what are the challenges in detecting the multi-messenger signal from these sources? There are several, and let me list a few of them. The first one is the fact that the signal might fade quickly, especially in the case of gamma-ray bursts. So really quick communication is needed to distribute the information among a network of observatories, to trigger observations quickly and get the most information out of an interesting source. As an example, I want to show an optical follow-up program that ANTARES is running. The grey lines are optical GRB afterglows. Every time ANTARES detects an interesting neutrino event, they send a trigger to a network of optical telescopes that then observe that direction in the sky as soon as possible. You can see the observations they did in the past indicated by the colored markers, and in some cases they really managed to observe that position in the sky very quickly. In those cases, if there had been a gamma-ray burst they would have seen it, or they can exclude the existence of a gamma-ray burst. So this is an example where really quick communication is key.

Another challenge is the high data rate, and the fact that the data rate will get even higher in the future with the new, larger observatories that cover larger volumes. We really expect an explosion of the data rate. As an example, I am showing you here the Zwicky Transient Facility, an optical survey instrument that produces roughly a million alerts per night. Those alerts indicate variable or transient sources in the sky. This already large rate will increase further with the Vera Rubin Observatory, where we expect at least a ten times higher rate of alerts.
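Just to give a feeling for what acting on such an alert stream looks like in practice, here is a toy filter of the kind a broker might apply; the field names are invented for illustration and are not the actual ZTF or Rubin alert schema. I will come back to smarter selections in a moment.

```python
# Toy alert filter: keep only alerts that look like real, extragalactic transients.
# The field names below are invented for illustration; real ZTF/Rubin alert
# packets have a different and much richer schema.

def is_interesting(alert: dict) -> bool:
    return (
        alert["real_bogus_score"] > 0.8               # reject likely image artifacts
        and not alert["matches_known_variable_star"]  # reject catalogued variable stars
        and alert["n_detections"] >= 2                # require at least two detections
        and abs(alert["galactic_latitude_deg"]) > 10  # avoid the Galactic plane
    )

alerts = [
    {"real_bogus_score": 0.95, "matches_known_variable_star": False,
     "n_detections": 3, "galactic_latitude_deg": 42.0},
    {"real_bogus_score": 0.60, "matches_known_variable_star": True,
     "n_detections": 1, "galactic_latitude_deg": 2.0},
]
print([is_interesting(a) for a in alerts])  # [True, False]
```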
Now, while we can probably handle this for a single observatory, if we want to start correlating several observatories, say n of them, and each of them increases its data rate by a factor of 10, then we quickly arrive at a factor of 10 to the power of n. So we really get an explosion in the data rate. What we need here is a smart selection of the interesting events out of this large number of events, focused on a certain science case that we are interested in. For example, of the roughly one million alerts per night in ZTF, most are just variable stars, which for multi-messenger astronomy we probably do not care much about, and we could filter them out.

The next challenge is that if you really want to combine data sets in a meaningful way, you face the problem of complex analyses on heterogeneous data sets, so you really need deep expertise from the various experiments. As an example, I show you here the gamma-ray light curve of a flaring gamma-ray source that was found in spatial and temporal coincidence with a high-energy neutrino. The neutrino arrived here, and this is the gamma-ray activity over 10 years measured by Fermi. The immediate question you want to ask is: how likely is it that we find something like this by chance? To answer that question you basically have to analyze all the gamma-ray sources in the sky, look at their history, and see how often you would find a coincidence with a high-energy neutrino event just by chance. And of course you have to know how many neutrino alerts have been issued and what spatial area they cover, and those pieces of information have to be combined. This cannot be done without the expertise of, in this case, both Fermi and IceCube. For example, to really look at all these gamma-ray light curves, Sara Buson, Matthew Wood, and I had to produce long-term light curves for thousands of sources, and it took us weeks to answer this seemingly simple question of how often we would see something like this by chance. If you do not have the expertise and the resources, it is very hard to answer such a question.

The next challenge I want to mention is provenance. The problem we are facing is that we will probably only be able to study a few isolated events in detail, and to draw conclusions from only a few events we really have to understand very well why we selected those events and how to estimate the background properly. As an example, I show you here the optical follow-up of gravitational wave events. As you all know, they cover huge areas in the sky, so it is very hard to observe all of this area, and it is important to have predefined criteria: what do you follow up, and how do you select whatever you find in there? Only then can you estimate how sensitive you are, and also estimate the background of unrelated things that you might find there by chance, if you want, for example, to derive limits on the kilonova rate from such a study.

So I want to talk about two multi-messenger networks in detail today. The first one is the Astrophysical Multimessenger Observatory Network, AMON, and the second one is AMPEL, which stands for Alert Management, Photometry, and Evaluation of Light curves.
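Coming back to the chance-coincidence question for a moment: the full analysis needs the individual light curves, but the basic bookkeeping is simple. Here is a minimal sketch with made-up numbers; the real study also has to fold in the flare history and duty cycle of every source, which is what took us weeks.

```python
# Back-of-the-envelope chance-coincidence estimate: how often does a neutrino
# alert overlap a flaring gamma-ray source purely by accident? All numbers are
# placeholders, not the values from the actual IceCube/Fermi study.

import numpy as np

n_alerts_per_year = 10     # neutrino alerts issued per year (placeholder)
error_area_deg2 = 1.0      # typical 90% error area per alert (placeholder)
n_flaring_sources = 50     # gamma-ray sources flaring at any given time (placeholder)
sky_area_deg2 = 4 * np.pi * (180 / np.pi) ** 2   # full sky, ~41253 deg^2

# Expected number of accidental alert/flare overlaps per year
mu = n_alerts_per_year * error_area_deg2 * n_flaring_sources / sky_area_deg2

# Probability of at least one chance coincidence in a year (Poisson)
p_chance = 1 - np.exp(-mu)
print(f"expected accidental coincidences per year: {mu:.3f}")
print(f"probability of at least one per year:      {p_chance:.3f}")
```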
So let me start with AMON. AMON is an effort based at Penn State University, and the idea is to enable near real-time coincidence searches across a large number of multi-messenger observatories and astronomical facilities, covering cosmic rays, electromagnetic emission, gravitational waves, and neutrinos. AMON has three main goals. The first is to receive events and broadcast them to the community. The second is to look for sub-threshold coincidences, that is, to use sub-threshold events from two or more experiments that by themselves are not interesting but that might become significant once you combine them. And finally, AMON wants to store events in a database so that one can perform archival coincidence searches with that database.

This is the list of correlations that AMON is planning to do; some of them are already implemented and others are planned. For example, they plan to correlate gamma rays and neutrinos, gamma rays and gravitational waves, and gamma rays, neutrinos, and cosmic rays. And then they have this pass-through channel that basically just receives interesting events from one observatory and broadcasts them to the community. All the coincidence searches define a given radius within which you call something a coincidence, which obviously depends on the angular resolution of the experiments involved, and a time window, usually some generic window chosen to suppress the background. Depending on the time window you choose, you are sensitive to different source classes.

Since my expertise is mainly in neutrinos, I want to highlight a few science cases related to neutrinos. A little bit of background: we have three neutrino detectors currently operating. IceCube at the South Pole is the largest operating detector at the moment, covering a volume of one gigaton, that is one cubic kilometer. There is also ANTARES in the Mediterranean and Baikal in Lake Baikal. In both the Mediterranean and Lake Baikal, larger detectors are under construction, KM3NeT in the Mediterranean and the GVD detector in Lake Baikal. And at the South Pole, IceCube-Gen2 is planned, which would have a roughly 10 times larger volume than the current IceCube.

So far we have detected a diffuse flux of high-energy neutrinos, shown here as the flux times E squared as a function of energy. This is the diffuse neutrino flux that IceCube has detected, and in this plot it is compared to the diffuse gamma-ray background measured by Fermi and the ultra-high-energy cosmic rays measured by Auger. There is a strong connection between these three messengers, first in terms of the energy budget injected into them, and also in terms of how they are produced. I would like to show this here: to produce neutrinos, you basically need high-energy cosmic rays interacting with matter or with ambient photon fields. In these interactions you produce neutral or charged pions. The neutral pions decay into two gamma rays, and the charged pions decay to produce a bunch of neutrinos. That already shows the connection between neutrinos and cosmic rays, and also between neutrinos and gamma rays: if you produce charged pions that give you neutrinos, you always also produce neutral pions that give you gamma rays.
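Written out, the production chain I just described looks schematically like this, taking photo-hadronic interactions via the Delta resonance as the example (proton-proton interactions give an analogous pion chain):

\[ p + \gamma \rightarrow \Delta^{+} \rightarrow p + \pi^{0} \;\;\text{or}\;\; n + \pi^{+} \]
\[ \pi^{0} \rightarrow \gamma + \gamma, \qquad \pi^{+} \rightarrow \mu^{+} + \nu_{\mu}, \qquad \mu^{+} \rightarrow e^{+} + \nu_{e} + \bar{\nu}_{\mu} \]

So each charged pion ends up giving three neutrinos, while each neutral pion gives two gamma rays, which is why the two fluxes are tied together at production.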
The problem with the gamma rays here is that they could also be produced in leptonic processes, not only in the hadronic processes related to neutrinos. And when you produce them, you produce them at roughly the same energies as the neutrinos. To illustrate this, I show this spectral energy distribution here: in red you see the neutrinos, with energy on the top axis, so we produce neutrinos at 100 TeV energies. You would produce gamma rays with roughly a similar spectrum; however, they would then be absorbed in the source and also during propagation. So you would not necessarily expect to see 100 TeV gamma rays: they interact on their way or in the source and cascade down to lower energies, which could be in the Fermi range or maybe even lower, in the MeV or X-ray range. So keep in mind that this connection at production does not necessarily look the same once we observe it here at Earth.

Let us go back to the goals of AMON. The second one was to look for coincidences between sub-threshold events. With the neutrino-gamma-ray connection in mind, the AMON team designed a sub-threshold search combining IceCube events and HAWC hotspots. HAWC hotspots are defined as excesses above 2.7 sigma local significance, with a duration of up to six hours; the six hours are set by the transit time of a source above the HAWC detector. By themselves they are not significant, and keep in mind this is just the local significance: if you correct for the look-elsewhere effect, 2.7 sigma is not significant at all. The rate of these hotspots is roughly 800 per day. These are now combined with an IceCube stream of single neutrino track events, which is also largely dominated by atmospheric background; here the rate is roughly 650 per day, and you expect only a very small signal contribution, since most of those neutrinos will be atmospheric background. AMON combines the two with the goal of finding an extragalactic source that produces gamma rays and neutrinos. The search is tuned such that it would find four background events per year, and the duration of the coincidences is basically set by the duration of the HAWC hotspots, which is a few hours, so they are sensitive to transients on the order of hours. Here is one example: the blue circle is one of the HAWC hotspots, and four neutrino events were found in coincidence with it. They then calculated a combined direction, which is this red circle here, and this is broadcast by AMON to the community; everyone with a telescope can decide to follow up on it. I guess this could also be an interesting science case for CTA to follow up on in the future. So this is one example of the sub-threshold searches implemented in AMON.

Now I want to show some examples of receiving events and broadcasting them, and again I use neutrinos as an example. If we want to use neutrinos to trigger multi-messenger searches, then we really have to get rid of the large background of atmospheric events, and there are two different ways to do this. To explain how, I show you here a histogram of the energy of IceCube events, with the expected atmospheric background in blue. At low energies we really expect a huge background, roughly one event per square degree per year.
At high energies, however, the rate is much smaller, and that is where we can actually see the signal sticking out: there we expect tens of astrophysical neutrinos per year at very high energies. At low energies there are many more astrophysical neutrinos, hundreds per year, but unfortunately they are buried in this very large background of atmospheric neutrinos. So one thing that can be done is simply to apply an energy threshold and select the highest-energy neutrino events, which are quite likely to be of astrophysical origin. This is what IceCube and also ANTARES are doing. IceCube, for example, has a so-called gold channel with the most interesting, most signal-like events; there are about 10 of those per year, and the background contamination in this channel is roughly 50%. There is also another channel with a higher rate, 30 per year, but at the same time a higher background contamination. ANTARES has three different channels: two with different energy thresholds, a very high-energy one reducing the rate to six per year and another one with 12 per year, and a third one where they combine high-energy neutrinos with a local galaxy catalog, so they can lower the energy threshold, which also gives them 12 per year. ANTARES currently sends its alerts only to its MoU partners, while IceCube broadcasts them publicly using AMON. In that case AMON basically gets the information from IceCube and sends it out through the GCN, the Gamma-ray Coordinates Network, and everyone can sign up to receive those.

The most famous example of such an event, sent out by IceCube and broadcast by AMON, is IceCube-170922A. This was a neutrino with an energy of almost 300 TeV; you see the event display here. It is a nice track-like event that allows an accurate reconstruction of the direction: in the end, the 90% error contour on the sky covered roughly one square degree. It was broadcast by AMON, and as you all know, Fermi then found an already known gamma-ray source in spatial coincidence with the neutrino that was also flaring at the time the neutrino arrived. I already showed this gamma-ray light curve, which nicely shows the large flare in temporal coincidence with the high-energy neutrino. In addition, ground-based Cherenkov telescopes followed up on this, and MAGIC was the first to announce the detection of this source, TXS 0506+056, in very high-energy gamma rays. You can see nicely in the significance map that it is really spatially coincident with the high-energy neutrino. So this is really cool: if we believe this connection of the neutrino to the gamma-ray source, which is found at the three-sigma level, then it means that the source has to accelerate protons to several PeV in order to produce this high-energy neutrino. So we have now covered blazar flares in our little sketch here.

But let us take a more detailed look at how this detection actually happened. IceCube announced the high-energy neutrino through AMON, and this is what the GCN notice looks like. It is a machine-readable notice, issued 33 seconds after the neutrino detection, and it contains the crucial information: the right ascension, the declination, the error radius, and the discovery time. So everyone with a robotic telescope could just sign up for these and automatically point the telescope in that direction.
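As a sketch of what such a robotic subscriber could look like, assuming the pygcn package: the notice-type constants for the IceCube streams and the exact layout of the VOEvent payload should be checked against the current GCN documentation, so treat the field access below as illustrative rather than definitive.

```python
# Minimal GCN/TAN listener for machine-readable IceCube notices (sketch).
# Assumes the pygcn package; verify notice-type names and payload structure
# against the GCN documentation before relying on this.

import gcn
import gcn.handlers
import gcn.notice_types

@gcn.handlers.include_notice_types(
    gcn.notice_types.ICECUBE_ASTROTRACK_GOLD,
    gcn.notice_types.ICECUBE_ASTROTRACK_BRONZE,
)
def handle(payload, root):
    # Position and error radius from the VOEvent WhereWhen block (illustrative)
    pos = root.find(".//Position2D")
    ra = float(pos.find(".//C1").text)
    dec = float(pos.find(".//C2").text)
    err = float(pos.find(".//Error2Radius").text)
    print(f"IceCube alert: RA={ra:.2f} deg, Dec={dec:.2f} deg, r90={err:.2f} deg")
    # ... here one would check visibility and queue a pointing ...

gcn.listen(handler=handle)  # blocks and calls handle() for each incoming notice
```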
What happened next is that IceCube performed a more time-consuming reconstruction that takes a few hours, and once they had the result, they sent out a GCN circular. Unfortunately, this is not a machine-readable message; you see it here. The important pieces of information are the updated right ascension and declination down here, and in this case it took four hours to issue the message. The next interesting finding came from Swift in X-rays: they found nine sources in spatial coincidence with the neutrino error circle, and it took them four days to issue, again, a circular that is not machine readable. Then Fermi announced that the neutrino was coincident with the source TXS 0506+056, this time in an Astronomer's Telegram, also not a machine-readable message, and it took them six days; they state here that they find the source in a flaring state. Next came an ATel from MAGIC after 12 days, announcing the first very high-energy gamma-ray detection of the source. Here it is easy to understand why it took so long: they observed on September 28 and October 3, so they first had to collect the data to reach a five-sigma detection before they could send out the message. But the question is, for example, why did it take Fermi so long to send out the alert, and how could this be improved in the future?

There have already been some improvements to the pipeline since the detection of TXS 0506+056. For example, IceCube now also sends machine-readable messages for the updated neutrino position: there is now a circular plus a GCN notice that is machine readable with the refined position, which makes it much easier for a robotic, automated pipeline to pick up the updated information. IceCube also started to check for coincidences with cataloged Fermi sources, and this information is included in the GCN circular. On the Fermi side there have also been several improvements and automation of the follow-up pipeline. What is especially interesting is that they have a light curve repository under development that would keep light curves of all the known Fermi sources and update them regularly. In that case, if we find another neutrino from a flaring source, it would be very easy to answer the question of how likely this is to be a chance coincidence, because all the light curves would already be available. So that is a nice example of how things have improved after the detection of the TXS source.

Coming back to using neutrinos as a trigger: so far we looked at the very high-energy neutrinos that one can use, but there is also a way to go to lower energies by looking for spatial or temporal clusters of neutrinos, which suppresses the isotropic background of atmospheric events. Both IceCube and ANTARES perform such neutrino cluster searches. IceCube, for example, has one that is focused on very short transients and looks for neutrino multiplets within only 100 seconds, and a complementary search that looks for clusters on all timescales up to 180 days, which targets sources that are variable or flaring on longer timescales. ANTARES is also looking for short neutrino flares. The results of those searches are currently sent through the GCN network to private partners; AMON is not involved in this broadcasting at the moment.
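To illustrate the idea behind such multiplet searches, here is a very stripped-down sketch of the time-clustering step only; the real searches of course also require the events to cluster spatially and convert each multiplet into a significance against the atmospheric background.

```python
# Sliding-window multiplet search (time clustering only, as a sketch).
# Flags every event that has at least min_events events (including itself)
# within the following `window` seconds; overlapping multiplets are all reported.

def find_multiplets(times, window=100.0, min_events=2):
    """times: event times in seconds; returns a list of (window_start, n_events)."""
    times = sorted(times)
    multiplets = []
    j = 0
    for i, t0 in enumerate(times):
        while j < len(times) and times[j] - t0 <= window:
            j += 1
        if j - i >= min_events:
            multiplets.append((t0, j - i))
    return multiplets

# Three events within 100 s (like the IceCube triplet), plus two isolated events
print(find_multiplets([0.0, 40.0, 90.0, 5000.0, 20000.0]))
# -> [(0.0, 3), (40.0, 2)]
```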
The search for short neutrino flares in particular targets the class of gamma-ray bursts, or the engine-driven supernovae that are related to gamma-ray bursts. This is interesting for CTA as well, because obviously we all know that GRBs are also very high-energy gamma-ray sources. I already showed this plot at the beginning, of the program run by ANTARES where they automatically point optical telescopes in the direction of interesting neutrino events. For a few of the interesting neutrino events that they sent out, they really managed to be on source within only 20 seconds; in that case they would be able to catch 95% of the GRB afterglows. If you wait too long with your follow-up, especially if you only have a small telescope available, you risk missing the afterglow emission of the GRB. So it is really key to point your telescope quickly, and of course the first step is to communicate the neutrino information quickly to be able to trigger those observations.

Another example is the first and only neutrino triplet that IceCube found: three neutrinos that arrived within 100 seconds. You see the neutrinos here in blue, green, and red, and the combined direction is this black circle. There was a large follow-up campaign: Swift-XRT did several tilings to cover the neutrino error circle, VERITAS also observed, and some of the observations happened quickly, within 24 hours. Unfortunately, nothing was found, but I am showing you the upper limits from the various instruments here, and some later-time observations within 14 days are shown here as well. What we can do with these upper limits is, as a first step, to disfavor a GRB scenario, because the rapid follow-up in X-rays, for example, found no interesting source. But the later-time observations are also important, to look at the engine-driven supernova case. What you see here: at zero you have the arrival time of the neutrino triplet, and then optical telescopes monitored the position for up to 30 days. This allows us to probe the scenario of a choked-jet supernova. The dashed line is basically a template supernova light curve placed at different distances. If it had been very close, we would have seen it for sure, but even out to a redshift of about 0.05 we would have seen such a supernova. So we can really exclude that a nearby supernova produced those neutrinos. Here, too, quick observations are important, and continued monitoring of the position can be important to probe the different science cases. So it is very important to define ahead of time what you are looking for, so that you can design your follow-up strategy to probe that given science case.

There are other sources that are also potentially interesting for CTA. I am showing here some predictions for such trans-relativistic shock breakouts, or low-luminosity GRBs. This is the neutrino expectation for a source at 10 megaparsecs, and this is the expected gamma-ray flux for a source at 10 megaparsecs or at 100 megaparsecs. The expectation is that CTA could see such a source at 100 megaparsecs with about half an hour of observations.
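A small aside on the supernova-template comparison used in the triplet follow-up: placing a template light curve at different distances is essentially just a distance-modulus rescaling. A minimal sketch, with a made-up peak absolute magnitude and ignoring K-corrections and extinction:

```python
# Apparent peak magnitude of a supernova template at different redshifts, using
# the distance modulus m = M + 5*log10(d_L / 10 pc). The absolute magnitude is
# a placeholder, not the template used in the actual IceCube follow-up analysis.

import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

M_peak = -17.5  # placeholder absolute peak magnitude

for z in (0.01, 0.05, 0.15):
    d_pc = Planck18.luminosity_distance(z).to(u.pc).value
    m_peak = M_peak + 5 * np.log10(d_pc / 10.0)
    print(f"z = {z:.2f}: peak apparent magnitude ~ {m_peak:.1f}")
```

Comparing such apparent magnitudes to the survey upper limits is what lets you say out to which redshift a supernova of that type would have been detected.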
What we have not covered yet is the type of supernova that explodes in dense circumstellar material, and here I just want to show some predictions from Kohta Murase. This is the expected gamma-ray light curve from an interacting supernova: you basically expect it to be bright for hundreds of days. Here you see the neutrino and gamma-ray spectra you expect; of course it all depends on the model parameters, but the gamma rays are shown here, and unfortunately there is this high-energy cutoff. Therefore, if we want to see something like this with CTA, it is important that we manage to go to low energies. And here we see what CTA or Fermi could do, depending on what kind of interacting supernova we see: Kohta looked at two different template supernovae, and depending on the distance, such a source could actually be seen in 50 hours of observation with CTA, especially if we manage to go to low energies. As you can see here, these interacting supernovae are expected to shine on much longer timescales than the choked-jet supernovae, so one would really have to observe on timescales of months to see the expected gamma-ray emission in this case.

So how do we know that a nearby supernova has exploded? You could of course go and read all the astronomical telegrams every day, and then maybe you would find out, but there is a much more convenient way: you can sign up for alerts from the Transient Name Server. This was started by the optical and UV community, especially the supernova community. Every time someone finds a supernova, it is reported to the Transient Name Server, and you can also upload spectral information there. Other people can then sign up for alerts from the Transient Name Server; for example, robotic follow-up facilities could immediately get information from optical surveys and then perform follow-up at a different wavelength, or spectroscopic follow-up. Recently, radio surveys have joined as well and report their FRB findings to the Transient Name Server, and there is some discussion about also including gravitational waves and higher-energy surveys.

Let me show you some examples. Here I used the Transient Name Server to search for events of type SN Ic-BL, that is, stripped-envelope supernovae with broad lines; those are the most promising candidates for hosting a choked jet. You see there are a bunch of them, and you get basic information: the name, the coordinates, who reported it. What I found interesting is that if you scroll a little to the right, you see the data that were used to discover each object and who sent the information to the Transient Name Server, and you find things like the ATLAS bot. So this is using ATLAS data, an optical survey instrument, and it automatically decides that this is something interesting, automatically selects the candidate, and sends the information to the Transient Name Server. Other people can then follow up; the classification probably came later, after someone picked this up as interesting, took a spectrum of the source, and added that information here. ALeRCE is a similar bot. On the other hand, you can also add something by hand: this one was added by Stanek, that is Kris Stanek at Ohio State University, who probably found something interesting in ASAS-SN data and then added the information by hand. But most of the sources nowadays are actually added by machines.
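If you want to pick out a specific class like these Ic-BL supernovae programmatically, one simple option is to filter one of the CSV exports you can download from the Transient Name Server (there is also an API). The column names below are guesses for illustration; check them against the file you actually download.

```python
# Filter a Transient Name Server CSV export for recent broad-line Ic supernovae.
# Column names are illustrative guesses; adapt them to the actual export format.

import csv
from datetime import datetime, timedelta, timezone

cutoff = datetime.now(timezone.utc) - timedelta(days=30)

with open("tns_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["Obj. Type"] != "SN Ic-BL":
            continue
        discovered = datetime.fromisoformat(row["Discovery Date (UT)"])
        if discovered.replace(tzinfo=timezone.utc) >= cutoff:
            print(row["Name"], row["RA"], row["DEC"], row["Reporting Group/s"])
```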
One of the machines communicating with the Transient Name Server is AMPEL, which also serves as a multi-messenger network. AMPEL started as a so-called broker for the Zwicky Transient Facility, the optical survey instrument that streams, in real time, all the candidates for variable or transient sources that it finds. AMPEL receives those and, on top of that, provides a framework to host user-contributed code: you can submit your own code that performs some smart selection on the transients. It also provides provenance tracking, because you can run the identical framework in real time, on archival data, and on simulated data streams, which is very powerful for studying the effect of your selection criteria and so on. And all of the code is open source.

A few more technical details: the way this is set up is that users design their own analysis schema, which can request so-called units that are already provided or can be added by the user. The units are pieces of analysis code that can be very simple or very complex. For example, you can do alert filtering, you can match against existing catalogs, you can get information about the spectral energy distribution, you can automatically schedule spectroscopic follow-up, and you can distribute alerts to different destinations, for example to Slack or by email, whatever you want. So this really provides a framework to efficiently distribute and co-develop multi-messenger software, and I think it is a nice example of how to define a complex science case and then perform a targeted search for sources defined by that science case. In the future this could of course also trigger CTA.

I will go through a few examples of what you can actually do, or what we are already doing, with AMPEL. We use it, for example, to search for neutrino counterparts. The pipeline is illustrated here: we receive a high-energy neutrino alert, the one that is broadcast by AMON; then we automatically schedule observations with ZTF to point at that position in the sky; then we make a selection of potentially interesting candidates in the data we have collected with ZTF, rejecting things we are not interested in, such as stars, artifacts, and asteroids. After that step we typically end up with a handful of candidates, and now we can perform follow-up observations with other instruments that have a smaller field of view and are more limited in how many observations they can make, for example Swift in X-rays, spectroscopic follow-up, or radio observations. Once we do that, we can actually classify the source candidates found by ZTF and select those that are interesting neutrino source candidates.

One source class we cover with this is tidal disruption events, and we found one really interesting candidate here: the tidal disruption event AT2019dsg. Because no one can remember those numbers, the ZTF black hole group calls all the TDE candidates after Game of Thrones characters, so this one is Bran Stark, much easier to remember. We found that Bran Stark was in spatial coincidence with a high-energy neutrino that was sent out as an AMON alert. You see the optical light curve that ZTF had recorded for the source here, and the neutrino arrived roughly 150 days after the source peaked in the optical. We calculated the probability that such a coincidence with a similarly bright TDE would happen just by accident and find that it is only 0.2%, and that already includes trials.
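The basic spatial-coincidence check behind a statement like this is straightforward; here is a minimal sketch with astropy, using made-up coordinates and approximating the 90% region as a circle (the real localizations are not circular, and the temporal and brightness information matter just as much).

```python
# Which optical candidates fall inside a neutrino's 90% error region?
# Coordinates and radius below are placeholders, and the region is treated
# as a simple circle for illustration.

import astropy.units as u
from astropy.coordinates import SkyCoord

neutrino = SkyCoord(ra=314.1 * u.deg, dec=12.9 * u.deg)  # placeholder best-fit position
r90 = 1.0 * u.deg                                        # placeholder error radius

candidates = {
    "candidate-1": SkyCoord(ra=314.3 * u.deg, dec=12.5 * u.deg),
    "candidate-2": SkyCoord(ra=300.0 * u.deg, dec=-5.0 * u.deg),
}

for name, pos in candidates.items():
    sep = pos.separation(neutrino)
    print(f"{name}: separation {sep.deg:.2f} deg, inside 90% region: {sep < r90}")
```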
So this is very interesting, and it gets even more interesting: with the same program in AMPEL, we found a second TDE candidate coincident with another high-energy neutrino. This one is called Tywin, and here you see a comparison of the light curves, Bran Stark down here and Tywin shown in red. This is luminosity, so intrinsically Tywin is even much brighter than Bran Stark, which maybe tells us that TDEs are emerging as a new neutrino source class. So stay tuned for more information on this source later this year.

Gamma rays are also expected from TDEs, and again I am showing some models from Kohta Murase, who seems to be the one doing all the predictions. For TDEs he has three different models for how the neutrinos could be produced, and all of them also come with a prediction for gamma rays: the neutrinos could be produced in the corona around the black hole, in a radiatively inefficient accretion flow, so somehow from the accretion disk, or there is also a hidden-wind model explaining the neutrinos. Here you can see the prediction for neutrinos and the prediction for gamma rays, so potentially these could also be interesting gamma-ray sources.

Another example is to use AMPEL to monitor optical light curves and then trigger very high-energy gamma-ray observations. This is work by Mireya, who is using AMPEL to trigger VERITAS. You see an example here: she is looking at optical light curves, ZTF light curves, for a predefined list of gamma-ray sources, and when she finds an interesting flare, VERITAS is triggered. The tool, which is public and you can look at it here, already provides the observability of the source of interest. So it is a really nice way to look out for optical flares to trigger very high-energy observations, and again a nice example of how you need expertise from people in both communities to set up something like this.

The final example I want to show is using AMPEL to search for kilonovae with ZTF, so now covering the case of compact object mergers. As you all know, the footprints of the gravitational wave candidates are huge, so it is really hard to perform follow-up. ZTF has a very large field of view; each of the little squares you see is one pointing of the instrument, so you can cover large areas, although usually not the whole footprint. And then they have a list of selection criteria, because when you cover so much of the sky you will always find something variable or transient, so you have to be very careful about which sources you select for further follow-up. I will not go through all the selections they do, but as examples: a candidate has to be far away from any bright source, because bright sources can cause artifacts; it is not supposed to be moving, to get rid of asteroids, so you look at two consecutive observations and make sure the source is not moving; and it should not have any detection history, because you really want to see something that is rising and was not detected in the past, which is what you expect from a kilonova. This also allows you to go back to archival data and study the background that you expect, and in the end you can use this to estimate limits on the kilonova rate from the upper limits derived by ZTF.
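Schematically, the selection just described boils down to a handful of cuts per candidate. This is only a sketch with invented field names, not the actual ZTF pipeline code:

```python
# Sketch of kilonova candidate cuts inside a gravitational-wave skymap.
# Field names are invented for illustration.

def passes_kilonova_cuts(cand: dict) -> bool:
    return (
        cand["in_gw_90_percent_region"]                  # inside the GW localization
        and cand["distance_to_bright_star_arcsec"] > 20  # far from bright stars (artifacts)
        and not cand["is_moving"]                        # rejects asteroids
        and cand["n_previous_detections"] == 0           # no history: a new, rising source
        and cand["is_rising"]                            # brightness increasing
    )

example = {
    "in_gw_90_percent_region": True,
    "distance_to_bright_star_arcsec": 45.0,
    "is_moving": False,
    "n_previous_detections": 0,
    "is_rising": True,
}
print(passes_kilonova_cuts(example))  # True
```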
Here is one example of a source that was selected: it looked potentially interesting, a rising source, but after spectroscopic follow-up it turned out to be a Type II supernova and not a kilonova. Kilonovae are also potentially interesting for very high-energy gamma-ray emission. This is an example of H.E.S.S. observations of the detected kilonova, even at late times, and this observation actually allowed constraints on the magnetic field in the remnant, which is a cool application even if you do not see anything from the source.

And that is the last thing I want to show: especially for these gravitational wave events with huge sky areas, it is really hard to cover all of the area, and there should be some coordination between the different telescopes, so that not everyone looks at the same position but people spread out and work together to cover the whole area. For this, people have developed the so-called Treasure Map, a way to share where telescopes have pointed. This is just one example for a gravitational wave event, and all these different colored dots and little squares show where the different telescopes have pointed. You can look at this and then decide where you want to point your own telescope. So that is a useful development to coordinate observations.

There are some other frameworks that I did not mention but that could also be useful in the future. There is 4 Pi Sky, an open-source software package built for rapid, automated reporting of and response to astronomical transients, developed mostly by the radio community. There is SCiMMA, a U.S. effort that stands for Scalable Cyberinfrastructure to support Multi-Messenger Astrophysics; the goal here is also distributed data handling, computing, and analysis, and a platform where people can develop code together. And I want to mention the Time-domain Astronomy Coordination Hub, TACH, a follow-up to GCN that is funded by NASA and will use the HEASARC databases. Those are programs under development right now that I did not talk about today.

And then I finish with my summary. I think we are still at the beginning of the era of multi-messenger astronomy. We have had a few detections and we learned a lot from them. One thing we learned is that many of the potential sources are transient or variable, so we really need fast, coordinated observations to get the most out of a source and learn as much as we can. Some of the source classes I talked about have not been observed yet in TeV gamma rays, but some of them have good prospects to be seen by CTA, so I think it will be very exciting for CTA to join this effort of multi-messenger astronomy. I talked about different networks to trigger and coordinate observations; they already exist, and some of them are still under development. They allow us to combine observations at a high level, at the catalog level for example, but if you really want to go deeper and test predefined, complex science cases, you need the expertise from all the involved experiments. That is one approach: to test a given science-case scenario, rather than just correlating everything with everything. The alternative approach is to correlate everything with everything and try to find something unexpected. I hope that with the two examples I showed, AMON and AMPEL, I could give you a good overview of what is happening right now and what will be possible in the future.
Thank you for your attention.