Alright, Tim here from StreamNative. I'm a developer advocate and my main technology is Apache Pulsar — you saw it there up in lights today. My talk here at the Open Source Summit is "Hail Hydrate! From Stream to Lake" using open source. We're using Apache tools today, including Apache Pulsar, Apache NiFi, Apache Flink, and maybe some others along the way. I'm a developer advocate at StreamNative, and we support open source Apache Pulsar. If you want to see more content from me — source code, articles, how-to examples — go to one of my websites or to my GitHub; there's lots of material there. I'll make sure all the slides and all the examples, everything you see today, will be available for you to explore, build on, and fork if you want. Let's interact on open source. There's a little more information there so you can find slides and other materials. Hopefully that's enough.

Let me do a quick agenda so you get an idea of what we're going to do today. We have about 45 minutes or so, so we'll try to maximize what we can get out of it. I'll be on Slack, I'll be in different areas within the conference. I wish I was there in person, but today I'm here virtually — please reach out. I'm definitely interested to talk to you about the different ways to populate a data lake with open source, along with a lot of other things that you can do pretty straightforwardly.

So our use case today: how do I get data into a data lake, or anywhere — whether it's a lakehouse, just an S3 bucket, HBase, Kudu, Aerospike, CockroachDB, any database you have, any data lake, anything in the cloud or on premise, wherever it may be. That's what I'm doing here today. There are a number of challenges there, and we'll go through them, their impact, and a solution; what's our final outcome, and why did we pick a NiFi and Pulsar example. Then the architecture, and we'll walk through a quick demo. Use cases: IoT ingestion is the one I'll show you, plus some other formats of data. It really doesn't matter when you use the right open source tools — everything's pretty straightforward.

But right now we're starting off on our grand adventure, and it looks like a difficult problem. How do I get all this data? We have high volume, lots of different sources, different formats, protocols, different vendors and devices — lots of challenges here. Sometimes it seems like too much data to go through whatever channels you have, but relax, we have this; it will be pretty straightforward.

So data ingestion is hard. You've got a lot of data, again in different formats, maybe arriving very fast. You want to be able to work on this data continuously in real time; despite its speed, you need as close to real time as you can get. And you want to be able to see what's going on. If something's not working, if something doesn't look like the data you expect, you want to catch it in real time and be able to consume it without any problems.

Now, what impact does this have on you as a developer, your business, your company, your project? Most people solve this by having a ton of different scripts. Maybe I've got a little Python over there, a shell script over here, some Java, some Scala, some Ruby, some Go — a ton of things all over the place. Maybe a cron job, maybe something's running in Google Cloud, something's in Amazon, maybe you've got some stateless function over here using a proprietary tool over there, maybe some legacy ETL. Code is everywhere; code sprawl is a real issue. It's really hard to know what's running when, how do I change something, how do I keep things in sync. Not easy. And this is not just time, it's money.
So the more you have to do that, the more you have to develop and maintain all these different scripts. Someone added a new source of data, someone added a field — that's cost for development. You've got to maintain it, you've got a lot of tools, you may have to pay for support. It's hard to find people who really know this technology. You may be waiting for new contractors, and there may be delays while people learn yet another tool or a different package or language or protocol. A lot of money there.

Obviously this takes time, and it decreases agility. The people who want this data don't care how you're getting it. They want to see it in their application, they want to see it in a dashboard. If a data scientist wants to see it in their notebook, you need that as fast as possible; this delays the rest of the project. If I don't have my data, you're losing revenue, you're losing opportunities. Big deal here.

So how do we do this? What's the solution? Apache NiFi is a good tool for getting all those different data sources, formats, protocols, and devices together. NiFi is a piece of the puzzle; Apache Pulsar, with its functions and different adapters, makes some of this even easier. I can use MoP (MQTT on Pulsar) to natively start streaming in from all those devices that can push to MQTT. I'm ingesting a lot of data of different types, and that's all supported — NiFi makes it very easy. I'll show you XML, I'll show you JSON; it doesn't really matter to NiFi or the other open source tools. NiFi gives you provenance so I can see everything, and it makes it very easy to add new applications regardless of the use case, with much shorter time frames. You don't have to wait for data. Data analysts and data scientists can just start innovating with that data; they don't have to wait around for months or years.

You're saving money as well, not just time. You don't have to worry about using consultants for too long. Ingest is simplified, faster, and more agile. We reduce the time to onboard a new data source — it doesn't take weeks or days, you could probably get it done in hours. That means more data and more streams into your data warehouses, regardless of where they're running.

I recommend using what I call the FLiP stack. It's great for cloud data engineers, especially if you're doing machine learning or anything around data lakes, lakehouses, databases, data warehouses, whatever it is. We support a ton of different users, frameworks, languages, clouds, data sources, and clusters, wherever you're doing it. The people who are going to use these tools are cloud data engineers: they probably know some ETL and ELT, a little Python and Java, are definitely doing some SQL, have maybe done a little streaming, and had better know the basic cloud tools. That's a hefty order; that is not simple. So these are very important people.

Also, we've got cats out there — you might see them in my presentation. They're your typical user: they don't have a lot of coding skills, they're always questioning how much money I'm spending on all this, they are definitely experts in being lazy, and they often show up on my edge cameras. You'll see that.

Also, the ever-present AI. Sometimes I need to run machine learning or deep learning as part of this ingest to all these different clouds, because I may have to clean up the data while it's happening, or I'm trying to process it to determine what I should do with that data and where it goes. Lots of different things here with machine learning, and I can run that in Apache NiFi, I can run it in Pulsar Functions, it can run in Function Mesh, I can run it in Flink, or in regular Pulsar clients and microservices.
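Just to make that MoP piece from a minute ago a little more concrete, here's a rough sketch of what a device-side MQTT publisher can look like — not the exact script from my demo. It's plain paho-mqtt (the 1.x API; 2.x adds a callback API version argument), and the broker host, port 1883, and topic name are placeholder assumptions for a broker that has the MQTT protocol handler enabled.

```python
# Rough sketch: publish a sensor reading over plain MQTT to a Pulsar broker
# that has MoP (MQTT on Pulsar) enabled. Host, port, and topic are placeholders.
import json
import time

import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x

client = mqtt.Client(client_id="demo-sensor")
client.connect("pulsar-broker.example.com", 1883, keepalive=60)

reading = {
    "device": "demo-sensor",
    "humidity": 48.2,
    "temperature_c": 22.7,
    "ts": int(time.time()),
}

# MoP maps the MQTT topic onto a Pulsar topic behind the scenes;
# the client itself knows nothing about Pulsar.
client.publish("iot-readings", json.dumps(reading), qos=1)
client.disconnect()
```

The whole point is that there's nothing Pulsar-specific in the client; the broker just happens to be Pulsar.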
Again, there are lots of different places to run that processing; I try to minimize it to just a few key areas that can be shared and distributed across some big clusters out there. StreamNative makes this possible, regardless of where you're running these apps. Again, they can be data pipelines like this — the most common use case here is loading a data lake — and all of those will be using our compute layer.

What's nice with Apache Pulsar versus some other streaming and messaging technologies is that compute and storage are separate, something pretty common in the cloud. So I can have my storage increase without having to add more compute, or vice versa: if I need support for additional clients, I just add some more compute nodes, and I don't need to add storage I might not need. And what makes the storage even better is that Apache BookKeeper can send that data to tiered storage. You don't have to write special scripts, and you don't have to worry about what happens if you run out of space — that's all managed for you, and it can go into S3 buckets or wherever you have your cloud storage.

Where can I run this? Very easily cloud natively on Kubernetes, like most people like to run their apps these days, and also on all the major clouds, as you'd expect. StreamNative will run in your data center as well, or in any of your clouds, natively. That makes it very easy — a little simpler than wiring up all these open source projects on your own, which I'm doing today for the demos, and which you could certainly do as a developer learning the product. But when you're ready to scale out in production, you need things like geo-replication and backups and management, and you probably want a host like StreamNative to help you.

Again, we mentioned that FLiP stack; FLiP is also the name for Flink integrated with Pulsar. There's a bunch of great connectors that StreamNative has put out into the open source to make it very easy for Flink and Pulsar to be friends, and we'll see that today.

We talked about a lot of different open source technologies; one of them is NiFi. If you look at NiFi, you might see my name out there — I really love this project and have been working with it for five or six years. You see I'm even wearing the t-shirt out there in Italy. I love it. I've got one of the devices that's streaming data for us today. Why NiFi? It easily starts off on your laptop while you learn and do development, and then you can scale out as big as you need to be. It scales very linearly — just keep adding nodes as big as you need to get — and you can do some real-time streaming. It's great for acting as a kind of universal gateway: collect your data, curate it, analyze it, and start working on it right away. Maybe convert it, maybe limit it, send out alerts, send it to different things — I'll show you a bunch of examples. What's nice is there are lots of different sources and sinks, so you can mix and match what you're doing here without writing code; you connect things together in a very intelligent way. Underneath that you've got the ability to guarantee delivery, you've got provenance and lineage so you can see what's going on, and lots of different data sources are supported. It's great if you need to split or filter things; it's a really nice way to do it. Again, one good piece of the puzzle.

The other piece — probably the most flexible way to do messaging and streaming — is Pulsar, which is a cloud-native platform for distributed messaging and streaming. Those are two different things; people tend to think of messaging.
Remember the old-style messaging with JMS, MQ Series, all those types of things? Those are still relevant patterns. And then you've got the Kafka-style event streaming. Both of those are important, and Pulsar supports both, so you don't have to run multiple different kinds of messaging systems to achieve what you need.

Here are some of the details that are important. You're able to do the pub/sub that most people like to do. It's geo-replicated, regardless of how many cloud availability zones you have. Being able to have functions, you can do some of that processing before the events or messages reach their final destination, which makes it very powerful — I highly recommend people do this; if you haven't been doing that already, please take a look. There's multi-tenancy, more so than in other places, because I've got namespaces and the ability to break down all my topics into completely isolated areas. That makes it easy to support multiple lines of business or multiple companies on the same platform — very powerful there. We mentioned tiered storage, so we can make every topic persistent, or have it not be persistent; again, that's flexibility you don't see in other messaging systems. There's full support for different connectors, so I can stream that data right into Aerospike, into a JDBC data store, into ClickHouse, into CockroachDB, into MongoDB if it needs to go there, very easily. Full REST APIs, so you can do any DevOps you need to do, or even have NiFi make those calls for you. Full command-line interfaces, so you can do everything from looking at the data to starting and stopping things — all the things you expect, and all of it can be automated with your various DevOps tools. And as many clients as you could possibly think of are supported, whether it's Python, .NET, Java, Go — they're all out there, with very clean, easy APIs. A nice way to do it.

I'm not going to get deep into the different subscription types, but as opposed to some other systems, there are multiple types that let you decide how you want to process things. Maybe you want to move messages through as fast as possible, so I could have different consumers out there each getting their own set of messages: if I have 20 consumers, I could be doing things with 20x parallelism. That might be a subscription type that's valuable to you, and we've got that right there.

I mentioned the different protocol support. Obviously we support native Pulsar, but sometimes you have existing apps — especially in IoT they're doing MQTT, or you might be doing AMQP, JMS, Kafka. You don't have to rewrite all those clients. As you add new ones you can make them native Pulsar, but for now, have them speak their native MQTT protocol and have Pulsar act as the broker; it just comes in and you don't have to worry about it. I'll show you a couple of different protocols, and it doesn't add any complexity to what you're doing. It's very nice.

Let's go here. My example is: I've got a device over here, I've also got a weather API call, and I've got another REST call, all coming into NiFi for the initial cleanup and preparation, and then into Pulsar — whether that's through MQTT, native Pulsar, or Kafka, however it gets there. Flink is going to do some SQL fun on that data as the events come in, and then it's available in your data lake for whoever needs to query it, make dashboards, apps, whatever you're doing.
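To give you a feel for what that shared-subscription pattern from a moment ago looks like in code, here's a minimal sketch with the Python pulsar-client; the service URL, topic, and subscription name are placeholders, not the exact names from my demo.

```python
# Minimal sketch of a Shared subscription with the Python pulsar-client.
# Run several copies of this and Pulsar spreads messages across them --
# the "20 consumers, 20x parallel" idea. Names below are placeholders.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")
consumer = client.subscribe(
    "persistent://public/default/iot-readings",
    subscription_name="iot-shared",
    consumer_type=pulsar.ConsumerType.Shared,
)

try:
    while True:
        msg = consumer.receive()
        print(msg.message_id(), msg.data())
        consumer.acknowledge(msg)
finally:
    client.close()
```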
A little more detail on that: whether I'm pushing to MQTT or whatever protocol you like, it natively goes to Pulsar very easily. Here's the data coming off this device — pretty straightforward JSON, and we'll show you how easy it is to work with. This is one of my cats. We'll also do weather data; what's interesting with the weather data is that it comes in as a zip file of a lot of XML files, and we'll show you how easy that is to parse.

Please connect with me and learn more about Pulsar. I've become extremely excited about it. It's one of the most interesting new projects out there in Apache — you're just starting to see it everywhere. It makes things a lot easier and solves some of the problems people have been having with streaming. I've got links to a bunch of different source code and examples, so please reach out to me. But now that we've done the slides — everyone needs to see some slides — let's show some real code. This is my favorite part, which is why I get through the slides as quickly as I can. I wanted to set up those basics and give you something to return to later; if you want a refresher, it's a good reference document to look at. Finally, the fun part: let's look at live code.

Now, this is Apache NiFi. This is consuming from Pulsar and consuming from Kafka. Under the covers, that's the same Pulsar cluster — I just have both protocols enabled, to show you how easy that is. I'm going to start data coming in from my device. I'm logged into my IoT device here, just to show you the Python code — really basic, but it's grabbing some sensors, doing a little formatting and a few calculations, and sending the data to two different Pulsar topics, one through MQTT and one through Kafka. Again, it's using the native libraries; there's nothing Pulsar-specific there. It's just sending Kafka messages and producing MQTT messages. The only difference is that this cluster here is a Pulsar cluster, and it's listening on these two protocols and more. Here's that running.

I just want to show you some of the command line. Here are the different topics on that cluster, and you'll see we're going to be using a couple of them. You saw one of them was pushing to this one, and one of them was pushing to Kafka; behind the scenes, these are the Pulsar topics for them.

I'm going to take a look back in NiFi, and we see some data coming in because I started that IoT process again. You can see the latest data here, and we can see lots of the details. This is that lineage, this is the provenance information that lets you know what's going on with every event that comes in — the lineage ID, the size, where it is. Here you can see where it came from, right down to the topic, a bunch of different attributes for one message, and we can even dive into the content to see it. I could store this somewhere if I want a record of everything that came in. Just to give you an idea of the different sensor readings coming in.

So those were consumed from Pulsar. I'm going to treat it as JSON and convert it into JSON. I could have decided to convert it into another format, which is another thing NiFi makes really easy — maybe Avro, maybe comma-separated values for some of my data scientists, maybe a plain text format, maybe Parquet, maybe XML, whatever it happens to be. Here I'm writing a query on that data in real time: I just want a couple of the fields if the humidity is over 45%. Another one just gives me all the data.
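As a rough sketch of the Kafka half of that device script — not the exact code — this is about all it takes with kafka-python. The bootstrap server and topic are placeholders, and the only assumption is that the Pulsar broker has the Kafka protocol handler (KoP) listening on that port.

```python
# Rough sketch: a plain kafka-python producer pointed at a Pulsar broker
# with the Kafka protocol handler (KoP) enabled. Host and topic are placeholders.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

reading = {"device": "demo-sensor", "humidity": 51.3, "temperature_c": 23.1}

# Nothing here knows it is talking to Pulsar; it is just the Kafka protocol.
producer.send("iot-readings-kafka", reading)
producer.flush()
producer.close()
```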
Back in NiFi, these queues in between each step are so I can stop that data in real time. I've got a configurable queue with built-in back pressure, built-in load-balancing strategies, and the ability to prioritize different events. Really powerful, and again, all graphical. We can see that a bunch of the data has come through as alerts, and some has come through as standard data.

And here I'm just going to send it to my cloud database. Just to give you an idea of what's going on, it looks pretty familiar — it is that same data. Here I'm just going to start pushing it to a cloud data store. Pretty easy, not that hard. We just wrote 78 records there, and I can see the results — what happened and where that data went. What's cool is, if you look at this configuration for writing to a Postgres database in the cloud — which could have been ClickHouse, could have been any JDBC store — I said it was an insert, I said it was JSON, and there's the table name. That's it. There are no other difficulties; that's how easy it is to send that data to the cloud. I didn't have to worry about anything complex, really. So we're writing to that cloud database. Again, it could be Dremio, it could be Databricks Delta Lake, a ton of different databases I could be writing to, in any cloud, anywhere that happens to be.

Let's take a look right here. I pushed that into a PostgreSQL database, and you can see these are the recent records that came in. This is a regular database; I can bring this into a dashboard or into anything that reads from databases. Pretty straightforward. That's the entire process to take a stream from various devices, validate and clean it, and send it up to a cloud data lake, ready for use. That's it.

Now, I said we can also use other protocols. Here's Kafka. Again, this is my laptop, the same Pulsar cluster, using the native Kafka port and the Kafka protocol, reading from the topic we were pushing into. If we look at our app here, here is that Kafka topic — it's our Pulsar topic. I'm saying I want to read from the earliest offset, and there's my group ID. Standard Kafka; I didn't have to change anything. Data comes in, and it doesn't matter that it was in Pulsar. I can see the different attributes that came in — Kafka offsets, keys, all the things you expect in Kafka. It's just there.

There's another app — that's one IoT app running. I can also get data from a REST endpoint; this one is for stocks. What's interesting is that it comes in a kind of weird format, and I want to break it out. It gives me one big JSON file with a bunch of header stuff I don't care about, and then a bunch of values over time. I just pull out those values and convert them into a schema that I like. Then I add a unique ID, a timestamp, and a date and time, and get it ready. I want to work on one record at a time — NiFi would let me work on thousands at a time if I wanted, but to keep the demo from running too fast to see, where you click a button and all of a sudden 50,000 records are gone, I put a rate control here so we just do 20 a second. Just so you can see data coming through; it makes it look a little cleaner. Here I'm pushing those records into Pulsar, and you can see them coming out. No failures here, and I can retry if something's down. Again, pretty straightforward.
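Going back to that Kafka-protocol read for a second: outside of NiFi, a standalone consumer with the same settings — earliest offset, a consumer group — is roughly this. The host, topic, and group ID are placeholders.

```python
# Rough sketch: consume the same Pulsar topic over the Kafka protocol,
# matching the settings described above (earliest offset, a group id).
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "iot-readings-kafka",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    # Offsets, keys, and partitions come through just as they would from Kafka.
    print(record.offset, record.key, record.value)
```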
I connect to my local Pulsar — let me show you that. Here is where you'd have your login and authentication if you were connecting to something like StreamNative; here it's just on my laptop, pretty straightforward, going with all the defaults and default ports. That makes development easy for me. This is pushing to stocks, so I've got data going into the stocks Pulsar topic, just so we can see that happen. And I've got that rate controlled, so we'll just keep getting more and more data in there. Pretty straightforward, but it makes it easy for you to see.

And to show you it's not just going nowhere, I'm going to read it. Here's my consumer, and here I've got a little query. This could be anything I want — maybe just limit the data — but again, it's just to show you that it's running and coming through. So that's stocks.

So we saw IoT data, we saw REST stock data — what's next? Well, how about some weather? I like to know the weather; the weather's been pretty rough recently, we've had some flooding, we've had hurricanes, so this is something you need to keep an eye on. For this one I want to know the weather for the entire United States. Fortunately, NOAA has put all of that in one file. Now, most people would just manually go there, download this zip file, uncompress it, and end up with a ton of different files — and those are all XML. It's weird, and I don't want weird. I just want airports: I want to know the weather at every airport in the United States, and I certainly don't want XML — I'm not a fan of XML. So I take that XML, convert it to JSON, and get rid of any record with a weird location, because I want to know that this is at a real place. I want to know this is Newark Airport. So for the airports all around the country, I want to know their weather — that's important for me, especially for business travel and for different applications I have. Just those.

And here, like we did before, I'm going to do a split and a rate control so I don't run 5,000 records in a second — it doesn't make for a good demo to see all your data go through in a second, and I'm running on my laptop. I have a few hundred gigs of data here and I'm going to have a lot of deleting to do, so I didn't want to do that. So we see data coming through here, and we get all this metadata. Some of it could be really important to you, so in NiFi I can pull out every one of those fields, maybe put them in another Pulsar topic and use them for something else. Like, this was the original file name, which airport it came from, interesting data about where NOAA ran the servers — all that information is there for me, and I'm just going to push this to a weather topic. Pretty straightforward.

Now, I've documented how to do all these demos — from the NiFi side, from the Pulsar side, from MQTT — and a ton of the source code is here. How do you create the topics — really easy. How do you do a list, how do you consume data, how do you test it: I put data in, I want to get data out. Like here, I can consume some of that IoT data — pretty straightforward. To consume on that one I created a subscription, so that my command-line consumer gets just the data it wants. That was one of the IoT records, just to give you an idea. So we have that data coming in. Pretty straightforward.

Let's do the Flink part. Now I've got data going into topics and I can pull data out of topics — very helpful. Sometimes I want to do other things. Certainly I can run a Pulsar Function, so when an event arrives, something happens — maybe I put it into another topic. There are lots of different things I could do there, including, like I mentioned, machine learning or deep learning, or any kind of functions or processing you need to do.
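Before we get to Flink, here's roughly what that "local Pulsar, all defaults, push to stocks" step looks like as a standalone Python producer — a sketch, not my demo code. The commented line shows where token authentication would go if you were pointing at a hosted cluster like StreamNative; the token and the record values are placeholders.

```python
# Rough sketch: produce a stock record to a local Pulsar broker on the
# default port. The commented-out token auth is where hosted-cluster
# credentials would go; all values here are placeholders.
import json

import pulsar

client = pulsar.Client(
    "pulsar://localhost:6650",
    # authentication=pulsar.AuthenticationToken("<token-for-hosted-cluster>"),
)
producer = client.create_producer("persistent://public/default/stocks")

quote = {"symbol": "XYZ", "price": 123.45, "ts": "2021-09-03T12:00:00Z"}
producer.send(json.dumps(quote).encode("utf-8"))

client.close()
```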
Something I like to do is create a table on those event streams as they're coming in, at whatever scale I want. Fortunately Apache Flink, another great open source project, lets me do that. The syntax is really easy. I could have used the built-in catalog that looks at all the Pulsar topics and does it for me — StreamNative Cloud and Apache Pulsar have that built-in schema registry, and you can see it very easily in the cloud — but we don't need to do that. It's very useful if you want to control what that table looks like: just the fields you want, and things like when you start grabbing messages. All that information — it was JSON, and all the connector options — goes in here. So we're going to create a table and then start querying it. Pretty straightforward, but I think that's important.

I mentioned the schema registry and I don't want to gloss over it, so let me show you what a schema looks like. With StreamNative Cloud, I can see that schema. What's cool is I built this up with an app; it's pretty easy for me to define it based on the fields in my class. Again, if you want to write code, you can definitely do that, and it keeps it pretty clean. A couple of different types of schemas are supported here; this is a JSON schema, pretty straightforward. It makes it very easy to develop and see what's going on in my topics. Having a schema on them also lets me do queries — if you have a schema, you can query them. This is just a different way to see the data, and StreamNative makes it pretty easy. Again, you could do this from the command line — there are some built-in utilities with Pulsar — but having it cloud-hosted and managed, with all of that done for you through nice interfaces, makes it a little easier. Just a heads up on that.

So we're going to create a table. Let's create this data table. Let's see which version I like — let's do earliest. Let's grab it, copy it, and go into my Flink SQL client. Let me make sure I didn't create this table already. This is the table I have, and these are the different catalogs; I'm in the default one, which lets me create my own. So I created a table for weather already. Let's describe weather. Let's describe the second one, which is a different version of this table. Let's describe stocks. You can see the tables are there. Let's build a new one — just pasting that in — and make sure it was built properly. And I'm just going to do a very simple select star. Not the most exciting query, but it keeps things simple. You see I've got all the SQL syntax to create a couple of different tables here; some of them I've built already, and one I just wanted to show you. So I can do stocks, weather, all the different ones, and a couple of other things to make it a little easier for you.

So the query is running. What's cool is, as you see a line pop up, that is a new event that just made it into the topic. So as that IoT app is sending data, I can continuously get it: my SQL query just keeps running, and as a new record comes in, I get it. I'm just looking at the data and developing what I want to do in the SQL. From here I can add an insert statement and insert that into another topic. So say I want to do alerts: someone could be consuming, waiting on that topic, to do whatever they need to do. Which is pretty powerful.
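If you'd rather drive that same table definition from code instead of the SQL client, a PyFlink sketch looks roughly like this. It assumes the Pulsar Flink SQL connector jar is on the Flink classpath, and the exact connector option names ('topics', 'service-url', 'admin-url', 'scan.startup.mode') vary between connector versions, so check the docs for the release you're on; the field names are placeholders, not my actual schema.

```python
# Rough sketch: define a Flink table over a Pulsar topic from PyFlink.
# Connector option names differ across Pulsar connector versions -- treat
# the WITH clause as a placeholder and check your connector's docs.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE weather (
        station     STRING,
        location    STRING,
        temp_c      DOUBLE,
        observed_at STRING
    ) WITH (
        'connector' = 'pulsar',
        'topics' = 'persistent://public/default/weather',
        'service-url' = 'pulsar://localhost:6650',
        'admin-url' = 'http://localhost:8080',
        'format' = 'json',
        'scan.startup.mode' = 'earliest'
    )
""")

# A continuous query: new rows show up as events land in the topic.
t_env.execute_sql("SELECT station, location, temp_c FROM weather").print()
```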
Also, what you can do is have that set up in Pulsar as a sink. When I push to a Pulsar topic, I can have it connected to a sink for, say, S3, or ClickHouse, or Aerospike, or CockroachDB, or a lakehouse, wherever it may be — there are a ton of different connectors for Pulsar. So if I send an alert to that topic, it'll automatically go into that sink, which makes it really easy: another way to do this without code. I just write it here and have Flink do what it needs to do.

Let's take a look at the tables here. If we wanted to look at weather, we can look at the weather table and then figure out how to insert it somewhere else — let's just show a couple of fields. And if I come up with a query that I really like, I can take that and embed it in an app, and that app could do other kinds of processing. Again, I could do some of that processing in NiFi, I could do some of it in Pulsar Functions — lots of different options, depending on what you want to do.

So we still have that guy running. We could get some more data for weather — as you see, we already passed through all that weather data, even though that was the weather for the whole country — and we'll just add a couple more here. As you see, a couple thousand new reports are coming in, and even with the delay I purposely added, it processes them pretty fast.

I may have run out of disk space there — something that happens when you're running Pulsar, Flink, NiFi, and this whole presentation on the same machine. Sometimes it's better to run everything in the cloud and then come back and run it distributed, but you never know with networks; those are the fun things that happen when you're speaking at conferences. It's probably always best to run locally in case networks are down — even if the cloud hosting never goes down, networks do. I think I may have run out of memory here, so I'll show you that this is running locally. I'll just go to the cluster here. This is my Flink cluster — a one-node cluster on my laptop — and I'm going to restart it; hopefully I've got enough RAM on my machine. That's the syntax to run a SQL client and get access to all those different things you want to do. I'm just going to create one table, to show you again how we do it. We'll get over to our shell here, make sure the table is created, and select star from weather — or I could pick just a couple of fields; again, it's up to you what makes sense for those queries.

I might stop this other flow because it's pushing in a lot of data and we're running on the same machine. What's nice with NiFi is that I can start and stop whenever I want. This is the stock data; I haven't been doing anything with it, so maybe I'll just stop it for now and we'll focus on the weather. Again, there are only a couple of hundred records left, and even processing 20 a second it's going to be pretty quick — that's why I put that delay on there, otherwise you wouldn't see any records. As you see here, I've already pushed a lot of weather data, even with how many it's showing per page — we're on page 250 of the data, so there's already a lot in this topic. You can see airports all over the world. We'll see if we've gotten to the last one yet; I can just go in here and see all the records. Ooh, that's pretty cold — where is that? That's a little nicer weather than I have here in New Jersey. Oh, Montana. Pretty nice weather.
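Going back for a second to embedding one of those queries in an app: continuing the PyFlink sketch from before, with the same caveats, turning an ad-hoc query into a continuous insert that feeds an alert topic is one statement. The weather_alerts table, the fields, and the threshold are placeholders.

```python
# Rough sketch, continuing the PyFlink example above: turn an ad-hoc query
# into a continuous INSERT that feeds an alert topic. Assumes "weather" and
# "weather_alerts" are both Pulsar-backed tables created as shown earlier;
# names and the 20-degree threshold are placeholders.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# ... CREATE TABLE weather / weather_alerts with the Pulsar connector ...

t_env.execute_sql("""
    INSERT INTO weather_alerts
    SELECT station, location, temp_c
    FROM weather
    WHERE temp_c > 20
""").wait()
```

A Pulsar sink (JDBC, S3, and so on) or any consumer can then pick the alerts up from that topic.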
Back to the data: you can see some of these readings. There are a couple of them where they haven't gotten reports in a while — you can see those observation times. Any one that's September 3rd, that's pretty recent. And you can see the lat/long — pretty hot there in Arizona — just to give you an idea.

So maybe, now that I've examined the data, I send this to a data scientist and they tell me: Tim, I'm interested in a geofenced area, so we could do some math on the lat/long. And for every record that comes in, ignore it when the observed time is in the past — we put that in the query. Just show me where the Celsius temperature is over 20. Sure, we can do that: we put that in the SELECT, do an INSERT, and put it into another topic. Now you have your data ready to go. I'm going to move forward; that just gives you some idea of what you can do.

Now, if I want to send new data — the one thing with this weather data is that it's the live weather, and every 15 minutes is the general collection time, so running this back to back I'm not going to get new data. I've set my system up to not care that I get duplicates; that's up to you and what your data looks like — you could filter duplicates very easily in different spots. As you see here, the data is coming in really quickly into my Flink query. Very straightforward. I could have limited it to, say, certain locations with a wildcard on location — another clause, WHERE location LIKE 'New Jersey' — and we'd just get those. Makes it a little easier. Again, we still have data coming in, and it goes pretty quick; by the time we load this, it won't be there. That's one of the problems with running these things: they move so quickly. That's why you might want to look at the provenance to see what's going on with your data. And I can see here what airport that is, without even diving into the body of the data — just from the metadata we've got most of it, and we might have gotten New Jersey. There we did. You can see here we got two pages in New Jersey. I can see the airports nearest to me — pretty much all the airports in the state; we don't have a ton of them. Atlantic City — let's take a look at that one. What's the weather down there by the beach? 77. Not bad.

So you can see what's going on, run queries however you want, and just pick the fields you want — maybe I just wanted location, lat/long, temperature, maybe humidity, wind speed, those sorts of things; you pick and choose the ones you want. Pretty straightforward. I could have picked other parts of the country, or I could have put a join in there, joined it with another topic if that makes sense, done some lookups. You get the idea. We're running low on time. Hopefully you'll want to talk to me and ask some questions — we'll do that during the live session. Thanks for attending. I'm hoping you've learned some cool stuff you can do with various streaming tech in the open source. You're not limited by anything other than the needs you have, the use case, and the time to do some basic setup. As you see here, data keeps coming in. Thank you, and enjoy the rest of your conference.