All right, well, I'm going to go ahead and get started. People are still coming in, but otherwise I'll never get through the slides. And I don't have that many slides, because I really don't want to cause pain with PowerPoint. So, briefly, I'm David Simmons. I work at InfluxData. How many people have heard of InfluxData or InfluxDB? That's about the same number of hands I get everywhere. So IoT is pretty much all I do for InfluxData. InfluxData has typically done a whole bunch of DevOps stuff, but I only do IoT for them. And I even brought flashing lights and actual IoT gear with me, so I'll get into that in a little bit. So what I want to talk about is where we're collecting IoT data. It makes a difference where you're collecting your data and where you're storing your data. And so what I'm ultimately going to talk about is what I call the data layer for IoT, because everybody talks about, and in fact the last presenter up here was talking about, how many billions of devices will be connected to the internet in the next 10 years. And unless you're a device manufacturer or you're actually selling devices, I don't care that much about the number of devices. What really matters with IoT is that every single one of those devices is going to have at least one, and more likely five to ten, data streams associated with it. And as soon as you turn that device on, those data streams will start, and most likely they will never stop. So as the IoT starts to scale up, the amount of data is just going to go absolutely through the roof. And it's going to make what we used to talk about as big data look like nothing at all. We're talking zettabytes of data over the next decade. So do you collect it all in a centralized location? Is it cloud-based? Where do you collect your data? How do you visualize your data? How do you react to your data? 
And a lot of that depends on who's going to use it and what it's going to be used for. Different data needs to be collected, analyzed, and acted upon at different places, right? So the example I use for this all the time is a drilling rig. I talked to some folks who put drilling rigs out in the Gulf of Mexico, right? And they have sensors on all of the pipes to measure pressure and things like that. And when the pressure spikes, they need to react to that data immediately to shut off a pump or open a valve, or things explode and catch on fire and bad things happen, right? And so they can't very well send all that data back to a cloud-based data collection service and have somebody react to it, because their only link is a satellite link, and there's high latency and very low throughput and it's not reliable. And if that message is delayed by 20 seconds, those 20 seconds can mean the difference between that oil rig staying above the water and that oil rig going below the water. It makes a big difference, right? So it matters who's using that data and what it's being used for as to where you collect it and analyze it. Now, a lot of people want to collect their data in the cloud, and there are certain segments of the IoT that will absolutely never, no way, collect data in the cloud, right? But if you're gonna collect it in the cloud, it really requires a low latency, high availability network. If you don't have those two things and your data is critical, then collecting in the cloud is a really bad idea. This is why, with the industrial IoT folks, if you go to them with a cloud-first strategy, as soon as the word cloud comes out of your mouth, they are gonna turn around and walk away. They've heard enough, right? Because there are safety issues involved with industrial IoT. My favorite example of an industrial IoT attack: how many people have heard of Stuxnet, right? That's an industrial IoT attack. 
That was an attack on industrial IoT devices, PLCs, very highly targeted, and it caused the centrifuges to spin wildly and blow up. That's why people don't want to attach their shop floor to the internet, right? The cloud does give you the ability to analyze your data and visualize your data from anywhere, right? So depending on where in the line of business you are, you're probably gonna want your data at some point to be collected in a cloud. It may be your system of record, so that your overall business line managers can see what's going on with the overall trends in the data, right? They don't necessarily wanna see the millisecond level pump and pressure data, because that doesn't matter to them. They wanna see that there are no incidents on the oil rig and that it's still above the water and everybody's still okay. If you're gonna collect at the edge, you need to collect your data closer to the sensor, and there are a lot of people selling what they call an edge gateway, and what they're really talking about is the edge of the cloud, and there's a big difference, right? So I brought, you know, I do IoT, so I brought hardware, right? I am so much happier doing IoT than back in the 90s when I did server-based stuff, because this is a lot easier to carry around. This is an edge collection device, right? This is an embedded Linux-based edge collection device, and I call the edge the first hop from the sensors, right? I've got a couple of sensors up here as well that are collecting temperature, pressure, and light values, and an industrial CO2 sensor, right? I don't know where our CO2 values are. No, we're pretty good. As this room gets a little stuffier, you'll probably see the level start to rise, right? So if you have an unreliable backhaul network or an expensive backhaul network, you're gonna wanna collect at the edge. And what if you need to do the analysis and the reaction to that data close to the source, right? 
You're gonna wanna collect that data close to the source, and being able to do the processing and the analytics and the actual actions based on that data close to the source makes a difference, depending on what you're monitoring, right? So for instance, I'm monitoring CO2 in this room, and it's currently, well, we're still in the green, aren't we? But we can change that. It's got about a five second delay. If that spikes to over about 5,000 or 6,000 parts per million and stays there, we're in a fair amount of trouble in here. If it gets all the way up to 10,000 parts per million, well, we won't actually notice that, because we won't be conscious to notice it anymore. So if I'm gonna react to that data and keep everybody in this room safe, I wanna make sure that I'm monitoring it close to the sensor and then reacting to it close to the sensor, so I can do something about it, right? Not necessarily back in the cloud, where I may not have connectivity back to my sensors. So distributed data collection, one of the nice things about it is I can collect at multiple points, and these two things are not mutually exclusive, right? I am collecting at the edge, and then I am forwarding that data back up to the cloud. Right now I'm not, because I didn't sign into the local network. When I do sign this little box into the local network, it will dump all its data back up to the cloud, right? And I can look at it later. But I wanted to be able to react to it locally. So these two things are not mutually exclusive, right? I can use these remote points to feed a backend system of record, so that I have the data record of what went on. Now, when this box records data, I actually don't record all of that data back up to the cloud, because I don't need that level of granularity in the cloud. 
I actually upload five minute rolling averages of my data, so that from the cloud I can see the overall trends of the data, and I can see when alerts happened, and I can see the peaks in the data, because that's what matters to me in a backend system. But I can do the local reaction to the data here, as close to the source as possible, right? That distributes the data collection load, so that I'm not relying on dumping everything to one central backend system; I have multiple data collection nodes. If you think about a shop floor, I can have a shop floor manager who's responsible for a dozen machines, and he can have a dashboard of those dozen machines that he's responsible for and only get the data for that, right? And the next person down the line, who's responsible for another dozen machines, has a completely different dashboard that shows those dozen machines and only sees the data for those dozen machines. And those two systems feed back to a backend system of record, so that the person who's managing the plant as a whole can see what went wrong, when it went wrong, how things are going, right? They don't necessarily need to see that level of detail. They want to see a more overall view, right? This is a lot more tolerant of network outages and things like that, if you're not relying on doing the reactions to that data in the cloud. And I say this all the time, and I'm not sure if this is in my slides, so I have to apologize. I did a presentation down in Leeds last night and then I left my laptop in Leeds. So this is actually a borrowed laptop, and I'm sure that some of the animations won't work, and I'm sort of having to wing it a bit this morning till I get my laptop back. So this is what I was talking about when I started, and I believe it's the title of this talk: a data layer architecture, right? This is the idea of being able to collect my data anywhere along the entire IoT deployment, whether it's at the very edge or in the cloud. 
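Those five minute rolling averages can be sketched in a few lines. Here is a minimal Python illustration of that kind of downsampling, assuming raw readings arrive as (unix seconds, value) pairs; it is not the actual tooling running on the device, just the idea.

```python
from collections import defaultdict

def five_minute_averages(readings):
    """Downsample raw (unix_seconds, value) readings into
    five-minute averages, keyed by the window start time."""
    buckets = defaultdict(list)
    for ts, value in readings:
        window_start = ts - (ts % 300)  # 300 seconds = 5 minutes
        buckets[window_start].append(value)
    return {w: sum(vals) / len(vals) for w, vals in sorted(buckets.items())}

# Ten minutes of once-per-second CO2-style readings (values made up)
readings = [(t, 400 + t / 10) for t in range(600)]
averages = five_minute_averages(readings)
# Two windows result: one starting at t=0, one at t=300
```

The cloud then only ever sees one point per sensor per five minutes, which is why a year of two dozen devices stays so small upstream while the edge keeps full granularity for local reactions.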
And I can collect that data there. I can do high speed, high volume data collection there. I can store it there. I can analyze it there. I can visualize it there. I can react to it there, right? Or I can forward it to any other layer in the system, and I can do all of those things at any of those layers in the system, right? So I'm not reliant on, well, I'm only collecting it at the edge, or I'm only collecting it in the cloud, right? And I can only visualize it at this layer, because this one's just collecting it and this one's where the visualization layer is. Oh, and all the reactions to my data happen over here. I wanna be able to do that at any point along the data stream, from the absolute edge to the cloud. So again, this is the sort of overall architecture of that, and as you can see, the data services lie across the whole thing, right? The ability to collect, manage, and react to data no matter where I am. And the one thing about IoT data that I say all the time is that if you're not actually taking action based on your data in fairly close to real time, why are you collecting it, right? There's no sense in collecting millisecond level data from some sensor if you're not actually going to take action based on that data at that granularity. If you're only looking at your sensor data every week, then collect it on a weekly basis, don't collect it on a millisecond basis; you're just wasting time, right? And the example I give of this is: you're doing vibrational analysis on a machine, and the vibration starts to go way out of whack, and the machine blows up, and you have to shut the whole shop down, and it takes a week to get the part and get the machine back up and running, and you lost a couple of million dollars in business because your machine went down, okay? So now you're collecting vibrational data on that machine and the same thing happens. Why are you collecting that data? 
If you're not monitoring it and seeing that it's starting to go out of range and ordering the part ahead of time, so that you can shut the machine down for 20 minutes, replace the part, and bring it up, and only cost your production $20,000 instead of two and a half million dollars, then you're wasting your data collection, right? Being able to act on that data in basically a real-time fashion is what it's all about. And here's the blank slide. So this is the one that I am sure is not gonna animate correctly, because I am just positive of it. So this is a sort of an IoT architecture where I've got my sensors, and I don't care how they're connected. In fact, up here I have a Bluetooth sensor and a Wi-Fi sensor, and if I had enough batteries, I have a couple of LoRa sensors as well, right? So I don't care how they're connecting to this gateway. This thing has a LoRa gateway in it, it reads Bluetooth, it reads Wi-Fi, and it reads Z-Wave. So I can connect any kind of sensor to it, right? I don't much care. And I've got it connected to an edge device which may or may not be connected to the cloud; in this case, right now, it's not. And I can collect my data in a database in the cloud and have visualizations at the cloud level, right? But the nice thing about this is I'm running InfluxDB and the entire Influx stack on this edge device, right? So I can use Telegraf for data collection, high speed, high volume data collection, and I run that here at the edge, so I collect my data at the edge there, and I run it in my cloud, so I can collect the data at the cloud level. This thing can forward its data to a Telegraf agent anywhere else that can feed it upstream, right? And here's the animation out of sync. I run InfluxDB to store my data, and I store my data both here at the edge and at the cloud level, right? And so I have my short-term storage at the edge here, and long-term storage back in the cloud, right? 
I use Kapacitor for doing local alerts here, and I do some system-wide alerts at the cloud level, right? And what you're actually seeing, this is actually a display up here that is being run off of Kapacitor alerts. So whenever I change the CO2 level, it registers on this little dashboard up here that you can't see, but you can come up and see it afterwards. It's also sending an alert to this other Wi-Fi enabled device that shows a graph of the CO2 levels, and I'm assuming that it's changing, because I can't see it. Yeah, I just spiked it, right? And so it's sending a constant stream of events from Kapacitor to this device to give me alerts. I could hook it to almost anything and have that alert do almost anything. I could hook it to a fan in this room, so that when the CO2 levels start to go up and people start to fall asleep, the fans come on and start to suck CO2 out of the room, right? And I can automate all of that, but this is one of those automations that's important to have at the local level, right? I'm not gonna rely on the cloud for that. And I can use Chronograf to build dashboards, so that I can visualize my data at the edge. Most of you probably can't see this little device here, but I've got a bunch of dials and graphs here that keep track of a bunch of sensor data, right? In addition to keeping track of the sensor data, I'm also keeping track of the platform itself, right? Because if this Wi-Fi sensor, you know, stops sending me data, I've gotta know: is the sensor dead, or is the Wi-Fi dead? It makes a difference, especially if that Wi-Fi sensor is someplace really hard to reach, and I've gotta roll a truck out there to get somebody to climb up in some small hole and dig out that sensor and replace it. Is the sensor really dead, or is the Wi-Fi dead? So I need to be able to monitor the platform that it's running on as well as what it's monitoring, so that I know who to send when something's not going right. So this is what that dashboard looks like. 
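That fan automation is just a threshold rule evaluated locally. Here is a small Python sketch of the idea, not an actual Kapacitor TICKscript; the thresholds are loosely based on the numbers mentioned in the talk, and the Fan class is a made-up stand-in for whatever actuator you would really drive.

```python
class Fan:
    """Stand-in for a locally controllable actuator (illustrative)."""
    def __init__(self):
        self.running = False
    def on(self):
        self.running = True
    def off(self):
        self.running = False

def co2_alert_level(ppm):
    """Classify a CO2 reading; thresholds are illustrative."""
    if ppm < 1000:
        return "ok"
    if ppm < 5000:
        return "warning"
    return "critical"

def react_locally(ppm, fan):
    """Local reaction loop body: run the fan whenever we leave 'ok'."""
    level = co2_alert_level(ppm)
    if level == "ok":
        fan.off()
    else:
        fan.on()
    return level
```

The point of the sketch is where it runs: because this decision lives at the edge, it keeps working even when the backhaul to the cloud is down.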
I've got CO2 data, I've got dial graphs to make it easy to see exactly what's going on right now. I've got some historical data in the form of strip charts, right? And I'm monitoring the battery levels and the Wi-Fi levels on both Wi-Fi interfaces, and I'm monitoring a whole bunch of sensors, so that I know what's going on with the stuff I'm monitoring and with the platform that's doing the monitoring, so that I can keep up with what I need to keep up with. And this is what I mean by the data layer, right? I am now able to collect and analyze my data. I built this same dashboard for this local device, and I run the same dashboard at the cloud level, but the cloud is only giving me five minute averages, so that I can see the overall trend of my data. And I can synthesize multiple sensors from multiple collection devices in the cloud, so that I could see what's going on with the whole building if I had these in every room of the building, and I can react to my data on a room-by-room basis with a monitor in the room. So IoT data is basically, well, it's not basically, it is time series data, right? Time series data is defined by a reading or an event with a timestamp. That's time series data, and that's IoT data. As I said, every IoT device that comes online is going to be generating a constant stream of time series data from the minute you turn it on until it finally dies, in however many dozens of years it takes for it to die, right? And it's all time series data. There's a little bit of the data that's not time series, and that's usually data about when you brought it online, and what kind of sensor it is, and who the manufacturer is, and things like that. That's a little bit of non-time series data, but the majority of the data that's gonna come out of an IoT device is time series data. 
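A reading or event plus a timestamp is also exactly the shape of a point in InfluxDB's line protocol: a measurement name, optional tags, fields, and a timestamp. Here is a rough Python sketch of building one; the measurement and tag names are made up for illustration, and real line protocol has escaping and type-suffix rules that this ignores.

```python
def to_line(measurement, tags, fields, ts_ns):
    """Format one reading as a line-protocol-style string:
    measurement,tags fields timestamp (nanoseconds).
    Simplified: no escaping, no field type suffixes."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# A single CO2 reading with a nanosecond timestamp (names are illustrative)
line = to_line("co2", {"room": "conference"}, {"ppm": 412}, 1528000000000000000)
```

The non-time-series bits, like manufacturer and sensor type, are exactly what the tags are for: they describe the stream, while the fields and timestamp are the stream.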
And time series data, again: if you're going to collect IoT data, you want it to be timely. You wanna look at what's happening now with your data. You're instrumenting your environment; you wanna know what the environment is doing. You need it to be accurate; you need to make sure that you have data integrity at all levels of this data layer. And it's gotta be actionable. Again, if I'm not taking action on my data, why am I collecting it? If I'm not doing something with it to get value from it, by making business decisions or safety decisions or whatever kinds of decisions I'm making with that data, if I'm not doing that, why am I collecting it? Unless I just wanna hand money over to disk manufacturers to put my data on disk, right? My experience over many years is that most IoT deployments are struggling to find a platform that allows you to do this in a scalable manner. How many people here are actually doing IoT deployments? How many of you are using MongoDB? Don't do that. MongoDB is a great database for document storage, JSON object storage, all sorts of other stuff. IoT data is not document storage. It's not JSON-based object storage. It's time series data. You can use a screwdriver to drive a nail; I do it all the time with small nails. As you need to drive a larger nail, the screwdriver becomes less and less effective for doing that. At some point you need the right tool for the job, and the tool for IoT data and time series data is a time series database, something that is optimized to do time series data collection, analysis, and manipulation, and to take action on time series data. That's what it's built for, right? I've written applications using MongoDB for all sorts of stuff, right? I do not use MongoDB for time series data, because that's not what it's good at. It's great to start out that way. It's great for doing a proof of concept, but it will not scale when you get to IoT scale, right? 
You're talking millions, tens of millions of data points per second, right? You need to scale your ability to ingest that data to tens of millions, to hundreds of millions of data points per second. Not only that, you need to be able to analyze that data, to visualize that data, and to react to that data at those scales. And that's why you're gonna need time series databases, because that's what they're built to do. Now I have lots of time for questions. I can answer questions about any of this. I can answer questions about how I built this stuff, right? Because I built it all myself, and it's all based on pretty much open source hardware and software, right? This is a Pine 64 open source board running Armbian Linux, right? The really nice thing about the InfluxData stack is that I am running the exact same software on this edge device as I'm running on a multiple node cloud instance. It's the same stuff, it's the same bits, it's the same software. So I don't have to know anything different to deploy it to this edge device than I need to know to deploy it to an enterprise server, or to manage it at any of those levels. It's the same software, it's the same bits, it's deployed the same way, it's architected the same way. Turns out, after running this thing continuously for just under a year, I finally hit a limit on it last week. I did not hit a limit on data storage; I hit a limit on how much data it can compact when it has to compact a shard and move to the next one. I have to make my shards smaller for this embedded device. But it took me a year to find that out running it on here, collecting data pretty much 24 hours a day, because when I'm not traveling, this runs in my office, and it collects all this data 24 hours a day. 
This is a small embedded device. It's got a 64 gig micro SD card in it, and I've been collecting data for a year, and I'm still at only about 20% disk usage, and that's including the entire Linux stack and all the stuff that I'm using to collect the data, in terms of connecting to the remote devices. So it's really efficient at collecting the data and storing the data. I've been running a cloud instance for over a year now, and I think I'm at about 170 megs, and I've been collecting data in the cloud from about two dozen devices for a year. And those devices each have about five data streams, and they're sending data every second, so that's five data points a second from about two dozen devices. And it's been doing that for about 14 months. I went backwards, I'm sorry. So being able to deploy the same software, the same dashboards, the same data collection, the same data storage from end to end is a huge cost savings, and it's a great efficiency. So with that, I'll just take questions. I think we have a microphone right up there; if you can use the microphone so everybody can hear, that'd be great. Yep, it's on. Do you have some recommendation for the minimal system specification or requirements for running the Influx stack on an edge device? Well, I can tell you the smallest device that I've run it on, and I'm gonna apologize in advance for having run it on this device, but an Intel Edison is the smallest device I've run it on. They're not making them anymore, and I'm actually kinda thankful for that, but that's the smallest device that I've run it on currently. I have another device that's called a Chip Pro, which is no longer being made either, and I picked one up for 10 bucks on eBay, and if I can ever get it to actually boot Linux, then I will try running it on that. It's all written in Go, and it's pretty low footprint stuff. 
Now, running it on the Intel Edison, and running the dashboards for the visualization as well as the data collection and storage, was not all that rewarding an experience, right? But it was able to do the data collection and the data storage and forwarding to an upstream collection device with no problem whatsoever, right? Other questions? Yeah, I'll just repeat your question if you can shout it out. So, am I gonna be working with EdgeX Foundry to replace their MongoDB core data? I don't know. I'd like to, you know, because again, for IoT data a time series database makes a lot of sense. I don't have the chart in my slides, but the Eclipse Foundation did an IoT developer survey last year, and the top three databases for IoT data storage were MySQL, MongoDB, and then InfluxDB, right? The year before that, InfluxDB didn't even register. Unfortunately, Don't Know and None also registered this year on that survey, at not insignificant numbers. So we'll have to fix that as well. Other questions? Yes, sir. If you go to an industrial event, sort of every solution I've come across is just pump everything from the sensor into the cloud and process it there. I never quite understood why they're advocating that. So you're saying that industrial IoT folks want to pump everything from the sensor to the cloud? That has been the opposite of my experience in talking to industrial IoT customers, who are absolutely not interested in having their industrial equipment connected to the open internet in any way, shape, or form. Thank you very much. And in fact, Gartner last spring did a Magic Quadrant of IoT data solutions, and for industrial IoT they did not include a bunch of providers like AWS, because their IoT data solution was a cloud-first solution, and the industrial IoT folks do not want a cloud-first solution. It is a non-starter for most industrial IoT customers. For exactly the reason I stated about Stuxnet, right? 
It's not very hard to... That one was very hard, because they targeted very specific PLCs by serial number. So they had to know which PLCs, by serial number. But if I just want to target all my competitors, I can just make sure I target all PLCs except the ones I own, right? And I win. So there's a lot of reasons to have zero desire to connect this stuff to the cloud. Even a lot of the large cloud providers, like Amazon and Azure, are starting to hand out what they call edge kits, right? To allow you to do your data collection at the edge and then forward it to their cloud, right? But it's all based on a cloud backend. At InfluxData, we don't care where you want to run it or store it. It's all open source, you can download it for free, you can run it for free, you can run it for free forever, and we will never call you up and say, hey, do you want to buy something? That's just not our business model, right? So you can run it wherever you want, as many copies as you want, for as long as you want. And that gives you the flexibility to say, I want to collect this stuff at the edge, and then I want to forward it back to my local data center that's in the building and that's air-gapped from the rest of the world, and that's completely okay, right? Other questions, yes? I mean, I know you have the Influx TV processor run at your cloud provider, if you need to pay, right? And have you tried to run, connect to other cloud providers, something like OpenGS TV, do you have the system running for free by the user and... So there's a whole bunch of questions in there, and I'm gonna try to repeat as many of them as I can. First was, how do I deploy this to the edge? There's a bunch of ways to do that, right? 
So I built, I think, four of these edge devices and sent them around to people in the company, and I did a fairly stupid way of doing that, which is I built one, and then I just imaged the SD card, and you put the SD card in another box and you turn it on, and it comes up exactly the same as the first one. That's not the smartest way to do it, right? I'm not claiming it is. You can run this stuff in containers, so I can containerize it and then just have the containers pushed out, right? I can put this in something like resin.io and push it out that way, and I've done that as well. So there's a whole bunch of different ways, and I'm not here to tell you what is the best way for your deployment, because I don't know, right? I do know that a complete install on Linux, from start to finish to having a dashboard up and running, took me three and a half minutes, and I have video evidence to prove it on my blog, right? It's fairly simple to deploy and to run. I think my macOS one took four and a half minutes, and that's just because I got confused with something that took me a minute, but it's pretty fast on that as well. Yes, sir. Is it possible to deploy it in a redundant topology? So, having a couple of gateways at the edge to increase the probability that data will be collected. I don't see why not. The trouble you're gonna probably have with that is how do you tell the sensors which gateway to connect to, right? IoT has all sorts of problems that most people tend to just wanna sweep under the rug and not pay any attention to, right? Data storage used to be one of them. You'd ask people what they were doing with their data, and they would say analytics database mumble walk off, right? 
But device security and device connectivity are problems that have been around since IoT started, and they're still around, because the ability to do secure communications from device to edge is largely dependent on the processing power and the amount of battery power on a remotely connected sensor, right? Most small embedded microcontrollers do a fairly poor job of things like TLS, because they just don't have the compute power to do it, right? So having them carry the configuration to connect to multiple collection devices, either simultaneously or serially: every one of those things takes up configuration space and communication space, and therefore takes up compute power and battery power. And so these are all things that you need to think about and calculate into what your device is gonna be: how much battery power it's gonna have, if it's battery powered, what sort of communication protocols you're gonna use, all those sorts of things. Okay, so that's rather a question about terminal devices, right? But at the cloud level, does the solution allow you to, you know, remove duplicate data if it was collected? Right, so at the cloud or enterprise level, as I said, you can run single instances of our open source InfluxDB as much as you want, for free, for as long as you want, and nobody's gonna argue about it. When you get into wanting to do things like clustering and high availability, that's the secret sauce that we sell, right? So at that point you call us and we sell you a license for the enterprise version, or you buy a cloud instance, which comes with clustering and high availability and failover and all those sorts of things, and you can configure how many nodes are in your cluster and how much data you want to distribute. So it's the clustering and high availability level that we charge for; other than that, it's free. Thank you. Yeah, can you come up to the microphone if you don't mind? 
Yes, I was wondering, how do you synchronize the data between the edge device and the cloud? Is it just part of the InfluxDB infrastructure? That's a great question. Right now I use Telegraf, which is our data ingestion engine, to do my data synchronization, and it doesn't do it very well, right? So when the connection is down, it will cache a certain amount of data, up until it reaches its cache limit, and then it just starts dropping data. So it's not a great solution. We are actually working closely with the Apache NiFi project to be able to pump data to NiFi, which will be able to do much smarter caching and uploading of data that way. I can also do it using Kapacitor, and I've worked with some folks to do some interesting schemes around: if I'm offline, I write all this data to a separate InfluxDB database on my local device, and whenever I have internet connectivity, if there's data in that database, I write it all upstream, and when I've written it all, I erase that database, and I know I've synchronized my data. So there's a bunch of different schemes that you can do. Right now, none of them are especially elegant for intermittently connected devices, depending on the length of time that they're disconnected. Yes, sir. So is Telegraf kinda like StatsD? Because you can feed straight into Influx, like over UDP and stuff. So the question is, is Telegraf like StatsD? Telegraf is a plugin-driven data ingestion engine, and you can run Telegraf to capture data and to output data without ever having InfluxDB in the mix. You can collect data into Telegraf from, there's something like 200 plugins now, and they're almost all community contributed plugins. The plugins actually written by InfluxData employees are fairly few; in fact I've got four, and I'm like one of the largest contributors of plugins to Telegraf from Influx. So you can ingest your data into Telegraf and output it to Kafka, right? 
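That write-locally-then-flush scheme can be sketched very simply. This is an illustrative Python outline of the idea, assuming a send callable that raises ConnectionError on network failure; it is not the Kapacitor-based implementation itself, just the store-and-forward logic it describes.

```python
class StoreAndForward:
    """Buffer points locally while offline; drain the backlog upstream,
    in order, whenever connectivity returns. `send` is any callable
    that raises ConnectionError on network failure (an assumption)."""
    def __init__(self, send):
        self.send = send
        self.buffer = []

    def write(self, point, online):
        if online:
            try:
                self.flush()        # oldest data goes upstream first
                self.send(point)
                return
            except ConnectionError:
                pass                # link flapped: fall through and buffer
        self.buffer.append(point)

    def flush(self):
        while self.buffer:
            self.send(self.buffer[0])
            self.buffer.pop(0)      # only drop a point after a successful send

# Offline writes accumulate; the first online write drains them in order
uplink = []
saf = StoreAndForward(uplink.append)
saf.write("a", online=False)
saf.write("b", online=False)
saf.write("c", online=True)
```

Note the ordering guarantee: a point is only removed from the local buffer after the upstream send succeeds, which is what makes "erase the local database once it's all written" safe.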
Or output it to almost anything, and never have InfluxDB in the mix, if that's what you want. Do you recommend just going through Telegraf? Is it a bad long-term strategy to go straight to Influx? The thing about going straight to Influx is that you have to handle any caching or connection problems yourself. The thing that Telegraf gets you is that it's really, really efficient at pumping data into InfluxDB, and it will handle a certain amount of data latency and caching, so that you don't have to worry about that kind of stuff. And it's also really easy to, A, send data to it, or B, write a plugin if there's not already a data collector for your source, right? So actually, the way I'm sending my data to this external device is I'm using Kapacitor to send data to an MQTT broker, which this device is listening on, and it picks up its alerts from MQTT, right? But I'm also listening on that same MQTT broker for incoming data from a sensor, right? So I can just ingest data from MQTT. I've hooked this thing to The Things Network LoRa MQTT broker and ingested vast amounts of data that way. So Telegraf is really about ingesting data at high speed and with high efficiency, and putting that data out with equal efficiency and speed, and it just happens to be really good at InfluxDB. Other questions? Yeah. Since we are dealing with a large set of edge gateway devices that do the timestamping for the sensor data, does InfluxDB have any notion of time uncertainty for clock synchronization? Clock synchronization is a huge problem in IoT, as you probably know, because, again, real-time clocks on embedded devices are their whole other thing, and there's a bunch of different strategies you can use for that, right? The devices themselves can timestamp data as it's sent, right? 
Or you can not timestamp your data from the device and allow the gateway device to timestamp the data and do some synchronization between gateway devices, or, if you're piping it all to the cloud, you can have none of those devices timestamp the data and have the data timestamped as it enters the cloud database. So you can decide where you want to inject your timestamp at any point along the way, and if you just don't send a timestamp with it, it'll be timestamped for you. And I have just gotten the hook. So thank you very much. Thanks.
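The three strategies above amount to: stamp at the first layer that actually has a usable clock. A minimal Python sketch of that decision, with made-up layer names and a dict standing in for a data point, might look like this; it is an illustration of the idea, not InfluxDB's actual ingest path.

```python
import time

def stamp_point(point, clocks):
    """Attach a timestamp at the first layer that can provide one:
    the device stamps it, else the gateway stamps it, else the
    cloud stamps it on ingest. `clocks` maps a layer name to a
    clock function, or None if that layer has no usable
    real-time clock (names are illustrative)."""
    if point.get("ts") is not None:
        return point                      # device already stamped it
    for layer in ("gateway", "cloud"):
        clock = clocks.get(layer)
        if clock is not None:
            point["ts"] = clock()
            point["stamped_by"] = layer
            return point
    return point

# A gateway with no real-time clock: the cloud stamps on arrival
point = stamp_point({"ppm": 412, "ts": None},
                    {"gateway": None, "cloud": time.time})
```

The trade-off is the one raised in the question: the later the stamp is applied, the more network latency is folded into the recorded time, so deciding which layer stamps is also deciding how much time uncertainty you accept.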