Okay, we're now going to go into the customer panel and we'd like to welcome Angelo Fausti, who's a software engineer at the Vera C. Rubin Observatory, and Caleb McLaughlin, who's Senior Spacecraft Operations Software Engineer at Loft Orbital. Guys, thanks for joining us. Folks, you don't want to miss this interview. Caleb, let's start with you. You work for an extremely cool company. You're launching satellites into space. Of course, doing that is highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem. Yeah, absolutely. And thanks for having me here, by the way. So Loft Orbital is a series B startup now, and our mission, basically, is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space, do something in space, it's extremely expensive. You need to book a launch, build a bus, hire a team to operate it, have a big software team, and then worry about a lot of very specialized engineering. What we're trying to do is change that from a super specialized problem with an extremely high barrier to access into an infrastructure problem, so that getting your mission deployed on orbit, with access to different sensors, cameras, radios, things like that, is almost as simple as deploying a VM in AWS or GCP. So that's kind of our mission. And just to give a really brief example of the kind of customer we can serve, there's a really cool company called Totem Labs who is building an IoT constellation, basically being able to get telemetry from all over the world. They're the first company to demonstrate indoor IoT, which means you have this little modem inside a container that you can track from anywhere in the world as it's going across the ocean.
So it's really little, and they've been able to stay a small startup that's focused on their product, which is that super crazy complicated cool radio, while we handle the whole space segment for them, which before Loft was really impossible. So that's our mission: providing space infrastructure as a service. We're kind of groundbreaking in this area, and we're serving a huge variety of customers with all kinds of different missions, and obviously generating a ton of data in space that we've got to handle. Yeah, so amazing, Caleb, what you guys do. I know you were lured to the skies very early in your career, but how did you land in this business? Yeah, so I guess just a little bit about me. Some people don't necessarily know what they want to do early in their life. For me, I was five years old and I knew I wanted to be in the space industry. So I started in the Air Force, but I've stayed in the space industry my whole career, and this is actually the fifth space startup that I've been a part of. I started out in satellites, spent some time working in the launch industry on rockets, and now I'm back in satellites, and honestly, this is the most exciting of the different space startups I've been a part of. Super interesting. Okay, Angelo, let's talk about the Rubin Observatory. Vera Rubin, famous woman scientist, galaxy guru. Now you guys at the observatory, way up high, are going to get a good look at the southern sky. I know COVID slowed you guys down a bit, but no doubt you continue to code away on the software. I know you're getting close; you've got to be super excited. Give us the update on the observatory and your role. All right, so yeah, Rubin is a state-of-the-art observatory that is under construction on a remote mountain in Chile. And with Rubin, we'll conduct the Legacy Survey of Space and Time.
We are going to observe the sky with an eight-meter optical telescope and take a thousand pictures every night with a 3.2 gigapixel camera. And we are going to do that for 10 years, which is the duration of the survey. Yeah, amazing project. Now, you earned a doctor of philosophy, so you probably spent some time thinking about what's out there, and then you went out to earn a PhD in astronomy and astrophysics. So this is something that you've been working on for the better part of your career, isn't it? Yeah, that's right, about 15 years. I studied physics in college. Then I got a PhD in astronomy, and I worked for about five years on another project, the Dark Energy Survey, before joining Rubin in 2015. Yeah, impressive. So it seems like both your organizations are looking at space from two different angles. One thing you guys both have in common, of course, is software, and you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB and get into it? How do you use the platform? Maybe Caleb, you could start. Yeah, absolutely. So the first company where I extensively used InfluxDB was a launch startup called Astra. We were in the process of designing our first generation rocket there and testing the engines, pumps, everything that goes into a rocket. And when I joined the company, our data story was not very mature. We were collecting a bunch of data in LabVIEW, and engineers were taking that over to MATLAB to process it. And at first, that's the way a lot of engineers and scientists are used to working, so people weren't entirely sure that that needed to change. But the nice thing about InfluxDB is that it's so easy to deploy. Our software engineering team was able to get it deployed and up and running very quickly, and then quickly backport all of the data that we had collected thus far into InfluxDB.
And what was amazing to see, and it's kind of the super cool moment with InfluxDB, is when we hooked that up to Grafana. Grafana is the visualization platform we use with InfluxDB because it works really well with it. There was this aha moment for our engineers, who were used to this post-process method of dealing with their data, where they could almost instantly discover data that they hadn't been able to see before, take the manual processes that they would run after a test, throw those all into InfluxDB, and have live data as tests were running. And I saw them implementing crazy rocket-equation-type stuff in InfluxDB, and it was totally game changing for how we tested. So, Angelo, I was explaining in my open that you could add a column in a traditional RDBMS and do time series, but with the volume of data that you're talking about, and the example that Caleb just gave, you have to have a purpose-built time series database. Where did you first learn about InfluxDB? Yeah, correct. So I work with the data management team, and my first project was to record metrics that measure the performance of our software, the software that we use to process the data. I started implementing that in a relational database, but then I realized that I was in fact dealing with time series data and I should really use a solution built for that. So I started looking at time series databases, and I found InfluxDB; that was back in 2018. Another use for InfluxDB that I'm also interested in is the visits database. If you think about the observations, we are moving the telescope all the time, pointing at specific directions in the sky and taking pictures every 30 seconds. So that itself is a time series, and every point in that time series we call a visit. We want to record the metadata about those visits in InfluxDB. That time series is going to be 10 years long, with about 1,000 points every night.
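To make the visits idea concrete: each visit becomes one point in InfluxDB's line protocol, with tags for indexed metadata and fields for values. A minimal sketch in Python, where the measurement name, tags, and fields are hypothetical illustrations, not Rubin's actual schema:

```python
# Sketch: formatting one "visit" metadata record as InfluxDB line protocol.
# The measurement, tag, and field names are hypothetical, not Rubin's schema.

def visit_line(ra_deg: float, dec_deg: float, band: str,
               exposure_s: float, timestamp_ns: int) -> str:
    """Build a line protocol entry: measurement,tags fields timestamp."""
    tags = f"band={band}"                      # indexed metadata goes in tags
    fields = (f"ra_deg={ra_deg},dec_deg={dec_deg},"
              f"exposure_s={exposure_s}")      # numeric values go in fields
    return f"visit,{tags} {fields} {timestamp_ns}"

line = visit_line(ra_deg=150.1, dec_deg=-30.5, band="r",
                  exposure_s=30.0, timestamp_ns=1_600_000_000_000_000_000)
print(line)
# visit,band=r ra_deg=150.1,dec_deg=-30.5,exposure_s=30.0 1600000000000000000
```

At roughly 1,000 visits a night for 10 years, that's only a few million points, which is why Angelo notes below that the volume itself is modest; the challenge is the time scale.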
It's actually not too much data compared to other problems; it's really just a different time scale. The telescope at the Rubin Observatory is, pun intended, I guess, the star of the show. And I believe I read that it's going to be the first of the next-gen telescopes to come online. It's got this massive field of view, like three orders of magnitude times the Hubble's widest camera view, which is amazing. That's like 40 moons in an image, and it's amazingly fast as well. What else can you tell us about the telescope? This telescope has to move really fast. It also has to carry the primary mirror, which is an eight-meter piece of glass; it's very heavy. And it has to carry a camera, which is about the size of a small car. This whole structure weighs about 300 tons. For that to work, the telescope needs to be very compact and stiff. And one thing that's amazing about its design is that the telescope, this 300-ton structure, sits on a tiny film of oil, which has the diameter of a human hair. And that makes an almost zero-friction interface. In fact, a few people can move this enormous structure with only their hands. As you said, another aspect that makes this telescope unique is the optical design. It's a wide-field telescope, so each image is about seven full moons in diameter. And with that, we can map the entire sky in only three days. And of course, during operations, everything is controlled by software and it's automatic. There's a very complex piece of software called the scheduler, which is responsible for moving the telescope and the camera, which is recording 15 terabytes of data every night. And Angelo, all this data lands in InfluxDB, correct? And what are you doing with all that data? Yeah, actually not. We use InfluxDB to record engineering data and metadata about the observations, like telemetry, events and the commands from the telescope.
That's a much smaller data set compared to the images, but it is still challenging, because there's some high frequency data that the system needs to keep up with. And we need to store this data and have it around for the lifetime of the project. Got it. Thank you. Okay, Caleb, let's bring you back in. You've got these dishwasher-sized satellites, and you're kind of using a multi-tenant model. I think it's genius, but tell us about the satellites themselves. Yeah, absolutely. So we have some satellites in space already that, as you said, are dishwasher or mini-fridge kind of size, and we're working on a bunch more in a variety of sizes, from shoebox to, I guess, a few times larger than what we have today. And we do shoot for effectively something like a multi-tenant model, where we buy a bus off the shelf. The bus is what you can think of as the core piece of the satellite, almost like a motherboard, where it's providing the power; it has the solar panels, it has some radios attached to it, and it handles the attitude control, basically steering the spacecraft in orbit. And then we also build in house what we call our payload hub, which has any customer payloads attached and our own kind of edge processing capabilities built into it. So we integrate that and we launch it, and because those things are in low Earth orbit, they're orbiting the Earth every 90 minutes. That's seven kilometers per second, which is several times faster than a speeding bullet. So one of the unique challenges of operating spacecraft in low Earth orbit is that generally you can't talk to them all the time. We're managing these things through very brief windows of time where we get to talk to them through our ground sites, either in Antarctica or in the North Pole region. Talk more about how you use InfluxDB to make sense of this data through all this tech that you're launching into space.
Basically, when I joined the company, we started off storing all of that, as Angelo did, in a regular relational database. And we found that it was so slow, and the size of our data would balloon over the course of a couple of days, to the point where we weren't able to even store all of the data that we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft. That's things like power levels, voltages, currents, counts, whatever metadata we need to monitor about the spacecraft. We now store that in InfluxDB, and we can easily store the entire volume of data for the mission life so far without having to worry about the size bloating to an unmanageable amount. We can also seamlessly query large chunks of data. For example, as an operator, I might want to see how my battery state of charge is evolving over the course of the year. I can have a plot in InfluxDB that loads a year's worth of data in a fraction of a second, because it can intelligently group the data by a sliding time interval. So it's been extremely powerful for us for accessing the data, and as time has gone on, we've gradually migrated more and more of our operating data into InfluxDB. You know, let's talk a little bit about this term we throw around a lot: data driven. A lot of companies say, oh yes, we're data driven, but you guys really are. I mean, you've got data at the core. Caleb, what does that mean to you? Yeah, so, you know, the clearest example of when I saw this be totally game changing is what I mentioned before at Astra, where our engineers' feedback loop went from a lot of slow researching and digging into the data to almost instantaneously seeing the data and making decisions based on it immediately, rather than having to wait for some processing.
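The "group the data by a sliding time interval" trick Caleb describes is essentially windowed downsampling: the database returns one aggregated value per time window instead of every raw sample, so a year-long plot stays fast. A minimal pure-Python sketch of the idea (the window size and sample data are illustrative, not Loft's actual telemetry):

```python
# Sketch: downsampling a time series into fixed time windows, the idea behind
# loading a year's worth of data quickly. Data and window size are illustrative.

def window_means(points, window_s):
    """Group (timestamp_s, value) points into windows and average each window."""
    buckets = {}
    for ts, value in points:
        bucket = int(ts // window_s)           # which window this point falls in
        buckets.setdefault(bucket, []).append(value)
    # emit one (window_start, mean) pair per window, in time order
    return [(b * window_s, sum(vs) / len(vs)) for b, vs in sorted(buckets.items())]

telemetry = [(0, 10.0), (30, 12.0), (60, 20.0), (90, 22.0)]  # e.g. battery voltage
print(window_means(telemetry, window_s=60))
# [(0, 11.0), (60, 21.0)]
```

InfluxDB does this aggregation server-side, which is what keeps the query to a fraction of a second even across a year of samples.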
And that's something I've also seen echoed in my current role. To give another practical example, as I said, we have a huge amount of data that comes down every orbit, and we need to be able to ingest all of that data almost instantaneously and provide it to the operator in near real time; about a second's worth of latency is all that's acceptable for us to react to what is coming down from the spacecraft. And building that pipeline is challenging from a software engineering standpoint. Our primary language is Python, which isn't necessarily that fast. So, in the spirit of being data driven, what we've done is publish metrics on how individual pieces of our data processing pipeline are performing into InfluxDB as well. We do that in production as well as in dev, so we have kind of a production monitoring flow. And what that has done is allow us to make intelligent decisions on our software development roadmap about where it makes the most sense to focus our development efforts in terms of improving our software efficiency, just because we have that visibility into where the real problems are. Before we started doing this, we sometimes found ourselves chasing rabbits that weren't necessarily the real root cause of issues we were seeing. But now that we're being a bit more data driven there, we're being much more effective in where we spend our resources and our time, which is especially critical as we scale from supporting a couple of satellites to supporting many, many satellites at once. Gotcha, you reduced those dead ends. Maybe Angelo, you could talk about what data driven means to you and your teams. I would say that having real-time visibility into the telemetry data and metrics is crucial for us. We need to make sure that the images we collect with the telescope have good quality and are within the specifications to meet our science goals.
And so if they are not, we want to know that as soon as possible and start fixing problems. Caleb, what are your event intervals like? So I would say that as of today, on the spacecraft, the level of timing that we deal with probably tops out at about 20 hertz, 20 measurements per second, on things like our gyroscopes. But I think the core point here is that the ability to have high precision data is extremely important for these kinds of scientific applications. I'll give an example from when I worked on the rockets at Astra: the baseline rate at which we would ingest data during a test was 500 hertz, so 500 samples per second, and in some cases we would actually need to ingest much higher rate data, even up to 1.5 kilohertz. So extremely, extremely high precision data there, where timing really matters a lot. And one of the really powerful things about InfluxDB is the fact that it can handle this. That's one of the reasons we chose it. There are times when you're looking at the results of a firing where you're zooming in; I talked earlier about how in my current job we often zoom out to look at a year's worth of data, but here you're zooming in to where your screen is occupied by a tiny fraction of a second. And you need to see, as Angelo just said, not just the actual telemetry, which is coming in at a high rate, but the events that are coming out of our controllers. That can be something like, hey, I opened this valve at exactly this time, and we want to have that at microsecond or even nanosecond precision, so that we know: okay, we saw a spike in chamber pressure at this exact moment; was that before or after this valve opened? That kind of visibility is critical in these kinds of scientific applications, and it's absolutely game-changing to be able to see it in near real time.
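The before-or-after question Caleb poses comes down to comparing nanosecond timestamps between a telemetry sample and the nearest preceding control event. A minimal sketch of that lookup, with hypothetical event names and times (not Astra's actual data):

```python
# Sketch: deciding whether a telemetry anomaly happened before or after a
# control event, using nanosecond timestamps. Names and times are hypothetical.
import bisect

def last_event_before(events, t_ns):
    """Return the most recent (timestamp_ns, name) event at or before t_ns."""
    times = [ts for ts, _ in events]            # events must be time-sorted
    i = bisect.bisect_right(times, t_ns)
    return events[i - 1] if i else None

events = [
    (1_000_000_000, "open_fuel_valve"),
    (1_000_450_000, "igniter_on"),
]
spike_t_ns = 1_000_200_000                       # chamber-pressure spike time
print(last_event_before(events, spike_t_ns))
# (1000000000, 'open_fuel_valve') -> the spike came after the valve opened
```

Storing both streams with nanosecond timestamps in the same database is what makes this kind of cross-referencing a single query rather than a post-processing job.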
And with a really easy way for engineers to visualize the state of the system themselves, without having to wait for software engineers to go build it for them. Can the scientists do self-serve, or do you have to design and build all the analytics and queries for your scientists? From my perspective, that's absolutely one of the best things about InfluxDB, and what I've seen be game changing: generally, anyone can learn to use it. And honestly, most of our users might not even know they're using InfluxDB, because the interface that we expose to them is Grafana, which is a generic open-source graphing platform that is very similar to InfluxDB's own Chronograf. It provides a very intuitive UI for building your queries. So you choose a measurement, and it shows a dropdown of available measurements, and then you choose the particular fields you want to look at, and again, that's a dropdown. So it's really easy for our users to discover the data, and there are point-and-click options for doing math and aggregations; you can even do predictions, all within the Grafana user interface, which is really just a wrapper around the APIs and functionality that InfluxDB provides. Putting data in the hands of those who have the context, the domain experts, is key. Angelo, is it the same situation for you? Is it self-serve? Yeah, correct. As I mentioned before, we have the astronomers making their own dashboards, because they know exactly what they need to visualize. Yeah, I mean, it's all about using the right tool for the job. I think for us, when I joined the company we weren't using InfluxDB, and we were dealing with serious issues of the database growing to an incredible size extremely quickly; even querying short periods of data was taking on the order of seconds, which is just not workable for operations. Guys, this has been really informative.
It's pretty exciting to see where the edge is: mountaintops, low Earth orbit. Space is the ultimate edge, isn't it? I wonder if you could answer two questions to wrap here: what comes next for you guys, and is there something that you're really excited about that you're working on? Caleb, maybe you could go first, and then, Angelo, you can bring us home. Basically, what's next for Loft Orbital is more satellites and a greater push towards infrastructure; our mission is to make space simple for our customers and for everyone, and we're scaling the company like crazy now, making that happen. It's an extremely exciting time to be in this company and in this industry as a whole, because there are so many interesting applications out there, so many cool ways of leveraging space that people are taking advantage of, and with companies like SpaceX and the now rapidly lowering cost of launch, it's just a really exciting place to be. We're launching more satellites, we're scaling up for some constellations, and our ground system has to be improved to match. So there's a lot of improvements that we're working on to really scale up our control software to be best in class and make it capable of handling such a large workload. So... You guys hiring? We are absolutely hiring. We have positions all over the company: we need software engineers, we need people who do more aerospace-specific stuff. So absolutely, I'd encourage anyone to check out the Loft Orbital website if this is at all interesting. All right, Angelo, bring us home. Yeah, so what's next for us is really getting this telescope working and collecting data. And when that happens, it's going to be just a deluge of data coming out of this camera, and handling all that data is going to be really challenging. Yeah, I want to be here for that. I'm looking forward to next year; we have an important milestone, which is our commissioning camera, a simplified version of the full camera.
It's going to be on-sky, and so, yeah, most of the system has to be working by then. Nice. All right, guys, with that, we're going to end it. Thank you so much. Really fascinating, and thanks to InfluxDB for making this possible. Really groundbreaking stuff, enabling value creation at the edge, in the cloud, and of course beyond, in space. Really transformational work that you guys are doing, so congratulations, and I really appreciate the broader community. I can't wait to see what comes next from this entire ecosystem. Now, in a moment, I'll be back to wrap up. This is Dave Vellante and you're watching theCUBE, the leader in high-tech enterprise coverage. Welcome. Telegraf is a popular open source data collection agent. Telegraf collects data from hundreds of systems like IoT sensors, cloud deployments and enterprise applications. It's used by everyone from individual developers and hobbyists to large corporate teams. The Telegraf project has a very welcoming and active open source community. Learn how to get involved by visiting the Telegraf GitHub page, whether you want to contribute code, improve documentation, participate in testing, or just show what you're doing with Telegraf. We'd love to hear what you're building. Thanks for watching Moving the World with InfluxDB, made possible by InfluxData. I hope you learned some things and are inspired to look deeper into where time series databases might fit into your environment. If you're dealing with large and/or fast data volumes, you want to scale cost-effectively with the highest performance, and you're analyzing metrics and data over time, time series databases just might be a great fit for you. Try InfluxDB out. You can start with a free cloud account by clicking on the link in the resources below. Remember, all these recordings are going to be available on demand at thecube.net and influxdata.com, so check those out and poke around InfluxData.
They are the folks behind InfluxDB and one of the leaders in the space. We hope you enjoyed the program. This is Dave Vellante for theCUBE. We'll see you soon.