Okay, good morning, everybody. Thank you for joining me bright and early on Thursday at almost 11, and let's go ahead and talk about some digital signage. My name is Mike Madison. I'm a technical architect at Acquia, and I'm also an organizer for Drupal GovCon. Just a quick shout-out: if you happen to live in the D.C. area, or if you're involved in state, local, federal, or any other form of government, session submission for GovCon is now open. It's a free conference that we put on at the National Institutes of Health every year, so please do check that out.

So, today's agenda. We're really going to focus on the data pipeline portion of this digital signage concept, because getting data into a sign is not as trivial as just going out and building a traditional website. We're going to talk about how to test that pipeline, we're going to spend some time talking about how it integrates with Drupal, and we're going to think about how you can power digital experiences more broadly. Even though this can get down into some pretty deep object-oriented coding, we're going to keep this at a beginner-to-intermediate level, so we're not going to focus so much on the code today. We're really going to treat this more like an architectural discussion: things to look out for, how to approach the problem, how to test the problem, et cetera.

And I do always like to give a disclaimer here. We're at a DrupalCon, we're talking about using Drupal for this, so of course PHP is in the picture, but most of what we're talking about today could be built with just about any framework. If you'd rather do this with Node, or with pick-your-poison, you totally can. This is not only a PHP talk.

A lot of the examples today come down to exactly this: if you've been in New York recently, you may have noticed they've installed a lot of new countdown clocks on their platforms. Those are powered by Drupal 8, and the example we're going to look at today is the underlying framework for how those signs operate on a day-to-day basis. If you're keeping track, something like 8 million people use the subway in New York every day, and all of them now rely on a Drupal-powered system to know when to stand on the platform and when not to. So, pretty exciting.

If you've been paying attention this week, you've probably heard some buzzwords: decoupled Drupal, headless Drupal, digital experience. These are all big things right now, and everybody wants to capitalize on some combination of them. One of the most important takeaways is that, yes, React is cool, and yes, you can build a really flashy app. There are awesome talks on that this week; go watch them if you didn't see them in person. But none of these things work without a consistent method of delivering data. Just because you decouple from Drupal doesn't mean you don't still have to get the stuff out of Drupal and into whatever you just decoupled. If you get rid of Twig, okay, fine, but you still have to have some method of connecting your data to your presentation layer. Whether that's Facebook Messenger or a phone app or a sign, you can't forget about that part. And with signs in particular, we're not just talking about a digital experience; we're talking about a persistent digital experience.
And that's a whole different beast, and we'll talk about why in a few minutes. But if you're going to have a persistent digital experience, something that you're literally going to hang up and leave for years at a time, the only way that's going to work is if you have consistent data flowing into that experience. I'm not going to spend a lot of time telling you why you should have a pipeline; we can just move on from that. But let's talk a little bit about what a pipeline does.

A pipeline is there to automatically move data from one or more systems to whatever your experience is. The pipeline is there to limit human interaction: in the case of the New York subway system, those signs get data payloads something like 20 times a minute, and you don't want a person involved in that. They're only going to slow it down. You want to streamline and simplify data collection as much as possible (see the previous statement about how fast they often need to be). The pipeline needs to be able to broadcast data to multiple locations simultaneously, and it also needs to be smart enough not to over-send data. If you have 2,000 signs, each of those signs has a very specific context of what it needs to be showing. There's no reason to send 2,000 signs' worth of data to each sign; you just want to send the smallest data payload possible. The pipeline should also limit who can actually push data out to your devices, and it should help deal with fault tolerance. If a data system goes down, say you've got a Twitter widget on your sign and Twitter goes down, or the internet goes down, you want some degree of fault tolerance. If something goes wrong with your device, people are going to notice it and take pictures of it. If somebody hacks your device, people are going to notice it and take pictures of it. So anything you can do, even in the back end as you're building this pipeline, to make sure you don't end up on Twitter for the wrong reasons is important.

Architecturally, there are three big questions that you should consider and spend some time with before anything else happens. What's coming into the pipeline? What is the pipeline powering, or where is that data going on the other side? And how is whatever's on the receiving end going to display whatever you're sending it?

Let's start with that first question and talk a little bit about data. Basic questions: What is it? How often does it update? Can you get it from more than one place? What format is it in? Do you have to authenticate to get it? How big is the data dump? What are the API limitations? Et cetera. These are pretty standard architectural questions. In the case of the MTA, we're talking about mass transit data: a city the size of New York's worth of train data every couple of seconds. There are multiple places to get that data, so if the primary server dies, we have redundancies. That's good. Thankfully, it's in JSON. We do, however, have to authenticate to get to it. And thank God the entire JSON string is less than 10 megabytes. Even though it updates really frequently and there's a lot of data in it, it's not gigabytes and gigabytes of data, so that's something.

So, thinking about the impact of this example data: this first point is one where, if I could have used the blink tag here, I would have. It's rare with a website, or anything in our field, that this comes up.
But when you're dealing with transit data, if you screw it up, somebody could die. I was driving in Portland, Oregon a couple of months ago, and they actually have digital speed limit signs on one of the interstates coming right down into the city. There's a blind corner with a digital speed limit sign right there, and if that sign were to say 70 miles an hour, you'd go around that corner into stopped traffic with no way to stop. So for transit data, or any other data where people are relying on that information to make life-altering decisions like how fast to drive, I can't stress this enough: it has to be timely and it has to be secure.

The fact that we're getting this data every couple of seconds means that whatever we build has to be lightning fast. Again, we're pushing this payload 20 times a minute, and we could do it faster, which means every portion of this pipeline has to be incredibly streamlined. Now, if you're building something like an announcement system for a college campus, or some sort of retail management system that pushes data out to cell phones, speed is perhaps not as critical. And finally, the fact that we've got multiple URLs with authenticated requests means we have to be cautious that whatever we're doing to keep this secure and robust doesn't have a direct impact on our performance. So we need to be smart about how we're going to access the data and work with it.

So let's think about the digital experiences we might be powering with this pipeline. (I'm terrified of Furbys.) Again, some basic questions. What is the digital experience that we're powering? Can people interact with it? This is perhaps the most important question to think about: is this a kiosk that you need to power with this pipeline, where somebody's still going to be tapping and clicking on it, or is it just going to be hanging there, completely stationary? It's not that one of those is harder than the other; it's just a completely different set of problems if you also have to take user interaction into consideration.

Is there any personalization or contextualization? If you're paying attention to the world of marketing and digital technology right now, it's all about how much we can personalize a website to you based on your click history and your search history. That kind of goes out the window with a sign, but that doesn't mean people don't want to personalize it. There's still context: is this sign hanging on an uptown platform or a downtown platform? Is it hanging in a city where people predominantly speak Spanish versus English? What can you do to provide context even with a stationary sign?

How often does it need to update? We already talked about this a little bit in the transit example, but it's not always a couple of seconds; it's going to vary. Is it language dependent? We just mentioned that. And is it ADA compliant? Again, I can't stress this enough: accessibility is critically important for web design in this modern era, and it shouldn't go out the window just because we're talking about a sign or some other digital experience. So think about how you can make sure that this device you're hanging on a platform, or at the entrance to a building, is still accessible to those with disabilities. That's really important to think about, even this early in the process.
Back to our transit example: we're talking about an arrival sign that people can't interact with, thank God. Is there any personalization or contextualization? In this case, yes; the contextualization is based on physical location. If I'm walking into a train station, I probably would like to see arrivals for the station that I'm in, not Times Square. It's just common sense. We know that sign needs an update every couple of seconds. It's not language dependent, although it could be. And it absolutely has to be ADA compliant, because it's a government agency.

The impact of some of that: old data on an arrival sign is not useful. There's real-world evidence, most likely right there, of whether the sign is right or not. So with an inaccurate arrival sign, people are going to notice, they're going to take pictures, they're going to tweet it. And they do take pictures, and they do tweet it. So making sure it's accurate is really important. Again, we talked about displaying the correct location. And because there's no interaction, the sign just has to work, all the time.

We already talked about these, but I will mention one more thing about ADA compliance: for a sign, a lot of what that means is that everything is bigger than you think it probably should be, and it severely limits how many cool things you can shove onto one sign at any given time. I actually find that really refreshing for sign design, but there's a constant battle between how much stuff we can show and how far away from the sign you can stand and still be able to read all of it.

So let's talk just a little bit about the front end. And that's it; we're not going to talk a lot about front end. But thinking about how you're going to display this data on the actual device itself is a really critical architectural piece of everything we're talking about here. If you want to build this thing with Drupal's native Twig front end, it's going to be really difficult to do any sort of real-time updates into a normal Twig front end. If you're going to completely or partially decouple from Drupal and build it with Angular or React or whatever, that's fine, but as you may or may not know, that adds a pretty high degree of "well, now we have to go build a React app to power this thing." And just as many of those other talks don't spend a lot of time on the back-end pipeline process, we're not going to go any deeper into the front end. But I would urge you, if you're planning on building a signage system, to think very, very carefully about how you want to handle the front end. Our system happens to use React, we've been very pleased with the results, and we'll talk a little more about that later in the talk.

Cool. So, high-fidelity graphics right here, ladies and gentlemen. We know that we need to get data from somewhere and get it onto a sign, so let's think about how we're actually going to start architecting this solution. The pipeline needs to go out and get data. It needs to somehow organize that data into something useful. And then it needs to send the data out to the sign, where, in our case, the React app will receive that data and handle the entire presentation of whatever we just sent it. Something that we haven't talked about at all yet is how we're actually going to organize this data.
So we're getting all this stuff into our pipeline that we're supposed to break up and send to the right signs, but how do we know where it's going to go? We also haven't talked at all about how we're actually going to deliver data to the signs. Let's assume there's internet, but there still has to be something connecting the data coming out of our pipeline to the React app that actually consumes it. So we still have a couple of problems to deal with.

Thankfully, we do have some basic information that we can put into this equation. We know that to get data, we have to authenticate. We can try to get data from the first source, and if that source isn't available, we can fail over to any number of additional redundant sources. We need to validate that data to make sure it's properly formatted, so we're not accidentally going to break a sign because of a bad semicolon or something, and to make sure it's from a trusted source. One of the easiest ways to hack something is not to attack the thing itself but to hack the original data source, because then the malicious data flows through a trusted channel. So you want to make sure the data you're getting is not only properly formatted, but also the sort of data you'd expect.

Then there's categorizing data. We'll talk about this some more, but as that entire payload comes in, we need to break it up into smaller pieces: find each sign's data (each Furby's data) and reduce it to as small a piece as you possibly can before you try to send it out. In our first version of this system, we had something like 150 signs each going out and pulling its own data. That means all 150 signs had all the data for each of the other 149 signs, and each sign had to go find its own stuff inside that mess of other data. Does that work? Yes. Is it scalable? No. By having all of that happen in the data pipeline instead, you're really cutting down on how much work the front end has to do. And then finally, sending the data: we need to re-authenticate, obviously, because we want to stay secure, and then we send the data.

These three pieces thankfully are not super hard. We're going to get data, we're going to break it up, and we're going to send it. That's easy. If we were to slightly expand my cool little graphic, it might look something like this. Then you remember that, oh shoot, we might actually want to show more than one type of data on the sign at a time, so it might look something like this. And then you go, oh right, we have more than one sign, so it might look something like this. And even though the pipeline steps stay the same, and the providers are all built on the same underlying classes, each provider can be heavily customized. Say the first data provider gives us that transit data in JSON and needs to update very, very rapidly. The second provider gives us weather data in XML, and it's rate limited, so we can only ping it every 10 minutes or so. The last provider is messaging data from Drupal; that data also comes out in JSON, but we don't need to update it nearly as frequently. So the same methodologies can be applied to many different types of data providers. The categorizer can look at that data somewhat abstractly and understand, in the context of our own signage system, which piece goes where. Then we can send the data out and have the appropriate data show up on the appropriate sign.
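To make that first leg concrete, here's a minimal sketch of what a provider with failover and validation might look like. Everything here is hypothetical (the interface name, the endpoint URLs, the payload shape); it illustrates the approach, not the MTA's actual code:

```php
<?php

// Hypothetical sketch of a data provider with redundant sources.
// Names, endpoints, and validation rules are illustrative only.
interface DataProviderInterface {
  public function fetch(): array;
}

class TransitDataProvider implements DataProviderInterface {

  /** @var string[] Primary source first, then redundant fallbacks. */
  private $endpoints = [
    'https://feed-primary.example.com/arrivals',
    'https://feed-backup.example.com/arrivals',
  ];

  private $apiKey;

  public function __construct(string $apiKey) {
    $this->apiKey = $apiKey;
  }

  public function fetch(): array {
    foreach ($this->endpoints as $url) {
      $raw = $this->request($url);
      // Source down or payload suspect? Fail over to the next one.
      if ($raw !== NULL && $this->validate($raw)) {
        return json_decode($raw, TRUE);
      }
    }
    throw new \RuntimeException('All transit data sources failed.');
  }

  private function request(string $url): ?string {
    $context = stream_context_create([
      'http' => [
        'header' => 'Authorization: Bearer ' . $this->apiKey,
        'timeout' => 2, // The pipeline runs every few seconds; fail fast.
      ],
    ]);
    $raw = @file_get_contents($url, FALSE, $context);
    return $raw === FALSE ? NULL : $raw;
  }

  private function validate(string $raw): bool {
    // Properly formatted, and shaped like the data we expect.
    $data = json_decode($raw, TRUE);
    return is_array($data) && isset($data['arrivals']);
  }

}
```

The weather provider and the Drupal messaging provider would implement the same interface with their own endpoints, formats, and refresh intervals.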
Again, not too revolutionary, but we're definitely starting to get a little more complicated. When we add more than one sign into this equation, there are a lot of little details that all of a sudden make this notion of categorizing and breaking up the data infinitely more complicated. If you have 100 signs, or 2,000 signs, or 100,000 signs, we now have to keep track of how big the sign is that we're sending data to. That screen versus this screen versus this screen could all be showing the same data, but we can't show it the same way on a phone that we would on a big projection screen in a room like this. Where is the sign physically located? We already touched on this a little: is the sign in a mezzanine, where you walk into the station and it shows all of the potential platforms down below, or is it on an arrival platform, physically bounded by two different tracks? What direction is it facing? This is a fun one. On a lot of platforms they've actually installed back-to-back signs, showing the exact same data, but if you're standing facing one way the train's coming in on your right, and facing the other way the same train's coming in on your left. There's no way to know that short of having physically seen the sign. And then finally, the IP address of the sign.

This is where Drupal can really, really help. Yes, you can totally inventory all this stuff in a text file if you want to. Pick your poison; you can do whatever you want. But Drupal gives you so much power, and we're not even talking content here; we're talking about managing your signage system. Remember all the cool stuff that you would usually pick Drupal for, but remember that Drupal is also a framework that you can interact with, that you can get data in and out of. If you need to manage this signage system and provide context, why not use Drupal to do that?

So in the context of our arrival system, we have route groups, which contain routes; stations have different routes running through them; stations contain platforms; and then there are signs installed in stations. Imagine if we set up content types in Drupal with entity references to define this structure. The data coming in from our arrivals provider only tells us, at the station and platform level, what data is coming; the data dump doesn't tell us which sign to send the data to. So we can use this relationship to work from the data that we do have and break it up into small enough pieces to get it out to the signs. You can use Drupal's authoring system to go in and configure signs, platforms, et cetera. In fact, we have some really fun little options: we can change the resolution, you can add arrows, and there's a new rotation feature we rolled out about a month ago that will show something like five arrivals continuously instead of just the two that would normally fit. And we've actually defined a data provider for all of this configuration. So if we look at this pipeline again, we use the same pipeline that sends arrival data to the signs to send configuration to the signs as well; it's just a different type of provider. Using Drupal in this way gives us a highly dynamic method of controlling our own categorization system, meaning that any time something else comes online, if we add another hundred signs, we just put them in Drupal and our data pipeline is immediately aware of them.
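As a rough sketch of what that buys you: once signs, platforms, and stations are content types with entity references, finding which signs should receive a given platform's payload is just an entity query. The content type and field machine names here are hypothetical; the real structure will differ:

```php
<?php

/**
 * Hypothetical lookup: which signs are installed for a given platform?
 * Content type and field machine names are illustrative only.
 */
function signage_get_signs_for_platform($platform_nid) {
  $storage = \Drupal::entityTypeManager()->getStorage('node');
  $sign_ids = $storage->getQuery()
    ->condition('type', 'sign')
    ->condition('field_platform', $platform_nid)
    ->condition('status', 1)
    ->execute();

  $signs = [];
  foreach ($storage->loadMultiple($sign_ids) as $sign) {
    // Each sign node carries the context the categorizer needs.
    $signs[] = [
      'id' => $sign->id(),
      'resolution' => $sign->get('field_resolution')->value,
      'facing' => $sign->get('field_facing')->value,
      'ip' => $sign->get('field_ip_address')->value,
    ];
  }
  return $signs;
}
```

The categorizer can then match the station and platform markers in the incoming feed against these entities to build each sign's minimal payload.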
And because we've used our same provider system to send that data out to the signs, we don't have to deploy to make changes to our own infrastructure, which is really nice. So even though the pipeline itself doesn't look any different, we've now really gotten into the meat of how this categorizer works. If we've got all of this information in Drupal, we can make Drupal an integral part of getting data from that initial source out to the signs. We can use markers in the data, plus the Drupal-provided entities, to do the winnowing down of that data into smaller pieces.

Let's talk about performance here real quick. Anybody who's tried to do complicated entity queries in Drupal 8 knows that they're not super fast, particularly if you're talking about thousands and thousands of nodes of interconnected data. That can be very slow. One of the very first requirements we talked about was how quickly this data has to move, and even though we've just introduced a really slick way of managing this categorization process, we've also introduced a major problem for meeting our performance requirements: there's just no way Drupal is going to be able to provide all of the data we need for categorization fast enough to keep the pipeline moving. So what we do is cache everything from the Drupal side and make it available to the pipeline. We only regenerate that cache when structural elements change, which could be very rapid for a day or two while they're installing new signs, and then it might not change again for two months. That cache could be file-based, database-based, or memcache-based. It really doesn't matter what it is, as long as you don't have to directly bootstrap and interact with Drupal every time you run the pipeline.

The other question to ask in terms of performance is: do we actually need to do anything with this data? Remember, we said we're getting data dumps every couple of seconds, but our signs show train arrivals in minutes. So theoretically, we could get 20 or 30 updates to one sign's arrival that we don't actually need to send. One of the easiest ways to speed up a process is to get rid of wasted action, so we introduced the idea of a change checker. Once we get through the categorizer, we check whether the payload we just found is the same as the payload we sent the last time the pipeline ran. If it is, we don't send it. If it isn't, we go ahead and send it. This is both cost-saving and a significant speedup, because once you've categorized the data, if it hasn't changed, you don't need to deal with the authentication and the actual sending of the data.

So let's talk about the sending of the data. We've been very abstract about this, and I mentioned it as a problem a few minutes ago. On a normal website, a request usually only occurs when a user clicks on something, and even on more modern sites like Facebook or Twitter, a lot of the navigational elements are still based on a click or some sort of user interaction. With a sign, we're not getting that. We're talking about some sort of long-term connection that needs to be able to just receive data over time, without somebody having to constantly reload the sign to make sure the right data is showing up.
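(Backing up one step: the change checker itself can be tiny. Here's a minimal sketch, assuming hypothetical names and an in-memory fingerprint store; in practice that store would live in the same file, database, or memcache layer as the cached Drupal structure so it survives between runs.)

```php
<?php

/**
 * Hypothetical change checker: suppress sends when a sign's payload
 * is identical to what went out on the last pipeline run.
 */
class ChangeChecker {

  /** @var string[] Last-sent payload hashes, keyed by sign ID. */
  private $lastSent = [];

  public function hasChanged(string $signId, array $payload): bool {
    // Hash the payload so we keep only a small fingerprint per sign.
    $hash = hash('sha256', json_encode($payload));
    if (isset($this->lastSent[$signId]) && $this->lastSent[$signId] === $hash) {
      return FALSE; // Same payload as last run; skip the send entirely.
    }
    $this->lastSent[$signId] = $hash;
    return TRUE;
  }

}
```

Now, back to that delivery problem.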
We're actually using the MQTT protocol, which is something I'd never heard of before. Basically, it sits on top of standard TCP/IP, and it's designed for exactly this: a long-standing, long-open connection, with a publisher-subscriber methodology that perfectly fits what we need. It does, unfortunately, require a message broker, but many hosting organizations provide exactly this model, where you've got some sort of publisher sending data up to a broker service, and that broker then sends the data out to whatever subscribed to it. And I swiped this particular image because I really want to stress those last two yellow items on the right. We've been talking about signs, but remember, we're also talking about digital experiences. There's absolutely no reason in the world that you couldn't have a sign on a subway platform, a website, and a mobile app all receiving this exact same data payload at exactly the same time. If you're doing all of the work in the data pipeline and pushing it up to the message broker, then as long as the website and the phone can also subscribe to the same message broker, they're going to get the same data payload at exactly the same instant the sign on the platform does. Now, let's be honest, your website probably doesn't need to be up to date to the exact second the way a sign on a platform might, but you're already doing all this work in the pipeline; there's no reason you can't push the same data to many different places.

We used AWS for this. We might be a little biased, and there are other companies that provide this, but we're using their Internet of Things service to facilitate it. Not every hosting company is going to advertise an MQTT broker service, but many of them that have an Internet of Things service do have the MQTT protocol underneath. One gotcha that we found out, maybe four months into the project, is that AWS forcibly disconnects MQTT connections every 24 hours. So we have a little JavaScript service, not even a React service, running on every sign that monitors its own connection, and if the sign disconnects, it just re-authenticates and continues on. At the end of the day, that's actually really nice, because if the wireless goes out, the sign's going to disconnect. If the power goes out, the sign's going to disconnect. If we hit the 24-hour mark, the sign's going to disconnect. So this ended up being a really good piece of fault-tolerant code to have in place anyway, but that's the one big cautionary tale I'd tell you about using even something like MQTT. You start dealing with some weird stuff with signs. How often do you have to worry about something timing out after 24 hours? Well, with a sign, you do.

So we've now added this little IoT box here, where our sender authenticates into IoT and IoT sends data out to the signs. A fun side effect of doing this: since all this data is now running through AWS, we can archive all of it there. Even though I don't have a slide on this, we actually take all of the data that we send into IoT and archive it into an S3 bucket. So if we ever need to go back at some point in the future and do an audit, to make sure that what we think is showing on a sign is actually what was showing on the sign, we just have that.
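The publishing side is not much code either. Here's a minimal sketch of a sender, using the open-source php-mqtt/client library as a stand-in; the topic scheme and broker host are hypothetical, and a real AWS IoT connection would also need TLS certificate settings, which are elided here:

```php
<?php

use PhpMqtt\Client\MqttClient;

// Hypothetical sender: publish each sign's minimal payload to its own
// topic, so every device only subscribes to the data it actually needs.
$client = new MqttClient('broker.example.com', 8883, 'signage-pipeline');
$client->connect(); // A real AWS IoT connection needs TLS settings here.

foreach ($payloadsBySign as $signId => $payload) {
  // QoS 1: at-least-once delivery for payloads that passed the
  // change checker; unchanged payloads never make it this far.
  $client->publish("signs/{$signId}/arrivals", json_encode($payload), 1);
}

$client->disconnect();
```

A website or a phone app subscribing to the same topics would receive the identical payload at the same instant. The archiving side, by contrast, needed essentially no code at all.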
And sure, we could have built something, but why? When you can just turn that service on, it's significantly faster.

So let's talk a little bit about testing. I'm one of the maintainers of BLT. If you're not familiar with BLT, it's an automation tool for Drupal 8. The reason I bring it up is that if you don't frequently set up tools like PHPUnit and Behat, getting them up and running, particularly locally, can sometimes be a challenge, especially with a distributed team, and BLT helps speed that up a bit. The reason I'm talking about testing so early here is that this is not just a normal system. We're talking about a lot of code, with a lot of moving pieces, that communicates with a lot of different services. Having unit tests and functional tests as early in this process as possible is really just going to save you grief. I guarantee it.

For our system, we actually have several different layers of testing. We use PHPUnit for unit testing, of course. At that level, we're making sure that the methods in our classes work as expected, and we're trying to ensure that classes and components function by themselves, in a vacuum, without anything else. That's pretty obvious. We also use PHPUnit in a not-strictly-unit-testing way, where we actually bootstrap Drupal and execute our scripts and our caching to make sure those are working as expected. Doing that at the PHPUnit level is kind of slick, because it takes everything out of context: it's purely focused on the PHP code, making sure it operates the way we expect it to. We use Behat for functional testing, which shouldn't be a surprise if you're familiar with Behat. We make sure that fields and entities are functional; we test our roles, permissions, workflows, et cetera. Again, not too revolutionary.

But then we do full pipeline testing with Behat, which I think is a little revolutionary. We use our feature context file to execute elements of our data pipeline, so that as we interact with Drupal and get data from our simulators, we can make sure our pipeline is running exactly the way we'd expect and want it to. Our Behat tests actually send data to IoT and then query IoT to get data back, to verify that what our pipeline is parsing and sending is what we expect it to be parsing and sending. And finally, we do some front-end testing as well; I don't have a slide on it, but it's fairly robust. I'd warn you that this testing framework is slow, but it is fierce. Our testing pass on this project takes 20 to 30 minutes. But the fact that we know it exercises IoT communication and tests all of our data providers is huge, huge, huge. If we do a Drupal security update, or any sort of update that might break our system, we don't have to worry as much about all of the down-in-the-weeds pieces breaking, because hopefully, hopefully, hopefully, we have all of that covered with our automated tests.

Okay, so this has not been a ton of Drupal-focused stuff, right? We've been talking about this as a data pipeline. Drupal's pretty involved, but this hasn't really been a Drupal talk, and I get that. But let's step back for a moment.
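(Before we do, one concrete example: a unit-level PHPUnit test against the hypothetical ChangeChecker sketched earlier might look like this.)

```php
<?php

use PHPUnit\Framework\TestCase;

/**
 * Unit test for the hypothetical ChangeChecker sketched earlier:
 * unchanged payloads must be suppressed, changed ones must go out.
 */
class ChangeCheckerTest extends TestCase {

  public function testUnchangedPayloadIsNotResent() {
    $checker = new ChangeChecker();
    $payload = ['arrivals' => [['route' => 'A', 'minutes' => 4]]];

    // First run: the payload is new, so it should be sent.
    $this->assertTrue($checker->hasChanged('sign-42', $payload));

    // Second run with identical data: the send should be suppressed.
    $this->assertFalse($checker->hasChanged('sign-42', $payload));

    // A real change (4 minutes becomes 3) should go out again.
    $payload['arrivals'][0]['minutes'] = 3;
    $this->assertTrue($checker->hasChanged('sign-42', $payload));
  }

}
```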
This pipeline would not work without Drupal, just because of that categorization process. We also use Drush to actually execute and run this queue; all of our various providers, all of the pieces of this pipeline, are executed via Drush. I would also argue that, in this picture, Drupal itself can and should be used as a data provider, which is the last thought I'll leave you with today. If you're using Drupal for all of the things Drupal's really good for (workflow, permissions, WYSIWYG, et cetera), there's absolutely no reason in the world that you can't pipe data out of Drupal itself, or out of multiple Drupal systems, directly into the signage system. Whether that's the same Drupal site we talked about earlier for managing the structure, or a different Drupal site that maybe you're using for your company's forward-facing news or your company's intranet, there's still a lot of benefit to using Drupal as a data provider itself.

So I wanted to leave some time for questions, and please do use the mic if you can, so the recording will pick them up. Yeah, please.

Great, this was a nice follow-up to last year's in Baltimore. I had a question about security compliance and standards for data exchange, and especially privacy, given the whole new Zuckerberg deal. Are there any governing bodies or specific standards, like through the IAB, that you have to subscribe to if you're using these in public?

So that's a really good question, and I don't know off the top of my head. I have to assume that different organizations are going to have different requirements and different standards for that. In our system, the MTA has very specific security requirements around who can subscribe to the data, what they can do with it, how it can display, et cetera. I'm sure there are governing bodies, but I don't know what they are off the top of my head, I'm sorry.

Yeah, and a quick follow-up, if I may. You said that there isn't much context awareness, like whether there's a subject viewing the sign at the station platform. Let's say there's an IR sensor that can tell whether or not there are potential customers for the next train arrival. Can that be uploaded backwards?

Absolutely. That's one of the benefits of MQTT: even though our system doesn't really do this, you can absolutely have bi-directional communication through those protocols. So you could, if you so desired, pipe data back up into IoT and then back down into Drupal and further customize that data. Yes, absolutely. Okay, next year you're gonna cover that. Hopefully. All right, thanks. Yes. Thank you.

I noticed you had the resolution configuration in Drupal, and I know that's something a web browser knows about, but it's not necessarily something a web browser can set. Yes. That's more of like an X Window thing. How are you guys doing that on the local hardware side?

Yeah, so the interesting thing about this hardware is that it's not a normal browser. I can't go into too much detail there, but it's a custom browser compilation from their hardware company, and because of that, we found that mileage varies a little bit in terms of detection of the actual browser size. Okay. So that's why we have that set, just as a redundancy, to be sure that the viewports we're defining and the content we're sending absolutely show properly. Okay.

I have a question about accessibility. Sure. Since there's no way for somebody to interact with the digital sign.
Yeah. If somebody walks up to the sign, how do you make it accessible? Do you have, like, a set of headphones with audio?

That's a great question. When you talk about accessibility, there's the whole gamut of accessibility, right? In this case, what we're doing is ensuring that the presentation layer is of the appropriate size, so all the text has to be three inches tall, et cetera, et cetera. There's no audio or other components at this time, but we are talking about extending it to do audio in the future. Okay.

Hi, we have a little bit different application: an indoor amateur sports plex. Oh, cool. Every field has two benches side by side, and between the two benches is a back-to-back display of the team rosters. One team's checking to make sure there are no ringers on the other team, that people are proper, et cetera. Right now what we're using are TV sets with a Raspberry Pi, connected with VNC back to a control station that tells it which game and which teams are playing. I'm just wondering, I missed what the hardware is that you're using. Yeah, and I didn't go into really any detail. I'm just wondering, moving forward, looking at the design you showed in your one complex slide there, everything's on AWS, so I'm wondering if we should look at using the IoT and that type of thing. But I was also curious about the hardware.

Sure. What I can tell you about the hardware is that, other than the screen itself, there's an underlying operating system that's UNIX- or Linux-based, and that operating system is running a Chromium browser. That's what we're running. Beyond that, they're basically working with a company that provides a proprietary solution. And at the end of the day, whether you have a laptop locked in a box, a Raspberry Pi, or a proprietary solution, I'm not sure it matters a great deal, as long as it stays up, it stays connected, you can remotely administer it, and you have a consistent hardware and software mapping between the various devices.

Well, how much do those screen computer things cost? I have no idea, I'm sorry. Probably more than I'd be able to afford. Yeah, displays are expensive.

Curious if you've ever done any testing, for these UNIX-based systems, on whether a browser with JavaScript would be better than, say, Python serving up the view. Basically, is JavaScript the best solution for that?

So I don't know that I can speak to whether it's the best solution. You might notice that I had a very small number of slides on the front end. For us, the particular hardware solution that was purchased was geared toward having a browser open and running the application. In that context, I would say we made the best of the situation we had, which is why we did the React app that was intended to run persistently in the browser. Again, though, your mileage may vary based on the hardware you're running. If your target's an app and not actually a browser, if you wanted to do it with a Python thing, there's no reason you couldn't.
As long as, again, it comes back to that subscription: as long as whatever you're building can get the data you're piping into it and handle the display, I don't think you should limit yourself to just the couple of technologies I mentioned today.

The stuff you're doing with MQTT, you said it checks the connection, and that's separate from React. But with React and the tools available there, is there an MQTT module or something available that we can use?

Yeah, there are MQTT clients that you can install as JavaScript libraries to handle all of that communication. But honestly, it's a pretty lightweight protocol; you need to be able to do basic SSL communication. I'm trying to remember if it's an actual React library or just a JavaScript library, but the actual authentication on the front end is very lightweight, just to subscribe and get the data. The thing that's much more complicated is, now that you've got this data, what are you going to do with it in the presentation layer?

You also, sorry, you talked about different sizes. Is the decision for how it's displayed made at the device? Yes. And if that's the case, are you pushing the same React code to all your devices, big and small?

Yes. We didn't really get into this, but I'm happy to talk about it. I would describe what we've built as a nearly headless system; it's not a completely separate decoupled application. Drupal still serves out the actual authentication when the signs try to pull up the site, and then they pull up the actual React app from there. We could deploy different versions of the front end to each sign, but we basically serve up the same front end everywhere, and then the sign says: based on the resolution, based on the data I'm getting, and based on the configuration I've received through the pipeline, I'm going to show the data in this way. Everything happens down at the sign level, and everything can be changed at the Drupal level to affect what's being done at the sign level. You're pushing configuration to them as well. Yeah. Dude, this is awesome, this is great. Thank you. No problem.

My question was basically just, do you have any recommendations if somebody's not trying to do something as elaborate as y'all were doing, maybe just a simple kind of signage thing? Are there a few modules, or stuff where you can maybe get away with just using Drupal?

Yeah, so honestly, Drupal 8 comes with everything you need out of the box to make it work. You need a view that's going to put the data out in some sort of JSON format. The biggest thing that you're not going to get out of the box with Drupal by itself is any sort of front end, so there's still going to have to be some degree of custom work there, plus the ability to actually move that data. Drupal can serve up data all day long, but if you need something that will periodically, automatically go in, get the data out of Drupal, and push it out to your signs, that's something you'll have to do somewhat custom, and I doubt you're going to find a contrib module for it. But again, you need a cron that can get the data and push it someplace.
And as long as you can write a hook_cron, you should have, I think, most of the tools you need to do a simpler version of this with nothing but Drupal core on D8. All right, thank you.

Yeah, Yvette. Hi, my question was about the authentication scheme that you're using. Are you using OAuth, token-based, certificate-based?

So each of the different providers that we have uses a slightly different authentication format. For the actual train data, because it has to be lightning fast, we're actually just whitelisting our production systems. By IP address? IP whitelisting, yeah, so that the world at large cannot communicate with the servers, but we can. In other cases, we're doing just an API request, like to get the weather data, so that's an SSL request. And since, in this case, Drupal happens to be running both all of its own providers and the pipeline on the same web server, it's just Drush reaching into its own database at that point. So we didn't have to mess around with OAuth or some of the other schemas, but honestly, any of them would work. The biggest thing is, I would caution against using something like OAuth, something that takes too many hops, if you're trying to move really, really fast. Which, again, is why we ended up doing the IP whitelisting for the train data: it was about the fastest way we could come up with to keep it secure but moving. Okay, thanks. Yeah, you bet.

Any other questions? All right, thanks everybody. Hope you have a good last day.