It's an interesting presentation because it covers a lot of ground. It's a little shallow in places, but I hope you'll track us down during the rest of the conference if you have any specific questions. We're going to be talking about Drupal, about Opigno, and about how we integrated it with some third-party, custom-developed solutions. A little bit about us: Jeremiah and Allie, we are Advantage Labs. We used to be a larger company, and in the last three or four years we ended up being a very small one, just the two of us. And we're really happy being small, partly because we don't have to compete on big projects or take on whole major workflows. We generally work with larger projects that have interesting questions to answer. So we help with custom migrations, or with upgrades, or with a custom implementation, or with adding a very unique, bizarre module to something. We end up getting to do a really narrow piece of larger projects, and that's made us pretty happy. We collaborate with people who like to answer interesting questions, which is what we also like to do. We're also really active in our local open-source Drupal community. We've got a really solid community in the Twin Cities of Minneapolis, Minnesota. I forgot the other city. You always do. St. Paul. Minneapolis and St. Paul in Minnesota. We have a really active Drupal camp, and a really active Drupal community within our local university. About a year and a half ago, we were hanging out at our local Twin Cities Drupal camp, and somebody came to us who also likes to ask interesting questions: the Center for Research in Education and Simulation Technologies, aka CREST.
They've been working on a variety of simulation technologies and have been investigating different learning management systems. And they had an interesting question: how can we use technology to make medical training better? It's pretty unethical, or at least uncomfortable, to practice medical procedures on live human subjects. Most medical students at some point in their career practice on cadavers, but there's some scarcity there, and cadavers don't offer a whole lot of feedback. And so an industry has developed around medical simulation. If you've ever taken a CPR class, you may recognize Resusci Anne. It's great to be able to practice procedures on a device that can feel no pain. Simulation is the same for everyone, which means you can use it in a predictable environment. Everyone who's ever done a CPR class has used the same dummy in the same way, taking the same tests and getting the same kind of results. It's good to have something that consistent and repeatable, so that you can measure progress across people and develop best practices across a whole body of education. So it's good to have a repeatable, reusable simulation device that you can't really hurt and that everybody can make use of. There's a huge industry here, and honestly we were surprised by what we found when we looked around: simulators, sensors, tests, dummies, bodies, parts, bones, tissues. Whatever you could possibly imagine, somebody's thought of it. Again, it allows you a repeatable environment where you can execute these kinds of tests without actually harming anything, and develop your skills. And what DrupalCon presentation is complete without a cat picture? So for the most part, these tools have been a big help, but there's still an important question to answer.
The question is: how do you know you're not doing it wrong? You could be killing Resusci Anne every single time and you wouldn't necessarily know it. You could take one of these devices, mis-execute a procedure, and never get any feedback about it. So you need to start gathering objective assessments. A lot of these devices end up having sensors, buzzers, alarms, things that tell you when you're doing it wrong or doing it right, that measure performance: depth, accuracy, placement, any kind of objective assessment of whether the procedure is being done correctly. Then there's subjective assessment. You could do all the objective assessing in the world and you still wouldn't know you're doing it right unless somebody who knows what they're doing can look over your shoulder and say: yeah, that was right, that was wrong, you should try it this way, have you ever thought of this? That's how best practices are developed. So you need some kind of human interaction to validate that you're doing things right. And once you're actually assessing, you need to be able to store data. How is one person doing? Are they improving or not? How is a whole group of people improving? What does the bell curve look like? Who's doing well, who's doing poorly? And how can we develop best practices to educate people better and more consistently? So: objective assessments work consistently, they provide feedback, and the results are measurable. If you've played the game Operation, you know that if you do it wrong, you're going to get a buzzer. You use the tweezers to pick out the water on the knee, you hit the side of the cavity, the buzzer goes off, and you know you did it wrong.
So the next time you do it, you're going to try really hard not to hit the side of that sensor, right? You can't be limited to objective assessments, though, because you could pick up the game Operation, turn it upside down, and shake it, and successfully get all the pieces out without triggering the buzzer once. Or you could use the wrong tool, not the tweezers with the cable attached, and get all the pieces out without triggering the sensors. And it's not just malice; it's not that people are trying to game the system. If you have an incision and you're trying to learn how to find a vein, you need somebody to look over your shoulder and say: could you try it this way, or go at this other angle, or otherwise figure out how to learn it better. So it's important to have subjective assessment, or instructor vetting. Someone who is properly trained can view the procedure, grade it, offer feedback, tell you how you can improve, make sure that people are improving, and generally be available for instruction and knowledge transfer. We've had instructors forever, longer than we've had any kind of simulator at all. More and more devices have sensors and buzzers, and that technology is improving as well. But storing these assessments is kind of a new horizon, and it's important, because you need to build this body of history. So you need some kind of system to manage this content. A content management system, perhaps. I'm sure you're not surprised that we're using Drupal for this. The people we were working with at the university had actually looked at a number of other solutions before they even came to us. They had a Moodle implementation.
They had a proprietary learning management system that they built. Neither of these solutions was flexible enough; they couldn't get what they needed out of them. Drupal stores data consistently, sort of, and it's highly customizable. You can get some percentage of the way there, say 80% or 90%, and then you build what you need to bring your project home. For this particular project, that meant things like using the Services module to create REST APIs. Being able to consistently draw a line between what belongs to sensor data and what belongs to Drupal, with a clean API, and being able to build and change that efficiently, was a big deal. And there's the broad community of wonderful people. I've been part of Drupal for a long time; I was at the first conference in Antwerp. It's nice to know that you can reach out and find somebody to help you, or somebody with common interests and common goals. So they came to us because we knew about Drupal and they knew about the stuff they were working on. And they actually introduced us to a distribution called Opigno. It's a learning management system: the functionality that you need for classes and content and quizzes and passing and failing, and grouped curriculum. You could imagine how you might build something like that with Organic Groups and other tools, but it would take a lot of work. Thankfully, this learning management distribution has done that work for you. It has security, it has groups, it has courses; students are grouped by course, lessons go into those courses, and you can gather assessments and store knowledge. Really everything you actually want to store from this technology is there, except for the bridge between the technology and the content. So, a couple of built-in things with Opigno.
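The "clean line" between the device and Drupal that we just described can be sketched roughly as follows. This is an illustrative sketch only, with made-up field names (`student_id`, `trainer_id`, `events`, and so on), not the project's actual API: the LMS validates only the envelope it owns, and stores the trainer's raw event blob opaquely, because the pass/fail decision was already made on the device.

```python
# Hypothetical sketch of the device/LMS boundary. Field names are
# illustrative assumptions, not the real Services endpoint schema.

REQUIRED_FIELDS = {"student_id", "trainer_id", "start_time", "elapsed_ms", "success"}

def store_result(payload: dict) -> dict:
    """Validate the LMS-owned envelope and keep the raw session data as-is."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    record = {field: payload[field] for field in REQUIRED_FIELDS}
    # Everything else (sensor events, timings) is the trainer's domain:
    # stored verbatim, never interpreted by the LMS.
    record["session_data"] = payload.get("events", [])
    return record

result = store_result({
    "student_id": 42, "trainer_id": "scope-box-1",
    "start_time": 1400000000, "elapsed_ms": 93500, "success": False,
    "events": [{"sensor": 3, "ts": 1250}],
})
```

The point of the split is that the LMS can evolve its display and reporting without ever needing to understand each trainer's sensor semantics.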
You've got courses. Here a medical student has logged in, and these are some courses they're participating in. Those courses group lessons. It's hard to see in the screenshot, but there are individual lessons where you can have different slides of content, different curriculum videos, pretty much any type of multimedia you want. I'm actually really impressed with how robust that content handling is. Inside an individual lesson you can put any type of content you want, with these nice little back-and-forth controls and a start-lesson button. It keeps track of where you're at, it keeps track of how you're doing, and it keeps track of versioning, so you can change the lesson content and people stay pinned to the specific version they were learning from; you don't have results change as the underlying version changes. It's a surprisingly solid system for that kind of thing. The content we have here is not all that exciting, but it's a good way to store chunks of information, page through it, and keep track of it. I've been happy with that. It's also got built-in quizzes for subject-matter assessment. You can assess knowledge using a variety of multiple-choice and other question types, which is a good way to provide review of the information that's been presented. The university had developed these devices, and they wanted the objective test results from their training machines to appear as objective results in the LMS. As opposed to a yes-or-no question like "do you understand the content?", which you can't really ask, you can ask: did you kill that guy? And get a yes-or-no answer. So that was really the name of the game for us: being able to answer that question, did you kill that guy, or whatever other assessment we could do. This is Dr.
Jonathan Braman at the Department of Orthopaedic Surgery at the University of Minnesota, who has been working with these kinds of devices. He's provided a lot of feedback and participation in developing these particular boxes. And how do you like... Oh, where's the sound? Did our tech go away? I'm checking the sound settings; it's trying to go out through the HDMI. The goal of this simulator project is to make sure that orthopedic surgeons in training, who are learning how to do small-joint minimally invasive surgery, have the skills they need to move safely from the training environment into working in the operating room on real patients. In the past there were no great tools for teaching some of these skills, and they're very different from the skills we learn in open surgery. Consequently, we developed these two simulators. The first goal is visualizing something in space and triangulating with a tool that works in a different space. The second skill is manipulating objects in space, again using the arthroscope, a small-joint camera, to move things around. You practice in both of these environments bimanually, or both-handed: using the camera in both the left and the right hand, and manipulating things with the opposite hand. The combination of those skills sets the foundation for moving forward and learning more complex knee, shoulder, ankle, elbow, and wrist arthroscopy techniques. It enables us as surgical educators to teach trainees the things that can only be learned in the operating room on real patients, because they've already learned the things that can be learned outside the operating room using these simulators. Okay, so there's a variety of applications of simulators for skill building. I know you've tried to use this, and it's not all that fun to watch.
It's really hard, actually. It is a skill that you have to build. We want people to be able to work on these devices, get information about how it went, and track that data within the same system where we're providing the content and the subject-matter assessments. So this is the output of a particular exercise that one of us ran. Can you explain the chart? Sure. On this particular display, which we added, we're showing the overall results over time. You can track how a student has improved, or not, as they've gone through the exercise multiple times, which is one of the things they really wanted for these studies: watching how their trainees improve through the use of these tools. And we have other displays for an individual result, where you can see a timeline of the things that occurred during the assessment: which sensors were triggered, how long it took to perform the procedure. The first exercise, triangulation, is designed to teach the user to move around inside the simulator and manually press on objects using the left and the right hands. It requires a couple of attempts, switching the camera from the left hand to the right and back, with the probe in the opposite hand. It begins with the scope box properly attached to the monitor and the LMS activated, and with both the scope and the probe in the registers on the side of the box. You begin by placing the scope in one hand and the probe in the other and identifying which of the target lights is on. In this instance, the target light is nearest the front of the box, on the left-hand side, and consequently the target can only be hit with the scope in the right hand and the probe in the left.
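The results-over-time display described above boils down to a simple computation. Here's a minimal sketch (not the production code, and the record shape is an assumption): take a student's scored attempts, order them by timestamp, and report the score sequence plus a naive first-versus-last trend.

```python
# Illustrative sketch of the "results over time" view: attempt records
# are assumed to carry a unix timestamp and a 0-100 score.

def score_trend(attempts: list[dict]) -> dict:
    """attempts: [{'ts': unix_time, 'score': 0-100}, ...], in any order."""
    ordered = sorted(attempts, key=lambda a: a["ts"])
    scores = [a["score"] for a in ordered]
    # Naive trend: compare the first and last attempt.
    delta = scores[-1] - scores[0] if len(scores) > 1 else 0
    return {"scores": scores, "improving": delta > 0, "delta": delta}

trend = score_trend([
    {"ts": 300, "score": 70},
    {"ts": 100, "score": 0},
    {"ts": 200, "score": 40},
])
# trend["scores"] → [0, 40, 70]
```

A real display would likely use a fitted slope rather than endpoints, but the first/last delta is enough to flag whether a trainee is trending up.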
Once that target is depressed, the next light comes on, and you move the probe forward to identify where the next target is. It may be worthwhile to do this exercise in a darkened room, to make it easier to see inside the box. Now, this next target is on the top of the box, on the right-hand side, and consequently it can only be hit with the scope in the left hand and the probe in the right, moving forward and back. Now the next target is in the front of the box again, so I switch the probe to the left hand to reach this target on the floor of the simulator. That one's on the other side, so I switch back to put the probe in the right hand so I can reliably probe it, and then I move to the last target. Once the last target is hit, the LMS will notify you, and you can finish by putting the probe back in the registers, and the scope as well. You got to see one of our other colleagues in the background of the video. So, in addition to capturing those objective results, where they were hitting the targets, we can also record video during the training event, match those videos with the results, and allow an instructor to come along later, watch the videos, and make subjective assessments. This screenshot is actually from a different project, which is why it's asking somewhat off-the-wall questions about head placement, but we're able to create questions that are subjective in nature, let instructors revisit the student's test results and add additional information, and have their scores factor into the overall results for the student.
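Folding an instructor's rubric answers into the device's objective score, as described above, can be sketched like this. The weights, the rubric scale, and the function shape are all assumptions for illustration; the actual grading lives inside the Quiz module's scoring.

```python
# Hedged sketch: combine a 0-100 objective score from the trainer with
# an instructor's per-item rubric marks. Weights are assumed values.

def overall_score(objective: float, rubric: list[int], max_per_item: int = 5,
                  objective_weight: float = 0.6) -> float:
    """objective: 0-100 from the device; rubric: instructor marks, 0..max_per_item."""
    if rubric:
        subjective = 100.0 * sum(rubric) / (max_per_item * len(rubric))
    else:
        subjective = 0.0  # nothing graded by the instructor yet
    return objective_weight * objective + (1 - objective_weight) * subjective

combined = overall_score(70.0, [5, 4, 3])  # 0.6 * 70 + 0.4 * 80
```

The design point is that the subjective component can arrive later (the instructor grades asynchronously from the video), so the overall score is recomputed rather than stored once.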
So now we have objective assessment: the sensors sensed and sent in information, timed it, and did whatever metrics we needed to validate that lesson. We also have subjective assessment: instructors can sit down, watch, and decide whether students did a good job. And it's a time saver, because they don't have to sit with each and every student through multiple runs. If a whole bunch of runs objectively failed, the instructor doesn't necessarily have to watch every single one of them. So it's a really big time saver to manage it this way and have it all in one place, along with just having the raw technology available. One of the things Dr. Braman wants to do with these boxes is send sets home with medical students, so they can not only come in and perform the tasks in a classroom environment, but also take them home, practice, and develop their dexterity with switching instruments and scopes on their own time, then come back for a final assessment before they move on to working with cadavers and then the actual OR. So it's all in one place; every student has a good history associated with them. And in fact the orthopedics department at the U of M will be launching a study using these simulator boxes, probably this fall, so they'll be able to really test it in a real-world environment. Getting to the overall nuts and bolts of it all: how do we get this information from the sensors and devices into Drupal? I'll let you talk about some of that. So this
just kind of highlights the overall workflow between everything. This slide is actually talking about the airway trainer, but the workflow is the same for these trainers as well. The airway trainer is for learning how to do intubations. You have the trainer itself, which has sensors built in. It's got an Arduino at its core reading the sensor data, with libraries on it to communicate back over USB with a Google Chrome plugin running on the student's laptop: our gateway between the hardware world and the LMS. On the LMS side, we've got Opigno maintaining all of our curriculum, with the integration for this stuff, and we also have a Kurento media server, in theory; that's still kind of a work in progress, as a video store for the WebRTC video. When the student performs the task, any video from the scope or from other camera sources, depending on the test, is streamed from the plugin back to the media server, and those records get associated with the student's result in the LMS. We communicate with the plugin over REST, it communicates back with us over a REST API provided by Services, and it provides the gateway through USB HID to the actual hardware. The workflow as a student: you come into the system, you log in, you go to your course, you go through all your other curriculum material describing the exercises, you answer whatever questions there might be, pre-tests or other things in a more traditional test format, and then you get to the point of doing the exercise. You've got the boxes on the table, you're ready to go, and you click "start lesson". At that point we intercept the click with a little bit of JavaScript that launches the Chrome plugin and gives it information about the REST endpoint for the LMS. The Chrome app then comes back, hits the API, and gets some additional configuration information that the faculty person has entered: the parameters for how they
want the assessment to occur. At maximum, how long a touch can you have on a sensor before it registers an error state, that kind of information. Then the student performs the assessment. They go through, they poke the little lights off, they trigger some errors, and they hopefully don't kill the patient. When they complete, the plugin sends the video, as I said, back over WebRTC to the media server, and it sends a REST request back to the LMS, to Drupal, with a payload of the raw data of the exercise: all of the error states that occurred, plus some other things we capture more discretely, like the start time, how long it took, and whether they succeeded or failed. That success value gets determined by the trainer itself and sent back to Drupal; the trainer is really the arbiter of the business rules. All the business logic about whether they killed the patient or not lives on the trainer, and it makes that determination based on the live data. Then we store everything against the student record, so that, as we said before, you can track an individual's performance as they go through and use these tools. From a nuts-and-bolts perspective, here's what we did. We extended the Quiz module, which is part of the Opigno distribution, to add some additional question types; that's how it interacts, so that you have somewhere to put data. Rather than just a multiple-choice question type, we have a question type that is a simulator result, so we have somewhere to store our data. We've got another question type for the instructor assessments you saw before. It's sort of a rubric grading system: rather than the student answering the question and having that affect their score, the instructor can come back, review the results by watching the video and assessing the other data provided, and answer the questions as to how the student did and where they might have room for improvement. As we mentioned, we're using the Services
module to provide the REST API so we can talk back and forth with the Chrome app. We've got a custom entity type that we set up to store the configurations for the devices. When you get a set of these boxes, or a new trainer is developed, there's a package you upload to the LMS containing a YAML file with configuration data, plus CSS, HTML, and JavaScript assets. When a session is instantiated, the Chrome plugin grabs that information to build out the UI a little bit more. All of that is stored in its own entity type so it can be referenced by these questions and updated later on. Fundamentally, there's not that much to it. The surprising thing about this is how little we actually had to do. It sounds like a lot, and implementing an entity type, installing the Services module, and extending modules that were already there was actual work, but it was not nearly the same undertaking as developing a learning management system or the data storage from scratch. It turned out to be a surprisingly good fit for Drupal. Okay, now we can do a demo. Rather than fumble around inside these boxes, which, if we do a BoF or something later in the week, you're welcome to try afterwards; they're fun to use, but they can be a little intimidating, we've got another device that they set up for us. You may recognize it from earlier. Hopefully I don't screw up the laptop by plugging this in. They hacked a game of Operation for us. So, let's see here. We've logged into the LMS as a medical student, and we're in the Operation course lesson. Can you switch it back so I can see it? Alright. So we're in Opigno right now. This is a demo site we've got set up; we're logged in as a medical student, we're in our lesson, and we're ready to go. We can go ahead and click start, and it's going to launch the Chrome plugin.
Wonderful UI; we still have a little bit of work to do on some of the CSS. We can tell it we're ready to go. And now we can go in and see how we do. Did it not trigger? There we go, one error. This worked right before; that's the curse of live demonstrations at a conference. Not triggering, though. You're a terrible doctor. Let's see here. Oh, there it went off. There we go. So it's throwing some little blue lights on the screen to let you know which one to do next, and it's got some Hall effect sensors; there are little magnets glued to these pieces, so it knows when you've actually got one out of there. And we're done: scored 0%. Awesome. So once you've performed that, it closes out, and you can go in and take a look at the results. There's some testing in here from our colleague, and here's our result from the run we just did. It's logging it, and it's graphing some of the errors that occurred, and we can go back. We had a little bit better performance right before the presentation; I got 70%. So, I guess we're supposed to tell you to evaluate the session; just go to the generic schedule URL. And join us for the contribution sprints on Friday. In terms of this presentation, if you have any questions, comments, or interests, especially if you're here because you're doing something similar or interesting, I'd really like to hear about it. Any questions?
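The touch-duration rule mentioned during the walkthrough (a brief graze is forgiven, a long touch registers as an error, and too many errors fail the attempt) can be sketched as follows. The thresholds, defaults, and event shape here are illustrative assumptions, not the trainer's actual configuration values.

```python
# Hedged sketch of the configurable error rule: events are assumed to
# arrive as per-sensor touch durations from one session.

def assess_touches(events: list[dict], error_ms: int = 200,
                   max_errors: int = 3) -> dict:
    """events: [{'sensor': id, 'duration_ms': n}, ...] from one session."""
    errors = [e for e in events if e["duration_ms"] >= error_ms]
    return {
        "grazes": len(events) - len(errors),  # brief contacts, forgiven
        "errors": len(errors),                # sustained contacts, counted
        "passed": len(errors) <= max_errors,
    }

summary = assess_touches([
    {"sensor": 1, "duration_ms": 50},   # graze
    {"sensor": 2, "duration_ms": 450},  # error
    {"sensor": 2, "duration_ms": 900},  # error
])
```

In the real system this decision runs on the trainer itself, with the thresholds delivered from the faculty-entered configuration in the LMS; the sketch just makes the rule concrete.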
Yeah, they want us to use the mic. Okay, we'll repeat the question. As far as the video: fortunately, because you're confined to a group of students for a particular class, we've thus far been able to plan on just using the media server. We can tell which files came from which class and archive them accordingly as time goes by. The actual sensor data we're storing just as JSON, and it's not that large. As we said before, most of the business-rule logic is happening on the trainer itself, so we don't get a whole lot of data back from it besides mostly what you saw up there. There are some bits and pieces we pull out, like the score and the timestamp information, but no, we don't get the raw feed. Yeah, precisely. In these cases, for these devices, the LMS isn't necessarily something they're instituting across the entire university for everything; it's more to store curriculum and information as it relates just to, say, the orthopedic students, the students going through that particular program. If other programs and other trainers come along, they might have their own instance, or perhaps combine them. But at least with these devices, for the most part, individual institutions that are interested in using them for their students will buy a few sets of the devices, get a subscription to the LMS, and then they can come in and have their own data. While there is a fair amount of video, as Allie said, we don't necessarily need to keep all of it; we'll keep video around for whatever makes sense, the last three attempts, the last ten. And the upshot of using Drupal for something this niche is that if it did get broadly instituted somehow and you have scalability questions, that's a question that's been very handily addressed in a number of ways throughout
the Drupal community. It's not pretty; I think you just have to add to the database. Yeah, I could probably log in and show you the database. Most of the extra data we're storing is a timestamp with some metadata about what the error state was, and that's about it. And part of that is that we worked out with our partner exactly what we get back, so what we get in this JSON payload could be anything; the business logic that produces that JSON is out of our jurisdiction. Yeah, exactly; the Chrome app, I think, is actually processing that information, so what we get back could be anything, as dictated by that intermediate app, and, as Jeremiah was saying, what we do get back is not all that comprehensive. We'll see. That's also something where, right now, we're storing the data pretty much raw, and one of the slightly-down-the-road things we want to do is define some other objects or entity types for defining the data structure, so that we can come up with a common language between the devices and the LMS and do a bit more with the data. Right now, to put the graph data on the screen and things like that, we're just iterating over whatever was provided and creating some custom displays. We don't have a lot of integration into something like Views right now, where you could generate your own reports, but yeah, exactly: defining that common language is something that would be useful. A question along the same line: what's our experience with this, and is it common? I don't think it's that common right now. The question is whether we do a lot of work with this type of hardware integration and whether it's growing. This is kind of a new project for us. Like I said, we like to take on interesting projects from people asking interesting questions, and this one definitely qualifies. And I think that
there are more and more interesting applications for it. I know we're talking about setting up hardware sensors for our garden and building them into a Drupal instance to tell us when to water our flower beds, things like that. There are all kinds of new sensors for all kinds of different reasons, and all kinds of walled gardens of proprietary systems interacting with that hardware data, and I think it would be more interesting if you could get that data interacting with something like Drupal. It's some custom code; one of our other colleagues, who works more on the hardware side, wrote that. Well, we are Drupal specialists, and they contacted us because we work with Drupal, but they actually came to us by way of other tools that did not work well for them. They tried Moodle, for example, which has a lot of the courseware management, curriculum, subject-matter assessment, knowledge assessment, and so on, and it didn't work for them because it wasn't flexible enough. And there was, I think, a proprietary learning management system that they tried to write from the ground up. These questions that come up, how do you interact with new hardware, how do you scale in case it gets big, how do you interact with users, are all questions that Drupal at large has managed to answer. I've been working with Drupal for a lot of years now, I don't know, 12 years, and I've gone through my phase of "Drupal has to be used for all the things, all the time, always." Now I'm a little more mature, and I have a more realistic sense of the places where it's not a good fit. A project like this, I think, is a good fit for all of those reasons: it's easy to implement APIs, it's easy to manage and track data, it's easy to throw together media management for developing curriculum. So I think this in particular is a good fit. So that's the last
Command-plus or something? Yeah, the terminal session wasn't looking so good. Can you just go to your terminal and resize that? That's as big as I can get it, I'm afraid; if you'd like to see it, you're welcome to come up afterward and I can show you.

The question was about that last result: whether there are any key things in it, and what the dog is. The dog is one of the things inside the Operation-style box; this particular result was from our demo (my dogs were barking). So there's information here to map back to the question results for the Quiz module. The trainer ID doesn't look like it's being populated for this one, but that would record which of these boxes, or which other trainer, was used. Then there's the start time, the elapsed time, the success value (the score, basically), and whether or not the session has been closed. The session data itself is just a blob of whatever error states occurred, and they're mostly all timestamps. There's some more device information too; this is basically the raw data that some of those fields were pulled out of. And then there are timestamps recording which sensor was triggered and when, and a set of separate fields that we end up parsing out on the results. It's there to give an instructor an overview of the session to assess, but we don't yet have a fully defined language to speak between the devices and the LMS.

As the project continues, more trainers are getting developed. The same group is also working on a full-body modular mannequin: different heads, legs, torsos, and components that can be assembled together to, for instance, recreate field trauma scenarios for army doctors. They can configure the patient to have different things happening across the different modules that have to be addressed, with different sensors for different procedures. So having a common language between them, a shared way to interpret those results, will become increasingly important as projects like that evolve.
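The per-session overview described above (start time, elapsed time, a success value, and a blob of timestamped error events) can be sketched in Python as follows. Every field name here is invented for illustration; the actual result format belongs to the trainer hardware and its Quiz module integration.

```python
# Hypothetical raw session blob: mostly timestamps, as described in the talk.
session = {
    "trainer_id": "light-box-1",  # which box or trainer was used
    "start_time": 100.0,
    "end_time": 145.5,
    "closed": True,               # whether the session has been closed
    "events": [                   # blob of error states that occurred
        {"timestamp": 112.4, "sensor": "wall-3"},
        {"timestamp": 131.9, "sensor": "wall-7"},
    ],
}

def summarize(session, max_errors=3):
    """Build an instructor-facing overview from the raw blob."""
    elapsed = session["end_time"] - session["start_time"]
    errors = len(session["events"])
    return {
        "trainer_id": session["trainer_id"],
        "elapsed": elapsed,
        "touch_errors": errors,
        "success": session["closed"] and errors <= max_errors,
    }

overview = summarize(session)
```

A summary like this supports the instructor's subjective assessment without requiring a fully defined shared schema; the raw events are still available if a richer display is needed later.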
Right now our entity type is pretty fixed; we don't update a particular entity very often, if ever. It holds the information about the device itself, and actually not only that: it also has a little asset file with JavaScript and the like that tells it how to launch the Chrome application. The actual results are matched up with those question results instead.

The information that gets uploaded to our custom entity type is for configuring the devices, and it contains things like the time allowed to complete the operation, maximum and minimum values, the number of touch errors that are allowed to happen before you consider it a failure, and the number of milliseconds a touch must last before it's considered an error state. So you can graze the sensor wall, but if you really jab it, then we're going to count it. We can handle some other general device configuration too. In this case, this is for the box you saw in the video where you're turning the lights on and off, and you can configure the sequence in which you want them to light, so that you have a consistent experience across multiple students.

This particular configuration was set up for a big meeting of the orthopedic surgery board. They wanted to have different trainer devices, and he actually set it up a little bit advanced: they were switching hands on almost every single target, so it was kind of difficult. So yes, there's a different sequence of targets. There's been some discussion about having the ability to randomly generate the sequence, but they do want to maintain consistency, because, for example, when they enter into a study in the next couple of months, they want student A's experience to be more or less the same as student B's.
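The configuration rules just described (a touch must last some number of milliseconds before it counts as an error, and only so many errors are allowed before the run is a failure) can be sketched like this. The parameter names and thresholds are illustrative, not the project's actual entity schema:

```python
# Illustrative device configuration; the real schema lives in the
# project's custom entity type and may differ.
config = {
    "time_limit_s": 120,             # time allowed to complete the operation
    "max_touch_errors": 3,           # errors allowed before the run fails
    "touch_error_ms": 50,            # a graze shorter than this is ignored
    "light_sequence": [2, 5, 1, 7],  # order in which the targets light up
}

def count_errors(touches_ms, config):
    """A touch only counts as an error if it lasts long enough:
    you can graze the sensor wall, but a real jab is counted."""
    return sum(1 for t in touches_ms if t >= config["touch_error_ms"])

def run_failed(touches_ms, elapsed_s, config):
    """Fail the run on too many real touches or on running out of time."""
    return (count_errors(touches_ms, config) > config["max_touch_errors"]
            or elapsed_s > config["time_limit_s"])

# Example: three grazes and one real jab, finished within the time limit.
touches = [10, 20, 80, 15]
```

Keeping the light sequence in the configuration, rather than randomizing it per run, is what gives every student the same, comparable experience during a study.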
They may set up different scenarios under different lessons, but I don't think they want the difficulty to change drastically from one student to the next, because it would be harder to compare the results. And yes, we will keep the patches up to date.

The question was what happens if a different student takes the test. I think we either have, or are implementing, more authentication and some OpenID support to guard against that. That would be a trick, though; you'd get a different kind of pass/fail on that one. The general way these will be used, as I mentioned earlier, is that in some instances the students will take the boxes home and play with them on their own, in which case they can just pick the box up and look through the hole to poke the light instead of using the scope. There are easier ways to cheat than hacking the actual interface. But when they come to a part of their assessment that affects their score and their ability to move forward, they're going to be evaluated in a learning environment with an instructor watching them perform the task, so there won't be a lot of opportunity for that to occur.

And otherwise, the purpose of these devices isn't necessarily to completely teach them how to perform surgery on a knee, but to teach them the manual skills: being able to use the scope to see in three-dimensional space where you can't actually see, and to manipulate different instruments. The other box we didn't show is sort of a cross between Operation and, I don't know what. The task in that box is that there are sensors with caps over them, and you have to move a cap from a sensor to a staging area, then switch hands and move it to another sensor, then back down to the staging area, and finally back to its original position. So you're not just looking around; you're manipulating objects and moving them around with the pincers. I don't know if that completely answers your question, but there is some security built in: all the students have to log in. I don't recall exactly what we're doing inside the REST requests for additional identification beyond just the IDs of the session. And you still have that subjective assessment as well.

Could we do a follow-up session? We were thinking of it; we haven't scheduled one yet, but if there's interest we can definitely set one up. We have about two minutes left if you have any other questions; otherwise you can go make wonderful use of those two minutes, and if you want to come up and take a look at the actual devices, please feel free. Thank you.