Good morning, good afternoon, good evening, and welcome to our very special Call for Code for Racial Justice miniseries here on Red Hat live streaming. I'm Chris Short, host and showrunner of, it feels like sometimes, all the things on this channel. I'm joined by the wonderful team from IBM working on Call for Code for Racial Justice, and I will hand it off to Sabine to do the introductions. Thank you, Sabine. So can we go ahead and start sharing the screen? But welcome, everyone. This is another episode from the many series that we're doing around Call for Code for Racial Justice, and it's been very exciting going through the solutions that we have in open source right now in partnership with the Linux Foundation. Today we're highlighting Lions for Justice, which is the team that you see before you, along with a couple of other members, for the solution Incident Accuracy Reporting System. So we can go on to the next slide, but welcome, everyone. In the past I haven't given a whole lot of background on Call for Code, or even Call for Code for Racial Justice, but I really wanted to level set, especially at this time. Call for Code as a whole is a program that exists on the side of IBM and is hosted by David Clark Cause; IBM is the founding partner of this initiative and provides the technical expertise, alongside the Linux Foundation and United Nations Human Rights. This year we have our annual competition, which is coming to a close and focuses again on tech for good: being able to develop open source solutions that support communities in need. It really focuses on inspiring and motivating developers to upskill themselves and to use applications specifically to solve some of the real-world problems that we're seeing in society.
So the projects right now are open source, and the focus is that everyone should be able to access them, but also being able to use different technologies to rethink how the problems can be solved, and we're really excited to show what that looks like on the next slide. So Call for Code, and this also really speaks to our community as a whole, is about producing solutions that affect communities in need and the priorities that we see, but it also emphasizes how you can build skills, how we can engage community members (sorry, my dog), engage community members to want to participate in this work, and also collaborate with them so that they can be a part of the solutions and take ideas into action. Most typical hackathons focus on: all right, we're giving you this use case, go at it, we'll give you maybe a week or a couple of days, and best of luck. Here it's not just about being able to produce a solution. It's saying: at the end of this, how can we actually implement this in the world as it stands? And so for this year, there's a $200,000 grand prize that comes with the global challenge, where if your solution is selected, you are able to actually build it out with the right type of support from IBM and a couple of other partners that we always have on hand. Next slide. So submissions are coming to a close July 31. If you have been working on a project already, get excited to submit it, and you can explore how to submit your application right now. The focus this year is on climate change, and you should be able to register in the steps that you see below. It's pretty simple, we've given you three of them, and you should be ready to participate.
And to give a little bit of a timeline of how we got here, when we're talking about Call for Code as a whole and where Call for Code for Racial Justice falls in there, it's in 2020, exactly right there, where we see seven open source solutions that were created by IBMers and supported by Red Hatters. In June of last year we had this call to action saying that we wanted to use our strengths and our skill set as technologists to address some of the systemic racial inequities that we see within our communities. If you go to the next slide you'll get a little bit more of an understanding of how we got here and what our guiding principles, or pillars as we like to call them, are for this period of time that we're in. We have police and judicial reform and accountability, diverse representation, and policy and legislation reform, which really helped frame how the solutions in the series right now, as well as further ones being developed, are addressing some of the critical needs within these segments. Incident Accuracy Reporting System falls within police and judicial reform and accountability. And we're excited to share a little bit more of an overview of the solutions that are to come. Next slide. So here we are. So far we have been able to see a couple of our previous solutions. Right now, Incident Accuracy Reporting System is a platform that allows for collaboration and corroboration for witnesses and victims who have evidence during a police encounter. We're going to dive right into it; I'm not going to do it any injustice by giving you a quick summary. We're going to have a really good deep dive by the rest of the team, and they'll also be able to introduce themselves as we move further.
So you can see, this is actually, I don't even want to call it a volunteer effort, but an active project that everyone on this call is working on in addition to their day jobs, and if you know anything about the work that you do in your day life, especially in tech, it's a lot. So I definitely want to congratulate and thank the team for their committed dedication to this project. I think that is all of the talking; I'll probably interject with some more questions, comments, and clarifying statements as we move through here, but I'm going to go ahead and hand it off. All right, thanks, Sabine. Hi everyone, I'm Shonda Witherspoon, and I'm going to start off the presentation for our Incident Accuracy Reporting System, abbreviated as IARS. I want to give a background on the problem statement and motivation that got our project started and implemented into an open source solution. The motivation began with the recent reports of police interactions with the Black community being featured prominently in the news, and often, unfortunately, these featured interactions have been tragic. According to the Washington Post, since 2015 Black Americans have been killed by the police at more than twice the rate of white Americans. Some of these incidents have even been captured on video or witnessed by civilians. However, what we notice from the recent media reports is that despite some of this being captured on cell phone cameras, or some of these incidents even being captured by the officers' own equipment like their body camera or their dash cam footage, the information surrounding an incident isn't always reported as witnessed. So let me give an example. What you see on the left-hand side here is the original press release for the George Floyd incident.
So what we see here in the title is that it says a man dies after a medical incident during a police interaction. Just a few details about George Floyd's encounter with the Minneapolis Police Department. It doesn't mention anything that we saw captured by the cell phone footage. In fact, the last line simply mentions that no officers were injured in the incident, but doesn't give any details about an officer's knee being on George Floyd's neck for over nine minutes. So, unfortunately, when we see these kinds of things we're wondering, well, what exactly goes into the final police report that is given by the police? And ultimately, how do we end up with information in the final police report that is contradictory to what was actually captured? This has left many people asking for transparency on how these reports are even generated, and furthermore, about the accuracy of those written reports, as ultimately only officers have the final say over what's captured in the report. So after last summer, when the call for Call for Code for Racial Justice came in, we all decided that we wanted to answer the call and tackle issues just like this. These are the members of our team. All of the members unfortunately aren't here today, but the ones that are here can introduce ourselves. So starting with myself, highlighted: this is myself, Shonda Witherspoon, and I was a user researcher on this project. I was tasked with doing interviews with police complaint officers and various members of the police community in order to get a better understanding of how incident reports are generated, how our system could handle the correct pipeline, and how we could develop a solution that would add some transparency and validate the accuracy of what goes into a police report. I'll pass this on next to my twin, Shalisha Witherspoon, whom you see highlighted. So Shalisha, could you introduce yourself?
Yes, hi, I'm Shalisha Witherspoon. I also contributed on the project as a system architect, and also helped with user testing. So I helped with designing diagrams for the components of the system, as well as seeing how users can interact with a prototype of the application. Next, I'll pass it over to Abiella, highlighted here in the PowerPoint. So Abiella, can you introduce yourself and your role? Hi everyone. I'm Abiella Ashidjo. I worked also as an architect, coming up with the components of the system and doing some block diagrams in the beginning. And that's pretty much it. Thanks, Abiella. Next, on the side of Abiella, between myself and Abiella, is Tunde. Hi, my name is Tunde Ulupodano. I worked as a developer for this project. I developed the AI backend of the incident reporting system. So basically that's what I did. And then finally here today from our team is Laura, highlighted here. I'm sorry, I apologize, Lucia. Hey everybody, I'm Lucia Ramos. I am from Uruguay, and I was the user experience designer for the project. Thanks, Lucia, and sorry about that. My mind was on introducing the other members who weren't here today, starting with Laura. She was one of the developer advocates on our team who was provided to help us with our solution. Next we had Kalanji, who was also a developer advocate and worked on many aspects of getting our solution developed and coded. Here we have our leader, Osai. He's responsible for the team meetings and having everyone organized, presenting weekly on our progress to the Call for Code team and getting feedback that we could work on through each week's iteration. And then finally on our team was Debra. Debra's role was a generalist, which means that she worked on whichever aspect we needed help with. If we needed assistance with user research or working on diagrams, we could rely on Debra for that. So that was our team, Lions for Justice, for the Call for Code.
So next we had to decide on a theme that we would tackle. As I mentioned on the first slide, we wanted to tackle police and judicial reform and accountability. We wanted to get some insights on how we could improve the current process of what goes into making a police report or an incident report. And because of the lack of transparent, accurate data available to assess police behavioral infractions, we wanted to deal with the inaccuracies that can be born from these reports, or any falsifications that could in theory be made without cross-reference or any accuracy checks. This led to us developing hills, which eventually turned into solutions, which are the following. We wanted to make a way for internal affairs and civilians such as witnesses to both contribute to incident reports by creating a tamper-proof record with all accounts of the incident. This basically would allow civilians who have cell phone cameras or witnessed an event to contribute their own evidence to a police report. Then we could provide a 360-degree perspective of clean data to assess the police report and highlight any inaccuracies that may be written in it. That way, internal affairs or the chain of command could properly assess police reports that contain any inaccuracies. So that's how our Incident Accuracy Reporting System was born. I'll pass it over to Lucia, who will explain more details on our user research and our final solution UI. Lucia? Thanks, Shonda. If you can go to the next slide, please. To start our project, the main goal was to do user research, and first of all to understand how the police work and how the incident reports are generated. We conducted firsthand interviews, based on our scenario, with police complaint officers, dispatchers, and other members of the police force. This research allowed us to build the workflow of the reports, but also to understand the feasibility of our solution.
We interviewed a police officer who used to work in internal affairs to see how our solution would work in the real world. And lastly, the most important part was to understand how civilians, victims, and witnesses would interact with our application. So we conducted user testing sessions with volunteers to ensure that we were building something that was user friendly, but also something that was actually going to be used by people. Next slide, please. So with all that research done, we went on to design our solution. We have a content management application that captures the statements, videos, and audio feeds from firsthand individuals, all related to police reports. We have an interface for the individuals, the witness or victim, to report information or data related to the incident. Then we have manual or automated flagging of inconsistencies and inaccuracies in the initial reports, based on all the collected data. It also cross-references the recorded data with the officer's history and misconduct. And there is a backend to a blockchain instance, with the actual documents stored in an object store. So you see we have a mobile version of our web application, and a dashboard for the police officers or supervisors. Next slide, please. So how do we picture all these things flowing together? We imagine that a victim calls 911. The PCO asks questions about the emergency, what they saw, what happened. If there is any missing information in that part, it is going to be reported, and the recording is going to be uploaded to the system. Then, when the police officer arrives at the scene, they enter the information in the computer-aided dispatch. When the police interview the victim and the witnesses, they take notes and also record information in that system. Everything, the recordings, the notes, whatever is uploaded into the computer-aided dispatch, is going to be uploaded into the system. Lastly, the system processes all the uploaded information.
And, for instance, if the interview with the witness conflicts with the written report, or any inconsistencies are found, this is going to be sent to the supervisor or sergeant. The alerts are shown in the dashboard, but are also sent by email or other channels. This allows us to present all this information to the people who can do something about it. So this is how we imagine the workflow of our solution working. That's what we wanted to show, the history of how we ended up with our application, and now I'm going to leave it to Abiella to explain a little bit more about the architecture. A common question around this, especially in developing this workflow: how did the previous interviews that you did help to frame this? Did the interviewees get to see this? Was this really validated by the information that they gave you? Shonda did the interviews, so maybe she can tell us a little about it. Yes, yes, actually this was developed in many different iterations before we came up with this ultimate solution workflow. First, I was able to interview various roles in the Miami-Dade Police Department, as I mentioned earlier. What they basically helped me work on was a flow diagram that had many different scenarios based on what happens when a person calls 911: how that leads to a police complaint officer taking the call and assigning it an incident number, what happens when an officer arrives on the scene, how the incident report is written, etc. After we went through those different iterations, I would speak to another member of the police department, such as a police dispatcher, who would add on to that flow diagram. Once we had that information captured, I confirmed it with a police supervisor, which ultimately led to the next step, which would be once the report is written.
How is the current flow diagram used in handling what might be considered an inconsistent report, and where might that fall in the diagram? After that iteration is where we came up with how our solution would fit in to change some of the flow when you have an inconsistency in a report, which is highlighted in the blue part here. After going back and discussing that with another sergeant, we would go back and reiterate over that, until we arrived at the final one here. So there were many, many different workflow diagrams that were created, in part thanks to the officers at the Miami-Dade Police Department. There were various roles that helped us come up with a realistic solution, from what they currently have to how our system would be able to add value by addressing any inaccuracies or errors that might be listed in the report and flagging them. Yeah, thank you for making sure we knew that. We interviewed a police officer, an IBMer, who used to work in internal affairs. He also explained to us what was needed so that the inconsistencies, or the problems, or the misconduct of a police officer could be tackled in real life. So we also used that information to help our application. Can we move to the next slide? Yes. So moving on, we're going to go to the architecture, where Abiella will go over our system architecture and some diagrams. So, Abiella. Yeah, next slide please. I will be going over the block diagrams that we have in our architecture, which show the various pieces of our system, starting off with the context model. We came up with the content management system, which will be interacted with by the victim, the witnesses, and additional sources of information; those could be news reporters and law enforcement. All of this information would be processed. So this is just a block-level overview. Next slide, please.
And then we have a draft high-level solution, which shows a breakdown of what Lucia and Shonda talked about earlier. You see how the witnesses will contact the police, the police will interact with the victim, the witnesses will interact with the police, and reporters will submit information, and the technology that is used, right? So a victim will call, the police interview the victims or witnesses, and we process all of that data in our content management system. We store it in blockchain, using blockchain to ensure that the information that is stored is secure. We also did some post-processing before storing the information; for example, if it was a call that was logged, or video or audio, we would use speech to text to convert it, or language processing if it's non-English. So this was just the high-level solution that we came up with. Moving on, the next slide shows the flow of how this would work. A user would input incident information into the system: documents, audio, video. This video and audio is processed, like I mentioned earlier, with speech to text and language translation. We used open source technologies: for blockchain we used Hyperledger Fabric; IBM technologies like Watson Speech to Text; and for the machine learning we also used some open source technology. Everything was stored in the cloud. Then we validate the information using the machine learning, as mentioned earlier. So for the documents that are input into the system, if there are inaccuracies, for example if the officer did not put in enough information and a witness called in, since witnesses have access to upload information into the system, those would be cross-referenced using the machine learning to see if there are inaccuracies. So moving on to the next slide.
This also shows an operational model of how a witness or victim engages with the system: interacting with the police, calling directly to log information into the system, uploading their video and audio, and using the technologies that were mentioned earlier. So this shows a high level of our system, operational-model-wise. And that should be all of it. Next we'll have the demo, unless there are any questions. So I'm going to play a demo recording that was done by our developer advocate who couldn't be here today, named Kalanji. Let me start the demo presentation. I'm Kalanji Bankoli and I'm a developer advocate with the emerging technology team. Today I will be demonstrating the Lions for Justice project. What we have here is a dashboard that can be leveraged by members of the police force at varying levels, whether they're an officer in the field, a detective, or a supervisor. This dashboard gives an overview of all cases that have been received by a particular police station, as well as any associated evidence such as audio and video files. We also have visualizations of how many reports and complaints have been submitted over time, as well as a breakdown of their status. The main purpose here is essentially to compare police reports with accounts from witnesses and victims and measure how similar their descriptions are. Afterwards, we categorize the reports based on how many conflicting descriptions were detected. If I navigate to a separate view, this is a different section of the dashboard: essentially a witness report form. This section would be publicly accessible, allowing anyone to report a particular incident they may have had with a police officer. They can specify the location, the officer's name or badge number, and they can describe the incident in their own words. They can also upload files in this section. So for example, we'll go ahead and drag and drop an audio file.
And then I can go in and click submit. After clicking submit, the audio file will be transcribed through the Watson Speech to Text service, and then we can see details for each report, such as the transcript. In the background, the raw file is stored in the IBM Cloud Object Storage service. Essentially, we take a hash of that audio file and store that hash and all associated metadata on a blockchain. So if the original file is at all tampered with in Cloud Object Storage, the hash wouldn't match anymore, and that would raise a flag. Anyway, that concludes our demo, and I hope you enjoyed watching. So moving on from the demo, we will go into the technology that allowed our solution to be realized, and for this we'll have Tunde explain. Thank you. Thank you, Shonda. So let's go to the next slide. Okay, so these are the technologies that we used for this project: some IBM technologies and some open source technologies. The IBM technologies are Watson Speech to Text, as demonstrated just now by Kalanji; Watson Language Translator, because we could have witnesses who are speaking languages other than English; IBM Watson Studio for the machine learning side of the project; and IBM Cloud Object Storage for storing our assets. Then these are the open source technologies: Vue.js for the front-end development, Docker for containerization, FFmpeg for video and audio processing, blockchain to make the system tamper proof, and scikit-learn for developing the machine learning algorithm. Next slide. So, the machine learning algorithm: we used it to analyze the witness statements and, as the case may be, cross-reference them with the available police reports. The machine learning algorithm we used is called K-Means.
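Before going deeper into the machine learning, the tamper-evidence scheme from the demo (hash each uploaded file, record the hash and metadata on the blockchain, and re-hash later to detect modification) can be sketched in a few lines of Python. This is a minimal illustration only: the `ledger` dictionary here stands in for the Hyperledger Fabric chain the team actually used, and a local file stands in for IBM Cloud Object Storage.

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Hash a file in chunks so large audio/video files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# "ledger" stands in for the blockchain: an append-only record of hash + metadata.
ledger = {}


def register_evidence(path: Path, metadata: dict) -> None:
    """Record the file's hash and associated metadata at upload time."""
    ledger[str(path)] = {"sha256": sha256_of_file(path), **metadata}


def verify_evidence(path: Path) -> bool:
    """Re-hash the stored file; a mismatch means it was tampered with."""
    entry = ledger.get(str(path))
    return entry is not None and entry["sha256"] == sha256_of_file(path)


# Example: register a file, then tamper with it and watch verification fail.
evidence = Path("witness_audio.bin")
evidence.write_bytes(b"original audio bytes")
register_evidence(evidence, {"incident_id": "2021-0042", "type": "audio"})
print(verify_evidence(evidence))   # untouched file: hash matches

evidence.write_bytes(b"edited audio bytes")
print(verify_evidence(evidence))   # modified file: flag is raised
```

The incident ID and file names above are invented placeholders; the point is only that any change to the stored object breaks the recorded hash, which is what raises the flag Kalanji describes.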
It's a kind of unsupervised machine learning algorithm which enables us to divide a data set into clusters. As you can see from this animation, there are different clusters, and the process of identifying the data points that belong to the same cluster is what those moving crosses are doing. Those are called iterations. What happens is that we have all these statements submitted by the witnesses, and we want to bring them together and see if there are witnesses with consistent statements. In a particular location there could be multiple events, so we want to identify those who are talking about the same thing and bring them together. And when we identify those who are reporting the same issue, we compare that to the police reports, as the case may be. What's happening here is that we extract certain features from those statements, features that will help us to identify how similar those statements are. We use a feature called TF-IDF, term frequency-inverse document frequency, that helps us to bring out the most important things in those statements. We remove the stop words and convert the statements to word vectors; then centroids are initialized, and the statements that are closest to a particular centroid go into that cluster. So I believe I've explained that; if you have more questions, we can look at them after the next slide. Here we see some of the things that have been described by Kalanji. We have a report that goes into the database. Then we identify the reports with matching metadata. What is the metadata collected? The dates, the location, and probably the kind of incident that occurred. When we see those, we bring them together, put them into the cluster, and see if their descriptions are also saying the same thing. Here, this yellow circle represents a cluster: those events are talking about the same thing.
Then we have these red circles that are close together. That could be another event that happened at the same time, and maybe the same location, and the witnesses are consistent on that as well. Let's go to the next slide. Here we did an experiment using simulated data sets of witness statements. Those are not actual witness statements; we just used them to simulate witness statements. We got a sample police report from an online source and put them all into the K-Means algorithm that we developed. Here we have two different clusters, signifying two different events, and we have the police report standing alone, outside of those clusters. This tells us that the statement of the police is far from what the witnesses are saying. The centroid of each cluster is at its center, and that gives you an idea of how far the police report is from the centroid of each cluster. This is just an experiment; it doesn't mean that this is the way it always is. The police report could fall inside a cluster, meaning that the police report is consistent with what the witnesses are saying. But in this simulation, the police report we got was clearly not related to the statements even by visual observation, and the K-Means algorithm confirms that. So with this, we are able to detect some of those inaccuracies in the reports by comparing them against what was submitted by the witnesses. And like Kalanji said, the statements submitted by the witnesses have been tamper-proofed using blockchain, so they cannot be mutilated or falsified after submission. So yeah, I think I'll hand over to Shalisha now. Thank you. Thanks, Tunde. So I'll be going over the next step we took in helping our solution be fostered by the open source community. Next slide, please.
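The pipeline Tunde describes (TF-IDF features with stop words removed, K-Means clustering of witness statements, then distance-to-centroid to judge how far a police report sits from each cluster) can be sketched with scikit-learn. The statements below are invented placeholders, not real data from the project, and the cluster count and any threshold would need tuning in a real deployment; this is only a sketch of the technique named in the talk.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Simulated witness statements for two separate events (invented placeholders).
witness_statements = [
    "officer pushed the man to the ground near the bus stop",
    "the man was pushed to the ground by the officer at the bus stop",
    "police pushed a man down by the bus stop on main street",
    "two cars collided at the intersection and one driver fled",
    "a driver fled after the two cars crashed at the intersection",
    "the cars crashed at the intersection and one driver ran away",
]
police_report = ["subject resisted and was safely detained without force"]

# TF-IDF with English stop words removed, as described in the talk.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(witness_statements)

# Two clusters = two events in this simulation.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Project the police report into the same TF-IDF space and measure its
# distance to each cluster centroid; a large minimum distance suggests it
# is inconsistent with every group of witness accounts.
report_distances = kmeans.transform(vectorizer.transform(police_report))[0]
witness_distances = kmeans.transform(X).min(axis=1)

print("report distance to nearest centroid:", report_distances.min())
print("average witness distance to own centroid:", witness_distances.mean())
```

In this toy data the report shares essentially no vocabulary with the witness statements, so it lands far from both centroids, mirroring the slide where the sample police report sits outside both clusters.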
Okay, so earlier this year, I believe around the spring, we had a series of design thinking workshops to help us understand potential adopters of and contributors to our solution. We worked with subject matter experts in different areas, such as Axon, which is responsible for providing equipment to police officers, such as their tasers or their body cams. We also worked with the office of the state attorneys of Florida, and with the United Way of NYC, which is a nonprofit organization dedicated to helping low-income members of the community and to building relationships between law enforcement and the community. Through design thinking, we worked with an empathy map to help us better identify with our adopter, based on the help of the subject matter expert organizations, and then we also worked on an empathy map for the citizens, who are the other side of the users for our solution. If we look at our adopter, which could be police departments and law enforcement, we want to identify four different areas: what do they say, what do they think, how do they feel, and what do they do. Summarizing what we found when working with the SMEs: they may say that this system could be expensive due to the AI that's embedded in it, but also that, of course, it could make the department more transparent. Then we thought about how they might feel: potentially hesitant to trust the civilian input, and conflicted about this idea of taking sides, such as judging the accuracy between the police and the civilians. What we imagine the adopters would think is that it could help with investigations and make them more thorough through automation.
And some other thoughts might be how to determine whether the inconsistencies are from transcription errors rather than intentionally made by the officers. One of the important things to consider was the supervisors thinking: how can they convince the officers that this is beneficial for them, and not a tool that's strictly used against them? Law enforcement would be our main adopters, so we want to show them that this is a tool that can help them clear out the inconsistencies that may be in a report, whether intentional or not, and can help them with automation through internal affairs; that way, they can see that this tool would be useful on their end. So that's something we wanted to consider in our empathy map. I have a question around this. Yes? So how did you think about, especially when we're talking about being inspired by, and I don't know if inspired is the word, I guess prompted by, the George Floyd incident, where the young lady who recorded the incident was kind of attacked after the fact, right? Did you think about how people who end up submitting evidence, or are trying to be upstanders in that sense, could also be protected by a solution like this? Yes. In our empathy map, that was definitely one of the highlights that we had considered, as I mentioned, this idea of privacy being protected as they upload this information. Even when Lucia and I did the user research, that happened to be one of the many topics that came up with the users. They definitely liked the system, but they wanted to know the feasibility of being somewhat anonymous in their reporting.
So yeah, I think you mentioned a very important part, and that was actually, I think, a major feeling captured in our empathy map: the privacy of the citizens. Will they be believed by submitting this report? Would this still be enough in order for there to be accountability through our system? Right. Agreed. So yeah, very good question. I think this leads to our next slide and shows how we took that into account during our prioritization. Okay, so now we want to work on the prioritization, which basically lists on an axis the level of importance of some of the features that we want in our system, but also shows the feasibility of the proposed features and additions that we had in our MVP solution. So during this exercise with the SMEs, we came up with a few dimensions, based on the empathy map. What we have is: data trustworthiness; trust and transparency; trust for citizens, such as their data protection and controlling their data and their privacy; and then trust with citizens regarding the truthfulness of reporting. We also had easy access and data upload for the citizens, and then the ability for the citizens to track the incidents they reported and get updates on them. And then we had the alerts, or automation of the internal processes. So, for example, when a report comes in and there are inconsistencies, we want the notifications for those to be automated, of course. And then the ease of use for the police, such as, of course, seeing the value of this automation for internal affairs. But also one of the big things we want to highlight, which came from the empathy map, was building trust with the police: how this tool can benefit them and, once again, not be a tool against them. 
And then there's working on the potential cost savings based on the technology that's used that's not open source, such as the AI. So, looking at this prioritization, what we found were the top three dimensions. The first was easy access and data upload for citizens. For example, as you saw in the demo, what we had was our prototype UI for the dashboard, and there you saw how we were able to have a citizen upload on the website, on the desktop interface. But we would look, of course, for users to be able to use a mobile application, so that they can record directly from their cell phones and then more easily upload that data to the system. So this all falls into easy access and data upload for citizens: the ability to record in an app and input their evidence right on the spot, in real time. The next we had was trust for citizens; those are the orange blocks. And that's basically their data protection: basically what Sabine had just asked about, the citizens being able to have some control over whether they can be anonymous in this reporting, or just some way to have trust while their data is in the hands of the police, of law enforcement, and their system. So here we thought about some secure identifiers for witnesses so that there's no PII, and then login systems, so we can see whether anonymous reporting is feasible in some situations. And then lastly, but also one of the most important, is building trust with the police. For this we looked at the idea of having clear AI explanations of why a report is inconsistent, so it's not just the flagging of reports when they're sent to the system. 
We want to have the AI clearly list why the information was inconsistent, so that a police supervisor can review it, see what the system highlighted, and be assured whether it was something like typos causing the inconsistencies. And if it was a genuine inconsistency, then they can clearly check that as well. So it's not enough to just say, oh, there was an inconsistency; we want to clearly highlight why, to build that trust so that they can trust the system. So please go to our next slide. So, based on the product... I'm sorry, was there a question for me? Okay, no problem. Anyway, I'll just finish off by going through our roadmap. So, based on the user adoption tests and our prioritization, we came up with this final roadmap that we're hoping to get help from the open source community on, to get this implemented and improve our MVP and our solution. Based off the prioritization, we have three different areas where contributors can help. The first is our mobile app upload. As I just mentioned in the prioritization, what we want is easy access and data upload for citizens. This includes uploading their witness accounts and any evidence via either Android or iOS, etc. So not only do we have the technical requirements needed for this feature, but we also have roles that we can use in order to get it done. For example, a designer, an app developer, and someone who's familiar with web design, so they can help us get this mobile app feature moving and help us deliver this ease of access for the citizen adopters. Another area that we could use help with on the roadmap would be automated notifications. As mentioned, we want this process to be automated so that supervisors don't have to do the manual investigation of false or inconsistent police reports. 
So, for example, this could be email notifications that go out when the inaccuracies exceed some threshold and are sent to a supervisor's email. And there could also be email alerts given to the citizen based on any updates that happen during the investigation process, etc. For this, we would like software developers to help implement this, both on the mobile side as well as for the web application that's used by the officers. And also, of course, we need people from security to help us secure this information. And then lastly, we have the AI explainability and transparency. So, for example, going off of the last point, which was building trust with the police, we want a clear way, when inconsistencies are found, to highlight the specific areas of why the system flagged these inconsistencies or inaccuracies. For this, we're looking at surfacing some details of our scores, based on the calculation used as a threshold. We're thinking of ways to link to the part of the recordings where the inconsistencies happened, or flagging the speech-to-text errors if it was simply an error in the system. So for this we would like data scientists and machine learning experts, specifically in natural language processing, so they can help develop this upgrade to the flagging system, so that we can have clear explainability for police officers to trust whether the system is making an error and see that it's not just reporting flags. So, through these three areas in the roadmap, we're hoping there are many different roles for different developers to get involved, based on their expertise and what they're comfortable helping with. Hopefully, through things like this, we can get more contributors who can help make our system really strong and then reach the phases of adoption. All right, so next slide. This concludes our presentation. 
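[Editor's note: a minimal sketch of the threshold-based flagging and supervisor notification described above. All names here (score_consistency, INCONSISTENCY_THRESHOLD, notify_supervisor) are hypothetical, and simple string similarity stands in for the NLP model the team describes; it is an illustration of the idea, not the project's actual implementation.]

```python
# Hypothetical sketch: score two report texts, flag below a threshold,
# and attach a human-readable explanation for the supervisor.
from difflib import SequenceMatcher

# Assumed tunable cutoff; the real system would derive this from its model.
INCONSISTENCY_THRESHOLD = 0.6

def score_consistency(officer_report: str, witness_report: str) -> float:
    """Rough 0..1 similarity between two report texts.
    A real system would use an NLP model; difflib keeps this self-contained."""
    return SequenceMatcher(
        None, officer_report.lower(), witness_report.lower()
    ).ratio()

def flag_report(officer_report: str, witness_report: str) -> dict:
    """Flag a report pair and attach an explanation, so the flag is
    reviewable rather than a bare alert."""
    score = score_consistency(officer_report, witness_report)
    flagged = score < INCONSISTENCY_THRESHOLD
    return {
        "score": round(score, 2),
        "flagged": flagged,
        "explanation": (
            f"Similarity {score:.2f} below threshold {INCONSISTENCY_THRESHOLD}"
            if flagged
            else "Reports are consistent"
        ),
    }

def notify_supervisor(result: dict, email: str) -> str:
    """Stand-in for the email alert; a real system would send mail here."""
    if result["flagged"]:
        return f"To {email}: report flagged ({result['explanation']})"
    return ""
```

The point of the explanation field is exactly what the speakers describe: the supervisor sees why the system flagged the pair, not just that it did.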
So you're able to join us on the Slack community, IBM Cloud, and then we have a QR code that you can scan that should take you to how you can get started right away. Yeah, absolutely, and I really just want to drive that point home. This is a collaboration with folks that are, of course, within IBM, but we're really looking at this as almost an outside-in type of situation, where we have the solution that was created internally, but we really are focused on how we can get people outside of our community, or immediate network of IBM, to join us in refining the solution a little bit more, and being able to provide those additional supports as we make this available for potential adopters. Which is really important, because we want to show that it's not something created in a silo. So you can see that we've done our due diligence in contacting community members and police officers and understanding the background of how a solution like this would be useful. But we're really emphasizing that anyone who wants to learn how to accomplish some of the items in the roadmap, there are opportunities for you to learn to do that through our community, and there are also ways for you to teach. So if you have some of the skill set that we're requesting, really do contact us. Again, scan this QR code and it'll tell you how you can begin to get to know us a little bit more. You can join our Slack community; I'm in there, Demi Ajai, who's our community manager, is also active there, as well as the rest of the team members. We really want to be able to bring you in, collaborate with you, and then also show that, again, this is coming from a Call for Code challenge to being implemented in the community. Okay, so really excited about it. That's all we have for today. 
With this team, usually I prep a lot more questions, but I knew they were going to get to a lot of the thinking that went into creating this, which I think is always going to be the core of solutions that are really supposed to provide transparency and accountability. It's: who have you talked to, how have we validated this, is it serious? And yes to all of those questions. Thank you. Thank you, Sabine. Yeah, thank you everybody for participating in this project. It can help change lives, right? A lot of outcomes could be different as a result of it. So please, I encourage you to join. If you have time, if you want to get started in open source software, this is a great place to start; if you're experienced in open source, this is a great place to continue. So I appreciate everybody that joined us today to demo this project. I think it's very important to get more folks involved. So thank you for bringing this over to Red Hat so we can stream it out for everybody. Sabine and the others, I'm putting this on them; they did the work. Thank you, everyone who participated. Yes, and then lastly, for all the roles in the roadmap that I mentioned, in the coming days we will be adding those as issues to GitHub, so anyone who wants to contribute can clearly see how and what they can do, to make it easier. We look forward to as much help as we can get, and to new contributors to this project. Absolutely. I'm handing it back over to you, Chris. Alright, folks. That's all the live streaming we have today, but be sure to tune in. If you're in EMEA, it'll be around 10am your time tomorrow for the OpenShift Coffee Break, and then when the East Coast wakes up, please join me for The Level Up Hour at 9am Eastern. And with that, I will bid everyone farewell and see you all again soon. Thanks so much, everyone. Bye bye.