And so that's what we're going to do. We're going to introduce some of our students who are working through the iConsult Collaborative under the direction of Thomas, who has joined us today. I know our first group, but I'll have you introduce yourselves individually as you come up. And if you don't mind, please speak clearly into the mic so that the folks joining virtually can hear you.

Hello, everyone. Am I all right? Yeah, that's okay. Okay, so I'm Irina Rangude, and I'm one of the program managers at the iConsult Collaborative. First of all, a little bit about iConsult: it's a program all of you are encouraged to apply to. You get real-world experience working for real clients, which can add value to your resume. So feel free to check the iConsult website, and if you have any questions, you can reach me, Shweta, or William.

Okay, so talking about my project: before I begin the presentation, I would like to thank the professors for giving us this opportunity. A brief introduction to my client, Fairfile. Fairfile has developed a platform for healthcare professionals, recruiting agencies, and a multitude of institutions. The platform helps recruiting agencies and healthcare professionals tap into physician-level healthcare data, and look for job opportunities that align with their interests, specific to a market area.

So what do we at iConsult do for Fairfile specifically? Looking at the scope of the project, my team helps Fairfile with healthcare data aggregation and with a geographic filtering system. Reducing that scope to high-level deliverables: we are helping to create a user-friendly platform for accessing the healthcare data, and we are implementing a geographic filtering system based on zip codes. And whatever insights come out of the data we analyze, we help fold back into the platform to make it better for users. The details of this work will be explained by the data scientists on my team.

Hello, my name is Dhruv Gajkul Karni. I am a team member at iConsult. I'll explain the kind of model we used for our project. Our project is essentially to filter the data using different categorization techniques. In data science, the classic categorization approach is clustering, and you may have heard of different kinds of clustering techniques. K-means is one of the most popular. But the problem with k-means clustering is that the clusters it defines are automatic; you cannot shape them according to the user. In our case, the client required that the weight placed on every column should be user-defined. Based on the data analysis, the subject matter experts from the client assigned a different weight to every column, and the purpose of that was to enhance the quality and relevance of the clustering. That's why we used weighted clustering. Weighted clustering also relies on a distance metric: where standard k-means is limited to the plain distance to each centroid, this method uses a weighted distance metric.
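To make the weighted distance metric concrete, here is a minimal R sketch of the idea as just described; it is an illustration, not the team's actual code, and the data point, centroid, and weights are hypothetical:

```r
# Weighted Euclidean distance: each column j gets an SME-assigned weight w_j,
# so d(x, c) = sqrt(sum_j w_j * (x_j - c_j)^2). Columns the SMEs consider
# more relevant then influence cluster assignment more strongly.
weighted_dist <- function(x, centroid, w) {
  sqrt(sum(w * (x - centroid)^2))
}

# Hypothetical data point, centroid, and SME weights for three columns:
x        <- c(10, 2, 7)
centroid <- c(8, 2, 5)
w        <- c(3, 1, 0.5)
weighted_dist(x, centroid, w)  # sqrt(3*4 + 1*0 + 0.5*4) = sqrt(14)
```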
This improves the reliability of the clusters. Moreover, every column gets a weight assigned by the SMEs according to the relevance of that column to the client's goals, and that adjustment influences how the data points contribute to cluster formation. How we implemented this will be explained by Aarind.

Hi, everyone. My name is Aarind Kapde. I'm a second-year grad student majoring in information systems, and I'm a proud member of iConsult, working there since January of 2023. Thank you, that was an insightful overview of weighted clustering. So let's begin with what the data of the project was about and how it relates to our project. Our data consists of 70 different columns and 126 unique values. Among the variables was CBSA, the core-based statistical area, which is what we implement the geographic filtering feature on. We also had to identify the important variables and metrics to feed into the filtering process.

Now I'll explain the implementation of the different features and machine learning techniques in the project, and the reasons we selected them. Just to make sure we are on the same page: we scoped the project and worked in R. We started with weighted k-means clustering, using a weighted k-means package and its entropy-weighted k-means function. That function gave us exactly the design of output we and the client wanted. But the problem was that we couldn't assign any custom weights to the variables ourselves, and the requirements from the client were different from the output we were getting.

So we had to tweak things a bit and moved to a different approach: we created two arrays, one for all the variables with their categorical and non-numeric values, and one array for all the weights as numbers. We assigned the weights by multiplying the numbers into the variables across all the rows, and then applied the k-means step on top of those custom weights (both approaches are sketched below). We thought this could be a good way to get what we wanted and keep the client happy. But the problem we faced here was low variance. Low variance here means the cluster scores came out nearly the same whether the weights were assigned or not. Having tried the different packages in R, we arrived at the flexclust package. flexclust is another package used for clustering, and it gives more flexibility and freedom to add our own custom weights. We started implementing with the cclust function, which falls under the same flexclust package. The special thing about flexclust is that you can pass your own parameters into the function itself, and the key point for us was that we could assign our own custom weights to the variables.
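As a rough illustration of the two approaches just described, here is a hedged R sketch; it assumes the entropy-weighted k-means function was ewkm() from the wskm package, which the speaker does not name, and the SME weights are made up:

```r
library(wskm)   # assumption: provides ewkm(), entropy-weighted k-means

x <- as.matrix(iris[, 1:4])  # stand-in numeric data

# 1) Entropy-weighted k-means: the algorithm learns per-variable weights
#    itself, which is exactly why it couldn't satisfy the requirement
#    that the weights be SME-defined.
fit_ewkm <- ewkm(x, 3)
fit_ewkm$weights             # weights chosen by the algorithm, not the SMEs

# 2) Manual weighting: scaling column j by sqrt(w_j) makes ordinary k-means
#    behave as if squared Euclidean distance weighted that column by w_j.
w <- c(2, 1, 0.5, 0.5)       # hypothetical SME-assigned weights
x_weighted <- sweep(x, 2, sqrt(w), `*`)
fit_manual <- kmeans(x_weighted, centers = 3)
```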
So we were trying to tackle this problem, and that's where we found out about the flexclust package, which gives us the freedom and flexibility to add our own weights to the variables. But with cclust we came across the same low-variance problem, and then finally we found out about the kcca function, which falls under the same flexclust package. The difference between cclust and kcca, for us, was that kcca actually produced different clusters depending on whether weights were assigned to the variables or not. That's the reason we finalized flexclust's kcca: as I said, it could effectively distinguish between the weighted and unweighted clusters.

Now, talking about the challenges we faced in this project. Of course, it was a complex project and we had to face challenges; each challenge was an opportunity for us to improve in a specific area.

Starting with data preprocessing: we faced problems with data inconsistency and normalization. We had to tweak the column names as well, because of the volatility of the requirements from the client. We also had to bring in statistical fixes, imputing missing values with the closest plausible values, and tweak the columns and statistical measures based on the output we and the client wanted, making sure we were on the same page.

Feature engineering was the biggest challenge we faced in the whole project. It wasn't the most complex part, but it was time-consuming. When we initially applied the weighted clustering algorithms to our continuous variables, those were just numbers. But when we started applying the clustering algorithms to the categorical values, we were not talking about one or two categories: we faced a column with 30 different categories. I don't know if you have heard about dummy variables, what they are and why we use them. We had to bring in dummy variables to break down this process and make the whole project run smoothly again. For the 30 categories, we had to bring in 29 different dummy variables. A dummy variable is simply a binary zero-or-one column created for each category. Let's say we have a column, color, and color has three different categories: red, blue, and green. Dummy encoding assigns a one-or-zero column to each of the three colors; the red column gives one when the color is actually red and zero otherwise (see the sketch at the end of this part). This was an integral step in making the project work.

Also, the literature survey: before starting the project, everyone on the team was from a different background. We had to brainstorm together and do research about the healthcare domain, as well as the current market value of physicians: how the market values them, what the deciding factors would be based on the data, which columns we should include, and which columns we should get rid of. It was time-consuming, but as soon as we were on the same page about which attributes to bring in, it was just a matter of time until we decided which feature-selection algorithms to use.
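Putting the pieces together, here is a hedged sketch of the dummy-variable step using the color example above, followed by clustering with flexclust's kcca; the data and SME weights are made up for illustration, and the team's actual kcca configuration may have differed:

```r
library(flexclust)

df <- data.frame(color = factor(c("red", "blue", "green", "red", "blue", "green")),
                 score = c(10, 4, 7, 9, 5, 6))

# One 0/1 dummy column per category: the "red" dummy is 1 when the color is
# actually red and 0 otherwise, and likewise for blue and green.
dummies <- model.matrix(~ color - 1, df)
x <- cbind(dummies, score = df$score)

# Hypothetical SME weights, folded in by scaling each column by sqrt(w):
w <- c(1, 1, 1, 2)
x_weighted <- sweep(x, 2, sqrt(w), `*`)

# kcca with the k-means family; changing w changes the resulting clusters,
# which is the weighted/unweighted distinction the team wanted to see.
fit <- kcca(x_weighted, k = 2, family = kccaFamily("kmeans"))
clusters(fit)  # cluster assignment per row
```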
And there was one more major issue we faced, though I wouldn't call it a problem; it was more of an opportunity for us to work with a new client and work on ourselves as well. Whenever we got an algorithm or some machine learning work to the design result the client wanted, there would be a change in the requirements from the client side. There were a few times where we had to take a couple of steps backward and start over, but that showed us what we had done and was a learning opportunity for all of us. It doesn't matter whether you're at a beginner or intermediate level as a data scientist; there are always small and major mistakes you make, but they are always a good opportunity to learn. Lastly, I would like to thank Professor Saltz and Thomas Adler for giving us this opportunity. I'm really honored to present this project in front of all of you, and we're happy to answer any questions you have. Thank you so much.

So, I think many of you will have the opportunity to ask both teams about their projects, and you can even ask questions across all of the presenters at the same time. While the next team comes down and gets set up, I'll just point out, for the room as well as online: they used technical terms, about R packages and machine learning and things like that, and you might think, I don't know what those mean. That's all learned in the iSchool. If you've just taken the introduction to data science class, you haven't covered all of them yet; if you take those classes, you're going to learn all of those and some more as well.

Thank you, Professor. Good afternoon, everyone. Firstly, thank you for joining us today. I'm Shweta Rane, one of the program managers at the iConsult Collaborative, and I'm very excited to be here with my teammates, Arunima and Mobile. I would like to thank Professor Art and Professor Saltz for giving us this opportunity to present on the occasion of Data Science Day. Today, we are going to give you some insight and provide you with some updates on our project and what we are doing at iConsult.

Our client is Nulia. Just an introduction to what Nulia Works is: Nulia Works is a software-as-a-service application that coaches users in utilizing digital technologies through personalized skills development. When I say personalized skills development, it means it unlocks the value of Microsoft 365 in the way that works best for you. This is a snapshot of the existing application: the Nulia Works framework inside Microsoft Teams. As you can see, there are menus for tools, skills, outcomes, showcase, insights, et cetera. What it gives the user is a snapshot of their progress and of where they currently stand. For example, in the skills section it might show that you have eight or nine skills and five skills need attention. Now, what is a skill? A skill is something as simple as organizing your Outlook inbox with folders, or communicating effectively through Teams. Anyone who is struggling or doesn't know how to use a feature can just go to the right option and follow the step-by-step guide to do what they want. That is how the Nulia application works. So where does iConsult come into the picture?
What we understood is that the client is currently aiming to move into the educational domain and to assist students in the same way. So we as the iConsult team help them, firstly, to gather the requirements and understand what exactly they need from us. Secondly, they are currently using Excel and Excel charts for their reporting, so we are providing them with efficient Power BI dashboards, and of course generating insights from the same data. In this way, we are helping the client unlock a more efficient way of doing visualization using Power BI dashboards. My teammate will now explain what we have done so far.

So the initial thing, as Shweta said, was requirements: that was the first thing our team members worked on. It was to understand how the data was being generated, what the data pipeline was like, and how it was being collected. The data was generated on a weekly basis and stored in CSV format. Understanding all of this was necessary so that we would know, down the line, how the flow of data into the dashboard would affect it.

After that, the next important process for us was data preprocessing. Understanding each variable, what it meant, not just by definition but in terms of its business meaning, was important for us, and finding the inconsistencies in the data was another important step before we went on to make the dashboard. Now, as for inconsistencies, I'm pretty sure you've come across nulls and duplicates. We were facing those problems too, but just identifying that these are the nulls and these are the duplicates was not enough. We had to understand whether each one was generated by an error in the data pipeline or was there for a reason. For instance, there's a login column which records the login time of the person. If it's null, it means the user never logged in. It's blank, so should we drop it? No, because it's telling us that the user never logged in, so user activity will be low in that area; the null itself is information.

Similarly, another important task we had was formatting the columns. The data was generated on a weekly basis, and the client also wanted reporting to happen on a weekly basis. So when we were plotting the graphs, they wanted the x-axis to appear as weeks, not as days, months, or years. We had to format the data by applying transformations in Power BI, which has very powerful data transformation features (both points are illustrated in the sketch at the end of this part).

After mapping out all the data source variables and sorting them, our next step was to visualize the data in Power BI. This is what their existing dashboard looked like: it's text-heavy and not intuitive. It's just one big line chart crammed with all the variables, plus one small universal headcount chart which is also not very readable, and too much is going on in this one dashboard. It was not helping them get to insights quickly. So what we did was create a dashboard like this in Power BI. I'll try to show it if I can. Yep. As you can see, it's very interactive and very intuitive, and in one view you can understand what is happening and what is going on. We have a filter section where everything is available, and it's very quick for getting various insights.
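The team did this work inside Power BI; purely as an illustration of the same two preprocessing ideas, keeping the null as a signal and bucketing dates into weeks, here is a small R sketch with hypothetical column names (the real Nulia data is private):

```r
library(dplyr)
library(lubridate)

usage <- data.frame(
  user  = c("a", "b", "c"),
  login = as.Date(c("2023-10-02", NA, "2023-10-05"))  # NA = the user never logged in
)

usage <- usage %>%
  mutate(
    never_logged_in = is.na(login),           # keep the null as information, don't drop the row
    week = floor_date(login, unit = "week")   # weekly buckets for the x-axis
  )
```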
So what we did with the new dashboard is that we separated each metric, which they had been working with in one graph, into different dashboards, to get a more in-depth analysis of each view, and it was much more insightful for them. My teammates have just walked you through the full journey of the project.

So, hi again. Speaking of our reflections on what we actually learned from the project, not just the technical aspects but working with a client on a live product. The first thing we had to prepare ourselves for was to continuously engage with the stakeholders, which in this case was the client, Nulia Works. When we initially started working on the project, we had the actual requirements from the client, but they were not very clearly defined at a granular level. So we had to continuously communicate with them, understand what the user requirements were, and try to figure out what we needed to do. For the requirements-gathering part, we had to communicate with them once, or in some cases twice, a week to figure out what they really wanted from the dashboard and visualizations, and what sort of insights they wanted to generate from the data.

Another point we really had to focus on was experimenting with different chart layouts, dashboard designs, and different transformations of the available data, to make sure the client really liked what they were seeing and what we were generating, because producing a plot like a line chart or a scatter plot is easy, but the value the client gets from the plot is what matters.

Actually, the most important point we learned from the whole project is how different it is from the academic projects we usually do. One simple example is the selection of color schemes for the plots. In our academic projects, when we have to work on a visualization, we can just choose colors we like; that's totally fine as long as it meets the requirements for the course. But when we are working on a real project, where the dashboard will be deployed in software the client is building and planning to sell, we have to consider a few more things. For example, when we were choosing color schemes, one of the suggestions we made to the client was to choose something called a perceptually uniform color map. One major advantage of perceptually uniform color maps is that they are easier on users who have color vision deficiencies. If someone can't distinguish red and green, and you make a dashboard all in red, it doesn't help them see gradual changes. Another advantage is that changes in the actual values show up with the same strength in the color change as well. If you're interested in data visualization, I really encourage you to learn more about perceptually uniform color maps, because they're quite interesting and good to consider: the visualizations you end up building might be used by a lot of users who have color vision deficiencies.
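If you want to try this, the viridis palettes are a standard example of perceptually uniform, colorblind-friendly color maps, and ggplot2 ships with them; a minimal sketch:

```r
library(ggplot2)

df <- expand.grid(x = 1:10, y = 1:10)
df$value <- df$x * df$y

ggplot(df, aes(x, y, fill = value)) +
  geom_tile() +
  # Perceptually uniform: equal steps in value read as equal steps in color,
  # and the palette stays readable under common color vision deficiencies.
  scale_fill_viridis_c()
```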
Another point that differed from academic projects was that the dashboard we were building was being deployed live. When we build dashboards in our courses, you get one dataset from the professor, you build the visualization, and you're done. The transformations you make while building the dashboard work fine for that dataset. But for a dashboard that's deployed live, we have to make sure that a new dataset uploaded to the dashboard goes through the same preprocessing you applied to your dataset when you were building it. A simple example, like Arunima mentioned, is removing the null values, or deliberately not removing them. Once you decide that while building the dashboard, you also have to make sure the same transformations are applied when a user of the dashboard uploads a new dataset (there's a small sketch of this idea below). So yeah, these are our reflections as a team. Thank you.

So thank you very much to both teams. We'll take questions from the virtual audience as well as the live audience, but I'm going to start it off. How did you get involved in iConsult? Who wants to take that one?

Hi, I'm Arunima. When I first came to Syracuse University, I talked to alumni and to three of the professors, and that's how I got to know about iConsult. My professors and my seniors suggested I not wait, and apply to iConsult as soon as I saw an opening on the iConsult LinkedIn page. Don't sit on the application; keep applying. As soon as they find the requirements they need, and if your resume matches those requirements, they pull up your resume and bring you in for an interview. I would say keep on applying, again and again; it helps keep your resume near the top of the database. That's how I approached it, through LinkedIn and through a few of the previous project managers who had graduated.

All right, any questions in the room? Yes, go ahead.

A question for the second group: is there any component of continual training for the client? As you finish the project, will they be able to maintain it? The question is, do you have continual training on this?

That's a good question. That's actually the last phase of our project, which we're working on now, because Power BI is also new to our client. We are writing documentation on how we did it and what we did, down to the smallest detail, so that it's easy for anyone who's going to use it to understand and implement it on their own.

The documentation part is actually a very important part of a data science project, which is why I relayed that question.
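Coming back to the live-deployment reflection above: one simple pattern, sketched here in R with hypothetical file and column names, is to wrap the preprocessing in a single function so that every new weekly CSV goes through identical steps:

```r
preprocess <- function(path) {
  df <- read.csv(path)
  # The exact same cleaning steps used while building the dashboard:
  df$never_logged_in <- is.na(df$login)
  df$week <- lubridate::floor_date(as.Date(df$login), unit = "week")
  df
}

# Every weekly upload gets identical treatment before it reaches the dashboard:
# this_week <- preprocess("usage_2023-10-06.csv")
```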
What's the deployment cadence for the dashboard?

That's a part we're still discussing with the client, and there are pros and cons. Our client also has its own partners and clients, and for one of them, they probably don't want to deploy it fully live, because it contains sensitive information about the employees. So one option is to provide them with the actual dashboard file we created and let them deploy it in their own environment. And as I mentioned, the data is being generated at a weekly level, so the dashboard would be refreshed at that cadence. I think the weekly report is generated every Friday afternoon, so probably once a week, to answer your question.

Sure. I have a question regarding the datasets. Could you please elaborate on what kind of datasets you had and how you gathered them? Were they from a third-party vendor? How did you go about collecting the data?

Yeah, so talking about the data: we can't disclose all the information in the datasets, but it was US Census data we were working on, similar to the US Census datasets you find on Kaggle. The tweak in our dataset was bringing in the attributes that were really important when working with the client. The features in our dataset went beyond the Kaggle version; for example, for cities, the Kaggle dataset just gives generic city-level values, whereas this was real, client-specific data, and the client's information was very private to them. We were provided the data in CSV format, and we cleaned it in Excel, and we also tried different techniques like bringing it into Power BI. Dashboards were not the key deliverable here, though; the goal was the filtering feature in our data that could help users look at physicians specific to an area (there's a small sketch of this below). Say you want to look at physicians from Syracuse: the filter feature is just a dropdown menu that gives you the option to see the physicians residing in Syracuse. So that was how we got our data, and with a little bit of analysis, we came to our conclusions.

Yeah, and in our case, the data was part of a pilot study that the client conducted. So it was not a very huge dataset, because, like I mentioned, it's for one organization, and there is a limited number of users or employees. We were getting usage data for around 500 employees, across four or five different categories: roughly five CSV files, each with about 500 rows and ten columns. As for how we were getting it: the Nulia team members were manually fetching it from the web page and then sending it to us. That's how we were getting the CSV files.

So for many projects, getting the data is a challenge right there.
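And as an illustration of that dropdown filter, a tiny R sketch with made-up records (the real Census-derived dataset is private):

```r
physicians <- data.frame(
  name = c("A", "B", "C"),
  city = c("Syracuse", "Albany", "Syracuse"),
  zip  = c("13210", "12207", "13244")
)

# A dropdown selection like "Syracuse" reduces to a simple subset:
selected_city <- "Syracuse"
physicians[physicians$city == selected_city, ]
```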
Yes. I have a question for the first group. Was there any sort of hypothesis testing done as well, to identify the relevant features?

So the question was about hypothesis testing. That is actually our next phase; so far, we filtered and categorized the data using different techniques. We will do hypothesis testing only if the subject matter experts are interested in it. If they are not interested, we will not go down that path. The primary aim was just to add a filtering feature, and every micro and granular level of detail was provided by the client. Since he wanted to add his own custom selected weights, we made the code really flexible, which allows the user to add his own custom weights in whatever way he wants. That was the first deployment phase we had. The next deployment phase, and I'm not going to completely disclose it, involves bringing our data up to the cloud level, and then further development comes into play. But as of now, we did not really do hypothesis testing; nothing in the project actually required it. That's a really good question, thank you.

Sure. How much of your time do you contribute per day to this? And second question: how often do you interact with your client, and what methodology do you follow? Is it agile, or waterfall, or what do you follow?

That's a great question. So the question is about how much time you spend on the project in an average week, how much you're expected to spend, which might be different, and what methodology you follow.

For our project, I would say we are using an agile approach. Every week we have a meeting; most of the time it's an internal meeting where the team discusses what we have done so far and what we have to do next. And sometimes, depending on the client's availability, we have a client meeting where the client reviews what we have done so far and gives the next input. If they are okay with it, or if we need to make any improvements, we do that, and then we go to the next, I would say, sprint. So that's the approach we follow.

And how many hours? It depends. When I had a good amount of time, I used to give around seven to eight hours per week, but given the academic load we have, it's usually one or two hours, really based on the requirements the client has. If there's something really important going on that week, you plan your week accordingly and try to give as much as possible. There's no limit on how much you want to give or don't want to give; it's really flexible. You just need the task to be completed. As she said, we went with agile as well, and given the academic load we had, we had to commit around that: on average, throughout the semester, two to three hours. But if you have a Thanksgiving weekend or something, you can work seven to eight hours. And honestly, I never counted the hours I worked, because it was real fun. Our clients already know that we are doing this part-time and have other work to do, so it's very flexible; you don't have to worry about the clients and what's going to happen.

For my project, I would say it was really a hybrid, of course. What happened was, it took me some time to understand the requirements; it took around two weeks to gather them, because I wanted to know what the existing, current application was doing and what the client expected from us.
So two weeks went by understanding the requirements and gathering all the data. Then, once that was done, we started working on the data, and we first did various trials on how we could move forward. As the data analysts, we were having maybe one call per week, and the time spent, for me personally, was not more than two to three hours, depending on the week.

I'm going to ask a question, and then we'll come back to more of your questions. You mentioned that one of the things you learned in the project, and this is actually about both projects, was gathering the requirements from the customers, who kept changing what they wanted. That's very different from all the nice, clean projects you get in your courses; if you get a project in your class that's well-defined, that's because we have to be able to grade it. So my question is about the other kinds of skills: for example, the different technologies you used, the machine learning models. How much of that did you learn because you had to do it for the project, and how much because you learned it in a course?

In my case, I didn't know the tool before I started this project; I had never used Tableau before. So I did spend some time watching YouTube videos and doing practice exercises.

For me, I was introduced to R in class by the professor himself, and I knew a few analysis techniques before coming to Syracuse University, but this was a completely different approach. For machine learning, we learned a few techniques in class, like support vector machines and three or four others, but I had to do my own research and learn a lot about some of the features in R for machine learning. I had to learn a lot about packages and different functions, and about how machine learning works for categorical and other kinds of values. Before starting on any of it, I felt it would be a heavy job for me. But standing here now, I feel I learned a lot, and I'm just glad I didn't miss this opportunity.

Well, I have a little different perspective on this, because I was one of the project managers for the team. I had taken the project management course and I definitely learned a lot from it, because it gave me exposure to the different models, as you said: agile, waterfall, even the SDLC. But of course, it was new for me to handle different clients in different domains. With each client, there's experience you gain and you learn a lot, and applying those techniques, I can now say we used a kind of hybrid model that came about through trial and error, and I learned it on the go.

Thank you. Other questions? I think I saw a couple of hands. I have a question regarding the post-deployment phase.
You said that you did the project and made it live in a production environment. Now, how is it going to get handed over to the client? Are they going to take it over? Are you still the support person for this project? How is that going to work?

That's a great question. So the question is, what happens going forward? Does this continue as a project, or is it handed off?

For us, we're still discussing that with the client. Some of them want us to just give them the final deliverable we created so they can use it. I don't see a scenario where we have to support a live deployment; I think they just want us to develop it and do a knowledge transfer on the transformations applied to each element. Once we do that, I think they'll be able to run it themselves.

For our project, that's actually the next part of the deployment: how we hand it over to the employees and people on the client side. Once the next phase of development has been done, they're planning to put this code on a cloud-based platform where anyone can access it. And since it's just a feature-filtering program and not a dashboard, it doesn't depend on live data, so it would be really easy to hand it over to the users so everyone could have access. But I'm not really sure when that time will come, because other teams will have to do that part of the job.

Also, one more thing that I would like to add: before starting any project, we have to sign a non-disclosure agreement with the client. So once we meet the requirements, we hand over all the documents and all the work to them, and then we no longer participate in the project. So that's how the transfer happens once we're no longer on the project.

We have time for one more question. Any other questions?

I have a question for the second group. How many layers of user access management are there for the dashboard? Depending on the roles and levels, for example, a director might have a different view, or a manager might have a different view.

Questions about access management are exactly the kinds of things you learn in the classes here as well.

We are actually not concerned with any of that, because, like I mentioned, we just have to build the dashboard and give it to them. And the data we're getting right now is the file she mentioned, which is manually fetched for us: they essentially export it, send it to us, and we just take the data from there.

That's a good one.