Okay, good afternoon, everyone. It's a pleasure to be here at EuroPython and have the opportunity to share our experience in the government of the state of Goiás during the pandemic, and how digital transformation helped us give information to society and deliver better public policies in this difficult period. So today we will discuss COVID-19 and the impact of the increased demand on our service channels here in the state.

We had a huge problem: COVID-19 increased the demand for complaints in the ombudsman channels of the state government. People were very angry. Here in Brazil we had many rules about social distancing, and people had doubts about what needed to stay open or closed, and which services could keep working or not. These doubts hit our service channels hard, and we didn't have the infrastructure or the people to support the load. With artificial intelligence we could improve this and give each citizen good, specific support for each demand.

So our main objective was: how do we apply machine learning to identify interaction profiles in the ombudsman channel of the state of Goiás and give specific support for everyone? Before writing any machine learning code, any Python code, any specific model, we needed to identify and understand the problem. First, we analyzed how the population, the citizens, interact with our channels. In the next step, we built a descriptive analysis of these interactions. Then we did a text analysis and used clustering techniques to identify personas, to identify the ideas we needed to support. With these personas, we had the opportunity to build protocols and recommendations to give specific responses.
So this is our peak. In the pandemic we had peaks and waves, and here in the state of Goiás we had them too. There is a normal level of demand on our complaint channels; after the social distancing measures, this demand increased sharply. We did not have the infrastructure, we did not have the call-center attendants to support this increase in demand, so we needed to build something specific to respond to the problem. With artificial intelligence and machine learning, we could give a good response.

To improve our channels and the interactions, we built a text analysis with an artificial intelligence model. This is our simple frame of the interaction: we take the manifestations (the citizens' messages), apply NLP, apply k-means clustering to identify personas, apply a text classification model to assign messages to those personas, deploy these models in a web app, and build a data visualization so we can take better decisions at the moment we need them.

For the core of this model we used TF-IDF. TF-IDF is the basic model when you talk about text analysis: it weighs a term's frequency in a document against how common the term is across the whole corpus. We used k-means clustering to identify the subjects of interest, which gave us a big picture of these interactions. Of course, this model is in Portuguese, this word cloud is in Portuguese, but we can still identify some strong words like employees, companies, the district, the service, other companies, the government. So we have a big picture of the idea, and there is a logic in the background of these interactions: we have citizens complaining about a company.
We had employees complaining about services that kept operating when, at that moment, they should not have been operating, and we had complaints asking whether a given activity should be working or not. That is the main logic in these interactions. After that, we tagged each cluster. So we had a cluster of interactions about running activities with employee agglomeration in a specific service; citizens complaining about open services; employees requesting protection; services that remained open; companies working behind closed doors despite the decrees; stores reported as open; bars that were open and operating.

This is important because we don't have the energy, the staff, or the infrastructure to attend to everyone. These personas gave us the idea of where to focus our energy, on which specific services. For example, an open bar is not a service we agree should be working, so we could act on that specific service. In the cluster about operating activities and employee agglomeration, we found specific cases that were reported: construction sites, universities, IT companies. For example, one specific store in our district generated more than 20 requests in one hour. So we built specific protocols for each of these interaction clusters.

Here we have a specific word cloud for these ideas, with strong words such as employees and companies, and we built a model to understand how these words are related. The word "employee" is very close to "risk": putting these workers at risk, putting these clients at risk, making these workers work at risk. We also had sentences complaining about open services. There were many doubts: many of the services were allowed to open, but the citizens had doubts about this.
So we resolved this with specific protocols and specific ideas. We have administrative services, parks, car washes, hotels and motels, and people did not know whether these services should be working. In the word counts you can see that the word "working" is very strong ("working normally") and how it relates to the other ideas.

I hope you can see my Google Colab. The idea here is to show in detail how we built this model. We used some specific libraries, such as NLTK, among others. This is our dataset from the ombudsman system: we have the protocol, the manifestation, the location, and what we analyze here is the manifestation, which is the text, the message in which the citizen asks the government to act on a doubt or another issue.

We process this text. For example, a bigram analysis gives us good insights and helps us identify specific services: we can see that an ice cream shop, for example, is still working, and which words are related to it. We have our word cloud about these topics, and we start processing the text to understand how specific words interact. For example, the word "working" is strongly related to "the company is still working", "the company is still open", "the company is still working normally". So we have the opportunity to understand these interactions.

Here, as I showed in the slides, we import the TF-IDF vectorizer, normalize the text, and create a vector from it. We take the manifestation text — the text that people typed — and transform it into a vector with this library. After that, we apply a clustering technique to identify the interests in the interactions. We define eight clusters, fit the model, and obtain a cluster for each interest.
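A minimal sketch of the vectorize-then-cluster step the Colab walks through, assuming scikit-learn. The toy corpus and the choice of 3 clusters are illustrative (the talk fits 8 clusters on the full dataset):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy messages grouped around three themes: open commerce, employee risk, doubts about rules.
docs = [
    "company still working normally with agglomeration",
    "store open and working despite the decree",
    "bar open with many clients and agglomeration",
    "employees working without masks at risk",
    "no protection for employees at the coffee shop",
    "workers exposed to risk without protection",
    "is the car wash allowed to stay open",
    "are hotels and motels allowed to work",
    "should administrative services stay closed",
]

X = TfidfVectorizer().fit_transform(docs)

# The talk fits 8 clusters on the real data; 3 is enough for this toy corpus.
km = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = km.fit_predict(X)
print(labels)  # one cluster id per message
```

Tagging each numeric cluster with a meaningful name (the "personas") is the manual step the talk describes next.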
So this is the distribution by cluster. Each number represents a cluster, and the hard work is to tag and give a meaning to each one; that is what I presented in the slides as the interaction clusters. After we defined the clusters, we had defined the groups of interest. The idea now is to predict, from the text, which cluster each message belongs to. That is what this program does.

Here we apply machine learning techniques. We have the clusters, and we have the model that we train: a classification model whose idea is to predict the cluster, the target variable, based on the manifestation. When a citizen types a message, which cluster does that text most probably belong to? We take the clusters we defined, tokenize the text for the classification model, define the target variable, and define the predictor variable, which is the vectorized text.

We experimented with some classification models. First we tried a multinomial naive Bayes model with these predictors and this target; it did not reach a good accuracy, around 56%. We tried a second model, which also did not perform well. We tried a decision tree model and saw an increase, 66% precision. But the best-performing classification model was the random forest, so we chose the random forest to predict, based on the text, which interaction group the text belongs to. Here are the scores for the text. After this, we export the random forest model to deploy the application with the model in production. We predict with this model based on the text and obtain a probability distribution of the text belonging to each cluster.
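The model comparison described above can be sketched like this. The tiny labeled corpus, the split, and the hyperparameters are assumptions for illustration; the talk reports about 56% for multinomial naive Bayes, 66% for the decision tree, and the random forest as the best model on the real data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Toy labeled data: text -> cluster id (in the talk, labels come from the k-means step).
docs = [
    "company still working with agglomeration", "store open despite the decree",
    "bar open with many clients", "commerce working normally today",
    "employees without masks at risk", "no protection for the workers",
    "workers exposed to risk", "employees asking for protection",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=42, stratify=labels
)

best = None
for model in (MultinomialNB(), DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, accuracy_score(y_test, model.predict(X_test)))
    best = model  # the talk keeps the random forest, the last and best performer

# predict_proba gives the probability distribution over clusters for a new message,
# which is what the deployed web app reports for each manifestation.
proba = best.predict_proba(vectorizer.transform(["store still working with no protection"]))
print(proba)
```

The chosen model can then be serialized (for example with joblib) and served behind an HTTP API, as the talk describes for the production deployment.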
So here we have, for example, this message and the probability of this message belonging to each of the clusters. This message was predicted, with good confidence, as cluster five, and we export this model as an API for the system to access.

In the end, we have a good result. This is the system we deployed on Heroku. You type a manifestation — the message is in Portuguese because the model is in Portuguese. For example: in the district of Águas Lindas, the shops are still working normally, with a lot of agglomeration, and they don't respect social distancing. And remember, that district is very crowded. So, which group of interest does this message belong to? With this classification model and this segmentation of the text, the model answers: this message is closest to "activity still working with employee agglomeration".

We have another, different message, for example: the coffee shop is still working normally today, and we don't have protection for our employees. Which cluster does this message belong to? We get the cluster of employees asking for protection. This is important because it gives us a specific target for how the government needs to act on the complaints.

So this was the idea of the presentation: to share our experience and how digital transformation can help us in the COVID-19 pandemic. We have our contacts — I am on the Discord — and we have a GitHub with the code, and we can discuss it at another opportunity. Okay, thank you.

Thank you, Bruno. Thank you very much, that was super nice. So I'm going to play some songs.