You can have a seat; the bathroom is over there. There are two tables here. Okay. Welcome to this Montréal-Python meetup. My name is Olga and I'm happy to be the master of ceremonies today. So everybody, welcome to Montréal-Python 103, "Recurred Rhythm". I'm Olga and I'll be the MC for tonight. We are only half serious. I want to thank the whole HalfSerious team — thank you for welcoming us here. And I just want to apologize for my French; I only started learning in October. Okay, so how do we use this? Oh, scroll? Okay. So, you can join the conversation if you'd like: join us on Slack and on our lovely website — there'll be more on that at the end. Today we have me here, then a presentation from HalfSerious by Philippe, and also a presentation from the Montréal-Python team. Today we're talking about AI, GraphQL, PyScript — lots of things. As you can see, it's all up here; I think my French was pretty self-explanatory. And by the way, we'll have a little break in the middle, just so we can get up and move our feet — la pause. Thanks to FJNR for supporting us. Our first presentation will be given by Philippe Chrétien. He has been with HalfSerious since 2017 and is currently director of technology and vice-president. He's going to present HalfSerious to us. So, first is Philippe: he is the director of technology and vice-president at HalfSerious, and he's going to tell us all about it.

Thank you. Thank you for coming — it's great to have you here. It's the first time we've hosted a meetup like this, with all the gear we've set up. My name is Philippe Chrétien. I attended my first Montréal-Python in 2008 — so I would guess not many of you were there at the time. And another one in 2011, where I presented a small robot activated by political tweets.
I was with Yannick at the time; he invited us to present. So, what about HalfSerious? Oh boy... okay, sorry guys, that's another presentation — I want my presentation. Okay. We're a company of about 60 people. We always say we started 13 years ago, but back then it was just the founder and Clara, the two of them doing design. We've been doing software since 2017, when I joined — so about eight years. We have a "strong culture," whatever that means; that's the pitch for selling software, so let's get to the next slide quickly. All the projects we do are 100% new builds — we don't take existing software and maintain it for other companies. We fix problems: clients present us with their problems and we figure out the best way to solve them. These are a couple of our clients — actually, not a couple, it's all of them. We have clients here in Canada and in the US. One example is a mining company, for which we're doing a digital-transformation project. As you may have seen in projects at your own companies, they have many systems that each own their own data and don't talk to each other. What we proposed was to bring all that data into a data lake and then into a data warehouse, so they can cross-reference data from different systems instead of working in silos and then trying to generate reports. These big companies are very report-oriented, and we're trying to bring them to the next level, where they don't need a report to make a decision — they can make decisions directly on their data. Once that cloud infrastructure was in place, we also built a prototype for an AI project on top of it: an LLM with a RAG system using LangChain.
So we interrogate that data using natural language, so they can query it conversationally. Another project is with Astus. They sell a fleet-management system; one client you know for sure is Communauto — Astus is the fleet-management system behind Communauto. They also do fleet management for school buses and for vehicles in airports, so they track a lot of different vehicle fleets. We're doing the same type of project for them: a new UX for the FMS, and we're also re-engineering the backend together with their team. We've always worked with AWS so far, but it's not a religion — just a choice that was made. DocMagic is a client in LA; they sell mortgage documents. In the US, the first bank to send you a disclosure package — the first package to sign — is the one you have to refuse before you can move on to the next one, so the faster they are, the better they do in the market. We help them generate and convert these documents into digitally signable documents. We also built Autoprep. Autoprep is a complicated thing: you take a hundred-page document, find all the signature fields in it, and generate a document that fits into their own system so it can be signed digitally. There's OCR, there's image recognition — that's why I put it up there; it's a very interesting project. We say we do things differently — we always say that. We do the standard product design, software development, and maintenance, but as you can see from the slide, we put a very strong accent on UX research and design; it's very important for us to differentiate on that. We always say the worst thing that could happen is to build the wrong solution very well.
So we make sure that before we start coding, we've been through two or three months of user research with our teams, to make sure we have the right solution. AI work is a new thing for us. We've done a couple of projects — all pretty small, nothing very big — but all our clients really need support to understand what's going on with AI. It's surprising how confused people are, especially at the C level in the enterprise; they need guidance to decide what their next best move in AI is. For the mortgage client we built an RFP-response tool. They have to respond to hundreds of RFPs each month, so for all the questions in those RFPs we built an AI tool: they enter the question, and since we've trained the whole system on all of DocMagic's documents, the AI can respond very quickly and they can fill in their RFP responses much faster. We're doing the same for sales. There's the PDF analysis engine — that's Autoprep, which I presented on the other slide. And there's AI-driven optimization for the mining company I mentioned before: they have a tool that contextualizes the data, and when the context is very narrow we can prompt the AI with all the actions the user has taken in the UI, so we can pinpoint exactly what the person is looking for. Instead of asking a broad question, you select your date range in the UI, then select some actions, and at the end you can chat with it — we can prompt the AI much more efficiently that way than by just asking questions. Then there's communication and services. That's a very important part of HalfSerious — a key distinction between HalfSerious and other studios. We like to call ourselves a studio, like where Eric worked in the past; that's why we have this kind of look everywhere, but we want to attack problems with a different mindset.
Not just doing software for software's sake — we try to solve the problem and engage the client in the solution. I'm sure many of you have worked in places where you built fantastic software, but when you tried to deploy it, nobody wanted to use it, because they were scared or didn't know how to use it. With all this change-management work, if you bring in the UX team and the designers at the beginning of the project, it's much easier to deploy the new application and make sure it's used by everybody. That's a proposal we did for ArcelorMittal: when we're done with a truck and it's fully digitalized, we'd give the truck a paint job too, so the driver would be happy to be part of it. We're partnered with AWS — actually, we're in the process of becoming a partner; it's a bit exclusive, so we're doing our best to get there — and we also work with Mittal on a couple of AI projects. That's it; I hope that wasn't too long. I don't know what this slide is — it's a QR code. So, that's us. Any questions? I guess not. Any questions for anybody? No? Sorry? Oh, yeah, sure, I can. I built it. That's the system I described: the problem is that very often in the mortgage industry they ingest documents from competitors. DocMagic has its own documents, but they want to be able to notarize and finalize documents that come from others, so they can steal their clients. The problem is that most of those companies don't have any digital-signature system — they generate everything on PDF and paper, and then it's stuck in paperwork. So what we do is ingest those documents — you can scan them; it doesn't have to be a native PDF — and send them to Autoprep, the tool that does the PDF recognition.
Then, with image recognition and OCR, we find all the signature fields and prepare the document to be signed. — I'm assuming the reason you're having these problems is that the PDF is really just a bitmap? — Yeah. It can be a native PDF, but most of the time when it comes from other companies it's a bit nuts. PDFs have considerable structure, and it really depends on the tool that made the PDF. PDFs are a mess — it's crazy how the format has evolved from PostScript to PDF, but it's very messy, and most of them are just an image embedded in a PDF frame. So that's the strength of this tool we built for them: you can ingest native PDFs or scanned PDFs. In the end we convert everything to images and work only with the images — it's just one more step if we have a native PDF. Thanks for your question. Thank you very much.

So, if you didn't happen to scan those links, I do know that HalfSerious is currently looking for full-stack and front-end developers, as well as AI engineers. If you'd like to talk to somebody about those positions, we have Zineth at the back — she's waving; she's wearing the black-and-white jumper. Okay.

Hello everyone. Just to say in English: this is a chance for you to improve your French — I've been improving my English, but I prepared my presentation in French; the whole website, though, is in English. If you ask questions in English, I can try to answer, and maybe others on the team can help. But I'm going to do my presentation in French. [Translated from French:] So, happy new year 2024. For this first event of the year I'm going to talk a little about Montréal-Python in real life, and I've titled my presentation "Montréal-Python in real life: yesterday, today and tomorrow," because there are some changes coming — that's a little spoiler to introduce the presentation.
Just as a reminder, for those discovering Montréal-Python tonight: the association has existed since June 2009, and its goal is to introduce and promote the Python language to anyone interested in it. The idea is to share, learn, and collaborate with the members of the community. You're all here, so you're all part of the Montréal-Python community — thank you very much for being here tonight. We're proud to promote inclusivity: everyone is welcome, whatever your origins, your gender, or your level of Python. Whether you're a beginner or an expert, you're welcome in the community. And the community in Montreal is, as the name Montréal-Python suggests, bilingual. In that spirit we try to keep events balanced — that's why we have presentations in English and in French, and we try to communicate in both languages. So that's the origin of Montréal-Python. The association has now put on 103 conferences — you're at the 103rd. They were initially in person; during COVID we went virtual, and today we're in hybrid mode: you're here physically, people are watching us live, and the talk will be broadcast on our social networks. Alongside these conferences there have also been 44 events since 2015: workshops to learn Python or to improve in other subjects like machine learning. We've had contribution sprints, active contributors, and even official translators working on the French translation of the Python documentation, to make learning accessible in French. There were hackathons, and there have even been barbecues — I admit I didn't know about those, but it's part of the history of Montréal-Python, so you should know it.
The association also helped host PyCon, the international Python conference, in Montreal in 2015 — a point of pride. Maybe someone here was there; I wasn't in Montreal yet, but it was a beautiful success for the association. We also have an online presence, with all these nice logos: our website; Meetup, where you may have seen the event; Slack, where the community can really exchange; YouTube, where you can find all the talks; and the traditional networks like LinkedIn and Twitter, where we may be reawakening the community. So don't hesitate to join us. In any case, Montréal-Python is first about the events, then about the online presence. And all of that is only thanks to the volunteers, whom we have to thank: volunteers who have been present for years — the list may not be exhaustive; I did what I could from the site — and the volunteers currently on the team. There's Doug; there's Yannick, who is online; there are David, Jules, and Ivan, who have been more involved in running the workshops these last few months; and then there are Noël and myself, who keep things turning. So that's the state of Montréal-Python. And Olga — sorry, Olga — she is our wonderful Olga, obviously. So that's the current Montréal-Python team. As a whole, the association evolves. The association has a board, currently made up of Yannick and Doug, whom we thank very warmly for their work. But time goes by, and Yannick — who has been here for quite a few years; I don't know if you can guess how many, you can ask him — let us know that he wanted to pass the torch to other people.
So we decided with the team to rebuild the board and to leave open the possibility of other people joining the association. During December and January we put out a call for candidates to find a new president, a new secretary, a new treasurer, and active volunteers for the association. After this call it turned out we had exactly one person per post, so there was no need to vote. So let me present the new Montréal-Python board: myself as president, Noël as secretary, and Doug as treasurer — always Doug. We have a new team, and we want Montréal-Python to carry on everything that has been done to serve the community, and to create new things. Right now we don't have any new events announced, but the team is here and we'll work hard to prepare the next ones. And even with a new board, we won't get far without our active volunteers — the ones already present, and the ones we're still looking for. Being a volunteer at Montréal-Python means helping run events, finding presenters, handling the equipment, or welcoming people. We also need help managing social networks and the online community; we're looking to restart the search for sponsors, to always offer more to the community; and we need a little administrative help, if there are people motivated to contribute admin skills. Above all, as volunteers we talk and we have fun together — that's the most important thing. So if that appeals to you, don't hesitate to come see me or the other members of the team I presented. The idea is to hold an in-person meeting to see how we'll prepare this new year. You're welcome to join; there will surely be announcements on social networks. The place is still to be defined — save the date.
I've talked a lot about the board and the volunteers, but Montréal-Python is also you, the community. Society is evolving, and surely your opinions are too, so we'd like your opinion on what Montréal-Python means for you and what you'd like to see in the future. There's a QR code — I'll show it around, and we're going to broadcast it — and we'd really appreciate it if you could give us your opinion, because it's important for us to do not only what we like but what you like; if nobody came to see us, it would be a bit of a shame. So go ahead and give your opinion — it's totally anonymous — and we'll take your suggestions with pleasure, to help the association grow. So that's the idea. Any questions? No? I'll be around. Thank you.

Thank you, Mélanie. I'd just like to repeat the last bit of what Mélanie said in English: Montréal-Python is about the people that come, about the community, and about the love of Python. So do get involved — if there are things you want to get done, things you'd prefer, and you don't tell anybody about them, they're not going to get done. Okay: now, the AI revolution — LangChain and LLMs. So, AI is going to speak to us; it's going to be exciting. We have two presenters, Bastien and Rashid. Bastien is a full-stack developer at HalfSerious, Rashid is an AI specialist at HalfSerious, and they're going to talk to us about AI. A round of applause, of course.

Thank you for being here. I'm going to start the presentation, and then my friend Rashid is going to take over for the second part. So, "AI speaks" — why? Because LLMs, large language models, are going to talk to us: the LLM revolution, with LangChain, a framework we're going to talk about. It's a different one... it's the second one... okay, it doesn't work. So, what's on the menu today: we're going to talk about large language models — LLMs — the LangChain framework, and then we're going to have a demo. Like I said, we're going to give you an insight into part of the future of software development. LLMs are advanced AI algorithms that enable you to basically talk with computers — natural language processing is part of it. They can understand text, generate text, and interact with text. They're part of deep learning, which is part of machine learning, which in turn is part of AI. A quick difference between ML and DL: ML usually uses less data, and you can understand what's going on inside much more easily, whereas DL usually uses much more data — terabytes, petabytes — and is considered most of the time a black box: you don't really understand what's going on inside. You know how it's built, but not how it "thinks" — like a brain, basically. Like I said, LLMs are built on extensive analysis of text data — text data only. Their capabilities are text generation, translation, summarization, classification, and other things like chatbots, coding assistants, and content creation. Some examples: the GPT series from OpenAI — you've probably heard of ChatGPT; Claude from Anthropic; and Llama from Meta AI (so basically Facebook), which is open source. I don't think Claude is open source, and GPT is definitely not. So how do you use them? You're going to need prompts. A prompt is basically a user input, or a machine-generated input, that helps guide your conversation. Its role is essential, because that's how you specify the task you want the LLM to perform; you give it context and information — it can be documents, it can be just the time of day, where you live, anything. There are two types: user-generated — your text — and system-generated — the answer. To give you some examples, there are three subtypes, let's say. System prompts: you give it a short text that tells it, "hey, this is what you are, this is your definition." Here it says "you are a helpful assistant that answers questions"; with that, it knows it's going to answer questions, it's most likely going to be a conversation, and it must be helpful. Sometimes a single word like that matters a lot, because this is still the early days of the field. The user prompt could be "what's the weather forecast for Montreal tomorrow?", and the machine is going to use its tools and everything to say "the weather tomorrow will be minus 9 degrees, and also it will snow." Prompt engineering is the art of crafting prompts to get the best results — to make it faster sometimes, and just more efficient in general. Its applications are similar to what I said before: question answering and reasoning. It makes your AI more natural and human-friendly — user-friendly — and also more customizable once you've got the basics. It's also helpful for safety: you can tell it, "do not say anything about the company; this is out of bounds; we don't want to talk about these topics." And it helps you integrate domain knowledge — everything about your company, your topic, mining, whatever you're working in. It's very important for tailored, user-specific interactions — personalization again — and, like I said, for efficiency and accuracy of the responses. Here are two techniques — there are many more, but I'm going to show you two. Zero-shot: you basically just ask for something without giving an example of how to do it. "Classify the following text: want to hang out. Classification:" — and the model generates whatever comes after "Classification:" and the colon. And few-shot: basically the same thing, but you give it some examples. I tell it "hey, this is dope; this is boring," and if it's a good model it's going to figure out why one is dope and one is boring, and it's going to know that it should use "dope" and "boring" as the labels — rather than positive/negative, short/long, English/French, or whatever other classification. So "want to hang out", again, gets classified.
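The prompt types just described can be illustrated without calling any model. Below is a minimal sketch that builds a system + user message pair in the common OpenAI-style chat convention, and assembles a few-shot classification prompt with the "dope"/"boring" labels from the talk; the function names are our own for illustration, and no API is actually invoked.

```python
# Sketch of the prompt types from the talk: system prompt, user prompt,
# and a few-shot prompt. No model is called; we only construct the
# inputs an LLM chat API would receive.

def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Pair a system prompt (the model's 'definition') with a user prompt."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Few-shot: labelled examples let the model infer the label set."""
    lines = [f"Text: {text}\nClassification: {label}" for text, label in examples]
    lines.append(f"Text: {query}\nClassification:")  # model completes this line
    return "\n\n".join(lines)

messages = build_messages(
    "You are a helpful assistant that answers questions.",
    "What's the weather forecast for Montreal tomorrow?",
)

prompt = few_shot_prompt(
    [("this party is amazing", "dope"), ("three-hour status meeting", "boring")],
    "want to hang out",
)
print(prompt)
```

A zero-shot prompt is simply the last line of `few_shot_prompt` with an empty example list — the task description alone, no demonstrations.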
Yeah — LangChain. It's the framework we're going to use to deploy LLMs. Its purpose is to streamline the development of context-aware, reasoning applications using large language models. It has four core components. There are libraries in both JavaScript and Python, with good component integrations and pre-built chains — plus a bunch of other pre-built pieces you can just plug in and use. You can really get started with a few lines and have a very simple proof of concept; there are templates, so you can copy some code here and there and customize it to your liking. LangServe — not production-ready yet — is a service for hosting your chains as web APIs. And LangSmith: think of it as Datadog or Graylog, but for your LLMs and your chains — you can see what happens, how long things take, the error rate, and a bunch of other information. Once more, the key applications are question answering, chatbots, and — this is a bit newer — decision-making agents; we'll talk about those later on. The key components: LLMs, prompts, and memory. If you want to chat with an LLM you need memory — you need to know what was said before and by whom, so you use a memory. Vector stores and document loaders are for the domain-knowledge part: you store your documents and index them with embeddings — we won't go deep into embeddings today, but that's how you index documents and retrieve them very fast. Chains are sequences of actions, and agents are sort of smart robots, let's say. LangSmith and LangServe I've covered, so let's go with chains. Like I said, they're basically a sequence of operations connecting different tools, different LLMs, different datasets. They have many advantages. They're modular: you can very easily swap one LLM for another. You're using GPT as the model but want to switch? You can just instantiate a different class — give it, I don't know, Claude instead of GPT — and use that instead. It's very clear, very simple. Reusability: you can create chains and export them — it's also an open-source project, I don't think I mentioned that — so your chains can be used by anyone. Observability: it has built-in helpers that let you see what's going on — the thinking, the reasoning, all the different steps — and you can also see them in LangSmith. And efficiency, with streaming: instead of waiting for the whole answer to be generated, you see it as soon as generation starts, so you get a much better time-to-first-token; plus asynchronous processing. Here's a very simple example. We do our imports; the chain; the vector store — a sort of database that stores the embeddings and the documents; your documents; your retriever; and then your embeddings. We have a list of documents — nothing crazy. We instantiate the embedding function and the Chroma database and just add the documents; it does a bunch of work behind the scenes and indexes your data so you can retrieve it afterwards by semantics. At the end, on the retriever, I set k=3 and a score threshold of 0.7: that means "give me the three most relevant documents for my query, and they must score at least 0.7 similarity." Then there's my chain at the very end, which is a built-in chain, as you can see. So now let's ask some questions. I call the chain's invoke with a query parameter — "which game is good?" — and you can see it found the most relevant document, and the answer is Minecraft. You could do far more than that; this is just a very simple example. And that's about it from my part.

I have a question: I was curious, if you're not using OpenAI, is there an open-source model for embeddings? — There are many of them. Let me repeat the question for the people online: if you're not using OpenAI, can you use an open-source model? Of course, yeah, you can. There are a bunch of built-in integrations for embeddings and LLMs. I'm not exactly sure how it works, but you can definitely do it; I think it would be running on your own machine, unless you use an API from a provider — Mistral, OpenAI, and Anthropic all have APIs, but some models don't, so you'd host the open-source models yourself if you want to go that way. That's also the point of being able to change easily. Are we good? — Regarding the documents and the types you're using for retrieval, do you know the limitations on the size of the context, specifically the size? — The size hasn't really been a limit so far. Sometimes it handles an amazing amount of data and is still able to get an answer in less than 0.05 seconds, which is very fast — it's really efficient; they use algorithms to make it much faster, with something like 99.8% accuracy, so they might miss a document, but they're still very fast. I think you had another question — sorry, what was it again? Anyone else?
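The retriever settings mentioned in the demo — top-k results filtered by a score threshold — can be illustrated without LangChain or Chroma at all. Below is a toy stand-in that scores documents with a bag-of-words cosine similarity instead of real embeddings; the documents, the `retrieve` function, and the 0.5 threshold are all invented for illustration (real embedding scores are not comparable to these word-overlap scores).

```python
import math
from collections import Counter

# Toy retriever showing what "k=3, score_threshold=0.7"-style settings
# mean. A real setup stores embedding vectors in something like Chroma;
# here we fake the embedding with a bag-of-words vector so the example
# is self-contained.

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 3, threshold: float = 0.7):
    """Return up to k docs whose similarity to the query is >= threshold."""
    qv = vectorize(query)
    scored = sorted(((cosine(qv, vectorize(d)), d) for d in docs), reverse=True)
    return [(score, doc) for score, doc in scored[:k] if score >= threshold]

docs = [
    "minecraft is a good game",
    "the weather is cold in montreal",
    "python is a programming language",
]
hits = retrieve("which game is a good game", docs, k=3, threshold=0.5)
print(hits)  # only the Minecraft document clears the threshold
```

The two knobs interact exactly as described in the talk: `k` caps how many documents come back, and the threshold drops weakly related ones even when fewer than `k` strong matches exist.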
thank you so much so I'm going to start with explaining the drawbacks of using the chains so basically the problem with the chains is that everything is hard coded I can give you an example that explains why this can be a drawback so imagine you want to develop a chain that generates that uses an API for weather for example and you want it to generate the code that will give you updates on the weather and execute that code so imagine you ask yourself why is this the updates on the weather in Montreal right now and imagine your chain it will generate the code one thing that could happen is that sometimes it could generate an error when it generates the code and that happens a lot in the algorithms sometimes when your context is so complex it generates error so in order to solve this kind of things you could use the agent which has more autonomy and more flexibility I'm going to start with the explanation of why is an agent and this is the definition that how a chain explains the agent and this is how they introduce it as I say the core idea of the agent is to use language mode to choose a sequence of action to take while in change it's a sequence of action that is hard coding so basically the idea in the agent that we use an hour that will decide which action it will take while in change there is this pipeline that is defined from the beginning so we add more autonomy in the process of the decision making and why this is a very good point this has better autonomy and better decision making and I think that goes very well with the spirit of AI which is giving more to the machine to decide so now the agent could decide which action to take instead of deciding from the beginning how to work it's more flexible and you can imagine very complex use cases very complex use cases chains may not work that well because sometimes you have a lot of corner cases you have a lot of scenarios that you have to cover and sometimes with the chains it's not easy to cover all these use 
cases but with the agent you could cover more of these corner cases or use cases because it's an LN that will decide which action it will take so it has better adaptability and flexibility also it allows better air handling and recovery and this is a very important point because when you want to adapt your LN to a complex context as I say it will generate a lot of errors sometimes when iron up for example generating an SQL query or for example generate a code that will be executed by an executor so it's very important that the agent could start over and find a solution to the errors it gets and finally at scalability agent has better scalability when it comes into scaling up so if you want for example to add a new component to your agent all you have to do is you implement your agent and then you explain it in the prompt and then that's it you can integrate it while in a chain you have to find the right place to add your component and you have also to connect it with the other component so sometimes when it's the big architecture of the chain it gets complex to scale up now this is the framework of the agent and how it works so let's say for example we sent a query where it says what are the updates on the weather in Montreal for example to start by thinking it says okay now I have to generate a code that will be executed by the executor and it uses the weather API for example so it goes to an action and it generates a code and the code for example will call a function to get the time right now and it will also call the API to get the updates on the weather and then it will do the observation so the observation will say okay now I have the code what I should do to get the final answer and the thought will say you have to execute it so the action will start over and it will execute that code with another component in the pipeline and then it will get the answer and the thought will say now I have to transform it into user-friendly way so it will go to another action 
transform that output into a nice text to get the final answer. So that's the chain of thought of the agent, and that's how it works in order to generate the final answer. Here I wanted to give an example of an agent that is available in LangChain. Of course, LangChain offers many agents, and you could also implement your own agent, but here I wanted to present the SQL agent, which we will use in the demo. The SQL agent allows you to use a database, but in natural language. You give the query, and then the agent decides which tool it's going to use. The tools can be: an SQL query checker, to check whether the query was written correctly; an SQL query executor, which will execute the query; an info-SQL-database tool, which gives you all the information you need, like what the columns are, what the data types are, and so on; and a list-SQL-database tool, which lists the databases and tables you have. Every time you ask a query, the agent decides which tool to use. There is also a prompt that explains each one of the tools, and based on that explanation and the context it gets from each action, it decides which tool to use, does its chain of thought, and then gives you the final answer. So we'll do a demo just to show how this works and how straightforward it is. This is the QR code if you want to scan it to get the GitHub repo for the code, and it's also in the link if you're using your laptop. And here's the application; let me explain it. This is the interface of the application, and for the interface we used a Python library called Streamlit. It's an interesting library: it allows you to develop proof-of-concept interfaces very quickly, so it's very useful. And how it works: for example here, just to be on time, I asked the question "how many albums does Alice in Chains have?", and it gives me the answer. And in the background, what happens is this. So in the background,
this is what happens. The first action it takes, you see here, says: I want you to list the tables. So it gets all the tables in the database. Then it does a thought: okay, now I have the tables; I think the relevant tables would be the album table and the artist table. Then it takes an action and asks for the schema of the artist table. To get the schema, it uses another tool from the agent, and it gets the schema of the table, and then it says: okay, now I have the schema of the table with the artist name. Now it wants to check whether "Alice in Chains" was spelled correctly, so it's going to use another tool to check the spelling. And this tool, the one that checks whether "Alice in Chains" was spelled correctly, is a tool that we developed ourselves and integrated into the agent, just to show you that you can develop your own tool if you want to customize your agent. So it fixes the name: "Alice in Chains" was written like this, and now it's like this. Then it says: okay, I have the correct spelling of Alice in Chains, and it's going to start executing the code. So it generates this SELECT query, then it wants to check that the query was written correctly, so it gets the final SELECT query, fixes it, executes it, and gives you the final result. That's the chain of the agent, and you can see that every time, it starts thinking and gives you the right answer. As for the code, I'm not going to go into all the details, but the idea is this: this is the main function that you use. Basically, you give the prompt; you try to explain, in plain text, the context to your LLM. You give it all the information, and one very important thing when you use SQL is to say: do not make any DML statements, because sometimes it could hallucinate and generate its own
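A name-correction tool like the one described, which snaps a misspelled artist name to the closest known value, can be sketched in a few lines with only the standard library. The function name and the artist list here are hypothetical stand-ins for illustration; in LangChain you would wrap such a function as a custom tool and pass it to the agent alongside the SQL toolkit.

```python
import difflib

# Known values the tool is allowed to snap to (hypothetical sample data).
KNOWN_ARTISTS = ["Alice In Chains", "AC/DC", "Aerosmith", "Audioslave"]

def fix_artist_name(name: str, cutoff: float = 0.6) -> str:
    """Return the closest known artist name, or the input unchanged."""
    lowered = {a.lower(): a for a in KNOWN_ARTISTS}
    match = difflib.get_close_matches(name.lower(), list(lowered), n=1, cutoff=cutoff)
    return lowered[match[0]] if match else name

print(fix_artist_name("alice in chain"))  # snaps to the canonical spelling
```

The fuzzy matching (`difflib.get_close_matches`) tolerates case differences and a missing letter, which is exactly the "Alice in chain" case from the demo.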
data. And then you create your database; this is the tool that we created to include in the agent; and that's it, you create your agent just like this. It's very straightforward: you give it the LLM, the toolkit, which is the SQL database toolkit, then you give it the extra tool and the prompt, and it works. I can go into details after, if you want more details on the code. And I think that's it; I'm going to finish with the conclusion. Yeah, that's the conclusion. I want to finish with the pros and cons of using LangChain and agents. The pros: it clearly facilitates building LLM-based applications; it supports many LLM features, like agents, chains, RAG and so on; it's open source; and it's easy to use. The cons: the first, I think, is performance limitations. One of the biggest issues with agents is that they are sometimes very slow when you have a very complex architecture; that's why sometimes you need to use chains instead of agents. It also gets complex in fancy use cases: the more tables you have, as in this SQL setting, the more complex it becomes to cover all of them. And finally, dependence on external modules: it depends, sometimes, on OpenAI for example, which could reduce its performance. That's it. Those are some resources if you want, and yeah, that's it, thank you. Any questions?
Yeah, what guardrails do you put in place so that your model doesn't just drop the whole database? What guardrails do you put in place for the language model, because dropping the database is what I would be afraid of. Could you repeat the question? You're asking what you do to prevent it from dropping all the data. You have to write that in the prompt: you have to say "never do that". There are some techniques; for example, you can write "you must NOT do that" in upper case, so the model knows it should really never do that. That's the main approach we have. I think there are about three minutes left, so I'll be quick. I'm still strongly encouraging you to have the security on your database side. Yeah; LangChain also, in their docs, says to set up the database security so it can't do that. You have to handle that in order to avoid these kinds of issues, because sometimes it could hallucinate, create its own data, give you the response and say it's there. I think there was another question; he had one, and then I'll ask. What part of this did you say you built before, that you included in this demo? Oh yeah, it was the part that checks whether names, for example "Alice in Chains", were written correctly. You can create your own tool to check whether the nouns were written correctly, and it will fix how they were written. Okay, so it did look straightforward in your demo, but I was wondering how long it took to build that function. Was it a big R&D effort, or was it very easy? No, it was very easy to build; very straightforward. LangChain, and also the interface, are very straightforward, so both are easy. But when it gets complex, of course, that's
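Prompt instructions alone ("never run DML") are not a hard guarantee, which is why the speaker also recommends database-level security. A cheap extra layer is to reject anything that isn't a read-only statement before it ever reaches the executor. This is our own illustrative sketch, not a LangChain API:

```python
import re

# Keywords we refuse to forward to the database (DML/DDL).
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|create)\b", re.IGNORECASE
)

def is_read_only(sql: str) -> bool:
    """Allow only SELECT statements containing no data-modifying keywords."""
    stripped = sql.strip().rstrip(";").strip()
    return stripped.lower().startswith("select") and not FORBIDDEN.search(stripped)

print(is_read_only("SELECT COUNT(*) FROM album WHERE artist_id = 5"))  # True
print(is_read_only("DROP TABLE album"))                                # False
```

A keyword scan like this is deliberately crude (it would also reject a legitimate SELECT whose string literal contains the word "update"); the robust guardrail remains a read-only database user.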
normal: when it's complex, you have to do all the modifications in the prompt, in your chain, in the tools you're going to use. That's when it becomes complex. For example, you have a database, but it's a huge database; there are a lot of columns, and the columns could have weird names, like abbreviations. That's the kind of issue that would generally happen. Is the documentation very much up to date? It's an open-source project and it has done a lot in the past year; there are many things that are not up to date, and you kind of have to dig very deep into classes to figure out, oh, it's because there's a prompt four levels lower that says something that doesn't make sense, and I'm going to override some of them. Sir, my question is a little bit unorthodox: can you use LangChain to branch on agents according to your specific use cases? Is that even possible, where you use LangChain to identify the use case and then select an agent according to it? You mean an agent choosing another agent to use? That's possible, and I think they added a new feature, LangGraph, which allows you to do these kinds of very complex graphs of an agent calling another agent and so on. And what sort of computational burden does that carry? That's very costly. Exponential, I imagine, because your graph would cover the whole problem. That's why sometimes you use chains instead of agents, because it saves you a lot of time. So there is a cost saving? Actually a huge one. A good thing is to have meaningful names for your columns, now that you have an engine that works on natural language to query your database: the more information you give in the table and column names, the better. I thought that was a good presentation; thank you for the presentation. My question would be: are there any methodologies and tools for testing when building a LangChain pipeline? You're asking whether there are any
tools to use for testing a LangChain chain. I think there is LangSmith; I think that's the only one that is clearly there for testing. LangSmith will give you the tree of the executions it did and how it's thinking inside. You have that in the logs too, but with LangSmith it's easier to understand and interpret. Besides that, I don't think there is something straightforward that says "that's the accuracy of my chain" or "that's how my chain performs", so I think that's one of the big issues. Are you testing the LangChain chain, or do you really want to test the underlying LLM you're using? You want to test, for example, your chain: you developed the chain, it's your chain; you didn't build the LLM, but you're using it. So you're really testing the LLM? Yeah, exactly; many LLMs are interacting, and you want to test that. Does that mean a unit test or an integration test? I don't think it's a unit test. There are things like Weights & Biases; Weights & Biases could help, but not too much, I think. Sometimes it lacks many features, but it could help. Do we have a last question? No more questions? Okay, that's it. Fantastic. Now we're going to have a 5-minute break; please get up, walk around, have some snacks, and we'll be back in 5 minutes. Okay, everybody, let's wrap up, I'm ready. Thank you everybody, we're going to restart shortly. For the people in the studio: if you smell the aroma of pizza, it's because there will be pizza. There will be vegetarian options, I've been told, if anybody here is vegetarian. More vegetarian than usual; we always underestimate. Okay, so, I think, oh, do we have a... maybe... Can we just check that Yannick is okay? Do you want this one? Oh, could you? You want to share the... Do you like strawberries? Strawberries? I mean Strawberry. Just a little humor there.
Okay, so now: GraphQL and Strawberry. Rubens is going to tell us all about it. Rubens is a full-stack developer. He is the main developer at a Brazilian startup called 2U, which makes smart vending machines, which is pretty cool. And he's going to talk to us about all this stuff. Let's give him a round of applause. Yannick, can you switch it to the right screen? Okay, great. Today I'm going to talk a bit about GraphQL and Strawberry. So, starting out, thank you for being here, and any questions, you guys can just ask while I present. Well, I already got a good introduction, but yeah, basically, that's it. I moved to Canada recently, last year, and I currently live in San Jose, and I'm very happy to be here in Montreal. It's a very nice place; I really like it, and we like the people. And just out of curiosity, when I applied to come to Canada, I had to write a letter, and in that letter I had to explain: what are you going to do in Canada? How are you going to contribute to Canada's economy? How are you going to be part of the Canadian community? And one of the things I put is: I'm going to participate in Python Montréal. So, yeah, for the government officials in the room... What? Am I from Bahia? No, São Paulo, Brazil. Yeah. And so I still work at this Brazilian startup, but I'm actually searching for jobs in Canada now, since I'm here. Let me talk a little bit about what we do, because it's really what motivated me to give this talk. This is basically the main thing of the product: it's a fridge that we put cans in, and it has a Raspberry Pi. The person opens the fridge with the app; it takes a picture before, and it takes a picture after, and we send those pictures to our AI, and we see how many of the things they took out of the fridge.
In this case, they took out two items. We charge them, and that's it; that's basically what our company does. Before we had GraphQL and Strawberry, we actually had a MongoDB application with CherryPy, which is sort of like Flask, and we rewrote everything to use GraphQL and Strawberry after about a year and a half of operations. Choosing GraphQL was, I think, a very important decision, because after a year and a half of operation in a startup (you really don't know what you're doing in the first year; you're trying new things, you're changing the product a lot), choosing something like GraphQL, which has more of a client-first approach, where you look first at the front end and then sort of work back to the backend, was a very big decision for us, and it worked out. We eventually switched our system, and this is our current stack; that was about two years ago. We use the Strawberry library for Python for serving the GraphQL, and Python in the backend with Django. With GraphQL you only have sort of one endpoint, and this endpoint serves our two front-end sites: a back-office internal site and the one for our clients, both made in React, and they all query our GraphQL. We have our mobile app in React Native that also queries our GraphQL. And, interestingly, we also have our Raspberry Pi inside the fridge, which uses GraphQL to get data from our servers, and also WebSockets with GraphQL, so we have real-time communication and updates between our Raspberry Pi and our backend server using Strawberry. This is a very Python-centric startup; we have Python all over the place. Oh yeah, and we have our AI, which also uses Python, with FastAPI. So I think this startup, at least, is a real demonstration that you can build these good and fun technologies with Python, except the front end. You can't escape JavaScript, right? Okay.
And another big motivation was that, after two years using Strawberry, I really like it. It's a very good library; it's very easy to get started, as hopefully we'll see in the live examples, and maybe after this talk you can try it out. There are other GraphQL libraries for Python; Graphene is the other big one. Strawberry is a bit newer; I think it's more like FastAPI, if you've ever used it. Graphene is a bit more cumbersome; with Strawberry, I think it's a bit easier to do things. It looks a bit more like FastAPI, or maybe even Flask. And I think it currently has more going on in terms of development and community than Graphene. For example, I think Graphene's last update was a couple of months ago; for Strawberry, I checked yesterday, and there was another new update, I think two days ago. So they're always updating, and they have an active Discord community, so it's very easy to just go on the Discord, talk to people and get your questions answered. They also have a Django integration, so if you're already using Django, you can easily set up a GraphQL endpoint, which is very nice. Okay. So, a little bit about the ecosystem of GraphQL. I know that a couple of years ago GraphQL was really hyped up, and I think the hype sort of went down after a couple of years, but we can see it's still pretty active. I got this from the GraphQL Landscape; you can check it out. And a couple of companies, at least, that I know use it: we have Twitter, we have Netflix, we have, I think, Twitch, and all these others. So it's still pretty active, and it's still a growing technology with a lot of users. All right. Okay. I'm going to talk a little bit about how it started. I'm going to go back to 2012 and look at what market changes led to the creation of GraphQL, and I think the problems they had in 2012 are still present today.
So I think it's very interesting to see them. Back in 2012, you had a lot of monolithic APIs. Netflix, Facebook, whatever: they had one big monolithic API, and all the clients would talk directly to the API. So there were a lot of clients, all talking directly to the API. But what happened around 2012? You started getting... Microphones? No, almost. You started getting cell phones. So you started getting other types of clients; micro-clients, we could say. We had cell phones, then game consoles, tablets, smart TVs. And Netflix was having a big problem with that, because it's not just one type of mobile device: it's many types of mobile devices, many types of smart TVs, a couple of consoles. In 2012, Netflix had 800 different device types, and they were all hitting the same single monolithic API that they had built. With a monolithic API, you would try to make it one-size-fits-all, but in the end (these are all hypothetical product queries) you would have to have different endpoints just to serve different specific clients. So for your Android TV, you would have an endpoint just for the Android TV, and so on, trying to fit it all inside the monolithic API. Netflix was having that problem. They did not solve it with GraphQL back then, because there was no GraphQL in 2012; they had to do other workarounds. We'll talk about that later. And another case from that time: with all the new use cases, monolithic APIs were becoming unviable, so companies were trying to move away from the monolithic architecture. Another case was SoundCloud. They had two teams: a front-end team with the front-end engineers, and a back-end team with the back-end engineers. And for them it was very complicated that the front-end engineers had to convince the back-end engineers that they could serve megabyte-sized files to the web app.
The mobile app has different needs than the browser app, and at that time that caused a lot of conflict between teams, because for every new endpoint they wanted, they couldn't just say, oh, make a new endpoint; they had to have meetings and agree on the endpoints and whatnot. That caused a lot of friction between the two teams. So that was another point where the monolithic APIs were getting out of hand: they were causing development issues that actually slowed things down. SoundCloud, if you look at their reports, say this slowed down their production, their ability to ship code. And then we had Facebook, which had similar problems to the other companies, but basically, their mobile experience sucked. It was really bad. They had bet on HTML5 for their mobile application: people are going to use it in the browser, we're going to serve HTML5 code to them, and it's going to be great, and it's going to work. But no, their mobile was bad, and Zuckerberg knew it: we have to do something about mobile, or we're not going to go forward as a company. And from 2012 forward, a couple of people at Facebook actually drafted what is now known as GraphQL. They made a language which, unlike a REST API, was designed to be client-first. They describe it as: let's look at our front end, let's look at what the front-end people want in a web client, and then we back-propagate those changes into the back end. So the back end and the front end sort of meet at a middle ground, and that middle ground would be GraphQL. One problem they had, when we're talking about cell phones, is obviously the limits of different screen sizes, but we also have cell phone data limits. So when we under-fetch with a traditional REST API, we have to query other endpoints.
So if you have, for example, a user endpoint and you also want the user's wallet, but the user endpoint doesn't return the wallet, you have to do two queries: the user endpoint and the wallet endpoint. Or you have to create a custom solution or a new endpoint, or you have to edit your endpoints to stop under-fetching, even though that data is already being exposed for you. And we have over-fetching: maybe an endpoint returns too much data, and now you're using too much cell phone data, which is bad for the mobile device. They tried to solve that with GraphQL by having a single endpoint: you have one endpoint and you ask for exactly the data that you want. So in the back end, you would define the user and the wallet, and the front end would go: hey, I want a user and a wallet. Instead of querying /api/user/1 and /api/wallet/1, getting the data and matching them up, you put everything behind a single endpoint. That's what they proposed with GraphQL. And they made a schema: an actual document containing the definition of the GraphQL schema, and it is independent of technology. So whether you're a Rust developer or a Python developer, you can read the actual documentation and specification for GraphQL, and based on it, create your own implementation in your language. That's what they did, and in 2015, at the React.js conference, they made their ideas public. They showed their documentation, their findings, and they asked everybody to contribute, and from that point on it just went forward. Okay. All right. I'm going to get a bit more technical now, and actually go point by point through this documentation that Facebook released for GraphQL. It's called the SDL.
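The single-endpoint idea, where the client names exactly the fields it wants, can be illustrated without a GraphQL library at all. This toy resolver is our own sketch (not Strawberry or the GraphQL spec): it walks a requested field "shape" over backend data and returns only what was asked for, fixing both under-fetching (one request covers user and wallet) and over-fetching (only the named fields come back).

```python
# Toy illustration of GraphQL-style field selection over one endpoint.
# The client sends the "shape" it wants; the server returns only those fields.

BACKEND = {
    "user": {"id": 1, "name": "Ada", "email": "ada@example.com"},
    "wallet": {"id": 7, "balance": 42.5, "currency": "CAD"},
}

def select(data, shape):
    """Recursively keep only the fields named in `shape`."""
    result = {}
    for field, subshape in shape.items():
        value = data[field]
        result[field] = select(value, subshape) if subshape else value
    return result

# One request fetches user *and* wallet, each trimmed to the wanted fields:
query = {"user": {"name": None}, "wallet": {"balance": None}}
print(select(BACKEND, query))  # {'user': {'name': 'Ada'}, 'wallet': {'balance': 42.5}}
```
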
It's the Schema Definition Language. In their main document, that's what they defined: this is what GraphQL is going to be, and this is how it's going to work. So hopefully I can go point by point and we can see how it works, and we can try to program some endpoints here live, and hopefully they'll work. Let's see. Okay. So the first things that GraphQL defines are the scalar types and the object types. First, you're declaring the types of data that you have. Here in my example, you have a Book that has a title, which is a String; the exclamation point means it's required, you have to have it. Then author is a String, price is a Float; so you define your data first, with the scalar types. In the SDL, you only have Boolean, Int, Float, String, and ID. The ID can be a lot of things: it can be numeric, it could be a hash, it could be a UUID, but it has a more symbolic importance, because it has to represent the data uniquely, just like the primary key of a table in an SQL database. So that's the idea. And then, oh yeah, you can create your own scalars. I'm not going to do this today, but if the default scalars are not enough for you, which normally they're not, in Strawberry you can create your own scalars. In this case, with our book example, I create an ISBN scalar, and to do that, you just create a serialize function and a parse function, then you define the type and register it in Strawberry. So you can create your own scalars. We use that a lot: one of our scalars, for example, is official documents, and telephone numbers would be another scalar that's very common to see, or even things like images. Okay. So next is queries.
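A custom scalar boils down to two small functions: one that serializes a value on the way out and one that parses and validates it on the way in. Here is a standard-library sketch of the ISBN example; the regex is a simplified ISBN-13 shape check of our own, not the one from the talk. In Strawberry you would attach these two functions when declaring the scalar (via `strawberry.scalar` with `serialize` and `parse_value`).

```python
import re

# Simplified check: ISBN-13 shaped as 13 digits, optionally hyphen-separated.
ISBN_RE = re.compile(r"^(97[89])-?\d{1,5}-?\d{1,7}-?\d{1,7}-?\d$")

def serialize_isbn(value) -> str:
    """Outgoing value: just ensure we emit a string."""
    return str(value)

def parse_isbn(value: str) -> str:
    """Incoming value: reject anything that doesn't look like an ISBN-13."""
    if not ISBN_RE.match(value):
        raise ValueError(f"not a valid ISBN: {value!r}")
    return value

print(parse_isbn("978-0-13-468599-1"))  # accepted as-is
```

As the talk notes, the parse side is really validation: a query carrying a value that fails this check never completes.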
So now that we've defined our data, we have to... Okay, sure. A question: in this process, what is the difference between serialization and parsing? Could you explain both? All right. Serialization and parsing. One is basically how you serialize the data: here we make sure it's a string; we got it as a string. And the other is how we parse the data. Parsing would be like, for example, JSON parsing, which is an example we use a lot: you want to make sure it's valid JSON, basically. So in this case, at least, our serializer just makes sure we're in the right format for our input, and in the parser we just validate that it actually fits what we agreed upon as an ISBN. So we use it more for validation. And this is very interesting: if you actually send the wrong data in the GraphQL query, which we're going to see next, it just doesn't work. If the GraphQL asks for an Int and you send a string, it won't work; it depends on the case, but sometimes it just breaks. So if you try to send something that our validator doesn't agree with, which in this case is a simple regex expression, it won't accept the data; it won't complete your query. All right? Okay, no problem. There's another question. Yeah. Okay, I'll repeat the question: how would serialization and parsing be handled differently in the case of a string? Would the same rules apply, or would it become a little more complex? To be honest, with the ISBN, you're asking: if we accepted that it was just a string, would it be simpler? The thing is, when you define a new scalar, it's not that you're expecting a string to come in. You create a new type.
So it's like: this is an ISBN. It's not a string, it's not an integer; this is an ISBN. And in this case, if I just say, okay, I'm going to receive a string, I can trust it's going to be a string; but if I don't do this and I receive, for example, an int, my regex expression is going to fail, and the whole thing is going to fail. So I think, at least when you're serializing, you really have to understand what exactly is going to come in. You don't know exactly what is going to come in, but you can make your best guess, and it's probably going to be one of these scalars, or a subtype of these scalars: a Boolean, an Int, a Float, a String. And this parser that I wrote, and the serialization for that matter: if somebody gives me a boolean, it just won't work; none of this will work. So in a sense, this is much less robust than, say, a class-based case, where your ISBN would simply be a subclass? Yeah. So it's much more breakable; is that a correct assertion? So, yeah, you're saying this is more breakable than a class-oriented approach. Yes. It's supposed to be more functional: just a function, data in, data out, without much abstraction. I think if you start abstracting too much, it's going to be weird; it doesn't feel right. So yeah, exactly. That's a good point. Yep. It's kind of a follow-on from the serializing. Just to be clear about how Strawberry works: the typings are just hints, right? These aren't going to break if they don't match the expected type hints? Well, the thing is, if you're programming in Strawberry, and just because the community really likes typing, you're probably using something that's going to show you a very red error
and be like: hey, you're doing something wrong. So, technically, it won't break, because Python type hints don't break anything at runtime, but, you know... Yeah. Like if you used a cast; I didn't explicitly cast anything here. Or if I did a no-questions-asked type-ignore to silence the type hints, you would get bad looks. So, yes. But that's a good question. Anyone else? All right. So those were scalars, and with these scalars, we need to fetch the data, because if we just have data and we can't fetch it, it's no fun. So we have queries. This is the second thing that's defined in GraphQL, and I separated it into three different code parts. Okay. So this is the actual definition. Let me go back a little bit. When we go back to here, for example, I forgot to mention: this is an actual definition of your queries, the SDL; it's your schema definition. So this is an actual file that defines your schema, and any GraphQL client, be it in Strawberry, in JavaScript, or in Rust, can read the schema and interpret it itself. It's like JSON, you know: JSON is not language-specific, and the SDL schema is also not language-specific. So in this next example, this here would be the definition in the actual SDL document, in your schema definition. You define a query like this. This query is a shop query, where we receive an ID as an argument; I have to give this argument, because it has the exclamation point, and it returns the type Shop that we defined earlier. So this is what it would be in my definition, and this would be an example query based on that definition. I know that this query is going to be valid, based on all the types that we defined. I know that if I go into a GraphQL client and put in this query, it's going to be valid. It may not work, but it's going to be valid.
So here in this query, we're going to try to query the shop that we defined. We ask for the shop with ID 1, and we ask for exactly what we want from the shop. If you go back a little bit, in the Shop we defined id, name, location and books. The id is an ID type, the name is a String, location is another type, Location, and books is a list of Books. Here we're only asking for the name and the location's address, so we know our server is not going to give us the other things we don't want: it's not going to give us the list of books, and I think it's not going to give us the id either. So we know the response from our server is just going to give us the name, the location and an address, and it gives us the response in JSON form. This would be the server response: a very nice JSON form with data, shop, name, location. In this case we get a shop: the shop with ID 1 is called The Word, it has a location, and its address is 469 Milton Street, Montreal, Quebec, Canada. All right. This may be a bit much, but I'm going to implement this in Strawberry, and hopefully it will start making a bit more sense. All right. How much time do I have? You've got 15 minutes. Oh, I have 15 minutes; I will copy my code then. All right. So, okay. Is the size good? I'll make it a bit bigger. Okay. Yeah. So basically, this is already Strawberry code. Strawberry is code-first: from this code, it generates the SDL schema. There are libraries that do the opposite: you write the SDL and it gives you Python code. So you can define the SDL and get the Python code; this is the other way around. I define it in Python and get the final schema. So here we have our Strawberry Book, which has our title, author, price; we have our Location; and we have our Shop. All right, we have this defined. So let me just...
What's the Python library used to convert this to Python? So, what's the library that... It's Strawberry itself. Oh, okay. That's a good question — I'm going to get to this. So how do I get the data? That's what you're asking. All right. So yeah, it's all done by our implementation here in Strawberry. This is already the Strawberry library, and I'm going to get into how we get the data from the database in a little bit. So, in the Strawberry library — I have a mock database. I made this mock database with the shops. It has a shop with ID one, a name, a location, and it has books, and a second shop with ID two. Okay. So first, in Strawberry, you define the types. You're not defining how you get the data; you're just defining: I have a book, I have a location, and I have a shop. This part I'll get to later — it was not supposed to be here right now. And then we define our queries. I'm just going to put this in another file, just so we have the minimum here. So here we have our book, our location and our shop. And then we define our query, which is what we saw on the last slide. I'm going to skip the shops query for now, just to keep it a bit simpler. Okay. So the next step in Strawberry is to define our query, and the query is where we use Python to get our data. Right now we could use SQLAlchemy, for example, to query a database. So in Strawberry, how would we define that endpoint? Okay. So again, our endpoint. Now we want to define this query — this shop with ID one — that's what we're trying to define here in Strawberry. We define it using the `strawberry.field` decorator, and it's matched by its name; that's how Strawberry finds it. And we define it with a function. Our function receives an ID of type string and may or may not return a Shop. So it's all typed Python. You have to type it — you cannot not type it when you're working with Strawberry.
And basically, here is where we would put our data access. If we were using SQLAlchemy, we would use our SQLAlchemy code to fetch the data from our database. I have a mock database, so I just get the shop from the database, looking up the ID in the JSON I put up at the top. And then we take the data we got — this is going to be a dict, so this shop data is going to be this dict here — and we put it into the Shop type that we defined up in the code, which has an ID, a name, and so on. And I just use the unpacking operator, so it's the same as doing name equals data name, right? The same as doing it one by one until we've actually built the shop. So we put this in our data and we return it if it exists, and if it doesn't exist, we return None. And after we define our query, we put it in our Strawberry Schema object, which is what Strawberry uses to find our query. Okay. Does anybody have any questions? Yeah? Okay, I want to return to this question, I think it was a very good one: is it fair to assume that the query types you're searching here are finite and predetermined in Strawberry? So, are our query types finite and predetermined? We determine them, yes. He asks a very good question: what was the process of determining those types via your Python? So the thing is, GraphQL is not connected to the backend code, for example. The backend code is the backend code. If we're using SQLAlchemy, the SQLAlchemy code is in the backend part. GraphQL sits sort of in the middle, between the backend and the frontend. It uses our definitions, right, to create a middle language, so that we can coordinate the frontend and the backend together. Yeah.
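The resolver pattern described above can be sketched in plain Python, using a `dataclass` as a stand-in for the Strawberry type and a made-up mock database (in real Strawberry code, the class would carry `@strawberry.type` and the resolver would be registered with `@strawberry.field`; the shop data here is illustrative):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for the @strawberry.type Shop class from the talk
# (trimmed to two fields for brevity).
@dataclass
class Shop:
    id: str
    name: str

# Hypothetical mock database, like the JSON shown on screen.
MOCK_DB = {
    "shops": [
        {"id": "1", "name": "The Word"},
        {"id": "2", "name": "Second Shop"},
    ]
}

def get_shop(id: str) -> Optional[Shop]:
    """Resolver: fetch the dict from the mock DB and unpack it into the type."""
    for data in MOCK_DB["shops"]:
        if data["id"] == id:
            # ** unpacks the dict: same as Shop(id=data["id"], name=data["name"]).
            return Shop(**data)
    return None  # no matching shop: the GraphQL field resolves to null
```

In a real backend, the loop over `MOCK_DB` would be replaced by an ORM query (SQLAlchemy, Django ORM, etc.); the unpack-and-return shape stays the same.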
I'm just wondering if you can go from this code that could actually... Maybe we can table this conversation for when we're having pizza. All right, we're getting on in the day and the pizza's getting cold, and I don't want to be responsible for cold pizza. And we've got one more presentation. So yeah, I'm just going to go really quickly now and show the other things. And if you have any questions, please come talk to me. I had a hard time embracing GraphQL when I started learning it — it's a bit different — but please feel free to talk to me. And Strawberry has a good Discord community, so you can just go and ask them there too. It's going to be great. So, we have queries, and queries can have arguments, like we just saw before with the shop ID. Then there are mutations. A mutation is just a query that changes data. So if I wanted to add a book to the shop, I would use a mutation, right? And let me actually give this example live right now. Okay, we're already running a server. So this is called GraphiQL. It's actually a way for you to test and run queries, and a lot of GraphQL clients come with it. And this is how you would run the queries. So this is our query, right, and we just execute it, and this gives us our results. And the schemas that we defined are here in the documentation for the API. So it's an API that has only one endpoint — a GraphQL endpoint — and the way you query the different data out of it is by using this intermediary GraphQL query language. Right. So in this case, we have a query where we want the shops with their names, their locations, the addresses, and the books with their titles. And we get exactly what we asked for. So if, for example, I don't want the books anymore, I come down here and remove it.
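For reference, the add-a-book mutation mentioned above might be written like this in the GraphQL query language (the mutation name and arguments are illustrative, not the speaker's exact code):

```graphql
mutation {
  # Hypothetical mutation, per the add-a-book example:
  # change data, then select what you want back from the result.
  addBookToShop(shopId: "1", title: "Dune", author: "Frank Herbert", price: 12.99) {
    books {
      title
    }
  }
}
```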
Now I don't get the books from the query. So I can start customizing the query to exactly what I want. And our SDL file, which we defined using the Strawberry code I showed, serves in the end as our documentation. All of this documentation here on the left was generated just from the SDL file that we have. So here we have our queries — I defined two: shop by ID, and shops. We have our mutations — I defined one later on, addBookToShop. And we have all the different types that can appear in a query: String, Location, Book, Float, et cetera. All right. And in the end, this is what we're defining with these different pieces. There are a couple of other things: there are enums, there are abstract types you can use, there are unions if you want them — I'm not going to go into the details here. And there are federations, which are for big GraphQL servers. The original problem at Netflix they actually solved using sort of an API gateway; later on, they solved it using federation. If I could sum it up in five seconds, it's a way to put different APIs and different services all under one GraphQL server. It's a more advanced thing to use, but it's good to know it exists. And yeah, so GraphQL is basically a bridge between the frontend and the backend. In the case of the server that we have, our server is Django. We use Django, and we use the Django ORM to get the data from our SQL databases. And to communicate with our frontends — our mobile frontends, our Raspberry Pi frontends — we use the intermediary: just one GraphQL API. And we grab the data using just that one API. So we're not defining a lot of different endpoints; we're defining a single endpoint, right?
One that we query regardless of what our client is and what our client needs, okay? And these are some of the sources I researched for this talk. Sorry that we ran out of time — I wish I could have gone into more detail, and I had to rush the last part, but that's it. Thanks, guys. Do you guys like strawberries? Okay, we have one final speaker, and then it will be time to socialize and eat the pizza and all the rest of that cool stuff. Okay, so let's just... PyScript! With Romain. Romain is a web developer, working mainly in Python with Django. Let's give him a round of applause. Thanks. So the pizza pressure is on me now, great. The presentation will be in French, but just for you — sorry, I don't remember your name — it's okay. You started your presentation saying that you cannot escape JavaScript; well, maybe nowadays you can, with Python. So, I'm going to do it in French. Let me quickly introduce what we're going to do. My name is Romain, I'm a developer at Viennère, and I'm going to talk to you about PyScript, a new project from Anaconda that lets you execute Python on the client side of the web. Very quickly, we're going to look at the current state of web development, a short presentation of PyScript, an example — or examples, with or without the "s" — and the roadmap. Today, JavaScript is the most used language in the world: 63% of developers use it. HTML is second, and Python is third. For all the figures, there are sources at the end if you want to consult them. Of the people who use JavaScript, around 50% have a university degree, 19% have a computer science degree, and only 28% have no degree. Which shows that JavaScript is still largely the domain of people who have done higher studies.
And on the web, we estimate about 87% of projects use JavaScript. For popularity — that is to say, the number of tutorials — it's far ahead of Python; but in demand, JavaScript is not even at half of Python, and it decreased over the last year. I've put here all the links that concern PyScript: their website, Twitter, GitHub, and documentation. So, it's a framework that is free and open source, under the Apache 2.0 license. It's a project by Anaconda, which makes a Python distribution, and it defines itself as wanting to bring programming to the 99% — to echo what I just said. Their goal is that people without a lot of training can do web development with Python, which is very rich in tutorials, communities, et cetera. To make PyScript work, you need a browser that supports WASM, WebAssembly — broadly, that's all modern browsers: Chrome, Firefox, et cetera. PyScript uses Pyodide, which is a distribution of Python for the web that allows you to install and run packages in the browser, and MicroPython, which is a Python 3 optimized for microcontrollers and fairly constrained environments, like the web. And to use it, it's very simple: you need a script tag to pull in the version of PyScript you're going to use, along with a link tag, and then you use other tags to tell your browser — your PyScript — where to find the Python scripts to run. So it really takes very few lines. This is the first example I wanted to show you, which was an animated image, but it's not the most interesting one, so I'll go straight to the next one: a visualization of the distribution of New York taxis according to the time of day. It's just a little slow to start — I think they haven't optimized the loading code yet — but there it is. So you have a map of New York, you have some Pandas data that's in the code, and if you click on Play, you see the data change. And all of this is done in Python, on the client side.
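The setup described above can be sketched as a minimal page; check the exact tag names and URLs against the PyScript documentation for the release you use, since they have changed between versions (this sketch follows the classic `py-script` tag style, and `main.py` is a hypothetical file):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Pull in a PyScript release: one link tag, one script tag -->
    <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" />
    <script defer src="https://pyscript.net/latest/pyscript.js"></script>
  </head>
  <body>
    <!-- The py-script tag tells the browser which Python to run -->
    <py-script>
print("Hello from Python in the browser")
    </py-script>
    <!-- Or point it at an external Python file -->
    <py-script src="./main.py"></py-script>
  </body>
</html>
```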
And then you can modify the size and the parameters. No, that's not the right one — I think it's the third. The fourth. Perfect. For the roadmap, I didn't find a lot of information. They mainly communicate on their GitHub project, with the dedicated lists, the issues, and the milestones, which you can consult. If you're interested in what they do, every week, every Tuesday, they have a meeting at 5:30 where you can ask questions of the developers and maintainers and participate in the community. And every Thursday at 6pm, they hold what they call a fun call and demo, where people come to show the projects they've built and what it's possible to do with PyScript. And the most representative statement of their roadmap comes from the Anaconda blog, where they explain that it's still a beta project, but an accessible one. They try to make it as stable as possible, so that people can have fun building projects without everything exploding at each release. But since it's a project under active development, there are things that will break, et cetera. It's very interesting to see that there is potentially a revolution happening on the web: we could do a full stack that would be just Python, potentially. So I think it's a project to follow. I've put the sources here, and that's it — you can consult them. If you have any questions... I only discovered the project last week, so I didn't really look in detail at how everything works technically, but if you have questions, we can try. No, I'm sure there's pizza. Yeah, I think that's it. Do we have... just press... oh, there we go. So thank you, merci. Thank you to Have Sirius for hosting us tonight, and thank you all for coming. Let's go have some pizza. Allez! Bon appétit.