So today I'm going to give you a very gentle introduction to our recently released open-source product, which is called Jina. As you can see here, Jina is the cloud-native neural search solution. For most people who don't know about neural search, this could be a very good introduction to what Jina can do and what the future of neural search could be. So a good start would be: let's talk about neural search. What is so different about neural search, and how can we use it to do things that we cannot do with traditional search? I will also go through a hello-world example and tell you the best way to learn Jina as a framework. And lastly, I will talk about our company's mission, which goes beyond building an open-source framework: it is about building an ecosystem around neural search. Eventually we will have some time for Q&A, so if you have any questions during the session, please write them down in the chat box and I will go over them after the slides. Okay, so what is neural search? I believe the idea, or the motivation, behind neural search is actually not very new. It starts from the motivation of finding semantically similar text in a large database. For example, here I list three definitions of microservices, grabbed from Wikipedia, from a computer magazine, and from a tutorial, so from three different sources. The idea is: given one paragraph defining microservices, you want to find all similar definitions of microservices. Traditional search may not serve very well in this case, because it is based on keywords, and in this scenario you have to really understand the semantics behind the text in order to retrieve all possible definitions from the database. So this is a toy version of a real-world scenario: you can imagine this kind of technique being used in chatbots, in QA systems, in customer service. This is semantic search. Okay, so what are the challenges? Say we want to solve this semantic text retrieval problem: what must we solve in order to build such a system? Here are the four major challenges when we build a neural search system for text. The first problem is how to quantize the semantics: how do you represent the semantics of a document, of a short tweet, of a paragraph? If I had asked this question 20 years ago, people would have had no idea how to represent semantics properly. But if I ask the same question nowadays, you may answer: okay, let's just use BERT, one of the BERT family of models. So nowadays it's pretty clear that you can use BERT, or deep learning in general, to represent a paragraph or a document as a fixed-length vector. These vectors basically summarize the semantic meaning of the original documents. Okay, so the first problem is solved.
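To make this encoding step concrete, here is a minimal sketch of turning text into fixed-length vectors with a BERT model; the model name and the mean-pooling strategy are illustrative assumptions on my part, not something the talk prescribes.

```python
# Minimal sketch: encode paragraphs into fixed-length vectors with BERT.
# The model name and mean-pooling strategy are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')

texts = [
    'Microservices are an architectural style for building applications.',
    'A microservice architecture structures an app as independent services.',
]

with torch.no_grad():
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    hidden = model(**batch)[0]                      # (batch, seq_len, 768)
    mask = batch['attention_mask'].unsqueeze(-1)    # ignore padding tokens
    vectors = (hidden * mask).sum(1) / mask.sum(1)  # mean-pool -> (batch, 768)

print(vectors.shape)  # each text is now a fixed-length 768-dim vector
```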
So the second problem is: once we have these bunches of vectors, how do we store them efficiently, and how do we retrieve them efficiently? Some of you probably noticed that Facebook, a few years ago, released an open-source library called FAISS, for Facebook AI Similarity Search, and Microsoft has also released this kind of vector index. There are also existing open-source vector databases such as Annoy from Spotify and Milvus from Zilliz. They can be used to store vectors in a very efficient way. So, since we have databases to store this kind of thing, the second problem is also checked. Now the problem comes to how we define the scoring layer. Once we have the vectors, and once we can retrieve them from the database, the question becomes: why is this vector more similar to that vector than to another vector? You can of course use very naive, straightforward metrics such as L2 (Euclidean) distance, Hamming distance, and so on. But you can also use a very sophisticated model, such as deep learning, to learn the metric between two vectors, between two representations, just like we do with BiDAF in machine reading comprehension: we learn the metric rather than using a simple cosine similarity. Okay, so the last question is often ignored by a lot of practitioners: does your model work on both super long and super short documents? This is generality. The solution to this generality problem is often some preprocessing. For example, in this case you have to segment your document into sentences, and encode each sentence separately. This often requires some domain-specific knowledge in the preprocessing step. But it's super important: often, when people think about neural search, they just throw in a document and convert the whole document into one single vector, and that is actually not good enough in practice, because semantics has a basic unit, and you have to optimize for that unit. This optimization is actually implemented in the preprocessing step. Okay, so now we've talked about semantic text search. What if we want to extend this kind of application to images and video? What changes and what doesn't? Basically, the same four challenges are still here; we are solving the same problem. But rather than using a BERT model to do the representation, we use, for example, VGG or similar CNN-based computer vision models. So that part doesn't change at all: you just replace the language model with a vision model. As for the last question: previously we asked whether your model works on both a long document and a short tweet; now we face the same problem, but instead of text, we ask whether your model works on large or small images, on long or short videos. Here you see the importance of preprocessing again: you need to segment the image or video into patches, and the size of each patch depends on empirical experiments, on your experience. You need some domain knowledge to select an adequate patch size in the preprocessing.
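Since this preprocessing step is so easy to gloss over, here is a minimal sketch of what it might look like for both modalities; the regex sentence splitter and the fixed 32-pixel patch size are naive placeholders for the domain knowledge discussed above.

```python
# Minimal sketch of the preprocessing step for two modalities.
# The splitting rules here are naive placeholders; in practice the
# "right" semantic unit comes from domain knowledge and experiments.
import re
import numpy as np

def split_sentences(doc: str) -> list:
    """Segment a document into sentences, each to be encoded separately."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', doc) if s.strip()]

def split_patches(img: np.ndarray, size: int = 32) -> np.ndarray:
    """Cut an image (H, W, C) into non-overlapping size x size patches."""
    h, w = img.shape[0] // size, img.shape[1] // size
    img = img[:h * size, :w * size]
    patches = img.reshape(h, size, w, size, -1).swapaxes(1, 2)
    return patches.reshape(h * w, size, size, -1)

print(split_sentences('Jina is a neural search framework. It is cloud-native!'))
print(split_patches(np.zeros((256, 256, 3)), size=32).shape)  # (64, 32, 32, 3)
```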
Okay, so combining all three modalities, we see that to solve a neural search problem there are a couple of steps one must go through: an encoding step, to represent a real-world document as a vector; an indexing step, to store and retrieve vectors efficiently; a scoring step, to compare two vectors under some metric; and a preprocessing step, to select the correct semantic unit for the vector representation. Okay, so if we put the neural search we just talked about next to traditional symbolic search, which could be based on Elasticsearch, on Lucene, on Apache Solr, what is the difference between the old one and the neural one? We see there is actually not so much difference. Both have an indexing step, which is often done offline, and both have a parsing step, which is online: when a user inputs a query, you parse that query and represent it in the same space as the documents you indexed. The difference is that Elasticsearch uses analyzers to turn the query and the documents into a symbolic representation, and the matching happens on that symbolic representation, whereas a neural search system uses deep neural networks to do the parsing and the indexing: we represent the query and the documents in a latent vector space, and we do the matching there. That's basically the difference. If you want to know more about traditional symbolic information retrieval versus neural information retrieval, please read the blog post I published about two years ago on building a symbolic and a neural search system end-to-end for product search. Okay, so the next difference I want to highlight for neural search systems is the runtimes; I call them runtimes, but you can call them running modes. A traditional machine learning system has two runtimes: train and test (inference). A typical search system has an index runtime and a search runtime. But a neural search system is based on deep learning models, so there is a third runtime: you basically have train, index, and search. This is another major difference between a neural search system and both the traditional search system and the traditional machine learning system.
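To tie the index and search runtimes to the steps above, here is a minimal sketch that uses FAISS, one of the vector indexes mentioned earlier, as the storage-and-scoring layer; the dimensionality, the random stand-in vectors, and the choice of top-k are illustrative assumptions.

```python
# Minimal sketch of the index and search runtimes over a vector index.
# FAISS is one of the options mentioned above; dims and k are arbitrary.
import faiss
import numpy as np

dim, n_docs, k = 768, 10_000, 5
doc_vectors = np.random.rand(n_docs, dim).astype('float32')  # stand-in for encoder output

# --- index runtime (offline): store all document vectors ---
# IndexFlatL2 scores by Euclidean distance; for cosine similarity you would
# L2-normalize the vectors and use faiss.IndexFlatIP instead.
index = faiss.IndexFlatL2(dim)
index.add(doc_vectors)

# --- search runtime (online): encode the query, then retrieve ---
query = np.random.rand(1, dim).astype('float32')
distances, ids = index.search(query, k)
print(ids[0])        # ids of the k nearest documents
print(distances[0])  # their L2 distances (the scoring step)
```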
Okay, so neural search seems to be a very promising solution to all these cross-modality and multi-modality problems. But what stops people from using it? Why don't we see a lot of people using neural search in production? The first problem is scalability. Neural search is based on deep learning, and deep learning is heavy. If you run it in production, the QPS, the queries per second, of your search system is pretty low, and that often doesn't meet production requirements. The second problem is flexibility. Every morning, I believe, you check Twitter, you check arXiv, and you find new models popping up, and you want to integrate them into your system. For example, look at BERT: after Google released BERT, a lot of variants of BERT came out, and you want to integrate them all. But it's actually not that easy: some of these BERTs are implemented in different frameworks and require different dependencies, so you cannot immediately incorporate those new models into your existing system. That is the flexibility problem. The third problem is sustainability. A lot of deep learning models have very rigid dependencies: they depend on, say, exactly TensorFlow 1.15, this kind of very concrete version restriction. How do you handle that if your system is completely coupled to such dependencies? And a deep learning system is often multi-architecture: part of your system runs on CPU, another part runs on GPU, so it's not really maintainable. Finally, accuracy: a lot of people see deep learning as a black box, so they don't know how to tune it in a principled way. Okay, so those are the four challenges people face when they want to bring neural search into production. Jina provides a one-stop solution for all of these problems; that's why we built it. Jina is a cloud-native neural search solution powered by state-of-the-art AI and deep learning. That's the slogan we put on GitHub. But if I explain Jina in simple words, I often use three phrases. First, Jina is the TensorFlow for search. What does that mean? TensorFlow here is just a symbol: it doesn't mean the literal TensorFlow, it stands for any universal deep learning framework. TensorFlow is very powerful: you can use it to recognize cats from dogs, but you can also use it to play Go. It is a universal deep learning framework with which you can do everything. Jina is a framework built on top of frameworks like TensorFlow, dedicated and tailored to search applications. That's what I mean by TensorFlow for search. Second, Jina is a design pattern. In the old days, in classic search systems, there were established design patterns, such as analyzing and tokenizing. When it comes to neural search, there is a new design pattern, and nobody had written it down yet. Jina provides it, and not just as a proof of concept: we have iterated on this idea for a couple of years, so we provide the design pattern and you don't need to figure it out by yourself anymore. The last phrase is, of course, the company vision: we see ourselves as the next Elastic. This is how we see ourselves and what we are doing.
Okay, so this graph basically explains where we position Jina. You can see the AI stack of the industry as a reverse pyramid: at the very bottom you have the computing infrastructure, which includes CPUs, GPUs, FPGAs, cloud services, and so on. On top of that you have the frameworks: TensorFlow, PyTorch, MXNet, whatever. Jina is one layer above those deep learning frameworks; we actually embrace all kinds of deep learning frameworks. And on top of that you have end-to-end applications: machine translation, image recognition, search, text generation, face swapping, data compression, and so on. So Jina is a dedicated layer on top of the universal deep learning frameworks, and it provides the infrastructure for all kinds of search applications. This is how we position Jina. Okay, and this is a list of what you can do right now with Jina. You can use Jina for long-text and short-text semantic search; for image-to-image, video-to-video, audio-to-audio, or any other kind of document search; for multi-modality search, cross-modality search, and multi-faceted search; and you get index sharding, replicas, elastic distributed workloads, model containerization with Docker, and so on. A lot of the features listed here are already implemented in the current version, and some of them will be released in the next couple of weeks. So basically, as you can see, we provide a one-stop solution for all these kinds of applications. Okay, so as a company our goal is not just to implement one open-source framework. The open-source framework is the core of our company's product lines, but let me list all the product lines we have planned in the company landscape. We have Jina Core, which is the open-source framework you see on GitHub. On top of that we have Jina Hub, which is basically a marketplace providing domain-specific search solutions. You can imagine the relationship between the Core and the Hub like iOS and the App Store built on top of it, except that we are not a universal app store: we only provide apps for search, whether it's image search, dog search, or cat search. The Hub is driven by the community and contributed by the community; we already have a prototype of the Hub on GitHub, so you can check it out. Then we have Echoes, which basically provides enterprise features such as dashboards, auditing, log monitoring, alerting, this kind of thing. And finally, on top of all of this, we will provide a cloud service, but that is a long-term goal. Okay, highlights of Jina. You can get all these bullet points from GitHub. It's a universal search solution: it can search any kind of modality, and it can even run on a Raspberry Pi: if you pip install Jina on a Raspberry Pi, it will also work. It is high-performance and state-of-the-art. We actually spent a lot of time polishing the user experience to help developers onboard, and we made the API very simple and very easy to use.
So when you use Jina to implement a new search solution, you will realize that it's actually much, much simpler than you imagined. And a Jina app also gets very powerful extensibility and very simple integration with Jina Core; you will see that in the Flow API. Okay, so I think a good start to getting to know Jina is this hello world. If you have a macOS or Linux operating system with Python 3.7 or above installed, you can simply do pip install jina and then jina hello-world: it runs an end-to-end image search on your laptop. If you don't have Python 3.7, or you don't want to install it, but you have Docker installed on your Mac or Linux machine, you just copy-paste this one-liner into your terminal and you can run the hello world that way. I tried to run this during the rehearsal: it takes a lot of resources, and I had to turn off the video streaming because it got very laggy. That is actually understandable, because when you run Jina locally it starts multiple processes that communicate with each other, and that makes the video streaming very laggy. But, for example, I will just clear the terminal here and run jina hello-world. Okay, so it starts to download the Fashion-MNIST dataset, does the indexing, and so on. I will stop here, because otherwise it becomes very laggy. Or you can also turn on profiling and the log server; once you do that, you can go to our dashboard, and you will see how... yeah, I probably shouldn't run this now, because then it becomes very, very laggy. Let's terminate here, otherwise the computer is not usable anymore. But anyway, I have screenshots prepared, because this is always the case when you demo something live: it becomes buggy, so you prepare some screenshots as backup to cover your own embarrassment, right? Okay, so anyway, what just happened when you ran jina hello-world? It showcases an end-to-end neural search system with the two procedures, index and search, in one line. First it downloads the Fashion-MNIST dataset, including the training and test sets; it uses the 60,000 training images for indexing; then it randomly samples 50 queries from the test set, retrieves the top-50 visually most similar images for each, and writes the results to an HTML page. That's what happened, and you can see the results. Okay, so the hello world is very simple, as you would expect. But if you look at the implementation behind it, it is also very straightforward: we have a Python API which basically loads a workflow from a YAML file.
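To give a feel for this, here is a minimal sketch of such a YAML workflow and the one line of Python that loads it; the Pod names, the file names, and the exact YAML schema are my assumptions based on the 0.x-era API, not the verbatim hello-world config.

```python
# Minimal sketch: a flow defined in YAML and loaded from Python.
# Pod names, file names, and the exact YAML schema are illustrative
# assumptions based on the 0.x-era API, not a verbatim hello-world config.
from jina.flow import Flow

flow_yaml = """
!Flow
pods:
  crafter:
    uses: craft.yml
  encoder:
    uses: encode.yml
  indexer:
    uses: index.yml
"""

with open('flow.yml', 'w') as fp:
    fp.write(flow_yaml)

f = Flow.load_config('flow.yml')  # the same flow could be built via the Python API
```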
YAML here is like JSON: it's some kind of schema, a domain-specific language for defining the workflow. So a YAML file looks like that: it basically defines the workflow, and it has these different components. We call each of them a Pod, and you can imagine each Pod as a microservice. And if you say: okay, I never learned YAML, I'm a JSON guy, I don't like YAML, then no problem. You can design your flow in the dashboard by just dragging and dropping, then copy the generated YAML into a file and load it. That's it. So we provide different ways for you to design the flow: using the Python API, using a YAML file, or using the dashboard. We will see that later. Okay, so about all these logs: when we run this thing, you see a lot of these triangles, and it seems like a message is propagating. It is: it is a message propagating over the different microservices, over the different Pods. And you can see that each Pod in the flow corresponds to a log record here. So that's basically how Jina works: it propagates the request over different microservices, and it's actually parallel in nature. Okay, so that's a good start. You see the logs scrolling, and you get a very intuitive first impression of how Jina works: Jina is propagating some message over all these microservices, and you could deploy these microservices here and there on different machines. So you get the first idea. But in order to understand how Jina really works, I suggest everybody read Jina 101, the key concepts of Jina. It actually explains everything in a very, let's say, cartoonish way. You can simply search for Jina 101 and you get to this page. It explains everything from the very low level: the Document and the Chunk, the YAML config, the Executor and the family of Executors. You see these four gear guys? They actually correspond to the four important steps we go through in neural search: preprocessing, encoding, indexing, and scoring. But the family is not restricted to those four gears: you can add more members to this Executor family at any time. And then we have the Driver, the Pea, the Pod, and the Flow, so eventually we have this kind of big family here. Okay, and that's good, because some data scientists have never learned about microservices in any formal way, and that's fine: you just read Jina 101 and you get the idea. Okay, so what are those characters if you put them into code? Here you can see the source of the Jina hello world. In the leftmost column we have the Executors: the crafter, the segmenter, the encoder, all these algorithmic units, which are basically written in plain Python or NumPy.
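To make the Executor idea concrete before moving up the hierarchy, here is a minimal sketch of a custom encoder; the base class and method signature assume the 0.x-era Executor API, and the pooling logic is deliberately trivial.

```python
# Minimal sketch of a custom Executor (an encoder), assuming the
# 0.x-era API: plain Python/NumPy, no networking code whatsoever.
# The networking ability is granted later by Drivers and the Pea/Pod wrappers.
import numpy as np
from jina.executors.encoders import BaseEncoder

class MyMeanEncoder(BaseEncoder):
    """A deliberately trivial encoder: mean-pools token embeddings."""

    def encode(self, data: np.ndarray, *args, **kwargs) -> np.ndarray:
        # data: (batch, seq_len, emb_dim) -> (batch, emb_dim)
        return data.mean(axis=1)
```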
Now, those Executors cannot talk to each other directly, because they are actually isolated in microservices; you cannot just call them like you call a Python function. In order to do that, you have to grant them some network communication ability, and that is done by the Driver. The Driver basically defines how an Executor behaves under a certain kind of request: under an index request, how should it respond; under a search request, how should it respond. This talking schema is defined via the YAML config. Then, one layer on top of that, we wrap the Driver, the Executor, and the YAML into a Pea. And if you want to start multiple Peas, you wrap them into a Pod. Eventually you put everything into a Flow, and the Flow represents a high-level task such as indexing, searching, or training. So that goes from the micro level to the macro level, from the left-hand side to the right-hand side. I think there is an animation here: from the micro-level Executor to the Driver; the Driver and the Executor, together with the YAML, are wrapped into a Pea; if you want to scale up, you set replicas and shards, and the Peas are wrapped into a single Pod; then you connect the Pods together with the Flow API, and this finally becomes your high-level task in the Flow. So this is how the logic evolves. Okay, so now let's talk about the Flow API, as the Flow API is probably the first thing you notice when you look at all the examples and tutorials in our GitHub repo. The Flow API is one of the interfaces we provide for developers. It's like a translation layer: it translates a YAML file, Python code, or even the interactive dashboard into some kind of backend that you want to run. If you use the Flow API locally, it basically runs with multi-processing or multi-threading. Using the Flow API to drive Kubernetes or Docker Swarm will also be possible. Strictly speaking, the Flow API is a context manager for all the Peas and Pods and their contexts, so you don't have to worry about who connects to whom, what the port numbers are, and how they should communicate with each other: this is all taken care of by the Flow API. Kubernetes support, Docker Swarm support, and other orchestration-layer support is still under development, but at the end of the day you can use the Flow API to generate the YAML or JSON config you need to deploy on the cloud. Okay, so here are some very simple usages of the Flow API. As you can see, adding a Pod to a Flow is very simple: you just call add, add, add, and give some attributes, including the path to the Pod's YAML file. And, this is very interesting, I always liked this feature, I think it is one of the highlights of the Flow API:
you can run one of the Pods, not all of them, just one, remotely, and you can also run one of the Pods remotely inside a Docker container by specifying the image for that Pod. Once you specify the host and the image, that Pod is wrapped in a box, so to speak, and can be put on the other side of the earth. This is a very powerful feature. And then you can of course build parallel steps using the keywords needs and join: you can branch the steps, run them in parallel, and then wait until all branches finish. So that is the Python API. Now, say you do this in production: you cannot change the source code so frequently, so you separate the structure of the flow from the code itself. That's why you write an independent YAML file, and that's no problem: you can use the Flow to load this YAML file into memory and run it. Feeding data into a Flow is also very simple. You use with, as it is a context manager: you open this context and do the operations, and that's it. You give an input function to tell the Flow to grab the input from this function, which is probably some generator, and you write your output with a callback function, which in this case is as simple as print: every time a request makes a round trip, you print it. It's as simple as that; I will show a small sketch of these patterns right after the examples. Okay, here are all the examples that we have right now for advanced Pod and Flow API usage. You can see we have feature extraction. This is something I was constantly asked about by the community when I was doing bert-as-service: people often asked me, okay, how can I do an XLNet service, can I do an ALBERT service, can I do a whatever-BERT service? With Jina, that actually works out of the box: you just give a different Docker image, and that's it; you use the same workflow, the same pipeline, and you extract the features. In this example we use a Hugging Face Transformer to extract features and scale it in parallel, so it's pretty simple. We have image search, which is not such a big deal: searching in a flower dataset. We have QA search, which I believe is based on South Park scripts: you type in a sentence, and it looks for similar lines in the scripts. We also have video search, based on the Tumblr GIF dataset: you basically throw in a GIF image, and it finds all the related videos. And here, coming back to the hello-world example, we can also split the hello world into a client-server architecture: you deploy the server remotely, and then you use a local client to send data to the server. You can find the examples here; they are also very simple and very intuitive. All these examples are available on GitHub, and you can use the shortcut learn.jina.ai to get them all.
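Putting the pieces of this section together, here is a minimal sketch of the Flow API patterns just described: branching with needs and join, the with context, and feeding data in and out; the Pod names and YAML file names are hypothetical, and the calls assume the 0.x-era API.

```python
# Minimal sketch of the Flow API patterns described above; Pod names
# and YAML file names are hypothetical (0.x-era API assumed).
from jina.flow import Flow

f = (Flow()
     .add(name='crafter', uses='craft.yml')
     .add(name='encoder1', uses='encode1.yml', needs='crafter')
     .add(name='encoder2', uses='encode2.yml', needs='crafter')  # runs in parallel
     .join(['encoder1', 'encoder2'])                             # wait for both branches
     .add(name='indexer', uses='index.yml'))

def input_fn():
    # some generator that yields the raw data to be indexed or queried
    for text in ('hello', 'world'):
        yield text.encode('utf8')

# index runtime: the with-block is the context manager that starts and
# tears down every Pea/Pod, ports and connections included
with f:
    f.index(input_fn)

# search runtime: print is the callback invoked per returned request
with f:
    f.search(input_fn, output_fn=print)
```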
Okay, so now let's summarize a little bit. We have seen that Jina can do this and that, a lot of things. When people first come to the Jina GitHub repository, they think Jina is probably something like a very small toolkit, like bert-as-service. Then they realize that Jina is actually very ambitious: it is doing a lot of things and trying to become the cloud-native framework for neural search. And they get scared: they don't know how to start, where to begin, how to learn this. So here I summarize the best way to learn Jina for beginners. First, as I said, run the Jina hello world. We spent some time making sure there is actually no dependency for it: you just pip install jina and then run jina hello-world. You don't need TensorFlow or a specific version of PyTorch or whatever, you don't need some MySQL or NoSQL database; everything runs out of the box. And if you don't have Python 3.7, use Docker; that is also doable. Then, after you run the hello world, read Jina 101. This is a must: you must read it to understand the key concepts, the Pea and the Pod, the Driver, the Executor, these kinds of terms. Once you get to know these cartoon characters, you can read the first two tutorials: one on the Flow API, so you know how the examples work and how they are written, and one on the IO functions, which are of course very important, because you have to know how to feed data to the Flow and retrieve data from it. Once these steps are done, you can really dig into the hello-world example and look at the other, more advanced search applications: the NLP South Park script search, image search, flower search, video search, and so on. And any time you encounter a problem, read the docs. We actually spent some graphic-design effort on the docs, because we know developers spend a lot of time reading them, and we want that experience to be more enjoyable. Okay, so after all these steps, you can really build your own search system with Jina. So try that: try to build your next search system with Jina. Speaking of the learning experience, we really care about the onboarding experience for new developers. That's why we define different milestones and optimize the learning experience for different levels of developers. We have the cartoonish Jina 101 storybook; we have a comprehensive list of examples and tutorials, and we will keep adding more; we have nicely written and, at least, beautiful-looking documentation; and we will launch our tech blog next month. The tech blog will also be community-driven: we will write posts, and we also encourage the community to share their learning experience with Jina so we can publish it together. Apart from that, if you ever encounter any issue when using Jina, please submit an issue on our GitHub repository, or if you prefer a more interactive, chit-chat style, use our Slack channel.
We already have some members there, and we have daily discussions: what is Jina, how do I do this, what is wrong with that, this kind of thing. Okay, so that basically concludes Jina as a project. Now let's talk about Jina as an open-source company. The first question I'm often asked is: okay, Jina is great, Jina is ambitious, so why don't you build Jina at X, where X is some tech giant? My answer is that we actually care more about neural search, we care more about the community, we care more about open source, and that's why I stepped out of a tech giant, founded a startup, and raised money to do this as a startup. All of our co-founders share a very strong belief that open-source AI infrastructure is the future, and we want to build this thing together with the community. There are a couple of things we want to change, and we will change them; our company culture actually encourages people to change things they don't like. So that's basically the answer. Some people may think: if you did this inside a tech giant, you would probably get more resources, say, PR for branding and so on. But for a tech giant, this counts as a hobby; for us it is something more. That is a major difference: because we care more, we can pay more attention to the details, optimize for the community, and make this project more sustainable and open source. That is the motivation that drives us to do this as a company. We also have our own understanding of open source. A lot of people think open source means you put your source code on GitHub and then collect the stars. That's actually the easy part. The more difficult part is open governance: how do you make your project sustainable over the long term, and how do you provide long-term support? Fortunately, we are a venture-backed team, so we have enough funding to give Jina very long-term support, and we will build a lot of synergy with the community and move toward a more open governance model. So, specifically: we will make our project sustainable and community-driven, and we will build synergy with other open-source software. If you look at the Jina code, we have already incorporated interfaces to TensorFlow, PyTorch, Hugging Face Transformers, FARM from deepset, and Faiss from Facebook. We love building synergy with other open-source software. We are also looking for partnerships to build an open governance model, such as a technical steering committee around Jina, so we can discuss the challenges in neural search, solve them, and push the whole community forward together. Okay, that being said, if you are interested in any of the jobs we offer: right now we are hiring full-time AI engineers, an AI product manager, an open-source evangelist, and full-stack engineers.
So if you are looking for a full-time job doing open source, especially AI in search, and if you believe in neural search, then please submit your resume and follow the instructions on the website. We are looking forward to having you on board. And, yeah, that's basically it for today's session and...