Hey, let's get started, everyone. We're here from the Wikimedia Foundation to talk to you about the Wikidata Query Service with Stas. Stas is going to give us a quick five-minute demo of what the Wikidata Query Service is and what it can do, and then we'd like to hear your ideas about how the service can be improved. A lot of the users of the service are developers like yourselves, so getting your feedback would be very helpful. So, I'll hand over to Stas.
You probably need to stand here for the microphone, Stas.

I was told anywhere kind of in front of it was fine, so here should probably be okay, but then you block the projector. I'll stand right here; that should be fine.

So, the Wikidata Query Service. This is how the GUI for the service looks. The service has several parts, and the part you see and work with as a human is the GUI, which allows you to run queries. So let's look at how the queries look and run them. This is the first and most famous query: it gives us the list of cats in Wikidata.

Zoom in? Zoom in, yeah, okay. So, what we can see here is several things. First of all, we see that it refers to Wikidata elements. It refers to labels, and we can have different languages. So let's see if we have something more interesting. Yeah, we can also have different languages here, as we see. If you notice, here at the corner we have "data updated". What this means is that this is live data from Wikidata: if something is updated in Wikidata, then within a few seconds, as it says here, it's available for queries. Next, what we can do with it: we can download the data set, of course. More interesting is that we can have visualizations. We have a bunch of visualizations here. I'm not going to demo them all, because that would take a very long time, but I will show a couple. Basically, this is the way to see the data beyond the tabular format. For example, we have maps. Let's see if we have some interesting maps. Yeah, so this is the map of Berlin subway stations. We see that we have data points that have coordinates, and we can see them on the map. Among the other visualizations, we can have images. This one has images, yeah, this one has images for sure. So we can have images directly from Commons.
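The famous cats query from the demo is the standard first example on the query service; it looks roughly like this (shown with the label service clause that comes up later in the session):

```sparql
# All items that are an instance of (P31) house cat (Q146)
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  # The label service resolves ?itemLabel in the requested language
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```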
We can also have all kinds of graphs and charts and so on. We also have a graph builder that allows you to build your own graphs. It's kind of simplistic, but you can build simple graphs and import them into Wikipedia. So that's the part that you see the most when you go to the GUI. There are other parts that you see less but that are also available for people to use. They are in use right now on Wikipedia: there are bots that use Wikidata data to generate lists, tools like Listeria, and other bots that use this data for quality control, for finding all kinds of problems within Wikidata, and even within the Wikipedias, and hopefully correcting them. Among the other things that I haven't shown that we have here: we can try to explain what a query means. There is an explanation here. Instead of these kind of obscure numbers, we have an explanation that says what we're finding and what we're showing. This is for now very simplistic, but we are working on extending it so it will be smarter and can explain more complex queries. There is no chance it will be able to explain all queries, but it should be able to explain simple ones and allow you to modify and adjust them. That's about all for a short intro into what the query service does.

So now, what is the purpose of this session? We would like to hear from you what you would like the service to do for you, what you would like to use it for, and what capabilities you would like us to add. There's no promise we will actually do all of it, but we really want to hear what the needs and interests are, to develop this further. Yes?

A quick question on the display types: the graph.

Can you repeat the question? It was about the graph visualization, what kind of graph it is?
Yeah, it's kind of not clear, so I'll show you what kind of graph there is. This is the graph that is in the visualizations. You have nodes, and this is the way you can visualize all kinds of networks. This is the network of descendants of Genghis Khan, so you can browse it, open the nodes, and explore those networks in the GUI. We also have another kind of graph, the graph builder, which is a different thing. And there are the charts and so on; that's how you build the charts and visualizations like that. Okay, so that's pretty much it for me talking, so I guess we can get started.

I think we're going to have to register the interaction type somehow, but I have to register it in a way that the graph can represent: so protein A and protein B, and there isn't just an edge between A and B.

Oh, you mean... okay, so then you'll probably have a property in Wikidata that says that, and we don't need to do anything special; you'll just get it as part of the data.

Okay, so we don't need to do anything special with this graph builder, so that at the beginning we can define which additional attributes there are?

Yes. So my interest is federated queries, where you can take information from Wikidata together with data in other Wikibase instances. For context: I believe the idea of Wikidata is that it takes, basically, information that belongs in Wikidata, and only that; you don't want anything in Wikidata that doesn't necessarily belong there. So I'd like to be able to have my own Wikibase and then use Wikidata to fill out the gaps.

Okay, so you want, basically, Wikidata to be able to read from your Wikibase, or your Wikibase to read from Wikidata?
Okay, so does your Wikibase have a SPARQL endpoint?

Yes.

Okay. So I think your SPARQL endpoint should already be able to read from Wikidata's endpoint, but we're also thinking about allowing Wikidata's endpoint to federate with other SPARQL endpoints. Right now we're trying to figure out exactly how to get the list of endpoints, so please watch the project pages: we will be running a survey soon where we will ask people to nominate the endpoints that they want to have, and we'll have a process for how to nominate endpoints. It will probably be similar to the process by which people nominate properties, maybe even simpler, because there's not much room for objection there, except if an endpoint is really broken or something. So we'll just let people tell us, okay, I have this endpoint, please somebody add it, and somebody will look at it and say, yeah, it's good, and that's it.

Since others are interested in this: what's the process for mapping my own entities to Wikidata's, and do you think that it will work?
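As a sketch of what such federation looks like at the SPARQL level (the remote endpoint URL and the owl:sameAs mapping here are hypothetical, purely to illustrate the SERVICE clause):

```sparql
# Combine Wikidata data with data from a (hypothetical) external Wikibase
SELECT ?item ?itemLabel ?localNote WHERE {
  ?item wdt:P31 wd:Q146 .
  # Federated sub-query against a placeholder remote endpoint
  SERVICE <https://my-wikibase.example.org/sparql> {
    ?localEntity owl:sameAs ?item ;
                 rdfs:comment ?localNote .
  }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```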
Great question. I have no idea; that's something that we will need to look at. We are facing the same situation with federation in general, and for now the plan is that such mappings can be created manually. So that's a very good point that we need to look at. Since we control the code for the Wikidata endpoint, we might have some way to assist and make the translation automatic, but for that we need to really know concretely what we're talking about.

I have more of a comment than a question, which is: we have this interface, which is basically a front end to SPARQL; you can put in a query and you get a nice visualization of the data. But I think the great strength of a service like the Wikidata Query Service would be to expose a simplified API for third-party use, not one where you have to write your own SPARQL query. I mean, if you think about it, a lot of the web services you find around, which are not free, are based on knowledge graphs like this one, and this one is among the biggest knowledge graphs that we have in the world. It would be very important to allow everybody to query it, even as a third party, not inside our cluster, let's say, or our projects directly. I don't think it's viable, the way it is now, as a general-purpose instrument for people outside of our projects. Do we have any plan to build an API that simplifies this a little bit?

So what would that API do?

Not exposing SPARQL directly, because of course, I mean, you could just expose SPARQL to the world and tell people to just throw an HTTP request at it, but we'll probably not do that; that's not viable in the long term, of course, because people could write a query that brings down the service or whatever. I think we should advertise the service to third parties in the outside world, but we have to instrument it a little bit: create a simplified view, limits.

So, we had a bunch of... when we started, we had something, we talked about it,
and we didn't find any good way to define what a simplified API should look like, or a way to do it such that we'd actually get there. The hard part in all of this is to find good abstractions or simplifications that are still general enough to be generally useful. I can very well imagine an API that would allow you to run simple queries in a simple way, like "give me all instances of X", right? Something like that. And the question is: should we build specialized endpoints for five or fifteen of these use cases? I don't know. Having something like that defined, where you fill in placeholders, like you have a template and you fill in the subject or the scale or something like that, that might be the way. I don't have it yet.

If I had to choose, honestly, I would ask for natural language: you just ask for anything, and a processor translates that natural language into a query.

We could do that, but I know it's not simple. That's basically what Google does: you put in a question, it analyzes the text and builds a query upon it, and you can see why some queries are very hard, especially when you have special cases; I can imagine how it would work with genes or something. I'd suggest moving in that direction, but not through the query service or a new API; just make it simpler for those use cases.

There was also feedback from students who wanted to somehow learn to write their own queries, so maybe we have to make it easier to learn, over time, to write your own SPARQL.

I think that would be as hard for us as the API. So there are two things. One is getting new programmers on board: you have to be able to write your own query, and there, I think, you currently don't know what this is and what that is. The next step there is the research project that's analyzing the current query logs, to basically see what people actually do, and to go on from there. The other thing is making it
easier for people to write queries. Tutorials are one thing that would make that easier; there is a very useful tutorial on this, which you can find in the top menu, and we can link more material as part of our tutorials.

Yes. So my use case is that I often have a bunch of pages, and I just want to know, for each of them, something I can use as a filter. I think it used to be possible with the older WDQ tool, and I was wondering whether I can post 10,000 to 100,000 items and get a filtered result back.

Okay, so you're saying you would like some kind of batch processing. It's not currently possible to do that. Well, it's kind of possible within limits, like the limits of GET URLs on the query service; for more complex queries there would also be a query complexity problem: if we give it too much raw data to match against, it might just be too slow. But you can write a script; that's what I've been doing several times for our internal stats.

I just did it for specific cases, so it's not very useful outside of those cases.

So, getting back to my earlier point, now that I've found you again: I remember when you were working on the query service, I was thinking about it. Google has its own API, which is kind of similar, over a dataset queried somewhat like SPARQL. Something like that was my idea of an API for developers, and I think even for end users there should be something like that; I mean, it could even be provided to users directly, honestly. I see so many applications, especially on mobile, and also in our own applications that interact with the wikis, but even for outside clients. We are forced to use these private, proprietary services for this kind of information, like Google's knowledge engine, and I think it would be important to have a free alternative to that.

One thing we could
think of, which has been discussed time and time again, is basically defining our own query language: we specialize what we have and provide a simplified view over SPARQL. I mean, the original WikiData Query language that Magnus defined is kind of an idea of a platform for that, and it could hide a lot of the complexity of SPARQL behind a custom language that is tailored to our data. I don't know whether that would make things simple enough.

I think it would make it easier. For me to understand a query language... I'm not a SPARQL person; I need to write things in some query language, and one tailored to our data would be a lot easier to understand. It would not cover everything, but for many of us it would help.

So that would need help from the community in two senses: from the community of developers and from the community of users, to actually define such a language. Because I can sit down and define a language, but it probably would be useful for me and not for anybody else. We'd need to have some discussion about what this language should look like.

Just on the topic of timeouts: if you suffer from a very complicated query, maybe it would be worth having some kind of system, like a job queue, where you can submit a really complicated query and have it run offline, and get the result later.

That's an interesting idea, yes. But one thing is... I see the use case, and I also see the potential for abuse, because people would just say: you know what, this query takes half a year, so I'll just put it in the job queue and let it be somebody else's problem, instead of writing the query the right way so it takes half a second. People would just use the job queue because it's easier. So we would still need some controls. But in general, having some kind of offline capability... well, you kind of have it with bots, but having it without having your own bot would probably be useful. Yeah, that's an interesting idea.
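For reference, the "original WikiData Query language" mentioned above is Magnus Manske's WDQ, which expressed common queries far more compactly than SPARQL. A rough comparison, from memory, so treat the exact syntax as illustrative rather than authoritative:

```
# WDQ: all items with the claim "instance of (P31) = house cat (Q146)"
claim[31:146]

# Roughly equivalent SPARQL on the query service:
# SELECT ?item WHERE { ?item wdt:P31 wd:Q146 . }
```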
We've been looking at using Wikidata information in big apps and things like that, where response times have to be a lot lower than what we see now; latency can currently be 5 to 8 seconds. Is there going to be any push to get that down, like to sub-second?

So it depends what kind of queries you do. If you have predefined queries, meaning you know the whole query in advance, so it's a static query, we can do it; we can ensure that it will be sub-second. But if it's an arbitrary query, I don't think we have a way right now to guarantee anything, because some queries are just slow.

So, predefined, like...?

That's one of the things that we discussed: you would define a query on Wikidata as a predefined thing, say "this is my query", and then you'd have an ID that says "okay, I want to resolve this query", and you would get the result.

I think one major aspect of this is caching. Basically, if you want complex, fully arbitrary queries and you want the results up-to-date to the second, that's not going to work. You have either one or the other: either you have only very simple queries, or you query stale data.

I think this goes back to the developer use case, because we don't need arbitrary queries; that's my point. I need to know, say, all the categories and articles in this relation; it's a known query shape.

So maybe we can relax the requirement of an exact predefined query: maybe we can have query templates. Say you have a query and we can substitute this one parameter, and it still works. I think that might be possible.

Another simplification idea, and this is something I feel strongly about: I think it would help a lot if Wikidata had typed domains for properties, and the ranges would be nice too, especially the domain of each property. For example, a country can have a capital, but a dog
can't have a capital, that sort of thing. And that change would enable various GUIs, if nothing else, to enable easier browsing, and I think it could also potentially enable easier input.

I agree with you, but I don't think it's a query service question; it's more of a Wikidata question. But to say something on it anyway: the way we've been thinking about this is always that our data model is very open, and the software is not going to prevent the dog from having a capital, just as it's not going to prevent other statements that make no sense, which actually happen. But you can of course specify these constraints on the properties, and we do that, and software can make use of these annotations. It just means that, as a consumer, you have to accept that some things in the data violate them: from one perspective a statement belongs, from another it doesn't. You need some of these constraints to have some teeth. It's not something that we have fully solved, but I could well imagine software making use of the constraint annotations for some real applications.

I think it does potentially also tie into querying.

That would be a problem, probably, because we aren't enforcing constraints, and there will probably be violations.

We have constraints; it's just that not many people look at them, and even fewer people do anything about them. So it plays a role, but we already have the querying part; we need the social part, to get people to actually pay attention to constraints.

My question was: in doing a SPARQL dialect, is there a tool that actually helps you build the query you want? I'd like to distinguish that from the visual things that only let you run it.

I don't think it's just a simplified interface that hides some of the possibilities but lets you find this kind of entity. Let's take what Luke is doing as an example; I'm not saying we should just go with that, but as an example: a lot of mobile apps use that, and they have a predefined set of
entities, basically, that you can query, and you have a schema for those entities, and you can query them: you can, for example, search for a music group that has two guitarists, because those are entities in the schema of the music-group entity. That kind of abstraction on top of the data we have is probably what third-party applications, or even our own applications, would need: building some form of schema on top of the data that's much more articulated than what we have. It's different from the approach of the original WikiData Query language; it's two sides of the same story, basically. And something like this is going to be easier to optimize, for example, for those latency issues that Claude was commenting on.

I think you have to distinguish two layers of abstraction here. One is the base data model: entities, statements, references, sitelinks, these things. That's basically the data model of Wikibase, and restricting a language to that already excludes a lot of the dimensions that full SPARQL allows; I think it would at least be enough to be different from the original. And then you have the modeling of the domain that's happening on the wiki itself. That's something more fluid, and you could build a specialized interface from it, but I don't see how that could be baked into the software, except for some very, very specific cases, like, okay, that's kind of what I'm thinking about for the subclass hierarchy. Those could be very specialized things that work, but they're very specialized, single-purpose applications, perhaps.

I think it would be useful to allow third parties, or even internal uses. I imagine a subset of entities that would probably cover a lot, like buildings, animals, cities, all the basic nouns, and if we had that, almost like a predefined entity set, an API would go along with it, and we could always add more along the
way.

I feel like we already probably have some data we can use, to look at what various kinds of people are querying; and, you know, when basic queries load right away, those users are a lot happier.

Okay, so I have a couple of proposals for topics that aren't covered yet. The first one: we talked about this query service, and it has the data from Wikidata. Is there some data from somewhere in the wiki universe that you'd want to query that is not in the query service, like structured data? Well, we won't be parsing wikitext, that's not going to happen. But if it's some data that is available in some structured form, or that requires only modest processing: is there any data that you want to get in?

Okay, a question on that, actually: is there any access to metadata, time created or something like that?

So, the answer is that we didn't consider it. The reason is not that there's some prohibition against it; even though there might be some privacy aspects, let's say we overcome those. We just didn't talk about it because there was no request. But yeah, that's something to think about. In general, any data that is available in structured format could be made available in the graph, so in theory it's possible; in practice we just never talked about it, so that's a new thing for us.

It would be of some benefit already, even if not for all revisions or something.

Yeah, there are a couple of things; that's why I'm asking. We started this topic with small steps, so we want to know if we need to go further there, or if it's enough.

Revisions are kind of tricky, because unlike the wikis, the query service lives only in the present; it doesn't keep any history.

Yeah, but we don't need all of it.

So we can do a mind trick and say, okay, it's the present state of the history, and so we can kind of define
it as the present even though it's the past. So yeah, we can work around it, but we shouldn't expect the same kind of timeline as on a wiki, where you can basically step through the whole history; that would probably not be possible.

And do you think there is a way to handle it performance-wise? Because I think there would be a huge amount of data.

Yeah, so performance is kind of tricky, because it depends a lot on how you query. Let's say we just double the database: it won't be that big a deal, because the data would be just sitting there. The question is how you query it. If you query the whole database, then of course it will be slower. If you query just the part that you added, separately from the part that already exists, it won't matter that much. We might need a little more hardware to have caching on top of it, but it won't be like nothing works anymore. It depends a lot on what you query for.

If you have made a change on a big page, this probably should be a number; like, you want to say, all pages or all items which haven't been changed since that date. Can something like that come from the API?

Some of this... this is a very good question, and we have considered making the wiki API available. That's one kind of data that is not in the query service, and we might want to make it available. So here I would be interested to know which APIs would help. When you write a query, if you've ever done it, there is a thing called a service: basically, it allows you to run some extra code inside the query. An example of a service: if you noticed, when I did the cats example, there was a service clause that extracts labels in different languages. So you can do a lot of stuff with services, and one of the things you could do is extract stuff from a wiki API. The question is which API, and which service, would be useful. You'd basically do a service clause over that data, and
then you just work on that.

That's on the to-do list; I hope it will be done next quarter. It depends on various things that are outside my control, but I really hope that it will be possible.

I'm trying to get some statistics on how much certain items and properties are used by queries. For example, with templates, there are statistics on how often a template is transcluded. So if there are queries using a certain property or a certain value, then changing one value in the data could have a big effect: it would change a lot of query results, like whether you get X or Y from a query.

So, we actually have Markus, with his students, doing research now on the Wikidata query logs; it covers topics like this, like what the queries are and so on. I guess when they are done, and if the data can be shared, we can actually open up the tools, and we can make some kind of dashboards; we will see how it goes. There is a lot of interesting statistics there; the problem is how to process all this data, and that takes time. People are working on it, and hopefully when they are done we can also use those results.

Okay, so the next topic I wanted to ask about: we have this nice query service, but it lives kind of on the side; you go to the GUI, or you go to the API and get back JSON. Now, if you want to use it on-wiki, how would you do this? Like, if you want to reuse query results on a wiki. We have this list generator, the Listeria bot, that generates lists, so that's the obvious case. Do you have any other kinds of needs or ideas for how you would use query results to generate content on a wiki?

There is the canonical example with the infoboxes.

Infoboxes, okay. Yeah, that's already being worked on, so infoboxes are kind of starting. What else could we do to make this data actually usable? So, we had
another guy, a while ago, working on a generic JavaScript library that takes in structured data and displays it in different ways, like charts.

Yeah, the visualization library.

Yes, the one you are working on; that's the graph and visualization library. I'm not sure how widely it's used, because you need to design the graph yourself, and right now that's not super easy: you need to know both SPARQL and the Vega language. But if you're invested in it, you can do it. We have some nice visualization examples built with this, and they are totally data-driven, so they stay updated.

Somewhat related to this: timeouts. I think that's a pain point for a lot of people, and also the fact that there are ways to optimize your query so that it suddenly doesn't time out, but if you make certain mistakes, it is likely to time out. Do we have some strategy, say, to provide information upstream, so that Blazegraph can improve their SPARQL optimizer?

That's kind of hard, because Blazegraph has its own optimizer, which tries to do its best; sometimes it works, sometimes it doesn't, and I don't believe we can do better here than they did. We can, of course, and we do, file issues upstream and say "it works well this way but not that way", and they optimize for such queries, and so on and so forth. It's an ongoing thing, because it's hard, but I don't think we can optimize queries beyond what they do, just because they have much more information than we do about how it works internally.

Can we, or do we, expose profiling information, so the developer can tell which part of the query is slow?

So, you can see the query profiling information, but unless you know Blazegraph internals very well, it's not going to be useful to you. I mean, it's useful to me as a person working on this for three years, and even then, in some cases I just dump it and send it to them, because I cannot make
anything out of it. So for a random person it would probably just be a wall of text that is incomprehensible, unfortunately. We don't have anything right now that would be comprehensible. Can we build it? I don't know; maybe. That's an interesting thought to consider: because this data is automatically generated, maybe some part of it might be extractable. I just don't know; maybe if I really squint at it, I can find something that could be useful for people. But generally, if a query times out, you usually have to rewrite it.

So, actually, I was going to ask about that; this is a problem that comes up every now and then. There's one side, where we try to optimize existing queries so they fall under the limit, and the other side is increasing the limit. How feasible do you think it is to increase the limit?

We can increase the limit. The thing is, we don't know which limit is good and which is bad. We just chose 30 seconds because it sounds not too short and not too long. We might double it and see what happens. There might be two things happening: first, nothing happens and everybody's happy; second, somebody runs a query that brings down the server, and of course it happens at 3 a.m. and I have to wake up and restart it. So we could try it. The problem is that there would again be two types of queries: queries that are just over the limit, and queries that will never finish. There is a way to run a query without the limit, which you cannot do, but I can, and sometimes I check and see how far over the limit queries go. There are a lot of queries that take a minute instead of thirty seconds, but there are queries that just don't finish, and I think it's okay to have queries that will just never complete. For those, with a 30-second limit, we cut the losses at 30 seconds; if we double the limit, the losses would be doubled, with the query hogging the server. So I think once we get the third server,
we might try it. On two servers I'm still pretty scared of doing it, because if one goes down we're basically running on a single server, and then we're one step away from total failure. With three it's more comfortable: with the monitoring we have, by the time it alerts us we still have two servers live, and we have time to kill the query and see what's going on.

There's another angle. Making the case for more hardware is difficult when you're vague about what you need it for, but if you can say "hey, here are all these awesome things we could do that we can't do right now because we don't have enough hardware" — send me all of those. And yes, our planning is happening soon.

I'm not sure I'd be telling the complete truth if I said that, because yet another server would make the system perform better overall, but for single-query performance it wouldn't do anything. So from that point of view we would be safer from these queries, but it wouldn't change anything for an individual query. Still, that's consideration enough; I would be happy to have more hardware.

My experience is that queries that use labels often time out. Especially when you use the service for the labels, it mostly times out.

Yeah, there's a reason why that happens, and there was actually a ticket open about it. It requires a bit of messing with the internals, but maybe there's a way to make it faster. It's slow because it looks at the labels across the full language list — and we have about 600 languages — so what it does may just be complex; and it doesn't really benefit from optimizations, because we basically don't cooperate with the optimizer there. So we might try to do that; maybe we can fix it. I'll look into it.

I figured out that if you use a subquery and just pull the labels in the last step, that helps.
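That trick — resolving the items first and attaching labels only at the end — is usually written as a subquery with the label service applied outside it, so the label lookup runs only over the final rows. A minimal sketch, reusing the "cats" query from the demo (the `wd:`/`wdt:` prefixes are predefined on the query service; the LIMIT is just illustrative):

```sparql
SELECT ?item ?itemLabel WHERE {
  # Inner subquery: do the real graph work first.
  { SELECT ?item WHERE { ?item wdt:P31 wd:Q146 . } LIMIT 100 }
  # The label service then runs only over the 100 result rows.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```

Without the subquery, the label service participates in the full pattern and, as noted above, has to consider the whole language list for far more intermediate bindings.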
It helps because, as I said, we're not cooperating with the optimizer: we have to separate the part that the optimizer can work on from the part that it cannot, and then the optimizer does much better on the part it can handle. So that might be one of the things that's happening.

OK, so maybe this is already documented somewhere, but is there a document on making your queries more optimal?

There is something; it's kind of unofficial. Maybe we should review it and make it more official.

It might be good to make that a topic, so we can collect information there and then have a plan for handling it.

Yeah. One thing: we now have the help portal, so we have a place where we can link things. Maybe we should review that page, bring it at least to semi-official status and then to official status, and link it from the portal.

I have a question. We've been talking a lot about performance and timeouts. Dan, I know on Discovery you're thinking about things like "Wikidata should be in Cirrus" — how does this all play together? Is there a long-term strategy for the query service, or are these just special use cases? What's the relationship between ElasticSearch on the wikis and the query service, and how would Discovery work with that?

I don't know that we have a specific plan, honestly; mostly we handle things on a case-by-case basis — things like, for example, giving articles within x miles or kilometers of a specific location. Theoretically you could do something like that, but I don't know that we have a specific strategy.

So one of the things we plan to do soon, as I mentioned, is integrating the query service with Elastic.
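An aside on the optimizer discussion above: the service runs on Blazegraph, which accepts non-standard query hints that let you take over when the optimizer orders patterns badly. This is only a sketch of what working around the optimizer can look like — the hint vocabulary is Blazegraph-specific, not standard SPARQL:

```sparql
PREFIX hint: <http://www.bigdata.com/queryHints#>
SELECT ?item ?image WHERE {
  # Disable the join-order optimizer: patterns then run in written order,
  # so you must put the most selective pattern first yourself.
  hint:Query hint:optimizer "None" .
  ?item wdt:P31 wd:Q146 .   # instance of: house cat
  ?item wdt:P18 ?image .    # image
}
```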
How that relates to the question is that it might also help to route use cases that Elastic covers better than the query service. The obvious case is full-text search, which the query service doesn't do right now and Elastic excels at. But not only that: there might be others — for example coordinates. Some coordinate cases Elastic may handle better, or Elastic has some plugin that does whatever it is excellently, so we would use it that way.

Generally, the big distinction is that Cirrus only deals in single documents. You can't inspect your graph the way the query service can; that's where the separation happens. It's a very good filter on one or more documents, but each is considered as a single document, so you can't explore links between them. If you need to find documents that have a specific property, it's pretty good — it's basically one level of query.

Maybe this is already possible with infoboxes or templates, but I'd like to be able to embed a graph view of a certain query in an article.

You mean a visualization? That's not possible right now, but it might be in the future. We're thinking right now about making some visualizations exportable. There are actually two ideas for how to do it. The simpler one is to make a visualization exportable as an image, which is for when you write an article and want to put it there — basically a glorified screenshot. The more complex one is to make it exportable as a template of what the user is doing; that will require some work, but it's an idea we're looking at.

We currently embed image maps as a nested set of about 7 templates in wiki articles, with links to other wiki articles, to show the interactions between proteins, for example, from one protein article. Is that theoretically possible with Wikidata?

There is no technical reason, at least that I can see, why it wouldn't be possible. It requires work, but we are thinking about it.
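For context on embedding visualizations: in the query service GUI itself, a query can already declare which visualization it opens with, via a magic comment on the first line — presumably the piece any future export or embed mechanism would build on. A sketch in the spirit of the subway-map demo from earlier (the Q-id for "metro station" is given from memory and worth double-checking):

```sparql
#defaultView:Map
SELECT ?station ?coord WHERE {
  ?station wdt:P31 wd:Q928830 ;   # instance of: metro station (Q-id assumed)
           wdt:P625 ?coord .      # coordinate location
}
```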
Is that something I can contribute to? Is there a common project for it?

If there isn't one, we'll publish it on the project space. Actually, I'm not sure the Graph extension can do that — it's an interesting question. I mean, you are just putting in text, and if you use the Graph extension it comes out as rendered content; there's no wikitext. Maybe it's actually possible; I'm not sure.

Maybe it's too soon to ask this, but I was wondering if you have any early thoughts on integrating with the structured data work we've been doing for Commons.

We are thinking about it. Basically, I think the plan is, by the time it's actually there, to have some concept of querying it too. I'm not sure yet how it would look — whether it would be the same database or two different databases, I don't know yet. But one thing I think we can already say is that Commons editors who want to search based on specific criteria probably shouldn't be forced to write SPARQL queries, so we will need some nice interface for them to say "show me pictures like this". It's possible we could do that using ElasticSearch; single-document filtering is something it's really amazing at. As we just said: if you need documents filtered by certain criteria, Elastic is very good at it; if you need any graph navigation, Elastic is not good at it.

That's a longer game. Sorry — a question. Maybe somebody already asked this and I missed it, but what about the reverse: integrating query results into on-wiki searches, like what Google does? If somebody types something into the on-wiki search box that looks easy to process as a Wikidata query — say, "famous cats in California" — something could notice that there are a lot of categories
that map to Wikidata, and that there's an easy way to link those and show a little graph box on the search results page, right on the wiki. And — crazy, crazy product-type idea — we have this Platypus-style service that processes natural-language input; we could run a little A/B test that takes 3-8% of the traffic and just see if there's any useful output. It doesn't have to be integrated into the service or the wiki itself — just an A/B test, as a prototype. For scale, we've been talking about something like 100 queries a second.

Yeah, so we have two barriers to that, basically. The first is the performance barrier: there's no way SPARQL can handle that many queries, especially vague web-search-style ones — maybe there's something called "cats", and maybe there's something called "how many", and maybe they're combined in some way that can produce some result. These are very hard queries that take a lot of time. The second thing is that, just conceptually, we don't yet know which queries look like they might work. So I think we need more experience before we can do something useful there, but yeah, that's definitely on our minds. I just wouldn't expect something working there in the next couple of years.

There's also the potential for misunderstanding. I asked Platypus "where is Germany", and one of the answers was "in the middle of Sweden" — there are two Germanys inside Germany and one inside Sweden, so it really depends which Germany you mean. But it's a really awesome idea; it just needs a bit of work. I would try everything that starts with a question word — who, where, how many, anything with a question mark — and just look at it, because it would be very hard to get SPARQL to perform at a level that can process that many queries.
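To make the "famous cats in California" example concrete, here is a guess at the kind of query such a front end would have to compile the question into. Everything here is an assumption: "famous" has no obvious Wikidata property and is simply dropped, and P131 (located in the administrative territorial entity) stands in for "in California":

```sparql
# Hypothetical compilation of "famous cats in California"
SELECT ?cat ?catLabel WHERE {
  ?cat wdt:P31 wd:Q146 ;      # instance of: house cat
       wdt:P131* wd:Q99 .     # located (transitively) in California
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```

Even this toy case shows both barriers above: mapping "famous" to a property is the conceptual problem, and the transitive P131* path is exactly the kind of pattern that gets expensive at web-search query rates.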
It also relates to the design of how the search page works right now: we have no concept of staging results, so we cannot deliver one result and then deliver another result a bit later. That means that if SPARQL takes a lot of time, the whole search page takes a lot of time. If we designed it so results could be lazy-loaded, it would be much more viable to actually deliver such results.

We're out of time, I think. If anyone has any questions, feel free to ask Stas afterwards. Thank you.