A question to the last lecture: how do you organize quality control of the data, which in all sorts of systems is a very important part? If you have data, you have to have agreed standards, and someone has to control what goes in and what can be trusted, because the database is somehow taken as a reference. I know that is not an easy job. How do you do all that?

That's true, and quality control depends on how the data are generated. We allocate our data to levels, which we call gold, silver, and bronze. Parts that have been characterized through experiments, where we have high confidence, are allocated accordingly. Sometimes we use predictions or computational tools for quantifying parts, and in that case we say the part is characterized, or rather predicted, using a computational tool. That annotation is also searchable: if your tool is using the resources that I showed, you can filter on it and use only the high-quality data if you want to. That's the quality control we have at the moment. What I can also say is that in silico things go really fast, so we can write programs and integrate data quickly, but the experimental feedback is the bottleneck at the moment: the characterization of parts takes more time.

By asking this question I also want to point to the problem we have in general with mathematical software, for instance. We have a reviewing system for theory and for papers, but how do you review the software, which you need if you provide the algorithm? There are of course several approaches, but we always tend to write unit tests that handle the particular problem and make sure at least those cases are covered.
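The unit-testing practice just described can be sketched in miniature: pin a numerical routine's behaviour down with tests for the specific cases it must handle. The function and test cases below are hypothetical, purely for illustration.

```python
# Minimal sketch of unit-testing a numerical routine, as discussed above.
# The routine and its test cases are invented examples, not from the talk.

def moving_average(values, window):
    """Simple moving average; rejects invalid window sizes."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be in 1..len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def test_constant_series():
    # A constant series must average to itself for any window.
    assert moving_average([2.0, 2.0, 2.0, 2.0], 2) == [2.0, 2.0, 2.0]

def test_known_ramp():
    # A hand-computed reference case.
    assert moving_average([1.0, 2.0, 3.0], 2) == [1.5, 2.5]

def test_invalid_window_rejected():
    # Edge cases that once caused a bug get a permanent test.
    try:
        moving_average([1.0], 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for window=0")

test_constant_series()
test_known_ramp()
test_invalid_window_rejected()
```

In practice such tests would run under a framework like pytest; the bare asserts here keep the sketch self-contained, and, as the speaker notes, coverage of this kind is always partial and grows continuously.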
Of course there will be things that we didn't cover; it's a continuous process.

If I may rebound on the question: in the beginning, when you mentioned ontologies, you said that you're doing faceted queries and aggregation of queries thanks to SPARQL and the ontologies. You're managing content from the databases and you're querying it with ontologies, but ontologies can be seen as content as well. So how do you manage the fact that your ontologies can evolve too, so that the same query could give different answers under ontology version one versus ontology version two on the same database? That is basically the life-cycle management of the ontology.

Well, if the ontology is evolving, I assume your knowledge space, your data, is also evolving; otherwise you wouldn't introduce a term just for the sake of introducing it. I assume you also take care of the integration.

Sometimes the owners of the ontology do the evolution by themselves, on their side, so the data does not necessarily reflect the change. In that case, I guess, the query is not going to return anything. That's my point. But maybe we can handle this process by making these ontology terms available to the computational tools.

Actually, we have been building an equivalent system for an industrial site, and whenever we release a new version of the query language we check that there are mathematical equivalences for some query primitives from version one to version two to version three, so that at least we retrieve the same kind of answers. And if there are dramatic disruptions, then the content needs to be revised within the system.
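The kind of faceted, ontology-backed query being discussed might look like the SPARQL sketch below. To keep the example self-contained, it is evaluated by a small pure-Python filter over a toy in-memory triple set rather than a real RDF store; all term and part names are invented for illustration, and a real system would use terms from a curated ontology, which is exactly why answers can shift between ontology versions.

```python
# Toy illustration of faceted querying over ontology-annotated parts.
# The SPARQL text is a sketch of the pattern; the function below evaluates
# the same pattern by hand over a tiny triple set. All names are invented.

SPARQL_SKETCH = """
SELECT ?part WHERE {
    ?part a ex:Promoter ;
          ex:confidence "gold" .
}
"""

# Tiny in-memory (subject, predicate, object) set standing in for the store.
triples = {
    ("part:p1", "rdf:type", "ex:Promoter"),
    ("part:p1", "ex:confidence", "gold"),
    ("part:p2", "rdf:type", "ex:Promoter"),
    ("part:p2", "ex:confidence", "bronze"),
}

def gold_promoters(ts):
    """Pure-Python evaluation of the pattern above: promoters whose
    experimental confidence facet is 'gold'."""
    promoters = {s for (s, p, o) in ts
                 if p == "rdf:type" and o == "ex:Promoter"}
    return sorted(s for (s, p, o) in ts
                  if s in promoters and p == "ex:confidence" and o == "gold")

print(gold_promoters(triples))  # ['part:p1']
```

If a new ontology version renamed or split the `ex:Promoter` class, the same query text would silently match a different set of parts, which is the life-cycle problem raised in the question.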
A very quick comment on this: I'm also involved in the SBOL development a little bit, and those are the kind of problems that we would like to have right now on the synthetic biology side, because the more basic problem is that most of the data is obviously not in that stack; it's not actually there. So these are very good problems, but I would love to have them instead of reading a paper where I don't even get the sequence.

I understand that.

Yes, sir. I did not mean to pass you over for the entire session. I don't consider it a sanction against my country. So, following up on the first two questions: is your query system and database stable against errors and flawed questions? If I submit a query with a minor mistake, what level of error would yield a senseless answer, and which one still brings something reasonable and useful to work with?

Okay, there are lots of query types. The ones that I showed here use ontologies, and those force you to pick an existing ontology term, using a tool such as Protégé for OWL, so with these approaches you can't make mistakes of that kind. Maybe the next step is to provide computational tools with views that abstract away the ontology.

Excuse me, did you test your system practically?

Yes, I did.

Great. Is it possible to see some reports on the testing?
Yes, I was showing a table of summaries with classifications; that was my example. But then I presented some additional, more complex queries about pathways: how to retrieve a given function, how to retrieve a specific pathway, or how to retrieve a specific coding sequence, for example. You can easily write such queries, but you have to be an ontology expert, which most users are not. Our task is then to provide computational tools which abstract that ontology level away, so that you just use some drop-down boxes. I think that's the next step.

Is it possible to put a question to the first presenter? Regarding the microbiome: what kind of diversity or natural dispersion is behind your presentation? Diet actually affects the microbiome, and some other aspects do as well, so what is the natural diversity of the microbiomes you have studied, and what is the standard deviation of your data?

This is a very good question and an extremely complicated one. The standard deviation of our data is extremely small, for the simple reason that what we are doing right now is in mice: they are fed the same batch of the exact same food over and over again, and they come from the same facility, so from mouse to mouse there is very minimal deviation in the type of microbiome they have. This is not quite artificial, but it is also not quite natural. In one of my introductory slides there was something about the abundance of the bacterium, Bacteroides thetaiotaomicron: in sampled microbiomes, about 46% of humans will have thetaiotaomicron and, conversely, 54% will not. That doesn't mean they have no Bacteroides; they will just have another one. So to an extent, everything we propose right now is a proof of concept, and what we envision in the longer term is some form of personalized medicine, where maybe this exact strain would not be the way to go, but
we are trying to develop sets of tools that can be applied to microbes within natural microbiomes. The same goes for the phage: I presented a single scaffold, but I am not expecting that any real-life application would use T7-like phages exclusively. We would need a couple of different scaffolds, for the simple reason that some diseases are caused by Gram-negative bacteria and some by Gram-positive, and to my knowledge there is not a single phage capable of targeting both, so at the very least we need a minimum of two scaffolds. The same is true for all of those bacterial engineering systems that we envision. We do it in Bacteroides thetaiotaomicron because it is somewhat characterized, and to start a project you need something which is not completely unknown, but we are moving into experiments where we actually isolate bacteria from natural microbiomes and see to what extent we can transplant everything we've built so far into those bugs, of which we know nothing but the 16S sequence.

Excuse me, a very simple question: for two human beings with drastically different immune status, very strong and very weak, what kind of difference do you expect in the microbiome, in brief, in the most general terms?

I don't know how to answer that quantitatively; qualitatively, very different. Someone who has Crohn's disease will see their complement of enterobacteria rise from, say, about 0.1% to something in the range of 5%, as an example. So it can be pretty drastic; that's the whole idea behind the implication of the microbiome in disease. These biases are the whole idea, and we're trying to find ways to bring the microbiome back to a healthier state. Yes, it will need to be tested, obviously, but we are not there yet at all.

So, do you know what the stoichiometry is for the repression with Cas9? Do you need a lot of RNA, a lot of Cas9, or a lot of both? You see a repression, but by default you have a fixed amount of Cas9 and a fixed amount of RNA, so do you know whether you can modify the effect by playing with
the relative amounts of Cas9 and RNA that you have?

Yes, it's possible. The way it is implemented in those experiments is that dCas9 itself is under IPTG induction, and depending on the IPTG concentration we see a response similar to what you would expect from any other IPTG-inducible system.

You don't know how many molecules are involved in the process? Is it like 1 to 10, 100, 1000 to 1? The target is just one promoter, so how much of the other components do you need to get repression?

Cas9 works at very low concentrations. I would not be capable of telling you how many molecules of dCas9 you have in the cell, but it is not the limiting factor. Usually the limiting factor is the guide RNA, because it's a small RNA and it's pretty unstable. The problem is that it varies from guide RNA to guide RNA. In the data that I showed, and I kind of skipped through it, you could see very drastic differences in efficiency from one guide RNA to another, and some of them differ by only a few base pairs: you take one, and another one which is a few base pairs downstream, and the difference in efficiency can be very large. So it can vary drastically, but it's difficult to deconvolve the actual binding efficiency of the multiple components from the stability of the respective components.

A sort of foundational question about microbiome engineering, sorry for that, but it really comes up. It is pretty clear, in my opinion at least, and perhaps I am wrong, that the microbiome is a world of itself, which not only has interactions with the host organism but also has lots of interactions among the different bacteria or microorganisms that live in the intestine. From that point of view, if for instance we would like to set up a microorganism that produces some vitamin for the benefit of the host, it is pretty clear that it will change the composition of the microbiome, because some bacteria will be favored by the existence of this vitamin that they cannot make
for instance, and will benefit from it, and in the end it is not even sure that the host organism will actually benefit from this engineered probiotic bacterium. So my question would be: what is the value of working on one of these components on its own? Can you obtain any type of result, a net output, in a real situation without taking into account the whole, the interactions with the other microorganisms, either through modeling or directly, empirically?

It's Bacteroides thetaiotaomicron; repeat after me. Now to get to the actual question. The question is: what is the importance of the interrelations between the various members of the microbiome when you are trying to engineer it? And this is an excellent question. The answer is: uncharted. The only thing we know is that I don't know of any paper which has managed to link two species together in any mechanistic way. We know they interact; that's all we know. So at this point it's completely exploratory. Maybe we won't be able to do anything useful, and maybe we will; that's the whole idea. Just one more thing: what we are targeting now are really diseased microbiomes. We're not trying to go into the metabolic mix of the microbiome and delicately engineer out a given pathway while increasing the expression level of another one. We are thinking more of things like: there is a bad bacterium, we want it out, and this should be feasible. I mean, we can do it to some extent in the lab; it's a question of efficiency after that.

I have a practical question. You are working with a strictly anaerobic bacterium, but you are using luciferase as a reporter. How do you combine this? Luciferase obviously requires oxygen.

NanoLuc, that's the whole difference. NanoLuc was engineered so as to be active under anaerobic conditions, so it's all in the reporter. I have no idea what they did, but NanoLuc is perfectly
functional under anaerobic conditions, without oxygen. Well, depending on the assay there may or may not be oxygen, and here I would need Mark, who actually did the measurements, to give a clear answer. We have made measurements where we take the cells and do it in a test tube, and there there is plenty of oxygen; but he has also made in situ luciferase measurements, and those I am not sure how we performed. What I am sure of is that NanoLuc works even under anaerobic conditions. They first tried various members of the fluorescent protein family, and none of them ever worked, because of that anaerobic-condition problem. The other thing I would like to add is that although the bacterium is strictly anaerobic, it is oxygen tolerant, so you can actually grow it anaerobically, take it to your desk, and it will not die. It will stop growing, but it will not die, so there are a lot of things that you can still measure aerobically although you have grown everything anaerobically.

I am just curious: you have a microbiome, and you have these bacteriophages; are they also in your gut?

They should be. There are probably ten times more phages in your gut than bacteria, and there are already probably a hundred times more bacteria than human cells in your gut.

Could there be an option to somehow coordinate or modify the activity of the bacteriophages to have a positive effect?

Yes. Maybe I went too fast on that; indeed, that is one of the ideas behind why we are trying to control host range. Everything about bacteriophages in the end boils down to host range, and this is an extremely complex phenotype to control. But yes, one of the ideas is to use the phages to clear up space for the bacteria we construct to settle in, because direct competition between an engineered microbe and a natural microbe is unlikely ever to lead to colonization. That may indeed mean directly isolating phages from guts and engineering them the way we want, and so on, or it may have to do with phages that we already have in the
lab, which we just engineer to have a different host range and try out; we don't know yet what works.

Do you think that nutrients, that the diet, can induce metabolic switching in the gut?

My gut feeling says yes, but I can't think of a paper I have read which actually measured that. I can think of papers that started from a homogeneous population of mice, submitted them to various diets, and found out that at the end they had completely different microbiomes, but I don't think that was ever correlated to some form of metabolic switching. Now, the field is moving quickly, so it's not impossible that such a paper exists. So yes, there will be some change; what change, who knows.

Is it possible to represent the effect of a recombinase, or something that modifies the design itself, or a Cas9 that would cut the design and therefore maybe create something new, using SBOL?

So the question is: the design is catalytically active, and you want to represent that using SBOL, was that the question?
I would like to represent that you can modify the original design; the design can modify itself.

Yes, you can represent it. You can basically create components for your target as well, and then you can add annotations, and you can also create an interaction using SBOL entities. You can say that the interaction basically represents the genome editing, and that there are two participants: one is the complex and the second one is the DNA. You can also assign roles to the participants using SBOL interactions. I think that would be sufficient for you to exchange the data.

Just a question on semantic web technologies in general and the idea of querying across ontologies. Is there a limitation, with your system as far as I can see, or with semantic web technologies generally? You're making statements about things: you're saying that this protein is a such-and-such, or a particular molecule is, say, a fatty acid; statements that you make about things, and then you're querying across that. Is there a limitation, though, in that there's no way of inferring knowledge from it? If you come across an entirely novel protein or an entirely novel small molecule, you can't query anything about it, because the system has never seen it before. Is that a limitation, insofar as you can't actually infer new knowledge from this; you can just query what is existing and what is known?

That's true, you can only query what you have. But sometimes the data are there and you can always query them; you just don't realize that maybe that protein is a specific target. Once you query your database and find that it is similar to something else, you infer that you have a candidate.

That was the question, or a similar thing: you know that this is similar to that. But it doesn't seem that you can capture that kind of mental leap that
we can make in such a system.

In our ontology we use, for example, the Gene Ontology, which assigns molecular functions, so some links like that you can capture. But I'm not saying we've designed an ontology that captures the whole domain; there are many things that we didn't capture.

I just wonder if it could be augmented with more cheminformatics approaches, or more biochemistry-aware approaches. Then you've got the knowledge that you know, and you can make the leap that says this compound looks like that compound, of which I know these facts. It strikes me that on its own it would be less useful than a hybrid system that knows a little more about chemistry and biochemistry.

Yes, certainly. You can always mine your raw data, and then, as a next wave, you can provide more high-level information, which can probably be better represented using biological networks. But with raw data, sometimes you have to do bioinformatics, compare sequences or search for specific features, for instance; you have to do that, and I guess that's not covered as far as networks are concerned.

I can just add an additional answer to that. When we say ontology, in the semantic web standards we have multiple levels of what we call ontology. There is a formalism called OWL, which is actually the one you're using, and the first level of that language provides inferencing between terms when there are synonyms and things like that. The two subsequent levels, which are considered dangerous by the computer science guys, actually do a little more: whenever they provide some inferencing at two, three or four degrees, they directly materialize that inferred link, creating a direct link, so they are in effect enriching the ontology at query time, and there is an extension of the SPARQL language for that. That's the first thing we are monitoring in what we are doing. And the second thing is that machine
learning and deep learning are also now trying to merge with that space, so that querying, inferencing and, let's say, mining are being linked together in one system. But it's really not the same as what you said: what you're implying is a curation and quality process, and that's interesting, but it is a curation and quality process.

Back to the microbiome: did you test the relaxation or recovery capacity of the system? I mean, if you cause, artificially say, a diarrhea with some physiological stimulus, how fast does the microbiome get back into the normal condition, the normal state?

Normal, exactly. I'm trying to think, but it's important to remember that the microbiome field is really extremely young. Although the word has been used for a fairly long time, the real quantitative data date back less than five years. This is still extremely descriptive, where people take a bunch of samples, which may be more or less biologically relevant, and deep-sequence them to find out what kind of bugs are there, and that's it. As far as I know, there has not been a single actual follow-up experiment during an actual illness that has ever been done.

It has. In both the USA and the former Soviet Union there is quite abundant data on spacemen concerning intestinal microflora dynamics, and quite a number of those data are freely published, not classified; some others are classified, sure. And those data say that for a human being in quite specific conditions, quite specific in physiological terms, the recovery is quite fast.

Very interesting. Yes, I am thinking as we are discussing. What we see in the mouse, indeed, is that theta is pretty stable once introduced, but it will get outcompeted after about 30 days once we stop.

Actually, my question addresses not the theta itself but the composition of the combined system consisting of a mouse and a theta: if you make some dilution of the intestinal microflora, how fast does it recover to the previous, normal status?

Well, yes, one of the reasons why I have difficulty to
answer that question is that I am not sure how to define "normal status" at all.

Well, it was my attempt to understand how you define it.

Good answer. I will just think about what we have done, and in what we have done we actually never look at the endogenous microflora. So far, what we follow exclusively is the engineered bug that we put in. The only measures of the resident microflora we have are the total count on a given type of plate, which is an extremely biased measure, and some qPCR data where we measure the total abundance of a specific type of 16S, which is theta's. And in those experiments, the problem is that we start from streptomycin-treated mice, so when we start the experiment we don't have a natural microflora; we have something which was heavily depleted. We do have bacteria that survive streptomycin, many of them, but that's nothing like a natural microflora. So what we know is that once we stop streptomycin, within about two days the gut is fully colonized with stuff, but we cannot define, as of now, what that stuff is, and even less whether or not the stuff after is the same as the stuff before. But that's exclusively for what we do; yes, the data is obtainable, but that has not been a major focus for us at the moment. Thank you.

A question for Göksel: how do we adapt SBOL to anything beyond the bottom-up mainstream of synthetic biology? I have in mind some other approaches to synthetic biology, such as top-down, or protocells, or whatever, and I would like to know if there are some extensions of SBOL in the planning that would support top-down, genome-reduction, whatever approaches that are not of the bottom-up mainstream type.

My comment would be that SBOL doesn't dictate whether you start bottom-up or top-down. It's about exchanging your information without losing any of it: you should be able to exchange your designs, and in my lab I should be able to replicate them. So I think SBOL would still be useful for top-down designs. If you mean some
abstract definitions, like specifications, SBOL can still be useful: it can accommodate things where you create templates, and you can even represent the whole genome using SBOL; that's perfectly fine. SBOL is RDF-based and borrows its terms from different ontologies, and we also developed it in a way that is really extensible. Even if you come and represent your application-specific data with SBOL, we have the ability to store your data, and during round trips it won't be lost. Those kinds of entities we call generic top levels, and we treat them as external data. We introduced that concept because we wanted it to be easy to create extensions in the future: even if a tool doesn't have the extension context, the information is still exchangeable during data shifts. And in the future there will be extensions for SBOL; hopefully some group will develop one, devote it to that particular track, and say, let's extend SBOL with this extension.

Any other questions? Do we still have some time? No other questions? Thank you.

Ah, but I have a question. You use this approach of delivering Cas9 using M13, which is a great idea, although M13, as mentioned, is complicated to use for other microbes, but this idea could be used with all the other phages that you are developing, delivering this type of toxin or Cas9 into the bacteria. But in addition to the targeting of the bacteria, have you considered the replication requirements of the phages? In the sense that, in order to have, let's say, a minimal set of phages that could infect many bacteria, it's not only the capacity to infect at the beginning but also the ability to replicate inside. I don't know how strict these requirements are for many phages; do you think this could be a limitation?
Yes and no, as usual; it depends on what kind of application of phages you envision. If you envision delivery of DNA, like the Cas9 system with M13, and you just want to package it into something else which is better than M13, then phage replication is not relevant at all: there is no phage replication. You package your cargo DNA in the lab, then you deliver it, and then it's just a cassette in the bacteria. In that type of system, the plasmid design may or may not be replicative, depending on what you want to do with your Cas9 system, but phage replication does not occur, cannot occur; that's the whole idea behind the approach. And maybe I'm not going to say too much about it, because I'm guessing that this is what David Bikard is going to be talking about tomorrow. So in this case it is one particle, one hit: if you have a billion cells to kill, you need a billion phages, and you need each one of them to find its target. This is one of the reasons why I don't really like that approach, and I am still a big proponent of more classical phage therapy, where you indeed use the capacity of the phage to spread once it has found the right bacterium. In this case there is indeed a lot more than just host recognition involved in replication: bacteria have developed a lot of phage-resistance mechanisms beyond simply changing, evolving, or mutating receptors, of very different types and of high diversity from strain to strain, and it then becomes a question of choosing the most appropriate scaffold, the best one there is. But this is also the reason why at some point I mentioned that I don't believe we will make do with a single scaffold; we'll probably need a few. The idea is to limit as much as possible the number of scaffolds we need to use, because, as Yvonne said yesterday, every time you introduce a new component into the system there is, first, an increased risk of failure and, second, an increased cost of getting it approved, tested, and guaranteed to be okay. Okay, so I would like to thank
our speakers for their presentations.
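The "one particle, one hit" arithmetic from the phage-delivery discussion can be sketched with a simple Poisson adsorption model. The assumptions (random, independent adsorption; every hit kills) and all numbers are illustrative only, not from the speakers' data.

```python
# Poisson sketch of non-replicative phage delivery ("one particle, one hit"):
# if phage adsorption is random and independent, the fraction of cells
# receiving at least one particle at multiplicity of infection m is
# 1 - exp(-m). Assumptions and numbers are illustrative only.
import math

def fraction_hit(moi):
    """Fraction of cells hit by >= 1 phage at the given MOI (Poisson)."""
    return 1.0 - math.exp(-moi)

def moi_for_kill_fraction(f):
    """MOI needed so that a fraction f of cells is hit at least once."""
    return -math.log(1.0 - f)

# Hitting 99.9% of cells needs roughly 7 particles per cell, so a
# billion-cell target needs several billion phages; a replicating phage
# amplifies itself instead of requiring that dose up front.
print(round(moi_for_kill_fraction(0.999), 2))  # 6.91
```

At an MOI of 1 only about 63% of cells are hit, which is why, under these assumptions, a non-replicative delivery system needs phage numbers comparable to or several-fold above the target population, as the discussion notes.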