Okay, so thank you very much for the invitation to speak. One of the questions that many of us are interested in is how information is represented and processed by the circuitry of the nervous system, and how this is disturbed in disease states. The circuits we're talking about are, of course, more like the one on the left than the one on the right, but there are certain key things in common when investigating them. If you wanted to understand how the circuit on the right worked, you would really want to know about the activity in the individual elements of that circuit, the transistors, and you would want to be targeting individual transistors in some way to understand the logic of the information represented there. We'd argue the same is true over here: in order to answer this kind of question, we need to record from targeted individual processing elements. So what I'm going to be doing today is basically giving a technology talk. There's a neuroscience talk I could give as well, but this will focus on some of the tools we're using to look at targeted elements, one of which is automated patch clamp recording, the other of which comes back to calcium imaging, joined up with some of the informatics tools we're developing to work on this. The gold standard for monitoring neuronal properties is, of course, whole-cell patch clamp, developed some time ago now and used with great acclaim to study the properties of individual ion channels, et cetera, and I'm sure everyone in the room knows at least roughly how it works: you take a pipette down and stick it onto the side of a cell. You can do cell-attached recording, where you can monitor, say, action potentials without disrupting the cell, or you can break into the cell and go whole cell. You are then in principle disrupting the cell, but you've got full access to its inside: you can monitor subthreshold signals, and you can even put plasmids and things in and modify what's going on in the cell. This was done initially in vitro and later in vivo, so it has been a very powerful technique. Moving into the 1990s, Vidyasagar and Creutzfeldt were the first to do it in vivo. Okay, so that's great, but done in vivo this way, you basically take a pipette down and record from the first cell you happen to come across before your pipette gets blocked, so it's not really targeting. Coming back to targeting: this is actually a very old slide, there are more tools available now, but we have this fantastic library of initially Cre-based and now a variety of other molecular tools, intersectional targeting, where we can have transgenic mice expressing some reporter in a specifically targeted cell type. We've got examples here of dentate granule cells, cortical interneurons, Purkinje cells, Bergmann glial cells, et cetera, and in principle we can also use dye-based approaches. This is the classical AM dye approach, where we can separately label neurons and glial cells by combining two different dyes. So by using these kinds of approaches we can target individual circuit elements.
Now, using two-photon imaging we can also image the elements of these circuits in vivo, so that's another tool in the toolbox; these are some images I collected in my lab some time ago now. So let's put that together with patch clamping and see where we go. Two-photon targeted patching was developed by Troy Margrie, originally in the early 2000s. Basically you fill a pipette with red dye, as we can see here, and take it down into the brain under the guidance of a two-photon microscope to target a cell that has been labelled with green fluorescent protein. This is all happening under a two-photon microscope: we've effectively got a craniotomy with the objective sitting above it. Now this is actually harder than you might think. Intuitively it maybe should be easier than blind in vivo patching, because now you can see what you're doing, and that is a plus, but the downside is that with blind in vivo patching you'll accept the first cell you come across as a legitimate target. With two-photon targeted patching you want to target a particular type of cell, so there is a smaller number of potential targets. The hit rates are therefore somewhat lower, and as a result there are extremely few experienced two-photon targeted patchers worldwide, and they have a bad habit of getting promoted and no longer having enough time to spend in the lab. This has limited the dissemination of the technique, so one of the things we set out to do was to use robotic automation to increase the productivity here and lower the entry barrier. In this we were motivated by a paper from Ed Boyden's group by Suhasa Kodandaramaiah, who had automated blind in vivo patching, so we looked at this and said, well, maybe we can add a two-photon targeting layer to that.
Our initial thoughts on this were quite naive: we thought, okay, we can just select a target and move down into the brain. It turns out there's a crucial technical problem, which we'll talk about in a minute, that you have to solve. But basically it seemed that robotic automation was at least a way to overcome some of this dissemination barrier. Okay, so we need robots, but not robots like this guy, he would be a terrible patcher. The first thing we set out to do was to reproduce the Ed Boyden paper. It's always good to give summer students an impossible project just to see how well they can do, so I gave this to Luca Annecchino as a summer project, as an intern in my lab, and while it wasn't quite finished that summer, we got a fair way through it. This is our version of the blind in vivo patcher. Basically you start with the pipette on the surface of the brain and then hand over to automatic control. You're monitoring the impedance and controlling the pressure; we built a nice automatic pressure controller which can do graded pressure control on fast time scales, so you can try to imitate the kind of behaviour a human patcher would produce. Then you get this readout display over here. There are different modes: in pipette insertion mode you've got a higher pressure while you're going down; then you go into cell hunting mode, monitoring impedance; and when you get close to a cell you try to seal, et cetera. This all works nicely. In terms of success rates, if you just want to do cell-attached recordings it's got a 74% success rate, so quite a high hit rate there. Going whole cell, that reduces, you lose a few when you go whole cell, so it goes down to 51%, which is basically similar to the earlier Kodandaramaiah work. Okay, now the problem is that for doing targeted recording with this, the chances of hitting a cell of a defined type are quite low, unless it happens to be a layer 2/3 pyramidal cell or so, in which case your odds are reasonable. So we need to put this two-photon targeting layer over the top, and this is our version of the two-photon targeted patching robot.
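To make the sequence of modes concrete, here is a minimal sketch of the kind of state sequence described above (pipette insertion under high positive pressure, cell hunting by watching pipette resistance, then seal formation); the `Rig` interface and all thresholds are hypothetical placeholders for illustration, not the actual LabVIEW implementation.

```python
import time

# --- Hypothetical hardware interface (stand-ins for real rig-control calls) ---
class Rig:
    """Toy stand-in for the pressure controller, amplifier test pulse and manipulator."""
    def __init__(self):
        self.depth_um = 0.0
        self.pressure_mbar = 0.0

    def set_pressure(self, mbar):
        self.pressure_mbar = mbar

    def read_resistance_mohm(self):
        # Replace with a real measurement from amplifier test pulses.
        # Toy simulation: resistance rises once the tip is deep enough,
        # and jumps to gigaseal range once suction is applied.
        base = 5.0
        return base * (3.0 if self.depth_um > 300 else 1.0) + (2000.0 if self.pressure_mbar < 0 else 0.0)

    def step_down(self, um):
        self.depth_um += um


def blind_autopatch(rig, baseline_r_mohm, target_depth_um, step_um=2.0):
    """Very simplified sketch of a blind auto-patcher mode sequence."""
    # 1. Pipette insertion: high positive pressure keeps the tip clean on the way down.
    rig.set_pressure(+200)
    while rig.depth_um < target_depth_um:
        rig.step_down(step_um)

    # 2. Cell hunting: lower positive pressure, advance slowly, and watch for a
    #    sustained jump in pipette resistance indicating a cell at the tip.
    rig.set_pressure(+25)
    while rig.read_resistance_mohm() < 1.3 * baseline_r_mohm:   # ~30% rise, illustrative
        rig.step_down(step_um)

    # 3. Seal formation: release pressure, then gentle suction until a gigaseal forms.
    rig.set_pressure(0)
    while rig.read_resistance_mohm() < 1000:                    # 1 GOhm
        rig.set_pressure(-15)
        time.sleep(0.01)
    rig.set_pressure(0)
    return "sealed"   # break-in to whole cell would follow from here

# print(blind_autopatch(Rig(), baseline_r_mohm=5.0, target_depth_um=400))
```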
Now the key problem, I should mention, is that as you move the pipette down through the brain, you're moving the brain. Let's say you've got your labelled cell, you've scanned around in 3D and clicked on it: okay, this is where it is. As you move the pipette through the brain, it starts to move the tissue, you get viscoelastic deformation, and the target starts to migrate, so you need to compensate for that. A lot of the problem here has involved dealing with that. The way this works is a similar kind of setup to before, but we've now got the two-photon layer over the top. It's fairly standard: you take the pipette down, you acquire the pipette tip, either manually or through an automatic method, and then it goes down as before with an automatic approach. Now you've acquired the target, okay, you've selected a particular target, and as we go through, and I'll show you a bit more about this in a minute, we're actually modifying the trajectory in order to keep the target at a defined position. Then we go on to cell engagement, seal formation, and so forth. This is just an early picture of what the system looked like; it's all built in LabVIEW, this is the LabVIEW control package that you use. Now the key thing, as I said, is dealing with this movement issue, and the way we've dealt with it is to build a little computer vision system. The two-photon microscope is in effect the eyes of the system: you've got your target location, and you can then select either the target or surrounding structures as well, which are then tracked. It's basically an optic-flow-type system, tracking the motion of the targets in x, y and in z. The z direction is somewhat special: there, what you're doing is keeping the target in focus, with an autofocus algorithm. So there's basic image processing to first find the targets, and then you want to keep the selected structures in focus; there's a contrast-based focus score which can be assigned automatically, and as the tissue moves, your targeted structure starts to go out of focus, even under two-photon, and you can use that to keep the target tracked. We do this iteratively: as you move the pipette down, the two-photon layer is taking stacks around the target location, and you're iteratively adjusting the trajectory of the pipette as you go. So here's an example: you can see it moving down, and these numbers here correspond to the points on this trace. This is the automatic approach period; you can see there are a few wiggles where there's been a bit of tissue movement, there's a more major jump over there where something shifted, and after that it goes down relatively directly towards the cell. Then you go on into the sealing process and break-in; the very last bit effectively operates like the blind auto-patcher, or alternatively you can hand it over to human control at that point if you want to do something fancy. So this works quite nicely.
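As an illustration of the z-tracking idea, here is a minimal sketch of a contrast-based focus score applied to a small z-stack around the target, used to re-estimate the target's depth. The exact score used in the actual system may differ, and `acquire_stack` is a hypothetical microscope call, not part of any real API.

```python
import numpy as np
from scipy import ndimage

def focus_score(image):
    """Contrast-based focus score: variance of the Laplacian.
    Sharper (in-focus) structure gives a larger score."""
    return ndimage.laplace(image.astype(float)).var()

def reestimate_target_z(stack, z_positions):
    """Given a small z-stack around the last known target depth,
    return the z position whose plane is best in focus."""
    scores = np.array([focus_score(plane) for plane in stack])
    return z_positions[int(np.argmax(scores))]

# Usage sketch (acquire_stack is a hypothetical placeholder):
# z_guess = ...                                   # last known target depth (um)
# zs = np.arange(z_guess - 10, z_guess + 11, 2)   # +/- 10 um around it, 2 um steps
# stack = acquire_stack(zs)                       # shape: (len(zs), H, W)
# z_new = reestimate_target_z(stack, zs)
# correction = z_new - z_guess                    # fed back into the pipette trajectory
```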
Here are just a few examples. This is a GAD67-positive interneuron we've recorded from. Another example: it works with AM dye loading as well, the old method from before we had GCaMP6, and we've used that to record from various cells, so you can do it systematically for different cell types. It works on astrocytes as well: before, I was showing you the pipette filled with red dye, Alexa 594, targeting a GFP-labelled structure; in this case we've filled the pipette with Alexa 488 and targeted sulforhodamine-labelled astrocytes. It's quite useful for doing cell-attached recordings too; as I said, the hit rate is actually higher for cell-attached, so you can use it for that as well. Okay, so then we're at the point where it works, and we want to know how well it works in comparison to a human operator. We weren't really, at this stage, trying to make it better than a human operator, just as good as one, and I think we managed to achieve that. Here are a number of dimensions on which you might consider the performance: one is the input resistance of the recording; the amplitude of the spike, because a bad recording always shows up there; the resting membrane potential, also very relevant; and of course the holding time, the amount of time that you can actually hold the cell for. The green triangles here are the robotic system recording from parvalbumin-containing interneurons; we've got a few examples of Purkinje cells as well, in the same transgenic mouse model with labelled Purkinje cells in the cerebellum, which we used as a cross-check; and then there's a comparison with a manual operator. When we sent this paper off for publication, the reviewers came back saying this is all very nice, but can you compare the performance with an experienced human two-photon targeted patcher? We didn't have one on hand, but we managed to train up an experienced in vivo patcher, and that was acceptable: this was someone we trained to do the two-photon targeted recordings, having had a lot of experience doing blind in vivo recordings. Essentially we found very similar performance across the board, with about the same number of recordings made. So this table shows a comparison of performance: this is the robot, this is doing it manually with our experienced human, and this is not two-photon targeted but just using the robotic system to do a blind recording. So 74% for blind goes down to 46% for getting a seal with the robot, or 56% doing it manually. When you actually go whole cell, it's more like a 20% success rate, so about one in five of these penetrations gives you a whole-cell recording with the robot, versus 51% if you just want a blind recording. This one here is the total that managed to get a successful seal, out of all of the penetrations: the denominator is the number of times you've put a pipette into the brain. Overall we're getting about one whole-cell recording per animal, which is roughly where it needs to be to be useful, with basically very similar quality of recordings. So the conclusion there is that we now have a system which performs very similarly to a human.
Now, as we started doing this, we became aware reasonably soon that Ed Boyden's group were also working on it. We exchanged notes at some point, went our separate ways, and then reconverged to have our papers accepted in the same issue of Neuron. Just a brief comment on the differences in the methods: they actually end up being very similar. The main difference is that in our approach we make as many movement corrections as possible further away from the cell, the idea being that if you're moving your pipette around, potentially slicing through tissue close to the cell, you're damaging the circuitry you're interested in; we then went obliquely down to the cell. They instead went to a point above the cell and then moved down onto it from above. The plus of our method is maybe less damage to the local circuitry; the downside is that I think we're slightly more likely to do what we call an impalement, where your pipette pushes through the cell a little bit. That's maybe something that can be optimised further: for some of the recordings that otherwise look nice, if you look at the stack, you can see the pipette has pushed a little too far. So it works quite nicely, and hopefully we'll improve the rollout of this technique. We've got all the material openly available, and we're going to be trying to open it up further and build systems for people. As our next step, we're using this in combination with calcium imaging to examine mice doing a memory task, in this project on mouse models of dementia that we're trying to characterise. For that I'm now going to jump back a bit. I've got 11 minutes, so I've got time to go through this next section, back to calcium imaging. Whole-cell patch clamp recording is great for looking at single cells. We are actually working on expanding it to multiple cells; people doing blind robotic patching have got that going with quad patching, et cetera, and I've got a student working on expanding two-photon targeted patching to multiple cells, which I think will add a lot to the questions we can ask with it. But that's one side of the story, and the other side, I think, is calcium imaging, and of course voltage sensing is very important as well. So we've worked a bit on image analysis tools for calcium imaging data, and I'm just going to select one here, which is region-of-interest segmentation. There's an algorithm called ABLE that we developed, activity-based level set segmentation, for pulling out regions of interest corresponding to individual cell bodies. Now, you might think this is a relatively simple problem, that you just compute a correlation map and draw a boundary and it'll work okay, and you can get something that way. I would argue it's not, and you'll see a slide in a couple of slides' time which shows why that really is the case: it does matter what algorithm you use in terms of the cells you pick up. Some cells you'll pick up with anything, but not all. This is a collaboration with Pier Luigi Dragotti in the electrical engineering department. Basically you've got an active contour for each region, so there's one of these regions for every cell, effectively.
You select how many there will be, and each of them evolves under the control of a differential equation that optimises a cost function. It still uses the correlation information, but it doesn't require the cells to have, say, classical calcium-transient-type correlation coefficients, as we'll see in a second. The level set function is essentially a higher-dimensional function, derived from this cost function, whose zero crossing is the cell boundary. So you've got this evolving to optimise the cost for each of these cells, and it works quite well. Here's an example taken from calcium imaging data. This is some AM loading data, from a slice recorded in my lab I think, or no, actually, sorry, this one is from the publicly available Svoboda dataset. So in this case it's AM loading data, and we've also got it working nicely with GCaMP6 data. You can see some of the example regions it's pulling out here. One of the nice things about it is that you don't just get regions with these classical sparse calcium transients; you get others which, when we look at them further, we're convinced correspond to things like higher-firing-rate neurons, et cetera, that give correlations based on their activity but don't show these classical sparse transients. And of course you also get glial cells and such as well. It works quite nicely with neuropil contamination removal algorithms, which are very important to apply: you can see an example here, where if you look at the region corresponding to the cell and the surrounding area, you've got some contamination that you should remove to get a cleaner signal. So how does it compare to other approaches? We compared it to the Suite2p and CNMF algorithms, and actually the interesting thing in this whole picture is that while there are many cells, like the ones in black, that are found by all algorithms, there are also many cells that are found only by one of the algorithms. Each of those individual colours there, yellow, magenta and blue, labelled across here, is found by only one of the algorithms, and some are found by just a couple of them. The algorithms do roughly similarly; there is no real ground truth for this, so we're comparing with the manual labelling from the NeuroFinder challenge dataset. That's not ground truth, it's really important to emphasise that, it's just somebody's labelling, and both this person and the three algorithms each find many cells that the others don't, and conversely. Suite2p, for instance, gets a slightly higher percentage of the manual regions among its estimates, but it also has a certain number of estimates not found in the manual labelling, effectively false positives: it's just finding more regions than the other algorithms.
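Since neuropil decontamination came up, here is a minimal sketch of the common scaled-subtraction approach (subtracting a scaled surrounding-annulus signal from the ROI signal). The masks, traces and scaling factor are illustrative assumptions, not the specific correction used in the pipeline described here.

```python
import numpy as np

def roi_trace(movie, mask):
    """Mean fluorescence time series over a boolean ROI mask.
    movie: (T, H, W) array; mask: (H, W) boolean array."""
    return movie[:, mask].mean(axis=1)

def neuropil_corrected_trace(movie, cell_mask, neuropil_mask, r=0.7):
    """Scaled subtraction of the surrounding neuropil signal.
    r ~ 0.7 is a commonly used contamination factor (illustrative)."""
    f_cell = roi_trace(movie, cell_mask)
    f_np = roi_trace(movie, neuropil_mask)
    return f_cell - r * f_np

def dff(trace, baseline_percentile=20):
    """Simple dF/F using a low percentile of the trace as the baseline F0."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

# Usage sketch:
# movie = ...                      # (T, H, W) registered calcium imaging movie
# cell_mask, neuropil_mask = ...   # boolean masks from segmentation (e.g. ABLE output)
# trace = dff(neuropil_corrected_trace(movie, cell_mask, neuropil_mask))
```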
Okay, so that works quite nicely, and it's one of the things we've integrated into a pipeline we've developed. We're now applying this to mice performing a memory task. Here's just an example: this is a Neurotar platform, basically a floating chamber. If you can see here, there's this bed with a lot of holes in it, with air coming through, and this hockey-puck-type thing is the chamber itself, which is floating, so the head-fixed animal can move around and explore the chamber, and you can see that here. Doing that, you get similar trajectories in head-fixed mice to freely moving mice. Here's one that's not truly freely moving, it was tethered, some data from David Dupret, and this is our head-fixed version, with similar speed histograms. We can implement various environments with this. This one here is a circular version of a linear track: the animal can come up one end and then move back around, and we track the movement. This version is the one where the animal can just go round and round, and this version is the one where the animal can go round and back, et cetera, and we can do Y-mazes and things as well. We're now working on mapping place fields in this environment. We're doing this by labelling cells in hippocampal CA1 with GCaMP6m-mRuby, developed by Tobias Rose, which is working quite nicely for us. The nice thing about this is we can use the red channel to seed the ABLE algorithm: you can seed it with a correlation map, effectively some putative likely neurons, but what we're doing is seeding it with the red channel, which labels the cell bodies, so we can make it at least more immune to the problem of selecting only the active cells. We basically want to see all of the cells: if a cell is only firing one spike every two minutes or so, we might not otherwise pick it up, but we have more of a chance if we're seeding our regions with the red channel. We're doing this together with methoxy-X04 labelling of amyloid plaques: we do an IP injection of methoxy-X04, image that down at 720 nanometres to take a reference image, then move back up to 940 to do our calcium imaging, and then there's an overlay. We're able to image over multiple days, which is one of the nice things: you can go back to the same cells over multiple days, and here are a few traces showing images of the same region across days. We can revisit the same cells and look at how their calcium transients change over time. Now, I just added this as a point for discussion later, bringing up the point made earlier about the NWB format: we've started adopting the NWB format as well, so far just for the time series. We decided it was too hard initially to put all the imaging data in from scratch; maybe the long-term goal is that at each stage of acquisition we'd have the data available in a standardized format, but so far what we're doing is keeping those in TIFF stacks, and then as soon as we've got the time series data, that goes into NWB format, and we're getting all the tools working with that first.
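As an illustration of this "time series only" step, here is a minimal sketch of writing extracted fluorescence traces to an NWB file using the current pynwb API. The file name, metadata, example data, and the use of a plain TimeSeries (rather than the full ophys structures) are assumptions for illustration, and may differ from the NWB version and conventions used in the pipeline described here.

```python
from datetime import datetime
from dateutil.tz import tzlocal
import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

# Illustrative data: dF/F traces for 50 ROIs sampled at 30 Hz for 5 minutes.
dff_traces = np.random.rand(9000, 50)

nwbfile = NWBFile(
    session_description="head-fixed memory task, calcium imaging (example)",
    identifier="example-session-001",
    session_start_time=datetime.now(tzlocal()),
)

# Store the extracted traces as a simple TimeSeries; a fuller conversion would
# use the ophys extensions (ImagingPlane, PlaneSegmentation, DfOverF).
traces = TimeSeries(
    name="roi_dff",
    data=dff_traces,
    unit="a.u.",
    rate=30.0,
    starting_time=0.0,
    description="Neuropil-corrected dF/F per ROI (example data)",
)
nwbfile.add_acquisition(traces)

with NWBHDF5IO("example-session-001.nwb", "w") as io:
    io.write(nwbfile)
```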
Then we'll move back. So we had an opportunity, in setting up this new analysis pipeline, to in a sense re-standardise our formats, so we just decided we're going with NWB. We've been developing this in-house analysis pipeline, NeuroC, which hopefully we'll be ready to publish early next year, and which incorporates these different tools: it brings together movement correction, the ABLE algorithm, some of the calcium transient detection algorithms we've developed, et cetera, and it's based around the NWB platform. Then, at the end of the experimental series, we take the animal, kill it, and do whole-brain two-photon tomography with a TissueCyte 1000 system. It's again a two-photon microscope: it takes a hundred-micron stack, slices it off, takes the next hundred-micron stack, and so on, so we can then rebuild a 3D image of the whole brain of the mouse that we've previously done our experiments in. The example you'll see in a second is a 5xFAD mouse, a mouse model of early-onset Alzheimer's disease, labelled with methoxy-X04. What we're looking at is largely the methoxy-X04 label at the moment, and there's a grey autofluorescence channel as well. This is one of the animals that we didn't experiment on, but it's a fairly far-gone Alzheimer's model, let's just say: there are amyloid plaques all over the place. One of the things I think it's important to work on now is label-free imaging methods, for trying to relate different types of information; for instance, a label-free method for this kind of imaging would be fantastic, which is something we're thinking about deeply. So I think that gives you a feeling for the pipeline of data collection and processing that we're working on through the lab at the moment. I've got minus 10 seconds left, so I'll finish by acknowledging the various people involved in the work. The patching robot work was mostly done by Luca Annecchino, with a bunch of collaborators as well; Paul Chadderton, another PI at Imperial, now just moving to Bristol, is an in vivo patching expert and provided a lot of the patching advice. A lot of the data analysis tool development work is done in collaboration with a signal processing expert, Pier Luigi Dragotti, with a joint student, Steph Reynolds. The in vivo imaging work is led by a postdoc in my lab, Mary Ann Go, with a bunch of other collaborators as well. I also have to thank Troy Margrie and Molly Strom at the Sainsbury Wellcome Centre, who saved our bacon on that Neuron paper with transgenic mice that we needed at the last minute to do some more experiments for reviewers. And finally, thanks to our funders, particularly the Michael Uren Foundation, and to you for your attention. Thank you, Simon, that was very interesting. So, as you know, basically our entire understanding of the spike-to-calcium transformation is built on a handful of cells, I mean like five handfuls of cells, and it seems like this method would be ideal to extend that, right? It would be easy to switch from using the two-photon imaging for passive morphology to using it for active GCaMP or so on, and then just collect a heck of a lot of ground truth data to understand that. But I noticed that both your and Boyden's papers actually don't do that, and I was wondering, is there some fundamental difficulty
of doing that, or is it just that it wasn't the first thing? I mean, with GCaMP6, for instance, one of the things is that the baseline fluorescence is reasonably low, so of course you can only really see the cells when they're fluorescing, unless you have something else in them as well. I think in terms of using it for targeting with our method, with the computer vision system, you kind of want to have a stable visual signature, if you like, so if it's changing because of activity levels it might get in the way. I agree, but I think you could signal-process that away, right? I mean, from something silly like a max projection to something smarter. Yeah, and also, of course, you can use it in combination with other labels as well, like the mRuby one, and use that for targeting, and then use the GCaMP6 to study what's going on functionally. Yeah, absolutely, I mean I think patching with simultaneous calcium imaging probably tells you a lot. Yeah, that's a hugely valuable dataset: half of the world is using GCaMP, yet there are 30 neurons that are supposed to inform my entire understanding of how spikes relate to GCaMP. Just to follow up, I completely agree with the previous comment. I would just say that the patch pipette may not be the best for getting that ground truth, because it's fairly large, right? When you go in there with the patch pipette with positive pressure, it's going to spew, it's going to push the tissue. So if I were to do it specifically for that purpose, I would use, for example, a juxtacellular pipette, which is much smaller. That could be possible without difficulty, right? One of the things that we've thought about doing, and proposed in various projects that didn't get funded, et cetera, was to incorporate juxtacellular recording with this. In fact, I spent some time recently at the MRC Brain Network Dynamics Unit in Oxford, where there are a lot of people who are experts in that technique, and it's certainly something that we could incorporate. I wondered how quickly the z correction works as you come down, and is that limiting how quickly you can get the pipette down to the cell? And, as an add-on to that, what do you need in addition to a two-photon microscope and basic patch equipment in order to implement it? So, second question first: not really anything, I mean, there are a few little cheap devices, an electromagnetic pressure controller, et cetera, but nothing substantial. And on the first one, that's not really the limiting factor in a sense. The typical time to obtain a seal is six minutes; yes, you could probably cut it, but in the context of your overall throughput I don't think that's actually going to be the issue. Just one additional question: what about the dura, what do you do about that? Yeah, we were reflecting a bit of dura; we're not going through the dura for those experiments. A question on the rate-limiting step for the multi-patch: do you find that it is almost just a direct probability multiplication? So for the multi-patch, yeah, I mean, I don't know yet, because the student is basically just getting going, but we've got a couple of plans as to how to do it, a sort of iterative version versus compensating for the resultant
vector, if you like, and I guess we're just going to plug away and see which works best. Obviously, if we can just do the resultant it should be quicker, but whether or not that time matters... in the context of a multi-patch experiment you're probably going to get one experiment per animal, right, so you'll be very happy with that. So again, I think more important than the time saved is actually how well it works. A question, maybe for all the speakers. Yota, Samantha, do you want to go ahead? Do you see the patch robot getting out of the few labs that have built them and becoming more widespread, like every lab that currently does patch clamp getting a patch robot, and what's limiting that? And is there a niche specifically for in vivo patch clamp robots, where it makes a lot of sense to have an in vivo patch robot, but where, for high-quality in vitro recordings, a robot might not be sufficient? Sure. I mean, for a start you've got to have a two-photon microscope, so that's a dissemination-limiting factor; that's still an issue, though obviously it'll become less of one over time as laser sources get better. Basically, we had plans to try to commercialise it; we're kind of in no man's land on that, so we'll see if it goes anywhere. We've made it all publicly available, but I think to actually do something, making it publicly available isn't enough: you have to have the resources to make it easy for people to use. So what we'd thought of is setting something up with a model where we basically send someone to your lab for, you know, three weeks, to show you how to do things with it and set it all up. I think that's what you would probably need to really disseminate it, so you need a cost model which deals with that. I think a pure open-source approach won't really help, because it will get it into a few more labs, but not many. So, you know, I guess we'll have to see where we can go with that.