OK, cool. All right. Hey, guys. I'm Trevor, and I'm doing my talk on Python for quantifying cardiovascular function. I just finished a master's project looking at the relationship between brain blood flow regulation and systemic blood pressure regulation. Our whole thing is looking at the different systems that regulate blood flow and blood pressure throughout the body and into different organs. This is important because all the tissue in your body needs constant delivery of oxygen and nutrients; without it, your vital organs, your brain, heart, liver, everything, will fail. As for my background, I don't come from computer science originally, but studying the human body got me really interested in what tools you can use to do better analysis. Normally, when you go to the doctor, they'll measure your blood pressure. That's a static, one-off measurement. They'll say if your blood pressure is 120 over 80, you're healthy; if it's 160 over 90, you're not doing so hot. The problem with this simplistic measure is that it doesn't tell you anything about the moment-to-moment variation or how your body adapts to it. If you look at this bottom graph, this is a real blood pressure trace from our lab. Granted, for this one we injected drugs to make the blood pressure drop and rise pretty quickly. But if you were to measure this person's blood pressure at 1,700 seconds versus 1,740 seconds, that's quite a discrepancy. I went and got my blood pressure checked at a physical before I started my last job. I was late, I had already slammed three cups of coffee that morning, and I had run there from the train. My blood pressure was through the roof and the nurse was worried, but it was totally different from what's in this plot. This plot is a continuous blood pressure waveform.
I'll show you some close-ups of it later. But we're so used to thinking of blood pressure as a high and a low, right? Well, this is the continuous pulsatile waveform, recorded from a device on the finger that samples at 1,000 hertz. So you're getting 1,000 data points per second, as opposed to just two numbers. It's a line plot, yeah, right. The green, red, and light blue traces are all calculations I've done in Python for the systolic, mean, and diastolic pressures. Just looking at blood pressure, that's a static measurement of the brachial artery in the arm. A step above that, you've got a system called the baroreflex, which regulates systemic blood pressure. And in the local arterial beds surrounding vital organs, such as the brain, you've got things like cerebral autoregulation, which modulates arterial diameter to control flow. So someone's blood flow to the brain isn't necessarily representative of what their blood pressure is. One of the big things I focused on in our lab was the baroreflex. It works as a negative feedback loop; you can see it in this picture here. In the aortic arch by the heart and the carotid arteries in your neck, there are sensors that detect changes in arterial blood pressure. If blood pressure increases or decreases, these nerves send signals to the brain to change heart rate and arterial diameter. And one of the big things we were looking at is assessing how quickly a person's body can adapt to those changes. So, blood pressure is the product of cardiac output and total peripheral resistance. Cardiac output is heart rate times the amount of blood pumped out with each heartbeat, and total peripheral resistance has to do with arterial diameter.
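As a back-of-the-envelope sketch of that relationship, here is the arithmetic with typical resting values I've picked purely for illustration (these numbers are not from the talk):

```python
# Mean arterial pressure (MAP) as the product of cardiac output (CO)
# and total peripheral resistance (TPR). CO itself is heart rate times
# stroke volume. All values below are illustrative resting figures.

heart_rate = 70          # beats per minute
stroke_volume = 0.07     # liters ejected per beat (~70 mL)

cardiac_output = heart_rate * stroke_volume    # liters per minute
tpr = 18.4                                     # mmHg per (L/min), chosen to give a typical MAP

mean_arterial_pressure = cardiac_output * tpr  # mmHg
print(f"CO = {cardiac_output:.1f} L/min, MAP = {mean_arterial_pressure:.1f} mmHg")
```

The point of the feedback loop is that the body can move either factor: raise the heart rate (cardiac output) or constrict the arteries (resistance) to push the pressure back toward its set point.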
So if blood pressure goes up, the arteries can dilate, and that helps bring pressure down. The idea is that the quicker your body can do that, the healthier you are, in theory. In our lab, we quantify this by making a number of measurements at the same time. We had transcranial Doppler ultrasound measuring brain blood flow on the subject, an ECG trace to get the heart rate, and the finger cuff I was talking about for blood pressure. In addition to that, we had a nasal line measuring end-tidal CO2 and oxygen, and we inserted microelectrodes into the peroneal nerve at the knee. That's basically a needle sticking a centimeter into the leg, into a nerve, and it's like trying to touch a pencil tip to a pencil tip, so it takes us about an hour and a half to get it in the right spot. Then we start our test. So there's a bit of a disconnect between what we can do in the lab and what you can do just walking around day to day. But it's pretty amazing to me what you can read from these signals. We do a number of tests. First we just look at the spontaneous baseline fluctuations in everything and how they relate to one another. Then there are the drug injections I talked about: a cannula in the arm through which we inject nitroprusside and phenylephrine to get a quick drop and rise in blood pressure. And then inflating thigh cuffs that basically cut off circulation to the legs, deflating them, and watching what happens as blood flows back into the legs. It sounds a bit barbaric saying it out loud, but that's what I did for two years. So here's a close-up of the measurements we'd make. It's all recorded using a program called LabChart; I'm sure there are other ways to do it as well. This is all time-aligned together.
So we've got the ECG trace, CO2 and oxygen, blood pressure, two brain blood flow channels, and the nerve activity, which corresponds to the constriction of arteries. When I started working in this lab, I was involved with data analysis, but I didn't have any programming skills. It got delegated to me to make the analysis program for the baroreflex, and I kind of went off the deep end on LabVIEW. Just a show of hands, how many of you are familiar with LabVIEW? A handful. I didn't realize it, but it's not as common as I thought it would be; in physiology, though, a lot of people do use it. It was pretty easy to dive into, or at least not be intimidated by, because in the block diagram, instead of writing code, you're basically wiring up a circuit board. While loops and for loops are funny-looking boxes with lines coming in and out of them. So to start out, it's pretty easy. But it's also pretty easy to turn it into a big rainbow spaghetti nightmare that takes a while to debug. On top of that, after leaving my master's, a license for this is three grand, and if you want some of the extra modules, it's six grand. If I want to give another lab a copy of my software to use, they also have to have a license. It's much the same for MATLAB, and those two programs are very common in biomedical research. So I started reading around and got interested in Python. I read Python for Data Analysis, Wes McKinney's book, and became really interested. The big thing about switching to Python was that it's open source, which is obviously awesome. And I was surprised by just how many different libraries there are to use and just how powerful it can be. On top of that, there's a huge support community.
I've been going to New York Python meetups, and just looking online, if you've got a problem when you're starting out, someone else has had the same problem at some point. Python's popularity is also growing. This is off of GitHub; they quantify popularity based on search queries. My thought is that more people using it means more libraries coming out and more support, so it's a good spot to be in, definitely. So I'll switch over to my notebook here. One of the big things for me was pandas, and specifically the DataFrame. Because I'm working with so many channels of data, it's easy to take all of that, quickly relate different channels to one another, and easily aggregate multiple channels at the same time. So we'll just run this. This first part is importing a text file as a DataFrame. This is the text file that was output from that LabChart program; it's just three channels. I've only kept the time in seconds, the ECG, and the blood pressure trace. You can easily import that into a DataFrame and tell it what columns to use, what to label the index, and what to label each column. The next thing I wanted to do was quantify when each heartbeat happens. To do that, you basically look for local maxima in the ECG. Each of those peaks is called the QRS complex; that's when the heart is contracting and sending blood out to the body. I wrestled with it for a little bit because the signal has a lot of high-frequency noise. What I was doing was saying: for points that are above a specific threshold and greater than both the previous point and the next point, that's where the QRS complex is. But as you can see up here, you get multiple points just because of the noise. So I ended up using the Butterworth filter in SciPy.
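A minimal sketch of that import step, assuming a tab-delimited LabChart-style export with time, ECG, and blood pressure columns. The sample rows below are fabricated, since the talk doesn't show the actual file:

```python
import io
import pandas as pd

# Fake stand-in for the LabChart text export: tab-separated columns of
# time (s), ECG (mV), and blood pressure (mmHg), no header row.
raw = io.StringIO(
    "0.000\t0.12\t82.4\n"
    "0.001\t0.15\t82.6\n"
    "0.002\t0.90\t82.9\n"
)

df = pd.read_csv(
    raw,
    sep="\t",
    header=None,
    names=["time_s", "ecg", "bp"],  # label each channel column
    index_col="time_s",             # use the time column as the index
)
print(df.head())
```

With the time column as the index, every channel stays aligned on the same 1,000 Hz time base, which is what makes the later per-beat operations easy.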
And to be honest, I basically copy-pasted the example from the SciPy docs. It took like two minutes to set up, and all the high-frequency noise was gone. That was a really good day. This part here is what I was talking about: it's just a low-pass filter. I then added it to the DataFrame: give it a column name and set it equal to the filtered ECG, which is the output of our Butterworth low-pass filter. From there, we're able to quantify when each heartbeat happens a lot more easily. I made this threshold here, because the ECG baseline is going to be a little different for each subject, so I can't just hard-code a specific number. I used NumPy's max and min to calculate the threshold as a percentage of the signal's height, so for each file I use, I've got a new, automatically detected threshold. Then, like I said earlier, the basic principle for finding the QRS complex is finding where a data point is greater than both the previous and next points, and appending those to their own list. Each time there's a new local maximum, it becomes the first point of a new heartbeat, and all following points are labeled as the same heartbeat until you get to the next one, and so forth. This makes it really easy to ask: what is the maximum blood pressure for heartbeat zero, or the minimum, or the mean? That's how you calculate per-beat values. Heart rate is calculated as 60 divided by the RR interval, which is the time between successive R peaks. You can see that right here; we're basically just calculating the time between each heartbeat. As your blood pressure drops, the baroreflex sends a signal to increase heart rate, so the time between heartbeats decreases. So we've got our heart rate calculation right here.
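Putting those steps together, here's a rough sketch of the pipeline on a synthetic ECG-like signal, since the lab's real trace isn't available here. The filter order, cutoff frequency, threshold fraction, and refractory window are my guesses, not the speaker's actual settings:

```python
import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt

fs = 1000                                # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
ecg = np.zeros_like(t)
for i in range(500, len(ecg), 800):      # fake QRS pulses every 0.8 s
    ecg[i - 2:i + 3] = 1.0
ecg += 0.02 * np.random.default_rng(0).standard_normal(t.size)

# Low-pass Butterworth filter to strip the high-frequency noise,
# applied forward and backward (filtfilt) so the peaks don't shift
b, a = butter(N=4, Wn=40, btype="low", fs=fs)
df = pd.DataFrame({"ecg": ecg}, index=t)
df["ecg_filt"] = filtfilt(b, a, df["ecg"].to_numpy())

# Automatic threshold: a fraction of the filtered signal's height,
# so each subject's file gets its own threshold
x = df["ecg_filt"].to_numpy()
thresh = x.min() + 0.6 * (x.max() - x.min())

# A QRS peak is a point above threshold that is greater than both
# neighbours; a short refractory period drops duplicate maxima
peaks = []
for i in range(1, len(x) - 1):
    if x[i] > thresh and x[i] > x[i - 1] and x[i] > x[i + 1]:
        if not peaks or i - peaks[-1] > 200:
            peaks.append(i)

# RR interval (s) between beats, and heart rate = 60 / RR (bpm)
rr = np.diff(df.index[peaks])
heart_rate = 60 / rr
```

With the fake pulses spaced 0.8 s apart, the detected heart rate comes out around 75 bpm. Labeling every sample with its heartbeat number (e.g. via `pd.cut` on the peak indices) then makes per-beat max/mean/min queries one-liners.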
And then from that, you can calculate the local max, mean, and min, which are the systolic, mean, and diastolic blood pressures for each heartbeat. From this, I've made an output DataFrame. Again, I find it really useful, really easy, to build a new DataFrame from scratch with all of my data: for each heartbeat, what time it happened, the RR interval, the heart rate, and the systolic, mean, and diastolic blood pressures. But we're not quite done yet. To quantify how sensitive the system is, we need to bucket by systolic blood pressure, because what we're looking at is the relationship between the time between each heartbeat and the systolic blood pressure. I don't know if you saw the pandas talk a little earlier, but there is a groupby function. I wrestled with that for a bit and ended up writing my own version that makes 3 mmHg bins; it just made more sense to me. So I've got bins corresponding to each blood pressure range. And from that, that looks good. Sorry. OK, so I've got the bin values; we'll graph that in just a moment. Also, with pandas it's really easy to quickly calculate something across all the channels. Here I've calculated the average heart rate, how long the total recording was, the average RR interval, and the average blood pressure values, plus the number of bins; there are seven bins. And then from this, calculate a linear regression. You can see here, it's all crunched together, but in close-up you've got your ECG and blood pressure, and along the way you've got your local maxima, and then you can calculate what the relationship between the two is. I don't know why that changed; just a moment ago this fit. Sorry about that, let me see if I can fix it. Apologies, bear with me. OK, there we go.
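The binning-and-regression step might look roughly like this. The beat-by-beat values below are fabricated, and I'm using pandas' `cut`/`groupby` where the talk describes a hand-rolled binning function:

```python
import numpy as np
import pandas as pd
from scipy.stats import linregress

# Fabricated per-beat table: systolic pressure (mmHg) and RR interval (s),
# with a built-in positive relationship (higher pressure -> longer RR)
rng = np.random.default_rng(1)
systolic = rng.uniform(100, 121, size=200)
rr = 0.004 * systolic + 0.35 + rng.normal(0, 0.01, size=200)
beats = pd.DataFrame({"systolic": systolic, "rr": rr})

# Bucket beats into 3 mmHg systolic bins
edges = np.arange(99, 124, 3)
beats["bin"] = pd.cut(beats["systolic"], bins=edges)

# Average RR interval within each pressure bin
binned = beats.groupby("bin", observed=True)["rr"].mean()

# Baroreflex sensitivity: slope of mean RR interval vs bin-centre pressure
centres = np.array([iv.mid for iv in binned.index])
fit = linregress(centres, binned.to_numpy())
print(f"slope = {fit.slope:.4f} s/mmHg")
```

The slope recovered here should land near the 0.004 s/mmHg baked into the fake data; in the real analysis that slope is the sensitivity number the whole pipeline exists to produce.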
I don't know what just happened, but restarting the kernel fixed it. So you can see here, we've got systolic blood pressure versus the time between each heartbeat. As systolic blood pressure drops, on average the time between heartbeats decreases. This is basically the system at work: as blood pressure drops, it tells the heart to increase heart rate to bring the blood pressure back up. In the lab, we're doing all of this just to get this slope value. The greater the slope, the greater the increase in heart rate per drop in blood pressure. That sensitivity tells you how quickly a person's heart can react and help buffer changes in blood pressure. From here, I had high hopes for this talk; I really wanted to get to the sympathetic arterial portion. It's a little more work. I've done it in LabVIEW before, and it's basically the same process: calculating the average integrated area of each nerve burst corresponding to the heartbeats in each blood pressure bin. Time-wise, it's just going to take me a little longer. There are also a number of ways to measure cerebral autoregulation, which is all about relationships between the blood pressure signal and the brain blood flow velocities. Using tools like pandas, it's quick and easy to aggregate the data, and it's easy to manipulate different channels based on another channel in the same table. Yeah, it's an awesome thing.