Welcome. My name is Ryan Fox, and the title of this talk is To Infinity and Beyond. We're going to talk about computer vision and image processing, how they're used in astronomy, and how they're used in other applications. This is where you can find me on the internet. I work with companies in astronomy, aerospace, and drones doing software development, so I use image processing and computer vision a lot. I'm going to explain what those terms mean and how you can use them in a little bit. Not everything in this talk is going to be astronomy related, but a lot of earthbound applications use the same techniques. The goal of this talk is to give you an idea of what's possible, what exists, what you can do with it, and how you can use it in applications you're developing. This talk has three main parts: image processing, where we'll work our way up from the lowest-level building blocks to more complex tasks; computer vision, more of the same; and then how you can use all of this beyond the end of this talk.

First we have to take a little detour into how computers actually see. Raster images are often stored as a two-dimensional matrix. You get eight bits per pixel, with zero being pure black and 255 being pure white. The darker your pixel, the lower your value; it's just a direct mapping. That's a single-channel, grayscale image. You're not restricted to one channel, though. It's common to get three channels: red, green, and blue. This is the format of most of the images you encounter on a daily basis: a separate channel each for red, green, and blue. You can also change the color space, so during processing you might operate in different color spaces. You're also not restricted to just three channels. There's a format commonly used in astronomy called FITS where you can store a raster image with basically as many channels as you want, because it's fairly common to take an image of the same object at many different wavelengths. The important thing to remember is that any operation you want to apply to an image, you can apply to one channel, three channels, or all channels.

Now, to explain what I mean by image processing versus computer vision: if you ask ten different people, you're probably going to get ten different answers. There's no hard definition. But for the purposes of this talk, when I say image processing, I mean something operating at a low level, whereas with computer vision, you want to extract more intelligence about what's in that image or photo. Image processing algorithms usually work on a pixel-by-pixel basis; in computer vision, you're more concerned with an image holistically. A lot of the image processing techniques we're going to talk about you won't use directly, but as components in a larger pipeline. Usually the goal of computer vision is to get some knowledge about an image or photo and be able to do something with it, whether that's labeling it or using it as part of a larger set. A lot of the time, for an image processing algorithm, the input is an image, obviously, but the output is another image that's been transformed in some way. With computer vision, your input is also an image, but you might not get an image out; you might get information about that image on a broad level: what the image contains, what it's a picture of, more or less.
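To make that pixel representation concrete, here's a minimal sketch. The array values are made up for illustration; the only facts it relies on are the ones above: 8-bit values from 0 (black) to 255 (white), and color as stacked channels (OpenCV happens to order them blue, green, red).

```python
import numpy as np

# A tiny 3x3 grayscale image: one 2D matrix, one byte per pixel.
# 0 is pure black, 255 is pure white.
img = np.array([[  0, 128, 255],
                [ 64, 192,  32],
                [255,   0, 100]], dtype=np.uint8)

print(img.shape)  # (3, 3) -- rows, columns

# A color image is the same idea with three channels stacked.
# OpenCV orders the channels B, G, R rather than R, G, B.
color = np.zeros((3, 3, 3), dtype=np.uint8)
color[:, :, 2] = 255   # fill the red channel: a pure red image
print(color.shape)     # (3, 3, 3) -- rows, columns, channels
```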
Just as an example, here's the same picture fed through two algorithms. On the left it's been processed by an edge detection algorithm, and on the right by an object detection algorithm. On the left we don't really care what's in the image; we just want to find the boundaries of the objects in it and where they are. And on the right we don't particularly care what color it is, which way it's facing, or where the actual lines in it are, outside of the general region of the image where it sits. It gets labeled as a car, and we're about 64% sure that's the case. I think it did all right on that one.

If you're going to be using Python for computer vision, use OpenCV. It's a fantastic library; it lives at OpenCV.org. It's written in C++ and there are Python bindings, but it's kind of a pain to build and import the objects yourself. There's a package on the Python Package Index called opencv-python. If you're not interested in using it from C or C++ applications, you can just pip install this and it gets you pre-compiled binaries: an easy, one-step installation. Once you've got it, just import it as usual. This is how you read an image. Once you have it imported, under the hood it's stored as a NumPy array, so it plays fairly nicely with NumPy, SciPy, and Pillow, which is another imaging library for Python; if you ever use that one, it's also pretty nice. And once you're done with whatever you're doing, it's also easy to save. OpenCV gives you basically any image operation you want on top of the ones we're going to talk about.

So that gets us to the image processing portion of the talk. We're going to run down this set of features. We're not going to go too heavy on the math; the takeaway you want is an idea of what each technique is and how it works on a broad level.

First up is convolution. This is a low-level building block for a lot of image processing applications. Don't worry about all this; basically, what you want to remember is that this is how a filter is often applied to an image. You have your large image and a small matrix, maybe three-by-three or five-by-five, that you use to apply a filter, stepping across the image pixel by pixel. That small matrix is called the kernel. That's about all you need to remember.

Here's roughly how it works. On the left, we have our image. In the middle, we have our kernel. You set the kernel down in the top left. You go element-wise and multiply the weights in the kernel by your pixel values, and at the end you add them all up. Then you divide by the sum of the values in the kernel, so your image doesn't get brighter as you multiply and add all these numbers up. The result gets stored in the output image at the position under the middle of the kernel. Once that step is done, you slide the kernel over one pixel and repeat. The particular values you pick for your kernel determine what the operation does. This particular kernel has the effect of blurring an image. This is a nebula; on the left is the final product that was released by NASA, and applying that kernel gets you this.

So that's nice. We can blur images, but why would you ever want to do that? It seems like you're throwing away information. Well, one useful property of blurring is that it reduces noise. You can see there are a lot of stars in the background on the left, not so much on the right. That's going to be useful for feature extraction.
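Here's a minimal sketch of that kernel convolution in OpenCV. The filename is a placeholder, and the kernel is the standard 5x5 averaging (blur) kernel, already divided by its sum as described above:

```python
import cv2
import numpy as np

# Load an image (placeholder filename).
img = cv2.imread("nebula.png")

# A 5x5 averaging kernel: every weight is 1, divided by the sum (25)
# so the result doesn't get brighter overall.
kernel = np.ones((5, 5), np.float32) / 25

# Step the kernel across the image; -1 means "keep the input's pixel
# depth." The filter is applied to all channels.
blurred = cv2.filter2D(img, -1, kernel)

cv2.imwrite("nebula_blurred.png", blurred)
```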
This is basically detecting shapes in an image, whether that's lines, circles, corners, or edges. OpenCV has a lot of built-in functions for this.

First up is edge detection. The one OpenCV has is called the Canny edge detector. First, it blurs the image. Second, it calculates the gradient, which is just the change in intensity, direction and magnitude, in a particular region. Once that's done, you look at each small region of the image: if it's a place with a low gradient, you're probably in a sea of flat color, not really an edge. If you're in a place with a high gradient, changing from one color to another, or light to dark, that's something a human would probably describe as an edge. Here's a picture of the Hubble telescope. You can see this one did pretty well: on the solar panels we got fairly good lines, and even the banding around the bottom of the telescope. On the front of the telescope, though, it's kind of a mess; but to be fair, if you gave a human that sub-picture and asked them to find the edges, there's not much to go on.

We can also detect corners. This is also fairly noisy. All of these operations have parameters you can tune, and what you choose depends on your application: your tolerance for noise, your false positive rate, your false negative rate. You really have to see what works with your dataset. We can see here it still found pretty much anything we'd describe as a corner on the solar panels, the radio dishes, and along the telescope. Notice it's not on the sunshade: on that little notch on the top, we have a smooth curve, and it thankfully passed that over.

Moving on, we have the Hough transform. This is how you can detect lines or circles in an image. Same thing: we look at Hubble, we found our solar panel lines pretty well, and again, a lot of noise. You can see that depending on your image, it's not always going to give you a great result; you kind of have to play with it. Finding lines in an image is often useful for real-world applications; this is sort of the first technique here with a direct real-world mapping. If you are programming a self-driving car and you want to stay in your lane, you might put a camera facing forward out the front of the car, and ideally you want to see a line on either side, converging toward the middle. If you've got one directly below you, or maybe two off to the same side, you can rectify that situation.

Next up is feature descriptors. If you give a picture to a human and ask them to describe it, they can use some words, but that's not very useful to a computer. A feature descriptor is a way to quantitatively fingerprint a region of interest so you can compare it to other regions. OpenCV comes with a bunch of different descriptors that all have different properties: different processing speeds, and different sensitivities to rotation, scale changes, color, skewing, and warping. These two, SIFT and SURF, I'm going to mention mostly for historical purposes. They were some of the original really accurate ones that opened the door and showed what could be done. Unfortunately, they're patent-encumbered. They are still in OpenCV, so you can try them out if you want, but it might be a little risky. Luckily, OpenCV has a bunch of other ones, all with their own pros and cons, and all unencumbered.
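Before moving on, here's roughly what the edge detection and line detection from a moment ago look like in code. A minimal sketch: the filename and all the thresholds are placeholders you'd tune per dataset, exactly as discussed above.

```python
import cv2
import numpy as np

img = cv2.imread("hubble.png")  # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Canny edge detection: the two thresholds control which gradient
# magnitudes count as strong vs. weak edges.
edges = cv2.Canny(gray, 100, 200)

# Probabilistic Hough transform: find line segments in the edge map.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                        minLineLength=30, maxLineGap=10)

# Draw whatever line segments were found onto the original image.
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("hubble_lines.png", img)
```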
We're going to talk about ORB, which is a two-step process. ORB stands for Oriented FAST and Rotated BRIEF; FAST and BRIEF, of course, stand for something else. FAST is a feature point detector, so it locates edges, corners, and places that might be described well in an image, and BRIEF actually describes them. "Rotated" alludes to the fact that it's supposed to handle rotated images fairly well. So if we have a picture of Tranquility Base, and you asked a human to find some points where you could maybe line up another image, what do you have to go on? Maybe the antenna on top of the space suit, the equipment on the ground, the flag, the lunar lander. It's kind of hard to see at this resolution, but with the feature point detector applied, it did find the top of the antenna, some points on the space suit, and three corners of the flag. So it does all right.

Now that we can describe regions of an image, we're going to talk about image segmentation. This is basically splitting an image into multiple sections, like regions of interest. The simplest way to segment an image is thresholding: you just pick a threshold value, and all pixels above it you take to be one, and everything below to be zero. You can threshold on pretty much anything you want; intensity and color are common. Say we wanted to segment this image to get our astronaut out of the sky. The background behind him is pretty dark; he's really bright, a lot of white pixels in there. The Earth is somewhere in the middle, and there are some clouds, so we might have to deal with those separately. You play around with your threshold to see what you can get; the result is just a binary mask now.

Once you have your image segmented, it's often useful to use erosion and dilation. These basically make the white portion of your mask grow or shrink. If you dilate, it fills in holes, like a lot of these shadows in the space suit. Erosion does the opposite: a lot of the tiny little specks get washed away. The order you do them in matters, because it's common to use them in tandem. If you erode, then dilate, that eliminates noise: speckles in the background get washed away by the erosion, and then they're gone, so they don't come back with the dilation. If you dilate first and then erode, it does the opposite, and maybe fills in some of these rivers in the big picture. Here are some successive erosions applied: you can see the space suit does kind of wear away, but the clouds, which were bright, also got much fainter. And then the opposite: this is what dilation looks like. A lot of those shadows on the space suit are filled in, but the clouds are getting stronger too.

Once you have your image segmented, you want to do something with it, and selecting a contour is a common way to do that. It's basically picking out a shape; it's usually what we're after as the final step when segmenting an image. OpenCV will find all the contours in an image, so you'll get one blob for the space suit, another one for this cloud, another one for that cloud. It's common to sort them by size, since the guy taking the space walk is likely to be the biggest one. And if you combine that mask with your original image, you can select out just his pixels.
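Here's a minimal sketch of that threshold, erode/dilate, and contour pipeline. The filename and the threshold value are placeholders, and the unpacking of findContours assumes OpenCV 4.x:

```python
import cv2
import numpy as np

img = cv2.imread("astronaut.png")  # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Threshold: everything brighter than 200 becomes white (255),
# everything else black. The cutoff is something you tune.
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# Erode then dilate ("opening"): washes away small speckles
# without shrinking the big shapes.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Find all the blobs and keep the biggest one -- presumably the
# astronaut rather than a cloud. (OpenCV 4.x return signature.)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
biggest = max(contours, key=cv2.contourArea)

# Build a clean mask from that contour and pull out just those pixels.
clean = np.zeros_like(mask)
cv2.drawContours(clean, [biggest], -1, 255, thickness=cv2.FILLED)
result = cv2.bitwise_and(img, img, mask=clean)
cv2.imwrite("astronaut_only.png", result)
```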
Histograms are another way to separate an image. A histogram works by putting pixel intensities into bins, so you get a graph with dark pixels on the left, bright pixels on the right, and the count at each intensity. Here's what the histogram for Hubble looks like; NumPy has histogram functions, and OpenCV does as well. You can see there are a lot of dark pixels. This is a log scale, so we're talking on the order of a million, pushing ten million, dark ones, and then a handful around 10,000 that are really bright: on the sunshade and reflecting off the barrel.

Now, this is a picture of the Eagle Nebula. It's a fantastic image. It is not what an image that comes down from Hubble looks like. That is what an image that comes down from Hubble looks like. Astronomers are operating at low signal-to-noise ratios a lot of the time, so how can we improve this? This is the histogram for this image. If you can see the numbers there on the left of the scale, that's 10 to the 8th, so we're pushing 100 million gray pixels and basically nothing else. There are a few bright specks, some stars shining through, but not much. Histogram equalization basically takes this histogram and forces it to conform to a more regular shape, often a normal curve or something approximating one. Now, there were more processing steps in going from side A to side B (in response to a question: yes, that's one of them, and more exposures have been combined too), but you can see there's a lot of detail there that is totally invisible in the original.

Image registration is a fancy term for lining up images. You might have to deal with translations or rotations or perspective changes. There are two main uses: one, making a bigger picture, increasing your field of view; and two, increasing clarity while maintaining the same picture size.

Panoramas people understand pretty intuitively; your phone probably does it. You take multiple images — you can see this one is three — and once they're registered, you can blend over the edges and get a larger picture. One issue is that you can get a lot of distortion toward the edges, but there are ways to deal with that. If you're familiar with this image: it was taken by Hubble, and it's actually a panorama. It's called the Hubble Deep Field. They pointed the telescope at an empty patch of sky and exposed for days and days and days to see what was there, and when they looked at the end, this is what they got. Now, Hubble's camera isn't actually one camera; this is a composite of four. That's why the image has that strange shape in the middle, too.

That brings us to the other reason you might want to register some images: stacking. Instead of increasing your field of view, this increases the fidelity of your image. When you take any given image, in astronomy especially, you can have a lot of sources of noise, much of it inescapable; and even if you're photographing stuff around your backyard, there are a lot of different sources. What can you do about that? Well, you take a lot of pictures of the same thing — a kitchen table in this case, with a stray dry-erase marker across it — and if you apply a feature detector, you can line them all up. Then at each pixel position, you stack the values up, sort them, and take the median. The idea is that anomalously bright pixels rise to the top of the stack and dark pixels fall to the bottom; thermal noise and anything else will hopefully average out over your run of shots. So ideally, the middle pixel leaves you with something approximating the ground truth.
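A minimal sketch of that median stacking step, assuming the frames have already been registered; the filenames are placeholders:

```python
import cv2
import numpy as np

# Load the aligned frames (placeholder filenames).
frames = [cv2.imread(f"shot_{i}.png") for i in range(5)]

# Stack into one 4D array: (frame, row, column, channel).
stack = np.stack(frames, axis=0)

# At every pixel position, sort the values across frames and take the
# middle one. Transient bright or dark outliers fall away.
median = np.median(stack, axis=0).astype(np.uint8)

cv2.imwrite("stacked.png", median)
```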
Yeah, if you look really closely you can actually see some ghosts. The fix for that is to take more pictures. (In response to a question:) Well, the hope is that if you have some random noise or a transient error, the more pictures you take, the more of them will contain the true image you're looking for. If your signal is just totally washed out and you can't even get a solid image in half of them, you might not have much luck. (Another question:) You can, yeah; a lot of implementations deal with rotations or translations. Once you have the images lined up, you resample onto a common pixel grid. Everything is rectified on top of everything else, you just have a list of integer values at each position, so you take the median. The left picture is a crop from one of the originals; the right one is the finished product. That image was never actually taken by a camera; it's composed of the other five. I don't know how good our image quality is here, but you can see quite a bit of the noise is reduced and it's a lot less grainy.

Now we're crossing over into computer vision territory. We're going to talk about higher-level techniques and what you can do with them.

First up is object detection. Really, this is the problem of: given an image, how can you find it in another image? This relates to registration, since you might have one of those rotations to deal with. OpenCV has a handful of ways to do this. Feature matching relies heavily on those feature descriptors we talked about. Histogram back-projection is a process where you take the histogram of your sub-image and compare it to regions of the larger image, so you get a probability that the picture came from that region. Haar cascades and neural nets are both useful for this kind of thing. They're also useful for object recognition in general, so instead of finding one particular cat, you find any cat.

This is a wheel from the Curiosity rover, currently on Mars. If you apply that ORB feature detector, you can see it picks up a handful of points on the treads and some on the chassis of the rover, but not much on the dirt. That's pretty good, because the dirt is fairly uniform; we don't want to key too much on it. Here's the original image it came from. The way feature matching works is: it finds the feature points in your sub-image and the feature points in your larger image, and to a computer those are just numbers, so it can compare them. For each feature point in your smaller image, it looks for its nearest neighbors. If it finds a region in the larger image where a lot of those are close matches, you've probably got consensus that that's where it came from. OpenCV even lets you draw the matches, and you can see the feature points on the chassis match up fairly well, and so do the ones on the treads. The lines are pretty thin, but there are a few that don't line up well. Now, ORB is supposed to handle rotations, and you can see that in our rotated match-up the feature points are still on the chassis and cut across to match up on the other image. So ORB is functioning today.
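Here's a minimal sketch of that ORB feature matching, template against scene; the filenames are placeholders:

```python
import cv2

# Placeholder filenames: the small template and the larger scene.
template = cv2.imread("wheel.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("rover.png", cv2.IMREAD_GRAYSCALE)

# Detect feature points and compute their descriptors in one call.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# Brute-force matcher: for each template descriptor, find its nearest
# neighbor in the scene. Hamming distance suits ORB's binary
# descriptors; crossCheck keeps only mutual best matches.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# Draw the 20 strongest matches side by side, like the slide shows.
vis = cv2.drawMatches(template, kp1, scene, kp2, matches[:20], None)
cv2.imwrite("matches.png", vis)
```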
Object recognition I touched on a little earlier: this is about matching any given instance of an object. There are two popular ways to do it. One is the cascade classifier; the well-known one is the Haar cascade, which uses known characteristics. Neural nets are really great at this too, but one of their shortcomings is that we don't really know how they do what they do. Haar cascades operate under the assumption that a lot of pictures of, for example, human faces share a lot of the same characteristics, at least in photographs. Light is generally coming from above you, so your forehead is lighter than your eyes, and the light also illuminates your cheekbones. So a set of filters gets set up, and regions of an image are compared against how well they stack up to those characteristics. Now, it's important to note that the cascade built into OpenCV wasn't trained on Neil Armstrong specifically; it just uses those generic characteristics. The one OpenCV is bundled with has, I think, on the order of 1,000 of these filters. When you run it: hooray, it found Neil. Haar cascades tend to be fairly fast. Neural nets, relative to that, can be very accurate and performant, but they usually take more time and can take a lot of memory as well. (In response to a question:) Yeah, it depends on the phone or camera; they use both approaches a lot. Neural nets are a relatively recent development; they've been showing a lot of promise and they're getting better all the time. OpenCV also comes with other cascades bundled in. This one was detected using the frontal-face detector; there's also one for faces in profile, and, a pretty cute one, there's one for cats as well. Someone got that pull request accepted.

All right, let's talk about a different kind of problem: reverse image search. Instead of looking for an image inside another image, the question is: given an image, what is it a picture of? Say, the Eiffel Tower, or George Washington. This is a service astrometry.net provides to the public. Their mission: given a picture of anything in the night sky, figure out where it is and what you're looking at. Not an easy-sounding problem, at least. This is one taken from their archives. If you're familiar with a lot of deep-space objects, you might know where this is, but if I cropped out that segment in the middle, you might have a hard time. So what can we do to figure out where this picture is? The way astrometry.net works, broadly speaking, is it finds the brightest stars and maps a set of triangles over them. A triangle's angles and the ratios between its side lengths don't change when you zoom in or rotate, so that gives you a stable feature, so to speak, to search on. Once you have a set of triangles that characterizes your image, you can search a catalog of known distances and angles between bright stars. This is the annotated version: we can see it found the big nebula, it put labels on some of the brighter point sources, and the brightest one at the bottom is labeled Alnitak, which is also Zeta Orionis, in the constellation Orion. If you're familiar with the Horsehead Nebula, you know it's in Orion, so it found the right spot.

So that's cool, but what if we can't calculate triangles? What if you just want to find another image like yours? You could do a feature point search, but that's going to be prohibitively expensive over a large image set. For that, there is something called a perceptual hash. If you're familiar with cryptography, a desirable property of a cryptographic hash is that for every bit you change in the input, the output changes a lot: you shouldn't be able to tell that you guessed close to a password, just yes or no. With image hashes, we want the opposite property: if the image changes a little, we don't want the hash to change much. Given lossy compression, if you strip off one row of pixels, the image hasn't really changed, but a computer wouldn't necessarily see it that way. So here are three images. The two on the left are different: they were taken a second or two apart, and if you look closely you can see some of the waves have moved, and the boats in the background. The one on the right is substantially different, but not a ton; there's quite a bit more green grass.
So how can we get a hash that tells us the two on the left are equivalent, or at least close, and the one on the right is not so much? There's one called dHash. There are several other image hashes; this is the one I've found to perform best. It's a fairly simple algorithm. Convert your image to grayscale, simple enough. Resize it down to something very small, on the order of 8 by 8. Then iterate over the pixels in one direction — it doesn't really matter which, you just have to be consistent — and note whether the next pixel is brighter or darker. If it's brighter, you write down a 1; if it's darker, you write down a 0. At the end of your 64 pixels you have 64 bits, which is a number, and computers are good at dealing with those. You can still see the original image somewhat: there's light water at the top, a very bright path at the bottom, and everything else is kind of muddled the same. These are the dHashes of those three images. The two on the left hashed to the exact same value, so we win there; the third one, not so much. dHash also allows for fuzzy matching, so you don't need an exact match to tell that two images are close: Hamming distance is the number of differing bits between two strings. There's a library on the Python Package Index that makes these easy to use as well; you feed it an image and it spits out a number.

Optical character recognition isn't used so much in astronomy, but it's useful for a lot of earthly applications, and neural nets have made big strides here. One of the big engines for this is called Tesseract; it's currently backed by Google. The latest version uses something called LSTM, which stands for long short-term memory; it's a kind of neural net, and an interesting name, that one. There's Python support for this with pytesseract. Again, fairly simple: you feed it an image, it tells you what words are in it.

QR codes are a special kind of two-dimensional barcode. A lot of people are familiar with the three big blocks in the upper corners and the lower-left corner: those are for location. When you have an image, the algorithm keys in on the position of the QR code based on those. There are also alignment blocks; you can see in the lower right there's a clear space around a single module, and as you add more and more data to a QR code, they expand — this one has several alignment blocks. Those are for rectifying the image, so if the picture is taken at an angle, you can square it up and read the pixels out. (In response to a question about photogrammetry:) Yes, it's possible. If you have a key point detector, or some geometry that's known ahead of time, you can compute the transformation between the two views. For example, with a sheet of paper, you know it's going to have right angles, so if you look at the angles between the lines, you can figure out how it was skewed and project it back onto a square-on surface. So yes, these techniques have uses outside of QR codes; that's just a prominent one. We're actually going to come to something on that shortly.

Photogrammetry in general is getting physical measurements from imagery. Drones are really well suited to this. You can use satellites for it, but they're very high up and they cost a lot, so you're probably not going to use one. Drone hardware, on the other hand, is getting better and cheaper all the time. There's software that gives you a Google Maps-style view of your local neighborhood; you trace out an area and it'll automatically canvass it.
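Backing up to dHash for a moment, here's a minimal sketch of the algorithm described above. It uses the common convention of resizing to 9x8 so each of the 8 rows yields 8 adjacent-pixel comparisons (64 bits total); the filenames are placeholders:

```python
import cv2

def dhash(path, size=8):
    # Grayscale, then shrink to (size+1) x size so each row gives
    # `size` adjacent-pixel comparisons: 8 rows x 8 bits = 64 bits.
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size + 1, size))

    # Walk each row: write a 1 if the next pixel is brighter, else 0.
    bits = 0
    for row in small:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if right > left else 0)
    return bits

def hamming(a, b):
    # Hamming distance: the number of differing bits between hashes.
    return bin(a ^ b).count("1")

# Nearby frames should hash close together; a different scene won't.
print(hamming(dhash("boats_1.jpg"), dhash("boats_2.jpg")))
```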
So once you have this set of pictures, what can you do with it? We can generate a map, or an orthomosaic, which is kind of a map on steroids. Usually when you stitch a panorama, you blend all the seams together, but that warps your image, so you can't necessarily calculate an angle or distance accurately. An orthomosaic is the same thing, but with those distances preserved. You can see here there are some gaps at the top and bottom. Usually that indicates there was a tall structure, or a large slope in the ground, where the drone couldn't get eyes on that side of the object. In a panorama it would just get washed out as a parallax artifact, but not so much in an orthomosaic.

You can also generate a point cloud, and that gets you 3D modeling: you can make a contour map, and you can do area or volume measurements. You can also use these kinds of outputs for terrain classification, so you might want to separate this image into a field area, a wooded area, and the house. This is what the point cloud looks like. Another recognition task in some of these projects is ground control points: basically, you have a big tile you throw on the ground that acts like a QR code. Once the image processing software sees it, you tie it to a known reference location in the real world, and that can help your accuracy. It's kind of hard to see from one static image of a 3D point cloud, but you can get the idea that the woods go up from the house a little bit and the ground slopes off down here. That's about what you'd expect from looking at that orthomosaic.

Neural networks are useful for a lot of this stuff. They're starting to get incorporated into OpenCV, but not too much yet. They're useful for a broad array of tasks: image classification (given an image, what is it a picture of?), object recognition (locating those objects in the image; YOLO stands for You Only Look Once), OCR — and they're really good at a lot more as well. Here's an example of image classification: just give it a picture and ask what's in it. We got a lot of people, the grass, it's outside: yeah, that fits pretty well. Some of the lower-probability matches are interesting, although with "umbrella" you can kind of see where it got that from. And then object recognition localizes objects in an image: sure, it has cars, but where are they? People, multiple classes, that sort of thing.

That ends the tour of the computer vision portion of the talk, so let's talk about how you can use this in real life. Now that you know some building blocks, you can see how they can be composed into a pipeline: maybe you want to load an image, select some object classes in it, and send those off somewhere else, or load a bunch of images and create a panorama. There's a lot of library support for this sort of thing. OpenCV, as mentioned all talk, is great. There are also a lot of APIs for this stuff, if you don't want to deal with implementing the back end: Google Cloud, Microsoft Azure, and AWS all have APIs you can use. If you're interested in astronomy data processing, AstroPy is great: it's under active development, and it works well with those FITS files I mentioned earlier. If you want to do stacking, there's a free program called DeepSkyStacker; there's a lot in this space, but that's one I've found to be nice. Hugin is a panorama stitching application, and it's nice because it's scriptable. And if you're interested in more of the drone side, there's a newcomer on the block called OpenDroneMap that will do a lot of those orthomosaic and point cloud operations.
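As a tiny example of composing such a pipeline — load a bunch of images, create a panorama — here's a hedged sketch using OpenCV's built-in stitcher; the filenames are placeholders:

```python
import cv2

# Load a set of overlapping shots (placeholder filenames).
images = [cv2.imread(f"pano_{i}.jpg") for i in range(3)]

# OpenCV bundles the whole register-and-blend pipeline as one object.
stitcher = cv2.Stitcher_create()
status, pano = stitcher.stitch(images)

if status == 0:  # 0 is cv2.Stitcher_OK
    cv2.imwrite("panorama.jpg", pano)
else:
    # Stitching fails if there isn't enough overlap to register on.
    print("stitching failed:", status)
```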
Actually, I need to take a step back; I kind of glossed over this slide a little. This looks like pseudocode, but it's not actually pseudocode: it's code from a project I've been working on. I mentioned I work a lot with drones, and a lot of those missions involve repetitive tasks, doing the same thing over and over. Well, what do programmers do with repetitive tasks? We automate them. So I wrote a programming language. It's called DIL, for Drone Imaging Language, and it lives here on GitHub. It's a domain-specific language, meaning it's not intended for general-purpose programming; it has one specific niche. These examples actually came from DIL. This is a panorama of downtown Milwaukee, one of those 360-degree panoramas. DIL can create the panorama and then put it in a nice web viewer, so you can look around, pan around, zoom in. And this is me flying on a mission; the object detection here is done with a neural network based on YOLO.

Now, I imagine the question a lot of you have for me right now is: why? Why write a language for this? Why not just a Python library, or even just a script? When I talk to programmers about programming languages, I often ask, "Oh, have you ever developed one?" because I'm interested in languages, and I get this blank look, or "No, I've never even considered it," or they don't think they have the technical chops. I don't think that's true. There are three aspects of domain-specific languages that I think served this project well.

The first is that domain-specific languages don't have to be complex. If you think about a general-purpose language like Python or Java or JavaScript, there's a lot of grammar there; if you had to write a parser for all of that, it would be a really heavyweight task. Python itself has some DSLs built into it. Take the datetime formatting spec: if you've ever tried to deal with "if date.month == 4: string += ...", that's a nightmare. Instead, you just give it a format string with the symbols you want, and out comes a nice human-readable date. Regular expressions are another domain-specific language: they're great at parsing text, and they're also kind of infamous for not being great when you try to apply them outside their domain ("now you have two problems"). There's another one in the Python documentation called the format specification mini-language: if you want to inject some values into a string, maybe format a number nicely, this helps you do that. If you remember the left-pad debacle from Node.js, it's nice to have something like this in your standard library.

The second aspect is that domain-specific languages trade generality for simplicity. This is the entire DIL: it has five commands, all fairly self-explanatory. You can load a set of images (or a single image), you can highlight some object classes in an image, you can stitch a panorama, you can show that panorama in a web browser, and you can save your results. That's it. When I was implementing it, I didn't have to worry about recursion or branching or memory management or anything. Keeping the spec small lets you keep a narrow focus while you're developing it.

And the third is that languages are interfaces. Programmers like to maintain, and talk about, separation of concerns: keeping the design of something separate from its implementation. Python itself is a case of this: there's the CPython implementation, but PyPy, Jython, IronPython, and some others implement the same language while being different under the hood.
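Going back to those built-in Python DSLs for a second, here's a quick illustration of the point; the date and number are arbitrary example values:

```python
from datetime import datetime

d = datetime(2018, 4, 3)

# The nightmare version: hand-rolled month logic for every field.
months = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November",
          "December"]
label = months[d.month - 1] + " " + str(d.day) + ", " + str(d.year)

# The DSL version: one format string in, a readable date out.
print(d.strftime("%B %d, %Y"))      # "April 03, 2018"

# The format specification mini-language: right-align to 10 chars,
# thousands separators, two decimal places.
print("{:>10,.2f}".format(1234.5))  # "  1,234.50"
```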
By designing a language, rather than a Python class, you force the interface to be separate from the implementation. A lot of imaging tasks are what could be called embarrassingly parallel. For example, a 2017 desktop processor might have four or eight cores, while a roughly equivalent GPU might have 2,000, and that's just due to the nature of images: if you want to render one pixel, you don't have to render all the other ones first. So if you wanted to, you could re-implement DIL in a way that takes advantage of this, if you had a really large dataset to chew through.

And lastly, creating interfaces and joining them together is a programmer's stock in trade; you're already doing that pretty much every day. So the next time you find yourself needing to implement an API — while I'm a big fan of using the right tool for the job — don't dismiss the idea of a small language out of hand. It's not necessarily as difficult as you think.

That wraps up our talk. Here's where you can find me on the internet, and thank you for listening.

(Question from the audience.) Sure, right. So the question was, if I understand you right: how do you quantify, or do you still have to know, the error bounds on your noise? That's a difficult question in general. A lot of astronomy operates right at the margin of signal to noise, and a lot of preparation goes into setting up an observation. You generally have an idea of your instrument and what kind of noise you can expect, and of what you're observing and what you expect to get from the object. Sometimes you don't get anything; sometimes you just get noise. Sure — for basically every observatory, there's been a lot of work in characterizing the instrument itself, so they have a pretty good handle on the error bars of the entire system.

(Another question.) You're talking about this guy, right? Yes, so the question was about importing raw files from a digital camera. As far as I know, there isn't anything in OpenCV for that, but it handles TIFF formats, and there's an application called dcraw which will convert basically any raw format into a TIFF. Pillow also potentially handles it.

(Another question.) So, asking about using over- and under-exposed images along with the regular exposure: that's called HDR imaging, high dynamic range. Back at the start we talked about 8 bits per pixel per channel, so you have 24-bit color. That's nice, but it's not a ton of dynamic range. If you take more pictures — so if that's your standard picture, you take another one over-exposed and another under-exposed — you can expand it. A common example is a room with some direct sunlight coming in the windows. If you take a regular exposure, the windows will just be blown out and totally white; if you take another image with the exposure cranked down, you get a regular exposure outside the window, but everything inside is dark. If you align those images, you can take the well-behaved parts of each of them. HDR is something you can google if you're interested in learning more; it's actually built into a lot of cameras nowadays.

(Another question.) Yeah, so you're talking about some of the OpenCV examples. Not many lines, maybe on the order of 10: loading the images, aligning, and OpenCV actually has a blur function, so that's one more line, and then whatever you need to save it out or show something. Some of the more complex ones, like finding features — let me come back to that one; there we go — those are going to take a little more setup, because you're dealing with multiple images, and you have to create a detection object and then map between them. But not more than maybe 15 or 20 lines.
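On that HDR point, here's a hedged sketch of one way to blend exposures using OpenCV's built-in Mertens exposure fusion (one of several approaches OpenCV offers). It assumes the frames are already aligned, and the filenames are placeholders:

```python
import cv2

# Under-, normally, and over-exposed shots of the same scene,
# already aligned (placeholder filenames).
exposures = [cv2.imread(f) for f in
             ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens fusion blends the well-exposed parts of each frame
# without needing the actual exposure times.
merge = cv2.createMergeMertens()
fused = merge.process(exposures)  # float image, roughly in [0, 1]

# Scale back to 8-bit for saving.
result = (fused * 255).clip(0, 255).astype("uint8")
cv2.imwrite("fused.jpg", result)
```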
(Question about what I do.) I'm a consultant, so I work with companies that use video or imagery in their products. I've worked with local astronomy observatories, and I've worked with the National Radio Astronomy Observatory; they run the VLA, out in New Mexico. If you've seen the movie Contact, the big dishes out in the desert: that's a real place, and it's a working observatory. I've also worked on the James Webb Space Telescope. (Follow-up:) Yes. No — no one touches the telescope, or anything else.

(Question about images where shadows or lighting change: how do you recognize the image?) Sure, that's a fairly difficult problem in general. If you're aligning images that have drastic changes, you probably want to try out different feature descriptors and see which one works best for you. It's easy to write, or come up with, a bad feature descriptor; the good ones have been pretty battle-tested, with a lot of research going into them. So I would say run through the ones available in OpenCV — these guys — and see which works best for your dataset. They have different properties depending on what kinds of changes you want to count as a positive or negative result.

(Question about the slides.) Yeah, my website is foxrow.com. They're not up yet; I'll put them up in the coming days.

I think that wraps up our time, so thank you everyone for coming.