Hi and a warm welcome. You are here at a talk about an open-source software package for plenoptic cameras, which Christopher developed in his, oh dear, now I've forgotten the word again. Dissertation. Thank you, in his dissertation, or which he worked on there. And he is now going to tell us a bit about it. The nice thing is that he has explicitly aimed the talk at beginners. That means even people who have never heard of any of this, like me, can learn something nice here. Good luck, and a round of applause for you.

This does not look any different from a conventional camera. But there are micro lenses right in front of the image sensor that conventional cameras do not have. And with these, something really fascinating becomes possible: it is called refocusing, and I want to give you an idea of what refocusing means. As you can see, in this image we have many, many focal planes that we can focus on. So every object can be brought into focus from a single capture. You only need to take one photo and can then travel through the scene, pulling the focus around. When I first heard about this, I wondered how it is even possible, and I want to give you an idea of how we achieve it. As I said, we only need one image, and this is what such an image looks like. We have thousands of micro images that are formed by the micro lenses. As you can see in the magnified portion in the middle, they are very tiny; you can think of it as something like a fly's-eye view. And we have to do some image processing to achieve the refocusing. But before we get to that, we first have to look at the optics to understand how it is possible. Don't worry, I won't go into too much detail. By the way, has anyone spotted the image that is hidden here? Yes? Even though it is hidden, that one is really good.
How many of you know Doom, by the way? Yeah, that says a lot about the average age in this room, I guess; I leave it to you to decide what that age is. So, to begin with, we have an optical bench here, that is the red line. On this optical bench we have a conventional image sensor, which you find in every camera. In the middle we have these so-called micro lenses, and that is the unconventional part. I have drawn only six of them here, but in reality there are thousands of them. Finally, there is the objective lens, which you also find in or on every camera. With this, the optical setup is complete, and we can start tracing rays. I have drawn a yellow ray here, because it is a quite distinct ray: the chief ray. The chief ray has the property that it travels through two optical centers: first through the optical center of the micro lens, and then through the optical center of the main lens. If we connect these two positions and extend the line, we arrive at the position on the image sensor where its intensity lands, and that is the center of the micro image. If we follow the rays, we can see that there is a whole beam in which all these rays exist. We can think of one ray as corresponding to one pixel. But we do not have just one pixel per micro image, we have many of them. To simplify, let us say we have several pixels on the left side, drawn in blue here, which also form a beam, and the same on the other side. With this we have three pixels per micro image. At this point we have the complete plenoptic model and can start to see how refocusing is possible. And to remind you what refocusing means, I have brought the Doom guy into focus here.
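The chief-ray construction just described, connecting each micro lens optical center with the main lens optical center and extending the line down to the sensor, reduces to similar triangles. A minimal sketch of that geometry in one lateral dimension; the function name and parameters are assumptions for illustration, not part of the actual software:

```python
def micro_image_centers(mla_centers, d_main_mla, f_micro):
    """Locate micro image centers via the chief rays.

    Each chief ray passes through the main lens optical center (lateral
    position 0) and one micro lens optical center; extending it to the
    sensor plane gives the center of that micro image.

    mla_centers : lateral positions of the micro lens centers
    d_main_mla  : distance from main lens to micro lens array
    f_micro     : distance from micro lens array to sensor
    """
    # similar triangles: lateral position grows by (d + f) / d along the ray
    scale = 1.0 + f_micro / d_main_mla
    return [c * scale for c in mla_centers]
```

The on-axis micro image stays centered, while off-axis micro images shift slightly outward, which is why the centers have to be calibrated rather than assumed to sit exactly behind each micro lens.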
So let us start with this developed model. We need a few definitions here. As you see with the red line here, we have an object plane, and that is the black-and-white test chart in the image on the right. The black-and-white test chart is in focus in this case. If we take a point on the object plane and trace the rays to the image side, we see that they focus on the image plane, here on this micro lens. But they do not end there; they travel through the lens and end on the image sensor, in the micro image. What we do now is retrieve the intensity that exists at this image plane position. How do we do that? We integrate the pixel values that are in this micro image: we take all the pixels in this micro image and add them up. This way we retrieve the intensity that exists on the micro lens. But now we only have one particular point of the refocused image; we have to do that for all the adjacent points. This way we reconstruct the entire image with the focus on the background. But as I said, we can also focus on foreground objects, like these figures here. Now I shift the object plane to the front, pick a point on the object plane and trace the rays to the image plane. And I see that each ray travels through a different micro lens. So the pixels that I have to collect and add up are spread over many micro images. I have to identify them and then add them up in order to reconstruct an image that is focused on the foreground. I do that for all the adjacent points. And finally, we have reconstructed an image with a focus to the foreground. So, this is basically how refocusing works.
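The integration step just described, collecting the pixels that belong to one object point (within one micro image for the background plane, spread across neighbouring micro images for foreground planes) and adding them up, is often sketched as shift-and-sum over a 4-D light field. This is a minimal illustration under idealised assumptions (regular grid, integer shifts), not the software's actual implementation; the function name and the shift parameter are assumptions for the sketch:

```python
import numpy as np

def refocus(lightfield, shift):
    """Shift-and-sum refocusing over an idealised 4-D light field.

    lightfield : array of shape (U, V, S, T), where (u, v) indexes the
                 pixel position inside each micro image and (s, t) indexes
                 the micro lens grid.
    shift      : 0 integrates each micro image on its own; other integer
                 values displace the views against each other before
                 summing, which moves the synthetic focal plane.
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(shift * (u - U // 2)))
            dv = int(round(shift * (v - V // 2)))
            # gather the (u, v) view, displaced according to the focus shift
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

With `shift = 0` this is exactly "add up all pixels of each micro image"; nonzero shifts reproduce the foreground case, where the contributing pixels come from different micro images.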
And yeah, when I developed that model, explained it to myself and implemented it, I was wondering: is it possible to predict the distance to a refocused object? It turned out that this is feasible. Again, we have that model. In order to achieve this, I regard each ray as a linear function, fit these linear functions into an equation system and solve this equation system to get the intersecting position. Once I have that, I can estimate the distance to this object as a metric value. Obviously, what you need to know in advance is the focal length of your main lens, the focal length of your micro lenses and the pixel pitch of your image sensor. But if all of this is known, then you can estimate the distance of a refocused object. Another capability of this camera is to change the perspective view, and I want to give you an idea of what that means. Again, we have our model here. If I highlight pixels that share the same relative position in each micro image, collect them and rearrange them, I have generated a view from a different perspective. And the perspective position is where these blue rays focus. So the perspective position resides on the main lens aperture plane, and you can move along it. As you can see when I move back and forth, like now where the yellow rays are highlighted, the objects in the front appear to move. You can see this as a little stereoscopic setup; you can imitate it by alternately closing and opening your eyes, where you observe the same phenomenon. And I can also pick another position. So you can vary this viewpoint along your main lens aperture plane. Algorithmically, this can be thought of as illustrated here. On the left we have the micro image representation with the micro images, and I have highlighted the blue pixels here because they correspond to these blue rays.
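The distance estimation just described can be sketched as a small least-squares problem: model each ray in 2-D as a linear function x(z) = m·z + c, stack one equation per ray, and solve for the common intersection point. This is an illustrative sketch under a 2-D simplification, not the software's code; the function name is an assumption:

```python
import numpy as np

def intersect_rays(slopes, intercepts):
    """Least-squares intersection of 2-D rays x(z) = m * z + c.

    Each ray contributes one equation x - m * z = c in the two unknowns
    (x, z). Stacking all rays gives an overdetermined linear system,
    solved here in the least-squares sense, so noisy rays that do not
    meet in a single point still yield the closest common position.
    """
    m = np.asarray(slopes, dtype=float)
    c = np.asarray(intercepts, dtype=float)
    A = np.column_stack([np.ones_like(m), -m])  # coefficients of (x, z)
    sol, *_ = np.linalg.lstsq(A, c, rcond=None)
    x, z = sol
    return x, z
```

The recovered z is the depth of the intersection; converting it to a metric object distance is where the main lens focal length, micro lens focal length and pixel pitch from the talk come in.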
And if you rearrange them into a new image array, as depicted on the right, you obtain this perspective image view. So this is how it works in principle, on a very abstract level. If you want to dive into this and get more in-depth knowledge, I recommend reading some of these scientific publications. And now, finally, I would like to come to the most important part of this talk, which is the software I have written that implements the algorithms I just described. You can find it on GitHub via this link down here. This is what the user interface looks like; I will give a brief demonstration now. And if you do not call a plenoptic camera your own, you can also obtain some light field data online. This is free to use, so you can download the software, download the image data and play around with it. And why am I telling you all this? Well, I would like you to join me on my road developing this and collaborate to improve this software. To give you an idea: this is written purely in Python, by the way. Since it takes some time to compute these images, one step could be to convert parts of it to C to make it perform faster and better.
This is what the interface looks like; it is quite lean, not too many buttons. I want to use some of the images we captured back then at university, which is this image. You can see the micro images again; this is what the raw image looked like. That is a camera, by the way, that we built ourselves back then, because suitable cameras were not available, and we also needed to know the focal length and so on, which was not given with a commercial camera. That is why we had to assemble our own components, so that we knew what we were using and knew the parameters in order to predict distances and so on. So now I have pointed the software to the light field image. What is also necessary is this white image calibration file, which is needed to calibrate the camera: we have to find the centers of each micro image, and that is why this has to be provided as well. There are some settings here; I don't want to go into detail, there is a documentation. I took some effort to document all of this, so if you are interested you can read through it. Now, while this is processing, there is some time for you to ask me questions, if you have any. If you like, come up with whatever is on your mind.

How difficult is it to retrofit a normal DSLR with a micro lens array? Is that even possible, or commercially available?

If I got you right, you are asking about the manufacturing, or how difficult it is to manufacture such a camera. Well, there are micro lens arrays available for about $1000, roughly, if you just get one or two, and I think you don't want to get more than that; it is quite expensive. The manufacturing process, well, that depends on your skills, but it is not too difficult. I have seen a workshop by a colleague who actually did that in front of an audience and built one with a conventional, I guess, whatever camera that was, but a quite conventional camera: the micro lens array was attached, then you attach your objective lens, and you are done. The rest is done by the software. It is not limited to commercially
available cameras, or the one that was initially commercially available. You can build your own and use that software, because it is not restricted to any specific type of plenoptic camera. Did that answer your question satisfyingly?

If we think one step ahead, will this technology be applied to video as well?

Very interesting question; we asked ourselves the same thing back then. It would be very convenient, because usually on a movie set you have one camera, and one person, called the focus puller, pulls the focus of the scene while standing next to the cameraman. Usually this has to be done once or twice per scene, but with this camera it could be done in only one shot, and you can postpone the focusing to the post-processing stage, so it could be done later on; it depends on the creative freedom of the people sitting behind there. There was a company also looking into that and trying to introduce the camera to the cinematography market, but they didn't make it before they had to close down. But yeah, it is a very interesting application. As for other applications, I think that microscopy could benefit from this, because these viewpoints that I showed you earlier are very close to each other, meaning you only get depth from very close objects. The objects that you capture have to be very small, so microscopy might be one field, or endoscopy, or all the medical instruments out there.

If we are talking about microscopes, the usual problem there is to have enough light. In this case, and that is basically my question, don't you also reduce the amount of light that is available? You restrict yourself to certain pixels of the camera because you only need those for that particular focus and discard the rest, so you would need much more light, I guess. What do you say?

In fact, the amount of light that is captured is the same. However, there is another trade-off: since we collect only one
pixel out of each micro image, we reduce the overall image sensor resolution. So the trade-off is rather on the resolution side than on the amount of light, I would say; that is the trade-off we are making with this type of camera. In terms of light, these images can be quite noisy, since you split up your image point over many pixels: the image point that exists on the micro lens gets spread over many pixels, and by this you introduce noise, or rather you get quite noisy images. But since you have the information replicated, you can also cancel out parts of the noise quite easily in this integration process. So I would rather say that the loss of image resolution is the much harder trade-off to take, for photographers or for any medical applications.

I have a Python question, since I see it is taking quite a while: what technology stack do you use? Do you use Cython, do you use NumPy, do you use PyPy?

Let me just show you; I have written the dependencies down here. I don't use Cython right now. If there is anyone who is an expert in that, who has used it before, sorry, I am not using it up to now, but if you are willing to take part and introduce it, feel free. Currently I try to keep it very lean; I only use, how many are there, five libraries: NumPy is one of them, SciPy, a TIFF library and some demosaicing libraries in order to process the Bayer pattern image of the Lytro camera. Obviously, this is where the image processing takes place.

Alright, if there are no more questions... you have one more question? Are you done with the demo? Sorry, say it again. I am done with the demonstration, yeah. Oh yeah, true, I completely forgot about that; since I had already shown you some of the images, I thought that's not the big magic now. Okay, let's have a look at them. The processing is done; however, I guess all the images are there, I was thinking maybe they were still exporting. So now these are
the images as they come out. As you can see, this is a slightly different image from the one we saw in the introduction, and as you can see, refocusing is possible. The software also produces a bunch of viewpoint images, and as mentioned earlier they are a bit smaller, since I have done some tweaks to extend the spatial resolution of the refocused images. As you can see, the objects in the front are moving; however, at the very end of the aperture plane of the main lens we see this vignetting effect. This is a typical vignetting behavior that you also face with conventional cameras, so it has to be treated in the future. If you are an expert in image processing, you are free to join me and rectify and eliminate this. Okay, thank you very much.

There is another one. As you just showed us, this technique allows you to compute different viewpoints from one shot, so that should allow you to compute 3D models of the scene view?

That's absolutely true: you can compute so-called depth maps. This is also one of the future tasks that are still on my list. If you have experience in doing that, I'm happy to have you on my side, and it would be great to put that into the software as well, for sure.

Okay, so thank you very much for your interesting talk, and I think you will still be happy for others to come see you afterwards as well. So, thank you. Yeah, thank you for having me, and as said, come to me afterwards and let's have a chat. Have a good day.