So, I'm Behnaz Pirzamanbein, and I'm at the moment an assistant professor in the Department of Statistics at Lund University. Last year, when I submitted my abstract, which is titled "A study on bending of laminate packaging material from Tetra Pak", I was a postdoc at DTU, working in the QIM team. QIM stands for the Center for Quantification of Imaging Data from MAX IV; it is a collaboration between three universities (Lund University, the University of Copenhagen, and DTU) and is funded by the Capital Region of Denmark. There, we are statisticians working with images acquired at MAX IV or at the 3D Imaging Centre at DTU. This collaboration was established under the LINX project in Denmark, which brings industrial collaborators and university researchers together. We worked with colleagues at Tetra Pak and at the 3D Imaging Centre at DTU, where we acquired the data. Let me briefly give some motivation for why we did this study. As you said, we are now moving from the food itself to how we package the food, and what the challenges are there. As you might know, Tetra Pak is the largest food packaging company; they develop filling machines that can produce 40,000 milk cartons per hour. To increase the capacity and quality of this packaging process, they control and predict the process through virtual 3D models. For these virtual models to be reliable, it is important to know the geometry of the product and how this geometry changes when the material goes through the bending process in the machines. So in this study we developed a pipeline to deduce information about how different materials behave when they get bent; for example, here are the different angles we are looking at. This study can help Tetra Pak verify their simulation methods so that they can design new products.
Up to the point when we did this study, what Tetra Pak used to do was take 2D images with high-speed cameras. For this study, we instead acquired data using X-ray tomography and created 3D volumes; you can see one in the video on the right side, so we can now look both inside the material and in 3D space, rather than in 2D space. We had four samples in this work. They are the same material, but they have different properties: the crease lines in the material are different, and how we bend them differs as well, either inward or outward with respect to the packaging material. Our colleagues at the 3D Imaging Centre developed the clips that you can see in the top images here, so that we can bend the material to a certain degree (0, 45, 90, or 180) and fix it, so that all samples hold the same angle while we image them. The pixel size of the image you can see on the right side is 4.2 micrometers, and the scale indicator down here shows that this is around 2 millimeters of actual size. So far I have given the short motivation and described what we actually have as a starting point; now we go to the statistical and image analysis part of this work, which my colleague and I worked on. The goal of this part was first to identify the different layers in the 3D volumes we acquired through X-ray tomography. On the right side you can see a 2D slice at each angle: 0, 45, 90, and 180 degrees. On the right side of the material in these samples is the aluminum layer, which is quite straight and distinct from the background, so you can see it by eye; on the left side is a clay layer, which is a bit harder to identify by eye.
After we identify these two layers, we want to obtain a quantitative measure that describes the delamination that happens when we bend the material, at the center of the crease; you can see it more easily in the 90 degree and 180 degree cases in the bottom two figures. Finally, we also want to investigate the deformation process: what happens to these straight packages when they fold, and how a given pixel moves between the different angles. So, as I said, we created a pipeline. To fulfill the first goal, we do a segmentation to find the layers on the outer and inner side, which are the aluminum and the clay. Then, using that segmentation, we measure the bend crease characteristics by defining a measure. Finally, using the segmentation and a registration of the samples at the different angles, transforming and aligning them, we can investigate the deformation process. The next slides go into a bit more detail on the different parts of this pipeline, starting with segmentation. We chose to work with a segmentation method called layered surface detection, which is good for segmenting terrain-like images in 2D. Since we want to find layers, it is a perfect method for that. You might ask why not use other, simpler techniques like thresholding, because layered surface detection is a graph-based search method, which is a bit more complex than thresholding. And indeed, as I mentioned already, we can find the aluminum layer pretty well, and we might manage to get a good segmentation of it just by thresholding.
But other materials, like the clay on the inner part of the bend, have different contrast in different regions: some parts are quite obvious, and by trying different threshold values we can find them, but on the other side, in the top part here, it is really hard to capture the layer using thresholding. So we thought that this method, which finds surfaces, would fit best, and I am going to show you that we were right about it. To do the layered surface detection, we annotate the image around the area where we think the layer of interest is, and then we sample around that area. We then get the mesh you can see in the picture, which in this case is the 180 degree bend. However, as I already mentioned, layered surface detection needs a 2D, terrain-like image in which you can see the lines. Therefore, we unwrap this mesh into 2D space by extrapolating, and we get the picture at the bottom, which is called the unwrapped image. What was a 180 degree bend in 3D space is now flattened into 2D space, and we can see, even by eye, the different surfaces of this layer at the top and the bottom. Using a cost function that you define, and other properties of this method, you can specify how many surfaces you are looking for, how smooth each surface should be, and how far apart you expect them to be, so that the search does not end up in a completely wrong area. By doing this, we managed to segment even the clay parts, as you can see here. Applying this to all four samples in 3D, we got the results you see in the picture for 0, 45, 90, and 180 degrees. In this image, the top, purple-colored surface is the aluminum, and the bottom one is the clay that we segmented out.
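The idea behind tracing a smooth layer through an unwrapped, terrain-like image can be sketched in a few lines. This is a simplified dynamic-programming illustration of the surface-search concept, not the graph-based implementation used in the study; all names and the smoothness parameter are illustrative:

```python
import numpy as np

def detect_surface(cost, max_jump=1):
    """Trace one surface (one row index per column) through a 2D cost
    image, minimizing total cost under a smoothness constraint:
    adjacent columns may differ by at most `max_jump` rows."""
    n_rows, n_cols = cost.shape
    acc = cost[:, 0].astype(float).copy()        # accumulated cost per row
    back = np.zeros((n_rows, n_cols), dtype=int)  # backtracking pointers
    for c in range(1, n_cols):
        new_acc = np.full(n_rows, np.inf)
        for r in range(n_rows):
            lo, hi = max(0, r - max_jump), min(n_rows, r + max_jump + 1)
            prev = int(np.argmin(acc[lo:hi])) + lo
            new_acc[r] = acc[prev] + cost[r, c]
            back[r, c] = prev
        acc = new_acc
    # backtrack from the cheapest end row
    surface = np.zeros(n_cols, dtype=int)
    surface[-1] = int(np.argmin(acc))
    for c in range(n_cols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface
```

In the real method the cost image is derived from the X-ray intensities, and several surfaces are found jointly with constraints on their separation; the sketch above only shows why low-cost, smooth paths pick out a layer that thresholding would miss.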
Now that we have the segmentation, we can compute the measure. We define it as the Euclidean distance between the outer layer and the inner layer in the image, which shows us how the bending process is characterized. If you look at the images on the bottom left, this is where aluminum was the inner layer, and you can see that the aluminum clearly forms a delamination in the center of the crease line as the material goes from 90 degrees to 180 degrees; it is quite obvious. In the other material, where the clay was the inner layer, we get two separate delaminations around the edges of the crease line. In this way, we can characterize the properties of the materials when we bend them and measure how they differ from each other. The next thing we needed to do was to investigate the deformation process. We had different samples, each at a different bending angle, so we needed to register these bending angles and align them, so that we can follow one pixel from 0 degrees and know where it ends up at 45, 90, and also 180 degrees. To get this, we needed to do the registration between the different angles. After we got the segmentation (the right image), we sampled inward into the material, so that we can use the pattern of the fibers inside the material; this gives us a signal for how to register the images to each other. We used the 180 degree sample as the reference because it has the biggest field of view, and the 0, 45, and 90 degree fields of view are all contained in it. To make it a bit easier to understand, I now project them into 2D images, because the registration, the transformation, happens in 2D space.
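The delamination measure described above, the Euclidean distance between the two segmented layers, can be sketched like this. It is a simplified per-point version that assumes the two surfaces are sampled at corresponding positions; the names and the toy data are illustrative:

```python
import numpy as np

def layer_distance(outer, inner):
    """Per-point Euclidean distance between two segmented surfaces.

    `outer` and `inner` are (N, 3) arrays of corresponding 3D points on
    the outer (e.g. aluminum) and inner (e.g. clay) layer.  A local
    increase in this distance along the crease indicates delamination."""
    return np.linalg.norm(outer - inner, axis=1)

# toy example: two flat surfaces 1.0 apart, with one "delaminated" point
outer = np.column_stack([np.arange(5.0), np.zeros(5), np.full(5, 1.0)])
inner = np.column_stack([np.arange(5.0), np.zeros(5), np.zeros(5)])
inner[2, 2] = -0.5                      # inner layer pulls away here
d = layer_distance(outer, inner)        # distance peaks at index 2
```

Plotting such a distance profile along the crease is what makes the single central delamination (aluminum inside) versus the two edge delaminations (clay inside) quantitatively comparable.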
I again unwrap the folded image into a straight image; the one on the left is the 90 degree sample and the one on the right is the 180 degree sample, and I have tried to highlight parts where it is easy to see by eye that they are similar. The method we use is SURF-based registration, which looks for blobs and sharp edges. For example, in the top yellow ellipse here on the left, there is a white blob with a dark blob next to it, and we can find the same configuration in the 180 degree image; this tells us that these are the same pixels in the 180 degree image. We can also find other patterns and similarities between the pixels in the lower part here, and in the one on the left side. So now we have the transformation in 2D space obtained from the registration, and also the transformation from unfolding the image from the 3D bend onto the 2D surface. So we have two different transformations in this model; I have tried to lay them out here so that it is visually easier to understand. If we have our material at 0 degrees in 3D space, we transfer it to 2D (at 0 degrees nothing changes; the 45 and 180 degree samples are unwrapped onto a 2D surface by extrapolation). In 2D we can transform the different angles to each other, so that we know exactly where the pixels are, and then we can go back to 3D space again. On the left, as I said, the 180 degree sample in black or gray has been made transparent, so it appears more gray. It has the biggest surface area because of the camera's field of view, and the green, red, and blue outlines are the fields of view of the outer layer of the material projected into the 180 degree frame; you can also see the differences in their sizes here. To give a conceptual picture of what happens from now on: if I have a pixel at 0 degrees, I can now transfer it to 180 degrees using the registration transformation.
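Once feature points have been matched between two unwrapped images (by SURF or any similar detector), the registration itself amounts to estimating a transformation from the matched pairs. Here is a hedged sketch of that step, a plain least-squares affine fit in NumPy; a full registration pipeline would typically also reject outlier matches, which is omitted here:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points to dst points.

    `src`, `dst`: (N, 2) arrays of matched 2D feature points, e.g.
    matches between two unwrapped images.  Returns a 2x3 matrix A such
    that dst ~= [x, y, 1] @ A.T for each source point (x, y)."""
    X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    W, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solve X @ W = dst
    return W.T

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to (N, 2) points."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T
```

With one such transformation fitted per angle pair, moving a pixel between the 0, 45, 90, and 180 degree frames is just a matter of applying the maps (or their inverses) in sequence.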
From there, I have the inverse transformations to all the other angles, 45 and 90 degrees as well. So I can move that pixel to the 45 degree space in the 2D plane, then geometrically transform it back to 3D, and do the same for 90 degrees; the 180 degree case we have already covered. In this way, we now know where a pixel at 0 degrees ends up at 45 degrees, so we can actually follow the deformation as we bend the material. Something to mention here is that, since the field of view is not the same across angles, some areas exist at one angle but not at another, so the final simulated process is restricted to the intersection of the fields of view. After doing this, I apply an alignment to remove any translation that occurred when the images were taken; we chose to anchor that alignment at the most concave part of the bent material, on the outer layer. Hopefully you can see the camera moving through the video showing the mesh; I show it as a mesh rather than a surface so that every square represents a pixel, and you can follow them through the different bending angles. The final slide shows the final product. Because we now know where the pixels are, we interpolated between the different bending angles, and we can simulate the bending process, the deformation process. Again, the top layer is aluminum and the lower layer is clay, and this is now in 3D, so you can see how the aluminum actually bends when it is the inner layer. As I mentioned already, the delamination is in the center of the crease line, and it has a bit of a wave in it, which was not possible to see in the 2D images before. With that, this was my last slide; I will just say thank you for listening, and I welcome any questions. Thank you very much for a fascinating talk.
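The last step, interpolating pixel positions between the measured bending angles to animate the deformation, can be sketched as per-pixel linear interpolation over angle. This is illustrative only; the talk does not specify the interpolation scheme used, and all names are made up:

```python
import numpy as np

def interpolate_deformation(angles, positions, query_angle):
    """Linearly interpolate 3D pixel positions between measured angles.

    `angles`:    sorted 1D array of measured angles, e.g. [0, 45, 90, 180]
    `positions`: (A, N, 3) array holding the 3D position of each of the
                 N tracked pixels at each of the A angles (after
                 registration into a common frame)
    Returns an (N, 3) array of interpolated positions at `query_angle`."""
    angles = np.asarray(angles, dtype=float)
    i = int(np.clip(np.searchsorted(angles, query_angle), 1, len(angles) - 1))
    t = (query_angle - angles[i - 1]) / (angles[i] - angles[i - 1])
    return (1 - t) * positions[i - 1] + t * positions[i]
```

Sampling `query_angle` densely between 0 and 180 and rendering the mesh at each step is what turns the four measured states into the smooth folding animation shown on the final slide.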
So, how do you see this packaging; what do you see as the main challenges with the packaging material? Is it the mechanical effects on the structure, or is it more chemical? You are now the expert on packaging.

I would say that I am not the expert there; I am more of a statistician who tries to deduce information from the images we get, based on mathematical equations. What I would say here is that we see that the different materials behave differently. So, for example, a nice next step could be to try to classify the materials based on these characteristics. This can also help companies like Tetra Pak come up with new designs, so that they can produce more stable packages or different package designs. As I understand it, it is also quite expensive to put a product into the production line before you actually know what you will get. So all this work has been done to understand the process before actually going to production, so that we do not need to produce 40,000 packages to know what will happen to the material.

Okay, can you apply the same modeling to food, for looking at how it bends when you fold it, something like that? In principle, the modeling should apply; have you thought of that?

I would say, if we can take the images, yes, we can deduce information from them; it depends on the quality of the images and the contrast we get. What you saw in all the slides was contrast-enhanced; we use different techniques to get the best out of the images produced at synchrotrons or with X-rays in the lab, so that we can extract information from the images we have. I should also say that one of the goals we have in the team is that when people take images, they get information out of them. It is not just a nice 3D image; we can also get statistics out of it, so we can do calculations and quantification.

Okay.
Thank you very, very much for this insight. So, thank you very much.