So my topic for the presentation today will mostly be the use of AI for point cloud processing, but before we go into the use cases themselves, let's go through some baseline terminology so we are all on the same page. So what even is AI? I think this year everybody has heard the term AI all over the place, from ChatGPT to other language models, but in fact artificial intelligence is simply the field that deals with the development of intelligent systems. There are different ways to go about it, but the most widely used approach is machine learning, and that means we need quite massive annotated datasets to train our algorithms to get to the end results. Within machine learning there is again a variety of different tools and algorithms, and most state-of-the-art methods these days are based on neural networks with multiple layers, so we actually have deep neural networks and deep learning. Most of the use cases we will be looking at today have been done using deep neural networks and deep learning. Usually the processing goes like this. First comes data acquisition; then the pre-processing, geo-referencing and matching are done in software, usually from the sensor manufacturers. One thing to note here is that if the matching of the data isn't done properly, so that two flight strips aren't tightly matched together, this is also reflected in the classification and other derivative products, so it is really a key point that we generate pre-processed data that is as good as possible. Another thing to consider is that, depending on the sensor manufacturer and the quality of the point cloud, how much scattering or noise is in the point cloud also influences the quality of the end results. After we get the raw data, we do the classification, where we basically segment every individual point into its corresponding category. We have a variety of pre-trained models that were trained on vast amounts of data from different sensor manufacturers and from different geographical regions, from Europe, North America, South America, Africa and also Asia, and in that way we can create a general AI that is applicable to most datasets, sensors and geographies. For some use cases this is already the end delivery, but a lot of the time we want to extract some additional insight out of the data, so we do some sort of vectorization, for example to detect single trees, to calculate how much area is covered by the tree canopy, or to do railway inspection or inventory generation. The last thing before we jump into the use cases is quality metrics. It is really important to consider which quality metric to use for a given use case. We can use anything from accuracy to F1-scores to intersection over union, and it is really critical that before going into a project we determine which metric is most suitable and what threshold we want to achieve. For some use cases the speed of processing or delivery is more important than absolute accuracy, so it really depends on the use case, and we have to think about this beforehand. Also, one thing to note is that everybody would like to have 100% accuracy in the classification, but to be frank, 100% is practically unachievable, whether for human annotators or for machine learning systems, and even where it is achievable, it is prohibitively expensive to really check each individual point for the correct annotation. So let's take a look at a few use case examples.
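To make the metrics discussion concrete, here is a minimal sketch of how overall accuracy, per-class F1 and intersection over union can be computed from per-point class labels. The class names and label values are purely illustrative, not the ones used in any particular product:

```python
import numpy as np

def per_class_metrics(y_true, y_pred, classes):
    """Per-class precision, recall, F1 and IoU for point-wise class labels."""
    results = {}
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))  # correctly labeled as c
        fp = np.sum((y_pred == c) & (y_true != c))  # wrongly labeled as c
        fn = np.sum((y_pred != c) & (y_true == c))  # missed points of class c
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
        results[c] = {"precision": precision, "recall": recall, "f1": f1, "iou": iou}
    return results

# Toy example with 0 = ground, 1 = vegetation, 2 = buildings (per-point labels).
y_true = np.array([0, 0, 1, 1, 2, 2, 0, 1])
y_pred = np.array([0, 1, 1, 1, 2, 0, 0, 1])
print("overall accuracy:", np.mean(y_true == y_pred))
for c, m in per_class_metrics(y_true, y_pred, [0, 1, 2]).items():
    print(c, {k: round(float(v), 2) for k, v in m.items()})
```

Per-class IoU is usually the stricter choice for point cloud classification, since overall accuracy can look high even when a rare but important class, such as wires, is classified badly.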
So one of the most common use cases, and the one we do most, is large-area LiDAR mapping projects. Here we are usually processing nationwide or region-wide projects for topographic mapping. One key thing to remember is that with the use of AI we can process large projects quickly, and with machine learning we can also scale the processing significantly, so we can rent tens or hundreds of GPUs to do the processing as fast as possible. Another thing to consider is that certain countries have restrictions on moving data outside a specific geographical region, so we can also deploy our algorithms inside the country that the data cannot leave. Additionally, if there are national security concerns, with the use of AI we can make it happen that no human who isn't authorized to see the data ever needs to see it, which can have huge implications in certain security and defense applications. In terms of categories, we classify the standard ones such as ground, vegetation and buildings, with buildings segregated into rooftops, roof objects and walls, and infrastructure as well, so power lines, wires and towers, separated into low- and high-voltage ones, plus railroad infrastructure. The second use case, where the adaptability of category definitions comes into play, is for example rock face mapping. Usually classifiers for ground classification do not classify overhangs as ground, because when generating elevation models we would get artifacts from that, but for this particular client the request was that overhangs were also classified as ground points. So what we did with the client is that we picked a few small tiles, annotated them manually to correspond to the new definition of the ground category according to the client's specifications, did a bit of retraining of our pre-trained model, and with that we got a classifier able to classify ground according to that client's specific definition.
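The retraining step in that rock face example can be pictured roughly as follows. This is not the actual training code; it is a sketch in PyTorch with a tiny stand-in network and placeholder data, just to show the common pattern of freezing a pre-trained backbone and fine-tuning only the classification head on a handful of re-annotated tiles:

```python
import torch
import torch.nn as nn

# Stand-in for a real pre-trained point classifier: a per-point MLP over
# simple features (x, y, z, intensity). Real systems use deep point cloud
# architectures; this only illustrates the fine-tuning loop itself.
backbone = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 5)  # 5 classes, e.g. ground, vegetation, building, wire, noise
model = nn.Sequential(backbone, head)
# model.load_state_dict(torch.load("pretrained.pt"))  # hypothetical checkpoint

# Freeze the backbone and retrain only the head on the few tiles that were
# manually re-annotated to follow the client's definition of ground.
for p in backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Placeholder tile: 1000 points with 4 features each and per-point labels.
points = torch.randn(1000, 4)
labels = torch.randint(0, 5, (1000,))

for epoch in range(20):
    optimizer.zero_grad()
    loss = criterion(model(points), labels)
    loss.backward()
    optimizer.step()
```

Because only a small head is retrained while the backbone keeps its general features, a few manually annotated tiles can be enough to shift the category definition without retraining from scratch.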
The next thing is LOD, so level of detail, building modeling. As already mentioned in the slides before, a lot of the time we do not stop at point cloud classification but want to go on to vectorization or some sort of compression of the data. In this particular case we are automatically generating 3D level of detail 1 (LOD1) building models and also exporting the footprint of each individual building. The same technology and algorithms, obviously trained on a different training dataset, can also be applied to different platforms, for example mobile mapping, where we can automatically filter out and denoise moving vehicles, stationary vehicles and pedestrians, detect curbs, poles and so on, and again this is then a base for follow-up vectorization of the objects the client may be interested in.

One really great use case, with the whole of society moving toward more greenery, is to use LiDAR for forest management: generating detailed forest inventories that help forest owners and managers manage forests more easily and accurately. This may be a nationwide forest or plantations, and it even applies to urban forestry. For example, most cities, at least in Europe, are currently trying to follow the 3-30-300 green space rule, where from each building one should see at least three trees, the area under canopy has to be greater than 30%, and it should be at most 300 meters to the closest green space. In this particular use case we have LiDAR data of a municipality which was scanned multiple times, and with it a detailed analysis of each individual tree in the municipality over the years, so the municipal administration can closely follow how much the tree canopy or tree count is shrinking or growing inside the municipality.

Something similar applies to mining applications, where we can do anything from volume calculation to change detection, monitoring and reporting, and exploration. One particularly interesting use case is the analysis of drill holes in open pits: the drill holes were drilled and then scanned with a scanner, and the client wanted a detailed analysis of each drill hole, what its depth, diameter, inclination, azimuth and other attributes are. This can be done completely automatically, in a streamlined processing fashion.

All of these services and use cases can be delivered in three ways. One is through our web application; the second, for larger projects, is processing as a service, where we do batch processing; and the third is that we also offer on-premise deployments and integrations via REST APIs, SDKs or a command line interface (a sketch of such an integration is shown below).

So lastly, we'll just take a brief look at our web application. To run the processing via the web application, you basically define the processing flow: all AI capabilities are defined as processing nodes, and additional analysis tools can be connected as well, so that when you press run we can process entire datasets. If this is happening on our web app, we are scaling up the infrastructure behind the scenes for you, so the processing can be done as fast as possible. Another thing we offer via the web app is retraining of the models, so you can create your own annotations in the web app, run the retraining algorithms and generate a new AI that is customized for you. The last functionality I want to highlight is our quality control and quality assurance collaboration tools, through which you can easily annotate point clouds in the web app, share notes between team members, assign tasks to each individual annotator working on the dataset, and monitor the progress of it.
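To give a feel for the REST integration mentioned above, here is a minimal client sketch. The base URL, endpoints, field names and class names are all hypothetical placeholders, not the actual API:

```python
import time
import requests

# Hypothetical endpoint and credentials, purely for illustration.
BASE_URL = "https://lidar-processing.example.com/api/v1"
headers = {"Authorization": "Bearer your-api-key"}

# 1. Upload a point cloud tile.
with open("tile_001.laz", "rb") as f:
    upload = requests.post(f"{BASE_URL}/pointclouds", headers=headers,
                           files={"file": f})
pc_id = upload.json()["id"]

# 2. Start a classification job with the categories of interest.
job = requests.post(f"{BASE_URL}/jobs", headers=headers, json={
    "pointcloud_id": pc_id,
    "pipeline": ["classification"],
    "classes": ["ground", "vegetation", "buildings", "wires"],
})
job_id = job.json()["id"]

# 3. Poll until the job finishes, then download the classified result.
while True:
    status = requests.get(f"{BASE_URL}/jobs/{job_id}", headers=headers).json()
    if status["state"] in ("finished", "failed"):
        break
    time.sleep(30)

if status["state"] == "finished":
    result = requests.get(f"{BASE_URL}/jobs/{job_id}/result", headers=headers)
    with open("tile_001_classified.laz", "wb") as out:
        out.write(result.content)
```

The same upload, run, poll, download pattern is what makes batch processing easy to script, since each tile of a large project can be submitted as an independent job.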
So, to conclude my presentation, the key advantages of using AI and machine learning systems are that they can do fast processing that is scalable; we can offer seamless integration into your workflows or information systems; data is handled in a safe fashion; we have an unparalleled diversity in the number of categories, and those categories can also have dynamic definitions; and last, with the use of AI we always get consistent results. If you have a large manual annotation team, each annotator will do the annotations slightly differently, but with a single algorithm doing all the processing, the definition of everything stays the same. With that, I would like to conclude. If you want to try it out, you can go to our web page and run the processing for free, and if you have more questions you can visit us at our booth in hall 27, booth number 16. Thank you.