Now let me start my presentation: applying deep learning to 3D objects. This presentation has two parts. First, my strategy for approaching a problem: find the right problem, find the right method, read and challenge, and focus. Second, deep learning applied to 3D objects: I will explain VoxNet, how I tuned it, and the results.

A short introduction. I am Masaya Ogushi, and I work at Kabuku Inc. as an image-processing developer. This is my Twitter account. Kabuku Inc. provides services built on 3D data, which is why I work with 3D data.

The first step of the strategy is finding the right problem. In my case, my company has services built on 3D data and I had experience with image processing, so 3D object recognition was a good fit. In other situations you might work on natural language processing or time-series data analysis instead. Also, do not expect a pre-trained model, or machine learning in general, to give you 100% accuracy; check what accuracy your product actually needs.

Next is finding the right method. Can the problem be solved with deep learning? Start with a Google search: from Google you will usually reach GitHub and the latest papers, so you get both the code and the paper. Follow Twitter users in the field; that way it is possible to get the latest information. Books give you structured knowledge. Paper sites let you find the latest methods. Once you know the good keywords, GitHub and Google will get you good code and good knowledge.

The next part is read and challenge. You gather a lot of training data, but should you start with the full dataset? Not possible: a lot of training data takes a lot of time to train, and you have to check your progress quickly. First, prepare a small dataset and check that the module works correctly. Second, prepare a training dataset that is easy to verify; most models can be trained on datasets such as MNIST, so check that the model works there. Then observe whether a method improves accuracy: watch both the training accuracy and the validation accuracy, and if neither is improving, stop the run. Check the results with a visualization tool such as TensorBoard. You can increase the number of experiments you run by improving the calculation speed with a GPU or an optimized CPU.
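The stop-when-not-improving advice above can be sketched as a simple patience-based check. This is a rough illustration, not the speaker's code; the helper name, thresholds, and accuracy values are made up:

```python
def should_stop(val_accuracies, patience=3, min_delta=0.001):
    """Return True when validation accuracy has not improved by at
    least min_delta for the last `patience` epochs."""
    if len(val_accuracies) <= patience:
        return False
    best_before = max(val_accuracies[:-patience])
    recent_best = max(val_accuracies[-patience:])
    return recent_best < best_before + min_delta

# Accuracy climbs at first, then plateaus around 0.79.
history = [0.50, 0.65, 0.72, 0.78, 0.79, 0.79, 0.788, 0.79]
print(should_stop(history[:4]))  # False: still improving
print(should_stop(history))      # True: plateaued, stop the run
```

Tools like TensorBoard make the same judgment visual, but a check like this can also abort a long run automatically.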
Focus. Deep learning has a lot of methods for improving accuracy: the model itself (how deep it is, how it is structured), adjusting the hyperparameters, pre-processing the data, data augmentation if you are working with graphical data, optimizing the SGD algorithm, and so on. What to focus on depends on your situation. If you have enough computation resources and enough data, try a deep and complex model. If you have enough computation resources but not enough data, find a good pre-trained model and focus on the pre-processing, such as data augmentation. If you have neither enough computation resources nor enough data, consider other ways to solve the problem, such as logistic regression, SVM, or random forest; deep learning is probably not the best choice.

That is the end of the first part. The next part is deep learning applied to 3D objects, and I will talk about VoxNet. There are a lot of deep learning models, so how do you choose one? I consider three things: resources, performance, and speed. Resources means computation resources and human resources; performance means accuracy; speed means speed of development. I chose VoxNet. Why VoxNet? It has advantages in resources and speed. On the resource side, the computation requirements are fine because it works in my environment (32 GB of memory and my GPU). On performance, it reaches 83% accuracy in the paper. On speed, it is open source with simple code.

First, the VoxNet pipeline. It maps the 3D data onto a 32x32x32 voxel grid; other sizes are possible. You have to reduce the data size because 3D data is rich data. The input then goes through a 3D convolution, whose filters are effective for extracting features. Let me explain the 2D case, because the 3D case is difficult to visualize. Prepare the input image and a kernel window, and watch the red spaces: the input value is 1, multiply by 5, and add; the next input value is 1, multiply by 1, and add; repeat this across the 3x3 convolution filter and you get one value. Then repeat the action for the green spaces and the blue spaces, and you get the convolutional feature values.
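The sliding-window walkthrough above can be sketched in NumPy. This is a stand-in for what a framework's convolution layer computes, not the talk's actual code; the image and kernel values (including the 5 from the walkthrough) are illustrative:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation, which is what deep-learning frameworks
    call 'convolution': slide the kernel, multiply element-wise, sum."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):            # move the window down ...
        for j in range(ow):        # ... and across
            window = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(window * kernel)
    return out

image = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [0, 0, 1]])
kernel = np.array([[5, 1],
                   [1, 1]])
print(conv2d(image, kernel))  # [[7. 7.] [1. 7.]]
```

The 3D case is the same idea with a third loop and a cubic kernel sliding through the voxel volume.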
Then the 3D case. Take the input volume and a 3D kernel, move the 3D kernel over the whole volume, and you get the convolutional features; move the kernel along and repeat the action. That gives you the first convolutional feature layer; in the figure, the action is repeated seven times. If you are interested in implementing this, with Keras it is only one line. In this case the input shape is 32x32x32, with the last dimension being the channel; you set the kernel size; strides control how the kernel moves; and data_format says where the channel dimension goes. Keras supports the TensorFlow and Theano backends, and the two expect different input shapes, which is why you have to set the channel position.

The next step is max pooling, which is effective for condensing the detected features. Take the convolutional feature layer and select the maximum value: in this case the maximum of the red spaces is 6, so you get 6, and the orange spaces give a maximum of 8. Repeat the same action. Max pooling for 3D data works the same way: apply the pooling window to the convolutional features, repeat, and you get the max-pool layer. With Keras it is again a one-liner where you set the pool size and the channel format.

So the input goes through filtering and detection, then into a fully connected layer whose output size is limited to the number of classes, because this is a classification problem. Then apply the softmax function, which maps the output to a probability distribution and is easy to differentiate. In Keras you define the fully connected layer with the number of classes and a softmax layer. The code then defines the whole model: first the model, then the 3D convolution, then max pooling, then the fully connected layer sized to the number of classes, then softmax. It also defines the loss function and the metrics.
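A NumPy stand-in for the pooling and softmax steps just described (in Keras these are the MaxPooling3D layer and a Dense layer with a softmax activation; this sketch uses the 2D case, and the feature values are made up so that the window maxima are the 6 and 8 from the walkthrough):

```python
import numpy as np

def max_pool2d(x, size=2):
    """Non-overlapping max pooling: keep the largest value in each
    size x size window. The 3D case works the same way per axis."""
    h = x.shape[0] - x.shape[0] % size   # drop ragged edges
    w = x.shape[1] - x.shape[1] % size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    """Map raw class scores to a probability distribution."""
    e = np.exp(z - np.max(z))            # subtract max for stability
    return e / e.sum()

feat = np.array([[1, 6, 2, 8],
                 [3, 4, 7, 5],
                 [0, 1, 2, 3],
                 [4, 2, 1, 0]])
print(max_pool2d(feat))        # [[6 8] [4 3]]
scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores).sum())   # probabilities sum to 1.0
```

Pooling halves each spatial dimension while keeping the strongest responses, and softmax turns the final scores into the per-class probabilities the classifier reports.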
Then you train on the data: set the voxel data and the class labels.

The next step is techniques for improving accuracy. I think improving accuracy has two approaches: the model and the data. The model approach offers a variety of ways to improve accuracy; its disadvantage is that a deep model takes a lot of resources and it is not obvious which model is better. The data approach's advantage is that the effect of changing the data is easy to see; its disadvantage is that the options are limited. In my case, the model approach meant adding random dropout and regularization, and the data approach meant data augmentation for the 3D data plus class weights for the unbalanced category data.

3D data augmentation is a special case, so let me go into detail. I think data augmentation has an advantage over other methods: it does not increase the calculation time, unlike adding layers to the model. Data augmentation changes the data by rotation, shift, and shear. To apply it in code, take the voxel data, apply an augmentation matrix to each voxel, get the changed data, and convert it back to numpy format. For example, the rotation matrix changes the data like this, the shift matrix changes the data like this, and the shear matrix changes the data like this. In my case I added the augmented data to the training data.

Next, techniques for improving speed. Deep learning has many ways to improve the calculation speed, such as using a GPU, CPU optimization, and multi-threading. CPU optimization is very effective for data augmentation, so let me talk about it. If you use TensorFlow, setting the build options makes it possible to apply CPU optimizations; however, you have to check which options are available on your machine.
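The rotation and shift augmentation described above can be sketched on a voxel occupancy grid. This is a simplified stand-in for the talk's general augmentation matrices: integer shifts use np.roll (fine while the object stays away from the borders, since roll wraps around) and rotations are limited to 90-degree steps with np.rot90; the grid and function are illustrative:

```python
import numpy as np

def augment(vox, shift=(0, 0, 0), rot_k=0):
    """Shift a 3D occupancy grid by whole voxels, then rotate it
    rot_k quarter-turns in the (x, y) plane."""
    out = np.roll(vox, shift, axis=(0, 1, 2))
    return np.rot90(out, k=rot_k, axes=(0, 1))

vox = np.zeros((4, 4, 4), dtype=np.uint8)
vox[1, 1, 1] = 1                       # a single occupied voxel
shifted = augment(vox, shift=(1, 0, 0))
print(np.argwhere(shifted)[0])         # moved to (2, 1, 1)
rotated = augment(vox, rot_k=1)
print(np.argwhere(rotated)[0])
```

Each augmented grid is appended to the training set, so the model sees the same object in several poses without any extra cost at prediction time.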
Now the results. This chart shows the validation accuracy. The blue line is the baseline; the red line is data augmentation with shift x and shift y; the yellow line is shift x and y plus class weights; and the green line is adding the augmented training data plus class weights. This table shows the resulting validation accuracy: the baseline is 79%, but adding the shift x and y data and applying class weights achieves 85%.

In conclusion: in the first part, my strategy was to find the right problem, find the right method, read and challenge, and focus. In my case, the right problem was 3D object recognition; the right method was choosing VoxNet; read and challenge meant the data augmentation and the customized model; and focusing improved the validation accuracy.

Let me show the demo. This site finds similar 3D objects; the text is in Japanese, though. Just a moment... it takes a lot of time, so I also prepared a video. OK, here is the demo. Choose the airplane file and upload it, and it can find similar shapes like these. The next example is a bathtub. The last example is the toilet; it can find toilets and chairs like this.

That is the end of the presentation, plus one note: we are recruiting. I think deep learning for 3D objects is a very rare field, so if you are interested in working in Japan, please access the site and send an email. That's all; thank you for listening to my presentation.

Five minutes for questions, if anyone has any.

Hello, thank you for your presentation. Do you have any other metrics, or just accuracy? Because accuracy, especially if you have imbalanced classes, can be pretty misleading. Like any kind of loss, or precision and recall; what about them?

You are asking me about other evaluation metrics, such as the rotation and so on?

Yes, the metrics.

Sorry, I only calculated the accuracy, so I did not calculate any other metrics.

Okay, thank you.
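The closing question is a fair one: with unbalanced classes like the talk's category data, accuracy alone can mislead. A small sketch, with made-up labels, of computing precision and recall for one class:

```python
import numpy as np

def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for one class: accuracy can look good on
    imbalanced data even when the rare class is never predicted."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 9 of 10 samples are class 0; predicting all zeros is 90% "accurate"
# yet both precision and recall on the rare class are 0.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1])
y_pred = np.zeros(10, dtype=int)
print((y_true == y_pred).mean())         # accuracy: 0.9
print(precision_recall(y_true, y_pred))  # (0.0, 0.0)
```

Reporting per-class precision and recall alongside accuracy would make the 79% vs 85% comparison above more convincing for the unbalanced categories.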