Hi, everyone. I'm Joe Lee, a student, and I'm here from CYBRAC. Today we're going to talk about steganography in machine learning models.

Let's open with what we'll do today. First, we'll look at what steganography is and how it's traditionally done. Next, we'll see how AI gives it a new attack surface, and then some mitigation methodologies that can be used to defend against it effectively.

So, what is steganography? If you had to define it: steganography is concealing information inside another piece of information, or inside an object, in such a way that the existence of the hidden information is not apparent to a human observer. And that's exactly what we'll do here. But first, let me do a quick demo. I'm taking a secret message, and I'll hide it inside a piece of cover text. Now I'll run the embedding, and then I'll extract the hidden message back out. Okay, now let's check the result: the cover text looks exactly the same as before. The tool works by replacing characters, or inserting extra invisible characters next to the visible ones, placing them right beside each other, so the embedded data rides along while the text still reads normally, and extraction just collects those characters back together to rebuild the message.

Alright, that's pretty neat. Now let's look at the classic setting: steganography in images. So how does that work?
If you want to hide information in an image, you do it by manipulating the pixels. Let's first look at this image from far away, and then zoom all the way in. Up close you can see the image is really just a grid of pixels, and each pixel is made up of three colour channels: red, green, and blue. So let's take one pixel and look at its RGB values: 177, 103, and 30.

Now, here's the question: what happens if you change the very last bit of each value? That last bit is the least significant bit, so it contributes almost nothing to the colour. If you compare the original pixel on the left with the modified pixel on the right, you can see that it doesn't exactly match the original, but the difference is so small that to a human the two look like the same colour. So this is a change that isn't detectable by eye, because the least significant bit barely influences the value at all, and nobody can tell the pixel is carrying hidden data.

Now, when we read back the last bit of each of the three channels, we get 001. And just like that, we've embedded and recovered hidden data: that's steganography. So now, how do we scale this up from three bits to a real message? We can do that by using many different pixels.
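The pixel walkthrough above can be put into code. This is a minimal sketch of LSB embedding on a single RGB pixel; the helper names and the example payload bits are mine, not from the talk:

```python
def embed_bits(pixel, bits):
    """Overwrite the least significant bit of each colour channel with a payload bit."""
    return tuple((channel & ~1) | bit for channel, bit in zip(pixel, bits))

def extract_bits(pixel):
    """Read back the least significant bit of each colour channel."""
    return tuple(channel & 1 for channel in pixel)

original = (177, 103, 30)            # the RGB pixel from the talk
stego = embed_bits(original, (0, 0, 1))
print(stego)                         # (176, 102, 31): visually the same colour
print(extract_bits(stego))           # (0, 0, 1): the hidden bits come back out
```

Repeating this over many pixels turns the image into a carrier for an arbitrary bitstream.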
So we take each pixel, and for every pixel we use its least significant bits to carry a few bits of our message. Visually, we're building something like this. To embed the information, we just write the message bits, in order, into the least significant bits of successive pixels; to extract it, we read those same bits back out and join them together to rebuild the message. So we really are hiding the message inside the picture itself. And it doesn't look suspicious at all! Why? Because visually the image is indistinguishable from the original. A typical photo has millions of pixel values, so there's plenty of capacity to carry a payload of a few kilobytes, embed the information, and extract it again on the other side.

So that's the well-known technique for image data. Now, what about this other type of data: machine learning models? That's the motivation for this project: models are used and shared by everyone. So what can we exploit? First, there's the sheer amount of information in a model: it's essentially an enormous collection of numbers. Second, huge model files are transferred all the time, so moving one around raises no suspicion. And third, hardly anyone inspects the raw parameters of a model they download; remember that this makes the cover even better. There are several motivations for using this as a covert channel: it could be used to smuggle data past data protection measures, or to exfiltrate information; and, used covertly, it can serve as a command-and-control channel, C2. And on top of that, a model has a very large number of parameters, which gives us a very large embedding capacity.
And finally, it's simply not practical to inspect all of those parameters by hand. We can't realistically explore every weight to find a payload hidden inside. So we need to understand what we're actually working with. A model is just a huge pile of numbers, so let's take one. Say I have a weight like 0.1214. How can we represent this? How does a computer represent and understand this number? It uses the IEEE 754 floating-point standard. Let's split this number into three parts: the sign, the exponent, and the mantissa, and convert each into its binary equivalent. You don't really have to understand all of it; the key takeaway is that the 32 bits of a float are not all equally important.

So my methodology is as follows. We use those 32-bit floats, with byte embeddings or bit embeddings, to hide information in the 32-bit space each weight makes available to us. Extraction then becomes a matter of knowing the embedding parameters. For example, you might use only the first, third, and fifth bits. Complexity can be introduced in which layers carry the embedding, which bit positions are used, which weight positions, and even the bit order, which gives us huge flexibility and configurability.

So I'll be doing a quick demo of what this entire thing looks like. I've set it up as a nice tutorial so you can follow along. In this part, basically all we're doing is importing VGG16. (Hmm, that doesn't work for some reason. One second.) So we're importing our packages and using a pre-trained VGG16, which is an image classification model. And here, this is just me running the model.
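The sign/exponent/mantissa split can be reproduced with Python's struct module; here is a small sketch of the decomposition (my own code, not the speaker's):

```python
import struct

def float32_parts(x):
    """Split a number, as an IEEE 754 single-precision float, into its
    1 sign bit, 8 exponent bits, and 23 mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]   # raw 32-bit pattern
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

sign, exponent, mantissa = float32_parts(0.1214)
print(sign, exponent, mantissa)   # value = (-1)**sign * 1.m * 2**(exponent - 127)
```

The 23 mantissa bits are the ones the embedding later targets, since the lowest of them change the value the least.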
So you can see here: the golden retriever gives us a high confidence, this one 100 per cent, the motorcycle 72 per cent, which is fair because it's actually a small motorcycle in the photo, and this one 99 per cent. So yeah, the model is pretty good, basically.

Here we are just reading our exploit, or payload, and saving it to this variable. Our payload in this case is actually a PHP shell, which you can easily find online; it's open source. Specifically, it's a single-file PHP shell that basically gives you RCE, remote code execution, and it's mostly used in CTFs and similar settings. And here, all we're doing is importing the library that we created. With this library we essentially do some pre-processing, and we extract the layers, or rather the weights of the first layer of the model, and here we embed the payload into the model. So this is our embed function right here.
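The embed and extract steps just described can be sketched in plain numpy. This is my own minimal illustration of the underlying idea, hiding payload bits in the lowest mantissa bit of float32 weights; the function names are mine, not the speaker's library:

```python
import numpy as np

def embed_lsb(weights, bits):
    """Write one payload bit into the lowest mantissa bit of each weight."""
    ints = weights.view(np.uint32).copy()          # reinterpret floats as raw bits
    ints[:len(bits)] = (ints[:len(bits)] & np.uint32(0xFFFFFFFE)) | bits.astype(np.uint32)
    return ints.view(np.float32)

def extract_lsb(weights, n):
    """Read back the first n payload bits from the lowest mantissa bits."""
    return weights.view(np.uint32)[:n] & np.uint32(1)

w = np.random.rand(1000).astype(np.float32)        # stand-in for a weight tensor
payload = np.random.randint(0, 2, 64)              # 64 payload bits
stego = embed_lsb(w, payload)
assert np.array_equal(extract_lsb(stego, 64), payload)
print(np.max(np.abs(stego - w)))                   # tiny: one mantissa LSB barely moves a value
```

Flipping the lowest of the 23 mantissa bits changes a weight in [0, 1) by at most about 2**-24, which is why the model's accuracy is essentially untouched.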
As you'll notice, there's a resolution parameter here. It controls the granularity of the embedding: with byte resolution we embed one byte at a time, and we also have bit resolution, where we embed a single bit at a time, which is finer-grained and, I guess, even stealthier. Okay, so here all we're doing is extracting, trying to extract the embedded information back out, and you can see we successfully recovered the entire PHP shell, so both the embedding and the extraction work.

Alright, so now you might ask: we changed the model, because we embedded stuff inside it, so wouldn't that degrade the model? Like, the model's predictions would shift now? But that's actually not the case. If you look at this, the golden retriever actually scores slightly better; this one is of course still 100 per cent; the motorcycle is the same; and the rest are the same. So essentially we managed to show that by embedding and extracting in strategically chosen regions and at strategic resolutions, we can keep the impact on the model's performance at essentially 0 per cent. Our own studies were mostly with image classification models, so for other architectures we haven't measured the interaction, but mathematically the same reasoning applies: we can embed in up to around 20 bits of the mantissa with virtually zero performance effect and very little change in accuracy. So no suspicion is raised even if accuracy is used as a performance-analysis metric: the model retains its accuracy, which lowers a defender's ability to detect the tampering. Imagine you receive a tampered model and try to run it on some images; the golden retriever is still classified correctly, so it raises no suspicion at all.

Okay, so why does this least-significant-bit approach let us change values without consequences? Consider how binary works. The digits at the end of a binary number are worth far less than the digits at the front, the same as in our base-10 system; only in binary, because we have a lot
more digits after the point, each successive bit matters exponentially less. And you can actually use a lower embedding density, plus strategic choices, to achieve a minimal performance decrease in the model. Let's say we have 14 layers. Not all 14 layers carry the same weight: some layers affect the performance of the model more than others. So we can choose the layers that don't really affect the model's performance, and perform the embedding in those.

Right, so you might ask now: this seems powerful, so what do we do about it? Mitigation methodologies are here to save you. Some key considerations first. One: the sanitisation should preserve the model's performance. Think of an antivirus that detects malware and strips the malware out of a file; ideally you want that same seamless flow of information here, so the model should stay accurate after cleaning. Two: you want high certainty of sanitisation, because let's say you do some post-processing of the model, like stripping or scrubbing the embedded information: you don't want your tool to say "the model is safe" and then, boom, the payload is still in there. Three: preservation of the original can also be quite interesting, because in the case of covert communications you may want to keep the original model and run analysis on it later.

Okay, so with those constraints, the mitigation methodology is to use the reverse logic of the attack: if writing into those low bits doesn't affect model performance, then overwriting them doesn't either. So we can scrub the model by overwriting the low bits with zeros or with random bits, to ensure the payload can no longer be extracted. Because envision this scenario: let's say you have a payload that is 10 kilobytes, for example. All you need to do is scrub 1 kilobyte of that 10-kilobyte payload and it's never going to run. Of course, it's possible for an attacker to try to circumvent that by adding redundancy, something like an error-correcting encoding. Let me show a really quick demo of scrubbing the model.
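A minimal sketch of that scrubbing step, under the same float32/numpy assumptions as before (my own names, not the talk's code): overwrite the low mantissa bits of every weight with random bits, so any embedded payload is destroyed while the weights barely move.

```python
import numpy as np

def scrub(weights, n_bits=1):
    """Overwrite the n_bits lowest mantissa bits of every weight with random bits."""
    ints = weights.view(np.uint32).copy()
    mask = np.uint32((1 << n_bits) - 1)
    noise = np.random.randint(0, 1 << n_bits, size=ints.shape).astype(np.uint32)
    ints = (ints & ~mask) | noise        # clear the low bits, then randomise them
    return ints.view(np.float32)

w = np.random.rand(1000).astype(np.float32)
clean = scrub(w)
print(np.max(np.abs(clean - w)))         # tiny: model performance is preserved
```

After scrubbing, any attempt to extract a previously embedded payload yields garbage, exactly as in the demo that follows.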
We implemented this using the same embed and extract functions. Here we're just generating a bunch of random bytes and overwriting the model with them, and when we then try to extract the payload we embedded before, which previously succeeded, we now just get a bunch of garbage. So yeah, that's one way to sanitise a model in this kind of context, and again, as we've shown, it doesn't really affect performance.

Okay, so now let's look at this from the defensive cyber-security standpoint, which is recovery. Say you have an opponent somewhere that's transmitting information covertly using this, and of course you don't only want to scrub it, you want to snoop in on the information. How hard is it to recover? It's quite hard, and that's quite an understatement. Let's do some math. Suppose the embedding could use any subset of the 32 bits of each float. Then we can perform the summation, for k equals 1 to 32, of 32 choose k, and it gives us this number here, which is the number of possible bit-position combinations the attacker can choose from. I don't think anyone, or at least I don't have the compute at home, can brute-force all of that. So let's try a more reasonable range: let's say we only use the 23 mantissa bits and do the same summation, and we arrive at this number. It still looks really hard to brute-force. So let's try to narrow down our range further to see where it starts becoming feasible: if you restrict it more, you get this number, which again I'm pretty sure I cannot brute-force either, but maybe the NSA with a GPU cluster possibly could. But the real question is how fast you can test each extraction, because let's say you have 100 million candidate configurations and each extraction takes 1 minute: that's not really feasible at all. And that's where our final trick comes in. So here all we're doing is just timing the extract functions, and we're measuring the time taken right here. Again, we have two extract functions, byte extract and bit extract.
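Going back to the search-space estimate, the two summations are easy to check: summing over every nonempty subset of n bit positions gives 2**n - 1.

```python
from math import comb

# Number of bit-position choices an attacker must consider, ignoring the
# extra choices of layers, weight positions, and bit order.
positions_32 = sum(comb(32, k) for k in range(1, 33))   # any of the 32 float bits
positions_23 = sum(comb(23, k) for k in range(1, 24))   # mantissa bits only
print(positions_32)   # 4294967295 == 2**32 - 1
print(positions_23)   # 8388607 == 2**23 - 1
```

Even the mantissa-only case leaves over eight million position subsets per configuration, before multiplying in the other embedding parameters.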
As you can see, for byte extract it takes virtually 0 seconds to extract 400,000 bytes of data from a 10,000-by-10,000 numpy array. So basically, byte extraction takes virtually no time; in fact, even if you max out your whole system, it takes something like 0.1 seconds. Bit extraction is where it becomes a bit more complicated. You'll notice that with bit extraction, as the size increases, the delay grows, because there's effectively a loop happening in the background even though you don't explicitly write one. Basically, for 1,000 iterations it takes 17.5 seconds to extract 400,000 bytes. That's still quite fast, but it's very possible to improve the bit extraction further.

So, some closing thoughts and notes. One thing I'd like to show is our optimisation tricks. This project's original implementation was maybe 250,000 to 1,000,000 times slower than our current code. If you notice, we're doing a lot of binary, that is bitwise, operators here, and that's one way binary operators can really save your life when you're trying to perform a lot of operations at the same time: instead of loops, you can consider trying that. And that's all. Thank you.
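The loop-versus-bitwise gap the talk ends on can be illustrated with a small benchmark of my own (not the speaker's code): extracting LSBs with a Python-level loop versus a single vectorized bitwise AND.

```python
import time
import numpy as np

arr = np.random.rand(1_000_000).astype(np.float32)
as_ints = arr.view(np.uint32)            # reinterpret the weights as raw bits

t0 = time.perf_counter()
slow = [int(v) & 1 for v in as_ints]     # per-element Python loop
t1 = time.perf_counter()
fast = as_ints & np.uint32(1)            # one vectorized bitwise AND over the whole array
t2 = time.perf_counter()

assert np.array_equal(slow, fast)
print(f"loop: {t1 - t0:.3f}s  vectorized: {t2 - t1:.5f}s")
```

On typical hardware the vectorized version is several orders of magnitude faster, which is the kind of gap the optimisation tricks exploit.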