Thanks for hanging in there. My name is Prakash. I wanted to present a project I was working on recently to tackle a usability problem I found with using Bitcoin. It started with Bitcoin but then it turned into something else entirely, and I just wanted to share what I found and hear what you all have to say. So let's get started. This story begins with this right here. Does anybody know what this is? It is a Bitcoin address, and it has a face only a mother could love. I was really into Bitcoin at one time, and when I found out that this is the way to transfer the currency it turned me off a little bit, because how are you even supposed to remember this? If anybody could close their eyes and repeat the first ten characters of this string... I don't think anybody could do that. Sure, it's easy enough to copy and paste this and send it from person to person, but what if there were a better way to do this? How would you even organize that? How would you intelligibly share this information with other people? So I wanted to get past this. At the time I was also interested in QR codes. They have a nice way of transferring information without needing a wireless connection or relying on a carrier signal. You can give somebody a QR code, and the QR parser in your phone or camera reads it and is able to pull all the sensible information out of it. So I wanted to use a similar principle to detect Bitcoin addresses in images. Just as a little intro, this is how a QR analyzer looks at a QR code: there are anchor points, then there's formatting information, which is these blue lines, and the actual encoded information is in between, in the yellow spots.
So when I saw this... I know QR never actually took off, partially, I think, because advertisers abused it. You had QR codes on everything and they weren't giving people anything useful, so most people, including myself, tuned them out. But even still, I wanted something you could put into any image anywhere, transfer it, and have the Bitcoin address available. That's where PIL comes in. PIL stands for the Python Imaging Library. Recently its maintainers have not been keeping up with releases, so there is a fork called Pillow that has all the same functionality as PIL; it's just updated more frequently. And PIL is a great library. Python, as most of you probably already know, has an excellent community that has found a way to solve almost every problem in every space, and processing images is no exception. Really, you can do anything you would ever need to do with an image in PIL. You can access raw image data. You can apply a filter. You can crop by path. You can get the EXIF info. You can write text characters as images. So that's why I relied on PIL. The idea is to take a set of characters and write them as pixels into an image, and you can do that pretty easily with a script in PIL, which is right here: you import the Image module from PIL, you create a new RGB image, and then you put the pixel data into the image. This part is pretty straightforward. You take a string of characters, you convert each character to pixels based on its ASCII value (I can get more into that later if you want), and then you write those pixels to a file. The tricky part actually comes after this, when you try to extract those characters from the image. So here we are... I know when you throw a bunch of code on a slide it doesn't actually make much sense to the people reading it.
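As a rough illustration of that write step, here is a minimal sketch using Pillow. The layout is my own invention for the example, not the project's: one pixel per character on a single row, with the character's ASCII code stored in the red channel.

```python
from PIL import Image  # Pillow, the maintained fork of PIL

def write_message(text, path):
    # Hypothetical layout: one pixel per character on a single row,
    # with the character's ASCII code stored in the red channel.
    img = Image.new("RGB", (len(text), 1))
    for x, ch in enumerate(text):
        img.putpixel((x, 0), (ord(ch), 0, 0))
    img.save(path)

def read_message(path, length):
    # Reverse the layout above: read the red channel back as ASCII.
    img = Image.open(path)
    return "".join(chr(img.getpixel((x, 0))[0]) for x in range(length))

write_message("1BitcoinAddr", "hidden.png")
```

One detail worth noting: the file format has to be lossless (PNG here), since a lossy format like JPEG would perturb the pixel values and destroy the encoded characters.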
This makes sense to me because I know where it fits into the whole picture of the project, but let me give you a basic walkthrough. To store each character as a pixel, you convert it via ASCII into a set of three numbers, red, green and blue, which is the RGB triple. Depending on the average use of color in the image and an arbitrary constant that I use to encode each pixel (it's actually a random number), you get a unique value for each pixel inside the image. Then you write it, and the library outputs a token value after writing the string into the image, which you use to get it back out. That's more or less what's happening here. The idea, kind of like a QR code, is that there are anchors written into the image that the pixel parser looks for. And without getting too far off track, this pixel parser functions as a finite state machine. Do you know what a finite state machine is? Okay, cool. So, to recognize the pixels that are the message and differentiate them from the normal pixels in the image, the parser functions as a finite state machine: a process that transitions between a fixed set of states depending on its current state and the input it sees. It's one of those curse-of-knowledge things; I'm trying to figure out how to explain it in layman's terms. Think of it this way: if a robot were to scan the room looking for a chair, it would get a boatload of raw data, just a bunch of colors. To recognize a chair, which is an intelligible form, it would look for one thing first, say an orange-colored pixel. When it hits an orange-colored pixel, it transitions to a state where it expects another pixel that corresponds to the chair. It keeps doing that until it reaches a pixel bordering something that looks like the end of the chair, and then it transitions back to its base state, the initial state.
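To make the character-to-pixel mapping concrete, here is a toy version of it. The exact scheme is my own invention for illustration: the `KEY` constant stands in for the arbitrary constant mentioned above, and the real project also mixes in the image's average color, which this sketch omits.

```python
KEY = 97  # stand-in for the talk's arbitrary encoding constant

def char_to_rgb(ch):
    # Shift the ASCII code by KEY and spread it across red and green,
    # so the stored pixel values are not raw ASCII.
    code = ord(ch) + KEY
    return (code % 256, code // 256, 0)

def rgb_to_char(rgb):
    # Invert the mapping: recombine the channels and undo the shift.
    r, g, _ = rgb
    return chr(r + 256 * g - KEY)
```

Any printable character round-trips through `rgb_to_char(char_to_rgb(ch))`, which is the property the extraction step depends on.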
Feel free to ask questions afterward; I know that can sound kind of confusing, but by all means let me know and I can clarify. So here, each pixel of the string is put in between two anchor points. When the pixel parser hits an anchor point, it transitions to a state where it expects pixels that correspond to characters, and when it hits the second anchor point, it stops collecting pixels. Then it takes each of the collected pixels, converts them to ASCII characters, and converts each of those back to Unicode or whatever it is you were trying to encode. So here we go; let me show you how this works. Can you see my terminal? No, you can't. Can anybody shout out a message? So what's happening is... the idea behind steganography, just a little background, is to conceal text within images. The whole point is that a Bitcoin address is sensitive information directly tied to your money, so the idea was to encrypt the message and give out a token. In a real-world scenario, if you wanted to use this, you would encode your wallet into the image, give the image to somebody else, and also give them the token, the passphrase. If you think this is clunky, I kind of agree. The original idea, when I first came up with this, was to have a web interface where the token would be tied to a user's profile and everything would happen over the web. But in case you don't have an internet connection, none of this relies on the web; it's just the power of math. So here we are: the token and the image, and now we're going to get it back out. But before that, let me show you what the image looks like. I wrote the image to doge1. Usually the written pixels take on the average hue of the surrounding pixels so you can't really tell they're there, but I changed that for today so you can see what's going on.
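Before looking at the demo image, the two-anchor state machine described above can be sketched like this. The anchor color and the flat single-row pixel stream are made up for illustration; the real parser walks actual image pixels.

```python
ANCHOR = (255, 0, 255)  # hypothetical anchor color

def extract(pixels):
    # Two states: "searching" until the first anchor is hit,
    # then "collecting" until the second anchor ends the message.
    collecting, message = False, []
    for px in pixels:
        if px == ANCHOR:
            if collecting:
                break            # second anchor: message complete
            collecting = True    # first anchor: start collecting
        elif collecting:
            message.append(chr(px[0]))  # red channel holds the ASCII code
    return "".join(message)

stream = [(9, 9, 9), ANCHOR, (72, 0, 0), (105, 0, 0), ANCHOR, (1, 1, 1)]
extract(stream)  # "Hi"
```

Everything before the first anchor and after the second is ignored, which is how message pixels get differentiated from the ordinary pixels of the image.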
So if you look, there are two things going on here. It's kind of hard to see from here, but in the top left there's something called a beacon that contains the location of the first anchor. The idea is to distribute these pixels around the image, so every anchor contains information pointing not only to the current message string but also to other anchor points scattered throughout the image. Right now the beacon points only to the first anchor, and this string of rainbow-colored pixels is your message, the encoded, encrypted message. Now let's get it back out.

And yeah, that was it for my formal presentation. I wanted to leave ample time for questions and answers; I've found that works much better than me just spewing information in your faces. That part I was running into a few bugs on, but you caught me. If you fork the project and help me out, we can totally make that happen. Oh, it's really just a space problem. What's that? Oh, sorry, the question was: what was the problem with hiding the pixels in the image? It's really a space problem. To write those pixels into that part of the image, you would have to store an x-by-y matrix, if you will, of the image, then take the pixels above and below and average them. Technically it's not that bad; it's just kind of tedious. Go for it. You mean... I'm trying to understand the question. So you're saying it would be easier to hide the pixels in a high-res image? That's a good point. Because the number of pixels in the message is fixed, a high-res image has more pixels overall, so it would definitely be harder to detect the message pixels in there. If you converted that high-res image to low-res, though, you would probably lose some of that information. But that is a good point, thanks. I've wondered about this myself as well. Do you mean shifting bits onto the pixels? I have heard of this. I don't know the specifics of how that's implemented either.
But I would love to learn more about that if you can forward some info. Or if you want to fork the project, by all means. I need a proper plug here. Any more questions? Where is this? Is it on GitHub, you said? Yes, this is on GitHub, and this is the link. By all means fork it, and contact me if you have any questions. That's a good next request. Right now I'm using Python's uuid library to do that. It could be anything, but yeah, I probably should do that. Okay, anybody else? Awesome. Thank you all very much for coming.