Hi, it's so nice to be here. I'm so excited. As mentioned, thank you very much for the kind introduction. My name is Mariko, and that's my Twitter handle. I just tweeted the link to these slides; you probably don't have any problem with a screen this big, but if you want them on your lap, the link is there. I work at a company called Scripto, and my title is Textile Engineer, although that's not what I do for my day job. My company makes software for TV shows (collaborative writing, lots of text), but they let me choose my title based on my side project. One of my side projects is a meetup called BrooklynJS, which I co-organize. If you're ever in Brooklyn on a Thursday, please come visit. We have a lot of media folks down in New York, so we get to discuss a lot of data visualization as well.

And my favorite data type is the array. I really, really like arrays. I like mapping them, I like reducing them, but I like to do that at a physical scale: I like mapping and reducing the array that's on the needles of a knitting machine. I have a whole bunch of research about how knitting is coding, and if you're interested, I'm more than happy to talk about it. I got so interested in physical arrays that I even wrote a programming language, a domain-specific programming language, for making textile patterns for knitting. It basically outputs arrays. I put the array into D3 to visualize it better, and then I put it back into the machine, and the result is something like a D3 scarf: a D3 visualization, taken into a JavaScript application, taken into an array, which a Node app communicates to the machine. So it's JavaScript all the way. I'm a JavaScript developer, if you can't tell. And sometimes requests come in: this one was Travis CI's API usage, which I made into a scarf and gifted to Travis CI. So this is what my weekend desk looks like: code, visuals, graphics, and then machine and yarn. And that somehow all makes sense to me.

This started in earnest about a year and a half ago. I had a problem, or a journey to take: I wanted a cute cat photo on my jumper, or sweater, or anything I want. I wanted to make that happen. And I realized, first of all, that my knitting machine is only 200 pixels wide, so I have to resize the image. And yarn doesn't come in hex colors, so I have to figure out all kinds of image processing, essentially. I'd Google things and find a Wikipedia page or research from a university, and all I wanted was to turn this cat image into yarn. Can I please just do that? It was a lot of frustration. I discovered I could do it in Photoshop, but Photoshop is also frustrating: I don't understand what all those buttons and curves do. So I arrived at a question: can I make all of that happen in the languages I use every day, which are JavaScript and HTML, in the software I use every day, which is a browser? And that's how this started. And yes, I can. I can use something called Canvas. Canvas is designed for this; even MDN says it's an HTML element that can be used to draw graphics via JavaScript. So for the next 25 minutes or so, I am going to talk about how I use Canvas to do various kinds of image processing. My primary interest is printing out a blanket, but I promise you will hopefully take away something useful for your data visualization or your web application.
If you are an image processing professional, this might be very general, maybe basic, but bear with me. So let's talk about Canvas. You can create a canvas as a DOM element and give it a width and height, or you can just write an HTML tag and reference it by ID. But creating the canvas itself is like buying a piece of land without knowing where that land is located. You need to get something called a context in order to establish where it is and what kind of building you can build on it. You can get a WebGL context, used for a lot of 3D and heavy computing, or a 2D context, which is very flat and deals with a lot of graphics. Either way, getting a context gives you and the canvas a shared set of language to communicate in. You might speak WebGL, you might speak 2D; you might speak with a New York accent, you might speak with a Boston accent. It's the same thing: a context is a set of language. For all of my examples, I'm going to use the 2D context.

Once you've established a context, you can use human-readable language, like "rectangle": put a rectangle in, or even get an image from another part of the HTML and draw the image in. But I like data, and I like arrays, and Canvas has a thing for that. Canvas can communicate with you in data, using the getImageData and putImageData methods. Basically, getImageData gets the data out of your canvas, you make the magic happen, and then you put the data back in using putImageData. So for the examples I'm going to show, this is the process I'm taking: you have some kind of image; you create a blank canvas (you don't know where it is yet); you get a 2D context, so now you have a set of language; and you load the image into the canvas. Now that the image is in the canvas, you can get the data out, do some cool things, and then put the image back in.

So what does that data look like? When you call getImageData, it returns an object called ImageData. Some of it is very straightforward: width and height are the width and height of the image. Here I have a very small three-by-three image. But then there's a thing called data, which is a bunch of numbers, and at first I didn't understand it at all. How do I make sense of this? To understand this data, you need to understand what's in each pixel. So let's look at this pink square, and imagine it's a single pixel. It may or may not have googly eyes, but it's certainly fun to put googly eyes on it. Underneath a single pixel there are four numbers, all between 0 and 255. The first three are light bulbs illuminating at different levels: the red, green, and blue values. You may know them as RGB values. You change those to change the color; it's just the levels at which the three light bulbs are illuminating. And if all three have an equal number, you always get gray, and this is how you grayscale an image: take the three colors in, do whatever math you like (average them, take the green channel, luminosity, whatever), and write the same number back into all three channels. The last number is purely for software: the opacity value. Lower opacity means more transparency; it's how Canvas knows how much of the color to blend in when two elements overlap each other. With this understanding, you can read the image data in chunks of four: the first three numbers in each chunk are the color light bulbs, and the last one is opacity.
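As a concrete sketch of that whole round trip, here is roughly what it looks like in code, assuming a placeholder image URL: load an image into a canvas, pull the pixels out with getImageData, write the same averaged number back into all three color channels to grayscale it, and put the pixels back with putImageData.

```js
// A minimal sketch of the canvas round trip: image in, pixels out,
// grayscale math, pixels back in. The image URL is a placeholder.
const canvas = document.createElement('canvas'); // buy the land
const ctx = canvas.getContext('2d');             // establish a set of language

const img = new Image();
img.src = 'cat.png'; // placeholder
img.onload = () => {
  canvas.width = img.width;
  canvas.height = img.height;
  ctx.drawImage(img, 0, 0); // load the image into the canvas

  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const data = imageData.data; // flat array: r, g, b, a, r, g, b, a, ...

  for (let i = 0; i < data.length; i += 4) {
    const avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
    data[i] = data[i + 1] = data[i + 2] = avg; // same number in all three = gray
    // data[i + 3] is opacity; leave it alone
  }

  ctx.putImageData(imageData, 0, 0); // put the edited pixels back in
  document.body.appendChild(canvas);
};
```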
So if I change the first number, the red light bulb, to 255, which is the max, turn the green and blue light bulbs all the way down, and update, then the pixel turns red. That's how we are going to manipulate the image from now on.

But this data is a one-dimensional array, and all I have besides it is the width and height. How do I know the x-y coordinate of each pixel? Well, I like to think of it like writing a letter on graph paper. When I'm writing a letter to somebody, the words coming out of my mind are a stream, one-dimensional, but the output itself has dimensions: whenever I run out of space on a line of the paper, I wrap to the next one. And that's how these pixels work. In code you might see something like a double nested loop: the outer loop goes through the y-axis, the inner loop goes through the x-axis, so together they reach every x-y coordinate; and since each pixel has four numbers, you calculate an index to get to each addressable value.

Even with just these numbers, you can have fun projects. I created one called ghost image. It's a one-by-one-pixel div with a whole bunch of box-shadows that create the illusion that there's an image here, in that corner. That is actually not an image tag; it's box-shadow on a one-by-one-pixel div. Not really practical (it uses a lot of memory to operate), but certainly fun for pranking people.

So I got all the data in, and I know how to edit it. Can I do image filters like Instagram? I started researching, and I got stuck, because everybody was talking about math, and I don't really speak math; I don't read the squiggly function notation. So I like to think of it as a playground slide instead: the shape of the slide determines what kind of filter you get out. And this is a data visualization conference, so I created a visualization of this slide. This is the original slide: if the input is 128, the output is 128. Let's invert this image. The invert filter is quite literally inverting the slide: now a higher input value gives a lower output value, and a lower input gives a higher output. And this is how you invert an image. Brightness is shifting the original slide up and down. If I lighten the image, all the numbers from about 138 to 255 just go fully on, all the light bulbs fully illuminated; if I shift the slide down, the image gets darker. Contrast: I was the kind of person who guesstimated the contrast and brightness values in Photoshop until I was satisfied, up until I understood what contrast does. Contrast is the slope of the slide. A shallow slope is low contrast, meaning the input comes in from 0 to 255 but the output range is limited, so you get low-contrast color. High contrast is the opposite. And it doesn't have to be a straight-line slope. It can be a step, creating a posterized effect and limiting the colors used in the image. Or you can do something like solarize, which is high contrast on the two ends and inverted in the middle, creating that cool color-negative-photo kind of effect.
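Those slides translate almost directly into code. Here is a minimal sketch, where each filter is just a function from an input value (0 to 255) to an output value; the exact curve shapes below are my own illustrative choices, not the ones from my slides.

```js
// Each filter is a "slide": a function from input value (0-255) to output.
// These particular curve shapes are illustrative.
const invert = v => 255 - v;                     // flip the slide
const brighten = v => Math.min(255, v + 50);     // shift the slide up
const lowContrast = v => 64 + v / 2;             // shallow slope
const posterize = v => Math.round(v / 85) * 85;  // steps instead of a line

// Run every color channel through a curve, leaving opacity alone.
// Pixel (x, y) starts at index (y * width + x) * 4 in the flat array.
function applyCurve(imageData, curve) {
  const { width, height, data } = imageData;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      data[i] = curve(data[i]);         // red
      data[i + 1] = curve(data[i + 1]); // green
      data[i + 2] = curve(data[i + 2]); // blue
    }
  }
  return imageData;
}
```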
And if you take a grayscaled image and put it through a two-step slide, you can threshold the image, and I'm pretty sure a lot of image processing and computer vision uses this as a first step to get to a usable data set, so you can locate where things are. Taking that grayscaled data, you can also apply pseudocolor. I know the rainbow pseudocolor scale is not recommended, and you can see why if I graph it this way: the blue steps are very harsh, but on the green steps you can't really tell the steps apart. That makes sense, since green is fully on for longer than any other color, and our eyes are more receptive to green. So you can change those lines to create different pseudocolor scales to give color to your data set.

These techniques are not only for creating Instagram filters. I had this challenge of taking old data, so I didn't have any coordinates or any data points, I just had an image from an archive, and I wanted to create an interactive map. How do I go about that? Well, you can do something very basic, like creating a tag and styling it in CSS, but a box doesn't follow the shape of Boston; it's not accurate. You can use the HTML map element and draw a polygon to get it roughly right, you know, for central Boston, and have some kind of click event. But if I want a pixel-perfect click event, I can take the image into whatever image editing software I have. I put it on my iPad, drew the outline, and filled it with white, and it literally took me fifteen seconds. I grayscaled it. Now the fully white pixels are the target I want, so I thresholded it. Now I have a reference map: if a click comes in at some coordinate and the pixel there is black, do not trigger the click event; if it's white, trigger the click event. So I have a basic function, and a hidden canvas holds this thresholded image. It doesn't have to overlap the visible image directly, it can just live in memory, but just for reference, I have it overlapping here. Any time a user click event comes in, I check the reference (is it black or white?) and then trigger the event. So you can use image processing techniques not only for creating a pretty photo for Instagram; you can use them for interactivity.
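Here is a minimal sketch of that hit test, assuming the thresholded reference image has already been drawn into a hidden canvas at the same size as the visible map; the #map element and the triggerMapEvent callback are placeholders.

```js
// Hidden canvas holding the black-and-white thresholded reference map.
// The #map element and triggerMapEvent are placeholders.
const reference = document.createElement('canvas');
const refCtx = reference.getContext('2d');
// ...draw the thresholded reference image into `reference` here,
// at the same size as the visible map...

document.querySelector('#map').addEventListener('click', (e) => {
  const rect = e.currentTarget.getBoundingClientRect();
  const x = Math.floor(e.clientX - rect.left);
  const y = Math.floor(e.clientY - rect.top);

  // Look up the single pixel under the click in the reference map.
  const pixel = refCtx.getImageData(x, y, 1, 1).data;

  if (pixel[0] === 255) {    // white pixel: click is inside the target
    triggerMapEvent(x, y);   // placeholder for the real interaction
  }                          // black pixel: ignore the click
});
```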
That got me interested in the next question: now that I can control color, can I change the shape of the image, with blur and sharpen? And again, I ran into a keyword (it got mentioned in two talks this morning): kernel convolution. To me that sounded like a yummy cereal. I didn't know what it was, but cool. The math itself wasn't too complicated, but the explanations were very complicated. So I like to think of kernel convolution as a pixel social graph: it's just thinking about the relationships of a pixel. Let me explain. You have a single pixel whose color you're going to change, sometimes called the kernel. It has friends surrounding it, and together they form a convolution matrix. In the case of blur, the center pixel wants to blend in with its friends as much as possible; it doesn't want to be noticed. So you give a number to each friend, combine all the colors, divide by the sum of those numbers (in this case nine), and you get the new color for this pixel. What does that look like in a bigger image?

So I have this very vivid pink line going. We learned that each pixel has three colors, so we do that for all three channels: get the number for red, get the number for green, another for blue, and then do that for every pixel, one by one, and you get a blurred image. That's how you do kernel convolution, or as I like to call it, the pixel social graph. The blur we just did is called a box blur: every pixel is equal. This is the original image, and it's subtle, but it gets blurred. In reality, though, your friends are not all equal. You have close friends and distant friends. So there's Gaussian blur, where a close pixel gets a higher number and a distant pixel gets a lower number, creating a softer, more natural blur. And this finally made sense to me as a web developer: I always wondered, when I make a filter in SVG, why I have to type feGaussianBlur. Why is it not just "blur"? Turns out blur comes in different kinds. So now I know. Sharpening is the opposite: you do not want to blend in with your friends, you want to be unique, you almost lean away from them. So you make your own number really high, make your friends' numbers really low, and do the same computation to get a sharpening effect.

I used this technique to create a fake tilt-shift application. You put an image in, specify an area, and start processing, and the surrounding area gets more and more blurred, faking a tilt-shift effect. That was one project, but it taught me about two more challenges I would encounter if I were to use this in a real project: performance matters, and so does exporting the image. I can do all the image editing there, but it lives in the canvas; what if I want to tweet it, or share it on Facebook? I want to export the image. So there are two things that need to be addressed, and I'm going to go through them in the remaining minutes.

First, performance. This gets really slow, and because JavaScript is single-threaded, that slow process blocks your UI: the click events and everything. You can make it faster with a better algorithm, better functions that you write, but inevitably you're doing a lot of calculation on the image, and if you want to use an original image, like the ten-megabyte one that comes off your phone, you will have a problem with a blocked UI. For that you can use something like Web Workers. Web Workers are a way to run your JavaScript in a background thread, off the UI thread. What does that mean? I like to think of a web worker as the International Space Station. Think of the browser and the DOM as the Earth: all the elements are here, sharing the UI thread. Then you launch the International Space Station. You can talk to it, exchanging data, but the web worker up on the International Space Station cannot touch the elements down on Earth. A web worker does not have access to the window object, and you cannot use a jQuery selector inside a web worker.

So what does this look like in code? Quite simple, just this. Let's go through it one by one. Launching a web worker is literally just typing new Worker and giving it the file you want to execute in the worker thread; that's launching your International Space Station. I mentioned that a web worker does not have access to window or the DOM, so even though you might have Underscore or some utility library in your main thread, you need to load the library again if you want to use it in the worker, using importScripts. Once that's done, communication between the main thread and the worker thread happens through the postMessage method and the onmessage event. So while the main thread is taking care of click events and all the UI elements you want the user to use, you can push the expensive calculation off to the worker side. And once the data comes back and you don't need the worker anymore, you can just terminate it. It's like sending the ISS into a black hole, you know.
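Putting the convolution and the worker together, here is a minimal sketch along the lines of what my demo does, under my own assumptions: the file names are placeholders, and canvas and ctx are set up as before. The main thread posts the pixels up to the worker, the worker runs the 3×3 box blur we just walked through (every friend weighted equally, divided by the kernel sum of nine), and posts the result back down.

```js
// main.js -- a sketch; file name is a placeholder, canvas/ctx as before.
const worker = new Worker('blur-worker.js');   // launch the ISS
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);

worker.postMessage(imageData);                 // send pixels up to the station

worker.onmessage = (e) => {
  ctx.putImageData(e.data, 0, 0);              // blurred pixels come back down
  worker.terminate();                          // send the ISS into the black hole
};
```

```js
// blur-worker.js -- no window, no DOM up here, just data.
// importScripts('underscore.js'); // reload any libraries you need up here

onmessage = (e) => {
  const { data, width, height } = e.data;
  const out = new Uint8ClampedArray(data); // copy, so reads stay untouched

  // 3x3 box blur: each pixel blends in with its eight friends equally.
  // Edge pixels are skipped here for brevity.
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      for (let c = 0; c < 3; c++) {            // red, green, blue channels
        let sum = 0;
        for (let dy = -1; dy <= 1; dy++) {
          for (let dx = -1; dx <= 1; dx++) {
            sum += data[((y + dy) * width + (x + dx)) * 4 + c];
          }
        }
        out[(y * width + x) * 4 + c] = sum / 9; // divide by the kernel sum
      }
    }
  }
  postMessage(new ImageData(out, width, height));
};
```

One note: postMessage copies the ImageData via structured clone; for very large images you could transfer the underlying buffer instead, but a copy keeps the sketch simple.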
So, oh, the video is not loading, okay. I created a simple demo of the tilt shift I showed just now. I have a version of the demo running on the main thread, and then I have a web-worker-ified version. And I explicitly created an animation driven by JavaScript, a colored box that moves around, controlled by JavaScript, to show what happens to events. Once the processing starts, the main-thread version's UI immediately freezes and doesn't respond to any click events, while the web worker version keeps responding beautifully, and the minute it finishes, it's like nothing happened. The main-thread version, when it finishes, executes all the click events that accumulated and junks up the whole UI. You don't want that in your visualization.

So performance is taken care of. Let's think about how to export: you might want to put the image into a PDF to send with your report, or you might want to tweet it. There are two ways to get the data out of a canvas and save it as an image, toDataURL and toBlob, and there are two use cases for them. The easiest one is toDataURL. Calling toDataURL on a canvas returns a base64 text representation of the image. You can use that directly in your CSS, or you can put it into the href of an anchor element and create a download button.

Let me show you the example. I mentioned that I created this programming language for knitting machines, so I have a little parser running in the background. Every time I type, it runs the parser again, returns the array, and draws to a canvas here; the canvas is just that tiny thing in the corner. And every time the canvas changes, it calls toDataURL, gets the base64 string, puts it into CSS, and lets CSS take care of all the background tiling and such. So I don't need to worry about it, and I can just click download. I don't know if you can see it there, but yeah, that's a giant base64 URL literally in the href. So that's how I use base64.

However, it's expensive if the image is big; base64 image text is expensive in general, so you want to avoid it. And about the href itself: although I couldn't find it in a spec, as far as I know the HTTP spec does not restrict the length of what goes into the href, but the browser vendors restrict it, and that seems to be around 2,000 characters. If you're dealing with a small image you can get away with it, but if your image is a megabyte or so, you will have a problem creating a download button that way. Instead you can use something called toBlob. toBlob makes a binary large object, which you can then pass into URL.createObjectURL, and you can use the resulting URL in your download link. In either case, you need to specify what kind of image format you want.
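A minimal sketch of the toBlob path, with the link element and file name as placeholders:

```js
// Export the canvas as a PNG download link without a giant base64 href.
canvas.toBlob((blob) => {
  const url = URL.createObjectURL(blob);             // short blob: URL, not base64 text
  const link = document.querySelector('#download');  // placeholder element
  link.href = url;
  link.download = 'knitting-pattern.png';            // placeholder file name
}, 'image/png');                                     // the format you have to specify
```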
And this got me thinking: I have to specifically say I want this format, with this compression, and I realized I don't actually know how image formats work. I know GIF is for animation, PNG is for, like, everything, and JPEG is what comes out of your phone's camera. But I didn't have any sophisticated knowledge to decide which one is best for my data visualization. So I did a little research. GIF limits the color to 256 colors; you cannot use more. PNG and JPEG use full color, although PNG can be set to a paletted mode and act like a GIF. The files also tend to be smaller as GIF and JPEG and larger as PNG, although if you use the paletted mode in PNG, it's almost always smaller than GIF. Then transparency: JPEG cannot handle transparent pixels at all. GIF only handles fully transparent or fully opaque. PNG can handle the whole range in between. And compression-wise, PNG and GIF are lossless, meaning once you compress the image and decompress it to show it, you get the original state of the data back. JPEG is lossy compression, creating something called JPEG artifacts along the way. So here's a handy chart for getting around any web image question. It starts with: is it an animation? If not, is it graphics or a photo? And from there you can decide on the format.

Having researched all this, I discovered why Twitter pictures look so shitty. Twitter uses JPEG compression, no matter what your original input is, if the image is fully opaque. But remember, JPEG cannot handle transparent pixels. So if you put a transparent image into Twitter, it will use PNG, and it will not create JPEG artifacts. I learned that from somebody, and I created a tool: you just upload your fully opaque image, and, you can't even tell, there is one pixel in the top left corner that gets turned 99.6% transparent. From Twitter's perspective, the image has a transparent pixel, so it defaults to PNG and preserves your Twitter image. You're welcome.

All of this is open sourced. I have a little JavaScript utility library called grafi.js, which I aimed to be something like Underscore, but for ImageData, so you can prototype things. It is not smart. It is not clever code. It is very slow. But the point is that I had so much trouble understanding those high-level, sophisticated systems, when all I needed was the basic concepts of how image processing works. So the code is very simple; however, that's on purpose, and you're encouraged to look at the source code. All of the example links are there. And I have googly eyes if you want some. Thank you very much, OpenVis. When I started, I watched all of the videos from this conference to learn how to do D3 and data visualization. I came here last year in person, and that was amazing. And now I get to talk here. So thank you very much.