Hello everyone. We're going to talk about how to learn the djembe visually with p5.js. And I say "we" because there are two of us on the stage. My name is Amit Kapoor. I work in the space of crafting visual stories with data, so the intersection of data, visuals and stories. And I have my friend here, Ashok, who is a professional artist. You may have seen him at Ranga Shankara. He's also a film actor, and his last film, Prakriti, actually won the National Award. But the reason I've asked him to join me here is that he is a djembe player, and he's been playing the djembe for the last 15 years. I'm going to request him to help me in the visualization of music, using the djembe and the way that we learned it.

Just to set the context of the talk: yesterday there was a talk by Shrikumar on the Web Audio API, and that talk was mostly about synthesizing music. This is the other side of the Web Audio API, which is visualizing the output of live music. That's how the two link together.

Now, many of you may not be familiar with the djembe. So that you get a sense of what this instrument sounds like, I'm going to ask Ashok to play for about a minute.

We obviously have to start somewhere, and my journey with the djembe started about four years back, when I went to one of the workshops Ashok was conducting and tried to learn the djembe. The djembe is actually a deceptively simple instrument to start with. If you come to Ashok and ask him to teach you how to play, he'll basically say, okay, let's start playing the djembe, and that's how you learn to play. The djembe has two basic tones, or basic ways of striking the head: one of them is the bass, and the other one is the tone.
And the rhythm really happens when you combine those together in a way that starts to sound like music. So if I ask Ashok to play a very simple rhythm, you can start by just listening and then start to play. And sometimes in a workshop you then need to verbalize it. Verbalizing a rhythm like this, using "boom" for the bass and "pa" for the tone, would be: boom pa, boom pa, boom pa, boom pa. And this is not uncommon; in most traditional music, African or Indian, verbalizing is very common, whether in tabla or in Carnatic or Hindustani music. Those two are perfectly valid methods to learn.

But if you're like me, an engineer leaning much more on the left side of the brain, you need to go to the next thing, which is: okay, tell me exactly what the notes are. Tell me literally how it sounds, what the 1, 2, 3, 4, 5, 6 is. Until you break it down like that, I'm not able to understand it. I need to really see the notes before it starts to make sense. That is also a way: I want to see it visually, as a notation. Can you show me the notes? This is how you would write it in a tablature: the bass is the one with the circle below the line, and the tone is the one above the line. That may be a simple way of representing it. So these four ways exist to learn the djembe, and for many of us who are much more on the left side, we really need this part of showing the notes and showing these rhythms. I was one of the tougher students Ashok had, because he would really ask me to play. So we all have different learning styles, and anything that is worth teaching can be taught in many different ways.
These four different styles I just talked about map onto four different styles of learning. The verbalization is linguistic, the playing is kinesthetic, and what we are probably more used to is the logical-mathematical part, as well as the visual-spatial part of notation. So the exercise I took upon myself was: can we make it much more visual, in a way that people can start to understand the music and start to learn? We're trying to use code here to bridge the gap between music and visualization.

And what I'm going to talk about is much more creative coding. My experience with coding doesn't start as a web developer; I'm not a web developer, I do a little bit of data visualization. It really started with Processing, which is a Java-based environment written in the early 2000s to visualize music, or visualize any kind of input, or draw sketches in a very simple way. That was ported, or at least a library called Processing.js was created, by John Resig, the creator of jQuery, to use this on the web. If anybody has gone to Khan Academy, it uses Processing.js heavily to teach visualization. That's actually the first thing I taught my 8-year-old son: how to do programming, using Processing.

What we're going to do today is use p5.js, which aims to make coding for the modern web accessible to people like me, beginners, or people with artistic inclinations, or people who want to be more artistic than I am. It's a library written completely in JavaScript, and it allows you to do much of the same sketching, but also to integrate with the DOM, audio and video, and that's how we can actually use the Web Audio API.
So I'm going to just explain the code. This is probably the hello-world example for p5.js, or hello-world-plus-plus, and that's why it's very easy to start with. You have only two functions: one is setup, and the other one is draw. setup creates the canvas, this entire screen that you see at the back, at the full window width and height, and then draw runs at a particular frame rate. I have reduced the frame rate to about five. I fill the background with an alpha, so that it gives an illusion of a transition; nothing is really happening, it just fades out gradually. Then I choose a fill color and draw circles at random X and Y positions. Because of the fill and the repainting, it gives you an illusion of animation, even though it's just painting again and again with an alpha.

So this is a very simple starter example of p5.js, and probably the only code I'm going to show in this talk. It's easy to start with once you know HTML, CSS and JavaScript. Somebody like me probably takes two weeks; somebody who's new to JavaScript and attending this conference, maybe two days; somebody who works in all these new frameworks, probably two hours. But it really makes visualization accessible to people like me.

So we want to connect sound to visualization. What does sound really look like? Sound is nothing but a compression of air that comes and touches your ears. If you see this person clapping, you can literally see the air compression as he claps. This is visualized by a very old technique called Schlieren photography, using a high-speed camera and a little bit of light deflection to capture that clap at speed.
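The starter sketch described above can be reconstructed roughly like this. setup, draw, createCanvas, windowWidth, frameRate, background, fill, ellipse and random are all real p5.js functions; the exact colours and sizes are my guesses, not the talk's code.

```javascript
// Minimal p5.js sketch: a full-window canvas, a low frame rate, and
// randomly placed circles that appear to fade because a semi-transparent
// background is repainted over them on every frame.

function setup() {
  // Canvas covering the whole window.
  createCanvas(windowWidth, windowHeight);
  frameRate(5); // redraw only five times a second
  noStroke();
}

function draw() {
  // Low-alpha background: older circles fade out instead of vanishing.
  background(0, 0, 0, 25);
  fill(255, 204, 0);
  // One circle at a random position each frame.
  ellipse(random(width), random(height), 40, 40);
}
```

Dropping this into the p5.js web editor, with nothing else, is enough to see the fading-circles effect.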
But we're obviously not going to use high-speed cameras; we're going to use the browser and try to do the same thing. We're going to use p5.sound, a library that gives us access to the Web Audio API through p5. So let's start with very simple, basic music visualizations, to show how quickly we can start to learn in a visual way.

Very easy to start: what is volume? Volume is just the intensity of the sound wave. So if we can figure out a way to show volume, we can probably start. We'll start with something as simple as just showing the volume. If you hit the djembe now, you can clearly see the volume going up and down. This is on a log scale, so it can pick up even very quiet noises on the phone. So we can very quickly visualize the volume, just by the height of the ellipse.

The next step is to figure out: can we find the beat, or can you feel the beat? All of us can actually feel the beat; if you listen to music, you can pick up the beat very easily. So if we can pick it up so easily, how do we do it in a visualization? In this case we'll again make the ellipse jump, but we will add a threshold which lets us say when a beat occurs. The reason we can do that is that a beat is nothing but a sudden, sharp variation in sound: the sound goes up quickly and then comes down. We can catch that, and we can visualize it. So if you play again, every time the sound hits the threshold (and it was also picking up my voice, I guess), it very quickly shows as a beat. Can we then take this beat forward and visualize it as a rhythm? And what is rhythm?
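The volume meter might be sketched like this. p5.AudioIn and p5.Amplitude are real p5.sound classes; the log-scale mapping is pulled out into a plain helper (levelToHeight is my name, and the -60 dB floor is an assumed choice) so the scaling is explicit.

```javascript
// Volume meter: the microphone level (0..1) drives the height of an
// ellipse. A log (decibel) scale makes quiet sounds visible too.

// Map a raw amplitude (0..1) to a height in pixels on a log scale.
// Pure function, independent of p5: 0 dB (full scale) maps to
// maxHeight, anything at or below the -60 dB floor maps to 0.
function levelToHeight(level, maxHeight) {
  if (level <= 0) return 0;
  const db = 20 * Math.log10(level);
  const floorDb = -60;
  const t = Math.max(0, (db - floorDb) / -floorDb);
  return t * maxHeight;
}

let mic, amplitude;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();     // live microphone input
  mic.start();
  amplitude = new p5.Amplitude();
  amplitude.setInput(mic);    // analyse the mic, not the master output
}

function draw() {
  background(0);
  const h = levelToHeight(amplitude.getLevel(), height);
  fill(255);
  ellipse(width / 2, height - h / 2, 80, h);
}
```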
Rhythm is nothing but a strong, regular, repeated pattern of sound. So if we can pick up this pattern, we can start to see the beat, and we can start to visualize the rhythm very quickly. For the very simple boom pa, boom pa that we were playing, you can clearly see the red lines that come up: they are just where the beat is detected. And I can play around with the detection parameters on the side: at what threshold level a beat fires, when it starts to decay, and how long the beat should be held so that I don't capture the same beat repeatedly. I can quickly play with this, and it doesn't have to be a djembe; it could just be my voice that it's picking up. So we can go very quickly from volume to beat to rhythm.

Can we then start to create the notations? Because if we can make notations which are not only static but also interactive, we get a much more visual clue on how to learn the djembe. So what if we take this simple volume pickup and say that the bass strokes are the volumes that are above the threshold but not very high, and the tone strokes are the really high ones? We're still using volume as an approximation to pick up the beat; we'll show the bass at the bottom and the tone at the top, and we'll still use this volume to show the notation. Notation means we are representing the music using symbols, and that's what we have. So if I ask Ashok to play again, we start to see some notations.

So very quickly, as an algorithm: from volume to beat to rhythm to notations. And so far we have done nothing but play with volume. We just said: if I know how loud it is, and how often it is changing, I can visualize it and use it. That's a basic visualization. We can take it to the next level, because technically the difference between a bass and a tone is not really just volume.
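The beat detector those sliders control can be sketched as a small state machine, plus a crude volume-based bass/tone split. This is my reconstruction, not the talk's actual code: the names (BeatDetector, classifyByVolume) and the default parameter values are all assumptions.

```javascript
// Beat detection on a volume signal: a beat fires when the level jumps
// above a decaying threshold; the threshold then decays back down, and
// a hold period stops a single hit from counting twice.
class BeatDetector {
  constructor(threshold = 0.3, decayRate = 0.02, holdFrames = 10) {
    this.baseThreshold = threshold; // level a beat must exceed
    this.decayRate = decayRate;     // how fast a raised threshold falls
    this.holdFrames = holdFrames;   // frames to ignore after a beat
    this.threshold = threshold;
    this.framesSinceBeat = holdFrames;
  }

  // Feed one volume sample (0..1) per frame; returns true on the
  // frame a beat fires.
  update(level) {
    this.framesSinceBeat++;
    const canFire = this.framesSinceBeat > this.holdFrames;
    if (canFire && level > this.threshold) {
      this.threshold = level;   // raise the threshold to the peak
      this.framesSinceBeat = 0;
      return true;
    }
    // Decay the threshold back toward its base level.
    this.threshold = Math.max(this.baseThreshold,
                              this.threshold - this.decayRate);
    return false;
  }
}

// Crude volume-only notation: moderately loud hits are bass, really
// loud hits are tone (the cut-off value is illustrative).
function classifyByVolume(level, toneCutoff = 0.6) {
  return level >= toneCutoff ? "tone" : "bass";
}
```

In a p5 draw loop you would call `detector.update(amplitude.getLevel())` once per frame, and draw a red line (and a bass or tone symbol) on the frames where it returns true.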
It's actually different frequencies: the bass is much heavier, the tone is much lighter. So let's see how we can extend this algorithm further. To do that we need something called a fast Fourier transform (FFT). We don't really need to go into it, but it takes a small snapshot of the sound and looks at the volume across all the frequencies in that snapshot. In this case we'll use 1024 bins and see what the volume is in each. So we are looking at volume across frequency, or across time.

Let's start with time: we'll just show volume across time, and for this we'll use a bell (you could use an oscillator to create the same thing). This is just showing those bins across time. But if we really want to pick up the frequency, to pick up the bass and the tone, we need to show it across frequency. If you map the frequencies, you can see the bass on the left when we hit the bass, and we can use a shaker to show the higher frequencies, which sit much more on the right side. If you play the djembe, it's much more to the left: the bass is much more on the left. What we need to do to really get at the bass is map it on a log curve. Once we use the log curve, we can see much more of the starting part, which is where the djembe really is. And now if you play the djembe a little, or just the bass a little, you'll see much more of this space. If you notice, in that small region the bass and tone are now starting to separate out, and we can probably capture that to show what is bass and what is tone much better than we were doing with volume alone.
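The log-frequency spectrum might look like the sketch below. p5.FFT (constructed with a smoothing value and a bin count) and its analyze() method are real p5.sound API; binToLogX is my helper for the log mapping, and the smoothing value of 0.8 is an assumed choice.

```javascript
// Spectrum on a log frequency axis: the low (bass) bins get most of
// the screen width, which is where the djembe's bass actually lives.

// Map an FFT bin index to an x position on a log scale. Pure helper:
// i+1 avoids log(0), so bin 0 maps to x = 0 and the last bin to
// x = width.
function binToLogX(i, nBins, width) {
  return width * Math.log(i + 1) / Math.log(nBins);
}

let mic, fft;

function setup() {
  createCanvas(800, 300);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT(0.8, 1024); // smoothing, number of bins
  fft.setInput(mic);
}

function draw() {
  background(0);
  const spectrum = fft.analyze(); // 1024 amplitudes, each 0..255
  stroke(255);
  for (let i = 0; i < spectrum.length; i++) {
    const x = binToLogX(i, spectrum.length, width);
    const h = map(spectrum[i], 0, 255, 0, height);
    line(x, height, x, height - h);
  }
}
```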
So now we'll try to show just the bass and tone: we'll use this visualization, but choose an artificial cutoff in the middle and split it into bass and tone. That improves our algorithm, and we can now use this bass and tone to improve our beat detection and do much more. This is still very rough.

So what next? If you're interested in exploring this further, here is a website with all these sketches; there are about ten of them in this presentation. You can go to jambaywiz.amitcaps.com, click on it, it's live, and you can actually see and play all ten sketches. If you press T on any sketch, you toggle between the microphone and a song; there is a small djembe sample you can start with. And we will build upon this more, both as a visual tool and to explore whether we can teach in workshops, combining the way Ashok teaches, just listening to the sound, with visual cues.

Before I end, there is more reference material: Jason Sigal, the creator of p5.sound, has a music visualization workshop with p5.js which has lots of stuff in it, and Daniel Shiffman's The Nature of Code, if anybody has seen that in Processing, has been ported to p5.js; you can look at that too.

I'm going to end with a small quote by Confucius. All these things we are trying to use, visualizing notations or using verbalization, are still just tools to learn or teach. At the end, you really need to play the djembe to actually learn it. So at some point you need to stop at the visualization and just play some music. We'll play one last rhythm, and I'm going to join Ashok.
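The "artificial cutoff in the middle" can be sketched as comparing the spectral energy below and above a cutoff bin; the function name, and using a simple energy sum, are my assumptions rather than the talk's code.

```javascript
// Classify a single stroke from one FFT frame: if most of the energy
// sits below the cutoff bin, call it a bass; otherwise a tone.
// spectrum: array of per-bin amplitudes (e.g. from p5.FFT's analyze()).
function classifyStroke(spectrum, cutoffBin) {
  let low = 0, high = 0;
  for (let i = 0; i < spectrum.length; i++) {
    if (i < cutoffBin) low += spectrum[i];
    else high += spectrum[i];
  }
  return low >= high ? "bass" : "tone";
}
```

Calling classifyStroke only on the frames where the beat detector fires gives a first cut at live notation: each detected beat lands on the bottom (bass) or top (tone) line.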
Q: The demo was very nice. What I understood is that p5 essentially helps with the visualization of the sound; it provides you tools to build visualizations on top of. So how do you provide the input to p5.js? Is it a raw sound signal you provide as input?

A: This is live music going in through this mic. It's the Web Audio API: p5.sound is a wrapper on top of the Web Audio API. I just initiate a mic in this case, take the mic input live and show a visual. So this is live visualization, not a recorded one. If you go to the website and press T, there is a sample that plays as well.

Q: This was more around the rhythmic nature of the sound, but how well would it extend to melodic instruments like guitar and keyboards, so that we could visualize chords and tablatures?

A: Yes, I'm sure you can. It gives you the FFT, the frequencies, and you can pick up frequencies. I have not really played with other instruments; the only one I have experience playing with, and am still struggling to learn, is this one, and that's what I was trying to visualize.

Q: You're mainly concentrating on visualizations of whatever audio you are getting, but there is a constraint in the visualization: requestAnimationFrame can run at only 60 frames per second. Even when you were striking, it was not exactly in sync; there was a variation. What's your take on that, given that it's not giving you an accurate result, and what's your further approach?

A: This is just an experiment; the code is on GitHub, so please take a look. I'm just using p5, and there is double latency here, because we're also trying to pick up the mic. But I have not really worried about it: as a learning tool, just to start to visualize notations that you don't know, it works well enough. Once I start to see the notations, it's good enough for me. But if you're trying to sync with games and things like that, that's not what p5 is designed for; it's creative coding, for people to try visualization in different contexts. Right now there's no fixed destination; we're still experimenting and may reach somewhere.

To add to that: part of the reason you saw a delay is that the projector has a slight delay. There's about a 150 millisecond delay between the laptop and the projector, and that is why the visualization was not in sync with the audio. It's not the computer's problem; the significant delay came from the projector, not from the computer. You can try it online right now and see what the visualization is like.

Thank you, I think we'll have to stop now. Thank you everyone; I'm available outside for questions.