Trust me, I'm like a smart person. From The Conversation, this is Trust Me, I'm an Expert, where we ask academics to surprise, delight and inform us with their research. Today, we're talking about music.

You probably heard your first strains of music when you were in utero. From then on it's helped you learn, helped you relax, hyped you up, helped you work and study, helped you exercise, helped you celebrate and helped you grieve. It's completely ingrained in every aspect of our lives, but it's also the subject of a significant body of academic work. So today's episode is all about research on music. We'll hear from researcher Hollis Taylor, who spent 12 years recording and transcribing birdsong. And Dr. Clint Bracknell, a researcher at the Sydney Conservatorium of Music, will explain how he's investigating the power of song to help address the national and global crisis of Indigenous language loss.

But first, Conversation tech editor Shelly Hepworth speaks with Dr. Ben Swift, a digital artist and computer science lecturer from the Australian National University, about how technology is changing the way we interact with music, and even our understanding of what music is.

Ever since we built the first drum, technology has been changing the way we make and listen to music. The phonograph allowed us to have the experience of a concert at home, the Walkman made music portable, and the Internet revolutionised distribution. Now almost every song you hear is produced on a computer. So how might technology change music in the future? One of the possibilities here is that artists, instead of just giving you a recording, they give you this interactive sonic experience, so that you maybe get to choose the way that the song kind of goes, or some aspects of the audio-visual experience. That's Dr. Ben Swift. He's a lecturer in the Research School of Computer Science at the Australian National University, and he leads the Code, Creativity and Culture group there.
You know, there are a few artists that have experimented with this. Radiohead and Björk have released app-slash-album things, and this Sigur Rós Magic Leap thing, certainly I will be first in line to try it, because what they promise is the ability not just to listen and passively experience this thing that your favourite artist has created, but actually be an active participant in the way that it evolves, you know, changing things about the sound and the pictures or whatever because of this technology.

So he mentioned Magic Leap there. What he's talking about is a highly anticipated collaboration with Sigur Rós, a rock group based in Iceland known for their experimental sound. So what do we know about this collaboration? Well, Magic Leap is an augmented reality company, and essentially it's trying to use this technology to enhance the musical experience. Augmented reality is kind of like virtual reality, only instead of watching a 360-degree immersive video through a headset, you wear a different type of headset that allows projections to appear like holograms in the world around you.

These sorts of new interfaces and new physical devices are going to change the way that we interact with computers in general, and I think they're going to be used by artists in all sorts of interesting ways to change the way that we interact with music composition systems and the way that we control music. This tech, if it works, could even lead to new instruments and new ways of experiencing music. There are a lot of different companies trying to build these devices. They're really hard. Battery life sucks at the moment. Things need to be nice and light so that they can fit on your head without needing Arnold Schwarzenegger kind of neck muscles. But the possibility here is that when you look around, you don't just see the physical objects in your space.
You have this vivid overlay of a virtual world that not only looks cool and provides information, but potentially you can even interact with. And so you can imagine all sorts of weird new instruments that musicians could create with that technology.

I asked him to describe what exactly a Magic Leap musical experience would be like. What Magic Leap specifically is promising is augmented reality: to give you, in addition to your physical space, the ability to overlay amazing virtual creations with which you can interact, so you can see little critters running across your desk, or touch the sound as the waves swim around your head in vivid hues.

But Dr. Swift says we need to take all these promises with a hefty grain of salt. After years of hype, Magic Leap has only just released a version of its headset, and early reviews suggest the technology still has a way to go before it lives up to its promise. The other thing, I guess, that I'm conscious of, as somebody who works in technology and both researches and also teaches the next generation to build amazing things, is that there's going to be a lot of people that promise a lot of stuff, and it's really easy to do a slick-looking demo. What I always want to say when I see some of these amazing videos or demos is, you know, I want to be generous and say: that seems really cool, but I will reserve my final judgement until I've had the chance to experience it for myself.

So devices like Magic Leap are still at the demo stage, but there are other new forms of computer-generated music that are here right now. Take live coding, for example. One way to think about it is like really nerdy DJing: imagine if, instead of just using decks and turntables or some other kind of software, a DJ actually wrote a computer program live in the club to generate the appropriate music in that moment.
So live coding is the act of writing a computer program to make music, where the musician is tweaking the program on the fly in response to the artistic demands of the situation. And while in the past technology has mostly been used to imitate traditional instruments, Dr. Swift says live coding lets him approach the computer program as an instrument in itself. And you're considering the computer on its own terms and saying, okay, as an artist, as a musician, what can I do with this? Part of my research in my job here is actually building new programming languages specifically for this task.

So how does it work? You know, we kind of have loops, where we say, okay, do this thing over and over again, maybe varying something each time. And you can imagine, for instance, a drum kit. If you want to do a traditional kind of four-on-the-floor club beat, say you've got a loop which is looping every beat, and then you go, okay, every beat I'm going to play a kick drum, and all of a sudden you've got doof, doof, doof, doof. And then you say, okay, well, only on the two and four beats let's play a snare, so then we've got doof, snare, doof, snare, doof, snare, doof, snare. So you can probably imagine that we could write a little chunk of computer code which encodes that basic drum pattern, and then the rest of it just kind of builds from there.

And if you put ten people in a room doing the same thing, you get what is known as a laptop orchestra. Rather than an individual musician using their computer as a solo instrument, what happens if we have a bunch of them? And again, in the orchestral model, we have different musical instruments which fulfil different roles within the ensemble.

Now, live coding is pretty niche. You might be thinking that this doesn't sound like music you'd listen to or a gig you'd attend, but it's hard to predict how new technologies might influence the way we experience music in the future. So in any artistic practice you've got an
avant-garde who are doing weird new things, but then there's sort of this winnowing process where some of those weird new things actually turn out to be really cool and get taken up by more traditional, more mainstream kinds of practice. Look at sampling or computer-generated beats, for example. Tell somebody 70 years ago that the majority of drum beats are going to be produced not by somebody actually hitting a piece of skin strung across a wooden case, but just by playing little snippets that somebody else recorded, and they'd think you're nuts. Through lots of different artists pushing those boundaries and finding cool ways to do that stuff, it's just become the main way that we generate beats these days.

Music has been part of the evolution of computers since they were first invented. They kind of came around in the Second World War, and then by 1957 we have a couple of composers at the University of Illinois, Hiller and Isaacson, composing the first piece of computer music, a piece called the Illiac Suite. So music isn't some new blow-in to the world of computing; it's actually been used and woven in right from the beginning, since we've had computers.

Whether it's Magic Leap, live coding or some yet-to-be-invented intersection of music and technology, he reckons the future looks bright. There will be a period, even before this technology goes mainstream but once it becomes available, when it just kind of blows people's minds, because we're really not used to our world being augmented digitally. You know, we have specific narrow rectangles via which we interact with the digital world: the smartphone, the laptop, the desktop monitor. But what this kind of promises is that any part of your world may then, perhaps as a surprising or unexpected thing, have digital content and a digital experience woven right through it. You know, I think we're in for a fun ride, especially
because a lot of this stuff is going to be new, and that's going to be really cool.
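The four-on-the-floor pattern Dr. Swift describes in the live-coding segment can be sketched in a few lines of code. This is an illustrative toy in plain Python, not the dedicated live-coding languages his group builds (which schedule real audio in real time); the function name and structure are our own.

```python
def four_on_the_floor(bars=1, beats_per_bar=4):
    """Build a basic club beat: kick on every beat, snare on the two and four.

    Returns a list of beats, where each beat is the list of drums hit on it.
    """
    pattern = []
    for bar in range(bars):
        for beat in range(1, beats_per_bar + 1):
            hits = ["kick"]            # "doof" on every beat
            if beat in (2, 4):         # layer a snare on beats two and four
                hits.append("snare")
            pattern.append(hits)
    return pattern

# One bar: doof, doof-snare, doof, doof-snare
print(four_on_the_floor())
```

A live coder would tweak a loop like this while it runs, swapping the snare condition or varying the sounds each time through, and "the rest of it just kind of builds from there".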