Good afternoon and welcome. My name is Shira Gans and I'm at the New York City Mayor's Office of Media and Entertainment. Thank you so much for joining us today for our master class studio sessions. These master classes are a partnership between our office and the New York chapter of the Recording Academy, part of our New York Music Month initiative, which happens every year during June. Today is actually our last event of June; you can check out all the events we've had, which were recorded and are up on our website. I'll put the link in the chat. I'm really excited for today's master class: a star-studded panel of Grammy Award winners. So without further ado, I'm going to hand you off to Ann Mincieli, and she's going to take you on a deep dive into immersive audio. Thanks, everybody.

Hey, everyone. Thanks to the Recording Academy and the P&E Wing in New York for having me, and thanks to the Mayor's Office and Shira Gans for putting on these master classes. They're really important for our music industry, for keeping it developing and pushing the technology forward. My name is Ann Mincieli. I'm a studio owner in New York, Jungle City Studios, and I've been Alicia Keys' engineer and album project manager for the past 22 years. I wanted to introduce you to the team we built when Alicia took a deep dive into immersive audio and I was tasked with putting an incredible team together. Most of the guys on the team, Michael Romanowski, Eric Schilling and George Massenburg, have worked with me for years through the Recording Academy, helping develop many different initiatives across the broad spectrum of the Academy. So it was only right to bring them in as we set out to remix eight albums in immersive audio. We've been working together on this technology, developing it, for the last five years or so, because the Producers & Engineers Wing has always had a committee dedicated to it. So it was only right.
And without further ado, let me introduce you to Michael Romanowski, Eric Schilling and George Massenburg. I'd love for them to give a description of their backgrounds. Let's start with Eric.

Good morning, or I guess it's around noon. My background is that I started in studios at 19, working as a tape op on the old technology: tape machines, consoles. But I always had a love for new things and new technology, and all of us in this group have seen a lot of things change. It's been about five years since I jumped into spatial, and it's been great to learn about it and explore it with this group. We trade concepts and ideas, solve problems and learn things together. So thank you. Michael?

I'm glad I didn't have to go first, because I wasn't sure how much to get into. Thanks for having us be a part of this, and thanks to the Mayor's Office. I actually got started way back in live sound when I was in college, playing in a band. I realized there was somebody in the back of the room twisting the knobs for the people in the front of the room, and I didn't understand that. My inquisitive nature said: what are these people doing that affects us? So I just kept asking questions until, after a while, after filling in at a series of events, I became the house sound person. That led me down the path of looking at engineering and how it expresses the artistic connection. I went to school for math and computer science and started working on people's records as a producer and an engineer. Then, through a series of events, actually a long motorcycle trip from Nashville 30 years ago, I landed a job opportunity as a mastering engineer, and as soon as I found out exactly what it was, I was off and running. I loved it; it's exactly the apex of all
of my interests. I've done that for almost 30 years now. I built my first surround room with Paul Stubblebine; we had partnered on a place and built two 5.1 rooms in 2001 and 2002, so I've been in it for a while. I loved the idea of taking it to another dimension, so about five years ago I started experimenting and trying things: adding height speakers, working with companies and helping develop the technology. That's something I think we all really share; we've all spent a lot of time working with the companies and people developing the technology and the tools, and our experience has gotten us to a spot where we're advising them. It's informing us and we're informing them. I'm just happy to be on that journey and part of this team, and thank you for pulling us all together, because I think we've all really grown through the shared experience, figured a lot of things out and taken it to a new level. I'm super proud of the work we've all done, and it's really just the beginning.

Go ahead, George; we'll save the best for last. I'll try my best. Hi, everybody. Thank you very much for having me, and thanks for getting this together; you're brilliant. I'm George Massenburg. I'm an early adopter and basically an instigator, a troublemaker. Currently I'm a professor of sound recording at McGill University. I won't bore you with the history, but I started in the studio somewhere around 1963 or 1964, and there's never been a better time than now to be making music and making great recordings. I'll just leave it at that. It's great to be with you.

All right, thanks, guys. Thanks, George. Thanks, Eric. Thanks, Michael. What I'd like to do now is share my screen. I have a really cool presentation. I work at Sony, and with the folks at Sony, Donna Clopfer and I,
we put together this really important deck as we worked through the twists and turns of this technology. I believe the deck is really incredible because it bridges the gap between professional and consumer. So I'm going to go through it quickly, share my screen, and then we'll take some questions after that. All right, can everyone see that? Yeah. Okay, great.

So, what is spatial audio, and what is immersive audio? Here's how we evolved: we started with the typical stereo mix, which is still very popular today across multiple platforms. Then we went into surround, then into 3D surround, which is Dolby Atmos, and then into 360-degree audio, which is Sony 360 Reality Audio (360RA). I want to explain a little about the differences in the technology. Immersive/spatial audio expands on traditional stereo and surround sound: it increases the number of speakers and expands the sound field around the listener. Atmos is mixed with a dome concept; it works with DAWs such as Pro Tools, Logic and Nuendo, and can be mixed using speakers or headphones. 360 audio is mixed with a sphere concept, a 360-degree image; it works with DAWs such as Pro Tools, Ableton, Nuendo and Logic, and it can also be mixed using speakers or headphones. "Immersive" and "spatial" audio are both terms used to describe an enhanced surround experience; immersive audio gives you the ability to hear music from all directions. "Immersive" is the term used by the professional audio community; "spatial" is used in marketing and branding. Dolby Atmos and 360 Reality Audio are the platforms for creating spatial content.

One key shift in audio production and playback technology is the transition from channel-based mixing to object-based mixing. Channel-based audio mixing is mono, stereo and traditional surround.
The engineer mixes to a specific speaker setup and prints a final mix, with files designed to be output to those channels. Channel-based output limits playback to one type of system, in this case a 5.1 system. Object-based audio is mixing in an adaptive way: instead of mixing to a channel output, the engineer mixes within a virtual sound field, placing objects inside it. That allows you to output a single deliverable, which can then be rendered for potentially any output device.

So how do we create immersive mixes? Existing stems and expanded stems are located or created from the original multitracks. Some artists include pre-mixed files; these are helpful tools in creating and reimagining mixes, and some artists make completely new mixes. The stems and files are processed and edited to match the final approved version of a song, then placed and programmed in the immersive environment using software platforms such as the Dolby Atmos tools and 360 WalkMix Creator, with your references created alongside.

I'm about to switch. These are screenshots of the object-based mixing interfaces for both 360RA and Dolby Atmos. You can see the differences between the two platforms: the spherical sound field in the 360 interface and the dome shape in the Atmos interface. The output from these mixes is a package which contains the object audio information; it can be encoded and decoded differently depending on your listening device, whether that's a theater, a sound system, a smart speaker or stereo headphones. Object-based mixing and encoding technology will continue to improve over time.

What are the challenges? I'll put some of these questions to George and the crew later. The technology and creation tools are new and still in development, as is replicating speaker playback for the consumer on headphones.
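The channel-based versus object-based distinction above can be sketched in code. This is only an illustrative toy, not Dolby's or Sony's actual renderer or metadata schema; all names, the speaker layout, and the proximity-weighting rule are invented for the example. The point is simply that an object carries a position, and the playback system, not the mix, decides the speaker gains:

```python
from dataclasses import dataclass
import math

# Illustrative sketch only: real object-audio formats (Dolby Atmos ADM,
# Sony 360RA) use their own renderers; every name here is invented.

@dataclass
class AudioObject:
    name: str
    azimuth_deg: float    # 0 = front, positive = toward the left
    elevation_deg: float  # 0 = ear level, 90 = overhead

# One hypothetical playback layout: speaker name -> (azimuth, elevation).
LAYOUT_5_0_2 = {
    "L": (30, 0), "R": (-30, 0), "C": (0, 0),
    "Ls": (110, 0), "Rs": (-110, 0),
    "Ltf": (45, 45), "Rtf": (-45, 45),
}

def angular_distance(a, b):
    """Great-circle angle between two (azimuth, elevation) points, in degrees."""
    az1, el1, az2, el2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    cosd = (math.sin(el1) * math.sin(el2) +
            math.cos(el1) * math.cos(el2) * math.cos(az1 - az2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosd))))

def render_gains(obj, layout):
    """Weight each speaker by proximity to the object, then normalize power."""
    pos_obj = (obj.azimuth_deg, obj.elevation_deg)
    weights = {name: 1.0 / (1.0 + angular_distance(pos_obj, pos))
               for name, pos in layout.items()}
    norm = math.sqrt(sum(w * w for w in weights.values()))
    return {name: w / norm for name, w in weights.items()}

# A vocal placed slightly above front center renders loudest in the
# center speaker of this layout; a different layout would yield
# different gains from the same single object.
vocal = AudioObject("lead vocal", azimuth_deg=0, elevation_deg=20)
gains = render_gains(vocal, LAYOUT_5_0_2)
loudest = max(gains, key=gains.get)
```

The same `AudioObject` could be handed to a 7.1.4 layout, a soundbar model or a binaural renderer, which is the "single deliverable, any output device" idea in the deck.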
Comparison to the stereo mix is a big key; a lot of artists have lived with their stereo mixes for a very long time. It's also time-consuming and expensive to create a mix and get approval. And the biggest thing, which we'll talk to George about later because he's working on improving deliverables and assets, is locating the proper assets.

Then there's spatial audio on streaming services: how can you listen to it? Spatial audio is now a marketing and editorial driver for Amazon, Tidal, Apple Music and others. With the increased focus from DSPs on the immersive customer experience, immersive mixing gives artists and label marketing an opportunity to partner with DSPs at a higher level. It engages users at a new, innovative point of interest, and it creates a moment around catalog by offering fans a new way to discover and experience your music.

So how do we listen to spatial audio? Anyone on the Apple Music, Amazon Music, Deezer or Tidal platforms can. I'll share this deck and this page. On Apple Music you can listen on headphones, specifically AirPods Pro and AirPods Max; through Apple TV; and through supporting Apple devices like your MacBook Pro, iPad and iPhone. Whoops, it went one page too far, sorry about that. Listening on headphones via Amazon Music is through the mobile app on both Android and iOS, or you can use the Echo Studio and supported devices via AirPlay and Google Cast. On Deezer you can listen through headphones; it's a more customizable experience with Sony headphones, plus supported devices via AirPlay and Google Cast. And on Tidal, headphones are the way you can listen to 360 audio mixes; it's a more customized experience for 360 audio with Sony headphones, and supported devices work via AirPlay and Google Cast.
And lastly, some of the ways consumers can purchase devices: the Amazon Echo Studio and the new Sony SRS-RA5000. That's it for my deck. I'll stop sharing my screen now so we can see some questions; I'm also going to ask my fellow panelists some questions, so let me shut my screen share off real quick.

All right, George, I have a question for you. How do we evolve the technology? What are the pitfalls? What do we need to do to evolve it?

There are two sides. One side is the preparation of the mixes; the delivery of finished mixes is the other. Starting with delivery: it would be terrific if these systems were more compatible with each other. Apple and Atmos are kind of joined at the hip, but it's not clear that any of the others are, Sony in particular, and they're not able to share files. A file intended to stream on Apple won't exactly fit into the Sony world, so right now we're making different mixes. Clearly the future is better standardization between the two manufacturers, and in particular, building on Fraunhofer's work in developing software tools that are more adept at cross-interpreting, sharing and translating what's called metadata. It's useful to think of metadata as a way of describing, in numbers, where the sound is coming from in a polar coordinate space. A sound coming from a certain place in the Sony world might need to find the corresponding speaker in Atmos, and Atmos would need a virtual location to place it. Am I getting too technical yet? Are we okay?

Let me skip back to the preparation side.
For us to make two, three or four separate mixes is pretty complicated, especially the cross-correlation between an Atmos delivery and a Sony delivery to the record label, whoever that is, as Eric has been showing me step by step, because he's done more of this than I have. At some point, when you take his master class, Eric will describe how he takes low-frequency elements and routes them to the Sony front lower ring, because Sony doesn't have an LFE. Little things like that. Sony has a "God" speaker; you've got to put something there. Sony is, is it three and one, Eric? Three and two? Three and two, no side speakers; five on two layers. Right. And we have to figure out how to get an Atmos, call it 7.0.2, into five-dot-five-dot-something. It would be great to have intermediate work tools.

Sorry to interrupt, Eric. I'm just going to add to what you're saying, George. On Alicia's stuff, what we've learned a lot about in the last year is probably this workflow. Part of it for me, once I learned the tools, was thinking about what I'm doing in an Atmos mix that I can use to prep myself for the Sony mix. Like George said, one has a sub and one does not, so what do I do with that? That's a case in point. So the mixes need to translate, right? The mixes want to translate. Exactly, for the consumer; that's the goal. Yeah, how do I make my mix translate right? The artist needs to hear his or her mix back. Yeah, like it's the record. They want to hear it back the way they conceived it, and they want the consumers to hear it the same way. They don't care about the tools or the playback; whatever they put it on, they just want to listen and know they're hearing the artist's intent.
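George described the metadata as numbers locating a sound in a polar coordinate space, and translating between the two delivery worlds as a coordinate problem. As an illustration only, neither Dolby's nor Sony's actual schema, here is the kind of conversion such a cross-interpreting tool would do at its core: mapping a spherical position (azimuth/elevation/radius, as a 360RA-style format might express it) into room-normalized X/Y/Z (as Atmos-style metadata expresses it). Field names and sign conventions are assumptions:

```python
import math

def spherical_to_room_xyz(azimuth_deg, elevation_deg, radius=1.0):
    """Map a spherical position (azimuth 0 = front, positive = left;
    elevation 0 = ear level) to room-normalized coordinates:
    x = right, y = front, z = up, each in [-1, 1].
    Conventions here are illustrative, not an official schema."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = -radius * math.cos(el) * math.sin(az)  # right is positive x
    y = radius * math.cos(el) * math.cos(az)   # front is positive y
    z = radius * math.sin(el)                  # up is positive z
    return (round(x, 6), round(y, 6), round(z, 6))

# A source dead ahead at ear level lands at the front wall;
# a source at 90 degrees azimuth lands on the left (negative x).
front = spherical_to_room_xyz(0, 0)
left = spherical_to_room_xyz(90, 0)
```

The hard parts of a real translator are everything around this math: re-targeting positions that one layout has speakers for and the other doesn't (the LFE and "God" speaker cases above), and carrying the rest of the metadata across intact.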
We're in the middle of a project, and what we're talking about is making several different kinds of mixes: making our kind of bed mix, making separate objects. Should we stop and talk about objects? No. Making separate objects so that Michael might be able to combine objects, targeting different technologies. Maybe we haven't gotten there; we're still doing that ourselves, but maybe the future of this is throwing all of our problems into Michael's lap. I'm a problem solver, George; I do my best to figure things out, learn, and make them as good as they can be.

And I think one thing as well is that the labels have to evolve. They have to treat this as a regular album project: the budgets have to be there, the protocols have to be there. Hiring a mastering engineer, and all the different ways we work on an album, we have to take the same approach when we're mixing immersive audio. Then more artists will jump on board, as opposed to the labels trying to say the stereo mix and the immersive mix need to be the same. And my second question, for Michael, George and Eric: the technology still has to evolve, right? Michael, the gain staging is completely different. The artists are listening to their stereo mixes in their cars, loud, and then all of a sudden the gain staging for immersive is not the same from a stereo file to an object-based file, correct?

So we've got a question to explain an object. What an object is in the immersive space is, very simply, one source in space. You might call it a speaker, or a position in between speakers, but an object has a physical location in space: an X, Y, Z coordinate. Let me put the page back up. There's one object; and here are many objects. And maybe for those of us who are used to making bed tracks for motion pictures,
your 7.1 is possibly a group of eight objects with fixed positions. The strength of that, just to jump ahead, is that in the playback space, be it a home theater, any one of a number of earbud or earphone systems, or the theatrical presentation space, these objects can be re-rendered to find the right spot in this spherical space. That's not too complicated.

And what George is explaining is the difference between 360 audio objects and Dolby's; I already explained the differences with the dome shape, and 360 audio handles it differently. I'll talk to the Mayor's Office and see if I can share a couple of these documents with you.

And then, Michael, technically, how do we keep growing this technology from a mastering standpoint? We've already developed crossfades; at one point we didn't have the ability to do crossfades when matching catalog. And there are mastering compressors and plugins becoming multichannel, and time-aligning files to really dig in and do a mix. And then I want to ask Eric... go ahead, Michael. Oh, well, you can ask Eric if you want to jump into that and we can proceed. Well, anyway, you actually already asked it. Good. You started earlier, before we got into objects, heading down the path of the difference in listening perspective from the artist's side versus headphones or stereo, and that's one of the things we're wrestling with right now: people are so used to hearing things a certain way.
If you're working on headphones, the center image sits in the middle of your head, right? You've got two bullhorns just screaming at you. Spatial, immersive audio has the opportunity to put the listener in a space that's not in the center of their head, so to some people it's jarring right away: "Well, I'm used to hearing this." Sometimes it just takes having the discussion: settle back, just listen, just absorb, be very Zen about the listening, and don't set the expectation of trying to recreate the stereo in the same way. This goes back to tools, procedures and best practices as well. Having two channels to get audio out of presents a certain picture, but when you bring things around you and above you, and in Sony's case below you, you have a different picture to present. There's an education process about letting the listener, the artist, the label and the consumer all know what that perspective is.

You asked about tools, and I think that's one of the things we've had to be crafty about. We developed processes along the way with these eight albums, whether in the mix process or the mastering process; I learned a lot and pushed a lot. You talked about crossfades. Other than in the surround era, in immersive most people so far have been approaching things as singles and tracks, and not really thinking about the album: album metadata, album loudness level throughout, the timing and space between songs, and how the transitions move from one space to another. So all of us are still in conversation, developing those tools and the things we had to figure out along the way.
Like you said, crossfades: how do we deal with that? Does it have to happen at the mix stage? Do we have to do an alternate ending? How do we do something that doesn't have PQ markers, if we're not dealing with standard CD-type playback? We had to make one album as a single block; we put all the sessions together. So myself and my assistant Brendan had to spend real time on it. The way we mixed it, in this template, we put in stems and we put in raw files, pre-mix files, to give George, Eric and Michael the ability to use the raw files and not just be limited to stems from our mixer. We had to import those all together, but Alicia had interludes between each song, and that was her call. The labels push this technology out without thinking about a lot of these issues; we had to put the album in as a whole block, 14 songs with the crossfades in between, because there's no way to mark the IDs properly in these players.

And what I wanted to ask Eric, I kind of had you on deck, is how we approached the catalog mixing versus the new albums we did. I think catalog is critical, because consumers are used to hearing these mixes; they've known them for years and years. There's no way the DSPs can just shut the stereo file off and say, "Well, here's the immersive audio file, bling bling bling, here it is." When we worked with Alicia, Eric, we came to your studio many times to really dive in, and that was critical: for the artist to be involved, to approve the mixes, and to really bring the people who worked on the albums originally into this world. That goes back to what George was saying about budgeting. Those are some of the pitfalls. It's not just a business of pumping out the 50 albums in our catalog; it's how do we make it all work together: stereo, Dolby, 360, catalog, new material.
This is the artist's art, and we have to respect that; the technology companies and labels need to understand that. So go ahead, Eric, I'd love to hear your feedback on that.

So, a good point: for the older stuff, the catalog stuff, our approach was that we have a stereo mix, and that's the guidepost, in the sense of how the song feels and how it was blended when the stereo was created. When I start to make it into 3D immersive, I go back and forth with the stereo as my guidepost, but it doesn't have to sound exactly the same; it just needs to feel like the stereo. And I remember when you guys first came in and Alicia sat at the front mix position. When she started to hear stuff from various spots, I could see her visceral reaction; she kind of moved her hands like, "I've never heard it like this before." But by the same token, there were things you guys would point out to me that needed to feel like the stereo. So whenever I start an immersive project, one of the things I need to find out is: does it need to feel like the stereo, or are we going to do a bit of a remake and move away from that? On the new one, for instance, we probably have a little more room to do that.

I'm starting on a game soundtrack, and they just came to the studio this week. To your point, Michael, they'd never heard something in a spatial room, so it took a moment to get used to it: trombones are coming from here, strings from over there, the choir's from back there. So when we mix this stuff, a lot of what we do up front is talk about sound design, the spatial environment. You and I had those conversations with the game people; I'm having those conversations now, as that's part of the role. And one last thing I want to add: when I mix, I mix on speakers.
We check it on headphones, we have the two devices here that you showed on the slide, and we have a soundbar. I'll bring in the clients sometimes; if you remember, we took the soundbar up to Alicia's studio a year ago and she heard the final on a soundbar. And that's really important, right? That's how it's going to translate to the consumer. Right, exactly: Amazon devices, as George was explaining; it all ties together.

And then, George, we started this back in 2019 at Blackbird, when the software wasn't as developed. We had about 20 sessions for Alicia's song "Show Me Love"; it was her lead-off single, so it was important for her to do something special with the Dolby mix. Remember when we did that big event at Dolby? We started at Blackbird, we wound up in New York, we were in LA with Michael. This was big; we probably had 30 sessions alone to start off this whole three-year process, right?

Well, and it started with you giving us the direction of: we'd like to take advantage of this new technology, so to a certain degree we'll be reimagining the mix. And we may have taken some things too far, and some things maybe we needed to explore deeper, but we learned more about what not to do than I ever would have expected. One of the things that really helped was listening to our product on different technologies: listening on phones, on earbuds, listening to it re-rendered into 5.1, and trying to figure out what doesn't work. I don't want to go too far into this, but there are certain companies that bring up the high end in the rear and rear-height speakers, and if you put a tambourine or hand claps back there, you'll be very surprised what you get back.
Well, that translatability is huge, because again, that's back to having the artist know that their music is being played back and listened to as closely as possible to their intent. It's our job to make sure it translates, and it particularly falls on me as a mastering engineer, especially working with you guys, two different mixers on the same project. That goes back to the thought of an album versus singles: how do you make it seem like a body of work and not a collection of songs?

I did want to say one thing about tools real quick, the thought of compression; I just wanted to follow up on that for a second. In the stereo world, you only have two speakers, and we're trying to get a lot of information into those two speakers. The techniques involved in getting 60 instruments, or 100 vocals, or one acoustic whatever, out of two speakers, we've developed over the last 50, 60, 70, 80 years: techniques of processing, EQ, compression, gain, reverb, delay, all of those things, to create a sense of space out of two. With all of these speakers and all of this space, we have an opportunity to be as dynamic and as engaging as we want. I look at compressors as tone devices, not as loudness devices, and I think when people reach for compression in the immersive world, they're often trying to recreate that stereo sound. I'm not compressing to make things loud; what happens when you compress is that things that are out in this world around you close in.
The more you compress, the more you're collapsing back into that point source you started with. So I'd recommend, as you're exploring this, don't try to go down the path of putting ten pounds of something in a five-pound bag; think about how the space works around you, and let it be the thing that pulls the listener into the music, not the bullhorn shouting at them. That's one of the huge opportunities I see with immersive audio.

I have one more question, about live music, and I'm going to throw it at Eric; then we'll go to audience questions. I see Jeff Silverman has a really good question I'd love George and Michael to answer. Eric, where do we sit with live? We do broadcasts together a lot, live shows together a lot; Eric travels the world with us sometimes on our team. Where do you see this going in the live space, in the broadcast space?

Well, there are two things happening. For live events, installed shows are starting to do immersive sound; in Vegas, for instance, there are a bunch of installed shows doing it. In broadcast it's starting to occur around the world, not so much here in the US yet, and actually the people doing it the most right now are sports. I'm excited to see how it evolves; they've been doing that for a long time, and soccer in surround has been around.

Our Unplugged album is going to be the next thing we mix. It's a live show, and we're going to put it out on vinyl, because it was never on vinyl, and on the flip side we're going to tackle it and put it out in immersive audio. And I know Dolby is working on a lot of incredible live-space and streaming work, so I think the door is really open to start exploring a lot of that.

Let's take some questions. Here's Jeff Silverman: "As you know, Apple Music doesn't read Dolby Atmos metadata,
and to me that's not an accurate representation of my binaural render mixes. Are there going to be any advancements with Apple and Dolby to actually give the listener a choice between spatial audio and 3D binaural?" This is probably the greatest question, and I think we all feel it keenly. Anybody who has worked with Apple knows that Apple is a very insular company. Very little data gets into their ecosphere, and very little non-marketing data gets out of it. Occasionally you'll see a technical paper, but not on this. We have some friends who have gone to Apple; we're hoping we can drill deeper into Apple, see what our choices might be, and just hope for the best in terms of interpreting metadata. And I'd like to see Fraunhofer contribute there, because they know both areas. It's a matter of remapping an object and revamping its metadata, and it's possible; it's just computer code. A 14-year-old in Finland could write it.

George, I know Fraunhofer is starting to go down that road. It's a question of the people who make Atmos and Sony figuring out how to speak a common language in terms of their files, so it's on the road. And I think it's something that we as a group, and everyone involved in this, need to push, so that these platforms can talk and we don't have to do everything three times. But it's definitely something that's going to happen in the future; it's evolving. It's got to. It's like DVD-A and SACD: originally there was one player for each format, and they finally realized the best thing. Somebody's going to come up with an app that can do it. I think it's not out of the question, because it's just numbers; it's not black magic. Exactly, and it has to happen.
I think what's happened is that the marketing from Dolby and Sony, and the partnerships with the DSPs, pushed the technology out quickly. But we have to remember there's the back side: developing all the tech, evolving it, and getting artists involved in the process rather than dictating to them how it's done. Again, it's the artist's work; the stuff has to translate. You can't be on Apple Music and have a pop-up show up saying your stereo file is being deleted. There are a lot of things we have to grow on. Apple really kind of wants to dictate to us how it should be, and their viewpoint is not the proper one.

Well, it's all about marketing. Companies like Apple in particular, it's all about marketing. I don't know whether marketing is really sensitive to this new technology yet, or how marketing is getting its feedback, because Apple does a lot of research through marketing, down to R&D. How does that flow? How does what we need get interpreted to engineering? Very hard to find out.
Back to Jeff's question real quick, just to answer it technically, if that's not too much: one of the issues is that, because of how Apple spatializes, they take the ADM file and spatialize a 7.1.4 with a binaural render of it, and that at the moment eliminates your choices of binaural render mode settings; those are stripped out. There are conversations going on right now, I know, about how to keep those settings in place while still respecting whatever spatialization process they're running. I worked on the Dune soundtrack, the Atmos version, which was fantastic, and the stereo version as well. What they decided for the stereo version, knowing that many people, 80 percent plus, would probably be listening to a soundtrack like this on headphones, is that what they're calling their "intentional stereo" mix was the binaural render. They essentially said, forget people who are listening on stereo speakers; we expect you'll be listening on a soundbar or headphones, and if so it's going to decode, and this is what it's going to be. That's an extreme choice, but it's an option, Jeff. Well, that's a real turning point. Yeah. So, Sarah Galvan asked: how does an immersive mix translate to a single-point monitor? Eric? Um, poorly. Exactly; a softball, Eric. I mean, of the devices you showed on your slides, one of the two, I won't name which, is kind of a single-point speaker. It's not bad, but I would say if you're going to go to a speaker system, your better bet is a soundbar. I'd call those single-point devices the lowest common denominator: they give you a spatial effect, but it's not as specific as a soundbar.
Well, and most of them can take any technology. Yeah, most of them you can feed from some other device, whether it's an Apple TV or an Nvidia Shield, and you can play all the different formats through most of them as well. Before we get to the next question, there's one from Gary I want to speak to. Go ahead. Yeah, it's funny, because it has to do with what we're doing now for LP8. He asks: what if you're going to mix something sparse, say a piano and a voice, for Alicia? The one I'm doing is partly piano and voice, and my first thought was, what do I do with this in spatial? I'm still creating a space around it. So I'm making more use of 3D reverbs and such to keep the character, but I still think the spatiality really helps it. To use the phrase again, it makes it not so flat, and more 3D, because I can create the room around it. That's great. And then I see Brian Montgomery with a question. Hi, Brian. I think George could answer this, because he kind of works this way, or used to. How are you determining when to use the beds and when to use the objects? He's concerned that the Atmos ADM media container can get incredibly bloated and large when exclusively using many objects, and that many of them may be subject to excessive data compression for end-user playback. Would using mostly beds be more efficient and lead to a higher-quality delivery? Go ahead, George. Well, let's start here. When you're streaming Atmos, there's a limited number of objects that can be represented. Michael, we said 24 last week, but it may be as few as 16. Yeah, I think it depends, but yes, and it goes hand in hand with the levels, like Sony's 360 level three; there are other similarities between Sony and Dolby in the number of objects.
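To make the object-count constraint concrete: a delivery-prep step could sanity-check a session against a streaming cap before export. A minimal sketch, assuming a cap of 16 taken from the rough figures in the conversation (these are not official Dolby or Sony numbers, and `check_session` is a hypothetical helper):

```python
# Sanity-check an immersive session against a streaming object cap.
# The cap is the conservative end of the 16-24 range mentioned in the
# panel discussion, not an official platform specification.
STREAM_OBJECT_CAP = 16

def check_session(bed_channels: int, object_count: int,
                  cap: int = STREAM_OBJECT_CAP):
    """Return (ok, message). In this toy model, bed channels (e.g. a
    7.1.2 bed = 10 channels) are counted separately from dynamic objects,
    and only the dynamic objects are checked against the cap."""
    if object_count <= cap:
        return True, f"{object_count} objects fits the cap of {cap}"
    excess = object_count - cap
    return False, f"fold {excess} object(s) into the bed or group them"

# Example: a 7.1.2 bed with 24 dynamic objects exceeds a 16-object cap.
ok, msg = check_session(bed_channels=10, object_count=24)
```

The point of the sketch is only that the bed/object split is a budgeting decision: grouping related objects, or folding static ones into the bed, is how a mix stays under whatever cap the delivery target imposes.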
But I have to say I'm not so worried about the size of those files in terms of delivery; they're deliverable. And in terms of re-rendering, they're re-renderable. What drives my choice between objects and beds is different from Jeff Balding's. Right, we use both. You have to use both, George; that's why I thought this question would be good for you, because I know that's how you mix. And right now, as I said, Eric and I are talking about maybe delivering separate object groups to Michael, so that he can better deliver between formats and choose. But by the way, that means we have to sort out dynamics, multichannel dynamics, and we're not going to tell you what we're going to write, and we still have to sort out the time-alignment issues. That's right. Time alignment is a huge continuing issue. And linking of channels, linking channels to control sidechains. Exactly. Not fucking with the image, sorry, can I say that? Not messing with the image, things shifting all over the place. You were the first one to drop an F-bomb; I would have thought it would be you, George. Can I answer something? I see Jeff's follow-up here, and I wanted to add something I'd love to see, because he touched on something I actually asked Dolby about some time ago, and Sony too, but hadn't followed through with. It seems to me that adding the stereo mix, the left-right, as the last two objects of any format makes sense: you're automatically carrying the stereo with you, period, and there is no way to screw that up. What a great idea. I think that's a path forward for all of these companies: keep the stereo engaged. Then it never gets lost and you never have a problem. Yes, and there should be a way that it defaults to stereo when people don't have the playback system.
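The "carry the stereo as the last two objects" idea can be sketched in a few lines. This is purely illustrative, a toy channel-stacking model, not how ADM/BWF packaging actually carries per-object metadata, and `attach_stereo` is a hypothetical helper:

```python
import numpy as np

def attach_stereo(objects: np.ndarray, stereo: np.ndarray) -> np.ndarray:
    """Append the approved stereo master as the final two 'objects' of an
    immersive delivery, so the fallback mix always travels with the file.
    objects: shape (n_objects, n_samples); stereo: shape (2, n_samples)."""
    assert stereo.shape[0] == 2, "stereo must be exactly two channels"
    assert stereo.shape[1] == objects.shape[1], "sample counts must match"
    return np.vstack([objects, stereo])

# Example: a 16-channel immersive delivery plus the stereo master.
immersive = np.zeros((16, 1000))   # stand-in for beds + objects
master_lr = np.ones((2, 1000))     # stand-in for the stereo master
packed = attach_stereo(immersive, master_lr)
```

The design point Michael is making is that a player with no immersive decoder could always fall back to the last two channels, so the artist-approved stereo can never be lost in translation between formats.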
Right. Your in-ears or a soundbar, absolutely. Somebody asked, what's the difference between binaural and stereo? I thought that was a good question. Michael? Great question. George, do you want to... no, let's let Michael do it. Okay. Well, stereo is two-channel. My understanding is that original stereo was actually three channels: for film it was dialogue in the center plus left and right, which is why there were three-channel heads on tape machines and three-channel consoles. That was the original, anyway. As we look at it now, we're looking at left and right in the stereo world. Binaural is an encoding, either through ambisonics or some other format, that tries to include localization in a 3D field within two-channel playback. One of the starting points was the Neumann dummy head: two microphones placed in the ears of an artificial head, letting the ear physiology and that structure create the localization you then hear through your headphones. The functional difference between the two is that stereo is actually the left and right of the speaker outputs; if that's coming into your headphones, you're listening to stereo as such. Binaural is not looking at left and right but at that sense of space, using the head, like you said with the Neumann: how we perceive subtle timing and tonal differences using our two ears, with our head as a "meat baffle." It's really holophonics, so there's that. And look at this, by the way, he stole a mic pre from me, and I haven't heard from him in 30 years. That was a very good thought; sorry to interrupt, but there are any number of re-vectorization algorithms and things happening in the ambisonic space, and there's more ambisonics coming, more than level three.
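The timing and level cues Michael describes, interaural time difference (ITD) and interaural level difference (ILD), can be illustrated with a toy renderer. This is a deliberately crude sketch: real binaural rendering convolves with measured head-related impulse responses, while this only delays and attenuates the far ear (the `binaural_pan` helper and its constants are illustrative assumptions):

```python
import numpy as np

def binaural_pan(mono: np.ndarray, azimuth_deg: float, sr: int = 48000):
    """Toy binaural localizer using only ITD and ILD cues.
    Positive azimuth places the source to the listener's right."""
    az = np.radians(azimuth_deg)
    # Woodworth ITD approximation for a head radius of ~8.75 cm.
    itd = 0.0875 / 343.0 * (abs(az) + abs(np.sin(az)))   # seconds
    delay = int(round(itd * sr))                          # samples
    # Simple ILD model: far ear up to ~6 dB quieter off-center.
    far_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20)
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain
    # For a source on the right, the right ear is the near ear.
    return (far, near) if azimuth_deg > 0 else (near, far)

# A noise burst panned 45 degrees to the right: the left ear signal
# arrives later and quieter than the right.
sig = np.random.default_rng(0).standard_normal(4800)
left, right = binaural_pan(sig, 45.0)
```

Contrast with stereo panning, which only changes the level balance between two speaker feeds: the binaural version also encodes arrival-time differences, which is a large part of why it reads as "outside the head" on headphones.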
So as far as capture and recording, I think this is a good segue: Roey Shamir has a question, and George, we talk about this: how are we capturing? He asks, as far as capture and recording, is there any experience you could share with "natural Atmos," as he calls it, a dedicated mic per intended channel, versus ambisonic miking options? Big question. Yeah, and the other part of that question is: we have these rendered formats, we have codecs now, but what are we putting on the shelf for the future? Something that can be re-rendered in 20 years to whatever the new format is, where we're not limited by these codecs, not by the Dolby codecs or any codecs. We could very possibly print an original 7.1.4 physical format right now and put that on the shelf. I'd love to see us get to that spot. I mean, what people are listening to now is essentially the MP3 of the stereo files, and we've been working in high-res audio for a long time in the stereo world. I would love for consumers to hear what we're hearing, resolution-wise, on the speakers. If some ambitious company came out and sold ADM files, or sold the Sony files, that people could put on their home servers, or wherever they keep their files, and hear the full resolution... As codecs develop, speakers in a tuned room are still the gold standard for playback, while codecs keep developing for people listening through different decoding methods: speakers, headphones, and all of that.
I'd love to see us get there. You also talk about resolution and number of objects, and about getting people to experience it the way we all do on a daily basis, how powerful and moving and emotional it can be. Especially with this question about miking: George, I figured you would jump straight to Morten Lindberg and his miking techniques. Yeah, let's look at that for just a second, because I think it's entirely relevant. One of our very, very favorite recording engineer-producers is Morten Lindberg of Oslo, Norway. He has a beautiful workflow where it's basically one mic per track, and he spends untold hours, days, and weeks setting up rooms and players and that one microphone, working on the score and on performances, and he captures it pretty much one mic per speaker. I'm not saying he doesn't do some post work, but it's basically one mic per speaker, and his recording and reproduction is existence-defining in terms of immersion. So that's a good place to start when asking what the best example is. That's one, but boy, he has trouble with Dolby Atmos not sounding very good. One for you, George: let's say we're recording a piano, and we want the feeling of the piano inside the room, depending on how the song is arranged. Yes, I would advocate more microphones to really speak to the room. And to go to the next thing: at the Grammys, for about three or four years now, we've hung height mics way up in the hall. We haven't used them yet; that's for the time when we can do a real immersive broadcast. A couple of years ago, the guy who coordinates the sound department and I took it to the ranch and mixed it in Atmos, and those height mics were amazing.
So I think when you're miking, you want to think about how you're going to play that sound source back in the space. So yes, you should be thinking about that. And, I know Leslie Ann Jones mentioned this at one point, what companies are coming out with multichannel mics? They have their mic, right, George? Yeah, there are several new ones. There's the Eigenmike, a mic with 24 outputs, that can be put anywhere; you can take an Eigenmike and say, well, I want that a little louder, I want that softer, let's move that. It's not exactly hi-fi technology, so to speak, but it's promising. Sennheiser has one, and there are a couple of other ambisonic mics coming out, beyond the... oh, what's that ambisonics mic? I can picture it, but I can't name it. Also, there are companies like Schoeps and DPA and a handful of others that build things like the "bicycle seat" or other pre-configured array versions. Yeah, right. That's my next thing, honestly: when Alicia gets on piano and vocal, how can I capture it? I'm going to start experimenting with a lot of these mics, and she's an artist who loves to experiment; she loves the technology. I'm excited to record. Even if she does three or four songs in a row from her catalog and we do an experimental session, I'll record all the mics at once and send it to George; maybe start with four mics in the air. The problem with commercial production, as we all know, is how do you fix a vocal without repeating that circumstance? "Okay, Alicia, just sit there at the piano and let's do all the vocals with you there," so we can capture not only the space but also the leakage into the piano mics, all of those things that add to the size of a mix. It's very hard to do.
Why is that hard to do? I remember, yes, West Side Story with the four mics; that's a really good starting point. Now that I have the time, we just finished a gang of projects, and I'm working on a Christmas album right now, so I'm going to try to implement a couple of these things, George, and we'll be beta testers, as we always are. Let's figure it out. I was thinking about what Chuck Ainlay did on the Lyle Lovett record. I got a chance to work on that with him, and when he was thinking about miking the drums, he didn't use stereo overheads; he used quad overheads on the drums. Yeah, and it comes out sounding supernatural. And he had room mics on the horn section, and it's pretty amazing. I would love to remix that record. And there's a question here, somebody asking about custom HRTFs, which I think we should speak to. Eric, I don't see that one. It's at the very bottom; it asks, are there custom HRTFs? And I know for us, we can get them made for Sony, and I've heard there are consumer apps that will take a shot of your ear. I'm going to be the naysayer: they don't work very well. Yamaha started this 15 years ago; it takes a picture of your ear and then tries to find a match with an existing HRTF in a data set with the same look. It's kind of an AI approach, but it doesn't really work well. What does work well is measuring individual HRTFs at different source locations and building an HRTF set, and that's where Sony, and I think Dolby to some extent, are moving ahead: that idea of custom HRTFs, because that's the best. If we could get that, man, we'd have it made, but it's very expensive, and it's limited. And I'd like to say: we've got to do this panel a year from now and see how the technology has developed. Right, that's a great point.
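The "photo match" approach George describes, matching a listener to the closest entry in a database of pre-measured HRTF sets rather than measuring them individually, boils down to nearest-neighbor selection. A minimal sketch, with entirely made-up set names and measurements (real apps extract features from an ear photo; here we just use a vector of anthropometric dimensions):

```python
import numpy as np

# Hypothetical database of generic HRTF sets, each keyed by a few
# anthropometric dimensions: ear height, ear width, head width (cm).
# Real data sets contain measured impulse responses per direction.
hrtf_db = {
    "set_small":  np.array([5.8, 3.1, 14.0]),
    "set_medium": np.array([6.4, 3.4, 15.2]),
    "set_large":  np.array([7.0, 3.8, 16.5]),
}

def match_hrtf(measurements: np.ndarray) -> str:
    """Pick the HRTF set whose stored dimensions are nearest
    (Euclidean distance) to the listener's measurements."""
    return min(hrtf_db, key=lambda k: float(np.linalg.norm(hrtf_db[k] - measurements)))

# A listener measuring 6.5 / 3.5 / 15.0 cm lands closest to the medium set.
best = match_hrtf(np.array([6.5, 3.5, 15.0]))
```

This also makes George's criticism concrete: nearest-match only ever gives you somebody else's ears. Individually measured HRTFs at many source locations sidestep that, which is why he calls them the best, and the most expensive, option.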
We need to keep doing these to make sure the technology is developing, not just allowing these companies to market something they're not developing and evolving. It's the people in the field, like us, every day, the people in this chat, who are trying to find solutions and solve problems, and I think it's really important that all of us collectively just keep trying to evolve the technology. As I said, we're beta testers in all of this; it's like the Wild West. And I think we need to keep holding the record labels responsible, get the artists more involved in the immersive mixing, and treat the budgets and the protocols the way you treat all the other deliverables. We've come a long way, but we're not going to hit a home run every time. We're not going to get it perfectly right every time; we're going to have to do a lot of experimenting, and a lot of being honest when something doesn't sound good: let's dig into it and fix it. Yeah, companies don't experiment anymore, I don't know whether you've noticed. They're not much for experimenting; they used to push innovation, and now, I hate to say it, they're sort of the stewards of back catalog. The business model has changed: a lot of people, not everybody, but a lot of folks, just turn in their record to the label rather than go through the artist-development part of it. And this is so philosophically left field, but would Springsteen, if he tried to come out today, or Dylan, have had the chance without a label, and somebody behind them who believed in them, to push it and allow them to develop?
In a way, that's where we are in the immersive world: we need to allow it to develop, but we also need to push it, to create the best use of it, not just a functional use. We don't want this to turn out to be the 3D television of the audio world. We want to be able to say: we experience the world in three dimensions, so let's experience our art in the same way. But it takes attention to detail, it takes commitment, it takes respect for the format, and it takes pushing the boundaries to make sure people hear it the way it's supposed to be done, with best practices, not just "let's take the stems from the stereo, take all the super-compressed buses, and deliver those to the labels," and then the labels say, "hey, we're going to take this and make it immersive." You've already committed so much at that point. And that was one of the amazing things about this project: the vision of going back to the source. I think we really found, through all the work and experimentation, that that is the best practice, and I hope going forward labels and artists do that as well: go back. Correct. And look, our first album delivery was on tape. I had to go back and listen to those stereo mixes and recreate edits that were done on an SSL with automation; I had to go in and mute the files; we had to recreate the arrangements. And I think it's important, Michael, to deliver to the mixer your raw files, your stems, and your master mixes. And we got in a groove.
Render your plugins. George and I learned that from the first song I gave him: I'd left Auto-Tune on, or Melodyne or something. Render, because there are different versions of plugins when you open old sessions; render your files. And I would love to do another class somewhere on how to build an immersive template, because once we found our template, we had such a sweet workflow. Yeah, we had to learn to avoid the plugin swamp; the plugin swamp is a dangerous place to go. Exactly. So look, we're over time. I want to give a huge shout-out to the mayor's office and everyone involved in putting this together, the Producers & Engineers Wing, Maureen Droney. We're coming to you from all parts of the world: I'm in Milan, Italy; George is in Montreal; Eric is in California; and Michael is in Berkeley, Northern California. I just want to say thank you, everyone, for the effort you made in joining this panel. Let's do this again. Great. Let's keep evolving the technology, and thank you all. We'll see you next time, right? Yeah. Thank you, everybody. Really appreciate it, everybody.