I'm Conrad Beventon. I'm here from Focusrite. We make audio interfaces and various other kinds of audio equipment. I'm talking about AES67, a standard for networking in the pro-audio industry.

Just to set the scene: the industry we're looking at covers studio audio, live sound, broadcast, and also theatres, houses of worship, all those kinds of places that have sound systems handling audio. Traditionally audio has been done over analogue, there was a move to digital, and more recently there has been a move to networked audio. The devices we're looking at here range from microphones and preamplifiers for getting your audio in, through mixers and effects to process it, and interfaces to connect things up, all the way out to amplifiers and speakers so you can actually hear it.

Here's a typical live sound setup, for example. We have microphone inputs and various sound sources which get fed into the audio network. This is taken via Ethernet over to a mixing console, perhaps, or perhaps to a computer so you can record. The audio is processed there and then sent back out over the network to the output systems, for monitoring on the stage and for front-of-house sound so the audience can hear it. That's a pretty typical setup.

An audio network has specific requirements. We're only dealing with local area networks; we're not putting audio over the internet. That frees us from a lot of variability in traffic levels and congestion. We want low latency, we want high quality, and most importantly we want lots of channels. We cannot have lossy compression in these systems; we want good-quality audio.

There are existing technologies already serving this market, various proprietary ones under various brands, and we can look at them in some detail. There are several levels of protocol, and several solutions within those levels, across the different proprietary audio technologies. As you can see on this chart, there's a lot of proprietary stuff in these systems, and that actually hurts interoperability: connecting these audio systems together is quite difficult. That's a limitation that impacts convenience, and it impacts future-proofing, which can be quite an issue; a lot of this equipment is expensive, so knowing you can keep using it in the future is important.

There's a particular pain point in the broadcast industry especially. With a lot of live sound and studio work you can just be careful about what you buy, what you put into your system and whether it will be compatible. Broadcasters, though, want to go to other people's events. They want to take their OB truck, their outside broadcast truck, to the stadium (this picture is from Wimbledon), plug into the audio and video systems in that stadium and get feeds of what's going on so they can broadcast it. We can't deal with bad interoperability there.

AES67 is a standard developed by the Audio Engineering Society aiming to bridge these technologies together.
It specifies various levels of the system, but it specifically excludes device discovery and control, because those aren't strictly necessary for transporting audio between systems, and it was seen as better to get some level of standardisation than to devolve into a big discussion that goes nowhere. So those areas were excluded.

The technologies in AES67 are actually all fairly standard, existing IT technologies. The audio industry is a relatively small group of companies compared to, say, your Googles or your Amazons; they don't necessarily have huge numbers of engineers to throw at the problem, so anywhere they can reuse an existing technology is a benefit. This is AES67 just bridging their technologies over IP, and here's an overview of what's in it. At the very bottom layer we have our audio format: we just use PCM, nothing fancy there. We use RTP packetisation. We use the Precision Time Protocol, which is an IEEE standard. Then for session description and connection management we actually borrow from the voice-over-IP world and use SDP and SIP. I'm going to walk through each layer now.

At the bottom layer is the audio format. We standardised on linear PCM. Two formats are mandated by the standard: those are the 48 kHz sample-rate formats. Then there are optional formats: the lowest quality is CD quality (44.1 kHz, 16-bit), and the highest is 96 kHz, 24-bit. The standard leaves it open for you to implement other formats, but these are the standard ones.

Once we've got our audio, we put it into packets. We use RTP for this, and for simplicity the AES67 standard actually specifies that you cannot use CSRCs, so no contributing sources, and you cannot use header extensions. A lot of the audio processing happens on embedded hardware devices, so keeping things simple is important here. The standard mandates support for up to 8 channels per stream and has short packet times; the packet time is the amount of audio, time-wise, you put in a packet, and keeping it short keeps the latency down. Multicast is optional.
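To put numbers on that, here's a minimal Python sketch, not from the talk itself, of how one such packet could be assembled for the common case just described: 48 kHz, 8 channels, 24-bit samples and a 250-microsecond packet time. The function name and values are hypothetical; the point is the arithmetic: 12 samples per channel per packet, giving a 288-byte payload behind a fixed 12-byte RTP header, with no CSRCs and no header extensions.

    import struct

    SAMPLE_RATE = 48000        # samples per second
    PACKET_TIME = 0.000250     # 250 microsecond packet time
    CHANNELS = 8               # channels per stream
    BYTES_PER_SAMPLE = 3       # L24: 24-bit linear PCM, big-endian

    SAMPLES_PER_PACKET = int(SAMPLE_RATE * PACKET_TIME)  # 12

    def rtp_packet(seq, timestamp, ssrc, frames, payload_type=96):
        """Wrap 12 frames of 8-channel L24 audio in a minimal RTP packet.

        `frames` is a list of per-sample tuples, one int per channel.
        CC is zero and the extension bit is clear, as AES67 requires.
        """
        header = struct.pack(
            "!BBHII",
            0x80,              # version 2, no padding, no extension, CC=0
            payload_type,      # dynamic payload type, mapped in the SDP
            seq & 0xFFFF,
            timestamp & 0xFFFFFFFF,
            ssrc,
        )
        payload = b"".join(
            sample.to_bytes(BYTES_PER_SAMPLE, "big", signed=True)
            for frame in frames
            for sample in frame
        )
        assert len(payload) == SAMPLES_PER_PACKET * CHANNELS * BYTES_PER_SAMPLE  # 288
        return header + payload

At that packet time a sender emits 4,000 packets per second per stream, which is the trade-off the talk describes: more network overhead in exchange for lower latency.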
In order to make an audio network work, there's synchronisation. This is the bit of magic that differentiates an audio network from a typical media setting. In a typical setting there's some tolerance for things like a sample glitch or a slight delay, and there are jitter buffers. In an audio network there's much less tolerance for this: a sample glitch amplified up on a massive sound system is quite harsh on the ears, shall we say. To do this we use an IEEE standard, IEEE 1588. This uses a consensus election to select a master clock, and then, periodically, between four and ten times a second, the master clock initiates synchronisation with all the slave devices, and over a period of time those clocks converge. Once we have this network clock, as it's called, we derive a media clock by a simple multiplication: in the typical case, running at a 48 kHz sample rate, it's simply defined that one second of network clock contains 48,000 sample times.

The synchronisation process is done in two stages. First is the clock sync, where the master sends a sync packet and then notifies the receiving slave of the timestamp it carried. Once that's done, the slave can initiate a delay request: it says "I think the time is this", that goes to the master, the master measures what its own time is and sends that back, and using this we can measure the network delay. After a few cycles the clocks converge, because we know our network delay and we know where our master is. It typically takes a couple of seconds for a clock to converge in most situations.

So once we've got our audio, we've packetised it and we've synchronised, we need a way to tell our different devices what the actual audio streams are, and this is where session description comes in. We mostly use the standard SDP; there are a couple of additional header attributes in our SDP, which specify what packet time we're using, what our clock sources are, and how to map payloads. The rest of it is standard SDP, so if you've seen an SDP before it looks something like the sketch below. In this case we're sending 8 channels of audio at 48 kHz with a 250-microsecond packet time, so we're sending quite short bursts of audio quite often, which keeps latency down. Here we specify a Precision Time Protocol clock domain, which is our synchronisation mechanism, and then we just offset the media clock from that time domain.
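Here's a hedged reconstruction of the kind of session description that was shown on the slide. The addresses, session IDs, PTP grandmaster identity and media clock offset are made-up placeholders; the AES67-relevant attributes are a=ptime for the packet time, a=ts-refclk for the PTP reference clock, and a=mediaclk for the media clock offset.

    v=0
    o=- 1423986 1423994 IN IP4 192.168.1.10
    s=AES67 example stream
    c=IN IP4 239.69.1.10/32
    t=0 0
    m=audio 5004 RTP/AVP 96
    a=rtpmap:96 L24/48000/8
    a=ptime:0.250
    a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0
    a=mediaclk:direct=0

The a=rtpmap line maps payload type 96 to 24-bit linear PCM at 48 kHz with 8 channels, matching the packetisation described earlier.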
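Going back to the synchronisation exchange for a moment, here's a small sketch of the arithmetic behind it. This is my own illustration of the textbook PTP calculation, assuming a symmetric network path; it's not code from the standard.

    SAMPLE_RATE = 48000

    def ptp_offset_and_delay(t1, t2, t3, t4):
        """t1: master sends sync              (master clock)
        t2: slave receives sync            (slave clock)
        t3: slave sends delay request      (slave clock)
        t4: master receives delay request  (master clock)"""
        delay = ((t2 - t1) + (t4 - t3)) / 2
        offset = ((t2 - t1) - (t4 - t3)) / 2
        return offset, delay

    def media_clock(network_time_s):
        # One second of network clock is defined to hold 48,000 sample times.
        return int(network_time_s * SAMPLE_RATE)

    # Example: the slave's clock is 2 ms ahead and the one-way delay is 0.5 ms.
    offset, delay = ptp_offset_and_delay(10.0000, 10.0025, 10.0100, 10.0085)
    # offset == 0.002, delay == 0.0005; the slave steps its clock back by `offset`.

Repeating this exchange four to ten times a second is what lets the clocks converge within a couple of seconds, and the media clock then falls out of the network clock by that simple multiplication.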
So now, to connect things together, we have the connection management. This is SIP, again a very standard protocol, used widely in video conferencing, voice over IP and various other industries, and it's based on URIs. For AES67 the standard actually recommends serverless mode. SIP allows you to put a lot of infrastructure in your network for transforming SIP requests and routing them to places, and that's not recommended for this use case; it over-complicates things. So in AES67 serverless mode is used, and you just have direct connections between your audio devices. A simple SIP session, and this is as simple as it gets, is: device A invites; if device B can receive the media, device B just says "yep, OK"; media flows; and then to tear the connection down there's a BYE, and an OK (there's a sketch of this exchange below). That's the simplest case; if you look up SIP you'll see there are lots of more complicated cases available.

The other connection management we have is IGMP. This is for multicast, so it's possible to have a device put its audio out onto the network as a multicast feed. In this case we just use standard IGMP, which tells the network routers where this audio is required. There's no direct connection between the sending device and the receiving device; as far as the sender is concerned it's just putting audio onto the network, and whatever needs it will pick it up. That simplifies our stack, and it has all the usual multicast advantages, bandwidth usage and so on.
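As a rough illustration of the serverless SIP exchange just described (device names and addresses are invented), a whole session can be as simple as this ladder:

    Device A (sender)                        Device B (receiver)
    |--- INVITE sip:rx@192.168.1.20 ------->|  carries an SDP like the one above
    |<-------------- 200 OK ----------------|  B can receive the media
    |--- ACK ------------------------------->|
    |======== RTP audio flows A to B ========|
    |--- BYE ------------------------------->|  tear the connection down
    |<-------------- 200 OK ----------------|

No proxies or registrars are involved; the two devices talk to each other directly.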
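On the receiving side, joining a multicast stream doesn't involve talking to the sender at all. On most operating systems the IGMP membership report is triggered by a socket option; here's a minimal Python sketch, reusing the made-up group address and port from the SDP sketch above.

    import socket
    import struct

    MCAST_GROUP = "239.69.1.10"   # multicast address from the example SDP
    PORT = 5004                   # RTP port from the example SDP

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Joining the group makes the kernel send the IGMP membership report;
    # the network then knows this audio is required on our segment.
    mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    packet, sender = sock.recvfrom(2048)  # first RTP packet of the stream

The sender never learns who joined; it just keeps putting packets onto the group address, which is exactly the decoupling described above.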
So that's what's in the standard. Who are the organisations behind it? There are two. There's the AES, the Audio Engineering Society, who handle the standardisation and the technical discussion around the standard. Then there's the Media Networking Alliance, who are more involved in the actual promotion of the standard and some of the more informal discussion around what should go in and how to do it. There are a number of members of the Media Networking Alliance: Focusrite is there, there's Yamaha, there's Harman, there's Bosch Security. There are also associate members; these tend to be less the equipment manufacturers and more the actual media companies. For example, I believe Swedish Radio is an associate member, as is the BBC, and I think Walt Disney Imagineering; those kinds of companies are associate members.

So the things I want to get across here: AES67 is standardisation, and that improves interop; it's bridging a gap where previously you just couldn't connect devices together. It shows the reuse of general-purpose technology in a specific environment, in this case audio. And it shows the growing interaction between the technology industry and the pro-audio industry; previously there was some interaction, but with the growth of audio networks there's a lot more overlap now. For more information there's the Media Networking Alliance, and you can go to the AES for copies of the actual standards documents; they're pretty long and deep, as you'd expect. I think that's it for now. Thank you.

[Question from the audience, partly inaudible:] All I could find, wanting to write a driver for Linux, is the AES standard document itself, and that's 50 bucks to get. It's not open, and I don't have the right to redistribute it, so even if I pay the 50 bucks I can't develop an open driver from it. So how do you see the future of a Linux driver, given the dominance of Audinate and the patent group that lies behind this? These are, of course, hardware makers, and the driver tends to come for free as long as you buy some hardware. Is there some progress on that front?

So the question is roughly that there's certainly a lack of open-source code for this standard, and there's a certain amount of dominance by Audinate and their patents. I'd say that in terms of opening things up, having the open standard is probably the first step, and I admit there's a long way to go in the interaction between, say, pro audio and the open-source community. I view having the standard as a first step on quite a long path.

[Comment from the audience:] A point of information: AES67 is mostly supported in GStreamer, everything in the standard except for the connection management. When we implemented it we just implemented it, I think either from a draft or from the underlying documents; we never paid the $50, because we try, where possible, to avoid paying taxes to the standards-industrial complex.

I didn't actually know that, thank you.

[Question from the audience:] The main question is, we have a lot of problems trying to interact with professional manufacturers of sound equipment on Linux. Clocking, for example, is an issue we have. There are a lot of applications, but do you have anything to suggest to the kernel people about how they can make ALSA in particular more suitable for professional use? Because it's a struggle.

So this is a question around sound development on Linux and how we might make ALSA better for the professional case. That's a tough one. [From the audience:] You need real support! [Speaker:] This is a tough one because, personally, being more in audio, I haven't been on Linux all that much recently, and I don't really know; ALSA is quite deep into the drivers, and those areas I'm not so sure about. Thank you.

[Session chair:] Can I ask that we return in five minutes for AES70.