Hello everybody. Welcome to this talk about SoundWire. My name is Vinod Koul. I work for Linaro, and I maintain the SoundWire subsystem in the Linux kernel, along with some other bits like dmaengine and some audio stuff. SoundWire is one of the new protocols developed by MIPI; it's a MIPI standard. I wrote the initial bus framework for it, and we upstreamed it. So we're going to talk about that a little bit. To start with the agenda: first we'll go over why SoundWire, and what it tries to solve that is not done by the existing audio protocols out there. Then we'll move to a bit of topologies, to build up how SoundWire evolved and how it tries to do things differently compared to other buses. Then we'll move into the protocol and try to skim its surface. It's quite a vast protocol, and quite complicated, in fact, in my opinion, so we can't do justice to it in the next 30-40 minutes; we'll just skim the surface and introduce the topics a bit, and then move on to the Linux subsystem and introduce the APIs and data structures a bit. Standards, we all know: there are always n standards, we try to solve that with an (n+1)th standard, and then there are n+1 standards. Hopefully SoundWire will not be like that, as we always wish, but we'll see how it pans out in the future. There has been quite decent interest from both SoC folks as well as codec vendor folks, so we'll probably see devices shipping SoundWire next year, and more adoption by the vendors. In comparison to the existing audio standards, the first thing that comes to mind is HD Audio. This is something you probably run on your laptop or desktop. HD Audio was done way back in the early 2000s, driven predominantly by Intel. It's PC-centric, takes a large number of pins, and is not really friendly with respect to power.
So the embedded world or the mobile world doesn't use it; what they do instead is I2S, or TDM as we call it. Again, the pin count on this bus is not great, and it doesn't do control, so you always need a sideband I2C or SPI bus for control, plus GPIOs. Then we also have PDM, which is basically direct-attached microphones; you can connect them to your SoC or your codecs. With PDM, again, the limitation is that you can only have two devices on a particular link, and you can't do command and control. Then, before SoundWire was developed, there was another MIPI protocol called SLIMbus. People tried to solve all these problems with SLIMbus, but eventually there was no widespread adoption, for various factors, the primary one being that the complexity of the protocol was too much and people couldn't do simple devices; the cost of doing simple devices was too high in terms of transistors. That led to SoundWire, which can be viewed as an improved version of SLIMbus in a lot of ways. So what did people want out of SoundWire when development started? A bus with a lower pin count that can do both data as well as command and control. When we say control, it's not just register reads and writes, but also the capability to wake the system up, so that we can eliminate the need for external GPIOs on the system. SLIMbus was initially targeted to do both PCM and PDM, but unfortunately there were restrictions on how PDM evolved in the protocol, so PDM was never really supported; with this protocol, we tried to support both PCM and PDM. Multi-drop is another feature that comes to mind, because I2S is predominantly a point-to-point link, and PDM is a point-to-point link, so having a multi-drop bus helps in building typical device topologies.
And then, as usual, you want to solve the problem not just for embedded or mobile, but for PCs as well. In this respect, here is how we build typical topologies for SoundWire. You initially start with a simple case where you have a master on the application processor, and that master is driving your codecs, which can be speakers or amplifiers or microphones and so forth. That's a simple description. Then what we can do is have additional masters and split them into different functions, as in this diagram: all the rendering (playback) devices on one master and capture on another, or split differently, with a simple codec implementing your wired headset on one and a smart speaker, or digital speaker as we can call it, on another. This is just a simple topology, but if we make it more complicated, we can have a multi-lane system. If you look at this topology, you have a single master, and that single master is able to drive a quite modern, complicated mobile audio topology where you have a codec connected as well as a modem, probably a DSP as well, and a BT/FM chip. And if you notice in this diagram, not all data lanes are required to go to the master. So you can have communication where the master is not really involved and two slaves talk to each other directly. These are the kinds of use cases SoundWire tries to solve; how it goes, we'll see next year down the line with devices in the market, but that's the main motivation for doing SoundWire. Another one is bridge topologies, if your device lanes or other things have some limitation: you can implement a bridge device which is a master on one side and a slave on the other, bridging in between. So this gives us a bit of motivation for what we should expect from the SoundWire protocol. Now let's go into the protocol a little bit.
As you can see in the previous diagrams, it's a two-pin bus with data and clock. We can do both 1.2 and 1.8 volts. And as we saw in the multi-lane diagram, you can have multiple data lanes: if your bandwidth is not sufficient or you want complex audio topologies, you can add data lanes. It's a serial bus, but a dual data rate bus. What that essentially means is that data is sampled on both the rising edge and the falling edge of the clock, which drives your bandwidth to twice the clock rate of the bus. Then, frame length: typically in serial bus protocols you have a frame with a dedicated header and then a payload, fixed for the duration, and you cannot adjust it. But in typical audio scenarios, you may be rendering to your headset and then want to start a capture session as well, or you have a Bluetooth chime coming in, or things like that. So your bandwidth requirements are not static; they keep increasing or decreasing based on what you want to do with the bus. As a result, you want a feature where the frame length is also variable: you can change it on the fly at runtime without having to sacrifice audio quality. That is something SoundWire solves as well. As a consequence, we don't have a static clock; we can scale the clock dynamically, right up to 12.8 MHz, which is basically the electrical limit of the protocol right now. Then, since it's predominantly focused on the mobile or embedded ecosystem, people want it to be power friendly, so at runtime we can do a clock stop and power down the whole bus. And obviously it supports both PCM and PDM. Along with that, since it's targeting audio, we have support for both isochronous as well as asynchronous audio rates.
Essentially, that lets you run, on a particular clock rate, both the 44.1 and 48 kHz families of frequencies. That's quite helpful, because a lot of protocols are not able to do that, so you have to rely on external mechanisms. Now, as the modern embedded audio ecosystem evolved over the last, let's say, five or eight years, we have seen the evolution of DSPs. The DSP may be on the SoC side or on the codec side, but if it's on the codec side, what the codec guys want is to be able to load huge blobs of data, which are basically coefficients and parameters. Typically what they would do is put a side SPI bus on the part and drive the data blobs through that. SoundWire tries to solve this by having a dedicated bulk transfer capability, so you can do tons of register writes with very good bandwidth; depending, again, on the implementation of your master and slave devices, bandwidths of megabits are possible. Then, low gate count. One of the good things about the protocol definition is that for the master a lot of things are mandatory, but for the slave most things are actually optional. What that means is you can make a really big, complex slave device, but you can also strip out all the optional features and come up with a device that is very simple in nature, so your gate count ultimately goes down. That helps in having simpler implementations and drives the cost down, and hopefully adoption up. From the protocol point of view, we can enumerate what is on the bus: we can find out when devices show up or drop off the bus. That is allowed in the spec, but the whole enumeration process is driven by software, so software needs to play a bit of a role here; we'll go into that in a little bit. The current SoundWire spec version is 1.1, and it doesn't define device classes, but there's a provision for them in the spec.
So in the future we should expect device classes to come; there's already work going on, maybe a working group for that. What device classes essentially mean is that for simple devices like microphones or speakers, people should not need to write additional software: your device-class driver will handle them, just like the USB device-class concept. That is still work in progress, it has not been done, but the spec already has a provision for it, so the door hasn't been left closed. Okay, we've been talking about various device types; let's look at what the actual devices are. The first is the master. The master is the device which drives the clock and does the data handling. Then we have a concept called the bus keeper, which assigns who should be driving a bit slot. Coming back: what is a bit slot? Since, as I said, it's a double data rate bus and we pump data on both edges of the clock, each edge of the clock is referred to as a bit slot; that's the terminology the SoundWire spec uses. So the master does the bus management and assigns the owner of each bit slot: who is allowed to drive on that particular bit slot and who is not. That is essentially the role of the master. Then we have slaves. Slaves are, as we said, audio peripherals like microphones, codecs, smart speakers; up to 11 slaves can be connected on a bus. 11 is a magic number; we'll go into why a little later. A slave can interrupt, and it can wake up the system. Say you have an audio implementation, and when you're idle you would expect to power everything down; then somebody inserts a jack. The slave has in-band signaling mechanisms so that it can wake up both itself and the master on the AP. That's allowed. Similarly, it can interrupt.
So if there's some event, let's say you have a DSP implemented in a codec and it detected some key words, you can wake the system up or interrupt it with those things. And a slave reports status. We have two types of status: one is the SoundWire spec status, where the device tells you its current state, and then there's an implementation-defined status, which is based on whatever your device implementation is. There's another interesting piece of equipment in the spec called the monitor. A monitor is a sophisticated piece of test equipment which you can attach to the bus; it can snoop the bus and help you debug and test. There are a couple of vendors who provide SoundWire monitors in the market today. If you desire, it can also take over bus management from the master and start issuing commands. But even in that case, the master still has to keep driving the clock; the monitor is not allowed to do that. Data ports: since the bus is geared towards audio, one of the things the spec defines is the concept of data ports. Each master and slave has to define how many data ports it supports; a data port is basically a logical entity where you send or receive the payload data. And as I said, since people want implementations they can simplify, having a data port is mandatory, but the type of data port is optional. You can have a very simplistic data port which doesn't do many of the fancy bells-and-whistles features, or you can have a complete data port which does a lot of extra things. So the data port types are: simple, which is very dumb and doesn't do anything extra; reduced, which has some capabilities; or full, which has all the capabilities supported by the spec. On a particular device we can have up to 15 data ports. Zero is always reserved for the bulk transfer capability.
We've talked about bulk transfer for writing large blobs of data; if a device supports that, it does so on data port zero. Data ports 1 to 14 are reserved for audio functions; this is where the actual audio streams are directed. And 15 is basically an alias for ports 1 to 14, so if you have something like a broadcast you want to send out or receive, that's done via data port 15. How does the frame look? Here's the interesting bit. As you can see in this diagram, the SoundWire spec defines a frame as a combination of rows and columns, and these are the allowed row and column values: rows start from 48 and the maximum is 256, columns start from 2 and the maximum is 16. As you can see, there's a nice correlation between these values and the audio frequencies you want to support. A particular frame is a combination of rows and columns. We also need to send a control word, which is basically your command and control field: the first 48 rows of the first column are always dedicated to the control word. The rest can be used for your audio data or bulk transfer as necessary. This is all decided by the master at initialization time, before it starts to program the bus. A PCM stream can be allocated slots in any rows or columns, as we'll see. PDM, by its nature, generates one bit at a time, so it's recommended to dedicate one particular column to PDM. That essentially ensures there is no conflict between PCM, PDM and the control word. One more thing: as you can see, in serial transmission we always transmit a row first, all the columns of that particular row, and then the next row.
So it's basically a raster-scan kind of mechanism. This also ensures, for things like PDM, that you keep pushing a bit at a time down the column as the bits arrive, if your clock rates match. It also ensures that, since the control word is spread across rows, momentary errors on the bus get distributed evenly; a couple of bit errors won't make your whole control word go for a toss. Based on these rows and columns, here are a couple of examples of typical, commonly used frame shapes. This is a 48 x 2, where the first column is dedicated to the 48-bit control word and we can send audio data on the other column. 48 x 2 is very commonly used if you want to do a simple playback of 24-bit stereo, which you'd probably see with your headset or your speakers; this is the most common frame shape you will encounter. Then there are other examples, like 48 x 4, where you may want to do both playback and capture. As you might have noticed, if you do that with 24-bit audio, you will only occupy two columns, and one column goes to waste. It's up to the system designer to choose what topology to support, what rates to support, and how the frame shape should be arrived at. From the implementation point of view, the bus right now doesn't choose anything; it's left to the master, because the master knows the system very well. So we ask the master: given the bandwidth, what frame shape would you like? Another example: rows can also go beyond 48, all the way up to, say, 64, and you can pack the frame with audio data or bulk transfer data. You might have noticed that the bits in the control column after the 48-bit control word are left for payload, but in practical circumstances I have never found a use for them; they kind of go to waste.
So, the 48 bits of control word: what do they look like? This is what we transmit. A particular control word can carry three types of commands: ping, read or write. For ping, there can be a ping request from the slave: a slave can say "can you send a ping command", in which case it asserts the ping request bit and the master sends the ping command. Or it can be a read or write, which are opcodes 1 and 3. Read/write is basically a register read or write for the slaves. If we are doing a register read/write, we put the device address here, the register address for the device in bits 8 through 23, and the register data in bits 33 to 40. That is how a register read or write is performed. If we are not doing a register read/write, things get interesting, because we are doing a ping command. The ping command, which can be initiated either by the master or via a ping request, is essentially a way for the master to know what all the slaves are doing and whether there is any status update on them. We talked briefly about the monitor earlier: if there's a monitor and it wants to take over control of the bus, it can do that by asserting the bus request bit, and whenever the master is ready, it grants it with the bus release bit here. When a ping command is issued, each attached slave on the bus is supposed to send its status: since we have 11 slaves, all eleven slaves report their status here, two bits each. We'll get to the statuses a little later. Then we have a static bit definition all the way from bit 24 to 31: this is the sync word. When you boot the system and the clock starts, a slave has to synchronize to the master clock, and the way it does that is by listening for this particular sync word.
Once it has detected the sync word, it derives the frame shape and frequency information and then latches onto the bus. Along with this fixed sync word, there's also a provision in bits 41 to 44 for a dynamic sync. The fixed sync word is designed such that it is not commonly found in audio payload, but what if it does appear? The probability is very low, and to reduce that low probability even further there are these four dynamic sync bits, essentially a pseudo-random binary sequence; that eliminates the possibility of falsely latching onto payload that happens to match the fixed sync. Okay, what else remains are these bits: this is the parity for the frame, so you can check the parity and do verification. Now, assuming we are doing a read or write: you've sent the opcode here, the device address, the register address, and if it's a write, the write data is here; if it's a read, the slave is supposed to put the read data there. So within this one frame you have actually performed the whole read or write operation, and the result of that operation is given in these two bits: this is where the slave tells you whether the read or write was successful, by setting the NAK or ACK bits. Now, in the previous diagram there's a device address; let's see how that device address is derived. Each slave device is supposed to implement a 48-bit device ID. Here is how that 48-bit value is formed. The first 16 bits are the manufacturer ID. This is a standard MIPI manufacturer code assigned by the MIPI Alliance, and you can find the list at mid.mipi.org; each vendor has a specific code, just like your PCI vendor ID. Then each vendor, on its own, assigns a 16-bit part ID for each of its parts. One unique thing about audio devices is that you may have several of the same kind of device on a particular master; how do you uniquely distinguish between them?
For example, you may have four microphones attached for a beam-forming application. For this case, SoundWire gives you four additional bits for a unique ID. It's left to the implementers how they set it, probably a GPIO pull-down or some board fuse programming, but it allows you to uniquely address devices of the same type within a particular bus. Along with that, we dedicate four bits to versioning; right now the SoundWire protocol version is 1.1, so that's what we should expect the bus to read. There was a 1.0, but I don't think anybody implemented that. And as we said, there's a future provision for classes; those bits are still reserved, they have not been defined yet. Okay, so this 48-bit value is what uniquely identifies a device, but your control word itself is only 48 bits, so we cannot really send 48 bits just to address a device. So there's always a translation from the device address to a device number, which is the 4-bit value we saw in the control word. This 4-bit value is assigned by the master to the particular slaves, and it plays a role in enumeration; stay with this for a couple of slides. Device number zero is for devices which are attached to the bus: it means you're synchronized to the clock, you understand what frame shape is running and so forth, but you haven't been enumerated yet. When you are properly attached and assigned a device number by the bus, you will be somewhere in device numbers 1 to 11. 12 and 13 are reserved for groups, so you can group a class of devices. Let's say you have two microphones, or two identical speakers, and you want to program them all at the same time because it's a stereo pair; you don't want to program them independently. So you can create a group of devices which you always communicate with and program at the same time.
That can be done with the two groups, 12 and 13. 14 is reserved for master use, which it can use for internal programming. 15 is the broadcast device number: if you send a command with 15 as the device number, all the slaves are supposed to respond to it. This kind of solves the mystery of why we can only support 11 devices: it's because of this partitioning. Now, the only missing piece in this puzzle is enumeration. When a slave is synchronized to the clock, a ping command will be issued (or the slave can request one), and at that point the slave will report that it is attached, on device number zero. This is the default boot sequence for a slave. Then the software goes and reads the 48-bit device ID, which is in DevID registers 0 to 5, and assigns a particular device number to that slave. Once a value anywhere from 1 to 11 is chosen, we need to program it back to the device, so we do a write to the device number register. Once that is done, the slave has to come back and report attached on that particular device number. So, going back to the slave status: assuming we are programming device number six, initially the slave comes and says "I am attached on zero"; we go read its ID registers, assign it, say, number six, and then it reports attached there. That is where the enumeration cycle completes. The device number is dynamic in nature: once you assign it, it's not there forever; it can be lost. If you lose the clock, you're no longer synchronized, so you come back reporting attached on device number zero, and you get reprogrammed. Likewise, if you go into a very low power state where you lose sync to the clock, you can't assume anything: in the meantime the master may have made more changes to the frame shape, so your old assumptions about the clock are no longer valid.
So you need to listen to the clock and synchronize again, and in that case the device always goes back to reporting attached on device zero once it's re-synchronized. The SoundWire spec gives you a lot of nice things: it lets you find out what device is on the bus and enumerate it, but it doesn't tell me what that device is. If I get a device on the bus with part number foo and manufacturer number bar, I have no idea what to do with it. So for this, the MIPI folks came up with a spec called DisCo, which stands for Discovery and Configuration. What this spec does is define a lot of properties for master and slave. These properties are optional by spec definition, but in the Linux subsystem we have taken the view that they are mandatory if you want Linux support, for the simple reason that with them we know what to do with a particular slave: how to program it, what the timeout values are, which registers are implemented, whether it has a simple, reduced or full data port, and what capabilities the device has. We're not talking about audio functionality yet, but from the SoundWire protocol point of view: what does it implement, and how many registers can I read and write. That is what the DisCo spec does. It specifies these properties as ACPI _DSD properties or device tree properties. The current Linux implementation actually supports ACPI; device tree we don't do yet. It describes what the capabilities of your master or slave are. Now, switching gears to Linux: this is how the bus looks. This is the bus structure which is created by the master, and then we initialize it. The first member is the device pointer, which points to your master device. Then we have a link ID: you may have multiple masters implemented, so we assign each one a unique link ID to identify it. A master can have multiple slaves, so we store them in a linked list of slaves.
Since each bus needs to keep track of all the device numbers 1 to 11, and which are assigned and which are not, that is done using the assigned bitmap. For synchronization purposes, SoundWire does both control and streaming, so for messaging, which is basically your IO, we use a message lock, and for streaming we use a separate bus lock. The reason for two different locks is to be able to do parallel operations: typically the register programming and the audio programming can run in parallel, and we can push the throughput up. Now, one of the not-so-good things, which I personally don't like about the protocol: it specifies the transmission protocol well, and it specifies how slaves should be implemented, but it leaves completely blank how a master should be implemented. There is no master or host controller interface specification in the SoundWire protocol. So the bus cannot assume anything about the master, and in order to program anything on the master, it needs the master device's help: the master has to provide a bunch of callback ops for master programming as well as master port programming. Then the SoundWire bus, based on the use cases, does the bus parameter calculation (what frame shape is required and so on), and those parameters are stored here. Then we have the DisCo properties, stored in the master properties. Since the intended scenario is audio, and audio streams can come and go at any point in time, we track them through the runtime list. SoundWire supports deferred messaging, which is tracked through the defer structure, and then we have a bunch of timeout values for bank switching and the clock. Last is multi-link: on top of SoundWire, at least what Intel has done to drive more complex usages is to take two masters, tie them together, and run one stream over them.
One example: say you have a big microphone array, like 16 microphones. You can attach eight on one bus and eight on another, or four each on several, and similarly for multiple speakers and so forth. In that case your stream can consist of multiple masters; if so, we set this flag to true and do things appropriately in the bus. And these are the APIs: once you have this data structure, you allocate it and then invoke the sdw_add_bus_master() API. This initializes the rest of the data structures and starts scanning your firmware. Whether you're on an ACPI system or a device tree system, even though it's an enumerable bus, you still need to describe your SoundWire devices in your firmware. One reason for that is that although the bus is enumerable, it's not discoverable: we don't learn a lot of the properties from the bus alone, so we rely on ACPI and device tree for those. So it scans the respective firmware and starts adding the slaves; that's when your slave device objects are created. Once a slave device object is created, if you have a driver for it, that driver is probed. Now, looking at the master ops: this is what you are supposed to implement as a master. Read property is a callback you provide to read the DisCo properties implemented for your particular device. Transfer message is for transferring a data message onto the bus; since we don't have a host controller spec, we can't do that on our own, and we rely on the master to do it for us. Then we can do deferred messaging, for which there's a different callback. One of the things I should point out is that SoundWire supports multiple register pages: with paging you can address a large register space, about 64 KB, with that. Most of the SoundWire spec registers are actually on page zero.
So whenever we do SoundWire slave programming, we need to reset the page address to zero; there's a quick callback to do that. Then whenever there's a bus configuration, you can set it using this callback. And one of the things SoundWire does, as I was saying, is have the capability of dynamically adding a use case without any glitches, which is kind of unique to the SoundWire protocol compared to others. The way that is done: you figure out the parameters you want for the new stream to be added, and program them; once everybody is programmed and configured, we switch the whole bus at one point in time, because SoundWire implements two register banks. So you have a shadow bank: you keep programming the alternate bank, and then you switch to that bank, and at that same moment all the devices are synced and switched. That ensures there are no glitches in the audio; you program everything first. For that, we have a couple of callbacks to the master to let it know we are going to perform a bank switch, before and after. The slave device is a simple kind of Linux device: we track the 48-bit device ID and embed the Linux struct device, as done by other devices. Slave status, whether it is enumerated, attached or detached, is what we track here. Then a pointer to the bus it's part of. A slave also needs some operations, for port programming as well as for the link programming; that is done through the slave ops. Again, we have DisCo properties for slaves, stored here, and the slave is a node on the bus's linked list. And since you can have multiple ports, you can program them asynchronously and wait on them, so we have a completion for that. And finally the device number, which is assigned. Next, this is how the SoundWire driver looks.
If you're implementing a driver, this is the most interesting structure for you: name, probe, remove, shutdown, as in a standard Linux driver, then your ID table for the devices, and your slave ops. So how do you register a slave? If you're implementing a slave driver, you call the SoundWire driver register API and that registers the driver for you. Before that, the bus would have scanned the firmware for your SoundWire devices. On a match, your probe method gets called and your driver is attached to the device. But a word of caution: at this point in time, your slave is not yet attached on the bus. So do not attempt to communicate with the slave in probe. That's a little tricky, but that's the way the protocol works. Only once the slave has attached can you attempt any communication with your device. So until you get an attached status update, do not attempt to call into the bus. We probe only on the manufacturer ID and part ID; the instance ID is not used, for obvious reasons. And on device enumeration, we update the status to the driver. This is when you can start reading the registers of the device. These are the slave ops. We have read_prop. Since we can do interrupts, we have an interrupt callback for that. Then whenever the status changes, the bus lets the slave know what the status is. Whenever the bus config changes, we have a bus_config callback so the slave knows the new configuration. And whenever ports are prepared or de-prepared, we let the device know about the port parameters. Now, coming back to DisCo properties: since we said they are mandatory, we provide two nice APIs, sdw_master_read_prop and sdw_slave_read_prop. These APIs will go and read your respective firmware, whether it's device tree or ACPI; we don't actually care which, because we use the device property API, which is agnostic to the firmware implementation you have.
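The "don't touch the device in probe" rule can be sketched like this (a self-contained model; the enum and field names are illustrative, not the kernel's): register IO waits until the bus reports the slave as attached via the status-update callback.

```c
#include <stdbool.h>

/* Illustrative slave status values; the real bus reports these through
 * the update_status slave op. */
enum slave_status_model { MODEL_UNATTACHED, MODEL_ATTACHED };

struct slave_model {
	bool probed;
	bool hw_ready;   /* set only once register IO is actually legal */
	enum slave_status_model status;
};

/* probe(): the driver is bound, but the slave may not yet be attached on
 * the bus, so no register reads/writes happen here. */
static int model_probe(struct slave_model *s)
{
	s->probed = true;
	return 0;
}

/* update_status(): once the bus says ATTACHED, it is safe to talk to the
 * device -- read its IDs and registers, finish the setup. */
static int model_update_status(struct slave_model *s,
			       enum slave_status_model status)
{
	s->status = status;
	if (status == MODEL_ATTACHED)
		s->hw_ready = true;
	return 0;
}
```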
And they will go and read the standard DisCo-specified properties and initialize your property data structure. Now, the property callbacks are mandatory, but it is not mandatory to call these two APIs. You can have your own implementation. It is mandatory to provide a callback, but what that callback does is up to the implementer. So if you are fully compliant with the DisCo spec, you can just point it at these two APIs. If you are not, you can have your own implementation. If you are partially there, you can call the standard APIs from your own implementation and then put your secret sauce on top. IO: we have the sdw_read and sdw_write APIs. These are basically your register read and write APIs. And then we also have sdw_nread and sdw_nwrite, which allow you to read or write a contiguous bunch of registers. But in practice, I expect nobody to use those APIs directly, because they should be using regmap; regmap support is already available. Since we do audio, what does the audio stream look like? We allocate a stream object whenever there's an audio instance going on. We track the stream parameters, the state of the stream, and the type, which is basically PDM or PCM. And a stream can have multiple masters, so there's a list of masters in it. One quick thing about this: we can have one or more masters, and at least one master is mandatory in a stream. It may be the case that you do not actually use a master data port in a stream; you may be doing slave-to-slave communication, but you still need a master to drive the clock. So a data port of the master is not mandatory, but a master is mandatory. Similarly, we can have one or more slaves in the stream. The masters and slaves are represented by the sdw_master_runtime and sdw_slave_runtime data structures.
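The stream lifecycle just described, including the "at least one master for the clock" rule, can be modeled as a small state machine. The state names and transition order follow the talk (allocate, add master/slave, prepare, enable, disable, de-prepare); the code is an illustration, not the kernel implementation.

```c
/* Illustrative stream states, modeled after the lifecycle in the talk. */
enum stream_state {
	ST_ALLOCATED, ST_PREPARED, ST_ENABLED, ST_DISABLED, ST_DEPREPARED
};

struct stream_model {
	enum stream_state state;
	int num_masters;   /* at least one is mandatory (clock source) */
	int num_slaves;
};

static void stream_add_master(struct stream_model *s) { s->num_masters++; }
static void stream_add_slave(struct stream_model *s)  { s->num_slaves++;  }

/* prepare: in the real bus this computes bandwidth and frame shape and
 * programs the transport parameters into the alternate bank. */
static int stream_prepare(struct stream_model *s)
{
	if (s->num_masters < 1)   /* someone must drive the clock */
		return -1;
	if (s->state != ST_ALLOCATED && s->state != ST_DEPREPARED)
		return -1;
	s->state = ST_PREPARED;
	return 0;
}

static int stream_enable(struct stream_model *s)
{
	if (s->state != ST_PREPARED && s->state != ST_DISABLED)
		return -1;
	s->state = ST_ENABLED;
	return 0;
}

static int stream_disable(struct stream_model *s)
{
	if (s->state != ST_ENABLED)
		return -1;
	s->state = ST_DISABLED;
	return 0;
}
```

In the real subsystem these steps map onto sdw_prepare_stream, sdw_enable_stream and friends, typically driven from the ALSA hw_params and trigger callbacks.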
Streaming APIs: you can allocate the stream object using sdw_alloc_stream and free it with sdw_release_stream. Once you have allocated the object, you can add the masters and slaves using sdw_stream_add_master or sdw_stream_add_slave. So if you are a slave implementer, you just need to make sure that you are calling add slave; if you are a master implementer, you need to call allocate and add master. Then once your object is set up, you need to prepare the stream. This is typically done from the audio callbacks: your hw_params should be linked to prepare stream, and enable should typically be called from your stream start. Similarly for disable and de-prepare. What we do in streaming is calculate the bandwidth and the frame shape required, and then program the transport parameters; again, we perform a bank switch to enable the new transport configuration. Then we configure the ports, enable the ports, and bank switch to actually enable them. The converse is done on tear-down. Current status, quickly: the SoundWire subsystem was merged in Linux 4.16, and streaming support was added after that. Multi-link support is in linux-next and will go in for 4.20. Regmap support is also available. We have the Intel SoundWire controller, as well as the Cadence IP block which Intel implements. What is remaining is the sysfs support for properties and debugfs support; I have patches, and probably they will go in 4.21, along with the device tree support for these devices. These are the links. This is the link to the MIPI spec; unfortunately, you need to be a MIPI member to be able to access it. The DisCo spec is freely available because it's a software spec. And this is the link to the source tree and documentation. Questions? Yep. Can you explain what DisCo is? It's MIPI's Discovery and Configuration spec; whenever we write DisCo, we are supposed to put the service mark with it. It's defined by MIPI. Yes? Yep.
No, it doesn't support that. That's a good question. So as far as I know, everybody in the audio ecosystem has a SoundWire master or slave in the works, depending on where they sit. Upstream support is only there for Intel, and next year is when I expect there will be devices shipping with SoundWire in them that you can buy off the market. I expect that by the end of next year you will have significant support for devices, both in Linux as well as in the ecosystem. Good question; unfortunately, I don't have a crystal ball to tell you that. Any other questions? Yeah. The spec is there; if you are a MIPI member, you can go and download it. That's not my call, unfortunately. Nope. But I don't expect it to be any worse. We don't have a production system to measure the latency yet, but it should be much, much better compared to I2S, is what I would presume. Nope. So the question from Liam is: we have a prepare callback, and since it's invoked from the audio hw_params, should we rename it to hw_params? Unfortunately, no, because prepare is a stream state in SoundWire, and we want to move the stream through its transition into the prepared state. That is why it's prepare. If tomorrow the audio guys rename hw_params to something else, we should not have to change this API. Or it could be called from a non-audio context, hypothetically. I agree; we had a good debate, and our friend Pierre has a lot to say on that. So we'll stick with prepare for now. Yeah, I agree. I think I said it, but I'll check again. Can you? Yeah. So I must stop here; I'll talk to you after this, because people are waiting for the next session. So thank you very much for attending, and hopefully this was helpful. Thank you.