So first of all, we're not paid actors. We actually work for Microsoft. And yes, we're here. I've been getting a lot of strange looks from you guys because of my t-shirt, and trust me, it's fine. It's a different world now. We're here to tell you the story of how we started on a journey to move SQL Server to other platforms. This started back in 2015, when our executives decided it would be a good idea to explore how to land SQL Server on other platforms, and Linux was the natural first choice. We did this using a technology called SQLPAL, which is based on a project from Microsoft Research called Drawbridge. There's tons more information about this out there, and we'll give you links in the deck.

This was the state of the world in 2016: we only ran on Intel processors, we only ran on Windows, we officially supported one application fabric, Service Fabric, in Azure, and we could run on-prem and in the clouds, but only on virtual machines. Once the decision was made to move to other platforms using SQLPAL, we enabled all of the following use cases. We run on top of ARM, which I know may be a shocker to a lot of you; well, the fact that we run on Linux is already a shocker to a lot of you. We're now able to run not just on Linux but on macOS on top of Docker, so if you're a Mac user, you can just download the SQL Server container and hack away. We also support multiple application fabrics, including Kubernetes, OpenShift, and the Kubernetes offerings from the cloud providers: our own AKS, Azure Kubernetes Service, plus the ones from Amazon and Google. And containers are something we're embracing fully. There's a new feature of SQL Server 2019 called Big Data Clusters that only runs on Kubernetes on Linux; we don't even have support for Kubernetes on Windows nodes yet. Isn't that an interesting change of mind in how we do things at Microsoft? We also support secure enclaves. If you don't know what a secure enclave is, it's a protected area of memory within your compute node that lets you perform secure operations inside it so that no high-privileged user has access to them. SQLPAL uses this as well.

This is a screenshot of SQL Server running on ARM. We call this product Azure SQL Database Edge. It's the same database engine as the traditional SQL Server that we ship; it's just packaged differently, because we bundle a lot of other things and tailor it to edge use cases. This was one of the first builds we did of SQL Server on ARM. If you look at the date there, it's September 14th of 2018, and that's it right there, running on top of ARM64. So not only are we able to run on other platforms like Linux, we're also enabling other instruction sets like ARM. And we do this by leveraging, guess what, LLVM. We leverage a little bit of magic there; there's some binary translation sprinkled on top, and things just work. So if you want to run SQL Server on a Raspberry Pi, you can. It's a very, very different world for us. Anyway, when we started exploring what it would take to move SQL Server to Linux, the first question was: what are the possible ways we could achieve this?
And we thought, well, let's explore how much it would actually take us to get there. A native port of SQL Server to Linux would have taken us five years. We started this effort in early 2016; if we had done a native port, we would still be working on it today and would not have shipped yet. Using SQLPAL, it took us only three weeks to get a working prototype. That prototype was enough to convince everyone around Microsoft that this was possible, that we could leverage this technology for this purpose, and not just that, that we could enable a whole lot of other things with it. The rest of the time we spent building SQL Server on Linux went into making sure we had a mature product, something we could release to our customers that they would feel comfortable using. That last number right there, as of three weeks ago: 25 million Docker pulls. SQL Server is actually very, very popular in containers. A lot of our customers use the containers in their CI/CD pipelines, and we ourselves use containers in our functional tests. Spinning up VMs running Windows for SQL Server is a lengthy process; even if you use tricks like VM images and snapshots, it takes a lot more work than just spinning up a container with the latest build from the build pipeline and running our millions of tests against it. So it makes sense for us to leverage our own technology to make our build process better. That number, 25 million container pulls, grows at about 60,000 pulls per day, so we have a significant user base out there.

This is the secret sauce that enabled us to move SQL Server for Windows onto Linux. And again, we don't actually compile SQL Server for Linux. We compile SQL Server for Windows and then package it on top of SQLPAL, and that's how we ship it, so you can run it natively on Linux as if it were just a package, or in a container by just doing a docker pull. The architecture goes like this. At the top you have applications running within a sandbox of sorts created by SQLPAL. You have the SQL Server process, you have some DLLs for every process, and you have the service host, the thing that handles services within Windows; we have an infrastructure around services on Windows, and we run that inside the SQLPAL sandbox as well. Then you have SQLPAL itself, which is a PE executable, even though it's actually running on Linux. SQL Server really thinks it's running on Windows; it just so happens that we're tricking it by providing a Windows-like infrastructure within Linux itself. You can see how SQLPAL actually makes something like 400 NT calls down to win32k.sys, which is another PE executable running within Linux, and from there about 50 calls actually make it down to the Linux host extension, which is an ELF executable. So when you run SQL Server on Linux, the host extension is the first thing that kicks off, and it generates this whole environment for SQL Server to run inside of. It's a pretty neat technology, and it enables us to do a lot of really interesting things.

For example, we built support for persistent memory on Linux. If you're not familiar with it, persistent memory is basically sticks of RAM that actually persist your data. With persistent memory you can do interesting things like avoiding the entire operating-system IO stack and keeping the IO in user mode, and that's one of the things we take advantage of; most of this stuff runs in user mode, hence why we are at this conference. For PMEM specifically, we can map the entire set of database files that you place on the persistent memory device and do only memcpy operations in user mode to access them, so the latency of those operations is extremely low.
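Editor's aside: a minimal sketch of that user-mode IO path, assuming a file on a DAX-mounted persistent-memory filesystem (the path here is hypothetical, and this is not SQLPAL's code). On real pmem hardware you would also flush cache lines, which PMDK's libpmem wraps for you:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Hypothetical database file living on a DAX-mounted pmem device. */
    int fd = open("/mnt/pmem/mydb.mdf", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 8192;  /* one 8 KB database page, for illustration */
    /* MAP_SYNC (paired with MAP_SHARED_VALIDATE, recent kernel/glibc)
       maps the pmem directly, so stores reach the persistence domain
       without going through the kernel block layer or page cache. */
    char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    /* "IO" is now just a user-mode memcpy into the mapping. */
    memcpy(page, "page image bytes", 16);

    /* To make the stores durable you still flush cache lines
       (CLWB + SFENCE); libpmem's pmem_persist() does exactly that. */
    munmap(page, len);
    close(fd);
    return 0;
}
```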
Anyway, I just wanted to give you a brief introduction to the SQLPAL architecture. I brought two really smart guys who are going to talk in depth about this stuff and make it a little more interesting for you, so I'm going to hand it over to Brian now.

Hello. So Argenis just painted a pretty beautiful picture, like, oh, this is amazing technology, blah blah blah. So how does it really work? It's really terrible. There are a lot of nasty, complicated things we have to do to get this to actually work. Think about how a normal Linux process works. Take time, for instance: you have a vDSO, where you can get the time directly from user mode via a shared mapped page. Windows has the same thing: a user shared page that's mapped into every process. You can get the time, you can get all kinds of performance counters, and it's at a fixed, well-known address. So now we have to have that same address mapped into every Linux process. And how do we plumb Linux time into it? We can't load or call the vDSO straight from Windows code. And it's pretty costly to go all the way down to Linux every time, from SQLPAL or from SQL Server, through all these layers down to the host extension, just to get the current time, which is something SQL Server does pretty often. So that's complicated. Time zones are also complicated: Windows time zones work very differently from how Linux time zones work, they're all stored in the registry, and we have to map between them. Windows also manages memory differently than Linux: in Windows you can separately reserve and separately commit, and there's not really a concept of that in Linux. So we actually had to build our own virtual memory manager inside SQLPAL; Eugene built it. And it's kind of weird, because the SQL Server process and any other virtual process that runs inside SQLPAL all run in one giant shared address space. We have to manage that address space: if one of the processes dies, we have to make sure we clean up any mapped memory, unmap it, manage protections, all that kind of stuff. File systems are another interesting problem. Linux file systems are case sensitive; Windows file systems are not. If you break this assumption, lots of Windows paradigms don't work. So now we have to do case-insensitive and case-sensitive lookups at the same time and figure out, under the covers, what the program is trying to do. There's kind of this endless stream of things we had to think about and implement at the right layer, in the right place.

So we actually have thunks that switch calling conventions. PE has one calling convention: where to put registers, where arguments are expected to be. ELF has a different calling convention. So at that ABI layer we have these thunks that say, hey, switch from the Linux calling convention to the Windows calling convention. If you're doing an up-call, you switch one way; if you're calling down, you switch the other way. Those aren't extremely performant, so we want to do as few of them as possible, and we purposely keep this layer very thin. We want to expose only things that are actually needed and very important. So we try to do as much as possible inside SQLPAL.dll, our version of the NT kernel in user mode, and leave only the strictly required things that you can only implement via the Linux kernel API.
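Editor's aside: you can get a feel for what such a thunk does without hand-written assembly, using GCC/Clang's ms_abi attribute on x86-64. This is purely illustrative, not SQLPAL's actual mechanism:

```c
#include <stdint.h>

/* A function living in a PE module expects the Microsoft x64 calling
   convention: first args in RCX/RDX/R8/R9, plus a caller-reserved
   32-byte shadow space on the stack. */
typedef uint64_t (__attribute__((ms_abi)) *pe_entry_t)(void *buf,
                                                       uint32_t len);

/* The ELF-side caller is compiled for the System V AMD64 convention
   (first args in RDI/RSI/RDX/RCX). At this call site the compiler
   emits the register shuffling and shadow-space setup; that generated
   glue is essentially what a calling-convention thunk is. */
uint64_t call_into_pe(pe_entry_t fn, void *buf, uint32_t len) {
    return fn(buf, len);
}
```

Paying that transition cost on every boundary crossing is exactly why the layer is kept so thin.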
So now I'm going to deep-dive into some of the actual meaty challenges, now that I've given you a small sampling of the things we had to do. One of the big problems we have is asynchronous IO. SQL Server is a database, of course, and most databases rely on asynchronous IO; SQL Server is no exception. We actually have our own user-mode scheduler inside SQL Server: we have our own events, our own semaphores, our own mutexes, pretty much everything. And that entire user-mode scheduler is tied very closely to how we do asynchronous IO via the Windows model. If you're not familiar with synchronous versus asynchronous IO, the traditional model in a blocking application is that you issue an IO to the IO subsystem, wait for it to complete, and then process the completion. That's obviously not very scalable and not very performant. So most databases use an asynchronous IO model, where you batch as many requests as possible, issue them all, wait for them to complete, and process the completions as they come in. You get a lot more throughput, a lot more scalability, and a lot more flexibility.

When we tried to map these primitives from Windows to Linux, the problem was that Windows has a common view over asynchronous IO, whereas Linux really has no high-performance abstraction over both network and disk IO. People would probably suggest glibc's POSIX AIO as a solution, but in reality it isn't one: it has a lot of issues, it's very slow in our experience, and it has its own thread pool. We already have a lot of threads in our process and don't want any additional overhead we don't need, so glibc AIO was not a candidate for us. Of course, Linux has epoll_wait and io_getevents for processing completions: epoll for network IO, io_getevents for disk IO. A small caveat: this was circa 2016, when we were building this. There's some newer stuff I'll talk about later, but this was the template we had to go by at the time. Windows, on the other hand, exposes a single mechanism called IO completion ports: for disk IO and network IO alike, you can use the same primitives to manage completions, abstracted away from whatever issued them.

So how do we map the Linux primitives onto Windows IOCP? There are actually multiple ways to use IO completion ports, but one common pattern is that IOs are bound to a completion port. You call these extension methods, ReadFileEx and WriteFileEx, and on the socket side the WinSock versions, WSASend and WSARecv (WSA is WinSock). There's also crazy stuff like asynchronous accept, AcceptEx, in WinSock. You issue those IOs, and then you can either wait on the completion port or call GetQueuedCompletionStatus, a blocking call or a polling call, to retrieve the completions; then you process each completion packet and do whatever your application specifically needs.
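Editor's aside: for readers who haven't used IOCP, here's a minimal sketch of the Windows-side pattern being described, with error handling elided:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Open a file for overlapped (async) IO and bind it to a new port. */
    HANDLE file = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING,
                              FILE_FLAG_OVERLAPPED, NULL);
    HANDLE port = CreateIoCompletionPort(file, NULL, /*key*/ 1, 0);

    /* Issue an async read; it completes through the port, not here. */
    static char buf[4096];
    OVERLAPPED ov = {0};
    ReadFile(file, buf, sizeof buf, NULL, &ov);

    /* Pump completions: block for up to 100 ms for the next packet. */
    DWORD bytes; ULONG_PTR key; OVERLAPPED *done;
    if (GetQueuedCompletionStatus(port, &bytes, &key, &done, 100)) {
        printf("IO on key %llu completed: %lu bytes\n",
               (unsigned long long)key, (unsigned long)bytes);
    }
    CloseHandle(port);
    CloseHandle(file);
    return 0;
}
```

That timeout parameter on GetQueuedCompletionStatus turns out to be the key to the mapping, as described next.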
So when we looked at this, one of the interesting things we saw was that GetQueuedCompletionStatus is, in a sense, a polling mechanism, and that kind of maps to epoll and io_getevents. The nice thing about GetQueuedCompletionStatus is that it exposes a timeout, so as long as we honor that timeout, we can do whatever we want inside that call; we just have to make sure we get back to the application by the deadline it gave us. Luckily, epoll_wait and io_getevents both expose a timeout too. For a given completion port, we know whether it's internally bound to a network socket or a file descriptor, so we know which one to call. When you call GetQueuedCompletionStatus, we honor that timeout, plumb it all the way down into epoll_wait or io_getevents, and use that blocking call to pump the IOs back up into your application and surface them that way. This general model works very well for SQL Server, and for applications that use asynchronous IO heavily. It kind of breaks down, though, for applications that use blocking IO, like very simple programs. For those we use the same model, but on background threads, so you don't have to use the IO completion API if your application doesn't; we still surface the IOs. That's an implementation detail and a caveat.

So this works. There's one complication, though, that gave us a lot of trouble. Windows has this thing called APCs, asynchronous procedure calls. When you say, hey, suspend that thread, Windows actually sends an APC to that thread and asks it to suspend itself. You can suspend a thread, you can resume a thread, you can even execute callbacks on a thread: you can say, execute this APC routine on that thread, remotely. It's a very common paradigm in Windows, so we had to honor it as well. But what happens if you try to issue an APC to a thread that's blocked in an infinite wait inside epoll? Our Windows-side code doesn't know anything about Linux user space, so how do we get that guy out of the wait? We actually inject an eventfd into every single one of our waits to make sure we can wake it up externally, process the APC, and then go back down into the wait if we need to (there's a small sketch of this trick below). There are all kinds of little corner cases like that you have to think about when you're modeling these Windows things on Linux.

So what would we do in the future, maybe? Jens Axboe at Facebook has been working on this awesome io_uring technology. It's a new syscall interface in Linux as of 5.1. You have two ring buffers that are mapped into user space: a submission queue and a completion queue. Without actually entering the kernel, you can say, hey, I want to do these IOs, and the kernel will pull them from the ring buffer, because the kernel maps it into your process, so you both have a view of it. It takes the submissions out, processes them, and then you can poll for the completions without entering the kernel either. So we could remove a whole bunch of context switches from our IO path. I'm excited to go play with it. It's only in relatively new kernels, though, so it's not like we could use it in production everywhere yet, but it's interesting technology.
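Editor's aside: a minimal taste of that interface, using the liburing helper library rather than the raw syscalls; a hedged sketch, not what SQLPAL ships:

```c
/* Build: gcc uring_read.c -luring   (kernel >= 5.1, liburing installed) */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <sys/uio.h>

int main(void) {
    struct io_uring ring;
    io_uring_queue_init(8, &ring, 0);       /* 8-entry SQ/CQ rings */

    int fd = open("data.bin", O_RDONLY);
    static char buf[4096];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof buf };

    /* Stage a read in the submission queue shared with the kernel... */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_readv(sqe, fd, &iov, 1, 0);
    io_uring_submit(&ring);                  /* one syscall for the batch */

    /* ...and reap the completion from the completion queue. */
    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    printf("read returned %d\n", cqe->res);  /* bytes read, or -errno */
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```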
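And one more editor's sketch, of the eventfd wake-up trick described a moment ago: a thread parked in epoll_wait can be kicked out of an otherwise infinite wait by another thread writing to an eventfd that is registered in every wait. Again illustrative only, with all names invented here:

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

static int epfd, wakefd;

static void *waiter(void *arg) {
    struct epoll_event ev;
    /* "Infinite" wait: normally only real IO readiness would end it. */
    int n = epoll_wait(epfd, &ev, 1, -1);
    if (n == 1 && ev.data.fd == wakefd) {
        uint64_t v;
        read(wakefd, &v, sizeof v);  /* drain the eventfd counter */
        puts("woken externally: deliver the APC, then re-enter the wait");
    }
    return NULL;
}

int main(void) {
    epfd = epoll_create1(0);
    wakefd = eventfd(0, 0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = wakefd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, wakefd, &ev);  /* injected into the wait */

    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);
    sleep(1);

    uint64_t one = 1;
    write(wakefd, &one, sizeof one);  /* the "issue an APC" side */
    pthread_join(t, NULL);
    return 0;
}
```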
So another problem we have is synchronization. There's pretty much one single paradigm for all the synchronization primitives surfaced by the NT kernel, and that's WaitForMultipleObjects, plus its variant, WaitForSingleObject. On Windows, practically everything is a waitable handle: processes, events, mutexes, threads, semaphores, the list goes on. You can wait on all of them via this one API. There's no real direct corollary in Linux. There's stuff that, if you squint, kind of looks similar, or that you could maybe model it with, but when you look at the details it breaks down. So we had to do a bunch of work to implement WaitForMultipleObjects on top of Linux primitives.

We built this class hierarchy where every thread has a thread wait context, and that thread wait context has a condition variable embedded in it, and obviously a mutex for that condition variable as well. Then it has an array of wait infos, and these wait infos are the pairing between a waitable object and the thread wait context. When you start a wait, you pass in, say, five handles, and those handles are backed by this abstract class called a waitable object. The thread wait context enqueues itself into a wait queue that every waitable object has. When a waitable object becomes signaled, it processes its queue according to what kind of object it is: if it's an auto-reset event, it takes everybody that's waiting and then resets itself; if it's a semaphore, a counted semaphore, it might take just one waiter. It resumes that waiter by calling back through the wait info to the thread wait context and posting on that condition variable. So you have this very simple paradigm that lets you abstractly model wait-for-multiple using just the primitive condition variable (there's a sketch of the scheme below).

And for the future: there was a patch to the Linux kernel around June 2019 from the Wine folks, who have a similar problem. They have an interesting idea of modeling this with futexes: if you extend the futex API in the kernel to allow waiting on multiple futexes, then you could model the same problem using Linux syscalls directly, pretty much. If we did that, we could get rid of a bunch of our code, let the kernel do the heavy lifting, and it's a win.
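Editor's aside: a heavily simplified sketch of that wait-for-multiple scheme, with one condition variable per waiting thread and a wait queue per object. All names are made up for illustration; real code would also handle wait-all semantics, timeouts, and APC interruption:

```c
#include <pthread.h>

/* Per-thread wait machinery: one mutex + condvar per waiting thread. */
typedef struct thread_wait_ctx {
    pthread_mutex_t lock;
    pthread_cond_t  cv;
    int             signaled_index;  /* which object woke us; -1 if none */
} thread_wait_ctx;

/* Pairing of a waitable object and a waiting thread (the "wait info"). */
typedef struct wait_info {
    thread_wait_ctx  *ctx;
    int               index;   /* position in the caller's handle array */
    struct wait_info *next;    /* intrusive wait-queue link */
} wait_info;

/* Base "waitable object": here, just a lock plus a queue of waiters. */
typedef struct waitable_object {
    pthread_mutex_t lock;
    wait_info      *waiters;
} waitable_object;

/* Signal path: wake one waiter (semaphore-like; an auto-reset event
   would instead drain the whole queue and then reset itself). */
void waitable_signal(waitable_object *obj) {
    pthread_mutex_lock(&obj->lock);
    wait_info *wi = obj->waiters;
    if (wi) {
        obj->waiters = wi->next;
        pthread_mutex_lock(&wi->ctx->lock);
        wi->ctx->signaled_index = wi->index;
        pthread_cond_signal(&wi->ctx->cv);
        pthread_mutex_unlock(&wi->ctx->lock);
    }
    pthread_mutex_unlock(&obj->lock);
}

/* Wait-any: enqueue on every object, then block on our own condvar
   until some object's signal path fills in the index. */
int wait_for_any(thread_wait_ctx *ctx, waitable_object **objs,
                 wait_info *infos, int count) {
    ctx->signaled_index = -1;
    for (int i = 0; i < count; i++) {
        infos[i] = (wait_info){ .ctx = ctx, .index = i };
        pthread_mutex_lock(&objs[i]->lock);
        infos[i].next = objs[i]->waiters;
        objs[i]->waiters = &infos[i];
        pthread_mutex_unlock(&objs[i]->lock);
    }
    pthread_mutex_lock(&ctx->lock);
    while (ctx->signaled_index < 0)
        pthread_cond_wait(&ctx->cv, &ctx->lock);
    int which = ctx->signaled_index;
    pthread_mutex_unlock(&ctx->lock);
    /* Real code must also dequeue from the objects we did NOT consume. */
    return which;
}
```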
If you're really interested in how all the internals work, one of our colleagues, Bob Dorr, has an awesome blog where he goes through a bunch of the internals, and we have a blog post with a high-level overview that we published when we released the project. Now Eugene is going to talk to you about our debugging story.

Hello. One of the challenges that Brian did not mention is how to debug this thing. As Argenis told you, when we run a SQLPAL container, the SQLPAL process is an ELF process, and the Linux host extension, which directly interacts with the Linux operating system, is a pure ELF process. We can use any debugger, GDB, LLDB, whatever, to debug that part. But that guy loads the SQLPAL DLL, which we internally call the library OS, and that's a PE binary, a Windows binary. Essentially, inside the Linux host extension we implemented a PE binary parser which reads the file and loads it exactly the same way the Windows kernel does. On top of that sits SQL Server and tons of other DLLs, like ntdll.dll, kernelbase.dll, and probably some other processes as well. All of those, pretty much 99% of the code if you start counting lines, are in Windows format, in PE format. Of course we have PDB files for them, but what use are those PDB files if no Linux debugger can deal with them?

So the compelling idea is to have a separate Windows VM or computer and launch WinDbg on it. WinDbg is absolutely capable of dealing with PE binaries, with PDB files, with everything, so WinDbg can give us insight into that part of the system. Unfortunately, WinDbg has no way to connect to a Linux machine and break into a Linux process. So we have a chasm, and when you have a chasm, people usually build bridges. That's what we did. We created a program called dbgbridge, the debugger bridge. It's also a Linux executable, which uses the LLDB class library, and it can connect to the SQLPAL process and manipulate it: either a live process, or a core dump loaded from the file system. That guy pretends to be a remote debugger server for WinDbg. WinDbg has a well-known remote protocol for its user-mode debug services that talks over the network and lets it connect to a debugger server running on a different machine. So dbgbridge pretends to be just a normal Windows debugger server running on the machine and gives access to that part of the system.

This is fine, but we want to debug both parts of the system. We want to debug the PE binaries, and we want insight into what the Linux host extension is doing at the same time, because they interact tightly. Fortunately for us, WinDbg is an extensible debugger, so we wrote an extension for it which connects directly to the LLDB library, with some intermediate code of ours in between, and controls that portion of the system.

Now, what does WinDbg need to know about the process? It turns out WinDbg needs very little to start debugging. It needs to be able to read memory, it needs to be able to write memory to set breakpoints, and it needs to know about two lists: which PE binaries are loaded in the system and where they are, and how many threads there are in the system and where they are. Of course, the LLDB library doesn't know any of that. But every time we load a PE binary, we go through the Linux host extension and tell it: hey, here is the file, ntdll.dll, please load it into memory. At that moment, the Linux host extension says, ah, here is a module being loaded; let me put it in the list. And the same thing happens when a thread starts. Every thread in SQLPAL has four stacks: a normal Linux stack, where it starts executing; a Linux signal stack, where it processes all the signals, like SIGTRAP, SIGSEGV, whatever; a normal Windows stack, where it executes Windows code; and a Windows exception stack, which is pretty much the signal stack of the Windows world. Every thread that belongs to that world also goes through the Linux host extension, all those stacks are registered there, and the thread is put in the list.
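Editor's aside: purely as an illustration (these structures are invented here, not SQLPAL's), the bookkeeping the host extension keeps for the debugger bridge might look something like this:

```c
#include <stdint.h>

/* One entry per PE module the host extension has mapped; the debugger
   bridge walks this list to tell WinDbg what is loaded and where. */
struct pal_module {
    const char        *path;   /* e.g. "ntdll.dll" */
    uint64_t           base;   /* load address in the shared space */
    uint64_t           size;
    struct pal_module *next;
};

/* One entry per thread, recording all four stacks described above. */
struct pal_thread {
    uint64_t           tid;
    uint64_t           linux_stack;            /* normal ELF-side stack */
    uint64_t           linux_signal_stack;     /* SIGTRAP/SIGSEGV handling */
    uint64_t           windows_stack;          /* PE-side execution */
    uint64_t           windows_exception_stack;
    struct pal_thread *next;
};

/* The bridge sets breakpoints on hook functions like these to pause
   the target whenever a module loads or a thread starts. */
void pal_on_module_loaded(struct pal_module *m)  { (void)m; }
void pal_on_thread_started(struct pal_thread *t) { (void)t; }
```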
Now the LLDB library can examine memory, find those data structures, and we can report them back to WinDbg. WinDbg also has the ability to stop when a new module is loaded, a new thread is launched, or a thread exits. To implement this functionality, we just put breakpoints in the Linux host extension: every time a new module is created, we have a breakpoint on that function, and when the breakpoint hits, we stop execution.

Again, let me quickly show you a demo. Did it get disconnected? Five minutes left. I'm connecting to my machine in Redmond. The debugger bridge is not an interactive program at all: I start it once and after that it just runs. Yes, here, I don't know if you can see it. Here, I started the program. You see dbgbridge.sh; that's a shell script which contains all the incantations, and it opens a core dump which I got from our lab. It's a real core dump, from a bug I'm investigating right now. After it starts, it says: hey, if you want to connect to me, go to the Windows box and launch this command. After that, it just prints some warning messages about things it didn't like, and that's it; all the debugging goes through WinDbg.

This is WinDbg. You see, WinDbg says, hey, there was a thread created. That means we read the list of threads and told WinDbg the thread was created; WinDbg really thinks it's connected to a live process. The same goes for modules: it says modules loaded. In WinDbg we can just go and inspect memory, inspect locals, but only for the Windows part, because if you do a stack walk here, say with k, the stack ends. It's a little bit... the stack ends right here, at the boundary where we cross over into the Linux host extension. But if we use our debugger extension command here, now we see the Linux stack, and if we click on a Linux frame, you see here are the Linux sources and where execution is. The same way, we can use the extension's thread-list command, which is probably a bad idea, there are a couple of thousand threads in there, and it gives me a list of all the threads running in the system. Some of those threads are not visible to WinDbg at all: some threads are launched specifically for the host extension's needs, and WinDbg cannot see them, but through the host extension I can see them and inspect their memory and so on. We also have commands to do single-step debugging and set breakpoints inside the Linux host extension using the extension, and we can set breakpoints and single-step in the WinDbg world, the PE world, using normal WinDbg commands. Pretty much, if you single-step through the program in the WinDbg world, it will step over the Linux host extension calls.

All right, questions? Any questions, guys?

Hi, say I have an ARM desktop or a laptop. What's the actual path to run this on ARM systems? Is it released yet? I know you showed it, so I believe it works.

Yeah, like I said before, on ARM this is a new product that's called Azure SQL Database Edge. We announced it at Build; it's not GA yet. We have a private preview of it.
It's not going to be released the same way we've released the boxed SQL Server; it's going to be tied to OEMs. But for testing, validation, and development we're going to have containers available for it as well. We do have an EAP, an early adoption program: if you're interested, search for Azure SQL Database Edge EAP and it will take you there. I can give you more information offline. Thank you. Good question, in the back.

With all of these added abstraction layers mapping between Windows constructs and Linux constructs, and SQL being a performance-sensitive product, how do you overcome all the additional latency in this process? It seems like there are a lot of layers here, and if we're trying to query something in a reasonable amount of time, this has to be fast.

Right, and as you can imagine, this is a common concern from our customers. I can tell you that we have really smart people, not me, working on the product, and we have established performance baselines for SQL Server on Linux that are basically on par with Windows these days. As a matter of fact, one of our bosses thinks we can go faster than Windows. We haven't gotten there yet; we're basically on par, the same performance on Windows and Linux.

Are there public, published metrics?

You're going to see some TPC benchmarks. There are TPC-H benchmarks comparing SQL Server on Linux to SQL Server on Windows that are fairly similar, and we're also going to have TPC-E; if it hasn't been published yet, it will come out soon.

Are there any use cases of SQL Server that are slower on Linux than on Windows?

I can't think of any. There have been regressions and bugs that we've fixed. Like I mentioned: time. That user shared page that's mapped into a process has a little bit that says, hey, can I go fast when I query time? We weren't exposing that correctly, so querying time in a loop on Windows versus Linux could be something like ten times slower on Linux, but that should be resolved now. So there are bugs, but most of them we can fix.

This is not a performance question; I'm just curious about your io_uring plans. You mentioned it as a plan for the future, but I guess you already more or less realize how it could work from the design perspective. One of the most important parts there, at least that I'm aware of, is the balance between the completion and submission queues. Can SQL Server by itself already handle that balance, so that it won't, for example, oversaturate the system with submissions?

Sure. IO completion ports have a concept of parallelism: you can say how much parallelism your IO completion port has, and there's internal rate limiting. But yeah, it's something we'd have to design; we haven't thought about it fully yet. The IO model of the library OS and SQLPAL matches the io_uring pattern pretty closely, though: you issue an async IO, then you immediately poll it for completion, and you can have this background guy that completes it in the background. We already have these dedicated pump threads, so theoretically those guys could just pump out of the io_uring instead of the current APIs they're using. And then the second related question.
There are also some limitations. For example, the memory for those IO rings has to be lockable, so there's some limit, I don't remember exactly what. So there's also a good question of how many of those rings you want to have. Any ideas? Per thread, per process?

Sure. I mean, I don't have any numbers. There's a similar problem with kernel AIO as well, where you can only have so many kernel AIO descriptors, or whatever they're called, in flight per process, and that's something we manage as well right now.

Okay, thanks. Any other questions? One more question?

All right, the elephant in the room: you did mention Wine. So why not just extend Wine with the API calls that you needed?

Sure. There are a lot of reasons. One of the main ones: before SQL Server on Linux, Eugene and our boss worked on a research operating system inside Microsoft called Midori. Midori actually used the MSR research project Drawbridge to get Windows compatibility, because Midori didn't run normal Windows applications. So he was pretty intimately familiar with the technology; we already knew it worked, it was proven. There are also licensing issues. And we can make SQLPAL as optimized as we want for SQL Server, rather than forking Wine and making custom changes. There are lots of reasons. Another reason: on the original slide with the architecture, there was win32k.sys. We take it from Windows, unmodified. Well, we patch it: we remove privileged instructions after we load it, but on disk it is unmodified. So we can keep in sync with new releases of Windows really easily; we just grab the new win32k.sys, the new ntdll, and so on. With Wine, you have what you have, right? Adoption of new APIs and security fixes comes at a different cadence. And I think the GPL part is the most concerning one for us with Wine; it's not really very friendly for commercial software.

(You also distribute the Linux kernel.)

But anyway, thank you. Brian, Eugene, and Argenis. Yeah, thanks guys.