Hello. My name is Clay Maeckel. I'm the chief software architect at FileMaker. I work directly for the vice president of engineering, Frank Liu. And today we'll be doing one of the under-the-hood sessions, about the Draco engine itself.

Okay, so who am I? I began working at Apple back in 1986. I was working on the MacTerminal project back then. And when Claris was formed, I was one of the first engineers to move over to Claris; I was actually employee 15 at Claris. Claris eventually turned into FileMaker after a while. I didn't really start working on the FileMaker product until the early 90s; I was working on other products that Claris was delivering at that point. Actually, all the stuff I worked on up to then never shipped, so they were a little bit worried when I came to the FileMaker team: hey, you've never shipped anything, why are you coming to the team? But eventually things went out.

It was kind of a rocky road, because the first version I worked on was the Microsoft Windows 3.0 port. At that point, FileMaker was only a Mac product. Nashoba had actually tried bringing it over to the Windows platform a couple of times, and we started doing that work too. But then when Sculley got kicked out, the next president of Apple came in and said, nope, no more Windows work. So we stopped work and I went on to something else. Then the next president came into Apple and we started up working again. Then the president changed again and we stopped working. And lastly, when Amelio came in, I think that's when we started it up again and finally finished the work. Actually, I was working against Windows 2.0 at the first part of it, but by the time it had been cancelled and restarted so many times, we were on Windows 3.0.

After that, I started working on the FileMaker Pro Server, or the FileMaker Server product. I was taking some of the assembly code that came from Nashoba and converting it to C, and we were able to bring it up in the server product. This is when John Thatcher came in and started working on the server product. After that, I did the first Instant Web Publishing. There really was Instant Web Publishing 1, Instant Web Publishing 2, and then WebDirect, but the first version was mine. We were using CDML, which came from the Lasso technology. We got that out. I always liked that logo with the building and the folders floating around it, but that's my preference. Also at that point I started working on the Draco engine; there was a core team, and I'll show a picture of the core team who did that stuff. Most recently I worked on the FileMaker Go product. And just because I love engines so much, I repair tractor engines in my spare time. You have to do something.

For the contents of this session, I'm going to first go through a brief history of the engines inside FileMaker, starting with what was before Draco and the Draco engine after that. Then I'll talk a little bit about how data is stored and downloaded, how it moves around in the system. Then it goes to the file opening process: all the steps we have to go through when you open a file over the network, and a little bit about how it opens locally, but that's not as interesting. At the end, I'll go through the changes that we made in the Draco engine for 15.
That's the most recent release that's come out, and then we'll do a Q&A at the end of that. And I lost the connection there.

So, the history. You've seen this chart many times now, and I can't put the new item that's on the chart in yet, since this is going to be put up on the web. But this is something from about three years ago, when Dominique finally went through and started wanting to call FileMaker a platform. Before that time period we never really were able to talk about the Draco engine and how things actually worked under the hood. It's only more recently that we're actually starting to use the term Draco. The term Draco comes from Christopher Crim, who is one of the first people that worked on FileMaker. He really loved the movie Dragonheart; it had Sean Connery in it as the voice of a dragon named Draco. That's where the name came from. People keep asking why it's called Draco, as if there's something serious about it. No, it's just a movie reference that we liked, and it stuck.

So there are five eras that I split the FileMaker engine into. Last year when I presented some of this over in Europe I was really saying four, but now that we've gotten into the design surface, and I'll talk more about the work in Draco that's going into the design surface, there are really five eras now.

The first era happened back in '85 or so, and this is what we call the DOS, pre-Claris era. This is before Claris purchased the product, and these are the splash screens from then. I didn't really understand their numbering process: you had FileMaker 1 and FileMaker Plus and then FileMaker 4, but it doesn't get any better when we go to the next slide. The database at that point was a flat file. It did have lookups, so you could pull data from other tables and bring it in, and we still have lookups in FileMaker, too, but you had no relational access to data at that point. There was very little or no scripting, depending on which version you're talking about. But what does come from this version, and is still going strong today, is the storage mechanism and the concepts of how data is actually stored in FileMaker. All data in FileMaker is pretty much stored as text. Even number fields, date fields, time fields: everything is stored as text, and that goes way back to the roots. They always wanted a user to be able to go into the schema and make changes easily, to go between numbers, dates and times, because as you design your database you wonder, well, what is this field going to be? And you go along and say, well, this really should be a number field, and they didn't want to just throw away all the data; you could transition between all these types. The container field is really the only one that's different: you can't really transition from a container field back to any of the other types and back. And the storage mechanism for how this data is actually stored on disk is called the HBAM layer, and I'll go into more of how the HBAM layer works and what it does. But the design and the algorithms and the theory of how things should operate all come from that time period, back in 1985.

Then we had the flat file era. This is after Claris purchased the product and basically moved all development over to California. This is when, around the '88 time period, ScriptMaker was added.
The layout mode was changed to the MacDraw style, because we had our family of products and MacDraw was our drawing product. So we had the patterns, the colors; everything pretty much looked like MacDraw inside FileMaker for drawing your layout objects. The server version came out in this period. The Windows versions came out in this period. This is when runtime came out. And we knew that being a flat file database wasn't going to last very long, that we would actually have to become a relational database at some point. So an engine team was formed, called Spectre; I'm not quite sure where that name came from. It was started up, and it was going to be a more SQL-style database engine. Now, we knew that there was going to be a mismatch between a SQL engine and how FileMaker did things. So this is when Christopher Crim was given the job of writing this supposedly thin layer to map FileMaker to SQL. And that's where the Draco project actually started. Draco actually just started as a thin layer between FileMaker and a SQL-like database engine.

Then we get into the early relational time period. Creating that engine was taking some time, and then we got really concerned, because we got these rumors that Microsoft Access was going to come to the Macintosh, that it was going to kill our market, and that we had to respond to it. We had to get relational fast. So we came up with a relational model that lasted all the way through the 6.0 timeframe. That's doing relationships from one file to another file, because there was only one table in each file, and in your file you're defining how your file is related to that file. No join graph or anything like that. During this time period we were also converting from Pascal to C++; I had done the assembly conversion earlier to get to the server product, but at this point we were doing the conversion from Pascal to C++. FileMaker was originally written in Pascal, and we actually went through and studied things. We were looking at Smalltalk, we were looking at a bunch of other languages at that time, trying to decide which one, because C++ was still kind of early in those days. You had to use Cfront, which would compile it into actual C code, and it would take hours to compile and stuff like that. But we were pretty confident that C++ was going to win the language wars during that time period.

During this time period the Spectre engine was still just kind of limping along. It wasn't getting anywhere, and the Draco layer kept getting bigger and bigger. So basically the Spectre team was canceled and laid off. And at that point Chris determined, you know, it's going to be easier if we just write the entire engine ourselves and do it the FileMaker way: use the HBAM layer that FileMaker was using before and update it; it needed some changes from the '88 version, but this is when that decision was made. Also during this time period is when Steve Jobs came back and Claris was basically taken apart. Some parts went to Apple, some parts were terminated, and all that was left was FileMaker, and we changed our name to FileMaker Inc.

So this is the Draco team, the original Draco team. There are still three of us here. John's in the middle; he was actually the manager of the Draco team at that point. Christopher Crim, in red, was kind of the lead designer. I'm on the right there. Keith Proctor is still at the company.
He actually gave a presentation at the DIG meeting and at the wedge last week. So those three people are still here. Actually, in this case, both Keith and I wanted to be the Skipper, but neither of us wanted to be Mary Ann. So we decided, okay, we'll have two Skippers, and my ex-wife was Mary Ann instead. That's how we solved that problem.

And now we come up to, you know, the aught-fours, I guess: the start of the Draco-based products. 7.0 was the first time that the product was released on top of the Draco engine itself. This thing had been in development quite some time, going from a little layer and then getting thicker and thicker underneath, so the APIs that sat on top of it were something the FileMaker engine could use pretty well. We basically started from the top down, which is kind of different from how some database people work. They start from the bottom up, but we started from the user interface and then decided what the engine needed to do based on the user interface, as opposed to developing an engine and then trying to figure out what type of interface to put on top of it. Something else to note is that the Draco engine was in development for quite some time, so it was actually designed for the hardware that was available in the 1990s. So when you get up to FileMaker Pro 11 and we came out with the first iOS version, FileMaker Go, the hardware in the first iPhones was pretty much comparable to the hardware from the mid-90s, and the engine actually ran quite well on the phone. The main work in coming up with the Go version of FileMaker was the graphics, because the imaging model, the windowing model and all that stuff is drastically different between the Macintosh and the iOS operating system. That was basically the bulk of the work. I pretty much stayed working on the Draco engine at that time while Chris was doing the UI portions of it. And this is where things other than just the engine started causing problems, because of the app lifecycle environment. There have been some under-the-hood talks about how the Go app has to be terminated: if you get a phone call and the device is out of memory, the actual FileMaker process gets terminated, so you have to save off all your state information and then come back to life when you come back to the foreground, which is something the desktop products never have to do. So I spent most of my work dealing with those types of issues. The engine as-is ran pretty well; it's just these other requirements coming from the iOS operating system underneath that had to be worked on.

And then now we come up to the design surface era. This is when the file format changed between 11 and 12. Up to that point, up to 11, we were still using that MacDraw-style metaphor for drawing layouts and for rendering and how the drawing system basically worked; it was still based on a pretty ancient model. And we knew we wanted to improve Instant Web Publishing, which was getting long in the tooth. Cascading style sheets were coming out as the main styling mechanism, and people always wanted to be able to share styles and theme multiple objects. So this whole decision to redo the rendering engine, and how we store layouts and all this stuff, came up in the 12.0 timeframe.
Now, most of this work in the 12.0 timeframe was actually done up in the application layer; it was not done down in the Draco layer. So it wasn't easily shareable between Go and the Mac and WebDirect. Parts of it were shared at that point, but over these last few years, and coming up now, we've been moving all the design surface layers down into the Draco engine so they can be shared between other components and other products that we may be announcing. Actually, I guess the big reason is that big request that everyone's had for WebDirect to be able to do PDF. To be able to render on the server, you basically have to move the rendering logic down into the Draco engine. Once it's in the Draco engine, we can render PDFs anywhere; anywhere the Draco engine can run, we can render PDFs. Oh, and the one last thing that came out during this design surface era was the iOS App SDK. No, the FileMaker iOS App SDK, that's the name of it. We always have the hardest problem naming projects at FileMaker, and that's the name that was picked. Or at least I think it's hard to pick names.

So now we'll go through the different components of Draco. What makes up the Draco engine? At the lowest level is the support layer. It's the platform-agnostic layer. The support layer has worked for 68K-based machines, PowerPC, Intel 32-bit, Intel 64-bit, and the ARMv6, ARMv7, ARMv7s and ARM64 processors, running on the Windows, Mac and iOS operating systems, and other operating systems that we're going to be running on soon. This is the layer that hides all the file I/O, how threads are done, mutexes, cryptography; pretty much any low-level network operations are all hidden there. So all the code that's written in C++ above it basically uses the objects in the support layer and doesn't need to worry about which platform you're running on; you just write to the same API. The string and text management stuff is down in here too.

Next is the HBAM layer that sits on top of this. This is kind of what you'd consider the oldest part of the Draco engine, because it goes all the way back to 1988. This is the mechanism for how we store the FileMaker data into blocks in the file itself. This engine has gone through many revisions. As I mentioned, it was first written in assembly, and I converted it to C, and then 7.0 changed it again. And this HBAM layer has always been somewhat tied to the file system. The first version of it used 512-byte blocks, because that was the size of a block on a floppy drive. When you were dealing with floppy drives, when you had your databases on those things, if anyone remembers that, and swapping floppies in and out all the time, that was fun; you would read and write data in 512-byte blocks. Then when the 3.0 FileMaker came out, the block size was bumped up to 1K. And in the 7.0 timeframe we went up to 4K, because most disk subsystems at that point were reading chunks of file data in and out of the hardware in 4K chunks. Now that things are going to SSDs and the like, we probably should come back and revisit those assumptions, because disk I/O works very differently in the modern OSs. But we're still sticking with the 4K blocks. Whenever we change this layer it really requires a major file format change, so you probably don't want us to change this too drastically or too quickly.
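Backing up to the support layer for a second, here is a minimal sketch, in C++, of the kind of pattern being described. This is not Draco's actual API; the interface name, the methods, and the stdio-based backend are all invented for illustration. The point is simply that everything above the support layer codes to one interface and never touches a platform file API directly.

```cpp
// Illustrative only: a made-up platform-neutral file interface in the spirit
// of the support layer described above.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <string>

class IFile {                                     // what the layers above would code against
public:
    virtual ~IFile() {}
    virtual size_t Read(uint64_t offset, void* buf, size_t len) = 0;
    virtual size_t Write(uint64_t offset, const void* buf, size_t len) = 0;
};

// One hypothetical backend, written against portable C stdio just so the sketch
// is self-contained; a real support layer would have Win32, POSIX and iOS
// implementations selected per platform.
class StdioFile : public IFile {
public:
    explicit StdioFile(const std::string& path) : fp_(std::fopen(path.c_str(), "r+b")) {}
    ~StdioFile() { if (fp_) std::fclose(fp_); }
    size_t Read(uint64_t offset, void* buf, size_t len) {
        if (!fp_ || std::fseek(fp_, static_cast<long>(offset), SEEK_SET) != 0) return 0;
        return std::fread(buf, 1, len, fp_);
    }
    size_t Write(uint64_t offset, const void* buf, size_t len) {
        if (!fp_ || std::fseek(fp_, static_cast<long>(offset), SEEK_SET) != 0) return 0;
        return std::fwrite(buf, 1, len, fp_);
    }
private:
    std::FILE* fp_;
};
```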
The next layer above that is what we call the DB engine layer. This is what you would think of as the standard database engine. This is where the calculations are done, the networking is done, queries and transaction handling are done. Now, this is not a SQL database engine. Our join graph is the mechanism we use to determine where data is located; we don't do SQL-style queries like that. As a little subsection of this DB engine layer, if we blew it out into a much bigger set of boxes of different components, there's a little box in there that can take a SQL statement, run it through all this conversion, split it apart, and try to map it into the Draco DB engine calls, the binary calls that are in there. And then when it gets the data back, it usually, hopefully, doesn't need to create a temporary file to store the results coming back from the SQL statement, but sometimes we have to create temporary tables. And this is why the SQL APIs are never going to be as fast as the native APIs. Now, there are certain operations that are hard to do using the FileMaker APIs going through the UI, and SQL is actually faster just to get to that point, especially for ad hoc queries and stuff like that where you don't want to use the join graph. But I just want to make sure people understand that the engine is not natively a SQL database engine underneath. Even when you're dealing with ESS, ESS is down in this layer too; it's taking the Draco operations and trying to emulate the Draco APIs through the SQL database underneath it, and that's why the performance sometimes isn't that great down in that layer either.

The next layer above that is the design surface. This is the one that is actually still in progress at this point. This is the layer that knows how to render layout data, what views are, and, if something called a mouse event occurred at this location (not that it actually got the mouse event), which object was hit; how to move that stuff around and how to generate a PDF from it. It's basically the drawing layer. Now, the server version of the design surface only knows how to generate PDFs; it doesn't know how to draw to other platforms. And there are extension mechanisms to the design surface, so on Windows it uses the Windows drawing APIs, on the Mac the Mac APIs, and on iOS similarly. This is also where we've switched the PDF engine from using DLI down to Hamas now. That was another requirement for us to bring PDF to the other platforms: we needed to go to a version of the PDF generator where we didn't have to pay a license for every potential WebDirect user out there.

And then lastly, on top of the Draco engine is what we call the FM engine layer. This is where the layouts are stored and the scripts are stored, in what we call the user model. The user model is the environment that the scripting engine runs in. The FileMaker scripting engine is very context sensitive. You say Go to Next Record, but you have to know what window you're talking about, what layout you're on, what record you're on, what your found set is. All that information is not specified; you don't pass that as a parameter into Go to Next Record. You just say, go to the next record.
So what I'm calling the user model is the state information: what the frontmost window is, what the active record is, what field is active, even where the insertion point is located, because there are insert-object script steps that can insert certain things at the insertion point.

Now, you don't have to look at all this stuff. This is kind of like a big layout; you can go download the PDF and look at all the little lines and see how things are all hooked up. The top portion of the chart talks about all the different clients that we have, like ODBC, PHP, XML, the Pro and Go clients, WebDirect and so on. The stuff under the line is everything that's running inside the server. But what I really want to highlight is where the Draco engine is located in all of this. The Draco engine isn't just one atomic engine that everyone talks to; the Draco engine is actually compiled into each of the clients. In this case, you see the XDBC listener on the left side. The Pro and Go clients are both in the upper right-hand corner. There's a standby server. There's the SASC, which is the server-side scripting engine: when you do Perform Script on Server, or when you're scheduling scripts on the server, that's the process that's actually running them. The CWPC handles both the XML requests and the WebDirect requests; both go through that process. All these boxes have their own copy of Draco, and it's exactly the same copy of Draco. We don't compile one for clients and one for server; they're all exactly the same, and they all communicate back and forth using the GIOP protocol. These engines in all these different locations do a lot of work to determine whether work should be done on the client or on the server, and we go through and tweak that pretty regularly: what's done where, what needs to be downloaded, and what happens there. Things get more complex because the FileMaker Pro product can actually talk to multiple servers at the same time. This is something that I guess most databases probably don't do: one client with one join graph that can span multiple servers, completely different servers on different machines. So the client actually has to have all the knowledge to take a query, split it across the different machines, and then join and combine the results back together once they come back to the client.

So next we'll talk a little bit about how the operations work inside the engine: how data is moved around, how data is stored. Which comes back to HBAM. I think HBAM stands for Hierarchical B-tree Access Method. It's like what you see in a directory structure on your hard drive, how you have directories with subdirectories in them and then files underneath, or you can think of it as a tree; there are a bunch of different representations of that. So when you get our document that describes the file format, what our file format basically is, is this big hierarchy. You could think of it as XML or JSON even; you could use those to represent it if you wanted. There are branches, and there are leaves of data. What HBAM does is, basically, you have this whole tree, you can ask for a specific branch or a specific leaf, and it knows how to find out what block that data is actually located in.
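To picture what asking the tree for something looks like, here is a rough sketch of the branch-and-leaf idea in C++. The structures and the key path are mine, purely for illustration; the real HBAM node layout and block mapping aren't public.

```cpp
// Illustrative sketch of "branches and leaves addressed by a key path".
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct Node {
    uint32_t blockNumber;                   // which 4K block this node's data lives in
    std::string leafData;                   // payload, if this node is a leaf
    std::map<std::string, Node> children;   // branches below this node
    Node() : blockNumber(0) {}
};

// Walk a key path such as {"Tables", "Invoices", "Records", "1042"} down the tree.
const Node* Find(const Node& root, const std::vector<std::string>& path) {
    const Node* cur = &root;
    for (size_t i = 0; i < path.size(); ++i) {
        std::map<std::string, Node>::const_iterator it = cur->children.find(path[i]);
        if (it == cur->children.end())
            return 0;                       // not present (maybe just not downloaded yet)
        cur = &it->second;
    }
    return cur;                             // caller can then map cur->blockNumber to a block on disk
}
```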
So it's the thing that's managing this whole tree and then figuring out which blocks the data is actually located in. Most of the operations of the database engine really are just dealing with moving parts of the tree around. If you delete a record, it'll take a branch and just delete the branch and all the leaves that are connected underneath it, and everything goes away. If you're adding a record, it adds a branch. If a client just wants to download a record, it takes that branch, packs it up, ships it over the network, and then drops the branch into the temp file on the client. So if you're working down in the engine layer, you're thinking about these key paths and the chunks of data stored underneath them.

Large chunks of data, like when you have a large container object, get split down into roughly thousand-byte chunks. The biggest leaf we ever store is about a thousand bytes. We have a standard mechanism we use throughout: if an object is over a thousand bytes in size, instead of storing one leaf, it becomes a branch of its own, with a whole series of leaves underneath it, each about 1000 bytes in size, going along serially. This is to make sure we can fit roughly four leaves in every block and not have too much extra space left over.

Speaking of extra space, I'm actually recycling slides from presentations we made back in 2007; I don't know if you remember this slide. In addition to the mapping I've been talking about, HBAM also manages the space in all these blocks. So it's watching: if you try to insert a new leaf into a block, it may need to split the block into two blocks. Take the block, duplicate it, delete the upper half on one side and the lower half on the other, and then insert the leaf into whichever block it fits in. The encryption of these blocks is something more recent, from when we added encryption at rest, and that's handled by the HBAM layer too: as each block comes into memory, it's decrypted. Now, as these blocks move in and out of memory, the file cache settings in the Pro product and the database cache size you can set in Server control how many of these blocks can be kept in memory at one time. So the cache isn't really based on records or anything like that; it's really based on the number of these blocks you can keep in memory. When a block comes into memory, if the file is encrypted, it has to be decrypted, and encrypted again when it goes back out. So if a block can stay in memory all the time, it can stay decrypted, and you don't have to decrypt it every time you need to access it. That's one way the cache helps with speed. The original reason for the cache was just to keep the blocks in memory, because early operating-system cache management wasn't that great, and it was better for us to cache the blocks than to let the operating system do it.
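Roughly, what that cache setting controls looks something like the sketch below: a budget counted in blocks, with decryption happening only when a block is pulled in. The class, the stub I/O and the eviction policy are all invented; it's just meant to show that the unit is decrypted blocks, not records.

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

typedef std::vector<uint8_t> Block;          // one 4K block, held decrypted in memory

class BlockCache {
public:
    explicit BlockCache(size_t maxBlocks) : maxBlocks_(maxBlocks) {}

    // Return the decrypted block, hitting disk (and the cipher) only on a miss.
    const Block& Get(uint32_t blockNumber) {
        std::map<uint32_t, Block>::iterator it = cache_.find(blockNumber);
        if (it != cache_.end()) return it->second;             // already in memory, already decrypted
        Block b = Decrypt(ReadBlockFromDisk(blockNumber));      // miss: disk I/O plus decryption
        if (cache_.size() >= maxBlocks_) Evict();               // the setting is a count of blocks
        return cache_[blockNumber] = b;
    }

private:
    Block ReadBlockFromDisk(uint32_t) { return Block(4096, 0); }   // stub for the real HBAM file I/O
    Block Decrypt(Block raw) { return raw; }                        // stub: EAR would do AES-256 here
    void Evict() { if (!cache_.empty()) cache_.erase(cache_.begin()); }  // stub eviction choice

    size_t maxBlocks_;
    std::map<uint32_t, Block> cache_;
};
```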
Now, with SSD drives out there and much-improved caching mechanisms in the operating system, that has changed a little bit, and you'll notice that sometimes you'll get better performance by decreasing the cache sizes in your Server or your Pro product, if you're dealing with very fast SSD drives. There are even some people who go out and run their FileMaker databases entirely on RAM drives when they want the absolute fastest performance; they spend a wild amount on memory, but it is extremely fast. At that point it's almost better just to leave the blocks on disk, and they're not encrypting, they're not doing EAR, because they have enough money to build big guarded rooms and things to protect it.

Something else this slide shows is that the HBAM layer also manages compaction. So when you go through and do a save-a-copy, selecting a file and saying I want to save a compacted copy, it'll move all the blocks around and move chunks to fit as much as it can into each block. It also handles what happens when the database deletes large amounts of data: when you close the database, all the free blocks that have no data in them anymore get moved to the end of the file, all the blocks are shuffled around, and then the file is truncated. That's actually the only time a FileMaker file shrinks in size, at the close operation after you've done a large delete. We don't keep moving blocks around to the end of the file during idle time or anything like that, because there may be a free block in the middle of the file that we may just refill again as you insert more data. We don't really know until you close the file that you're actually done with it, and that's when we clean up all the free blocks.

Something else the engine is managing all this time, getting a little higher level now, is these temporary files. Whenever the Draco engine opens up a main file, in this case on the host when it opens up file A, it creates a temp file for it, and that's true whether it's the host or a local client, even if you're just opening the file inside FileMaker on your local desktop. In the network case you actually have two temp files: the host has a temp file for that file, and the client has a temp file for that file too. Actually, each client has a temp file for that file. What we use these temp files for is basically caching. This is why the file block cache of HBAM is really different from what you hear when people talk about caching records and stuff like that: when we're talking about that cache, we're talking about what's stored in the temporary file, not what's stored in memory. We're caching stuff in this temporary file. External container data, or thumbnail representations of those things, are not inside the file anymore; they're inside a folder next to it. And on the client side, most operating systems have a cache folder, where the browser puts cached files and where other applications put data they want to build up, like font images and stuff like that. We use that same location, and we have a folder in there where we put container objects that are externally stored; we download them from the host and put them into that folder so we can use them when we're displaying them.

So when you're doing browsing operations, this is when we start downloading records from the main file to the client.
And mainly at this point we're moving records from the main file on the host into the client's temporary file. As has been mentioned in previous talks, we work at the level of records; we don't go down and pick specific fields to download based on the layout, because of the complexity: we don't know when you're going to switch layouts at any moment, there are calcs that are dependent on other fields, there's a whole set of dependencies. So we stick to using the native block of a record.

The locking mechanism used in FileMaker when you lock a record is also based on the hierarchical tree. Our lock manager is just working off this tree: you say, I want to lock everything underneath this branch. So when you do a pause operation on the server, what you're doing is saying, I want to block all writers from the entire tree, so it puts a lock at the topmost part of the tree, or the root of the tree, depending on which direction you like to think of things. Records are locked that way, and when you do table operations, we lock just the table and the entire hierarchy underneath that part.

So we're downloading portions of these trees, as I mentioned, and in this case entire records. Something that is a little different is container fields: what's actually stored in the record is really just a pointer to the container data. There's a separate library that stores the actual container data. And we consolidate: if you insert the same container data into multiple fields, we basically just ref-count the first one multiple times. We've been doing this for a long time; this logic goes back to the 3.0 timeframe, because especially in the earlier versions of FileMaker people would be putting these green-light, red-light things on each record, and when you only had a 20-megabyte hard drive, 10,000 green lights and red lights would burn up most of your hard drive. So that was a big feature of FileMaker back then: we consolidated all of those down into one entry in the library, with a ref count of maybe two and a half million on one red light, instead of storing it over and over.

Something else: when the client is asking for external data, or for containers in general, the view system can now ask for an image of a specific size if it knows it's just going to render it. In that case, when the request goes up to the host, the host will generate a thumbnail of that image; instead of downloading the entire three-megabyte TIFF or whatever it is, it'll just download a couple-K PNG of it or something like that, which then gets stored in the local cache too for whenever it needs to be referenced. But when you do operations like exporting field contents to disk, then we do have to download the entire TIFF, the full-size object, to put it on the hard drive for you.

So in these cases the client keeps pulling things from the host. There is one case where the host actually pulls something from the client. This is when we're doing queries, and you do a query on an un-stored calc; this could almost be a whole talk of its own, how the query manager works, but usually we try to push the query right to the host, to do the query there, so it can use the indexes and reference a bunch of records and do that type of work.
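Going back to the lock manager for a second, here is a rough sketch of the "a lock is just a branch of the tree" idea. The path scheme and the API are invented for illustration; they are not the real Draco interfaces.

```cpp
#include <set>
#include <string>
#include <vector>

class TreeLockManager {
public:
    // Returns false if an existing lock sits on this path, on any ancestor, or on any descendant.
    bool TryLock(const std::vector<std::string>& path) {
        std::string key = Join(path);
        for (std::set<std::string>::const_iterator it = locks_.begin(); it != locks_.end(); ++it)
            if (Covers(*it, key) || Covers(key, *it)) return false;   // conflict somewhere on the path
        locks_.insert(key);
        return true;
    }
    void Unlock(const std::vector<std::string>& path) { locks_.erase(Join(path)); }

private:
    static std::string Join(const std::vector<std::string>& p) {
        std::string s;
        for (size_t i = 0; i < p.size(); ++i) { s += '/'; s += p[i]; }
        return s.empty() ? std::string("/") : s;
    }
    // True if a is the same node as b or an ancestor of b.
    static bool Covers(const std::string& a, const std::string& b) {
        if (b.size() < a.size() || b.compare(0, a.size(), a) != 0) return false;
        return b.size() == a.size() || b[a.size()] == '/' || a == "/";
    }
    std::set<std::string> locks_;
};

// TryLock({"Invoices", "Records", "1042"})  -> one record
// TryLock({"Invoices"})                     -> a whole table
// TryLock({})                               -> the root: "block all writers" during a server pause
```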
Back to those pushed-down queries: if the un-stored calc contains a global field, the host will go through and ask the client, send me all your global fields right now, because to evaluate those calculations on the server side it needs the values of those globals. Something that's new in 15: before 15, some people noticed that we weren't uploading variables to the server. So if you were doing queries on un-stored calcs, and the client decided, oh, I think we can do this on the server side, the query could return an answer you weren't expecting; the answer would always be the answer you would get if the variable was empty. That changed in 15: you can now use variables in your un-stored calcs, and your query should always work, regardless of whether it decides to perform the query on the client or on the server.

So I've been talking about all this data going up and down, so what is that temp file on the host really for? The temp file on the host is for when changes are made on the client. And this is not just for record data; this could be when you change a layout, when you make a change to a theme, when you update a value list, or something like that. Pretty much everything works through this mechanism. What happens is that when you're editing the layout, or editing the value list, or editing the record, you're editing the copy that's in the local temp file, the temp file A on the client. Once you finish editing it, and you, say, hit the OK button in Define Value Lists, or hit Save in Layout mode, or whatever, we upload the object you were editing, and we upload it to the temp file that the server has for that file. It's uploaded to the same exact location: the way the tree looks in all these temp files is almost identical in each location. So it says, take this branch and put it in the same exact location on the host, and it goes up to the host that way. Once all the different components of the layout, or the database schema, or the multiple records (you can be changing multiple records within one transaction) have been uploaded to their proper locations in the temp files, and maybe to different temp files if your related data happens to be in other files, a final command is sent up to the host saying, okay, now commit this data. At that point the server looks in the temp file, moves everything into the main file, does whatever other processing has to be done, like indexing and cleaning up all the indexes, and then sends the notifications out to all the clients.

We mainly do this so that if there's a network break while we're uploading the data, we don't mess up the main file. This is actually something that changed over time; even in version 7, some operations were still moving things directly into the main file, and I think it was finally completed around version 10 of FileMaker. That's why some people were complaining about instability and were afraid of doing editing on the clients back then: if the network did go down, you could screw up your layout and stuff like that. But over time we've been going through and making sure that everything goes through this process.
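In rough pseudocode terms, the shape of that client-side commit looks something like this. HostConnection and the calls on it are stand-ins I made up, not the actual wire-level operations; the point is just that every edited branch lands in the host's temp file first, and only one final commit touches the main file.

```cpp
#include <string>
#include <vector>

struct Branch {
    std::vector<std::string> path;   // key path, identical on the client and the host temp files
    std::string packedData;          // the packed-up branch contents
};

class HostConnection {               // hypothetical stand-in for the connection to the host
public:
    void UploadToTempFile(const Branch& b) { pending_.push_back(b); }  // stub: queue it on the host side
    bool Commit() { pending_.clear(); return true; }                    // stub: host moves temp data into the main file
private:
    std::vector<Branch> pending_;
};

bool CommitTransaction(HostConnection& host, const std::vector<Branch>& editedBranches) {
    for (size_t i = 0; i < editedBranches.size(); ++i)
        host.UploadToTempFile(editedBranches[i]);  // records, layouts, value lists: whatever was edited
    return host.Commit();                          // if the network dies before this, the main file is untouched
}
```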
Going through the temp file does slow down making the changes a bit, but it's a lot safer: if your network goes down during that part of the process, you don't want your data corrupted in the main file.

A few more facts about temporary files. They're always encrypted. The level of encryption is based on the main file's encryption: if you're using EAR and it's AES-256 encrypted, the temp file will be encrypted using the same scheme. Even if the main file is not encrypted, we do a light level of encryption that's pretty fast; you can think of it as 32-bit encryption instead of 256-bit encryption. It's much weaker, but it is still encrypted, and you can't just take the file and look at it. Instead of taking years to break, maybe it'll only take a month or so for your temp file. If you really want the security, make sure you go through and use the EAR encryption.

Temp files for networked files, the ones we now keep around with the cached data, we keep for 15 days. We keep them in the cache location that your operating system provides for applications. If it's a temp file that was created for a local file, we delete it immediately, because we can just pull the data back out of the main file when you're doing editing and stuff like that. If the free disk space on the main drive where the temp files are located gets under 250 megabytes, we'll start purging these things. We'll purge unused cache files, and we'll even start deleting cached records out of the temp file to make sure we have some free space available. External container data is a little bit different. Those files are cached in a folder inside the cache location on your hard drive, and there we're a little more stingy about the amount of disk space we'll use. If you have less than two gigabytes available on the Mac or Windows, or one gigabyte available on the iOS device, we'll only keep the container objects that are actually in use, and otherwise we'll start purging the oldest-referenced ones.

So, a little performance tip, I guess: if you really want FileMaker to work its best and you're not dealing with external container data at all, you just need about 250 megabytes free. But if you're dealing with external container data, you really want to have about two gigabytes available on the main drive. And when I'm talking about the main drive, this is the drive where the operating system says to store temporary files and cached files; wherever the browser you're using stores its GIFs and stuff like that, that's what will get used. Usually that's the C drive on Windows and the main drive on the Mac, but there are a bunch of different ways you can configure things to move that folder to other locations.
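Pulling those numbers together, the purge decisions amount to roughly the following. The thresholds are the ones from the talk; the functions themselves are just my illustration of them.

```cpp
#include <cstdint>

const uint64_t kMB = 1024ull * 1024ull;
const uint64_t kGB = 1024ull * kMB;

// Under about 250 MB free on the temp-file drive, start trimming cached
// records and unused cache files out of the temp files.
bool ShouldPurgeCachedBlocks(uint64_t freeBytesOnTempDrive) {
    return freeBytesOnTempDrive < 250 * kMB;
}

// The external container cache is held to a stricter budget:
// about 2 GB free on Mac/Windows, about 1 GB free on iOS.
bool ShouldPurgeContainerCache(uint64_t freeBytesOnTempDrive, bool isIOS) {
    uint64_t threshold = isIOS ? 1 * kGB : 2 * kGB;
    return freeBytesOnTempDrive < threshold;
}

// Cached temp files for hosted files stick around for roughly 15 days of disuse.
bool ShouldDeleteNetworkTempFile(int daysSinceLastUse) {
    return daysSinceLastUse > 15;
}
```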
Next we'll go over the file opening process. One of the reasons I'm talking about this is that last year this came up as a question on a panel I was on, and I gave about a minute-and-a-half answer, and people were scrambling and writing stuff down as fast as they could, and everyone came back afterwards and said, can you please go over this slower and in more detail?

So it's a seven-step process; I couldn't figure out how to get it up to a twelve-step or some other fancy number. The first part is path list processing, step one. In FileMaker, when you define a file to open, and actually even in the script steps that do import and export and stuff like that, you can specify a number of paths. You don't necessarily have to put just one path in there. I don't know how many people do this; some people just go in, hit Add File, see that one bit of string, hit OK and go away. But FileMaker is actually a little more flexible: you can put multiple paths in there, and it'll resolve them until it finds the first one it can use, and then use that one to open the file, or use that location to store the external file. So if you want it stored in one place on Windows and another place on the Mac, whenever you run that script, you can put those two paths in there and it'll use the first one it can find. This is also where you can put local variables and global variables in: you can use the $ and $$ variables in there as a representation of what to use.

So what the path list processing does at this point is it goes through the list of paths you have, replaces the variables with whatever the variable's value is at that point, turns relative paths into full paths based on whatever file is the parent file driving the process, and then removes paths that are not valid for that platform. Now, things like the replacing of local and global variables aren't necessarily done in all of the processes, and the key part here, one of the big requests I keep hearing again, is that people want to use variables inside external data sources. So when you reference another file in the relationships graph, they want to put a variable in there. There are some problems with that right now: the variables aren't even known at the point when we're processing that. We've been having discussions with developers about different techniques we could use and ways to maybe improve this in the future.

As an example, here's a four-line path list, and the script is in a file called invoices.fmp12. I have a relative path first, then a variable ($IT) as the second choice to go through, then a filewin: path, so that one should only ever be resolved on a Windows machine, and the last one is a filemac: path, which should only ever be resolved on a Mac. After we go through the processing I mentioned, if we're on a Mac and the $IT variable contained an fmnet: path, then we end up with a path list that's fully resolved: a file: path with the absolute path, the absolute fmnet: path, and that last filemac: path.
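A minimal sketch of what that step-one processing amounts to, under my own assumptions: the variable map, the prefix checks and the function are invented, and a real implementation would also expand variables embedded inside a path and turn relative paths into full paths against the parent file.

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

std::vector<std::string> ResolvePathList(const std::vector<std::string>& rawPaths,
                                         const std::map<std::string, std::string>& variables,
                                         bool isWindows) {
    std::vector<std::string> resolved;
    for (size_t i = 0; i < rawPaths.size(); ++i) {
        std::string path = rawPaths[i];

        // A line that is just a $var or $$var reference gets replaced by its value.
        std::map<std::string, std::string>::const_iterator it = variables.find(path);
        if (it != variables.end()) path = it->second;

        // Drop paths that can never resolve on this platform.
        bool isWinPath = path.compare(0, 8, "filewin:") == 0;
        bool isMacPath = path.compare(0, 8, "filemac:") == 0;
        if ((isWinPath && !isWindows) || (isMacPath && isWindows)) continue;

        resolved.push_back(path);   // relative-to-absolute conversion would also happen here
    }
    return resolved;                // the first entry that can actually be opened wins
}
```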
So now we get to step two. We have this nice path list; now we need to find the file. In step two we start walking through that list of paths. The first thing is to check, is that file already open? If it is, we're all done; there's nothing else to do, we're all happy, and we can stop the whole opening process and just use the one we have. For the fmnet: paths, the first thing we do is see if we already have a connection to that host. The way the GIOP protocol works, the one that talks between the multiple database engines I pointed out on that chart a while back, it can multiplex multiple calls over the same pipe. It's not like HTTP, where you send a request and you wait for a response on that one pipe; you can send multiple requests over it, and the order of the requests and the responses coming back can be intermingled in any way over that one pipe. So we look to see if we already have a connection to the host. If we have one, we're going to use it. If we don't, then we open up a connection to that host. And this is just opening up a connection; it's not logging in to the host or anything like that, it's just opening a pipe to the host so we can make additional requests to it.

Step 2C, then we try to open the file. We try to open it read-write first. If that fails, then we try a second time asking for read-only. This is going to happen over the network if you get your permissions wrong, or, if you actually do want your file to be read-only on the host, you can do that and we'll only open it read-only. But we actually go through and try to open it twice, so when John starts showing some of his stuff, you may see two open calls coming through when you're opening files. That's the reason: we always try the read-write first and then read-only second. That's a small performance hit sometimes, but it hasn't been judged worth enough to change, and there's actually a lot of stuff that would have to change to change that. If none of that works for the one path we're trying to process, then we go on and try the second item in the path list, and then the third, and we keep going.
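Sketched out, that step-2C behavior is roughly the following. OpenRemoteFile is a made-up stand-in for the real open request, and the stub here just simulates a host that only allows read-only access so the fallback path gets exercised.

```cpp
#include <memory>
#include <string>

struct RemoteFile { bool readOnly; RemoteFile() : readOnly(false) {} };

// Stub standing in for the real open request to the host; it pretends the host
// refuses read-write access, so only the second attempt succeeds.
std::unique_ptr<RemoteFile> OpenRemoteFile(const std::string& path, bool readWrite) {
    (void)path;
    if (readWrite) return std::unique_ptr<RemoteFile>();       // "permission denied"
    return std::unique_ptr<RemoteFile>(new RemoteFile);
}

std::unique_ptr<RemoteFile> OpenWithFallback(const std::string& path) {
    if (std::unique_ptr<RemoteFile> f = OpenRemoteFile(path, true))
        return f;                               // the normal, read-write case
    if (std::unique_ptr<RemoteFile> f = OpenRemoteFile(path, false)) {
        f->readOnly = true;                     // the second open call you may see in the logs
        return f;
    }
    return std::unique_ptr<RemoteFile>();       // give up and move on to the next path in the list
}
```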
So next we come to step three, which is the part that's changed the most in the last release. If you were following any of my social media stuff, you saw me saying I was going to talk about the mysterious step 3B, and I didn't really put enough effort into it, and I have to really thank Beatrice for being the one person following along; thank you. Actually, Kent from New Zealand helped a little bit too.

For step 3A, we determine the path of the temporary file that we create. If it's just going to be a local temp file that we're going to throw away, we pick a random name to use. If it's a file we're opening over the network, we use the name of the file, the name of the host, and the path of where it's stored to generate a supposedly unique path for where the cache file should be stored. Now, we've been hearing some issues from a few customers who I guess are hitting conflicts where this is not unique enough for some reason, when they're running server and client on the same machine; we're still researching what's going on in that case, but it's supposed to be a unique name for that file, host and client, so I don't know why it's failing.

Then we get to the new step 3B. This is where we'll actually reuse the cache file, and this is the new thing that helps performance. I mean, Richard Carlton has been yelling about how great FileMaker is now, how fast it opens files, and it actually does help quite a bit: there's no reason to redownload the layouts and the records into the cache file if they've never changed. And this is really not even that new of a concept, because, as I was saying, back when I was working on FileMaker Go we had to keep that temp file and all that state information around anyway when the app was unloaded and brought back in. So this logic and this mechanism had to be tweaked a bit, because it's not exactly the same thing going on, but we knew the general mechanism was working because it was working in Go. Actually, what we found when we put it into the mainline product was that there were some bugs in Go that no one had noticed; so we fixed Go, and it's working even better now, even though we never got any reports about it. It's working much better in 15, so upgrade.

Then, when we reuse that temp file, step 3C is where we go through and determine whether any data is out of date. So you'll see a remote call, especially if you use top call logging. What is that? Okay, go see John's talk tomorrow morning; he's going to go into a lot more detail about that. But you'll see this call, I'm not even sure what the name of it is, where we go through and send up a list of all the objects that are in the cache file. We send that up to the host, and the host goes through and checks: for all these branches and leaves, is the modification ID for each leaf still valid? If they match, it just deletes them from the list, and then it sends back a report of all the ones that are different. The client then gets a list of all the leaves and branches in the temp file that are out of date, and it just goes through and trims them all. It doesn't even know whether it's a record or a layout or anything like that; this is really at a lower level. It just knows that these portions of the tree stored in the temp file are out of date and need to be deleted, so it deletes all that stuff. Then, when the client opens the file and needs to access that layout which got deleted because it was out of date, it'll just download it again, or download the record again if it's not there. Some other little bits of information come down into the temp file too: there's some versioning data to keep track of what version of FileMaker you have and what you've set for the minimum allowed versions, the window locations, auto-login information that may be needed, and whether the guest checkbox is enabled or disabled in the login dialog box that's coming up.

And then the login process, step four. This is a pretty complex series of events too. First, if you're reconnecting, like if the network went down and you're trying to reconnect, we're going to try to use the old credentials. Step B is trying the account and password for the entire session. Certain clients are more session-based, and the biggest example of that is ODBC: in ODBC you log in with a username and password, and you're going to use that username and password for every file that you open. That's what step B handles, the case where whatever the client is, it's going to try to use the same username and password for every file. If that's not there, the third thing we try is the parent file's credentials: if this file is opening that file, we'll try the same credentials there. So we're basically passing the credentials down and trying to reuse them for the next file you're opening. If that doesn't work, then the auto-login credentials, the ones you may have set in File Options. On Windows there's single sign-on at that point: if your server is also running on Windows and you have it hooked up to Active Directory, and the client's Windows machine is on the same domain, we'll go through and use the single sign-on logic and try to log in via external authentication. Next, the keychain or the credential manager, if you've enabled that; I think by default now the keychain option is turned off in any new file you create, so you actually have to turn it on to see this feature. Next we handle the expired-password cases; we may need to go through and change the password because it has expired. And then, only if all of these fail, do we actually go and ask the user, and bring up the login dialog box to ask for the username and password.
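Boiled down, the order those credential sources are tried in looks like the sketch below. The types and the TryLogin call are invented, and things like single sign-on, the keychain and expired-password handling are of course much more involved than a loop; this is only the shape of the cascade.

```cpp
#include <cstddef>
#include <string>
#include <vector>

struct Credentials { std::string account, password; };

// Stub standing in for "ask the host whether it accepts these credentials".
bool TryLogin(const Credentials& c) { return !c.account.empty(); }

bool LoginForFile(const std::vector<Credentials>& sourcesInOrder) {
    // sourcesInOrder would be built from: the previous session's credentials (reconnect),
    // the session-wide account/password (the ODBC-style case), the parent file's
    // credentials, the file's auto-login credentials, single sign-on, then the keychain.
    for (size_t i = 0; i < sourcesInOrder.size(); ++i)
        if (TryLogin(sourcesInOrder[i])) return true;   // expired passwords get handled along the way
    return false;    // only now do we put up the login dialog and ask the user
}
```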
Next is the database engine processing, step five. This is where there's a minimal amount of stuff we need to download: we need to know what the schema is, basically the tables, the table occurrences, and the relationships between them. And step B is where we actually merge all this stuff together. We often get questions saying, well, if I have this big join graph, is this a slow operation, and how does the size of the join graph affect things? Loading one join graph into memory for one single file is really a pretty fast operation. It says, oh okay, here's the join graph, and we dump it into this global map, because the database engine has this one map of all the files and how they're all interrelated with each other. Where you start running into slowdowns is when you start opening up multiple files that all have massive join graphs that all interact with each other. That's because this master map is keeping track of things like which tables need to be modified for cascading deletes: if you delete a row in this table, what are all the other tables that need to be modified, and which objects need to be deleted in there? There's dependency information about portal row creation and stuff like that: what are all the intermediate paths, what may be affected, and what primary keys may need to be generated before doing the automatic record creation that we do inside portals. So if you have one file with a big join graph, and all the other files have really tiny join graphs that aren't drastically interconnected, that really shouldn't slow things down too much. But you really run into the performance problems if you have massive old files with these big join graphs and you open them all at the same time. And really the worst case was if you had just converted pre-7.0 files, which basically gave you these big star join graphs; you'd have 15 star join graphs that all had to get merged, and you'd have to go through and follow the paths through all the possible mechanisms to get that hooked up.
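As a very rough sketch of that "dump each file's graph into one global map" step: the structures below are invented, and the real master map tracks far richer dependency information (cascading deletes, portal record creation, key generation) than a set of edges, but the merge-is-cheap, traversal-is-the-cost shape is the idea.

```cpp
#include <cstddef>
#include <map>
#include <set>
#include <string>
#include <utility>

// One edge = a relationship line between two table occurrences in some file's graph.
typedef std::pair<std::string, std::string> Edge;

class GlobalJoinMap {
public:
    // Loading one file's graph is cheap: just fold its edges into the master map.
    void MergeFileGraph(const std::string& fileName, const std::set<Edge>& fileEdges) {
        for (std::set<Edge>::const_iterator e = fileEdges.begin(); e != fileEdges.end(); ++e) {
            edges_.insert(*e);
            owner_[e->first]  = fileName;
            owner_[e->second] = fileName;
        }
    }
    // The cost shows up later, when operations have to chase paths across
    // everything merged here from every file you have open.
    size_t EdgeCount() const { return edges_.size(); }

private:
    std::set<Edge> edges_;                        // table occurrence <-> table occurrence
    std::map<std::string, std::string> owner_;    // which file each occurrence came from
};
```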
Now, if you're one of the simpler clients, like ODBC or even server-side scripting, the opening process basically ends at this point. Actually, no, server-side scripting doesn't stop at this point; ODBC is the best example, because you can't run scripts over ODBC. Sorry, I get those mixed up. So we get to step six, the FileMaker engine processing. This is where the user model I talked about in the FM engine layer comes in: it usually wants to have a window for this file if you're the one doing the opening. Even in server-side scripting, there is a virtual window being constructed at that point. If it's a remote file, we start downloading stuff. This is when we start downloading the layout, the style sheets, value lists, and the font ID mapping. We have a little structure inside each file because, when we store data with style runs inside it, we don't store the name of the font for every style run in the data that's stored throughout the tables. Each style run has an ID number, and there's this big table that maps all these IDs to font names. This is a chunk of data that comes down, and it should come down once, and it's something that can slow down your opens too. When this font table comes down, we try to resolve which font we'll actually use for each of those IDs. So if you have a file that's been around a long time, and we have some files at FileMaker that go way back, you'll see font entries in there for Monaco and Chicago and ancient Mac fonts that no longer exist, but we still have to go through and process them and determine which font we're going to use if we come across one, if you display data using that reference. Custom menus come down at this point too. We also queue up any script triggers that are based on the opening of the window, adding them to the queue of things to execute once we get to some idle time.

And lastly, step seven is what happens after this opening process. The operating system is probably going to ask us to draw the window, and we're waiting for the next idle event to come through to run the scripts, because basically scripts run when no other events are coming in from the operating system. And then, in the real case, after you've opened the file, this is where the view system goes through and starts saying, oh, I want to draw this layout, I need data from this record, I need data from that record, I need this portal, I need this. That's when the requests come in to download the stuff. For record data, we usually try to prefetch: I believe we'll prefetch something like the next 20 records from the one you're currently viewing, assuming you're going to be moving forward. So it's not necessarily one record at a time. If you're in View as List mode, or you have portals, we'll go through and download records that span both sides of what's currently being viewed, so as you start scrolling we already have some of the record data downloaded. We'll download scripts as they're being executed, and we download container data when needed.
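The prefetch behavior just described, sketched very roughly: the "20 ahead" figure is the one from the talk, while spanning the same distance behind in a list view is my assumption, and the function and types are invented.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Given the found set and where the user is, decide which record IDs to ask
// the host for, so nearby records are already local by the time you get there.
std::vector<int64_t> RecordsToPrefetch(const std::vector<int64_t>& foundSet,
                                       size_t currentIndex, bool listView) {
    std::vector<int64_t> wanted;
    size_t ahead  = 20;                              // assume the user keeps moving forward
    size_t behind = listView ? 20 : 0;               // list views and portals span both directions
    size_t first  = currentIndex > behind ? currentIndex - behind : 0;
    size_t last   = std::min(foundSet.size(), currentIndex + ahead + 1);
    for (size_t i = first; i < last; ++i)
        wanted.push_back(foundSet[i]);
    return wanted;
}
```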
Next, for the last part, I'm going to talk about some of the changes that were made for FileMaker 15 in particular. This is a little chart I made showing the effect of that magical step 3B where we're caching the data. Now, even as you go down the chart here, we're always making performance improvements to the FileMaker engine; it's not that we only decided to do this at one point. For this graph I used a downloadable solution, a common template on the internet that people start their solutions from, and I ran it on 12, 13, 14, and 15 going through the loopback interface, which is basically having the host and the client on the same machine, and then used the Mac link conditioner software to simulate Wi-Fi, DSL, and 3G. So this is all a very controlled environment, because it wasn't using a real network; it was using the loopback interface with those simulated conditions. These are handy tools to play around with if you want to see how your solution will operate in specific environments. But as you can see, the 3G case, basically the worst network case, improves drastically when the layout and the record data are already in the cache file as you open the file. The next change: there's a new script step in 15 called Truncate Table. It basically deletes all the contents of the table immediately, without doing any cascading deletes. It's a very fast way to get rid of all the data in a table. Now, one reason why Delete All Records, which is what you could otherwise use, is slow is that there's a lot of work to do when you're deleting records in general. When Delete All Records is walking through all the records — that arrow I'm pointing at going down — it needs to check whether you have permission to delete each record. It needs to know whether there are cascading delete operations on that record. It needs to know whether someone else is using that record and whether it can be deleted at all. So there's a lot of checking going on. And then there's also the question of whether there's container data in that record; if so, it has to go back to the container store area, where the actual container objects are kept, check the reference count on each object, and see whether it's going to drop to zero or not. So when you do Delete All Records, the command goes through and does this in chunks of 500 records at a time, and that's why it's pretty slow. After we implemented Truncate Table, some people asked: well, why don't you just use Truncate Table in that case? Because with truncate — going back to this whole tree picture — when you do Delete All Records you're deleting all these little branches individually, but Truncate Table basically just chops two branches: the branch that contains all the container data and the branch that contains all the record data. And it's a lot quicker.
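Here is a minimal sketch in Python of the two paths being described — the chunked per-record walk versus chopping the two branches. It's purely illustrative, with hypothetical names; the conditions in can_truncate paraphrase the A-through-G list that's spelled out next.

```python
# Illustrative sketch only: which path Delete All Records can take in 15.
# Field names are hypothetical and paraphrase the A-through-G conditions
# from the slide; this is not FileMaker's internal logic.

from dataclasses import dataclass

@dataclass
class DeleteAllContext:
    client_and_host_are_15: bool           # A: both client and host are version 15
    is_shadow_table: bool                  # B: ODBC/ESS shadow tables can't be truncated
    found_set_is_all_records: bool         # C: deleting everything, not a found subset
    user_can_delete_all: bool              # D: delete permission granted...
    delete_permission_is_calculated: bool  #    ...and not decided by a calculation
    has_cascading_deletes: bool            # E: no cascading delete rules anywhere
    has_global_container_fields: bool      # F: no global container fields
    can_lock_all_records: bool             # G: nobody else is editing a record

def can_truncate(ctx: DeleteAllContext) -> bool:
    return (ctx.client_and_host_are_15
            and not ctx.is_shadow_table
            and ctx.found_set_is_all_records
            and ctx.user_can_delete_all
            and not ctx.delete_permission_is_calculated
            and not ctx.has_cascading_deletes
            and not ctx.has_global_container_fields
            and ctx.can_lock_all_records)

def delete_all_records(ctx: DeleteAllContext) -> str:
    if can_truncate(ctx):
        # Fast path: chop the record branch and the container branch of the tree.
        return "truncate"
    # Slow path: walk the records in chunks of 500, checking permissions,
    # cascading deletes, locks, and container reference counts per record.
    return "delete records in chunks of 500"

ctx = DeleteAllContext(True, False, True, True, False, False, False, True)
print(delete_all_records(ctx))   # "truncate"
```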
But to be able to take that truncate path when you run Delete All Records, we have a list, A through G, of things that all have to be true. Now, this actually happens pretty regularly: if you have the full found set selected, we can go through and actually use Truncate Table. So even though there are all these conditions, it's not rare that we'll use truncate; it's actually pretty common in the solutions we've seen. So, one: the version has to be 15, and it has to match the host. Two: it only works on FileMaker tables, not on ODBC/ESS shadow tables. You have to be trying to delete all the records, not just a subset — the menu command reads Delete Found Records instead of Delete All Records if you look at it when only a subset is selected. The user has to have permission to delete all the records, and there can't be any calculation determining whether that user has delete permission or not. There can't be any cascading delete operations in any table or in any file that would have to be handled if you deleted those records. The Truncate Table script step itself ignores that restriction: if you tell it to truncate, it doesn't care about cascading deletes. It assumes that you know what you're doing, and you need full admin permissions to perform that script step anyway. But when we're doing Delete All Records, we don't want to compromise the integrity of your database by skipping the cascading delete of your line items when you delete your invoices. There can't be any global container fields. That's because global fields store their containers in the same library object where all the records store their containers too; truncate is going to blow away the entire container library, but we don't want to blow away container data that's in your global fields — and we don't actually delete the global fields either. And the last thing is that we have to get a lock on all the records. This may be the condition most likely to fail in the multi-user case: if another user is editing one record in that table, we fall back to deleting one record at a time and hope we get them all. That's one of the bad things about Delete All Records — it doesn't necessarily guarantee that you're going to delete all the records if someone is using one of them. This is where you may want to design things to use Truncate Table if you can, because Truncate Table will either succeed or fail; it doesn't fail halfway through. It's going to delete everything if it succeeds. Another big change in the Draco engine is that we're trying to make the client a lot more multi-threaded. This started first with container field loading — I forget which version that went into; they're kind of becoming a blur now with how fast the release cycle is. In that case, when you have a container object on a layout and the file is hosted, we spawn another thread to actually fetch the data, which may mean fetching the data over the network from the host or waiting for the host to generate the thumbnail. Even in the local case, if there's, say, a five-megabyte TIFF and we want to generate a thumbnail for that container to display, we'll generate that thumbnail on a different thread and then cache it so the next time you draw it it's faster. So that was our first foray into multi-threaded drawing. The new feature in 15 is what we call portal in-line progress bars, and it's basically the same process we used for container data. To generate the rows that are in a portal, you have to do a query. You may have to do filtering, if you have portal filtering turned on or if records are being filtered based on authentication information. You may have to sort, if your portal sorting is on or the relationship is sorted. There are a bunch of different operations there, and it can take a while to generate that portal list. So basically, what we did for container objects we now do for generating the list of records that are in the portal. Note that this is not the fetching of the data in the portal; this is just generating the list of rows that need to be displayed in that portal. The fetching still works the way it used to: it goes through and starts downloading records and fetching things from there.
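A minimal Python sketch of how that kind of background work can be structured — an analogy only, with hypothetical names, not Draco's actual threading code: the row list is built on a worker thread while the main thread keeps drawing the progress indicator, and the record data for the visible rows is still fetched afterwards the old way.

```python
# Illustrative sketch: build a portal's row list on a worker thread while the
# main thread keeps drawing a progress indicator. Names are hypothetical.

import time
from concurrent.futures import ThreadPoolExecutor

def build_portal_row_list(related_row_ids, portal_filter=None, sort_key=None):
    rows = list(related_row_ids)                      # the query for related rows
    if portal_filter:
        rows = [r for r in rows if portal_filter(r)]  # portal filtering, if turned on
    if sort_key:
        rows.sort(key=sort_key)                       # portal / relationship sorting
    time.sleep(0.1)                                   # stand-in for real work
    return rows

def draw_portal(related_row_ids):
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(build_portal_row_list, related_row_ids,
                             portal_filter=lambda r: r % 2 == 0,
                             sort_key=lambda r: -r)
        while not future.done():
            # Each draw pass on the main thread just shows the in-line progress bar.
            print("drawing in-line progress bar...")
            time.sleep(0.02)
        row_list = future.result()
    # Back on the main thread: record data for visible rows is fetched as before.
    print("portal rows:", row_list[:5])

draw_portal(range(100))
```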
One big change: we've heard some developers promoting techniques where you set a variable with a Let statement in a calculation on some object that's behind everything else or off the screen, and then you use that variable in all these other places on the layout. You really don't want to do that type of thing in FileMaker anymore, because we don't guarantee what order we're going to draw and render the objects on a layout. One technique I used to use to measure how long a layout took to draw was to have the object in the back set a variable to the current time, and then have the topmost object read that variable, subtract it from the current time, and say: well, that's how long it took to draw the layout. That no longer works with this multi-threading. If you have a portal on there and the portal is the only thing on the layout, it's going to come back and say, oh, I drew in 0.1 milliseconds, even though the portal is still spinning in the middle. So for any of these tricks where you're setting variables inside layout objects and then trying to use that variable inside some other layout object, you really can't make any assumption about the point at which that variable is going to be set by some Let statement. Now, if you're setting variables inside things like script steps, scripting is still entirely single-threaded. We're not going to start executing script steps in random orders; we don't want to give you the headaches of multi-threading that we have to deal with internally. But people have always asked: I have a four-core client, why is drawing only using one core? So we're actually changing that now. We are going to start using more processors to do the drawing and to speed it up, but that does break this one assumption that some people have been depending on. Proactive security warnings are another thing that came in. They're not terribly interesting in themselves — you just see these warnings now — but they did cause a bunch of changes in the Draco engine, because the errors that come from SSL occur way down inside open source libraries with multiple layers of FileMaker logic on top of them. We had to come up with a scheme to get these errors back from those lower levels, along with the text describing the type of error and the actual certificate, so we could display them to you. That took a chunk of work to get through the Draco engine, to transmit all this information up — especially when you're dealing with multiple threads, because a portal may be causing a new connection to be opened to another server which may have a bad certificate, so that certificate error may be occurring on a thread other than the main thread. And the operating systems nowadays don't like doing GUI work on anything other than the main thread, so we have to be able to send messages back to the main thread to display these errors and handle all those cases. Then there are some more server changes that have been made as well.
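As a generic sketch of that pattern — my own illustration in Python with stand-in names, not FileMaker's implementation — a worker thread that hits a certificate problem posts the details onto a queue, and the main thread drains that queue during its idle handling to show the warning:

```python
# Generic sketch of marshalling errors from a worker thread back to the main
# (GUI) thread. All names are stand-ins; only the pattern matches the talk.

import queue
import threading

class CertificateError(Exception):
    def __init__(self, message, certificate):
        super().__init__(message)
        self.certificate = certificate

ui_messages = queue.Queue()     # drained by the main thread between OS events

def verify_certificate(host):
    # Stand-in for the real check done deep inside the SSL libraries.
    raise CertificateError(f"untrusted certificate for {host}", certificate=b"...")

def open_connection_to_host(host):
    try:
        verify_certificate(host)
    except CertificateError as err:
        # Can't show a dialog here: this might be a portal's worker thread,
        # and GUI work is only allowed on the main thread.
        ui_messages.put((host, str(err), err.certificate))

def main_thread_idle():
    # Runs on the main thread when no other OS events are pending.
    while not ui_messages.empty():
        host, text, certificate = ui_messages.get_nowait()
        print(f"security warning for {host}: {text}")   # stand-in for the dialog

worker = threading.Thread(target=open_connection_to_host, args=("example.com",))
worker.start()
worker.join()
main_thread_idle()
```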
And so this is the ad for John's session tomorrow morning. He's going to go through a lot more practical applications: things you can do to actually improve the speed of your solutions, and the new tools in Server that you can look at to help you manage it. I recommend going to that session. I will put the updated slides up on the website so you can see all those big long lists and take a good look at that big graph of all the objects that are in the server and in the clients. So we have some time left for questions. Please go to the mic so you can get recorded if you have any questions to ask. I have a question about the in-line progress bar. Yes. We've run into a situation where the portal loads extremely quickly, so you don't see the bar, but then the interface does freeze, and it seems to be when there's an unstored calc in the portal. So is the in-line progress bar not for unstored calcs? What's being done on the other thread is getting the list of records, not the contents of the records you're displaying. Once it's determined that we need to display a given record from that table, we're back on the main thread, and when the main thread comes along and sees there's an unstored calc, it evaluates that calc at that moment. Now, we are thinking about starting to perform unstored calcs on other threads too, and that's really going to make it wacky what order unstored calcs occur in — you'll really have no idea what order calcs are going to be performed in once we start doing that type of work. We are moving that way, more work on more threads, but we're doing it one step at a time, one type of thing at a time. That's not going to be the next item on the list, but it is in the plan to make more things multi-threaded. Great. The one in the back next. With the use of the new temp file, will that still be used if somebody's in a Citrix-like environment, where people are creating sessions in a Citrix environment, logging off, and then a new user is coming in? I guess the temp file wouldn't be resident in that case. I believe most Citrix systems end up deleting most of those caches as you switch between users, so they won't be... Yeah, so the opening wouldn't be any faster in a Citrix environment. No, it won't be. But hopefully, if your Citrix machine is running very close to your server, you're getting good performance in that case anyway, so it really shouldn't hurt you too much. Up here in the front. So I have two questions, and one of them is quick. I noticed that with FileMaker 15, FileMaker launching — not just a file opening, but the app launching — is significantly faster. Do you know of any improvements that were made specifically for that? Yeah, this goes along with what is kind of the theme of doing multi-threading. We noticed that there were some things that take a long time. One of them is going through and finding out what all the fonts on the system are. You can see this in Word sometimes: you see all the font names flowing by on the splash screen, and we were basically doing the same type of operation. What we do now is spawn a thread at launch time that goes through and does the font processing. Some people would notice that if you have a large number of fonts installed on your machine, you would get a pretty slow launch time. Now we process all the fonts and build up the structures that are needed to prepare for the mapping, so that when you open a file we can map those font IDs from one number to the other — and we build that up on the thread because there's a lot of other work we can do at the same time. But the minute the first draw operation comes through and wants to do the first font mapping, it's going to wait until that thread finishes doing all the font processing. Typically, though, there's quite a bit of stuff in that whole opening process that needs to be done before you even get to the point where you need to figure out which font to draw with, so most of the time that thread is done before then.
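Here is a minimal Python sketch of that launch-time pattern — illustrative only, with made-up font names and structures: the font table is built on a background thread started at launch, and the first draw that needs a font mapping waits on it only if it hasn't finished yet.

```python
# Illustrative sketch of launch-time font processing on a background thread.
# The first draw that needs a font mapping waits for the thread to finish;
# usually it's already done by then. Names are hypothetical.

import threading

font_table_ready = threading.Event()
font_id_to_name = {}

def process_installed_fonts():
    # Stand-in for enumerating every installed font and building the
    # structures needed to map stored font IDs to usable fonts.
    for font_id, name in enumerate(["Helvetica", "Monaco", "Chicago"]):
        font_id_to_name[font_id] = name
    font_table_ready.set()

def launch():
    threading.Thread(target=process_installed_fonts, daemon=True).start()
    # ...lots of other launch and file-opening work happens here...

def resolve_font(font_id):
    font_table_ready.wait()          # first draw blocks only if processing isn't done
    return font_id_to_name.get(font_id, "Helvetica")   # fall back for ancient fonts

launch()
print(resolve_font(1))   # "Monaco"
```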
So then the other question I have is: would it be a big technical challenge on macOS to have single sign-on with Kerberos? I guess, as you heard in the opening session, we're going to be going down the OAuth path. The actual mechanism being used behind whatever authentication method you choose could be using Kerberos on the back end, but we're basically going to work behind that platform and let you use whatever biometric or other mechanism OAuth wants to use. That's the direction we're going. Thank you. Next, in the back there. Clay, when you mentioned that you write out the script and the layout changes and move them up to the server — would there be any chance of also writing out who made that change? So that if someone made a modification to a script, we could actually see in the script who last modified it, or the layout, that kind of thing. Yeah, that's always possible. And a secondary follow-up question: at WWDC, Apple announced changes to the OS — I forget what it is exactly, but it affects the OS at a lower level, kind of how those blocks are written. I noticed you mentioned at some point this stuff needs to be updated; I wonder if you have any thoughts about that? They did announce a new file system, but it's not going to come for a while; it's just starting to... Yeah, the big problem we see right now with that new file system is that it's currently case sensitive, and most Windows and Mac users haven't dealt with case-sensitive file systems before. In the Unix space, people have been dealing with case sensitivity a lot. And I'm not quite sure, if we have to start working on a case-sensitive system on the client side, whether we're going to have to try to hide the differences when you're dealing with those two systems, or which version of a name we're going to show. So there is some work we have to start thinking about for the issue you're talking about with the new file system. But Apple may change their mind and make it a case-insensitive system at the last moment too — it's hard to predict what Apple's going to do. Okay, thanks. Yeah, front here. I'm going to try for three quick ones. If you were building a new solution today that had about a hundred tables, would you put that in one file, or would you recommend not doing that? Try to keep the file sizes reasonable.
I mean, if there are a hundred tables that are all tiny tables, I'd throw them all into one file. But I start getting a little bit worried when one file gets over a gigabyte in size; then I may start splitting some things up. And there's that person — I forget who did it — who has a two-billion-record database they're playing around with now, with one table that's multiple gigabytes, and it's kind of interesting to see where we start breaking down at that point. So it's really more about what I'd call a reasonable size of the data, as opposed to just the number of tables. Next one: now that you've figured out the logic for scripts to run on the server all the time for WebDirect, do you foresee a time when Pro will do that as well? There's a lot of state information. It's where the user model is running that determines where you can run the script, and in WebDirect the user model is actually running on the server — the user model doesn't run in the browser. That's why the scripts run there. There's a lot of state information like where the window locations are and which window is foremost, and trying to keep that in sync between both the server and the client would probably generate a lot more traffic than you would save by doing the work over there. Did the window order change in the middle of the execution of the script? What happens when the script pauses here? Really, the script runs where the user model is. Gotcha. The new temp files — are they in a kind of obvious place? Yeah, if you go into wherever your typical caches folder is on your operating system, you'll see a FileMaker folder, and you'll see three folders in there: one for data, one for thumbnails, and so on. Yeah. Okay, up here in the front. From a virus-scanning standpoint, is there some documentation as to where they're all located, so we can add them to the virus scanner's exclusions so it doesn't scan them? For temp files and cache files, your operating systems tend to have standard locations; I think on Windows you can say echo %TEMP% or something like that. One thing you can do inside FileMaker is use the Get ( TemporaryPath ) function, and that will show you which folder FileMaker is using directly; somewhere near there will most likely be the cache folder. And if you look it up in your favorite browser, sometimes you'll find where the cache folder is that all programs are supposed to store their cache data in. And that's the same on the server also? Yeah, I believe so. Okay. I was hoping you could explain what happens when we use FileMaker to save a compacted copy of a file, and when we should, or how often. That little graph I showed you — that's what it's working on. It goes through each block and says: this block is almost full, but there's a little chunk over here; I can move that over there and fit it in. So it compresses the stuff down. I don't know if it's really as important as it used to be. Now, if you're doing the type of operation where, say, once a quarter you push in a whole bunch of data and make big changes, and then the data doesn't change much for the next three months until the end of the next quarter — maybe then I would do a save-a-compacted-copy at that point.
So it'll be a little bit faster during those three months when the data isn't changing. But if you have a system where you're changing data regularly, you actually want some free space in all these blocks, because one of the slowest operations in HBAM is when you have to split one of these blocks to fit another leaf into it. So if data is coming in and going out regularly, you want some free space in each block to put more data into, and that's especially true when you get to the indexes, where things are getting added and deleted regularly. So it really depends on your data model. I really only recommend it if you're in the kind of operation where the data stays stable for a long period of time; at that point, compacting makes sense. Thank you. Oh, and don't compact right before using the EAR encryption operation. The problem is that when we encrypt a file, we need 12 or 16 more bytes or so — a little bit of space in each block for the encryption mechanism. If you do the compact before you do the encryption, that means we have to go through and split every block during the encryption process. So you definitely don't want to compact before you encrypt; that is one of the very slowest operations you could ever do inside FileMaker. You mentioned an issue that you found in 14 that you fixed for 15 — I'm just curious how that presents in 14; I'm guessing from the context that it opens files slowly. Oh, that was on Go, the 14 version of Go. I think it had to do with having a custom menu on the second layout that you use — it was really pretty obscure. I don't know how many people use custom menus on Go — okay, I saw one hand. So it's something many people probably wouldn't ever run into, but because you do use custom menus all the time in the Pro client, it was something that would affect the Go product too. Thanks. Hi Clay, thanks for the presentation. A feature request: how can we know the size of each table in kilobytes? If I look, I can see the number of fields and the number of records, but I don't know the size. For example, right now I have a two-gig database, and I really don't know which table is the one that's costing me such a large file. Okay, well, there are a couple of questions you get to right away: is that in units of Unicode characters or in units of bytes? Bytes. Yeah, and if there's a container field that has a reference count of five, should we count that container file five times, or do you just want to know how much space that one table is eating up in the file no matter how it's stored? Yeah, just that table that has 300,000 records. So something more like a percentage of the file would probably be what you want. We could track that. The unfortunate thing about adding statistics like that is that if we add a feature like that into the file format and then you take the file back to an older version of FileMaker, it wouldn't keep that data up to date. That's the type of feature we would hold off on until the next file format change, so it would be guaranteed that no client could get that number wrong. But we can write that down and keep it on the list of statistics you may want. Yeah. Thank you.
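Going back to the compact-before-encryption point for a moment, here is a tiny numeric sketch — my own illustration, with an assumed block size; the per-block overhead figure is the 12 to 16 bytes mentioned above — of why a freshly compacted file forces nearly every block to split during EAR:

```python
# Illustrative arithmetic: why compacting right before encryption (EAR) is slow.
# Block size and overhead values are assumptions for the example.

BLOCK_SIZE = 4096          # assumed block size
ENCRYPTION_OVERHEAD = 16   # "12 or 16 more bytes" per block, per the talk

def blocks_that_must_split(block_fill_levels):
    """Count blocks that can't absorb the encryption overhead without splitting."""
    return sum(1 for used in block_fill_levels
               if used + ENCRYPTION_OVERHEAD > BLOCK_SIZE)

compacted = [4090] * 1000   # after a compact: nearly every block packed full
normal    = [3000] * 1000   # typical working file: free space left in blocks

print(blocks_that_must_split(compacted))   # 1000 -> every block splits during EAR
print(blocks_that_must_split(normal))      # 0    -> overhead fits in existing slack
```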
So you spoke a little bit about the cost and benefit of one file versus multiple files, reconciling the graph, and so forth — maybe an efficiency in splitting to a separate file from the standpoint of keeping file sizes stable. Can you offer an opinion on the benefit of adding another server for that second file, to get around some threading and concurrency issues when dealing with transactions in those tables? If I go to two servers with the two files split up, is there some benefit there, or do the cost and the benefit cancel each other out? It really depends on the type of operation — well, it really depends more on the type of queries you're doing. If you're just pushing data to two servers into two tables that are not interrelated in any way, that's one thing. But as I was mentioning, the client has to determine where the work can be done for certain types of queries. If you're doing a query that needs information from tables on both servers, that means the work actually has to be done on the client and can't be done on a server. In that case, the client may ask each server to do a partial join of its data and send the response back to the client, and the client has to do all the work of combining that to eventually display it to the user. Whereas if the tables were all on one server, the client just sends one request to the server saying: do all this work for me. It can all be done on the one server, the one response comes back, and the client doesn't need to do much of anything at all. So you have to look at the data model. If it's a very simple data model where you're just pushing data to multiple tables and you aren't doing interconnected queries between the two of them, then it may be faster to split it up that way. But it really depends on your data set and how you're using it. Perfect, thanks. And I guess this will be the last one, back here, since we're running out of time. Thank you. If we have a Go client writing to a server over a mobile network, we're writing to a temp file on the client, then a temp file on the server, and then it's getting written to the database — is that what I understood? Yes. On Go, while you're editing, you're dealing with the temp file. When you commit the changes, the data from that temp file gets moved to the temp file on the host — all the different portions of the data that you've edited. Once that's all up there, a final commit operation is done to actually do the final commit on the server itself. So if we have any interruption in the signal, is it still possible to end up with a corrupt record or a ghost record? No. As the data is getting uploaded to the temp file, it's stored in a location that will basically be ignored if the commit operation call that comes at the end never happens. And the remote call that does the commit operation is one that the client doesn't need any special response from: it sends the request over, and the server will do the work once it gets the request. If the client dies while the server is processing the request, the request is still going to get completely done; it's going to be finished, so you won't lose the data. The client won't get the response back saying the host has finished, but the client is gone anyway and won't ever care. Right, so the data for the record should be there. Yes, the data is all on the server.
Now, if the server crashes in the middle of that process, that's a different problem — I'm not going to say no there. But we do guarantee that the network won't cause the problem. Okay, so we used to end up with ghost records on occasion, and I was always told it was because of a network issue, but that shouldn't happen anymore. No, it won't be that issue. There could be other issues going on, but it's not that one. Okay, okay. Remember to fill out your evaluations and all that. So thanks.