Hello everyone, and thank you for joining us today. My name is Matthew O'Dell, and I'm the technical marketing manager at FileMaker. I'm really excited to be your host for today's "Eight Ways to Make FileMaker Databases Run Even Faster" web seminar. We're going to spend some time covering best practices for designing FileMaker solutions that perform at an optimal level, based on the material in our new design performance guide. I'm personally really excited about this guide: it's the first in a series of design guides that we'll be putting out over the next few months.

Luckily, we're joined today by Mark Richmond, the author of the guide. He's also the president of Platinum-level FBA member Skeleton Key, and he'll be doing all of the demonstrations today.

Before we get started, I want to run over some brief housekeeping notes. For the best experience, we strongly recommend that you participate in this web seminar with at least a broadband connection. If you have any problems or require online assistance at any time, please contact Citrix technical support at 888-259-8414; again, that number is 888-259-8414.

We'll also be answering some of your questions, so let's go over how to ask one. In the GoToWebinar control panel, click on the Questions section; right below that you can enter your question, and then just hit Send. At the very end of the session we will cover as many questions as time allows. And without further ado, I'd like to hand over the reins to Mark Richmond. Mark, the floor is yours.

Thanks, Matt. All right, everybody, I'm really excited to be here. We're going to dive right into the content and make sure we leave plenty of time at the end for your questions.

One brief slide about me. I'm the president and owner of Skeleton Key, as Matt said, an FBA Platinum-level company based out of St. Louis, Missouri. I'm a certified developer; I've been developing in FileMaker for a little over 20 years, I'm certified in versions 8 through 13, and I was an authorized trainer back when we did that. These days I tend to be a solution architect, engineer, and estimator more than a daily developer, but as a result I spend a lot of my time focusing on the capabilities of the platform, on how to plan so solutions perform well, and on feedback from my team about what actually works in the field.

In a past life, before (or actually concurrent with) my FileMaker development, I spent a lot of time doing Cisco internetworking: routers, firewalls, switches, through the various maturations of that material over time. So I have a bit of an understanding of the underlying network concepts at play when moving information across networks, which has been helpful in my understanding of this material. But it's by no means a requirement for you to be able to take advantage of the simple, practical tips I'm going to give you today.

Enough about me; let's talk about the FileMaker performance guide. We're going to focus on selected bits of information from the guide today. There's certainly not enough time to cover everything.
This webinar is not a substitute for reading the guide. So while I'm going to present some ideas, I'd recommend you log in to (or join) the FileMaker Technical Network to access the guide as well as other resources like it, and I'll be referring to a lot of those resources along the way. Even the guide itself doesn't go into as deep a dive as it might on certain topics, in an effort to cover a broad range, but there's lots of great content up there with that free resource.

So here's the big idea: performance is not something that happens by accident. Performance is an intentional design decision, and it's often a decision you make early on in the process, or one you have to keep revisiting as time goes on. No one likes to work on slow solutions. They cost money, they waste time, they're hard to justify when it comes to budgeting, and they're no fun to develop in either. Not even a developer likes to troubleshoot a slow solution, or try to solve a problem in a solution that's performing poorly.

So my goal today, if anything, is to get you to think before you act and to plan for performance from the beginning, so that you don't just keep doing things the way you've always done them, but start to think about the impact of what you're doing on your users and on your solutions as they scale and get used over time.

My hope is that you'll find this information useful in a number of scenarios. I'm a big fan of quotes like "plan the work and work the plan," but life's kind of funny: you rarely get to plan anything from scratch. Things change every second of every day, so you often have to improvise. You keep steering toward a goal, but you're always steering from a different position, and the goal keeps moving.
It's a bit elusive. Some of the scenarios you'll encounter are on the slide: you're planning a new solution; or, for many of us, you've converted a FileMaker solution; or you've finally decided to move your solution out to a cloud server or a WAN-based server because you have more mobile or web users, and you're starting to see performance issues in a solution that otherwise works just fine on a local network. Sometimes you've simply got to troubleshoot something: a report or a particular area of the system is starting to exhibit signs of slowdown or age, and you're trying to figure out where to start, as opposed to just archiving records.

Some of these scenarios are common, but probably the most important one, the one I think is going to push many of us other than the mobile piece, is planning for increased scale: more data, and more users using your solutions. Usually those are good things.
It means the solutions have got legs. They're actually being used; more people want to get in there and get information out of those systems, or contribute to them. But that puts stress on the system and causes performance issues. Whether it's FileMaker Pro, FileMaker Go, or web users; whether you've got a large group or a small group of users; and whether you've got millions of records or just a few thousand, everything we're going to talk about today can help. These are just performance fundamentals, but I think you'll find them useful anywhere you go.

So let's talk briefly about the different ways you can measure performance. There are objective measurements, such as bytes moving across the network. Usually we use a tool like Wireshark or some other packet sniffer you can attach to your network, or launch on your computer, and watch traffic as it moves to and from a FileMaker Server; from there you can see how much data actually moved in either direction based on the actions you were taking. You can also use a stopwatch, or put timestamps into your database, to figure out how long things take. These are really useful, and I'm going to refer to some of those objective measures to back up some of my points today.

There are also subjective measurements, and you can't minimize these: whether your users think the solution is working well, whether it makes it easy for them to get their job done, whether it feels responsive and snappy or sluggish, whether it's easy to use. All of these are considerations, and I'm going to touch on some of them today. I think you'll also see, in those future guides that Matt mentioned at the top, more and more about how to make your solutions easier to use and how to design with that in mind. All of these things, subjective and objective measurements, can all
be impacted by external forces you can't control: network congestion, high server loads, backups that are running. All kinds of things can be happening that make any of these measurements slower, or feel faster. So part of your goal is to control the variables you can, to create the optimal experience under any situation.

With some of the changes we're going to talk about today, you're going to see pretty dramatic differences, and I'll call out when a change can make a big impact on your users. There won't be any need to measure those; people will immediately see how much faster things are. It's hard to show that in a webinar, but I'll point those changes out as we encounter them. Other changes are really tiny. They're like adding pennies to a jar: it takes a little while to see the results, but you know they're adding up, cumulatively making a difference over time. Those often really benefit from measurement, because even an incremental reduction of 10 or 20 percent in the amount of data that's moving, multiplied by 50 users over several days and weeks, can make a real difference in whether the solution feels snappy, and in whether or not the solution sticks.

So we're going to talk about simple, practical ways you can make a difference that matters, whether the changes are dramatic or cumulative. They fall within the three topics that make up the guide, along with the eight subtopics that break them down: things you can do with layouts, things you can do with schema, and things you can do with solution logic. These are of course not the only areas where you can have an impact on performance, but they are the areas covered in the guide. Because of time, I can't dive into each of these as deeply as I'd like. In fact, because of the format of
the guide, I couldn't even go into some of these as deeply as I wanted there. So our goal, in the guide and in today's discussion, is to give you a taste of many of these topics and a deeper dive into some of them. Again, this is not a substitute for the guide, and the guide itself refers to several resources that expand on any one of these topics in depth.

Today I'm going to focus on the bold items you see here, although there will be at least one slide on each of the others as well; I'll display those quickly and keep moving so we can focus on the bold items. That's not because these are necessarily the most important ones, or the ones with the biggest impact. In fact, you could argue that calculations can have a dramatic impact on many different areas, and I'll talk about that later. It's more that these are areas I think are really important, sometimes harder for people to wrap their heads around, and so different from the way you may be thinking now, or the way you've always done it, that I wanted to point them out specifically and give you specific reasons why they matter.

But first things first: let's talk about some basics. For the last five years or so, the FileMaker engineers have been sharing with us, through various white papers and DevCon presentations (not to mention a number of speakers at the developer conference, including myself), the rules of the road that govern the basics of how FileMaker moves information. You have to understand these if you're going to understand the reasoning behind the other examples I'm going to give you, and if you're going to be able to predict for yourself how the tool and the platform will behave when you're troubleshooting a problem.

So, really quickly, first and foremost: FileMaker moves record data one record at a time.
That means it moves all the fields in the record. There are a couple of exceptions: unstored calculations don't move unless you see them, and container fields don't move unless you see them or reference them in some way. But all the date, time, and text fields and such move whenever the record moves. FileMaker never moves just one or two fields; whether there are 20 fields or 300 fields in your table, the whole thing moves as a block, one record at a time.

Prefetching is the idea FileMaker uses to make things feel a little faster. When you go to, say, a form view and look at a record, if there are a lot of records in the found set, FileMaker sends you a batch of records, the first 25, so that when you go to the next record it feels like the record was already there, and you're not waiting on a download with each click. You'll potentially see even more of them fetched in a list view or a table view, depending on how many rows you're displaying.

There are various things that trigger, or invoke, the data to move: you can see a field; you touch the record in some way with a script step; you sort the records; you summarize the records; or you reference them with a conditional formatting calculation.
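To make the record-at-a-time and prefetch rules concrete, here is a toy model in Python. This is not FileMaker's actual wire protocol: the field sizes, the flat 25-record prefetch, and the one-round-trip-per-record costing are all simplifying assumptions, chosen only to show why wide records and high latency compound.

```python
# Toy model of FileMaker's record-centric transfer (illustrative only,
# not the real wire protocol). A record travels as a whole block:
# every stored field moves, whether the layout shows it or not.

PREFETCH = 25  # form views prefetch roughly the first 25 records

def record_bytes(field_sizes):
    """A record's payload is the sum of ALL its stored fields."""
    return sum(field_sizes.values())

def fetch_cost(field_sizes, records_shown, latency_ms):
    """Bytes and wait time to populate a view, one record at a time."""
    batch = max(records_shown, PREFETCH)          # prefetch pads small views
    payload = batch * record_bytes(field_sizes)   # whole records, never a field subset
    wait_ms = batch * latency_ms                  # assume one round trip per record
    return payload, wait_ms

# 100 small fields plus one big stored notes field: the notes field
# rides along even on a layout that only shows a name.
fields = {f"field_{i}": 40 for i in range(100)}
fields["notes"] = 50_000

payload, wait = fetch_cost(fields, records_shown=1, latency_ms=60)
print(f"{payload:,}")  # 1,350,000 -- bytes moved just to look at one record
print(f"{wait:,}")     # 1,500 -- ms of pure latency on an assumed 60 ms WAN link
```

The point of the sketch: trimming unused stored fields shrinks every record that moves, and fewer round trips matter as much as fewer bytes once latency is high.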
Any of these things can potentially trigger, or invoke, those records to move, one record at a time. So it's really important to be aware of all the places in your solution where you could be touching something that causes data to move, even if you never see the data. Maybe you're just referencing related records; it doesn't matter whether they're on the screen. If you reference them, they move, and they're copied down to the client.

"Round trip" speaks to the idea that even though you're mostly thinking about downloading, you're also making changes and committing those changes, and the way FileMaker works is that it sends the entire record back. If I download a hundred fields for a table, make one change to one field, and hit Commit, then boom: that whole hundred-field payload goes all the way back to the server. It's not an incremental pull-down and incremental push-up; it's pull down a record at a time, push back a record at a time. We need to keep that in mind later when we start talking about, say, scripting, and the reasoning behind why I like to break things up or handle them in certain ways.

Latency is the term used in networking circles to describe the delay imposed on network traffic moving across wide area networks. On a local LAN at gigabit speeds, the latency as packets move around is tiny, a couple of milliseconds. On the Internet, it can easily jump to the 60- or 150-millisecond range, depending on where you're going and all the different hops and carriers the traffic passes through. That means that even if you're moving the same amount of data, or even if the data is really tiny, it can feel like it's taking forever, because everything moves slower. It's not just that the pipes are smaller; it's that the pipes move slower too. So we need to keep in mind that we don't just want to
reduce the amount of data we're sending simply because we want less data. We actually want fewer packets containing data going back and forth, because that means we won't be waiting for each of them to make the round trip.

And the last rule is: don't worry about all of this too much, in the sense that at some point you need data. That's the whole point of using your solution. It's good to know that once the client does get the data, FileMaker uses temporary caches to make that data feel more local. That helps you, once you have the data, avoid asking for it over and over: if the data hasn't changed, the user hits the local cache, which automatically feels faster. So you don't want to squeeze every last bit of data out of the solution to the point where it's unusable. You want to focus on sending the data the client actually needs, and remember that the other things you send, like layouts and scripts, go out as objects too, one layout at a time, one script at a time, just like the record-centric rules at the top of the list. Ideally we want those things to be as optimized as the records.

All right, let's dive into specific items from the performance guide. The first topic is themes and styles. A lot has been written about this: blog posts, webinars, white papers from FileMaker, and so on. It's a very important topic. In FileMaker 13, your ability to manipulate and manage themes has been refined and enhanced to give you more control than ever. It gets pretty technical when you dig into the details, so I'm going to keep this at the surface level and keep it simple for you.

Most of us are probably used to doing things the old way. You want to make an object bold, you click on it and make it bold. You want to make an object right-aligned, you click on it and make it right-aligned. And so we end up doing
lots of local styling on objects, where each object has its own little collection of information about how it's styled. In fact, this often happens by accident, and you don't even realize it: you've taken an older solution and converted it, and the underlying theme on all of your layouts is actually the Classic theme, with every object on those layouts locally styled. That's just how the conversion process works. Now, if I told you to go through all your existing layouts from a converted solution and convert all that local styling into themes, you might feel like that's a whole lot of busy work. Is it really worth doing? My point is that it's absolutely worth doing, so let me give you some examples.

I took the Invoices starter solution, which has about 30 to 35 active layouts in it for various types of users, and I made a clone of it. On the left we have the original starter solution, which uses the Enlightened theme. On the right I have the same solution, where I went into each layout, copied all the objects off the layout, changed the layout's theme to Classic, and pasted the objects back in. The layouts look almost the same; I didn't tweak them to the point of getting them pixel-perfect, but they're pretty close. The point was to ensure that, with the Classic theme, which is basically a non-styled theme, everything was locally styled. I wanted to strip out all the efficient, theme-based styling and produce the least efficient arrangement possible, the closest to a converted solution you could get.

So you can see that right off the bat,
I added about a megabyte of data to the file. I now have a larger file, which means larger backups, and you can guess that if I'm going to send data to the user, they're going to get more data for each layout than they would have otherwise.

Then I measured what it takes to actually access that file. I hosted that clone right next to the other clone, logged in, and started moving from layout to layout manually. No record data was moving at all, and I measured how much data came down to the client. On the left, the chart shows that when I went from themed layout to themed layout, about 1.7 MB of data came down to my client. When I went from locally styled layout to locally styled layout, I got 2.24 MB. So right off the bat, that's a little over half a megabyte of extra data coming to me, just from that one change.

At the same time, I wasn't measuring how much data went back, so I started looking at packet counts. I know I had data coming down, but every time I bring data down, I have to send a response back to the server to say, "Hey, I got it." Well, how many packets does it take to send 2.24 MB instead of 1.7 MB? Each of those packets takes time for the server to pack up and send to me, takes time for me to acknowledge, and so on, and all of that has latency on it if there's a slow network in the middle. You can see it was somewhere between 800 and 1,000 more packets coming down in the locally styled situation. So this is more boxcars carrying more data over a slow pipe, which means I have to wait even longer. It's not just waiting for the data.
It's waiting for the data to finally get here. And keep in mind, this is all for one user. If I have 50 users in the solution, each bringing down an extra megabyte, that's another 50 MB of traffic coming off your server. That's more work for the server, more work for the network, and more users waiting. And of course, the more users you add, the worse it gets for everybody, because everybody's waiting for the next person's data to get there. There are only so many active sessions you can service in real time; at some point there's going to be delay or buffering.

So that's roughly 25 to 30 percent overhead, and that's on a simple starter solution with relatively streamlined interfaces that were made to be as simple as possible, as opposed to one of your solutions, which could be filled with several hundred objects on a given layout, possibly with legacy graphics and other baggage. In a solution like that, I think you'd see a dramatic difference; it could be even worse.

Okay, that's themes and styles. There are lots of resources and recommendations in the guide to pursue that topic further.

Briefly, let's talk about layouts. The basic idea in this section of the guide, beyond themes and styles, is just to keep it simple. That's really it.
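As a quick aside before the layout tips, the styling overhead just described is worth putting into simple arithmetic. In this sketch the two megabyte figures are the measured ones from the demo; the 50-user multiplier is purely an illustrative assumption.

```python
# Back-of-the-envelope math for the themed vs. locally styled comparison.
# themed_mb and local_mb are the measured figures from the demo;
# the 50-user walk is an assumed scenario.

themed_mb = 1.7   # measured: walking every layout on the themed clone
local_mb = 2.24   # measured: the same walk on the locally styled clone

overhead = (local_mb - themed_mb) / themed_mb
print(round(overhead * 100))             # 32 -- ~30% more data for identical-looking layouts

extra_per_user_mb = local_mb - themed_mb
print(round(extra_per_user_mb * 50, 1))  # 27.0 -- avoidable MB if 50 users each do that walk
```

Small per-user differences multiply quickly at scale, which is why even "pennies in a jar" changes like theme cleanup pay off.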
It's all about trying to do more with less. So here are some tips. It's not always about having fewer objects on the layout, though sometimes it is; often it's about using the objects that are there conservatively, or thinking about how many layouts you have versus trying to get one layout to do everything. I'll move on to the next section in the interest of time.

Let's talk about found sets. My personal favorite way to make solutions perform better is to manage the data you send to the users in the first place. My feeling is that there's only so much I can do with layouts and the other structural pieces of my file, but when it comes to data, if I can send users just the data they need, then I'm being as efficient as possible as a developer. The challenge is to think about how FileMaker moves data, which I covered on the rules-of-the-road slide, and then to take deliberate steps throughout your solution to manage and curate that experience for your users. Part of this is because some of FileMaker's default behavior is there to make things easy for your users, but it doesn't necessarily make things as efficient as you'd like. Let me illustrate.

This is the iPhone interface for the invoices list in the Invoices starter solution. We've got about 900 records in here as a sample data set. The default behavior for any layout, in any context, in a FileMaker solution, unless you take some kind of administrative or developer control of it, is to show all records. You host your file, and even if you left the file on a layout with 50 records found, if there are a thousand records behind it, when you go to that same layout on the hosted solution there are going to be a thousand records there.
The found set just immediately reverts to all records. That's a good thing in the sense that it prevents people from calling support and help desks saying, "Where'd all my data go? I can't find it," because they're not looking up at the status area to see how many records are found. The downside is that it doesn't scale very well.

For example, in the Invoices starter solution, when you go to the invoices list, it shows all records sorted by date. That's fine at small scales, but when you get up to several tens of thousands of records, or even hundreds of thousands, that's going to be a terrible first-login experience for users who navigate to that list.

So what if we had a business rule that said: maybe the users don't need all records, which is the default behavior. Maybe they just need to see the unpaid invoices, because all the other invoices are a non-issue. Maybe there's a business rule that lets us show them the set of records that matters, instead of giving them everything; we manage that process and give them the unpaid records. You can see here, using that same set of about 900 records, there were about 54 unpaid records in the set, so with that change I'm sending a significantly smaller portion of the data to the users, which should reduce how much information has to be moved.

And then the last option is the more dramatic change. This is where you change the user experience dramatically enough that you'll need to engage the users in some discussion about it. In this case I asked: what if we went to the extreme and showed them no records at all? You risk the phone call to tech support that says, "Where'd all my data go?"
But more importantly, you train your users that the expectation on this layout is that when you go to the invoices list, it will be empty, and the next thing they do is ask for the records they want. That means the user has to do something deliberate, but it also means they never get anything they don't need. And of course that scales really well; like the middle option, it will scale well because it's always the same starting point, with the user deciding what to look at next.

Looking at this in terms of performance, your mileage will vary depending on the actual data, but I did exactly that with those 900 or so records. In my chart I've excluded the overhead of the layouts and the file itself from the amount of data sent; I focused purely on going to that layout with that found set and how much data moved. These aren't very big records in the invoices table, but even so, you can see I've got about a fourth as much data moving in the unpaid scenario as in the all scenario. And there's nothing at all in the none scenario: the layout gets sent, but no record data moves, simply because there are no records in the found set.

You can imagine that over time the first column just keeps getting taller, because the record count behind "all" keeps growing, so that scenario gets worse and worse. "Unpaid" depends on what percentage of your records are typically unpaid at any given time; it may stay about the same size, or at least relatively small compared to "all," depending on the business rules that drive your data. But the none will always be none.
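The three found-set strategies can be compared with a toy model. The record counts (903 invoices, about 54 unpaid) are the demo's numbers; the per-record size is an assumed average, and only the ratios matter, not FileMaker's exact accounting.

```python
# Toy comparison of the three found-set strategies from the demo.
# RECORD_BYTES is an assumed average invoice record payload.

RECORD_BYTES = 1_000

def payload(found_set_size):
    # Record-centric rule: every record in a found set you touch moves whole.
    return found_set_size * RECORD_BYTES

for name, count in [("all (default)", 903), ("unpaid only", 54), ("none", 0)]:
    print(f"{name}: {payload(count):,} bytes")

# "all" grows with the table, "unpaid" tracks the business rules,
# and "none" stays at zero no matter how big the table gets.
```

The design takeaway is the last comment: the empty-start strategy is the only one whose cost cannot degrade as the table grows.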
It'll always be tiny, and you can count on that.

We also have to keep in mind that it may not just be the records you're looking at that are moving. If you notice the blue text here, the 3450 or the 4224, those are actually a sum of the related line items on the invoice. So I'm not only moving the invoice records you see there; I'm also moving the invoice line item records, which are needed for that unstored calculation to evaluate and display. In fact, if I were sorting on that value, say to put my highest unpaid items at the top, I would invoke all of the related line items from all 903 invoices. That could be several thousand line items coming over, and all of that data would come down to my client before I'd get the result. The same thing might be going on in the middle, unpaid scenario, but at least I'm starting with a smaller set; and in none, I'm as optimal as I can be. So keep in mind that if you understand the rule that touching or seeing data invokes it, you can start to see how these found-set issues compound, depending on what you're showing and whether you're invoking related records along the way. You may be bringing down more than just the records in the found set you're looking at; you may be bringing some children or grandchildren along for the ride as well.

A few more tips to help you with found sets: use blank layouts where it doesn't matter what the found set is, because there's nothing on the screen to invoke the data; think about the workflow and whether your users actually need the data; and when there are large data sets, make sure you don't manipulate them in a way that forces the data to move. We're going to talk more about scripting and how it can orchestrate some of this in the next few slides.

My next primary topic is tables. There are a lot of great books
and websites out there; in fact, I'm actively reading one right now, a primer on basic relationships and all the different ways relational data structures can be built. But I'm not going to talk about relationships yet. What I'm talking about right now is just the basic tables and fields that make up your schema. These are the building blocks of what you'll eventually use in the relationships graph.

What I want to talk about here is one way you can treat tables a little differently and get huge benefits. Even though you see a little relationship graph here, I'm really talking about taking a single table and splitting it into two tables or more. Logically it's still just one table, but you're treating it as several, and I'm going to explain why, with three examples of where this can be useful. It's a little counterintuitive, but it can have a pretty dramatic impact on the performance and responsiveness of your solutions.

Let's talk about the first example. We all use list views and form views in very common ways. List views tend to show a subset of the fields in your database; we use them to quickly identify the things we want to get to, to find the record we're looking for, and then almost always the next thing you do is click on the row you want, to bring up the detail view, the form view, on the right. So list views serve the purposes of subsets: quick find, grouping, organizing, sorting. That's what they're there for. The form view is where we look at all the remaining fields. That's where we look at the detail.
That's where we do our revising and our editing; it's where we go to isolate a record and focus on just that one record. So if you think about it, normally all the fields for, say, an invoice, or all the fields for a customer record, would be in one table, and we know that because FileMaker is record-centric, it will send me all those fields at once. But if I split those fields, putting the fields I use in the list view in one table, and the rest of the fields in another table for the form, I could of course still show fields from the list table on the form if I wanted to, but it would give me a list view that performs significantly better, because it has fewer fields. Now, if I find a lot of records and sort them, it's still maybe not ideal to have all of them found and sorted, but at least they're a fraction of the size they would have been had I left everything in one table. So what I'm really doing is reducing the payload for the list by segregating the remaining fields into the form.

Let's talk about another example, same kind of concept. You've got some small fields: office email, personal email, FaceTime address, record status, customer ID, whatever. They're small. And then you've got a big field. It's a notes field. It's a contract-language field. It's observations.
It's something bulky and weighty. We know we can store several gigabytes of data in a text field, so we already know that's very useful, but it also means it carries a heavy payload and a heavy price. The idea here is to find a way to control that too. When I pre-fetch these records, when I download these records, they all come together. Even if I'm on a layout where I only have those first three fields, I still get that big weighty field in the background, even though I didn't see it, because they're all part of the same record. But if I was to put my heavy fields in one or more separate tables, each joined in a one-to-one relationship back to the light fields, I get an effect very similar to the previous example. When I'm looking at the form view with the light fields, I'm seeing the light fields. When I click on the slide panel, let's say, or the tab control, and show the heavy field, or even if I just put the heavy field right there on the layout, I get just that one record's worth of heavy-field data. I don't get 25 of them in a pre-fetch. I don't get it automatically. I only get it when I see it, because it's a related field. Because it's been split, not only are the light fields a lighter payload, but I can selectively load the heavy fields. In fact, we've got clients with several heavy fields, reviewer comments and the like. They didn't want a separate related table; they wanted these to be part of the main record. But we broke them out into a separate table joined with a one-to-one specifically to control when they load. The third and final example is this concept of idle fields versus busy fields. So let's talk about a scenario where you have a product table. You've got the product description.
You've got maybe the ingredients list, the abstract for the marketing team, the SKU and the barcode, all this attribute data that hardly ever changes. And then you've got a small set of fields, like cost, price, units on hand, or quantity sold, which are highly dynamic. Maybe this is a point-of-sale system or an inventory-control system, where any number of people on your network could be making orders, ordering new supplies, or adjusting prices to the market, and these things are happening all day long. Because, again, FileMaker is record-centric, if I put all those busy fields in the same table as all the other, more idle fields, everybody who's using this table is going to be contending for these records. We're going to run into more record locks, where someone is trying to change the quantity on hand at the same time as someone else is trying to change the quantity on hand. And suddenly you're going to have loops in your scripts to control who can do what and when. Plus, if there are a lot of other fields describing the product, all that data has to come down just so I can change the quantity, and then all the data has to go back. Then it has to come down again, I change the quantity, and it goes back again. So you get this incredible churn as data moves up and down because of these high-paced, fast-changing fields. Just like the heavy-field example, if I can isolate the busy fields into separate tables and link them with a one-to-one relationship back to the idle fields, I can let the marketing people, and the other people who hardly ever need to make major changes, look at the idle fields whenever they want to. I can use that for pickers or other kinds of feeds wherever I need to, and I can have the busy fields, the quantity sold or the quantity on hand or the reorder price or any of the other numbers that change dynamically, be separated and be as busy as they want.
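As a rough sketch, this idle/busy split amounts to one table becoming two, joined one-to-one on the product key. All table and field names below are made up for illustration, not taken from the demo file:

```
Products         ProductID | Description | Ingredients | Abstract | SKU | Barcode   (idle)
ProductActivity  ProductID | QuantityOnHand | QuantitySold | Cost | Price           (busy)

On the relationships graph:
  Products::ProductID = ProductActivity::ProductID   (one record to one record)
```

Point-of-sale scripts then touch only ProductActivity records, so their edits lock a tiny record for a very short time, while marketing edits Products without ever contending with them.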
And because those are separated, I'm going to reduce the incidence of record lock, because I'm not going to have as many people touching the same data; I've got the people somewhat segregated. In addition, even if two people were trying to work with the same busy field on a particular record, the amount of time it takes them to pull it down, finish their transaction, and commit the changes will be smaller, because there's less payload. So the window of opportunity for a record lock goes down. I mean, record locking is a great thing; every good database has it. But I want to make that window for record lock as tiny as possible, so I don't have to worry about managing that process or queuing up requests for changing things. I can count on those windows being really tight and controlled. So segregating busy fields is a great way not just to reduce the overhead of moving all the idle fields, but, more importantly, to free up those busy fields more quickly, because there's less overhead they have to carry with them. There are some things you have to keep in mind, though. There are a couple of manageable trade-offs that come up when you start to segregate one table into several tables. First of all, you have more tables.
So where you're trying to keep things simple, you're suddenly going to have a table that's like a string of pearls. It could potentially have several different layers, where they were originally all part of one large table, and now you've broken them up into sets to meter and control how you send that data to the users. Navigating between these now-related records can be a little bit different, too. Think about that list view versus form view approach in the first example: when I go from the list view to the form view, going to the next record might be a little funny, because I went to a related record. I don't want to have to figure out how to go to the next record and get the record I expect, because I've left context A to go to context B. So there can be some issues there. They may be non-issues depending on your workflow, but there could be issues for you to manage and think about. And then finally there are the obvious ones. If I'm used to importing, exporting, or reporting on data that used to be in one table, and suddenly it's in three tables, I'm going to have to change my importing and exporting scripts. I may need some kind of temporary table to help me import the data. Exporting is pretty easy, but importing might require bringing data in and then segregating it and putting it where it belongs. And of course reports and charts may have to accumulate data from several tables to do the reporting. These, I think, are manageable and worthwhile trade-offs. Even though splitting a 20- or 30- or 100-field table into two tables seems like a little extra complexity, the speed benefits it can bring are not trivial, and these trade-offs, I think, are well worth it. Relationships are a pretty important topic, and people spend a lot of time and bandwidth debating the pros and cons of different relationship graph
methodologies in the FileMaker world. There have been debates and discussions and white papers and so on; I've even advocated for particular approaches in the past. I think the main point to get across here is that you can take the same set of tables behind a database, the same set of fields, and organize them on your relationship graph in any number of ways, and still get to the same place when all is said and done. A lot of times people build their relationship graphs around particular preferences or styles. Sometimes they build them around the way their layouts are going to be expressed. Everyone's got their own particular spin on this. The key thing is to make sure you're not assuming along the way that all of these different approaches are effectively equivalent and that they'll all perform well at any scale, because some won't. In fact, sometimes the things we did just to make it work at small scale, or maybe locally on the network with a local server, just don't play so nicely when you get them out under load with large record sets, or out in the wild on a WAN or a cloud-based server. So I want you to consider some of these tips. This is another topic area we dive into much more deeply in the guide, and it's also one that's certainly worth discussing on TechNet and out in the world, because it's deep. But the key thing here, too, is to keep it as simple as you can. Don't make hyper-complex, big connected graphs, and really learn the difference between portal filters, predicate filters, and making dedicated fields and joins. All these things can get you to a filtered portal, but they won't necessarily all perform the same under load, or perform the same, period, in the first place. There are different reasons to use each of them in certain cases. And probably the most important one here, and I know I'm slowing down on this slide because I want to make this point: the ExecuteSQL
function is going to seem a little bit foreign to those of you who are die-hard FileMaker people who have never touched SQL in your past. I'll admit I myself find it a little perplexing, because I'm just not as comfortable with SQL as I am with FileMaker. But its ability to reduce the complexity of your graph while still letting you get at information cannot be overstated. It is an incredibly powerful tool. Rather than build structures in your schema just to support a report or a query, you can often use ExecuteSQL to skip past that and just get the information you need, especially if it has nothing to do with the primary workflow but only with reporting or some kind of ad hoc need. ExecuteSQL is sort of the great equalizer. The next topic is one I could probably spend all day talking about. The FileMaker calculation engine is ubiquitous: you find it everywhere, in security and layouts and conditional formatting and script steps and custom menus.
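To make that ExecuteSQL point concrete, here is a minimal sketch of a call that pulls per-state totals without adding anything to the relationships graph. The Sales table occurrence and its State and Amount fields are assumptions for this sketch, not names from the demo file:

```
// Returns one row per state as "State<tab>Total", rows separated by returns
ExecuteSQL (
    "SELECT State, SUM ( Amount )
     FROM Sales
     GROUP BY State
     ORDER BY State" ;
    Char ( 9 ) ;   /* field separator: tab */
    ¶              /* row separator: return */
)
```

The result can land in a variable or a merge field for a report, so no extra table occurrences or utility relationships are needed just to answer the question.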
It's incredibly powerful and with the ability to support custom functions It's even more powerful than ever So there are seminars and discussions out there and there will continue to be discussions about the different approaches You can take the calculations to make individual calculations in and of themselves perform better We talk about some key learning points in the guide So I'm just going to kind of give you a quick overview on one slide Just some general quick tips learn where they can be used Understand those rules of the road and therefore what's going to trigger them to fire and actually evaluate and then study in the guide Some of the different approaches that are more efficient like using the let statement or using case versus if All of these approaches and more are really effective ways to take your existing calculations without changing anything else in your solution And just make them evaluate faster And I believe Matt did a session at a developer conference recently showed dramatic improvements by just some simple adjustments within an individual calculation Last two topics are storing data and scripts and these are actually kind of interrelated So we're going to talk first about storing data and then I'll circle around to talk about the scripts that I used to do this Storing in a minute So this is probably one of the biggest opportunities for dramatic performance in any of your file maker solutions I know I said that at tables and of course those are kind of core underlying scheme of things But I think what this really gets to is the ability to scale solutions well So to some of you the idea of storing or caching your data and I don't mean like memory cache or you know The file maker server performance cache I'm talking about like creating copies or snapshots of data in your file maker database That are specifically there to take large sets of data and make snapshots of them and some kind of pre-grouped or summarized or aggregated scale Now a lot of 
people have been using this approach on platforms of all shapes and kinds. I remember using a monitoring system that used to collect huge logs of data, but we just needed to see general activity summarized by user, by IP address, or by website. It was too much to wade through, but we would run the summarizing routines, and they would generate quick distillations and counts that were very fast to report on. That's the idea here: look at what your raw data looks like and what kinds of questions you're going to ask of it, and then ask, how do I create a little table structure that specifically stores that data, so when I do my reporting I can report on it in its summarized form? There's less data to move, it's quick and easy, and it scales well over time. So let's look at an example. This is the same exact chart, but on the left I used 50,000 raw data records, each of which was a record of a transaction for a particular client in a particular state or the District of Columbia. Then I said, create this chart, show me the aggregate sales by state, and rank them by state. That took some time. As you've seen, a fair amount of data had to come down to my client, and then of course it had to sort through and rank those records in order to produce the chart. If, though, I had already created a separate table to store the pre-summarized versions of that, basically 51 records, one for each state and one for the District of Columbia, with the totals already added up and ready to go, it took a heartbeat, and boom, it was there. I didn't have to download 50,000 records; I had to download 51 records. And those records didn't have other values in them, like dates and customer names and shipping addresses or anything. All they had was the state or the District of Columbia and the total
That was it. So they were incredibly lightweight records Incredibly smaller number of them and of course it made it much quicker to move that data So let's talk about what are the elements that you need to make something like a storing data scenario or caching data scenario work Well, first of all, you need a place to store the cash data in this case. I created a summarized by state table It had two fields state and total and of course you can see that Significantly fewer records and it had fewer fields by far than the data table and that wasn't even the very big data table It's just 14 fields Then I needed a script a script It's actually going to summarize that data and so it you know it goes to that Table it finds all the records it does the sort it exports them in a group fashion It brings them into the summarized by state table and it creates that cash for me So when I go to generate my chart I pointed at that data source as opposed to the other data source And then the last is you need some kind of way to maintain it You know, how do you make sure that that summarization is accurate, you know, because I don't want it to be the wrong date? I mean that's half the reason most of us stuck with the dynamic approach is at least it was always accurate Even if it was slow and we want to make sure that this stays accurate So it could be that you have a server side script that's going to run every night at midnight because your business rules say You know by 11 p.m.. 
every day, all transactions are done and it's safe to summarize the data. Or it could be that you have some kind of script trigger, so that every time somebody makes a change, it modifies that cache table directly, by adding or subtracting some value. I'm going to show you, when we get into scripting, how you can leverage some of the new features in FileMaker Server to make some of this happen in the background, in a highly efficient way. Your mileage is going to vary depending on your data set and how many records you have, but in my example I had 50,000 records with about 14 fields, basically dates and times and text. To bring that data down and create that chart took almost 10 megabytes of data, and that doesn't count how many packets had to come down, or how many responses my client had to send back to the server to acknowledge that it received them. So that whole thing took a noticeable amount of time. We've all seen it: the dialog box says sorting, it's reading, and eventually the chart displays. And of course if anybody makes a change to the data, I see a pause as it has to refresh and re-sort, especially if one of those numbers gets moved. Whereas on the other side, it hardly registers on the graph. It moves such a tiny amount of data, 51 records, each with two columns, that it was just nothing; it happened almost instantly. And all the work of summarizing is ideally being done either late at night, or incrementally on demand, or with another kind of scripting approach I'll cover in a moment. In that case the user's experience is dramatically faster. They're not waiting for anything; that chart is just boom, right there. And the really important thing about this storing of data, if you think about it, is: who are the users who consume this kind of data?
The guy or gal who is concerned about this is a decision maker, the person looking at this from a management perspective. They want to understand what's going on across a global population of data, and they don't want to be given excuses: well, you have to wait three minutes for the report or the chart to draw; oh, something changed, you have to wait another three to five minutes. Their time is money, and it's important that they get answers quickly, and that they have faith that your solution can answer them quickly. So we can create data caches that are reliable. We can produce extraordinarily quick dashboards and charts and reports that really answer their questions and make them feel, wow, the solution gives answers so quickly. Okay, let's talk about scripts. I'm going to give you a couple of different ideas here about making scripts in general perform better just by changing your mindset, just thinking about things a little bit differently. It's almost like you always make grilled cheese one way, and you're going to start making it a different way, and oh my gosh, I never realized I could just change the ingredients slightly. So the idea is to give you some tips on approaching scripts with a fresh mind and thereby getting them to perform a little better, and of course on understanding the rules of the road that govern that. Then I'm going to spend some time talking about Perform Script on Server, which is a great new feature in FileMaker Server 13. I used it specifically in that caching example, and I think you'll want to use it as well. All right, let's go ahead. The first idea is this idea of navigating around records.
Maybe the first example is kind of typical. I want to go to my customer list and show somebody the active customers. The way we would normally do that is, well, you'd probably have a Set Error Capture on in there, but let's just look at what the slide says: go to layout Customer List and perform a stored find. That's fine; it gets the job done. But of course when I go to that customer list, the very first thing that happens, in a split second before it performs the find, is that it displays X number of rows of customers already in the list, and those may not be the active customers. So it sends me data I don't need, and then replaces it with the found set I do need. By thinking slightly differently about where I go to do my find, in this case, I freeze the window first, go to a blank layout, and perform my find while I'm on the blank layout. No data gets moved, because I'm on a blank layout; there's nothing there to move, but I'm in the right context. Then I switch to the customer list and reveal the found set. So I'm moving somewhere where I won't invoke data, I'm doing my find in that context, and then I'm moving to the place where I want to end up. I skip over, navigate around, however many records would have been shown to me in that brief moment in the middle, records that don't matter to me. And if you have sorts or summary fields on that list view, then you may otherwise be loading all the records that were there beforehand, like in the invoices example I gave earlier. This makes that dramatically different, because I never get to a layout where anything like that fires until I already have the right found set in hand. Here's another similar example, again pretty innocuous. We're probably used to doing this all the time.
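That first freeze-and-find sequence, written out as script steps, might look like the following; the layout names here are assumptions for the sketch:

```
Freeze Window
# "Blank (Customers)" is any Customers-context layout with no fields on it
Go to Layout [ "Blank (Customers)" ]
# Stored find request, e.g. Status = "Active"
Perform Find [ Restore ]
Go to Layout [ "Customer List" ]
```

Because the find executes on a field-free layout, no record data comes down until the final Go to Layout reveals the already-correct found set.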
You've got a main menu, with a button that says find customers or find notes or find invoices, and the script probably looks something like this: Set Error Capture on, go to the layout, enter find mode, pause, and wait for the user to perform the find. What I'm saying here is, take that sequence of going to the layout and then entering find mode, and switch it around: enter find mode first, then go to the layout. I still move to the layout, which you do either way, but I don't move any accidental data. I'm already in find mode, so no data is being seen, so nothing gets moved to me. Then I can perform my find, and of course I see the records they want. It's very similar to the previous example, just a different approach. There, I used a blank layout in the script to orchestrate things and avoid data; here, I changed the sequence of find and go-to-layout so that it puts me in a mode that's relatively harmless from a data-movement perspective. My hope is that you'll take these two examples and just think about the sequence of your script steps, and consider alternate approaches that won't trigger data to move that you don't want to move. The more interesting example I want to show you, and the one that's maybe a little more intense, is that Perform Script on Server example. What I'm looking at more closely here is the script I used to summarize the dollars by state in the previous example. You could have it be that when someone in, let's say, a clerical office is entering a new sale, a script trigger fires and it does the summarization. But that means that user is downloading the 50,000 records, doing the summarization, and maintaining
that table. By using Perform Script on Server, I can have anybody, anywhere, working on the solution make a change that affects that table, and let the server do the extra work of actually processing it. That's much better: the server is faster and more efficient at these things, and it doesn't have to send any data across the network, because it's all happening right there on the server. I want to point out a few things about this script to make clear how it works. Again, there are white papers and help articles on TechNet and in the FileMaker product that talk about getting into the right mindset for working with Perform Script on Server. But the key things to note: all of my script steps are server-compatible. You'll notice nothing is grayed out, so all of these script steps work on the server; that's the first requirement. The next is that I'm explicitly defining the context. I'm telling the server: when this script fires, you need to start by going to the list with all the data, you need to show all the records, and you need to do the following actions, because there's no way for it to know where I started when I triggered this. I could be on the sales record, making a sale entry in an entirely different context, but the change I just made needs to impact this script somehow. So I have to make sure I tell the server where to go and what to do. You'll notice that none of the relevant script steps use dialog boxes; they all have predetermined settings and don't require the user to interact, so nothing will stop the script from running to completion. And you'll notice that the path I use is dynamic.
It's a variable pointing to the temporary path, which makes it very flexible. This script will work just as well for me on my local desktop as it will on the server, because all the script steps are compatible and the path is dynamic, set on the fly. There is, of course, an assumption in here, right? I'm going into a table, deleting all the records, and then importing data in. In order for me to delete all records, there has to be nobody holding a record lock. So there could be situations, depending on the caches you're creating, where when you tell the server to do things, you have to make sure it's safe for it to do them. But because I never give a user a chance to go into the summary table directly, the odds are the worst thing that could happen here is two people fire this and the server just keeps doing the same thing: purging, deleting, replacing. Okay, that's all the time I have today for tips and demos. I certainly want to leave time at the end for us to talk, so I'm going to hand this back to Matt, who's going to point you to some important resources, and then we'll spend the rest of the time answering your questions, or going deeper into any of the examples as needed. Matt? Thank you, Mark. That was a lot of great content.
I think we all learned a lot about performance in FileMaker. Before we get to the Q&A portion, I did want to remind everyone that you can download this document from the FileMaker Technical Network. It's a great community where you can ask questions, get tips from experts, and collaborate and share with other people in the FileMaker community. We also have a lot of great software and resources there, like tech briefs and other downloads, where you can get information like this about how to use FileMaker and its best practices. And again, we'll have more design guides like this one coming in the near future. Here are a few links to download some of this material. You can get the performance guide at filemaker.com slash r slash fm performance guide dot html. Also, this is one of a series of web seminars that we do. You can learn more about the different webinars we're giving at filemaker.com slash support slash webinars. The recording of this webinar will be available there in the next few days to two weeks, so please go back if you want to. We had some people asking whether this was going to be recorded: it is being recorded, and it will be there to reference. We also have a lot of great training resources if you want to get trained in FileMaker, at filemaker.com slash support slash training. And lastly, Mark was gracious enough to put his contact information here. If any of you have direct questions for Mark about the things he was referring to, you can email him directly at mark at skeletonkey.com. There's also his phone number, and if you want to follow him on Twitter, you can find him at mark underscore richmond. So before we dive back into questions, let's remind everyone how to ask a question. If you go to the GoToWebinar control panel and click on the questions
area, you can type in your question and hit send, and we will see it. We did have a few questions come in while you were talking, Mark, so we're going to start with them, and we'll try to get through some of the other ones coming in now. There were a few people who had some confusion about what fields are transmitted when, in terms of this record-centric moving of data. Can you reiterate a little which fields are transmitted, which fields aren't, and how entire records move? Sure. Let's say you have a record with a bunch of different fields; let's say every single field type is represented, and you show one of the fields, say a text field, on a layout. FileMaker is going to move all the text fields, all the date fields, all the time fields, all the timestamp fields, all the standard data fields where you store information. FileMaker will only move some of the other field types under certain conditions. You have to display or somehow reference a summary field in order for it to move any of the data associated with it. So summary fields don't load automatically.
They only load when seen. Unstored calculation fields also won't load any data unless they're seen or referenced in some way. Now, stored calculations, things that have already evaluated and are already set, will load just like time and text and number fields. But unstored calculations are more like summary fields: they have to be seen in order to be triggered to evaluate and start moving data. And then container fields are unique, in the sense that they don't start streaming any content until they're seen. So if you have a container field in there, and you know it contains some serious, weighty data, pictures or other binary objects, files and such, you don't have to worry about those moving unless they're actually displayed in the list view or on the form view. That's when they'll start to actually move data. Great. And on this idea of moving data and when data moves: you were talking about a few different techniques for making sure data does not move, like having a blank layout. Can you explain some of those techniques, and give a better description of what a blank layout is and how it's different from a regular layout? Yeah. All a blank layout is, really, is any layout that doesn't contain a data field, or an object using a conditional format, or some other kind of calculation reference that would start triggering data. A blank layout could be literally this: you go into layout mode and say, I want to create a new layout based on the customer table, but I'm not going to put any fields on it. What's behind the scenes in that layout are the actual records. You could be looking at a found set of a million records in the customer context, but because you don't have any fields on the layout to actually trigger the data to move, you're in the found set, but you're not actually moving
the data. That means you can do things like a find: you'll send the find request, and the response will come back saying, here's the list of record IDs, the records you're going to want to download. But because you don't actually have any fields on the layout, the data itself doesn't start to come down. So a blank layout is a really useful tool to put you in the right context, to let you interact with the data in a very controlled way, without creating this avalanche of data moving down to you. But it's really no different from any other layout; it just doesn't have anything on it. Great. Another question: a few people were asking how any of these recommendations might be different when used with FileMaker WebDirect or FileMaker Go. I think there are a few differences, but generally all of these apply. What kinds of differences can you speak to? I'd say for FileMaker Go the difference is effectively nil. I like to think of FileMaker Go and FileMaker Pro as effectively the same when it comes to how they move data. The biggest difference is the device. Generally, when using FileMaker Pro, you're using a laptop, because of the system requirements.
It's a relatively modern laptop or desktop computer, so it has the appropriate RAM, processors, and disk speed to really support moving data around at relatively high speed. When you work with an iOS device, you've got a significantly less powerful processor and not quite the same amount of RAM and performance. So the ability of FileMaker Go to manipulate data as quickly as FileMaker Pro is truly limited by the device. But all the other characteristics and behaviors in terms of data movement are more or less the same.

WebDirect is a little different. You have to think about WebDirect as having two pieces; I'm borrowing this a little from a really great description Matt put up on the performance guide thread on the Technical Network. There's effectively a virtual client running on the Web Publishing Engine, which is basically doing all of this for you. So all these performance issues we're talking about are going to be felt somewhere. In this case they're felt by the Web Publishing Engine, which has to move all this data around for the little virtual client session it's running for you. Then, of course, it renders that information and sends it to the actual browser. At the browser it's a bit more efficient: you just get what's needed to render on the screen, so you're not caching all that data locally at the browser level. But all the heavy lifting is still being done. So if you really want your WebDirect solutions to perform well, you want to make sure that the virtual client sessions the Web Publishing Engine is running are themselves running as efficiently as possible. Even though the data doesn't necessarily travel all the way to the browser and the user at that end, the number of users who can get in over WebDirect, and the performance they feel on that solution, are going to be affected by all those intermediary virtual clients, which act very much like Pro and Go.

Great. Another question on this was about using smaller table occurrence groups. Do you have any specific guidelines for determining the right size for a table occurrence group?

You know, I don't have a specific answer. The best answer I can give is: as long as you're putting things into groups of, say, 20 to 30 table occurrences maximum, and not just connecting everything in your graph, you're going to be in much better shape. I'd even say the fewer occurrences you can have in a group, the better. I know people who keep it pretty minimal, and I don't want to say keep it under 10, because I think that's pretty unreasonable. On top of this, it's also about testing, measuring your own performance, and seeing how well it works. But generally, if you go in with the mindset of keeping things separated into different groups instead of linking everything together, you're going to be in much better shape than someone who continually links everything together.

Yeah, let me add two other comments on that.
One: keep in mind that the work of reading the relationship graph, bringing down that schematic structure to the client, really happens at the beginning, when the file is opened. However many tables and relationships you have, that map of your file is read down to the client up front. It's from that point forward that you don't want everything to be interdependent and connected to everything else, because that just adds a little performance overhead throughout. The more complex the graph gets, if it's all interconnected, the more every change needs to update that dependency tree. But the majority of the penalty you pay is at the initial connection to the file.

The other thing to keep in mind, of course, is the developer. If you've got a solution that is easy to develop in and easy to troubleshoot, where you're not scratching your head trying to figure out how this is connected to that, you're going to be able to work more efficiently and more productively, and that can't be overstated as a performance objective. This isn't just about building solutions that perform well; it's about building solutions that are easy to support, stable, and easy to troubleshoot, solutions that can withstand the test of time. Having a whole bunch of singular pairs in super-tiny table occurrence groups is not going to be a very easy or efficient solution to troubleshoot, because of all the places you might have to go to make changes when relationships or predicates need to be updated. If you can keep it somewhere in the middle ground, complex enough to get the job done but not so interrelated that it's one big spiderweb, you're right in the sweet spot.

And actually, we had a side question on this: what do we mean when we refer to a table occurrence group? There are a few different ways of describing this, but generally, when you're on the relationships graph and you have multiple table occurrences that are all related together, that is a group of table occurrences. Sometimes people will relate all of their table occurrences to everything else, and what we end up with is what we call a spider graph, where things balloon out like spider legs from a centralized hub. You see this a lot when converting from FileMaker 7 onward. When we talk about a table occurrence group, we're talking about breaking that up into distinct, smaller groups of table occurrences and using those groups to do specific things: drive specific layouts,
run specific scripts, and so on.

So another question here; this was kind of an interesting one, but I think it goes back to one of our points. Someone is using a layout with a bunch of unstored ExecuteSQL calculations on it, and it has a tendency to refresh really slowly. Do you have any recommendations for the best way to improve that solution, obviously without having seen exactly what those queries are?

Sure. So, setting aside the fact that there could be problems with the queries themselves, the big idea to remember about ExecuteSQL is that its primary purpose, in my understanding and from my perspective, is that it allows you, without having to worry about context, to ask for information from any table or collection of tables without having specific joins pre-established. You can effectively say, as if you're talking to a cloud of data tables: give me this information from this table and that table, where this equals that, joined with this, and so on. There doesn't have to be a structure in the relationships graph that explicitly supports that at all. You just have to have the tables in the file, with the data in the fields you want, and you can ask your query and get the results.

The other thing to keep in mind is that the data still has to move. If I say, ExecuteSQL, give me the answer to this particular question, that data still has to move to the client to answer the question. So with ExecuteSQL you still pay a performance penalty based on the amount of data that gets moved in order to give you the answer: your client has to download the data from those tables to perform the queries, and it delivers the results to you all at once.

That's another difference. When you do a find and, let's say, you found 3,000 records, your list view might only show 20 of those 3,000 records
because there's only room for 20 to show in the window. There could be another 2,980 records down below, but you're only seeing the top of the pile. In contrast, when you ask a question with ExecuteSQL, you get the entire answer all at once. If you ask a question and get a very large answer, all of that answer data comes back as the result of that calculation in that field.

So my feeling is that ExecuteSQL is most useful, personally, if you can use it in a Perform Script on Server environment, because the server has immediate access to the data; there's no latency or network overhead, so it's almost instantaneous. By asking the server to perform your ExecuteSQL function for you, all the heavy lifting needed to achieve the result is taken care of on the server side. You may then determine that you only needed the top row, or maybe only the first 15 rows. Maybe you're just not comfortable enough with SQL to phrase the question in a way that gives you just the first chunk, or the middle chunk, of what you want, but it may be easier to process that using FileMaker scripts on the server side, without having to move the whole payload.

So I think it's that idea of reducing the clutter on your graph, and not thinking that you're getting away with murder and not actually moving the data. You're still moving the data, and in fact when you get the result you're moving a lot of data. Think about Perform Script on Server as an intermediary, a proxy that can do some of that heavy lifting for you, so that you're not moving all that data to the client.

I'll also add that this is a perfect example of where caching or storing that data might be a great idea. If this is something that is the same for every record, and every time you go to a specific record you're supposed to show the result of some ExecuteSQL query for that record,
then that might be a perfect example of a place where you should cache that data.

And with that, we're a few minutes past time, so we're going to call it for the day. We're sorry if we didn't get to everybody's questions. We'd like to thank you all for joining us, and we hope you got a lot out of this; I know I did. Thanks again, Mark. The recording will be up on filemaker.com soon, and we hope to see you again soon. Have a great one, everyone.
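To round out the ExecuteSQL and Perform Script on Server discussion above, here is a rough sketch of the pattern Mark describes. The ExecuteSQL parameters (query, field separator, row separator, optional arguments) match the real function, but the table, field, and script names are hypothetical.

```
# Hypothetical server-side script "Fetch Top Sales", called from the client via:
#   Perform Script on Server [ "Fetch Top Sales" ; Parameter: $region ; Wait for completion: On ]

Set Variable [ $region ; Value: Get ( ScriptParameter ) ]
Set Variable [ $rows ; Value:
    ExecuteSQL (
        "SELECT name, total FROM Sales WHERE region = ? ORDER BY total DESC" ;
        "" ; "¶" ; $region )                # query runs on the server, next to the data
]
# Trim the payload before it crosses the network: only 15 rows return to the client
Exit Script [ Result: LeftValues ( $rows ; 15 ) ]
```

Because both the query and the trimming happen server side, the client receives only the rows it will actually display instead of the full result set.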