Okay. Welcome to the second session for the Singapore PASS chapter. As usual, this is Nagaraj Venkatesan doing yet another session for our user group. As it happens, the regular speaker who was supposed to present could not make it, so, as usual, I am the replacement. The topic I have taken up today is perhaps the hottest topic in the Microsoft SQL Server data platform right now: Service Pack 1, which was released last week. Unlike other regular service packs... what are service packs generally made up of? Anyone? Yes, bug fixes and security fixes. That is what service packs are usually about. But this time around, that is not the case. Some massive, massive changes have happened. I can safely say this is by far the biggest service pack release SQL Server has ever had to date. Why? We are going to see in the next 40 minutes. What happened on November 16th? With Service Pack 1 of SQL Server 2016, most of the Enterprise features were made available on the Standard, Web, and Express editions. Can you guess why features you previously had to pay thousands and thousands of dollars to enjoy are now in Express, which means they are there for free? Any guesses why? The intention behind this session is also to have a discussion with you all. Certain changes have happened, and I would like to hear your comments as well. Do you think they are useful? Do you think they are not useful, and why? There are upsides, but there can be downsides too; for example, you may have to tune instances differently even if they do not use these features. These are the kinds of things I want to discuss in this session. So, as someone pointed out, the idea is to have a common environment across editions. What has actually moved are features related to programming and T-SQL. The idea is to have a common programmability surface.
What that means is that across the editions it should not matter how I code. Whether I am on Standard, or even Express, or on Enterprise, it should not make much difference. The difference should be in terms of performance: if your application demands that much performance, you move to Enterprise. The way you code should not differ at all across the board. If you are wondering why, just take an example. We are living in an era of cloud and shared environments. You may develop a very small application, probably on Express edition, but it is most likely going to be deployed in production on an Enterprise edition. That does not mean your development options should be restricted just because you cannot afford an Enterprise edition in your development environment. Whatever you code should be edition-independent. That is the idea. Yes, that is certainly one option. The idea is basically that whatever development features you have need to be consistent across the board, so that as your application's usage increases, you get to scale up; the way you code should not make a difference. Good that some of you have already started the discussion. So what moved from Enterprise to Standard? The biggies. The column store index, for one. How many of you have heard of the column store index? It is more of a data warehouse feature, one that can give you lightning-fast performance when you are fetching large volumes of data. Then In-Memory OLTP, released in SQL Server 2014, which I think even Andreas touched upon; it aims at giving up to 100 times better performance by placing tables in memory. I think I have done two sessions on In-Memory for our UG myself. Then very, very interesting security features, lots of them, have been made available to the Standard, Web, and Express editions. And then Change Data Capture.
Database snapshots, compression, partitioning: all of these have been made available. Let's take a closer look at a few of them. Fortunately, most of you know about column store, but let me do a very quick, two-minute introduction to the column store index. Traditionally, a normal table is stored as pages, right? All the columns of a row are stored together. Think of a data warehouse application, which casually fetches 100,000 rows or a million rows, which deals with terabytes of data and easily fetches a few hundred thousand rows at a time. What happens is that the engine fetches the pages that contain the row data and then projects the required columns. What happens in column store is that instead of storing pages containing all the columns, the columns are stored separately and individually. Each column is then broken down into boxes called segments, and a segment can contain up to a million rows of that column. So what happens because of that? Let's say I have a query like SELECT SUM(c1), AVG(c3) with a filter condition in the WHERE clause. On a traditional table, if that query returns around 100,000 or 200,000 rows, the engine has to seek or scan all the data pages that qualify, and only after that can it project the two columns, c1 and c3, alone. Imagine a data warehouse application with a star-schema table design, where a single table can contain 50 or 60 columns. If you are reading those pages, even though you are projecting only two columns, you are reading all 50 or 60 columns, because you are forced to read whole pages.
If you have designed the table with a column store index, you get to read only the column segments you project, because the reads happen by segment. So you get to read only those two columns. And as I said, a single segment can hold up to a million rows: you read one segment, two segments, three segments, and so on. Remember that. Now let's discuss what happens with Standard edition. Just before that, the benefits of column store: obviously, when you have to query large volumes of data, it gives you very good performance, because you are restricting the reads to only what you want. Another big advantage: because you are storing the columns separately, each segment contains values of a single data type, which means it compresses really well. You can get up to 10 times compression; that is what Microsoft says. Now it has been made available in Standard, which means data warehousing applications on Standard edition can take advantage of it. But let's see. What are the nice things? Performance, yes: use column store and you obviously get better performance. Again, as we discussed, consistent environments: I may be on Express and still have the column store index, and when the application goes to production it will make good use of it. Last is compression and column elimination, which is what I was just showing. More important, the way I see it, are the cons. It is a memory-hungry technology, because, as I said, it reads segments: it pushes segments into memory and then does the processing. Can anyone tell me the memory limit in Standard edition? 128 GB. What Microsoft is saying is that you have a 25% cap of the max memory limit for column store. Meaning, if you have a 128 GB server, your column store can use only up to 32 GB, 25% of it.
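To make the column-elimination idea concrete, here is a minimal T-SQL sketch; the table and column names are illustrative, not from the session:

```sql
-- Hypothetical fact table; imagine 50-60 more columns in a real star schema.
CREATE TABLE dbo.FactSales
(
    SaleID   BIGINT        NOT NULL,
    c1       DECIMAL(18,2) NOT NULL,  -- e.g. sale amount
    c3       DECIMAL(18,2) NOT NULL,  -- e.g. discount percentage
    SaleDate DATE          NOT NULL
);

-- Store the whole table column-wise, in segments of up to ~1M rows each.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- Only the c1, c3 and SaleDate segments are read; the remaining
-- columns are never touched (column elimination).
SELECT SUM(c1) AS TotalAmount, AVG(c3) AS AvgDiscount
FROM dbo.FactSales
WHERE SaleDate >= '2016-01-01';
```

Because each segment holds values of one column (and therefore one data type), segment compression is also what drives the "up to 10x" figure mentioned above.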
But one nice thing they also say: this 32 GB does not count against the max memory limit, meaning it is a bonus of sorts. It is not counted in SQL Server's buffer pool utilization. So if I create a column store, I get to use 32 GB on top of my 128 GB max memory, and total SQL Server utilization can go up to 160 GB. That is what they are saying. Even so, in my opinion, and this is just my opinion, I really wonder how many people will take advantage of this, because column store, as I said, works really well on large volumes of data, and it has to use a lot of memory to get optimum performance. If I restrict myself to 32 GB... nowadays 32 GB is not even an average data warehouse; there is no way you can call that a data warehouse. Even OLTP systems go up to a terabyte nowadays. So how much difference it is going to make for column store, I still have my doubts. Yes, it does achieve the common programmability surface, the policy of having the same environment across editions; that is there. But will somebody who wants to use column store for data warehouse purposes actually run it on Standard edition? I have my doubts, because of the limited amount of RAM. If they had allowed, say, 256 GB or 512 GB for Standard edition, then maybe, but with 32 GB I have my doubts. Any comments? That is my opinion on this one; I admit I am starting SP1 with what I consider an average feature. Andreas has a different opinion: he thinks column store is very, very nice. In my opinion, some of the other features that moved are more wonderful. To make it very clear, a column store index can also be created on in-memory tables, but that is a completely different thing. What I am talking about right now is that when column store does its processing, it uses lots and lots of memory, meaning the segments are loaded into memory and then the filtering and processing happen. That is a technical fact.
My observation was about whether it will be useful on Standard edition or not; that was my comment. Sorry? Yes, a 32 GB limit, 25% of the maximum memory. You can still use up to 128 GB, but the column store index can use only 25% of it. Yes, that is allowed. Still, as I was explaining, you can have a 192 GB machine where the normal buffer pool uses up to 128 GB and this extra 32 GB is used by column store. Also, parallelism is limited to a degree of two, and a few additional query-processing features on column store, like aggregate pushdown, are not part of Standard edition; they exist to give even better performance and remain specific to Enterprise edition. Sorry? No, it is not caching; it is internal to the way queries are processed. There are certain features which are transparent to you but differ between Standard and Enterprise, things like fast recovery when a database is starting up, which is specific to Enterprise; this is something similar. Internally there are features like aggregate pushdown, where certain aggregation operations can be pushed down to the segment level, so you need not wait until all the data is fetched. Explaining what aggregate pushdown or predicate pushdown is would be too deep a dive for this session, beyond its scope. What I can say is that in the way these queries are executed, there are certain optimizations that are specific to Enterprise edition. Okay, the next thing we have is In-Memory OLTP. In-Memory, as most of you know: you pick your tables, they are placed into memory, and the way they are accessed and operated on is different; it aims to give you up to 100 times faster performance. This has now been made available in Standard. Just a very, very quick intro into In-Memory.
As I said, you pick the tables that you want to put in memory. One of the most important characteristics is that it is lock- and latch-free. What I mean by lock-free is that it does not use locks to maintain consistency, so it offers much, much better concurrency: the rows are versioned, which means you do not get into read-versus-write blocking. And because of the way the data is placed in memory, as rows rather than as pages, unlike traditional tables, you do not end up with latch waits; it does not use latches to synchronize either. These two are major factors when you are dealing with OLTP systems: the most common waits on OLTP systems are locks, latches, and so on, and these two are taken out of the equation. Internally, these tables and natively compiled stored procedures are accessed via compiled DLLs, which again makes them a lot faster. Though the tables are in memory, they are also written to disk, but the way they are written involves no random writes, only sequential writes; that is how it has been designed, which again makes it faster. In a nutshell, the table needs to reside entirely in memory, and you get very good performance from in-memory tables. One important factor to remember: when you use In-Memory and you run out of memory, your inserts and updates can start failing. Your memory becomes something like disk. Now let's discuss. How could it ever be slow? All my access is through memory, and, as I said, it operates latch-free and lock-free. I personally ran a test here, I think last year, showing you can get up to 30 times faster performance on this very laptop. The thing is, your memory becomes something like disk: when you run out of memory, it is not that things slow down; you start getting errors, meaning your inserts fail, your updates fail, and so on. Now it has moved to Standard edition. Do you think that is a bad thing? Any comments? No.
In my opinion again, let me force my opinion all over the place: it is one of the best moves. The reason is that your most important table, or your most important five tables, in an OLTP application are not necessarily the biggest tables. They may hold a few thousand or a few hundred thousand rows. Regarding memory limitations, it is again similar to column store, but here it is not 25%. Bob Ward says it is a flat 32 GB, similar to the column store figure: for each database, you get 32 GB of RAM for in-memory data, and again it is not counted in the buffer pool. My experience working with OLTP applications tells me that my most important tables are usually not the biggest tables; they hold a few thousand rows. For example, take a share trading application where stock prices are streaming in: a user places a bid, and you immediately need to match it and close the deal. For this kind of application, the table that contains all the bid requests can perhaps grow to a few hundred thousand rows, and the prices table to a few hundred. At the end of the day, there will be one process that moves the previous day's transactions into another table, so that the most important table remains short. Most such applications work in this fashion. I move my most important tables into memory, which means I have no locks, no latches, and a dedicated 32 GB; I can still achieve really awesome performance. So, in my opinion, one of the best moves of SQL Server 2016 SP1 was to bring In-Memory OLTP to Standard and the other editions. Yes, it depends on how you design your application: as I said, I might keep only one day of data in my prime table, archiving older data by deleting it and inserting it into another table.
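A minimal sketch of what moving such a "hot" table into memory looks like; the database, filegroup, path, and table names here are illustrative assumptions, not from the session:

```sql
-- In-Memory OLTP requires a memory-optimized filegroup on the database.
ALTER DATABASE TradingDB
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE TradingDB
    ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
    TO FILEGROUP imoltp_fg;

-- A small but hot table: bid requests for the current trading day.
CREATE TABLE dbo.BidRequests
(
    BidID    BIGINT IDENTITY NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Symbol   NVARCHAR(10)  NOT NULL,
    Price    DECIMAL(18,4) NOT NULL,
    PlacedAt DATETIME2     NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Rows in such a table are maintained with optimistic multi-version concurrency, so readers never block writers; DURABILITY = SCHEMA_AND_DATA keeps the data recoverable while writing to disk sequentially, as described above.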
What you are saying is: what if I hit the limit? The way I look at it, that is basically a capacity planning problem. You should monitor, and you should design your application in such a way that you work within the 32 GB that has been provided to you. Meaning, you do not pick your massive tables; you pick the tables that are most important but not huge. One interesting thing I would also like to point out: on Enterprise edition, where you do not have the memory limit, the row versions that In-Memory keeps around, including for deletes, are counted towards the memory used by In-Memory. But on Standard, the row versions are not counted against the quota they have defined. Next, security features: again one of the best moves of SP1, moving the security features into the Standard, Express, and Web editions. Features like Always Encrypted. Why do I consider moving the security features very good? Let's say I have a very, very small application; imagine this scenario. I build my own application on SQL Express and I deal with maybe a few hundred transactions a month. A very small application, but I still accept payments, and I deal with credit card numbers. So I still need good security features in my application, even though it is not the busiest application or one that demands the highest possible performance, because I deal with sensitive data. Even a single monetary transaction is sensitive. I may get only two orders in a month, worth only three hundred dollars, but that transaction still needs to be handled in a very secure way. In my opinion, security features are by rights not something luxurious; they should be mandatory and available to all. It is a very welcome step that Microsoft has gone ahead and made them available across the board.
A feature like Always Encrypted, which keeps the data encrypted on the wire, is wonderful to have across editions. The same goes for dynamic data masking, which Andreas also demonstrated. Row-level security again, where you want to control which users can access which rows of your table: you can control at the row level which users see which data. That has also been made available. Fine-grained auditing, which was previously Enterprise-only, has now been made available as well. As I explained, in my opinion this is not a luxury; nowadays it is more of a requirement, and it is a welcome thing that it is now available. Any comments? Yes. Okay, TDE. As I said initially, we are looking at a common programmability surface. TDE is more of an infrastructure solution, where you want to protect your database files and backups at rest. For sensitive data, one of the options is Always Encrypted, where the key exists only at the right client, which still makes it very, very hard to get around it and crack your sensitive data. TDE, on the other hand, is configured at the database level, not at the coding level; the features that moved are the ones closely related to your programming. Any other comments? Yes. Okay, other major features that were made available: Change Data Capture and database snapshots. I find these two used more in development environments, and you want a common interface, so they have been made available on the lower editions as well. You cannot have Change Data Capture on Express, though, because there is no SQL Server Agent on Express edition. Compression and partitioning too: if you define partitioning on your local database, it should port comfortably into your Enterprise edition as well, so it has been made available across the board. Another very, very interesting feature that moved was Stretch Database, one of the most interesting features of SQL Server 2016, which has been made available on the other editions too. Okay, any comments on this move?
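As an illustration of the kind of security feature that moved down to the lower editions, here is a minimal dynamic data masking sketch; the table and user names are hypothetical, not from the session:

```sql
-- Mask the credit card number for non-privileged users.
CREATE TABLE dbo.Orders
(
    OrderID    INT IDENTITY PRIMARY KEY,
    Customer   NVARCHAR(100) NOT NULL,
    CardNumber VARCHAR(19)
        MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)') NOT NULL,
    Amount     MONEY NOT NULL
);

-- A low-privileged reporting user sees only the masked value.
CREATE USER ReportUser WITHOUT LOGIN;
GRANT SELECT ON dbo.Orders TO ReportUser;

EXECUTE AS USER = 'ReportUser';
SELECT CardNumber FROM dbo.Orders;  -- masked, e.g. XXXX-XXXX-XXXX-1234
REVERT;

-- Only explicitly trusted principals get the real data.
GRANT UNMASK TO ReportUser;
```

Note that masking is an obfuscation feature applied at query time; for cryptographic protection of the values themselves, Always Encrypted (also moved down in SP1) is the stronger tool.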
Do you think it will push more people to move to SQL Server 2016, now that many of these features are available? As I said, I want this session to be more of a discussion, so let's hear some comments. Yes. No, SQL Server on Linux is something completely different; what we are talking about here is the regular product. Yes. So, personally, I think it will definitely influence customers to move quicker now, and customers who have already moved can start taking advantage of the special features. And especially since I am very much on the In-Memory side personally, I think that will be the kickoff. Because it really means everybody can use the In-Memory stuff; even on Express you can hold the data and load it. That is what I have learned from doing In-Memory training for years now. All the time I ask who is already using it, and it is maybe one person in the room, sometimes nobody. Adoption has been very, very low because of the Enterprise requirement, and also because it is complicated. But the main reason really is the Enterprise licensing: in Germany we do not have so many huge companies, so Enterprise is not so common. Everybody wants their stuff to be faster, everybody wants to keep things in memory without knowing what it really means. Now everybody can start coding against In-Memory and find out whether it is really helpful or not. So I see a big adoption wave right there. Okay. Yes. I am using at least five or six SQL Server editions in my office. Because of this release, projects that are using SQL Server 2012 are likely to move to SQL Server 2016, for In-Memory and so on. That is the level of impact of this particular announcement. In addition, we do not need to pay the Enterprise cost the way we did before.
We can get those features in Standard edition at a different cost: now we pay for Standard and we get all the features. Absolutely. So some people may ask: what remains in Enterprise? Do I get everything? No, there are certain things that definitely remain in Enterprise. Fully functional Always On high availability still remains Enterprise-only. There is something called a Basic Availability Group, available in Standard since SQL Server 2016, but you get only one database per availability group, you do not get readable secondaries, and you cannot take backups on your secondary in a basic AG. TDE, as somebody already asked, is still reserved for Enterprise. Resource Governor, again, is reserved for Enterprise. Online index operations are still Enterprise. As I said, all the features that are closer to infrastructure, performance, and availability, the typically enterprise-class things, still remain with Enterprise, and the resource limits of 24 cores and 128 GB for Standard edition still remain. Basically, what they are saying is: whatever features you want, you get them in Standard itself, but if you want unbelievable performance, or if you are dealing with large volumes of data that still demand extremely good performance, you may need to go to Enterprise. Okay. Now, no session is complete without demos. In the first part, comparing editions, obviously I could not show too many demos. But there are certain other good things introduced in SQL Server 2016 SP1, other than these edition changes, and we will take a look at them; we will go through a list of the other interesting things that arrived in SP1. This part I will explain via demo, so that it is a little more interesting. Okay.
SQL Server 2014 SP2, if I recall, introduced DBCC CLONEDATABASE. Anyone know what it is? Yes. With clone database you get the schema of the database alone, which you can move over to other environments; you clone only the schema part of it, not the data. If you are wondering, you already had the export option, Generate Scripts, to do that task, but this works in a much faster and better way. What SQL Server 2016 SP1 adds is that you also get the Query Store data when you use clone database. I don't know if you attended my last session, which was on Query Store, where you can store the query plans of all the queries fired against your database and use them for performance tuning. With DBCC CLONEDATABASE, you now get the structure plus the Query Store data and reports as well. Let me show you: currently I have a database with Query Store enabled, and I can see its Query Store data here. If I run DBCC CLONEDATABASE, I still get all the Query Store data in the clone. Okay. Previously, when you wanted to switch between cardinality estimators, say you have a database running on the latest compatibility level but you want the older cardinality estimator to be used, you could use the OPTION (QUERYTRACEON ...) query hint and specify a trace flag to make the optimizer work with the lower cardinality estimator. The drawback was that OPTION (QUERYTRACEON ...) requires sysadmin permissions to run that particular query. With SP1 they have introduced OPTION (USE HINT ...): instead of trace flag numbers, you give descriptive, much clearer hints that say what you are actually trying to achieve, and this does not need sysadmin rights, whereas previously it did. Okay, nothing fascinating about running it; it just runs and gives me the result. That's it. Now, this particular thing is really, really interesting. First, to use it you need Management Studio 16.5, the latest one, and of course SP1 as well. I am going to run a query and open the query plan; there are some very interesting things that have been introduced in the latest showplan. What are they? Let's see. The first and most interesting thing, at the bottom: wait stats. When we have a query plan, we now have the ability to see what wait types the query actually underwent. Previously you got the plan with all the I/O costs, number of rows, and so on, but now you can even see what wait types that particular query went through. Expanded, you can see, for example, a scheduler wait of 2 milliseconds, and so on, which is very cool, isn't it? It gives you more power when you are fine-tuning queries. Next, query time stats: you get CPU time and elapsed time here, which is another very useful thing. And the best part is you can look at time statistics at the operator level, meaning how much CPU time and elapsed time this particular operation took. Look at this: you also have I/O statistics, similar to what you used to get with SET STATISTICS IO ON, which was previously not available per plan. You can see how many physical reads this particular operation took, how many logical reads, and so on, pretty much what you get from SET STATISTICS IO ON, right in the query plan itself, which again is very useful. CREATE OR ALTER: how many of you have actually wanted this? When you are doing deployments, you can now just say CREATE OR ALTER. Previously, what people used to do was IF EXISTS (SELECT * FROM sys.objects ...), then drop the object, and then write the CREATE syntax, right? The problem with dropping is that you lose your permissions: if you have certain grants, you should ensure that those
grants are also placed in your script; if not, you end up losing the permissions if you are careless. Now you have something called CREATE OR ALTER, with which you either create the object or, if it exists, alter it. Very simple, right? There you go. Okay, there are many, many interesting enhancements; I am just picking the ones that appear most interesting to me, or the ones I could understand within two days, because I only found out two days earlier that I would be doing this session. Okay. Most of you are aware of Locked Pages in Memory, right? Locked Pages in Memory and Instant File Initialization, something our Dharma really likes to talk about over and over again. These two are permissions granted to the service account, and now you can programmatically, via a DMV, check whether the SQL Server service account has them enabled or not. Here you get a value showing whether Locked Pages in Memory is enabled, and here one for Instant File Initialization. These are exposed on two DMVs, including sys.dm_server_services. Okay. Last demo of the day. Most of you production DBAs would have some mechanism for tracking which queries are executed, once every five minutes or at certain intervals, so that you can use that for troubleshooting purposes, right? And I believe some of you also capture the query plans. Those plans, which you capture using the DMV sys.dm_exec_query_plan, give you the estimated number of rows, all the estimated statistics, right? With SQL Server 2016 you have a new DMV, sys.dm_exec_query_statistics_xml, with which you can get actual numbers, not just estimates: you get to see the actual number of rows that are being
processed for that particular query. Let me run this: I am just creating some workload and then running it. To use this you need trace flag 7412 enabled; it is called lightweight query profiling. It is not like Profiler: it tracks a few statistics in a highly lightweight manner. This is the new query plan obtained using the new DMV. Let me open this plan: you see the actual number of rows. This can be very useful when you are troubleshooting a performance problem and you want this DMV to track the queries being fired; you get to see actual row counts and not just estimates. Okay, with that I am done with this session. The first part was more about the editions, what moved and what did not, and the second was a very quick preview of the interesting enhancements in SP1, very cool enhancements. There are more; you can read up the links I am providing along with my slides, where you can see the other ones as well. Anything else? Any questions, please. The missing indexes DMV? Not that I am aware of, but there are blog posts being released almost every day right now on new SP1 features; so many new things have happened. Bug fixes, yes, and there are known issues that have also been published. Still, in my opinion, this is one of the biggest moves in SQL Server. I don't have prizes to give away, so thanks a lot for staying back this late, until 9:40. Yes? A question for you all; just use your PowerPoint, I can write something. Yes, sure, please. Just one minute: Nagaraj, me, and the other organizers are thinking about making a special event next year, and one of the ideas is to do a one-day training class. The exact setup, pricing, and naming are not the point right now; what I would be interested in is which topics you guys and girls are interested in. I have prepared a small list which I can offer you; basically I would just like to see hands up for each of those topics, if you think it is a topic you would like to spend one day on, at a very cheap special price. In-Memory and column store would be one topic; the layout is basically the same as the class I do in Germany, so I guess you know what I mean. What I mean by this is obviously the In-Memory OLTP features which are now available in Standard edition; I think that would be a good reason to teach it. But the question is whether this is interesting for you, the column store in combination: would that be a topic where you would say, I would like to spend one day on it? Hands up. Yes, okay. The next one would be the opposite, not so fancy, more like basics: who would like a really basic day on best practices everybody needs to know? Administration secrets, which settings to use and which ones rather not, how to do a correct backup, not the fancy backup but with multiple options, what is best practice to go along with, basic security setup, configuration options; that is what I am thinking about here. Hands up for that, please. Two and a half, let's say three; I count three. The next one: performance analysis, basically what we also had here at the SQL Saturday. Anybody interested in that, again or maybe those who have not been here? Two, three, four, five, six hands, I think. Indexing in general, not the column store, more like general clustered and non-clustered indexing: how to set up correct indexes, how to check whether indexes are used? Half a hand, let's say one. The next one: high availability, especially with availability groups, on one hand. Okay. And the Azure space: there is Azure SQL DB, and there is also the option to put SQL Server into an Azure VM. Would that be interesting for you as an introduction, how to do it in Azure and how to work with SQL Server on Azure? That would be one, two, three, four hands. Okay, that is interesting for me. What will come out of it you will see next year; maybe we will repeat the poll next time for the other people also, so we get some idea. Thanks a lot. Yes, sure.
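To recap the SP1 T-SQL enhancements demonstrated in the session, here is a minimal combined sketch; the object names are illustrative assumptions, and the statements assume a SQL Server 2016 SP1 instance:

```sql
-- 1) USE HINT: force the legacy cardinality estimator without
--    sysadmin rights (replaces OPTION (QUERYTRACEON 9481)).
SELECT c.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

-- 2) CREATE OR ALTER: deploy without a DROP, so existing GRANTs survive.
CREATE OR ALTER PROCEDURE dbo.usp_GetOrders
    @CustomerID INT
AS
BEGIN
    SELECT OrderID, Amount FROM dbo.Orders WHERE CustomerID = @CustomerID;
END;

-- 3) Check service-account permissions programmatically.
SELECT sql_memory_model_desc              -- LOCK_PAGES when LPIM is enabled
FROM sys.dm_os_sys_info;
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;

-- 4) Live plans with ACTUAL row counts for a running session
--    (requires the lightweight profiling trace flag).
DBCC TRACEON (7412, -1);
SELECT query_plan
FROM sys.dm_exec_query_statistics_xml(55);  -- 55 = some active session_id
```

The session_id in the last query is a placeholder; in practice you would join against sys.dm_exec_requests to pick the session you are troubleshooting.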