Mainly what we have been working on lately is Entity Framework Core 3.0. And we're going to start with the new LINQ implementation that we added in this release. So LINQ is this feature in .NET that everybody loves to use, but that is actually very hard to implement. In this release, we basically collected all the things that we have learned in the last probably 12 years, from LINQ to SQL, to the first few versions of EF, to the first few versions of EF Core. We took all of that, and we decided to make a few changes to the architecture of this implementation, and we made a couple of design changes that are worth talking about.

One of those changes is that we now restrict client evaluation of a query to just the top-level Select. In previous versions of EF Core, if, for instance, you had a filter in a Where clause that we couldn't translate to SQL, maybe because you were calling some custom method in it, we said, OK, well, we are not going to be able to translate this to SQL, so let's push down a SQL query that doesn't contain that Where clause, and then, when all the data is back from the database, we are going to apply that filter in memory. And that worked with a few rows, but if you had a lot of rows in the table, it could cause severe performance problems.

The other change you probably want to talk about better than me. The other change we made was to generate a single SQL query for every LINQ query. Related to client evaluation, whenever we encountered a complex query for which we could not generate SQL easily, what we did was N+1 evaluation: we would run a top-level query once, and for each row in that query, we would send an additional query to get the related data. We did certain optimizations on that too, for the collection Include kind of scenario, which is pretty common, to generate only two queries and correlate the data together.
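In code, the restriction on client evaluation looks roughly like this. This is a sketch: the `BloggingContext`, its `Blogs` set, the `Rating` property, and the `IsInteresting` helper are hypothetical names, not from the talk; the behavior shown (2.x implicit client evaluation vs. 3.0 throwing, with `AsEnumerable` as the explicit opt-in) is the documented EF Core 3.0 change.

```csharp
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

// Hypothetical helper that EF has no SQL translation for.
static bool IsInteresting(string title) => title.Contains("EF");

using var db = new BloggingContext();

// EF Core 2.x: this silently fetched ALL rows and ran the filter in memory.
// EF Core 3.0: it throws InvalidOperationException instead, because the
// Where clause cannot be translated and implicit client evaluation is gone.
// var blogs = db.Blogs.Where(b => IsInteresting(b.Title)).ToList();

// Client evaluation is still available, but only when you opt in explicitly
// with AsEnumerable(): everything before it runs on the server, everything
// after it runs in memory at the top level of the query.
var blogs = db.Blogs
    .Where(b => b.Rating > 3)               // translated to SQL
    .AsEnumerable()                          // switch to LINQ to Objects
    .Where(b => IsInteresting(b.Title))      // evaluated on the client
    .ToList();
```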
A few of the issues we ran into with those approaches: while they avoided duplicating data, we were running two queries rather than just one against the server. Plus, if your data changes in between the two queries being executed, then you will get inconsistent data, which can even lead to data corruption. Another thing we saw with those approaches is that if your database does not support having multiple data readers open at the same time, then you have to fully buffer the result of one query, and even though you do not need to keep all the results at the same time, that is going to cause a huge memory drag.

So to deal with all these issues, we decided to go back to the behavior that EF6 had, which is to generate a single query for each LINQ query. We generate complex join scenarios just to support that: whatever complex query comes in, we are going to generate a single query. Yes, it involves duplicated data in certain cases, but even though duplicated data comes back from the server, we are not going to re-read it, because we have already read that data. And to facilitate that, in the 3.0 release we have enabled many more patterns of complex queries to be translated to the server, which were N+1 evaluation before. Cool.

Yeah. The other thing that we implemented in this version is support for a couple of new features of C# 8 that are very useful and applicable to EF. One is async streams. Async streams is this great new feature that standardizes how you consume asynchronous results from some query or from some request. We basically switched from a custom implementation of this kind of interface to the standard one, which is IAsyncEnumerable<T>, so that you are going to be able to consume results from an EF query using the new await foreach statement. The other thing that we did was start reasoning about nullable reference types.
So with that new feature in C#, you can annotate a string variable, or a string member, or a class member, as non-nullable, which allows us to decide that the corresponding column in the database also needs to be non-nullable. We used to do the same reasoning with the [Required] attribute; now we have extended that to nullability.

We also finished the Cosmos DB provider. This is a provider for EF Core that we have been showing in different versions; actually, I think the first time was probably two years ago, when we showed the prototype. Now that we are done with it, you can start using it in your applications, and we are looking forward to your feedback so that we can decide in what direction we are going to evolve it.

The next thing that we did was interception. This is a lot like a feature that already exists in EF6. It allows you to provide custom logic that is executed by EF Core, in this case, whenever a low-level database operation is happening. For instance, before we open a connection, you can change the connection string; after we open the connection, you can send additional statements to the database; before a transaction is committed, or after it is committed; or, let's say, before we execute a DbCommand to get a data reader, you have an opportunity to run some custom logic, and even to replace the results that come from the data reader.

The next thing that we did was basically allow reverse engineering, or scaffolding, of views from a database. We have had this feature that we used to call query types in EF Core. In 3.0, we renamed the concept to keyless entities, because that's basically what they are: entities that don't have a primary key. And we allow you to map those types in the model. Now, with 3.0, we also said, okay, well, we can take advantage of that feature.
And if we find a table that doesn't have a primary key, or a view, which very often don't have primary keys, then when we do the scaffolding of the DbContext and the types, we create a type for those too, and we configure it so that it's just a keyless entity type.

And finally, we also adapted EF Core to work with Microsoft.Data.SqlClient. This is a new NuGet package that contains the latest version of the ADO.NET provider for SQL Server. It has a couple of very important features for .NET Core that were not supported before. One is Always Encrypted, and the other one is the ability to connect directly with Azure Active Directory by configuring it in the connection string.

So, we would like to show a few things that EF Core 3.0 can do that the previous version couldn't. So, going back to Visual Studio. Let me walk you through what we have here. We have an Author class. The Author class contains a collection of blogs, and each blog contains a collection of posts. These are our model classes. The way we have configured our DbContext is to connect to SQL Server, and we have also plugged in a LoggerFactory which is going to use the console logger to log the SQL commands we send. This demo uses queries, so we want to show you what SQL it generates. Apart from that, in the main program we first set up the database: that function is going to create the database and add some dummy data, so that we can run the queries and see that they are generating the correct results and everything. Then we are going to run a few queries, iterate over them, and print the results.

So, the queries we have here: first we have recent blogs, which is basically all the blogs that have a post that has been published in the last seven days. We are going to use the DATEDIFF function on SQL Server through EF.Functions. The other query we have is Diego's blogs, which is all the blogs that Diego authors. Which is exactly one. There are two of them.
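The demo model and the two queries can be sketched roughly like this. The property names, the exact shape of the seven-day filter, and the author name comparison are assumptions; only the class names, the relationships, and the use of DATEDIFF through EF.Functions come from the talk.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Author
{
    public int AuthorId { get; set; }
    public string Name { get; set; }        // non-nullable reference type: with
                                            // NRT enabled, maps to a NOT NULL column
    public List<Blog> Blogs { get; set; }
}

public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
    public Author Author { get; set; }
    public List<Post> Posts { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public DateTime PublishedOn { get; set; }
}

// Blogs with a post published in the last seven days; the DATEDIFF call goes
// through EF.Functions so the whole filter is translated and runs on the server.
IQueryable<Blog> recentBlogs = db.Blogs.Where(b =>
    b.Posts.Any(p => EF.Functions.DateDiffDay(p.PublishedOn, DateTime.Now) <= 7));

// Blogs authored by Diego.
IQueryable<Blog> diegoBlogs = db.Blogs.Where(b => b.Author.Name == "Diego");
```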
So, the query we have is recent blogs union Diego's blogs. Union is a set operation, and in previous versions of EF the set operations were client evaluated: we would send two queries, one for each data set in the set operation, and then on the client side we would compute the result of the set operation. For a set operation like Concat it would not make a huge difference, but for anything else, especially Intersect and Except, we were going to fetch a lot more data from the server even when the common part of the data sets is really small. So, since we are moving away from client evaluation, we decided that we needed to translate set operations to provide good functionality to customers. Which is something that we have been asked for for a while, right? Yes.

So, here the query is going to do the union, and then we are going to use the await foreach syntax. The AsAsyncEnumerable function basically converts an IQueryable into an IAsyncEnumerable, which gets you an enumerator that behaves like a regular enumerator but works asynchronously, and await foreach enumerates over it and prints out the results. So, when we run this application, this is the SQL we generated. As you can see, the union is happening on the server side, and these are the results printed.

Another thing a lot of customers asked about with set operation queries was paging. Doing set operations on the client side may have been okay, but if you integrate paging with it, that means you want only a subset of the set operation's results, not all of them, and when the set operation happens on the client side, the paging also has to happen on the client side, and that was a huge performance drag. So, if we integrate paging here with our queries: after doing the union, we are going to order by the post count in descending order, and then do Skip(0) and Take(2) blogs. So, when we run this query now, on top of the union query which we had earlier, it is going to generate an ORDER BY clause based on the count, and then generate OFFSET and FETCH.
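A sketch of the union, the async enumeration, and the paging step. It assumes `recentBlogs` and `diegoBlogs` are the two `IQueryable<Blog>` queries described above, and that `Blog` has `Name` and `Posts` properties; the exact member names are not given in the talk.

```csharp
// Union of the two queries; in EF Core 3.0 this translates to a
// server-side UNION instead of being evaluated on the client.
var union = recentBlogs.Union(diegoBlogs);

// Consume the results with C# 8 async streams: AsAsyncEnumerable turns the
// IQueryable into an IAsyncEnumerable<Blog> that await foreach can enumerate.
await foreach (var blog in union.AsAsyncEnumerable())
{
    Console.WriteLine(blog.Name);
}

// Paging composed on top of the union: EF Core 3.0 generates
// ORDER BY ... OFFSET 0 ROWS FETCH NEXT 2 ROWS ONLY over the UNION,
// so only two rows ever come back from the server.
var page = union
    .OrderByDescending(b => b.Posts.Count)
    .Skip(0)
    .Take(2);
```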
So, this guarantees that we are only going to get two records from the server for the paging scenario, and not the whole table to generate all the results, as it did in the previous version doing client evaluation. Correct. So, this was one of the things on set operations people really wanted, because of the paging scenario.

So, moving on to other things which were client evaluated in the past: N+1 queries. The most basic example of N+1 queries was doing FirstOrDefault on a collection. So, for the blogs we have here, suppose that from the post collection we want to get the latest post, the one which has been published most recently. That query was N+1 evaluation in 2.2. So, in 3.0... I remember what we did is basically we iterated over the main query, and then for each row we executed a second query, a third query, a fourth query, to get the data that we needed to project alongside the data from the main query. Yeah, that is correct. So, for the blog and post scenario, what we did was generate the blog query, and for each blog's ID we ran a query against the post table to get all the related posts for that particular blog. Which can be a disaster for performance. Yes, if you have a lot of posts.

So, if we put a custom projection here on our query: the projection is going to select, from our ordered, paged query, the blog and the latest post, where the latest post is defined as b.Posts ordered by date and then doing FirstOrDefault over that collection. And we change here how we are printing out, because our result has a different shape now. So, with that accommodated for the new result shape, if we run this query: FirstOrDefault is a windowed operation, because you cannot easily generate a join for it. So, what we have added in EF Core 3.0 is partial support for window functions: we generate a ROW_NUMBER expression and generate a LEFT JOIN with the query, so that we get the related data.
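The projection just described can be sketched like this, with `page` standing for the ordered, paged union query from the demo above; the `PublishedOn`, `Name`, and `Title` property names are assumptions.

```csharp
// Correlated FirstOrDefault in a projection. In EF Core 2.2 this ran as
// N+1 queries; in 3.0 it is translated into a single SQL statement that
// uses ROW_NUMBER() and a LEFT JOIN to pick each blog's latest post.
var blogsWithLatestPost = page.Select(b => new
{
    Blog = b,
    LatestPost = b.Posts
        .OrderByDescending(p => p.PublishedOn)
        .FirstOrDefault()
});

// Print the new result shape.
await foreach (var item in blogsWithLatestPost.AsAsyncEnumerable())
{
    Console.WriteLine($"{item.Blog.Name}: {item.LatestPost?.Title}");
}
```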
So, if you look at the first query, this whole query is how it was for blogs ordered with paging. And then we generate a LEFT JOIN with the post table, using ROW_NUMBER, to get the related data. So, this query was N+1 in our 2.2 release; now it is a single query. And in this scenario, since every blog is going to have at most one latest post, we are not going to get duplicated data from the server side. So, it is the most efficient query we can generate for it, which is a really drastic performance improvement. Yeah, I want to emphasize that before you move on. People often say, oh, look at this query that is so complex that you have generated for me. In this case, we actually believe that we are generating the most efficient translation possible for this. Even if the LINQ version of the query may look simpler, this is the best SQL that we can generate for it. That is the difficulty in translating LINQ to SQL. Yeah.

So, moving on, one more thing we want to go over in this demo is database interception. Another user-requested feature is query hints, so we are going to use database interception to add a query hint to a query. Here we have a union query, and suppose we want to append the query hint OPTION (MERGE UNION). For that, first we will define our interceptor class, which is MyCommandInterceptor. What MyCommandInterceptor does is override ReaderExecutingAsync. This method is called whenever you are executing a command to get a data reader from the server asynchronously; it will be called back then. So, what we are doing is: if our command text contains "merge union" in it, then we are going to add OPTION (MERGE UNION) at the end of the command, and then we just call base, which executes the command and gets the results from the database. So, the "merge union" text here is a kind of pointer for us that this is the query where we want to put the query hint.
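The interceptor described here can be sketched against the EF Core 3.0 interception API roughly as follows. The `DbCommandInterceptor` base class and `ReaderExecutingAsync` override are the real API; the "merge union" marker convention is the demo's own, and how that marker gets into the command text is explained next.

```csharp
using System.Data.Common;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore.Diagnostics;

public class MyCommandInterceptor : DbCommandInterceptor
{
    // Called whenever EF executes a command asynchronously to get a data reader.
    public override Task<InterceptionResult<DbDataReader>> ReaderExecutingAsync(
        DbCommand command,
        CommandEventData eventData,
        InterceptionResult<DbDataReader> result,
        CancellationToken cancellationToken = default)
    {
        // The "merge union" marker identifies the query that needs the hint.
        if (command.CommandText.Contains("merge union"))
        {
            command.CommandText += " OPTION (MERGE UNION)";
        }

        // Let EF execute the (possibly modified) command as usual.
        return base.ReaderExecutingAsync(command, eventData, result, cancellationToken);
    }
}

// Registration, when configuring the context (sketch):
// optionsBuilder.UseSqlServer(connectionString)
//               .AddInterceptors(new MyCommandInterceptor());
```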
So, how do we get that text into the query? In a previous release of EF, we added the TagWith API. What the TagWith API allows us to do is correlate a LINQ query with the server query. When you put TagWith on any query, and here we are going to write "merge union", then whenever this query is translated for the server, we are going to add "merge union" as a comment, as a tag, in the SQL query. So, when you are looking at your SQL logs, you know that this LINQ query generated that SQL query. And we are going to utilize that feature to know, during interception, that our LINQ query here is the one being executed right now, and that we have to add the merge union query hint. So, we are sending a message through the query pipeline from the top, saying to the interceptor: rewrite this query, because I want this to be added to it. Correct.

So, now that we have defined our interceptor, we also need to register it with EF. For that, we have the AddInterceptors API, which takes a params array of interceptors, so you can register multiple of them. Here we are just going to create and initialize a MyCommandInterceptor instance. This is how we register our command interceptor with EF. So, when we run this now, it will run exactly the same query as in the previous example, but as we can see, at the end it added OPTION (MERGE UNION). If we look at the full SQL, at the start there is also the comment "merge union", which is generated by TagWith, and which we used to identify where we have to add the query hint. Yeah.

So, to clarify, this is just a demo; this is not a query hints feature. We know that people want first-class support for query hints, and we are still continuing to evaluate that for a future release, but this is a good way to show the power and flexibility of interceptors. So, going back to the slides. Yeah, we also wanted to talk about another thing that we did today.
We released Entity Framework, the traditional version of Entity Framework, version 6.3, and the big new thing about it is that it runs on .NET Core now. So, the idea for this release is that if you have an application that you want to move to .NET Core, but it's complicated for you to also migrate to EF Core at the same time, and you have been using EF6 for a while, then you now have the option to move to .NET Core directly without switching to EF Core. We know that there are a few features in EF6 that are still missing in EF Core, so we believe that this is going to be a simpler path for many people to do the migration. On the other side, there are no new features, and there are no new features planned for EF6. It's not a code base in which we are investing a lot for the future, but it's still open source, and we are going to consider pull requests from the community: if they are super high quality and at low risk of breaking something, we will take them.

Another limitation of this version on .NET Core is that there is no support for the SQL Server spatial types, basically the geography and geometry types that live in a separate library. That library hasn't been ported to .NET Core yet, so we don't have it. And we also don't have support for working with the EF Designer directly against a project that targets .NET Core or .NET Standard. There is a workaround that consists of basically linking to files that exist in a project that targets .NET Framework, and there are more details about that in the documentation. And next: we ship one provider, the SQL Server provider, as part of the product for this release, but if you want any other provider, you need to wait until new versions of those providers are shipped. No, that is only the case if you want to use that provider with .NET Core.
Actually, EF 6.3 on .NET Framework is backwards compatible with EF 6.2 providers, so you are not going to need a new provider if you want to keep using .NET Framework; but if you want to move to .NET Core, you are going to need a new version of the provider.

And then we want to talk about the "beyond" part of the talk, which is what we are doing next, and there are a few things that are certain. The next version of EF Core is going to be a minor release, called EF Core 3.1. We are planning to release it by the end of the year, and it's planned as a long-term support release, which means that it's going to be supported for at least three years. And because it's a long-term support release and a minor release, we are not going to make any breaking changes, and we are not going to take any risky changes; we are basically going to improve the stability and the quality of the product with bug fixes and things like that, and maybe a few small enhancements.

After that, there is going to be EF Core vNext, the version that is going to ship alongside .NET 5. We don't know the version number that we are going to use for it yet, and everything that is in this list is under discussion. I just put together a few things that we know customers want, but we need to pick a few of them, because we cannot work on all of them at the same time in the same release. One thing that is very important is performance improvements. The version that we released today, EF Core 3.0, basically gives us a fundamental platform that we can keep building on, because of this new architecture for LINQ. We believe that it's going to be relatively easy for us to keep improving the performance, to add additional translations, and to make things better. That's the reason we made that investment. And after that, we can also take advantage of things like the new API for batching that we are going to add to ADO.NET for .NET 5.
And there are a few things here that are basically part of the gap between EF6 and EF Core that we know that we need to cover.