Hello. Thank you for watching our talk. We are Smit and Diego. I'm Diego, a Program Manager on the Entity Framework team and on .NET data access, and this is Smit. I'm a Software Engineer on the EF Core team; I work on the query compiler and the Cosmos DB provider for EF Core. Okay. So some of the features we are talking about today are things that he wrote, so we can ask Smit some questions about that. We are going to talk about Entity Framework Core 2.2. We are very happy that we shipped the second preview of 2.2 this morning. We published a blog post with a number of getting-started demos showing how to start playing with the new features, and we are going to show some of them today. So 2.2 is a relatively small release for us compared to 2.1. When we did the presentation at .NET Conf last year we actually had lots of demos and had to choose which ones not to show, but this year we have a smaller number of new features. Some of the features are really large for us, but mostly 2.2 is a release that's going to have a few features and many bug fixes, and we are already working on 3.0, which is going to be the next release. So let's start with the first feature. Our first demo is about collections of owned types. The idea with owned types, or owned entities, is that you can represent in the EF model a stronger relationship than just a normal relationship in a database. You can say, okay, this entity (the canonical example is an address) is owned by a customer. That means the address cannot exist independently of the customer, and when you want to manipulate data on the address, you need to actually load the customer and basically go through the customer to do anything. What we're doing in EF Core 2.2 is adding the ability to apply ownership also to one-to-many relationships, which is something that 2.1 didn't have. So let's go to the code.
So here we have a demo about, basically, a quick recap of owned types, which were introduced in 2.0. We have our Customer entity, which has a work address and a home address. As you can see here, Address is an owned entity. It does not have any primary key defined; it's just a few string values. In order to use an owned type, we use the OwnsOne API and specify the reference navigation. So in this case, WorkAddress and HomeAddress are both stored as owned entities. In the demo, we are first going to create the database and then store some of the data. If we run this demo... So one thing I wanted to mention that I forgot is that when you apply ownership to entities, it triggers some automatic behaviors in the model. That's one of the uses of it. For instance, you don't need to write Include to get the instances of owned types; they get included automatically in the results of queries. So here we have run some SQL. In the create table script we can see that WorkAddress and HomeAddress are both stored in the same table as Customers. Since they are in a one-to-one relationship, they can share a single row to store all the data. Next, we are going to insert some of the data. If we go to SSMS to look at the data... So if you are familiar with Entity Framework 6, it has a feature called complex types, and owned types are kind of a superset of what complex types could do. One of those things is that, by convention, the data of a complex type is stored in the same table as the entity that contains it. You're running into issues with the... Yeah, looks like SSMS is not working. Okay. I think we can show the... Yeah, we can just go forward with the demo. So in this example, with an owned type behind a reference navigation, we have created a single table. That's the relational behavior here. Now, moving forward, let's convert this to an owned collection. So suppose here we have a work address and a home address.
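Before moving on, here is roughly what the reference-navigation configuration just described looks like in code. This is an illustrative sketch, not the exact demo code; the entity and property names are assumptions:

```csharp
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Address WorkAddress { get; set; }
    public Address HomeAddress { get; set; }
}

// Owned type: no key property of its own, just a few string values.
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}

public class CustomerContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Each reference navigation is configured as owned with OwnsOne;
        // by convention both addresses map to columns in the Customers table.
        modelBuilder.Entity<Customer>().OwnsOne(c => c.WorkAddress);
        modelBuilder.Entity<Customer>().OwnsOne(c => c.HomeAddress);
    }
}
```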
If we want to store more addresses than this, or it is an unbounded set, then what we need is a collection navigation. So first we will add a collection navigation and remove the WorkAddress and HomeAddress navigations. Now we have addresses as a collection, which also requires us to configure them differently. In our model building code, we are now going to configure ownership for addresses with the OwnsMany API. So you call OwnsMany on the model builder and pass in the collection navigation. There is a small bug which currently requires you to have a primary key defined for owned entities, so we will need to add an Id property here. With this configuration, the only thing we need to change now is the data seeding part, which was adding data through the reference navigations; we will just move it to the collection now. So now we have configured a one-to-many relationship to use ownership. This is very similar to what a normal one-to-many relationship would be in a relational database. The behavioral difference is what Diego mentioned earlier: the automatic Include, the fact that an address cannot exist without a customer, that they have to be stored together, and all those things. In terms of database structure, it is still going to be the same. So if we recreate the database, here you can see we have a Customers table and an Addresses table. They are stored in different tables; they can't share the same row, because there could be multiple addresses per customer. Most of this part is the same, and then we can insert the data and it will go into the different tables. Yeah, from the perspective of an EF6 user, you can see that what EF Core does now is the equivalent of having collections of complex types, which is a feature that was requested for EF6 but we never supported. Also, this brings us back to the real reason we are adding this feature.
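The collection configuration described above might look roughly like this, again with illustrative names, including the extra Id property that works around the preview bug just mentioned:

```csharp
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<Address> Addresses { get; set; } = new List<Address>();
}

public class Address
{
    public int Id { get; set; }   // workaround: owned entities currently need a key
    public string Street { get; set; }
    public string City { get; set; }
}

public class CustomerContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // OwnsMany maps the owned collection to its own table on a relational
        // database, while keeping the ownership semantics (automatic Include,
        // no independent existence outside the customer).
        modelBuilder.Entity<Customer>().OwnsMany(c => c.Addresses);
    }
}
```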
If you want to talk about that, or do you want to... So the real reason we are adding this feature is that we are writing a Cosmos DB provider for EF Core. As you know, Cosmos DB is a non-relational database, and data is stored as JSON documents. For a JSON document, it is very natural to have nested elements. Each JSON document can have another JSON object nested inside: it could be a single entity, like a dictionary, or it could be an array, as a collection. So we have decided that we will use ownership to decide whether something should be a nested resource embedded in the same JSON document. That was the main reason: being able to nest collections of entities inside a JSON document is essential, so we are implementing owned collections along with owned reference navigations. Basically, it gives you a uniform way to talk about ownership across different providers. The implementation is going to be different depending on whether it is a relational database, in which case the collection is stored in a separate table, or a document database, in which case it is stored embedded in the same document. Okay. So the next demo we are going to talk about is our spatial extension. In preview two, which we shipped this morning, we have a spatial extension for the EF Core provider for SQL Server, and we also enabled the functionality in the in-memory provider. The plan is for 2.2 preview three to also enable it for SQLite. We are doing this because many customers have requested this feature in EF Core. It's a very popular request, and it enables working with databases that already have this functionality: databases that can store spatial data, compute functions over it on the server, index it, and make it efficient to work with, so you can do things like "give me all the rows whose location falls within this radius" and things like that.
So we are enabling that in LINQ as well. The way it works is this: we went looking for a .NET library that we could use on .NET Core and .NET Framework that had spatial functionality, and we found NTS, the NetTopologySuite. It's a library that another provider, the provider for PostgreSQL, is already using to implement some of this functionality, and it's a popular and decent library; we believe it works very well. So what we are supporting with this extension is that you create an entity and put a property on it that uses one of the types from this library. Let's say you want to use a Point, a LineString, or a Polygon property. Now we have the ability to map that to spatial types in the database. So let's say you are using a geometry column on SQL Server, you want to store points in it, and then perform some computations. EF, through this library, is going to enable that mapping. And whenever you write some computation on this type in a LINQ query, for instance computing the distance between this value and another, we also have the ability to find that in the expression tree and translate it to the corresponding function in SQL. We want to show that in the demo. So here we have a Measurement class, which is what we are going to use as the entity. A measurement basically records the time, the location, and the temperature at that location at a given time. We have set up our SensorContext to use and persist the Measurement entity. In our Measurement entity we have a Location property, which is of type Point. In order to save a Point to the server, we need to use a new package called Microsoft.EntityFrameworkCore.SqlServer.NetTopologySuite. With this package there is an extension method that comes into the picture, called UseNetTopologySuite.
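A minimal sketch of the setup just described. The entity shape and connection string are placeholders; only the Point type and the UseNetTopologySuite extension method come from the packages named above:

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using NetTopologySuite.Geometries;

public class Measurement
{
    public int Id { get; set; }
    public DateTime Time { get; set; }
    public Point Location { get; set; }   // maps to a geometry column on SQL Server
    public double Temperature { get; set; }
}

public class SensorContext : DbContext
{
    public DbSet<Measurement> Measurements { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer(
            "your connection string here",
            // Provided by the Microsoft.EntityFrameworkCore.SqlServer.NetTopologySuite package.
            sqlOptions => sqlOptions.UseNetTopologySuite());
}
```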
So with this, your SensorContext is ready to use these spatial types. We are going to store some measurements on the server and run queries against them. Let's first set up our database: we are going to create the database and store the sample data. As you can see in the create table script, Location is mapped to a geometry column in SQL Server. If we go to the server and look at the data, we stored points with X and Y coordinates, and they got converted to the geometry type and stored as a blob on the server. Now that we have saved data to the server, let's go run a query over it. Our current location is (0, 0), and we want to find all the measurements that are less than 2.5 distance units away from the current location, sorted by distance from the current location. When we run this query against the server, we basically get four results, in sorted order based on the distance. If you look at the query we ran, it has a where clause using the STDistance function, which is what Distance in LINQ maps to among SQL Server's functions in our current provider. So basically, the NetTopologySuite package we have included allows you to map types coming from NetTopologySuite to the relevant SQL Server types, and it also includes translations for LINQ. All the methods and functions that NetTopologySuite provides on its types are translated to the equivalent functions on SQL Server for querying, so most of the query can be done on the server side without fetching all the data from the server. Okay, we have some limitations, though, in the current implementation that we shipped as part of preview two. For instance, there is a type that is also very popular in SQL Server, geography, which is very similar to geometry but optimized for scenarios that involve points on the earth and similar scenarios.
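The distance query from this demo can be sketched like this, assuming the SensorContext and Measurement entity from earlier (the variable names are illustrative):

```csharp
using System.Linq;
using NetTopologySuite.Geometries;

var currentLocation = new Point(0, 0);

var nearby = context.Measurements
    .Where(m => m.Location.Distance(currentLocation) < 2.5)
    .OrderBy(m => m.Location.Distance(currentLocation))
    .ToList();

// Distance is translated to SQL Server's STDistance, so both the
// filter and the ordering run on the server rather than the client.
```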
So geography doesn't work very well, or at all you could say, in preview two, but we are planning to improve that for the next preview. The other issues: you already mentioned, or I already mentioned, that we don't support SQLite yet, but we're planning to add that. Do you remember any other limitation? No, that's it for me. Okay. So we're going to show another, smaller feature that we're working on, called query tags. Query tags is a feature we thought about a long time ago but hadn't implemented yet. People who write large applications that generate lots of queries against the database often need to correlate the queries they find in their logs with queries in the code. That may happen when you are doing the post-mortem of some incident, some performance problem you had with your database. Query tags give you the ability to flow a string, a piece of data that uniquely identifies a query, so that it also gets emitted in the SQL, and we emit it as a comment. So let's show what the API looks like. To use query tags, we are going to use the query from our last demo as is. In order to use a query tag, all you need is to call the TagWith API with a string, which will go as a comment into your SQL. You can put it on the query root or anywhere in the query tree; here we are putting TagWith right after Measurements. Now when we run this query against the server, it generates the same select expression and the same results for us. But as you can see in the logs here, the DB command that executed has a comment with an EF Core prefix: "This is my special query", which is what we wrote in the tag. So using TagWith, we print out this comment, which allows correlation. It is also useful for scenarios like collection Include, in which case EF Core is going to generate multiple queries to avoid redundant data being fetched from the server.
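The TagWith usage just described can be sketched as follows, reusing the assumed SensorContext from the spatial demo; the tag text itself is arbitrary:

```csharp
using System.Linq;

var measurements = context.Measurements
    .TagWith("This is my special query")
    .Where(m => m.Temperature > 20)
    .ToList();

// The generated SQL begins with the tag emitted as a SQL comment,
// so you can grep your database logs for it later.
```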
In that case, a single LINQ query generates multiple select expressions, and the tag lets you correlate those scenarios as well. Yeah, so in summary, it's a small feature that can help you correlate your queries with your logs. So the next demo is a little bit more interesting: we are going to show the Cosmos DB provider that we have been working on. Now, I want to start by saying that we have been showing a Cosmos DB provider prototype, put together as a proof of concept, for some time. I believe, if I'm not mistaken, we showed that prototype in the .NET Conf talk we did last year. The biggest difference now is that the code we are going to show is for real. It's basically the code that we plan to keep working on and improving. It's the real deal; it's not a prototype anymore. The prototype was very useful for getting some feedback from you. The main piece of feedback we got was that you see the value in having the ability to use the EF Core API to target Cosmos DB as an application database. People who are used to .NET are used to how data access works in .NET, and they are using EF or have been using EF for some time. They see value in being able to just target Cosmos DB as a different option. Of course, Cosmos DB, being a NoSQL database with its own benefits and its own really fantastic characteristics, is going to give you different value. You are still probably going to want to do some things directly against Cosmos DB that you would do differently, but at least you have some uniformity in how you deal with it through the EF Core API. Let's look at it. So here, for the purposes of the demo, we have a blog and post model. Each blog contains a list of posts. For this demo we are going to use the Azure Cosmos DB emulator, which is basically a localhost-based emulator Azure provides to develop and test apps targeting Cosmos DB.
So in order to use Cosmos DB, the API is basically UseCosmosSql, which takes the endpoint, the authentication token, and the database name. There is also a plan from the Azure Cosmos DB team to have an API based on a connection string, which combines all of these into one instead of specifying them separately. Once that is enabled on their side, we also plan to add first-class support for it, so you could provide a connection string just like you do with all the relational databases. So our blogging context is set up to use UseCosmosSql. Now let's first create our database. We created the database, and if we go to the emulator now, in the data explorer we can see our demo database is already created, with a single collection, Unicorn, inside it. The next thing we are going to do, after creating the database, is add a bunch of records to it. Here we have a blog which has two posts inside it, and two other blogs without any posts, so five records in total. Using SaveChangesAsync, we saved those records to Cosmos DB. If we go back here and look at the documents, we have five documents in our Unicorn collection. As you would notice, blogs and posts are both stored in the same collection. This is one of the things we changed from our prototype last year. In the prototype, or the demo we showed previously, every inheritance hierarchy of entity types would map to its own collection. This is generally what we follow in the relational model: every hierarchy goes to its own table. The feedback we got was mainly based on the pricing model of Cosmos DB. Each collection in Cosmos DB has a fixed minimum amount of throughput quota and storage quota assigned to it, so if you have a collection that does not have many records, or a low utilization rate, it is going to cost you a lot of money even though you are not using it.
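For reference, the context configuration used at the start of this demo might look roughly like the following. This is a sketch against the 2.2 preview API as described above; the entity classes, the exact parameter order, and the database name are assumptions (the endpoint and key shown are the well-known fixed defaults of the Cosmos DB emulator, not real secrets):

```csharp
using Microsoft.EntityFrameworkCore;

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        // Endpoint, authentication token, and database name, as described above.
        => options.UseCosmosSql(
            "https://localhost:8081",
            "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==",
            "EFCoreDemo");
}
```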
Based on that pricing feedback, and what customers told us, we decided that it is in the best interest of customers to store all the documents in a single collection by default. We have also built this in a way that lets customers decide how they want to store things: across multiple collections, or everything in the same collection. There's a way to configure it. In order to store multiple entity types in the same collection, we also introduce a discriminator property. In this document, as you can see, the discriminator value is "Blog". In the same way, for a post, the discriminator property would have the value "Post". That is what we use to identify which kind of entity a document is, so when we are materializing it back, we know which data to read. Another thing here is the "id" property in a JSON document: on the Cosmos DB side it's a string of at most 255 characters. For a user application, the key could be a variety of things. Furthermore, when there are multiple entity types in the same collection, those keys could clash with each other if they have the same value. That's not incorrect data, since the entity types are totally different types, but because of how it is mapped to the database it could cause an error. Because of that, we have decided that "id" is something we generate automatically, and the primary key is stored separately. The primary key is unique, and it is treated as such in the EF Core model, but it isn't strictly enforced on the database side; the database uses the "id", which is auto-generated. Preview two currently has a limitation here: while using the preview two package, you may need to provide your own ids that don't clash with each other across different entity types, but that is already fixed in our nightly builds.
So after... One detail, I don't know if you remember this, but the discussion about mapping to the same collection by default started because of somebody posting feedback to Channel 9 after we did the demo of the prototype. So we hope that people are going to start giving us more feedback; we actually need that feedback in this case. This feature is huge. The spatial extensions are also big, and normally we need some time to mature features like these, to make sure that when we ship them for the first time they are something we can keep evolving without too many breaking changes, and that we get the design right. So the more feedback you can give us, the more chance we have of actually getting it right on the first shot, in the first release, and the more chance we have of including these features in the RTM of 2.2. So we welcome all the feedback. In our demo, we have now stored the data. What we are going to do next is query the data from the server. First we will iterate over all the blogs in the database. In order to query, we are going to run a SQL query, because the document database's SQL API provides that functionality. So we run a SQL query with a where predicate. Here, the where predicate is used to specify the discriminator: since we are only querying for blogs, we don't need to get data for posts or any other entity type stored in the same collection. Therefore we introduce the discriminator, and as you can see in the result, we have the three blogs stored on the server. Next, we want to load the posts for the ADO.NET blog. First we will query for the ADO.NET blog, create an entry for it, and use the Collection API to load the navigation. If we go further, the first query we run loads the ADO.NET blog from the server. Here in the predicate we also specify c.Name = 'ADO.NET'. In the current LINQ implementation we have quite a lot of unary and binary operators getting translated.
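The explicit-loading step just described can be sketched like this, assuming the demo's blogging context; the names are illustrative:

```csharp
using System.Linq;

// First query: fetch the single blog, filtered on the server.
var blog = context.Blogs.Single(b => b.Name == "ADO.NET");

// Second step: explicitly load the Posts collection navigation.
// With the Cosmos DB provider this issues a second query, filtered
// by the Post discriminator and the blog's key.
context.Entry(blog).Collection(b => b.Posts).Load();
```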
So the implementation, at least for the filtering part, the where predicate, has quite a few things that get server-evaluated, which avoids fetching unnecessary data from the server. We are also looking into improving this and mapping server-side functions for more of the LINQ operations, so that more and more gets evaluated on the server. That is the biggest improvement we have to make in terms of querying against Cosmos DB. The next query we run, after fetching the ADO.NET blog, loads its posts. This is the second SQL query we generated, which basically uses the discriminator on Post and passes the blog id as a parameter, and in the results we have the two posts for the ADO.NET blog. Now that we have saved data and queried data, let's modify the data. Take the first post, which says "welcome to this blog"; let's just remove its content, because it was just a test post. Going further, we are going to change the content to "content removed" and call SaveChangesAsync. If we go back to the server, this was our first post, post id one and blog id one, and the content is now "content removed". So as we can see in this demo, we have all the basic functionality implemented in the Cosmos DB provider for users to try out: database creation, insert, update, and delete are implemented, along with a simple querying API. One thing people have probably noticed in this demo is that for EnsureCreated and EnsureDeleted, or APIs like SaveChanges, we are calling the async versions. At present we are using the SDK from Azure Cosmos DB, which provides only async APIs for DML and DDL, so in our initial implementation we implemented async only. We plan to add sync APIs as well, using the REST API and so on, but that is not yet in preview two. In the same way, on the other side, the querying API is at present only sync-based; we haven't introduced the async query pipeline yet.
So all the queries being run, you cannot yet run them asynchronously. That is one of the limitations right now: if you use the synchronous API for SaveChanges, or the asynchronous API for querying, you're likely to run into a NotImplementedException. But that is not the long-term state; it is a current limitation only. Another limitation I remember we have in preview two is that inheritance doesn't work very well in queries: if you issue a query for a base type, you may not get the derived type instances. That's just a bug in our query implementation. There's one other thing that we really want to do, which is the thing we talked about with ownership: we want to embed the owned entities in the same JSON payload, and we haven't implemented that yet. And there are more query translations that we want to do. Yeah, so with Cosmos DB, since it is a non-relational database, there are no foreign key constraints, and there is no support for joins. Certain operations in Cosmos DB are quite costly, because it's a globally distributed database, so you may not have all the data in a single cluster. Because of that, there are quite a few things we realize could be translated differently. In the demo we used the Load API to actually load the posts instead of using Include. The way Include has been implemented right now, it queries for posts using an inner join with the first query for blogs. When we try to do the same thing with Cosmos DB, we can't, because inner joins are not supported, and we would have to do that on the client, which would cause a lot more data to be fetched from the server than is needed. So we plan to improve the querying in a way that accounts for what is intrinsic to the design of Cosmos DB: which query operators do not work, and what the most optimal way to get the data is.
So for the join scenario, it would be better to fetch based on key values, or based on a where predicate. Those are some of the things we plan to improve in the query pipeline. There are also a lot of server-side functions that are not mapped right now; for instance, TOP and ORDER BY are not server-translated. Those are minor things; it's just taking time, so they have not been implemented yet. We are constantly looking to improve this, so that by the time we release an RTM version we have a really good story about query translation. Another story that we currently don't have any solution for, but where we want to learn from the feedback we get on the provider, is what happens when you evolve the model of the application. Let's say you are using Cosmos DB to serve data for a mobile application, and then your schema changes, or, well, Cosmos DB doesn't have a schema, but the object model that you are using in your application changes on the server, and you still need to support older clients alongside new clients that are going to store new data the older clients don't understand. One thing we have done to support that already is that when we fetch a document from the database, we keep a copy of the whole document. We don't just keep the values of the properties that are mapped in the EF model; we keep the whole document, so nothing is lost when we do a round trip. But the scenario where we need to migrate the existing data in the database to a new schema, and things like that, is something where we just need to learn what the highest priorities are for our customers using Cosmos DB. We are going to learn that only by iterating on it, talking to customers, and getting feedback. So you're welcome to talk about that with us. So next we have the roadmap. Yeah, let's talk about what we are working on.
So we are planning to ship EF Core 2.2 by the end of this year. Besides all the features we talked about today, it's also going to contain another small feature: the ability to reverse engineer a database view into a query type. Query types are something we introduced in EF Core 2.1; a query type is basically like an entity type but without a key, and it's read-only, so it is ideally suited for mapping to views in databases. Beyond that, if we move to the future and talk about things that are a little bit more uncertain: next year we are going to ship EF Core 3.0, and we don't know exactly what the feature set is going to be. We have discussed different sets of features for EF Core 3.0 more than once. What we know we would like to include in the release is on this slide. Basically, the main thing, which Smit is actually working on, is an overhaul of our LINQ implementation that is going to make it more robust and much better. We also have features like the ability to map to new constructs in C# 8. C# 8 is planning to add support for a standard interface that represents asynchronous collections, so that you can stream data, like query results, asynchronously. Currently we are using a more ad hoc version of this interface, IAsyncEnumerable, that is defined in Microsoft's Interactive Extensions. But now there's going to be a standard version of this interface included in the language, and the language is planning to do some work to enable interesting features like foreach or iterators over this interface. So we want to converge on the same interfaces. Another feature that C# 8 is adding is nullable reference types, or actually, what is really being added is non-nullable reference types.
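As a sketch of what that standard interface enables in C# 8, here is a hypothetical consumption pattern; the AsAsyncEnumerable call is assumed to be exposed by a future EF Core query pipeline:

```csharp
// C# 8 'await foreach' streams results via IAsyncEnumerable<T>
// instead of buffering the whole result set before iterating.
await foreach (var blog in context.Blogs.AsAsyncEnumerable())
{
    Console.WriteLine(blog.Name);
}
```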
Non-nullable reference types are the new thing we can take advantage of, because knowing that, let's say, a string property is not going to be nullable, we can make the column non-nullable in the database as well. And at the same time, when we generate the SQL, we know it's not going to be nullable, so we can simplify the SQL that we generate. There are other features, like property bag entities, that we would like to include. This is about being able to map to an object that doesn't have a dedicated CLR type; it's not a POCO, but just a dictionary of string to object. This is very useful for two reasons. One is, for instance, the request to implement many-to-many relationships without an entity in the middle. That entity could be hidden, basically; it could be just a property bag that we don't expose, and you could have your navigation properties going directly from one entity to the next, skipping one level. The other reason this is interesting is because F# has type providers. If you are an F# developer, you probably know about type providers, and you know that erasing type providers are about having types generated at design time that don't get preserved at runtime. Having EF support property bags as the backing types would enable that scenario. Another feature we have here is aggregate behaviors. Aggregate behaviors are also based on this idea of document databases, supporting Cosmos DB, and how we can have a uniform way to represent certain constructs across different providers. When we talk about document databases, we often relate that to aggregate-oriented databases. That is a concept Martin Fowler talks about a lot in one of his wiki documents. The idea is that, well, a document is just an aggregate of data, and if the ORM reasons about that aggregate of data, the aggregate can be mapped to a relational database as well.
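A quick illustration of the C# 8 nullability annotations themselves; how exactly EF Core will consume them for column nullability and SQL simplification was still being designed at the time of this talk:

```csharp
#nullable enable

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; } = null!;  // non-nullable: a candidate for a NOT NULL column
    public string? Description { get; set; }   // nullable: a candidate for a nullable column
}
```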
And an aggregate doesn't need to live only in a document database. So we are thinking about making more behaviors around aggregates automatic: cascade deletes can depend on ownership and be automatic, and things like concurrency control can also be automatic in that way. The next thing we have on this slide is Entity Framework 6.3 working on .NET Core 3.0. This is something we started talking about in May, when we made the first announcement about what .NET Core 3.0 was going to be. If we go to the next slide, we have a number of reasons for doing this. The main reason is that we want people to be able to move from .NET Framework to .NET Core 3.0 very easily. .NET Core 3.0 is going to support a number of desktop-centered technologies, like WPF and WinForms, and we find that many existing applications are using EF, EF6 in particular, or previous versions of EF, for data access. We want to enable those applications to move to .NET Core 3.0 very easily and very cheaply. That said, we are not actually changing direction: we are still going to recommend EF Core for all your new applications, and any time you can afford porting to EF Core 3.0, that will be the recommended path forward. We are not adding new features to EF6 and are not thinking about investing in new features. We can look at pain points and try to solve them, and because EF6 is also an open source project, we can get pull requests, and we are going to review them, but we are not making a big investment. We are investing in EF Core 3.0; that is already paying off, as it already has many more features that people wanted for EF6 but that EF6 made super hard for us to add. Another thing here is that EF6, 6.3 on .NET Core, is planned to be cross-platform. There is no reason to limit it to just Windows; it should work on macOS and on Linux.
Also, most likely we are not going to have spatial support when you are running outside of .NET Framework, because the SQL Server spatial types that EF6 depends on directly currently only work on .NET Framework. And if you are working with an existing provider like Oracle, for instance, and you want to port your application to .NET Core 3.0, then you are going to need Oracle to ship a new version of the provider compiled to work on .NET Core as well. So, well, that's all for EF 6.3 on .NET Core. Our final message: our demos are available at the aka.ms EF Core demos link. Again, I don't know if I said this enough: we really need your feedback to make sure that we finish these features with high quality, and with all the iterative process that is needed to develop software. The names of the main packages, I put them there so that you can download them very quickly. And finally, we want to thank you. We are having a great time developing software as an open source project with this developer community, and from the whole Entity Framework team we want to thank you for all the feedback and all the great interactions that we are having. So we are ready for questions. The first question I'm seeing here says: in old EF, you made a conscious decision to issue a single query for every enumeration. It says more things, but that's the main question: EF Core doesn't do that. And this is a very important question; this is an interesting conversation to have. One of the decisions we made very early when we were designing EF Core is that it wasn't going to be only for relational databases. One thing that is sometimes different between different kinds of databases is the richness of the query language that you can use to talk to the database, and the capabilities that it has.
So we wanted to implement LINQ as the query experience, but then we had to translate that LINQ to some language that could perform those operations efficiently on the server. Now, if you are working with a SQL database, there are a number of things that you can write in LINQ that are going to translate to SQL. But there are other things that you can do in LINQ that are not going to be translatable to SQL. If you are talking to a different kind of database, you can probably translate even less to whatever language you are trying to target. So the idea was, well, actually we got a lot of feedback from customers using previous versions of EF that said: I don't want EF to throw whenever there is something in the query that cannot be translated to SQL. I want it to just evaluate it anyway. I know that it can be a little bit more expensive to evaluate it that way, but if it needs to issue more than one query, I want that to happen. So it's basically a user option to be able to do this. And we decided that EF Core was going to have this option enabled by default: if EF Core needs to resolve a query and it's going to cause more than one round trip to the database, it's going to do it, and it's going to issue a warning. Now, we have a warning system that you can configure to actually make it throw in that case. And we are also thinking about making it even easier, like having a method on the query that you can use to say: execute this query, but if it is going to cause multiple round trips or a lot of client evaluation, just throw, because I don't want that to happen. So do you have anything to add to that? To add more to that, that question specifically mentioned N plus one evaluation. So yes, it is true that EF6 did only a single query.
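The warnings configuration mentioned above can be sketched in code. This is a minimal sketch, assuming a made-up `OrdersContext` and a local SQL Server connection string; `ConfigureWarnings` is the EF Core 2.x mechanism for turning the client-evaluation warning into an exception:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

// Hypothetical context used only to illustrate the configuration.
public class OrdersContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options
            .UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=OrdersDemo")
            // By default, EF Core 2.x only logs a warning when part of a
            // query is evaluated on the client; this makes it throw instead.
            .ConfigureWarnings(w => w.Throw(RelationalEventId.QueryClientEvaluationWarning));
}
```

With this in place, a query that would otherwise silently fall back to client evaluation fails fast at development time instead of issuing extra round trips in production.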
So there was never more than one query, and in certain cases EF Core is going to do one query for each record. That is something which is happening right now, and we don't plan to convert it back to how EF6 worked. The main reason we moved to a multiple-query mode is that, because EF6 would do only a single query, certain data coming from the first part of the query would get duplicated way too many times, which is unnecessary. Therefore, we moved to a split-query mode where we are going to run multiple queries. At the same time, we are also looking into combining some of the N plus one queries. So instead of doing N queries, it would do two or three queries based on how the data is shaped. We also have a tracking issue for the cases where none of those optimizations work: we would still combine queries based on the set of key values, so instead of sending N different queries, it would send them in chunks of a certain fixed size. So those improvements are coming, but it is going to be different from how EF6 was. That's a good point. There are situations in which things that you write in LINQ can be translated to SQL, but we are just not doing it yet; that we need to improve, we need to make it more efficient. One example of that would be GroupBy followed by FirstOrDefault, for which there is no direct support in SQL. There are various ways to write that query, but yes, that is something which is complex and we haven't gotten around to it. But we plan to improve all those kinds of things to avoid N plus one round trips. Yeah, we are focusing on this question a lot, but I think it's an important question. One scenario in which it's clear that having this model is a benefit is when we implement Cosmos DB: there are certain things that you can do in relational databases that you cannot do there, like joins across different collections, or across different entities basically.
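As a concrete illustration of the GroupBy/FirstOrDefault shape mentioned above (the `Orders` model here is made up), this is the kind of query that EF Core 2.x cannot yet translate into a single SQL statement:

```csharp
// "Latest order per customer" - a common pattern with no direct SQL
// equivalent. In EF Core 2.x the GroupBy and FirstOrDefault run on
// the client after the Orders rows have been fetched, which is exactly
// the kind of client evaluation the warning reports.
var latestOrderPerCustomer = context.Orders
    .GroupBy(o => o.CustomerId)
    .Select(g => g.OrderByDescending(o => o.PlacedOn).FirstOrDefault())
    .ToList();
```

In SQL this would typically be expressed with a window function or a correlated subquery, which is one of the translation improvements discussed here.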
So the only way we can implement that in EF Core is actually by doing some of the evaluation of that join in memory. And especially for the example of Cosmos DB, the N plus one evaluation could actually be good, because given its latency SLAs, it is really fast to fetch the data from the server and let the client compute the complex query. Okay, so let's move to the next question. The next question is: how is the Oracle connector coming? That's another good question. So the answer to that has two layers, and several options in each layer, I guess. There are commercial providers that are already available for Oracle, for EF Core and for ADO.NET on .NET Core. Now, the Oracle team, as far as I know, released beta 3 of their ADO.NET provider for .NET Core, and they are working on a provider for EF Core. I don't know the exact dates they are planning to release, but they are working on this. I believe that the ADO.NET provider layer is actually very, very close to being done. And the other thing we did was actually create ourselves a sample Oracle provider for EF Core that is based on Oracle's ADO.NET provider. The reason we did this is that we wanted to help anyone that wants to write an EF Core provider for Oracle, and we also wanted to discover, by doing this exercise, what limitations we had in our provider model that people would hit. So Oracle, and anybody else, can use our sample Oracle provider as the starting point for their provider. So the next question we have is: can we connect to MongoDB using the same Cosmos DB API? So Cosmos DB is a single database, but it has different access APIs, like the SQL API, the Cassandra API, the MongoDB API, Table Storage. Our current implementation actually targets the SQL API only. So even though Cosmos DB is a single database, each API has a different set of characteristics, which requires a different way to access the data or generate certain queries.
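For reference, this is roughly what wiring up the Cosmos DB SQL API provider looked like in the EF Core 2.2 previews. It's a sketch: the endpoint, key, and database values are placeholders (the endpoint shown is the local Cosmos DB emulator), and `UseCosmosSql` was preview API that could still change before release:

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class CustomersContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseCosmosSql(
            new Uri("https://localhost:8081"),   // Cosmos DB emulator endpoint (placeholder)
            "<account-key>",                     // placeholder account key
            "CustomersDb");                      // database name
}
```

Because the provider targets the SQL API, entities like `Customer` are stored as JSON documents, which is the commonality with MongoDB discussed next.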
So our current plan is not to have a provider that would target all the different APIs of Cosmos DB. We are making it for the SQL API right now specifically. But we believe this work will actually provide a certain non-relational abstraction for people who would want to write a provider that targets different APIs of Cosmos DB, or it could even target other non-relational databases out there which are not Cosmos DB. Yeah. Especially, MongoDB and the Cosmos DB SQL API have in common that they store documents as JSON. So a lot of the work that we are doing in that area is something that can probably be leveraged in a MongoDB provider. But, for instance, the SQL generation doesn't make any sense there. There is, by the way, a MongoDB provider that somebody in the community is working on; I don't know exactly what the status is for that. But hopefully, with all the work that we are doing for the Cosmos DB provider, that's going to be easier. Next: can Entity Framework Core be used to process large data sets for data science and machine learning? Actually, that's something that I would like to hear about from people that are doing data science and machine learning, to hear what their experience is. We haven't done this. We have had only initial conversations with people on the team that are working on machine learning to see if there is any opportunity, mostly because we have customers that are asking the same question, but we haven't learned much about it yet. What about skip-level navigations? So that's one of the components that we have planned for many-to-many. That's something that we would like to do next year, but we don't know exactly in what release it's going to fit. It depends. Are there plans for Cassandra with EF Core? No current plans. It's also an interesting database that we are starting to think about. Hopefully, we are going to be closer when we finish Cosmos DB than we are today. Many-to-many support like there was in EF6?
Yes, just answering the same question again: we would like to have that next year, but we don't know yet if it is going to fit, and exactly in which release it's going to fit. Any plans to support single-statement bulk updates and deletes in EF Core, instead of a single statement per manipulated row? That's something that we have in the backlog. It's not the highest priority. It's interesting, and the issue is still not closed, because it's something that we believe would eventually be very good to work on, and it would make some scenarios very efficient. The easiest of those scenarios is probably bulk deletes, because you only need a set of keys to generate the SQL. But it's not the top priority, just because we have so many other things that we need to work on. So apparently, we don't have any more questions. If we don't have any more questions, I guess you can ask some questions offline. You can use Twitter, or you can use the comments here on Channel 9 to ask questions. Thank you very much for watching this.