Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager at DATAVERSITY. We'd like to thank you for joining this DATAVERSITY webinar, Moving Beyond SQL: Delivering Personalized, Responsive Experiences That Customers Crave, sponsored today by Couchbase. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. We will be collecting questions through the Q&A panel in the bottom right-hand corner of your screen, or, if you like to tweet, we encourage you to share highlights or questions via Twitter using the hashtag DATAVERSITY. And if you'd like to chat with us or with each other, we certainly encourage you to do so; just click the chat icon in the bottom middle of your screen. As always, we will send a follow-up email within two business days containing links to the slides and the recording of this session, along with any additional information requested during the webinar. Now let me introduce our speaker for today, Tyler Mitchell. Tyler is the Senior Product Marketing Manager at Couchbase, where he works closely with a broad group of teams across the organization to bring new products to market. He focuses on empowering customers by creating awareness and understanding of Couchbase's latest innovative capabilities and how Couchbase technology fits into the overall enterprise database market. Tyler's background is in big data and SQL analytics. He previously worked as an engineering director with the Actian R&D group for Ingres, and as Executive Director of the open source geospatial foundation, OSGeo. Tyler, hello, and welcome.

You went with the long bio, I guess, not the elevator-pitch one. Thanks for having me, Shannon. This is great. And thank you, everyone who's joined, and joined on time. I really wish we could have more of a conversation, maybe at an event sometime.
We could sit down and talk through some of the items we're going to touch on today. But I wanted to bring the core foundational principles that have driven us forward as a company as we respond to customer needs and demands in the marketplace, and share some of that perspective with you today. With that, I'll start the presentation by changing the title; let me just click on the right screen here. I really should have called this Beyond Relational, because, as you'll see, we still enjoy the benefits of SQL-like query languages within Couchbase and within the broader market. NoSQL doesn't always mean abandoning the query language itself, so I'll get that out front. We're going to move through about four sections of information. I'm not going to spend a lot of time talking specifically about Couchbase, so if you're turned off by hearing too much vendor pitch at the beginning, don't worry: it's light and should be fairly digestible. But I will compare and contrast how Couchbase fits the needs of the modern architectures we have for developing engaging applications. I'll also talk a little about how you can get into the NoSQL space, and give you some good resources to link to at the end that will help you assess your own situation and plan your next steps. So let's get one of the biggest assumptions out of the way right now; or, depending on where you are in your application development path, you may know this already: the interactions customers have with applications are far more numerous than the transactions themselves.
Think about how a typical end user interacts with an application, say a web application: clicking a link, liking a tweet, starting a game, entering a search term, playing a video, adjusting the volume. All these little interactions are quite numerous, and hopefully, at the end of it all, they lead to a transaction, making an online payment for example. That transaction might be what you're really looking for, but building up this big volume of interactions and supporting that kind of interaction in a meaningful, engaging way is really what we're talking about. We're talking about moving beyond relational because the typical traditional relational approach was: we need to track some transactions, and that's what we're interested in doing. But there's this whole other realm of interactions with your customer base, and user base I should say, that hasn't been captured very well over the last while. And needless to say, these are just a few of the expectations that customers have now. They want personalized, responsive experiences, and that's not just expected but demanded. Build an app that's slightly less personalized than your competitor's and you're out of the running in a lot of cases. These look more like consumer examples here, but really we're also talking about users within your own organization that you're building applications for, not just consumers. Behind these requirements is the need for a really flexible architecture. The users, or customers in this example, are demanding. They have dynamic needs, unpredictable in many ways, and these are not the same requirements we've had in the past. We'll get more into it, but use cases for specific apps can change, pivot, and grow dramatically and very quickly, far bigger than you expected them to be. We'll come back to how we handle that in the NoSQL world.
I should say, application developers often know this, but database owners have a different perspective: they're just trying to run a stable transactional database. The merging of application flexibility and database flexibility is really what we're talking about today as well; both have to work together to produce the experiences users demand. Historically we've had two kinds of approaches. There was the transactional database, and then, when we eclipsed the capabilities of real-time transactional analysis, we had the analytical database. Those were the two broad buckets of databases, and failure in those two categories came when we pushed them too far to the edge and tried to get them to do things they weren't architected for. So we end up with a scenario where additional solutions are required that don't fit firmly into either bucket. You might take a transactional database and put the data into a cache and make that available to your web app, or you might put it into a search system and make that available to your web app, and so on. There are a bunch of different little solutions hanging out there, but these are really the meat and potatoes of where user interaction happens. Many databases have attempted to add this new functionality by bolting little components onto the side of their core database product, but that sacrifices the manageability of the database and still doesn't solve the agility problem.
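The cache-in-front-of-the-database pattern described above is usually implemented as cache-aside. Here is a minimal illustrative sketch in Python; the `cache` dict and the `query_database` stub are stand-ins for a real cache tier and relational database, not any vendor API:

```python
# Toy cache-aside pattern: check the cache first, fall back to the
# database on a miss, then populate the cache for later readers.
# `cache` and `query_database` are illustrative stand-ins only.

cache = {}
db_queries = 0  # counts how often we hit the (slow) database


def query_database(user_id):
    """Stand-in for an expensive relational lookup."""
    global db_queries
    db_queries += 1
    return {"id": user_id, "name": f"user-{user_id}"}


def get_user(user_id):
    doc = cache.get(user_id)
    if doc is None:              # cache miss: go to the database
        doc = query_database(user_id)
        cache[user_id] = doc     # populate cache for subsequent reads
    return doc


get_user(42)   # first call: misses the cache, queries the database
get_user(42)   # second call: served entirely from the cache
```

The catch the speaker alludes to is that this bolt-on approach leaves the application responsible for keeping the cache and the database consistent, which is exactly the manageability cost of side-car solutions.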
So we've come at it from a slightly different angle. We looked at the end user and the kinds of engagement they need to have with the data we're collecting from them, and we arrived at this idea of an engagement database. You can really think of it as an integrated data platform that serves those customer needs in a more flexible way and reduces the complexity of the overall architecture. If we went back to that previous slide with all the little dots, we can roll those requirements up into a new type of database that serves those needs best without sacrificing manageability, agility, or performance. We'll deal with each of those today. These are what we call the attributes of an engagement database: the kinds of features you're going to need in the solutions you're building. It's interesting, when you read through these it sounds like common sense, but we are still pushing the envelope on several of these fronts. As I was mentioning earlier, applications, the front ends that interact with users, are tied tightly to their underlying databases, and those databases are tied tightly, in the cloud example here, to their mode of deployment and limited by their ability to scale. The higher-level applications will suffer if these attributes are not addressed in the solution you're building on top of. It may not be obvious even now. If you're a smaller shop building your own applications, maybe a mobile solution with high-availability requirements deployed on cloud, you might think you've got it set if you solve those three. But then you launch, you exceed your expectations, thank goodness, and you have to scale, and you have changes to your product.
You'll grow, and you'll eventually hit every one of these issues. By built-in smarts we mean things like manageability, management consoles, and other additional features. The other attributes here are fairly obvious and I won't go through each in more detail, except to say that these are specific attributes of an engagement database that are not traditionally the purview of a relational database, although relational systems have tried to deal with many of these problems wherever they come up. This is my only overview slide of our product as a data platform. Front and center, the red items represent the core expectations for scalable systems. There is a persistence layer, but there is also a memory-first architecture: it's not just an in-memory database without persistence. We do store the data, we do save it, we do replicate it, and we have high-availability capabilities. That memory-first persistence and elastic scalability is a really important core function, and then we also have replication capability, which I'll talk about in a bit. On top of all that core capability, we have different services for accessing the data, whether through querying, through SDKs, or through full-text search, for example. Those capabilities are all serviced by the same core, and that's really the differentiator compared to typical relational systems: memory-first scalability, persistence, and replication are not the problem they set out to solve, but they are what we set out to solve out of the gate.
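To make the memory-first idea concrete, here is a deliberately simplified Python sketch. This is a conceptual illustration only, not Couchbase internals: writes are acknowledged from memory first, and a later flush step persists and replicates the queued mutations. The class and its fields are all hypothetical names for this sketch:

```python
import json


class MemoryFirstStore:
    """Toy memory-first store: writes land in RAM and are queued,
    then a flush step persists and replicates them. A conceptual
    illustration, not how any real product is implemented."""

    def __init__(self):
        self.memory = {}    # primary in-memory copy
        self.pending = []   # mutations awaiting durability
        self.disk = []      # stand-in for the persistence layer
        self.replica = {}   # stand-in for a replica node

    def set(self, key, doc):
        self.memory[key] = doc           # acknowledged from memory
        self.pending.append((key, doc))  # queued for persistence

    def get(self, key):
        return self.memory.get(key)      # reads served from memory

    def flush(self):
        for key, doc in self.pending:
            self.disk.append(json.dumps({key: doc}))  # persist
            self.replica[key] = doc                   # replicate
        self.pending.clear()


store = MemoryFirstStore()
store.set("user::1", {"name": "Ada"})
store.get("user::1")   # available immediately, before any flush
store.flush()          # later: persisted and replicated
```

The point of the pattern is that read and write latency is decoupled from disk and network speed, while durability and high availability are still provided by the persistence and replication steps behind the scenes.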
Then you have the gray wrapper around all of this that makes it easy to use: a common SDK, for example, and common security protocols, plus integration with other big data and SQL products. So there's a lot going on at different levels, built around this really solid core. Why are we even talking about moving beyond relational databases at this point? Here are four of the biggest items we've identified. This is the reality our customers face on a day-to-day basis with their traditional databases, and it really hurts and holds back their ability to grow and to develop new applications. On that note, we do see customers who start by moving, say, greenfield applications they're building onto a new platform while keeping their older systems running on a traditional database, so we're not advocating rip-and-replace. I will come back to a few reference architecture scenarios, and I'm sure one of the four I'll present will touch on something related to the applications you're building. We all understand the challenges around rigid schemas. As an application developer, if you need to change, say, a front-end form on some application, you need that change reflected in the database, and you may also have to change your middleware layer to change the query being sent back and forth. That rigidity of schemas can cost companies a lot of money. Even when they know they need to change the schema, they ask: is it really worth it? It's going to be expensive; let's hold that off until later. That's one of the challenges for sure.
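The schema-rigidity problem looks different with JSON documents, as the speaker explains later. Here is a small illustrative Python sketch, with plain dicts standing in for stored documents rather than any vendor API: adding a Twitter handle to a user profile is just adding a key, with no ALTER TABLE and no migration window.

```python
import json

# A stored user-profile document, represented as JSON.
profile = {"first_name": "Grace", "last_name": "Hopper"}

# The application adds a new field: no schema migration required,
# and older documents without the field remain perfectly valid.
profile["twitter_id"] = "@grace"

# Updates can target any level: replace a single value...
profile["first_name"] = "G."
# ...or append to a nested array that didn't exist before.
profile.setdefault("interests", []).append("compilers")

doc = json.dumps(profile)    # the serialized form that would be stored
restored = json.loads(doc)   # round-trips with the new shape intact
```

The trade-off, of course, is that the application now owns the document shape: documents written by different versions of the app can coexist, and the code has to tolerate missing or extra fields.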
Performance is a big one. We love to talk about performance because we have a really good story there: as you know from the previous slide, those core red capabilities in the center of our product play really well to performance as well as to the scalability advantages we focus on. Everybody wants those things. They want their application to scale, they want their database to scale, and when they have a thousand more users they don't want to have to add a thousand more servers. That brings us to the cost item. A lot of traditional databases are expensive, not just from a licensing perspective but from a growth perspective: when you scale up to manage bigger workloads the cost goes up, and you often have to overcompensate and buy more capacity than you really need. So we look at relational database limitations from three perspectives: agility, performance, and manageability. As a vendor that has specifically targeted these limitations, we've made sure they are not limitations of our product. I want to walk through, first of all, why each is a limitation, then an example of how we deal with it, and how we recommend you deal with it no matter which platform you go with in the end. The first one is that relational databases don't handle change well. We've got a little picture here: the first iteration of our schema looks good, but in the second iteration we want to change something, we want to add a Twitter user ID instead of just having first name and last name, and that costs money, because it's not easy to implement those changes. But we also know that applications are always changing: you identify a weakness, you identify a market opportunity, you adjust and you adapt.
You shouldn't have to go back and re-engineer your database technology to handle that, especially when time is tight and changes are potentially very costly. That's the schema-agility problem. The way we've dealt with it, and to be fair the way other vendors do as well, is to adopt JSON as the primary type of document stored in the database. Instead of tables, columns, and rows, we have JSON documents with keys and values, and those are adjustable. You can add to them and update them at any level: change the whole document, replace it, replace a piece of it, replace one word, update a counter, and so on. Data manipulation can be very simple. We know how to update strings, we know how to update arrays, so from a developer's angle it's not complex to get your head around, and you're not having to change how you access the schema. We say much less code is required, and because less code is required to access and interact with these things, we believe you get a more bug-free experience. JSON is already part of the majority of workflows out there, especially on the web. The other limitation you need to evaluate your application against is performance, and scalability is a big question here. You can see the graphic on the left: the big database at the bottom, many little application servers in the middle, and the end-user applications at the top. To keep this performing, you add more application servers and you need a bigger database server, bigger database hardware for example, and we find the typical experience of our customers is that, as they scale up to bigger and bigger machines to service that database,
eventually the return on investment becomes negative. You can only scale up to a certain point, and often that means going back and reevaluating all your machines, buying something more powerful out of the gate that you can then scale up within. This leads a lot of customers to say: I'm going to upgrade my servers beyond what I need right now in anticipation that I might need it later, because it's such a pain to do later. So you provision beyond peak capacity. Then there are challenges around running these scale-up environments on cloud platforms: cloud providers only offer certain machine types anyway, so you hit a point where you can't scale beyond a certain size. Hardware costs get so high, and you need such special machines to keep the business running, that it's a real problem for people. We come at it slightly differently. We have a database tier that is itself scalable, a clustered, replicated environment. Because the data is distributed, your web servers can access the data on the specific nodes where it lives; I'll touch on that a little later. The efficiency is that you can add new servers and scale out with new nodes without constantly buying extra-beefy machines, and we have linear scalability on that front: you can add more and more nodes and have your application performance continue at the same rate or even improve. We also have capabilities that let you scale out your environment without disrupting the database performance itself. You add a new node, and it's literally a push-button experience: type in the IP address of your new node, say rebalance, and the system will systematically go through and
redistribute the data, or replace nodes that have failed, across the new set of nodes available. That's the experience we expect: performance is maintained while you scale out further. Sharding, and I realize I'm probably running a little slow on my slides here, so I'll move through this a bit, sharding can be a massive headache, especially when you're scaling up. The additional overhead of managing shards across multiple nodes or multiple databases in a traditional relational database is a real pain, and we hear that from our customers all the time. Couchbase has an auto-sharding capability by design: your data is automatically redistributed across nodes. As a DBA, for example, you don't need to know where your data is going or where it's sitting; applications get that information directly from the cluster itself and know where to send their requests for optimal performance across those shards. With replication being a first-class citizen in our database, it's built in, and that makes it a lot easier to scale and to keep your costs and management challenges down. Those are the main weaknesses we've been engineering and architecting our system to solve; going back to the summary slide, that's agility and flexibility, performance at any scale, and simple management. In another webinar sometime, or in one of the videos from our Couchbase Connect events, you can see this in action. It's quite impressive, especially our cloud orchestration capabilities, when we turn on elastic scalability and watch a database grow and shrink based on demand at the push of a few buttons. So how do we get beyond those relational challenges we looked at? We've talked a little about our platform and how it solves some of those problems. The
first step is really to re-evaluate how you're managing your data and what the most atomic pieces of your data look like. The JSON schema approach is so flexible that we've actually listed four different ways of storing your data. What I really want to call out here is that you can change one piece of your data, one value of one key, or the whole document, and you can have a normalized or a denormalized scenario; either one will work. If you're talking about a user profile, for example, you might want all that profile data in one document, not split across multiple tables. Then you can say, give me that whole document, and, I just snap my fingers there, your application has it all in one request. If you do break it up into multiple documents with references between them, you'll need multiple requests or a query, as I'll show you in a second, to pull all that information together. In a relational database you might have multiple tables to represent your data; with JSON, having multiple documents or not is your choice, a decision made based on the data you have at hand rather than the database forcing you into one paradigm. And you can do relationships in Couchbase. They're different from a relational database, not a primary-key-and-foreign-key concept, but programmatic access to referenced data, and we'll talk about one way we do that right now. Oh, sorry, I meant to add: accessing data is the other side of the equation. We have storing data using the flexible JSON approach, and accessing data in multiple ways is important too. You can do a direct call to get a document, a direct call to get a specific value from that document, SQL-based querying, which I'll show on the next slide, or a full-text search. We also have an analytics service built on SQL++ that lets you run aggregates and
GROUP BYs and things like that, which is the call-out to MPP for large ad hoc query access. We don't want to just store the data and have only one way to get it back out: there are multiple ways to store it and multiple ways to get it back out, and the permutations really address the most common enterprise requests for data handling; you can map them onto your own needs. So here's one of our approaches to data access: our query language called N1QL, a NoSQL query language, or SQL for JSON. I hope the yellow text shows up okay for you, but the relational users in the room will recognize the INSERT INTO statement and the green SELECT ... FROM statements. And again, there's no magic in any of these. I'd better turn on my highlighter here, just a second. There's no magic in these little colons and such: these are just the names of a document, and these are the names of keys, or field names, within the document. It's a simple JSON data structure, but you can reference field names as you normally would in a standard SQL environment, and you have almost all of the SQL capabilities you're used to. For example, we can do joins: if you choose to separate your documents into multiple components, you can join them, and there's a N1QL example at the bottom here joining on a key name, where the common key is used as the join parameter. We also actually just implemented full ANSI joins, so you can use WHERE clauses to join all the documents together. If you think of it as a bucket full of documents, and that is what we call them, buckets, then when you have multiple documents to pull together you can do it through the simple query capabilities you're used to using in a relational environment. It was pretty exciting for me, moving from a SQL analytics platform into NoSQL and realizing I could
still write my queries without having to write application code or some obscure proprietary query language. Your SQL developers can start working with the Couchbase NoSQL environment without really having to change much about how they work. Full-text search is one of the other ways we expose the data stored in this highly distributed, scalable environment. I won't go too deep into full-text search; it was a product I worked on extensively, so I could talk about it for an hour. But the idea is: maybe you start with cached data and you want to expose that data through a search capability. Well, we have search capabilities built in as a service, again around that core functionality. You can index your data without much work and build an application that calls it; we have scoring algorithms built in, and we can give you a snippet of where the match occurs in a field, with some context. It's a pretty fun project to use, actually. I'd like to switch gears a little here and step back. I've talked about our capabilities and how we address some of those common questions, such as how do I still query my data; now let's look at what it would be like to get started with Couchbase, with four recommendations for how to picture this in your mind. A lot of people come from using a product like Memcached or other caching solutions. Several of the engineers who started Couchbase came from the Memcached project team, so we know it well, and we are actually a drop-in replacement if you're already using that technology. Couchbase can be used as a caching layer on top of your relational database: you've got your typical relational database there, and you don't necessarily need to rip and replace that;
instead, push the data into Couchbase and let us service the requests through this highly scalable, highly performant environment to feed your application layer. And because our database runs on commodity hardware, you can scale that out to your heart's content and hopefully manage your costs better than continually scaling up to bigger and bigger servers. This is a really typical first step into learning more about Couchbase: start caching some of your database requests through it. Your tabular data gets stored as documents in Couchbase, and people can still write SQL queries, or N1QL queries, against those documents, so your app can keep using SQL concepts if desired, or you can start doing direct document fetches for your end users. Where it gets really exciting for me is this idea of replicating data geographically. We all know you don't put all your eggs in one basket: you don't run just one database, and if you do replicate your data to another database, you don't necessarily want it stored in the same data center. Similarly, you have application users in different geographic areas who want a copy of the data close at hand for optimal performance. Aside from the core capabilities built into Couchbase, the other core architectural advantage is really this replication capability, which we call cross data center replication, or XDCR. You can have so many different replication models; this one is basically showing that, because we have a multi-master environment, you can keep multiple databases in multiple locations synchronized with each other, and with the conflict resolution that's built in, you can always make sure the latest version of a document is stored in all those databases. This is really worth looking into; it's a real game changer and
a differentiating feature for our product; you won't find capabilities that strong anywhere else. That said, once you've moved on, maybe you had a cache, and maybe now you have a distributed cache that's geographically distributed as well and accessible to all your application servers around the world. It gets a little more complex when we start talking about how to aggregate data from multiple sources. The red components here are all Couchbase, for example the globally distributed, replicated Couchbase environment at the bottom right, servicing the applications that need it. But behind the scenes there might be a lot more going on: maybe you're already using messaging systems, event buses, or Spark for analytics and ingest, and you're pulling data from other sources, which could even be other Couchbase databases owned by a different group within your own company. You push all those things into an event bus, or even directly into something like Spark, and we can connect with those and have the data synchronized with the Couchbase clusters themselves. You can also push that data out and store it in another external data store: maybe you throw it into your data lake, or you create another Couchbase database of aggregated or pre-computed data, or maybe you run machine learning algorithms in Spark and save the results out to a relational database or into Couchbase. This is more common when people get deep into Couchbase and are developing full-featured applications, especially around machine learning and data analytics. Not everybody wants to use our platform for everything, and this lets us augment our capabilities through integration with products like Kafka and Spark. The final reference architecture here is really about creating mobile applications that are as resilient as typical Couchbase
applications are. We have Couchbase Server at the top; we have a product I haven't talked about called Sync Gateway, part of our mobile platform; and we have the Couchbase Lite database, which runs on embedded architectures, and we can synchronize that data back to Couchbase Server. There are other things you can do with Sync Gateway; I won't go into detail on it, as it's not my strong point anyway, but you can have web apps working with Couchbase Server, web apps working with Sync Gateway, and mobile apps all keeping synchronized with all the data in the stack. It looks more disconnected than it really is; these are all fairly well integrated products. You can also have other external databases feeding data in and listening for changes from your mobile application. I should say Couchbase Lite also has peer-to-peer replication and offline capabilities: if you've got five handhelds programmed to work together, they can go offline from the main database and keep each other updated until they connect again. There are some real architectural challenges we've overcome with this architecture. I'd like to walk through a few use cases, but before I dive into specific industry use cases or customer examples, I want to lay out a little of the landscape we fit into, because you might be wondering: how is this going to work with my BI tool, how is this going to work with my Talend ETL processes, and so on. Let's start at the bottom of this diagram with data sources. We all have different data sources, whether it's social media data, sensor data, other databases, mobile apps, or, in the case of many of our customers, a mainframe they want to pull data out of and integrate with something else. Moving up to the next level, we have Kafka and Spark, like we already talked
They help digest that data and push it into Couchbase. We have Talend doing ETL operations, and maybe you're using Spark or even MapReduce to do more processing of that data and then spitting it out into Couchbase. Now, I should say, with the data lake and data warehouse here, we're not replacing those; we're not really even competing with those. Those are different use cases altogether. But we might ingest data from them or save data to them, depending on your use case. Our focus is really on real-time data ingest from these other platforms, and then that's our core data-serving capability: we can put it into an in-memory cache and service your BI tools, dashboards, and query tools. We have our own integrated query workbench, so you can access it through a web UI and get queries up and running right away. Actually, after this call, if you downloaded the software, it would take maybe two minutes to install, and you can load one of our sample data sets and start running queries; it's that easy. But we have tie-ins to other products as well, such as Tableau, KNIME, and other BI tools. So that's how it fits in there. I think that's probably enough said on that slide.

Not only are we integrating with many different products, we're also having to address the needs of multiple different industries. We don't sell a database just for gaming or a database just for financial services. If we go back to what those engagement-database requirements were, all of these industries have those requirements, so we have to be able to service those requirements, meet those demands, and provide the data in a meaningful, performant, agile way. And we do that for many different kinds of applications, not just certain industries. Each of those industries also has several of these use cases going on, and again, it's a data platform that services these different kinds of use cases; it's not just a database for session data or a product catalog.
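To make that ingest flow concrete, here is a minimal sketch of the ETL shape: extract rows from some upstream source, reshape them into self-describing JSON documents, and load them into a key-value store. A plain dict stands in for a Couchbase bucket, and the records and key scheme are invented for illustration.

```python
import json

# Pretend these rows were pulled from a relational system or a Kafka topic
source_rows = [
    (1, "alice", "alice@example.com"),
    (2, "bob", "bob@example.com"),
]

def transform(row):
    # Reshape a flat row into a keyed, self-describing JSON document
    user_id, name, email = row
    return f"user::{user_id}", {"type": "user", "name": name, "email": email}

bucket = {}  # stand-in for a document bucket (key -> JSON document)
for row in source_rows:
    key, doc = transform(row)
    bucket[key] = json.dumps(doc)

print(bucket["user::1"])  # {"type": "user", "name": "alice", "email": "alice@example.com"}
```

The "type" field is a common document-modeling convention: since there is no table to tell you what a record is, the document says so itself.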
Really, customers are building these different endpoints on top of Couchbase to service multiple types of applications and use cases. If you step back and look at this layout, the traditional relational database wasn't designed for a lot of these challenges from the outset, and these are the cases we've been designed for from day one, and we continue to target these kinds of use cases for our customers. There's obviously a lot more we could talk about going through each of these use cases, but I do want to leave some time for questions at the end, so I'll move through this and we can come back to some of them.

If you think about something like customer 360, you're fusing a bunch of data sources, ingesting and aggregating from multiple places, and you surface that information to an application that needs a unified view of everything. That's a very different database approach than a traditional relational one. A product catalog as well: we're in a new age where we want to serve up as much of our internal data as possible to educate that customer as fast as possible so the transaction happens, and now we're serving up documents, imagery, and other items that are all part of a catalog capability. It's an entirely different use case than we used to think about.

I won't talk through too many of these customer examples, but you can see we've got a range: providers running session databases, travel and hospitality customers managing their inventory and the assets they have in the field, down to managing product catalogs of what to buy at the store. So we have a broad range; I believe it's 20 of the top 200 companies that we
actually have as customers. A couple of examples I thought you might appreciate seeing. LinkedIn, with their 450-million-plus members (I didn't realize it was that big; that's amazing) and their billions of hits per day, needed faster read times, but their traditional relational database just wasn't working for them, and even their caching layer, Memcached in this case, was causing reliability and manageability challenges. Neither of those platforms was developed for the kind of performance, throughput, and scalability demands they had. Who would have expected ten years ago that they'd need to service 450 million members? It would have been a bit of a joke at the time. But they were able to get their latency down, get their query performance up, and reduce their costs by switching to Couchbase. These slides will be in the handout you'll get afterward as well.

FICO is doing fraud detection; you might appreciate their work even if you don't know what they're doing in the back end. Higher throughput is important, especially in these fraud-detection scenarios: they have to analyze things fast, react quickly, and hand data off, in this case to neural-network algorithms, to compute results and get that information back in a timely manner. And they can never go down; they always have to be up, or it's a huge problem. We were able to help them with that.

One more customer use case for you, again moving from a relational database: eBay had a lot of cost challenges supporting the 13 million sellers they have, and they had performance issues, a key architectural challenge that you really can't fix unless you design for it. They were able to increase their performance and also their availability by adopting Couchbase. So the scalability and performance questions were obviously very fundamental for them.
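Going back to the customer-360 use case for a moment, the fusing of several systems into one unified view can be sketched like this. The data and field names are invented, and plain dicts stand in for the source systems; the point is just the shape of the denormalized document.

```python
# Toy customer-360 merge: fuse records from several systems into one document
crm = {"cust:42": {"name": "Dana", "tier": "gold"}}
orders = {"cust:42": [{"order": 1001, "total": 59.90}]}
support = {"cust:42": [{"ticket": 7, "status": "closed"}]}

def unified_view(cust_id):
    # One denormalized JSON-shaped document the app can read in a single fetch,
    # instead of joining three systems at request time
    return {
        "id": cust_id,
        "profile": crm.get(cust_id, {}),
        "orders": orders.get(cust_id, []),
        "tickets": support.get(cust_id, []),
    }

doc = unified_view("cust:42")
print(doc["profile"]["tier"])  # gold
```

In practice the aggregation would happen on ingest (via ETL or streaming), so the unified document is already sitting in the database when the application asks for it.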
I've just got a couple of slides left here, really for your reference. Let me turn off my pointer. This is our data platform and the core capabilities we bring to the market, and they continue to grow; we also have full-text search capabilities and MPP analytics capabilities that we just released last week, so we keep updating these slides. Mobile, multi-dimensional scaling, multi-master replication: there are no limitations, you can read and write from any of the servers you're replicating to, and our competition can't claim that. And we have a full SQL query language that makes it easy to get involved quickly. So there are some really game-changing capabilities there, and when you start to look into our cloud-native and hybrid-cloud platform options, it's really exciting stuff as well.

I'll leave you with a few next steps. You can obviously go to our website and learn more; I've put some links in the PDF, including two white papers that would be specifically interesting to you following this talk, where we dive into the architecture and talk more about how we compare to other relational databases. And we have our performance benchmarking, which you'll probably find really interesting; it's an easy URL, couchbase.com/benchmarks, so I put it there for you, and you can get access to those reports right away. They're really detailed; the sets of queries and everything are all spelled out in those reports. We have free online training, and we have in-person training as well if you need to get a large group up and going quickly; we can train large groups of people within your organization easily. And then I'd just encourage you to go download our 6.0 production release that came out seven days ago, give it a try, and I'd love to hear feedback. You can send general information to info@couchbase.com
or email it to me; I'm sorry I didn't put my address on here, but it's just tyler@couchbase.com. I'd love to hear more from you. So thanks for sticking around this long, and I look forward to any questions I might be able to answer in the next few minutes. Shannon, are you fielding those questions for me?

I am, yeah. So if you have any questions, please submit them in the Q&A in the bottom right-hand corner of your screen. And to answer the most commonly asked question: just a reminder, I will send a follow-up email for this webinar by end of day Thursday with links to the slides and the recording. And everyone's being very quiet right now.

Well, I don't know if we need to encourage them more, but I'd love to hear from people who have made this switch, who have moved an application from one environment to another. Give me a plus-one or something in the comments so we know you've gone through this transition. Or if you're just contemplating doing it, I'd love to hear about the challenge you're concerned with, what's keeping you up at night considering that change. Everyone's so quiet today, and I know this community, y'all are usually very active and engaged.

I know, it's an unusual day; it's election day, everyone's watching the news. Okay, here we go, some questions coming in. "What sort of lead times would you expect moving to Couchbase?"

Well, it's obviously going to depend on your requirements and the data volumes and complexity you expect, but from a developer angle you can get started really, really fast, just on your local machine, very easily. We have really tried to engineer it so that getting up and running with the software is the easy part. The other part is developing on it, and we've got a single SDK model that you can use across seven different programming languages, so if you're proficient in one of those, which you will be, and
your hardware is already ready to go, then you really have a very short time to get moving. The goal is that we get you up and running with the software and then step out of the way; the rest is up to how fast you can develop on it. There's no real single answer, unfortunately, to how long it takes. I should probably see if I can get the statistics for those three use cases and comment on how fast they were able to move over, but you're talking about eBay and LinkedIn; they're so huge that it probably took them months before they had really moved things over, or set things up enough, or architected their solution from a developer angle first. So give yourself a few months and I think you'll be pretty happy. Pick a use case that's small enough and easy, maybe a new project that doesn't require a whole bunch of reworking of existing applications or databases; that's the way I would approach it. I like the baby-steps approach: find something small, have some success, and then maybe try scaling it up, doubling your cluster size, for example. From an architectural side, if you need one cluster, or a cluster with one node for development and testing your idea out, that's easy; you can install this on your MacBook by the time we're done this webinar. And if you want to scale up to 3 nodes or 10 nodes, with a few nodes doing querying and a few nodes doing data storage, you can do that as well; you can customize all of those capabilities and spread them out the way you want across your cluster.

There are a lot of plus-ones coming in here, Tyler, with statements from people looking to go from SQL, er, relational to non-relational, including some high-volume SQL Server users out there looking to convert. So what's to be done to transfer from traditional databases to Couchbase? What are some of the quick and easy steps that you haven't maybe covered
already? There are a couple of scenarios I can think of off the top of my head. One is, if you already have an ETL framework going on where you're aggregating data from multiple sources, look at using a product like Talend to run your ETL process and start putting that data into Couchbase. Then have your developers take an existing application and, instead of querying the relational database, run a N1QL query, in this case a SQL-like query, directly against Couchbase. So those are the two components: moving the data, and then rewriting applications, or writing new applications, that access it. It really is that simple; there isn't a whole lot to manage in the system itself, and we've tried to make that the lowest barrier possible. You can clarify the question if there's more you'd like to know, but that's pretty much my summary. Again, the baby-steps approach: get some data in, get some real data in, start developing against it, and go check out our developer docs. Our online training actually has developer training in it; it's not just how to use the product, it's how to build on top of the product, how to write a full web app on the product, how to build full-text search capabilities on the product, so you can get up and running really fast.

And I love all these questions coming in; I knew you all weren't shy and had some questions. "How many customers in the US are currently using Couchbase?"

In the US specifically, I don't know, but I do have this slide summarizing the statistics; let me pull it back up. Yes, this one. We have 500-plus enterprise customers, including 20-plus Fortune 100 customers, and we have a Community Edition that's free to use as well, so there's a lot of usage in the market we're probably not aware of because of the open-source model. We really focus on some of the larger enterprise needs, obviously, since they drive the hardest challenges to us; smaller companies don't have as many of those scalability challenges as the big folks do.
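That two-step migration path, move the data and then point the application at the document store, can be sketched with SQLite standing in for the relational source and a dict standing in for the target bucket. The table, rows, and key scheme here are invented for illustration; a real migration would use an ETL tool or the database SDK rather than a dict.

```python
import json
import sqlite3

# Tiny relational source: a normalized users table in an in-memory SQLite DB
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "Alice", "Austin"), (2, "Bob", "Boston")])

# "Move the data": read rows and write JSON documents keyed the document way
bucket = {}
for user_id, name, city in conn.execute("SELECT id, name, city FROM users"):
    bucket[f"user::{user_id}"] = json.dumps(
        {"type": "user", "name": name, "city": city})

print(bucket["user::2"])  # {"type": "user", "name": "Bob", "city": "Boston"}
```

The second step, rewriting queries, is eased by the SQL-like query language: a `SELECT ... WHERE type = 'user'` against the documents reads much like the original relational query.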
But obviously a company of any size benefits from our model here.

There's a question here: "Can you provide simple documentation for following development on Couchbase?" And maybe, Tyler, if you can get that to me, I can get it out to everybody in the follow-up email.

Yeah. I really recommend the training; it includes all the material you'll need, with video tutorials, code examples, and actually quizzes as well, so you know you're really getting trained on the patterns you need for developing on Couchbase. But there is documentation as well at docs.couchbase.com, and you can start diving in; if you want to see examples, there are getting-started examples in there that pop up right away.

"What is the strongest Couchbase SDK in terms of heavy I/O or large-payload scenarios: Node.js, Java, .NET?" Well, those are certainly our three largest ones right there, Node, Java, and .NET. Go with whatever you're most comfortable with. I can't speak to throughput and heavy lifting at that level, but join our forums at forums.couchbase.com and you can interact with the engineers themselves directly if you want to dig in a bit more. I think any of those top three are going to be hands-down easy for you to get the performance you're interested in.

"What is the max data volume Couchbase can handle?" We don't actually have a max data volume limit, and we don't have a max node limit either; I know some of our competitors have limitations in their caching products. We can scale up and scale out as needed. We do have a document size limit of 25 megabytes per document, that is, 25 megabytes per record, so if you're making that shift in philosophy, keep that in mind. Other than that, we don't have any hard limits that we're aware of. Our customers let us know when they hit performance-challenging limits and need to scale up, but it's usually not due to data volume.
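Given that per-document cap, one simple habit when making that shift in philosophy is to guard writes with a size check before they go out. This is a toy sketch, not part of any SDK; it uses the 25 MB figure mentioned above, so check your server's documentation for the exact limit in your version.

```python
import json

MAX_DOC_BYTES = 25 * 1024 * 1024  # the per-document cap cited in this talk

def safe_put(bucket, key, doc):
    # Serialize first, then check the byte size against the cap before writing
    payload = json.dumps(doc).encode("utf-8")
    if len(payload) > MAX_DOC_BYTES:
        raise ValueError(f"document {key} is {len(payload)} bytes, over the cap")
    bucket[key] = payload

bucket = {}
safe_put(bucket, "profile::1", {"name": "Alice", "tags": ["vip"]})
print(len(bucket["profile::1"]))
```

Documents approaching the cap are usually a modeling smell anyway; splitting an ever-growing array out into its own documents tends to perform better than one giant record.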
It's more due to, say, processing and indexing keeping up with the number of changes they have going on; those are the bigger challenges, not so much the data volume. I should say, too, that our pricing model is per node, which is different from some of our competition in the relational world that likes to charge per CPU. So I think you'll find you can scale up your machines, and keep scaling them up, and in our model you won't actually pay more; but as you scale out, your license needs to expand to include those new nodes.

That's great, and I think we've got time for a couple of other questions here. "Does Couchbase work with all apps, or is there any limit?"

I'm sorry, I'm not sure. With all apps? Correct, all apps. Well, there are probably specific apps the person has in mind, but the approach we've taken is through our SDKs: anything that's available to those programming languages elsewhere, you can leverage in your application to use with Couchbase. So if, for example, there's a machine-learning capability you want to integrate with but it's only available in Spark, well, we integrate with Spark, so you can go use all those advanced capabilities of another platform that's focused on that area. We have integrations with ODBC and JDBC, which almost goes without saying, and we have integrations with all of the main programming languages, so there shouldn't be any limitations there. Then we have specific integrations with Tableau and KNIME on the BI side, and specific integrations with Talend and others, and the community builds their own integrations with even more. Because we're based on open source and all of our SDKs are open source, you can continue to do more integration if you feel that's needed. But of course someone will have an application that we don't plug into directly, or maybe we plug in through JDBC and the performance isn't as optimal as they would expect.
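On the scale-up versus scale-out point: the reason adding nodes can work transparently for clients is that keys are hashed into a fixed set of partitions (vBuckets, in Couchbase's terminology) that get spread across the cluster, so the same key always routes to the same owner. This is a toy sketch of that placement idea; Couchbase really does use a CRC32-based hash over 1024 vBuckets, but the small vBucket count and round-robin node map here are simplifications for illustration.

```python
import zlib

NUM_VBUCKETS = 64  # simplified; the real system uses 1024
nodes = ["node-a", "node-b", "node-c"]

def vbucket_for(key):
    # Hash the key into a fixed vBucket space (CRC32, as in the real design)
    return zlib.crc32(key.encode("utf-8")) % NUM_VBUCKETS

def node_for(key):
    # Each vBucket is owned by exactly one node; here a trivial round-robin map
    return nodes[vbucket_for(key) % len(nodes)]

# The same key always lands on the same node, so clients can route directly
assert node_for("user::42") == node_for("user::42")
print(node_for("user::42"))
```

When the cluster grows, only the vBucket-to-node map changes and the affected vBuckets move; the key-to-vBucket hash never changes, which is what keeps rebalancing cheap for clients.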
There's still room there for more integrations with other products, but we've tried to take a broad enough approach that people can build what they need if we don't have it, while supporting what we believe the core market needs. Kafka and Spark fill a lot of those gaps on the big data side, for example.

Well, Tyler, that does bring us to the top of the hour. Thanks for another great presentation and for the sponsorship from Couchbase; we just love it. And thanks to all of our attendees for being so engaged in everything we do; thanks for jumping in on the questions there, that was just fabulous. Again, a reminder: I will send a follow-up email by end of day Thursday with links to the slides and the recording, and we'll get the additional information requested out to you all as well. Tyler, thank you so much.

Thanks, everyone. Okay, have a great day. See you later. Bye.