Hello everyone, we are here to talk about expanding Drupal's universe and looking at all the other possibilities Drupal can open up for us moving forward. We have been working with Drupal for a long period of time. We now want to look ahead and see what Drupal can provide us in terms of business opportunities, in terms of implementations, in terms of different technologies. We will concentrate on some of the use cases we have thought about, that we have either been working on or plan to work on; we would like to share all of those with you and, probably at the end, have a quick discussion around anything you want to talk about. I am Shashank. I have been working with Srijan for more than 15 years. I am a digital strategist based out of New York. I have been doing Drupal for a long time, since it was Drupal 5 really, and I focus on content management, content strategy, and also some UX and UI, so that we can make sure everything is presentable. Ashish? Hello everyone, my name is Ashish. I work out of the Bay Area office for Srijan, and I don't come from a big Drupal background. I have worked more on enterprise systems, data architecture, data warehousing, and a lot of machine learning and NLP, and we will tie some of those things together in the presentation as well. So, real quick, only one slide about our company, one shameless plug. Srijan is a global company. We are about 330 people worldwide, one of the largest Drupal organizations worldwide, and about 70 of us are certified as well. Most of our work happens from our headquarters in India, and we have a small delivery center coming up in Manila. We are a fairly global organization, and there are some stats here that you can review.
How we wanted to frame this presentation: when you look at the Googles of the world or some of the other large media companies, they talk about how they onboard the next billion users onto the internet. How do they bring in the next million users? And Drupal being one of those platforms (Dries talked about this yesterday as well), one in 40 sites is on Drupal. There is so much that happens in the Drupal ecosystem. So we gave it a subtitle of how we bring in the next million users, thinking about use cases where we may not immediately think of Drupal. That was our game plan, and to take that approach we focused on one industry, retail, though some of the use cases we worked on carry over to financial services or insurance as well, and there are use cases that touch manufacturing. Is anybody in here from the retail or media domain, so that we can customize our conversation accordingly? Any domain-specific folks across the board? Fantastic. We'll talk about general areas then. We'll talk about three big buckets: one being digital experience, the second being content intelligence, and the third being collaboration and communication. And we'll talk about certain use cases. We can go on to the next slide. Under digital experiences, the question is how we create engaging and efficient digital experiences. For us that means developer portals; we'll talk about what that means. Augmented reality: how do you do AR use cases with Drupal? We'll show you some of what we are building. There are things around brand advocacy and brand promotion. How do you link analytics? How do you bring in third-party data from supply chain, from manufacturing operations?
So, if you have heard of manufacturing 4.0 or additive manufacturing, these are terms that are fairly popular in large industries where Drupal is now playing. How does Drupal become more than a content management system or just a website? How does it become a critical part of the ecosystem? We are going to share some use cases we are doing there. When we say content intelligence, this is where a lot of the use cases we are going to talk about are around NLP and machine learning: natural language processing for, say, auto-tagging content, or image recognition. We'll show you some examples of what we've done there. Automatic content generation. Sorry, if you can flip back. We have one clicker between the two of us. The content generation piece is around robo-publishing. If you've heard about media companies wanting to move faster and faster to be able to present medium-grade content: this is not high-end thought-leadership content, this is news media where you need to move fast. How do you automatically generate that content? You have all of these Associated Press articles coming in. Can you read them, understand them, give them your own communication style, if you will, and auto-generate articles and publish them onto your site, so you don't lose users going and reading the same content somewhere else? That's the content generation piece. There is object awareness, which refers to taxonomy pieces, things like that. Depending upon how much time we have, we'll talk about that as well. Then there is collaboration and communication at scale. We'll start with Shashank on that, and we'll specifically talk about use cases around virtual try-on. One of the leading areas within any retail shop today is virtual try-on: you want users to be able to engage with your products this way. Sorry, Shashank, go ahead.
And we're trying to rush through the slides as well so that we leave enough time. Thank you. So, Ashish quickly talked about virtual try-ons, or VTOs as we refer to them. For the past few years we have seen rapid growth in retail trying to use new technologies and new advancements to increase engagement. That's where virtual try-ons have been a great success. Everyone wants to make sure that people who are buying their products online are also able to try them out effectively, and virtual try-on becomes a very good product for that. What we want to do is add a Drupal layer, where Drupal becomes a hub where you can do product provisioning. You'll be able to map a range of products: what kind of product you would want for a particular kind of shade, for a particular kind of party, or for a specific need that you might have. You'll be able to get products from that perspective, try them on, and then be able to say, I really want to buy this. So it's not just about trying them on, but a direct connection with commerce, where for a particular product you can say, I want to buy this. One of the use cases we have implemented is around lenses. For Acuvue, all the lens data resides in Drupal, and the VTOs are able to pull in that data. It can also pull in data based on recommendations: there is a quick quiz you can take, and based on the answers you have given, specific lenses are pulled in. You get recommendations and you can try on other recommended lenses as well. One quick thing on this, integrating computer vision with Drupal: as soon as you launch this application, it will ask you, do you want a recommendation for a lens? This is a Johnson & Johnson brand launched across 9 countries in Southeast Asia.
Colored lenses are regulated in the US and Europe, so they can't be sold without a prescription there, but the Southeast Asia market is not regulated that way, so these lenses are right now in Southeast Asia. As soon as you launch this and your camera comes on, it will detect your hair color, skin color, and eye color. It will detect your age range and gender. Based on all of that it will say, these are the things I have detected; do you want to adjust any of these? You can make a choice of formal or casual, what type of lenses you are looking for, and it will make a recommendation for lenses, and you actually see that on your eyes. We have the app on the phone here; if you are interested, we would love to walk you through it. If you want to see how you look with the blue lenses, or the snake-eye lenses that are in there, we can show you those as well, working on our phone right now. The other thing that virtual try-ons have opened up is digital matching and smart recommendations. We talked briefly about this on the previous slide. Recommendations can be made around choices you have defined, or, as Ashish mentioned, around the age range and gender you belong to. Recommendations could also be based on how you have been using a certain kind of product over a long period of time. So smart recommendations can come from the system itself: this is something you might want to try on, or this is something you may want to try as an add-on for a product you have already bought. This really helps increase the engagement level for users during their online shopping experience.
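As an illustrative sketch (not code from the talk), the rule-based recommendation step just described, scoring a lens catalog against detected facial attributes and a stated style preference, might look like this in Python. The attribute names and catalog entries are invented for the example, not the actual Acuvue data model:

```python
# Hedged sketch of attribute-driven lens recommendation.
# "detected" would come from the computer-vision step; here it is hard-coded.

def recommend_lenses(catalog, detected, style):
    """Score each lens against detected attributes and the user's
    formal/casual preference; return the catalog best-match first."""
    def score(lens):
        s = 0
        if detected.get("eye_color") in lens["complements"]:
            s += 2  # lens shades designed to complement the natural eye color
        if style in lens["styles"]:
            s += 1  # matches the stated formal/casual preference
        return s
    return sorted(catalog, key=score, reverse=True)

# Hypothetical catalog entries for illustration only.
catalog = [
    {"name": "Sapphire Blue", "complements": ["brown", "hazel"], "styles": ["formal"]},
    {"name": "Honey Amber", "complements": ["green"], "styles": ["casual"]},
]
picks = recommend_lenses(catalog, {"eye_color": "brown"}, "formal")
```

In a real deployment the catalog would be served from Drupal and the detected attributes would come from the vision model; the scoring weights here are arbitrary.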
Typically, at this point in time, these are all ways to ensure that the basket size increases, but as we go along, this also helps increase sales for the brand. And this is live right now in stores worldwide. The Sephora one is not based on Drupal, but if you look at MAC stores in New York, that application's back end runs on Drupal; that is built by us. And Tom Ford, which is another Estée Lauder brand. So two of the brands have gone live, and this is a platform for 20 Estée Lauder brands to do virtual try-on across their product segments. This is an in-store or in-home experience: it works on a native mobile application or on an iPad mounted within a retail store. The other thing that comes up, when you are talking about a large set of data, is that you want to be able to manage it. Drupal fits in perfectly here: you can make it a digital asset management kind of system with content syndication in place. You can have all the content, all the assets, with all their specifications, for example for the products we have seen, the ingredients, the allergies associated with the beauty products, all in one place, and it can then be served out to your VTOs. And because it acts as a syndication source, you can also make sure this content is available across your websites; customers can have a look at it, and beauty advisors are able to access that information in different ways, all from that one place. The second aspect we wanted to talk about is content intelligence. This is about how we can find information in the data that you have. We might all have a lot of content with us, but we want to ensure there is useful, applicable information flowing out of it.
ML, NLP, and AI really help us make sure that the right content comes out, and that the data we want comes out in a certain way. Ashish is going to talk about this a little more. So, I should have said this earlier: if you have any questions, please feel free to ask as we go, because sometimes they are best in context. We'll try to answer quickly; we won't get into a long discussion here, but we're happy to elaborate at the end as well if we have time. So feel free to chime in if you have any questions or if you have seen certain use cases. We just took retail as an example because we thought it is most relatable to the widest audience. Some of these use cases also apply in various other segments, for example in insurance or actuarial processes where you are doing risk modeling. So, coming to the second section, content intelligence. Shashank said a keyword in there. What we tell our customers is what everybody has heard: unstructured data is 90% of all data and doubling every two years. Unstructured data is this massive blob of data that people don't know what to do with, so they throw some part of it into content management, some part goes into document management, and pieces like that. But the challenge that stays with content management systems is that it becomes very difficult to search through the content, and very difficult to drive intelligence about how content is being used. What is being used within your PDFs, say, if you have hundreds of these PDFs? One of the biggest things we've done in this space is classification of documents. A simple use case: the 10-Ks and 10-Qs that are filed every quarter by every public company are in a semi-structured PDF format.
What we've been able to do for some financial services companies, which have content writers who sit and write content every time a 10-K or 10-Q is published: Pensions & Investments, let's say, is one of our customers that runs a website. They have many content writers who will actually read through the document and produce a report that says the pension investment from a company like, say, Ford Motor Company was this much last year, it is this much now, it has grown this much, it is trending like this, and the last five years' trends have been this. They'll write a commentary about it. We've been able to automate 80 to 90% of that process just through parsing the PDF and understanding where the content is, because these SEC filings are semi-structured; they are not completely unstructured. Pensions and investments will be called pensions and investments; it won't just be called something else. So because it is semi-structured, we can look through the content, find the appropriate values, find from other databases the company subscribes to what their investments were last year, and generate content that is 80% ready, then give it to the content writer to finish. That is where the machine is helping accelerate the workflow, not completely automating it. Of course we could automate it fully if nothing else were needed, but the idea is that you want to put your commentary, your human intelligence, into the content. That is how you differentiate your content; that's how you retain your customers. So we are trying to automate that process. Next, entity extraction and resolution. There are two pieces to this. One is taxonomy generation. I'm sure everybody has gotten this use case before, where you have a lot of content but you're not really sure how the taxonomy should be structured.
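A minimal sketch of the parsing-plus-drafting workflow just described, assuming the filing text has already been extracted from the PDF. The line layout, label, and figures below are illustrative, not a real 10-K format:

```python
import re

def extract_pension_figures(text):
    """Find the 'Pensions and investments' line in semi-structured filing
    text and pull the current-year and prior-year dollar figures."""
    m = re.search(r"Pensions and investments\s+\$?([\d,]+)\s+\$?([\d,]+)", text)
    if not m:
        return None
    current, prior = (int(v.replace(",", "")) for v in m.groups())
    return current, prior

def draft_commentary(company, current, prior):
    """Produce the 80%-ready draft that a human writer then finishes."""
    change = (current - prior) / prior * 100
    trend = "grown" if change >= 0 else "declined"
    return (f"{company}'s pension investments were ${prior:,} last year "
            f"and are ${current:,} this year, having {trend} "
            f"{abs(change):.1f}%.")

# Invented one-line stand-in for an extracted filing table row.
filing = "Pensions and investments  $12,500,000  $10,000,000"
figs = extract_pension_figures(filing)
```

The real pipeline would handle many labels and table layouts; the point is that because the filings are semi-structured, simple pattern matching gets most of the way to a draft.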
And taxonomies are very important to drive your sales from your site. Every web page should have a call to action: people are visiting your site or portal for some need that they have, but we are presenting that information because we have some need of our users as well. Where the user goes next, the next best action, the recommendation you want to make from that, should be driven through your taxonomy. So that's the entity extraction piece: given a piece of content, how do you extract keywords out of it, so search becomes easier, cataloging becomes easier, auto-tagging becomes easier, and you are able to create a hierarchy of documents so your whole catalog becomes better. That's one. The second part is generating content from PDF documents. A very interesting use case around manufacturing 4.0, taking a different industry view. If you've heard of manufacturing 4.0, people talk about automated manufacturing where everything from order to delivery is automated, no human intervention. The problem with that is most of the manuals used to operate these machines are still locked up in PDFs. If you want to do a line clearing in a pharmaceutical plant, it's an 80-page manual to clear a line before you can start production. Within those 80 pages, based on what you were manufacturing, you have to go to a specific page, literally print that page out as a quality manager, and sign it to say, I have made sure the line clearing process has been done. So we were able to take those PDFs, convert them into web pages, and integrate that with voice, so that ideally you could integrate with a wearable device and voice-sign the document to say, I have checked it; you can take a snapshot. We worked with two companies on the wearables side; one is Vuzix and one is RealWear.
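Coming back to the entity-extraction piece for a moment: a toy frequency-based keyword extractor shows the shape of the auto-tagging idea. A real deployment would use a trained NLP model; the stopword list here is deliberately tiny and the scoring is just term frequency:

```python
import re
from collections import Counter

# Minimal illustrative stopword list; a real system would use a full one.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "for"}

def extract_keywords(text, top_n=3):
    """Return the top_n most frequent non-stopword terms in the text,
    as candidate tags for search, cataloging, and taxonomy building."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]
```

Feeding these candidate tags back to an editor for confirmation mirrors the accelerate-not-automate workflow described for the filing commentary.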
They have slightly different approaches to wearable devices, but neither blocks your eye; they present a page to you at an apparent three feet from your eye, on a wearable device that sits by your eye. The idea is that it is safe for the manufacturing floor, and you're no longer carrying a paper or a tablet in your hand; everything you're doing is through voice, and the back end of that is your content management. How am I doing on time? Good. All right. Content intelligence: detecting sensitive information. This is a very simple use case around understanding, being in Europe of course, GDPR compliance requirements. California is coming out with its own, so the US will have a law very similar to GDPR; California is the first state, and it goes live next month, on January 1st. To be able to understand what information is in your data, we've done a simple use case around Git repositories. All developers want to save their code base in Git. We have a code base, written originally for something else but now repurposed, which will be open-sourced soon, where you can run it to ask: is there any sensitive information across my Git repositories? You can scan your personal Git repositories too: is there any personally identifiable information, any PII, any keys in my Git repository? Because a lot of times those public repositories are available to everybody, and you don't want customer data especially to be out there. Auto-tagging, image tagging: I think this is one of the most used use cases around NLP. Oh, please. Yeah, just one question: do you use machine learning for this process? Absolutely. And do you write it yourself, or do you use existing libraries?
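The repository scan just described can be sketched with a few regular expressions. These patterns are illustrative, not exhaustive; a real scanner would also walk the Git history, not just the working tree:

```python
import re

# A few common shapes of sensitive strings. Patterns are examples only.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (kind, match) pairs for every sensitive-looking string found."""
    hits = []
    for kind, pat in PATTERNS.items():
        hits += [(kind, m) for m in pat.findall(text)]
    return hits
```

Running `scan_text` over each file in a checkout (and each historical blob) gives the report the talk describes: any PII, any keys, before the repository goes public.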
So, almost everybody in the world, unless you're Google or Facebook, uses what is already available in the machine learning domain. We've not written any algorithms ourselves; what we do is configure those algorithms. Typically in machine learning the idea is how to take the output of one algorithm and connect it to another. Mostly there are only about two or three primary families of algorithms, right? Either you're doing regression or clustering. If you do that multiple times, connecting those pieces together, then you would call it deep learning. So we do both. We do what is called shallow learning, and then there are, say, computer vision use cases where CNN-type algorithms are available. And there are audio, video, and text use cases: we work with a company that does legal transcriptions, for example. We developed a whole platform for them, a marketplace where legal transcribers come in and can say, I want file number 10, 12, 13 for transcription. We are now taking multi-channel audio to be able to transcribe that as well. This is another use case of machine learning, where we have taken models for audio learning written by various universities, but we are extending them to understand the taxonomy of the legal industry. Good. I'm sure everybody's seen this; I could skip one and talk about a couple of these use cases. These are simpler ones. Again, image recognition and image object awareness are things that have been used for a long period of time. We are trying to repurpose them: when you are uploading an image, what is the kind of text that you would want to put in?
You want to take away the responsibility from content editors of figuring out exactly what the title for an image should be, or even writing a complete paragraph around the kind of content you want to create based on the image you have selected. So we had a quick use case where you can upload an image and the system is able to understand whether it's a bat, a person, or a bird, then make a sentence out of it and say: this is a possible title that can go with your image. You can use it as alt text, so you don't have to think of something, and it increases the accessibility of your system. We can move one slide further. These are similar in nature: not just images, we can also take a particular paragraph and create a summary out of it. You might have a really long blog post and you want a quick summary of it. Again, you want to make sure this is not something people invest too much time in; they should be able to quickly generate a summary from the keywords in the content they have created, and put it back into the CMS. This is also very useful for a voice-first world. If you have users who want to consume your content through Alexa-type devices, summarizing long-form articles is the only way to present your content on a voice-first device. We've been able to do this for one of our customers, where we take a long-form article and summarize it into 100 words. Of course, we give it back to the content writers to ask: is this a good summary of your article? But the idea is to give them at least 50 to 60%, and we learn more and more about certain industries; in this case, it was the automotive industry.
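A toy extractive summarizer illustrates the long-form-to-summary step: score each sentence by the frequency of its words across the article and keep the top-scoring sentences in their original order. Production systems use trained models; this is only the shape of the idea:

```python
import re
from collections import Counter

def summarize(text, max_sentences=2):
    """Extractive summary: keep the max_sentences highest-scoring
    sentences, where a sentence scores the sum of its word frequencies
    over the whole article."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z]+", s.lower()))
    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Re-emit the chosen sentences in their original reading order.
    return " ".join(s for s in sentences if s in ranked)
```

As in the talk's workflow, the output would go back to a content writer for review rather than being published automatically.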
There are a lot of keywords that are very unique to that industry, and how do we make sure they translate well for a voice-first world? You don't necessarily need the whole article read out on your voice-first device, but you need those keywords in the summary. Sorry, I had a question: is it only applied to English, or are there other languages? So, at this point in time, yes, English is our primary language for this. But as we learn and grow, we would like to extend this to a multi-language platform, because a lot of enterprises are looking for multi-language; they are across regions and want to ensure the content is relatable in every local flavor they are producing content for. So yes, the idea is to extend this to a multi-language platform. Machine learning in general is quite English-centric. The lens piece we talked about earlier, though, is in 36 different languages, so that's a worldwide rollout. It also becomes a lot more challenging with a larger learning system, where you want to ensure the content works across different regions and everyone can understand and relate to it in their own local language. So it's a good use case; we are yet to get there, but it is something we are working on. Again, I will not talk about this one; let's move forward. So, this is a very relatable problem that has been paining us, and I'm sure a lot of people in the Drupal community face it. There are amazing ways to upload an image and ensure it is scaled very well, but cropping has always been an issue: when you are rendering across different devices, you want a certain intelligent kind of cropping to happen. At this point in time, a lot of companies push this to the frontend.
They say that the frontend has to manage rendering, and hence this is not in Drupal's domain. Or they'll hand it over to content editors: give them multiple fields to upload, here is an image you can upload in 10 different ways, so it can render on mobile in one aspect ratio and on an iPad in another. We want to solve that problem by applying ML so the system becomes object-aware and can say what kind of cropping you would like. So you see, from a particular original image, with certain keywords pulled in, you are able to crop that image so it fills most of the frame and there is not much emptiness; on the other hand, if you crop it differently, with a center-crop function, you crop to one specific part of the image. Or there is the generic thumbnail that would be created, where you still see some shadow coming from another object. These are different ways of looking at it, but when you apply ML it becomes much clearer: you are able to parse the image and pick what you really want to showcase. So we move on to the third part, where we are looking at creating engaging and effective digital experiences for overall platforms, making sure Drupal is really leveraged as a backbone for your business. Ashish will talk about some of these. Augmented reality is something we touched on briefly earlier around VTOs, but we can extend the concept beyond try-ons: you can potentially have a virtual reality system or gamification in place.
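The object-aware cropping described above reduces to simple geometry once a vision model has produced a bounding box for the subject: center a crop of the target aspect ratio on the subject and clamp it to the image. A sketch, with all coordinates in pixels:

```python
def smart_crop(img_w, img_h, box, target_ratio):
    """box = (x, y, w, h) of the detected subject; returns an (x, y, w, h)
    crop rectangle with the requested width/height ratio, centered on the
    subject and clamped to the image bounds."""
    bx, by, bw, bh = box
    cx, cy = bx + bw / 2, by + bh / 2              # subject center
    # Largest crop with the target ratio that fits inside the image.
    crop_w = min(img_w, img_h * target_ratio)
    crop_h = crop_w / target_ratio
    # Center on the subject, then clamp so the crop stays inside the image.
    x = min(max(cx - crop_w / 2, 0), img_w - crop_w)
    y = min(max(cy - crop_h / 2, 0), img_h - crop_h)
    return int(x), int(y), int(crop_w), int(crop_h)
```

The hard part in practice is the detection step that produces `box`; once you have it, each device's aspect ratio just needs its own call to this function.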
You have a product that you can scan through your application, and once you do, information about the product, its catalog, or more details could be shown to you on your device itself, so you don't have to ask a person or search for it online: just quickly scan a product, or an image of the product, and you get more information about it. Ashish, something around there? So, one of the areas where we've used Drupal is developer portals. Has anybody here worked with Apigee, the API management tool? Everybody knows Instagram, right? So if you go to developer.instagram.com, or developer.walmart.com: developer portals are one of the fastest-growing mechanisms by which enterprises build ecosystems, onboarding third-party developers to build on top of the environments they have constructed. Over long periods of time, enterprises have built internal APIs that have not been open to third-party developers. Apigee is one of the companies that addresses this from the frontend standpoint, and they've embedded Drupal: it is open source, you can go to Drupal.org, and there is an Apigee starter portal available that a lot of companies can use to build a developer portal. That starter path basically says, list your APIs here, and Apigee brings in certain pieces around rate limiting, developer onboarding, key management, and API documentation. All of these are basic hygiene factors needed for any company to run a fully functional API portal. And this is one of the fastest-growing areas, because this is where you onboard third-party developers to develop applications on top of your data; these are monetizable, revenue-generating opportunities for a lot of
these customers. A great example is how Uber works with Google Maps: if you want to see how far your cab is, or get your bill, all of those maps showing the actual car come from Google Maps, and when I last read, Uber was paying about 10 million dollars per month to Google for licensing on their maps, all through API calls. So this is a huge business everybody wants a part of: we have developed the baseline APIs, we want you to bring your developers on. What we have done there, for companies like Apigee's customers, AT&T, T-Mobile, Garmin, various large customers, is create a developer portal and a developer ecosystem to onboard developers. And when there are open-source API platforms like WSO2 or Kong that do not ship with API portals, that's a great opportunity for Drupal to go in and say, we'll build these portals for you and onboard customers. One of the biggest growing areas in Europe is open banking: all financial institutions will need a developer portal ecosystem for their developers to build third-party services on top of their financial services. One quick example on the next slide is the open banking portal we've built for one of our customers; the idea there is how you create your APIs, and how you generate enough developer experience that developers want to build their services on top of your APIs. That is one piece of the developer ecosystem. Then digitization and automation: we've talked about pre- and post-experience. We work with cruise line companies on how to engage customers before they take a trip and after they take a trip, so this is customer journey mapping. Can we talk a little bit more about this?
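Rate limiting, one of the API-portal hygiene factors mentioned above, is commonly implemented as a token bucket. This sketch uses an injectable clock so the behavior can be tested deterministically; the rate and capacity numbers are illustrative:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: each request spends one token, and
    tokens refill continuously at `rate` per second up to `capacity`."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.now = now              # injectable clock for testing
        self.last = now()

    def allow(self):
        t = self.now()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

An API gateway keeps one bucket per developer key; a portal product like Apigee provides this out of the box, so the sketch is only to show the mechanism.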
Some of the use cases we have been working on are around adaptive surveys, or micro surveys. After a certain experience in an application, you give users a couple of questions: did you like this content, or was this content relevant for you? You must have seen Google Opinion Rewards do this often: they say, you looked at a video on YouTube earlier; do you think it would be relevant if we showed it to you as a related video? You answer yes or no. That's a very simple question, but it amounts to a huge amount of feedback for Google itself. Those kinds of micro surveys and adaptive surveys become part of the post-sale experience; once in place, quick questions help you gather a huge amount of feedback from the users themselves. The other few items are around brand promotion, which is mostly analytics: user behavior, how they are engaging with the platform, and then presenting them with different kinds of recommendations around offers. These could be personalized, of course, based on how you have been interacting with the system, which makes it a cleaner and more enjoyable experience for users interacting with your system. I think we'd like to close now; it is approximately 12:05. Any questions you might have, we'd be happy to take; we'll be here. Otherwise, if you need to head out for lunch, we don't want you to miss it. We tried to wrap up in 35 minutes, so we did talk a little fast. If you have any questions we can go back to the slides as well and talk a little further on that. Yeah, I had two questions. One was also about multilingual. I work in the Netherlands, and I would like to use machine learning, for instance for automated tagging, which is not available in Dutch; there are services that provide this on AWS, and I know of others, but they are not available in Dutch. Are there any
tips on how to still be able to use those kinds of services? Could I translate with Google Translate first, and then...?

That's a good idea; we've done that. It depends on what your use case is. If you have a use case that is customer-facing, it becomes really difficult, because you're translating twice: you translate from Dutch to English with Google Translate, send it to the machine, and whatever comes back in English you translate back to Dutch. You typically do that double translation. We do a lot of work in Japan, and there we had a customer-facing application where the whole language library failed in double translation, but the same approach worked for internal use cases, which was fun. So it depends on your use case: if you have a customer-facing use case, and especially in a regulated industry where you cannot have a word say the wrong thing and you cannot have lineage around it, then it's really difficult until the language library is released by one of the providers. But internal use cases are still viable: for instance, suggestions for tags; or, if you have Elastic implemented in the back end, creating a taxonomy hierarchy for search; or, on an e-commerce site, building a better product catalog for product recommendations. In cases like that you will still be able to work with product names and still work with translation.

Thank you. Another question, because these are very interesting cases: what I was wondering is the reasoning for using Drupal as a back end, because I think for some of these use cases you could just as easily use completely different technologies.

That's a fair point. One piece is that Drupal is core to us; we actually believe that Drupal is a fantastic framework and not only a CMS. We worked with a company called OnCorps, and we can send out the URL as well; they are an advanced analytics startup out of
Boston, and we did that in the very early days of D7. We did it completely decoupled: no Drupal front end, only a native web app, but it was still built on Drupal, and we believe their time to market, built on Drupal, was at least 8 to 9 months faster, because the back end was all built on Drupal, even though we had to hand-code all of what is now called headless. We built that headless setup in the very early days; this is going back 4 years. But they have still stuck with Drupal, because it gives them the functionality in the back end to create surveys and content. They have actually decoupled the database in the back end as well: a lot of the analytics-type data goes to a SQL Server piece, or an RDBMS (I am forgetting the name of the RDBMS), but the Drupal-related content still stays in MySQL, while all of the big-data content breaks out into MongoDB. They still needed that structured content to be able to query it and do analytics around it, even though Drupal is still the framework in between.

The same goes for the legal transcription industry, with TRX, theRecordXchange. These are all public URLs, if that is of interest, and this is being recorded, so you can get to them later. That whole portal runs on Drupal, but the actual audio files, the audio recordings of court cases, live elsewhere. In the US at least, and I am sure here as well, all court cases are recorded on an open mic; there are up to 6 mics, and all depositions and court cases are recorded. All of these audio files live on S3, on Amazon. What we did not want to do is get Drupal in the way of you downloading those files, so we created a microservices middleware where you can say, I want these 3 files. We know who your user is, because you have been onboarded as a private user, so behind the scenes we run a process over those files and send them to you, rather than you waiting on the front end, because then you would time out. But we still believe Drupal is a great
framework to be able to build these applications much faster.

Another very quick point I would like to add: with Drupal 8, API creation and consumption has become so simple that Drupal again becomes the center of it all. You can make sure the APIs are created the way you want, and they can be consumed by anyone. You can have a front end on React; you can make it omnichannel, really, to ensure that the content goes across everywhere. So Drupal 8 really becomes the center in that setup.

Thank you. A lot of the stuff you are talking about has to do with content personalization. Are you creating your own application, or are you using existing ones like Acquia Lift, Interaction Studio, Kitewheel?

We have used Acquia Lift; we are partners with Acquia, and we are certified Acquia Lift partners. For some of our customers, though, and I keep going back to regulated industries, that's one reason; the other reason is that they didn't want data to leave the country. Acquia Lift is a hosted solution; you can't get it on-prem right now. There's somebody here from Acquia, so I want to make sure I'm not stating something wrong, but it is a hosted solution. So if you don't want your data to leave the country: for some of our customers, for example in Malaysia, none of the top-3 cloud providers has a data center in Malaysia, and they cannot have their consumer data go out of the country. In cases like those we build things on-prem, as well as on local cloud providers, and there we've done more back-end personalization. Personalization works two ways: either you capture everything coming in through JavaScript on the front end, or you do back-end personalization, where you take the content, how long the user was within the Drupal ecosystem, your Google Analytics, your Firebase, all of these analytics you can collect in the back end, generate a data lake out of it, and that data lake can feed your back-end personalization. So we've done that for travel
companies, travel and hospitality companies, and we've done that for telecom companies, where we've done back-end personalization live. A lot of other tools, like Marketo, tend to be front-end personalization. This is an area of expertise for Shishank as well, if you wanted to add something.

I think, I mean, as you have already spoken about it: the analytics that runs right on the front end really captures the kind of engagement you're having with the system, how much time you have spent on the system, what kinds of products you have been looking at, not just bought, but looked at or maybe wishlisted. Those all build your preferences, and then you are able to generate analytics around them and serve better content.

Any other questions? We are at booth 5, so if any of you want to talk, drop by; that would be great. Please take the survey and make sure that you log your votes. This is not necessarily a pure-Drupal type of presentation, so a survey would really help us understand whether this was useful from a use-case standpoint. We did hear, not just from people here but from folks that came to the booth, that there is not enough conversation around use cases at DrupalCon, so we took the whole slot just talking about use cases rather than going deep into demos or technology, because 35-40 minutes isn't enough to do a demo and cover the various use cases as well. Happy to talk in more detail if any of you are interested. Please take the survey and let us know if this was useful or not useful; it would really help us. Thank you.
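As a footnote to the multilingual question above, the double-translation workaround (translate to English, run the English-only ML service, translate the results back) can be sketched in a few lines. Everything below is a hypothetical stand-in, not a real API: `translate` and `auto_tag_en` mimic a machine-translation service and an English-only auto-tagging service using tiny lookup tables, just to show the shape of the pipeline.

```python
# Sketch of the "double translation" workaround discussed in the Q&A:
# non-English content is translated to English, fed to an English-only
# ML tagging service, and the resulting tags are translated back.
# Both service calls here are placeholders backed by tiny dictionaries.

def translate(text: str, src: str, dst: str) -> str:
    """Placeholder for a machine-translation call (Dutch <-> English)."""
    nl_en = {"fiets": "bicycle", "stad": "city"}
    en_nl = {v: k for k, v in nl_en.items()}
    table = nl_en if (src, dst) == ("nl", "en") else en_nl
    # Word-by-word lookup stands in for a real translation service.
    return " ".join(table.get(word, word) for word in text.split())

def auto_tag_en(text: str) -> list[str]:
    """Placeholder for an English-only auto-tagging / entity service."""
    known_tags = {"bicycle", "city"}
    return sorted(w for w in text.split() if w in known_tags)

def auto_tag(text: str, lang: str) -> list[str]:
    """Tag non-English text by translating twice around the English-only step."""
    english = translate(text, src=lang, dst="en")       # first translation
    tags_en = auto_tag_en(english)                      # English-only ML step
    return [translate(t, src="en", dst=lang) for t in tags_en]  # translate back

print(auto_tag("fiets in de stad", lang="nl"))
```

As noted in the talk, this works reasonably for internal use cases such as tag suggestions, but for customer-facing or regulated content the two translation hops compound errors and break lineage, so it should be treated as a stopgap until the language is supported natively.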