So, hi everyone, I just wanted to share some of our experiences with data at Zoho. I will start with three disclaimers. The first is that I am not a big data expert. The second is that we do not bill ourselves as a big data company, unlike some others do. And the third is that, as talks like this one show, there is a lot of hype around big data, so you really have to cut through that hype and determine what really is a big data problem and what is not. With that, I will walk through five instances, five cases, of how we deal with data management at Zoho. First, let me give you some background on what we do. Zoho builds a few product lines. One of them is our cloud-based applications. When I say applications, I mean things like a CRM application, an email application, an instant messenger. All of these have data in them. There is also an online office equivalent, so users can generate a lot of content: documents, presentations, spreadsheets and so on. That is one part of the data we deal with. The other part is ManageEngine, which deals with enterprise IT. When you talk about enterprise IT, you have products that monitor various parts of the IT infrastructure, and a lot of data comes out of that, including log information. We are talking about application logs, logs from your devices, packets coming off the network; that is also a lot of data we have to deal with. So, moving on to the five use cases I am going to talk about: the first one is a search feature that is part of the Zoho online cloud applications, which searches across the various services, such as the CRM service or email. I will have some screenshots that give examples of what we are talking about.
The second use case is operations logs. Each of those Zoho application services generates a lot of logs, and if something goes wrong we will be troubleshooting with them. The third is that the Zoho applications themselves store a lot of user-generated content: how do we manage that data, and what infrastructure do we use for it? The fourth is something called Zoho Reports, which deals with analytics and business intelligence: once you feed it data, it can present that data in a visually appealing form. And the fifth is the IT infrastructure management I mentioned, in terms of log analysis and the data analysis around it. So, let me jump into each of these quickly. The goal for the first use case, Zoho search, was to give a unified search experience across the various Zoho services. That means you have to not only search for relevant information across these services, but also be able to collect that information and handle it. The thing here is that you have to handle structured data as well as semi-structured data, because there will be some data in RDBMSs, some data in the Hadoop file system, and a lot of user content like documents and spreadsheets, so where do you put those? And then obviously you expose REST APIs for all of this. If you do a search, it runs across the various assets: it could be an email, it could be documents, it could be a contact, it could be a discussion board, all of that. It basically goes and searches across these various services and gets the information out. In terms of how we do this, for the search part we use Lucene and so on for storing the indexes, and for the data itself we also keep some content in an RDBMS.
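The unified search described above boils down to fanning a query out to each service's own index and merging the hits. Here is a minimal sketch of that aggregation step; the service names, the `Hit` shape, and the per-service functions are all illustrative stand-ins, not Zoho's actual API (the real per-service lookups would go through Lucene/Solr and REST APIs).

```python
# Federated search sketch: each service keeps its own index; the
# aggregator queries them all and merges hits by relevance score.
from dataclasses import dataclass

@dataclass
class Hit:
    service: str   # which service the result came from
    title: str
    score: float   # relevance score reported by that service's index

# Stand-ins for per-service index lookups (hypothetical, fixed results).
def search_mail(q):
    return [Hit("mail", f"Re: {q}", 0.9)]

def search_docs(q):
    return [Hit("docs", f"{q} spec", 0.7)]

def search_crm(q):
    return [Hit("crm", f"Lead: {q}", 0.8)]

def unified_search(q):
    """Fan the query out to every service and merge results by score."""
    hits = []
    for backend in (search_mail, search_docs, search_crm):
        hits.extend(backend(q))
    return sorted(hits, key=lambda h: h.score, reverse=True)

results = unified_search("roadmap")
print([h.service for h in results])  # → ['mail', 'crm', 'docs']
```

The merge-by-score step is the simplest possible policy; a real system would also have to normalize scores across heterogeneous indexes before comparing them.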
And in some of the search services where we see a lot of data coming in, we also use a columnar store. So that is the first use case: it gives you search results from across the various services. The second one is operations log analysis. Each of those application services will spew out a lot of logs. For the operations team, and this extends into DevOps as well, how do you see that an exception is being thrown somewhere, and how do you correlate the various pieces of log information? The system provides some analytics to help you correlate that information. Here again, in terms of storage, since logs are semi-structured we use the Hadoop file system, and in fact we use the same Lucene-based indexing, because all the logs have to be completely text-searchable. So that makes perfect sense for us. That is the second use case. The third one is all this user-generated content. Someone could write a fresh document themselves, or import one from Excel, or it could be email documents, or chat transcripts typed into the instant messenger. All of that is stored in our own clusters, and for that as well we use HDFS. But it is not always a single type of storage for all data: we use a combination of technologies, different storage types, RDBMS versus NoSQL databases, for different types of data. And obviously it has to be highly available and resilient, which is why we use distributed Solr and a distributed HDFS setup for it. Then there is the service called Zoho Reports. What it does is: you feed it information, and that information could be a CSV file, an Excel file, or data pulled from a third-party application or data source.
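The ops-log use case hinges on every log line being full-text searchable, which is what the Lucene indexing provides. A toy version of that idea, as a sketch only: tokenize each line and build an inverted index mapping term to line ids, so an exception string can be found without scanning the raw files. The log lines are made up; the real system indexes with Lucene over HDFS-stored logs.

```python
# Toy inverted index: term -> set of line ids containing that term.
from collections import defaultdict

def build_index(lines):
    index = defaultdict(set)
    for i, line in enumerate(lines):
        for term in line.lower().split():
            index[term].add(i)
    return index

def search(index, lines, *terms):
    """Return lines containing ALL terms (AND semantics)."""
    ids = set.intersection(*(index.get(t.lower(), set()) for t in terms))
    return [lines[i] for i in sorted(ids)]

logs = [
    "2014-05-01 app1 INFO request served",
    "2014-05-01 app2 ERROR NullPointerException in handler",
    "2014-05-01 app1 ERROR timeout contacting db",
]
idx = build_index(logs)
print(search(idx, logs, "error", "timeout"))  # only the db-timeout line
```

This also shows why correlation becomes easy once logs are indexed: intersecting posting lists for "error" and "timeout" is a set operation, not a file scan.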
We can then present that data visually in graphical form, similar to a business analytics tool. Here again, one interesting point is that we use not only a commercial RDBMS but also MySQL; wherever it makes sense, we use MySQL. And for cases where customers bring a large amount of data to be presented, we use a columnar database. And it is not as though you can only see this information on the Zoho services portal; it can also be embedded in your own site or your own application. Then, like I said, the data can be pulled in either through standard interfaces and standard formats, or through plugins for third-party apps. To pull out the data and store it, we use a columnar database wherever the data volume is huge. These are some of the visualization options in Reports; you get different types of charting with it. The last use case is log analysis. The ManageEngine division of Zoho deals with IT monitoring, enterprise IT monitoring, and one part of that is analyzing log information that can come from a variety of sources: network devices, application logs, syslogs, various kinds of logs. You have to store all these logs in a scalable way, access them, pull the information out very quickly, correlate it across various records, and show those correlations in a meaningful way. At the end of it, for the user, a log entry could turn out to be a security alert, a security event, a security incident, some kind of anomaly in the system, or an impermissible access, someone trying to get at something they should not. This can also be used to generate compliance reports. Again, the storage for this uses a combination of technologies, including HDFS. So, that is about the five use cases.
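The Reports flow above, feed in tabular data and get a chart out, rests on a simple aggregation step. A minimal sketch of what sits behind a bar chart: parse a CSV, group by a dimension column, sum a measure column. The column names and sample data are made up for illustration; the product reads the same kind of input from files or third-party sources.

```python
# Chart-ready aggregation: CSV -> (dimension, total) pairs, largest first.
import csv
import io
from collections import defaultdict

CSV_DATA = """region,sales
North,120
South,80
North,30
East,50
"""

def chart_data(text, dim, measure):
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(text)):
        totals[row[dim]] += float(row[measure])
    # Sorted descending: the order a bar chart would typically render.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(chart_data(CSV_DATA, "region", "sales"))
# North totals 150 (120 + 30)
```

Doing this grouping in SQL against a columnar store, rather than in application code, is exactly why a column-oriented database pays off once the customer's data volume gets large.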
So, this is how it would look. If you want to search any log information you have collected, once you set up the log collectors you type in the strings you are looking for, and it pulls out that information and presents it graphically. It is very similar to what Splunk does, though Splunk does it at a much larger scale. And we also have APIs, so you can build search expressions of your own on top of the indexed log data. So, that is pretty much the five use cases I wanted to share with all of you. If there are any questions, I can answer them. But keep in mind the disclaimers I gave at the start.
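The search-and-graph workflow just described, type a string, get matches bucketed over time, can be sketched in a few lines. The log format and field layout here are assumptions for illustration, not the product's actual schema.

```python
# Splunk-style histogram sketch: filter log lines by a query string,
# then bucket matches by time so they can be graphed.
from collections import Counter

LOGS = [
    ("2014-05-01T10:01", "login failed for admin"),
    ("2014-05-01T10:02", "login ok for alice"),
    ("2014-05-01T10:02", "login failed for admin"),
    ("2014-05-01T11:15", "login failed for bob"),
]

def histogram(logs, query, bucket_len=13):
    """Count matching lines per time bucket (truncating the ISO
    timestamp to bucket_len characters gives hourly buckets)."""
    counts = Counter()
    for ts, msg in logs:
        if query in msg:
            counts[ts[:bucket_len]] += 1
    return dict(counts)

print(histogram(LOGS, "failed"))
# {'2014-05-01T10': 2, '2014-05-01T11': 1}
```

A spike in the "failed" bucket counts is the kind of signal that, in the real product, would surface as a security alert or feed a compliance report.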