Please do start. So, good afternoon. I'd like to first thank the Attorneys General and John Palfrey and all the members of the committee for the great work they're doing here to help protect kids online. I'm Peter Ferrioli, and I'm director of operations at Net Nanny. Has anyone out here not heard of Net Nanny? Okay, so let me dispel everything you think you know about it right now. Net Nanny is basically a parental control tool. We are a tool to help empower parents to block, filter, and monitor what their kids are doing online, across all the different things they do online today, whether that's blocking peer-to-peer downloads, blocking inappropriate content, blocking inappropriate contact, managing the time they spend online, who they're chatting with, what they're saying, et cetera.

Net Nanny was pretty much the very first internet parental control, back circa 1992-93, when everyone was using Netscape and Mosaic as their browsers, in those fun days. Our next version, Net Nanny 6.0, comes out next week, after two and a half years of having the same version out with only small point releases, so you will be the first group to hear about three or four of the killer features that we don't think anyone else is doing in this space, where we're ahead of the game. In just the last four years we have about 750,000-plus consumers using Net Nanny parental controls, and that doesn't take into account the years from 1993 to about 2003, when we had the first four versions out. I hope no one's still using those; if you are, contact me and I'll upgrade you. So we are a very flexible, powerful filter that lets families control what content coming into their house is appropriate, based on each family member's age. We use a patent-pending dynamic contextual analysis filtering engine. Now, what does that mean?
Well, that basically means that unlike the majority of filters out there today, which use list-based mechanisms that we find quite antiquated, we use an engine, an algorithm we developed, that allows us to analyze a page on the fly as it's loaded into the browser, determine its content, and then, based on a user's settings, whether they want to block, warn, or allow that page, apply that policy to that page and that content.

When I was doing these slides and got the template, they asked for some quantitative data, so here are just two recent items. When we talk about underblocking and overblocking in our industry, those are two of the key measures that get thrown around regarding how effective or ineffective a content filter actually is. You can see here that recently the IAA in Australia tested Net Nanny, and you can see some of the results: on correct blocking we were at 98%, with 3% overblocking on websites. And in a test just recently, we took seven competing products, applied the same list of 4,500 websites, and ran them through all the filters to measure accuracy, and we came out at about 96.2% correct blocking and 4.3% overblocking. That's just a little bit on the filter side of what we do.

I'm going to talk a little bit about Net Nanny 6.0, which comes out next week, and some of the things we're looking forward to. We partnered with the ESRB, so we will be the first desktop software client to actually block PC games based on the ESRB ratings and the content descriptors of those games. So you can choose to allow T-rated games, but if within that category you decide you don't want your children to play games on their PC that have sexual content, you can block just that; if you're okay with violence, you can allow violence. So we've based it not only on the ESRB age ratings, which you can select, but also on the content descriptors that the ESRB provides for games. That's one new feature in Net Nanny 6.0.

Another new feature in Net Nanny 6.0 is the ability to filter and block secure proxy websites. This is actually a huge leap forward for us. Over many years, at many of these conferences, including ones dealing with the NECC and technology, what you find is that parental desktop filters have a bad rap because everyone assumes a child can enter a proxy website, which they currently can with most of today's desktop filters in schools, and easily circumvent the filter. We have been aggressive with that behavior, and our new version will actually block secure proxy websites and be able to filter the content and understand what's going on. Currently we block ordinary proxy websites; we have an algorithm that can determine if a site is a proxy and block it. But we're taking that to the next step, and no one on the client side is doing this. This is done right now on the enterprise side; you can see it in some of the back-end appliance solutions with internet and content filters, but you haven't seen it on the client-side desktop yet.

Those are two of the major features. Another will be instant message alert and analysis. We've worked with, I wouldn't say partnered with, several cyberbullying experts in the US, a couple of them, and criminology professors, and we've taken logs and looked at what cyberbullies do and say. We've looked at years of material submitted to us by parents whose kids were involved in cyberbullying, and we developed an algorithm in Net Nanny that allows any instant message application at this point to send an alert if somebody uses a word or phrase, or a certain style, if a conversation goes back and forth in a context suggesting, say, that they want to meet, or involving racial hatred, hate speech, or any kind of harassing speech. We can alert parents via a real-time email that that's occurring, in context, and then they can have a conversation with their children.

I should step back a little. With Net Nanny, don't worry: we don't install in stealth; we're not a stealth-mode keylogger or anything like that. We're all about educating and open communication between parents and their kids, about taking responsibility for what they do online. Now we know, I think from Adam Thierer's great work back there on parental controls, that only one out of ten parents who have access to free parental controls today, on their cable, their DirecTV, their cell phones, their computers, actually uses them. So people say, well, Net Nanny is a subscription product, you have to buy something for $39.95; well, the fact is that 90% of people with access aren't using the free ones they already have. That says a couple of things: it says that ease of use and education are what's key about parental controls.

Lastly, we have a social network dashboard that we're rolling out next week, which will enable a parent, from any remote place they are, at work, on their desktop, to log into their Net Nanny account and see a little profile of all the different social networks their kid has gone to and what's changed there. We can flag items; we can show pictures. And we've worked with a couple of those social networks in particular. Thank you.

You are great. Thank you, I appreciate it, Peter. Stay up here.
I know it's particularly hard for those with very mature products like yours, where you've got so many problems you're trying to solve, to fit it all into five minutes, but I appreciate your trying. Questions for ContentWatch and Net Nanny? How about any of the... yes, please, let's get a mic over here, and please tell us who you are.

Oh, just one quick thing about that: we solve the alcohol and tobacco problem on social networks with Net Nanny. You don't have to worry about your kids being exposed to any of that on social networks today.

My name is Denise Pillow, and I have a small business called Kids Be Safe Online. I educate parents and educators on internet safety and everything about kids and technology. My question to you: you talked about the IM alert analysis by keyword selection. Who sets those keyword selections?

Well, keywords are actually just a small part of it. We've taken logs over many years from various experts, whether based on online predatory behavior or bullying behavior, and we've built the algorithm around a lot of real-world incidents that occurred, the triggers that led up to them, and the patterns that parents wish they could have seen in the chats before harmful incidents occurred to their children or things got out of their control. So it's based on a lot of things: the timing between messages, how many times messages are coming from someone; certain keywords are one part of it, for sure.

So are parents able to make selections themselves, and do you also have recommended selections? No, you can't enter your own keywords into it, not currently, if that was the question. Thank you.

I see John Morris has his hand up, but may I defer to somebody who has not yet had a chance, or has had fewer chances?
We'll start back here and then come to John. So we'll go Adam, then the gentleman from ChatSafe, and then up to John.

Just real quick, Peter, you answered this question for me earlier; maybe you can just tell the crowd how Net Nanny handles the new Incognito mode in the Google Chrome browser, as well as InPrivate Browsing in Internet Explorer.

That is a really good question right there. We've been getting that question a lot; we talked to a reporter at the Times in the UK. Let's just put this to rest right now: with Net Nanny, those modes effectively do not exist. Just because they're in Incognito mode does not mean, if you have Net Nanny installed, that you can't still track wherever they've gone through your reporting. We work at the port level, whereas these browsers are working at the browser level: they're basically looking at your history, your cache, your cookies, et cetera, and wiping those. So in Incognito mode all that stuff is not tracked, or is wiped away, but with Net Nanny installed you can still track all the websites visited. And we do have plans to implement a plug-in that would lock the private mode out of a browser. Today with Net Nanny, once you install it, you can lock Google into SafeSearch mode, so your kids cannot get out of SafeSearch; we will be doing the same thing when it comes to browsers and their private-browsing capability.

Great. And I should say, if John Bershetter or others from Google want to speak to the Chrome issue at any point, just raise your hand. Sir?

Jim Carmichael, Carmichael Technologies, ChatSafe. Just a question for clarification: if a child moves to another desktop, at a friend's home down the street or wherever, how does Net Nanny handle that situation?
Well, if my son goes to a neighbor who doesn't have Net Nanny, I recommend they put Net Nanny on their computer.

That's a good sales tactic right there, very well done. Let's go to John Morris, because he's been waiting, and then Larry, and if anybody from the TAB wants to raise their hand, I'll look over there next.

But let me just seriously address that for one second. Everyone knows about cloud computing and software as a service. We haven't heard a lot about that today, and I'm not going to say that we're quite there yet, but to address that issue in particular, that's the way it looks like it's going to be addressed: no matter where that child goes, hopefully there's some type of proxy service, if the school is using it, if ISPs are using it, et cetera, so that they'd be blocked everywhere even if they didn't have a local parental control.

You're not saying you have a cloud computing solution? I am not saying that, just to be clear.

John Morris, CDT. Your product allows various reports to go back to parents about their kids' activity. How many of those reports go through your servers, and what information do you retain about that, if any?

That's a great question.
So what we do when a profile is set up on Net Nanny is basically sync that profile to our back-end servers. We only keep any data for 30 days; every 30 days all data is purged. We basically only keep chat logs if that feature is selected, if you want to log chat, for 30 days, and if you would like, any URL that was visited we keep for 30 days as well, and then those are purged.

(You may have to turn the mic on from the bottom.)

Let me just say that the reason we do it on the back end, the reason we sync it, is so that if you have multiple computers you can sync that profile out to wherever your child is, wherever Net Nanny is.

Hi, Larry Magid, ConnectSafely. Depending on the data you look at, anywhere from 50% to far fewer parents use filtering technology like Net Nanny. You guys have been on the market for many years; it's a very mature technology. Why do you think the uptake of the technology across the board is not as high as perhaps some people might expect or might want?

Well, there are a couple of things: education and ease of use, if I can put it in those simple words. That's why, with great education like the things you and some of the NGOs are doing, and with ease of use of parental controls... For instance, next week we release a whole new feature where you just select what age your child is, and you'll get a pre-canned set of policies. So if you choose a young child, you put them in a walled garden; if you choose a teen and you want to enable them to use social networks and chat, all those policies will be preset. So we're trying to make it an easier-to-use product, and we're also trying to educate parents about using the product.

But what about the possibility that a certain number of parents have thought it through but have decided they don't want it, they don't need it, it isn't necessary?
They have other means to keep their kids from doing inappropriate things. I mean, have you looked at those numbers as well?

Yeah, but when we talk about filtering content, this is pretty straightforward: a few of the top revenue generators online are pornography, gambling, and prescription pills. Now, I don't know about anybody else out there, but I don't want that content being pushed at me all the time. So when I installed the filter in our home and my son asked, why'd you put this on there, to stop me from doing whatever, it had nothing to do with my son; it has everything to do with protecting the family from content we don't want coming into our house. I think everyone tends to frame filtering as "this is going to stop Johnny from going to playboy.com," versus "this actually allows us to uphold the morals and values in our household and control what comes into it." It's just a different way of looking at it, I think.

Go ahead, Hany Farid. And I think Wendy Seltzer is trying to get in on the conversation; be my guest, so we'll go in that order.

So you have three to four percent overblocking. What mechanism do we have, once something is overblocked, to unblock it?

That's a great question. We have an override mechanism: either the child using the product is given a separate override password by the parent, or we can email the parent in real time with the fact that their child wants to access a site that's been blocked and maybe shouldn't have been. Maybe CNN today is running a story on pornography and the word "pornography" shows up 20 times on CNN, so our filter blocks it; if that day a child needs to get to that article on CNN, they can override it instantly with a password from their parent, or via an email sent to the parent in real time, so the parent can do it remotely. Hany, is that responsive?
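The block-or-override flow described here, an on-the-fly contextual score over the page text plus a parent-approved override, could be sketched very roughly as follows. The term lists, the scoring rule, the threshold, and every function name are illustrative assumptions for the sketch, not ContentWatch's actual patent-pending algorithm.

```python
import hashlib
import re
from typing import Optional

# Hypothetical category term lists; a real contextual-analysis engine is far richer.
BLOCKED_CATEGORIES = {
    "pornography": {"pornography", "porn"},
}

# Words suggesting a news/reporting context, which discount the score
# (the CNN-article scenario from the discussion).
NEWS_CONTEXT = {"news", "report", "story", "journalist", "investigation"}

def page_score(text: str, category: str) -> float:
    """Density of flagged terms on the page, discounted when the page
    also reads like news coverage about the topic."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    hits = sum(w in BLOCKED_CATEGORIES[category] for w in words)
    score = hits / len(words)
    if any(w in NEWS_CONTEXT for w in words):
        score *= 0.5  # contextual discount: reporting, not the content itself
    return score

def should_block(text: str, category: str, threshold: float = 0.02) -> bool:
    return page_score(text, category) >= threshold

class OverrideStore:
    """Parent-issued override password, kept only as a salted hash."""
    def __init__(self, password: str, salt: bytes = b"demo-salt"):
        self._salt = salt
        self._digest = hashlib.sha256(salt + password.encode()).hexdigest()

    def allow(self, attempt: str) -> bool:
        return hashlib.sha256(self._salt + attempt.encode()).hexdigest() == self._digest

def fetch_decision(text: str, category: str, override: OverrideStore,
                   override_attempt: Optional[str] = None) -> str:
    """Return 'allow', 'block', or 'override-allow' for one page load."""
    if not should_block(text, category):
        return "allow"
    if override_attempt is not None and override.allow(override_attempt):
        return "override-allow"
    return "block"
```

The design point of the sketch mirrors the answer above: the decision happens per page load rather than against a static URL list, and an overblock costs the user only an override step rather than permanent loss of access.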
Wendy Seltzer. Thanks. Since the first question I had was already asked, about how many parents might actively be choosing not to use these products, I'll move on to another one, which is: how do we prevent these products from preemptively blocking the new and the unseen? As you say, you're developing newer, better technologies to block the proxies and the circumvention attempts, and the places where kids might go to sites that embed content from other sites, or include content such that the URL up top appears clean but pulls in pieces from other sites the parent has chosen to block. How do you avoid blocking new uses of technology?

Well, I don't think I understand your question completely, but we have a dynamic engine, basically. So say an hour ago someone posted on their blog about, let's just take any one of the categories we have predefined that we block on, say hate speech, or say you've decided you don't want your kids learning about drugs or how to make meth, so you block that category, and somebody posted a blog about it an hour ago: when that page loads into the browser, it gets blocked on the fly, because we're able to analyze the page and categorize it. And if something is overblocked, we're not that concerned. I mean, we don't want to overblock things, but we would be more concerned if we were underblocking. Overriding an overblock is easy, a click of a button to get to the website; if we underblock, that's bad. We're trying to prevent the exposure in the first place.

Just a tiny follow-up, and then we'll go on. The question was: do those overblocks interfere with web 2.0-ish content?

Not with our engine. With a list-based, antiquated filter it does: if I'm using a filter that relies on three million known bad porn URLs and tomorrow something is different, then it's a problem.

All right, please join me in thanking ContentWatch, Net Nanny. And if we could welcome our friends from Symantec. Thank you. And just to be clear, as Symantec sets up: we've moved into a category we've called here "filtering, auditing." Again, I'd urge you to look at the submissions related to what problems each of the providers say they're solving; all but one, I think, were checked off in the Net Nanny submission. So the fact that we categorized it this way isn't meant to limit, or in fact create any expectation about, what it's solving. Sir?

Hi, my name is Keith Newstadt. I'm a software developer at Symantec working on Norton Family Safety. Oh, sorry, can you hear me now? Is that better? So again, I'm Keith Newstadt, a software developer at Symantec, working on Norton Family Safety. We feel that Norton Family Safety takes a different approach to parental controls, in large part by focusing on providing parents with the tools they need in order to foster a positive relationship with their kids in the context of online safety. So, we all know why we're here.
We all know there are a lot of different dangers facing kids as they go online, anything from accidentally viewing web content that's not appropriate for them, to being targets of online predators or cyberbullies. We feel that in combating this there are a couple of approaches we need to take. Certainly we need to have activity controls at the client, on the child's machine, to provide boundaries: things they're not supposed to do, versus things they're expected to do, should be allowed to do, and should be encouraged to do. But at the same time, we feel it's really important to give the parent visibility into the child's online life.

So, for example, suppose my daughter goes to Google and starts searching for things associated with eating disorders. It's probably less important to me that these searches get blocked than that I'm informed about them. The right response is basically to give me the information I need so I can open that kind of dialogue with my child.

Now, this idea of visibility is kind of a tricky one, because parents these days don't have a whole lot of time, and they're not able to be physically present at the child's computer whenever the child is online. And of the parents we've talked to, a lot of them don't want to be, even if they could; they want their kids, particularly the older kids, to feel they have a sense of privacy in their online life, whether it's browsing the web or talking to their friends. So we feel it's important that a parental controls application like Norton Family Safety gives the parent the granularity and configuration to decide the appropriate level of blocking or monitoring for the kid. Maybe for a younger kid we monitor everything and block everything that's inappropriate.
Maybe for an older child we monitor less; we give them more privacy, more leeway to make their own decisions, but we still want to be notified when something happens that might be an indicator of something we need to address. So in the end, the goal is to provide the parent, and the child too, with tools to create a relationship that lets them together make sure the child is safe online.

This is how it would work. The parent goes to the Norton Family Safety website, sets up an account, and defines their family. Then the parent sits down with the child, and together they define the rules. So this is full disclosure: the child knows what's expected of them and what kind of monitoring is going to be in place. This is not spyware. In the meantime, the child also gets a sense of ownership of the process; they're there to influence the rules, and there's a certain level of buy-in, since they've agreed to these rules. When they go back to their computer, they can at any time view that contract they agreed to, which defines what their parents' expectations are of them when they're online.
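The rules "contract" described here, with age-appropriate defaults and a child-readable statement of the agreed rules, could be sketched roughly like this. The class, the category names, and the block/notify/allow actions are all illustrative assumptions; the actual Norton Family Safety feature set was, by the speaker's own account, still in development.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical policy actions; real products define their own vocabulary.
BLOCK, NOTIFY, ALLOW = "block", "notify", "allow"

@dataclass
class ChildPolicy:
    name: str
    age: int
    rules: Dict[str, str] = field(default_factory=dict)  # category -> action

    @classmethod
    def with_defaults(cls, name: str, age: int) -> "ChildPolicy":
        """Age-based defaults: younger children get blocking everywhere,
        older children get notification and more leeway."""
        strict = age < 13
        categories = ["adult-content", "social-networks", "chat"]
        rules = {c: (BLOCK if strict else NOTIFY) for c in categories}
        return cls(name, age, rules)

    def decide(self, category: str) -> str:
        """What happens when the child visits a site in this category;
        anything not covered by the agreed rules is allowed."""
        return self.rules.get(category, ALLOW)

    def contract_text(self) -> str:
        """Child-readable statement of the agreed rules, the 'full
        disclosure' contract described in the presentation."""
        lines = [f"Rules for {self.name} (age {self.age}):"]
        for category, action in sorted(self.rules.items()):
            lines.append(f"- {category}: {action}")
        return "\n".join(lines)
```

The point of keeping the contract as explicit data is that the same object can drive both enforcement on the client and the plain-language view the child sees, so what is enforced and what was agreed cannot drift apart.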
So now the child goes back to their own computer and engages in their usual activities: browsing the web, searching, chatting with their friends, using social networks. In the meantime, the parent has the ability, from wherever they are, whether it's another computer in the home or maybe from work, to get a sense of what's going on in the child's online life and to get notifications if something potentially problematic comes up.

Suppose the child goes to a website they're not supposed to, something they've agreed they're not going to visit. Well, the parent is going to get a notification and has the opportunity, and the information, to open up that dialogue. At the same time, the child is informed that they've done something that transgresses this contract they have, and they have an opportunity to open up the dialogue with their parent about it as well. Maybe for a younger child, they actually get blocked; they can't see that website, and they get an opportunity to, say, type a note to their parent: sorry, I didn't know what I was clicking on, it was an accident. That note shows up in the logs on the parent's console, so the parent gets a little more context. Or if it's an older child, maybe the parent has decided to give the child the leeway to make their own decision. The parent will still get a notification, and the child will still get a notification on the client saying: you're doing something that might not be appropriate; are you sure you want to do this? And again, the child has the opportunity to open up a dialogue with their parent: this is the reason I'm going here; or maybe it was an accident, I clicked on a link and didn't know where it would go; or maybe there's actual content I legitimately need to access for schoolwork.

They also have the opportunity to request permission to see that site if they're blocked. And again, the parent, from wherever they are, maybe at work, gets the notification; they can see what's going on; they get the context; there's that dialogue that's opened up; and they can choose whether to allow or deny the request. Again, it's about providing the parent with the tools they need to understand what the child is doing online and whether there's something they need to do to personally intervene.

And then beyond that, again, there's a focus on providing parents with the tools they need in order to parent: access to forums where parents can share ideas and issues, and access to other online content that helps parents with this task of parenting in an online world.

Great, thank you so much for this. Do you mind if I use the moderator's prerogative and ask the first question? We've had the target of our inquiry again on social network sites; we're talking about it more broadly here, and I appreciate very much that you're looking at instant messaging and other kinds of environments we might be concerned with. But can you maybe just talk to the social network example? Let's imagine that I, with my child, set up this arrangement to limit what he or she could do with the social network. What are the kinds of constraints this product would place on his or her activity?

So I'm going to have to apologize up front: we're still somewhat in the depths of development, and we're still haggling over what features are in and out, so there are some things I'm not going to be able to be specific about. Okay. And this is one of them; but for social networking... yeah, I can still answer. Okay.
Okay, so for social networking. Sorry, so for social networking, certainly there are going to be parts of this contract that define, for example, which sites the parent is comfortable with the child visiting and which they're not. We're also going to be monitoring to see which sites the child is using, not just for browsing, but which sites they actually have accounts on, and we'll be collecting information, again configurable by the parent, about what the site is, what the URL is, what their screen name is, the age they're purporting to be on that site, that sort of thing. So again, the goal is to give the parent enough information to create that dialogue. Maybe the parent has blocked access to a particular social networking site; the child has the opportunity to ask for permission. Maybe the child has permission to get to a certain social networking website but has created multiple accounts when they're really only supposed to have one. Well, the parent is going to know this and is going to have the opportunity to engage in that conversation, and in particular to make sure that they're on the child's friend list, that they've got visibility into what's going on.

Got it, thanks very much. I'll start with Teresa, unless there's someone... oh, Scott Bradner has not yet spoken, which is extraordinary. Other than having people come to the mic, we'd like to hear from you; tell us who you are, Scott.

Scott Bradner, one of the TAB members; I work for Harvard. How much research have you done to gauge the level of ongoing parental involvement in this sort of thing? It could take quite a bit of ongoing effort. How much effort is required? Or rather, how many parents, what percentage of parents, stick with it?

Well, we're a version-one product, okay, right?
So the answer is: not much yet. We have done research in talking to parents about what they're looking for, and among the things we found: a question came up earlier about why parental controls aren't being adopted more, and I think a lot of the reasons that came up apply, but one that hasn't come up yet is that parents are hesitant to be viewed as spying on their kids, particularly as kids get older. Another reason is that the amount of information you get when you monitor in detail what a kid is doing is a little overwhelming, particularly to a parent who might not be as technically savvy as the child. A list of URLs and search terms, and they don't necessarily know what it means; it's a little overwhelming. So what we've gotten is that parents of younger children really want to be able to constrain what the kids can do and to have full visibility into it, whereas parents of older children want to give the child more free rein, more freedom, more privacy, but they still want to get notified. They don't want to see everything, but they still want to be notified when there's something they might want to address. Again, the example of the girl searching for terms associated with eating disorders: I don't want to block that, I want to know about it.

Yeah, the reason I ask is this: I completely agree this is the kind of thing parents would say they wanted; the question is how well they stick with it, and because your product is new, it's hard to tell. But it's certainly one of the areas of worry, something to monitor.

Teresa Polaris, from Polytechnic University of NYU. You had mentioned in one of your slides that, in stark contrast to other approaches, you are emphasizing your reliance on parental involvement and monitoring. How are you doing that in a way that's different, say, from the earlier example with Net Nanny?

I think it's in large part the way we approach the problem. The context of an online world is new: children going out and interacting with other people online, engaging in behaviors that might not be appropriate, exchanging information that might not be appropriate. But this is kind of an old problem; it existed before the internet. It's basically parents asking themselves: I know how to parent my kid when I'm there; how do I parent my kid when they're out in the world? So a lot of it is about the approach, and a lot of it is about providing tools. It's not about installing an agent on the child's machine and laying down the law, saying these are the rules I'm imposing on you; it's about making sure there's a continual back-and-forth discussion between the parent and the child, and having the software facilitate that, so that what the child is doing online, and how effective and appropriate the rules are, is a continual conversation.

I guess my question was relating to how the package supports that, though.
Well, you know, and again I'm going to apologize for being vague, but: with certain workflow items, like the child being able to see a natural-language view of what their contract is and what they've agreed to, which experts in the field of online parenting say is very important; and also workflow that allows children and parents to communicate within the context of the parental controls. The web page doesn't just come up and say, you can't do this; it comes up and says, you're doing something you're not supposed to do. Why is that? Let's engage.

Brian Levine of UMass. Thank you. I want to really applaud the approach of integrating parents into this; I think that's great. But I wanted to ask you a question about data security. It seems the information you gather about children and so on is stored in a central location with Symantec, and I guess that's because you want parents to be able to access those logs from work and the like. But how long is that data kept, can parents delete it upon request, and is that information ever sold to third parties?

So again, on the specifics I'm not going to be exactly sure what we'll come up with. Certainly the data is going to be purged regularly. The data will never be sold. This is not the only scenario where Symantec keeps customer data in the cloud for customers; it's not even limited to consumer products; we've done it for a long time in enterprise products too. So this is something we're familiar with. We certainly don't collect any information without permission; the parent always has the option to turn off, at a pretty good granularity, what's being stored and what's not. And we've got lots of experience in how to make sure that data is secure when it's in our possession.

So, hypothetically, if there were a data breach at Symantec, would you notify your customers?
I'm going to say this, and maybe I'm going to get slammed: I don't believe that's happened yet.

That you'd notify them, or that you've had the breach?

That we've had the breach. But I would assume yes; I would assume yes, with the caveat that I'm a software developer.

All right, the crew from Symantec steps up. Thank you. This is good clarity. Jeff Smith: has that information ever been subpoenaed?

The questions are getting harder. Well, that's actually an easy one for me: I have no idea. Do you guys care to respond? That may also, for the lawyers, be hard to answer even if it were so.

Professor Harry Lewis, author of Blown to Bits, which, if you have not read it, you ought to. Thank you, John. I'm a computer science professor at Harvard. I'd just like to offer, and this is not aimed at you, it's aimed at a number of the comments made in the discussion of why parents don't use these monitoring tools, whether covert or overt as yours are. I certainly agree that of the two, the open posture you've taken is preferable. But there are parents who simply believe that the developmentally healthier way to have a trusting relationship with your child is to have the child understand that the child has certain privacy that the parents will not, in fact, invade; and that attempts to monitor, since they will inevitably be circumvented by going down the street or whatever, are better ceded to the child from the first. The discussion the parent and the child need to have is about the responsibilities placed on both parties by that arrangement. So it's not so much fear or avoidance or the other negative externalities that have been associated with this; it's actually an attempt to help children grow up.

It's worth noting that in addition to being a parent, he is also the former dean of Harvard College.
So we had 6,400 such charges under his care. Do you want to respond to this, or others who may have been involved?

All I can say is that I totally agree. We very much agree.

Great. Any other questions? I feel the energy sapping slightly from the room after all these great presentations, but I will take one more, from Larry, and then we'll move forward.

First, I want to ditto that last comment; that was the nature of my question earlier. The other issue is that to the extent this is going to be effective, it's going to be effective in families where the parent is actively engaged in the children's lives, and ironically that's probably the place where it's least needed, or where some form of protection is least needed. I know you're an engineer, and I know you can't solve every problem in the world, but I wonder if you could at least talk about the child who comes from a home where the parents are unable, unwilling, or simply ill-equipped to provide supervision: the very homes where, to the extent that online predation is a risk, there apparently may be some risk. What do we do about the folks who aren't going to rush over to Best Buy to spend money and spend time to supervise their children? There are a lot of parents in that category.

So you're asking how we protect children whose parents don't buy your service? Maybe that's not a fair question for him, so I'm going to redirect it, if that's okay. Can I ask Eszter Hargittai? Do you mind being on the spot for a moment if I rephrase the question for you? Actually, I'm sorry; first, let's thank Symantec. Thank you. And we'll welcome McGruff to the podium while we get set up.
I'm going to ask Eszter Hargittai, someone who, like danah and others, has done a huge amount of study particularly on this question of the participation gap in terms of the skills that kids have, and whether kids along SES lines or otherwise may be more or less at risk, if you're willing to go down that road.

It's definitely the case that studies show differences in young users' skills. First of all, there are huge differences in what younger users understand about the web. While on average younger users are more knowledgeable than older adults, there are huge differences among them, and as John cited, these tend to run along lines of socioeconomic status, race and ethnicity, and gender. So you can't assume that all young users are very savvy, and it is systematic as to who is more or less savvy. This does raise concerns, because one way I measure SES is parental education, and what you find is that kids in families where parents have lower levels of education themselves understand the web less. So the fear there would be that the parents would understand these risks less, and the kids themselves would be more prone to those issues. But I'm afraid I haven't really worked on solutions to those; my area is more about trying to figure out what we can do through training, and that's actually a really tricky issue itself.

Eszter has tons of great data, relatively fresh from the field, and to the extent you want to dig into it, I know she'll be around. Thank you so much for being here.

Hi, my name is Marty Schultz, and I'm with McGruff SafeGuard. I want to reiterate what the Attorney General said this morning, which is that no solution is perfect, and we need to empower parents to keep their kids safe online. That's what we do. We give parents a simple, free, easy-to-use tool that empowers them to keep their kids safe. The problem, we believe, is not just age verification, because even if we verify the kid's age,
they'll figure out some way to get around the verification. The problem is not the internet or social networks; we can't stop progress, and social networks are beginning to become part of the fabric of society. The problem is not the existence of bad guys out there; there are bad guys, and that's just a reality we have to live with. The problem is parental ignorance and parental apathy. Parents don't know how to keep their kids safe online; that's what we're all fighting here.

So our solution is to give parents the ability to monitor their kids online, just like they monitor their kids in the real world. We have an intelligent virtual parenting service. It's free, it's from a trusted brand, and it alerts parents to potential danger. We knew that in order to get massive desktop adoption out there among all people, it had to be free, it had to be trusted (the McGruff brand is very well trusted), and it had to be, as we've all pointed out, very, very simple to use.

So how do you use the product? You sign up for a free account at the McGruff website, then you download and install an applet on your kid's PC. McGruff monitors everything coming and going on the kid's PC, so we see exactly what the kid is doing, and if there's ever a problem, the parent is alerted via email or a cell phone text message. The parent signs into the website and sees what the kid is doing. We monitor everything: email, websites, chat, social networks, you name it.

Typically, when a parent first signs up, they say what kinds of behaviors concern them about their child. Do they want to watch out for sexual predators? For drugs? For self-destructive behavior? They put in that profile, and McGruff SafeGuard will watch for those activities. Here's an example from one of our customers, where the kid used the term "hot ice" in a conversation.
What McGruff SafeGuard did was notice that's a term for crystal meth and send a cell phone message to the parent saying, hey, you had better get onto the website and look at what your kid is talking about, because this could be potentially dangerous. The parent logs in, sees the conversation, and then has a conversation with the child to avoid a potentially dangerous issue with crystal meth.

Now, what we noticed is that we have tens of thousands of parents visiting the McGruff SafeGuard website every day to see what their kids are doing. We realized we have a community of parents who are watching out for and really care about their children. So we said, why don't we take the power of that community and use it to stop sexual predators from targeting children? So we added another feature, now covered by our patent, which we just got a few months ago. Let's say John sees a predator talking to his daughter. He reviews the conversation and decides, yes, this is a real predatory incident, and pushes a little button saying report predator. At that point we put John through a vetting process, calling him up and talking about it. Once we've determined this is a real report, not only do we block that predator from communicating with his daughter; we block that predator from talking to any other child protected by McGruff SafeGuard. At the same time, we initiate a real criminal investigation by passing this information to the National Center for Missing & Exploited Children. So by working together and harnessing the combined power of all these parents watching over their children, we've turned the tables on the sexual predators.
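As a rough illustration only, the report-vet-block flow just described might look like the following sketch. Every name, data structure, and step here is hypothetical, since McGruff SafeGuard's actual pipeline is not public; the point is just the shape of it: hold the report until vetted, then block network-wide and forward onward.

```python
# Hypothetical sketch of the report-and-block workflow described above:
# a parent's predator report is held until vetted by phone; once
# confirmed, the reported screen name is blocked for every protected
# child and the report is forwarded to NCMEC. All names are invented.

blocked_screen_names: set[str] = set()   # enforced for all protected kids
pending_reports: list[dict] = []         # reports awaiting phone vetting
forwarded_to_ncmec: list[dict] = []      # confirmed reports sent onward

def file_report(parent: str, screen_name: str, transcript: str) -> None:
    """A parent pushes the 'report predator' button."""
    pending_reports.append(
        {"parent": parent, "screen_name": screen_name, "transcript": transcript}
    )

def vet_report(report: dict, confirmed: bool) -> None:
    """After calling the parent: confirmed reports block network-wide."""
    pending_reports.remove(report)
    if confirmed:
        blocked_screen_names.add(report["screen_name"])
        forwarded_to_ncmec.append(report)  # e.g. via a CyberTipline form
    # Unconfirmed reports are simply held; a pattern of reports against
    # the same screen name might instead notify the originating site.

file_report("john", "predator123", "chat transcript ...")
vet_report(pending_reports[0], confirmed=True)
# "predator123" is now blocked for every child protected by the service.
```

The key design choice the speaker describes is that one parent's vetted report protects all families at once, which is what the shared `blocked_screen_names` set stands in for.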
They're no longer swimming in the sea of anonymity, as they are today. We have tens of thousands of parents watching out for them, able to stop them, report them to the police, and protect their kids. We believe parents need to monitor what their kids are doing online the same way they do in the real world. We believe that monitoring needs to be simple and smart. And we believe that the social networking firms and the chat firms who provide these ways for kids to talk to each other need to cooperate with security firms like ourselves, Net Nanny, and SpectorSoft. In essence, our request is: really open up channels so that we can help parents see what their kids are doing online. If we had one major request here for some of the social network companies (excuse me while I check my notes), it's this: if the companies in the parental control market and the people who provide social networking can cooperate to provide these windows for parents to see what their kids are doing, it will make the kids safer online. And I know in some cases it's a business threat for parents to see everything their kids are doing, but we think that risk is worth taking to keep the kids safer. Thank you so much.

Questions for McGruff? We'll maybe start with Scott over there, and then we'll do the Larry and John show over here after that.

The same questions that were asked of the other folks who do central collection of client data. How long do you keep it? Who has access to it? Is the data stored encrypted?

It's kept for 30 days. Parents can delete it at their own volition. It's never been compromised, and it's never been subpoenaed.

If you're using it indirectly to engage law enforcement to track down predators, do you provide any of that data to law enforcement?
We've been advised by the National Crime Prevention Council and our other law enforcement advisers, because we partner with them, that we are a remote storage facility, and if the parent chooses to give this data to the police, they can.

But that wasn't the question I asked. In the context of bringing in law enforcement, do you provide any of this data to them? Or do you just provide a screen name for the predator?

We provide the information to the parent. We assist them in filing the police report, and then we let the parent gain access to their own data. We basically give them the tools and then get out of the way, so as not to mess up the investigation.

So you don't file a report; the parent does?

The report is filed on behalf of the parent, by us. The parent initiates it through us. In other words, they push the button and our servers talk to NCMEC's servers, and the parent is the one whose name is on the report.

I'm just trying to ascertain how clear it is what information is being provided to law enforcement when the parent presses the button. Does the parent have control over that, and if so, how clean is the control?

It's the standard form that NCMEC has up on their CyberTipline, so it's the same information that's collected there.

Okay. Other questions from anybody? Sure, John Morris from CDT. So, in this collaborative process, if you receive a report that a certain user is a predator, and you receive it from two or three different McGruff users, but you're not able to reach them to verify it, what will you do with that kind of report? And how much of a denial-of-service risk is there in that system?
If a parent chooses not to go through the vetting process, or aborts the vetting process, which is in essence choosing not to file a police report, the report is basically held. If we notice a pattern, we will certainly contact the vendor from which that conversation came, be it Facebook or MySpace or MSN or whoever, to notify their security desks. But that's not enough for us to actually turn the report over to NCMEC or law enforcement. I think we've had about eight reports come in to us, and about three have been forwarded to NCMEC. I don't know what happened from that point.

Out of tens of thousands of users. Okay, do we have further questions? Back here, Jeff.

Not-for-profit: is there a revenue model here, or is this all funded by the National Crime Prevention Council?

We are an affiliate of the National Crime Prevention Council. McGruff SafeGuard is a for-profit corporation. The business model is an upgrade model: the product is given away for free, and some percentage of the parents choose to get some additional features for a small amount of money. Okay.

So just to clarify, Jim Carmichael, ChatSafe. Step number one there is the result of the parent being diligent enough to monitor and see that there are telltale signs in the exchanges?

Our software has linguistic analysis built in to detect predatory trends. So we would send an alert to the parent saying, we know you haven't come to the website for two months because you think everything's okay, but we just detected an incident that follows the patterns of grooming.

Excellent. All right, are there any other questions in the audience? Did you say, I'm sorry,
I'm Sahara Byrne from Cornell University. Did you say that it's humans who are monitoring, or is it some sort of software?

Our software does linguistic analysis to monitor it, to warn parents that they had better go in and verify what we're assuming is going on.

Thank you very much. Again, we welcome Keibi up. We've made up a few minutes, which is sort of amazing. Hopefully the energy will return to the room; anybody who needs to do jumping jacks, please feel free, and there's more coffee in the back. Let's just keep on and encourage people to stick with us.

Okay. My name is Paul Reamer. I'm the CEO of Keibi Technologies. I'd like to thank John and the rest of the task force for inviting us to talk with you today. Also, thanks, John, for limiting us to three slides; if we had our own way, we'd really have PowerPoint overload.

I'm here today to talk about Keibi, a company based in San Francisco. We were formed two years ago this month, when we saw how social media publishers were having trouble coping with the tremendous amount of user-generated content coming into their sites. Now, while these publishers were happy to get all this action, including the revenue that goes along with it, the methods they put in place to protect their users from inappropriate content were sometimes an afterthought, and frankly not very effective and not very scalable. When you think about it, these companies could meet all three of their business objectives, which are really safety, member experience, and monetization, if they could do one simple thing: enforce their own terms of service. After all, the terms of service expressly prohibit all the types of things that everybody in this room is concerned about, so enforcing them really would be the key. Recognizing this, Keibi's mission has always been quite simple.
We want to help social media companies by providing them with the best technologies for enforcing their terms of service. So in September of 2006 we started developing our Keibi moderation suite, and two years and about 25 man-years of development later, Keibi is becoming the de facto independent standard for content moderation and terms-of-service enforcement.

At its heart, the Keibi moderation suite is a workflow solution that enhances human moderation teams' ability to get through content accurately and quickly. These human moderation teams can be employees of our customers, they can be outsourced firms either in the US or offshore, or they can even be Keibi people provided as part of a complete solution to these customers.

Here's how it works. Our customers connect to our system through a very simple API; it typically takes just a few days to get this integration working. Content is then fed into the Keibi system via the API, where it's analyzed and queued up for action by the human moderators. The decisions of the moderators, such as accept, reject, blacklist, or whitelist, are then passed back to our customers' servers, where they can take whatever action they deem appropriate, which is typically to delete the content, remove the user, and in extreme cases notify authorities that a user has gone bad.

But the really interesting work is happening behind the scenes, within the Keibi application servers, so let me explain the analysis and scoring a little more. First of all, we take in all types of user-generated content: text, videos, and images. Along with that, we also get a unique user ID from our customer, which is associated in our system with the items that have been uploaded. It turns out that knowing as much as we can about the user who is uploading the content is more important to our technology than looking at the individual pieces of content, which we do as well. Next, we automatically grade each piece of
content against the typical abuse categories, such as cyberbullying, racism, and pornography. We continually adjust the user's holistic Keibi score, looking at other signals from the social graph, such as their friends' scores, community flags of their content, and their own past violations. The weights can be adjusted on a per-customer basis. For example, one of our customers has discovered that most of their pornography is uploaded by males ages 15 to 25 from a particular country, so we're able to bump up the Keibi score for that demographic in order to ensure that such content is more likely to be put in front of the human moderators quickly. The Keibi system also learns over time and gets better at helping companies enforce their own particular terms of service.

Since launching our system at the end of last year, Keibi has acquired 15 customers across the social network spectrum, and we are processing several million pieces of user-generated content per day with our tool. We're in conversations with virtually all the established social networks, as well as quite a few startups, safety-focused social networks that need a technology like ours in order to even launch their sites. Our goal is to have about 30 customers by the end of this year.

So what does all this analyzing and scoring do to foster a safer social media environment?
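As an aside, the per-user holistic scoring just described, where content grades are combined with social-graph signals under per-customer weights, can be sketched as a simple weighted combination. Every weight, cap, and field name below is invented for illustration; Keibi's actual model is proprietary and not public.

```python
# Illustrative sketch of a weighted "holistic" user score of the kind
# described above: each signal (content grade, friends' scores,
# community flags, past violations, demographic adjustment) contributes
# to one number used to prioritize the moderation queue.
from dataclasses import dataclass

@dataclass
class UserSignals:
    content_grade: float      # 0..1, automated grade of the user's uploads
    friend_score_avg: float   # 0..1, average score of friends in the graph
    community_flags: int      # times the community flagged their content
    past_violations: int      # confirmed prior terms-of-service violations
    demographic_risk: float   # 0..1, per-customer demographic adjustment

# Per-customer weights; a customer seeing porn uploads concentrated in
# one demographic might raise "demographic_risk", as in the talk.
DEFAULT_WEIGHTS = {
    "content_grade": 0.4,
    "friend_score_avg": 0.15,
    "community_flags": 0.1,
    "past_violations": 0.2,
    "demographic_risk": 0.15,
}

def holistic_score(s: UserSignals, w=DEFAULT_WEIGHTS) -> float:
    """Combine signals into a 0..1 priority score (higher = review sooner)."""
    score = (
        w["content_grade"] * s.content_grade
        + w["friend_score_avg"] * s.friend_score_avg
        + w["community_flags"] * min(s.community_flags, 5) / 5
        + w["past_violations"] * min(s.past_violations, 3) / 3
        + w["demographic_risk"] * s.demographic_risk
    )
    return min(score, 1.0)

users = [
    UserSignals(content_grade=0.1, friend_score_avg=0.2,
                community_flags=0, past_violations=0, demographic_risk=0.1),
    UserSignals(content_grade=0.9, friend_score_avg=0.7,
                community_flags=4, past_violations=2, demographic_risk=0.8),
]
# Sort so content from the riskiest users reaches moderators first.
queue = sorted(users, key=holistic_score, reverse=True)
```

The capping of flag and violation counts is one simple way to keep any single signal from dominating; a real system would tune the weights per customer, as the speaker describes.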
The bottom line is that it allows social networks to remove most of the inappropriate content and users from their systems without having to actually look at all the content. This chart shows a couple of examples of the effect. In the upper example, 55% of the total content is reviewed, first by humans using only homegrown tools that don't prioritize the content, and second by humans using the Keibi solution. In this case, 72% more problematic content is found using Keibi. The effect is even more dramatic for a customer who can only look at 30% of their content: in that case, 179% more inappropriate content is found using the prioritization and scoring methods I talked about.

The other point on this slide is a little more qualitative. We've found that a mid-sized social network using three moderators can successfully identify and remove 70 to 80% of the inappropriate content, and by adding a fourth they could significantly improve on that; but ultimately they're going to make a cost-benefit decision based on what's appropriate for their community.

Let me conclude by answering two questions in advance. First, keibi is a Japanese word for sentinel or guardian. And second, our biggest challenge in our market is to get MySpace and Facebook as customers. Happy to take other questions.

Thank you very much. Questions for Keibi? Let's see; Lam, why don't you start over there.

Thank you for your presentation. I'm Lam Nguyen with Stroz Friedberg; I'm a TAB member. The screen you have up there talks about how effectively Keibi can help in identifying additional bad content for the reviewers. Do you have any statistics on the amount of false positives you're generating, and how does that help or dissuade your potential customers? Because you're either asking them to review a lot more content that doesn't need review, or is that a selling point of your product? What are you telling your clients?
Yeah, well, we've done a fair amount of analysis. Again, we haven't been in the market that long, but we're starting to collect some pretty good data. What we're finding is that the techniques I've described (we actually have a patent pending on this aggregation of all these different signals) have the effect of bubbling the bad content up to the top of the queue. So it's not so much a question of false positives, because there is subjectivity; different social networks have different terms of service, and bikini-clad women are taboo on some and not on others. But by looking at these images, we're able to give the moderators a head start and make them more productive. What we have found, though, is that if a social network will look at between 50 and 55 percent (I'll use images as an example) of all their images, they'll be able to remove about 87 to 90 percent of the problematic images across their entire network. So at about 50 percent, it gets close to the human error rate of another firm looking at 100 percent of the content.

Well, thank you. I see both John and Teresa would like to speak, but others who have had less airtime first.

Hi, I'm Ron from eGuardian. Just a question: what's been the biggest obstacle when you've spoken to MySpace and Facebook? Is it getting in to talk to them, or is it just not a willingness to do this?
No, both of those companies have a tremendous will to do this and get it right, but they also have significant development staffs of their own. So unlike the next tier of networks down, which would include, you know, the Hi5s and Friendsters and so on, those companies would probably rather not put their core developers onto systems like ours.

Tomorrow we'll hear from both MySpace and Facebook in a public session in the morning, so if we can, let's keep this more or less on the technical front, and we can ask those questions of them tomorrow; they have a lot of engineers. Thanks. Teresa, then John.

Actually, Lam asked my question. Fabulous, even better. John Morris of CDT. Nice work, Lam. Thanks. So, what user information do you gather, and am I right in understanding that the review is happening on your servers rather than your customers' servers? Those are really two questions: how much user information do you gather and keep, and how long do you keep it?

Okay. The only actual user information, as you describe it, is an encrypted unique user ID that basically provides no identifiable information whatsoever. Now, having said that, the images that come into our system, which is hosted by us (we're a SaaS model; we have a subscription-based model), are analyzed within our servers. The results are delivered to the moderators, wherever they happen to be, and then the images are removed from our servers, typically within 30 days. We keep a thumbnail around for QA, so that the moderators' managers can go back, look at it, and see whether the right decisions were made.

Thanks. Bob? Others for Keibi? Excellent. All right, thank you very much; much appreciated.

Okay, good afternoon, everyone. My name is Andrew Tate, and I'm the director of product management at SpectorSoft Corporation.
We're based out of Vero Beach, Florida. We've been in the monitoring business for quite some time, almost ten years, and our products are very mature. They really cater to parents who have a real concern about their children and what they're doing on their computers and on the internet. And I can tell you, education is so key, and it's really a challenge for everyone, especially parents, to find out about these technologies when they may not be technically savvy. We find that a lot at SpectorSoft when we're talking to our customers: they're just intimidated. So that drives us to create products that are very easy to install and very easy to use; the nature of the product itself is really dirt simple. Education is key on the parental side, but I think it's also very important on the child's side.

Just to give you an overview of our products (we'll get into that a little more in just a moment): we have several products on the consumer side, and we also have an enterprise product that caters to businesses, Spector 360, but we won't go into that today. Our consumer products are Spector Pro and eBlaster, and we also have Spector for Mac; I think we're one of the only companies currently providing a monitoring solution for Mac computers. The total deployment of our products is roughly 400,000 worldwide. We record practically everything kids do on their computer, from their website visits, to when they go to MySpace, to the pictures they're downloading, to the keystrokes for everything they do, to chat,
IM, file downloads, and even when they change something on their screen. For instance, I actually use the product with my own children; I have a 10-year-old at home and a 15-year-old, and recently my 15-year-old had a Tommy gun on his screensaver, and I saw this pop up. We actually record screen snapshots, and you can play them back in an application that has VCR-like controls, so you can fast-forward, reverse, start over, or replay. And every event has an associated action, so you can actually jump to the screen snapshots that surround those events and look and see what happened leading up to them.

But getting back to the Tommy gun story: I have a relationship with my son such that I can go and talk to him. I have an open line of communication, and he knows he's being monitored. I said, this just doesn't look very good, to have a machine gun on your screensaver, Jordan. What do you know? What are you up to? And we talked about it. It's a collaborative thing, and there's trust there. There's also the approach of not informing your child that you're monitoring them, and that's kind of difficult, because when something does go wrong, how do you inform them that they're doing something wrong? At that point they feel there's some lack of trust, and they feel like you don't belong on their computer. So I personally promote the idea that telling your child is important.
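The event-indexed playback described above, jumping from a recorded event such as a keystroke or chat to the screen captures around it, reduces to a nearest-snapshot lookup over timestamps. This is a hypothetical sketch under that assumption, not SpectorSoft's actual implementation; the timestamps and structure are invented for illustration.

```python
# Hypothetical sketch: map a recorded event (keystroke, chat, website
# visit) to the screen snapshot captured nearest before it, so VCR-style
# playback can start just ahead of the event and roll forward.
import bisect

# Snapshot capture times in seconds, kept sorted as snapshots are taken.
snapshot_times = [0, 30, 60, 90, 120]

def snapshot_for_event(event_time: float) -> int:
    """Return the index of the latest snapshot at or before the event."""
    i = bisect.bisect_right(snapshot_times, event_time) - 1
    return max(i, 0)  # clamp if the event predates the first snapshot

# A chat event at t=75s maps to the snapshot captured at t=60s.
idx = snapshot_for_event(75)
```

Because the snapshot list stays sorted, the lookup is a binary search rather than a scan, which matters when a monitored machine accumulates thousands of snapshots per day.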
We also have... well, actually, let me just step through this. We have been awarded several accolades in the press and industry trade press. As you can see, there's PC Magazine Editors' Choice, twice; we've been awarded the gold award as well, and we continue to receive these accolades over time. It's testament to the maturity of the product, and again to the fact that it's so easy to install and simple to use, very intuitive.

Some of the things Spector Pro can do: take, for instance, a child misrepresenting themselves. A lot of the products and technology we heard about today are aimed at preventing these kinds of things; it's more of a blocking approach. We're more into monitoring: if a child repeats these behaviors, the parent can see that. They could go to a site once, and maybe it was a mistake; but if they keep going to it day in and day out, we're going to show you that as a parent and then let you react accordingly.

We also have a product called eBlaster. I love eBlaster, because my son is right now in Boca Raton, Florida, typing away, and I'm getting his eBlaster reports on my PDA. So no matter where I am in the world, I'm able to keep in touch with and connected to what my son is doing. He thinks, hey, dad's on a business trip; I'm going to go to this website, or I'm going to go to that website.
But no, I've got the PDA right here, and I can see everything they're doing from a million miles away. So that's a really great feature of the eBlaster product: an intuitive report that's emailed to you, very useful and very easy to understand and go through.

The other thing we do is keyword alerts. For example, if the child is typing in profanity, or, let's say, your credit card number (they go into mom's purse, pull out the credit card, and start typing it away to buy some music on iTunes, for example), you get an alert and you can know right away and intervene. So I think that's a really important feature as well. eBlaster, too, has received several of these same awards. A recap of the features is on this slide, which is also included in your handout. So in summary, these are the products, along with our Spector for Mac product as well. Any questions?

Interesting. Thank you very much. Let's start with Scott Bradner, then Hany Farid, and take it from there.

The same question asked earlier: do you have any research to show how well parents stick with this over time, considering there's an awful lot of data potentially? How well do they deal with this on an ongoing basis?
Well, in terms of staying with the product, you mean?

Staying with it; actually responding to your messages.

Yeah, well, they do stay with it, because we continually update them, and we have a great rapport with our customers as follow-up. So we definitely see customers staying in touch and using the product through time, because our product is not static; it has to conform to different events that occur, or different features that are provided, mainly by webmail. We see a lot of changes in webmail strings and that sort of thing. So yes, we do find that customers stay with the product, and they're consistent over time.

Percentage-wise? You said 400,000 customers; what percentage of those have stuck with it for six months?

I would say probably close to 95%. I don't have any hard statistics, but we definitely see a positive trend there.

Thank you. Hany?

So we've been hearing a lot about filtering and auditing and monitoring, and this is more of a clarification: it sounds like you're more on the monitoring side. So are you just giving the parent all the information and letting them deal with it, or are you doing some quote-unquote intelligent filtering? Because a kid spends hours on a computer, and that can generate just a huge amount of data, especially if you're taking screenshots.

Well, it does, but the thing of it is that we categorize this data, so it's presented statistically to the parent in a condensed format that doesn't take them an hour to sort through. For instance, for the websites, we give them top-10 reports: the top 10 websites they've gone to, or the top 10 chats they've been participating in. Those are the kinds of statistics parents are really after, because they bubble up to the top: where are my kids spending most of their time? And furthermore,
And furthermore, we allow you to do blocking based on those sites. For instance, if a child is going to a particular site that's not desirable, you can just block the site from future access. We have other controls as well, like blocking certain chat IDs, or even setting times when they can use the computer. For instance, if they start typing away as soon as they get home from school and that's not desirable, you can give them, say, six to eight at night when they can use their computer. That's something built into the program as well. Thank you.

Thanks, Andrew. Others for SpectorSoft? Yes, sir. Do we have a mic this way? Thanks. We'll start with the gentleman from Net Nanny, then John Morris will get his question.

What's to stop a child from installing this on their parents' computer to capture their admin password and stuff like that?

Well, this all goes back to the education thing. A parent shouldn't have their administrator password out in the open for a child. You need administrative rights to do this, and I think it's important that parents are educated and understand how to manage their computers. That's again part of this whole process, and I think a lot of the people in this room are going to be key, moving forward, in making that knowledge available to the community at large.

John Morris, CDT. The same question as before for the other folks: how long do you keep all the data, the pretty massive amount of data that you're collecting? And do you do anything with it beyond that?

Well, we don't house the data at SpectorSoft. First of all, the eBlaster reports are relayed, so we don't store any data at SpectorSoft; that's owned by the parent. The actual data hosted on the computer can be pruned back, and we provide tools for doing that, so it's really a configuration parameter. It's largely stored on the client machine.
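The allowed-hours control mentioned above ("six to eight at night") is a simple time-window check. A minimal sketch, assuming a window that does not cross midnight; the function and window values are illustrative, not the product's actual logic:

```python
from datetime import datetime, time

# Hypothetical allowed-hours check: the computer is usable only when
# the current clock time falls inside the parent's window.
# Assumes start <= end (the window does not wrap past midnight).
def access_allowed(now, start, end):
    return start <= now.time() <= end

window = (time(18, 0), time(20, 0))  # 6 pm to 8 pm, as in the example
print(access_allowed(datetime(2024, 1, 5, 19, 30), *window))  # True
print(access_allowed(datetime(2024, 1, 5, 15, 0), *window))   # False
```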
Well, there is no client-server approach here. The user interface the parent uses is on that computer, or they can install the viewer on a network, like a local area network, but that's just a viewer looking at shared files from the child's computer itself. Those files are resident there; eBlaster, again, is just a relay.

Blair, tell us who you are. Hi, Blair Richardson from Aristotle. This is to follow up on a question that was asked earlier of the McGruff people: do you have a mechanism to facilitate reporting of predatory contacts that a parent discovers through the use of your product?

No, we don't have a clearinghouse or any kind of relationship like the McGruff folks do, but we do provide very detailed insight into the communication that's ongoing. So a parent can look at our reports, and it says, "Johnny, let's go meet at the park tomorrow at four o'clock," and the parent can see that. As a matter of fact, they could set up a keyword alert on "meet me" or "park" or "after school," or "I'm your friend," right? It could be any kind of string, and they immediately get an email alert. So they don't have to be looking through the logs all the time; it can just be something they're alerted on. We really put the onus on the parent to do that, because the clearinghouses aren't foolproof. What if the predator has changed their identity? And they often do, frequently for that matter. In that case nothing happens from the clearinghouse, and there you go: the child's meeting the predator. The parent has to be more involved. I mean, figure this: you go to a mall with your child. Do you let them go off on their own? They're ten years old.
Let's say you don't let them run around the mall and meet them an hour later. Why would you let them on the internet unsupervised? It's the same exact logic.

Ross Cohen, BeenVerified. Because you're monitoring everything, at what point do you feel like your software is just counterproductive, and you're forcing the children to go to their friend's house, where they're not being monitored?

Well, when you say they'll go over to the friend's house: I've thought about that quite a bit. If your child is going to the friend's house, then you should know the parents of that friend, and you should have a nice long discussion with those parents to tell them the advantages of monitoring as well, so that when they go over to the friend's house, their friend is also being monitored. Again, it's education. Why let your child go over to a house where they can have access to the Wild Wild West? It's the same thing as the mall analogy: don't let them roam around the mall unsupervised.

The other thing that was discussed earlier is this trust level between the child and the parent. I think the age of the child really is key there. I'm going to let my 15-year-old go to maybe some promiscuous sites now and then. I mean, he's a guy, right? I'm going to let him grow up a little bit. Back in my day it was a Playboy magazine; now it's whatever.
Okay, so you give him a little leeway. But my 10-year-old? If I see him doing that, he's probably going to miss dessert for a week. So it's all about good parenting.

Larry Magid, last question, and then we will take a short break.

You seem to have toned down your marketing, but for the longest time you were promoting your products for spying on spouses. In fact, I'm looking at press releases where you specifically say that, and I'm wondering whether that's still part of your company's marketing.

No, it isn't. You evolve and you mature as a company over time, and from an ethics standpoint we felt that was just something we don't really want to promote. First of all, we enforce the fact that the computer our software is installed on has to be owned by the individual who is installing it. That's key.

The thing is, I only technically own my wife's computer. Okay, fine. That's something we're not preventing. We can't prevent you from doing that, but we're not encouraging you either. It's just a matter of maturing as a company; we feel that's probably not something to promote.

So you have changed that aspect of your marketing? We have. Good. We don't talk about that anymore. Great. Andrew, thank you very much.

Hello, and good afternoon, and thanks to the committee for inviting us to speak today. EthosSafe is a Boston-based company; we actually have offices in Lexington, so we're local folk. What we're really focusing on is online content moderation: we're trying to remove the risk of user-generated content, and our focus is talking to brands about their social media programs.
So for us, social media means online forums, blogs, wikis, and uploads of content of any kind, videos and photos, in addition to social networks. But we're really focusing our attention right now on how the brands, mostly the large consumer brands, are using social media, and therefore on keeping those particular UGC programs safe in relation to the ethos of that brand. For instance, there will be many brands that will not necessarily have such high standards about what is allowable in their UGC programs, and there are other brands that will be very strict about that.

So essentially we provide a turnkey solution to moderate any kind of user-generated content, and we are essentially a tagging and analysis platform. What we do is take a copy of the content into EthosSafe and tag it, so that we can understand whether that content is appropriate for the user-generated content or social media program it came from.

Our platform consists not only of technology but also of human reviewers. On the technology side, we have a smart database where we store the content that we get from all of the clients who use our platform. That smart database can tell whether it has seen the content before, and I'll tell you in a minute how we do that. We also have a number of artificial intelligence components that do the analysis. We're doing simple things right now: word matching, spam filtering, profanity filtering, those kinds of understanding of the text coming through. That is the area we will probably expand the most in our technology platform, and we can, because of the way the system is written: we can add all sorts of components into the platform so that we can analyze any kind of content that comes into the system. We also have 24-by-7 highly trained reviewers.

So this is how it works.
This is the tagging platform that is EthosSafe. Content comes into EthosSafe, and first we generate a digital fingerprint for that content. What we have found in our studies is that inappropriate user-generated content is frequently the same: you know how the bad stuff gets grassroots penetration and people pass it around. So the same content may come into our system many times, and if we've seen it before, wherever it comes from, that makes this process much faster.

So we generate a digital fingerprint for each piece of content that comes in, and we check our smart database to see if we've seen it before. If we have seen it before, and it can tag the content appropriately for the client, then we send a message back. In version one it's a publish or no-publish rule; in later versions we can have any kind of variety in the instruction that comes out of EthosSafe, but right now it's a publish or no-publish rule. If we don't see the content in the smart database, we'll have the technology take a look at the content, and if the technology can tag it in some way, it will also send the instruction back to the program it came from.

What we're trying to do here is that each client of ours has their own set of rules. We have about 50 to 60 standard tags, and they all have to do with appropriateness or inappropriateness, so they could be sexual content.
They could be hate language; they could be any kind of violence. That's really what we're looking for in this first version, so we're really focusing on safety. Our clients choose which tags are allowable on their site or not: certain brands would never want any of that kind of content on their site, and other brands will have a lot more tolerance for certain things.

So what we're doing in this tagging platform is tagging each piece of content that comes in and then comparing it to the rules of the client we are working for. If the technology or the smart database cannot tag the content appropriately, then we will send it in front of human reviewers. We have a platform and an interface for the human reviewers to do this very quickly; it is an orchestration technology that presents the content right to our reviewers, so it's a very quick, very fast process.

Our differentiator from other online content moderation players is that we look at all of the content, and we are really focusing on a somewhat smaller type of social media program. We don't pretend to be able to handle all of MySpace or Facebook, but we can handle a brand's portion of that. So if Adidas is putting up a Facebook page and they want it moderated, then we can moderate that for Adidas, but we are not trying to boil the ocean with all of Facebook or MySpace; we're really focusing on the brands.

And these are the categories the content is being tagged with. Again, it's really around safety. The system can handle any other kind of custom tags, so if a brand wants to do other kinds of tagging, we can do that, and we can do a publish or no-publish rule on those as well. Okay, any questions?

Great, Michelle, another great, concise presentation. Wonderful.
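The pipeline described above (fingerprint, smart-database lookup, automated tagging, then a publish or no-publish decision against the client's rules) can be sketched roughly as follows. Nothing about the real fingerprinting, storage, or tag set is specified in the talk, so the hash choice, dict-backed database, tag names, and analyzer here are all illustrative assumptions:

```python
import hashlib

# Stand-in "smart database": fingerprint -> tags already assigned.
smart_db = {}

def fingerprint(content):
    # Plain SHA-256 digest as the digital fingerprint. A real system
    # would likely use a perceptual or fuzzy hash so near-duplicates
    # of the same bad content still match.
    return hashlib.sha256(content).hexdigest()

def auto_tag(content):
    # Trivial word-matching analyzer standing in for the AI
    # components; content the technology cannot tag would go on to
    # human reviewers in the described system.
    tags = set()
    if b"badword" in content.lower():
        tags.add("profanity")
    return tags

def moderate(content, client_disallowed):
    fp = fingerprint(content)
    if fp in smart_db:            # seen before: skip re-analysis
        tags = smart_db[fp]
    else:
        tags = auto_tag(content)
        smart_db[fp] = tags       # remember for next time
    # Version-one rule: block if any tag is disallowed by this client.
    return "no_publish" if tags & client_disallowed else "publish"

strict = {"profanity", "sexual_content", "hate_language", "violence"}
lenient = {"sexual_content", "hate_language"}

print(moderate(b"hello BADWORD", strict))   # no_publish
print(moderate(b"hello BADWORD", lenient))  # publish: cached tags, laxer rules
```

Note how the same piece of content gets a different decision per client, which matches the per-brand tolerance described in the talk: the tags are shared via the smart database, but the publish rule is evaluated against each client's own disallowed set.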
Thank you very much. I don't know if MySpace or Facebook have ever been compared to the ocean before, but now they have. This is good. Who has a question for Michelle? Teresa, that's you.

Lisa Polores from Polytechnic University of NYU. What sort of capabilities do you have to do that human moderation? You said you had highly qualified people, 24 by 7. I'm wondering: do you have people all over the world working three shifts to do this?

Yep. So actually we're using a virtual workforce model, kind of a work-at-home model. Right now we're only covering the United States; we're a beta product, essentially. So we're covering the United States, English only, and we will have reviewers in every time zone of the United States. The way we currently have it set up is four-hour shifts. We're really trying to attract the educated at-home worker who may have other responsibilities: for instance, full-time mothers who can work during the day when their children are in school, or disabled people, or just people who want to work at home. People can work one shift or two shifts, but we will have enough people on board for 24-by-7 coverage.

And Michelle, when you say you're a beta product, are you in use at this point? We are not in use; we are currently looking for beta customers.

John. I just had a really quick follow-up question to that. How do you ensure the quality and consistency of the human reviewers?

Right. So our system is really set up with checks and balances. We have supervisors on board who will also be doing spot-checking, and the way we have the reviewers set up, we have more reviewers than we need for each shift.
So frequently we send content through the system more than once, so that we can check on tagging. We're trying to make sure the content is seen by multiple people, so that we can make sure we are getting consistent tags. We also have pre-testing: in order to get hired you have to show computer literacy, obviously, and pass some other tests that our chief operating officer is providing.

Thank you. Good questions. Others for Michelle? You go, Brian. Brian Levine from UMass. So is it a technical limitation, or does it have to do with the fact that you're using a human workforce, that you couldn't scale up to something the size of the largest social networking sites? Or is it a business decision?

It's really kind of a business decision. We started the business to answer the needs of clients that we had, and it kind of took off from there. It's very hard to tell whether the ethos of Facebook and MySpace in general can and will actually embrace full, across-the-board content monitoring, and it may not be our job to do that. There are other places doing that, and as Keibi, one of our competitors, mentioned earlier, Facebook and MySpace are probably building those systems themselves. What we're really focusing on is helping brands move into the social media space.

Do you mind if I ask another question? Please, Brian. Thank you. So given that you're going to have lots of customers with their own sites and these different discussions, and you're going to look at user content: are you going to tag users across different sites? And if you do, where do you keep that information? Who is it available to? Is it deletable upon user request?

Right.
I'm going to have Eric Marthenson, my chief architect, come up and answer that question.

Sure. So, sorry, the content that comes in: are you talking about tagging content, or tagging users across sites? Exactly, tagging users across sites. Then yes, we do optionally capture information regarding the users that contribute content. The brand can pass through anything from a unique user ID, which is kind of site-specific, to an email address to a username, and we can use that to try and trend across different sites. In the beta version we're not doing that right now; we're just capturing the information, but we do have that capability, and we do have the data store built for it.

So do you plan to sell that information to third parties? No. Would you notify those users if there was a breach of your security? Of our security? Yeah, we would probably pass that back to the brands. We operate as a white-label service, so to a large degree we're invisible to the end user.

And what kind of information are you capturing about them? Is it just their email? Are you capturing their IP address and other information? Yeah, the specific fields we're capturing would be a user ID, email address, IP address, and I think that's it. All that information is optional, so the brand has control over whether they want to send it to us or not.

And it is deletable upon user request? Upon your client's request, not the user's. Yeah, the user would never know of us. Right, okay. Thanks, Brian. Other questions?

[Question about data retention, inaudible.] Right now, indefinitely. It's an honest answer; there it is. Part of our solution is that the smart database keeps that content, and as future versions come out we will be doing analysis on it, like Bayesian filters and those kinds of things, to make the database smarter, so that when content comes in we can recognize it more easily.

Scott Bradner. So even Google has backed down from keeping data forever.
Why are you starting out with forever? That seems to be a remarkably dumb thing to do. Give me Scott Bradner's email address and I'll tell you what his car looks like. Yeah.

You know, I think one of the reasons we're not addressing a specific retention policy is that we haven't had any pushback about the fact that we retain the data indefinitely. If that were to bubble up, I think we would address it, and we are starting to look at this more and more. Yeah, consider this pushback.

To add to that: you just said that the user would never know about it. You're a white-label thing, so it's invisible. So unless part of your requirements is that the people you work for publicize to their customers what's going on, they would never know, so you wouldn't have the opportunity to figure out how much people love it, if that's the right term.

Yeah, we do address the issue of transparency in our own documents, and I believe, though I need to double-check on this, it will be part of our terms of service that we ask for something to be included in the site's privacy policy regarding how they're moderating their content and our involvement. On a day-to-day basis we're not communicating directly with a customer. We would love it if the sites put a little "made safe by EthosSafe" badge on their site; we're not counting on that. We do ask that they address our involvement in their privacy policy. Part of our contract with the brands is that they would state it is being moderated by a third party, so it would be part of their privacy statement.

Great, one last question; it'll just take a minute, just for clarification. Tell us who you are, please. Doug Krugman, Protocol Partners. Most of the data we're talking about here is publicly posted user-generated data. Yes. So why are we so concerned about the privacy of this data? Right, I'm not. Why would we be concerned about that?
If indeed it's all public, it's probably not an issue, unless it's published in a context that's questionable, a private space or something like that. But making it clear what you're doing is an issue, and it's certainly a significant concern; it has been expressed to me. It's the same concern about creating databases of people's actions that can later come back to haunt them.

Yeah. So just remember, I suppose you could say they are actions, but it is really the content that we're most interested in, and it is optional for the brand, our client, to decide whether we're also capturing this personal information about people.

If you want me to add to that, Scott: part of the problem is that you said you were collecting information from multiple sites about users who would not be linked otherwise. So now my activities across the entire internet are in one database, and on one site I may have found it acceptable to give my email, and on another site I may have posted an anonymous question, but now somehow I'm linked together. And especially since you didn't say that you're not tracking children: if you're tracking someone's activities, when they become 18 it's still in this database forever. Somehow, then, I'll go out for a job interview, and they'll ask this company to produce all the postings I ever had as a child, and suddenly I'm responsible for things maybe I shouldn't be.

A very helpful exchange. Do you guys want to say a final word on this, and then we'll move on?

I couldn't say it better. Our technology, in terms of tracking users across sites, is not particularly sophisticated. You're looking at email addresses and usernames, so the only opportunity we would have to link somebody is if they use the same information. If somebody posts anonymously on one site and posts under a username on another site.
We really can't match them up. We could try to do it by IP address, but that's extremely unreliable, so I would consider that to be junk data.

It's actually pretty reliable, and a lot of sites request an email address and then don't present it in public, but you would have that information and you'd have to keep a record of it. That's something I just think you should keep in mind.

For us right now, in our conversations with the clients we're talking to, it isn't an issue for them, and so we're of course catering to the feedback we get from them. But this is a great conversation for us.

I suspect that some of the Technical Advisory Board and others would be happy to give you a bit of free consulting on what data retention policies ought to be. It does sound like you have strong opinions. There are some good and strong opinions, and lots of others at the Berkman Center who think long and hard about this, so I'm sure they'll be happy to talk to you about it. Awesome. Please join me in thanking you.