of fun. That guy is serious about fun. And he writes some stuff that's serious and gets published in serious places. But I'm not that guy. That was then; this is now. I am Len Baker-Lew, and Len Baker-Lew has fun with serious topics. So I'm not serious, I just care about serious topics and have fun with them, and so I call myself a playful polymath and bullshit detective. So this is my website: you'll find Brook Allen at BrookAllen.com, and you'll find Len Baker-Lew at LenBakerLew.com. And he writes articles like this one, about a performance I did with a guy whose identity we don't know, how critics mangled it, and how you can use Hypothesis, for example, to talk back to critics. Critics shouldn't have the last word; it should be a discussion.

I'm also writing a series of books. The first is called The Case of the Worthless Newspapers: Calling All Irregulars. You can get it on Amazon, and it's gotten good reviews, four and a half stars, whatever that's worth. But I suggest you don't do that; instead, go to LenBakerLew.com and turn on your Hypothesis browser extension, or just click on the "via" link. The thing you'll notice about this story is that it's annotated. It's annotated because Sherlock Holmes has some questions he hasn't gotten to the answers of, and readers have been annotating with the answers, helping him out. I've done nothing to publicize this to anybody, but it turns out the annotators who have been showing up are professional annotators, perhaps some of you in this audience, who noticed these annotations come by on the stream, looked at them, and said, oh my God, I need a break from this serious work, let me have some fun annotating something. To give you an idea, there's evidence like these Chinese fortune cookies, including one fortune that's only partially visible, and some people have figured out what it says on the other side of the fortune cookie. Think about how you would solve that problem.

So the first book, the adventure of the worthless newspapers, is when Sherlock Holmes realizes that newspapers are doing more harm than good these days. One of the reasons they don't have a viable revenue model is that they're hurting people. So he decides that what he needs to do is organize his Baker Street Irregulars again. The second book, The Case of the Radically Open Secret Society, picks up where that story ends, at midnight last year, on the 31st of December, when he leaves to go find people who can become the Baker Street Irregulars. What he discovers, and this is new, you're going to be reading about this very soon, is that the Baker Street Irregulars have been going along all this time. They've created a radically open secret society, meaning anyone can join, but you can keep your identity secret, because you may need to solve a problem while avoiding retaliation. And just last night I talked to Sherlock, and he discovered that here in San Francisco there's a conference running across town. It's called We Annotate. And it's different from this one in that they're more collaborative.
The speakers who get up don't talk about what they're doing; they don't say, "I annotate this." They talk about what they're capable of doing, so that they can help others with what they're trying to solve. So instead of talking about what you're doing, if you went there, you would talk about what you're capable of doing, and then people in the audience could come up and say, could you help me with this, could you help me with that. And they use a product they've developed called Hypotheses, registered in Spain. Hypotheses is for people who work on more than one thing at a time. So those guys, those BS Irregulars, as they call themselves, have organized themselves a little bit differently from you. And really interestingly, you'll discover when this book comes out that included in the evidence will be their agenda and their program. Many of the people presenting over there will have mysteriously mangled names that look kind of like your names. So you might want to go on there and annotate that document to correct the misimpression. You can view this as, say, The Onion: fake news that admits it. Your job is to make that fake news real. And my job is to create a story that will appeal to my seventh-grade self.

Who here read Sherlock Holmes as a kid? Raise your hand. Okay. I would not be here if I had not read Sherlock Holmes as a kid. Who here liked science fiction as a kid? Right. I would not be a maker but for science fiction. I owe my life to two things: science fiction, which gave me the idea that I can imagine anything except breaking the laws of physics, and Sherlock Holmes, where I could imagine that I could not only see but observe. So these Baker Street Irregulars, who look a lot like you, also read science fiction and Sherlock Holmes, and they went out and invented a way to help each other solve the mysteries. So my call to action to you is: get on LenBakerLew.com now, catch up, solve some of the mysteries in Calling All Irregulars, watch out for what's coming next, and correct any misimpressions of who you are. Thank you.

Awesome. Thank you. Hey guys, can you hear me okay? All right. My name is John Pettus. I'm the founder and CEO of Fiskkit, and what I'm going to show you today is a demo of something I've been building for the last four years. One of the first people I sat down and talked with four years ago, when I had this idea, was Dan, and Dan was very generous with his time and advice. We're a social venture, a completely broke group of ten folks building this on nights and weekends, and what we're trying to build is the best possible online discussion. To our minds, that's a discussion that favors facts, logic, and civility: three things you actually never see in online discussions. So what I'm going to show you here is essentially a very tight and focused implementation of annotation for a specific purpose, and I think you're going to see that come through in how we're doing this. And before I forget and get into the demo: we want to work with you. I want to meet as many of you as I can before we get out of here. So if you're looking for a project, if you want to contribute to the elevation of public discourse, come and find me. I really want to talk to you later.
So Fiskkit is named after an old blogging technique from the early 2000s called fisking, which was when bloggers would quote a sentence from another blog or article and then write two or three paragraphs just destroying it, then quote another sentence and crush it, then quote another sentence and crush it. It was essentially the ultimate rebuttal to somebody's terrible article. And it became my favorite thing in the world, because when in life do you actually say, hey, this is contested, let's go to the evidence and see who's right? So what we built here is an engine that allows any person in the world to fisk any article online.

Here's the way it works, and I'm just assuming you're seeing things behind me. This is the David Brooks New York Times column that ran after the Women's March, and you can see how it's structured. We operate at sentence-level granularity, so you can just tap into any sentence and start commenting on it. You understand the concept of getting a red pen out and marking up an article; it's really straightforward. We allow you to both comment on individual sentences and drop tags on them. We have this palette of tags that lets you describe a sentence: whether it's true or false, has specific logical fallacies, or has other non-fallacious issues like unsupported claims or biased wording, which, believe it or not, is pretty common in the news. And then compliments, which were added later because I didn't think of them at first. That's actually really great, because then you can see whether people are being even-handed in their treatment of an article. Do they give credit where credit's due? What this creates is a very attractive, very pretty commentary on an individual article, which is its own unique piece of social media that I can share out to Twitter, Facebook, wherever, snapshot, or send in an email to a friend.

But here's the big paradigm shift, because what we're really doing at Fiskkit is taking discussion and putting it into structured data, and I'm sure a lot of you are already seeing that that's what's happening up here. The reason you put stuff into structured data is so you can build cool stuff with it. For example, this article has 24 different people who have all weighed in on it. That allows us to go over and build what we call an insight page. An insight page shows you some of the numbers generated by the public engaging with this particular article. But here's what's cool: on the back end, we've gone through and counted all the tags on each sentence, compared that to all the people who could have put that tag on there, and run it through a Bayesian statistical test of significance. So on the insight page, we show the article again, but now not with somebody's opinion; we're showing you only the statistically significant tags that have popped out of the data set of people just naturally talking about this article. What you're looking at is a completely scalable, third-party, objective quality check of this article, one that could, at scale, give you a first-draft quality check on every article that gets published on the Internet. So this is what we're doing. Our motto at Fiskkit is: we turn discourse into data so we can do something with it.
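To make the back-end check concrete: the talk doesn't specify Fiskkit's actual model, so here is a minimal sketch of the general idea, assuming a simple Beta-Binomial approach in which a tag is surfaced only when its application rate credibly exceeds a baseline noise rate. The function name, the baseline, and the threshold are all hypothetical, not Fiskkit's real API.

    from scipy.stats import beta

    def significant_tags(tag_counts, n_readers, baseline_rate=0.05, threshold=0.95):
        # tag_counts: dict of tag name -> number of readers who applied that
        # tag to this sentence; n_readers: readers who could have applied it.
        # Each tag's rate gets a Beta(1 + k, 1 + n - k) posterior (uniform
        # prior); keep the tag when the posterior probability that its true
        # rate exceeds baseline_rate is at least threshold.
        kept = {}
        for tag, k in tag_counts.items():
            posterior = beta(1 + k, 1 + n_readers - k)
            prob_above = posterior.sf(baseline_rate)   # P(rate > baseline)
            if prob_above >= threshold:
                kept[tag] = prob_above
        return kept

    # e.g. 24 readers saw the sentence; 9 tagged it "unsupported", 1 "false":
    # only "unsupported" survives the significance check.
    print(significant_tags({"unsupported": 9, "false": 1}, n_readers=24))

A single stray tag from one reader gets filtered out this way, while a tag applied by many independent readers pops out of the noise, which matches the "statistically significant tags" behavior described above.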
We're going to be working this summer to make this process recursive, so that you can comment on somebody's comments using this exact same process. But we're going to add new tags for personal attack, profanity, and off-topic, and then we're going to wire that to a slider bar we call a Troll Filter. It starts at zero, and as soon as you go from zero to one, the 30% trolliest comments disappear. From one to two, more of them drop out. So the user controls what their bullshit threshold is. And we think this is going to revolutionize people's willingness to engage in public discourse. Because online is the new public square, but it's awful today, and nobody's got time to read a bunch of Hitler stuff and get personal attacks, right? Nobody's got time for that. So we want to build a space where people can actually engage with each other and control their experience. And we think it's going to be extremely favorable especially for women and minorities, who are dramatically underrepresented in online public discourse. So this is Fiskkit. We're figuring it out as we go along. But I'd like to talk to you, and we'd love to have your help if this is something you're interested in working on. I'm John. Come find me. Thanks.

My name is Joshua Choi. I'm from Saint Louis University. I want to increase our awareness of something. I'm not here to present a prototype; I'm here to talk about a motivation, an application for open annotation that I don't think has been talked about much before: using open annotation to practice and memorize things, facts and skills. People often denigrate memorization nowadays, and it is often executed poorly. But the truth is, we all have to memorize knowledge all the time, and being able to recall skills correctly and quickly is especially important in stressful, time-pressured, or life-threatening situations. But we forget a lot of things. You're forgetting things that I said a couple of seconds ago, and you've forgotten stuff that was said yesterday. Even when we try to review what we learn, we forget things. We might naturally practice some of the important skills through our usual work, but facts and skills naturally, continuously, and rapidly decay, and frequently our natural practice can't keep up. The antidote is supposed to be practice, review, and reinforcement. But practice, review, and reinforcement are all difficult to perform on one's own. It's tedious to keep track of all the important knowledge you know, where it is, and when you last reviewed it.

Now, open annotation can already help us study and remember. We can already bookmark things, highlight, describe, tag, link, and comment on resources on the web, or even parts of them, and this is terrific. This is amazing. But when it comes to personal studying, it seems to be active recall that is especially effective for remembering. By active recall, I mean that something, a flashcard perhaps, or a friend, asks you to actively remember an idea, and you have to consciously attempt to retrieve it from your memory. This can be especially effective for retaining memory, versus reading, note-taking, or mind-mapping alone. And this is pretty well replicated in the peer-reviewed cognitive psychology literature. Ask me later if you want references.
Furthermore, spreading out these active practices of each idea over time, rather than doing them all at once, is actually a lot more effective for retention. This famous phenomenon is called spaced repetition, and it, too, is well established empirically. What active recall, active practice, active testing, and spaced repetition all have in common is that they're useful, they're effective, and they're tedious and difficult to perform manually, especially when you have a lot of knowledge to learn. You may know this from personal experience, and I do too.

But if there were a memory application that supported open annotations, you could take highlighted or linked portions of resources you're trying to learn, whether they're marked by you or by someone else, and turn them into automated, personal, smart practice tests. Such an application would allow the learner to highlight web resources or use the highlights of other learners. When it's time to study, it would censor, cover up, or omit all the selections to be reviewed: instead of marking them with a background color, it would just make them blank. One by one, it would scroll the resource to one of those passages, highlight it in a special color, and require the learner to recall its content. When the learner is ready, or ready to give up, the content is revealed. In other words, it's like flashcards or cloze deletion, but in situ, within the passage's own context. This has been done before, but nobody has applied open annotation to it.

The application would also keep track of how well the learner recalls each passage and when it was last reviewed. Every time a passage is tested and reviewed, the learner tells the app how well they remembered it. Did they get it wrong or right? Was it easy or hard? The app would then determine when the passage is next due for practice, before you forget it, spacing out the repetitions; a sketch of one such scheduler follows below. This metadata, the times last reviewed and the difficulty ratings, is private and personal, but it also consists of annotations: annotations on the highlights themselves.

Now picture a medical student reading an article on approaches to care for abdominal pain in the emergency department. When a patient with abdominal pain enters the hospital, the student needs to be able to recall many facts and skills from this article and others, rapidly and accurately. Any time spent looking up slowly recalled information, or verifying uncertainly recalled information, is time not spent talking to the patient, examining them, or thinking about their particular situation. But diagrams, graphs, tables, and text all become a blur mere minutes after reading them, or hours, or days, or weeks, until they're reviewed again. And this article is one of thousands to remember. How can you keep track of all that? That student wants to be able to highlight crucial passages from the article, in addition to using the highlights and other annotations of their friends and peers, and turn them into a practice test automatically, one they might review for the rest of their career in order to maintain that knowledge. That student is me, and that's why, in fact, I am here. I think open annotation could help me and many doctors become better doctors and professionals through active practice. What other knowledge can we use this on?
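The talk appeals to the spaced-repetition literature without naming an algorithm, so here is a minimal sketch of the scheduling idea, loosely based on the well-known SM-2 algorithm. The field names and constants are illustrative, not from the talk; as suggested above, this per-passage state could itself be stored as private annotation data on the highlight.

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class PassageReview:
        # Private, personal review metadata for one highlighted passage.
        interval_days: int = 1     # current gap between reviews
        ease: float = 2.5          # multiplier grown/shrunk by performance
        due: date = field(default_factory=date.today)

    def record_review(state: PassageReview, quality: int) -> PassageReview:
        # Update the schedule after one self-graded recall attempt.
        # quality: 0 (forgot completely) .. 5 (recalled instantly), as the
        # learner reports after the blanked passage is revealed.
        if quality < 3:
            # Failed recall: reset to a short interval and review again soon.
            state.interval_days = 1
        else:
            # Successful recall: widen the gap, nudging the ease factor up
            # or down depending on how hard the recall felt.
            state.interval_days = max(1, round(state.interval_days * state.ease))
            state.ease = max(1.3, state.ease + 0.1 - (5 - quality) * 0.08)
        state.due = date.today() + timedelta(days=state.interval_days)
        return state

A study session would then be: collect all passages with due dates on or before today, blank each one in situ, ask the learner to recall it, and call record_review with the self-grade, so that well-remembered passages drift toward rarer reviews while shaky ones come back quickly.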
Anything in the greatest public library in history: the web. It might be as urgent as a school exam tomorrow, as permanent as the cities of the world, as artistic as Hamlet, as dry as Emacs commands, as vibrant as a foreign language, as sacred as religious scripture, as playful as combos in a fighting video game, or as important as brain surgery. Whenever you read through a textbook, an article, a manual, a diagram, or a lecture, there will be some things you only need to remember for a few days, and some things you will need to remember for the rest of your life. The more effortlessly you remember what you need, the more your attention is freed to improvise and to think about higher-level things. Struggling to remember is a distraction from what really matters.

And we could go further. A teacher can assign reading to their students, have the students highlight passages and then actively practice them, and have the application aggregate the data and show, say, a visualization or heat map of which parts of the reading are easy to learn and which parts the students are having difficulty with. Those are the parts they could focus their class time on. And the students, afterwards, would be left with a permanent smart quiz they could use for the rest of their lives. This would be a lot better than the tremendous amount of time and money we spend right now on making students cram right before tests, regurgitating the teacher's words on that day and forgetting everything two weeks later. I think this is applicable to a lot of things, and I encourage all implementers of open annotation to think about this motivation for annotation, one that we haven't really thought about yet. There are existing implementations of this kind of practice software, but none of them use annotations; you have to enter everything manually yourself. I once had to enter 18,000 flashcards by hand into an application for a national board exam. Okay, is that a good stopping place? Because it's been seven minutes.

Hi, everyone. I'm Aviv Ovadya. I'm here representing Media Window, and I want to talk to you about annotation and misinformation: the two -tions. One of the underlying problems of misinformation on the web right now is that people don't know what to trust. And why don't they know what to trust? Well, there's a whole lot of reasons for that. But the way this manifests itself is the amplification of misleading and sensationalist content, and the relative deamplification, there's no real word for it that works in this exact context, of credible information sources. And then there's this term "fake news," but that's a gross simplification of the real problem.

So, okay, how bad is this? It's something I've done a bunch of research on: looking at the top 5,000 shared stories each day on Facebook for the last year. And you see, I don't know if you can see my mouse, this very rapid increase and spike at the election, then it goes down a bit, and then another spike at the inauguration. This is work in progress, and there are some caveats on these numbers, but it gives the overall trends. There is a definite increase in the spread of this sort of information from these very non-credible sources, because of the ecosystem, because of the economics of news as it currently stands, and because of the way platforms and advertisers interact with that. So how bad is it?
Well, it's pretty bad. Seventeen percent of the information being shared near the election; that's probably not ideal. And it gets worse when it matters, because the incentives, the fear, and the emotion around these moments drive the production of misinformation and drive the consumption of it too. So going back to this problem, you can see that it's real.

So what do we do? Well, we need to help people evaluate what to trust. We want to deamplify the misleading content and amplify the credible sources. The approach we're taking is collecting credibility data that provides context about the trustworthiness of sources, and providing that data, along with analysis and scores about the credibility of these sources and this content, to the public, to platforms, and to advertisers. In doing this, we want to keep a strong focus on the rigor of our analysis and the rigor of our data, and ensure that bias doesn't enter into it. There are a few ways we can do that; not enough time here to go into them, but talk to me later. And comprehensiveness: part of comprehensiveness is ensuring that it's as cheap as possible to look at each source, each article, each author, and so on.

How does this actually help? Well, platforms, everything from the Facebooks of the world to the news aggregators, anything that has links or content in it: if we provide data to them, they can make it easier for people to evaluate whether they should trust a source, so they don't have to do all the media literacy work that Mike talked to us about yesterday. And advertisers can decrease the profitability of that misleading information. Both of these approaches have been shown to be effective. If you make it easier for people to see that something is incorrect, or even just to understand properties of that thing that they might believe correlate with incorrectness, that's valuable. Like: oh, there's no About page. Oh, the only people who talk about this are other sites that are copies of this site. That's a useful piece of signal when you're trying to evaluate whether to trust this thing your friend posted on Facebook. Those things actually work. And profitability obviously affects the spread of particular sorts of content. So we reach people where they are, on the platforms they use to share information, and we hit misinformation farmers where it hurts: in their wallets.

So how does annotation fit into all this? Well, there are two parts to this process: collecting the credibility data and providing it. Annotation fits pretty well into both. When we're collecting data, we want to collect information like the primary claim of an article, the evidence for that claim, whether there's any sourcing for that evidence, and whether there's any external evidence or counter-evidence not on that site. There's also non-annotation data that we use. How old is the site? Does it actually have authors? Do they have names? Do they exist? They often don't. There's a whole set of signals you want to be able to get, and you can do that with a human, with a computer, or with some hybrid human-computer process, and we're exploring all of those. And just as a quick shout-out: Fiskkit, who just spoke, and BIDS are also working on this, and there are a few other orgs. So I think this is a really interesting area.
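To make that "whole set of signals" concrete: here is a minimal sketch of combining a few of the non-annotation signals mentioned (site age, About page, named authors, external corroboration) into a toy score. The weights and field names are invented for illustration; a real system would calibrate them against labeled data and fold in the human and annotation-derived signals too.

    from dataclasses import dataclass

    @dataclass
    class SourceSignals:
        # Non-annotation signals of the kind mentioned in the talk.
        site_age_days: int
        has_about_page: bool
        has_named_authors: bool
        external_corroboration: int   # independent sites citing the same claim
        mirror_sites_only: bool       # only copies of this site repeat the claim

    def credibility_score(s: SourceSignals) -> float:
        # Toy weighted score in [0, 1]; the weights here are arbitrary.
        score = 0.0
        score += 0.2 if s.site_age_days > 365 else 0.0
        score += 0.2 if s.has_about_page else 0.0
        score += 0.2 if s.has_named_authors else 0.0
        score += min(0.4, 0.1 * s.external_corroboration)
        if s.mirror_sites_only:
            score *= 0.25             # heavy penalty: echo chamber of clones
        return round(score, 2)

    # e.g. a young site with no About page, no named authors, and only
    # mirror sites repeating its claims scores zero.
    print(credibility_score(SourceSignals(90, False, False, 0, True)))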
We want to collect all those signals and ensure that platforms, advertisers, and the public have them in an easy form that they can consume and use to evaluate information. And going back to providing: there's another place where annotation fits in, which is, can we provide that evidence in context to people? Right now, the easiest thing to do is: okay, here's a piece of evidence, put it into a form, put it into our database. But it would be much better if, when you're providing that data later, people could actually have a deep link into the particular piece of evidence being cited. That's the thing that is currently not so easy, and it's something that Hypothesis, for example, has likely been working on fairly recently; a sketch of what such a deep-linking annotation might look like follows at the end.

So, as a summary: we can use annotation to collect credibility data and provide it in context to readers, and that data can be used to help readers evaluate what to trust and to help decrease the advertising revenue for misinformation. And, yeah, we're always happy to have help, volunteers, and collaborations, and comprehensiveness is directly correlated with dollars to some extent, so that's also really useful. Thanks.

Thanks, Aviv.
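On the deep-linking point above: the talk doesn't show a format, but the W3C Web Annotation Data Model (which Hypothesis implements) already supports exactly this, via a target with a TextQuoteSelector that lets a client scroll to and highlight the precise passage being cited. A minimal sketch, with illustrative values, expressed as a Python dict:

    import json

    # One way to cite a specific passage as evidence: a W3C Web Annotation
    # whose target carries a TextQuoteSelector. All field values below are
    # illustrative, not from any real credibility database.
    evidence_annotation = {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "motivation": "linking",
        "body": {
            "type": "TextualBody",
            "value": "Primary evidence cited for the article's main claim.",
            "format": "text/plain",
        },
        "target": {
            "source": "https://example.com/original-report",  # hypothetical URL
            "selector": {
                "type": "TextQuoteSelector",
                "exact": "the quoted evidence passage itself",
                "prefix": "a few words before it, ",
                "suffix": ", and a few words after",
            },
        },
    }
    print(json.dumps(evidence_annotation, indent=2))

Storing evidence in this shape would let a reader jump from a credibility score straight to the cited passage in its original context, rather than to a bare database row.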