Thank you so much, and hello everyone, a very warm welcome. It's a sunny day for a fabulous lunch talk. I have the great pleasure to welcome and introduce Professor Sandra Braman, who is a professor of communication and the Abbott Professor of Liberal Arts at Texas A&M University. She's a very distinguished scholar. She has held professorships around the world, on three continents, I believe: Europe, Latin America, as well as Africa. She has written extensively on information policy issues, both from an international and a domestic perspective. In fact, she has helped to build the field of information policy and has shaped it, and continues to shape it, through her scholarship. Her book Change of State: Information, Policy, and Power is really a seminal piece of work in the field and has certainly also shaped my thinking when discussing the role of information in this changing world and the questions related to power and policymaking. Her work has been supported by many foundations; she has held various fellowships and is a very accomplished colleague. She also serves, actually, and that's where we met, as the editor of the Information Policy book series at the MIT Press, and was the chair of the Communication Law and Policy Division of the International Communication Association, among many other distinguished positions. So I'm really happy to welcome you to Berkman Klein today and to listen to a talk that is actually worth at least eight talks, because what Professor Braman is going to do is essentially present a really fascinating body of work and analysis she has conducted over the years of the internet Requests for Comments, looking at the role of engineers and computer scientists who, by solving or otherwise addressing technical problems, become actual policymakers. And so the question is, how does the technical community, as we often call them, in their role as policymakers, think about policy issues? How do they address them? Professor Braman has written, I think, seven or eight papers on this topic and will do the magic today of folding them all into one talk, looking at lessons for emerging media. Professor Braman, thank you so much for being with us.

Thank you so much, Urs. And thank you for the opportunity to speak to you at all. Of course, what goes on here at Berkman Klein is important for all of us around the world. As Urs mentioned, this is an unusual effort for me, because this is a massive research project that so far has produced eight journal articles, with another half dozen to come. And I appreciate the challenge offered by this opportunity to think about what the findings from this research would be, and how they could be useful as takeaways for those of you who are working with new media kinds of issues. So that's the question: what can we learn from the internet design process if you're designing apps or services or other kinds of things in today's environment? This is what I'm going to do today: just introduce the study, introduce the RFCs in case there's anyone in the room who doesn't know what they are, and then there are five areas in which I'm going to try to summarize my research findings in terms of takeaways for today. So the study was an inductive discourse analysis of the internet RFCs for the first 40 years. It was comprehensive for the first decade, meaning I read every single line of every single document, and topical from 1980 to 2009. That was what seemed appropriate when I wrote the proposal. The NSF funded this.
I was looking at documents that were like three paragraphs to three pages long, not realizing that they quickly became 150 pages long, and the project was much larger than I had imagined. Actually, the NSF wanted me to go on after the first round, but I realized that even my grandchildren could not finish this project. So my goal is to show the kinds of work that can be done and demonstrate the method, and I hope others will take up every single line of everything that comes after. My launch question was, the way Urs phrased it so beautifully, how did those responsible for designing the internet think about policy? When I define information policy, I include not just the formal operations of geopolitically recognized governments, but also the informal processes and decision-making procedures of private sector as well as public sector entities, and governmentality, meaning the cultural habits and predispositions that enable and support both governance and government. So for me, this was in the domain of governance, and certainly a place to look at what it really means when we say code is law at the architecture level. What it's not is an in-depth analysis of decision-making on specific technical issues; in this way, my work is completely orthogonal to that of Laura DeNardis. What it is is policy analysis, and it's a recuperation of history. I've met a lot of people involved in this process who come up to me. I was terrified when I first started talking about it, thinking they would laugh me out of the room, but in fact they said: I was there. You're right. Utterly fascinating. Had no idea you could get this stuff out of it. I think of this as socio-technical boundary work and as support for theory building regarding large-scale socio-technical infrastructure. There were a lot of methodological challenges. I am neither an electrical engineer nor a computer scientist, so for me, reading this stuff was like reading Martian upside down in the mirror. It turned out that it really is every single document and every single sentence. You might be reading something that is actually an explanation of code, and embedded in the middle of it there's a side comment about privacy. You simply can't stop reading. So in that sense, it was the most difficult reading I've ever done. I had 30 students who were ultimately involved in the project, but doing a secondary reading of technical documents from a policy perspective required a really high level of sophistication. So there were lots of things they could do, but some things they could not. And of course, everything keeps changing. I wanted to make the methodological point because I get asked about it every single time I give a talk from this: automated analysis is absolutely useless when it comes to this corpus. I did an experiment with privacy, which is the most discussed policy issue, no surprise. If you do an automated search on the obvious terms, you get to 12% of the documents, but many of those actually have nothing to do with privacy. When you read inductively, you get to 18%, and often quite different documents from the ones you find when you do word search. If you did natural language processing, you'd wind up with a map of the world, because the subject matter and the terminology keep changing. So I coded for about 70 variables within the texts and classified the texts along a number of variables as well. Okay, screwed up.
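To make that methodological point concrete, here is a minimal sketch, not the study's actual protocol, of the kind of automated term search the talk argues is insufficient. It pulls RFC plain texts from the RFC Editor's public archive (a real URL pattern); the term list and the document range are illustrative assumptions.

```python
# Minimal sketch of an automated keyword search over early RFCs, the kind of
# analysis the talk argues both undercounts and miscounts privacy discussion.
# The term list and RFC range are illustrative assumptions, not the study's.
import re
import urllib.request

PRIVACY_TERMS = re.compile(r"\b(privacy|private|confidential|anonym\w*|secrecy)\b", re.I)

def fetch_rfc(number: int) -> str | None:
    """Fetch one RFC's plain text from the RFC Editor, or None if unavailable."""
    url = f"https://www.rfc-editor.org/rfc/rfc{number}.txt"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except OSError:
        return None  # some numbers were never issued or never archived

def keyword_hits(first: int, last: int) -> list[int]:
    """Return RFC numbers in [first, last] whose text matches the term list."""
    hits = []
    for n in range(first, last + 1):
        text = fetch_rfc(n)
        if text and PRIVACY_TERMS.search(text):
            hits.append(n)
    return hits

if __name__ == "__main__":
    matched = keyword_hits(1, 100)  # first-decade slice only, to keep the demo small
    print(f"{len(matched)} of the first 100 RFCs match: {matched}")
```

A term match says nothing about whether a passage is actually about privacy, which is exactly the gap between the 12% word-search figure and the 18% inductive-reading figure described above.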
Okay, so does everybody in this room know what these are? Anybody? One thing to point out: there were almost 5,700 documents in the first 40 years, which is what I was looking at. It's worth noting, and I haven't seen anybody else talk about this yet, how many different authors, how many different kinds of entities, and how many different countries were involved in the design process. I'm gonna come back to this repeatedly. Genres differ; there is a formal distinction among types of genres, we added another set of genre distinctions, and there's a wide range of functions that they serve. This is what one looks like, totally randomly chosen. After the abstract, you can go into your one- to 300-page document that analyzes the technical problem involved. Again, these are freely available online, hosted by the IETF. A lot of policy issues show up really early, and I've underlined some that may be surprises. They were already talking about commercialization in 1971. They were already talking about access to the network in rural areas; thank you, Canadian government. They were talking about energy issues in 1972. High school students were hacking it already in 1975. So a lot of what we look at today has a very long history. When it comes to thinking about the policymaking process, we see this; I'm gonna flip through my slides rather quickly because of what I'm trying to do here, but I'm happy to share the slides if that's useful afterwards, and the articles are all freely available. So they're defining the policy subject: what are we even talking about here? What is the network? It is through the medium of the RFCs that the governance mechanisms of the IETF and ICANN evolve. There are implementation programs and guides, the establishment of behavioral norms, and, something that is under-investigated, the RFCs were a venue through which a lot of vendors engaged in their conflicts and had their conflicts resolved. So it's a very interesting place to look for details on commercial battles. Then there's the relationship between technical and legal thinking; one of my original motivations was to find a way to try to bring together the technical and legal communities around this area of shared interest and engagement. I find that the RFCs do offer support for some people who are quite critical of the design process, very little on disability, nothing on the elderly, but there's also evidence that really counters the critics. It's very popular to criticize the internet for not having been international enough, for not being sensitive enough to, or making it easy enough for, people who use other languages besides English and other alphabets besides the Roman, or Latin, alphabet. But actually they were talking about that from the very beginning; it was simply that there were a whole lot of technical problems that had to be resolved before that could be achieved. So it's, in my view, not legitimate to criticize them when the issue was solving the technical problems. The implicit policy analysis was surprisingly rich. As one example, the view of privacy as contextual and as boundary-definitional appears in the RFCs decades, decades before it shows up in the social science literature, and of course it showed up in the social science literature long before it started showing up in the legal literature. When it comes to policymaking, there is announcement of positions in the 70s: we will not get involved with wiretapping, or with defining online telephony.
They get involved with thinking about general legal issues like antitrust and fraud, and of course internet-specific legal issues like spam and phishing. There are responses to US law, because they realized they had to comply, and talk about the technical inadequacies of the law; what Congress did with regard to spam in the 90s was laughable. And they respond to the laws of other countries besides the US as well. When it came to policy analysis, some of it was quite explicit. Some of it provides technical background for policy issues that are still enduring, and I would point to a piece that is yet to come, given as oral presentations but not written up yet: the RFCs on fair queuing and quality of service are actually key to understanding the problem of network neutrality, which, again, we are going to have to address with great seriousness. They offer critiques of statutory law, explanations of technical contradictions in laws and regulations, and so forth. And then there's a lot of what I would call implicit policy analysis: technical analysis of dimensions of policy issues that are not yet apparent in legal discussions, but the sensitivity to the fact that they would be legal problems was there among the computer scientists and electrical engineers. There is more discussion of political thought than I had imagined to be the case. There is talk about the free speech value of the network long before the NGOs get involved; it comes up first in a joke, and they become serious fairly quickly. Jurisdictional issues are not just the geopolitical problem that would be familiar to all of us. One of the pieces I've published is on the tension between geopolitical citizenship and what I'm calling network political citizenship, because there are areas, for example when you're dealing with malware, in which your primary citizenship identification will determine how you respond to the problem, and very different kinds of behaviors may be required depending on which kind of citizenship holds your primary allegiance. There were efforts to be agnostic regarding what counts as a country when it came to the DNS, and some very interesting complications there when it came to internationalization. They talked about uses of law in a technical environment, so they actually rely on US constitutional principles when they're thinking about how to design and change the IETF. They think about compliance, and they think about what legality is in the first place, what governance is. So these are areas in which I think there are findings that provide models for the kinds of things you want to think about when you're working with other new media in today's environment. One is that design criteria really can be treated as constitutional principles. People talked about that for the internet before I came along, but I did an analysis of what I thought were the fundamental constitutional principles as they were established in the first decade of the design process, 1969 to 1979. So there are logistical principles; as with the US Constitution, they talk about how this thing is going to work operationally. Those are important, but more interesting is that they do talk about user democracy. The false imaginaries about the history of internet design are myriad and growing. Just this week, again, you're asked to review a journal manuscript that talks about the fact that when the internet was designed it was only for the military, and now we have all these uses and users. Ding.
It was designed for all types of uses and users. There's what I'm calling technological democracy, which means that from the start the effort was to design not only for the cutting edge, the fastest computers and the greatest bandwidth, but also, always, for the slowest bandwidth and the least computationally capable computers, because the goal was to have everybody in the network from the start. There was the goal of experiencing computing at a distance as if it were local. We aren't thinking so much about telepresence anymore, as we've all gotten used to it, but I think it's worth noting that that was a social goal for the designers early on. They were very aware of the tension between the need to have user control and extreme flexibility, and what needed to be standardized in order for the network to operate at all. They were interested in stimulating innovation for the sake of innovation. There was interoperability, meaning compatibility backward and forward, and extensibility across types of networks irrespective of the innovations and the scales involved. Other kinds of social goals included promoting social interaction among users, and we'll get back to community again. The conceptualizations of the uses were extremely broad and rich, driven by a number of motivations, and yes, science fiction does play a role. In RFC 1, they're assuming from day one that it's going to be everybody doing everything. By 1971, commercial uses were already being discussed: General Motors was already in the conversation as a corporation, many other corporations followed quickly, and already by 1971 they were supporting users who were not in the original grant-funded group that came out of DARPA. Government uses were foreseen in the first decade. I think it's also interesting that military needs were actually a separate conversation, so they show up in the RFCs only through the authors: when you see MITRE, you can assume certain kinds of interests or underlying assumptions, and some of the problems, like malware, are not directly addressed in the text. But note that e-government was already being discussed by 1972. As for users, the most interesting thing was a surprise to me; I hadn't thought about it ahead of time, although I had already been writing about posthuman law, the replacement of human decision-making for legal purposes by machines in a variety of ways, before I engaged in this work. I first published on that in 2002, but I wasn't thinking about it when I entered this. They were designing as much or more for the non-human users as for the humans, and they actually use the word daemon, a Greek concept that showed up again in the medieval period to refer to agents who were neither human nor divine. The daemon users in the internet are operating systems, software, other layers of the network structure, and so forth. Actually, they kind of preferred thinking about the daemon users; the humans were totally annoying and mystifying, why do they care about names they can remember, and so forth, although when they designed for humans, it of course was good for the daemons as well.
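As a present-day illustration of the daemon-user idea, here is a minimal sketch of a background agent that uses the network entirely without a human in the loop; the polling target and interval are arbitrary illustrative choices, not anything from the RFCs.

```python
# A daemon user in the talk's sense: a background agent that uses the network
# on its own schedule, with no human in the loop. The target URL and interval
# are arbitrary illustrative choices.
import time
import urllib.request

def daemon_loop(url: str, interval_s: float = 3600.0) -> None:
    """Poll a URL forever, like an unattended agent keeping its state fresh."""
    while True:
        try:
            with urllib.request.urlopen(url) as resp:
                print(f"fetched {len(resp.read())} bytes from {url}")
        except OSError as err:
            print(f"fetch failed: {err}")  # daemons must cope with network instability
        time.sleep(interval_s)

if __name__ == "__main__":
    daemon_loop("https://www.rfc-editor.org/rfc/rfc1.txt")
```

Nothing in this loop cares about human-memorable names; an IP address would serve it just as well, which is the designers' point about daemons versus humans.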
At any rate, other kinds of user distinctions were important too. Notably, they did get engaged in social science already in the first decade, trying to study what was going on and how users reacted, but of course that social science was done by computer scientists and electrical engineers using themselves and their students as subjects, so those of us who are social scientists might question some of what they did. Addressing diversity: there are a lot of ways of thinking about this, and it's a big issue, should be, remains an important issue, and will remain so, but the most prevalent area in which this kind of problem was discussed, especially in the first decade but throughout the first 40 years, was the question of internationalization. So I just wanted to highlight some of the different ways in which that's actually approached. One of my pet peeves that I'm responding to here, and in some of the other things I'm saying today, is that people who are involved in social design considerations for the internet or for other technologies often tend to think that issues are singular in nature, and that if they think of one kind of fix, they've got it. But no issue is singular in nature, and it takes a whole lot of different kinds of things to deal with any kind of social issue. So when it came to internationalization, you had authorship and participation in the conversation; the influence of international organizations, standard-setting organizations, and so forth; and the extension of the network beyond the US, which took place by 1973. The issues that they talked about, and I'll come back to internationalization, showed up in how they conceptually and operationally developed their definitions, and it shows up in their design criteria. These are, again, just some of the kinds of issues discussed in the RFCs as they think about what it takes to operate in an international environment; I don't have time to talk them all through. They will be familiar to us from other domains, but here it was the computer scientists and electrical engineers who were thinking about these kinds of problems, and more. Things like tariffs, and dual-use technologies, the fact that technologies can be used both for purposes of peace and for purposes of war, were not ones they thought about until they started going international and bumped into the fact that, oh yeah, there actually is a regulatory environment out here that we have to get involved with. But then they did. Again, the language issues appear in 1971. Another way of thinking about the problem of issue nuance and multidimensionality comes at it from the point I just made about issues. There are multiple issues, and I've published on the ways in which they come at any single problem from multiple dimensions, but I wanted to take the example, again, of privacy, which is, for obvious reasons, the single most discussed policy issue. Just during the first decade, they came up with multiple ways of thinking about how to deal with privacy issues, with concerns raised on behalf of the humans, on behalf of the network, and on behalf of the data. When it came to human protections, there was all kinds of attention early on, of course, to logging-in issues. There was a very interesting concept that had a life of two or three years, in which they decided that you would have an internet birthplace: wherever you first came online, that would be your geopolitical identification for the rest of your internet life.
We have lost that, but it was an interesting conversation. There was masking your input, and, as WikiLeaks likes with the air gap, a variety of offline arrangements were being discussed in the 1970s as ways to protect what you were doing online. When it comes to the network, of course there was discussion of private networking and how to protect that private networking, again with offline storage and snail mail as ways of putting in what we would now call an air gap. There was termination of activity as a means of achieving privacy: stopping processes, flushing input and output information, and destroying files, things that are again becoming familiar in practice. They looked at message design from a privacy perspective: packetizing, how they went about packetizing, header design, again different issues as they addressed human and daemon users, and connection identity. And they thought about the data, so that included information architecture, with a lot of attention to file name and path name structures, how to use metadata, and of course encryption. Coping with instability may be the problem that you're all very familiar with as well; it's hard to think of an app or a service or a program or anything you would want to do online that doesn't have instability as a problem. There's hortatory value in the design process even for lessons not learned. They started by insisting on backward compatibility, but of course IPv6 is not backward compatible, so they weren't able to sustain that. It's worth noting that the internet RFCs have become a model for large-scale socio-technical decision-making about infrastructure, and I think it's because of things like this, that they learned things about how to handle instability. Interestingly enough, they began by thinking that every decision they made in 1969 and 1970 would have to be permanent. So those involved, at the time largely graduate students, were quite paralyzed; they were uncertain. They kept thinking there must be experts somewhere who would come and tell them what to do; if they just hung around long enough, that guy would show up. But then they realized that everything was going to be susceptible to change, and of course that's still the case, whether it's the languages, software, hardware, how you're thinking about levels of the network, who the users are, what the user practices are, and it just keeps ramifying. A number of techniques were developed to deal with that kind of instability, and I'd like to talk through a few of those. First, definitional and conceptual labor. Among those who are not trying to resolve technical problems there's often a lot of, in my view, absurd disdain, but there was an enormous amount of conceptual and really theoretical work even to arrive at the simplest concepts. It's actually in the RFCs that they first agree upon what a byte is for computing purposes. So they're having to think about: what are communication processes? What is the network? What are the elements of the network? They had to figure out the difference between experimentation and what it really meant to change a protocol; that's a conceptual problem, not a technical problem. They had to distinguish between what was error and what was idiosyncratic use, and actually, in the fake news environment, that's still an ongoing question offline as well. They were dealing with localization, how you deal with the global in the local context. A lot of rhetorical tools were used.
So there was a real emphasis on expressing your design assumptions, talking about the design constraints, sharing recommendations, and an interesting insistence on the written language, the texts of the RFCs: basically the position that you have to be really precise, and if it isn't precisely there in the written text, not the code but the written text, it doesn't exist; we're going to assume it's not there. At the same time, they realized that of course there's going to be a gap between text and reality, but they kept trying to drive the designers towards precision in the written language. They were aware that models shaped their perceptions of problems and of the possible solutions in really very sophisticated ways. They saw that texts were problem-solving provocations, but again not the equivalent of implementation. And I came to think of what I started calling skins, or design wraps, as affordances for thinking about new kinds of problems. Very interesting, and I guess politicians might find this familiar, are the ways in which they manipulated the process as a means of coping with instability. One trick was delay: everything's uncertain around us, we've got lots of problems to solve, so let's take this one and just wait, and maybe things will settle down and we can get to it in a calmer moment. Although there was a drive towards very detailed and precise specification in the written texts, they also used incomplete specification as a means of riding over what might be unstable and what might be changing even within the very near foreseeable future. And there's a kind of parallel in legal thinking: when we go to Cass Sunstein and his thinking about incomplete theorization, the law actually fulfills much the same function as we see in the RFCs. They treated experimentation as a form of acculturation; they didn't use that word, but anyone who reads social theory would. There was the impact of personal force. As a grad student, Jon Postel says, I will be the naming czar, and then he was. I wasn't going to discuss it here, but especially in the first decade there's a lot of language that made clear they didn't think anybody else would ever read these. Lots of people are saying things that your mother really told you not to say to anybody, and it's really quite fascinating. After about 20 years, it becomes exceedingly dry. I've had this discussion repeatedly with David Clark at MIT, and he agrees with me that the amount of political and policy discussion you're going to get after about 20 years in is much lower, because the genre's publication process has become so formalized. But early on, it's really rather juicy. On network measurement: they kept testing at first, then went a number of years without actually looking at what was happening, and finally realized they had to be gathering data about what was going on in a regular way. Details could be provided on a need-to-know basis, and of course the RFCs themselves, that conversation, were and remain a critical policy tool. There was enormous deference to the community, with strong normative pressures; my piece 'The Framing Years' goes into this in the most depth. The site where design decisions take place is where community preferences and compliance interact. If you went ahead and tried to institute something without running it by the community, that was vigorously castigated.
But of course there were limits. The internet design community said no to property rights in domain names, but the legal community, of course, was able to achieve a yes. The internet design community wanted user input but kept bumping into knowledge realities, the difference between those with and without technical knowledge, and they keep moving back and forth on that kind of thing. You can see it in the organizational design: a committee that's for all comers, and then a subcommittee for those with technical knowledge, and they keep opening and closing. And living with paradox: they expect instability in commands and identities, but in order to operate, you have to assume that everything works just as we said it's going to work. There's the push to document everything, and again that precision in language, but they assume there's a lag between the technical changes and what actually shows up in the documentation, even though the documentation should come first. There is the press, again, to be very precise in the protocols, while also happily using symbolic expressions like foobar just to smooth over and get past something about which you actually can't be precise. And then there's what I call the use of paradoxes as canniness. I was inspired by a political scientist, Ron Sera is her name, who talks about India and the canny state: the canny state is one in which the government makes dual use of the borderline between international and domestic policymaking. So for a government that wants to do something domestically that is not what the international environment wants, you just say to the international environment: I can't, my domestic people just can't do it. And if the government wants to do something that the country doesn't want to do, you just say: we've got to do it because the outside, whether it's another country or the international environment, forces us to. You play the game depending on what you actually want to accomplish, and you can see that kind of canniness in play by the internet designers, who use the fact that there are paradoxes to position themselves in order to get accomplished the particular kinds of things they want accomplished. So in sum, they recognized, and this is 1972, that network topology is a complicated political and economic question. You've listened to a lot of detail here, so let me stop and see what kind of conversations you may have about how you can use this for the problems you're trying to solve today. Briefly, the full text of most of my publications is up at the website.

Thank you so much, what a wonderful talk. Before we open up, may I ask one question to start with? We are having so many debates these days about how to better bridge the worlds of computer science and engineering and the policy, governance, and social science space, looking at problems concerning artificial intelligence and other emerging technologies. With this rich experience in mind, and perhaps also looking at what has worked better and what has not worked that well historically at this interface, are there, in addition to the very rich menu you already presented, some higher-level takeaways? What are the success factors for building these bridges?
Because my takeaway was that there was a lot of policy thinking in the engineering world, but, I hypothesize, there was less technical thinking in the policy world. So how can we do a better job building these bridges, maybe looking from one side and then from the other?

That's the big question, and as I mentioned, trying to get there was really a primary motivation for this work. I have not done enough work to get there yet, but here are the avenues that I think of. I actually did an analysis at one point: there used to be an internet policy caucus in Congress in the nineties, and I did an analysis of the then 181 people who were in the caucus, and none of them had technical backgrounds. There was a brief period when there were opportunities to talk to staffers in Congress about the technical side of things, which came and went rather quickly, but that would be one approach worth thinking about. I think that David Clark actually has a book, the draft of which is on his website and which, once revised, will be coming out in the Information Policy series at MIT Press, that talks about design issues. David, for those of you who don't know him, was on the design side kind of the equivalent of Vint Cerf as political guru; he has been a design guru for the history of internet design, has been involved since the early 1970s, and led the NSF-funded efforts to design the future of the internet. He now has a sole-authored book that talks about the design principles, the design questions, and the issues that have been important in the past and that are important going forward. He initially thought of it as going just to the technical community, but we convinced him that, no, this needs to be written for the broader policy community as well. So there's that kind of writing task. I think there's a job to be done with IEEE and ACM in terms of bringing this kind of thing into the engineering classes as well. And on the law school side, where law is taught, I actually did a report for the Rockefeller Foundation, I said the 1990s, I'm sorry, it was about 2004, 2005, on how law was being taught in this area across the country and across kinds of disciplines, and there were two big takeaways. One was that most communication law courses were history, because they ignored everything that had happened since 2001 as if it weren't there. The other was this problem of the technical side of it, and I would love to see an institution develop, it could be done online and serve all institutions, a technical training program for people who are thinking about the law, so that they would gain that kind of sophistication and be able to think about these kinds of problems. Time after time, Congress does things that are simply laughable from a technical point of view.

Could you introduce yourself?
Hi, I'm Ryan Budish, and I'm a senior researcher at the Berkman Klein Center. Thank you for this really fascinating talk. I was wondering if you could talk a little bit more about the ways that some of the early decisions might have created, I don't know if path dependency is the right word, but sort of created the momentum for, and helped enable, some of these policymaking approaches later on. In particular, I'm thinking about the very fact that they decided to use the RFC process at all, the fact that they wrote these things down in such a way, and that they created rules like anything that's not written doesn't exist. I'm curious to hear more about your thoughts on how those initial decisions then shaped this almost constitution-like approach that they've taken.

I think that's a great question. It's not a kind of work that I have done; I should mention that this is actually a side path for me, not the center of my research. This I thought was going to be a casual, ha ha, side thing. So I haven't done those kinds of analyses. One thing, of course, that you want to do, as with Shepardizing, is see which later RFCs depend on certain earlier RFCs. That's easily available without analysis, but the analysis of how that went would be one path of doing it. And then I think you would really have to take an issue-specific approach, as Laura DeNardis does, and go across decision-making venues and watch the movement over time from the technical arena to the policy arena and back. Not many people have done that kind of work, but I encourage it.

Hi, thank you for your talk. My name's Mary Gray. I'm a Berkman Klein fellow, I'm a senior researcher at Microsoft Research, and I'm a faculty member at Indiana University. Sorry to get that all in; I'm required to contractually. I'm a social scientist and anthropologist by training, and I love your approach, I think it's incredibly rich. I'm wondering, and this seems outside the scope of what you've done, but have you considered tracing who's responding and doing the agenda setting around the RFCs? I'm struck by how juicy it is early on. There are clearly players who are setting agendas, who are coming from specific domains, and I'm always curious slash suspicious when things become boring, because that usually means they're being professionalized towards somebody's benefit so that people tune out. So have you thought about mapping the networks of who's setting agendas for these RFCs?

So I mentioned that I had 30 students involved, but with one exception, an outstanding undergrad, I mostly couldn't ask them to do the policy analysis, because that was just not available to them. So what did I have them do? Something that hasn't been analyzed yet, but that we actually have the data on: we looked at every single author and every single employer, and that's how I get to 14 kinds of institutions and across all those countries, but that data hasn't actually been analyzed yet. You could take that analysis, map it onto who's using which RFCs and onto the histories other people are doing of different kinds of decision-making arenas, and get to that kind of answer.
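A minimal sketch of the Shepardizing-style dependency tracing mentioned above, using the RFC Editor's public index file; the URL is real, but the parsing regexes are assumptions about that file's current layout, not anything described in the talk.

```python
# Sketch: trace which later RFCs obsolete or update earlier ones, a rough
# analogue of Shepardizing. Parses the RFC Editor's public index; the regexes
# are assumptions about the index file's plain-text layout.
import re
import urllib.request
from collections import defaultdict

INDEX_URL = "https://www.rfc-editor.org/rfc-index.txt"

def dependency_graph() -> dict[int, list[int]]:
    raw = urllib.request.urlopen(INDEX_URL).read().decode("utf-8", "replace")
    graph: dict[int, list[int]] = defaultdict(list)
    # Index entries are separated by blank lines; each starts with an RFC number.
    for entry in re.split(r"\n\n+", raw):
        m = re.match(r"(\d{4})\s", entry)
        if not m:
            continue  # skip the file's prose header and malformed chunks
        rfc = int(m.group(1))
        # Match "(Obsoletes RFC0822)" / "(Updates RFC1123, RFC2156)" markers,
        # but not the inverse "Obsoleted by" / "Updated by" markers.
        for rel in re.findall(r"(?:Obsoletes|Updates)\s+((?:RFC\d+(?:,\s*)?)+)", entry):
            for target in re.findall(r"RFC(\d+)", rel):
                graph[int(target)].append(rfc)  # earlier RFC -> later dependents
    return graph

if __name__ == "__main__":
    g = dependency_graph()
    print("RFCs that obsolete or update RFC 822:", sorted(g.get(822, [])))
```

Inverting the Obsoletes/Updates relations this way gives, for each early RFC, the later documents that supersede or amend it, one cheap proxy for the dependency chains described as available without analysis.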
Kyle Drake with Neocities. First of all, thank you, really important work, and I also like the structure you had here with this analysis. I wanted to go one step beyond companies and organizations influencing things and potentially making them boring so that people tune out, which is just a natural process in anything where money can be made in this country. I wanted to step back a little bit from that and ask, and maybe it's an unanswerable question: has any of this research informed your views on technological determinism? Is the technology, at the end of the day, in the driver's seat, with humans just attempting to steer from the passenger side? Or are humans ultimately in charge of this, or do you think the technology is driving us and we're trying to figure out how to manage it?

I think it confirmed the position I would have walked into this research with, which is more a position of technological conjuncturalism than determinism. I'm always suspicious of anybody who uses a monotheoretical approach to anything. So am I influenced by Marx? Yes, I'm influenced by Marx, but I'm Marxian, not Marxist, because that's one factor, but only one. To analyze any given historical conjuncture there are multiple causal trajectories, each of which may require a different kind of theoretical approach to understand it, and that multiplies when you go across levels of analysis. I'm very influenced by complex adaptive systems thinking, which is a means of bringing together what I think of as a theoretically pluralist approach to analyzing any specific historical conjuncture, event, or phenomenon. So I walked into this research with that, and I think it confirmed it. I have published on the internet and what I call the autopoietic state. Again, what this offers are affordances for humans; it doesn't mean we're going to be democratic or undemocratic. It offers a lot of opportunities, and it still comes down to what we're going to do with them, and the flexibilities are still enormous. Especially because it is still a mathematical game, whatever seems to be a moment of stasis regarding the points of control can still be easily outrun.

Hi, I'm Greg Leppert, an affiliate at the Berkman Klein Center. It's clear you have such a wonderful, rich appreciation for the complexity of all of this, having gone through it. I'm wondering where your mindset was before you went into this review process, how it evolved, and how other people might, without taking your complete journey, also come to appreciate the complexities that were involved in all of this.

I'm not sure I could answer the second question, but the book Change of State that was mentioned was actually a 25-year project, trying to think about how to think about this, using information policy as an umbrella term to refer to any kind of laws, regulations, and legal principles as applied to any form of information creation, processing, flows, and use. That presents a pretty complex picture of what that map is and the realms in which we have to think. And so I think the decision to go down this path relates to what I said earlier: Change of State talks about the fact that in this environment information policy does involve not just government but governance and governmentality.
The book Change of State looks at government; it looks at 32 different legal issues and at the interactions between what we know from social science research findings and what we know from the history of legal developments about how they're affecting each other, in sum trying to answer the question of what we are doing to ourselves. So that focused on government, and this project was then my effort to address governance; that's sort of the innocent question that walked me into it. I guess I'd encourage you to read Change of State.

Yeah, I'm Yaso, I'm a fellow of the Berkman Klein Center also, and thanks for the brilliant talk. My question is almost a follow-up to an earlier question, because I was chair of a working group at the W3C for about three years, and I realized that the funding for participating in the working group and in its work was very important to the agenda setting of the group itself. In our group something very unconventional happened, because CGI funded five Brazilian institutions to be in the group and work with us, and this had a huge impact on the work of our working group. So I wonder if there's any study about the funding, I mean the money flowing in this ecosystem of standards, because I think it's...

Absolutely. JP Singh has done a brilliant book on the role of those kinds of factors in determining what happens in international organizations: it's one thing to say you're all in the game, but it's another to be able to keep people on the ground in Geneva for months on end, and he looks at the implications of that. So that's a great book to read about the problem in general. One of my disappointments in the internet governance community is that most of the people who are studying questions of that kind are looking at the Internet Governance Forum, which is a talk shop, not a decision-making shop, and frankly, to me, of minimal interest for that reason; so I think they're kind of missing where the ball is.

Hi, I'm Sasha Costanza-Chock. I'm associate professor of civic media at MIT and a faculty affiliate at the Berkman Klein Center. Thanks for the talk. This kind of follows on Yaso's question. John Jennings has recently been talking about trying to develop a critical race design studies approach, looking at insights from critical race studies in legal theory, which examines the ways that structural forms of inequality, structural racism, but also, as people in that space have broadened it, intersecting structures of oppression and resistance, race, class, gender, and so on, structure the law, and how the law then reproduces structural inequality. So I'm wondering, going through all of this material and looking at these early moments of the development of the network architecture, but then also speculating forward: what are the key moments when communities, whether racialized and gendered communities or other types of subjectivities, are excluded from the processes but then break in? Were there debates about that? I'm really curious to hear your reflections on critical moments in the eruption of conversations about structural inequality and how it might be reproduced through network architecture. Is that something that you see? Where does that happen? And what can we learn from those moments in that conversation?

Another great question.
I think you would want to look at other kinds of histories of the design process; critical legal theory, of course, has talked about critical race issues since the 90s. I think the only thing that would have been visible around the RFCs that I am aware of, which doesn't mean it isn't there, but I haven't studied it and I don't know of anybody else who has, relates to a story Vint Cerf delights in telling about a guy in India. The important issue was publishing the RFCs so that anybody could have access to them, and Cerf actually met a man in India who said he had figured out how to build a network that could connect with the internet simply by sitting by himself and reading the RFCs. So there was an access point through the documents. But otherwise, again, I would just identify that as great work for somebody to do.

Hi, I'm Christoph Graber. I'm a faculty associate at the Berkman Klein Center, visiting from the University of Zurich in Switzerland, where I'm a law professor. I was most fascinated by your reflections on the intersection between law and technology. And I would like to know, with regard to what you mentioned about the RFCs as written text: if I recall correctly, you said this language has to be very precise, and what is not in the text, more or less, does not exist. So my question would be about the comparison to legal texts that you find in statutes. How could an RFC be compared with a statute? In a statute, you have a judge who needs to interpret the language, the text, the wording of the statute, and there is often room for interpretation left deliberately, be it because the parliament was not able to come to a certain conclusion, so it leaves it up to the judge, who then has to deal with this openness. How is this similar or different with regard to RFCs, and is there any kind of judge who then deals with these issues of interpretation of the text?

What a great question. Is that attached to the same question? No? Okay. So my understanding is that the ultimate judge in the case of protocol development is going to be the community and what they agree to do. Where there is a lack of clarity, there would be experimentation, and then ultimately somebody is going to make a more precise proposal on some small point. There's a whole lot of something I didn't get into here: a whole lot of, oh, we could unbundle the problem, and something we thought was a single technical issue we can break into four points and deal with as smaller problems, and that would leave something unresolved, but we can fix this bit. So there's that kind of unbundling and bundling of the problems. I think it would go on through experimentation, through refinement of the size of the problem that you're looking at, and then ultimately through community-level decision-making, as I understand it.

Hi, I'm Saul Tannenbaum, I'm a free agent. I actually want to give another answer to that question. The IETF mantra is rough consensus and running code, and there are events called connectathons, where people with products that run in a certain protocol space all get into sets of rooms and try to interoperate with each other, and that's one of the mechanisms that operates to come to a decision on what this stuff really means.

So that would fall into the category of experimentation, yes.

But I had a completely different question, taking a step back.
We've loosely talked about money, because there's a lot of money in RFCs, but the RFCs themselves are freely open, while in other domains the standards themselves would be a revenue opportunity; here the intellectual property in the documents is open and available to anybody. Have you done any work looking at how that evolved, and the conflicts that must have arisen there?

No, but I'm very interested in that, because initially they were not copyrighted, and then there's a moment, and it comes after a battle; I'm aware of that, but I haven't looked at the battle. It was interesting to me that even in the first decade there were a number of RFCs that they said were not available, which I had guessed would be because they dealt with military issues that were classified, but in the end, as far as I could get, the claim was that it was intellectual property rights: the employer of whoever was putting that forward wouldn't let them have it out there. So no, but that would be worthy of an article in its own right.

Hi, my name is Caroline Toro, and I'm a researcher at the Fletcher School at Tufts University. I wanted to ask you more about the international insights you offered earlier. Particularly, were there international participants in drafting or responding to these RFCs, and were there then RFCs in which they raised concerns about those who participated, or about third actors who perhaps didn't get to be part of the process?

Oh, okay. I haven't read all of the documents that would fall into that category; I know what all the documents are, and that's a question that could be answered, but I haven't asked it that way. What I'm aware of so far is that when we're talking about international we mean representatives of governments, as with the Canadian government coming in in 1972; employing institutions that were based outside the US, although their employees might have been in the US; and then individuals who were outside the US irrespective of where their employing institution was based. We tried to distinguish between the geographic location of the author and the geographic home base of the employing institution. But if you weren't asking the question about whether or not they're international, there's nothing different in terms of the kinds of documents they're putting together: they're presenting the results of experiments, they're proposing protocols in particular areas, they're responding, they're critiquing protocol suggestions put out by others. The RFC genres distinguish between whether you're raising a problem for people to address, putting out a draft, discussing it, or declaring it absolutely the standard, the official standard. And they're participating in all the same ways that everybody else is. Of the texts that I'm familiar with, there are some that are working on behalf of third parties. So again, the Canadian government: that was a period when Canada was very concerned about telecommunications across its vast space, and they just wanted to make sure that from the beginning the internet was available even to the Inuit and all of that. So that would be a third-party example. I think also the NGOs would have come in; they're the category of institutional employers that would have been most active on behalf of third parties.

We have time for about one or two more questions. Then I will ask one, if I may.
You had a bit of a deep dive in your talk focusing on privacy issues and how that debate emerged quite early, actually. I was wondering whether you could share some highlights with respect to cybersecurity, because obviously security is such a hot topic these days in particular, and one wonders, right? Has the community thought very carefully about the security aspects, or did they miss that? Because it's still somehow puzzling: what I learned today is that all these considerations, that the internet could become this very big thing with many users and all these different actors, including bad actors and the like, came much earlier than I thought. So why did or didn't cybersecurity ultimately become one of the design principles in all of this? Is there an answer to that question?

So early on there was this position: I'm not going to help the US government surveil. People say things like this, and there was a kind of resistance to facilitating surveillance. But that is just one piece of the cybersecurity problem. I know more about cybersecurity from the legal side, so I started studying the use of information policy provisions in arms control treaties in the late 80s. And again, we have come back to that, but I've spent a fair amount of time with the Tallinn Manual, now, in Tallinn Manual 2.0, on cyber operations. It would be worth mapping the Tallinn Manual onto, say, the last decade of RFCs. That's not work I've done, but my educated guess here is that at some point after the first 20 years it does become so professionalized and dry that you're only focusing on the technical problem and kind of la-la-la pretending you don't know what the other issues are. What's interesting in the Tallinn Manual, in my view, and I don't know how many people here know the Tallinn Manual, but this is a group of international experts funded by NATO to look at the ways in which existing international law applies to, in the first edition, which was 2013, cyber warfare. The second edition just folds that into the general category of cyber operations. They identify a number of ways in which existing international law applies, but then they also identify a number of areas in which the two dozen experts involved could not reach a consensus. And I love this actually as a genre model, because they also go into exquisite detail about each of the arguments that is presented and the responses to it, so you can really track the conversation in a way that is not usually available to us. I did an analysis of the first edition and all the areas in which the international experts could not agree on ways in which existing international law applied, and each of those pieces, this is published, points to challenges to the survival of the Westphalian state and of the existing international system, and to questions about whether or not they will survive. But what's very interesting is that, as a way of trying to keep the Westphalian state and the international system alive in this environment, they actually, implicitly, put forward the notion of what I'm calling the right not to know. So if, for example, a state is aware of everything that's going through its networks, then there's a problem of accountability, even if you weren't the subject of attack or the source of attack, if it's flowing across your territory.
In existing international law, if that's an army tank going across your territory, you've got responsibility. How do you deal with that in a cyber environment? So basically they've moved towards a don't-ask, don't-tell, don't-know position. We have, for the first time, a right not to know. So bringing those together would be really worthwhile. That's as far as I've been able to go with it so far intellectually.

Hi, my name is Karen Sue, and I'm a master's in design engineering student here at Harvard. This question is closely related to the earlier question about structural inequality. You mentioned that one of the earliest design principles was that of designing for inclusion by catering to those with the lowest capacity. In your opinion, how has that played out? Is it still relevant today? Could there be things that are done better today?

Great question. I think the problem today is not so much on the infrastructure side as on the services side. We're driven to upgrade computers because services require us to, in order to run their operating systems or their software. So I think the response in today's environment, where we're all aware of the still enormous inequities out there both within and across societies, is the Linux path, the open access path that allows you much more flexibility regarding what you need in order to do what you want to do. But those aren't network issues, to my knowledge; it's more a matter of what you connect to the network and what you ask of what you connect to it.

Thank you very much. Please join me in thanking Professor Braman for a fascinating talk. Thank you. Thank you. Have a good day, everyone. See you soon.