Hello, we're going to get started. We are a sell-out crowd. We are short of chairs, and there seems to be no remedy except standing, so this is a standing-room-only event. Very nice, Ben, and well deserved. I'm David Weinberger; I'll be moderating lightly. As I think most, if not all, of you know, there are some basic things you should know about how these events work. Ben's going to talk, then we're going to open up for questions. This is a recorded, streamed event. It will live on the internet forever, so just be aware of that as you participate. We will have a hard stop at 1:15. Am I leaving anything out? OK, so it's a great pleasure to introduce Ben. Ben is an affiliate at the Berkman Klein Center. He's also a PhD candidate at Harvard's School of Engineering and Applied Sciences. His background is just perfect for his topic. He's introducing his book, The Smart Enough City. He spent a year with the Citywide Analytics Team in Boston. He was at New Haven's Department of Transportation, Traffic and Parking, where he worked, for example, on parking meter software, as well as bike lanes, bike policy, and the like. He was an Eric and Wendy Schmidt summer fellow in the Data Science for Social Good program, where he worked with Memphis using machine learning to identify blighted homes. This book, which is fantastic, is technically a trade book, which means it's accessible to people who are not experts in the field, who are not academics, and who don't come to it with a great deal of technical knowledge. It is provocative. And if I say it is surprisingly well written, I don't mean anything about Ben; it's that to communicate a topic dealing with such detailed and deep technical issues in a way that is clear and accessible is really, really remarkable. The Strong Ideas series from MIT Press publishes both versions, hard copy and open access.
So open access, as everybody here knows, means you can go read it and share it, all for free. Online, both versions should be available on April 2nd. So, Ben.

Thank you, David, so much for that introduction. David deserves a lot of credit here, because he was really instrumental in bringing this book into being in the first place by putting together this really great Strong Ideas series. And David was absolutely instrumental in helping me turn my early, vague ideas for something that maybe resembled a book into an actual outline, and then into the book that's coming out in just a couple of weeks. This is the first real book talk that I'm giving for the book, and I can't imagine a better venue for it than here at the Berkman Klein Center lunch talks. The Berkman Klein Center has been a really great source of inspiration and support and friendship for several years, dating back even before I started writing the book. In broad terms, the book is about the movement for smart cities, and I'll describe what that means in a minute. More broadly, it's about the dangers of trying to solve social problems with technology, and it attempts to chart a path for how we can actually think about the role of technology in addressing social ills: not as a standalone fix, but as something integrated into broader and more holistic visions for society. I'll talk for about 25 or 30 minutes. In the first half, I'll give a high-level overview of the book's core thesis, and then I'll get into some of the more grounded principles and examples that I talk about in the book, including case studies from various cities across the country. To start out, we should define what we're actually talking about today: what is a smart city? Since the term itself largely comes from the technology industry, I thought we should start at the source. This definition from Cisco provides a good starting point.
They write that a smart city is one that combines traditional infrastructure, roads, buildings, and so on, with technology to enrich the lives of its citizens. So it sounds pretty straightforward: we're applying technology to solve social problems in cities. And this maps onto the typical use of the word "smart" nowadays, which tends to mean applying digital technology to existing processes, practices, or items. This applies whether you're talking about a smart city, a smart home, a smart watch, or even a smart toaster. And smart cities have become pretty widely adopted as a broad vision for the future of cities. Not just tech companies but also local governments, the federal government, and national governments around the world are adopting the smart city as a frame for thinking about what the future of urban life and city government is going to look like. Just about every city has adopted or deployed or at least explored some form of smart city project, and cities often brand themselves by just how smart they are. Kansas City, Missouri calls itself the world's most connected smart city; that's really fundamental to its vision of itself nowadays. So smart cities are incredibly important. They're really the vision shaping much of 21st-century urbanism as we move more deeply into the century. But as with any utopian vision for the future, and especially utopian visions grounded in technology, we need to scratch the surface of these promises and ask several fundamental questions. The first is whether these technologies are actually possible: are the promises even achievable by technology? The second is, regardless of feasibility, whether we actually think these futures are desirable: whether we would want these technologies to be implemented at all. And with smart cities, I would argue that the answer to both of these questions is no.
As I will describe, the smart city is ultimately a vision full of false promises and hidden dangers. My favorite example, one that really encapsulates the dangers of smart cities, is a simulation by a group of researchers showing how, with self-driving cars, we can have cities without traffic lights: we can get rid of congestion and have streets where cars just zoom through. When you first look at it, it looks pretty remarkable. It's very different from the type of urban gridlock that we're used to seeing. But if you spend a couple more seconds watching the video, something might jump out at you, which is that it doesn't look like a city at all. There are no people, there are no buses, there are no parks, there are barely even buildings, and you can't even tell if there are people in the cars. So what we've seen here is an abstraction of the city into a problem. And what's remarkable is that this is not just some generic simulated urban intersection meant to show the promise of this technology. It actually depicts a specific intersection in Boston, the intersection of Mass Ave and Columbus Ave. And if you spend a couple of minutes there, it looks nothing like this. You'll see people walking around; you'll see cars waiting for people to cross the street, which seems like a pretty reasonable reason to stop; you'll see countless cyclists; and you'll see many people getting on and off the bus that goes down Mass Ave, which has a stop right at this intersection. And of course, this spot in Boston is not just a place where transportation happens. There are broader layers of social and political context. To give just one example, this is along what's known in Boston as Methadone Mile. It's the epicenter of the city's opioid epidemic.
But all of these different people and their needs, and the broader institutions and systems that exist in this place, have been completely erased in order to create a landscape where traffic efficiency, where rapidly moving self-driving cars, is possible and even looks like an attractive future. And so it raises the question we must ask whenever we are presented with technological solutions to complex social problems: what sorts of futures will these technologies actually create? The attitude that leads us to find smart city visions so attractive is one that I call tech goggles, and it captures just what makes these visions seem so appealing. It's a perspective on the world that causes one to see the city as a technology problem: to see every issue as a problem to be solved by technology, and to diagnose only those issues that have a technological solution. Tech goggles are grounded in two fundamental myths about the role of technology in society. The first is that technology is the fundamental driver of social change: the belief that progress happens when you introduce new technology into existing or outdated systems. The second myth is that technology provides neutral and objective solutions to social problems. So not only does technology drive progress, it drives optimal and socially beneficial solutions that everyone can agree on. Technological solutions are often considered to have no politics; they are simply better ways of doing things. There's a telling quote from the president of IBM describing smart city systems, where he says: if the leaders of smarter city systems do share an ideology, it is this, we believe in a smarter way to get things done. So as he would tell it, there are no politics here, no broader social issues to think about, no different types of needs that may win or lose out as we adopt these solutions. They simply benefit everyone.
But of course this vision is completely blind to, or perhaps intentionally blinds us to, the broader normative and political elements of these technological propositions. We end up evaluating technological solutions along purely technical criteria such as efficiency, overlooking broader consequences in a way that tends to entrench established systems and prevent us from thinking about or achieving more systemic reforms. Most fundamentally, tech goggles create a severe distortion in our understanding of society. They allow us to look at a city street, with all of its complexities and different people, and simply see an abstract technical process that needs to be optimized using sensors and data and algorithms and other forms of new technology. And this, to me, is the fundamental issue with smart cities. It's not just that the technology is unable to achieve the benefits that are promised, although in many cases, if you dig into the technology, what's being promised is incoherent and unachievable. More fundamentally, the problems that we think we're solving are distorted versions of the actual problems that exist in society, because all we're seeing is the technological aspects of those problems. And our attempts to solve those distorted understandings of the problems often create more issues than they solve. So smart cities, and tech goggles more generally, are not simply some misguided and naive set of ideas. They are a powerful ideology that has been adopted by tech companies, city officials, journalists, and many others across the country and the world, and it has the potential to fundamentally reshape cities. As we adopt technology following these utopian visions, the technologies that get implemented will reinforce the ideas of tech goggles and cut off our path towards more systemic and fundamental reforms.
And so I argue that to the extent that the smart city revolutionizes urban life, as is so often promised, it will be by transforming the landscape of urban power and politics rather than by creating some sort of optimized technological utopia. Reconstructing urban life and city governments following these techno-utopian perspectives will lead to cities that are superficially smart but, under the surface, rife with injustice and inequity. So I argue in the book that we need a new way of thinking about these things. We must reject the ideology of smart cities, which would have us think that all we need is as much technology as possible, and instead adopt what I call smart enough cities: cities that value technology and see the role it can play, but that are fundamentally grounded in broader and more holistic social and political visions. Rather than trying to be smart, in other words positioning technology itself as the end, cities need to be simply smart enough, using technology only to the extent that it can actually advance their broader social goals. Throughout the book I attempt to draw these contrasts between smart cities and smart enough cities, highlighting just where smart city visions go wrong and providing a roadmap for how we can pursue smart enough cities: how we can work towards more responsible and equitable cities where technology still has a role to play. For the second half of the talk, I want to get into some of the principles I develop for how we can promote smart enough cities and how they correct for some of the gravest failings of smart cities. Principle one is to address complex problems rather than solve artificially simple ones. This cuts right to the core of tech goggles, which often abstract complex and intractable social issues into ones that appear solvable. That's sort of the nature of mathematical modeling.
You simplify the world into something where your mathematical model appears tractable. That's incredibly appealing for technologists because it makes the problems easier, but it also gives us a distorted understanding of just how well we have solved the problem, and those engineering solutions often make the problems worse. We've already seen this at play with the simulations of self-driving cars, which make it seem like we can optimize traffic, but only because we've eliminated all of the other things that actually happen in cities and that might slow traffic down. These visions are remarkably similar to utopian proposals from the last century put forward by the automobile and oil industries, showing us cities with highways running through them rather than traffic. And we know that those turned out pretty badly. So I fear that if we follow these visions, we may end up recreating many of the gravest errors of city planning from the last century. But there are cities that are thoughtfully adopting new technology to address mobility. Columbus, Ohio in 2016 won the Department of Transportation's Smart City Challenge, which came with $40 million to create a first-of-its-kind smart transportation system. When I first heard about this and was about to read Columbus's plans, I was really skeptical. I expected to see something like the visions and simulations on the previous slide: self-driving cars to solve all of the city's problems. But in fact, rather than coming up with some futuristic, fancy, but ultimately ill-conceived plan, Columbus focused on the connection between transportation mobility and social mobility. They focused in particular on improving mobility in a low-income neighborhood with high infant mortality, driven in part by healthcare being inaccessible to many of the women and mothers in that neighborhood.
And as they thought about how to address these issues, rather than coming up with the optimal technology on paper, they went out and actually talked to people. They talked to the neighborhood, to the people, and to the different mothers who lived in that area, to understand what sorts of challenges and needs they actually faced. And the set of responses they developed was pretty remarkable in just the diversity of items and the way they addressed the different challenges that people who live in this area face. They ranged from better transportation apps and payment systems, to on-demand rides for pregnant mothers, to providing childcare so that women with children could get to doctor's appointments and job interviews. Reflecting on these projects, one of the leaders of Columbus's project told me: we really needed to look at it from a more holistic viewpoint; as geeky technology people, we wouldn't have thought about these things had we not considered the whole picture. So it's incredibly important to focus on real people and real problems rather than expect technology to create optimal solutions. Principle number two is to implement technology to address social needs and advance policy. Now, this may sound so obvious as to be practically trivial and not worth saying, but it's actually a really important place where many smart city projects go wrong. The allure of new technology prompts many city leaders and officials to adopt the types of goals that align with technology, thinking about efficiency and convenience rather than focusing on the real values of democracy and equity and justice that are actually fundamental to thriving cities. The city's broader strategy becomes colonized by the tech logic of smart cities.
A compelling example of this is in civic engagement and democracy, where many cities have been adopting apps like 311 systems, which allow you to notify the government about issues like potholes, graffiti, and broken streetlights. While well-intentioned, these types of apps fundamentally conceive of democracy as a technology and optimization problem; they're grounded in the goal of making politics more simple and efficient and able to scale. And so they don't actually address any of the core challenges that prevent functioning and thriving democracy and civic engagement. Because of that, oftentimes these tools end up benefiting the most well-off in cities in particular. What's needed are programs and technologies grounded in meaningful empowerment and engagement that address the real barriers to democracy. Various cities have explored technological tools that are grounded not in efficiency but in inefficiency, recognizing that civic actions such as deliberation, dissent, and capacity building are fundamentally inefficient but are necessary to what democracy actually means. Other cities, such as Vallejo, California, as well as here in Cambridge, have adopted programs of participatory budgeting, providing the public with meaningful power and decision-making control over municipal budgets, programs, and priorities. These programs are grounded in the types of values and processes that technology so often stamps out, and that is precisely why they are valuable. Principle number three is to prioritize innovative policy and program reforms above innovative technology. Smart cities are grounded in the idea that innovation is good and that innovation means new technology. And so often in smart cities we throw technology at long-standing problems, expecting the technology to solve those problems rather than addressing the underlying issues that are fundamentally not technological in nature.
I think the most dangerous example of this in smart cities today is predictive policing: algorithms that police departments have been adopting to help them predict where crime is going to happen. These are particularly attractive in the face of growing outrage about discriminatory police practices and abuse, and growing energy for police reform. But these algorithms are a way of pushing that energy away, of saying: we can have algorithms make these decisions instead of people, and therefore that's how we're creating change. These algorithms are fundamentally flawed in several ways. There are important technical flaws: the systems don't actually have the impacts that are promised, and the predictions themselves are often biased due to the historical discrimination of police departments. But more importantly, going back to the principle at hand, these algorithms distract us from the fundamental choices that we have to make about the role of police: what types of issues they should and should not be addressing, and what their priorities and practices should be. By focusing on technological enhancement, we're simply greasing the wheels of existing systems rather than reimagining those systems entirely. Treating technology as progress prevents us from pursuing or even imagining deeper reforms that can spur real progress. Contrast this with Johnson County, Kansas, which took a very different approach. Many years ago they recognized that their policing systems were falling short because many of the people ending up in jail were there because they had mental illness, or related issues around homelessness and drug addiction that weren't getting addressed. And because these issues were going untreated, people were ending up in jail, because that was where their services were equipped to bring people.
So they've spent a decade developing programs that divert these low-level offenders with mental illness away from the criminal justice system and into treatment. In recent years they've begun deploying machine learning algorithms to help improve these systems: to provide proactive treatment that prevents people from coming into contact with police in the first place. And these algorithms have shown remarkable promise in identifying high-need individuals and keeping them out of jail. So the contrast between Johnson County and predictive policing is quite clear, I think. We can see that smart enough cities are driven by clear policy goals and long-term planning efforts. In Johnson County, many of the things that made this possible started 20 years ago. And they generated these benefits not by creating some supposedly infallible optimization of their existing practices, but by reforming their programs: by evaluating where their programs were falling short and thinking about what they could change in how they were dealing with social disorder. Only then could machine learning come in and actually create a positive impact. Principle number four is to ensure that a technology's design and implementation promote democratic values. This is an incredibly important one in smart cities, because a technology's impacts have to do with far more than just its explicit nominal function. They are often grounded in how that technology is structured: who owns it, how it is paid for, and who controls it. Smart city technologies are very often designed explicitly with the goal of collecting as much data as possible about individuals. The smart city, while often talked about in terms of optimization and convenience, is at its core in many cases a covert tool for increasing surveillance and profits for tech companies. It is a dream come true for tech companies.
We can see this in New York, which has adopted LinkNYC kiosks, a program to deploy free public wifi across the city. And while this is an important service to provide to the public, the way these kiosks are structured is incredibly dangerous. They are owned and managed by a company backed by Google's parent, Alphabet. So it should be little surprise that these kiosks are paid for not by government funds or taxpayer dollars, but by collecting as much data as possible about the people who use them. They're designed to collect your data and serve you targeted ads. And as we can see, there are thousands of these throughout the city, meaning that the price of free public wifi is a city-wide surveillance network. But these are not inevitable outcomes dictated by the technical demands of the technology. They are the political and economic arrangements desired by those who control and manage the technology. Chicago, by contrast, is developing a program called the Array of Things, which in many ways might seem similar to LinkNYC. It's a city-wide set of sensors to collect data, but it's actually very different, and it provides a stark contrast. These sensors are designed to collect information about environmental conditions and other things going on in the city, and because the Array of Things is an academic and City of Chicago program, the sensors are funded for public benefit as opposed to private monetization. The city also performed extensive public and civic engagement processes to ensure that the governance of this technology was actually grounded in the broader public's values about what data should and should not be collected. In the process of developing their policies and governance strategies for the Array of Things, they altered many of their practices based on feedback they got from the public. And of course, there's much more to be done in this arena of protecting privacy.
Many cities, including Cambridge just recently, have passed surveillance oversight ordinances that provide the public with meaningful oversight of, and insight into, what sorts of data and technologies are actually used by city governments. More broadly, many cities, I think, are just starting to recognize that they have incredible responsibility and power as market makers: rather than simply accepting whatever terms Google comes to them with, they can actually enforce democratic standards and protect privacy in the development of these programs in the first place. And what they've found is that protecting privacy is actually essential to innovation, and in fact often enhances, rather than hinders, the appropriate deployment of technology. So principle number five, the last one, shifts gears to be a little more operational. It's about how to use data effectively. What's incredibly important is to develop capacities and processes within municipal government for actually using data. Oftentimes, discussions of smart cities and pitches from tech companies talk about solutions as if simply having the most technology, having as much data as possible, will by itself solve problems. But this presumes that technology operates in a vacuum, when in fact it's incredibly important for success that technology be grounded in and integrated into municipal structures and practices. This came to bear quite strongly in New York City, which in 2015 faced an outbreak of Legionnaires' disease, an acute illness caused by bacteria that were incubating in air conditioning systems in various neighborhoods across the city. To address the outbreak, city officials had to figure out where these air conditioning systems were: which ones were infected, which ones needed to be cleaned, which ones were already cleaned. And this was an incredibly daunting challenge. This is not a data set that already existed.
And they simply did not have the practices in place to bring the information together. The data they had was often incomplete; it had errors in it, and data sets often could not be brought together and synthesized to paint a full, cohesive picture of what was going on. Fortunately, through various processes, they were able to bring it together, and afterwards they learned their lesson. They created what they call data drills, which are sort of akin to fire drills, but for municipal data use. They've developed data infrastructure tools to enable synthesized pictures of what's going on around the city, developed generalized technical skills among members of different departments across the city, and created processes for sharing and managing data. Ultimately, all of this is grounded in the understanding that taking action with data fundamentally relies, first and foremost, on having that data properly managed and understood in the first place. So hopefully I've at least begun to convince you that there are many dangers to smart cities, that the seductive logic of smartness distorts our view of cities and subverts opportunities to truly improve urban life. But despite the hype that tells us the age of smart cities is inevitable and imminent and desirable, we can and we must chart an alternative path: to pursue smart enough cities that integrate technology into holistic visions of democratic and egalitarian urban futures. Thank you.

We have a couple of mics going around.

Hi, you mentioned 311 as a technology that could have unexpected negative effects, but you didn't go into that at all. Can you expand?

Yeah, so with 311 there are a couple of different ways that I think these systems can be distorted by the logic of efficiency.
First and foremost, what we've seen in most cities is that the reports themselves, 311 reports of potholes and streetlight issues, skew heavily: there are many more reports in whiter, wealthier neighborhoods, where people both have access to smartphones and have a sense of trust in the government, a sense that the government will solve their problems. This was something I worked on in the city of Boston, where we looked at whether 311 data actually provided any value. What we found was that the data was highly skewed towards more well-off neighborhoods, and that the data didn't actually map onto other estimates of conditions in the city. Estimates of bad streets from 311 reports had very little correlation with our own internal estimates of where street quality was bad. So it really was quite a bad tool for...

Is this even if 311 is not restricted to smartphones but can also come in via telephone calls?

So even in Boston, where reports come through multiple channels, within a couple of years of introducing the app, smartphones represented around 80 or 90% of the reports coming in. And I think, more fundamentally, we can see how the logic of efficiency promotes certain types of engagement that are more transactional than others. If you look at what types of issues these apps care about, it's issues like broken streetlights or potholes or graffiti. The issues that are far more fundamental but ultimately inefficient, like bad schools or bad transportation systems or too much crime in neighborhoods, require a great deal more than simply telling your city official what's going on.
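The validity check described here, comparing 311-based estimates of street quality against an independent internal survey, can be sketched roughly as follows. This is an illustrative toy example, not the actual Boston analysis: all numbers are invented, and the per-neighborhood structure is assumed for demonstration.

```python
# Toy sketch of comparing crowdsourced 311 reports against surveyed
# street conditions. All data below is invented for illustration.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-neighborhood counts: 311 pothole reports (which can
# skew towards wealthier areas) vs. independently surveyed street defects.
reports_311 = [42, 55, 38, 61, 12, 9, 15, 11]
surveyed = [14, 12, 15, 13, 30, 28, 33, 29]

r = pearson(reports_311, surveyed)
print(f"correlation between 311 reports and surveyed conditions: {r:.2f}")
```

In this invented example the correlation is strongly negative: the neighborhoods generating the most reports are not the ones with the worst streets, which is the kind of mismatch the Boston team found when report volume tracked wealth and trust rather than actual conditions.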
And so I think that, even just in their fundamental structure, they prioritize very limited types of issues while still being framed as the new form of engagement.

Hey, so this got me thinking about a lot of things. One of my friends is the mayor of a city in Western Mass called Holyoke, a young millennial mayor. He got elected at a very young age, had obviously grown up with a lot of technology, and thought, hey, I'm gonna implement all these technological things; and he had all these tech companies and other related folks making pitches about moving to technology. And even though he understood some of the social problems in the city, the community was also demanding more efficiency and wanting to engage differently with the government. And we figured out the barrier was actually not even the department heads or his direct reports, but the rank and file, the people who've been working in city hall for years and who often don't live in the places where you get proximate to a lot of the social problems in the city. So one of the things we thought about, and I don't know if this is something you saw in some of the cities, is: before we even do the training around implementing new technology, how do you do training around some of the social problems that exist in the city, before you think about the technological solution? Because I think what we realized is that folks who worked in the city didn't really see how this technology might advance or help improve some of the social issues.
So I don't know if that's something you found in some of the places you worked or learned about, but there's this idea that even if you have a great progressive millennial leader and a community demanding change, there are some people in the middle who are actually gonna be the front line in troubleshooting and implementing, who you need to move along as well. That's a really great question and a really important point, and actually something that I experienced in Boston when I was a data scientist there that I think fundamentally shaped my understanding of a lot of these issues. In Boston, one of the projects I was working on was to create a new open data portal. Many cities nowadays, as many of you have probably heard, have these open data systems: websites where public municipal data is out there for anyone to use. And we were in the process of reimagining what this could look like and what our goals were. After six months of really sitting on the inside and thinking about what functionality this should have and what sorts of data to include, we started going out and spending time in public libraries, thinking, okay, let's just go talk to the people who live here and find out what they wanna see. What sorts of data sets do they care about? And what we found was pretty remarkable. We'd set up shop in one of the libraries around Boston, we'd have a little stand, and we'd wait for people to walk by and say, hey, we're here from the Boston Open Data Team. What data do you wanna see? And they didn't care; they'd just walk right by or say, I don't know. So we shifted our strategy a bit, because we actually wanted to talk to people. We stopped asking about technology. We started asking: oh, you have two kids, how do they like the schools? How do you like the parks? How often do you come to this library?
And we would have amazing conversations, lasting oftentimes up to an hour. So it was this really amazing shift: while we had been conceiving of everything in terms of the technology, both internally, in terms of our process for moving a data set from a department through review to open data, and in terms of all the tech functionality, it really reminded us that, ultimately, people are not engaging with their real-life issues as technology problems. I mean, they're not technology problems, and people rightly don't see them as such. So I think that was really eye-opening for me and definitely shifted some of our approaches to thinking about what role open data could play and how to actually help facilitate those outcomes. And I think this gets to the story in Columbus, where they had a similar process of going out and talking to people. Some form of almost mini ethnographic work, or you can call it design thinking or user-centered design, can be an incredibly powerful way to get the people who are focused on implementing this technology to really understand the real people and the real issues. Oftentimes that has to come from leadership, though, because there is this disconnect: a lot of technology is focused on the middle manager or employee level, but ultimately it has to somehow turn into public benefit, and often there's a disconnect between what is convenient for someone on the inside, trying to manage all of the 311 reports and meet their metrics for managing 311 reports, as opposed to what actually matters for the people on the ground who might be dealing with those issues in the first place. I'm really, really enjoying your discussion in this book, especially about the critical thinking side, or the people side, of technology.
I am a technologist, I am a computer scientist, and I'm having a great deal of difficulty sitting in some meetings. I'll pick, for example, driverless cars, autonomous vehicles, and the adoption of them. I'm naturally enamored by the technology and the things it can do for people with vision impairment, all these wonderful things for communities who otherwise would not be mobile. However, not enough people seem to be asking the questions. For instance, I was at a talk with a member of the New Urban Mechanics, and they were talking about the data, the 311, all of this; it was fantastic. So I brought up the question: in terms of driverless cars, I know that people are testing them. Who's really in the room making the decisions about these? Do you have somebody who has the numbers from the Bureau of Labor Statistics on all the displacement, all the Uber and Lyft drivers and the taxi drivers and the bus drivers and the school bus drivers and truck drivers who will not have jobs? And it's like, no, we don't. Not that they haven't thought about it, but it's sort of, yeah, we'll kind of get to it. It was a non-answer. I've asked technologists: would you put your child on a driverless school bus today? Not other people's children; yours, your precious cargo. I've never gotten a straight answer. It's a yes or no question, and I've gotten gobbledygook. So I'm just curious about who's making the decisions, who's included as a resident of Boston. I've never been invited. Who's making the decisions about what will be forced upon us, or how we should adopt the technology? That's one of the most central areas of these smart city conversations. The people who are so often left out of public conversations continue to be left out of these conversations.
I think, in part, one of the things that the tech companies do really well is create this sense of urgency, and the media will often play into that. There's a sense that we need to rush forward, that to be thoughtful and innovative we need to adopt this technology now, or at least explore it, and that we'll manage the consequences, or think about how we're gonna govern this technology, later. That's the case in many places with self-driving cars, where cities are having these pilot programs and reducing regulations so that self-driving car companies can come in, because there is this sense that this has value, that this needs to have value, and that any city wants to be moving into the future. So I think pushing back on that sort of urgency that the tech companies put forward is really important. And I think we are starting to see shifts, where the unequivocal optimism around the technology is fading: it's no longer just an obvious win to bring in new technology and have these programs, people are actually starting to see the downsides, and a mayor saying we have this new technology is no longer just going to win points. I think we can see with what happened with Amazon last week that when it comes to bringing in tech companies, people are starting to realize that there are other issues that need to be considered, which then brings along other people who need to be in the rooms when we have these conversations. And I think that surveillance oversight ordinances are a really powerful way of actually enforcing that control: making sure that there are public hearings about new technologies, that the public actually has insight into that technology, and that the public ultimately has some form of veto power over technology that they don't approve of.
So I think there's always going to be this pressure of making sure that the conversations move beyond the insiders: the government and the tech companies and the vendors who are trying to push these solutions on them. It's an ongoing struggle, and I've seen a lot of cities, New York, and Toronto with the whole Sidewalk Labs project, where people are, with really inspiring activism, forcing other voices into those rooms. But it's a constant struggle, and there are forms of regulation and processes that we can be instituting, often with some struggle, to make sure that those voices are not just at the table but have a meaningful voice at the table as well. Thanks, that was a great talk. I'm wondering if you looked at any non-American cities, because one of the things I'm dealing with in a different project is, I was in Kigali recently, and, moving away from the self-driving cars, Google managed to convince the government of Rwanda to have digital addresses, right? So that you could use them on Google Maps. Now, nobody uses these addresses, but that's a separate problem. This is a country where the state was highly efficient in the '90s at finding people it did not like and throwing them into prison, and there was the genocide in the '90s, right? But it seems as though nobody figured out that this was a context we needed to worry about, i.e., when it's easy to find somebody online in a country that is coming out of a genocide, what does that actually mean? And I'm wondering if you looked at non-American cities and what some of the conversations are that they're having, because in Kigali, nobody had this conversation. Google went and convinced them, and suddenly they're posting these addresses on everybody's gates. So I didn't look in too much detail outside of North America, just to be able to focus on the specific context here, but I think there are a lot of interesting parallels.
This is a global movement. China and India are creating new smart cities from scratch, building rapidly developing cities from nothing into a smart city in just a span of a couple of years. And in China in particular, if we think the surveillance here is out of control, there it's often many orders of magnitude beyond that. But there are other places that have actually really inspired me. Barcelona is a really great example of a city that has been pursuing smart city approaches but from a fundamentally different starting point, grounded in democracy and civic control rather than simply technology. They've really been the one place that has most strongly bargained with tech companies to create policies and public-private partnerships that give the public more control, making sure that all the data that's collected is ultimately about public benefit rather than private profit. So I think there are a lot of lessons to be learned from looking at these different contexts, and, across the board, to your point, about making sure that we actually understand what the dangers of these systems are. Again, a vendor trying to sell a tech solution is always gonna talk about what the positive outcomes are and just how great the technology is. So it's important that over the last couple of years we've seen a generally deeper understanding and education about how technology can be misused. And even in the US, many of those issues still come to bear. I often find myself conflicted, seeing these two conversations that often don't talk to each other: on the one hand, saying cities need more data, it's great, we can optimize services; but also recognizing that in many cases there are specific parts of the government, especially law enforcement, that we maybe don't want to have access to certain types of data.
So we can't simply treat these systems as monolithic. I was at a presentation six months ago where people were talking about how to create unified data systems so that everyone in the city could have access to all of the data. And while that might be great for the transportation or parks department, all of the examples out there of police and law enforcement officers having access to more and more data show that it simply leads to more and more abuse, with gang databases and predictive policing and risk assessments. So we have to think really deeply about exactly which aspects of governance we care about supporting and not supporting, which goes back to the point about really engaging in fundamental evaluations of which systems we want to keep and promote and which ones we might wanna fundamentally change. I think behind you first, and then we'll go to you. Hi, this was a great talk, but I'd like to follow up on your vision and conversations on open data in Boston, because I just came from a meeting with a Cambridge City Councilor talking about how insular the Cambridge open data strategy is. If you wanted to put up something opposite to what you've done, it would be the perfect case study. What were the practical outcomes of those conversations you had in public libraries? What actually changed about the Boston strategy, what data sets did you go after afterward, and what were the actual data sets that addressed the concerns of the families you encountered in the libraries? So I would say it was more of an unrealized vision. What we were facing in Boston, as is often the case with these things, is actually sort of a good lesson in how these things go. There was a year, maybe 18 months, of a sprint to get the first pass at this new open data portal launched, and right after it ended, we hit the point where the grant that had been funding this program ended.
So all of the people who were in charge left, and all of the interns who had been involved in getting this project up to speed, including myself, left very soon after that. So in terms of capacity for this program, there was the initial sprint to get the thing launched, and then immediately, sort of, nothing has happened since. But I think the vision that I brought was really recognizing a disconnect in the way we talk about open data: just throwing data sets onto a website doesn't actually create any impact. The data set needs to be engaged with, through some sort of process, for it to create impact. The theory of change around open data is often very shallow. So one thing that I had wanted to build out was, maybe think about it as conduits or intermediaries: we might be releasing data, but the average person is not looking at that data or even caring about that data. So I would have wanted to see much stronger partnerships with specific journalists and academics and researchers and community groups who themselves have the capacity to think about how to use that data and could inform our strategy, getting us out of the idea of just throwing it out there and expecting that someone in the city or somewhere in the world would simply look at that data and solve some problem. One thing that I actually quite liked from Cambridge is that a couple of years ago they put out, I forget what they called it exactly, but it was sort of a problem spec sheet, where, for some of their data sets at least, they put out: here are some things that we'd be curious to see an analysis of, or an app built around, to at least give the public some sense of what problems the city is actually facing that the open data might address.
So I think those are ways of bringing the connections together: saying, here's what we as the city might find value in, and also learning a lot more about what the public might be able to do and be interested in. But I actually think open data is a really hard problem to make useful, yeah. I really enjoyed your talk. I was wondering, in terms of your work and your research, what are the implications for policy at the state or national level, where party politics might come into play? That's a great question, and something I probably haven't thought about enough. At the state and federal levels, we are increasingly seeing similar movements, especially toward these internal data science teams. I know the state of Massachusetts has a data science team to inform policy, and the federal government, especially under Obama, had new offices like 18F and USDS, the United States Digital Service. Those are one kind of effort that's quite valuable, just building in deeper technological expertise, both for building internal tools and for evaluating external tools. And providing more funding for these types of programs matters: the DOT Smart City Challenge actually turned into a really great program in Columbus. So continuing to provide support for these broader municipal projects means recognizing the role of technology, but avoiding the trap of simply picking the proposal that seems most technological, or framing challenges solely around smart city projects that ask for the city that's doing the most to create a city-wide layer of tech and sensors, rather than thinking about how the technology's being used. And then there are the broader issues that we're seeing in urban policy generally, where cities often just don't have the decision-making power, because of the way things are fragmented, because of the way tax bases are fragmented.
Many of the issues that we're seeing come down to two things that I think are particularly challenging. One is the underfunding of cities, which allows these deals with tech companies to seem so attractive, because Google can come in and say, we will provide you with free public Wi-Fi, and don't worry about how we're funding that, we're just gonna give you that free service, because you can't pay for it yourselves. That's often the trade-off that's being made, because cities aren't able to do these things themselves. And then, I had a second one, but I'm blanking on it now. I'll come back to you if I remember it. Yeah, Cibella. Hello, thank you so much for your talk. You gave an example earlier of how Google was providing free public Wi-Fi in New York and how this could make for a surveillance city, in a way, and you were suggesting that cities should have the power to negotiate better privacy and things like that. So, in general, what are your thoughts on cities having the power to request the data that belongs to these large companies in order to use it for public good? For example, think of a city in a developing country where there's not enough infrastructure or sensors to actually collect data, but there's a large tech company there. What are your thoughts on being able to request that data in order to use it for public good? Yeah, I think that's one of the things that can be negotiated, and it's something that tech companies have resisted. In Boston, before I came in, there was a partnership with Uber that essentially ended with Uber giving only the most meager amount of data, and it even led to some debates in the press. And it's sort of funny, because when tech companies resist this, they'll often talk about protecting privacy, which they never care about in any other context.
There actually are some privacy issues to think about when you're handing this data over to government, thinking about it in terms of public records, but generally, this is exactly the type of thing that can be negotiated, right? If you're New York City, the largest, most powerful city in the country, there's no reason why you simply have to agree to whatever terms a tech company sets: here's what we're gonna do, take it or leave it. So cities have started to say, in the contract: if you're gonna be working here, we will have access, whether it's a monthly dump of the data or some other access to this information, because the companies are often collecting far more data than a government could or certainly is, especially in the developing world, where cities often aren't even collecting data to the extent that cities here are. So I think that information can be incredibly valuable as a tool, and it's exactly the sort of thing we need to be thinking about: what are we actually gonna be signing up for? But it's something that often hasn't been thought about. I think only recently have cities started to recognize that these are important issues to even put on the table, rather than thinking about them after the fact or not thinking about them at all. When I was in Boston, we were restructuring the privacy policy around our free public Wi-Fi network, and we realized that we didn't even have full control over what data was or was not collected, because of what the vendor we had been working with had set in their terms. We hadn't even thought to consider privacy or data collection in the process of setting up that agreement and procuring that technology in the first place.
So, increasingly, instead of just thinking about efficiency and optimization, these broader concerns about structure and public access should be fundamental considerations whenever we're thinking about this technology at all. Maybe right behind you, yeah. So my question is a framing question, prompted by your reframing of the smart city into the smart enough city. Something I feel like I've seen over the past few years, especially in Europe, is the reframing of the city as a living lab. There are some cities in the US that use this, but it seems to be a mostly European idea, and it seems to contain some of the same thoughts about bottom-up co-creation. Another thing, though, is that it also forwards the idea of experimentation. So I was curious what you thought: first, what the work of these framings does, whether it's useful, or whether it's obscuring the same process going on everywhere; but then also, specifically on the experimentation element, what you think the place of that is in these projects. Yeah, if you spend enough time in the smart city space, you'll see a lot of different words and labels thrown around. Everyone's got their catchy way of putting what they're doing or what the city should be. And although sometimes it can be overwhelming to see just how many terms are out there, we shouldn't discount the role of this framing. At least for me, the point is that when you talk about smart cities, no matter how thoughtful you try to be, smart cities put you on an axis where being smart means having more technology, and that positions using less technology as the dumb city, right? Those are the two options that you have.
And so even though there are a lot of people who try to talk about a truly smart city, or other forms of smart cities, or this is what I think a smart city is, just by using that language and allowing that framing to take hold, we fundamentally shape how we evaluate things. If you're thinking about smart cities, the axis you're on, the ranking system you're gonna be thinking about, is how much technology you have. So I think these reframings are really powerful, and I've at least tried to get toward one where the goal is clearly not about how much technology you're using, right? And I think experimentation, broadly speaking, is a really powerful thing that cities are just starting to adopt, and it's a great example of how we can think about innovation as meaning something much broader than just adopting new sensors or algorithms. Innovation in terms of broader practices, or thought processes around what it means to pursue change, is often way more powerful than innovation that means buying some new sensors. Cities in Europe, and I know Boston is one of these places, have been doing a lot of thinking about experimentation, thinking about urban science and what it means to ground your programs in, maybe not formal academic research, but what we can think of as research projects, and to ask: does this actually have an impact? Because recognizing the complexity of cities means understanding that what seems really nice on paper may not actually work as intended. There can be backlash effects, or just unintended, weird consequences, especially if you think about second- and third-order effects.
So experimentation is a great tool, and it speaks broadly to the growing movement for cities to become more experimental. This is maybe a place where some technology thinking is useful: there's the idea of agile development, and thinking about agile testing and whatnot can be a really great way for cities to think about what they're doing. A great example of this is in New York: if you've been to Times Square in the last couple of years, you'll see that pretty much all of Times Square has been transformed. There's no traffic going through most of Times Square; it's a pedestrian plaza with seats and all of that. They first experimented with that by getting some two-dollar beach chairs at Target, putting them out, and just seeing whether people would actually sit there, and they found that it had a huge impact, far more than they ever would have expected. So that's now a massive change that will hopefully stay in New York for the foreseeable future, and it started with a couple of people going out one morning with some beach chairs just to see what happened. So I think recognizing that urban space is fluid, and that there are many different ways to pursue change and progress, is important. We have to think about the alternatives and what might and might not work, rather than simply saying, this is what's good, or, this is the system we've procured and so now we're stuck with it for five years. So we have time for a couple more questions. Thanks so much, Ben, for this really wonderful talk. My question is also related to the framing, and about holding cities accountable, perhaps, to this new framing. The question is really: how would we hold cities accountable, and how can we really distinguish in practice?
I know you've put up a few really helpful examples, but more in real time, how do we tell the difference between a smart city and a smart enough city, and ensure that the public sector is really moving forward with this idea? Because in the same way that we've seen the term smart city being co-opted, not only by the public sector but by industry and all different stakeholders, to claim that they're using technology for the social good, I could see the same sort of dynamic happening with the smart enough city. So I'm thinking about what you might envision for holding cities accountable, and also about how we measure the impact of any of these technologies being put into place. Yeah, this is a domain where I've been very encouraged over the last couple of years. I think three years ago, more of these technologies would be rolled out, and there'd be critics, but that was sort of not what people were talking about. At least now there's been a lot of great work on the dangers of technology across the board, not just in cities, and I think we've become much more skeptical of algorithms, much more skeptical about data collection, and especially much more skeptical about companies like Google and Facebook and Microsoft and Amazon, no longer viewing them as the great messiahs that will bring us to the future.
So I think continuing to build out that broader public awareness and understanding is really important, so we can get to a point, as I alluded to earlier, where a mayor coming in and saying, we have all of this great new technology, we've just brought in Amazon or Google or Verizon to do this thing, no longer gets easy political points even if there are a couple of critics behind the scenes. It needs to be something the public is broadly skeptical of and needs to be convinced about, such that it's only politically viable to bring those projects forward if you really have thought about these things and addressed the types of concerns that are gonna come up. So that's one place, but there are more grounded, tangible things we can do: the surveillance oversight ordinances I talked about, and in New York there's a task force for creating algorithmic oversight. Those types of programs are pretty valuable ways of not just enforcing power from the outside but actually getting some form of grounded, tangible voice into these decisions. Many cities are starting to explore these types of regulations, and that's really important. And more broadly, in cities generally, we're seeing more and more people expecting to have a voice in these decisions, so we shouldn't view this simply as democracy around smart cities or technology, but as democracy in cities generally: this is just one manifestation of municipal government, and it's not really that fundamentally different from many other things that the government does. But increasingly we are seeing people expecting to have these types of voice, and pushing back against things like being sucked into the allure of the 311 systems that would say: you have a voice because you get to tell us where there's a pothole, and we're gonna be your customer service agency and come fill your pothole pretty quickly.
So maybe, yeah, one more. Thanks for your presentation. About two years ago, a capstone class that I taught worked with the Kansas City, Missouri mayor's office to develop communication plans related to their smart city initiatives, and at the time, students conducted interviews and focus groups and open forums, and we heard citizens feeling their voices were not heard. Of course the city wants to do something like this research, right, but then they lack resources, and I think that's something you touched on earlier as well. So I wonder whether, while doing research for your book, you encountered any good examples of a city's collaboration with nonprofit entities or higher education entities in asking better questions, and perhaps in collecting and analyzing data systematically. You know, to go back, Boston is an example of a city that has thought about these questions a lot: about a year ago, they published a broader urban agenda where they asked something like 150 or 200 questions about cities, ranging from super large to super small. So that's one perspective on how to take a research- and experimentation-oriented approach. There are also a lot of places that have begun, in certain spaces, developing partnerships with academics and other institutes. In the Data Science for Social Good program that I was part of about five years ago, we worked with cities and governments across the world, providing the data science expertise that they might not otherwise have. But these funding and resource issues are very much at play in all of these conversations, and they highlight that in many cases the optimization we're trying to achieve with smart cities is driven by the austerity we've been facing for so many years: this idea that we need to be as efficient as possible because we're so cash-strapped, because cities don't have resources.
And yeah, so I think that's a really important issue. One example of how the resource limitations play out is a story I actually ended up cutting from the book, from Chicago, where they built predictive models to identify where kids would be getting lead poisoning. By pulling together various data sources, they were able to identify which homes were gonna have lead in the paint and which of those homes were also gonna have an infant who could get lead poisoning. They developed an algorithm, working with the Chicago Department of Public Health, and developed a really great way to start thinking about how to deploy these systems, but ultimately they're fundamentally limited by the fact that they just don't have enough resources. They don't have enough people to actually go out and inspect the homes, they don't have enough resources to then go fix the homes that are flagged, and they don't have the enforcement in place to hold landlords to account and force them to actually fix these issues. So fundamentally, they have this great tech solution, a great integration of technology to address a social problem, but even there you're limited by resource constraints and funding issues and legal barriers that don't put power into the right places. Thank you so much for such a clear and compelling and grounded and important talk. Thank you very much for having me. Thanks. Thank you.