Many European cities want to be smart cities, front runners in the use of big data and artificial intelligence. But a smart city is not necessarily a just city, a green city or a wise city. Smart city policies need to be guided by public values, and the corona crisis has given a boost to smart technologies, from teleworking to remote learning to e-commerce, so there are public values at stake.

This is my first example of how smart technologies can affect public values, a cute example from Colombia: small robots delivering food to corona patients who are in quarantine, so that contact between humans is minimised. Do we want this to be the future of delivery services? How would the proliferation of robots affect the distribution of public space, especially on pavements? Public space is a scarce good. Do delivery robots steal jobs from delivery workers? Are those delivery jobs decent jobs that should be retained? Or does the production and maintenance of robots create new and better jobs? These are questions that a government or a local council should ask itself before allowing and promoting delivery robots. Here we see a clash with public values like the quality of public space, and with social justice. The same questions can be asked about the use of robots as waiters in bars and restaurants. Serving robots have become popular during the corona crisis, but do we want them to stick around?

The next example is about FixMyStreet. That is an app which allows citizens to report broken street furniture, waste dumping or graffiti to their municipality. In the first map you see the number of reports in the Brussels neighbourhood of Molenbeek-Saint-Jean. In the second map you see the number of reports in another Brussels neighbourhood, Ixelles. There is a big difference between the two. That is probably because Molenbeek is a poor neighbourhood with large ethnic minorities, while Ixelles is a richer, trendy neighbourhood with many students and EU officials.
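The danger behind the two maps can be made concrete with a toy sketch of a fully report-driven maintenance budget. Everything here is invented for illustration: the report counts, the budget figure and the "actual need" values are hypothetical, not real Brussels data.

```python
# Toy sketch: a maintenance budget allocated purely in proportion to
# app reports. All numbers below are invented for illustration.

# Weekly reports filed via the app, per neighbourhood (hypothetical).
reports = {"Molenbeek-Saint-Jean": 40, "Ixelles": 360}

# Assume, hypothetically, that the real amount of litter and damage
# on the street is roughly the same in both neighbourhoods.
actual_need = {"Molenbeek-Saint-Jean": 100, "Ixelles": 100}

budget = 10_000  # euros per week, invented figure

total_reports = sum(reports.values())
allocation = {n: budget * r / total_reports for n, r in reports.items()}

for n in reports:
    print(f"{n}: {allocation[n]:.0f} EUR for {actual_need[n]} units of need")
# → Molenbeek-Saint-Jean gets 1000 EUR, Ixelles 9000 EUR,
#   even though the underlying need is identical in this sketch.
```

The allocation tracks who reports, not what needs cleaning, which is exactly how an "efficient" automated system can quietly disadvantage neighbourhoods that use the app less.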
So what would happen if decisions about the maintenance of public space were automated and based exclusively on the reports coming in via the app? Rich neighbourhoods would be better cleaned and maintained than poor neighbourhoods, since people in poor neighbourhoods have fewer digital skills on average. So the efficiency of smart city solutions may lead to discrimination and inequality.

Another example: facial recognition cameras. The London police are already using street cameras that are connected to live facial recognition software and to a database of digital photos. This system can help the police to identify and arrest wanted criminals, but in order to do that, the faces of all passers-by have to be captured by the cameras. So there is a clash of values here between security and privacy. Although, if you are black, you might not feel very secure: current facial recognition software has a discriminatory bias, and non-white people are much more likely to be flagged up in error.

The next example is about mobility as a service, which might be the future of transport. Mobility-as-a-service platforms offer travellers a personalised door-to-door journey using different modes of transport: trains, trams, buses, taxis, shared bikes, shared cars and so on. With a single app you can reserve, use and pay for all these modes of transport. In Helsinki the commercial app Whim has helped reduce the number of car rides, but such apps want to track your smartphone all the time, and they would also like access to your digital calendar so they know in advance where you want to go. There is a tension here between sustainability on the one hand and privacy on the other.

In some cities the houses of very elderly people who live alone are equipped with sensors that monitor their well-being. Care workers then do not have to check up on them so often, and they save time; that is the idea.
But in practice some elderly people deliberately leave the refrigerator door open for too long, just to receive a phone call from a care worker. Technological fixes in the care sector often overlook the fact that illness and mental decline are social phenomena. Very senior people want to communicate; they want company and personal advice. Your home automation might not be as efficient as engineers think it is. There is a clash between efficiency and the values of connectedness and human dignity.

In the English city of Bristol, 25% of the population has already been processed by an algorithm. The algorithm uses 35 sets of personal data to assign risk scores between 0 and 100 to all children and young people. These scores predict the risk that an individual becomes a victim of child abuse: the more your characteristics match those of previous victims, the higher your risk score. The Council of Bristol is considering using the system also for predicting criminal behaviour and drug abuse. That raises a lot of questions. Is trust in the government undermined if every contact you have with the government can lead to a higher risk score? Is it acceptable to give someone a risk score just because they fit a certain profile, instead of judging them individually? Do people have the right to know their risk score? There is a clash between efficiency on the one hand and good governance on the other.

So now I would like to ask: who would rather live in a stupid city than in a smart city? You can post your reply in the chat. Fortunately, a smart city does not have to be a dystopian city of mass surveillance, dehumanisation and inequality, but we have to realise that there are important values at stake. Local politicians have to discuss the values they want to see protected and promoted by technology, and discuss them with citizens and with the city's partners.
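As a technical aside, the profile-matching logic described for the Bristol system can be sketched in a few lines. The attribute names and past-case profiles below are invented placeholders, not the actual 35 data sets; the sketch only shows the general shape of similarity-based scoring and why it judges people by profile rather than individually.

```python
# Toy sketch of profile-matching risk scoring (all data invented).
# A person's score rises with how many attributes they share with
# profiles of previous cases — they are never judged individually.

past_case_profiles = [
    {"school_absences_high", "household_debt", "prior_contact_with_services"},
    {"school_absences_high", "frequent_address_changes"},
]

def risk_score(person_attrs: set) -> float:
    """Return a 0-100 score: average overlap with previous case profiles."""
    overlaps = [
        len(person_attrs & profile) / len(profile)
        for profile in past_case_profiles
    ]
    return 100 * sum(overlaps) / len(overlaps)

# This person matches 2/3 of the first profile and 1/2 of the second,
# so they receive a high score purely for fitting a pattern.
person = {"school_absences_high", "household_debt"}
print(round(risk_score(person), 1))  # → 58.3
```

Note that the score depends entirely on past cases: any systematic bias in which families were recorded as cases before is reproduced in every new score.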
New technology needs public debate and democratic control, especially since different values