All right. Good afternoon, everyone. Welcome to our symposium on the pros and cons of a data-driven world. This symposium is the first time that Studium Generale and the Tilburg Young Academy have collaborated on organizing an event. Studium Generale, as you probably know, organizes events around socially relevant topics: discussions on campus and in the city. The Tilburg Young Academy is a group of relatively young academics, assistant professors and associate professors, for some of you maybe your teachers, who try to make this university a nicer place. We do so, for instance, by organizing discussions on interesting, relevant topics like dataism: the pros and cons of a data-driven world. I've been asked to share one practical piece of information with you: if you're interested in receiving a certificate from Studium Generale, you can join five different events, write a short reflection on them, and then you will receive one. This might be interesting for students, but maybe also for full professors who would like to add a very nice line to their CVs. We'll work on this. So why are we having this event tonight? What's the rationale, the background behind it? One thing we've been noticing over the past 10 years or so is that some things seem to be changing within our society. When I switch on my TV, and I did this about a year ago after a tough day, which was also immediately the last time I did it, I'll explain why, I noticed that when I was moving from channel to channel, there seemed to be a lot of competition going on. Singing, for instance, was presented as a competition. I wasn't really interested in looking at singing as a competition, so I switched to another channel. There was a program on dancing. I thought: nice, a program on dancing. But again, it was presented in the form of a competition. Who is the best dancer? Who is the best singer?
Which is kind of weird, right? Because singing and dancing are not Olympic sports that you'd expect to be shown to you as competitions. Another example that's even more bizarre, I think, is playing with Lego blocks. This is now presented in the form of a competition on TV. But why is playing with Lego a competition? And finally, even baking cakes, and more specifically, baking cakes by kids, is now a competition. Kids bake cakes, they get a grade, there's a ranking, and the best kid, or the best cake, gets a prize. But since when did kids baking cakes become a competition? Universities, of course, also love competition and rankings. There are all those competitions between universities, international rankings of universities: which university is doing best, which program is doing best? Tilburg University very much likes those rankings. If you look at the web page, almost half of it is dedicated to rankings. For instance, we as Tilburg University are placed number 17 in the world in business administration. Congratulations to everybody in business administration. But why should universities be ranked? You could say it's part of this phenomenon of dataism: the idea that maybe everything can be reduced to so-called objective data. Singing can be captured in data. Dancing can be captured in data. University performance can be captured in data. Your posts on X or on Instagram, of course, are only successful if they get so many views or likes or shares. And we all have those smartwatches now, right? They track how many hours you slept. If you want to know how well you slept, you don't look in your mirror; you look at your watch, and your watch tells you how well you slept last night. And scientists, of course, have started collecting data on everything, such as the number of citations your articles get.
When you publish, your citations are counted; the example here, as you will immediately recognize, is of a very junior researcher. And even individual scientific articles get metrics: how often did other scientists look at your article, how often was it cited, how often was it shared on Twitter? Data, data, data. Very informative, of course, if you're into those kinds of things, but maybe also a little bit weird, right? All right, that's a bit of background to this program this afternoon. There seems to be a lot of data being collected and analyzed in society, as a sort of basis for all sorts of competitions. Is this a good thing or not? I'm very excited that we have an amazing lineup of speakers for you: three full professors, of whom two are vice-deans and one used to be a vice-dean. I think this is the most impressive lineup I have ever seen in my life at a scientific event. Is there a ranking? Afterwards, there will definitely be a ranking of the best presentation. So the program is as follows. We'll have three relatively short talks, one per keynote speaker. Then we have an incredibly short break of only a couple of minutes, after which we'll have a final panel discussion with a couple more people who are here: students, a program director, and our three keynote speakers. That's the program. Since we're already running late, I see, without further ado, let me introduce the very first keynote speaker of today, Professor Esther Keymolen. Esther is well known for her work on the philosophy and ethics of technology. She's a full professor at the Law School, and she was also the founding and very first president of the Young Academy. It's great to have you here, and we very much look forward to your talk.

Thank you, David, for the introduction, and thank you for having me. I feel very honored to be here. First, let me tell you what the focus of my talk will be.
First, I would like to take a bit of a philosophical perspective and ask ourselves the question: why are we actually so eager to collect and use data? What is so appealing about having all these data? I want to look into that first. And secondly, I want to switch towards the technologies that we often use. David already listed some of them: smartphones, smartwatches, all these technologies that we use to collect and often also analyze that data. Can I call you back, Ed? I'm in the moment here. I don't know how it is for you, but I've been back from my holidays for two weeks now, and it is very, very difficult to stay in the moment. We all try that, right? To be in the here and now, not to think about what you have to eat for supper or the problems that you have with your boyfriend or girlfriend. It's very difficult to stay focused, to stay in the here and now. Coaches and influencers make a living out of trying to help us stay in the moment. But I would like to put forward that we as human beings are not made to just live in the here and now. Depending on your take on human life, we are cursed or blessed by the fact that we know that there is a future ahead of us and that we don't know for certain what this future will look like. Because if you think about it, there's actually a lot of uncertainty about that. So we as human beings have to deal with that uncertainty. We know there is a past behind us and a future ahead of us. We are aware of this uncertainty in the future, and we like to pretend that we know what will happen, but actually we don't. To make it even worse, and that's the philosophical part of my identity, I always try to make people feel bad, we're also social beings. So it's not just that we have to deal with an uncertain future; we also have to deal with fickle others who oftentimes have a mind of their own.
And although we like to think that we know for certain what other people are thinking, we actually don't. But we are social beings; we still need to cooperate to live together. So there, too, there's quite some uncertainty that we have to deal with. If you think about all this uncertainty, you wouldn't be able to get out of bed if you didn't have any strategies to deal with it. A lot of the things that we do are actually strategies to deal with an uncertain life: meditating, agendas, regulation (from the Law School), architecture. The way that this building is organized shapes and steers our interaction. And collecting data. Collecting data, especially in our era of data-driven technologies, is also a strategy to get a grip on a complex life. By collecting data, we try, for instance through algorithms, to see patterns, to predict what will happen in the future. Or through your smartwatch, you try to get a grip on your health. So data is a strategy that we human beings use to deal with uncertainty, which is inherent to human life, to being human. Collecting data and turning it into meaningful information, I would say, is a key aspect of what it means to be human. And this, I believe, was beautifully presented by the German artist Raphaela Vogel, who had a temporary exhibition in our own Tilburg De Pont Museum recently. Unfortunately, you can't visit it anymore, but you can still find some of the pieces online. One of the installations in the De Pont Museum was called the Missed Education of Miss Vogel. In the Missed Education, Vogel painted animal skins with a collection of knowledge diagrams, and with these she wonders how to represent information objectively. Together, they form a kind of big mind map of the artist's knowledge and interests, ranging from Karl Marx to horse training and from jazz music to popular culture.
So you can walk around and between the painted animal skins, and the viewer learns, I think, important things about data and information. First of all: structure. In order for data to become meaningful, we have to structure it. Every painting, every animal skin, is structured in a different way. And I think this immediately brings us to a very important second aspect of data and how we use and transform data into information: choice. What we see on these animal skins is not a direct representation of the information she has. She makes decisions on how to present it to the audience. The choice of materials, animal skins in this case, is both enabling and limiting: there are things that you can do with these animal skins and things that you cannot do. So whenever we use data, we make choices. And these choices are always incomplete, in the sense that the artist also doesn't pretend to be complete. It's a missed education. This is not all her knowledge. She makes decisions on what to keep, what to include, what to exclude. She's also aware that she doesn't know everything. There are holes in our knowledge, and there are also holes in her installation. And you really get the feeling, walking along this line of animal skins, that it could actually go on and on until forever. So what I think Raphaela Vogel shows is this inherent need of human beings to try to get a grip on their lives by collecting, structuring and organizing data in a meaningful way. By doing that, you always make choices; you include certain aspects and sometimes exclude information. And as a result, it's always incomplete. So let's take these characteristics of using data to make sense of ourselves, others and the world around us, and put them in the context of our data-driven era of algorithms crunching data, smart devices and proactive services.
If we take these characteristics with us, what do we then see? Let's move from holes in knowledge to holes in the ground. What am I talking about here? This picture represents an app that citizens can download to indicate where the road surface is in need of repair. This could have been in Tilburg, but it was in the US. And it led to the unexpected and undesirable situation that in some parts of the city there was a maintenance backlog. The reason was that in certain parts of the city there were fewer smartphone users and therefore fewer reports. So notwithstanding all the good intentions of collecting data on the state of the road to offer a proactive service, this data-driven intervention actually entrenched social inequalities that are often already all too prevalent in our society anyway. This is often referred to as representation bias, which arises when parts of the input space of the data are under- or over-represented. Another example: in 2020 in the UK, we are talking about corona time, there was a problem with the A-levels. In the UK, A-levels are the exams that you usually take when you're around 18; they're the last exams before you go to university, and their outcome greatly influences which university a student can attend. Universities base their offers on the grades students get at their A-levels, and oftentimes you need a certain grade to get into your preferred university or program. With COVID-19 there was no possibility of holding these exams, so they thought: okay, we'll do it with data. They used an algorithm to predict what the final grade would have been, and they predominantly used two pieces of data: the ranking of the students within a school and the school's historical performance.
The goal of the algorithm, a not very sophisticated statistical model, was that the overall results should be more or less the same as the previous year's, to make sure there was nothing strange about the outcome. But afterwards, data suggested that fee-paying private schools, independent schools in the UK, disproportionately benefited from this algorithm. These schools saw their top A-level grades increase by 4.7%, while comprehensive schools, so publicly funded schools, saw an increase of less than half of that. Why is that? Because the algorithm placed so much weight on the school's historical performance, it caused problems for high-performing students in under-performing schools, oftentimes public schools, where the individual's work would be lost in the statistics, while average students at better schools, with a better historical track record, seem to have been treated with more leniency. So the idea that we could capture everything in data turned out, once again, to lead to discrimination. Trying to summarize what I want to put forward for the panel discussion: we really have to acknowledge that trying to capture the world in data answers to this deeply felt need of human beings to deal with uncertainty. If you have to categorize me: I'm not against data-driven applications or services, because they do give us a lot of knowledge and insight into the future, and we as reflexive human beings need help, need instruments, to deal with this uncertainty. But the risk is that we blindly trust data. And this is something that we, I think also within our university and as academics, really have to be careful about, especially in the case of technologies. Everything is hidden behind a beautiful interface: sleek designs, easy to use. We don't really see or understand what's happening behind it.
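A minimal sketch can show how anchoring grades to school history produces this effect (the numbers and the function are invented for illustration; this is not the real Ofqual model): if each school's predicted grades must reproduce its historical grade mix, a top student at a historically weak school is capped by that history.

```python
# Hypothetical historical grade distributions: fraction of students per grade.
history = {
    "private school": {"A": 0.50, "B": 0.30, "C": 0.20},
    "public school":  {"A": 0.05, "B": 0.35, "C": 0.60},
}

def predicted_grades(school, n_students):
    """Hand the school's historical grade mix to this year's cohort, with the
    best grades going to the highest-ranked students (index 0 = rank 1)."""
    grades = []
    for grade in ("A", "B", "C"):
        quota = round(history[school][grade] * n_students)
        grades.extend([grade] * quota)
    return grades[:n_students]

# The student ranked 2nd out of 20 gets very different predictions per school:
for school in history:
    print(school, "-> rank-2 student predicted:", predicted_grades(school, 20)[1])
```

An equally excellent rank-2 student is predicted an A at the school with a strong track record, but only a B at the school with a weak one: the individual's work is lost in the school's statistics.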
And this is something that I'm worried about: that we too blindly trust technology, or data-driven technologies. Think back to what Raphaela Vogel showed in her installation. There are always choices being made. Who is making the choices? What is the reasoning behind these choices? It's always incomplete. What has been left out? What are we not seeing? How has the data been interpreted? We really have to be very, very critical when we're using data-driven technologies. Then, I do think, they can have a beneficial and helpful function in our current lives. That was the end of my presentation.

Thank you very much, Esther. Very inspiring. Many questions already for the panel discussion after the break. Our second keynote speaker is Professor Reyer Gerlagh. He does his research in TiSEM, the School for Economics and Management, where he is a full professor in the economics department, well known for his research on climate change, among other things, technology, and empirical economics. It's great to have you here. We very much look forward to your insights.

I'm sitting on the A12, that's what I'm most famous for by now. I didn't want to say it. Okay. My presentation is complementary to Esther's. I'll talk about dataism in research, with a focus on economics. And I can't help it: I'm a researcher, I'm also a lecturer, and I just love to tell people things that I hope you don't know. So afterwards I hope that, apart from dataism, you also have a better understanding of many more things. I myself love data; I'm quite quantitative in nature. Jan Tinbergen, one famous Dutch economist, a winner of the Nobel Prize, said: to measure is to know, meten is weten. So what does it mean, to know? To know is to be able to predict what's going to happen in the future based on the data that you've got. So if you want higher incomes in the Netherlands, and many people, policymakers at least, want that, what should we do?
We should improve schooling in the Netherlands; you can see that from data. If you want to clean up our world, what should we do? Does it help to have more environmental regulation in the Netherlands? Well, that's a difficult question. You need lots of data to figure that one out. So we economists have always looked at how data can help us answer really important questions in research. And the point that I'd like to show you is how we got here. Most of you are younger than me. When I started as a researcher, we typically worked with 100 data points. Now we work with hundreds of millions, or billions. So what happened? You all have mobiles now, but how did things get to where they are? Typically, when I started, before the nineties, we had some set of data, and we wanted to know whether something was related to something else. So we set up a model; you don't need to worry too much about the details. The point is that we typically had 100 observations. Think of an important question: as economists, we wanted to know why rich countries always stay rich and poor countries always stay poor. We economists want to understand what's going on. Is there something colonial, you know, that we exploited poor countries and they are poor forever? Or is there something else? These kinds of data could help us answer those kinds of questions. So we built a nice model, and here you see the type of data we had at the time: 100 observations. Using OLS we could see things like: actually, it's not that rich countries always stay rich; it's that rich countries typically invest more in education, invest more in capital, et cetera, and that's why they are rich. And we could use this kind of simple data analysis for that.
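A cross-country regression of the kind described, roughly 100 observations with income regressed on schooling and investment, can be sketched in a few lines (the data here are simulated and the variable names illustrative, not from any real dataset):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100                                  # ~100 countries, as in early cross-section work

schooling = rng.normal(8, 2, n)          # average years of schooling
investment = rng.normal(20, 5, n)        # investment share of GDP
# Simulated "true" relationship plus noise
log_income = 6.0 + 0.10 * schooling + 0.03 * investment + rng.normal(0, 0.3, n)

# OLS: regress log income on a constant, schooling, and investment
X = np.column_stack([np.ones(n), schooling, investment])
beta, *_ = np.linalg.lstsq(X, log_income, rcond=None)
print("intercept, schooling, investment coefficients:", beta.round(3))
```

With 100 observations the OLS coefficients recover the simulated relationship reasonably well, which is exactly what made this simple cross-section analysis attractive; the causality worry comes next.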
There's a problem with that for us: causality. Does data tell you the truth? Well, the data may, but the statistics may not. We want to establish that X causes Y. But it could be that there's some other thing, Z, that is actually causing Y, and Z is also causing X. Then you observe that X and Y are correlated, but actually they may have nothing to do with each other; it may all be down to Z. So we were worried about these kinds of things. Then, 10 years later, we started to have more data. We started to collect data systematically, and then for each country we would have, say, 20 years of observations rather than just one point. And we could see: look, if schooling and income always go up together or down together, that is actually a much better sign that the two are related than if they're just correlated across countries. So this was a huge improvement, enabled by a big increase in data. The increase in data allowed us to do better causality tests. This is a typical example. And now comes the first potentially problematic issue: at this point in time, you couldn't publish anything empirically anymore unless you had this type of data available. This was the first version of dataism. Because if you did not have this panel data, so data with many firms or countries over many years, they would say: we're not sure about causality, mister or missus. So then economists came up with a nice idea called instrumental variables; instruments are really a big thing in economics. The idea is: if we have some Z that we know only causes X and does not cause Y directly, and we know that X may cause Y, and we observe that Z is actually correlated with Y, then the effect must have gone through X. The example here, yes, that's the paper that I had.
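The instrumental-variable logic just described can be demonstrated on simulated data (a sketch with made-up coefficients, not from the paper on the slide): a confounder U biases naive OLS, while the instrument Z, which affects Y only through X, recovers the true causal effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

Z = rng.normal(size=n)                        # instrument: affects X only
U = rng.normal(size=n)                        # unobserved confounder: affects X and Y
X = 1.0 * Z + 1.0 * U + rng.normal(size=n)
Y = 2.0 * X + 3.0 * U + rng.normal(size=n)    # true causal effect of X on Y is 2.0

# Naive OLS slope of Y on X is biased upward by the confounder U
ols = np.cov(X, Y)[0, 1] / np.var(X)

# IV (Wald) estimator: cov(Z, Y) / cov(Z, X) isolates the effect running through X
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]

print(f"naive OLS: {ols:.2f}   IV: {iv:.2f}   truth: 2.00")
```

The OLS slope lands near 3, because X and Y share the confounder U, while the IV estimate lands near the true 2: since Z only moves Y through X, its correlation with Y identifies the causal channel.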
If you have corruption, you have reason to argue that corruption actually translates into weak environmental policy. You have no reason to argue that corruption directly causes more polluting industry. But you do see that more corrupt countries tend to have more polluting industries. Then you may argue that it is the lack of environmental policy that actually causes the increase in polluting industry. That led us to dataism 2.0: at some point in time, you almost can't publish any empirical analysis anymore if you don't have this beautiful instrument structure. Now we move on another 10 years, and it really gets out of hand. Now we don't work with 1,000 data points or so; we have very large data sets, because we have thousands of households. I'm working now with electricity prices for every hour, for every region in Europe, for many years; and within every hour there are bids by many firms at many prices. So we're talking about billions of data points. Now, if you want to publish empirical research in economics nowadays, you almost have to show that you have some very big data set and some very smart process to get the information out of the data. If you don't, the referees will say: well, we're not sure whether this really adds value. Is this a problem or not? The empirical observation that I see is that handling very big data has become a token of the quality of the study. Economists know about signaling: I signal that I'm a good researcher by doing some heavy data stuff. And the advantage is that, if smart researchers actually are the ones who can work with big data, then the signaling works quite well. Then we have a nice signaling system that says: well, these guys have a better understanding of the world.
They are allowed to publish, and the others, well, they don't understand so much, they are not able to do these nice tricks with big data, so let's not allow them to publish. Whether this works or not is important, because there are so many really difficult questions that we have to answer. Does smoking cause cancer? Does eating too much cause diabetes? These are questions that we are now addressing with very big data. There are all kinds of alternative explanations, maybe reverse causality, et cetera, that you have to rule out. In the end, I myself, you know, just personal prejudice: I'm good with data, so I tend to think this is in a certain way okay, because it favors my type. Let's be honest about that. But if there is not really a correlation between being able to handle big data and understanding the deep, fundamental structure of problems, then we have a problem in research, given how this type of dataism is entering our empirical work. I think I was right on time. Thank you very much.

Thank you very much. Very interesting, and we are nicely on schedule. That's great. So I can introduce our third keynote speaker to you: Professor Philip Joos from the accounting department of this university, also a former vice dean of education. He will partly talk about dataism in the context of education as well. Please, the floor is yours.

Okay. Thank you very much. Thank you for inviting me. The problem with being the third speaker is that many points I wanted to make were already made by the first two speakers, but I'll say something anyway. You introduced me also as a former vice dean of education, so I'll say something about education as well, but I know we will have interesting panel questions about this, so I'll keep most of that discussion for the panel.
So I'll focus most of my talk on part one, which is about digital innovation in our society, the dangers and the prospects. I'll focus especially on innovation and give you a better view of what's going on there. But I'll start with some history; I like reading history books and so on. Maybe some of you know who this person is: Alan Turing. Yes, I see some people nodding. Did you by any chance see the movie The Imitation Game? I think it's a very good movie; that's my personal opinion. If you've seen the movie, you know that this person was quite instrumental in decoding the Enigma machine, the machine the Germans used to encode all the instructions they gave, overseas, over land, and so on, to position their troops and that kind of thing. He was able to figure out how to decode the Enigma machine, and by doing so he basically invented a tool that later on became the computer: digitizing everything. And in 1950 he wrote a very controversial paper, and this is what the imitation game refers to; the movie title is actually what the paper is about. He was a visionary in that sense. It's what we now call the Turing test: when can we really talk about a machine that has intelligence, artificial intelligence? He was the first one talking about artificial intelligence, in 1950. Now we are more than 70 years later and everybody is talking about ChatGPT and so on, but he already wrote about all these things, and many people thought he was completely mad. He even devised a test to figure out whether a machine is intelligent or not. You put a person, the evaluator, in a room; you have two other rooms, one with a human being and one with a machine. You ask a question in writing and you get an answer back, and the answer would reveal to the evaluator whether
the answer is coming from a machine or from a human being. He said: whenever the evaluator is clueless, you are dealing with an intelligent machine, and this is what we call artificial intelligence. He talked about all the types of questions you could ask and so on, but he was definitely visionary about what we are currently experiencing today. So talking about artificial intelligence: it's not a new thing, that's my message here. Okay, many things have been said already, so I'll quickly go through this. The fact that our society is changing rapidly with digital innovation is pretty obvious. Knowledge is freely available everywhere. You say something, and people immediately Google it to see whether you're right or wrong. So given that knowledge is freely available, the primary task of universities is not only to transfer knowledge but also to transfer a skill set. You need skills like creativity and critical thinking; that is actually quite important in a world where knowledge is almost freely available. But there is also, and that's sometimes the scary part, and I'll show you why it's scary, especially for me. I'll use the arrows. Some people at Oxford published a paper almost 10 years ago, The Future of Employment. It's a little bit outdated, but what you see for the top professions is the probability that these jobs will be lost within 10 years. Now we can actually evaluate this, since it's almost 10 years since it was published. Accountant is right up there. I'm an accounting professor; I sometimes deal with real accountants, I'm an academic myself. And accountants are still around; all of these professions are still around. They are even expanding, which is amazing to see, and they have found new ways of making themselves useful.
But the job itself is changing very much. The job they had 10 years ago is actually very different from the job they have right now, and that's what I will illustrate with some examples later on. You can say this for some other of these categories as well: the accountant's job is still around, but it's different from 10 years ago. Telemarketers, maybe those really are disappearing. So universities definitely need to rethink education in terms of preparing someone for a job, because the jobs are changing anyway; you need to transfer a skill set and those types of things. For students, digitalization is definitely affecting their blended lives. It's pretty obvious: when I look at my own kids, my daughter will sometimes say, okay, this is for my BeReal, and suddenly I need to be out of her sight, or whatever, because that would spoil the whole BeReal. And this is a way of life: you have a digital life and you have a real life. Esther was already referring to this too: a very important thing is the demand for freedom and choice, and apparently the digital world offers much more freedom and choice. Apparently. You see this in many fields: healthcare, mobility, consumption, trade, even warfare. We are confronted with this terrible war in Ukraine, and sometimes I have the impression it's a computer game. They shoot rockets, it's all digital, they have satellite images. It's very dehumanized, although the effects hit real humans. And financial decision making: a lot of my research involves financial decision making, and there you see that digitalization has a huge impact on the professions, but also on the way we make financial decisions. Think alone of your own experience: how you interact with the bank, how you pay, but also how you make investment decisions yourself.
It's very different from 10, 15 years ago, when you had personal advisors and those types of things. Now, given all these changes, the question is: how do organizations deal with them? And let's start here, at the university. How is a university dealing with all these changes in society due to digitalization? One very interesting economist, from the same time as Alan Turing, though he lived a little bit longer, is Joseph Schumpeter, who basically introduced this notion that organizations that do not have the capability of ambidexterity, using your left and your right hand, will disappear. The challenge is how you manage the left hand and the right hand. The left hand is that you always need to explore new possibilities. That's one way of innovating: you experiment, you try to discover new things, and you put some of the resources, some of the money that you have, into these experiments. Many of these experiments fail, but okay, that's one side. Now, we are a university with more than 20,000 students. You cannot constantly experiment with 20,000 students. So you need a very smooth process; you need a very good learning management system, Canvas; and you need improvements, innovations that make things even more efficient. You have new technologies, better cameras and so on, to make online learning more accessible for everybody. This is exactly the other type of innovation. And the question is how organizations make that choice. I was very privileged to be chosen by one of the former rectors to be part of a program that was called DEEP, the Digital Education Enhancement Program. I had a task force, and we had to think about how to organize educational innovation at our university, given the new profile that we had.
And there I was immediately confronted with all these problems that Joseph Schumpeter was describing. Because the first thing I noticed was: yeah, but you are asking for a lot of budget for experiments. What's the result of an experiment? I said, well, if you don't do it, you won't notice it the first year. But at some point, you will be outdated. Then Corona happened. Suddenly, the university thought, wow, we need to invest in digital equipment, digital things, without any experiments. And this is why many experiences that you may have had were not optimal: because we hadn't actually invested a lot in exploring these possibilities before Corona. But I want to use another example. I'm an accounting professor. So I'm not only watching interesting movies like The Imitation Game; I read financial reports of companies. This is really poetry. Okay. So believe me, I mean, once you actually get into it, it's fantastic. Each one is telling a story about a company: what they want to do, how they want to change the world. And BMW is something that caught my attention. Now, last week, the Neue Klasse was introduced and so on, in the news and this and that. But that was already in their financial statements two years ago. So I already knew this. But yeah, this is why financial statements are quite useful. And what you currently see, this is from March of this year, is that on their front page they kind of try to say that the future of BMW, an old company, depends not only on how they see mobility, and of course electrification, that's a new type of mechanics that they use, but digitalization is crucial, and sustainability is part of their strategy as well. So then you say, ah, wow, they are showing nice pictures and so on about digitalization, and they have nice features that they will introduce.
The new car will have avatars, so you approach the car and suddenly you see an avatar. That's what they will introduce, and apparently that gives you a special experience. But then you basically look at what they see as the key performance indicators. Like, what do they strive for and where are they investing their money? And of course, one of the performance indicators is: wow, we need to be profitable. So profits are on top. And then they use data. Again, it's data. It's really what we've seen with Esther's last slide. The key performance indicators really go from a very complex production company to the essence of what they actually see as the key data with which they steer the company in a certain future direction. And they say it's all about electric cars and reduction of CO2 emissions, which is fine. Okay, they even put some targets there. And then the question is: how do you actually measure it? Now, Reijer knows all about this. Many of the emissions are not part of the production but part of the use of the car afterwards. It's also part of the supply chain. If you have batteries being produced in China, where they use a lot of coal, I mean, that's not very good. How do you actually measure this? And this is the big measurement issue that eventually needs to boil down to whether they reach these targets, yes or no. And that's the huge challenge of getting data. So you need to have an enormous amount of data to actually do this. And then you need to have accountants. I'm back to the promotion talk for accounting, I guess. And then you need to have people verifying whether these data are indeed correct, because otherwise you are cheating. It's greenwashing. It's fictitious data that shows that you reached the targets. So it's a huge challenge. Yeah. Now, I showed this car, and what you see here, maybe some of you have seen this. If you go to San Francisco, I was there last summer, like a few months ago.
You may actually see this thing there: the Waymo taxi. Anyone? Can you raise your hands? Yeah. Okay. So you've seen one. So I was spending some time in Silicon Valley last summer, and I didn't even realize, but I went for a run in the morning, and suddenly I saw these fully automated cars, without a driver, driving around me. I was apparently in the test area of Stanford University. So it was full of them, and I was the guinea pig, I guess. Do they see me or not? But there was a very interesting story about this. It's very data-driven, okay? Because you need lots of sensors to do this. This is the future: fully automated cars without a driver. But there was an accident. A fire truck hit one of these taxis with a passenger inside, without a driver, of course. And apparently the car was not programmed to understand the noise made by the fire truck. So the fire truck basically hit this car because it didn't stop. It simply continued right through the crossing, and the person was severely injured, and so on and so on. So again, a little programming error. So, I mean, this requires a lot of data. And hopefully there are some improvements, but there are many challenges. I'll summarize these challenges. What I'm actually getting at: I didn't show a taxi that was run by BMW. I showed a taxi that was run by Waymo, which is a company owned by Alphabet, Google's parent company. And if you look at the largest companies in the world, these are the 10 largest companies as of yesterday, in terms of market cap, the stock price times the number of shares, traded somewhere in the world: Apple, Microsoft, Alphabet, Amazon, Nvidia, a huge success last year, Tesla to a certain extent, and then Meta Platforms. It's all tech companies. It's all platform-type companies. They seem to have the biggest future expectations of value that they can create. And then you see Berkshire Hathaway, okay, well, that's an investment company. And then a pharmaceutical company.
And, well, the Saudis, they have lots of money anyway, so that's an oil company. But that's very different from, let's say, 40 years ago, because then it would be steel companies and other types of companies. So tech companies are massive, are huge, and this is why Google is actually way further ahead than BMW. So, going back to Schumpeter: is BMW safe for the next 10 years? I don't think so. Okay, so maybe they are in their Kodak moment. Kodak also believed that the film roll would stay forever. And, yeah, in five years they were bankrupt because of digital cameras. So, now, is there a problem? And this is a little bit for discussion here. Given this ambidexterity, and these companies changing, and the digital expectations of customers and these types of things, it could lead to huge reductions in employees, because you don't need all these engineers at BMW anymore. Most of the cars are basically sensors and data-driven, so you need programmers, not these mechanical engineers anymore. And that's a little bit the problem that the German car manufacturers, or the traditional car manufacturers, actually have. They have many people employed who are not very useful for these new, fully connected cars. Now, Erik Brynjolfsson is a Stanford University economist. He's saying: look, it's not a gloomy picture. When the Internet appeared in the 90s, it's not that everybody got unemployed. So what will change is what I said before: you will actually have a different type of job. So you will need to use technology differently. And he gives this very interesting example of a radiologist. You go to the hospital, to the radiologist, and basically they discovered that computers and artificial intelligence are actually way better at screening everything that needs to be screened, at analyzing all these images, than a human being.
So that part of the job, of the 36 tasks that a radiologist does, should be left to artificial intelligence, and then the person should specialize in something else, which is sedating people, consulting patients, and these types of things. So the jobs will be different. Now, going back to my previous slides, what worries me a lot is actually the power that these huge companies have, like the Googles, the Facebooks, and so on. And I'm not against capital markets, by the way. So definitely not, but, I mean, there are some things. First of all, inflation: if they have market power, you know that they can set prices. Legal protection of consumers may be an issue, and that's the battle that we currently see. Product safety: they try to get you addicted to their products. BeReal is quite addictive, TikTok and so on. So before you know it, you spend four hours per day on social media, wasting your time to a large extent, but that's by design. And what's even worse is that they try to kill innovation if they don't come up with the innovation themselves. So the killer acquisitions: taking over a company and basically killing it. Bundling products: you can only use the product if it's part of a bundle. Patent battles: they have a great legal team, so you can never actually win from Google. So that's the... Now, it was actually in the Financial Times a few days ago. We'll see. Next week, the big Google process will start. The Federal Trade Commission will actually try to stop the market power of Google, Microsoft and Amazon. So that's interesting to see, and we've seen this before. More than 100 years ago, we had big oil running everything. Now we talk about Jeff Bezos and Elon Musk and so on, but then the biggest person was Rockefeller. And Rockefeller was completely dominating the world oil business.
And then there was also a trial, in 1906 or 1908, and then they broke up the company, and suddenly you saw lots of innovation going on in the industry. So definitely this is a call for action, legal action, so maybe I should look at Esther, actually, for the legal part. What I also see with this whole digital thing that I showed is increased inequality. Increased inequality because the jobs are changing, and education becomes much more important. And it's a little bit the story of the potholes that you had: if you have a smartphone, you can actually act. If you don't have the education, if you don't have a good education system with social mobility, then it will be terrible. And this was the shock I had. I lived for 10 years in the United States, and I'm really kind of shocked whenever I look at that country, at how the social inequality will become even bigger because of this whole digital innovation and the type of jobs that will be produced, because you need to be educated in the skills needed to develop these digital tools and more. And then the other frightening thing sometimes is the dehumanizing of these transactions. So you get advice from people, but you have biases. You gave excellent examples. Cybercrime is one of the biggest investments these days, because once you depend on digital technology, cybercrime is there. With connected cars, the biggest fear is that someone in Russia takes over, and you're no longer driving your car; it drives you to another destination. And then interactions. I also fear, especially for the younger generation who grew up in this whole digital environment, that communication skills are an issue. And we've seen this at the university: the effect of COVID, where people stayed home, where they were fully connected digitally, had a huge impact on the generation.
On the young generation. I have kids, like 25 and 22, and they are also definitely affected by the disconnection, in their skills, their communication skills, building trust, and so on. So they behave differently. So I want to, basically, I had some slides here, but I'll keep them for the discussion, okay? Because my time is up. Okay. Thank you very much. A lot of food for thought once again. Three very complementary presentations, I think. That's a great basis for discussion. The final part of our program today will be a panel discussion. Okay, welcome back everybody, or welcome still here. My name is Inge van der Ven. I'm also a member of the Tilburg Young Academy and an assistant professor at the Department of Culture Studies. And I'm going to lead this panel discussion with some already familiar faces by now. But you also see, to my left side here, three new faces. And I wanted to ask you, maybe, if you could briefly introduce yourself, starting with Judith to my left. Thank you. Yeah, my name is Judith Kuniker. I'm also part of the accounting department, like Philip, and I'm also the academic director of that program, the Master of Accountancy. So I guess the promotion talk will continue a little bit. This is all one big scheme. Member of the Tilburg Young Academy. Absolutely, yes. Don't forget. Thank you, Judith. I'm Elaine. Did you use the microphone? Oh, my name is Elaine. I am doing a board year this year at the student party Front. So I'm here as a representative of one of the student parties in the university council. And you know everything about facilities? Oh, yeah. My function this year is general director of facilities. So I'm concerned with all the facilities that the university has to offer. So you can think of the library, sports facilities, food, catering, basically everything on campus, but also digital things. Thank you. Well, my name is Jeroen. I am the vice chairman and international officer of student party SAM.
We're also a party in the university council. So we're here to represent the voice of the students and everyone involved. Thank you so much. We're very happy to have you here. We also need the student representatives for what is about to come. So we prepared about four statements for the panelists. And we're going to dive right into it, because we don't have a lot of time. But later on, we really would like to hear your input as well. So let's just go with the first one: strongly relying on quantitative data makes our society a better place. Well, maybe we can already guess a little bit what you think about this, but maybe we start with Reijer for this one. Well, it's fairly simple. For a better place, you need better people. And to make good decisions, you need knowledge. So you need data. But with more data, you can also have a very bad place, if you have a bad dictator. So it's an interaction between having good data and having a good person who can use data, a good club of people who can use data. So, well, I'm sorry, I'm a researcher. It's the wrong statement. Thank you for this very nuanced answer. It's a good opening statement on your behalf. Adding to that, "strongly" is the word that I have a bit of a problem with. So what I also tried to say during my talk was that yes, data-driven technologies can definitely help us to make the world a better place. But "strongly", to me, sounds a bit... like we're not really critical about it, that we're just blindly relying on what this data is telling us. And that is something I think we should be very careful about. Thank you. I really like this distinction that you made before, between trust or faith and blind trust or blind faith. I think that's very helpful here. Thank you. Okay, so maybe you heard about what happened in December 2019, when Ursula von der Leyen gave her maiden speech, and it was on the Green Deal, the European Green Deal.
And that's actually meant to make a better Europe, a better society, environmentally friendly, and so on. The key aspect here to get to the Green Deal is data. So what happened: the European Commission a few weeks ago approved a whole new set of rules to gather data from 50,000 larger organizations, publicly listed firms but also non-listed firms, to measure the whole environmental and social impact of these organizations, based on a set of, I don't know, 500 pages of rules: how to measure it, what it does, and how to make it comparable. And that's what Reijer actually said. It's key to actually measure the progress, but it's also key to have verifiable data, because otherwise you have the risk of greenwashing. And what we're currently seeing is that there is a huge extent of unverifiable data, because this whole set of rules is not applicable yet; it starts next year. And everybody's claiming certain things: yes, we are green, BMW is saying, yeah, we are doing great, but it's actually not true. So I strongly believe that we need to have verifiable data, 1,100 data points by the way, that's what the European Commission is asking companies to report on. So yeah, I believe in strongly relying on it. What you already said is that you don't need to trust it blindly. It can put you in a position where you can learn way more, you can compare way more. So yeah, there are more opportunities with data, but it kind of depends on how much data, I mean, I guess more is better. But also how you use it, I think that's way more important. So yeah, it's the people and the data; it's like a dynamic between them. So it's not necessarily the data, but how you use it, that makes a society a better place. Yeah, it depends on how you use it. Okay, but I'm hearing a lot of examples where the data is either incomplete or wrongly interpreted, or someone does something wrong with it.
But maybe one of you also has examples of where it's intrinsically not a really good idea to collect a lot of data. So are there downsides to data more essentially, if you know what I mean? Esther was driving at this a little bit before, with this issue of control and wanting to know everything, and maybe there are limits to that, I don't know. The big companies collecting all this data about us, that is intrinsically bad, I believe, so yes. But those are the wrong actors then? Yes, but that's the point, yes. So they collect data because they simply want to make more profits. And then the act of collecting data is in itself neutral? Well, no, because once you've got the data, you're vulnerable. So it's intrinsically bad that Google, Amazon, Facebook, they all collect all the data about all of us. Intrinsically bad; we should forbid it. Okay, very strong. I'm going to go over to the next one and then start at this side, because if we all answer that question, I'm afraid we'll run out of time. And now, oh, now it's doing the same thing for me. I have to be patient. No, not that one. Let's skip that. Oh, jeez. Okay, so we're going to give it a little bit of a different direction, because I feel that this particular topic hasn't been addressed as of yet. And because there are lots of students here, I really want to know what you think about this as well, because it's definitely an aspect of data collection that we should be talking about. So the statement is: quantitative student evaluations of teaching are a good thing. Very simply put. And I'm going to start with Judith for this one. Yeah, because I'm reading a lot of those as the director of the program. But I think before you can answer the statement, you actually have to ask yourself another question. And the question is, and bear with me for a second: what puts a student in the position to assess how good a teacher is?
For example, if you think our education is like a service and you are our customers, maybe that would be the case. But we are not offering services. In the best case, how we would define our students is: you are our proud product. I think that's the best description, that we help you to develop your skills. The customer is the labor market and society. And they can say whether we did a good job, because you eventually fill the positions, you are employed, you help companies to grow and to make at least a little bit of profit, that's fine. And then a second version, or another thought, is: our teachers go through a lot of education and training; we acquire qualifications to give education, to provide sessions. Now, if students had gone through the same, I would say you're more than welcome to assess, even on a quantitative scale, how good we are. Having said this, I still want to add that this doesn't mean we don't value the input. When I'm looking at these evaluations, yes, it's the quantitative measures that catch my attention, but what I immediately do is go and read the comments. And this goes back to: it's not only quantitative. We need to have the explanation. It's a starting point for us to investigate and then ask our sounding boards in the education committee: is it exactly this that was bugging you during the course? And I can then pick up on this and make it better for the next cohort of students. But purely relying on this quantitative measure, to be honest, is laziness. We send out 20,000 evaluations. It's easy to get back numbers, but it doesn't tell you everything. So it's not everything, and we should always accompany it with narratives, I would say. As a student, we took the liberty to collect our own data to answer this question recently. We let the students answer what they thought of these kinds of student evaluations. First, I wanted to add that purely quantitative evaluations, we don't think, are a very good thing.
I don't think you can really get a lot from it, because just based on numbers and based on grades, you don't know what you have to improve and what caused the grade. So you don't know where the weaknesses and the strengths are. So I always think you need that further explanation, what you said, of how you grade someone and how you evaluate someone, because otherwise you don't really learn anything from it. Then, on the laziness of the students: in general, students don't really enjoy filling out all those evaluations, but that comes more from a place of not benefiting from it, because they've already finished the course, and they doubt a little bit what kind of effect it actually has, because you can't really see for yourself whether it's better next year, unless you fail the class and you're taking it again. But yeah, you don't necessarily always see a big difference. And when they fill out these kinds of evaluations, usually just the bad things come out and not the good things, because otherwise students don't feel like they have to fill it out, because they're like, okay, everything's fine, so how would they benefit from my evaluation? And that's basically the input we gathered from some students at the university. Thank you, Elaine. I think we can all learn a lot from this perspective. May I add something? Of course. So I also teach at JADS, in the bachelor's data science program, and there I think they have an interesting practice when it comes to quantitative evaluation. They also have quantitative evaluations, and also an open space where you can, as a student, put in comments. And they ask the course coordinators to reflect on the evaluations, and then they also put that on Canvas for the next year. So that's interesting. So you get more of a conversation, of course not with the students who filled out the student evaluation, but over generations you try to improve the education.
And I think that's a nice way of making it more contextualized, and of showing that it makes a difference. So I like that. Does it have to do with accountability as well? Being held accountable for what you did last year, so you can show students: look, I already improved, I'm getting better. Also, sometimes a course coordinator, a lecturer, said: yeah, I hear what you're saying, but you think it's not important that I teach you this now, and I know it is important, so I'm going to keep teaching it even if you don't like it. See, we know better. Yeah. And I think that's also good. Okay, that's good. Another point that we might want to work in there somewhere, I think David touched upon it at the beginning, is that some of these data collections are also a little bit biased. So there are instances in which people get confronted, especially in the qualitative part, with racist comments or sexist comments. And I'm sure none of you have ever done this. Or just not very nice comments. And I think we can also bring this emotional level into the discussion, because we are also humans. Yeah, so I think they could be extremely useful, but they currently are not. The lecturers are complaining on a large scale, and we know, we have data, that there is indeed bias against women, not only racist bias, also gender bias. We also know that if you hand out chocolate, you can increase your overall evaluation score, that kind of stuff. Remember that. So there are all kinds of weird effects, it appears. I think we should pay more attention to this as a university. I think we should improve this. That's actually what I wanted to say about the bias in evaluation. If you only look at the quantitative measures, and again, it goes back to what you say, we should get better, we should be less lazy. And actually, for example, take our colleagues with us and say: look at my session and give me open feedback about how I was doing, together with input from the students.
Again, I didn't want to say that this input is not worth getting, but it needs the context, as you said, and that costs more time, again. And I think this is hopefully where we are also going a little bit. And I can assure you that we do something with the feedback. I understand that this sometimes feels, yeah, sure, of course. No, we do. Maybe I can say something about what we do with the feedback. So for quite a number of years, I've been one of the members of the promotion and tenure committee, which promotes and tenures faculty members at our school. And of course, we look at research, we look at citizenship and, of course, education. It's the cash cow of the university: 80% of our budget actually comes from education, so it's quite important. And of course, we do look at time trends over five years, at what a person has done. And an easy way to do this is to have the summary measures. The numbers. So quantitative information is definitely helpful for us. But then we look at all the evaluations and comments, so we find out more. But that's not enough, because we know they are happy sheets. They are moments in time. We really dislike the very low response rates, which are often a real problem. And then you run into the problem of getting extremes, which immediately pull the score down. If you have a few ones out of five, then it's not very good. But I want to stress here that these are indications. So what we try to do is get feedback from colleagues, as Judith actually said. So they need to get the certificate, the teaching certificate, where colleagues are in the room, where they have mentors. And we look at the reports that these mentors write: how they structure courses, how they do things, all these types of things. And we ask the academic director, who is responsible for a program, to comment on all the different dimensions, including innovations.
Because, as I said before, the ambidexterity: we need to have teachers who not only do whatever they've been doing for the last 50 years, but who innovate, who try to capture new things. So again, the quantitative part is a minimal piece of the whole data set that we use, or data set is the wrong word, the information set that we use in the promotion and tenure committee. Yes, something else we wanted to say. We also got some feedback from students about doing course evaluations during lecture time. So for example, you tell them to fill it out during lecture time. But personally, I also experienced that they would just talk to us, and we would just discuss it personally with our teacher. And the lecture setting does need to be right for it. So it does need to be a bit of a small lecture; you can't do that in a big lecture hall for 300 people. But then you get a lot more feedback, and students feel more free to say what they really thought. So it does have to be the right setting. But personally, I also experienced, with other students too, that that works very well. But then it's not documented digitally. Still, that's also a good way to gather input. And then you actually have a discussion between students and a teacher, so students feel more heard than if they fill out Google Forms. Thank you. I'm really happy some of these alternatives are coming up already, because I do feel that we share a sense that the way things are handled now is not helping anyone: not the teachers, not the students. But there is hope. I'm going to the last statement, which is also more student-centered, and maybe some people in the audience also have an opinion on this: the omnipresence of data impacts students' health, studies and life. So you notice that the impact doesn't necessarily have to be good or bad. So maybe you have some examples of how it affects your life.
Yeah, definitely, because now there's more data, and most of it is also freely accessible. So it's really easy to get caught up in this competition, which we started this symposium with. It's really easy to get caught up in that, and it can affect your mental health a lot, but also your studies and just your life in general, because right now there's way more data to compare yourself to. And yeah, you feel like you have to be the best. You have to be better than someone at something. Because right now, with the data, we talked about it with course evaluations, you get this end number but not the reasoning behind it. And what you can compare with other people is the end result, the end number. So if you only compare that end result, you can get caught up in this data, but you don't see the story behind it. And that's really important for me personally, and also what we see with a lot more students: it's really important to focus on what's behind the data. And today, with all these end results being right in your face, you have to do more research to get to: how did they get there, how did they do that, what's the story behind it? It's way harder, so it's really easy to get caught up in a competition that doesn't say anything but feels super important. So right now, I think it mostly negatively impacts students' lives, but it also shows more opportunities to get more data, so that's the other side of it. Thank you. Being from literary studies myself, I really like that you underline the importance of the story that is behind the data, as you put it so well. That we remain very conscious of the fact that data are a representation instead of the real thing, because I think it alienates us a little bit if we think of ourselves as data points, and it causes a lot of harm. Thank you. Is there anyone from the audience who has an example, or wants to weigh in with an experience of how data affected their life as a student, or a human? Don't be shy.
Okay, I can bring you the mic. Thank you. Thank you very much. So I feel like whatever goes wrong, it's my fault, or our fault. If something bad happens online, we clicked on the terms and conditions and we didn't read them, so it's our fault. If in personal life there is so much information available, then we should have taken a mental health course or anything to fix our relationships, and so on. So for everything, it's easier to say that you should have known. Very interesting. So, personal responsibility, you could say: everything boils down to your own individual responsibility, because it is out there and you should have read the small print. Yeah, absolutely. Anyone feel something completely different or want to add to this? Someone here? Yes. Yeah, I think it's a very valuable analysis, because it really touches upon many different domains in life. So actually the tech companies that you were talking about, they also use this. They're just a platform, right? If you use Airbnb, for instance, they just provide the platform, and then it's up to us to see whether we're doing the right thing. The same applies to banking, in a certain sense. Back in the day, when you had an old-fashioned bank robbery, it was the money of the bank that was stolen. But now, if your money is stolen because you got caught in a phishing scam, then you have to prove that you were very careful with your passwords and things like that. So I think an important aspect that we maybe didn't really touch upon during our talks is that the internet, the networking of all this data and making it available to a lot of us, also makes us all responsible.
In other words, what Rijer was saying, the fact that some of these actors have way more power over the data but don't necessarily take the same responsibility for that data, is, I think, really one of the challenges we face on the individual level, as you said about having to read the terms and conditions or having to fix your own mental health, but also on the economic level, on the legal level, in lots of domains in society. So I think it's a very valuable point that you raised. Of course. One thing I saw a few years ago: we introduced a new learning management system, Canvas. Most of you are probably familiar with it and use it on an almost daily basis. We had a long discussion about whether we would open up a feature of Canvas related to learning analytics. As a teacher, you would actually see a lot of the usage: you would get daily reports that a certain student is using this document, and so on. You give an assignment, and then you know exactly what's going on, and it would be a signaling mechanism telling you that you need to intervene. There was a lot of discussion about this, also with students, and unless I missed something, I don't think it's now widely used; I don't think it's actually been opened up. Again, what I would suggest is, before you consider this, to do a pilot more scientifically, look at what the effects are, and try to understand, with a bunch of smart people, whether it would have positive effects. There are lots of arguments about the positive effects, but there might be things you didn't anticipate: negative effects, behavioral effects. Students get demotivated; knowing that you are monitored all the time gives you stress, these types of things. But you need to do this carefully. I would not exclude it outright, but I would start experimenting, and of course be very transparent about it when you use students as guinea pigs. Thanks a lot, Philip.
I think we have to conclude this panel session, and I want to thank you all so much for your contributions. I'm going to give the word back to our presenter of the day, David. Can we get a round of applause?