Welcome, with a big round of applause in your living room or wherever you are: Joram is a science communicator. He got his university education and his first scientific experience at a Max Planck Institute. He will now give you a crash course for beginners, to give you the best insight into the scientific method and to distinguish science from rubbish. Joram, the stage is yours.

Hi, nice to have you here. My name is Joram Schwarzmann and I'm a plant biologist, and today I want to talk about science. I have worked in research for many years, first during my diploma thesis and then during my doctoral research. I worked both at universities and at a Max Planck Institute, so I got pretty good insights into the way these structures work. After my PhD, I left the research career to instead talk about science, which is also what I'm about to do today. I now work in science communication, both as a job and in my spare time, when I write about molecular plant research online. Today I will only mention plant science a tiny bit, because the topic is a different one. Today we are talking about science literacy. So basically: how does the scientific system work, how do you read scientific information, and which information can you trust? Science. It's kind of a big topic. Before we start, it's time for some disclaimers. I am a plant biologist. I know stuff about STEM research, that is science, technology, engineering and mathematics, but there's so much more other science out there. Social sciences and humanities share many core concepts with the natural sciences, but they also have many approaches that are unique to them. I don't know a lot about the way these work, so please forgive me if I stick close to what I know, which is STEM research. Talking about science is also much less precise than doing the science. For pretty much everything that I'll bring up today, there is an example where it is completely different.
So if in your country, field of research or experience something is different, we're probably both right about whatever we're talking about. With that out of the way, let's look at the things that make science science. There are three parts of science that are connected. The first one is the scientific system. This is the way science is done. Next up, we have the people who do the science. The scientific term for them is researchers. We want to look at how you become a researcher, how researchers introduce biases, and how they pick their volcanic lair to do evil science. Finally, there are publications. This is the front end of science, the stuff we look at most of the time when we look at science. There are several different kinds, and not all of them are equally trustworthy. Let's begin with the scientific system. We don't just do science. We do science systematically. Since the first people tried to understand the world around them, we have developed a complex system for science. At the core of that is the scientific method. The scientific method gives us structure and tools to do science. Without it, we end up in the realm of guesswork, anecdotes and false conclusions. Here are some of my favorite things that were believed before the scientific method became standard: Gentlemen could not transmit disease. Mice are created from grain and cloth. Blood is exclusively produced by the liver. Heart-shaped plants are good for the heart. But thanks to the scientific method, we have a system that allows us to make confident judgments about our observations. Let's use an example. This year aged me significantly, and so, as a newly formed old person, I have pansies on my balcony. I have blue ones and yellow ones, and in summer I can see bees buzz around the flowers. I have a feeling, though, that they like the yellow ones better. That right there is an observation. I now think to myself: I wonder if they prefer the yellow flowers over the blue ones based on the color. This is my hypothesis.
The point of a hypothesis is to test it, so I can accept or reject it later. So I come up with a test: I count all bees that land on yellow flowers and on blue flowers within a weekend. That is my experiment. So I sit there all weekend with one of these clicky things in each hand and count the bees on the flowers. Every time a bee lands on a flower, I click: click, click, click, click. It's the most fun I had all summer. In the end I look at my numbers. These are my results. I saw 64 bees on the yellow flowers and 27 on the blue flowers. Based on my experiment I conclude that bees prefer yellow pansies over blue ones. I can now return and accept my hypothesis: bees do prefer yellow flowers over blue ones. Based on that experiment I made a new observation and can now make a new hypothesis: do other insects follow the same behavior? And so I sit there again next weekend, counting all hoverflies on my pansies. Happy days. The scientists in the audience are probably screaming by now. I am too, but on the inside. My little experiment and the conclusions I drew were flawed. First up, I didn't do any controls apart from yellow versus blue. What about time? Do the days or seasons matter? Maybe I picked the one time period when bees actually do prefer yellow, but on most other days they like blue better. And then I didn't control for position. Maybe the blue ones get less sunlight and are less warm, so a good control would have been to swap the pots around. I also said I wanted to test color. Another good control would have been to put up a cardboard cutout of a flower in blue and yellow and see whether it is the color or maybe another factor that attracts the bees. And then I only counted once. I put the two data points into an online statistical calculator, and once it was done calculating, it told me I had internet connectivity problems. So I dusted off my old textbook on statistics, and as it turns out, you need repetitions of your experiment to do statistics.
And without statistics you can't be sure of anything. If you want to know whether what you measure is random or truly different between your two conditions, you do a statistical test that tells you with what probability your result could be random. That is called a p-value. You want that number to be low. In biology we are happy with a chance of 1 in 20, so 5%, that the difference we observe between two measurements happened by chance. In high-energy particle physics, that accepted chance of seeing a random effect is 1 in 3.5 million, or about 0.00003%. So without statistics you can never be sure whether you observe something important or just two numbers that look different. A good way to do science is to do an experiment a couple of times. Three at least, and then repeat it with controls, again at least three times. With a bigger data set I could actually make an observation that holds significance. So why do I tell you all of this? You want to know how to understand science, not how to do it yourself. Well, as it turns out, controls and repetitions are also a critical point to check when you read about scientific results. Often enough, cool findings are based on experiments that didn't control for certain things or that are based on very low numbers of repetitions. You have to be careful with conclusions from these experiments, as they might be wrong. So when you read about science, look for signs that they followed the scientific method: a clearly stated hypothesis, experiments with proper controls, and enough repetitions to do solid statistics. It seems like an obvious improvement for the scientific system to just do more repetitions. Well, there's a problem with that. Often, experiments require the researchers to break things. Maybe just because you take the things out of their environment and into your lab, maybe because you can only study a thing when it's broken. And as it turns out, not all things can be broken easily.
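To make the p-value idea a bit more concrete, here is a minimal sketch of an exact two-sided binomial test, run on my toy bee counts from above (64 bees on yellow, 27 on blue). The null hypothesis I'm assuming is that a bee picks either color with equal probability; the function name and the whole setup are just illustrative, not how a real pollination study would be analyzed.

```python
from math import comb

def binom_two_sided_p(k, n, p0=0.5):
    """Exact two-sided binomial test: probability of an outcome at least
    as unlikely as k successes out of n, if the true rate were p0."""
    probs = [comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    p_k = probs[k]
    # Sum the probabilities of all outcomes no more likely than the observed one.
    return sum(p for p in probs if p <= p_k + 1e-12)

yellow, blue = 64, 27
p_value = binom_two_sided_p(yellow, yellow + blue)
print(f"p = {p_value:.5f}")  # well below the 5% threshold used in biology
```

Even so, a tiny p-value from one weekend of counting says nothing about the missing controls and repetitions; the test only quantifies how surprising the split 64 vs. 27 would be under a fair 50/50 assumption.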
Let me introduce you to my scale of how easy it is to break the thing you study. All the way to the left you have things like particle physics. It's easy to break particles. All you need is a big ring and some spare electrons you put in there really, really fast. Once you have these two basic things, you can break millions of particles and measure what happens, so you can calculate really good statistics on them. Then you have other areas of physics. In materials science, the only thing that stops you from testing how hard a rock is, is the price of your rock. Again, that makes us quite confident in the material properties of things. Now we enter the realm of biology. Biology is less precise because living things are not all the same. If you take two bacterial cells of the same species, these might still be slightly different in their genome. But luckily, we can break millions of bacteria and other microbes without running into ethical dilemmas. We even ask researchers to become better at killing microbes. So doing more of the experiment is easier when working with microbes. It gets harder, though, with bigger and more complex organisms. Want to break plants in a greenhouse or in a field? As long as you have the space, you can break thousands of them for science and no one minds. How about animals like fish and mice and monkeys? There it gets much more complicated very quickly. While we are happy to kill thousands of pigs every day for sausages, we feel much less comfortable doing the same for science, and it's not a bad thing when we try to reduce harm to animals. So while you absolutely can do repetitions and controls in animal testing, you usually are limited by the number of animals you can break for science. And then we come to human biology. If you thought it was hard doing lots of repetitions and controls in animals, try doing that in humans. You can't grow a human on a corn-sugar-based diet just to see what would happen.
You can't grow humans in isolation, and you can't breed humans to get more cancer as a control in your cancer experiment. So with anything that involves science and humans, we have to have very clever experiment designs to control for all the things that we can't control. The other way to do science on humans, of course, is to be a Genetic Lifeform and Disk Operating System. What this scale tells us is how careful we have to be with conclusions from any of these research areas. We have to apply much higher skepticism when looking at single studies on human food than when we study how hard a rock is. If I'm interested in stuff on the right end of the spectrum, I'd rather see a couple of studies pointing at a conclusion, whereas the further I get to the left-hand side, the more I trust single studies. That still doesn't mean that there can't be mistakes in particle physics, but I hope you get the idea. Back to the scientific method. Because it is circular, it is never done, and so is science: we can always uncover more details, look at related things and refine our understanding. There's no field where we could ever say: okay, let's pack up, we know everything now, good job everyone, the science has been completely done. Everything in science can potentially be overturned. Nothing is set in stone. However, and it's a big however, it's not likely that this happens for most things. Most things have been shown so often that the chance that we will find out that water actually boils at 250 degrees centigrade at sea level and normal pressure is close to zero. But if researchers were able to show that strange behavior of water, it is in the nature of science to include that result in our understanding, even if that breaks some other ideas that we have about the world. That is what sets science apart from dogma. New evidence is not frowned upon and rejected, but welcomed and integrated into our current understanding of the world. Enough about the scientific system.
Let's talk about scientists. You might be surprised to hear it, but most researchers are actually people. Other people, who are not researchers, tend to forget that, especially when they talk about the science that the researchers do. That goes both ways. There are some who believe in the absolute objective truth of science, ignoring all influence researchers have on the data. And there are others who say that science is lying about things like vaccinations, climate change or infectious diseases. Both groups are wrong. Researchers are not infallible demigods that eat nature and poop wisdom. They are also not conspiring to bring harm to society in search of personal gain. Trust me, I know people who work in pesticide research. They're as miserable as any other researcher. Researchers are people, and so they have thoughts and ideas and wishes and biases and faults and good intentions. Most people don't want to do bad things and inflict harm on others, and neither do researchers. They aim to do good things and make the lives of people better. The problem with researchers being people is that they are also flawed. We all have cognitive biases that shape the way we perceive and think about the world. And in science, there's a whole list of biases that affect the way we gather data and draw conclusions from it. But luckily, there are ways to deal with most biases. We have to be aware of them, address them and change our behavior to avoid them. What we can't do is deny their impact on research. Another issue is diversity. Whenever you put a group of similar people together, they will only come up with ideas that fit within their group. That's why it is a problem when only white men dominate research leadership positions. Hold on, some of you might shout: these men are men of science. They are objective. They use the scientific method. We don't need diversity, we need smart people. To which I answer... Here is a story for you.
For more than 150 years, researchers believed that only male birds sing. It fit the simple idea that male birds do all the mating rituals and stuff, so they must be the singers. Just like in humans, female birds were believed to just sit and listen while the males shout at each other. In the last 20 years, this idea was debunked. New research found that female birds sing too. So how did we miss that for so long? A study of those studies found that during the 20 years that overturned the dogma of male singing birds, the researchers changed. Suddenly, more women took part in research, and research happened in more parts of the world. Previously, mostly men in the US, Canada, England and Germany were studying singing birds in their countries. As a result, they subconsciously introduced their own biases and ideas into the work, and so we believed for a long time that female birds keep their beaks shut. Only when the group of researchers diversified did we get new and better results. The male researchers didn't ignore the female songbirds out of bad faith. The men were shaped by their environment, but they didn't want to do bad things. They just happened to overlook something that someone with a different background would pick up on. What does this tell us about science? It tells us that science is influenced, consciously or subconsciously, by internal biases. When we talk about scientific results, we need to take that into account. Especially in studies regarding human behavior, we have to be very careful about experiment design, framing and interpretation of results. If you read about science that makes bold claims about the way we should work, interact or communicate in society, that science is prone to be shaped by bias, and you should be very careful when drawing conclusions from it. I personally would rather wait for several studies pointing in a similar direction before I draw major conclusions.
I'll link to a story about a publication on the influence of female mentors on career success that was criticized for a couple of these biases. If we want to understand science better, we also have to look at how someone becomes a scientist, and I mean that in the sense of a professional career. Technically, everybody is a scientist as soon as they test a hypothesis, observe the outcome and repeat, but unfortunately most of us are not paid for the tiny experiments during our day-to-day life. If you want to become a scientist, you usually start by entering academia. Academia is the world of universities, colleges and research institutes. There is a lot of science done outside of academia, like in research and development in industry, or by individuals taking part in DIY science. As these groups rarely enter the spotlight of public attention, I will ignore them today. Sorry. So, this is a typical STEM career path. You begin as a bachelor's or master's student. You work for something between three months and a year, and then, woohoo, you get a degree. From here you can leave, go into industry, be a scientific researcher at a university, or you continue your education. If you continue, you're most likely to do a PhD. But before you can select one of the exciting title options on a form when you order your food, you have to do research. For three to six years, depending on where you do your PhD, you work on a project and most likely will not have a great time. You finish with your degree and some publications. A lot of people leave now, but if you stay in research, you'll become a postdoc. The word postdoc comes from the words 'doc' as in doctorate and 'post' as in you have to post a lot of applications before you get a job. Postdocs do more research, often on broader topics. They supervise PhD students and are usually pretty knowledgeable about their research field. They work and write papers until one of two things happens.
The German Wissenschaftszeitvertragsgesetz bites them in the butt and they get no more contracts, or they move on to become a group leader or professor. Being a professor is great. You have a permanent research position, you get to supervise, and you get to talk to many cool other researchers. You probably know a lot by now, not only about your field, but also about many other fields in your part of science, as you constantly go to conferences, because they have good food and also people talking about science. The downside is that you're probably not doing any experiments yourself anymore. You have postdocs and PhD students who do that for you. If you want to go into science, please have a look at this. What looks like terrible city planning is actually terrible career planning, as less than one percent of PhDs will ever reach the level of professor, also known as the only stable job in science. That's also what happened to me: I left academia after my PhD. So what do we learn from all of this? Different stages of a research career correlate with different levels of expertise. If you read statements from a master's student or a professor, you can get an estimate of how much they know about their field, and in turn of how solid their science is. Of course, this is just a rule of thumb. I've met both very knowledgeable master's students and professors who knew nothing apart from their own small world. So whenever you read statements from researchers, independent of their career stage, you should also wonder whether they represent a scientific consensus. Any individual scientist might have a particular hot take about something they care about, but in general they agree with their colleagues. When reading about science that relates to policies or public debates, it is a good idea to explore whether a particular researcher is representing their own opinion or the one of their peers.
Don't ask the researcher directly, though; every single one of them will say that of course they represent the majority opinion. "The difference between science and screwing around is writing it down", as Adam Savage once said. Science without publications is pretty useless, because if you keep all that knowledge to yourself, well, congrats, you are very smart now, but that doesn't really help anyone but you. Any researcher's goal, therefore, is to get their findings publicly known, so that others can extend the work and create scientific progress. So let's go back to my amazing bee research. I did the whole experiment again, with proper controls this time, and now I want to tell people about it. The simplest way to publish my findings would be to tweet about it, but then a random guy would probably tell me that I'm wrong and stupid and should go f*** myself. So instead, I do what most researchers would do and go to a scientific conference. That's where researchers hang out, have a lot of coffee, and sit and listen to talks from other researchers. Conferences are usually the first place where new information becomes public. Well, public is a bit of a stretch; usually the talks are not recorded or made accessible to anyone who wasn't there at the time. So while the information is pretty trustworthy, it remains fairly inaccessible to others. After my conference talk, the next step is to write up all the details of my experiment and the results in a scientific paper. Before I send this to an editor at a scientific journal, I could publish it myself as a preprint. These preprints are drafts of finished papers that are available for anyone to read. They're great because they provide easy access to information that is otherwise often behind paywalls. They're not so great because they have not yet been peer reviewed. If a preprint hasn't also been published with peer review, you have to be careful with what you read, as it is essentially only the point of view of the authors.
Peer review only happens when you submit your paper to a journal. Journals are a whole thing, and there have been some great talks in the past about why many of them are problematic. Let's ignore for a second how these massive enterprises collect money from everyone they get in contact with, and let's focus instead on what they're doing for the academic system. I send in my paper, an editor sees if it's any good, and then sends my paper to two to three reviewers. These are other researchers who then critically check everything I did and eventually recommend accepting or rejecting my paper. If it is accepted, the paper will be published. I pay a fee and the paper will be available online, often behind a paywall, unless I pay some more cash. At this point, I'd like to have a look at how a scientific paper works. There are five important parts to any paper: the title, the author list, the abstract, the figures and the text. The title is a summary of the main findings, and unlike in popular media, it is much more descriptive. Where a newspaper leaves out the most important information to get people to read the article, in a study the information is right there in the title. In my case that could be: "Honeybees (Apis mellifera) show selective preference for flower color in Viola tricolor". You see, everything is right there: the organisms I worked with and the main result I found. Below the title stands the author list. As you might have guessed, the author list is a list of authors. Depending on the field the paper is from, the list can be ordered alphabetically or according to relative contribution. If it is contribution, then you usually find the first author to have done most of the work, the middle authors to have contributed some smaller parts, and the last author to have paid for the whole thing. The last author is usually a group leader or professor. A good way to learn more about a research group and their work is to search for the last author's name.
The abstract is a summary of the findings. Read this to get a general idea of what the researchers did and what they found. It is very dense in information, but it is usually written in a way that researchers from other fields can also understand at least some of it. The figures are pretty to look at and hold the key findings of most papers, and the text has the full story, with all the details, all the jargon and all the references that the research is built on. You probably won't read the text unless you care a lot, so stick to title, abstract and authors to get a quick understanding of what's going on. Scientific papers reflect the peer-reviewed opinion of one or a few research groups. If you are interested in a broader topic, like which insects like to pollinate which flowers, you should read review papers. These are peer-reviewed summaries of a much broader scope, often weighing multiple points of view against each other. Review papers are a great resource that avoids some of the biases individual research groups might have about their topic. So, my research is reviewed and published. I could go back now and start counting butterflies, but this is not where the publishing of scientific results ends. My institute might think that my bee counting is not just good, it is actually amazing, and so they will issue a press release. Press releases often emphasize the positive parts of a study, while putting them into the context of something that's relevant to most people. Something like: bees remain attracted to yellow flowers despite the climate crisis. The facts in a press release are usually correct, but shortcomings of a study that I mentioned in the paper are often missing from the press release. Because my bee study is really cool, and because the PR department of my institute did a great job, journalists pick up on the story. The first ones are often outlets with a focus on science, like Scientific American or Spektrum der Wissenschaft.
Most of the time, science journalists do a great job of finding more sources and putting the results into context. They often ask other experts for their opinion, and they break down the scientific language into simpler words. Science journalism is the source I recommend to most people when they want to learn about a field that they are not experts in. Because my bee story is freaking good, mainstream journalists also report on it. They are often pressed for time and write for a much broader audience, so they just report the basic findings, often putting even more emphasis on why people should care: usually climate change, personal health, or now Covid. Mainstream press coverage is rarely as detailed as the previous reporting and has the strongest tendency to accidentally misrepresent facts or add framing that researchers wouldn't use. Oh, and then there is your weird uncle, who posts a link to the article on their Facebook with a blurb of text that says the opposite of what the study actually did. As you might imagine, the process of getting scientific information out to the public quickly becomes a game of telephone. What is clearly written in the paper is framed positively in the press release and gets watered down even more once it reaches the mainstream press. So for you, as someone who wants to understand the science, it is a good idea to be more careful the further you get away from the original source material. While specialized science journalism usually does a good job of breaking down the facts without distortion, the same can't be said for popular media. If you come across an interesting story, try to find another version of it in a different outlet, preferably one that caters more to an audience with scientific interest. Of course, you can jump straight to the original paper, but understanding the scientific jargon can be hard, and misunderstanding the message is easy. So it can do more harm than good.
We see that harm now with hobbyists who are not epidemiologists, who are not people who study epidemics, making up their own pandemic models. They cherry-pick bits of information from scientific papers without understanding the bigger picture and context, and then post their own charts on Twitter. It's cool if you want to play with data in your free time, and it's a fun way to learn more about a topic. But it can also be very misleading and harmful while dealing with a pandemic, if expert studies have to fight for attention with non-expert Excel graphs. It pays off to think twice about whether you're actually helping by publishing your own take on a scientific question. Before we end, I want to give you some practical advice on how to assess the credibility of a story and how to understand the science better. This is not an in-depth guide to fact checking. I want you to get a sort of gut feeling about science. When I read scientific information, these are the questions that come to my mind. First up, I want you to ask yourself: is this plausible, and does this follow the scientific consensus? If both answers are no, then you should carefully check the sources. More often than not, these results are outliers that somebody exaggerated to get news coverage, or someone is actively reframing scientific information for their own goals. To get a feeling for the scientific consensus on things, it is a good idea to look for joint statements from research communities. Whenever an issue that is linked to current research comes up for public debate, there is usually a joint statement laying down the scientific opinion, signed by dozens or even hundreds of researchers, like for example from Scientists for Future. Then, whenever you see a big number, you should look for context. When you read statements like "we grow sugar beet on an area of over 400,000 hectares", you should immediately ask yourself: who is "we"? Is it Germany, Europe, the world? What is the time frame?
Is that per year? Is that a lot? How much is that compared to other crops? Context matters a lot, and often big numbers are used to impress you. In this case, 400,000 hectares is the yearly area that Germany grows sugar beet on. Wheat, for example, is grown on over 3 million hectares per year in Germany. Context matters, and so whenever you see a number, look for a frame of reference. If the article doesn't give you one, either go and look for one yourself, or ignore the number when making decisions based on the article. Numbers only work with framing, so be aware of it. I want you to think briefly about how you felt when I gave you that number of 400,000 hectares. Chances are that you felt a sort of unease, because it's really hard to imagine such a large number. An interesting exercise is to create your own frame of reference. Collect a couple of numbers, like the total agricultural area of your country, the current spending budget of your municipality, the average yearly income, or the unemployment rate in relative and absolute numbers. Keep the list somewhere accessible and use it whenever you come across a big number that is hard to grasp. Are 100,000 euros a lot of money in the context of public spending? How important are 5,000 jobs in the context of population and unemployment? Such a list can defuse the occasional scary big number in news articles, and it can also help you to make your point better. Speaking of framing, always be aware of who the sender of the information is. News outlets rarely have a specific scientific agenda, but NGOs and companies do. If an oil company provided a leaflet where they cite scary numbers and present research that they funded, which finds that oil drilling is actually good for the environment, but they don't disclose who they worked with for the study, we would all laugh at that information.
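As a toy illustration of such a personal frame of reference, here is a minimal Python sketch. The sugar beet and wheat figures are the ones from the talk; the total farmland figure is a rough round number I'm assuming purely for illustration.

```python
# A tiny personal frame of reference for hectare figures.
# Wheat figure is from the talk; total farmland is a rough assumed round number.
reference_ha = {
    "wheat area, Germany per year": 3_000_000,
    "total farmland, Germany (rough figure)": 16_000_000,
}

def put_in_context(value_ha: int) -> list[str]:
    """Express a raw hectare figure as a share of each reference value."""
    return [
        f"{value_ha:,} ha is {value_ha / ref:.0%} of {name} ({ref:,} ha)"
        for name, ref in reference_ha.items()
    ]

for line in put_in_context(400_000):  # Germany's yearly sugar beet area
    print(line)
```

The point is not the code, but the habit: a scary-sounding 400,000 hectares shrinks to a modest share of the farmland you already have a feeling for, once you divide it by a reference value you trust.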
But if we read a leaflet from an environmental NGO in Munich that is structurally identical, but with a narrative about glyphosate and beer that fits our own perception of the world, we are more likely to accept the information in the leaflet. In my opinion, both sources are problematic, and I would not use either of them to build my own opinion. Good journalists put links to their sources in or under the article, and it is a good idea to check them. Often, however, you have to look for the paper yourself, based on hints in the text like author names, institutions and general topics. And then paywalls often block access to the information that you're looking for. You can try pages like ResearchGate for legal access to PDFs. Many researchers also use Sci-Hub, but as the site provides illegal access to publicly funded research, I won't recommend doing so. When you have the paper in front of you, you can either read it completely, which is kind of hard, or just read the abstract, which might be easier. The easiest is to look for science journalism articles about the paper. Twitter is actually great for finding those, as many researchers are on Twitter and like to share articles about their own research. They also like to discuss research on Twitter, so if the story is controversial, chances are you'll find some science accounts calling that out. While Twitter is terrible in many regards, it is a great tool to engage with the scientific community. You can also do a basic check-up yourself. Where was the paper published, and is it a known journal? Who are the people doing the research, and what are their affiliations? How did they do their experiment? Checking for controls and repetitions in the experiment is hard if you don't know the topic, but if you do know the topic, go for it. In the end, fact checking takes time and energy.
It's very likely that you won't do it very often, but especially when something comes up that really interests you and that you want to tell people about, you should do a basic fact check on the science. The world would be a lot better if we only shared information that we checked ourselves for plausibility. You can also help to reduce the need for rigorous fact checking. Simply do not spread any science stories that seem too good to be true and that you didn't check yourself or find in a credible source. Misinformation and bad science reporting spread because we don't care enough and because they are very, very attractive. If we break that pattern, we can give reliable scientific information the attention that it deserves. But don't worry, most of the science reporting you'll find online is actually pretty good. There is no need to be extremely careful with every article you find. Still, I think it is better to have a natural alertness to badly reported science than to trust just anything that is posted under a catchy headline. There is no harm in double checking the facts, because either you correct a mistake or you reinforce correct information in your mind. So, how do I assess whether a source that I like is actually good? When I come across a new outlet, I try to find some articles in an area that I know stuff about. For me, that's plant science. I then read what they're writing about plants. If that sounds plausible, I am tempted to also trust them when they write about things like physics or climate change, where I have much less expertise. This way, I have my own personal list of good and not-so-good outlets. If somebody on Twitter links to an article from the not-so-good list, I know that I have to take that information with a large quantity of salt, and if I want to learn more, I look for a different source to back up any claims I find. It is tedious, but so is science.
With a bit of practice, you can internalize the skepticism and navigate science information with much more confidence. I hope I could help you with that a little bit. So, that was my attempt to help you understand science better. I'd be glad if you'd leave me feedback or direct any of your questions towards me on Twitter. That's @ScienceJoram. There will be sources for the things I talked about available somewhere around this video or on my website, Joram.schwarzmann.de. Thank you for your attention. Goodbye. Thank you, Joram, for your talk. Very entertaining and informative as well, if I may say so. We have a few questions from here at the Congress. Where's the signal angel? I need my questions from the internet. All of them are from the internet. I will go through the questions and you can elaborate on some of the points from your talk. So, the first question. Very good. The first question is: is there a difference between peer-reviewed articles and meta-studies? To my knowledge, there isn't really a categorical difference in terms of peer review. Meta-studies, so studies that integrate a lot of other studies, you find especially in the medical field: they summarize the findings again and try to put them in context of one another, which makes them incredibly useful studies for medical decision making. Because, as I said in the talk, it's often very hard to do, for example, dietary studies, and you want to have large numbers, and you get that by combining several studies together. And usually these meta-studies are also peer reviewed. So, instead of actually doing the research and going and doing whatever experiments you want to do on humans, you instead collect all of the evidence others did.
And then you integrate it again, draw new conclusions from that, compare them and weigh them and say, okay, this study had these shortcomings, but we can take this part from this study and put it in context with this part from this other study. And because you add so much additional conclusion making on top, you then submit it again to a journal and it's again peer reviewed, and other researchers look at it and give their expertise on it and say whether or not what you concluded from all of these things made sense. So, a meta-study, when it's published in a scientific journal, is also peer reviewed and also a very good, credible source. And I would even say that often meta-studies are the studies that you really want to look for if you have a very specific scientific question that you, as a sort of non-expert, want to have answered, because very often the individual studies are very focused on a specific detail of a bigger research question. But if you want to know, say, is dietary fiber good for me, there's probably not a single study that will have the answer, but there will be many studies that together point towards the answer. And the meta-study is the place where you can find that answer. Very good. Sounds like something to reinforce the research. Maybe a follow-up question, or it is a follow-up question: is there anything you can say in this regard about the reproducibility crisis in many fields such as medicine? Yeah, that's a very good point. That's something that I didn't mention at all in the talk, pretty much for complexity reasons, because when you go into reproducibility, you run into all kinds of complex additional problems. Because yes, it is true that we often struggle with reproducing results.
I actually don't have the numbers on how often we fail, but there's this reproducibility crisis that's often mentioned. The idea is that researchers take a paper, whatever it studied, and then other researchers try to recreate the study. Usually a paper has a materials and methods section that details all of the things that they did; it's pretty much the instructions for the experiment, and the results of the experiment are usually in the same paper. And when they try to re-cook the recipe that somebody else wrote, there is a chance that they don't find the same thing. And we see that more and more often, especially with complex research questions. And that brings us to the idea that reproducibility is an issue, and that maybe we can't trust science as much, or we have to be more careful. And it is true that we have to be more careful, but I wouldn't go as far as to be distrustful of research in general. And that's why I'm also saying that in the medical field, you always want to have multiple studies pointing at something. You always want to have multiple lines of evidence, because if one group finds something and another group can't reproduce it, you end up in a place where you can't really say, does this work now? Who made the mistake, the first group or the second group? Because when you're reproducing a study, you can also make mistakes, or there can be factors that the initial study didn't document in a way that can be reproduced, because they didn't care to write down the supplier of some chemicals, and the chemicals were very important for the success of the experiment. Things like that happen. And so you don't know, when you just have the initial study and the reproduction study and they have different outcomes.
But if you then have multiple studies that all look at a similar area, and out of 10 studies, seven or eight point in a certain direction, you can be more certain that this direction points towards the truth. In science, it's really hard to say, okay, this is now the objective truth, we have now found the definitive answer to the question that we're looking at, and that's especially true in the medical field. So yeah, that's a very long way of saying it's complicated. Reproducibility studies are very important. But I wouldn't be too worried that the lack of reproducibility breaks the entire scientific method, because there are usually more complex issues at hand than just a simple re-cooking of another person's study. Yes, speaking of publishing. So this is the follow-up to the follow-up. The internet asks, how can we deal with the publish-or-perish culture? Oh yeah, if I knew that, I would write very smart blog posts trying to convince people of the answer. I think personally, we need to rethink how we do the funding, because that's in the end what it comes down to. That's another issue that I didn't go into much detail on in the talk, because it is also very complex. So, science funding is usually defined by a decision making process. At some point, somebody decides who gets the money. And to get the money, the researchers need to qualify somehow. Say there are 10 research groups or 100 research groups that write a grant and say, hey, we need money because we want to do research. The funders have to figure out who gets it, because they can't give money to everyone; we spend money in budgets on other things than just science.
So the next best thing that they came up with was the idea to use papers, the number of papers that you have, or the quality of those papers, as a measurement of whether you are deserving of the money. And you can see how that's problematic. It means that people who are early in their research career, who don't have a lot of papers, have a lower chance of getting the money. And that leads to this publish-or-perish idea: if you don't publish your results, and if you don't publish them in a very well-respected journal, then the funding agencies won't give you money. And so you perish, and you can't really pursue your research career. And it's really a hard problem to solve, because the decision about the funding is very much detached from the scientific world, from academia. There are multiple levels of abstraction between the people who in the end make the budgets and decide who gets the money, and the people who are actually using the money. I would wish for funding agencies to look less at papers and maybe come up with different qualifiers. Maybe something like general scientific practice; maybe they could do audits of labs. I mean, there's a ton of factors that influence good research that are not mentioned in papers, like work ethics, work culture, or how much teaching you do, which can be very important but is sort of detrimental to getting more funding, because when you do teaching you don't do research, and then you don't get papers, and then you don't get money. So yeah, I don't have a very good solution to the question of what we can do. I would like to see more diverse funding, also of smaller research groups. I would like to see more funding for negative results, which is another thing that we don't really value. If you do an experiment and it doesn't work, you can't publish it, you don't get a paper, you don't get money, and so on.
So there are many factors that need to change, many things that we need to touch, to actually get away from publish or perish. Yeah, another question that is closely connected to that is, why are there so few stable jobs in science? Yeah, that's the Wissenschaftszeitvertragsgesetz, which we got, I think, in the late 90s or early 2000s; that's at least a very German-specific answer. This Gesetz, this law, put it into law that there is a limited time span that you can work in research. You can only work in research for, I think, 12 years, and there are some footnotes and exceptions around it, but there's a fixed time limit that you can work in research on limited-term contracts. And your funding, whenever you get research funding, is always for a limited time. You always get research funding for three years, or six years if you're lucky. So you never have permanent money in a research group. Sometimes you have that in universities, but overall you don't have permanent money, and if you don't have permanent money, you can't have permanent contracts. And therefore there aren't really stable jobs. With professorships or some group leader positions it changes, because group leaders and professorships are more easily planned. Universities and research institutes make a long-term budget and say, okay, we will have 15 research groups, so we have money in the long term for 15 group leaders. But whoever is hired underneath these group leaders, there is much more fluctuation, and it is based on short-term money. And so there are no stable jobs there. At least that's how it is in Germany. I know that, for example, in the UK and in France, they have permanent positions earlier.
They have lecturers, for example, in the UK, where without being a full professor, which comes with its own backpack of stuff that has to be done, you can already work at a university long-term on a permanent contract. So it's a problem that we see across the world, but Germany has its own very specific problems introduced here that make it very unattractive to stay in research long-term in Germany. It's true, I concur. So, coming to the people who do science mostly for fun and less for profit, this question is: can you write and publish a paper without a formal degree in the sciences, assuming the research methods are sufficiently good? Yes, I think technically it is possible. It comes with some problems. First of all, it's not free. When you submit your paper to a journal, you pay money for it. I don't know exactly, but I think a safe assumption is that it ranges between $1,000 and $5,000, depending on the journal you submit to. And then very often there are some formal problems. I've recently been co-authoring a paper, and I'm not actively doing research anymore. I did something in my spare time, helped a friend of mine who's still doing research with some basic stuff, but he was so nice to put me on the paper, and then there's a form where it says institute affiliation, and I don't have an institute affiliation in that sense. As I'm just a middle author on this paper, I was listed there, or hopefully, if it gets accepted, I will be listed there, as an independent researcher. But it might be that a journal has its own internal rules where they say, we only accept people from institutions. So it's not really inherent in the scientific system that you have to be at an institution, but there are these doors, these pathways, that are locked, because somebody has to put in a form somewhere what institution you are affiliated with.
And I know that some people who do DIY science, so science outside of academia, need to have partners in academia who help them with the publishing and also with getting access to certain things. In computer science you don't need specific chemicals, but if you do anything like chemical engineering or biology, often you only get access to the supplies when you are at an academic institution. So I know that many people have these partnerships, these collaborations with academia, that allow them to actually do the research and then publish it as well, because otherwise, if you're just doing it from your own bedroom, there might be a lot of barriers in your way that can be very hard to overcome. But I think if you're really, really dedicated, you can overcome them. Coming to the elephant in said bedroom: what can we do against the spread of false facts? 5G, corona, vaccines. They get a lot of likes and are spread like a disease themselves, and it's very hard to counter these arguments, especially in in-person encounters, because apparently a lot of people are not that familiar with the scientific method. What's your take on that? Yeah, it's difficult. And I've read many different approaches over the years, ranging from not actually arguing about facts at all, because often somebody who has a very predefined opinion on something knows a lot of false facts that they have on their mind, and you, as somebody talking to them, often don't have all of the correct facts in your mind. I mean, who runs around with a bag full of climate facts and a bag full of 5G facts and a bag full of vaccine facts, all in the same quantity and quality as the stuff that somebody who reads things on Facebook has in their backpack, in their mental image of the world?
So just arguing on the facts is very hard, because people who follow these false ideas are often very quick in making turns; they throw one thing at you after the other, and so it's really hard to just debunk fact one and then debunk the next wrong fact. I've seen a paper where people try to do this from an argumentative standpoint. They say, look, you're drawing false conclusions. You say, because A, therefore B, but these two things aren't linked in a causal way, so you can't actually draw this conclusion; they try to dismantle the argument on a meta level instead of on the fact level. But that is also difficult, and usually people who are really devout followers of false facts are also not followers of reason, so any reason-based argument will just not work for them, because they will deny it. I think what really helps is a lot of small-scale action in terms of making scientific data, making science, more accessible. And I mean, I'm a science communicator, so I'm heavily biased when I say we need more science communication. We need more low-level science communication, and we need to have it freely accessible, because all of the stuff that you read with the false facts is freely available on Facebook and so on. So we need to have a similarly low entry level for the correct facts, for the real facts. And this is also hard to do. In the science communication field there's a lot of debate about how we do that: should we do it with more presence on social media, should we simplify more, or are we then actually oversimplifying? Where is the balance, how do we walk this line? So there's a lot of discussion and still ongoing learning about that, but I think in the end that is what we need. We need people to be able to find correct facts just as easily and understandably as they find the fake news and the fakes.
We need science to be communicated as clearly as the things shared on Facebook. As an image, and I don't want to repeat all of the wrong claims, but something that says something very wrong but very persuasive: we need to be as persuasive with the correct facts. And I know that many people are doing that by now, especially on places like Instagram and TikTok; you find more and more people doing very high-quality, low-level science communication, and I mean that on a jargon level, not on an intellectual level. So very low-barrier science communication, and I think this helps a lot. This helps more than very complicated pages debunking false facts. I mean, we also need those, we need them as references, but if we really want to combat the spread of fake news, we need to be just as accessible with the truth. A thing closely connected to that is: how do we fine-tune our bullshit detectors? I guess people who are watching this talk have already started with the process of fine-tuning their bullshit detectors, but when, for example, something very exciting and promising comes along, as an example CRISPR-Cas, how do we go forward so as not to be fooled by our own already tuned bullshit detectors and false conclusions?
I think a main part of this is practice. Try to look for something that would break the story. Not for every story that you read, that's a lot of work, but from time to time pick a story where you think, oh, this is very exciting, and try to learn as much as you can about that one story, and by doing that, also learn about the process of how you drew the conclusions. Then compare your final picture, after you did all the research, to the thing that you read in the beginning, and see where there are things that don't come together and where there are things that are the same. And then, based on that, practice. I know that's a lot of work, so that's sort of the high-impact way of doing it, by just practicing and actively doing the check-ups. The other way you can do this is to find people whose opinion you trust on topics and follow them on podcasts, on social media, on YouTube or wherever. Especially in the beginning, when you don't know them well, be very critical about them; it's easy to fall into a sort of trap here and follow somebody who actually doesn't know their stuff. But there are some people, and in this community I'm not saying anything new if I mention Minkorrekt, the podcast Methodisch inkorrekt: they are great for, well, I actually can't really pin down which scientific area, because in their podcast they touch on so many different things, and they have a very high-level understanding of how science works. So places like this are a good start to get a healthy dose of skepticism. Another rule of thumb that I can give is that usually stories are not as exciting when you get down to the nitty-gritty details. I'm a big fan of CRISPR, for example, but I don't believe that we can cure all diseases just now because we have CRISPR. There are very limited things we can do with it, and we can do much more with it than we could before we had it, but I'm not going around thinking that now we can create
life at will because we have CRISPR, or that we can fight any disease at will because we have CRISPR. So a general good rule of thumb is: just calm down, look at what's really in there, tone the excitement down by, say, 20%, and then take that level of excitement with you, instead of going around being scared or overly excited about a new technology. We rarely make such massive jumps that we need to start worrying, or to get overexcited about something. Very good. So, the very last question: which tools did you use to create these nice drawings? Oh, a lot of people won't like me for saying this, because it will sound like a product promo, but I used an iPad with a pencil, and I used an app called Affinity Designer to draw the things, because that works very well, also cross-device. That's how I created all of the drawings, and I put them all together in Apple Motion and exported the whole thing from Apple Final Cut. I know this now sounded like a sales pitch for all of these products, but I can say that for me they work very well, and there are pretty much alternatives for everything along the way. I can say, because I'm also doing a lot of science communication with drawings for the Plants and Pipettes project that I'm part of, that an iPad with a pencil and Affinity Designer gets you very far for high-quality drawings with very easy access. I'm in no way an artist, I'm very bad at this stuff, but I can hide all my shortcomings because I have an undo function on my iPad, and because everything is a vector drawing, I can delete every stroke that I made, even if I realize an hour later that it should not be there; I can reposition it and delete it. So vector files and a pencil and an undo function were my best friends in the creation of this video. Very good. Thank you very much for your talk and your very extensive Q&A. I think a lot of people are very happy
with your work, and are actually saying in the pad that you should continue to communicate science to the public. That's very good, because that's my job; it's good that the people like it. Thank you very much. So, a round of applause, and some very final announcements for this session: there will be the Herald News Show in the break, so stay tuned for that. Sadly, we don't have any more time, but I guess people know how to connect to and contact Joram if you want to know anything more.