this workshop on methods for synthesis of qualitative research, which is hosted here at Stockholm Environment Institute. My name is Linda Bell, and I'll be moderating this day, together with my colleague. Yes, my name is Biljana Macura. And we put together a program that I hope you will find useful. We are going to hear five presentations about five different methods. We will have 20 minutes for each presentation and 10 more minutes for questions and answers in this first part of the day, which is going to be webcast, from 9.15 to 12.30. Yes, and for our viewers on the web, please submit your questions as well. If you have questions, we'll be happy to lift them into the program, of course. And the first speaker, I think we'll introduce him right away. The first speaker is Neal Haddaway from Stockholm Environment Institute, and he will be speaking on the topic of SEI's experience with thematic synthesis, a very interesting topic indeed. And Neal, I think you will take us through a couple of different interesting experiences that you've had here at SEI, right? We will have questions in about 20 minutes, so please note all your questions for Neal. Yes, and we will be trying to be strict on time, so we will have me, Linda, or Matilda timekeeping. So please look at the green paper. OK, good. Thanks very much, everybody. Thanks very much, Linda and Biljana. As they just said, I'm going to talk to you about an experience that we've had here at SEI conducting a thematic synthesis, with the subtitle Confessions of a Quantitative Ecologist. And I have to start off this presentation with a caveat, or several caveats: I come from a quantitative background. My PhD was in crayfish conservation, which involved a huge amount of coding and statistics.
But that said, more recently I engaged in a qualitative analysis, back in 2014 actually, when I was doing a project on stakeholder engagement in systematic reviews, and there I did a thematic analysis, which is very similar to a thematic synthesis, which I'll introduce in a bit. But my background is quantitative. I'm a quantitative scientist, not a qualitative researcher, but I started to get involved with it, and it really opened up my eyes to big differences in the way that we look at things, from the way that we define data, for example, but also to the benefits of having a qualitative approach to evidence synthesis. So I'll start off by talking about our experience as quantitative researchers and the experience of doing a qualitative synthesis for the first time. I'll introduce thematic synthesis as a methodology. And then I'll just outline the results of our thematic synthesis, because I think that's interesting. So this comes from a project that started earlier this year in May, called BONUS RETURN. It's a BONUS-funded project looking at reuse of carbon and nutrients using ecotechnologies in the Baltic region. So the overall aim is to identify and test ecotechnologies for reducing emissions by turning nutrients and carbon into benefits. The problem for us was that this term ecotechnology wasn't really defined. The term was in the call, and we used the term a lot in our grant application. But after we started, we realized that there wasn't really an accepted definition of what ecotechnology means. As we started off with a systematic map within this project, it was really crucial to establish a really strong search strategy and inclusion criteria, and without accepted definitions, this is where you really hit problems. So the first part of a systematic map or review, as many of you know, is to establish the search and all of these definitions and make sure that everybody is singing from the same hymn sheet, as it were.
So we knew roughly what was meant. Everybody roughly knew what was implied. It's a similar term to ecological engineering: basically, technologies undertaken in ecological settings for the benefit of society. But anybody who works with definitions knows that you can't use the words technology or ecological in the very term you're trying to define, ecotechnology. So this is one of the problems that we had. What did we mean by technology? Are we talking about hard technology, like machinery, or nanotechnology, or something artificial that you put in the environment? Or do we mean soft technology, which could be things like practices or behaviors? So what along this spectrum of structures, mechanisms, processes, behaviors, and policies do we count as being included? And then what do we mean by eco? Do we mean in the environment? Do we mean using nature? What do we mean by nature? Or making use of biological processes? So these are the kinds of things that we wanted to know, and it wasn't immediately obvious even to the experts in our group. So we highlighted a need to understand how the term ecotechnology has been used, and we decided to do a systematic review of how it's been used in the literature. So the population that we were interested in is scientific research articles in traditional academic literature. We wanted to see how research has used the term. And we were looking for any type of ecotechnology definition. We highlighted two different types of definition: the explicit definition, where people say, an ecotechnology is such-and-such, and then also implicit or example-based definitions, where people give an example of a technology and then refer to it as being an ecotechnology. And so we decided to use thematic synthesis to build a conceptual model and generate a definition, or pick the best definition in the literature. Thematic synthesis is a method that was first published in 2008 by James Thomas and Angela Harden.
And thematic synthesis is basically the same thing as thematic analysis, if anybody's familiar with that. It has three key stages, where you code the text that you're interested in line by line. So you go through and you identify relevant text within the article, and then you pull out themes that describe the text in the article. And from those themes, you then go on to the final stage of building an analytical theme and constructing a framework. So the first stage is extracting data from your studies. The second stage is part of data extraction, but also collating and coding: so it's charting, collating, and summarizing the themes that you find. And the final stage is the synthesis or interpretive stage. You can't have a thematic synthesis without this final stage, which goes beyond just extracting and tries to look for patterns and similarities or differences across the evidence base. Thematic synthesis overlaps heavily with framework synthesis, which is pretty similar, but there you start off with a conceptual model: you go to the literature and you see how other people have built conceptual models, and you then adapt your conceptual model or framework based on the frameworks that you encounter in the literature. And both of these methods are less interpretive. Thematic synthesis in particular tries to stay true to the source material. So what you're looking for is just themes that have been described within the literature, and then you try to bring them together into a universal conceptual model. So the process for BONUS RETURN was that we had a body of literature on which we performed line-by-line coding in EPPI-Reviewer. So we went through the PDFs and we highlighted and extracted blocks of text where ecotechnologies were defined. We then went through those quotes and tried to identify themes, but those themes stayed very true to the original text.
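The three stages described here, line-by-line coding, descriptive themes, and analytical themes, can be sketched as a small data pipeline. This is only an illustration: the article IDs, quotes, theme labels, and group names below are hypothetical stand-ins, not the project's actual data.

```python
from collections import defaultdict

# Stage 1: line-by-line coding -- blocks of text extracted from articles
# (illustrative quotes; article IDs are hypothetical)
extracted_quotes = [
    {"article": "A1", "text": "technologies that make use of biological processes"},
    {"article": "A2", "text": "equipment placed in the environment for societal benefit"},
    {"article": "A3", "text": "practices that reduce harm to ecosystems"},
]

# Stage 2: collate and summarize descriptive themes, staying close to the
# original wording of each quote
descriptive_themes = {
    "A1": ["uses biological processes"],
    "A2": ["hard technology / equipment", "benefit to society"],
    "A3": ["soft technology / practices", "reduces environmental harm"],
}

# Stage 3: the interpretive step -- group descriptive themes into
# higher-level analytical themes that span the evidence base
analytical_groups = {
    "uses biological processes": "making nature work for society",
    "hard technology / equipment": "type of technology",
    "soft technology / practices": "type of technology",
    "benefit to society": "good for society",
    "reduces environmental harm": "good for nature",
}

def build_framework(themes, groups):
    """Collect each article's descriptive themes under their analytical theme."""
    framework = defaultdict(list)
    for article, theme_list in themes.items():
        for theme in theme_list:
            framework[groups[theme]].append((article, theme))
    return dict(framework)

framework = build_framework(descriptive_themes, analytical_groups)
print(sorted(framework))
```

The point of the third stage is visible in the output: the framework is keyed by analytical themes, each collecting evidence from several articles, which is exactly the cross-study pattern-finding that distinguishes a synthesis from plain extraction.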
And then, of all of the themes, we grouped similar or identical themes together and changed some of the wording a little bit to make it consistent. And then we took those themes and tried to put them into groups and build a model based on how those themes related to one another. It'll become clearer when I show our conceptual model in a bit. But we started off with about 1,200 search results. We searched for the word ecotechnology, so we only had three search terms in our search strategy, based on different spellings of the word ecotechnology. So it's quite simple. We searched Web of Science Core Collections, Scopus, and Google Scholar, and we identified 657 unique articles after removing duplicates. We were able to retrieve about half of these; some of them were quite obscure articles, which we unfortunately weren't able to get in full text. And within those 330 articles, we found 49 explicit definitions of ecotechnology, and around 73 articles where they'd given an example of an ecotechnology that related to carbon and nutrients, which was the subject of interest for BONUS RETURN. We then performed thematic synthesis based largely on the 49 explicit definitions, and then we tested our conceptual model based on those original definitions and the 73 examples. We tried to see where each of those 73 examples would sit, and then we had our final conceptual model built. Just one example I wanted to show of a piece of line-by-line coding that was extracted. It's an explicit definition of ecotechnology, and it's one of the most rich. We counted how many themes were in each definition, and this was one that had, I think, six themes within it. And it was that ecotechnology is generally understood as the embedding of human activities into the cycles of the ecosphere and also into the social, cultural, and economic organizational structure of societies through using the whole range of biodiversity.
However, in a holistic and low-invasive way, ecocentric and not anthropocentric, with the aid of efficient engineering, in order to preserve the well-being of society by obeying eco-principles, e.g. sufficiency. This was one of the better ones that we found. So you see the problem that we had: we didn't actually find a single definition of ecotechnology that was explicit enough to capture all of the themes that we identified. And so each of these bullet points is a second-level theme. So we extracted the text, we summarized the text, and then we went through all of those summary themes to make sure that there wasn't any overlap or duplication. And each of these is one of the restructured original themes. And we grouped them into nine blocks that we thought made sense. So we have things like making nature work for society, making society work for nature: those were the overarching groups there. Good for society, good for nature, good for both society and nature, talking about profitability or efficiency, combining processes, integrating nature and society, improving processes or learning from the environment, components, equipment or machinery, and then processes and behaviors. And so this structure of nine groups kind of emerged by itself as we collated those bullet points together. And we then produced a conceptual model, which is really three mini conceptual models. One concerns the type of technology: was it hard or soft technology, meaning components, equipment or materials on a spectrum through to behaviors and processes. And then the benefits were either good for nature or good for society, along a spectrum. And the processes were the third group, which was on a scale from society versus nature to society and nature, covering things like improving or learning from the environment, combining and integrating processes, and then making society work for nature or the other way around.
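The screening funnel described earlier in the talk (about 1,200 hits across three databases, 657 unique articles after duplicate removal, about 330 retrievable full texts) rests on a duplicate-removal step that can be sketched very simply. The title-normalization rule and the toy records here are my own illustration, not the project's actual deduplication procedure.

```python
def normalise(title):
    """Crude duplicate key: lowercase and keep only letters and digits."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    """Keep the first record seen for each normalised title."""
    seen, unique = set(), []
    for rec in records:
        key = normalise(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Toy records standing in for hits from Web of Science, Scopus and Google Scholar
records = [
    {"title": "Ecotechnology in the Baltic region", "source": "wos"},
    {"title": "Reuse of nutrients with ecotechnologies", "source": "scopus"},
    {"title": "Ecotechnology in the Baltic Region.", "source": "scholar"},  # duplicate of the first
]

unique = deduplicate(records)
print(len(unique))  # 2 unique records remain
```

Real reference managers use fuzzier matching (authors, year, DOI), but the shape of the step is the same: many raw hits in, one record per unique article out.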
And the terminology that was used within those definitions was quite influential in, for example, where they sat on this axis. So if people were talking about using nature or forcing nature to work for society, it would be very much up here. So it's not just the content, it's also the way that the definition was expressed. And then we've got that conceptual model again, and for the four high-information or information-rich definitions, which each had six emergent themes within them, we mapped them to see where they fit: how many of these different groups of concepts were covered by each definition. Some of them, you can see, were a bit more restricted, so the anonymous one only covered a couple of concepts. Interestingly, none of these, and none of the other information-rich definitions, talked about whether it was hard or soft. That was always something that was kind of implicit. Occasionally they talked about behaviors and processes. So we had more themes within behaviors and processes, but I think only one article mentioned hard technology, which was very surprising, because as we started out in the project, we were all thinking ecotechnology has to be hard technology, right? So this opened our eyes a bit. For the conclusions: we also identified where the word ecotechnology was used within each article. Was it within the title, the abstract, the keywords, or the full text? The vast majority of definitions were within keywords or titles, and then also abstracts, with not so many in the full text. And that gives an indication that it's being used as a buzzword. And because it was being used within those locations and then not defined anywhere else, it means that it slowly changes its definition over time, or it can slowly change its definition over time, because there was no definition to start with and no usable published explicit definition that encompassed all of those concepts.
It means that it's potentially a bit of a dangerous word to use. It was sometimes conflated with ecological engineering, although one definition that we had said that they were two related but different concepts. But it also shows an interesting change in use over time, and actually the use of the term is starting to drop; it's starting to be less of a popular buzzword now, perhaps being replaced by things like green economy or circular economy, other words that are becoming more popular. So we found it interesting that it was being used by BONUS RETURN, but there's a really good learning experience here for everyone, we think. So because of this, and because we have to use the word ecotechnology, we decided to define it ourselves, and we've used our conceptual model to make sure that every part of that conceptual model is captured by our definition. We define it as human interventions in social-ecological systems in the form of practices and/or biological, physical, and chemical processes designed to minimize harm to the environment and provide services of value to society. So we hope we've captured quite a lot there. And then just to summarize the talk as well, hopefully I've given you a bit of an indication of how we came to appreciate qualitative synthesis. It was a really useful way of taking a step back, pausing, and examining how a word had been used, to make sure that we start with the right definition. Unfortunately, we weren't able to find one in the literature, so we hope we've been appropriately brave in coming up with one ourselves, but the production of a conceptual model helped us to ensure that our definition covers all the bases that could be covered by the various information-poor definitions in the literature. For me personally, I find thematic synthesis really intuitive. You're looking for themes, and you're making sure that the themes you pull out are similar and used in the same way across your evidence base.
You're then pulling them together, clustering them, putting them into a conceptual model. It's something that I probably do implicitly sometimes when I'm reading a body of evidence, but to have a methodology for doing it systematically was really useful. And it's a very accessible one, because all the other forms of qualitative synthesis I find, as a quantitative scientist, very scary and a bit too different from what I've been doing before, but this one is quite accessible for me. And for BONUS RETURN, it's been vital in helping us start on the right page. Even if it took some time, it wasn't a huge undertaking, because we only had 330 full texts. But that's it, thanks very much. Thank you, Neal. It seems like it took a lot of time and effort; I mean, quite painstaking in the process itself. How much time would you say it took altogether? It only took about three weeks, actually. I should speak right into the microphone. So it only took about three weeks. It was the quickest systematic review I've ever done. And not being a qualitative scientist, I'm not sure how well we did it. We hope we did a good job. But it was very quick. I think being an experienced systematic reviewer meant that it was very quick to write a protocol and very quick to do the searching. It was actually a really useful initiation and training experience for our group at WULS in Poland, who are helping us out with a systematic review. But actually reading the articles and extracting themes and then pulling them together was quite a quick process. And I think it needed to be, to maintain the intellectual inertia, the kind of thought inertia. The three-week period was probably spread out at the end, where I needed time to go away and talk to colleagues. I talked to Biljana quite a lot about how to build a conceptual model. My first conceptual model was really awful and had quantitative axes on it, trying to put like-sized bubbles and things.
So that took a bit of time, but it actually didn't take much in terms of hours of work. So did you have a... There was an interdisciplinary team around you, so people with different... Yeah, was that helpful? It was helpful. None of us were qualitative scientists, so that's where we feel a little bit fraudulent, perhaps. But we had a great group of people with different backgrounds in terms of evidence synthesis and subject expertise. So that was really vital in making sure that we covered all the bases in terms of the definition and the papers that should have come up. Well, thank you so much. So, questions from the audience, please, or if we have any questions online. Yeah, Matilda. Thank you, it was really interesting. I had a question about whether you thought of it as a step, like the first step of doing a further analysis, or have you thought about it? Because I thought your thematic model in itself was really interesting. Did you do any summarizing that you would sort of spread to the commissioner as well, in itself? Like, do you consider it more as the first stepping stone before you do the searching? Yeah, we viewed it as a necessary step as part of the BONUS RETURN project, because we had to come up with a definition, and if we just made up our own, we wouldn't have much justification behind it. So it was a really necessary first step for the project. But I think it's also really useful to have a constructive and positive discussion with people like BONUS to make sure that they realize the possible risks of using buzzwords. I was thinking just that: it makes it clear for them as well what they're talking about. Yeah, yeah, I think so. And I'm not sure they really knew exactly what they were expecting in terms of innovations. I think they were quite open in terms of whether they were hard or soft technologies.
One discussion that we had was even whether serious games should be included as an ecotechnology, like policy instruments to help test policies and that kind of thing. And that was only one person out of about 20 who suggested that, and we didn't find any evidence for that in the literature, but there was definitely a tendency towards more soft, behavioral technologies. So that was quite an eye-opener, and I'm sure there'll be interest in that from the wider community, because for those people interested in those kinds of technologies, this gives them evidence that they really are technologies and can be treated in the same way, which I think they'll probably appreciate. So yeah, I think it does have a broader use, but for us, we were quite utilitarian: we needed this to be able to move on. Okay, so Caroline. Thank you, Neal. I was wondering if you could say something more about if and how you, within the team, mainstreamed your coding and interpretation to... In terms of how we did it, practically speaking? Yeah. Yeah, so we used EPPI-Reviewer, which is a review management software that allows line-by-line coding. It was produced by the EPPI-Centre, which is part of the Institute of Education in London, specifically designed for qualitative synthesis in the spheres of crime and justice, education, international development; I can't remember the other fields that they work in, but specifically for qualitative synthesis. So it essentially allows you to read a PDF within the software and highlight text that you think is relevant, whether it's a quote or text that the authors have written. It then extracts that text as a code, and you can then assign extra descriptive information and codes to that section of text. So that allows you to work in a very transparent and well-documented way, and the software also allows us to work as a team very easily.
So you can assign tasks to people and you can test consistency of coding, and that's how we did it, because we had, I think, three or four people from Poland doing the line-by-line coding and retrieving PDFs and that kind of thing. So it was a great portal for storing information and sharing workloads, which meant that we could work very easily between Poland and Sweden. I'm not sure if that answers your question. It was a very nice, interesting presentation with very interesting research results. I have also done similar work. I have two questions. First, why did you stick to only three keywords? Because these keywords also shape all your results. For example, there are synonyms for technology, which can be used in different communities, and also synonyms for eco. For example, green innovation, environmental technology, environmental innovation; you can find many, many pairs that can be used as synonyms. That's the first question. And also your result, the definition you developed, is very, very good, which I liked, and interesting, but it also reminds me of the definition of eco-innovation that we hear, which was also not included in your keywords. The reason we only had three synonyms was that ecotechnology is one word, eco-technology with a hyphen, and eco technology with a space. I should have shown it. We had a wildcard to allow plurals and different spellings at the end as well. Our scope was only interested in the word ecotechnology, because that was the terminology used in the call and the terminology that we promised in our application. So we recognized there were lots of other terms, like green economy, like eco-innovation, like environmental innovation, like nature-based solutions, but we were specifically interested in one definition.
It might definitely be interesting to have a look at definitions of other terms and see how they compare and overlap, in particular ecological engineering, because that's an overlapping term that I think would be used quite differently and by a different community of researchers. But the reason we stuck with just that term is that we were only interested in ecotechnology, so that was our narrow focus. And I can't remember the second question now. So the second question, or a comment: your final result and the definition you developed are very interesting, because it is what other communities define as eco-innovation. So actually your definition can be perceived as eco-innovation. Do you think this is a limitation of the research, or is this something else? It's interesting. No one in the project mentioned the word eco-innovation. I don't think any of us perhaps knew about the term eco-innovation, so that's something I'll take back. That's really interesting. We talked about innovation, but not eco-innovation. I don't think it's a limitation of the work; I think we were very focused on just that definition, but yeah, I think again, as with things like ecological engineering, there will be overlaps in definitions. So some interesting, useful future work might be to do the same thing on these other terms, like circular economy, like eco-innovation, like green economy, just to see how they're being used. I think there really is a danger in using buzzwords, because it keeps bodies of evidence separate and it makes our job as systematic reviewers quite challenging, because we have to try to identify these synonyms. Some of them overlap totally, some of them don't overlap completely. So if everybody could just agree to use the same word, it would be really nice, but I know researchers, and they won't. But it would be interesting to do this on terms like eco-innovation, so that's great. I'll take that away, thanks.
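The three spelling variants plus trailing wildcard described in this answer can be captured in a single pattern. The regex below is my own sketch of that idea, not the project's actual search string, which would be written in each database's own query syntax.

```python
import re

# One pattern covering "ecotechnology", "eco-technology" and "eco technology",
# with a trailing wildcard for plurals and derived forms (my own regex, for
# illustration only)
pattern = re.compile(r"\beco[- ]?technolog\w*", re.IGNORECASE)

titles = [
    "Eco-technologies for nutrient reuse",
    "an ecotechnological approach",
    "ECO TECHNOLOGY in practice",
    "ecological engineering methods",   # a related term that should NOT match
]

hits = [t for t in titles if pattern.search(t)]
print(len(hits))  # 3 of the 4 titles match
```

Note how the last title illustrates the narrow-scope point: ecological engineering is deliberately outside the pattern, because the review was about the word ecotechnology itself.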
Okay, thank you so much, Neal. Give him a hand. Thank you. And you will be around today if there are further questions. Yes, all day. Because I think it was really interesting. Okay, the next speaker. I should also say that Neal's bio is attached to the program, as are the other speakers' bios, if you want to read more about their backgrounds and so on. So the next speaker is Monika. Suškevičs, do I say that right? No, you correct me, please. Monika Suškevičs. Suškevičs, yeah. From the Estonian University of Life Sciences. Very happy to have you here, and you'll be speaking on the topic of social learning and natural resource management, right? Okay, thank you. That's correct. Yeah. So, dear colleagues, it's really nice to be here. Thank you for this opportunity. As said, I work at the Estonian University of Life Sciences as a lecturer and as a postdoctoral researcher. And maybe I'll also say a few words about my background; then you will maybe better understand the choices I have made and the example I will be showing you. So my background is basically connected to different kinds of concepts that include stakeholder engagement and stakeholder participation, and the application fields are connected to nature conservation, ecological networks, and spatial planning. So this is basically my background. And I should also say that I am not a systematic reviewer. I have mostly experience in qualitative primary research, so doing interviews and textual analysis. But the reason I am here is that Biljana asked me to briefly introduce what framework synthesis is and to give a brief example of it. So I'll try to give that example, bearing in mind to highlight some of the questions that a novice researcher might encounter when he or she is doing such work. So maybe briefly: if we depict different kinds of synthesis methods on an epistemological continuum, then we can see that framework synthesis lies among the more realist types of methods.
And it means that you are less interpretive, but you more summarize or aggregate the results. So what is it? The term practically comes from primary research; it comes from framework analysis. And it's practically a way or technique to criticize or to rebuild an existing framework. Or you can also choose a new one, or build a new one if you don't find an appropriate one. And because it's quite structured, and it's also said to be less time-consuming compared to the other qualitative synthesis methods, it's also quite popular for policy advice, because it's supposed to provide quite precise answers to the questions. And what are the basic steps or phases in the synthesis? So everything starts from a question. Where do you get this question? It's usually the background literature that you consult, but you can also consult stakeholders and policymakers. And then, in parallel, you can build your framework or select an existing one. And then you search for the literature, you include the studies, and then comes data extraction. And once you reach the results, you will hopefully get a new frame, or a refined frame, which you can then use to criticize the existing framework and also the broader background literature. And if your question is more precise, then you can also provide some policy advice or some practical advice. So maybe now about the example. It's about the recent review that I have been coordinating and doing as part of my postdoctoral studies here at Stockholm Resilience Centre a couple of years ago, but it was quite recently published. And so I will talk about this. I should also mention that it's not a broad, how to say, project; it's basically my postdoctoral project, and I had good support, but it's not a big research project in this sense. So about the topic: it focuses on learning in natural resource management. And why did we choose such a topic?
The reason is basically that in natural resource management, the learning concept has gained quite some popularity, and it's said that you can get a lot of positive outcomes from taking this concept into practice. So you can have better practical management decisions, you can have changes in the institutions, you can ultimately have better environmental effects, and so on. So we were interested in what the literature says about it. What is there? And then maybe about the challenges or questions I encountered. This was an interesting journey for me, actually, because it was the first time I was doing this at the synthesis level, not working with primary qualitative data. And I had a lot of questions in my head when I started this work. The first set of challenges was how to build or select the framework. In the methodological literature, it's usually said that you should select an existing framework. But what if you don't find a suitable one that would suit your research question and your data? And which disciplines do you include? Because learning is a very broad concept that you can find everywhere from the educational literature to organizational studies to natural resource management, to which it has traveled. And which concept to use? You use a learning-based concept, of course, but also which kinds of natural-resource-related concepts do you use? So we chose to build a new framework. It's not totally new, of course, but it's suited for our purpose. What you can see here, in the upper corner on the left-hand side, is what has been more elaborated in the literature: learning outcomes from the perspective of cognitive change, relational change, and skills.
But the newer ones, so to speak, which you can see on the right-hand side, are about how these outcomes translate to the actual practice of natural resource or environmental management. So this is our preconceived set of topics, or frame. Then, once you have the framework more or less and you start searching the literature and including studies, it's by no means a linear process. So you can have a lot of confusion in your head: for example, what is the appropriate overall search strategy? And what are the keywords or suitable search strings? And once you have searched the literature and you have the pile of studies, how do you include or exclude them? What are suitable criteria for that? And here, actually, I think this was the phase where I struggled, or we struggled, the most, because there are very few really detailed examples from the natural resource management literature. There are examples, of course, from other domains, from the health domain, but not so many specific to the environmental domain. And even when there are examples of keywords that have been used, there are not many examples that detail the exact approach at, for example, the search-string level: what are the exact search strings that have been used? So it's very difficult to comprehend and to invent a good strategy as a beginner. And so we ended up with this kind of search model, or search depiction. And we went from over 1,000 records to about 50 final papers included in the analysis. So we call this a systematic literature search, but the analysis is more qualitative. And what about the final phase, data analysis: what should one do with the literature? So what were our questions and challenges in this sense?
So maybe one of the most important ones was that we needed to revise the categories in the initial framework quite a lot. When doing the analysis, you find new topics. And as Neil also introduced in the beginning with the thematic synthesis, framework synthesis often goes hand in hand with thematic synthesis. So you find new themes and new relevant aspects that you need to include. You need to revise your framework, but not too much either, because it's still a preconceived set. So you need to find the balance. And it's maybe a continuous balancing act between generalizing your findings and accounting for the context in the individual studies. So it's not the quantitative counting of numbers, of course. Also, at the practical level, the language use was very different in different studies. They all came from natural resource management, but that is a very broad field. Even the learning term itself is conceptualized or dealt with very differently in the literature: in one set of literature it can be a process, and in another set it can be an outcome. And the outcomes that we identified are likewise sometimes considered a process and sometimes an outcome. So our approach was to try to stay as close as possible to the meaning of what the original article said. That was the strategy we took. Here you can see the revised framework that we built, with the topics. And to sum up, this is a way to build a conceptually and empirically grounded framework that you can then use, for example, for criticizing an existing framework or model, or for building more practice-related topics in this field.
But of course, as I mentioned, it was a challenge in the sense that this was not a big research project but more or less a postdoctoral, nearly individual study. I think that in the future a team would be really important here, and also time, because good results don't come just like that. So, many thanks, and I hope there was something of use in it. Thank you so much, Monica. I think it was very interesting, and you were talking about this continuous balancing act. So was there a moment where you were almost despairing, or did you at the same time feel that it would be okay in the end? It seemed quite difficult for part of the project, at least. Yeah, well, at some point you just need to draw the line, that here is the boundary and you have to finalize your findings, no matter how you feel about it. But of course, yeah, it's a balance between the context and the general findings, the general topics and issues. Well, thank you so much for sharing. I think we have some questions from the audience about this work. Anybody? Yeah, down there. Hi, my name is Anna. I share your experience, but I did it during my PhD. It's nice to have supervisors, but it would be better to have a team. I agree very much, so just a quick question: how did you settle on it being a framework synthesis? Did you settle on it from the beginning? Because I did exactly the same, but in my case I had taken a systematic review master's course a couple of years before, then thought it would be a good idea to do a review, and while reading the books about systematic reviews I thought, okay, framework synthesis might work. But right now, after Neil's presentation, I'm wondering if I should have done a thematic one instead. So just your reflections, how did you arrive at it? Yeah, thank you.
Yes, it's a very relevant question, actually, and I forgot to mention that, so it's very good that it came up. No, actually, when I began I didn't know too much about the options I had. I knew that qualitative synthesis is something you can choose, but what kinds of different methods exist there, I actually didn't know. So it came along with the process: I was reading the methodological literature from other fields, and I contacted my colleague Biljana about the systematic searches part. The thematic and framework synthesis methods I also learned from the literature, and then I consulted my colleagues and got to know the methods better, and then I thought, oh, maybe this would be appropriate for the question and for the topic. So it wasn't clear from the start. Okay, other questions from the audience? Over here. Thank you for the interesting research results. I also have two questions. First, once you got these 53 full texts, how did you read them? Did you look for specific keywords to find the learning? That process I really want to understand, to learn from you. And second, how did you deal with the limitation that Web of Science only looks at the abstracts? Maybe there was very important information in the full text in the first stage, when you were looking for the keywords. Yeah, well, maybe I'll start with the second question. We took it as a pre-assumption, of course, that the most important information is reflected in the abstract, the keywords, and the title. But of course, if it isn't, then that's something we missed. And there are, of course, different kinds of approaches you can take, for example screening through a certain number of the results from each page; that's also, I guess, one way of approaching it. But yeah, basically you take it for granted that the abstract reflects the content.
And the first question: well, basically, in addition to the framework synthesis I also applied thematic synthesis, which I didn't go into in detail just now. So the second part was actually a thematic synthesis. We built themes from the basic terms that emerged from the text and from the excerpts from the text. So this is basically the thematic analysis that Neil was explaining. That was the approach. Okay, any other questions from the audience here? Do we have anything from the web as of now? Okay, well, thank you so much then, Monica. Would you have something you want to ask? I just wanted to ask, would you now use different methods than the framework synthesis for the first part? Or would you go with the thematic synthesis all the way, or are you happy with the outcome of what was produced by the framework synthesis? I think it provided quite a good structure in this sense. But for future work, I think it would also be nice to be more inductive, bottom-up. But this, I think, provided a good answer, more or less. Thank you so much again, Monica. Thank you. Okay, so for the next speaker, we will be using not our ecotechnology, perhaps, but our other technology, by connecting Ruth Garside from the University of Exeter, who will actually be giving us two talks this morning. The first one will be an introduction to realist reviews. A very interesting topic indeed. And after that first talk, we will have a coffee break, so you can stretch your legs a bit. And then Ruth will come back and talk about meta-ethnography, also a very interesting approach. And I'll try and gather questions from you afterwards. Thanks. I'm just trying to share my screen. Can you see that? You're on the big screen. Yes, you should be able to do it.
Ruth, can you put on your headset, or at least your phone's earphones? Do you have headphones? Ruth, can you put on a headset? Great, because we can hear ourselves echo. Much better. Shall I share my screen? Can you see that now? We can see the screen. It's small, though. I'm just trying to put it into slideshow mode. It doesn't seem to like it. I might just close it and reopen it. It seems a bit up and down again. Hello, what was that? Are you on your phone, Ruth? What was the question? Are you on your phone? Am I on my phone? No. Okay, because the connection is showing up and down. It's very, very difficult to hear you. Sorry, everything's suddenly gone slow; I'm just trying to get my PowerPoint up. Hi. I'm thinking maybe we should run your PowerPoint from here and have your voice only. Can you hear me? I can't hear you properly. The sound from your end is very poor. I wonder if we should just reconnect. Yeah, okay. Can I call her? I will call her again. Should I tell her that we are going to run the PowerPoint from here? Ruth? Can you hear us better? I'm trying to call her. Should I call her on my phone? It seems to be the internet connection at her end, so there's nothing we can do from here. We heard you there for a second, Ruth. She's not answering the chat. Yeah, maybe if you turn it off. Oh, Ruth? No, we didn't set up the Skype to... oh, I see what you mean. She's not answering the chats here, so there's not much we can do. Okay, so a switch in the program because of technical problems.
So thank you very much, Karolina Fredriksson from Skolforskningsinstitutet, for now speaking on experiences from conducting a systematic review based on qualitative research and metasynthesis. Very interesting. You also have a few copies of your publication here in Swedish, but we have many happy translators in the room, so don't be shy. Thank you. I also have an information sheet if you want that instead. So, I'm at the Swedish Institute for Educational Research. We started in 2015, and this autumn we published our first systematic reviews. I'm going to talk about one of them. We do systematic reviews for teachers, on teaching, so that teachers are able to use them when they plan and perform their teaching. I will stick quite close to my manuscript; my English is not as good as the previous presenters', but I will try to take it slow so you understand what I'm saying. We conducted this systematic review in a project group consisting of my colleagues at the institute: Ida, who is here today, Eva Bergman, Maria Bergman, and Sara Fundell. We also got great help from another colleague, Linda Ekström, with responses on the manuscript. And we also had two researchers in the project: Eva Norén, who is at Stockholm University, and Joakim Samuelsson at Linköping University. I'm going to concentrate this presentation on the actual synthesis, but also briefly say something about the different steps in the systematic review process. As you know, there are different steps in a systematic review: the search strategies, inclusion criteria, relevance and quality appraisal, and the synthesis. But beyond these commonalities there is considerable diversity, as you know, since the methods used depend on the review question and the primary research included in the review. And one thing that differs is the synthesis. Syntheses are broadly characterized as either aggregative or configurative.
We have done a qualitative analysis, and so we have followed the logic of configuration, while aggregative synthesis follows more of a quantitative analysis. For configuration, we have a definition from Sandelowski: it is the act of placing study findings alongside one another in order to build up a picture of the whole and how they relate to one another. The literature that I have cited is listed at the end of the presentation; you can find it there if you want to read more. So, you know that there is an ongoing discussion about the need to develop methods for synthesizing this kind of research, qualitative results, and one example, of course, is this workshop. A large part of educational research is of a qualitative character, and the discussion is going on there too. One example of this is from an article about the need for synthesizing educational research. The systematic review that I'm talking about is about mathematics education, so that's why I'm using this example. So: mathematics education has benefited from qualitative methodological approaches over the past 40 years. Yet although the number, type, and quality of qualitative research studies in mathematics education have increased, little is known about how a collective body of qualitative research findings contributes to our understanding of a particular topic within the field. And they also say, in other words, there is a lack of knowledge about how to integrate or synthesize findings across qualitative studies in mathematics education in order to influence policy and practice. So, that was a bit about the background. The search strategy centers around our research question, which was the following: what are the characteristics of classroom dialogue in mathematics that involve pupils in collaborative mathematical reasoning, and what characterizes the teacher's role?
So, it's about whole-class classroom dialogue in mathematics. Classroom dialogue in mathematics can take on different forms and thereby involve pupils in different ways. A quite common form is that the teacher dominates the conversation and the pupils are passive receivers of the instruction, participating only with short answers to the teacher's questions. We were interested in finding out what characterizes classroom dialogues in which pupils are active participants, and what it is that teachers actually do to facilitate the pupils' active participation. From our research question, with the help of our information specialist, we constructed a complex search string, divided into three different blocks, and carried out searches in international databases. And we had the following inclusion criteria. We were interested in empirical findings in which the participants were teachers and pupils in compulsory school. The teaching method should be whole-class dialogue in mathematics. The results should concern the relation between the teaching and the pupils' participation: it should not only be about pupils' participation and a description of that, and it should not only be about the teaching; it should be about those two things interacting. And the results should be in the form of recorded observations of the dialogue, so recordings of the dialogue between the pupils and the teacher. And it should be regular teaching in the classroom. So our qualitative results concern interaction. It's not about the pupils' or the teacher's attitudes or their perceptions of something; rather, it concerns what they do, their actions, how they interact. And I'm going to go through this because I don't think you can see the numbers, but this is an illustration of the number of studies that went through the relevance and quality appraisal. Our search in international databases resulted in 10,528 studies.
We at the institute first went through the studies in order to sift out those that were clearly not relevant, and then we ended up with 900, which the researchers then went through. They first read title and abstract, ending up with 195 studies. Then they read the studies in full text and ended up with 25 studies. And after the quality appraisal, we had 18 studies left. So there are 18 studies in the systematic review. And then to the quality: I haven't prepared any PowerPoint on the actual extraction of the data or the results, but maybe I can say something about this later. We analyzed the studies by comparing them and looking for differences and similarities. All articles are about classroom dialogue characterized by pupils' active participation, but they have slightly different focuses; you could say they detail different things. Some of the articles characterize pupils' engagement in different kinds of talk, where the kinds of talk are defined as, for example, exploratory talk. And that was what we were interested in, the exploratory talk, because there the pupils are engaged and participate, unlike in disputational talk or in cumulative talk. And we had articles, results, that describe the teacher's and the pupils' different roles. They detail this and also describe changes in these roles over time, for example. One thing they describe is different kinds of questions that teachers ask, open and closed questions; that's an example. And we had articles that nominate, or give names to, certain teacher actions that promote pupils' participation. For example, the teacher revoices a pupil's statement or reinforces valued behavior. The terms that I have in, what is 'fetstil' in English? Bold. Those are concepts; they are the researchers' interpretations of the interaction, you see. So we are dealing very much with concepts in the analysis.
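The screening funnel just described can be laid out as a simple table, using the counts from the talk; the stage labels below are paraphrased from the presentation, and the retention percentages are computed here rather than stated by the speaker.

```python
# Screening funnel using the counts mentioned in the talk.
# Stage labels are paraphrased; retention is computed relative to the
# previous stage, which the speaker did not state explicitly.

stages = [
    ("database search results", 10528),
    ("after first sift at the institute", 900),
    ("after title/abstract screening", 195),
    ("after full-text screening", 25),
    ("after quality appraisal", 18),
]

for (label, n), (_, prev) in zip(stages[1:], stages[:-1]):
    print(f"{label}: {n} ({100 * n / prev:.1f}% of previous stage kept)")
```

Funnels like this make it easy to see where the review lost most of its material: the sharpest cut here is the initial relevance sift, which is typical for broad database searches.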
And we also had articles that identify different kinds of interaction norms that govern the conversation, like social and sociomathematical norms. So, in brown, these were the kinds of results that we had in our studies. And we wanted to see how they relate to one another, and we tried to build up a picture of the whole. Here in dark blue we have our questions. We were interested in classroom dialogue, as you see in the round circle there. One of our questions was what characterizes these kinds of classroom dialogues. We had the results that detailed different kinds of talk, and we highlighted exploratory talk as a kind of talk that involves pupils. And we had the question of what characterizes the teacher's role, and we had results about that. Among other things, the teacher actually supports the pupils' participation in exploratory talk by, for instance, asking open questions, listening carefully to the pupils' ideas, and making use of them in the conversation. The studies were about that and described it in detail. And this is another role that the teacher has: IRE, which stands for initiation, response, and evaluation, where the teacher dominates the conversation and the pupils' tasks are limited to providing correct, often short, answers. And the results go on. We had the results that denominated, named, the teacher's different actions: the teacher exemplified, the teacher reinforced, the teacher revoiced. This is about actions that promote the pupils' actual participation in the dialogue. And we also had other kinds of teacher actions, where the teacher asked the pupils to explain their solutions, motivate their solutions, or compare different kinds of solutions. These were actions that engaged the pupils in the mathematics. So they were slightly different kinds of actions. And we also had these results concerning different norms.
So these different kinds of engagement, in the conversation and in the mathematics, concern different kinds of interactional norms. We have the social norms, promoting pupils' participation in the dialogue, and we have the sociomathematical norms, promoting pupils' participation in the actual mathematics. I won't go into the results so much; this is just an illustration of our synthesis. But we have concluded that the teacher faces two main challenges when it comes to engaging pupils in exploratory talk. One is to engage the pupils in the dialogue at all, that is, so the pupils contribute to the conversation and also listen to what the other pupils say. The other challenge is to engage pupils in genuine mathematical inquiry, so that the conversation does not only stay at the surface level, for example concerning procedural issues or how to go about things. And this is what the synthesis is about; the studies go into these kinds of challenges. So we say that we have done a metasynthesis inspired by meta-ethnography. By metasynthesis we mean a generic term representing a collection of methodological approaches which represent rigorous attempts to render what exists within a body of evidence-based qualitative studies into a coherent and synthesized product. So the metasynthesis is about the actual synthesis, not the whole process of conducting a systematic review. And we are inspired, we say, by meta-ethnography, and especially what they call line of argument. The citation begins: a line-of-argument synthesis recognizes that people often study different aspects of a phenomenon, and that it might be possible, by thinking this through, to offer a fuller account of the phenomenon by arranging the studies' metaphors or concepts in some order that allows us to construct an argument about what the set of ethnographies says. That is a line of argument.
So it's a way of collecting the studies and saying what they say taken together. And how can this knowledge be used then, these qualitative results? How can it be generalized? We say that this research does not provide quick fixes for teachers, actions or activities guaranteed to work regardless of context. Pupils are different, both in terms of abilities and experience, and other contextual factors also vary. However, these results can provide useful knowledge of actions that have the potential to engage pupils in collaborative mathematical reasoning, and maybe more importantly, how and why these actions are of importance. Here I will show you some quotations that are in line with our argument about generalization. They are from Larsson, who says that generalization is about the potential use of a piece of research: it is an act which is completed when someone can make sense of situations or processes or other phenomena with the help of interpretations which emanate from research texts. And we can compare the use of a substantial body of qualitative research with the development of a diagnostic repertoire: it is an interpretational tool for identifying patterns in the everyday world and making better sense of the world around us. And here is the literature that I have cited in this presentation and that we had a lot of help from when we did our analysis. Thank you so much, Karolina. Very interesting. One question that came to my mind immediately is, of course, your focus on this particular group. The teachers are supposed to use your work in a very direct way, which I think is a little bit different from some of the other reviews or approaches we are seeing today. Also, I think we will speak about stakeholder engagement in the afternoon in a more general way, but this is interesting. So how will you go about actually getting the teachers to make use of this? Yes, that's a good question. That's a very good question.
I think the institute is still preparing for it, but when we wrote it, we thought about the teachers very much. So we have very many transcripts from dialogues illustrating our line of argument, you could say, so that they can recognize themselves in the situations, and so that the report is easy to read. That's interesting. So you actually crafted the report partly with that in mind? Yes. Yes, okay, that's interesting. Okay, yes, Matilda. Can I just follow up on that? Because I was curious: is this a method that is used in Sweden, these mathematical talks? Is it something you know is used by Swedish teachers, or did someone else think this might be something that could be introduced? You have it in the curriculum that they should, I'm not very good in English, I'm sorry, but it's written in the curriculum that they should talk mathematics and develop their proficiency in reasoning mathematically. So it was very closely connected to the curriculum. So it's actually prescribed, it's actually written that this is a method they should use as teachers, is that correct? I think it's described as a goal for the pupils to reach, and this is then about how the teacher can promote the pupils' active participation in the mathematical dialogue. Questions? Any other questions? Biljana. Metasynthesis, that you mentioned: how often is it used, and in which contexts is it usually used, if you can summarize in a sentence? I didn't quite understand. So, metasynthesis, how often is it used, and for which kinds of questions is it usually applied? Okay, I think metasynthesis is a broad concept. It's a more generic term that encompasses different methods for qualitative synthesis. Sorry. So one question I had was that it seemed to me quite similar to the thematic synthesis that we'd experienced. I wonder whether thematic synthesis is a type of metasynthesis.
But I guess what I saw as different between yours and ours was that mine was very different because every research article was dealing with the same conceptual model, whereas perhaps in your synthesis you had different parts of your conceptual model dealt with by different groups of studies. So that was the way the meta-ethnography influence came in: you were building a bigger picture than you could with any individual article, because the studies were focusing on maybe just this part, and some on that. Is that right? Meta-ethnography is bridging conceptual models across different studies, whereas thematic synthesis is more aggregative? Yeah, maybe. But meta-ethnography also describes other kinds of synthesis: a refutational synthesis, where the results contest each other, and the reciprocal, where the results are about roughly the same thing but use different concepts. And we used this line of argument because we thought it fitted our purpose. Yeah, so that goes deeper than thematic synthesis, then. Yeah, maybe. I don't know. That's very interesting. Thank you. I'm curious, since this is your first review: did it end up being what you thought it would be like? Not just the process itself, but also the sort of conclusions. Do you have any lessons for the future? Yeah, we have learned a lot, I can say, we learned a lot. But, well, we are quite content. We ended up with a report. We had very good researchers in our project, I should say, and a good climate at the institute for working collaboratively with the method. Yeah. Okay, thank you so much. Thank you. Thanks a lot. Okay, so after this reshuffling of the program, we will have a coffee break now, and during the coffee break I think we'll work on getting the Skype connection going, yes. So we welcome you all back in about 30 minutes, or 25 minutes, perhaps. Oh, sorry, excuse me, sorry.
Oh, I was only letting you go out here in the snow to play. No, sorry, please be back in 15 minutes. Thank you. So, we're trying again, hopefully more successfully this time. We will now hand over the word to Ruth Garside from the University of Exeter, who will start by giving us an introduction to meta-ethnography, right? And then we'll have a few... Sorry, I'm sorry, excuse me, I will skip the realist. We had too much realism here in the morning, so that's why, no. But, sorry, the first talk will be the realist reviews, of course. And then we will have questions from the audience, a little break for Ruth, and then go on to the next talk, on meta-ethnography. Okay, so please, go ahead, Ruth. I hope you can hear me all right, and sorry for the technical difficulties earlier. I'm going to give a brief introduction to realist review first this morning. My name is Ruth Garside. I'm a senior lecturer in evidence synthesis at the European Centre for Environment and Human Health, which is part of the University of Exeter Medical School. My role is to do systematic reviews and evidence synthesis across a range of different topics. I'm particularly interested in complex questions that require the use of a range of different types of evidence, and especially in qualitative evidence synthesis. I'm a co-convenor of the Cochrane Qualitative and Implementation Methods Group, and I have been working on different methods of qualitative synthesis for about 10 years now. I think my presentations are going to be perhaps slightly different from the ones you've had this morning: I'm going to give a kind of overview of the method first and some of the assumptions behind it, the sorts of questions that it answers, and then give some examples. In this case, I'm giving one example, but we can talk about other possibilities at the end if that's helpful. So, if I can make the thing move on, there we go.
So, what is a realist review? The basic questions first. This is a type of systematic review which is often referred to as theory-driven, and its goal is more about explaining and understanding what's going on, rather than judging and summarizing the evidence that is out there. So it takes a slightly different perspective from reviews of purely quantitative or purely qualitative evidence. It's sometimes referred to as a logic of inquiry, a mechanism for looking at things, rather than a methodology. And it can incorporate both qualitative and quantitative evidence. Some of you may have already seen this helpful diagram from the EPPI-Centre, from James Thomas and colleagues, which tries to explain different types of synthesis approaches, from the more aggregative over here on the right to the more configuring or interpretative. And they think of these, if you look at the row along the bottom, in relation to theory. So a meta-analysis or content analysis is looking for the sum of what's going on and looks to test theory: you're looking to test hypotheses and see whether or not they're true. And although there is a lot of talk about reducing bias and how objective these kinds of reviews are, anyone who's done them will tell you that there is interpretation about what goes into the review, how you define the question, and so on; but this tends to happen before and after the synthesis, not in it. So there are ways of framing the question or of using the results from the synthesis. And then, right over on the left-hand side here, are the more configuring or interpretative reviews, and they've got meta-ethnography here as an example, which I'm going to speak about next. In this case, the interpretation happens during the synthesis to build meaning, and you're generating theories: you're using empirical data to propose theory.
And then realist review, which is what I'm speaking about today, and framework synthesis, which I think you've already had a presentation about, are exploring theory. So that's just one way of thinking about the different approaches, their relationship to interpretation, and their relationship to theory. Some people have said, well, there are so many methods out there already, loads of methods within quantitative systematic review, lots of methods in qualitative evidence synthesis, so why do we need another one? And it's been proposed as a way of thinking about complexity in questions which are subject to systematic review. You can also have realist evaluations of complex interventions in primary research. It takes a different kind of perspective from traditional positivist research, particularly in the recognition that interactions between mechanisms and context are really important. So it's not just does something work; it's how does it work, in this context, with this mechanism. And it wants to explore, create, and sometimes test mid-range theories of impact. A mid-range theory is somewhere between the nuts-and-bolts daily stuff and the grand theories of everything. It has to be something which is explanatory and slightly abstracted, but it also needs to be quite close to the data. And according to Mark Pearson and colleagues, realist approaches are grounded in the realist philosophy of science, which holds that it is possible to discern generative mechanisms, but that they have to be thought about within the social systems in which they operate. So we're not looking at very pure positivist ideals about what works, treating social systems as things which bias the findings; rather, we're thinking about them as potentially very important aspects which may be the reason, or the way, in which something becomes successful or not. So it does have a different sort of epistemological and ontological background.
So, having said that realist evaluation and realist review are about complex interventions, here are some definitions of what "complex" means; these are taken from the MRC report on complex interventions in the context of health, so they mostly come from public-health-type interventions. This is thinking about the fact that there may be interacting components in both the intervention and control groups, and that there might be a number of different behaviours that need to be changed or influenced by an intervention. Those behaviours might belong to the people receiving an intervention, but also to the people delivering it. It may also be that lots of different groups are targeted. So, for example, a big obesity programme might contain lots of different types of activities, which might include trying to influence schools' behaviour, or the behaviour of councils or local planning people; it might be targeting children and adults, people who are obese and people who are not. Those types of complexities in an intervention soon add up. It may also be that there are a lot of different outcomes which you're hoping to influence, and that they may be very variable as well. And there may need to be a degree of flexibility or tailoring in the interventions to ensure that they work. I'm sure all these aspects are very familiar to people thinking about conservation and environmental management, where the interventions you're looking at are almost invariably complex, so realist approaches are potentially very useful. They all really came about from trying to, as we call it, unpack the black box, where there may be very long and complex chains of influence. 
So, where doing intervention A leads not just to C, D and E, but maybe to X, Y and Z further down the line, these chains are very complicated, and in the past we've been quite bad at articulating exactly what is going on in the black box "where the miracle occurs". So, we're hoping to unpack the black box. We're thinking about chains of causation which may not be linear, may be very long, and may have feedback loops within them. And importantly, we're thinking about interventions where people, and how they respond, are what makes something successful or not; those inevitably become complex interventions, and they may be highly context-dependent. One way of expressing this is to say that the causal relationship between two events can only be inferred where the underlying mechanism of action, and the context in which that mechanism occurs, is fully understood. In realist terms, this is often expressed as: the mechanism of action within a specific context leads to a particular outcome. The implications are that the same outcome might be achieved through a different mechanism of action in a different context, that mechanisms may not operate the same way in all circumstances, and that the same mechanism may have a different expression in different contexts. You sometimes hear these referred to as CMO configurations. The language in realist evaluation and review is quite specialist, and I think people find it quite off-putting, but context-mechanism-outcome configurations are one way that people talk about realist reviews and realist evaluations. So, the type of question that a realist review aims to answer, and again this is a very common set of words you hear in relation to realist reviews, is not just "what works", as in an effectiveness review. It's explicitly: what is it about this programme that works, for whom, and in what circumstances? 
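The CMO idea above can be sketched as a tiny data structure. This is purely illustrative and not part of any realist toolkit: the configurations, contexts, and mechanisms below are invented for the example, and in real reviews CMO configurations are narrative statements, not code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CMOConfiguration:
    """One context-mechanism-outcome configuration: in this
    context, this mechanism fires and yields this outcome."""
    context: str
    mechanism: str
    outcome: str

# Invented, illustrative configurations: the same outcome is reached
# via different mechanisms in different contexts.
configs = [
    CMOConfiguration("urban school", "make walking feel safe",
                     "more active travel"),
    CMOConfiguration("rural school", "organise shared drop-off points",
                     "more active travel"),
]

# Grouping by outcome makes the realist point visible: asking only
# "does it work?" hides the distinct context/mechanism pairs behind
# the same headline outcome.
by_outcome = {}
for c in configs:
    by_outcome.setdefault(c.outcome, []).append((c.context, c.mechanism))

for outcome, pairs in by_outcome.items():
    print(outcome, "<-", pairs)
```

The grouping step is the whole point of the sketch: one outcome key maps to several context/mechanism pairs, which is exactly the "same outcome, different mechanism in a different context" idea in the talk.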
So, it's focusing more on how and why things work or fail, and trying to understand how those things come about. This diagram tries to capture that: a programme or intervention may change the context or the mechanisms, which changes the interaction between mechanisms and outcomes, and produces a set of outcomes. The other thing that realist reviews try to do is to consider both intended and unintended consequences; one of the problems with complex interventions is that they sometimes do things you were hoping they wouldn't, or hadn't even thought they might do. And thanks to Andrew Booth for this slide. In terms of process, this slide shows what's labelled a traditional Cochrane review, but any traditional quantitative systematic review has these steps. Realist reviews also have a fairly structured set of activities, but they may be different from what you're used to in a standard systematic review, and they tend to be a lot more iterative. That original slide I showed you, with the configuring versus aggregating systematic reviews, said that for a traditional review the interpretation is done before the protocol is written, and maybe after the analysis has been done, but it doesn't happen through the synthesis; whereas for a realist review these are iterative processes, so the interpretation may be happening as you go along. That means the early stages are often about trying to define your terms, to understand what the correct question is for a particular topic, and to articulate candidate theories (and theories here are just explanations). Likewise, the searching may be much more iterative, and inclusion criteria might change through the process, as you realise that things you didn't know were going to matter do matter, and things you thought might be important maybe don't have the kind of information that you want. 
And you don't necessarily appraise the quality of studies in traditional ways either; it's a very pragmatic way of thinking about quality. Studies might be included in the review because they provide a particularly relevant contribution to the theory development, which is not a traditional way of quality-assessing. Similarly, the extracted data may not be the same from all papers, which again would not be normal in a traditional systematic review. The data is synthesised to achieve a refinement of programme theory, and the programme theory is to determine what works, for whom, and in what circumstances. So, again, it's a very precise definition of what you do and what your outcomes are, but it may differ from other ways of doing it. The other parts I think are more similar: dissemination and recommendations are the same throughout, but with a different focus. So, these are the key steps: clarifying the scope, searching for evidence, appraising and extracting data, synthesising evidence, and disseminating; but again, these may go back and forth, and you may do some stages several times and refine them through the process. I won't go through this in detail, but I can share these slides. This is just another way of thinking about what questions are being asked at different stages of the review, and you'll see that it has a focus on defining what things are: what's the nature and content of the intervention, what are policies trying to do, for example. And the searching is all about trying to develop programme theories, so it's all about explanations for how things work in particular situations. I've taken this from a paper by Rycroft-Malone et al. 
So, one of the other ways of thinking about what's going on is to realise that interventions and policies are theories; that's a core informing belief, if you like, of realist review. Even if something isn't very well articulated, and there isn't a programme theory, or a logic model, or maybe even a textual explanation of how we think it's going to happen, there are always implicit, if not explicit, theories behind interventions: we're assuming that if we do this, then something else will happen as a result. So, I'm just going to present very briefly, for the last few minutes, a realist review that I've been working on with colleagues in the UK, which is looking at social prescribing. Social prescribing is becoming increasingly popular in the UK, though it doesn't have a particularly well-developed evidence base. The idea is that within primary care, for some patients, the appropriate prescription is not necessarily a drug or health technology; it may actually be something social. So, for people who have, for example, ongoing mental health problems or stress, what they might need is encouragement to join social groups doing artistic or physical activities, or involved in charity work, or whatever. And this has been taken up with great enthusiasm, because the NHS is running out of money, but without very many ideas about how these schemes might work. One of the things we said was that the first bit of that process is completely not understood: nobody has said what the service referral needs to look like. It doesn't matter how effective your knitting group or your physical activity group is if you don't actually get transferred from primary care into the thing. So we wanted to use realist approaches to try and explore and explain different methods of referral into social prescribing, and to say why they may or may not work. 
So we were primarily trying to develop theories about how we might best support people into social prescribing. We did a lot of searching, in two main phases. The first was to identify different processes for social prescribing and use these to develop programme theories; and by programme theories I mean that we developed a series of if-then statements as a way of expressing them. Once we had the if-then statements, we asked our advisory group to prioritise the list, and we then did targeted searches to try and find evidence which would support, or not support, the if-then statements we were developing. So we had a broad set of searches and then a very targeted set. Overall, we extracted information from 109 papers, of which quite a small proportion were conceptually rich and drove a lot of our theory development, while the others perhaps just offered a little bit of supporting information. We then used these to develop programme theory, expressed through these if-then statements, and we identified three key stages: how you enrol people, how you engage somebody in an activity, and how you keep them going. We used our advisory group to prioritise, within those three stages, which statements we should try to develop further. As an example, these are the sorts of if-then statements we've come up with, which propose programme theory for how these activities are most likely to work. So, for example, the first one: if the patient believes the social prescription will do them good, then they will be receptive to a referral. And we further developed that, with the text in green, to show what influences whether or not the patient does believe these things, some of which may be modifiable within the consultation or beyond. 
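As a way of seeing how such if-then programme theories might be organised and shortlisted, here is a minimal sketch. It is hypothetical: the statement texts, stage names, and priority scores are invented for illustration, and in the actual review this prioritisation was done by an advisory group, not software.

```python
from dataclasses import dataclass, field

@dataclass
class IfThen:
    """A candidate programme theory as an if-then statement,
    with an (invented) advisory-group priority score."""
    stage: str          # e.g. "enrolment", "engagement", "adherence"
    if_part: str
    then_part: str
    priority: int = 0   # higher = develop further
    evidence: list = field(default_factory=list)  # supporting papers found later

statements = [
    IfThen("enrolment",
           "the patient believes the social prescription will do them good",
           "they will be receptive to a referral", priority=3),
    IfThen("adherence",
           "the activity matches the patient's own interests",
           "they are more likely to keep attending", priority=2),
    IfThen("engagement",
           "the first session feels welcoming",
           "the patient is more likely to return", priority=1),
]

# Keep the top-priority statement within each stage for targeted
# searching, mirroring the broad-then-targeted search phases.
top_per_stage = {}
for s in sorted(statements, key=lambda s: s.priority):
    top_per_stage[s.stage] = s  # higher-priority statements overwrite lower

for stage, s in top_per_stage.items():
    print(f"{stage}: IF {s.if_part} THEN {s.then_part}")
```

The shortlisting loop stands in for the advisory group's judgement call; the point is only that each candidate theory carries its stage, its if-then wording, and a slot for the evidence the targeted searches later attach to it.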
So these are our kind of programme theories, and we used them to develop an overarching diagram which showed at what stage different aspects need to be in place in order to ensure that people are transferred from primary care into the community. And this is our main output from the realist review. So I'll wrap up, but just to say that RAMESES is a project which is trying to develop and support both realist and meta-narrative reviews. There's a project page, and there's a really active JISCMail discussion list, the address is there, called RAMESES, and they're incredibly supportive. If you are doing a realist review and you're struggling and you want to know, you know, what should I read about this, what should I do about the other, it's a very responsive discussion list. The RAMESES group has also published reporting guidelines for realist reviews, which again are very helpful for conducting them as well as for reporting them. So I really recommend, if you're doing a realist review or would just like to know more, getting yourself on the JISCMail discussion list. And that's all from me. Thank you so much, Ruth. I'm going to stand over here so you can see me, I think. There. Okay, very interesting. I think we have a number of questions in the room, I'm guessing. I was interested, you mentioned also with the realist reviews that you're using some of this gray literature, right? Could you say a few words more on that, because I think that's quite interesting, and how would you define what would be included in that part of your literature review? 
Yeah, so we used a lot of gray literature for this, because the academic literature is not particularly well developed and there are a lot of local case studies and evaluations where people are describing, not necessarily the effectiveness of what they've done, but the processes: what social prescribing activities they set up, who they think it's good for, how they've tried to do it. We were lucky that there is a social prescribing network locally, and we asked them and our expert advisory group to tell us about any non-academic publications that they knew about, and then we snowballed from there, really, to try and get hold of more information. Thanks. Questions from the audience, sorry. Thank you. I'm curious if you have any examples of where you've used realist reviews in decision making. Is it mostly for implementation, or what kind of decisions within, for example, healthcare could you base on this type of review? Okay, I don't have an example from something that I've done, although we're hoping that this review about social prescribing will be informing policy and practice. But the person who is, you know, Mr. Realist, is a guy called Ray Pawson, and he produced a report for the Department of Health and for NICE, the National Institute for Health and Care Excellence in the UK, when the government was proposing banning smoking in cars with children. Obviously that was a new policy, so there wasn't any evidence out there about whether it was effective to stop smoking in cars; but what Ray Pawson proposed was that you could understand something about how this might come about, and whether it was likely to be effective, by looking at other regulations around cars and smoking. 
So one was about smoking in public places, and one was around the rules for having children in car seats; he also used information about the laws in operation trying to stop people talking on mobile phones while driving. And that's, I think, one of the really interesting things about realist methods: the focus is on the mechanism rather than on the intervention components. So he was able to say, well, there are things we can understand from this broader evidence base, which isn't about smoking in cars; if we combine things about regulations for drivers, regulations that protect children in cars, and other smoking regulations, we may be able to understand something about how and why, and whether, this law to prevent people smoking in cars carrying children was likely to be successful. And some of the ways that that law and policy was introduced drew on those experiences, as a way of trying to ensure that it was most likely to be impactful. More questions? Sorry, you had one, Belian, I think. Hi, Ruth, thank you so much for this presentation, super clear and great. I wanted to ask you, you mentioned the programme theories and that you built if-then statements. Have you used your stakeholder networks to do that, or have you built your programme theories within the team? In the example that I showed you, we initially built our programme theories in the team and then took them to the expert advisory group for input, more of a sense check, I suppose. And then, as I said, we had 40 statements that we developed as the candidate theories, and we took them to the advisory board to ask which ones they thought were most relevant or important for us to develop further, because we just didn't have the capacity to do all 40. So that was the way we involved our stakeholders. I mean, I think you probably could develop them with stakeholders from the beginning. 
It would take more time. But in another review, which wasn't really a realist review but did try to take more of a theory-based approach, about the health and wellbeing impacts of taking part in conservation activities, we used the expert advisory board to articulate a kind of theory of change, or logic model, for how they thought things were working at the beginning of that review. So we have done that in other ways, but not actually within that realist process. Yes, Neil. Thanks very much, Ruth. That was really interesting. I've actually been texting on my phone during part of your talk, because my sister runs a practice in Tower Hamlets in London, and she does a lot of social prescribing. Yeah, is that the Bow Street practice? No, she used to work at the Bow Street practice; now she works somewhere different. But she has told me that she thinks she was aware of this review, but she's swamped with so much evidence she doesn't know what to focus on. Do you think, because they deal with complexity more, that realist reviews are more holistic? Are they a source of evidence that people should perhaps prioritise over more salami-slicing systematic review methods? Do they give a better, more comprehensive picture of a topic? Although you're obviously an author of that one, so you'd definitely tell her to read it. Should I tell her to focus on realist reviews more to get an overview, if she's swamped with evidence? I think that's a really interesting question, because I think it does depend what you want to know. And not everybody... I mean, a lot of people are quite uncomfortable with realist reviews, because they operate in that middle ground between the positivist and the interpretivist ways of thinking. 
And I also think they're quite hard to describe, so it's hard to describe what you've done and why. If you were to try to document every decision you make about why some pieces of evidence are in and some are out, the report itself would be so unwieldy nobody would read it anyway, so they're quite difficult to write up well, I think, in a way that is both transparent and meaningful to readers. But I think it's a really interesting question, and it does depend what you want to know, because a lot of people do just want to know: well, does it work, or can it work? And a realist review may problematize that. But I'm really interested in them as a way of thinking as much as a way of producing things, because of this idea of standardizing the mechanism while the actual intervention components that serve that purpose might be very different. Say, for example, you want to do a project which you know is going to require buy-in by the local community. So you want to build in a lot of local community work: working with stakeholders, running events, trying to make sure that you are reflecting the needs and priorities of the community. And that must be what we need to do for many different public health and conservation or environmental management activities: we need to bring the community with us. But you can't standardize the "what", because the whole point of involving the community is for it to be responsive to what the communities want, within a certain mechanism. So that's where I think realist reviews are really useful: to think about, okay, what do we need to do to build community trust, for example? How you do that might be very dependent on the particular locality that you're working in, but you know that you've got an aim of trying to produce a particular sort of mechanism to allow you to work with that community. 
So I think for me that's where I see it being incredibly useful. It's just a different way of thinking about it: you're not parachuting in a one-size-fits-all intervention, you're thinking about what the purpose of doing it this way is, and then refining how you do something in a particular context. Hi Ruth, I think it's been a very interesting talk, and I'm very interested in the realist review. I was wondering if you could elaborate a little bit more on what mechanisms actually are, because I find them really interesting. I was actually at a seminar with Geoff Wong, and we were discussing whether mechanisms are only something within us, a reaction in people, or whether a mechanism can be a societal mechanism. So I was just curious about your view on how you would describe what mechanisms actually are. I think that's an ongoing question for people within and outside realist reviews. Whenever I talk to people who are doing a realist review, we're forever going: hang on a minute, is this a mechanism, or is it actually an outcome that we're talking about? So those CMO configurations are potentially a lot less fixed than you think they are. But I suppose I try to think about them in terms of the hows: how is something working? So if you're thinking of a public health intervention, for example, and you want to encourage active travel, then the "how" might be making it easier for people to choose walking and cycling. But the "what" might be very different depending on where you are. For a school that might mean facilitating walking buses, or changing the way you can access the school grounds by car, or something like that; and it might be very different in different places. So I try to think about it as the how. I don't think it always works like that, but yeah. Well, thank you so much, Ruth. Now looking forward to your next intervention here, which will be on meta-ethnography. 
Do you need a 30-second break? No, I'm good to go. Okay. Well, thank you so much, looking forward to it. Okay, so the second talk I'm giving today is an introduction to meta-ethnography. And I thought I would just roll back for this talk, to talk about the nature of qualitative research before we think about the ways in which we might synthesize it. So, qualitative research is that which explores people's subjective understandings of their everyday lives. To collect data, it can include observations, interviews, group interviews, analysis of texts, or even analysis of behaviour using videotape. And it involves the application of logical, planned and thorough methods of data collection, and careful, thoughtful and, above all, rigorous analysis; that's taken from Pope and Mays in 2006. So the sorts of questions that qualitative research might ask are things like: what do people think about having this condition? What's their experience of receiving a particular intervention? What do people think works about an intervention, and how does it work? Why do these aspects matter or not matter? How do we best implement something? So they tend to be the why, the how, the what-do-you-think: what's your experience, your belief, your practice in different activities. Synthesis of qualitative research started to gain prominence a fair bit later than synthesis of quantitative research, so it's worth thinking about why we might want to do it. And I think it's worth saying that, at least in the beginning, there was a certain amount of strategic thinking driving why people wanted to synthesize qualitative research. There was a danger within the evidence-based medicine field that, with systematic reviews primarily of randomized controlled trials, qualitative research, which had always been a little bit marginalised, was likely to become more and more marginalised through the evidence-based practice movement. And I don't know if any of you are familiar with the hierarchy of evidence pyramid, which 
the evidence-based medicine movement uses to show where the most robust evidence sits. It's interesting to note that on many versions of that pyramid there is no place for qualitative research: you have systematic reviews at the top, then randomized controlled trials, and down at the bottom expert opinion; qualitative research doesn't fit anywhere. So there was a drive for people within qualitative research in healthcare to think about what could be done to make qualitative research amenable to systematic review and synthesis methods. Then there were less strategic and more meaningful drivers as well. One is that it's less wasteful: we often find very poor referencing between qualitative research studies which are ostensibly about very similar topics, so there is quite a lot of repetition. And despite the impression that qualitative research can be very local, or only locally relevant, we find similar findings across studies, so we might be being wasteful in not making use of previous research on similar topics. And then the drivers that I think are very powerful: one way of thinking about qualitative synthesis is that it creates better explanations, higher-order conceptualization, and perhaps broader, more all-encompassing theories, and that this will lead to better truths, truths that are more socially relevant, more complete, or simply better. There was a question earlier to one of the previous speakers about transferability, and I think this is quite important with synthesis: for me, the level at which qualitative evidence becomes more transferable to different settings is when it becomes more abstracted. Theories developed out of qualitative research are more likely to be transferable across contexts and understandings than individual studies or very descriptive studies. Sorry, there we go. OK, so what do the findings from qualitative research and its reviews look like? They can be very different things: it 
might be that you end up defining a new concept or a new theory; you might end up with a rich description of a particular phenomenon; you might create a new typology, and I'll be presenting an example of that later in the talk; it might be a description of a process; it might be explanations or theories; and it might be the development of strategy. So there's a whole variety of different outputs of qualitative research and of its reviews and syntheses, and sometimes I think people are a bit unclear about what they're hoping to develop; we could do a better job on that. Just to return to this diagram from the EPPI-Centre: meta-ethnography, you can see, is on the far side, a configuring and interpretative approach, and it's generally thought to be appropriate for generating theory. Part of the reason I wanted to go back to the beginning and ask what qualitative research is, is that meta-ethnography is particular about which level of data it works at. It draws on Schutz's idea of different levels of interpretation in how we make sense of the world. There are first order constructs, which are our everyday ways of making sense of our world, and then there are what he called second order constructs, which are social science researchers' interpretations of this common-sense world into academic concepts and theories. So in a research paper, and I've got some examples of this at the end if anybody wants me to go through it in more detail, the first order constructs would usually be the participant quotes: when you're interviewing somebody, the way in which they describe how they understand what's happening to them, or their experiences or beliefs, are first order constructs, and in a paper those would be presented as quotes, maybe with descriptive author findings around them as well. And then the second order constructs are the academic authors' interpretations: when you're looking at a research paper, these are the explanations, the headers, the 
concepts, the diagrams, any of that stuff. Then our job as systematic reviewers is to produce third order constructs, which are the reviewers' interpretations of this level of data. There are some people who say these are still just second order constructs, because they're academic interpretations, but I think in terms of how you present and lay out data it's useful to keep the one, two, three. So you find lots of different sorts of qualitative data and qualitative interpretation, and this is adapted from Sandelowski and Barroso's paper of ten years ago, where they talk about a continuum of interpretation: from exploratory thematic surveys, where it might be very descriptive and the second order constructs are very close to the first order constructs, moving up to more descriptive and more explanatory interpretations, where the level of abstraction or critical interpretation gets further from the data. And she says, well, if you're not at least at the level of an exploratory thematic survey, you're not really doing qualitative research. So qualitative research operates on this continuum of close to the data versus further away from the data, and that has implications for the sorts of qualitative evidence synthesis that you can do. Returning to what meta-ethnography is and does: there was a key text in 1988, written by Noblit and Hare, about synthesizing qualitative studies. It's a really good read, and it's really thin. But it's important to note that this is a method of synthesis: it didn't come out of the evidence-based practice systematic review world at all, so there's no guidance in that book about searching for studies, or inclusion criteria, or quality appraisal tools. Noblit himself has been quoted as saying he's amazed that this method is being used mostly in fields of professional practice as an evidence-based practice tool; that wasn't their intention at all. They developed it because he and his team had done a set of 
ethnographies of different schools, and they'd been trying to develop some universal findings from those individual pieces of work. So it's come from a very different world, if you like, and it is very explicitly interpretive; it sits at that far end of the configuring and interpreting spectrum, defining a synthesis as the product of an activity where some set of parts is combined and integrated into a whole, and it involves conceptual innovation: new interpretations and new thinking. It was picked up in the evidence-based medicine world, in systematic review and synthesis, a good many years later, in 2002. This worked example of using meta-ethnography to synthesize qualitative research in healthcare, written by my colleague Nicky Britten and colleagues, was the first real attempt to use it in this context, and they demonstrated that it was possible to apply it to different sorts of activities. But again, this was a worked example, so the other stages of the systematic review have really just been developed through case law, through people doing it in the world; there's no guidance elsewhere. The book itself describes seven stages of synthesis for meta-ethnography, and there are a couple of differences from traditional systematic reviews, even from qualitative systematic reviews. One is that Noblit and Hare suggest that exhaustive searches, trying to generalize from all the studies of a particular setting, are a bad thing, not a good thing, and that you may end up with trite conclusions if you try to synthesize too many papers. This is quite different from what most people think, although it's also unhelpful, because the number of papers which has been described as "too many" varies from six to about 40, so it doesn't really help us decide what might be too many. The other difference is that it treats papers in the synthesis as being of better quality if they contribute more to the synthesis. So you make a judgment through the process of synthesis, and 
those papers which are more conceptually well developed, which allow you to make sense of other papers, get a stronger rating. So it doesn't have, again, the traditional go-through-each-paper-and-assess-for-bias step, although many people who do it will do that. So the key mechanism of synthesis itself is thought of as translation, and this just means that you read and reread and compare constructs between the different papers, and think about whether one case is like another or not. So you're permanently testing ideas against each other, to say one is like another, except that... And as I've said before, the important thing about the translation is that it happens at the conceptual level, so they're explicit about translation being between second-order constructs rather than first-order constructs. So, just a reminder of what the first- and second-order constructs are: we're operating at the second level. This is an example to show you the sort of thing that I mean. This is from CVD, cardiovascular disease, prevention programs, and this is just looking at how you build up from first-order to third-order interpretation. So these are quotes from participants in different papers. This one is about the benefits of using food sampling, which allowed people to feel relaxed and ask questions; this one is about having a person who's able to give information and advice. This is how the original researchers interpreted this kind of information: so the first was saying that practical demonstrations may be more effective than written information; this one is talking about the value of program champions. And how we've developed that is to say, well, both of these things are actually about the ability to develop a relationship with somebody who's delivering a program. And so we've said, although they're giving different examples, these are actually about the same thing, and we've turned them into a third-order interpretation to reflect that. So this is the first kind of
translation: reciprocal translation, where you're looking for similarities. So you read all of the papers, you translate the themes, the constructs, of each into the terms of the others, and pay attention to the different metaphors and organizers. Yes, sorry. So, I've just said that it's very similar to a constant comparative method, for those of you who are familiar with that kind of approach, and it looks at whether some concepts are better, and this might be because they are more sophisticated or higher level, so they cover more of the findings in another study, or they offer greater explanatory power. And just to remind you, the interpretation is crucial: you, as the reviewer, are trying to make sense of these things. And there are lots of different ways of juxtaposing concepts, so you might be coding in NVivo, but equally you might be tabulating findings, using mind maps and color coding and text descriptions, to try and think about those things. Can you see this, or is it too tiny? It's okay? Cool. So this is taken from a review that I did about women's experiences of heavy menstrual bleeding without any sort of malign cause, and this is lined up: these are the four studies, these are the findings within those studies, so this is what the authors have said, this is my label for those, and this is my interpretation. So this is a kind of reciprocal translation. And this is a similar one, taken from the Britten paper, where the concepts within the primary studies and their second-order interpretations are collapsed into these higher-order, third-order interpretations, and you can see that tabulating these things makes it relatively clear how you build them up. The second kind of translation that Noblit and Hare talk about is refutational translation, and this is where you're looking for oppositional or counter-argument findings. So it's a bit like looking for disconfirming cases: you're specifically looking for ideas that refute some of the
emerging patterns in your initial translation. And again, this is an example from that review, the meta-ethnography of heavy menstrual bleeding, and this was the refutational finding: only two studies talked about this, and they came to opposing conclusions. So one study found that friends, colleagues and families were important in encouraging people to recognize that they had a problem, and that it wasn't something they should just put up with, and they called this entering the sick role; whereas this other paper, by Elston, found that other people belittled women's experiences of suffering very heavy bleeding. So it was the very opposite, and we didn't have any way of explaining why this might be the case: they were very similar participant groups, it was very similar analysis, there wasn't any obvious way of explaining why these different findings were coming about. So this was a refutational finding. Oh, and I even did the biggest... And then the final type of translation is a line of argument, and this is a very holistic way of saying, well, what can we say about the whole? What do we know from all of the findings that we've got here? This may particularly lead to the development of a new model or theory or new understanding, and many meta-ethnographies try to get to this stage of producing a line of argument. This is an example of a diagrammatic representation of a line of argument, and it's again from that Britten review. It's about adherence to medicine taking, and they developed a new typology of patients. So rather than just having the dichotomous idea that some people are adherent to their prescribed medication regimes and some people are not, they developed this idea that there were actually four groups of people, and they defined them as passive accepters, active accepters, rejecters and active modifiers, and these different groups of people had different worries, which they resolved in different ways. So this developed a new typology of medicine taking. And again,
this is an overarching synthesis, a line of argument, from that review about heavy menstrual bleeding, which proposed that there was a medical disease model and a patient illness model, which were very different, and that the medical disease model, which focused on blood loss, failed to capture the experience of women and was very problematic. And that's my last slide, so thank you. Thank you, a very interesting talk indeed. Questions from the audience down there? No? Miljana, I see you have something you want to ask. We're running a little bit late, but if we have one or two short questions, we should allow Ruth to answer, I think. Yes, I was curious a little bit, because you were mentioning before that it sort of grew out of this feeling that we need to make qualitative synthesis, or qualitative research, more acceptable to the recipients. Do you feel that these reviews have been positively received by people in the medical field? How do people use these reviews? I'll answer that by telling you that it took the Cochrane Qualitative and Implementation Methods Group ten years after it was formed to get qualitative evidence synthesis into the Cochrane Library. Okay? But we're working on it; it is improving. And actually, policymakers are much more open to the value of qualitative evidence synthesis, I think, than practitioners sometimes. So NICE, the National Institute for Health and Care Excellence in the UK, which helps to make policy for the NHS, now routinely includes qualitative evidence synthesis, certainly in its public health policymaking, and the WHO is also increasingly interested in using qualitative evidence synthesis in policymaking. So yeah, it's coming. That seems positive. Okay, Miljana? Actually, we were thinking to have our interactive part of the workshop just about that: about how to increase the value of qualitative research, how to use more qualitative research, and how to affect policy and practice with qualitative
research. So, since in the environmental field, when it comes to qualitative synthesis, you know, Ruth, how rare that is, would you maybe have some general, let's say, comments on how to increase the use of qualitative synthesis, and how to increase its impact? That will actually be a nice basis for our discussion in the interactive part. Thank you. I think in a way it's good that thinking about qualitative evidence synthesis is coming relatively early in the systematic review history of environmental management and conservation. That sort of gives me hope, that the methods are better and reporting standards are better, and there are a lot more examples of how to do these things well. And I think there's more pragmatism, both in the qualitative research field and among policymakers, where we're saying, well, you know, these have to be not just intellectually interesting products but useful products which can inform policy and practice. So I think we need examples where evidence is produced which can help people to understand the environment, and I mean that in the broadest human-environment sense, in which they're trying to produce change: how they understand it, and what the qualities are that may cause their intervention to fail or succeed. I mean, one of the things that has happened in the health world is that the penny has dropped around one particular question. If you ask a question that says, what are the barriers to, or facilitators of, the success of this particular intervention, people suddenly understand the job that qualitative research can do on that question. And that's both very encouraging and very frustrating, because I think that's a very limiting question to ask, around barriers and facilitators, but at least it's become a kind of shorthand to communicate what qualitative research, and qualitative synthesis, can do to help people understand how to get a particular change into practice. But I mean, I think we'll learn by doing, because we
don't know; there may be different supportive qualities and different barriers in the environmental world. So, yeah. Thank you so much. Thank you for both of these talks, very interesting both of them. I think we'll keep on discussing them this afternoon, so a hand for Ruth, please. Thank you. Thank you so much. And now, for our final talk before lunch, we're happy to welcome Monika Hultgren from SBU, and Monika will be talking, using two recent SBU reports, about the CERQual method. Right? Am I pronouncing that correctly? Okay, one report. Okay, great. Okay, so my name is Monika Hultgren. I work at SBU, the Swedish Agency for Health Technology Assessment and Assessment of Social Services, and I've been asked today to talk about our experiences of using CERQual to address our confidence in findings from qualitative evidence synthesis. So I will give you a short overview of what CERQual is and what assessments are made, and then go into a recent report from SBU where I can exemplify these concepts. And I would just like to acknowledge the great CERQual coordinating team for giving me some introduction here. So, CERQual is a systematic and transparent way of assessing how much confidence to place in findings from qualitative evidence synthesis. It was developed by researchers with backgrounds in qualitative research and systematic reviews, and the idea was to make findings from qualitative evidence synthesis more useful in decision making. They started developing this methodology in 2010, to support the use of qualitative evidence synthesis in a WHO guideline, and the basic idea is that you address your confidence in each individual review finding, and for each finding you end up on one of four levels. So you might have high confidence, representing that it is highly likely that the review finding is a reasonable representation of the phenomenon of interest, and then you go down: moderate confidence, low confidence, and finally very low confidence, where it is not clear whether the review
finding is a reasonable representation of the phenomenon of interest. So how do you make these assessments? Basically, you're considering four different components: methodological limitations, relevance, coherence, and adequacy of data, and I will go through each of these separately. I just wanted to stress, first of all, that you're looking at each individual review finding, you're considering these different components, and these are always subjective assessments. There are no rules or rigid guidance; it's not a mechanical process in any way. So you need to make subjective assessments, and therefore you also have to be very transparent about them. The first component, methodological limitations, is the extent to which there are problems in the design and conduct of the primary studies supporting the review finding, and here you need to use a critical appraisal tool. As most of you know, there's not really a lot of consensus around these tools; there are quite a lot of different tools out there, and the CERQual team is currently working on whether we need a new tool for this methodology or whether there's something useful out there already. The second component is relevance. Here you need to address the relationship between the context of the individual studies informing the finding and the context specified in your review question: the extent to which the body of evidence from the primary studies supporting a review finding is applicable to the context specified in the review question. There are three different types of relevance issues: indirect relevance, partial relevance, and uncertain relevance. Just to exemplify, using the population: if you're interested in, for example, the experiences of children aged 10 to 18 years, and you're using studies with younger children, that would be an indirect relevance issue. Partial relevance could be that you have direct evidence, but only from part of the whole spectrum you're
interested in. So you might be interested in all children's experiences, but you only have data on the experiences of girls, or of asylum seekers. And uncertain relevance would be that you might be interested in these children aged 10 to 18 years, but it's unclear what ages the children in the studies have. In all of these cases, you would need to consider whether you lose confidence in the review finding. The third component is coherence. Here you make an assessment of the fit between the data in the primary studies and the review finding, so you would become less confident in the review finding if the data contradicts the finding, or if some of the data is ambiguous. And I'll come back to this component when I'm exemplifying from our report, because of course your methods for producing the findings will affect whether or not you have contradictory data. The fourth component is adequacy of data. Here you assess the degree of richness and the quantity of data supporting the review finding. You might become less confident in your finding if you have very thin data, or only a few studies, or few participants. And again, this is a judgment call, just like the other components, so there are no rules as to how many studies, how many participants, or how rich the data needs to be; you need to address this in relation to each particular finding. So in the end, you make an overall assessment across these four components, and if you have serious concerns about any of them, you go down one step in your confidence level. So that was a really quick overview of the methodology; now I will exemplify these concepts using a recent report from SBU. Just very briefly, for those of you who don't know what SBU is: we are a governmental agency that has done HTA, health technology assessments, for 30 years, we've just celebrated our 30th birthday, and we've also assessed social services for the last two years. We do not do guidelines, so our target audiences can be, for example, decision makers in healthcare and
social services, or other agencies that do produce guidelines. And the report I'm going to talk about today: we were commissioned by the Swedish government in 2015 to assess diagnostic tests and interventions for children with fetal alcohol spectrum disorders. This is an umbrella term describing a wide range of effects that can occur if an individual has been exposed to alcohol during their mother's pregnancy, and it is a quite controversial spectrum, we realized when we started looking into it. So there are people who claim that this is a quite common condition, that about 5% of all children in Sweden might have it, and that it explains a lot of problems in society; there are others claiming that these conditions don't exist whatsoever, except, I have to say, fetal alcohol syndrome, which is one medical diagnosis within the spectrum, and a quite severe condition. So we realized this is not so easy. How should we start addressing diagnostic tests? It won't really be meaningful to look at the sensitivity and specificity of a test if we don't even know whether the condition exists, or whether it's valuable for an individual to be diagnosed in such a way. So our overarching question in this report was whether being identified as having these conditions would improve the health and social situation for an individual or family, and we had two main questions: how do the different FASD-related conditions impact the child, his or her relatives, and society? And what are the social, medical, economic, and ethical effects of interventions for children with FASD-related conditions? As you see, these are quite big questions, so we did, I think, at least four or five reviews within this report, and I'm going to talk about one of them, which had to do with experiences of living with these conditions. So we asked: what are the experiences of living with FASD? We were interested in both individuals identified with these conditions and their parents. We found two studies addressing the experiences of individuals with FASD
and 16 studies addressing the experiences of parents. And this was the team that did the synthesis of the qualitative research: we were a group with experience in systematic reviews, a few of us had done qualitative evidence synthesis before, a few of us had done primary qualitative research, and we also had expertise within the field, as well as methodological support from Heather Munthe-Kaas, who is part of the coordinating team for CERQual. Before I go into the results, I just wanted to point out what our aim was and how this affected the generation of the findings. Our aim was to highlight the general experiences of these individuals. We were not interested in generating hypotheses or explanatory models; we wanted to stay quite close to the descriptive data, to generate findings such that, if we were to invite people with these conditions in Sweden, and their families, and present the results, they would be nodding their heads and recognizing what we were seeing. With that aim, it was quite easy for us to do a synthesis based on primary studies with different analytical approaches, because we stayed very close to the descriptive data in the studies. We also worked with the findings until they became coherent, which meant that some of our findings became quite general. For instance, we were interested in experiences of receiving a diagnosis, and there we ended up saying something like: parents of individuals identified as having FASD can experience both positive and negative consequences of receiving a diagnosis. So, quite a wide, general statement, and this was based on our target audience and what we felt would be a reasonable level at which to address our confidence. So these are the findings we came up with. We had three categories: parents' experiences regarding their child's disabilities, parents' experiences of parenthood in relation to FASD, and parents' experiences regarding society, and all of these are groupings of second-level themes,
which we did our CERQual assessments on, and I will show you that in a minute, and all of these second-level themes are in turn supported by first-level themes. So an example of one finding would be: parents experience that living with a child with FASD burdens the whole family. As you see, a quite general statement, and here you have the first-level themes that inform the second-level theme. And as I mentioned, one needs to be very transparent about the judgments for each of these components, so besides discussing a lot of this in the actual report, we also had tables showing all the different components for each finding. Here you can see that we ended up with moderate confidence in the finding "parents experience that living with a child with FASD burdens the whole family", and we rated down because we had moderate methodological limitations, as well as some minor concerns about adequacy. And we ended up summarizing our results; this is just an example, but here we've summarized the results with the highest confidence. We had quite a lot of discussions about how you go from assessing the confidence to expressing it in the report. The process of assessment was quite straightforward for us, as we're used to addressing, for quantitative data and so on, how certain we are in whatever we're claiming based on our reports; but for the qualitative data it was quite difficult to find wordings for this uncertainty, so in the end we simply stated the results, but based them on the findings with the highest confidence. So, in the end, we put all these different pieces together. We had quantitative syntheses on the prevalence of different disabilities, given a diagnosis with these conditions, and we found very little there; we had very little information on the benefits and harms of interventions for these children. So, given all that, there's no real point in diagnosing someone: you wouldn't know what their prognosis is for different conditions, and you wouldn't know what to do to help them based
on the diagnosis. So the most weight in this report came to be on the qualitative side, describing what it's like to live with these conditions; hopefully that can inform how these issues are addressed in the long run. So, just to summarize: CERQual is a systematic and transparent way of assessing how much confidence to place in an individual review finding, and you consider four different components: methodological limitations, relevance, coherence, and adequacy of data. And the aim of your assessment will affect how the findings are generated and the confidence you will place in them; in our example, we generated coherent findings, several with quite high certainty. So, thank you. Interesting. And may I ask, so this is now a finished review, it's available on your web page, and it took about two years to complete; is that correct? It seems quite comprehensive. Yes, like I said, it includes several quantitative as well as qualitative syntheses; quite a big body of work is included in it. So, questions from the audience, on this method or on the study or anything else? Hello. I was thinking, wouldn't it be better, looking at your research question and the nature of the research question: do you think the method you have chosen was the right method, or could you also have used other methods, like a qualitative case study, in which you ask the parents directly about these questions? So my question is: why did you do a literature review, in which I saw there was not much literature? Wouldn't another alternative be to ask the parents directly, instead of doing a literature review? Okay, I'm not completely sure I got your question. You're asking whether we could have gone out and asked parents actively; is that... sorry? Okay. Usually, when you have a research question, then after your research question is established, you choose a method. So maybe there are two questions: why did you choose this method to answer your research question, that's the first
question, and the second question is, why didn't you interview parents to answer your research question? So, first of all, of course your question will inform your method. Our agency works with doing systematic reviews, so we don't do primary research, for example; that's a practical answer to your question. We did collaborate with a patient organization, and they very much confirmed all of these results, so that was a sort of parallel work; they had questions they were answering, and so on. So, yes. Any other questions? Thanks, this was very useful and clear. When it comes to CERQual, I wanted to ask: you mentioned thin data and rich data. How do you define what is thin and what is rich? That seems extremely difficult. Did you have some kind of predetermined criteria, or were you just reading and then deciding: okay, there is not so much data here, so we will put it in the thin box; and then, wow, there is a rich description around this concept, so let's put it in the other? Where do you put the limit, or border, between the two? So that's the whole thing: I think it's very difficult to set a limit, and the limit will be somewhat different for each finding. What we did was, once we had generated the findings, we went back and took out everything from all the studies that informed each finding, and then you reread everything again, with all the citations, and think: okay, is this really enough to claim what we are claiming? And of course we were a group, and we went around. I should say also, this is a kind of circular process, so it's not that you finish your review and then do the CERQual assessment, because as soon as you start, you realize: oh, okay, this isn't really coherent, we need to go back; or: okay, this is much thinner than we thought it would be; and so on. So I don't have any fixed criteria, I guess; it's a very iterative process: you read once, and then you, okay, reinterpret (what's wrong with my English?), and then you try to see whether you did a good job, and then you go back
again and reformulate, which many times we needed to do. Yeah, that makes sense. Yeah, thanks. So, do we have a final question? Otherwise, I think we're going to take a break for lunch. I think you might need some new energy, and we will reconvene in about one hour, after which we will have an interactive session. I don't know if you would like to say a few words on it now, already, to make people prepare mentally for the afternoon? Yeah, why not... okay. Yes, but with this I think we will just finish the first part of the program. So thank you very much to the viewers on the web, thanks a lot, and thanks to all the speakers for the morning. Thanks a lot.
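For readers who think in code, the stepwise CERQual judgment Monika describes (start at high confidence, step down for each component with serious concerns) can be sketched as a small data model. This is only an illustrative sketch: the function name, the three concern labels, and the mechanical one-step-per-serious-concern rule are simplifying assumptions of this example, not part of the official CERQual guidance, which stresses that the overall assessment is a holistic, subjective judgment.

```python
# Illustrative sketch of a CERQual-style confidence assessment.
# The one-step-per-serious-concern rule is a simplification; real
# CERQual assessments are holistic judgments, not mechanical rules.

LEVELS = ["high", "moderate", "low", "very low"]
COMPONENTS = [
    "methodological limitations",
    "relevance",
    "coherence",
    "adequacy",
]

def assess_confidence(concerns):
    """concerns maps each component to 'none', 'minor', or 'serious'.

    Start at high confidence and step down one level for each
    component with serious concerns, with a floor at 'very low'.
    """
    level = 0
    for component in COMPONENTS:
        if concerns.get(component, "none") == "serious":
            level = min(level + 1, len(LEVELS) - 1)
    return LEVELS[level]

# Example loosely mirroring the FASD finding "living with a child
# with FASD burdens the whole family": serious methodological
# limitations plus only minor adequacy concerns -> rated down one
# step, from high to moderate.
finding_concerns = {
    "methodological limitations": "serious",
    "relevance": "none",
    "coherence": "none",
    "adequacy": "minor",
}
print(assess_confidence(finding_concerns))  # prints "moderate"
```

In this sketch, minor concerns are recorded for transparency but do not by themselves trigger a downgrade, which is one of many defensible ways to encode the judgment; a real assessment would weigh them case by case.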