Γεια σας (hello), my name is Stathis Konstantinidis and I'm the project coordinator of the SEPEH Erasmus Plus project. On behalf of the SEPEH consortium, I'll talk you through some of the challenges we encountered while co-creating educational chatbots, and how these challenges led us to construct a crowd-based ASPIRE framework.

Educational chatbots have started to appear. They can be divided into two broad categories: chatbots with some educational intentionality and chatbots without it. They can be very simple, at the "remember" level of Bloom's taxonomy, but they can also be quite sophisticated, reaching even the "create" level of Bloom's taxonomy.

The SEPEH project strongly believes in collaborative design. At the University of Nottingham we have used participatory design for open educational resources for the last 15 years, reaching people around the globe and creating really high impact. The ASPIRE framework has been widely used to develop reusable learning objects. It was chosen based on its high success in the participatory development of reusable learning objects, which are bite-sized, web-based open educational resources for healthcare education, and on its successful adaptation for large courses and virtual reality resources.

ASPIRE stands for Aims, Storyboarding, Population, Implementation, Release and Evaluation, corresponding to the following steps: participatory workshops, specification writing, peer review of the specification, development of the open educational resource, review of the open educational resource, evaluation with stakeholders, and then publishing of the open educational resource.

We used the design science approach to construct an adapted version of the ASPIRE framework, in order to fit the development of chatbots for healthcare education. The design science approach has two main activities: first the artifact is built, and then it is evaluated to identify how it performs.
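The phases and steps just listed can be sketched side by side as a simple lookup, purely as an illustrative note-taking structure (the names are taken from the talk; this is not software belonging to the framework itself):

```python
# The six ASPIRE phases, keyed by the initial that gives the acronym.
ASPIRE_PHASES = {
    "A": "Aims",
    "S": "Storyboarding",
    "P": "Population",
    "I": "Implementation",
    "R": "Release",
    "E": "Evaluation",
}

# The participatory steps the talk maps onto those phases.
PARTICIPATORY_STEPS = [
    "participatory workshops",
    "specification writing",
    "peer review of the specification",
    "development of the open educational resource",
    "review of the open educational resource",
    "evaluation with stakeholders",
    "publishing the open educational resource",
]

def acronym(phases: dict) -> str:
    """The phase initials spell the framework's name."""
    return "".join(phases.keys())
```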
Multiple rounds can follow. In the first iteration we tested the existing framework during the demonstration stage. We conducted an online participatory workshop with students, and then we defined the specifications, which we reviewed. We had to adapt the specifications on the fly, as a text-based form could not easily be filled in to define the intended paths. The first pilot of development revealed different areas that needed to be considered, such as: multiple directions are possible at each point; learners must be prevented from getting frustrated by a poor learning experience; and training data has to be provided by the developer. During the release phase we found out that the more complicated the chatbot is, the more time is needed to recompile it after small changes.

Several areas of improvement were identified in the evaluation stage, through three project team meetings and one meeting with stakeholders. In each of those meetings, workshop participants, learning technologists, web developers, and tutors or content creators reflected on the co-creation process. We evaluated each phase of the ASPIRE framework separately, trying to find out how each phase could be improved and what the benefits and contributions could be for the participants in the ASPIRE process. That led us to the second iteration of the design science approach.

A set of objectives for the framework were identified to feed the development of educational chatbots. The first was to raise stakeholders' awareness of chatbots: stakeholders in a participatory workshop to co-design a chatbot usually have little or no experience of using chatbots, so they are unaware of their functionalities and capabilities. The second was to reform the storyboarding workshop, refining the detail on the assets and the various chatbot interactions. Our traditional storyboarding workshop based on different screens proved to be non-functional; a graph-based workshop organized around themes and discussion could be more useful.
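The graph-based storyboarding idea above can be sketched as themes (nodes) linked by the learner questions that move the conversation between them, in place of a linear screen-by-screen storyboard. This is a minimal illustration with invented theme and question names, not the project's actual tooling:

```python
from collections import defaultdict

class StoryboardGraph:
    """Themes as nodes; learner questions as edges between themes."""

    def __init__(self):
        self.themes = set()
        # theme -> list of (question, next_theme) pairs
        self.questions = defaultdict(list)

    def link(self, theme: str, question: str, next_theme: str):
        """Record a learner question that leads from one theme to another."""
        self.themes.update({theme, next_theme})
        self.questions[theme].append((question, next_theme))

    def follow_ups(self, theme: str):
        """All questions a learner might ask while on a given theme."""
        return [q for q, _ in self.questions[theme]]

# Hypothetical healthcare-education themes, for illustration only.
board = StoryboardGraph()
board.link("side effects", "what should I do if I feel dizzy?", "safety advice")
board.link("dosage", "can I take it with food?", "interactions")
```

Because any theme can link to any other, the workshop output naturally captures the "multiple directions at each point" that the linear specification struggled with.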
The third objective was to obtain more training data: training data needs multiple users in order to be efficient and valid. Furthermore, alternative answers should be provided by experts during the training of the chatbot. The fourth was to reform the specifications to reflect the connections between questions: the traditional specification, describing screens using specific texts and assets, needs to be connected with the relevant triggers or questions by the user.

To achieve these objectives, in the next design and development stage of the design science approach we added an information phase to the ASPIRE framework, using a presentation of chatbots and experimentation with chatbots prior to the participatory workshop. While the first part of the workshop remained the same, for defining themes we developed a crowd-based platform where learners can add potential questions and answers, and the types of interactions and assets which they expect from a chatbot. The specification moved from a linear format to a topic-oriented format. The specifications can now be built within the crowd-based platform by experts reviewing and correcting learners' input, a process that emphasizes the relations between questions. From a review based on a text form, where the reviewer could follow the linear object, we moved to a crowd-based co-creation tool in which the reviewer can review questions, answers and connections, and additional experts can review all responses in a more organized way.

During the development phase of ASPIRE, the crowd-based platform for co-creating an educational chatbot provides the chatbot structure and a data output which can subsequently be uploaded into a chatbot development environment such as Rasa. Technical review is more challenging, as user input can be unpredictable; evaluation based on multiple scenarios is therefore preferred. Learning technologists' development time is minimized, but it is still required to sanitize the data and ensure the flow of the discourse.
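As a rough sketch of that hand-off, assuming the crowd-based platform stores each topic as a set of learner questions plus an expert-reviewed answer (the topic name and questions below are invented examples), the data output could be serialized into Rasa's YAML-based NLU training format; the expert answers would go into the Rasa domain as responses:

```python
# Hypothetical export from the crowd-based platform: one topic per intent.
crowd_data = {
    "ask_side_effects": {
        "questions": ["what are the side effects?",
                      "will this medicine make me drowsy?"],
        # Expert-reviewed answer (would map to a domain response in Rasa).
        "answer": "Common side effects include nausea and headache.",
    },
}

def to_rasa_nlu(data: dict) -> str:
    """Emit Rasa-style NLU training data (YAML) from crowd-sourced topics."""
    lines = ['version: "3.1"', "nlu:"]
    for intent, entry in data.items():
        lines.append(f"- intent: {intent}")
        lines.append("  examples: |")
        for question in entry["questions"]:
            lines.append(f"    - {question}")
    return "\n".join(lines)

print(to_rasa_nlu(crowd_data))
```

Collecting many phrasings of the same question from many learners directly serves the "multiple users" requirement for efficient and valid training data.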
Following such detailed specifications, we minimized the number of recompiles needed to release a chatbot. An initial pilot chatbot evaluation within the project team showed that the adapted ASPIRE framework fits the development of chatbots better. Different stakeholders agreed that the modified ASPIRE framework, utilizing the crowd-based platform to co-create educational chatbots, can be more efficient and more adaptive to its users' needs. Further evaluation is needed with end users of the chatbot, measuring both their experience during the co-design workshop and their use of the crowd-based platform, but also the usability of the chatbots, including chatbot personality, user experience and error handling.

This work communicates the difficulties identified through the adaptation of the ASPIRE framework to develop an educational chatbot. While further evaluation is needed with learners and content creators, this piece of work suggests that an adapted, crowd-based ASPIRE framework can be utilized for designing educational chatbots. This work is supported by the Erasmus Plus Strategic Partnership in Higher Education, the SEPEH project of the European Union. Thank you for your attention.