For example, in WLCG we have four main virtual organizations, one for each big LHC experiment. The role of the VO here is to group together the people working in an experiment who need access to the computing infrastructure. So the VO is basically the integration point between the users and the resource providers. The sites provide computing and storage resources to the VO as a black box, and it is then up to the virtual organization to choose how to best organize access to these computing and storage resources. To do this, the VO typically organizes users into groups that grant different access privileges on the computing infrastructure. So in order to support the VO model, the WLCG AAI basically needs to define how users are authenticated, how users are registered in a VO, and how the membership life cycle is managed: for how long a user should be part of the VO, and what happens when the user leaves the VO. It also needs to define how user privileges are managed: how administrators can grant particular privileges to specific users, and how users can be organized into groups and granted access to different activities. And especially, it needs to define how authentication and authorization information is exposed to services, so that authentication and authorization can be implemented correctly. Then we have the infrastructure perspective, the point of view of the sites that provide the resources and services, that is, the computing and storage services, not to a single VO but to many VOs. From the infrastructure point of view, it is very important to know the identity behind any computational activity running on the sites. Basically, sites need to know the identity, the authentication attributes, and the authorization attributes for each job running on their site, in order to implement authorization so that proper access control is enforced, to implement accounting, and also logging and auditing, which are important for traceability.
Sites also typically need flexible tools to contain security incidents and to block misbehaving computing activities. For example, when a single misbehaving job from a user is consuming all the VO resources at a site, the site would ideally want a tool that allows it to block that single user, and not the whole VO activity at that site. And finally, we have the user's perspective. Users are typically not very worried about the AAI; actually, what they want is not to be bothered with the AAI at all. Scientists want to do their science. But if we look from a very high level at what users want, it is basically a simple way to log in to the system and use multiple services in a computing session, without having to type passwords ten times or go through other complex procedures. So basically they want a single sign-on solution. They want the ability to delegate their rights to agents that act on their behalf while they are offline, which matters because scientific computations can take a very long time to complete. And they also want the ability to choose, at a high level, under which role they want to act in a computing session: whether they are acting as an unprivileged user or as an administrator. All these requirements led to the definition of the current WLCG AAI, which has been in operation since 2003 and is still working nicely. This slide, which is actually taken from a presentation from the European DataGrid project from 2002, still does a good job of describing the main components of the WLCG AAI. I will not describe them in detail today; the slide is here for reference. One thing that I want to underline, though, is that the main limitation of the WLCG AAI as it is today is that it is centered on, and tightly bound to, X.509 certificates, that is, to a single authentication mechanism.
This is a problem for usability, since scientists hate to manage X.509 certificates, but it also complicates the integration of the experiment frameworks with resources provided, for instance, by public cloud providers. So, in order to work around these limitations, in recent years we started working on the evolution of the AAI beyond X.509, towards an AAI that is not bound to any specific authentication mechanism and that can easily and securely integrate all the resources needed to support current and future LHC computing requirements. In these slides I include some pointers to a detailed description of these activities; I will not go into any detail here, this is just a very brief introduction. The point that I want to underline, however, is that this novel WLCG AAI, which is centered on the INDIGO IAM technology, is basically building upon technology provided by the EOSC AAI. A common trait of the EOSC AAI technologies that we will describe today is that they typically support multiple authentication mechanisms, so they are not bound to any specific mechanism; they are flexible in this sense. We can classify the authentication mechanisms depending on their level of assurance. The technologies that we describe provide users with persistent, scoped identifiers, which are important to implement auditing and traceability for user activities. The technologies that are part of the EOSC AAI typically build upon open standards. In the case of the WLCG AAI, the approach is to focus on JSON Web Tokens and the OAuth and OpenID Connect protocols. This approach was chosen because it allows us to reuse as much as possible components from industry, so we do not have to develop our own solutions to migrate to this new AAI.
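Since the talk mentions JSON Web Tokens without showing one, here is a minimal, hypothetical sketch of how the claims inside a JWT access token can be inspected. All names and claim values are invented for illustration, and the code decodes the payload only; a real relying party must verify the token's signature before trusting any claim.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified!) payload of a JWT.

    A JWT is three base64url-encoded parts joined by dots:
    header.payload.signature. We only look at the payload here;
    production code must verify the signature first.
    """
    payload_b64 = token.split(".")[1]
    # base64url decoding requires padding to a multiple of 4
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy token with invented claims (signature left empty).
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "none"}).encode()).rstrip(b"=").decode()
claims = {"sub": "user123", "iss": "https://iam.example.org",
          "wlcg.groups": ["/cms"]}
payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}."

print(decode_jwt_payload(token)["sub"])  # user123
```

The point of the sketch is simply that a token carries the user's identifier, issuer, and authorization attributes as self-contained JSON, which is what makes the token-based model independent of any single authentication mechanism.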
More importantly, and this was particularly important for WLCG, this new AAI that we are developing is meant to be backward compatible with the existing infrastructure. We do not want the infrastructure to stop working because we are migrating the AAI. Basically, we are designing something backward compatible, so that we can migrate gradually, and likely this migration will take years. All these concepts, very briefly introduced here, are defined in this document, which describes the architecture that the AAI WLCG is migrating to complies with. And this transition is happening now: here you can see a screenshot of the first deployment of the CMS IAM instance, for the CMS experiment, which is one of the big LHC experiments using this infrastructure. So this was a very high-level introduction to describe what a real-world community AAI is. It is not just an abstract concept; it is actually a set of people, a set of technologies, and a set of rules that ensure that a computing infrastructure can be accessed securely and reliably. The technology that we are using is part of the EOSC AAI portfolio, and the same approach that we are following could also be interesting for, and adopted by, other communities. This basically concludes my very quick introduction, so now I pass the baton to Nicolas, I think, for an introduction to what the EOSC AAI is in more detail. Thank you, this was really useful. So after this real-world example, let's see how the EOSC AAI works. Excellent. First of all, this is an overview of the main features that are supported; some of these have already been mentioned by Andrea. There is support for different authentication providers: researchers have the option to use their institutional credentials from eduGAIN, or they can use external providers such as social media, or other identity providers that are managed by the community.
This federated access, of course, minimizes the number of accounts that researchers have to manage, and it reduces complexity and security risks. There is support for both web and non-web services, based on standards: SAML, OpenID Connect, OAuth2, X.509. There is support for researchers to link different identities, so it is possible for a researcher to link their institutional and their social identity. Access is managed based on attributes such as group membership, roles, affiliation, or capabilities. Some of this information is provided by the researcher's home organization; other parts of the authorization information are managed by the community. Next slide, please. Another key feature is the interoperability supported by the EOSC AAI, again based on the standards that have been adopted: OpenID Connect, SAML, OAuth2, and so forth. In the policy area, we have an architecture that supports the minimal disclosure principle. There are security frameworks that ensure good practices and operational security. There is also a security framework for the coordination of incident response in a federated way, because in this complex system there are a lot of entities involved: the user's home organization, the community AAI, the service providers. So a federated incident response framework is of key importance to support this scenario. And lastly, there are standards for expressing assurance; effectively, this translates to how much a relying party can trust the attributes of the user. Next slide, please. Now, you already heard about the AARC Blueprint Architecture, which is the reference architecture upon which the EOSC AAI builds. In this architecture, which you can see on the right side (this is the latest version of the AARC Blueprint Architecture, the 2019 version), there is, at the top, the user identity layer, which brings together the different authentication providers we discussed before.
At the center, there is the access protocol translation layer, which comprises the discovery service, which needs to present all the different authentication options in a user-friendly way; the proxy, which serves as the bridge between the identity providers and the end services; and the token translation services, for services that require a translation of tokens. As you can see, at the end services layer there is support for SAML, X.509, OpenID Connect, and OAuth2. Then there is this blue vertical layer here, which is mostly of interest for communities, because this is where the community profile of the user is being managed. And then there is the authorization layer, which is responsible for mediating access based on the information from the community attribute services layer. Next slide, please. Now, in the EOSC AAI architecture we can distinguish between two different layers. There is the community layer, which contains all the different community AAI services and enables the use of community identities for accessing resources. And then there is the infrastructure layer, which enables access to the resources, again through these proxy services that bridge the different community AAI services with the infrastructure services. Next slide. So what do we achieve with the community layer? We allow researchers to register just once with their community AAI, and then a researcher can always use the community AAI to sign in and access community services. This cloud over here includes all the services that are offered to a given community. There is also access to generic services, meaning services that are offered to multiple communities. And then there are the EOSC services offered by different research infrastructures, which are made available through these infrastructure proxies. Next slide, please.
So we have the research infrastructure proxies, which act as a single integration point for services. This means that services do not need to run their own identity provider discovery service, because, as we mentioned, there are a lot of authentication providers, so it is not straightforward to do identity provider discovery in a user-friendly way. This can be handled by the infrastructure proxy or the community AAI. Again, these infrastructure proxies allow services to get information about the user, including authentication information, in a uniform way: we have different identity providers and different protocols, but through these proxies the attributes are harmonized, so the services do not need to deal with the differences between the protocols. Next slide, please. Now, this is the same architecture but in a slightly different view that is closer to how it is actually deployed. As you can see, with this architecture communities can enable different authentication options. For example, this purple community AAI here enables institutional logins from eduGAIN, while the orange one also supports social login options, ORCID, et cetera. Again, a community might choose to enable access to specific infrastructures. This purple community, for example, enables access to both of these infrastructure proxies and the services behind them, while the orange one only supports one of the available infrastructure proxies. The whole concept behind this architecture is to make the service providers' life easier by providing them with a single point of integration. Next slide, please. So how can communities make use of the EOSC AAI? There are two main use cases: communities typically want to connect in order to consume EOSC AAI resources, and there is the other scenario where communities are interested in sharing their services and resources through EOSC. Can you go to the next slide, please?
So, for the first use case, where a community needs to consume EOSC AAI services and resources: if the community already manages an AARC BPA-compliant community AAI, then what is needed is for that community to connect its existing community AAI service to the infrastructures it needs to consume services from. Now, in the next slide: what happens if the community does not have its own community AAI? Then there is the option to use one of the different multi-tenant community AAI service offerings that are already available through EOSC. If you go to the next slide, you can see that there are different multi-tenant community AAI services that can serve different communities and VOs: there is B2ACCESS, there is Check-in, there is eduTEAMS, there is IAM. Next slide. Now, let's go to the other use case: a community that wants to offer its services through EOSC. Again, we have two sub-use cases. If the community already has an AARC BPA-compliant infrastructure proxy mediating access to its services, then offering the research infrastructure resources through EOSC means connecting that infrastructure proxy to the infrastructure proxy layer of the EOSC AAI, and then of course connecting it to the community AAIs that it wants to serve. For the other case, if a community has services that are not behind a proxy, then it can connect those services to one of the infrastructure proxy services that are already available in EOSC. For this part of the presentation, Jens is going to provide a summary of the policy-related requirements for communities. So I'll give the floor to Jens. Thank you, Nicolas. Next slide, please. So we've talked a lot about communities, so maybe it's worth just reminding ourselves; we actually had the definition on an earlier slide. The definition of a community is a group of subjects having common, or at least similar, activities and goals.
So what's the difference between a community and a virtual organization, and how might they be related to each other? A community will typically organize itself around its activities, and forming a virtual organization is one of the things it might do in order to make use of an infrastructure. A good example, perhaps, is CLARIN, which is a linguistics community with a lot of diverse activities. For some activities, for some interactions with infrastructures, it might make sense for CLARIN to present itself as a single VO and just say: well, we do all kinds of linguistics research. And for some of the specialized activities happening within CLARIN, it may make sense to set up smaller, more specialized VOs. There is no single right solution; it all depends on how to best interact with the infrastructures. However, if the infrastructures are interoperating, then it makes sense to have just a single VO facing the infrastructures. So perhaps the recommendation is to have as few VOs as possible, particularly because, traditionally, it is non-trivial to set up a new VO in terms of the amount of work that needs to be done in registrations. Next slide, please. So a community obviously needs to have its goals and policies defined: what are we going to do, why are we doing it, how are we doing it, and who can be a member of the community. This is all just to get the community to think about what its activities are. We sometimes speak to user groups who haven't really thought about it carefully enough; they say, can we start using your infrastructure and get some resources, please? And we go to them and say, yes, but what would you actually like to do? We also require that they define acceptable use, so we know precisely what they are going to do on the infrastructure.
And ideally, that should be compatible with the infrastructure's acceptable use policy as well. So a community might wish to pin it down a bit more and say: well, yes, we are doing linguistics research, and anything else would be inappropriate use of the resources. We also require that communities take responsibility for having users accept the AUP and for ensuring that they confirm that they accept the policy. So if users come through the community proxy that Nicolas talked about, then we want them to have already accepted the community's AUP. That way, as an infrastructure or a service provider, we can treat the community as a whole through the proxy, and we don't have to deal with individual users. Also relevant, obviously, is data protection, because communities handle personal data: running the community means managing attributes and so on. They also need to participate in incident response. If there is a problem that involves a member of the community, like inappropriate use or something, then we require them to participate in the resolution of that incident. Also, if they make any changes to their acceptable use policy or membership policies or anything like that, they should inform the infrastructure. And finally, if they run their own services, like IdPs and so on, then obviously they must maintain those in a responsible way. Next page, please. Finally, the membership workflow is fairly self-explanatory. Depending on how the infrastructure, or the community IdP, manages community memberships, you either find the community and apply for membership, or you get invited. Somebody in the community, typically a delegated person, will approve or reject the application; if they reject it, ideally they should give a reason for the rejection. Then you move on to full membership, where they might assign roles and responsibilities to you.
They might give you the role to review other applications for membership, or some such. Then, regularly, there is a renewal process where people are required to say: yes, I still want to be a member of this community. This is just an extra safeguard, in case people leave the organization or move on to something else: there is a check, maybe once a year, that they are still happy to be members of the organization and still happy to accept the acceptable use policy. And then they can be removed from the community, or they can be banned. Typically it is the infrastructure that bans users for misuse, like Andrea mentioned earlier in this talk, but in principle a community could also ban a user for misuse. Next slide, please. And with this, I hand over to Nicolas and/or Christos for the next part. Thank you. Thank you, Jens. So in this part of the presentation we are going to discuss how communities can manage access to resources. I would like to give the floor to Christos now. Hello. So, we had a lot of background information about how you can manage a VO, what you should do, and the flows to ban or include users. In the next couple of slides we try to put together a very high-level visualization of how we see things working in the context of EOSC. Of course, many details are not there, but we would like to take one step back at this point and give you the high-level view of how we see people interacting with EOSC. Basically, it should all start with a community that wants to use resources from the European Open Science Cloud. A representative of the community, probably a manager of the community, will go to a community AAI service, register on that service, and request the creation of a virtual organization for the use of her community. Next slide, please.
When the virtual organization is created, the VO manager will be able to invite members. There will probably be email invitations, and multiple other ways to invite members, depending on the tools that are used. Then they will be able to actually manage the virtual organization: create groups, assign roles to users, so that eventually there is a structure in this VO that represents how the collaboration actually works. Next slide, please. So now we have a team of people, it could be 10 people or thousands of people, grouped within the virtual organization. Then the next step, and of course this does not have to be sequential, but in these slides we take it in a sequential order, is that you need to have services. I mean, there is no point in having this organization without services; it is on purpose that we create all these AAI structures, so that users, communities, and collaborations can share access to services and share resources in a secure manner. So the typical thing is to start adding their own community-based services: go through the steps in a more or less automated manner, add metadata for SAML services, connect OpenID Connect services, enable the services. Next slide, please. At this point, basically, we have a virtual organization, a number of users, and a number of services connected to that virtual organization, and users can start to collaborate and access those services. But as we have been discussing, there is the case where many communities would also like to use resources offered by other resource providers, not resource providers that are strictly within their community. So, next slide, please. The way we see things is that, again, a community representative should be able to go to the EOSC marketplace, find services that would be of interest to the community, request access to these services, and enable them for the community. So, next slide, please.
When this is done, effectively the VO has expanded its access to services. Now you have your own community services connected to your VO, but also services provided by other resource providers, and it might be the case that those services sit behind other infrastructure proxies. Now, what we want to see happening is that this whole linking of services should be more or less automatic. The moment a community is authorized to use a service from a resource provider, the whole technical integration should be out of sight for the community itself; it should happen in a more or less automatic manner. In the same way, like what Jens was describing, when users become members of the community they accept some policies and terms of use, and all this information has to flow transparently from the community AAI service to the end services, regardless of whether they are directly connected to the community or connected via infrastructure proxies. The complexity of the underlying infrastructure is not something that should be exposed to the users and the community itself. And apart from policies, the same goes for attributes, access rights, and all these things. So, the moment a group of people has been granted access to a resource, they should be able to access it without worrying about how things are mapped across resource providers, et cetera. This is the high-level view of how we see things happening. We are not there yet; right now there are many steps in this process that are still manual, especially when we integrate resources from external resource providers, but we are working in this direction to make the integration seamless. And back to you now, Nicolas. Thank you, Christos. There is also a question in the chat from Raymond about how to manage access rights in a VO. Raymond, there is a slide describing this; the next slides are about authorization.
So after we finish with those slides, if you still have a question, we can discuss it, if that's OK with you. Super. So, authorization. Basically, we can distinguish two main types of authorization information. The first one is user attributes, meaning group and role information, assurance, and the affiliation of the researcher within their home organization or the community. And then there are capabilities, which define the resources that a user is allowed to access. This can also mean defining specific actions that the user is allowed to perform on a resource. Capabilities essentially allow authorization information to be exposed in a compact way. Could we have the next slide, please? Now, although we have this distinction, it does not mean that capability-based and user-attribute-based authorization cannot coexist. Let's take a commercial sharing application as an example; you are all familiar with Google Docs. When you share a document and allow anyone with the link to access it, you are essentially giving out a capability, while if you share the document with specific people based on their email address, that is an example of user-attribute-based authorization. So having these two different models does not mean that they cannot be combined. Next slide, please. Now, let's see some examples of what we mean with each type of attribute. Group and role information: the user is a member of group X; or we can express that the user is a member of a given group with a specific role, for example manager. Assurance information: this includes, for example, the information that the user's identifier is never reassigned, globally unique, and persistent; or that we know the name of the user following a face-to-face ID vetting process.
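To make the document-sharing analogy concrete, here is a small hypothetical sketch, with all names and values invented, showing how a service could allow access either through a user attribute (the viewer's email is on the share list) or through a capability (the viewer presents a valid share-link token), mirroring how the two models coexist.

```python
# Hypothetical document-sharing access check combining both models.
SHARED_WITH = {"alice@example.org", "bob@example.org"}   # attribute-based list
LINK_TOKENS = {"d0c-s3cret-link"}                        # capability tokens

def can_view(email=None, link_token=None):
    """Grant access if EITHER authorization model succeeds."""
    if link_token in LINK_TOKENS:   # capability: possession of the token grants access
        return True
    if email in SHARED_WITH:        # attribute: the caller's identity is checked
        return True
    return False

print(can_view(email="alice@example.org"))     # True
print(can_view(link_token="d0c-s3cret-link"))  # True
print(can_view(email="eve@evil.example"))      # False
```

The design point is that the capability branch needs no identity lookup at all, which is exactly what makes capabilities compact to expose and cheap to enforce.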
Or that the affiliation information we get from the user's home organization is refreshed at least once a day, or once a month, depending on the home organization's practices. Then, for affiliation: for example, someone is a member of the faculty at University X, or the user is a member of a community. Next slide, capabilities. With capabilities, we express things like: the user can access a resource named example-resource.org; or, in a more complex scenario, the user is allowed to perform create and delete actions on a storage resource, which sits under a resource called vm-dashboard. Next slide, please. So let's take some of this information and get into more details. For group membership and role information, for which there is no standard way of doing this in OpenID Connect or OAuth2, we have a guidelines specification from AARC, which standardizes how to express this information and, most importantly, how to exchange it in an interoperable way across different infrastructures. Based on this guidance, which is followed by the EOSC AAI, we have the eduPersonEntitlement attribute, in the case of SAML, or the eduperson_entitlement claim, in the case of OpenID Connect, which we use to encapsulate this information. And this group membership follows the URN syntax that you see here. Essentially, we encode the name of the group, or there can also be a hierarchy of groups. Optionally, if the access model requires users to have specific roles that give them different access rights, there is also the possibility to encode role information in that attribute. And then there is also this part, the group authority part, that identifies the source of this group information, where it is being managed. In the next slide, you can see some more realistic examples of how these group membership and role information attributes look in practice.
In the first example, we have a basic case where this type of information encodes the membership of a user in a VO called example. In the second example, we have a group hierarchy: there is the VO, then there is a parent group, and then there is a subgroup, or child group. Now, if there is also role information for a user who is a member of a group, there is this additional component that allows us to encode the role of the user. If you go to the next slide. Regarding affiliation information, as we mentioned before, we have two basic types of affiliation: the affiliation of the researcher within the home organization, remember the example, faculty at the university, and the affiliation within the community. Again, we have a guidelines specification document from AARC that standardizes how to express these two different types of affiliation, particularly in cross-infrastructure scenarios. In summary, there is the voPersonExternalAffiliation attribute, which is intended to convey the affiliation within the user's home organization, and there is eduPersonScopedAffiliation for expressing the affiliation within the community. Similarly, in OpenID Connect, we have claims specifically for these two types of affiliation. What is interesting here is to see the flow of this information from the researcher's home organization, through the community, up to the end service that they are trying to access. We have the affiliation within the home organization, which comes from the institutional identity provider, and then we have the community AAI, which encodes this information in the voPersonExternalAffiliation attribute. The community AAI is also able to assert the researcher's membership in a given community. So you can see here an important part of the role of the community AAI, which is to enrich the researcher's profile with community attributes.
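As a rough illustration of the group-entitlement syntax just described, the sketch below parses an AARC-style entitlement value into its VO, group hierarchy, optional role, and group authority. The concrete URN value and namespace are invented for illustration; the actual AARC guideline (G002) defines the authoritative grammar.

```python
def parse_group_entitlement(urn: str) -> dict:
    """Parse an AARC-G002-style entitlement of the form
    urn:<nid>:<ns>:group:<vo>[:<subgroup>...][:role=<role>]#<authority>
    into its VO, group path, optional role, and group authority."""
    value, _, authority = urn.partition("#")
    parts = value.split(":")
    idx = parts.index("group")        # everything after 'group' is the path
    path = parts[idx + 1:]
    role = None
    if path and path[-1].startswith("role="):
        role = path.pop()[len("role="):]
    return {"vo": path[0], "groups": path, "role": role, "authority": authority}

# Invented example: VO 'vo.example', a parent and child group, role 'manager',
# asserted by the (hypothetical) group authority 'aai.example.org'.
ent = "urn:mace:example.org:group:vo.example:parent:child:role=manager#aai.example.org"
print(parse_group_entitlement(ent))
```

A service receiving many such values can then check membership with a simple lookup rather than string matching against the whole URN.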
So now we know that the user is a faculty member at the home organization and also that they are part of this example community. Then this information can pass through the infrastructure proxy and end up at the service that the user is trying to access. If you go to the next slide, thank you. So, capabilities. Again, to standardize the way we express capabilities, we adopt a guidelines specification document coming from the AARC community, called AARC-G027, which defines which attributes to use to convey this information and how it should be encoded. Again, we have the eduPersonEntitlement attribute, used in the case of a SAML service, and the eduperson_entitlement claim in the case of OpenID Connect relying parties. Again, we have a URN-based syntax. Now, instead of the reserved word "group", there is this "res" literal, which denotes an entitlement conveying capability information. Then there is, of course, this part which identifies a given resource. In scenarios where there are hierarchies of resources, there is also the ability to specify one or more child resources. Then, if more fine-grained control is needed, there is also the possibility to encode specific actions. And as with the group specification, there is also this component that identifies the source of the information. In this example, you can see how a capability is encoded in a more realistic scenario: there is the parent resource identified here, this is a child resource, and in this example we also include the specific actions that the user is able to perform on the identified resources. Next slide, please. Now, how can this information be used to actually manage access? There are three main models. The first one is the centralized policy information point. The second one is centralized policy management and decision making. And the third one also adds enforcement. Let's go and see, one by one, how each of these authorization models works.
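Analogously, a capability entitlement following the syntax just described replaces the "group" keyword with the "res" literal and may carry child resources and a list of actions. The sketch below splits such a value apart; the resource names, actions, and authority are all made up for illustration, and the real AARC-G027 document defines the exact encoding.

```python
def parse_capability(urn: str) -> dict:
    """Split an AARC-G027-style capability of the form
    urn:<nid>:<ns>:res:<resource>[:<child>...][:act:<action>,<action>...]#<authority>
    into its resource path, action list, and issuing authority."""
    value, _, authority = urn.partition("#")
    parts = value.split(":")
    idx = parts.index("res")
    rest = parts[idx + 1:]
    actions = []
    if "act" in rest:                      # optional fine-grained actions
        act_idx = rest.index("act")
        actions = rest[act_idx + 1].split(",")
        rest = rest[:act_idx]
    return {"resources": rest, "actions": actions, "authority": authority}

# Invented example: parent resource 'vm-dashboard', child resource 'storage',
# actions 'create' and 'delete', asserted by 'aai.example.org'.
cap = "urn:mace:example.org:res:vm-dashboard:storage:act:create,delete#aai.example.org"
print(parse_capability(cap))
```

Note how compact this is compared with shipping the user's full group and role profile: the end service only needs to match the resource path and action against the request at hand.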
So if you go to the next slide, this is the simplest authorization model. In this scenario, we have a user that accesses a service, and the user is redirected through the proxy to the IdP. They authenticate, so the information from the home organization reaches the proxy, the yellow box here. Then, as we mentioned before, this identity is enriched with the community-managed attributes. So the proxy, in this centralized policy information point model, is responsible for aggregating all the relevant authorization information for the user, for example the groups, the roles, the affiliation within the community. All this information is then sent to the end service, which is responsible for processing it, making an authorization decision, and enforcing it. So in this model, the community AAI acts as an aggregator of the authorization information, and it is the end service that processes this information, takes the decision, and enforces it. Let's go to the next model in the next slide, where the community is also responsible for the decision-making process. Typically, this means that after the proxy aggregates all the authorization information, for example the groups of the user and their roles, instead of giving this information to the end service and expecting the end service to take a decision, it is the proxy that processes this group information and, in the end, generates capabilities that are sent to the end service. Of course, this model simplifies the end service's life, because the service is then only responsible for enforcing the capabilities sent by the community AAI. Having said this, on the other hand, it means less control for the end service.
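The difference between the two models can be sketched by looking at what ends up in the token the end service receives; the claim values below are invented examples, and the simple `:res:` check is only an illustration of how the responsibility shifts between proxy and service.

```python
# Hypothetical decoded token payloads illustrating the two models.
# All subject and entitlement values are invented examples.

# Model 1, centralized policy information point: the proxy only aggregates
# the user's groups/roles; the end service decides what they allow.
pip_token = {
    "sub": "user123@proxy.example.org",
    "eduperson_entitlement": [
        "urn:example:community:group:researchers#aai.example.org",
    ],
}

# Model 2, centralized decision making: the proxy evaluates the groups
# itself and issues ready-made capabilities; the service only enforces them.
capability_token = {
    "sub": "user123@proxy.example.org",
    "eduperson_entitlement": [
        "urn:example:community:res:storage:act:read#aai.example.org",
    ],
}

def service_decision(token):
    """Sketch of what the end service has to do in each model."""
    entitlements = token["eduperson_entitlement"]
    if any(":res:" in e for e in entitlements):
        # Capability model: just enforce what the proxy already decided.
        return "enforce"
    # PIP model: map groups to permissions locally, decide, then enforce.
    return "decide-and-enforce"
```

In the first case the service carries the full authorization logic; in the second it only enforces, at the cost of less local control, as noted above.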
So depending on the end service's requirements, they can go for either the first model, where the proxy is just an aggregator of the information, or, if they want a simpler approach and want to delegate the decision-making process to the community AAI, they can go for the capability-based approach. And now the third model, if you go to the next slide. In practice, it's not a different model; it can be seen as an extension of the previous two. In this model, we have a proxy aggregating authorization information about the user, and then, based on this information, or after generating the capabilities, the proxy can also enforce a decision: it can completely block the access. This is most relevant in the scenario of a security incident, where a given user or VO needs to be blocked. This model allows the community AAI to centrally suspend access to services for a given user. And again, in this model we simplify the life of the end services, because the community AAI can enforce the access decision. Next slide. So, after having gone through all the details of the authorization information: Raymond, did we answer your question? Perfect, thank you. So in this next slide, we have some examples of how this works in practice. We've chosen examples where users access infrastructure services through the community AAI. In this first example (Andrea, you can start the video) you can see a user accessing the EGI Notebooks service. After the user selects to sign in, they are redirected to the infrastructure proxy, in this case EGI Check-in. Then they select the community AAI, in this case eduTEAMS. The user identifies their home organization, they provide their credentials and log in, and they are back at the community AAI service. They see the information released by the community AAI service, and then they are redirected to the proxy.
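The centralized suspension idea described above can be sketched as the proxy consulting a centrally managed block list before releasing anything; the in-memory sets and names below are invented stand-ins for whatever mechanism a real proxy would use.

```python
# Minimal sketch of the centralized enforcement model: before releasing
# attributes or tokens, the proxy checks a centrally managed suspension
# list, so a misbehaving user (or a whole VO) can be blocked in one place.
# The in-memory sets and identifiers are hypothetical examples.

suspended_users = {"mallory@proxy.example.org"}
suspended_vos = {"compromised-vo"}

def proxy_login(user, vo):
    """Return the attributes to release, or raise if access is suspended."""
    if user in suspended_users:
        raise PermissionError(f"user {user} is suspended by the community AAI")
    if vo in suspended_vos:
        raise PermissionError(f"VO {vo} is suspended by the community AAI")
    # In a real proxy, the community-managed attributes would be added here.
    return {"sub": user, "vo": vo}
```

Because the check happens at the proxy, a single user can be blocked without suspending the whole VO's activity, which matches the incident-containment requirement mentioned earlier.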
And lastly, they are able to access the service with their community information, with their community-managed authorization attributes. The next example is similar. Again, we have a service being accessed through a community AAI; in this example, it is the EGI Applications Database. The user is redirected to Check-in. They choose ELIXIR AAI as their community AAI. Then, as the user selects their organization (in this example, it's Google), they go back through the community AAI to the infrastructure proxy, and they end up at the end service with their community profile. I see there is a question in the chat: which authorization model do you recommend in the case of a community comprising different services that come from different research domains? This depends greatly on the types of services. Ideally, a community AAI should support all three models. Again, if the goal is to make the end services' life easier, then capabilities are the simplest approach, because that means less burden on the services. On the other hand, we haven't prepared a demo for this, but it is already possible to support command line access. In the upcoming training webinars, where we will show how each community AAI service works, we have already identified command line access as one of the topics that should be covered, so you will be able to see more details of how this can work. And yes, there is a question about the SDK: for Check-in, there is no JavaScript SDK available right now. As Raymond says in the chat, one way to support SSH access is through SSH keys that can be exposed through the AAI. If you go to the next slide, Andrea. Yeah, so we're now at the questions and answers. Please use the chat or Slido, or raise your hand. Nicolas, we also have a question in Slido.
Can you perhaps share your screen with Slido, Pavel? Or can you try that? Oh, I have to. Andrea, there is also a link to Slido; if you go to the next slide, let's see if you can share this. Yeah, it is the first one for communities, the last room in the list. OK, this is the poll where we have identified some topics for the upcoming trainings, so please vote here. The questions are on the left, next to the polls. Exactly. So I think I can unmute. Can someone unmute Hannah? OK, let me just do that. Yeah, OK, she's unmuted. Great, so I'm wondering about situations where we have a service behind an infrastructure proxy A that wants to use OAuth delegation to push some data into a service that's behind a different proxy, which may or may not have a common community proxy at the top of the chain. Is this something that's being considered? Yes, yes. There is actually an interim solution, which is based on extending the OAuth 2 token introspection endpoint, and there are already some implementations that support this extension. Essentially, the introspection endpoint is able to forward the introspection request to the other infrastructure proxy, so that the end service doesn't need to connect to multiple proxies. One way to deal with this situation would be for the service to connect to all the proxies, which of course is not the ideal solution. So in this interim solution, the service uses the introspection endpoint of its own proxy, which will then forward the request to the other infrastructure and return the validity status of the token to the service. But the long-term solution should be OpenID Connect Federation, which will essentially allow services to connect to multiple infrastructures in a scalable way. OK, great, thank you. Are there any other questions?
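The forwarding behaviour just described can be sketched roughly like this; the `Proxy` class, the issuer registry, and the in-memory token store are invented stand-ins for real RFC 7662-style introspection endpoints, with no network involved.

```python
# Sketch of the interim cross-infrastructure solution: the service always
# introspects tokens at its own proxy, and the proxy forwards the request
# to the issuing proxy when the token is not its own. Everything here
# (classes, issuer URLs, token values) is a hypothetical stand-in.

class Proxy:
    def __init__(self, issuer, peers=None):
        self.issuer = issuer
        self.peers = peers or {}      # issuer URL -> peer Proxy
        self.active_tokens = {}       # token value -> claims

    def introspect(self, token, token_issuer):
        """Answer locally, or forward to the peer proxy that issued the token."""
        if token_issuer == self.issuer:
            claims = self.active_tokens.get(token)
            return {"active": claims is not None, **(claims or {})}
        peer = self.peers.get(token_issuer)
        if peer is None:
            return {"active": False}  # unknown issuer: treat as invalid
        return peer.introspect(token, token_issuer)

# A token issued by proxy B can be validated through proxy A alone.
other = Proxy("https://proxy-b.example.org")
other.active_tokens["tok-42"] = {"sub": "alice"}
home = Proxy("https://proxy-a.example.org",
             peers={"https://proxy-b.example.org": other})
```

The point of the interim design is exactly this single point of contact: the end service only ever talks to `home`, even for tokens minted elsewhere.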
There is this comment from Raymond: who decides how and when AAI systems can be chained and stacked, and what does it mean? OK, so this stacking or chaining. Perhaps, Andrea, can you go back to the slide with the AAI architecture at the beginning? OK, so let's take the second example, where we had members of the ELIXIR community accessing EGI services. In this example, we have the infrastructure proxy, EGI Check-in, and then there is the ELIXIR AAI acting as a community AAI. Technically, this means that the ELIXIR AAI and the infrastructure proxy EGI Check-in need to exchange metadata. So effectively, the ELIXIR AAI is connected as an identity provider to the infrastructure proxy; technically, this is how the integration takes place. Of course, there is also the part that Christos described, where the community manager requests resources, for example by going through the EOSC Marketplace. So it is not just about establishing the technical trust; it's also about allocating resources to a given VO or community. If I can add something: the point is that typically communities choose the solution that best fits their requirements to manage their own community. What is provided here is an interoperability framework to put things together, so that it's possible to define trust relationships between communities and infrastructures, and there are the technical means to achieve interoperability at the attribute level, so that authentication and authorization information can be exchanged across the various levels and understood by services without requiring changes to the services' code. Nicolas, I guess Peter has a question. Yes, thanks.
What I'm actually wondering a little bit is about scenarios where we would like to use automated processes behind which there is not directly a person, but which would still like to use the authentication system to access restricted resources, for example. How would this work? I think this is what robot certificates were for in the old-style X.509 world, and I wonder how this translates into the new token architecture. I could imagine a scenario where some persons have the role or the capability to sign these robot-style tokens, but then where would they be created and how would they be renewed? Because usually you won't have long-term persistence of these, and you don't want to re-upload the token every day or something like that. Can I answer? Sure, go ahead. So, robot certificates were typically bound to a specific application, and the idea was that the computation was done by a specific application or a group of applications. In the WLCG context, for example, data management is typically done with a specific robot certificate. OAuth and OpenID Connect provide the concept of service accounts for this, together with flows, basically protocols, that can be used to make sure that these service accounts always have a fresh token when they have to interact with a downstream service. So in a way, the service account is a first-class citizen in the new model, while in the former model it was more of a trick: we had to invent a special kind of credential, the robot certificate, to accommodate this. So this is how we are looking at enabling this kind of scenario, by leveraging service accounts. And then, of course, this depends on the proxy implementation.
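A rough sketch of how such a service account could fetch a fresh token is the OAuth 2.0 client credentials grant (RFC 6749, section 4.4); the token endpoint, client ID, secret, and scopes below are invented examples, and the function only builds the request rather than sending it.

```python
# Sketch of a non-person "service account" obtaining a fresh token with the
# OAuth 2.0 client credentials grant, the token-world replacement for the
# robot-certificate trick. Endpoint, client id, secret and scopes are
# hypothetical examples.

from urllib.parse import urlencode

def client_credentials_request(token_endpoint, client_id, client_secret, scope):
    """Return the HTTP request a service account would send for a new token."""
    body = urlencode({
        "grant_type": "client_credentials",
        "scope": scope,
    })
    return {
        "method": "POST",
        "url": token_endpoint,
        # The client authenticates with its own credentials; no user and no
        # operator intervention is involved, so renewal can be automated.
        "auth": (client_id, client_secret),
        "headers": {"Content-Type": "application/x-www-form-urlencoded"},
        "body": body,
    }

req = client_credentials_request(
    "https://proxy.example.org/token", "robot-dts", "s3cret",
    "storage.read storage.write")
```

Because the grant needs no human in the loop, a long-running workflow can simply repeat this request whenever its current token is about to expire.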
You can also link attributes to service accounts, attributes that could convey particular privileges, for example. Regarding long-running computation and the ability to get fresh tokens: there are flows in place that allow a system to request a fresh token whenever it needs to interact with the downstream service, without requiring any intervention or explicit authentication from an operator. OK, and do these service accounts live in the community AAI or in the multi-tenant AAI, without having a link to eduGAIN and so on? Yeah, I would say that the service account is community specific, so it lives in the community AAI. And whether it is a dedicated proxy instance or the community is served by a multi-tenant instance doesn't really make a difference, I would say. Because even in a multi-tenant community AAI there are boundaries: the service account will be bound to a given VO served by that multi-tenant service; it will not be shared across all the VOs. OK. Then there is the next question from Hannah, about the user experience. We have had a lot of feedback around this double discovery. What we are currently working on is the IdP hinting specification, which will allow services to provide hints about the identity provider that should be used. Effectively, this will allow the end services to signal which community AAI, for example, should be used for authentication. So when the user goes from the end service to the infrastructure proxy, the discovery process will be transparent, because the service can send a hint so that the proxy knows where to send the user. And apart from this hinting support, we are also looking into more advanced scenarios where it would be possible to send hints, for example, to narrow down the list of identity providers.
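As an illustration of the hinting idea just described, here is a small sketch of a service appending a hint to its login URL so the proxy can skip the discovery page; the parameter name `idphint` and all URLs are assumptions for illustration, not details taken from the specification.

```python
# Sketch of IdP hinting: the service adds a hint to the authentication
# request so the proxy knows where to send the user and the discovery
# step becomes transparent. Parameter name and URLs are hypothetical.

from urllib.parse import urlencode, urlsplit, parse_qs

def login_url_with_hint(proxy_login_endpoint, idp_entity_id):
    """Build a login URL carrying a hint for the preferred IdP."""
    query = urlencode({"idphint": idp_entity_id})
    return f"{proxy_login_endpoint}?{query}"

url = login_url_with_hint("https://proxy.example.org/login",
                          "https://idp.university.example/idp")
# The proxy decodes the hint and redirects straight to that IdP.
hinted = parse_qs(urlsplit(url).query)["idphint"][0]
```

The same mechanism could, in the more advanced scenarios mentioned above, carry a filter rather than a single entity ID, narrowing the discovery list instead of bypassing it.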
You might have a use case where it makes sense for a service to be accessed, for example, only by R&S and Sirtfi-certified IdPs. So with this hinting specification, there is support for signaling, for example, that the discovery should show only Sirtfi IdPs, or filter on a given entity category, or allow only a given subset of IdP entity IDs. We know that the user experience with the multiple discovery steps is an issue, and with this IdP hinting specification we will hopefully make the login flow more straightforward for end users. OK, thanks. Then the next question. No, I think we already covered non-interactive, non-person access. How do you see the relation of communities and generic services such as Zenodo? Jose, do you want to pronounce the question yourself? Yeah, I was trying to unmute as well. I mean, just based on this diagram you have there: I'm thinking of Zenodo, which would integrate with the EOSC AAI. I was listening to the whole presentation about communities, and I was not very sure how this fits a service like Zenodo, which is generic and not targeting a specific community. So yeah, it was a generic question, right? Because in another slide, maybe the next one, you have an arrow that goes directly to the generic service; I don't remember which number. Perhaps, Nicolas, I can take this. This goes back to what I was presenting as where we want to go. Right now, it is true that if a service wants to connect to one of the community AAIs, it has to make this direct connection to each community AAI, which is at best suboptimal. But this is where we are right now. What we're working towards is the notion that services like Zenodo will be able to connect once to the EOSC AAI and be made available potentially to all community AAIs that are part of EOSC.
And we're going in this direction by embracing the concept of a federation, an EOSC federation, where basically services will be able to join and then be made available to all the other participating entities within EOSC. There is an interesting question in Slido about the vision for AAI in the next 30 years. I think for the next 30 years the answer is simple: we should not be talking about it. Well, there is an aspect of this question that is very relevant, which is sustainability. One interesting thing that can be said about sustainability is that by relying on standards and technologies that are also used outside of our context, we are reducing the risk on our side. If, as was done in the past, we develop our own very specific and very tailored tools that basically work only in academia, then we are exposed to the full cost of maintaining these tools and ensuring that they are sustainable. If instead we take a different approach, where we try to embrace as much as possible existing technologies that are also used outside of our context, we reduce the risk of being the only communities in charge of maintaining the services we put in place and carrying their sustainability. For sure, I think that EOSC will also be a sustainability channel for the central services: since the various proxies will be a key asset, a key component, in the computing infrastructure for research communities, EOSC should act as a sustainability channel. Frankly, a sustainability problem doesn't need 30 years to surface; you typically see problems much earlier. We have seen such problems in WLCG even after 10 or 15 years, with components that were discontinued and that forced us to start the migration to more standard technology in order not to carry the full cost of the sustainability on our shoulders. Exactly, Andrea.
And let me add one aspect to this: going towards standards and market standards. Let me open a parenthesis here: I think the academic and research community has been a pioneer in this field. When things started happening, like 20 years ago, there was nothing like this in the commercial world; many things happening at the time were really pioneering this whole area of trust and identity. Now we see the whole ecosystem maturing. We see standards coming up, not only from the commercial world but from a combination of the commercial world and the research communities with all this experience, and we see commercial offerings. They're not here yet to support this kind of use case, but I think in the years to come we'll see things like that happening. One of the very important shifts we have seen during the last decade is going away from the notion that I have to run my own service, operated by myself, in my basement, and instead starting to rely on consuming components like this as a service, infrastructure as a service, AAI as a service, relying basically on the commoditization of these components. So I think this is where we're heading with the AAI: it becomes more of a commodity day after day. We already have SaaS providers offering solutions, and we'll have more of them in the future, so I think we're going in the right direction. And there's a related issue: if you have an infrastructure that is meant to last 30 years, how do you identify users over a lifespan of 30 years? That's actually a very hard problem if, for example, you're archiving data now that users need to be able to access 30 years from now. There's not very much that survives 30 years, except perhaps biometrics.
So it's kind of a hard problem, but if the AAI is sustainable, then at least in theory you can daisy-chain identifiers for users throughout that time and ensure that they still have access to their stuff 30 years from now. I guess the only way to keep it sustainable is, as Andrea said, to leverage the technology and just follow the development; we cannot imagine how technology will look in 30 years. Nicolas, did you ask people to vote? Because I see some votes in the poll, but not so many so far. Yes. So the idea, as mentioned before, was to organize some dedicated webinars for the specific services that are part of the AAI. This session was meant to be an introductory training on the concept of the VO, the community AAI, and the different models for authorization and for managing access, while we now want to go into more detail, and there it makes sense to show how each service can be leveraged to support these use cases. For this we have a poll in Slido, where we have identified some topics, because there are other things that need to be covered. So please help us with your vote, so that we know how to prioritize what we will be doing in the upcoming webinars. Nicolas, can we bring the results up on the screen as well? I don't know who can share it. I can also try. I think Rob can do that, or I can try, since regular users don't see the results. OK, what did you want to show specifically? The results of the poll. OK, I think I did it already. Ah, yeah, do you see it? Yes. So these are the current results. Please vote here or suggest additional topics, because we might have missed something. It seems that the most popular topic is integrating services in EOSC. But we can still do more than one; we just want to know how to prioritize them. Yeah. So if there are other topics you'd like to suggest, you can also raise your hand and discuss.
I'll put it in the chat. Have we gone through all the questions from the chat? No more questions. So perhaps, before we wrap up, let me mention again that the slides will be linked from the public agenda of the event, and at the end there are another 50 slides or so that go into more detail on the policy aspects, because the policy requirements are a key aspect when setting up a VO. There are also more details on how to set up a VO, the required roles, and many aspects that of course we couldn't go through during this training. So please have a look at this supplementary material and give us your feedback. OK, I think we can already conclude from the results that the most missing part of the documentation is probably the practical procedures, practical instructions on how to integrate a service, and this is quite an important topic to cover, among others. Also in the presentation you can find links to the documentation, where we describe how to integrate services with the different EOSC AAI services. There are also links to documentation for end users and for community managers, so please go to the wiki, where you can find detailed information for each tool. There is also a link to the roadmap, which identifies some of the current gaps, for example the user experience with the multiple discovery steps that Hannah mentioned, or delegation when you have services connected to different infrastructure proxies. These gaps are described in our roadmap page, where there is also information on when we expect to have an implementation of a solution for each of them. OK, we finished five minutes early, which, to be honest, I didn't expect; I thought one and a half hours would not be enough. This has never happened in an AAI session; this is unprecedented. This is the first time, that's for sure. So we have to understand what the problem was. People aren't able to talk; we should unmute everyone. That's what I did.
Be careful what you ask for. OK, I think we can wrap up. Thank you very much, everyone. You're welcome. Thank you to the people who asked questions. Thanks everyone. Thank you for the excellent session. Bye bye, and thank you for attending.