Well, thank you for coming to our presentation. This is "Socialising the Elephant with the Rest of the Animals". So I don't know who came because of the elephants or the animals, or whether it was more big data or Egeria. Who is using Egeria at the moment? Is it mostly getting familiar with what Egeria is about? Is that what you're interested in? And your roles: is it information architecture? Is it developers? What sort of roles do you have? Database, data engineering. OK, I'm just getting an idea.

So this is all about integrating big data using Egeria. We're going to talk about the various styles in which metadata is represented and the different ways of managing your information. We'll give an introduction to Egeria, and then we'll talk about how big data sits with these things and how Egeria can integrate big data into the ecosystem.

My name is David Radley. I work for IBM. I'm an open source committer on Apache Atlas, a maintainer on Egeria, and one of the founders of the Egeria project. This talk came about because IBM and Atruvia did a project around Egeria and we got on really well. We created a lot of code that's now in production, so we're now jointly presenting around Egeria, and hopefully you'll see the magic in Egeria that we do. I'll hand over to Jürgen to introduce himself; he's going to talk about the styles of managing information and a bit about Atruvia and IBM, before I come back and do the introduction to Egeria. Okay, over to you.

Thank you, David. So welcome to our session. Let me introduce myself: I'm Jürgen Heimelt. My role is called technical architect, but I work in the enterprise architecture department. A few words about our company. It's called Atruvia, and I don't expect any of you to know it. It's a German company; we are an IT service provider and we provide the full service stack for banks in Germany, the so-called co-operative banks. We have about 800 customers, mostly co-operative banks, which we serve. We provide them with a core banking system and of course all the analytics services, beginning with central data warehouses, then data lakes and newer architectures. I want to tell you something about those architectures in the next few slides. Just to give you an impression of how large or small we are (it depends on what you compare with): about 8,000 employees in the whole group and 1.7 billion euros of revenue.

Why do we work with Egeria? We are an IBM customer, one of IBM's biggest in Germany, I think. As we were working on metadata management and data governance, we were looking at the technologies out there. So we came to IBM, IBM partnered with us, and IBM suggested at that point: there is this open source project called Egeria, and you can fulfil all your requirements with it, especially requirements regarding GDPR and compliance with the other financial regulations. As you may know, the banking sector is heavily regulated, especially in Europe; I think even more so than in the US or Canada. We need transparency about our data, and that's the reason we started with Egeria as the central point of exchange and integration of metadata, and the central point of integration for our data governance approaches.

So let's start with a bit of how data management has evolved in recent decades. It has been much the same at Atruvia.
There was one key point in time when the data warehouse was invented: 1988. Barry Devlin was the first to use the term "data warehouse". It was the first time someone said we cannot analyse data well while it sits in transaction-oriented databases and systems, so we have to pull it out of those systems to analyse it properly. Barry Devlin coined the term, and then of course people like Bill Inmon and Ralph Kimball took it up and built everything around the architecture of the data warehouse. Then somewhere around 2011, I think, someone said the data warehouse is not enough for us: it has deficits, and we need a new architecture. This was called the data lake. That's what we're still working with, and it's where Apache Hadoop rose as a data management platform, the most-used platform for data lakes. We've also got a Hadoop platform running, still running. But we are also looking at new approaches and new architectures in the data area. At least two new architectures have come out in the last few years. One is the data fabric, a term very strongly promoted by IBM, though other companies have also built data virtualisation technologies. The second came out maybe a year or so later and is called the data mesh. I'll tell you a few details about all of these architectures, their benefits and their drawbacks.

So, when we started with the data warehouse (we're using all these animals as pictures for our architectures), it's like an old tortoise nowadays. The data warehouse, as I said, began at the end of the 80s and developed further and further. It's based on one big system. On the operational side, at the start, most of the time we only had one core system: in our case the core banking system, in other cases an ERP system or something like that. So we didn't have that many different systems. At first it was quite easy to integrate this data, plus maybe a little external data or data from other systems, into one central data warehouse. Most of the time this was batch oriented: normally once a day, overnight, large batch runs updated the data warehouse.

The data warehouse had one big advantage: it was quite easy to govern. There are only a few ways for data to get into the warehouse, and we can govern those ways. There is one big data model, and we can take care of that model and make sure only relevant data comes in, and so on. But this was also one of the biggest problems of the data warehouse: it doesn't scale. We have a very complex data model, normally in third normal form, and complex, costly ETL processes, and every time we want to extend the warehouse it takes a very high effort, because we have to change the data model and develop new ETL processes or maintain existing ones. Everything depends on other parts of the model, and it's very hard to change such a warehouse. That's why it took ages to get new data into it. That is also the reason the so-called data lake was invented. The data lake was a new approach that differed from the data warehouse in that it takes all the data it can get, no matter how it's modelled, no matter what the schema is.
Just push it into the data lake. Then, in a later step, we put a schema on it and try to make sense of the data and analyse it. It's more like those small turtles swimming in the lake, but you have to be careful that the data lake doesn't become a data swamp, where nobody knows what's in it and nobody knows how to handle the data. What's also difficult there is governance over the data: data governance and metadata management are not easy, because we have many different sources to integrate. On the operational side we have new approaches, new architectures going in the direction of microservices, for instance. We don't have just one system; we have many systems to integrate. We have more external data to integrate as well, for instance social media data and market data. It is not easy to control, in a data governance sense, which data comes into the lake. The speed goes up, but the effort for data governance, metadata management, data quality management and the like is much higher, at least if we do not manage to automate it. That's our topic today.

Coming to the two newer approaches: the data fabric builds heavily on data virtualisation technology. The idea is to leave the data where it is and grab it for analytical purposes on the fly. That means we have a virtualisation layer on top, and we unify the data across the enterprise without moving it; we do all the work on the fly. This can be done across multiple cloud environments, so it isn't limited to on-premise technologies or to one cloud; it can take data out of multiple clouds as well as the on-premise data, which we also have. The challenge is that, because everything is on the fly, we have to integrate and cleanse the data on the fly, which is far more complex than doing it in a batch environment like the data warehouse. To make this work we need certain governance policies in place, for instance a common way of identifying business objects. That's one point we have to take care of. There's also the polyglot storage: different sources with different kinds of interfaces, such as APIs, and we also have to integrate event streams, SQL databases and of course files. That's a big challenge. Performance is an issue too. You can imagine: if we leave all the data where it is and integrate it on the fly, that can become a bottleneck. Performance was one of the reasons that, in the data warehouse and data lake architectures, we moved the data into another storage technology, and that can't be done here. Tools provide caching mechanisms to work around the performance problems, but you always have to monitor performance to decide which caches to create, and so on.

The last one, which I must say is my favourite, is the data mesh. Does anyone know data mesh? Some do, some not. Data mesh is quite a new approach to managing data, going in the same direction as the microservices architectures and the domain-driven design approaches we have on the operational side. It says: don't necessarily leave the data physically where it is, but let the data be managed by the business domains. It's built on four pillars, and the first is domain-oriented data ownership.
That means every domain should have its own way to manage its own data, because they know the data best: how to interpret it and how to work with it. So they should be responsible for the data; they should own it. But how can they do that if they are not very data-oriented? They must have a platform, and the platform must be as simple as possible for them. So a self-serve data platform is the second pillar of the data mesh architecture. It must do the sophisticated work under the covers, so that even ordinary application developers who don't know a lot about data management can work with it.

The third pillar is data as a product. As we heard in the keynote from Angel Dias, one of the three principles was being product-centric, and it's the same with data. You should see data as a product and handle it as a product: define service levels for your data product, define how the data can be used. So we create these bundles called data products: one deployable unit consisting of the data itself, of course, but also the pipeline code to transform the data, and the metadata. You should describe your data products as well as you can in the form of metadata. The data products live in the business domains, which means they are at least logically separated, and that is of course a particular challenge for data management and data governance.

That's where the fourth pillar of data mesh comes in: federated computational governance. Federated means that, first of all, the business domains define a set of data governance policies that everyone has to follow, but in a federated way. There isn't one department defining all the policies; instead, the business domains jointly define global policies, plus local policies for their own domains. The second term, computational, means those policies should be expressed in code and executed against the data products themselves. That gives us automation, and we shouldn't define policies that we cannot check automatically. This is the approach (as I said, my favourite) that we are currently implementing at Atruvia, and I think we're making good progress. We use Egeria in this approach for all the metadata management and the federated computational governance.
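To make "computational" concrete, here is a minimal sketch of a policy expressed as code and run automatically against a data product's metadata. Everything in it (the DataProduct record, the Policy interface, the rules themselves) is invented for illustration; this is not Egeria's governance API, just the shape of the idea.

```java
import java.util.List;

// Hypothetical, simplified model of a data product's metadata.
record DataProduct(String name, String owner, String confidentiality) { }

// A computational policy is simply a check that can run unattended.
interface Policy {
    String name();
    boolean isSatisfiedBy(DataProduct product);
}

public class FederatedGovernanceSketch {
    public static void main(String[] args) {
        // Global policy agreed by all domains: every product must name an owner.
        Policy ownerRequired = new Policy() {
            public String name() { return "owner-required"; }
            public boolean isSatisfiedBy(DataProduct p) {
                return p.owner() != null && !p.owner().isBlank();
            }
        };
        // Local policy of one domain: payment products must be confidential.
        Policy confidentialPayments = new Policy() {
            public String name() { return "payments-confidential"; }
            public boolean isSatisfiedBy(DataProduct p) {
                return !p.name().startsWith("payments.")
                        || "confidential".equals(p.confidentiality());
            }
        };

        List<Policy> policies = List.of(ownerRequired, confidentialPayments);
        DataProduct product =
                new DataProduct("payments.transactions", "team-payments", "internal");

        // Executed automatically against every product, with no manual review step.
        for (Policy policy : policies) {
            System.out.printf("%s: %s%n", policy.name(),
                    policy.isSatisfiedBy(product) ? "PASS" : "FAIL");
        }
    }
}
```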
That's it from me on the high-level architecture, and I'll now pass to David to tell you how Egeria can be used for the data governance and metadata management in these architectures.

Thanks, Jürgen. So I'm going to tell you a bit about Egeria first, and then we can apply it to what Jürgen's just talked about. I'll put up the picture of Egeria. The problem a lot of organisations find is that they have lots of silos of data, owned by different vendors, in vendor formats, often locked into applications. There are only certain ways of accessing that data: through the application, through specific job roles, or the like. And yet they're being asked to govern all of their data. They're being asked to surface their data for analytics, whether it came from one vendor's application or another, or from a graph database, or from API information. You want to be able to handle all of this, see all of it, and be aware of what you have.

It's the "can I see all my data?" sort of question, without having to go to the DB2 people, then the Hadoop person, and then talk to the ETL engineers about what they're doing as well. The idea behind Egeria is that it has to be open, because if a vendor came up with it, say Google, then maybe Microsoft would say, I'm not going to work with Google, I want to work with Microsoft; and if IBM did it, there would always be another vendor saying, why don't you put it into ours? So it has to be open source: a collaboration of people and organisations who think this is a good idea, where nobody is unfairly put down and everybody is dealt with equally. In fact, I think we're dealing with the data in a similar way to how we deal with contributions in an open source community: with respect, and out in the open.

So the idea is that we define a set of types in Egeria for things like assets, policies, glossary terms, relational tables and columns, in a standard, vendor-agnostic, technology-agnostic way. If you want to be part of the Egeria ecosystem, you map your third-party tables, columns, assets and so on to the Egeria versions of those. You might think, what's the point in that? Well, you get two big advantages. First, you can integrate easily: instead of all those traditionally difficult point-to-point integrations, you've now got a common language to integrate through. This is a peer-to-peer architecture; there's no central place for the Egeria information, and the peers represent different metadata repositories, because what we're talking about is metadata, the metadata that describes all the things we're using. The second big advantage is that the metadata is now in a standard form. You can query an Egeria API and say: give me all the assets, give me all the tables, give me all the glossary terms, and you don't need to know where they live. Using an Egeria API gives you that view across your organisation's data.

Underneath this layer is what we call the metadata highway: the technical metadata, the sources of metadata, which may have odd connections between them. But of course the way you want to consume it is in a business layer, so we have things like data science APIs, DevOps APIs and asset owner APIs, because the objects used for integration are fine-grained, and the way we want to surface them is as business objects that make sense to the user. So that's the idea behind Egeria, and the more people get involved with it, the richer the whole environment is.
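As an illustration of that "one query, wherever the metadata lives" point, here is a sketch of calling an Egeria-style REST endpoint from Java. The server name, URL path and request body are placeholders I've invented for the example; the real endpoints and payloads are documented per access service on the Egeria site.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AssetSearchSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: the real path comes from the Egeria docs for
        // whichever access service you deploy (e.g. an asset search service).
        String url = "https://egeria.example.org:9443/servers/myserver"
                   + "/open-metadata/access-services/asset-consumer/users/me"
                   + "/assets/by-name?startFrom=0&pageSize=50";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                // The search string travels in the request body in this style of call.
                .POST(HttpRequest.BodyPublishers.ofString("{\"searchString\":\"salary\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The point: one vendor-neutral call, regardless of where the assets live.
        System.out.println(response.body());
    }
}
```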
We've been trying to use as many animal metaphors as possible, because that was our remit, as you can see from the title. So we've got turtles again. The idea is that while the data lived inside its application it had a whole series of protections: you could only get at it in certain ways, and only certain roles could access it. But now we've wrenched it out, it's like a turtle without a shell. It's vulnerable, so we have to re-establish that protection in some way.

So yes, we want to get everything out, but we don't want everybody seeing everything. The idea is we have a big Egeria shell that goes over this vulnerable data, so that we govern it and protect it in a standard, coherent way. You can apply policies across your data consistently, rather than having to deal with each of the shell-less turtles individually. That's how you manage and govern your data coherently using Egeria.

Going back to Jürgen's picture: when I first looked at it, I thought maybe the second architecture replaces the first, and the third replaces the second, because it's an evolution picture, isn't it? But in actual fact, organisations don't get rid of these things. They have a warehouse; they still have warehouses. They build data lakes; they still have data lakes. They might add new data meshes, but that doesn't suddenly replace everything else. The business has to continue. So we've got these disparate patterns of technology, never mind the different types of technology, and we want coherent governance over them all. But if everyone in the chain "speaks Egeria" (I'm using that loosely; we can go into what it actually means), then we've got a way of seeing them all in a standard way and letting them integrate the data that sits in different places. We do that using these open types. We've got types that describe everything we think is necessary for an information management and governance system. It was based on open standards at the time, and it's extendable: you can add your own types, though we really want people to agree by consensus in the community that this is how we view the world. The types are all versioned, so you can make backwardly compatible changes to them. The idea is a rich language that describes everything we think is important. If there's an area we haven't looked at yet, say CI/CD pipelines, there's probably metadata around that, and experts will come along in the community and propose new types. And of course we want our elephant in there, so Hadoop says: me too, I can speak Egeria.

Here's one of the ways we think about this. We have data, and it just looks like a series of numbers and letters. We can recognise some names, maybe an address, but it's really data we don't understand. To try to make sense of it, we put a technical schema across it, structural metadata, to say the first field means a name. We have EMPNO, Job Code, Salary: the sort of schema you might expect in, say, a table in DB2, or in Postgres. Often these are written for the convenience of the writer: they called it EMPNO because they know what EMPNO means to them. Of course, the way a lot of companies work is that they want to make these things meaningful to users, so they have vocabularies, glossaries, which are useful to the actual business. These are the words the business uses. And if you work at a semantic layer you can say: an employee has a manager, an employee has a name. So these are two ways of viewing the organisation.
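To make the mapping idea concrete: linking a technical column such as EMPNO to a business term is, in Egeria's vocabulary, a semantic assignment. Here is a toy sketch of the idea in Java. The Column and GlossaryTerm types are invented for illustration and are not Egeria's open metadata types.

```java
import java.util.List;
import java.util.Map;

public class SemanticAssignmentSketch {
    record Column(String table, String name) { }
    record GlossaryTerm(String name, String description) { }

    public static void main(String[] args) {
        GlossaryTerm salary = new GlossaryTerm("Annual Salary",
                "Gross yearly pay; governed as sensitive personal data.");

        // The same business meaning, assigned to many technical homes:
        // a DB2 table, a Postgres table, a field in an event payload...
        Map<Column, GlossaryTerm> semanticAssignments = Map.of(
                new Column("HR.EMP", "SALARY"), salary,
                new Column("payroll.staff", "annual_pay"), salary);

        // Govern by meaning, not by location: find every place salary lives.
        List<Column> sensitiveColumns = semanticAssignments.entrySet().stream()
                .filter(e -> e.getValue().equals(salary))
                .map(Map.Entry::getKey)
                .toList();
        sensitiveColumns.forEach(c ->
                System.out.println("Apply salary policy to " + c.table() + "." + c.name()));
    }
}
```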
Some people, the technical data engineers in the weeds, will be working at the structural level. But when we're trying to expose things to business users, these glossaries are so useful, because you're being talked to in the language you know for your domain. If it's pharmaceuticals, you have pharmaceutical words, which I wouldn't understand, but they're tailored for that consumption. Then we can map from the structural metadata up to the glossary terms, with ontologies at the top. We might have something like "annual salary" that we're going to treat in a certain way in our policies. That annual salary could be mapped to a thousand different tables, to entries in an API, to contents of an event. It doesn't really matter where it lives technically; we want to govern that information in the same way, because at the end of the day its meaning is a salary. We can get hints on how to do that governance through classifications. We don't actually have a "sensitive" classification (that was just for the purposes of this slide), but we could go through exactly what the governance classification types are: things like retention, criticality and confidentiality, each with different levels.

I've hinted at this already: how do you integrate with Egeria? You have the third-party technology, you have the Egeria concepts, and we map between them. Once you've done that, you get all the advantages we've just talked about. Going into a bit more detail, we have two types of what we call connectors. There's a connector framework, and pretty much everything in Egeria uses these connectors; everything is pluggable. Even the Kafka we use by default is pluggable, just by the way we've written everything. We can have embedded connectors as well. It's a very sophisticated connector framework. But for this purpose there are two kinds: the integration connector and the repository connector.

I talked about the Egeria ecosystem, this group of what we call cohort members. They live together in a cohort, and as one of them gets metadata, it can publish it, sending that metadata out to the rest of the cohort. You don't have to take all of it. By being part of the Egeria ecosystem you can be enriched with metadata that other members of the ecosystem have. If you want that to be the case, you become a member of the cohort, and the lower pattern, the repository connector, is the pattern you use. I'm not going to go into too much detail here; if you want more, come and find me afterwards. The other type is the integration connector. That says: I don't want to be part of the Egeria cohort; I'm just going to put all of my metadata into one of the members of the cohort. It's actually a two-way connector: you can bring information out from the cohort, push it in, or both. These integration connectors are very easy to write. We've written, is it three now? Three, and the early ones were done within a matter of weeks. It's a very easy way to bring in schema information and asset information.
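To give a feel for why these connectors come together in weeks, here is the general shape of a polling integration connector. The class and method names below are illustrative placeholders only; a real connector extends base classes from Egeria's integration framework and works through the context object that framework supplies.

```java
// Illustrative skeleton only: Egeria's real framework supplies base
// classes and a context object; the names below are placeholders.
public abstract class IntegrationConnectorSketch {

    /** Called once when the integration daemon starts the connector. */
    public void start() {
        // e.g. open a connection to the third party (Hive Metastore, a
        // database, a file system...) and register any change listeners.
    }

    /** Called repeatedly on the daemon's refresh cycle. */
    public void refresh() {
        // 1. Poll the third party for its current tables/columns/assets.
        // 2. Map each one to the equivalent open metadata type.
        // 3. Create or update the element through the context's API.
        // Egeria handles the rest: events to the cohort, audit, and so on.
    }

    /** Called when the connector is shut down. */
    public void disconnect() {
        // Release the third-party connection.
    }
}
```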
If you imagine how difficult point-to-point integrations have been in the past: they can be a nightmare. You do imports, things go stale, nobody owns the integration; it falls between the two products even when you own both. I've experienced that multiple times in my career, and you probably have too. It dies on the vine and nobody wants to look after it. Whereas here, Egeria and its frameworks are doing the heavy lifting for you.

We've got about four minutes left. Any questions up to now? I'm going to whip through, very quickly, some of the specific ways we can bring in big data, because the title was about the elephant. First, we've got the Hive Metastore connector. It seems that many products have an HMS API: there's the HMS itself if you're using Hive, but PrestoDB has one, Glue has one. HMS seems to be a common API these days for exposing metadata, and it's quite an old one. So we've done an Egeria connector which polls for information, and it also listens: we can put an Egeria listener into the Hive Metastore to pump out events as things change.

Apache Atlas: we originally started doing this open metadata work as part of ODPi, which is where Apache Atlas sat, in the big data space. But then we moved away from it, because we didn't just want to do big data; we wanted to do everything. Apache Atlas is still doing big data, and we have a connector for it. So there are some more animals we've managed to bring in: falcons and bees, what looks like a whale on top for HBase, and the elephant. We've got the elephant at last. Atlas consolidates all of the metadata into a metadata catalogue, and we can just bring that information into Egeria.

Then there's OpenLineage, which I just wanted to mention. We've got data repositories, but data is also moving between them: ETL jobs and other kinds of jobs. In the same way that Egeria gets everyone speaking the same language, if you can emit lineage the OpenLineage way, you've got a standard way of seeing lineage across all of this landscape. I won't get into the details because I haven't got time, but it's another Linux Foundation project. And lastly, we have a JDBC connector, so you can connect to Hive SQL or Spark SQL, which are also part of the big data world, or any other JDBC interface.

If you want to contact us, it's actually on the bottom of the slide: the website is egeria-project.org, and we've got Slack channels. We'd love more people involved with the community, and if there's a connector you need, you could work with us to help make that happen. So thank you for listening.
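One reason the JDBC route is so convenient: the standard java.sql API already hands you the structural metadata a connector needs. Here is a self-contained sketch of that harvesting step; the connection URL and credentials are placeholders, and any JDBC driver exposes the same DatabaseMetaData interface.

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class JdbcMetadataHarvest {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: Hive, Spark Thrift Server, Postgres, DB2...
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://host:5432/mydb", "user", "password")) {

            DatabaseMetaData meta = conn.getMetaData();

            // Walk every table, then its columns: exactly the structural
            // metadata a connector would map to open metadata types.
            try (ResultSet tables = meta.getTables(null, null, "%", new String[]{"TABLE"})) {
                while (tables.next()) {
                    String table = tables.getString("TABLE_NAME");
                    System.out.println("Asset: " + table);
                    try (ResultSet cols = meta.getColumns(null, null, table, "%")) {
                        while (cols.next()) {
                            System.out.printf("  Column: %s (%s)%n",
                                    cols.getString("COLUMN_NAME"),
                                    cols.getString("TYPE_NAME"));
                        }
                    }
                }
            }
        }
    }
}
```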
Thanks. Any questions? Yes? [Question about images.] It depends what you want to do with an image. Obviously images have metadata associated with them, so they're somewhat self-describing. I don't think we have anything specifically for images, but if you had a use case where that metadata was useful in a wider governance context, it would be a reasonable thing to put into Egeria. It might be that you want locations, or you might want to associate an image with, say, a person as a glossary term; we can store those sorts of things. Was that the question?

[Follow-up: for example, if we have our own cloud with a lot of images, a practical example would be telling whether an image is a cat or a dog, with PyTorch.] So you're talking about running machine learning analytics against an image, so you can say: that's an elephant, or that's a cat, or that's a dog. That sort of processing is not in Egeria; we're all about the metadata and the integration of the metadata, not the actual data. But if there's metadata associated with that result, it would be reasonable to publish it into Egeria, so that you can see it in the wider context, and what you benefit from is seeing it in the context of the meaning, of the glossary term. Then you've potentially got ways of making sense of that image. If you'd worked out it was Felix the cat, and Felix the cat already had a glossary term, you could link them together, and then you would have found the image. There's some interesting work on generating metadata automatically, potentially using AI, but that wouldn't be Egeria; we would be a recipient of that metadata, storing it so that you can govern it in context. We don't store data; we're all about the metadata, plus policies and glossary terms, whether or not you view those as metadata technically. Does that help? Any other questions?

OK. [Question.] I liked your slide where you described how the data lake doesn't necessarily replace the data warehouse, which doesn't necessarily replace other things; in large organisations they can have multiple... Yeah, they probably will, yes. And you described the integration, and there was another term as well, kind of like a grouping. If you integrate Egeria with, let's say, something like BigQuery, and that organisation already has many integrations and data pipelines all going into BigQuery, at what place are you suggesting Egeria helps, or replaces a data pipeline? It becomes very confusing at what layer the data is all connecting if you already have data flowing into a warehouse; I'm just trying to play out what that should look like.

There are probably just ways of doing things and their consequences, I would say, but I like to think of Egeria not as a replacement. It's not an "or", it's an "and". It's an addition. You can carry on doing exactly what you were doing before, and if you're part of the ecosystem, you can be enriched by being part of the cohort. So all you're doing is gaining: the same UIs, the same interfaces, everything you did before you can still do. Egeria isn't replacing pipelines. It could be bringing in lineage information from those pipelines, be it design or operational. I didn't go into OpenLineage's operational lineage, but we have support for design lineage as well. And if you're interested in that sort of thing, which some of us strangely are, there's quite a good write-up on lineage: what it means, the types of lineage, and stitching lineage from different places.
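For anyone curious what stitching amounts to in practice: once every lineage fragment uses the same open, qualified names for the same assets, joining fragments from different tools becomes trivial. A toy sketch, with names invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class LineageStitchSketch {
    // An edge in the lineage graph: data flowed from one asset to another.
    record Flow(String from, String to) { }

    public static void main(String[] args) {
        // Fragment reported by the ETL tool, already mapped to open,
        // qualified names rather than the tool's private identifiers.
        List<Flow> etl = List.of(new Flow("crm.customers", "staging.customers"));
        // Fragment reported by the warehouse loader, same naming scheme.
        List<Flow> loader = List.of(new Flow("staging.customers", "warehouse.dim_customer"));

        // Because both fragments name the shared asset identically, stitching
        // is just concatenation: the graphs join at staging.customers.
        List<Flow> stitched = new ArrayList<>(etl);
        stitched.addAll(loader);

        stitched.forEach(f -> System.out.println(f.from() + " -> " + f.to()));
    }
}
```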
So once you've got it in a standard Egeria format, you can stitch lineage together: a bit from this system, a bit from that system. In the same way that you can have a consistent view of your data, you can have a consistent view of your lineage across these disparate systems, which would be quite difficult without standardised, open formats, because you'd have to understand each vendor's format, or the technology behind it, rather than just knowing what it means because it's mapped to the open formats. Does that sort of answer the question? Yeah? Okay, any other questions? Very good. Thank you.

[Question.] You mentioned Glue and other data integration platforms. So Egeria is specific to metadata; you're not actually moving data. Why would we not use the regular data integration platforms to integrate metadata as well? Why would we marry them together with Egeria? What would it give us?

I think it gives you the wider context, and you're not tied to a particular technology. Here's an example Atruvia had: they have three different catalogues, an API catalogue, an event catalogue and a traditional database catalogue, with different types of metadata. How are you going to make sense of those three together, when they all hold very relevant information and the business needs to see the connections? Nothing else can really store those relationships easily, but Egeria gives you the ability to see them all together.

[So you basically abstract the other way?] Not in the sense of a lowest common denominator; it shouldn't lose detail. And if it is losing detail, we can always extend the types as a community if we think there's a need. It doesn't replace anything you have. I'd emphasise again that it's an "and", and the "and" you're getting is integration, plus a coherent view of all of your information assets. Thank you. OK, good question. Any other questions? Yeah?

[Question: how do you guarantee that it works in all cases, for all applications and all businesses? Every business is structured differently.] The question, I think, is how do we guarantee this will work for all businesses. In the technology landscape, which is where we've done most of our work, a database is a database. It's got tables, it's got a schema, it's got an asset, a name, maybe descriptions, foreign keys. Within that class, everything that's a database is basically going to be a subset of that, a child of it, if you like. It might have slightly more features if it's Postgres, or slightly more if it's DB2, but it's basically tables and columns. The same goes for APIs, and for all this technical metadata. And if there's a new kind of thing we don't have, we'd create a new type for the new technology. Now, in terms of the business: above the Egeria line, the business is not concerned with tables and events and APIs.
So one way you might be able to handle that would be with the glossary; an ontology might be the approach. But you can also have business objects, of course, within analytical reports. That's where I think all the flexibility comes in. Does that make sense?

[When you define your business terms, do you do that in the glossary?] Yes, and that's a way of differentiating between the different domains. You define your glossary and then you link it to the technical assets, and then you can find out which asset means what in the business domain, or which schema element, if it's at that granularity.

I think we're running out of time. Is there any last question? Anyone can come and ask afterwards; we can go outside and talk more if you want to get involved with the community, or if you're thinking of adopting Egeria, we'd love to talk to you. If not, thank you for all the questions.