Treasury Board, we work for Alex, who you just heard from; not directly for Alex, maybe six or seven levels down in the big government machine, but we are the operators of open.canada.ca, Canada's open government portal. I'm joined today by my colleague Matt Cullen, who works on the team at Treasury Board with me, and our great development team from StatsCan, Will Hearn and Solomon Jeffery, who are going to talk to you about the technology. I'll start quickly by giving a little bit of context. So in Canada, we coordinate our open government activities through the Open Government Partnership. Canada joined the Open Government Partnership in 2011, and we launched our first national action plan in 2012. The goals of the partnership are to promote transparency, empower citizens, fight corruption, that type of thing. There are 75 member countries now, and there's also membership at the subnational level; Ontario is a member of the Open Government Partnership as well. And here's our plan: we commit to a biennial plan to the Open Government Partnership every few years. We have 22 commitments under four big categories: open by default; fiscal transparency; innovation, prosperity, and sustainable development; and engaging Canadians and the world. This plan is available on open.canada.ca for anyone to take a look at. Next, I'll talk quickly about the Directive on Open Government. You heard Alex talking a lot about policy, that type of thing. The Directive on Open Government is the main policy tool we use to advance open government and open data within the Government of Canada. Its objective is to maximize the release of government information and data of business value, to support transparency, accountability, citizen engagement, and socioeconomic benefit through the reuse of that data. So what is the open government portal? The portal is two websites.
There's the Open Government Registry, and then there's the public-facing website, open.canada.ca. The portal is the one-stop shop for open government, open data, and open information. It was launched in 2011 as data.gc.ca as a pilot; that's what it looked like, some good CLF for you. It launched with 780 data sets, plus the holdings of the Federal Geospatial Platform. In 2013 we relaunched under the current look, and it's structured under three pillars of activity. There's open data: this is where GC (Government of Canada) departments and agencies can submit their data via the self-serve interface, the registry, and Canadians can search across all available data sets and download the data in machine-readable, reusable formats. We have open information, where users can search Government of Canada publications and unstructured information resources; this currently provides consolidated access to publications from Public Services and Procurement Canada (PSPC) and from Library and Archives Canada. And we have open dialogue, which brings together a number of related activities, including our blog, and where we run consultations around open government activities. I'll do a quick tour through the site. Just a bit of background: the front end is built on Drupal 7 using the Web Experience Toolkit, which Will and Solomon are going to get into a bit more. The back end is powered by CKAN, and the search is Solr. This is a popular technology stack for open data implementations; the same stack is used by the UN, the UK, and the European Data Portal. I'll show you a few features, but obviously I don't have time to get into every aspect. Let's say we navigate to open data. You can see you can view departments' data inventories; we require all departments to produce a list of what data they have and what could be made available to the public. We have open maps, and we have an apps gallery where we feature applications built by the community that use our data. Let's say we go to open maps.
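As an aside before continuing the tour: the CKAN back end just mentioned exposes a JSON web API, and a consumer-side sketch of querying it might look like the following. This is a minimal illustration with no network call; the base URL path and the sample response contents are assumptions for demonstration, not taken from the portal's actual configuration or holdings.

```python
import json
from urllib.parse import urlencode

# Hypothetical base URL for a CKAN Action API endpoint (an assumption).
CKAN_BASE = "https://open.canada.ca/data/api/3/action"

def package_search_url(query, rows=10):
    """Build a CKAN package_search request URL for a keyword query."""
    return f"{CKAN_BASE}/package_search?{urlencode({'q': query, 'rows': rows})}"

def dataset_titles(response_text):
    """Pull dataset titles out of a CKAN package_search JSON response."""
    body = json.loads(response_text)
    if not body.get("success"):
        return []
    return [pkg["title"] for pkg in body["result"]["results"]]

# A trimmed, invented sample of the response shape CKAN returns:
sample = json.dumps({
    "success": True,
    "result": {"count": 2, "results": [
        {"title": "Rail network traffic"},
        {"title": "Solar irradiance by station"},
    ]},
})

print(package_search_url("rail"))
print(dataset_titles(sample))
```

In practice a client would fetch that URL and feed the body straight into the parsing step; keeping the two concerns separate makes the parsing testable offline.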
You can go through, you can search, and you can select any map that's available and view it with different layers. This is where you can put different data, different maps, on top of each other and compare, which is a cool visualization tool. Next I'll go through a data set. This is what a data set looks like on the portal. You can see you can search by facets, that type of thing. If we pull up a data set, you'll see it has a bunch of metadata. You can download the data set, you can switch it to French, and you can also comment on and rate data sets. When you comment on a data set, the comment goes into a publishing queue where we review it and make sure that people aren't trying to troll us with inappropriate comments. Next I'll get into the Open Government Registry. This is where Government of Canada data owners go to add and modify their data on the open government portal. Generally the registry only houses metadata, but for some things we have the ability to host the data itself, and we're going to be working on building that out more. So I'll walk you quickly through the steps I would take to publish an information resource. You go to open information, and again you see a bunch of metadata elements that need to be filled out. Of note, you'll see two metadata elements that are particularly important. The first is IMSO approval: in the government, we make departments get approval from the Information Management Senior Officer of the department before they publish a data set. This makes sure the data is not in violation of the Privacy Act and that it's not some sort of confidential information that we don't want to make public. You also see the ready-to-publish element. This allows departments to load something into the registry, keep it as a draft, and then publish it when they're ready. And then these are the steps you would take to actually upload a resource.
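The two gating elements just described, IMSO approval and ready-to-publish, amount to a simple publish-time check. A minimal sketch follows; the field names (`imso_approval`, `ready_to_publish`, `title_en`) are invented stand-ins for illustration, not the registry's actual schema keys.

```python
def can_publish(record):
    """A record goes live only with IMSO sign-off and the ready-to-publish
    flag set; otherwise it stays in the draft state described above."""
    problems = []
    if not record.get("imso_approval"):
        problems.append("missing IMSO approval")
    if not record.get("ready_to_publish"):
        problems.append("still marked as draft")
    return (len(problems) == 0, problems)

# A department has loaded a record but kept it in draft:
draft = {"title_en": "Border wait times", "imso_approval": True,
         "ready_to_publish": False}
print(can_publish(draft))  # (False, ['still marked as draft'])
```

Returning the list of problems, rather than a bare boolean, mirrors how a registry UI would tell the data owner exactly which gate is still closed.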
You go in and fill it out, and the main thing I wanted to highlight here is that for the most part you have to have the resource posted on your department's web server, and you put the URL in there; but in some cases, like I was saying, you can actually upload the file. As the operator, one of the first things I do when I get into the office every day is check the publishing queue. We're responsible for the QA, making sure that whatever departments are publishing meets our standards: that it's available bilingually, and that it meets web accessibility rules. So I go in in the morning and check, and say, oh, the new Jack consultation, whatever that is, is in the queue; I go in, look at everything, and make sure it's good to go. So next I'm going to hand over to Matt. Thanks. So I'm here today to talk about proactive disclosure; try to contain your excitement. There are a few reasons I want to talk about it today. Number one, it takes up a significant portion of the open information section of our site. It's also great for today's conversation because Will and team have done a significant amount of development work on proactive disclosure over the past two years, so it's a timely discussion point. And it's an issue of strong public interest and high visibility in the media, and rightfully so: you're releasing information to taxpayers, and they want to know how their dollars are being spent and where. So just a brief description of what it is. Departments are mandated to proactively disclose information on a quarterly basis for contracts over $10,000, grants and contributions, travel and hospitality expenses, position reclassifications, as well as acts of founded wrongdoing. So why do we want to centralize?
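An aside: the disclosure rules just listed can be captured in a few lines. This is an illustrative sketch only; the category labels are invented, and it encodes the one rule stated above, that contracts are disclosed only over $10,000 while the other categories are disclosed regardless of dollar value.

```python
# Invented labels for the five disclosure categories listed above.
DISCLOSURE_TYPES = {"contract", "grant_contribution", "travel_hospitality",
                    "reclassification", "founded_wrongdoing"}

def must_disclose(record_type, amount=0.0):
    """Return True if a record of this type must be proactively disclosed."""
    if record_type not in DISCLOSURE_TYPES:
        raise ValueError(f"unknown proactive disclosure type: {record_type}")
    if record_type == "contract":
        # Only contracts carry the $10,000 threshold.
        return amount > 10_000
    return True

print(must_disclose("contract", amount=25_000))  # True
print(must_disclose("contract", amount=9_500))   # False
print(must_disclose("travel_hospitality"))       # True
```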
So for the last year and a half I've been leading the initiative to take most, or all, of the Government of Canada's proactive disclosure information from 90-some-odd institutional pages and migrate it to the centralized Open Canada portal. For a lot of organizations, as you can see there, proactive disclosure content makes up sometimes 80% of the information on the institutional page, so it's quite substantial. With that, you can imagine how it dilutes the content for both internal and external searches. If any of you have tried to go on government pages in the past and look for anything, you'll have noticed it's quite difficult to find what you're looking for, and that's part of the reason; this would definitely declutter a lot of the internal pages. It also creates a heavy management burden for organizations to create, update, and archive this content. Another thing: when we did an environmental scan of what departments were releasing as proactive disclosure, there was inconsistency in the elements reported. Some departments were reporting three details about a contract, some twenty, so we needed to standardize that process. It also required a mix of manual and automated publishing processes from each organization, so it was a heavy infrastructure burden for the Government of Canada; by centralizing, we'll alleviate some of that once those servers are decommissioned. So, the benefits, pretty self-explanatory. Having all the content located in one system creates new opportunities. You have a single searchable interface for all Government of Canada data. The interface provides facets for common data elements: the organization, the month, the year, the dollar value. Will will go through some screenshots of this work later on, so you'll get an idea of what that looks like. The data is automatically transformed into a downloadable data set, which is updated daily and made available to the public.
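Those facets, organization, month, year, dollar value, are essentially bucket counts over the disclosure records. A toy illustration follows; both the field names and the record values are invented for demonstration, not real portal data.

```python
from collections import Counter

# Toy disclosure records; keys mirror the facets above but are invented.
records = [
    {"org": "PSPC", "year": 2017, "month": 11, "value": 54000.0},
    {"org": "PSPC", "year": 2017, "month": 12, "value": 18500.0},
    {"org": "StatsCan", "year": 2017, "month": 12, "value": 99000.0},
]

def facet_counts(rows, field):
    """Count how many disclosures fall in each bucket of one facet,
    the same counts a faceted interface shows beside each link."""
    return Counter(row[field] for row in rows)

print(facet_counts(records, "org"))    # Counter({'PSPC': 2, 'StatsCan': 1})
print(facet_counts(records, "month"))  # Counter({12: 2, 11: 1})
```

On the real site these counts come back from the search engine rather than being computed in application code, but the shape of the result is the same: one count per bucket per facet.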
So before, if you were looking for information, you were going website by website trying to get that info, whether by browsing the site or scraping it. Now you have one easily downloadable CSV; you can manipulate that data however you want, and we're starting to see more and more companies use it. It simplifies data management across the Government of Canada: the same schema and tool are being used for all proactive disclosure types, which standardizes the reporting process and elements. All departments are now reporting the same things for all types of proactive disclosure. And the common platform makes things easier for web publishers: like I said, if we're eliminating 80% of their workload by publishing this for them instead of on their institutional pages, that's a significant saving of time and money. So where are we now? For the past two years TBS has been working with proactive disclosure policy owners and departments to standardize these reporting elements; as with any enterprise government action, it takes time. We have created a GCpedia page with all of our training sessions and supporting documentation for users. There are currently over 50 institutions publishing disclosures to the portal, and we're working towards making that centralized reporting mandatory. There are approximately 300,000 disclosures available for users today. And departments have reported back that the centralized system allows more subject matter experts to have improved control over their reports, leading to more consistent internal and external reporting. You can imagine how much easier it is for a user to go into our registry and update a file immediately, whether it's a deletion or a modification, and have it published right away, as opposed to sending it to their web group, which probably has to go through their comms team, all for what could be just a little dollar-value error. And where are we going from here?
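One more aside on that single consolidated CSV before moving on: it's what makes consumer-side analysis a few lines of code. A sketch, with column names and values invented for illustration rather than taken from the published schema:

```python
import csv
import io

# A few rows shaped like a consolidated download; contents are invented.
consolidated_csv = """\
owner_org,reference_number,contract_value
pspc,C-2017-001,54000.00
pspc,C-2017-002,18500.00
statcan,C-2017-003,99000.00
"""

def total_by_department(csv_text):
    """Sum contract values per department from one consolidated CSV,
    the kind of quick analysis a single download makes possible."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        org = row["owner_org"]
        totals[org] = totals.get(org, 0.0) + float(row["contract_value"])
    return totals

print(total_by_department(consolidated_csv))
# {'pspc': 72500.0, 'statcan': 99000.0}
```

Under the old model this same question meant visiting or scraping dozens of institutional pages; with one file and one schema it's a loop.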
So departments will continue to onboard, and we're working towards improving data quality; that's one of the things that's been lacking, and we can do it through validation and standardizing our templates, which we are working on. One thing of note is that there are currently proposed amendments to the Access to Information Act before Parliament that could vastly widen the scope of proactive disclosure reporting. Right now 91 departments are mandated to report the disclosures I discussed earlier. With the new amendments, if they're approved, the scope would widen to ministers' offices, the Prime Minister's Office, Crown corporations, and the Courts Administration Service, so you'd be looking at between 250 and 300 organizations required to report. In addition to the size, the scope is changing: departments would be required to report on mandate letters, briefing binders, and briefing note titles. So the hope is, and I think we're going that way, that our portal will really be a one-stop shop for all the information and data that departments are mandated to release. And it's kind of timely that we're talking today, because these changes could present significant opportunities for Drupal developers, so, a lot of people in this room. As we move forward we're going to need to expand our suite of reporting capabilities and improve our overall UX for all of our users. So that was it; thanks for your time. We'll get into the more technical portion after, but I just wanted to give you a heads-up about the drivers of all the work that's been done over the last year or so. And we'll be here for the rest of the day, so if anyone has any questions or wants to chat about this topic, I'm happy to talk. Thanks. So I'm going to jump back into the site here. On the open government portal we feature our analytics.
We like to display our analytics publicly so anyone can go on and see what information is out there and how people are using it. Currently we're only reporting on a limited amount of information because everything is done manually: our dev team populates a spreadsheet with the data and our web publishers adjust the numbers. We're hoping that as we move into Drupal 8 we can get more automated with this functionality. Quickly, what do we have on the portal? Hundreds of thousands of records. You can see proactive disclosure, which Matt was talking about, is a huge portion of the content, but we also have thousands of other data sets available. You'll see in purple here 540 Open by Default working documents; that's an interesting thing I'll talk a little more about. Next I'll get into some use cases. This is where I actually get excited about open data: what are people doing with the content? One great example of a Canadian business making use of our data is the Montreal-based firm Local Logic. I had an opportunity to speak with the company a few months back, and I found what they're doing super interesting. They're pulling in data from a ton of open data sources to develop ratings on neighborhoods. They license their software to Realtor.ca, so if you've ever been looking for a house I'm sure you've been to that site; if you click on the neighborhoods tab you'll see they rate neighborhoods on a number of factors. One specific place they're using our data: they take our data on how much railway traffic runs on nearby train tracks, and they use that to contribute to the noise rating of the neighborhood. I found that super interesting, because we put a lot of content out there but it's rare that we get to see what people are actually doing with it.
I'll get into a couple of other interesting use cases. One of the ways we discover how people are using the data is from the questions they ask us; they won't necessarily be direct questions, but you can kind of get at what they're doing. We get hundreds of emails a week, and normally there are one or two that make you think, wow, someone's really using our data for something important. We get a ton of inquiries from financial services firms. One example I thought was interesting: a hedge fund contacted us looking for more detailed information about traffic on Highway 407, which is a toll road. Now I thought, why would a hedge fund want to know about that? I looked into it and saw that Highway 407 is run by a Spanish company called Cintra, and the 407 makes up 33% of their revenues. So if you're a hedge fund deciding whether or not to invest in that company, you can see why the Highway 407 data would be super important to the decision, which I found fascinating. We get a ton of other questions from big industrial companies. We got one from a solar power company looking for more information about solar irradiation rates. I know nothing about this stuff, so we hand those questions off to the data owners, but it makes me happy to come into work knowing that I'm helping Canadian businesses move forward with what they're trying to do. Next I'll talk quickly about Open by Default. Maybe most importantly for this crowd, Open by Default is our first production Drupal 8 site. What it is: Open by Default is a website where we feature work-in-progress documents from Government of Canada employees. You heard Alex talk a bit about trying to open up and be more collaborative, and this is one way we're trying to do that.
Currently it's in the pilot phase, but the end state will be that people using the Government of Canada's records management system to manage their day-to-day documents will have a button, a "share with the world" button, and a technical stack that takes the document and pushes it out to our portal to share with the world, which is a really big culture change for departments. Next, where are we going in the near term with the portal? First, we're going to the cloud. Currently we're on on-prem infrastructure, but we're hoping to move to the cloud in the next few months, which will dramatically reduce our running costs as well as give us more flexibility to start hosting more assets, so that departments don't have to host them on their own departmental web servers. The other thing we want to get to is visualizations. We would love for people to be able to manipulate data, layer it on top of other data, and put it into cool graphs, that type of stuff, directly on the site, without having to download it and do that in Excel. One of the challenges with that is accessibility: how do you make the alt text work properly so it's accessible? That's a challenge, but we want to get there, and you'll see from Will later on that we're moving to Drupal 8. And one last thing before I hand over to Will: what are some leading-edge applications in the open data world? One thing the US government is doing that we're not is co-locating their data sets where people are doing cloud computing. When you're trying to work with data sets of huge file size, like satellite imagery or DNA genomics data, it's super difficult to download and transform that data and then analyze it on a desktop computer.
So Amazon has actually partnered with the US government to host popular data sets for free in their cloud, so that their cloud compute clients can have that data co-located with their cloud compute workloads. I think this is super interesting; we're not doing it yet, but it's something we're going to be exploring as we mature. Will, I'm going to hand it over to you. Awesome. Sorry, I have to relaunch my slides. All right. Hi there. We were tasked with creating a 10-to-15-minute presentation to demonstrate the work being done on the new open data portal powered by Drupal 8. Honestly, there was a lot to learn, far more than I initially expected. However, with the fundamentals of dependency injection, service containers, the Plugin API, et cetera in hand, you realize the sheer power and versatility of this platform. All of our stuff is hosted online: ever since the beginning we've been coding all of our things in the open data repo on GitHub, so everything you see here you can entirely leverage yourself, see how we did it, go through the repo, and help us improve it. I'm going to do my best to keep this presentation brief so I can more freely engage the audience, answering questions and highlighting code. This is the agenda I'm going to go through: installation profiles, the specific OD (Open Data) components, and then your questions. So hi there, my name is William Hearn. I'm a technical architect at StatsCan. I've been working with the amazing Drupal community for about a decade now, and I'm still learning. Additionally, I credit Drupal with getting me involved in other areas, such as containerization for app delivery in general, think Docker, rkt, and OCI, and orchestrators like Kubernetes, Tectonic, and Nomad. Of note, we currently use containers for developer environments, for a private GitLab CI runner behind the government firewall, and even for public testing via Travis. Solomon is going to do a quick introduction.
Hi, my name is Solomon Jeffery. I'm a developer at Coldfront Labs, and I've been working with Drupal for just over three years. For the last year and a half I've been a developer at StatsCan, working on both Drupal 7 and Drupal 8, and for the last eight months I've been doing exclusively Drupal 8. You can find me on drupal.org as jpeter79, and on the Drupal Slack group under the same moniker, and we contribute regularly to projects on drupal.org, GitHub, and GitLab. All right, profile inheritance. Before we get into our sub-profiles and how they work, this is a concept you'll need to understand. Profile inheritance allows you to customize the installation process to meet one or more of your specific needs. You can both inherit and override configuration from the parent profile, as well as add or remove dependencies. Additionally, you can run additional installer tasks per profile layer for further customization. If you're interested in leveraging this outside of the open data context, take a look at the drupal.org issue, which is currently in "needs review" and likely to land in Drupal core 8.5. All right, Lightning. Lightning is a top distribution in Drupal 8, backed by Acquia, allowing you to build experiences quickly using the best of Drupal 8 in a feature-rich, extensively tested, and secure open source distribution. Lightning powers huge sites such as Princeton and Tesla, and the coming Pfizer redesign, and it also underpins powerful, notable distributions like Open Y, Thunder, and Demo Framework. There are four key functional areas targeted by Lightning. One is layout: drag-and-drop tools to configure page layouts. There's media management: embed images, Twitter, Instagram, videos, and more, from Drupal or other sources, directly into content pages. And there's workflows: configure workflows that keep content moving through review and approval stages easily.
It also features improvements to the native Panelizer experience. And then finally, they just recently added API-first: Lightning ships with several modules which together quickly set up Drupal to deliver data to decoupled applications via standardized APIs; basically it comes with JSON API, Simple OAuth, and OpenAPI. Of note: are you worried about the Workbench-to-Content-Moderation update path? If you're using Lightning, they're taking care of that for you; that's why we're using it as our base framework. So now we get to WxT, which extends off of Lightning through profile inheritance. I just wanted to mention there will be a WxT-exclusive talk at 2:15. It's built with lessons learned from the Drupal 7 WetKit, and it's in use by several departments, including the open data portal. One of the biggest decisions was deciding to leverage Lightning as our base framework. This choice wasn't made carelessly, but six months in I can say it has been an incredible time saver, allowing us to more readily focus on departmental customizations. With Lightning we know that updates are extensively tested upstream, and the maintainers are incredibly collaborative; I encourage anyone on drupal.org to get in touch with phenaproxima and balsama, they are some of the best Drupal developers there are, and they just want to improve Lightning and make it the de facto framework to choose. It provides a great testing suite with custom Behat step definitions, even for layouts, so you can actually drag and drop your layouts, that's all coded in Behat, and you don't have to write your own for that. It streamlines the Panels, Panelizer, and CTools workflow, it increases the developer pool, and it gives me more time to focus on WxT-specific issues. So, WxT: following the practice of Lightning, WxT tries to keep its scope minimal and ensure, when functionality is added, that it can be easily disabled. WxT Extend is a wrapper module which enables
more advanced functionality and gets enabled during an install callback, so in your sub-profile of WxT you can easily opt out. All the various WET-BOEW themes are supported and can be toggled in either a minified or non-minified mode. Improved layouts: we integrate with Bootstrap Layouts, which extends from Layout Discovery in core and gives us impressive grid control, and to support many types of layouts we are slowly porting in the official IA layouts from the GoC. Finally, WxT Bootstrap and WxT Library: in the spirit of modularity, just in case someone doesn't want the full weight of a distro or of Lightning, we have made it so that our theme, WxT Bootstrap, can run on its own without any dependency other than WxT Library. You need that because there are still some constraints in Drupal 8 where you need a module to implement certain things due to theme system limitations, but WxT Bootstrap plus WxT Library is all you technically need to run it against native core. We've also ported a variety of WxT plugins: a tabbed interface, media entity slideshow and lightbox gallery as field formatters, "share this page" as a custom block type, et cetera. Right now various sub-profiles have been created against WxT at StatsCan, encouraging us not to create monolithic sites. So finally, this brings us to Open Data, a sub-profile of WxT. Importantly, this profile must run in a Postgres-backed environment, so all modules have been extensively tested against it. It features a variety of improvements across everything from the data modeling layer all the way to the template layer. The main goal was just a straight port of the legacy Drupal 7 site, but a great deal of improvement has been made to the portal. I'll be highlighting a selection of the various improved components in the bulk of the remaining slides. Unfortunately this isn't going to show as well as I wanted it to, but if I hover on Open Data, this gives you a nice
dependency graph. Sorry, I'll explain it; this is pretty easy. These are all of Open Data's components, and this is all of Lightning's components, and you can see how it calls WxT and core, and it shows all the dependencies therein; and these are WxT's dependencies right here. This is just a D3 dependency graph wheel: if you have a composer.lock file you can just upload it, it will instantly generate a nice pretty graph for you, and you can easily see how the profiles interplay. Basically you can see that what we're inheriting from Lightning is quite a lot of different pre-built components, made by a well-supported team at Acquia; WxT builds on that with a bunch of extra components and fixes; and then our Open Data profile adds a very minimalistic add-on on top of that, just to get exactly what we need for open data. All right, so this right here, my god, I actually pre-recorded, so I have to time it. This is just showing the front end; nothing too crazy, mobile responsive, and everything is actually ported. This is a fresh install: I didn't do anything other than install and migrate in our content, nothing else behind it. So it's just showing the front page layout; the blogs all came in; it's all done via Bootstrap Layouts and it's working quite nicely. Then if I go down, this right here shows you, it's kind of hard to make out, but essentially I'm going into our install profile for WxT, showing that we're calling Lightning as our base profile and excluding some of the Lightning components we don't want, and then we go into OD, which you're going to see is calling WxT. So this highlights sub-profile inheritance, and you can see, additionally near the bottom, we added just a few extra customizations on top of that. And if we go into our
profile, you can see we enable our OD extension in the WxT Extend layer, thus making it so you can opt out of any of our advanced custom functionality in your own sub-profile. So finally we're going to go into the OD components; these are the six I'm going to try to highlight in short order. The first is landing pages, then search pages with Solr, the Group module, migration, user engagement, and API-first. Let me get my slides back. All right. During the course of the Drupal 7 life cycle, Open Data had a repeated scenario of needing to create a one-off page, with the ability to add custom blocks and position them just for that page, so the layout and content are both unique. Lightning provides an improved workflow for this that is also compatible with deployment. Just as a note, this was solved in Drupal 7 by an overuse, one would say abuse, of Page Manager structure pages. Now it's actually a proper content type, and that's helping us with deployment and workflows. So if we go down here, it's going to be funny if we do it a second time, this is just showing that all of our pages are now landing pages when they come in; they're just content. So this itself is a content page: I can edit this draft, and actually only when it's in draft mode can I change the Panelizer functionality behind it. Now I'm going into some of our proactive disclosure pages, showing that every single one no longer resides in Page Manager structure pages; it is a pure content type, and with that we get all the benefits that involves. And this is just showing an example of how the layout system works. Let me do it again: we can just change the layout, and you can see we have a bunch of layouts through Bootstrap, through columns, through WxT; we're slowly migrating in the official IA layouts. So I'm going to select the topics landing page. This is where things get interesting, where you can actually customize the layout a
bit more. You can add custom grid options, say span-10, for every region or a wrapper region. You can make your own templates the standard way in Drupal, but if you add the Bootstrap Layouts class you instantly get the Bootstrap grid on top of your layouts, so it gives you more flexibility when you're creating them; very, very useful. And this is just managing content, so we can easily drag and drop, and we can get content authors to create these types of screens, which is the end goal: to make this relatively easy. All right, next. So now we have search pages with Solr, my favorite component, where I spent a lot of the time. It's Search API-backed, moderated, deployable, Panelizer-based layouts. Compared to the legacy portal, whose architecture was built around Apache Solr and involved a lot of manual field mappings along with hook query alters, the new Search API workflow has been incredibly impressive. So first we have the Search API framework, the immensely powerful abstraction framework for creating searches on any entity known to Drupal, using a variety of search engines. Keep in mind that "known to Drupal"; we're going to get into that in a second. Then we have Search API Solr: it basically lets you integrate with the Solr server and lets it power your search pages. It's incredibly high performance; you do not want to have Views running over 200,000 rows and trying to display that, you should offload that to Solr and then present it through Views. It also leverages the Solarium PHP Solr client, and it powers our apps gallery site search. Then there's the new module, Search API Solr Datasource, which is coming into Search API fairly soon but right now is on GitHub, and I've been working with a great developer, dcam, to get this out. It helps you include content not originally indexed by Drupal, which is amazing: it basically means I can go
It's backed by Typed Data, it lets you search and view external documents, and it powers the proactive disclosure pages in Drupal 8. So this is just showing an actual grants page, powered by the CKAN web service coming to us; through Views and Search API we're just inheriting it, and we can do searches and quick faceting. There is functionality coming where we're actually going to be able to AJAX, but there's still a little bug with the "show more" where it doesn't AJAX that content in. Once that happens, the page refresh will also be gone, so the experience will be much improved. We have many facets and support for all of that. If I go down a bit more, this is the back end: you can see all of our cores are fully mapped, CKAN, ATI, contracts, all backed by YAML. There is no custom configuration anymore; we could actually do most of this via the UI, which was great. Going in a bit further, onto the index itself, this shows how we mapped it. Because we're using Docker, the connection strings are very easy for our local environment. Now, going back to show what it does when I actually integrate with it: if I go to processors, you can see we can do a bunch of extra work on the data itself, highlighting, extra HTML filters, basically further processing on the entities we're bringing back. This is where it gets me: these are all fields from CKAN right now. They come from the JSON REST API, and I'm controlling what type they are.
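For reference, an exported Search API Solr server configuration looks roughly like this. This is a sketch, not the portal's actual config: the machine name, host, and core are hypothetical, and the exact keys vary by search_api_solr version, but it shows why Docker keeps the connection strings simple:

```yaml
# config/sync/search_api.server.portal_solr.yml (hypothetical machine name)
id: portal_solr
name: 'Portal Solr'
backend: search_api_solr
backend_config:
  connector: standard
  connector_config:
    scheme: http
    host: solr        # the Docker service name, so local environments need no edits
    port: 8983
    path: /solr
    core: portal_core
```

Because it all lives in YAML like this, the whole search setup travels with the code base instead of being rebuilt by hand per environment.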
This is also the facets section, showing all the facets we have and how it's all backed and linked, and hopefully in a second it will show us the code layer. Sorry, I'll just get the slides showing again and see where we are. So this is just showing the code structure for all of that. It's kind of hard to see, I apologize, but basically everything you saw is backed in code. I can config-export this, bring it to production, config-import it, and it all comes in. That's it: there is no weird manual behaviour for any of this functionality, unlike Drupal 7, where we needed to do extensive work for that. Now we get to Group, a module that lets you create arbitrary collections of entities across your site and then apply granular ACLs to those collections. Basically, we're using this for communities and levels of membership. During post-installation, what we did is run a migration called od_ext_group, which imports all of the Government of Canada's defined departments as groups from CKAN via the Action API. With these groups imported, I can later assign content to them and basically lock it off, so only people in those groups can access that content. So if we go right here, I'm going to go very quickly to Structure, Migrations. This shows just the migrations we've done; I'm going to go all the way down to the bottom, to od_ext_group, and once it loads you'll see we brought in 191 departments through this. Migrate makes it so easy, because we can just point it at JSON, declare the mappings all in YAML, and everything comes in. Now that all the organizations are in, I'm going to go to page four, showing that we've mapped content to the Treasury Board Secretariat. With all this in place, once we migrated in all the blog posts and consultations, we said in the post-process logic that they should automatically belong to TBS, because right now we're going to let them be the gatekeepers for this: they can create blog posts and create consultations, because that's what this group has the privilege to do.
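As a rough sketch of what "point it at JSON, all in YAML" means in practice, a Migrate definition in the spirit of od_ext_group can read a CKAN Action API response directly. The IDs, bundle, and field selectors here are hypothetical, and this assumes the migrate_plus URL/JSON source plugins:

```yaml
# Hypothetical migration in the spirit of od_ext_group (not the actual file)
id: example_group_import
label: 'Import departments as groups from CKAN'
source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    - 'https://open.canada.ca/data/api/3/action/organization_list?all_fields=true'
  item_selector: result          # CKAN wraps its payload in a "result" key
  fields:
    - name: name
      selector: name
    - name: title
      selector: title
  ids:
    name:
      type: string
process:
  label: title
destination:
  plugin: 'entity:group'
  default_bundle: department     # hypothetical group bundle
```

Once a definition like this exists, running the migration pulls every department in as a group entity that content and members can then be assigned to.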
Now I'm going into the related entities. You can see we've added users to the group based on certain roles, and we've added certain blog content to the group so it can be locked off. The Group module is a little bit like Organic Groups, but a lot simpler. Next, migration: we did migrate all the content from the legacy portal to Drupal 8. This included paragraphs, flags, and data model changes, so if you're ever interested in how migration works, take a look at od_ext_migration in our code base; it might help you, because the upgrade code Drupal gives you through Migrate is more of a one-to-one with just core, and if you need to make architectural changes you're going to want something a bit more custom. So yes, we migrated nodes, users, menus, taxonomies, media, file entities, comments, blocks, paragraphs, and Panelizer. This is just showing roughly how you set it up: you have to declare a new database connection string saying what your legacy database is (ours is Postgres), that's how we're migrating it, and once the mapping is done all you really have to do is run drush migrate for the od_ext migrations, then the translations, and everything comes in. This is the overall migrations screen; you've already roughly seen it. You can see we brought in about 5,000 comments and 7,000 users; we don't have as many nodes; 28,000 webform submissions came in. Another interesting thing is that in the od migration we also brought in landing pages and Panelizer pages, all via Migrate. That's how we brought our state from an initial install to everything you see here; again, this was just an install I did two days ago with nothing done on top of it. So now we go into user engagement.
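For content coming out of the legacy Drupal 7 database rather than a web service, the pattern is similar: declare the legacy connection in settings.php under a key, then reference that key from the migration source. A sketch, with the key, bundle, and process mappings all hypothetical:

```yaml
# Hypothetical node migration reading from the legacy D7 Postgres database
id: example_legacy_node
label: 'Legacy portal blog posts'
source:
  plugin: d7_node            # core's Drupal 7 node source plugin
  node_type: blog_post
  key: migrate               # database connection key declared in settings.php
process:
  title: title
  langcode: language
destination:
  plugin: 'entity:node'
  default_bundle: blog_post
```

The architectural changes mentioned above (paragraphs, flags, new data models) happen in the process section and in custom process plugins, which is why a one-to-one core upgrade was not enough here.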
Integration with Flag comes pre-configured during the regular profile install of Open Data. Currently, Flag provides the plus-one functionality used for suggested datasets; you can see our suggested datasets here. What we actually did is port this over: these all used to be votes in Drupal 7, and we said, well, if you're only up-voting and there are no down-votes, this is really more of a flag. That actually helped move things along, because not everything in Drupal 8 is done yet. So we converted votes to flags through our migration layer, which was very interesting to do and actually not that hard. Now we can do comments; you can't really see them here, but they have been styled and they're much more nested. There's a bit of grey, and I'm not sure why it's showing. Then we have our apps gallery, which we've done a bit more work on, and all of this came in through Migrate, so again I encourage you to take a look. Then I just went a bit further and decided to rate one, and there you go for the UI components. Now, API first. This involves External Entities and JSON API, and it's the last big component. Let's look at two examples where we're leveraging this functionality. On the external entities screen, you should notice we have two types, CKAN and Solr: we actually made two storage clients that work with External Entities. What this lets us do, as you see on the slide, is tell Drupal: go directly to CKAN, do a REST query, and pretend the results that come back are entities in Drupal. That's right, we're actually making faux entities in Drupal, and we're able to add fields to them. So for CKAN, I go to manage fields, and I'm editing the CKAN external type; I've mapped my fields from the external dataset. On the storage settings, when it pops up, you'll see the CKAN storage client pointing to open.canada.ca.
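Conceptually, a storage client like the CKAN one does two things: issue a REST query against the CKAN Action API, and map the packages that come back onto local "entity" fields. A minimal sketch of that idea, outside Drupal (the endpoint path follows CKAN's real Action API convention, but the field mapping here is hypothetical):

```python
import urllib.parse

CKAN_BASE = "https://open.canada.ca/data/api/3/action"  # assumed CKAN endpoint

def package_search_url(query, rows=10, start=0):
    """Build the CKAN Action API package_search request a storage client
    would issue on Drupal's behalf."""
    params = urllib.parse.urlencode({"q": query, "rows": rows, "start": start})
    return f"{CKAN_BASE}/package_search?{params}"

def to_entities(response):
    """Map CKAN packages onto faux-entity field values; the uuid/label
    mapping is illustrative, like mapping fields on the external type."""
    return [
        {"uuid": pkg["id"], "label": pkg["title"]}
        for pkg in response["result"]["results"]
    ]

# A canned response in CKAN's package_search envelope shape:
sample = {"result": {"results": [{"id": "abc", "title": "Example dataset"}]}}
```

Once the mapping exists, everything downstream in Drupal (Views, comments, votes) can treat these dicts as if they were local entities, which is the whole trick.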
These are our pagination settings, and then our API key settings, of which we have none. Under parameters I'm just doing a simple test: give me all datasets, nothing too crazy. If we go back, you can see that the Solr inventory works roughly the same way. We now have entities in Drupal that came from CKAN; Drupal does not have any real knowledge of them, but because they're now in Drupal, you'll notice when I go to page three of the open portal catalogue that we can do some nifty things, and this is how the Open by Default portal works. I click into the open data portal catalogue, and we can do comments and votes on external entities that have no business being in Drupal right now. It's amazing that we can do this. Then, through JSON API, CKAN can do another query to get all of this back, so we're working with CKAN entirely through web services, unlike Drupal 7, where we actually did some database chicanery to make things work. So that's that. Last thing: we're going to go through JSON API very quickly. There's no configuration out of the box; if you enable it, it provides a full REST API for every entity type and bundle in your Drupal application. Right here you can see that just by installing JSON API, I get all my routes, and you can get anything from them: any comment, any node. You can paginate and do relationships between them. It's quite impressive, and it works with the default entity access. This is just our Postman collection, highlighting how some of these things work. Right now you can see I'm querying external entities via JSON API: give me all the CKAN external entities. We're getting all the results back even though they're not really in Drupal; I can leverage them as entities now because we made them entities through the External Entities module.
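JSON API's conventions (filter, include, page) are plain query-string parameters, so the kinds of requests we fire from Postman can be sketched like this. The site URL, bundle, and field names below are hypothetical stand-ins, not the portal's actual resource names:

```python
import urllib.parse

BASE = "https://example.org/jsonapi"  # hypothetical Drupal site

def jsonapi_url(resource, **params):
    """Build a JSON API collection URL; enabling the module exposes a route
    like /jsonapi/node/blog_post for every entity type and bundle."""
    qs = urllib.parse.urlencode(params)
    return f"{BASE}/{resource}" + (f"?{qs}" if qs else "")

# All nodes of a bundle, five per page, with the author relationship included:
nodes = jsonapi_url("node/blog_post", **{"page[limit]": 5, "include": "uid"})

# Comments filtered down to one CKAN package id, joined back to their node,
# the shape of relationship query described below for powering CKAN's screen:
comments = jsonapi_url("comment/external", **{
    "filter[field_ckan_id]": "abc-123",
    "include": "entity_id",
})
```

The square brackets get percent-encoded by urlencode, which JSON API accepts; the point is that pagination, filtering, and relationship traversal are all expressible in the URL with zero server-side configuration.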
Now we can go through the Solr inventory to show how the other one works, because we had two mapped, CKAN and Solr, and it's working pretty great. Then in the next section, once it comes up, you can see I'm querying external comments: fine, give me all the external comments I just made. You can see we can do parameters for relationships right here, and now it's getting us the nice comment we made on an entity that is not really in Drupal, which is really great for coupling functionality. And finally: querying external comments on its own means "get us every external comment," but what CKAN wants when it loads its page is just the comments for a given CKAN package ID. This is where we do a relationship: we say, based on the CKAN package ID, join back up to the node, and since we're displaying comments, bring the node along too, and just give them the one entry they want. They use this to power their screen. Finally, we have votes. We're just showing how all the votes come in: we can query vote to get every vote, and then later we can query votes by a specific CKAN ID just to get that number, which is quite great. And because we're using the Vote module, it comes with a lot of functions, like the vote average, the vote count, and the vote sum; since we're using JSON API, we can do a relationship query to get all of that information as well. You see here I'm going to hit the params section for vote results: because the API has them in the database, give me those records too; I need the count and the average in order to do the proper display. So that's how that works. I encourage everyone to take a look at JSON API; it comes with Lightning by default and it's a great module. I think I'm going to end it here. That's roughly the whole presentation; we had a lot more components, but I was trying to keep everything time-boxed. Thank you very much.