So errors are serialized in XML on the ESB, then deserialized and eventually reserialized on the SOAP gateway, which then creates a further envelope and transmits it onward. So the issue is that there is no universal semantics for communicating errors. And when do things go bad?
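As an illustration of what a single, universal error semantics could look like, here is a minimal sketch based on RFC 7807 "problem details"; the field values are invented examples, and the talk does not prescribe this exact builder:

```python
import json

def problem(status: int, title: str, detail: str, problem_type: str = "about:blank") -> str:
    """Build an RFC 7807 "problem detail" body: one JSON error format
    that every service in the ecosystem can produce and parse, instead
    of each gateway re-serializing errors its own way."""
    return json.dumps({
        "type": problem_type,   # URI identifying the error class
        "title": title,         # short, human-readable summary
        "status": status,       # mirrors the HTTP status code
        "detail": detail,       # occurrence-specific explanation
    })

body = problem(503, "Service Unavailable", "Tax registry overloaded, retry later")
print(body)
```

With one agreed format, the intermediaries can forward errors untouched rather than translating between envelope styles.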
Well, when you are at peak load, because you cannot do service management and you have a system that is overloaded with requests. For example, the city of Rome has around 3 million people. On tax-paying days, you have nearly 3 to 9 million requests, only for the city of Rome. Imagine what can happen in a 60-million-people country on tax-paying days, when people remember on Friday morning that they have to pay taxes and they all make requests between 9 a.m. and noon, all hammering the ecosystem. Those kinds of issues are everyday issues: you have millions of requests that should be serviced in 4 hours. And this is going to break during peak loads, because processing errors is costly: you add further CPU and RAM work exactly when you are at peak load. Moreover, all this became a barrier. It worked well for 13 years, but now it is a barrier to the creation of new services. It is very expensive, both in set-up and in operational cost, and it complicates communication with the private sector, because if you are a private company that wants to interact with all the government services, you need the ESB and the custom gateways and so on. Moreover, the IT world was moving beyond SOAP. So, OK, we started to think: if the IT world moved beyond SOAP, why can't we? SOAP is old, but again, it was born because there were many weak points in the former HTTP specs, and it essentially added one layer on an underlying protocol that is not necessarily HTTP. You can send SOAP messages over HTTP, over SMTP, whatever, and SOAP is based on messages; it is virtually asynchronous. But today we have this brand new, nice, wonderful semantics that is RFC 7230 to 7235. There is even a lot of work on HTTP now. Almost all services, not only web services, but even mail services and directory services, are now inherently based on HTTP.
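The cascading overload described here is exactly what a machine-readable saturation signal is meant to prevent. A hedged client-side sketch, assuming the server sends a plain 503 or 429 with a `Retry-After` header carrying seconds (the 60-second fallback is an invented default, not part of any spec):

```python
def seconds_to_wait(status: int, headers: dict) -> int:
    """Decide how long a client should pause before retrying, using only
    the status code and headers, never the body. Returns 0 when the
    request may be retried immediately."""
    if status in (503, 429):
        # Retry-After may carry seconds; HTTP-date values are ignored in
        # this sketch to avoid clock-skew issues between NTP servers.
        value = headers.get("Retry-After", "60")  # assumed fallback
        return int(value) if value.isdigit() else 60
    return 0

print(seconds_to_wait(503, {"Retry-After": "300"}))  # → 300
print(seconds_to_wait(200, {}))                      # → 0
```

A home-banking client using this logic would stop re-issuing payment calls for five minutes instead of piling timeouts onto an already saturated tax office API.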
Now we have found that the network is reliable enough for synchronous exchanges. So it does not always make sense to transport messages over SMTP. It is fine, I mean, but it's not always a requirement, because our systems are always online. So we can go synchronous: we have instant messaging, we have chats, and so on. So we can actually move beyond SOAP. These new semantics allow us to route requests based on path and method. We don't necessarily have to process the message body to know how to balance requests, whether they are important or not, whether they are read-only or read-write, whether I have to cache them or not. I can use status and headers, which is a sort of reading of the payload without actually reading the whole body. Moreover, I get a lot of semantics for caching, conditional requests, and even range requests. So this new framework wants to standardize APIs without SOAP. That doesn't mean we just kill SOAP overnight, but we expect new services to be provided through REST APIs. We want to enforce an API-first approach based on OpenAPI v3, that is, the standard formerly known as Swagger. And we added standardization based on shared schemas and a new availability strategy based on distributed circuit breaker and throttling patterns. That means we require all APIs released by government agencies to implement throttling and circuit breakers. And this is the shiny new ecosystem we want to implement, where every agency implements APIs that can be mashed up, and we can create more and more services, mixing the national registry and the PHR, that is, your personal health record. Those are provided by the central government.
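The method-and-path routing idea above can be sketched in a few lines; the path prefixes here are invented for illustration, not the real Italian routes:

```python
# Classify a request using only method and path, never the body: enough
# for routing, prioritisation and cache decisions at a gateway.

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def classify(method: str, path: str) -> dict:
    return {
        # safe methods never modify state, so they are read-only traffic
        "read_only": method in SAFE_METHODS,
        # read-only GET/HEAD responses can be cached or served by replicas
        "cacheable": method in {"GET", "HEAD"},
        # e.g. route health-record traffic to a higher-priority pool
        "priority": "high" if path.startswith("/phr/") else "normal",
    }

print(classify("GET", "/phr/allergies"))
print(classify("POST", "/registry/update"))
```

The point is that none of these decisions required deserializing an envelope: status, method, path, and headers carry enough semantics on their own.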
Together with services provided by schools and towns: the school can ask the revenue system whether you are eligible for grants, and the hospital can contact the town or your personal health record to see, for example, if you are allergic or intolerant to something. And this is our actual goal: provide mashed-up services for the citizens, because the main goal of the administration is to serve citizens, and sometimes, when you speak with an administration, you have to repeat that to them. Usually administrations think their goal is to create documents, and once they have produced the documents they are done, but you have to remind them, and sometimes you'll find somebody that understands, and that becomes a turning point, because they will remember, they will be somebody that follows and understands the actual goal of the administration, and magically they understand the new ecosystem, and then they understand we can get rid of SOAP. So, standardization: let's come to the tech stuff. OK, HTTPS only: we banned plain HTTP and binary messaging. That means that if your administration has a shiny new Kafka, JMS or AMQP cluster, they simply cannot expose it on the internet; they have to provide an HTTP wrapper that provides authentication and authorization via mutual TLS, OpenID Connect or OAuth. This is not because Kafka is not good, but because you have to standardize things, and Kafka is actually not a standard. Kafka lets you implement a binary communication system, while HTTP lets you write specs where you communicate in a standard way, so you can move from Kafka to AMQP or JMS and the other government agencies don't need to know, and, let me say, don't have to know, whether you use Kafka or AMQP.
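A minimal sketch of such an HTTP wrapper, using only the Python standard library: `forward_to_broker` is a hypothetical placeholder for the real Kafka/AMQP/JMS publish call, and authentication (mutual TLS or OpenID Connect) is omitted for brevity:

```python
# The broker behind the wrapper stays an internal detail; peers only see HTTP.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

def forward_to_broker(topic: str, payload: bytes) -> None:
    """Placeholder: a real wrapper would publish to the binary broker here."""
    print(f"queued {len(payload)} bytes on {topic}")

class WrapperHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        forward_to_broker(self.path.lstrip("/"), body)
        self.send_response(202)  # accepted for asynchronous processing
        self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), WrapperHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/notifications", data=b'{"msg":"hi"}')
status = urllib.request.urlopen(req).status
print(status)
server.shutdown()
```

Swapping Kafka for AMQP only changes the body of `forward_to_broker`; the HTTP interface, and therefore the published spec, stays the same.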
They just have to know your interface with external services. Then we want to leverage status, method and path, because it is important not only to provide fast services: we also need auditing and routing, because in government you usually have to treat personal data, so it's better to trade off some speed for the ability to have a unique and trustworthy system of encryption, authentication and authorization based on well-known protocols. Then, this is easy: just stop logging in ad hoc ways and log in RFC 5424, that is, syslog. That way you don't have to care about things like New Year's Day or daylight saving time, and if I have to cross-check logs from two administrations I don't have to pay an external company to write an expensive log parser: you simply send me the logs and that's fine. Well, this ontology-based schema, what's that? Essentially, Italian law states that, apart from some exceptions, public organizations should release open source in the wild, on GitHub, for everybody to use the software. But then one of the issues you have is that their web services serialize the given name in tens of possible different ways, while in Italy we have an ontology, that is, a well-known and established set of schemas, which says that the given name is named givenName, and not name nor firstName. The same for the tax code, the same for the VAT number. So we expect in two or three years to get rid of all those serialization variants and to have converged on a single, uniform serialization, which will save a lot of time in unit testing when two organizations communicate, because if, let's say, the revenue service says givenName and the health service says firstName, you have to provide a converter and you should test that all exchanges work.
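The converter burden just described can be sketched as a mapping from each agency's legacy field names onto the ontology's canonical ones; the legacy aliases and the sample tax code below are invented examples:

```python
# Canonical ontology names on the left, invented legacy aliases on the right.
CANONICAL = {
    "givenName": {"name", "first_name", "firstName", "nome"},
    "taxCode": {"fiscal_code", "codiceFiscale", "cf"},
    "vatNumber": {"vat", "partitaIva", "piva"},
}
# Invert the table once: alias -> canonical name.
ALIASES = {alias: canon for canon, names in CANONICAL.items() for alias in names}

def normalize(record: dict) -> dict:
    """Rename known legacy keys to their canonical ontology names;
    unknown keys pass through untouched."""
    return {ALIASES.get(key, key): value for key, value in record.items()}

print(normalize({"first_name": "Ada", "cf": "RSSMRA80A01H501U"}))
```

Once every service emits the canonical names, this whole converter layer, and the tests that guard it, can be deleted.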
So standardizing names is probably the most complex part, because everybody wants to retain the old names, but on the other hand we estimate we will save a lot of money in two or three years, and eventually provide for public administrations a set of libraries for personal information management that can provide validation and so on at the central level, so if you have to manage personal information you can just reuse this upcoming library. Another part is reliability. This was the main goal behind the new framework. We started from the European Interoperability Framework: if you are in the architecture world it's a good read, actually a very good read; it doesn't go too deep into the tech, but it goes deep into requirements. And if you want to work in the government world, you have to show that the change you are introducing comes from the European Union, because this is your passport for motivating administrations to change, and actually I think it's a well-written document. So what does it say? It says that if you provide government services you have to plan: you have to write a business continuity plan. That may sound like little, but it can mean a lot. For example, if you provide your services on a public cloud, it means you have to implement a multi-availability-zone strategy; if you provide the service in your own data center, it means you have to provide disaster recovery in two cities, or at least in two different data centers. For us, on the interoperability side rather than the architectural side, it means you should provide integrated management of load and failures: if your infrastructure is overloaded, you should communicate that in a machine-readable way to the others, because this avoids cascading failures, which are actually one of the main issues in governmental services, especially when you have infrastructure behind infrastructure. For example, you want to pay a tax: you talk with your home banking, and your home banking will contact both the service
payment system of the government and the remote public agencies. So it's a three-level architecture across different agencies, both public and private. It's something like that, but the payment ecosystem is even more complex. For example, if the tax office API is overloaded, you still contact your home banking, but your requests are going to time out, because the public infrastructure behind it cannot service them, and this puts a load on your home banking: they have actual transactions that are failing, so you have, for example, a transaction ID on PayPal, and then they tell you that transaction number X failed. The new framework provides a way, in case the service is not available, or if it is up and running but overloaded and its queue of API calls is too long, to use an API management layer that returns a saturation response: just a plain 503 Service Unavailable plus a Retry-After, which could be retry after one hour, five minutes, tomorrow. In this way the payment service, your home banking, knows that it should not keep issuing payment requests; it can just say: this payment service is overloaded, I won't issue new calls for five minutes, for ten minutes. Or, for example, if the service is a real-time service that works only from 9 a.m. to 6 p.m.,
it will service requests only in a given time frame, but now the service C that relies on service B knows when it can start issuing new requests. This is very important because it gives the organization a way to communicate to the user that the service is up, but the request cannot be serviced within five minutes, one hour, twelve hours. So how can you do this? You have to tell people to implement it, but you also have to communicate which HTTP headers to use, because, as you can see on the left, there are at least twelve different possible HTTP headers you could use to implement this. If you standardize, a server application relying on external services can just check those three, that is, RateLimit-Limit, RateLimit-Remaining and RateLimit-Reset, and doesn't have to guess and check all the other possible headers. So the important thing is to reduce the supported headers and be clear about them. The last one, RateLimit-Reset, essentially tells you, when you are over quota, when you can start issuing new requests again. Just tell clients how many seconds they have to wait: if you put a timestamp or a date, they have to rely on your NTP server or on theirs, while with seconds, even if the clock is skewed, they just wait long enough. Moreover, this approach is semantically consistent with another header that is actually in the HTTP standard, the Retry-After header. So always tell clients, when they are going to saturate your queues, or when they are over quota, or when the service is overloaded or out of service, how long before the next request may arrive. Then, this is wonderful, this is easy: there is a standard for error responses, just use it. Then, almost done, the future steps. Standard metrics: use rates, not absolutes, because when you have to communicate service status you probably want to communicate your metrics: the frequency, the base units, your availability. The old framework used more than 20 different
metrics. Some were the-higher-the-better, some others were rates, others were absolutes. We say: just use rates, not absolutes; use base units, just bytes and seconds; express availability as success rates, not error rates. For example, availability: the service was up for 95% of the time; or success rate: the service responded nicely to 95% of the requests. You must pick an expected response time; we are actually deciding between responsiveness, which is quite a tricky metric to evaluate, and an Apdex index, which just uses the target response time to create an index that goes from 0 to 1 and is quite readable and usable. If you have suggestions on that, they are welcome. Then the hard part: signature and encryption. Signing an exchange with a digital certificate is the basis for a non-repudiation framework: if I send something to a public administration, I have a sort of receipt guaranteeing me that the counterpart cannot deny the exchange. SOAP has a well-established, though criticized, standard for signing and encryption, that is WS-Security. REST has some standards, still criticized, that are JSON Web Signature and JSON Web Encryption, namely JWS and JWE, used by OpenID Connect for example. We are still investigating all that. The possible choices we have are: one, leave signature and encryption to the application protocol, for example let the administration sign just the body, which is what WS-Security does, so we can just extend a JSON object with claims or add HTTP headers with a signature; or another approach, ongoing at Amazon, where there are a couple of drafts, is to create a signature of request headers and body and put everything in the HTTP headers. There are some proposals and there are many issues, but it is interesting research stuff. Further discussions are on digital certificates. I am lurking on these working groups; they consider RSA a legacy, but it is not that you can just say 'don't do RSA'. I
mean, you should think it through, because it is something we have to consider with at least a 10-year perspective. The advantage of considering RSA a legacy is that elliptic curve keys are very short, and they are easily embeddable in HTTP headers or in claims. Then there is something new in the HTTP world: structured headers. You can see that the second item, encoded with stars, is actually base64-encoded binary. The advantage of structured headers is that you can embed binary stuff into headers in a standard way, so when you get something encoded like that, you know it is just base64. And, well, another discussion, but it is really too much for a conference, is whether to duplicate or adopt the Digest header. If you use it, please take care to read carefully both the Digest header RFC and the proceedings of a couple of HTTP working group meetings, because it seems nice but can be tricky. Well, I think that's it. I hope I didn't bore you too much. If you are interested in the new Italian framework, it is actually in Italian, but I'm working with GDS, that is, the United Kingdom's Government Digital Service. I started some preliminary talks on this work; there are some similarities, some differences. We just started last month, but we have made a lot of calls to try to find some common ground, and we have even started some first interactions with the French government, so there is the opportunity to create a really European framework for APIs. There will be a European conference in October; I hope to make it there, and if you are interested in this stuff, please let me know and I will provide all the English documentation I can. Thank you very much. Again, you can write me on Twitter or on my institutional email. If there are questions, please come up here. Any questions? Thanks for the talk. I'm glad to see that Italy is also progressing on this, and we can see that there is some conversation with the UK government. I think they are quite good at digitalizing all their services; they have a very good
GitHub account with a thousand repositories. So I have a couple of questions. First: what are the first services that citizens will use through these new technologies? And second: all this work is open source, so how was the process of accepting pull requests or contributions from the community? OK, actually the main services are related to data. For example, the queue if you go to the ER in a hospital: all that data is usually exposed by API, so, if you are in a big city, you may know which hospitals have the longest or the shortest queue; a lot of the work for the citizen is still based on data. Our first targets, instead, are some frequently used services provided between administrations, which will be based on REST APIs, but with mutual authentication via TLS, essentially, because those are a backbone on which local and smaller public agencies, like towns, can build services. So the first goal, to me, is to unlock some services, just like tax code validation. It seems simple, but it is not, because, for example, a tax code can be syntactically valid while not belonging to anybody, so there is a service that provides actual validation: it checks whether the tax code is attached to a valid person, a real-life person or a real company. This is used billions of times, and it could be used, for example, for the validation of all the forms filled in by citizens: usually, when you fill in a form and specify your tax code, they only do a syntactic validation. Or, for example, checking whether your driving licence is valid. Those are core services that are very frequently used, that could be easily scaled, and moving them to this approach will bring a lot of benefits. About the open source stuff: actually we have almost 190 repositories in the Italian Government account. We are working hard with the central administrations, because even though the law tells them that they have to release as open source, there are no sanctions for
not doing that. So you have to check the projects, you have to help them, because releasing as open source is not just throwing out whatever you have: you should provide quality software, software documented well enough for the organization to get contributions. Now, one of the main projects, a new project we are working on, is a messaging service for people: a mobile app where you can receive payment notifications, fines, for example, or announcements about your healthcare data. This application is backed by an API service that can receive and deliver messages to citizens on their phones, and they can register a wallet. In Italy we have pagoPA, which is a sort of GOV.UK Pay, and it started almost four years ago. The nice thing is that you can use this payment service to register your credit card and payment methods to pay taxes and so on. This app is fully open source, the infrastructure is fully open source, you can get it, and it is usually very well documented: it's on GitHub and the documentation is Markdown, like mostly all the new Italian government projects. We have time for one more question. I just wanted to quickly mention there's a project called COSE, which is RFC 8152, and that's a signing and encryption protocol which might be useful to you: on one of the slides you mentioned you were looking at different options for that, and I just wanted to mention it. Just let me know: RFC... RFC 8152. Yeah, it's based on CBOR, I think; it's another way of serializing objects, and it's very interesting. There is one of the standard exchanges that partly uses CBOR, and I don't know if it covers all these specifications, but this CBOR stuff is actually very interesting. Essentially, CBOR is a binary serialization of JavaScript objects, and using the header notation I showed you before, the star one, you can just take a JavaScript object, serialize it into a binary one and
put it in a header in a standard way. Well, thank you very much, that would be interesting. And thank you very much for your patience; I hope it was interesting enough, even if it's a public, governmental project. Public and government can be good and fun. Thank you, Roberto. Thank you to you, goodbye.