All right everyone, thank you for coming to this last session of the conference. I must say I've had a great conference so far, so full of information and fun stuff. I also think it's really cool that this is recorded, so that other people can watch it later and learn from it. It made me think of my ten-year-old daughter. She had a friend over the other day and they were playing in the garden with some old camera they had found somewhere, and when they came back in I asked, "Were you playing photographers?" and they said, "No, we were playing super successful YouTubers." So I will see if this talk earns me that title or if I'll have to work on it.

Okay, thank you for coming. My name is Frederick, if you didn't see that in the description or on this slide here. I work as a software architect for a company called Tunstall, which works within the sphere of social care for elderly people. This session is titled "There and Back Again: How Tunstall Created an IT Platform Based on Mesos". When I was practicing this speech, it struck me that "there and back again" almost sounds like we built this Mesos streaming application with all the new stuff, and then said, "Let's go back and do it the old-fashioned way: on-prem, monolith, all that." But no, what I want to describe with this talk is that we made a journey, and I want to share with you some of the insights we gained along the way.

Today I'm going to talk about the background, why we're doing this, the business foundation for it. I will only touch on it briefly, but I think it's important to put things in context. Then we're going to look at our new platform, which we call Eviti, first from a functional perspective and then from a more technical perspective.
I will talk about this platform and how we view it as an operating system for the business applications that we're going to run. After that, we're going to take a closer look at one of the frameworks that we supply with the platform, the IoT framework. We use the term "framework" internally to describe a set of services, an application, that we logically define together, deploy together, and that are dependent on each other. That is not a Mesos framework; they could be implemented as Mesos frameworks, but they're not, at least not at the moment. After the look at the IoT framework, we're going to touch on some things we found during this journey: technical things that I would like to share with you, things we thought were important, things you need to take a better look at. I won't be going super deep into the technology; I will rather just mention them, so that if you're going to build a platform like this, or something similar, you'll have some knowledge of what to look at next.

During these sections, I will ask for questions after each section, if you have questions about something we talked about there, and we can also have a Q&A when everything's over. Okay?
Good. So, Tunstall. Tunstall is a company that works with social care for elderly people. We work with independent living and with assisted living. Assisted living is where elderly people live together in a facility that is created for the elderly to live in, with staff and other things that help them go about their business. Independent living is where elderly people live in their own home, but have some services provided by a private actor on the market or by the government.

This market is changing right now, and basically there are several factors working together to create this change. One of them is demographic: currently there is an increase in the elderly population, people are living longer, and we have a different ratio between working people and people who are considered elderly and need social care and such services. This change in ratio between the working population and the aging population also creates a difference in funding; there are not so many people around to pay for the services for the elderly. With regards to funding, we are seeing the trend that the funds allocated to elderly care are decreasing rather than increasing, and I'm sure that if you go home and ask your local politician whether they want to raise taxes in order to increase the funding for this, they would probably say no; that is not so popular.

On the other hand, we also have a shift in technology that allows us to use new technologies, sensor technology and machine learning for instance, to provide services that are more efficient and cheaper while giving the same results as our traditional technology. We also see a new set of competitors on the market when it comes to the new types of services we are looking at, and these competitors are not our traditional hardware-based competitors, but new companies that have worked with digital transformation for a while. We have mobile operators turning to this sector, and we also have the big ones like Apple and Google. And we see a slight shift in the consumer market: people are willing to pay for services if they have the private funds to do so, and we also see more interest in personal applications, and relatives who want services and applications in relation to their elderly relatives.

We think that within the next three years, 60% of our business will be in new markets and/or new products, and by new markets I don't mean geographically new markets. Right, do we have any questions regarding this? Okay, let's go on then.

So what we did at Tunstall, about a year and a half ago, was sit down and think about these trends we saw and what we could do about them, and we decided that we needed to create a platform that would act as a technology enabler for new services and applications, helping us transition into this new market economy. We call this platform Eviti. From the business perspective, Eviti is a platform that enables us to create new types of services and use modern technology to develop and improve the quality of our offerings to our customers. The services that we are trying to create are based on care and health services, on planning services, and on security services. Security, for instance: that is video surveillance, staff security, and keyless access, to give some examples. We also need a platform that enables us to monitor what is happening within our system, to look at logs for instance, or to have metrics about the devices we have out there. We're trying to create new devices to collect health telemetry, but also to integrate third-party devices that will supply us with the telemetry we need to have these preventive and reactive services. So that was what I had to say about the why of why we're doing this.
Are there any questions about that? Okay, so let's look at Eviti from an architectural position. When we started this, I started to think about the platform as an operating system. So what is an operating system? The computers that we use daily enable us to use all the hardware underneath without really thinking about the technical details; they transform technology into functionality. The classic example would be files: when we program something that uses a file, we can ask the operating system to supply the content of the file without needing to think about where that file is. Is it on a hard drive? Is it on a network drive? All that technology happening beneath is transparent to us; the operating system takes care of it.

For Tunstall, take something like a keyless lock. We have a lot of products that supply this, our own products but also third-party products. What I would like this platform to do is to expose the functionality associated with a keyless lock, so that a business application using this functionality does not have to care which lock is actually supplying the keyless functionality in the end. So if we have a planning application in which different staff teams get access to different locks, that application would be able to run for a lot of different customers, regardless of which type of lock they use, and regardless of whether the application itself supplies functionality for that type of lock. And we use the old pattern of plug-and-play: within this operating system, we want to allow services and applications to run by plug-and-play.
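To make the idea concrete, here is a minimal sketch of that kind of abstraction in Python. All the names here (LockProvider, VendorALock, the registry) are hypothetical, invented for illustration; the talk does not describe Tunstall's actual interfaces.

```python
from abc import ABC, abstractmethod

# Hypothetical abstraction: the platform exposes "keyless lock" as a
# capability, and vendor-specific adapters plug into it.
class LockProvider(ABC):
    @abstractmethod
    def unlock(self, lock_id: str, staff_id: str) -> bool:
        """Attempt to unlock the given lock on behalf of a staff member."""

class VendorALock(LockProvider):
    def unlock(self, lock_id: str, staff_id: str) -> bool:
        # Would speak vendor A's proprietary protocol here.
        return True

class VendorBLock(LockProvider):
    def unlock(self, lock_id: str, staff_id: str) -> bool:
        # Would call vendor B's cloud API here.
        return True

# A simple registry: each physical lock is bound to whichever adapter
# supplies it, as a deployment detail.
registry: dict = {
    "front-door-17": VendorALock(),
    "apartment-3b": VendorBLock(),
}

def open_door(lock_id: str, staff_id: str) -> bool:
    # The business application only ever calls this; it never sees vendors.
    return registry[lock_id].unlock(lock_id, staff_id)
```

The planning application only ever calls open_door; which vendor's lock answers is invisible to it, which is the plug-and-play property described above.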
So usually we will provide a framework, and services can then either plug new functionality into this framework, or reuse the framework for their own purposes. The frameworks that we are going to deliver as part of the core platform are the IoT framework and the identity framework; we have a data warehouse, for instance, and we also have a framework for hybrid cloud. We think a lot about data integrity and security for our applications and our data, and although we say our new platform should be based in the cloud, we know that in certain scenarios our customers will require us to run certain parts of the application either on-premise, or maybe in a data center located in the same country as the customer. So we want a framework that allows us to run the same set of applications alongside each other but in different locations, and the hybrid cloud framework does that for us.

If we look at the architecture of the platform, it's basically layered in three layers, and as we'll see, we use Mesos and DC/OS to power all three of them. First there is the platform layer, or the operating system kernel if you want to call it that. I never mentioned it before, but an operating system doesn't just translate technology into function; it also provides a runtime environment in which to run our applications.
That's also important in this platform, that we have that. And the third part of an operating system is to provide some common functionality that we can use, and those are our frameworks. In the platform layer we use an IoT hub, event messaging systems, and authorization and access control, for instance; that's part of our frameworks. On top of that we can then implement our own customized applications and services, and we use DC/OS as the runtime for them, so we can scale them, deploy them easily, and also upgrade them.

We want an environment in which each application can run on this platform, where each application can have its own dedicated team that implements it, and its own life cycle. We often let the team decide for themselves what technology to use to implement it; we don't say that this is, for instance, Java only. An application could be implemented in anything. We are technology-stack agnostic: as long as the application can run in a container, we're happy, because then we can run it on DC/OS.

On top we have access management, and we have a protocol gateway, which is associated with the IoT hub. We have an API gateway to route any incoming traffic to the correct end service in the back end, and we also have authentication services for the APIs and the protocol gateway. Any questions about this architecture?

Okay, so let's take a look at the IoT framework. This was one of the first frameworks we decided to build, and maybe this was the driver that made us build this platform in the first place.
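Before going further, a quick sketch of what "as long as it runs in a container, we can run it on DC/OS" means in practice: a service is handed to Marathon (the DC/OS scheduler) as a JSON app definition, roughly like the dict below. The service name, image, and values are hypothetical; the field names (id, container, instances, healthChecks) are standard Marathon app-definition fields.

```python
import json

# Hypothetical Marathon app definition for one containerized service.
# Marathon doesn't care what's inside the image: Java, .NET Core, Go...
app = {
    "id": "/eviti/planning-service",     # hypothetical service name
    "instances": 3,                      # Marathon keeps 3 copies running
    "cpus": 0.5,
    "mem": 512,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "registry.example.com/planning:1.4.2"},
    },
    "healthChecks": [
        {"protocol": "HTTP", "path": "/health", "intervalSeconds": 30}
    ],
}

# This payload would be POSTed to Marathon's REST API, typically
#   POST http://<marathon-host>:8080/v2/apps
payload = json.dumps(app)
```

Scaling or upgrading is then just a change to this definition (bump instances or the image tag) sent back to the same API, which is what makes the per-team, per-lifecycle deployment model workable.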
We wanted to create an environment in which we could read health telemetry data, and we wanted to be able to read it from a lot of different sources. The only thing we knew from the outset was that we don't know which devices we're going to use in the future. We only knew that there would be devices; some of them would be our own, some of them would be third party. We didn't know which metrics we would be interested in. We knew some, of course, but still, I know that in the future we are going to collect metrics that I don't know about today.

So the IoT sensor data architecture is designed so that we can have a framework with a set of services that act upon this data, data we don't really know today, while still giving us the functionality for really rapid development with IoT sensors. What we do is, for each metric that we are interested in collecting, we try to normalize it, so that regardless of the source we have a unified way of looking at that metric for a specific person. We also want to be able to plug in new devices easily and rapidly, without redoing a lot of work and without having to redeploy the system or anything like that. And we want to be able to use streams and stream analytics on these metrics.

The IoT sensor data architecture looks like this. Basically, we have divided it into three parts. First we have something we call the payload pipeline, which is the top four boxes; then we have the metric extraction services; and then we have the telemetry services. The first part, the payload pipeline, looks upon device data coming into the framework as a payload it doesn't know anything about. It's just a set of bytes, basically, and the pipeline acts upon the metadata associated with the payload; from that it will transform or re-route the messages to the various other services that are interested in the pipeline. So we can actually hook up a new device to this system and it will function, without us having to do any re-implementation or any changes in the existing architecture. In the end, when everything has been filtered and re-routed, it will hit the metric extraction service.

This architecture is actually a revamped old architectural style; I don't know if you realized which one. We reuse the old pipes and filters, which was invented at Bell Labs back in the early seventies, when they were implementing the Unix operating system. I think it's really a property of a sound architecture that you can still reuse it so long after it was created, in a completely new context, and I also think it's quite fitting that in a platform that calls itself an operating system, we reuse this classic architecture from one of the most iconic operating systems ever.

But let's get back to this. The payload hits the metric extraction service, and where the payload pipeline does not know anything about the payload, the metric extraction service knows everything about it. The metric extraction service is responsible for extracting the metrics from the payload that comes in, and obviously we will have a lot of different metric extraction services: one for each combination of device and firmware running out there, whenever the different firmwares result in the device having a different payload. The metric extraction service extracts the metric that is interesting, or that is sent by the device, whether that's one metric or many different ones, and publishes it to the metric service in a normalized format, meaning that we can reuse the metric from a lot of different devices. Then, for each metric we're interested in, we will have one metric service, and how we store that data and make it available to client applications is very much up to each metric.

Whether the data goes onto a stream for stream analytics, into a machine learning scenario, or is just stored for static reports, we will use different approaches, and this gives us a lot of flexibility in what we are able to accomplish with new and different devices. So in the classic scenario, we have a one-to-one relationship between device and metric extraction service: if we have one device running one version of the firmware, we have one extraction service for it. But we have the flexibility to change that, because we don't necessarily need one service per device. We could have several services for the same device, if that makes sense; if there were a very complex payload with a lot of metrics, we could have one service per metric. On the other hand, if we have a device whose payloads differ between firmware versions, but not by much, it might make a lot of sense to have the same service extract the metrics from both. So we have this flexibility, and as an architect I always come back to the flexibility part of the architecture.

I would like to share a story with you. I was driving in my car some time back and I had the radio on, and on the radio there was a science show. They were interviewing a professor from a university in the US, and he was talking about something he did with his research team.
They were trying to research whether they could explain intelligence as a physical force that affects the world, much the same way you would try to describe gravity or velocity, and in the end they had come up with an equation, a formula, that they thought described the impact of intelligence as a physical force on the world. It was quite a long show and the interview was quite deep; they talked a lot about how they did this research and what they needed to do in order to do it. At some point he said that the first thing they had to do was define: what is intelligence? What was intelligence for them? It was a highly academic discussion, but then after a while he said something that really made me look up. He said: we define intelligence as the ability to make the choice that maximizes your flexibility in an unknown future. And I sat there thinking, hey, wait, that's what I'm doing each and every day. If you look at this architecture in that light, I think that is precisely what we tried to accomplish with it: to maximize our flexibility in an unknown future.

All right, questions so far? Okay, I have one question. "So where is that demo that was promised in the description?" Oh, well. When I wrote that description we were on a roadmap that would have allowed us to be finished with one of our own devices and the metric extraction for it, but unfortunately that wasn't the case. So, some weeks ago, I had the choice of maybe just running a lot of services on DC/OS and showing that to you, but I didn't think that would make a lot of sense. So no demo.
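The payload pipeline and metric extraction described earlier can be sketched in a few lines of Python. Everything here is illustrative: the function names, the metadata fields (device_type, firmware), and the normalized metric shape are invented for the example, not taken from Eviti.

```python
import json
from typing import Callable

# Extractors know everything about their payload format and emit
# metrics in a normalized, device-independent shape.
def extract_acme_hr_v2(payload: bytes) -> dict:
    data = json.loads(payload)        # this device happens to send JSON
    return {"metric": "heart_rate", "person": data["pid"],
            "value": data["bpm"], "unit": "bpm"}

# One extractor per (device type, firmware) combination. Plugging in a
# new device is just registering a new extractor; the pipeline itself
# is untouched.
EXTRACTORS: dict = {
    ("acme-wristband", "2.0"): extract_acme_hr_v2,
}

# The pipeline treats the payload as opaque bytes and routes purely on
# the metadata that travels with it.
def route(metadata: dict, payload: bytes) -> dict:
    key = (metadata["device_type"], metadata["firmware"])
    extractor: Callable = EXTRACTORS[key]
    return extractor(payload)
```

With this shape, a downstream heart-rate metric service consumes the normalized dict and never needs to know which device, or which firmware, produced it.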
Sorry about the demo. Okay, so, the final section: some lessons learned while using Mesos on Microsoft Azure specifically. All right. To build a future-proof infrastructure, you need to take a deep look at what you get when you install DC/OS, on Azure for instance. I think the same questions need to be asked if you're going to run this on Amazon Web Services. On Azure you basically have three options. You could use the default templates that you get from Mesosphere, or from Azure Container Service if you use DC/OS as what they call the container orchestrator. You could use the Azure Container Service engine; do you know what that is, does anyone have experience with the ACS Engine? Okay, I'll talk a bit about that later on then. What you also could do is create your own templates for installing DC/OS on Azure. That's the road we took at Tunstall, but that's basically because we missed the ACS Engine.

So, the ACS Engine is an application; it's open source, you can find it on GitHub, and you download a Docker image and run it locally. The ACS Engine is actually what Microsoft uses themselves to create the templates that you use when you install DC/OS on Azure. With it you can do a lot of reconfiguration of how the end product will look when you install DC/OS. For instance, when you install DC/OS on Azure you get everything you need: you get a virtual network, you get load balancers, you get all the machines with all the hardware associated with them. But if you want more fine-grained control over that, if you want to install DC/OS on an existing virtual network on Azure, or you maybe want to use something like managed disks rather than the classic disks, then you need to either build your templates yourself or configure them using the ACS Engine.
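For a flavor of what that configuration looks like: the ACS Engine consumes a small JSON "cluster definition" and generates the full set of Azure Resource Manager templates from it. The sketch below shows the general shape as a Python dict; treat the field names and values as approximate, assumed from the examples in the acs-engine repository, and check the repository's schema before relying on them.

```python
import json

# Approximate shape of an ACS Engine cluster definition for DC/OS.
# The dnsPrefix values and VM sizes are hypothetical.
cluster_definition = {
    "apiVersion": "vlabs",
    "properties": {
        "orchestratorProfile": {"orchestratorType": "DCOS"},
        "masterProfile": {
            "count": 3,
            "dnsPrefix": "eviti-dcos",
            "vmSize": "Standard_D2_v2",
        },
        "agentPoolProfiles": [
            {"name": "private", "count": 5, "vmSize": "Standard_D4_v2"},
            {"name": "public", "count": 2, "vmSize": "Standard_D2_v2"},
        ],
        "linuxProfile": {
            "adminUsername": "azureuser",
            "ssh": {"publicKeys": [{"keyData": "ssh-rsa AAAA..."}]},
        },
    },
}

# acs-engine turns a definition like this into ARM templates, which you
# can then customize further (existing VNET, managed disks, and so on).
print(json.dumps(cluster_definition, indent=2))
```

The point is that the customization happens in this one small file rather than in the generated ARM templates themselves, which is what makes the approach maintainable.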
So if you're going to run DC/OS on Azure, I strongly suggest that you take a deep look at the ACS Engine. I wish we had, but now we have a set of our own templates that we use, and I think it's quite important to have this control. I mean, if you're in a scenario where you know that, okay, we're going to install DC/OS just for prototyping, or to get a view of it, fine, then use the default templates. Or if you know that, okay, we're going to install Kafka and Spark and run those, and that's it, then you're probably fine just using the default templates. But it's different if you're in a situation where you want to integrate your applications, where you have a lot of legacy applications that you want to integrate with the running DC/OS cluster, and you know that you can't containerize those legacy applications. We have a lot of legacy applications running .NET code, .NET 4.6 and 4.5 for instance, and we can't put those in Linux containers, and yet in certain instances we want them in the cloud, so that we can have communication between the Mesos cluster and the legacy code. In order to do that we need more control over the virtual network, and that is the first reason why we chose to make our own templates.

Also, if you want to have a different set of nodes in your cluster, it's important that you have a good grasp of how to create new functionality using the templates. For instance, I think that in the future we will see a lot of role-based Mesos clusters. I don't know, are you familiar with the role concept within Mesos? Or should I just mention it briefly?
Yeah, I think I'll mention it then. A role in Mesos: when you deploy a Mesos framework or a Mesos service, you can say that this service will only be deployed to nodes that have a specific role. Previously this was not so good in Mesos, because a framework such as Marathon could only handle one role at a time, with one exception: there's something called the default role, which covers any node in your cluster that does not have a specific role attached to it. So before, you could not have more than one role besides the default role in Marathon, for instance. But with the new release of Mesos, which I think is also in DC/OS 1.10, you can have several roles attached to a framework.

If you install DC/OS, you already have one extra role. It's called slave_public, and it's the role that allows you to put certain applications on the part of your cluster that is accessible from the outside, through the load balancer. If you wanted yet another role, you couldn't reuse Marathon for that before, but you can now, and I think that will make roles more widely used within clusters. For instance, if you have certain applications that require a lot of memory, you might want a specific set of servers, of Mesos nodes, that you know have the capacity to run those services because they provide a lot of memory. At the same time you don't want Marathon to put a lot of other services on those nodes, because you want to use them exclusively for the memory-hungry services. That is a typical case for a role: you attach your specific role to those nodes, and then when you deploy your applications you say that these applications should be deployed on those nodes.

Also, with Mesos for Windows coming, I think a good share of DC/OS users are going to want to deploy Windows containers on Windows nodes, and then of course you need to tell Marathon that a specific application can only run on Windows nodes. I don't know, maybe there are other ideas on how to solve that; maybe they will build some kind of OS type into the Mesos resource offers, I'm not sure. But at the North American MesosCon there was a talk about Mesos for Windows, and they used attributes, which are very similar to roles, to have Marathon place Windows containers exclusively on Windows nodes. As roles are getting more usable in Mesos, I think it's getting important that you can actually provide new sets of nodes with specific capacities to your Mesos cluster, and then you need some insight into how this works, rather than just using the default templates.

What I also would like to mention is that when you start using DC/OS for the first time, you get a lot of new functionality, and there are certain things that, at least for us, took a lot of time to understand. So get to know your new friends. You will have Marathon; get to know it, that is my advice to you. It could use some fine-tuning. There's a very good REST API for Marathon, which you can use. It has a metrics endpoint, for instance, which you could hook up to whatever monitoring software you're running in the back end, so you can extract metrics from Marathon and make choices about how to scale Marathon based on that. You might also, based on that, want to change some JVM parameters, Java parameters, on Marathon. You can also use the Marathon API for automation and deployments; we use VSTS at Tunstall, and we have a continuous deployment pipeline based on this API.

If you're running this, sooner or later you're going to want to run your own private container registry, so look up as early as possible how to do that within the DC/OS cluster. The important thing here is to provide the authentication information to Mesos, so it can download from your private registry. What you can do, when you define a service in DC/OS and Marathon, is supply something called an artifact URI, which will be downloaded. The good thing to know is that this uses curl on the URL you provide, so anything curl can handle, you can put in there. We, for instance, put an FTP server on the virtual network, and we just use an FTP link for DC/OS to download the container registry credentials.

Next: if you're relying heavily on user interfaces that use backend services in your Mesos cluster, you must get to know your load balancer, marathon-lb. Especially look into how to provide SSL offloading, if you're interested in doing that, which I think you are. There are a lot of different options for doing it. You can provide your certificates as part of your deployment of marathon-lb, which is good to know; there are certain environment variables you can use for that, so check it out. There's a lot of information on the marathon-lb pages on GitHub. Also, if you are, as we are, relying on legacy applications running on the same virtual network, then in order for them to access your DC/OS cluster you will need internal load balancing. That works the same way as the external load balancing, that is, you would use marathon-lb, but you would run it internally on the virtual network, with a load balancer that is also internal to the network.

Okay, that was it. Thank you very much. I will be around here a few minutes if you want to ask some questions, or just come up and say hi.

"Hi, thanks for the presentation." You're welcome. "What kind of data do you gather, that you are going to feed into the application itself?" Well, thanks for the question; it will allow me to elaborate a bit. We're primarily looking at health monitoring data.
So that will be things like heartbeats, sleep patterns, movements. We want to use this data for preventive services, such as fall detection, or to see if someone is active in the room, whether they have come out of bed by a certain point in time. But it's also interesting, because that is what we thought about when we designed the system, and then just a couple of months ago we were asked if we could also provide reports on the usage of alarm routing. You see, in the assisted living scenarios we have a lot of alarm servers; when someone pushes an alarm button, they reroute it to the correct staff member, which is configured based on time and such things, whether they are working or not. And we were asked if we could use the IoT hub to collect this telemetry data and provide reports. The answer was simple: yes, of course we can; we don't care so much about which telemetry comes in. So that is why I was glad I made it as flexible as I did.

"Okay, so basically patients are going to wear, most of the time, gadgets on their body that report their state of health?" Yeah, and there are a lot of different types of gadgets. There's the classic one, something you wear; it could be one we provide, which has an alarm button but also collects metrics like heartbeats and steps, or it could be a third-party one, like a Fitbit or something. But there are also other kinds of sensors. There are room sensors, sensors that see where we are in the room, for instance, and they can be used if you fall; there's fall detection.
You can also see where you are when you fall, so that the staff knows where to go to find you. There are also things up on the ceiling; there's one where the whole ceiling is one big sensor, which is also used for this kind of in-room placement. "Is any system like the one you describe already implemented, or is it something innovative in the field?" Nothing is in the field as of yet, no. This is ongoing, and we are planning to go live in early Q1 next year. "Good luck, it sounds very useful. Thank you." All right. Thank you. Thank you.