Alex has spent nearly four decades in the process control industry, working on processes from making coffee to refining. The first half of his career was spent implementing systems, and the second half broadening his experience with time in development, marketing, sales, and service. In this session, Alex will discuss the pros and cons of open automation, centralization, and decentralization. So a warm welcome from The Open Group, please, for Alex Johnson.

So we're gonna talk about two things that are going on. As the technology improves, it becomes easier and easier for control system vendors to build a single controller that can run an entire unit, or theoretically your entire plant. On the other hand, we also see technology driving intelligence down into field devices. In fact, they're getting so smart, and the networks reaching them, things like APL, are so fast, that it actually becomes practical to run the control in those devices, much like Fieldbus tried to do years ago. And I think what we're gonna see is a pull and tug between these two architectures. You're gonna see people asking for both. And as an end user, you'd like to have a solution that addresses both choices.

So what are the advantages of a highly centralized architecture? It's commonly accepted, and it's a natural path for the vendors to follow. Peer-to-peer communications are minimized, because most of the work is going on in the single controller. And cyclic process control is fine: you can run it at the rate you want, and you're not really dependent on what's outside the scope of the controller, except maybe, for example, steam pressure in a header that's everywhere. And it runs really well; we do this every day.

There are some disadvantages, though. The current high-availability approaches are expensive, and they don't protect against software failures. I/O cards can be expensive, and making them redundant is expensive.
Enlarging an installed solution can become expensive if the controller becomes too small for the job. If you add on to a unit, the controller may not be large enough to accept that, and then you have to address how you're gonna split your I/O and all of those issues. And the intelligence that's in the field devices is often poorly leveraged, mainly used in maintenance, when other things could be accomplished.

A highly distributed approach has some advantages and some disadvantages as well. If you go highly distributed, you can reduce the impact of hardware and software failures, because they're limited to a single field device. So instead of having an I/O card with multiple channels that dies, and therefore needing two of them, you could limit your exposure to a single signal failure. It can simplify the control system: it's spread across the plant, but the controls themselves could be simpler. Fewer pieces to deal with, less to be installed. It improves scalability, since you would simply add another field device when you need it, eliminates the cost of the duplicate equipment that I mentioned earlier, and eliminates the cost of the highly available centralized controllers. A lot of times those controllers are quite expensive, and having two of them is an additional cost.

But there are some real disadvantages with a highly distributed solution, and they're not trivial. You need engineering tools and a distributed runtime that can assure proper execution order and timing. If you put straight cyclic control in each of your controllers and they're not synchronized in some fashion, then you're not gonna get the control results you expect.

So that leads us to this tug of war. You could build a quality system either way, highly distributed or highly centralized, but those approaches do have some gaps that still need to be addressed.
Neither approach directly addresses asset-based control, which we're seeing more and more of: a desire not to build a control loop, or a set of interacting control loops, but to actually control the piece of equipment that's in the plant, and maybe have the intelligence for that piece of equipment come with the equipment when you buy it, so that the pieces interact with each other. And centralized architectures face some availability issues: there's cost and space around the redundant I/O, and, as I mentioned, there's no protection against software defects as a rule. And distributed architectures face the challenge of control synchronization.

So what solutions are out there that could address these remaining challenges? That's really the core of what I wanna talk about. IEC 61499 is a technology that has been around for a while and hasn't been widely used, I think in part because we weren't looking for the open type of systems that we're really pushing for now and that OPAS is all about. It has a function block model similar to the one in IEC 61131 that people have used for a long time, but it's event-driven, so it models the real world rather than doing things cyclically. The event and data I/O are there, and they can provide data integrity: the events can be bound to the I/O signals and sent through, so that you know they all belong together.

And what's really nice about it is it understands encapsulation and IP protection. You can see in this picture, over on this side, we have a typical function block diagram. It's not particularly special. In 61499, though, you can encapsulate pieces of those control blocks into a composite function block or even a sub-application. And what's nice about this is that the interior implementation is hidden from the user. It's quite possible that you could build proprietary control algorithms and then provide them as composite function blocks. And the architecture of 61499 allows that to be distributed, with IP encapsulation and protection.
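To make the event-driven idea concrete, here is a minimal sketch, in Python rather than a real IEC 61499 runtime, of how an event input can trigger a block's algorithm and then fire a downstream event with its data bound to it. The names (`EventFB`, `receive_event`, `connect`) are illustrative assumptions, not any vendor's API.

```python
class EventFB:
    """A minimal event-driven function block: an incoming event triggers the
    algorithm, which reads data inputs, writes data outputs, and then fires
    its event output to any connected downstream blocks."""

    def __init__(self, name, algorithm):
        self.name = name
        self.algorithm = algorithm      # maps an input dict to an output dict
        self.inputs = {}
        self.outputs = {}
        self.event_sinks = []           # blocks wired to our event output

    def connect(self, sink):
        self.event_sinks.append(sink)

    def receive_event(self, data):
        # Event and data arrive bound together, preserving data integrity.
        self.inputs.update(data)
        self.outputs = self.algorithm(self.inputs)
        for sink in self.event_sinks:   # propagate event + data downstream
            sink.receive_event(self.outputs)


# Two blocks chained: scale a raw analog input, then run a high-alarm check.
scale = EventFB("scale", lambda d: {"pv": d["raw"] * 0.1})
alarm = EventFB("alarm", lambda d: {"high": d["pv"] > 80.0})
scale.connect(alarm)

scale.receive_event({"raw": 900})       # a field event drives the chain in order
print(alarm.outputs)                    # {'high': True}
```

Because execution is driven by the event wiring rather than a fixed scan cycle, the blocks run in the correct order by construction, which is the property the talk relies on for distribution.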
So what can we do here? We can create real-time automation applications, and we can distribute them. The portability of the code is one of the key features of 61499. We can mix real-time and right-time: the idea there is we're gonna execute these events as they occur, and we can bring that data up to IT and the cloud when it's necessary. We can manage legacy migrations: if you have 61131 code, the languages used in 61499 are the same languages, so the code can be heavily reused. You can also bring in new algorithms through C++ or whatever the implementation supports. We do improve engineering efficiency, once you learn how to use it. You can build layers of abstraction and model the plant equipment. So we have objects, we have single-line engineering, we can distribute the functionality, it is event-driven, and it gives us a level of abstraction, so we don't have to know all the details when we're laying out the application initially. Eventually we have to bind it to the equipment, but that can be done towards the end.

So what we see here is the basic concept of 61499. You have applications, and they can be spread across the devices in the field, and what's really nice is they can be distributed into any kind of device in the field. Everything from field devices and controllers to edge computing architectures can be covered by this. So that means 61499 allows you to have centralized or highly distributed as a customer choice; you don't have to choose one particular solution. 61499 allows you to take advantage of the increasing field device power, opening up an architecture that is highly distributed and quite reliable by design. You don't have to work on anything fancy: if a device fails, you lose the one signal, you don't lose anything else. Realizing such an architecture requires an event-driven build time and runtime, and that's what 61499 is all about. It allows you to deploy to those distributed devices.
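The "customer choice" point above can be sketched as a late-binding mapping: the same application, described once as a set of function block instances, is bound either to one central controller or to individual field devices at deployment time. The device and block names below are hypothetical, purely for illustration.

```python
# One application, two possible deployment mappings (names are made up).
application = ["AI_scale", "PID_loop", "AO_drive"]   # function block instances

# Centralized mapping: every block runs in a single controller.
central = {fb: "controller_1" for fb in application}

# Distributed mapping: each block lands in the field device nearest its I/O.
distributed = {
    "AI_scale": "transmitter_TT101",
    "PID_loop": "valve_FV101",
    "AO_drive": "valve_FV101",
}

def devices_used(mapping):
    """List the distinct devices a mapping deploys to."""
    return sorted(set(mapping.values()))

print(devices_used(central))      # ['controller_1']
print(devices_used(distributed))  # ['transmitter_TT101', 'valve_FV101']
```

The application logic is identical in both cases; only the binding changes, which is why the choice can be deferred until the end of engineering.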
The event mechanism makes sure that the control algorithm operates in the correct order, so you get the behavior you expect. With APL and 61499, we can build field devices that have true compute capabilities and avoid the need for a centralized controller; you can have one if you want it. So it gives you the customer choice, and commercial industrial offerings are showing up more and more in the marketplace.

So that's the gist of my talk. I want to just stress that what we're talking about here isn't that you have to choose centralized or distributed, but that with this new technology you have the ability to choose whichever one suits your needs. So that's my talk. We have some questions, I hope.

Alex, thank you very much. A virtual round of applause for your presentation. You took away my first question, which side do you come down on, centralized or distributed, but explained you don't need to answer that: customer choice.

It's customer choice. And I will say, I think you get a potentially simpler system implementation if it's highly distributed, because you don't have to have as much equipment installed.

Right. So can you talk a little about to what extent the Open Process Automation Standard is incorporating this, or looking to incorporate it?

Sure. So 61499 is one of the sections of OPAS. When the 2.1 release of the specification comes out, it will include a section about implementing with 61499. We in OPAS aren't specifying how that is executed, so it could be, and most commonly will be, I think, initially implemented in central controllers. But the interesting thing is that the implementation is quite small, and the intelligence in field devices is getting much higher, and it shows real potential as an execution environment in those field devices, bringing to the world the promise that Foundation Fieldbus had but didn't quite deliver. Technology has improved a lot since then.

Right. It was ahead of its time, maybe.

I think so.
You know, APL, when it rolls out, will deliver a lot of bandwidth, which was a limitation for most Foundation Fieldbus implementations.

Do you see this as something that's capable of being included in some way in the certification program that will go with OPAS?

Well, certainly as we're going through and writing the section related to 61499, there are going to be requirements on how 61499 is implemented. We're dealing with standardizing. 61499 has a concept called a service interface block that allows you to, quote unquote, magically pull data from the environment and bring it in to be executed on. What we want to do is make sure that the service interface blocks for the common data accesses we need, like peer-to-peer control, have a standard interface. The implementation can be different, but with a standard interface the source code is compatible. So we're going to be doing things like that to enhance the portability of the 61499 source code.

Right. Okay. The beauty of the standards approach is it's allowed us to do it.

It's great. Which option do you think, centralized or distributed, is better aligned with machine learning and AI?

That's a really good question. It really depends on the compute capabilities of the edge device. I've worked on a couple of projects, well, I've been peripherally involved in a couple of projects, around oil fields and watching the pump jacks. And what we found was we can do some work around the pump jacks and determine when they're starting to malfunction. But it does require a fair amount of compute there at the jack. What we're seeing now is that that compute is starting to fit into much smaller devices, and we can do that compute right there on the edge and then send the results up to notify people that there's a problem with the pump jack. So there's a lot going on in that space.

Sounds like it. We're going to leave it there and move on. But thanks to Alex for your presentation.
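The "standard interface, vendor-specific implementation" idea for service interface blocks can be sketched with an abstract interface that application code depends on, while each vendor supplies its own concrete block behind it. All class and method names here (`PeerReadSIFB`, `read`, `VendorASIFB`) are hypothetical, not from the OPAS text.

```python
from abc import ABC, abstractmethod

class PeerReadSIFB(ABC):
    """Standardized interface for a peer-to-peer read service block.
    Application code depends only on this interface, so it stays portable."""

    @abstractmethod
    def read(self, remote_device: str, variable: str) -> float:
        ...

class VendorASIFB(PeerReadSIFB):
    """One vendor's implementation; the transport details are hidden."""

    def read(self, remote_device: str, variable: str) -> float:
        # A real implementation would issue a network request here;
        # this sketch just returns a fixed value.
        return 42.0

def control_step(sifb: PeerReadSIFB) -> float:
    # Portable application code: only the standard interface is used,
    # so swapping vendors requires no source changes.
    return sifb.read("boiler_PLC", "steam_header_psi")

print(control_step(VendorASIFB()))   # 42.0
```

Swapping `VendorASIFB` for another vendor's class leaves `control_step` untouched, which is the source-compatibility property described in the answer above.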
And if any other questions come in... oh, here's one, just come in. Of course, let me close it down. I'll sneak this one in, especially as I know who it's from. Welcome back, Ed Harrington. You're supposed to be enjoying retirement, but glad to have you here. How about the issue around personnel capabilities, centralized or distributed? Any issues around that?

Personnel capabilities?

Personnel capabilities, yeah. Does one approach favor another?

Yeah, that's another good question. I think that currently we have people that deal with the field instruments and field actuators, and we have another group that deals with the control system and all of its cards and the rest. They're both doing maintenance work, but a lot of plants split that work between two different organizations. I think that the distributed approach could actually reduce the number of people that are required and reduce the number of skills required. That's hypothetical; I can't prove that yet.

Right, right, one to watch. Again, thank you very much for your presentation.

My pleasure.