All right, I think I've got the all-clear to get started. Hello, everybody. My name is Cole Walker. I'm a developer with Wind River and a contributor to StarlingX. My talk today is "Full-Featured PTP in the Cloud for Telcos."

A quick agenda: we're going to look at some of the PTP functionality in StarlingX, then take a look at an example topology and dive into how that's configured on a StarlingX system, and then look at the monitoring and alarming features that StarlingX offers to help applications and users manage the accuracy of the timing on their system.

I was going to start with a Precision Time Protocol joke, but I knew that the timing had to be perfect, so I wasn't sure where to fit it in. Precision Time Protocol (PTP) is an IEEE standard (IEEE 1588) used to synchronize clocks with sub-microsecond accuracy. It's really useful for edge cloud applications because it can be run on generic hardware. It provides a Best Master Clock Algorithm (BMCA) that allows for a degree of redundancy and failover if clock sources are lost, and it has very low bandwidth requirements. In 5G applications specifically, the sub-microsecond accuracy is essential for smooth handoffs between different cell sites, and there's a movement in the 5G space toward Open RAN deployments, where off-the-shelf hardware is very useful.

Specifically, StarlingX supports the T-GM (Telecom Grandmaster), T-BC (Telecom Boundary Clock), and ordinary clock types, which are all different types of PTP clocks. StarlingX also uses its collectd tool to set various parameters that are expected in PTP messages, something that the open-source linuxptp project doesn't provide on its own, but we're going to get into that a little more.

So there's a lot going on in this topology here. We're going to break it down over the next few slides and build it back up.
What you're looking at is a StarlingX simplex all-in-one node with two NICs, and there are a number of components here related to keeping the system timing in sync. In StarlingX, the core of the PTP work comes from the linuxptp project, and there are several programs in there that work together. There's ts2phc, which is used to pull timing information from GNSS timestamps: you connect an antenna to your NIC, and you're able to use that as a primary reference time clock. There's phc2sys, which is used to synchronize the actual system time on your device from the NIC itself. And there's ptp4l, which is used to distribute timing information over the network so that other nodes in the area can receive accurate timing from your master node, in this case.

What StarlingX provides is various CLIs for configuring all of these services, along with monitoring and alarming, as well as a REST API that containerized applications can query to learn about the sync state of the system.

So let's walk through how we would set up PTP on a StarlingX node. In this case, we're going to be configuring a T-GM, a grandmaster node that serves time to other nodes in its area. The first step: we've got a GNSS antenna on the right side there that's plugged into NIC 2. Timestamps will be coming in from that, and we're going to use the ts2phc application to read those timestamps and synchronize the PHCs, the PTP hardware clocks. In this case, each NIC has one PTP hardware clock. You can see on the bottom here, I've got configuration examples showing how this would be done via the StarlingX CLI. I'm not going to go into great detail on how that works, but at a high level, for all of these services you create an instance of the specific instance type, and you add various parameters, which would be things that you would read from the man page for ts2phc.
You supply whatever parameters you're planning on using, then you assign interfaces on the NICs that you want to associate with that instance, and then you map that instance to a host. On a multi-node system, for example, you could have different configurations per node, and that's often necessary in more complicated deployments.

So now we've got timestamps coming in from GNSS, and the PTP hardware clocks on each NIC are synced to that, reading the same time as the incoming GNSS signal. The next thing we want to do is frequency-lock those NICs together. In this example, an Intel E810 NIC would be one of the types of hardware that supports this: they have SMA ports that can be connected from one NIC to another, and you can daisy-chain them across multiple NICs. In StarlingX, you configure a clock instance type, and that allows you to transmit a one-pulse-per-second (1PPS) signal, in this example from NIC 2 over to NIC 1, and that makes sure that the PHCs on each NIC are going tick, tock, tick, tock at exactly the same frequency, locked in with each other. That helps make the timing as accurate as possible.

Once we've got the GNSS signal coming in and the NICs frequency-locked, the next step is, of course, to set the system clock, which is what most applications running on the system are actually going to care about. In this case, we use phc2sys, which is very simple: we just tell it which of the PHCs we want to pull the time from, and phc2sys disciplines the system clock to keep it locked to that. So now we've got a system clock that's completely locked to the PHC, which is completely locked to the GNSS signal, and we can move on to actually deploying ptp4l, which is the PTP part that we're here to talk about.
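As a rough sketch of the steps above, the grandmaster configuration might look something like this on the StarlingX CLI. All of the instance names, interface names, and parameter values here are hypothetical examples; the exact parameters depend on your hardware and release, and would come from the man pages for ts2phc and phc2sys.

```shell
# Hypothetical sketch of the workflow described above; names such as
# ts-inst, clock-inst, phc-inst, and enp81s0f0 are examples only.

# 1. ts2phc: discipline the NIC PHC from the GNSS signal on NIC 2
system ptp-instance-add ts-inst ts2phc
system ptp-instance-parameter-add ts-inst ts2phc.nmea_serialport=/dev/gnss0
system ptp-interface-add ts-if ts-inst
system host-if-ptp-assign controller-0 enp81s0f0 ts-if

# 2. clock: transmit the 1PPS signal from NIC 2 to NIC 1 via the SMA ports
system ptp-instance-add clock-inst clock
system ptp-interface-add clock-if clock-inst
system ptp-interface-parameter-add clock-if sma1=output
system host-if-ptp-assign controller-0 enp81s0f0 clock-if

# 3. phc2sys: discipline the system clock from the chosen PHC
system ptp-instance-add phc-inst phc2sys
system ptp-instance-parameter-add phc-inst domainNumber=24
system ptp-interface-add phc-if phc-inst
system host-if-ptp-assign controller-0 enp81s0f0 phc-if

# Apply the configuration to the host
system ptp-instance-apply
```

The pattern is the same for every service type: create the instance, add its parameters, create a PTP interface bound to the instance, assign that interface to a host NIC, and apply.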
So once we have all that, we've added quite a bit to the diagram here. You can see each NIC has several ports on it that are all connected to downstream nodes. In a 5G deployment, these would probably be radio units, things that are controlling the 5G signal and need accurate timing. In StarlingX, you can create multiple ptp4l instances; one per NIC is recommended for best performance. And you can provide whatever parameters you need. ptp4l is highly tunable, so how exactly you set it up and configure it depends on your environment. Once that's all configured, the interfaces on NIC 1 and NIC 2 serve as timing masters and send time over Ethernet down to the RUs, and those units then have their clocks synced to the grandmaster node that we've configured. Each of the downstream nodes needs some PTP configuration of its own: if they were StarlingX nodes, they would run their own ptp4l instance, configured in the same way, as well as a phc2sys instance to ensure that their system time is synced and accurate.

So once you have your StarlingX node set up and PTP is all configured, on the operational side you care about monitoring and alarming, so that if you lose your GNSS signal or some other issue occurs, you have some way to become aware of it and address it. StarlingX provides support for monitoring and raising alarms for several types of faults. It tracks the loss of GNSS signal. It monitors the offset between an incoming clock source and the system time: if you're running a StarlingX node as a downstream ptp4l client, it'll compare the incoming timestamps to its own system time, and if there's too high a skew there, it'll raise an alarm. It also tracks the one-pulse-per-second signal between NICs, if you've got those configured. And in StarlingX, that's all viewable through the fm alarm-list command.
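A similarly hedged sketch of the distribution side: one ptp4l instance per NIC, followed by the alarm check. Again, the instance names, interface names, and parameter values are hypothetical, and real deployments would tune ptp4l much more heavily.

```shell
# Hypothetical sketch: a ptp4l instance serving time downstream on NIC 1.
# A second instance would be created the same way for NIC 2.
system ptp-instance-add ptp-nic1 ptp4l
system ptp-instance-parameter-add ptp-nic1 domainNumber=24
system ptp-instance-parameter-add ptp-nic1 priority2=100
system ptp-interface-add ptp-if-1 ptp-nic1
system host-if-ptp-assign controller-0 enp81s0f0 ptp-if-1
system ptp-instance-apply

# Operational check: list active alarms, including PTP-related faults
# (loss of GNSS signal, clock offset out of range, loss of 1PPS signal).
fm alarm-list
```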
And these alarms are all great and useful, but they're definitely more of a systems administration and operations focus. That information isn't available that way to, say, containerized applications that are running on StarlingX. So for that, we have a containerized application called ptp-notification, and this is used to provide alerts and updates to containerized applications running on Kubernetes in StarlingX. It runs as a system-managed application, so it's basically as simple as uploading a tarball to StarlingX and telling the system to apply it. Once it starts up, it tracks the PTP state in the same manner that I described with the alarming before, but it provides a REST API that user applications can query to get all of that same information. And user applications can also use the subscription system: they can subscribe to, say, the GPS state on the system, and they'll receive a push notification any time there's a change to that state.

To make it easier for user applications to connect to ptp-notification, we provide a client sidecar image that users can download and use. When they download the notification client container and deploy it as a sidecar alongside their application, it provides a very simple REST API that saves them from having to implement it themselves. Their application can just make some basic GET requests to set up a subscription or to query any of the system timing states on demand, however they choose to use it. So you can have both approaches: you're able to receive push notifications if there's a change in the state, but maybe on first startup you also want your application to run a series of GET requests to get the immediate state of all the timing on the system. And that's mainly important, especially in 5G applications, so that things like radio units can decide, if the timing has degraded, whether they need to stop their operation.
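To give a feel for the pattern, here is a minimal Python sketch of a client talking to the notification sidecar. The sidecar address, port, resource name, and URL paths shown here are all assumptions for illustration; the real resource addresses and payload fields are defined by the version of the ptp-notification API deployed on your system, so check its documentation before relying on any of these names.

```python
import json
import urllib.request

# Hypothetical sidecar address; the sidecar runs in the same pod as the
# user application, so localhost is the usual assumption.
SIDECAR = "http://127.0.0.1:8080"


def build_subscription(resource: str, callback_uri: str) -> dict:
    """Build a subscription request body for a timing resource.

    The field names here (ResourceType, EndpointUri) are illustrative;
    the actual schema comes from the ptp-notification API version in use.
    """
    return {"ResourceType": resource, "EndpointUri": callback_uri}


def subscribe(resource: str, callback_uri: str) -> None:
    """Register for push notifications on state changes (hypothetical path)."""
    body = json.dumps(build_subscription(resource, callback_uri)).encode()
    req = urllib.request.Request(
        f"{SIDECAR}/subscriptions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)


def query_current_state(resource: str) -> dict:
    """One-shot pull of a timing resource's state, e.g. at startup
    (hypothetical path)."""
    with urllib.request.urlopen(f"{SIDECAR}/{resource}/CurrentState") as resp:
        return json.load(resp)
```

The two entry points mirror the two approaches from the talk: `subscribe` for push notifications on state changes, and `query_current_state` for the pull-on-startup pattern.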
Do they need to hand off to another radio? Things like that.

So, future work. There's a lot of ongoing development related to PTP in StarlingX. One of the big features that we're continuing to build out is support for Synchronous Ethernet (SyncE), which allows the frequency reference to be delivered over Ethernet connections rather than just over the SMA ports, which really only work within a single server. So you'll be able to transmit that same information over the network and keep all of your nodes tightly locked to the same one-pulse-per-second signal. We're also exploring various high-availability configurations, as well as providing the ability to fail over and fail back from one clock source to another. You might have multiple GPS antennas on a server, or you may have a GPS antenna and a connection to a backhaul network at a lower priority that you want to be able to pick up in the event that you lose signal. And there's always ongoing work to validate PTP support for additional NIC types from other hardware manufacturers.

In addition to that, I'll just mention that this was mostly telco-focused, but PTP has a lot of applications outside of that: industrial automation, electrical utilities, financial trading. So if anybody in the community has experience in those areas and wants to talk about how PTP can be used there, we've really tried to keep the StarlingX approach very open and flexible so that we can integrate with other requirements very easily.

And that takes me to the end of my presentation. If you're interested in talking about PTP, you can reach out on the mailing list. I'm going to be at the StarlingX booth over the course of the conference, so feel free to come and say hi. That's everything. Thank you.