Hi. Welcome to the KubeCon Europe session for the Kubernetes VMware user group. I'm Steve Long, co-chair of the group, and a software engineer assigned to work on the Kubernetes project. I'm joined today by Miles Gray. He's had some internet access issues during this recording, but he helped with the material and he'll be joining us on the day of the presentation for Q&A. Miles is based in Europe and is an authority on the subject of storage in Kubernetes. We'll give a link to this deck at the end, and we'll hang around for Q&A.

The plan for today is to start with some material on how deprecation of the in-tree cloud provider and the in-tree storage driver might impact you as a user. By the way, it's possible the answer will be that it won't impact you at all. Then we'll move on to cover recent and planned feature enhancements and changes. Third, we'll quickly go over a top-three list of dos and don'ts when running Kubernetes on VMware infrastructure. Finally, we'll wrap up with information on how to join the user group.

So we'll start with coverage of the deprecation of the in-tree cloud provider and storage driver and what it means. The expected timeframe, or should I say release frame, for the in-tree removal is Kubernetes 1.24. If you deployed Kubernetes on vSphere recently, you probably used the out-of-tree provider and the CSI storage driver. In that case, the next few slides don't affect you, but please hang around, because we have coverage of other topics coming right up. If you are affected, don't fret. The new stuff has a lot of valuable features; feature enhancements have been happening exclusively in the out-of-tree components for over a year now, so the shift is probably going to be a good experience ultimately, even if there's a little work to do in the short term. By the way, if you're on a commercial distribution, please follow your vendor's guidance regarding migration. 
I'm covering this in a generic way that applies if you're using pure upstream Kubernetes, but it's possible that your vendor has added features to support a migration, and you'd be best advised to follow their advice. If you're faced with a migration, an important thing to note is that you must upgrade the cloud provider and the storage driver at the same time. Cross-coupling an old version of one with a new version of the other simply doesn't work. Also, if you're unlucky enough to be hosting Kubernetes on an old vSphere version and/or on old hardware, it might be the end of the road for that old stuff. As sometimes happens in the IT field, eventually things get so old that it's time to replace them.

Here's a table comparing old and new features of the storage plugins, in-tree and out-of-tree. You can see here that the CSI driver offers some really nice enhancements, although there are a few things that are not carried forward at this time. One probably won't be; the other is on the roadmap, but at the current time raw block volume support is under internal test and hasn't been released.

I'd like to tell you migration is easy, but since this involves persistent storage, you never really want to rush in and take chances. Better to go into too much detail than too little, and this is a subject too big to fit into this 35-minute session. It's something I'd expect we'll cover in detail, perhaps with some demos, in a future full meeting of the user group, where we have more time to work with; those meetings can last a full hour. In the meantime, here's a link to the documentation.

I'm about to cover upcoming changes in the Kubernetes 1.21 release, but as I record this, it isn't actually out yet. It's expected very soon. By the time this video recording airs, 1.21 will be out, but it's possible something might have changed. If so, Miles and I will be present during the session and we'll issue any needed corrections. 
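As a quick way to tell which side of the migration you're on, you can look at the provisioner behind your StorageClasses and PersistentVolumes. Here's a sketch using kubectl against a live cluster; the in-tree vSphere plugin registers as kubernetes.io/vsphere-volume, while the out-of-tree CSI driver registers as csi.vsphere.vmware.com.

```shell
# Show the provisioner behind each StorageClass:
# the in-tree vSphere plugin appears as "kubernetes.io/vsphere-volume",
# the out-of-tree CSI driver as "csi.vsphere.vmware.com".
kubectl get storageclass \
  -o custom-columns='NAME:.metadata.name,PROVISIONER:.provisioner'

# Existing PersistentVolumes: volumes created by the in-tree plugin carry
# a vsphereVolume source, CSI-provisioned ones carry a csi source.
# (Missing fields print as "<none>" in custom-columns output.)
kubectl get pv \
  -o custom-columns='NAME:.metadata.name,CSI:.spec.csi.driver,INTREE:.spec.vsphereVolume.volumePath'
```

If everything already shows the CSI provisioner, the in-tree removal shouldn't affect your volumes.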
I don't know about you, but I just can't wait for live conferences to return so we're not faced with situations like this.

The first item is a speed improvement when you provision persistent volumes. This is most likely to be apparent when you run large clusters, where large means many Kubernetes cluster VMs, or where you're running on many ESXi hosts. Next is a deprecation warning on the disk format option, which is something that has only been supported in the in-tree storage provider anyway. What you have here is a deprecated feature inside a deprecated driver. It probably won't affect most people, but I'm just letting you know.

Speaking of deprecation, I think there's been prior talk of this, but now a formal deprecation notice is being issued indicating that versions of vSphere prior to 6.7 U3 will be dropping out of support in the future. Also reaching the end of their support window are VM hardware versions prior to 15. If you're running your Kubernetes nodes on something older than that, it's time to move them to more modern homes. This one could be problematic for a few people: deprecation of support for a Kubernetes cluster spanning multiple vCenters is being announced as of Kubernetes 1.21. The workaround would be to move your Kubernetes nodes into a single vCenter. These deprecation notices follow the Kubernetes deprecation policy, meaning you're getting a little advance notice now. Actual removal of these features is not expected until the 1.24 release, but don't wait until then to react to this advice; start reacting to the deprecation notices now.

Here's a bug fix related to the now-deprecated in-tree storage plugin. It relates to cleanup of orphaned volume attachments, and more details can be found at the link on this slide. Next is a known issue that is not yet resolved, where API calls might cause an error; you can read more at the link. Now I'm moving on to the top-three list. 
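To check where you stand against those deprecation notices, here's a sketch using govc, the vSphere CLI. It assumes GOVC_URL and credentials are already exported in your environment, and the inventory path is a placeholder for one of your actual node VMs.

```shell
# vCenter/ESXi version; the deprecation notice applies below 6.7 U3.
govc about

# Virtual hardware version of a node VM (should report vmx-15 or later).
# The path /dc1/vm/k8s-node-01 is a placeholder for your environment.
govc object.collect -s /dc1/vm/k8s-node-01 config.version
```

If the hardware version comes back older than vmx-15, upgrading it through vSphere (after taking a backup) is the way to get ahead of the removal in 1.24.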
Now, I promised it was a top three, but fair warning: as I composed the deck, I cheated a little, and one item expanded into sub-items A, B, and C, so maybe you get a couple of bonus items today. Users often resort to Slack when issues arise, and based on the experience of Miles and me, these have been the most common root causes of problems. Watch out for these.

Number one: you haven't enabled disk UUIDs (the disk.EnableUUID setting) on your node VMs. This causes storage-related problems. The second most common root cause is that you've got the user and password credentials wrong for your vCenter. Either they don't work at all, or they're pointing to an account with inadequate permissions. The third most common is not running on vSphere 6.7 U3 or later when you're using combinations of cloud provider and storage driver that require that version or later.

And if it isn't one of those three, when you want to ask for help on Slack, people will want to know what's in your log files. More information always helps with problem resolution. So here's where you go for the cloud provider logs; that's top-three item 2A. Moving on, here's item 2B, also knowing where the logs live, in this case for the CSI storage driver. The first step is to get the pods that are running CSI components. These pods will have multiple containers, and you can use kubectl commands to grab the logs for the CSI-related components. Finally, item 2C: I guess this isn't really logs, but for vSphere, here's a good first place to look. In the vSphere UI, the Recent Tasks section shows activity, and if something went wrong, you'll typically get a clue there, possibly leading you to a root cause popping out in that Recent Tasks list.

The third and final item for the top-three list is the known issues list for the CSI driver. It could be that you're not the first person to encounter a problem. Maybe you're not even the second or third, and it rated coverage on the known issues list. This link shows you where to go for that, and the list is regularly updated. 
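To make the number-one fix and the log locations concrete, here's a sketch using govc and kubectl. The VM path, namespace, and label selectors are assumptions and will vary by install: recent releases of the CSI driver deploy into vmware-system-csi, while older ones used kube-system, and the cloud provider typically runs as vsphere-cloud-controller-manager in kube-system.

```shell
# 1. Enable disk UUIDs on a node VM (path is a placeholder); this
#    advanced setting typically requires the VM to be powered off.
govc vm.change -vm /dc1/vm/k8s-node-01 -e disk.enableUUID=TRUE

# 2. Dump the logs of every container in the CSI controller pod(s).
#    Namespace and label are assumptions; adjust for your install.
NS=vmware-system-csi
for pod in $(kubectl get pods -n "$NS" -l app=vsphere-csi-controller -o name); do
  for c in $(kubectl get -n "$NS" "$pod" -o jsonpath='{.spec.containers[*].name}'); do
    kubectl logs -n "$NS" "$pod" -c "$c" > "$(basename "$pod")-$c.log"
  done
done

# 3. Cloud provider logs (deployment name/label is an assumption).
kubectl logs -n kube-system -l k8s-app=vsphere-cloud-controller-manager
```

Attaching files like these to a Slack question usually shortens the road to a root cause considerably.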
So, wrapping up: if you use Kubernetes on VMware infrastructure, the user group is a great place to get help from devs and other experienced users. For example, we recently had a great Bring Us Your Problems workshop that covered a lot of interesting topics, and it led to a bunch of feature requests that got exposed directly to some of the devs who work on the infrastructure that supports Kubernetes running on vSphere. That session went the full hour, and we still didn't get to every topic. So, if this sort of thing interests you, I'd strongly encourage you to join the group and its meetings.

We have a meeting each month, and the agenda is user-driven, but typically we present tutorials and best practices. It's up to members to nominate presentations and discussion topics, including feature requests. And Miles and I, if we get enough advance notice, might go out and try to recruit guest speakers for presentations. We've also got some pretty experienced users showing up on a recurring basis. The user group was founded with a couple of user tech leads, Bryston Shepherd and Joe Cersey, who helped get the group started. But we're always looking for more people, and we'd like to grow this group with a really diverse set of worldwide users.

The group also runs a Slack channel, which is a great place to ask questions. In my experience, users often ask questions in the general-purpose Kubernetes users channel, but if you know your question is vSphere focused, I think you tend to get a little more expert attention by targeting the VMware users channel instead. Slack isn't a great place to search for things, and the activity level in that generic Kubernetes users channel is so high that, in my experience, a lot of devs don't want to even look at a channel that has 50 topics a day, yet they are willing to invest a little time keeping an eye on a more specifically focused channel. 
So once again, I strongly recommend that you join the group, join the Slack channel, and come to the meetings. Speaking of meetings, the next user group meeting will be June 3, at least in the North American time zones. You can go to the Kubernetes community calendar, linked on this slide, to get a conversion to your own local time zone and add it to your calendar. You become a group member by joining the mailing list, at the link shown here. And finally, here's the link to the group's Slack channel.

Here's contact information for Miles and me. We're on GitHub, and we're also available in that Slack channel I just mentioned. I've also got a deck link here, and here's a recommended session that I believe is coming up right after this one, put on by the SIG Storage group. And here's a link to that session on the Sched site.

Thank you, and I hope to see you at a future meeting. At this point, I'm going to turn it back over to the conference moderators, and we'll move on to Q&A, where Miles and I are going to stay on the air and hang out for questions that you might want to ask, either in chat or through whatever other mechanism the conference forum provides. Thank you.