Hello. Thank you for coming. I'll get started. The previous talk was a bit delayed, I noticed, and most people had gone home. We have prepared a presentation. So my name is Dies, I'm on the CLI team, and we've prepared a presentation that I think is going to be very interesting, because it's not about new features and it's not about the market and other things, but it's about the user experience, which is the most important thing to a mature CLI. I will very briefly go over the CLI, or give a short introduction to it. And Mason already started running. So we're the official CLI of Cloud Foundry, for all of Cloud Foundry. So that's all vendors, all the product offerings that are Cloud Foundry based. We try to be compatible with all of them. And we received 1.8 million downloads last month, so there are a lot of people using it. Let me introduce the team to you. We are a distributed team. I've copied a world map from my daughter's geography book. I'm based in Sydney, Australia, where I work for Fujitsu, right here in the middle of the world. And I have three developers — er, my designer, Mike, sitting down there, and next to him my anchor, Nick — they're based in San Francisco, in the Pivotal office there. Then I have two more developers there. Then I have a developer sitting at the front who kind of moves back and forth between the Palo Alto and San Francisco offices. Then, further down there, the most remote developer is sitting in the front here, from SUSE. And they all pair program, because they're all in the same time zone. I'll just quickly go over some highlights of the past 12 months. There are a number of features that were implemented by the back end and that we expose. So very recently, the private Docker repositories, the HTTP health check, isolation segments, and, already almost a year ago, the one-off tasks.
Then there are features that came from us doing comparisons with other CLIs, to ensure that people who already have a familiar workflow or experience with other commands they use, that we support that as well. So tab completion is a prominent one. We now provide the binaries, or the installers, through yum and apt-get. We improved, what I believe improved a lot, our help text. It used to be you run cf help and you get three or four pages of commands scrolling over your screen, and then you have to scroll up again to find anything. Now the initial page has only the common app developer commands, and all the other commands are hidden behind a flag. And as a proper open source project, we now bundle the license file and the copyright notices, as an open source project should, or any project should. Then there were other requests that people sent us through the GitHub issue tracker. The release from last Friday — Australian time; Thursday here — was about listing outdated plugins, so you don't have to go to a repo site to see if any of your plugins have been updated. You can now do it from the CLI, and we made it easier to then install an update. It used to just return an error saying, oh, it's already installed, you have to uninstall it first. That's now much easier, and we improved a bunch of messages and other things. Then I'll talk about how we receive feedback or feature requests. The most common channel that a lot of people use is our issue tracker. It's not only for questions, but also for issues or improvement requests. Some people contact us on the Slack channel and, of course, the dev mailing list. But there are also a lot of casual conversations; especially during this summit I've heard a few suggestions. There are local meetups, in my case Sydney. And just people walking by the team, or through other PMs. Other PMs hear things from particular users and they contact me. Then, of all those features I have, I need to prioritize them.
And the number one priority, the things that we try to address straight away, are security vulnerabilities. That is very rare, but sometimes a new Golang version comes out with a security vulnerability that could affect us somehow. We try to upgrade and release straight away. Also, we're being very careful about regressions, when we release a new thing and we've broken something. We try to fix that straight away in the next release. After that, I try to group stories into themes, as you've seen with the previous release, which was all about the plugin installation experience. Sometimes a theme is driven by other components, back-end components, where they try to push a particular feature out, like isolation segments or private Docker registries or repositories. And then we group a whole bunch of stories, anything we had, bugs or small improvements, all within the same theme. When there are a lot of features or a lot of commands as part of the feature set, like with the isolation segments, we will break it up. First a minimal set that people can play around with, to see if we get some feedback, and then later we revisit the remaining commands and either implement them, or we may find out that people use things a bit differently and then we do that differently. We have a lot of community reports in the issue tracker of things that don't really fit under one theme, kind of random, just saying this here is broken and that there is weird. Sometimes we group them all together and have a theme around just community-reported issues. And then finally, low-hanging fruit: if there's a message that was unclear and it caused some grief and it's easy to fix, we'll just fix that. That is my part. And then Mike will take over.

Hi, everybody. Thank you for having us. I'm going to talk a little bit about how we build empathy with users, people like you. So one thing that we talk a lot about is this idea of a molecule. Who is the user for an idea or a feature?
What problem does it solve? And what solutions can we explore to solve that problem for that person? To really understand what solution we're going to build, we need to understand the user and the problem really deeply. So we need to know: what are the pain points, and what are the use cases where those pain points are felt? It's a whole-team effort, by the way; that's why there are three of us giving this talk today. Once we understand the user and the problem deeply, and the pain points, then we explore solutions. Often an idea will come to us as a solution, and we take a step back. And then when we do the solution exploration, we'll actually see the original solution idea come up, and often it may even fade into the background, because we'll discover new ideas in our exploration and discovery. This works well when we have two tracks. So we have two tracks on the CF CLI. A lot of teams that have a dedicated designer also have this model of a delivery track and a discovery track. And the feedback loops happen as long as we're releasing frequently. So again, as we go through our journey together of creating a better CLI, new feature ideas come out and we validate those with the molecule in mind. Who's using it? What problem does it solve? And what is the best solution at this time? Cycle time. So we don't separate cycle time for the delivery track or the discovery track. It's idea to release, and both tracks are involved in that process. If you don't know what cycle time is, it's the time from idea to release. I think a lot of people in the crowd probably knew that, but I like to reiterate it. We also talk about our learning velocity. So we often think about what's our velocity on the team, how quickly are we delivering stories. I like to think of our learning velocity also. So how are we executing? Are we going in the right direction quickly and sustainably? A recent example is the run-task command. Dies just mentioned one-off tasks earlier.
So I'll walk you through an example of how we did that with tasks. First of all, we were like, who uses tasks anyway? At this point, it's an implemented feature in CAPI. We know that we need to expose it in the CLI. But for whom? And for what purpose? So we explored that. Initially, in that exploration, we'll have a lot of ideas about who the end user might be. These are some that came up. We have other conversations where there are other personas, other types of users, but this is what came up for tasks, based on just murmurs and anecdotes that we were hearing. So this is a thing that Teresa Torres came up with — you can follow her on Twitter, T-Torres. It's called the Ladder of Evidence, and it's probably one of the best articulated models of, when we talk to users, what kind of questions do we ask? At the bottom of the ladder is maybe minimal time investment, but we don't learn as much. So if we just ask people: what do you do? How do you solve this problem? And then I think there's a risk in asking people what they would do — predicting future behavior. Asking people to predict the future is kind of risky. I don't know how many of you know how you're going to solve a problem tomorrow or next week. It's all conjecture. So we ask what they've done in the past. The best predictor of what somebody's going to do in the future is what they've done in the past. So we ask them things like: tell us a story about that time you had that problem. And as they're telling us the story, we get a kind of thermometer, a kind of measure of how painful the problem is. Better yet, ask them to show you how they do it. And even better than that is to observe them in a real-life instance. So we think about the Ladder of Evidence quite often when we get in front of end users. Are we asking questions that give us the maximum learnings? Are we asking those questions in the right way? So for the task command we narrowed in on application developers.
We considered pain points and problems with other personas, but application developers seemed to have the most severe pain points. One example — and this is from Dr Nic's article on the Stark & Wayne blog — is how to run something like a database seed or database migration command. You just have to know way too much about the internals of an app on Cloud Foundry. And so this is kind of painful, and it also changes depending on which buildpack you're using sometimes. So that's not that great of a UX for running a simple database migration against an app. Another problem would be around operations, like who dropped off in our workflow. If it's an insurance app and somebody drops off during sign-up, maybe people in the field office would want to know who dropped off in the workflow, so maybe we can reach out to them over the phone or over email and follow up on that lead. For app developers to get that data out of the database, they'd have to spin up a separate cron-job app. So now they're maintaining multiple apps just to run basic errands or tasks against applications, to get simple data out of their database. Not a great experience either. So then we explore what the experience is like. It's a CLI: words matter, what we call commands matters, what we call the flags matters, and we explore that in a Google Doc. We do basic things like change the font so it looks like a terminal font, so that we can imagine what the experience might be like. And then as a team we go in, we make comments, we iterate. Google Docs have full version history, so it's a really nice artifact. You can go back in time and see what decisions we made in the past. And we can actually print that out and put it in front of users and treat it as a prototype. Any time you've used a CLI, you'll notice that you have a history of all the commands that you've run, and that's one way that you learn. Like, how does this command work?
What is in my history where that command worked well and I didn't get an error message? So when somebody looks at a printout of a potential CLI command, it's almost like they're looking at a terminal window with history, and we ask: does this make sense, right? We get feedback that way, and we scribble in the margins, or the user scribbles in the margins, which is even better. So it's a very lightweight way to iterate quickly. I think a lot of people have assumed that we prototype the CLI with, like, JavaScript, but that is not as rapid and we honestly don't learn as much. So this is how we do it currently. And now Nick is gonna talk about why we refactor.

Thanks, Mike. All right, so I'm gonna segue into this major initiative that we've taken on towards the end of last year. The reason why we're refactoring the code base is to incorporate all these design decisions into the CLI and to improve the end user experience. There are also some developer pain points that we've addressed with this refactor. So let's start with the end user pains first. One of the things that came up pretty frequently was that the end user experience was very inconsistent. Depending on what command you run in the CLI, you might get drastically differently formatted output. For example, the colors could be different, the casing, the space padding. This makes it difficult to script against the CLI. Also, it makes it a lot less intuitive for end users to interpret the information that's being displayed back to them. Oops, whoa, should not hold down that button. Okay, that went really far. Okay, we've also had a lot of unactionable error messages. This is partly due to the code just bubbling up Golang library errors, as well as errors from our client, directly to the end user. There's no context attached to these errors, so it's very difficult to discern what exactly went wrong when you ran your CLI command.
And we've also had a lot of regressions due to small, innocuous changes that we've made. This is largely due to the fact that we don't have sufficient integration or unit test coverage in our existing code base. In fact, our main command, which is cf push, has only a handful of integration tests. And you know how important cf push is, right? So it's a big improvement point. Also, old commands are much slower. Take this specific scenario: when you're running cf push and you're pushing up a jar archive, the CLI could potentially copy your jar archive up to three times before it actually pushes up your app bits. And imagine if you had a virus scanner on the local machine that you're pushing from — that will just totally bog down your machine, because these jar archives have thousands of files. This is a very inefficient way of doing things. Also, with the existing CLI, for every API request that you're making, we create a new connection. This is very inefficient, and you could potentially DDoS your server if you're running a command that lists out all your service offerings.

Okay, so now let's go a little more into the developer pains. On the code side, there are many, many layers and they're not cleanly separated. This makes it very difficult for developers to test in a black-box setting. It makes it basically impossible to write unit tests; you're essentially writing integration tests all the time. There are a lot of patterns in the old code. The CLI is a Golang project, but we've actually noticed that there are JavaScript-specific patterns in there, Ruby-specific patterns. There's callback hell in one place, but then in another place the code is very explicit about what it's doing. We have a repository pattern for instantiating things. There are a lot of patterns, and some of them are very non-idiomatic Golang.
And this just makes it much more difficult for someone to contribute to our project, as well as for new developers to ramp up on our code base. There's also a lot of technical debt. I remember when I first joined the team, I spent the first couple of weeks just making sure that a specific class of error messages would be bubbled up and displayed to the end user, and that took weeks. So there's a lot of technical debt. And lastly, it just makes development difficult, and it makes teaching Golang difficult as well. At Pivotal, as an organization, we are either learning or teaching, so we constantly have developers come onto the team who may or may not have experience with Golang, and this code base makes it very difficult for them to learn Go.

All right, now I'm gonna go a little into the architecture of the refactored code. First of all, we only have three layers. It's inspired by the Model-View-Controller idea. On the top layer, we have all the display logic. In the middle layer, we have all the business logic that molds the data that we wanna display. And at the very bottom, we have the API layer, which interacts with the various clients that we talk to — for example, the authentication client, the logging client, our main Cloud Controller client, and so on and so forth. Each layer in this refactored code base is separated via interfaces, which is an idiomatic Golang pattern. And this makes it very easy to inject dependencies, as well as swap them out if you wanna test in a unit fashion. We use this tool called counterfeiter extensively. It's used to generate test fakes, and we use it consistently across our refactored code base. It's very easy: basically, it generates fake stubs based upon interfaces that you're using in your code. All you need is a comment, as you can see there, to tell counterfeiter what to generate, and then on the command line you just run go generate, which is a built-in Golang tool.
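A minimal sketch of that idea, with hypothetical names (the real CLI's interfaces differ): a command depends only on an interface, a `//go:generate counterfeiter` comment marks the interface for fake generation, and a fake can be injected in place of the real actor. Here the fake is written by hand so the example stays self-contained, standing in for what counterfeiter would generate:

```go
package main

import "fmt"

//go:generate counterfeiter . AppActor

// AppActor is the business-logic interface a command depends on.
// counterfeiter would generate a fake implementation from it.
type AppActor interface {
	GetAppNames() ([]string, error)
}

// AppsCommand is the display layer: it only talks to the interface.
type AppsCommand struct {
	Actor AppActor
}

func (c AppsCommand) Execute() (string, error) {
	names, err := c.Actor.GetAppNames()
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("apps: %v", names), nil
}

// fakeActor stands in for a counterfeiter-generated fake.
type fakeActor struct{ names []string }

func (f fakeActor) GetAppNames() ([]string, error) { return f.names, nil }

func main() {
	// Inject the fake: no API layer or network needed to unit-test the command.
	cmd := AppsCommand{Actor: fakeActor{names: []string{"my-app", "worker"}}}
	out, _ := cmd.Execute()
	fmt.Println(out)
}
```

Because each layer sees only the interface of the one below it, any layer can be tested in isolation by swapping in a fake.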
And this makes our test quality very consistent. A little more about the architecture. Our API layer is laid out like middleware, like an onion. You can see it's doing a lot of different things, and it's very easy to add in something new. For example, in the innermost layer we're actually making the HTTP request, and immediately outside of that we're wrapping errors with more context, so we're returning something more meaningful to the end user. This is to address that specific pain that I mentioned earlier. And then we have logging, we have auto re-authentication, and you can add more to this as we go along, as new requirements come up.

All right, so now I wanna go a little more into the actual workflow for refactoring a command. We start off by exploring the existing behaviors around the existing command, and we write characterization tests. As you can see from these tests, they very clearly spec out what the command is doing. This is our way of documenting what the expected behavior is. Then we go about actually implementing the command in our refactored code base, with the goal of matching or improving the existing user experience. This is when we add things like better error messages, when we make things faster by making the back-end calls more efficient, and when we make things more consistent in terms of what we display back to the end user. Then at the end of the coding phase, we update the characterization tests to reflect the improvements that we made. For the last part of refactoring a command, we actually do manual testing — there's only so much you can test in an automated fashion. We actually run the command and make sure it's doing what it should be doing. We run through the happy paths, and then we get our product manager to perform acceptance on the command based upon the original requirements, which are these right here. So he hits the green button or the red button.
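The onion layering described above can be sketched with plain Go interfaces: each wrapper implements the same connection interface and delegates inward, so layers like logging or error-wrapping can be added or removed independently. Type and method names here are illustrative, not the CLI's actual code:

```go
package main

import "fmt"

// Connection is the interface every layer of the onion implements.
type Connection interface {
	Make(request string) (string, error)
}

// httpConnection is the innermost layer: it would make the real HTTP request.
type httpConnection struct{}

func (httpConnection) Make(request string) (string, error) {
	return "response to " + request, nil
}

// loggingWrapper logs the request, then delegates to the layer inside it.
type loggingWrapper struct{ inner Connection }

func (l loggingWrapper) Make(request string) (string, error) {
	fmt.Println("REQUEST:", request)
	return l.inner.Make(request)
}

// errorWrapper attaches context to any error from the inner layers.
type errorWrapper struct{ inner Connection }

func (e errorWrapper) Make(request string) (string, error) {
	resp, err := e.inner.Make(request)
	if err != nil {
		return "", fmt.Errorf("request %q failed: %w", request, err)
	}
	return resp, nil
}

func main() {
	// Compose the onion: error context around logging around the raw request.
	var conn Connection = errorWrapper{loggingWrapper{httpConnection{}}}
	resp, _ := conn.Make("GET /v2/apps")
	fmt.Println(resp)
}
```

An auto-re-authentication layer would be just one more wrapper in the same chain: retry the inner call after refreshing the token.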
All right, most importantly: what's better now that we've started this big initiative to refactor the CLI? Well, first of all, our UI output is far more consistent. Across all the refactored commands, it follows the same format, so it's more intuitive to understand what information is being displayed back to you, and it also makes it a lot easier for scripting. The error messages are much more meaningful. With that middleware layer, where we're wrapping the original errors that come back from our clients, we attach the context of what the error is actually about, so our end users can make the right adjustment. The refactored commands are also faster. We're not using the repository pattern anymore — anyway, that's a very specific implementation detail — but we're also not creating a new connection for every API request that we make. And on the developer side of things, we now have three cleanly separated layers, so there's a lot less mental overhead that you need in order to actually develop in this code base. There's very thorough test coverage, which means there are far fewer regressions. As of now, we've only refactored a subset of all the commands that are out there, but we already have more than 700 integration tests and more than 1,500 unit tests, and that list just keeps growing. But on the developer side, the biggest thing is that it's so much easier to develop in this code base. The ramp-up, as a new developer rolling onto the CLI team — you could be developing in it within hours, as opposed to within days or weeks.

All right. And with that, I'm gonna go to questions.

That's a really good question, and there's a lot of information around it. We could probably talk about that after the talk; it's a complete separate topic on its own. You in the front row? I think Mike can best talk to this point. What do you mean by the new CLI? The refactored CLI?
I don't — have you seen anything come in where people are like, hey, it's so much faster, or anything like that? We've refactored only — sorry, I think we've refactored only 10 or 15 commands so far, and not the most used ones. And I think we haven't heard anything because they're not used often.

Just to add on to that really quick: it is faster to make improvements. So I would think the less we hear, the better, in that regard, right? People are fine when things just work; we hear things when they don't work well. That's a good question though. Evan? That was a really good question. Regarding a style guide — like a system of record or something to point people to — we're still working on one. It's kind of a Google Doc full of ideas. But that is one of the intents: that other Cloud Foundry projects could be like, oh, what's the documented way of displaying a table? That kind of thing. I think right now it's a shared understanding on this team, between us, of basically how a table ought to look and things like that. Consistency is kind of shared among us. We're working on documenting it, and we'll let you know when it's done or out there.

I just have a piece to add to that last question, about what improvements, in terms of feedback, we've heard so far. On the developer side — on the development floor — we've actually had quite a bit of feedback on the refactored commands. One, they run a lot faster, because they're making smarter API calls in the back end. This is also because we're actually developing against the latest version of the Cloud Controller API. Another example: we added the user agent into the header of every request that we make, and we're actually able to track what commands are being received by the Cloud Controller. This in turn allows us to collect metrics about, for example, what are the most frequent commands that are run against the Cloud Controller.
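Stamping a user agent on every request can be done in one place with a wrapping `http.RoundTripper` — the same onion idea again. This is a generic Go sketch (the version string is made up), verified against a local test server rather than a real Cloud Controller:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// userAgentTransport stamps a User-Agent header on every outgoing request,
// delegating the actual work to the wrapped RoundTripper.
type userAgentTransport struct {
	agent string
	inner http.RoundTripper
}

func (t userAgentTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Clone before modifying: RoundTrippers shouldn't mutate the caller's request.
	req = req.Clone(req.Context())
	req.Header.Set("User-Agent", t.agent)
	return t.inner.RoundTrip(req)
}

// userAgentSeenByServer makes one request through the transport against a
// local test server and returns the User-Agent the server received — the
// value a real server could log and aggregate into per-command metrics.
func userAgentSeenByServer(agent string) string {
	srv := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprint(w, r.UserAgent())
		}))
	defer srv.Close()

	client := &http.Client{
		Transport: userAgentTransport{agent: agent, inner: http.DefaultTransport},
	}
	resp, err := client.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	return string(body)
}

func main() {
	// "cf/6.26.0 (example)" is a hypothetical version string.
	fmt.Println("server saw:", userAgentSeenByServer("cf/6.26.0 (example)"))
}
```

Because the header is set in the transport, every command's requests carry it automatically, which is what makes the server-side metrics and tracing mentioned here possible.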
And it's also very useful in debugging situations, where you might have seen an error on the server side and you're able to trace back to what the actual CLI command was that generated that error. So a lot of other good feedback from the development side as well.

I remember another piece of feedback that I thought you would mention during your slide. A year ago, when you came on the team — or like three months later — you said: when will I feel like I'm ramped up? After three months? Like, when do I get to that point? And then a developer here had been pairing recently on the new refactored commands with someone from the CAPI team, and I think they paired for half a day, then he was in a retro, in a meeting with us, while the other person just soloed by himself. So half a day — from three months down to half a day. And I think there is a lot of benefit for us as we onboard, or roll in, more people. Any other questions?

So, yeah, I understand your question to be: who are the core users? Is that what you mean by customers — who is the core audience we focus on? Yeah. I think initially — I'm just speaking for myself — my first-order filter is basically: does this benefit an app developer? Does this help an app developer develop apps faster, for lack of a better mantra? That's the first order. The next one would be operators, and in there as well are admins and org-manager-type people too. Those activities are important too, but the first one is: does this benefit an app developer? Like in Sandy Cash's talk about isolation segments — we had many conversations that were very productive about what we're not going to force an app developer to do, right? We have to offload the complexity onto somebody else so that the app developer can continue to be successful and efficient. More questions in the back. Where? Oh, hi. Oh, Charles. Yeah, so how do we prevent feature creep and bloat? Is that — am I mischaracterizing it?
Yeah — I'll take this really quick; I think Dies might have something to add too. That was the driver behind the cf help design change: rather than showing the whole kitchen sink when you do cf help, to only show the commands that app developers need to know to get started, start getting their app onto the platform, and manage the lifecycle of that app. And so a lot of the admin-only commands are then only visible if you say, like, show me all the help, right? And that was probably a result of the fact that we do add more commands — as more capabilities hit the platform, we want to expose them. And then maybe, do you want to take it?

Sure. Yeah, so when I became PM, almost two years ago, we were already discussing — we have over 150 commands and it's way too big, and I was getting a lot of, especially from the team: why don't we throw this away and start over with a nice clean CLI? I mean, it's only V6, and V7 is going to get it right, right? So I've been pushing back. I don't think V7 is going to get it right either, or not without spending a lot of time figuring it out. But there are a lot of commands where I kind of feel like: how did they get in there? Why that way? I wish they weren't in there. But like Mike was saying, we found this good solution of: we needed to clean up the help anyway, because it was too long. And now, even if there are so many features — because we're splitting up the help; currently we have cf help -a to show everything, and it could be A for all, it could be A for advanced, and we could have hyphen-something-else later to split it up further — it doesn't really matter that it has many commands, as long as you can find what you want to do. I think that's fine. It's not like the binary size is getting too big. And yeah, so I think we've kind of postponed that issue until we run out of abbreviations, I suppose, or hyphen-somethings. Does that answer the question? Does that raise a new question, or is there another question?
Yeah, a search — so a search on the server side, or like, you want to search. No one has raised that before, so that was definitely not in the books. Maybe you can reach out to us and we'll talk about it. Thanks. Anything else? Thanks again. I want to make one more appeal: we're trying to learn more about how people also script the CLI. So I think we had something up here — there it is. Come find us on Slack. I'm personally interested in how you're scripting the CLI and how that's going for you. It's easy for us to imagine somebody typing every command on a keyboard, but what happens when you're orchestrating these commands to get some result that you want? We want to learn more about that. So reach out to us about anything, really. Thanks.