Thanks for that introduction, David. Today I want to talk about generative AI and what it means for developer productivity. Productivity is hugely important right now, and the arrival of generative AI looks like a once-in-a-generation opportunity to improve it, for knowledge workers and developers alike. But rather than just take my word for it, let's start by looking at what the research actually tells us.
There was a study run by Harvard Business School together with the Boston Consulting Group, and it looked at knowledge workers, so BCG's own consultants. They split them into groups, one with access to generative AI and one without, and gave them tasks to complete. One was a creative task and another was a problem-solving task, which required analysis of data, something that obviously BCG consultants do a lot of with their customers and the businesses that they're working with. So, what were the results? Well, they were quite impressive, to be honest. Every single knowledge worker that took part saw a benefit from using AI in that particular test. The high performers amongst the BCG staff saw a 17% improvement, which is huge. It's very rare that you get a 17% improvement in anything when it comes to knowledge work. So showing this productivity improvement was quite a good result. But more importantly, perhaps, is that lower performers, that's folks who perhaps don't have the same amount of experience, maybe new in role, maybe younger folks perhaps, they saw a 43% improvement when they were using generative AI. They also found that when they used AI for tasks that it's really good at, things like writing and analysing data and creativity and that sort of thing, the workers who used the AI saw significant benefits in their productivity and the quality and the speed of their work. This is very unusual; these sorts of productivity improvements don't come along very often, perhaps once in a generation really.
The workers also completed about 12% more tasks and they tended to finish 25% quicker, so one quarter quicker than when they were working without the AI. So, I think it's pretty clear from that study that there are definitely benefits to be had by using generative AI. But these were knowledge workers, so the consultants working for BCG. What about software developers, right? Are they any different? Should they be different? Well, there's another study: McKinsey ran a controlled study. Smaller group size for this one, 40 developers took part, but they had varying experience and they were split into two groups. One group had access to two generative AI tools. One was a kind of general-purpose chatbot, like a ChatGPT-like tool. The second was something more code-specific, so something that's been pre-trained on code, maybe something like a Copilot, that sort of tool, right? Group B, the control group, didn't have access to any AI technology whatsoever, so they were just doing the tasks as they would normally do them before AI came along. The tasks themselves were code generation, refactoring, and documentation, so pretty much bog-standard, bread-and-butter things for developers to be doing. So, what happened? Well, again, the results were pretty awesome, to be honest. Developers with access to the AI tools saw a 20% to 50% speed increase. They were faster on average by 20% to 50% depending on the task. Same story with finishing tasks quicker: they were finishing tasks 25% to 30% faster, and they were much more likely to finish on time too, 25% to 30% more likely, when they were working on a task. They also noted in that paper that they saw tremendous gains in four particular areas. One was expediting manual and repetitive work. Another was jump-starting the first draft of code. Third was accelerating updates to existing code, perhaps things like modernisation, for example, which is obviously a hot topic.
And fourth, it was increasing developers' ability to tackle new challenges. So perhaps working on less usual tasks than they would normally get. Maybe, for example, a Java developer getting a Python task or something like that. So anything that allows them to sort of migrate and increase their knowledge base tended to make them more useful, and they saw lots and lots of gains in these areas. So, what can we learn from these studies? Well, number one, AI makes us more productive. It doesn't matter if you're a knowledge worker or a developer, it's the same result. There are clear advantages to using AI in those knowledge-based, knowledge-heavy tasks, things like coding. Two, developers should consider using generative AI, but perhaps it's best to follow the McKinsey example of using two tools. You should perhaps have something that helps you with code, something that's available in your IDE, but it's also really useful to have something that helps you with all the times when you're not inside your IDE. Sometimes having access to chatbots, things like ChatGPT or something like that, is really, really useful for helping with all those ancillary tasks that perhaps you don't use your IDE for, but they can still make you more productive in those areas. And I'll give you some examples a little bit later of some ideas of where you could apply that thinking. And also, it's not just developers who benefit, right? There's also a benefit for product managers. Product managers, using AI, can perhaps improve their DORA metrics, for example: fix issues faster, improve quality. Perhaps it's around testing and documentation. You could use AI to help you increase your code coverage, get fewer defects, for example, and have fewer regressions as a result. Perhaps it's about planning and productivity generally. So maybe you can use AI to analyse data to help you make better decisions around resourcing and how to use your resources more efficiently.
Or perhaps it's embedding AI features in products to help customers use them more easily or to get more from them than they currently do. And finally, of course, developer experience. Using AI in this way, by, you know, helping and supporting developers in their knowledge tasks, you can really reduce stress levels. You can limit burnout, you can increase knowledge and share it more widely amongst team members, inexperienced and experienced alike. They can all benefit. McKinsey found that in their study, developers reported being two times happier and entered into a flow state much faster than they did when they weren't using generative AI. So lots of good reasons there why you should take generative AI seriously and think about it for your development teams. But it's not all plain sailing. There are some risks as well. You should definitely think carefully before using online generative AIs, for example. This is what Gartner said: the use of online large language models, such as ChatGPT, Google Bard, et cetera, requires a number of trade-offs that many organisations will find unacceptable. For example, privacy is a problem. Samsung found this when their employees started to use ChatGPT. They accidentally leaked information about upcoming chip manufacturing techniques, because ChatGPT does not forget the prompts that you give it. It can reuse those prompts in its memory and start to share them more widely. So in this particular case, Samsung were a little bit embarrassed by that and immediately stopped using ChatGPT for the work that they were doing internally. Sustainability is a problem. There are data centres all over the world right now racking lots and lots of Nvidia graphics cards and various other stuff, and trying to find all the power to run them and the cooling to run them is causing sustainability issues.
Here's a story from the US where there was a 34% spike in water consumption from a Microsoft data centre, all down to the use of generative AI and the need for extra cooling and stuff. Then, of course, you've got inaccuracy. When McKinsey did a State of AI report, they said that the number one concern that adopters of AI reported back to them was inaccuracy. AI can hallucinate, and knowing when it's hallucinating is a challenge. If you're not on top of that challenge, it can catch you out. Many organisations don't necessarily know how to cope with that. You should think about that. Also, of course, licensing and copyright. There are obvious issues with that. You can't copyright something in the US that's been generated by AI, and the large language models themselves may have been trained on information that you don't know exactly where it came from. I wanted to show you what I've been doing to experiment with AI safely in Backstage. Let me introduce you to what I've been working on here, which is adding a private, secure, local AI to Backstage. I think it's a great place to start when you're thinking about your AI productivity journey for developers. I've put together this kind of proof of concept, really. I call it the BackChat plug-in. You can grab it if you use that QR code there. Basically, this plug-in just allows you to start to get a feel for what it might be like if you had generative AI features inside of Backstage, where developers, first and foremost, don't have to leave Backstage in order to be more productive, which I think is rather important. What is it? It's open source. It's based on an awful lot of open source tools. It's embarrassingly simple, so please don't judge me. There's a lot of open source tooling in the background, but it's basically bringing AI locally, you can run it from anywhere, and you can introduce it into your Backstage GUI very, very simply. How's it architected? Well, it looks something like this.
You have the plug-in itself, and then currently you have a number of choices of open source tools that you can use to hook into that plug-in, which will allow you to then expose AI-like features inside of Backstage. That's how it is today. It's not how I want it to be. I've got some ideas about what the roadmap should be for this tool, but I need help. If you're a developer out there and you want to help bring generative AI to Backstage, then you should definitely get in touch, because I could use your support and would really appreciate it. So, should we have a look? We're going to ask BackChat a question now, so bear with me a second while I try and get it onto the screen for you. So, let me slide this over... Slide this over there. Here we are. Okay. Nearly there. Sorry, folks. I find myself a bit awkward. Here we go. So here we have BackChat, right? As you can see, I've integrated it inside of Backstage and I'm going to ask it a question. Down here is my question box. I click in there and post my question in there. I'm going to ask it: my Kubernetes pod is stuck, it's in a CrashLoopBackOff, how can I get the logs? Be concise. Very important if you're in a rush at two in the morning. So off it goes and it's going to have a look at that question. What I should explain while it's doing this is that it's all running locally on this laptop. Here I've just got a 12th-generation Intel Core processor. I don't have a GPU. I've got about 32 gigs of RAM, but it's only using about maybe 21% of that at the moment, so it's not using very much. And it's responding to me using the Mistral 7B large language model. As you can see, hopefully, it's coming up with a reasonable response, right? So there we go. Describe the pod. Have a look at the events. That sounds pretty plausible. And then find the logs and then see if you can figure out what's going wrong, right? So it kind of works, I hope you'll agree.
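To make that concrete, here's a minimal sketch of the kind of request a plug-in backend might send to a locally hosted model. It uses the OpenAI-compatible chat API shape that tools like LocalAI expose; the base URL, model name, and system prompt here are illustrative assumptions, not BackChat's actual configuration.

```python
import json
from urllib import request

def build_chat_request(model: str, question: str) -> dict:
    # OpenAI-compatible chat completion payload, as served by tools
    # like LocalAI for locally hosted models such as Mistral 7B.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful DevOps assistant."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # keep troubleshooting answers focused
    }

def ask_local_ai(base_url: str, payload: dict) -> str:
    # POST to the local endpoint; this needs a server actually running.
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

payload = build_chat_request(
    "mistral-7b",
    "My Kubernetes pod is stuck in CrashLoopBackOff. "
    "How can I get the logs? Be concise.",
)
# ask_local_ai("http://localhost:8080", payload)  # hypothetical local endpoint
```

Because the API shape is shared across several local AI servers, the same payload works whichever backend you plug in behind the plug-in.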
So now you've seen it, let me try and explain a few places where I think you might be able to use it to get some additional benefit for you and your developers. So here we go. Use cases. Where could you use this? Well, it's not just for developers, remember. It's for anybody who's got access to Backstage. So if that's your product managers, if it's your database people, if it's your infrastructure people, they can all come together. They can all use it in the same spot. And what could you use it for? Well, how about databases? It can help you work with databases generally. Perhaps that's not an area of expertise for you, and there are so many to choose from, you know, graph databases or SQL databases or whatever. It could help by answering all sorts of different questions about those. Operations. You've just seen me use it for a sort of operations-like example, where we asked it to help us with Kubernetes. So lots of different places you could use this. Testing as well. We've already talked about the ability to maybe write tests and improve code coverage. Documentation. I've been to lots and lots of clients all over the world and they always seem to have the same issue when it comes to documentation. I think David mentioned it earlier as well. You know, getting TechDocs off the ground is difficult. Well, it doesn't have to be. Maybe your generative AI could help you ease some of that pain and help you to create better documentation. Here's some more. What about ideation? Coming up with ideas in the first place, evaluating ideas, suggesting alternatives. You can do all of that. Research. You can ask AI to do all kinds of weird and wonderful things, like pretending to be a particular user style or user type and then asking it to answer questions in their voice, and it can help you with that. It can also help you with teamwork, right? 360-degree feedback. I know that when I give feedback sometimes I'm a little blunt. I'm from the UK so, you know, forgive me.
I perhaps look on the more pessimistic side rather than the optimistic side. Having something like AI help me with feedback and soften it is really useful. It proves that it's not just for code; it's also for all the other things that you do as part of a team. Lastly there, unfamiliar code. I'm a Java developer by trade, but it would be nice to know a little bit more Python, so I'd be able to help out in Python. Why not? Now I've talked about that, let's have a look at some prompting hints and tips to make you all leave this auditorium as an AI boss: someone who can talk to an AI and have it do amazing things for you. So first of all, I wanted to encourage folks to think about using a very simple prompt structure. This one's called role, situation and task. So tell the AI the role that it's performing. Tell it the situation in which it's being used. Give it a very clear directive in terms of a task. Here I've got an example down at the bottom there. You're a database designer, so bring your expert knowledge of SQL and manage this fleet of vehicles in a database for me, and give me some idea of what that database should look like. And off it goes and it creates what you want it to. It doesn't have to be a database designer; it could be an operator, could be anyone. Next, think about chain-of-thought prompting. This is where you encourage the AI to take a breath, think step by step, and work through the problem slowly, and by introducing these sorts of hints into your prompts you'd be very surprised at how successful that is at making the AI more accurate. Remember we said inaccuracy was one of the problems. So using chain-of-thought prompting can help it become more accurate. So in this particular example: create a Javadoc, here's the method, go and write it for me, but think carefully and check your answer. So I'm hoping this is going to get the AI to actually look at the code that's in the method and try to document it properly.
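The role, situation and task structure (plus the chain-of-thought hint) can be sketched as a tiny prompt builder. This is just an illustration of the structure described above; the wording of the hypothetical database-designer example follows the one on the slide.

```python
def build_prompt(role: str, situation: str, task: str) -> str:
    # Role: who the AI should be. Situation: the context it's working in.
    # Task: a clear directive. The trailing sentence is the chain-of-thought
    # hint that encourages step-by-step reasoning for better accuracy.
    return (
        f"You are {role}. "
        f"{situation} "
        f"{task} Think step by step and check your answer."
    )

prompt = build_prompt(
    "a database designer with expert knowledge of SQL",
    "We need to manage a fleet of vehicles in a database.",
    "Suggest what that database schema should look like.",
)
```

Swapping the role string is all it takes to turn the same template into an operator prompt, or anyone else.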
Or try these. I like to call these 'show me' and 'tell me'. The first one, over on the left there, that's 'show me what you've got'. Basically, it's about showing the AI what you want rather than telling it. So, for example, I've included an idea of summarising a paragraph: if you ask for a summary and say 'use a similar, engaging style', it will write in the same sort of style as the text in the prompt. Next, 'tell me', so this is more like 'tell me how to succeed'. This is where you give the AI, in your prompt, some ideas about how it can be successful. Tell the AI what a good answer looks like: don't repeat yourself, make sure it's in your own words, make sure it's concise, et cetera. And few-shot prompting is another one you can use. Repetitive tasks, we talked about repetitive tasks earlier. Few-shot is a way to set up some initial prompting that the AI will then remember and use in later prompting. So it's great if you've got the same task to repeat over and over; you can prompt the AI to do that. So that's it. Let's have a look at some key takeaways. First of all, point number one: generative AI makes you better. It doesn't really matter who you are or what you're working on, it makes you better. So you should embrace it. You should use it, you should try it. You should see how you can incorporate it into what you do. Secondly, think about the things that GenAI can really do well and can accelerate for you. So anything that includes creativity, for example, or analysing data is a good place to start. And think about what you're going to do in the future. Are you going to integrate AI with your people and processes? Are you going to train your own AI models so that they're more useful in your organisation? Or are you going to integrate AI into your products and services?
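Few-shot prompting can be sketched the same way: a handful of worked examples followed by the new input, so the model infers the repetitive task from the pattern. The task here (mapping Java package names to file paths) is just a made-up illustrative example, not one from the talk.

```python
def build_few_shot_prompt(instruction, examples, new_input):
    # A few worked input/output pairs teach the model the pattern;
    # the final bare "Output:" invites it to continue in kind.
    lines = [instruction, ""]
    for given, expected in examples:
        lines.append(f"Input: {given}")
        lines.append(f"Output: {expected}")
        lines.append("")
    lines.append(f"Input: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Convert each Java package name to a file path.",
    [
        ("com.example.app", "com/example/app"),
        ("org.demo.util", "org/demo/util"),
    ],
    "net.sample.core",
)
```

Once set up, the same examples can be reused for every repetition of the task, which is exactly the repetitive-work sweet spot mentioned earlier.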
And what are you going to do when AI is everywhere? Like when AI is in your pocket or in your car or in your fridge, I don't know. All these are places where you might find AI in the future. What are you going to do when it becomes an AI-centric world? So that's it. Thanks very much for listening. I really appreciate your patience, especially with the screens and stuff. I hope you found that interesting. Here's a few links, places where you can get more information. You can have a look at BackChat, and there's also some TechDocs for it and a software catalogue for it and all sorts of other things. So please take a look, and if you've got any questions, I'll take them. Thanks so much, Ben. Did I run over? Do we have any questions? Oh, there we go. We have some takers. Thank you very much for the presentation. In my org, the generative AI initiative has primarily been Slack-based. So, given that Slack already provides a pretty good AI chat, how do I resolve that and, I guess, integrate generative AI in Backstage in a way that doesn't conflict with that Slack offering? Yeah, it's an interesting challenge, isn't it? Who provides that? Is it from Slack? So Slack have the large language model and they host it for you? I don't know for sure, but I think it's a combination of existing open source tools with our own internal team curating it for our internal experience. I think it's worth looking into how it's delivered so that you can understand it more and then figure out, okay, is there a way that I can integrate this easily, for example? So if there's a large language model behind that, maybe an open source large language model, perhaps it's been trained, perhaps it hasn't, retrained if you like, you might be able to use exactly the same model in these tools that I've shown you today. So something like LocalAI or Text Generation Web UI. They can both use hundreds of different large language models from lots of different providers.
So you might be able to use exactly the same model but deliver it in a very similar way. It comes down to the details, unfortunately. I'd have to say it depends on how you've done it and how it works at the moment. Awesome, and as a follow-up, do you have a roadmap for the BackChat plugin? Yes. What I'd really like to do, rather than rely on open source GUIs, is to create a true Backstage plugin that has a similar feature level to some of those other open source tools, that properly integrates within Backstage, and then maybe take the model further. So can we start to get information from Backstage, retrain the model, and have that exposed also to developers, so that they can do things like, you know, in my software catalogue, I want to ask a question about this thing, and have the AI know about the stuff that's in the catalogue, for example. Thank you. So come and help if you want to. Hey Ben, it was a nice talk. I really enjoyed it. You already answered my question about training the AI model with Backstage information. So this is something I'm really looking forward to, and since you've already worked a little bit around this, would you say it's possible, or will it be possible? Oh, it's almost certainly possible. So another great open source project you should perhaps think about, if you're interested in retraining, is Flowise. Flowise allows you to take, for example, a body of PDF documentation and have that embedded into an embeddings database that you then use alongside your AI, so that when you ask a question it can also use those fresh embeddings to answer that question for you. So, yeah, Flowise is definitely worth checking out, because it gives you a really nice, easy way to start to get comfortable with how all those different mechanisms work.
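The retrieval idea behind that embeddings workflow can be sketched in a few lines: embed your documents, embed the question, and pull back the closest match to include in the model's prompt as context. The bag-of-words "embedding" here is a deliberately crude stand-in for illustration; a real pipeline would call a proper embedding model and a vector database.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. Real systems
    # would use a learned embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, documents: list, k: int = 1) -> list:
    # Rank documents by similarity to the question; return the top k.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "To view pod logs run kubectl logs with the pod name.",
    "The service catalog lists all components and their owners.",
]
best = retrieve("How do I see the logs for a pod?", docs)
# The best-matching document is then placed into the prompt as context,
# so the model answers from your own documentation rather than guessing.
```

That retrieval step is also one practical way to rein in hallucination, which comes up in the next question: grounding answers in retrieved text gives the model fresh, verifiable material to work from.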
It might not be the supply chain that you end up with, but in terms of getting started and understanding what's going on, I'd say it's the perfect place to start. Awesome. Yeah, sorry, we have one more over here. Oh, okay. Thanks, thanks for the great talk. One question I have is, you were talking about hallucination, right? Yes. Do you have any thoughts on how to mitigate that through this plugin, or is that out of scope for this plugin? Hallucination is always going to be a factor of AI. It doesn't really seem to matter whose model you use, whether it's the most powerful tools out there, the GPTs of the world or the Claudes or whatever, or just an open source large language model. There's hallucination everywhere, and it's down to the fact that the way a large language model is created means it's very unpredictable in terms of what sort of answer you will ultimately get to any one question. It's non-deterministic; you can't always get the same answer to the same question. So knowing when an AI is hallucinating is one of the key skills right now for any knowledge worker or developer or anyone who's using AI in their daily work. It's knowing when it's giving you results that are just not accurate. But do you have any thoughts, you know, you said you have some ideas, right? Do you have any thoughts along these lines, or is that totally out of scope? No, I'm watching what other people think about that, and I'm hoping that the larger, broader community of large language model folks will ultimately get a handle on it. But I've been following this for a while now and I can't say I've ever heard particularly good answers around it. It's the same with deleting or making memories disappear from a large language model. Even the best methods today fail to successfully delete information from a large language model. So for things like GDPR it's a bit of a nightmare, because once it's in there, it's difficult to get it out. Right, thank you.
That's all the time we had. Thank you so much for all the questions.