And today we have with us Sergej Dechand, CEO and co-founder at Code Intelligence. Sergej, it's great to have you on the show.

Hey, yeah. Thanks for inviting me.

It's my pleasure and honor to host you here. You are co-founder of Code Intelligence, and before the interview we were talking about when the company was created; you gave us a quick history. I would like to revisit that. Tell us a bit about when the company was created, and what specific problem you saw in this industry that you felt nobody was solving and you folks could solve, which led to the creation of this company.

Basically, from 2014 to 2017 we were researchers at the University of Bonn, researching how to scale up security testing, how to make security testing better. Back then we were using fuzz testing as the main technology around that. We were also collaborating with some German enterprises, and when using our research techniques to test parts of their code we realized, hey, even without our research, just with the open source tools already available, we find a lot of issues, but no one in the industry wanted to use that kind of tech. And here is basically where we started asking questions: hey, what's going on? Why is no one using those kinds of tools? Then we realized that it's much more about the development process. You have two groups of people: the security people, without domain knowledge, and the developers, who have the domain knowledge and know what the software does, but have no security expertise. And to use modern fuzzing techniques, where you use a feedback loop or coverage-based feedback, you have to have both the domain knowledge to set it up and security knowledge. This is basically where the idea was born.
And then when we founded Code Intelligence, we had the vision: okay, we will use AI technology to bring those security tools closer to the developers, so that a developer without a security background can get pretty decent results. And then we had the second persona, the security people: they get reports, they see what's going on, they can judge whether the developers do a good job and give them feedback. But basically we wanted to enable the developers to do the core of the work. So this is how Code Intelligence was born.

But when you're talking about security folks not having domain knowledge and developers not having expertise in security, that's the whole problem that DevSecOps was trying to solve with the shift-left movement. We like to talk about these terms, but the reality is different. Things like security are a very specialized field; it's not that you can just teach a developer about it. You can have practices in place, you can have guardrails, you can have gates, but these are totally different disciplines. The same goes for AI and ML: you need a data scientist or data engineer who knows these things. So expecting unicorn developers to know all these technologies is a bit complicated. So, from what you are seeing in the market, now that the company is established and you have a customer base, what challenges do you see teams still facing despite all the discussion of DevOps and DevSecOps?

Yeah, in the end that is the problem, and it applies to any security tool: exactly that gap, that a lot of developers have to do security now. But at the end of the day, the developers working in a product team are paid for shipping new features or shipping new software.
And security is something which is still an afterthought. Even though everyone is saying "we do shift left, we test earlier," the mindset often is: if you have to prioritize your week, this week right now, mostly the product teams, the developers, will prioritize shipping new features instead of security. And then at the end, closer to a release, they will think about security. Obviously most companies got to a point with DevOps where they started shipping new features basically on a daily basis, but they still say, hey, I have to prioritize shipping new features. And this is where there is tension between the security people and the developers. This is true for any type of company. No matter whether they do shift left or not, at the end of the day, if they have to prioritize, most product teams will prioritize shipping.

What are your thoughts on the whole evolution of ChatGPT, or generative AI? And then let's talk about what kind of AI technologies Code Intelligence is leveraging.

In the end, generative AI got very, very big because of ChatGPT. There are machine learning language models and so on, and those can be applied, if you take something like GitHub Copilot, to generate new code and help you create new code. But similar techniques, mostly focused on finding security issues, have been used for years in static code analysis. There are a lot of different tools leveraging machine learning to find security issues, and I think most of the companies we talk to already have static code analysis with some AI enabled, which they use on a daily basis. And now there comes even more code, and this is coming from GitHub Copilot: modern companies also start using GitHub Copilot or similar tools to generate more code.
But if we are looking at penetration tests, how do pen testers find most of the vulnerabilities? It's not with static code analysis, because static code analysis is already used by the developers: they fix a lot of issues and mark some things as false positives. The penetration testers are typically using dynamic code analysis, tools like Burp Suite or OWASP ZAP, and this is where they find the security issues and then give the report back. Most dynamic analysis is not part of the DevOps toolchain; it's something which comes sporadically in penetration tests, or is even done by some other team. And here is where we see a new application of AI. I can dive a little bit deeper in a second, unless you have another question here.

No, please go, so that we can continue the flow. I do have other questions, but let's continue this.

Yeah, okay, perfect. So if we are looking at dynamic testing: dynamic testing means you have set up your software, the software is running, and then you start attacking it. Most of the dynamic analysis tools are so-called black-box testing techniques. That means you don't know what's going on inside the code base; from outside you are observing a black box, you see that there is a certain behavior, and based on that behavior you find a lot of issues. And as I said, most attackers or penetration testers find all the security issues with those kinds of tools. Back then, when there was a kind of fuzz testing renaissance, a lot of people were thinking about how to automate dynamic analysis even more, so that you don't have a human in the loop and you need fewer security experts. And here is where a lot of people started using genetic algorithms. So basically you attack the software from outside, but because you were part of the compilation toolchain during the compilation of the code, you could inject markers into the source code.
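To make the black-box idea concrete, here is a minimal sketch in Python. It is not any specific tool's implementation; the `parser` target and its planted bug are invented for illustration. Random inputs are thrown at an opaque target, and only the externally observable behavior, a crash or not, is recorded:

```python
import random

def blackbox_fuzz(target, iterations=20_000, seed=0):
    """Black-box fuzzing: throw random inputs at an opaque target and
    record which ones crash. No knowledge of its internals is used."""
    rng = random.Random(seed)
    failures = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)               # only external behavior is observed
        except Exception as exc:
            failures.append((data, type(exc).__name__))
    return failures

def parser(data: bytes) -> None:
    """Hypothetical software under test with a shallow planted bug:
    crashes on inputs longer than 8 bytes that start with 0xFF."""
    if len(data) > 8 and data[0] == 0xFF:
        raise ValueError("malformed header")

crashes = blackbox_fuzz(parser)
print(f"{len(crashes)} crashing inputs found")
```

Shallow bugs like this one fall to blind random input quickly; the limitation, which the coverage feedback described next addresses, is that random inputs almost never reach behavior buried behind many nested conditions.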
So that means if you give an input to the software, like with an X-ray, you can see what's going on inside the software and what path has been taken. Imagine you have a map, something like a maze, and then you take the different paths inside that maze. Because you can see what's going on, with genetic algorithms, which are another type of AI, you can basically trace backwards and see: oh, if I do this, I go into a new path. You run the new input, and then you see you get into another path. You can combine this with a lot of detection algorithms, and this way you basically increase your code coverage, your test coverage: you generate more and more inputs with the AI so that you cover almost everything inside that source code. This was something which was already done, and there are a lot of open source tools. There is something like OSS-Fuzz from Google supporting different programming languages, and we at Code Intelligence wrote Jazzer and Jazzer.js, which are the fuzzing engines for the Java and JavaScript languages respectively, doing all that automation.

Excellent, thank you. Can you also talk about, number one, the importance of open source for Code Intelligence? Talking about open source can get complicated when we talk in terms of generative AI, because there are so many different components; it's not as simple as a LAMP stack. So talk about the importance of open source for the company, and when it comes to generative AI and LLMs, whether you're also relying on some open source technologies there as well.

In the end, let me clarify a little bit how we use LLMs. As I said, if you have a specific piece of software running and there is a defined interface, we can attack that interface in a dynamic manner, on running software. But the problem with dynamic analysis is that when you set it up, you have to have someone, some kind of engineer, who tells this AI how to attack which interface.
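The maze-tracing loop described above can be sketched as a toy coverage-guided fuzzer. This is a simplified illustration of the technique, not Jazzer's actual code; the `FUZZ` magic prefix and the mutation strategy are invented for the example. The markers injected during compilation are simulated by the target recording which branch it reached in a global coverage set; inputs that light up new branches are kept and mutated further:

```python
import random

COVERAGE = set()  # simulates markers injected at compile time

def instrumented_target(data: bytes) -> None:
    """Toy target: each matched byte of the magic prefix is a new
    'branch'; the planted bug triggers only on the full prefix."""
    magic = b"FUZZ"
    for i in range(len(magic)):
        if i >= len(data) or data[i] != magic[i]:
            return
        COVERAGE.add(("prefix", i))      # marker: this path was taken
    raise RuntimeError("planted bug reached")

def coverage_guided_fuzz(target, max_iters=200_000, seed=0):
    COVERAGE.clear()                     # fresh coverage map per run
    rng = random.Random(seed)
    corpus = [b"A"]                      # seed corpus
    for _ in range(max_iters):
        data = bytearray(rng.choice(corpus))
        if rng.random() < 0.5:           # mutate: flip one byte...
            data[rng.randrange(len(data))] = rng.randrange(256)
        else:                            # ...or append a random byte
            data.append(rng.randrange(256))
        data = bytes(data)
        before = len(COVERAGE)
        try:
            target(data)
        except RuntimeError:
            return data                  # crashing input found
        if len(COVERAGE) > before:       # new path -> keep for mutation
            corpus.append(data)
    return None

found = coverage_guided_fuzz(instrumented_target)
print(found)
```

A purely blind search would need on the order of 256^4 tries to hit the four-byte prefix; the coverage feedback turns that into four small steps, one per branch, which is why genetic, coverage-guided fuzzers scale to real code bases.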
And this is where the LLMs come into place. So how do we use LLMs? This can be done with ChatGPT, or there are also alternatives to ChatGPT, where we scan through the source code. Based on the source code as it is, we use generative AI to find the different unit tests, and from the unit tests we can auto-configure how the dynamic analysis then attacks the software. This is already working with ChatGPT, but right now we are working on a smaller language model which is specialized for just one use case: how to configure dynamic analysis. Obviously you could use a generic approach; you have ChatGPT right now, which can code and can assist you in natural language. But if you use a language model specialized for a very specific use case, it will work better. And here is where we use our custom-made LLMs focusing on that single use case.

Now, you folks just announced CI Spark. Talk a bit about what it is and whose problem it's going to solve.

CI Spark is doing exactly that part. As I just explained, for dynamic analysis you have to have an engineer who sets everything up. And what does the engineer do? The engineer looks: okay, I have these different interfaces, I have these different services. He defines all those interfaces and gives something like an OpenAPI definition to the software which is doing the fuzz testing. With the CI Spark release, we basically automated a big portion of exactly that work. Because if you have the source code, a lot of interfaces are already defined inside the code. So yes, you could have an engineer doing that, but you have the code, and based on the code you can take the definitions out of the code, generate the configuration, and then the engineer only checks: is it correct, or is something missing?
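A rough sketch of that auto-configuration idea: scan the source for public entry points and emit a skeleton fuzz harness that an engineer only has to review. This is purely illustrative; Code Intelligence uses an LLM for this step, whereas the sketch below uses a plain AST walk as a stand-in, and the function names and the `provider` API are invented:

```python
import ast

# Hypothetical source under analysis (stands in for a real code base)
SOURCE = '''
def parse_invoice(data: bytes) -> dict: ...
def _internal_helper(x): ...
def render_report(invoice, fmt): ...
'''

def discover_entry_points(source: str):
    """Find public top-level functions that could serve as fuzzing
    entry points (the part an LLM would infer from real code)."""
    tree = ast.parse(source)
    return [
        (node.name, [a.arg for a in node.args.args])
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and not node.name.startswith("_")
    ]

def generate_harness(name: str, params) -> str:
    """Emit a skeleton harness feeding fuzzer-generated values into one
    entry point; the engineer reviews it instead of writing it."""
    args = ", ".join(f"provider.next_{p}()" for p in params)
    return f"def fuzz_{name}(provider):\n    {name}({args})\n"

for name, params in discover_entry_points(SOURCE):
    print(generate_harness(name, params))
```

The private helper is skipped, and each public function gets a generated harness stub, which is the "engineer only checks whether it is correct" workflow described above.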
But this way you already have a 15x speedup, because you can onboard 15 times more projects, more microservices, more services in the same amount of time.

We talked about technology a lot; I want to talk about the people or cultural side of it. Of course, we started the conversation with the whole DevOps, DevSecOps, AIOps discussion. What I want to talk about is the kind of internal friction you mentioned earlier within teams. How much cultural change is needed, and at the same time, can tools like CI Spark become a catalyst for cultural change, because they make things easier for teams? You don't have to really go to other teams, and security is infamous for slowing things down, or at least stopping things altogether. So talk about the role of culture and the role of Code Intelligence in bringing that cultural change.

This is a really great question, because before we had CI Spark, here is how it worked: we got a lot of interest from the security people. The security people said, hey, we need this kind of technology, and when we explained how it works, they realized, okay, we need access to the source code. So they went to the development teams and tried to introduce this kind of software, and the development teams said, hey, but it won't work for us, because our software is so complicated, we are so special. What that created, especially in large software projects, is that they were discussing how to do a POC, how to get it done, and so on. So what CI Spark changed culturally is this: previously, the same security engineer had to convince the developers to do that work.
Now they were able to set it up themselves, because they had source code access: they used CI Spark and set up the first tests on their own, without knowing that code base. This way they didn't just go to the developers and try to introduce a new technology; they went to the development teams already with some results and showed them how it worked. The resistance from the developers was lower, because: hey, if this person doesn't know our code and did it that easily, it must be even easier for us. This changed things a little bit, in that the security people now have an easier job with the developers, because it's no longer: hey, you have to first invest time, then you get results, and then we discuss what's happening. Instead, they already got the first results even without talking to engineering, as long as they already had access to the repositories. And this is, from my perspective, a really big change, because the security people face less resistance compared to previous approaches of bringing in dynamic testing.

One more thing I want to talk about, and I want to go back to the company: you talked about your history, your origin at the university. Talk a bit about your operations. Are you a German-based company? Are you catering only to the local market, or are you looking at the global market? And can you share some use cases, or the kinds of companies or industries which are leveraging your technologies?
We are focusing on enterprises, so almost all of our customers are large enterprises. If you look at our homepage, you will see companies like Google, or CARIAD, which is the software factory of Volkswagen, or Woven Planet, which is the software factory of Toyota. So you can hear it already spans different continents; we operate globally. Most of our clients come from the automotive sector, but our technology can be applied in an industry-agnostic way; it's all about the technology stack you are using. The reason we have more automotive customers is that we started with C++, and C++ is mostly used by automotive companies. Later we added memory-safe languages like Java, JavaScript and so on, and after that we got customers from fintech and cloud providers. So basically that is how it went, and yeah, we are industry agnostic.

What are the trends that you're seeing, once again from the enterprise perspective, and where do you see the industry, the market, the ecosystem heading?
You see some resistance against AI in some enterprise companies, because there was the discussion about copyright issues. But if you look at where generative AI is developing, I think Microsoft now announced that they take on the liability for copyright, and this way you already see more and more companies using generative AI to produce more code. But there are studies at the same time which say that this code is less secure compared to human-written code. So obviously you need to scale up AI technologies in software testing the same way; there is no way around that. If you are a company and you want to remain competitive, you have to invest and go in that direction, because otherwise the others will be faster. Where I see the developer's role changing is that the developer will write less manual code and will definitely use AI techniques, but at the same time the developer will use AI for quality testing and AI for security testing. In the end, engineering will become more about choosing the right tools for the right use cases, and it will involve a lot more AI. We will produce more code, we will produce more complexity, and we need to reduce complexity; this is very, very important in security. That means you have to understand the different AI approaches, how to use them correctly, how to combine them correctly. Because, hey, if you have a good AI tool in security testing, it can influence how the code is generated. So in a way it will be all about how you combine the different AI tools.

Sergej, thank you so much for taking the time today to talk about the company. Great insights about generative AI, security and open source; thanks for all of them. I would love to chat with you again whenever new developments are happening at the company, or just to talk about the whole evolution of security and AI. Thank you.

Thank you for the invitation. Pleasure to meet you.