Thank you for coming today. I'm Kawana, and this is Keisuke Matsumoto. We are researchers in the Hitachi Research and Development Group. Today I'd like to talk about our survey of several guidelines and the design of build pipelines. The title of this presentation is Beyond Guidelines: Designing and Implementing Robust Build Pipelines.

This is today's summary. We focus on software supply chain security, survey the relevant frameworks, and design our build pipeline. First, we survey and review the OpenSSF, CNCF, and NIST SSDF frameworks. Second, we compare the frameworks and propose a build pipeline based on FRSCA, modified to conform to the guidelines. Our pipeline design makes it possible to meet important frameworks such as NIST's SSDF and OpenSSF SLSA. First we will present the guideline survey, then the design and implementation of the build pipeline, and finally we will show you a demonstration video.

We have surveyed four frameworks: OpenSSF SLSA, the CNCF Software Supply Chain Best Practices paper, the CNCF Security White Paper, and NIST's SSDF. We will briefly introduce each framework.

First, OpenSSF SLSA. As you know, OpenSSF is a cross-industry organization that brings together the industry's most important open source security initiatives. SLSA (Supply-chain Levels for Software Artifacts) is a set of incrementally adoptable guidelines for supply chain security. On April 19th of this year, SLSA was updated from version 0.1 to version 1.0. This diagram shows the supply chain threats. SLSA version 1.0 mainly addresses this area, the build threats, and the focus of SLSA is protecting integrity and availability. SLSA defines three security levels, Build Level 1 to Build Level 3. As you can see in this table, provenance is very important for SLSA. Provenance is information, or metadata, about how a software artifact was created, and it can be used to determine the authenticity of a software artifact.

Next is how to SLSA. SLSA publishes this table of builders and the SLSA level each builder can potentially reach.
GitHub Actions and Google Cloud Build are SLSA Build Level 3, and FRSCA, which we will introduce later, is SLSA Build Level 2. Regarding provenance, SLSA has published tools to generate and verify it. slsa-github-generator is a tool set for generating provenance for GitHub projects, but this tool handles generation only; distribution and verification are not supported. slsa-verifier is a tool set for verifying provenance, and it supports provenance from slsa-github-generator and Google Cloud Build. So if you use GitHub Actions or Google Cloud Build, you can use these tools to work with provenance.

Next is CNCF TAG Security. TAG Security, the Technical Advisory Group dedicated to security, has published two white papers: the Software Supply Chain Best Practices paper and the Cloud Native Security White Paper.

First, Software Supply Chain Best Practices is a set of supply-chain-specific guidelines. It describes more than 50 ways to improve security across the software supply chain, and FRSCA is based on these best practices. I would like to introduce the four key principles crucial to supply chain security from its executive summary: every step in the supply chain should be trustworthy; automation is critical to supply chain security; the build environment should be clearly defined and limited in scope; and mutual authentication mechanisms are very important for supply chain security.

Next, the Security White Paper is a guideline for cloud native security; it offers comprehensive guidance for cloud native application security. The paper classifies the software development lifecycle into four categories: Develop, Distribute, Deploy, and Runtime. I will introduce one security recommendation from each category. In the Develop category, code review is recommended: review code before merging changes, following the four-eyes principle. In the Distribute category, encryption, signing, and verification are important. In the Deploy category, verifying image signatures and checking security policies are important.
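To make the Distribute and Deploy recommendations concrete, here is a minimal sketch of signing and verifying a container image with Sigstore's cosign. The image reference and key file names are hypothetical, not part of our pipeline.

```shell
# Generate a signing key pair (writes cosign.key and cosign.pub)
cosign generate-key-pair

# Distribute: sign the image and push the signature to the registry
# (registry.example.com/myapp:1.0 is a hypothetical image reference)
cosign sign --key cosign.key registry.example.com/myapp:1.0

# Deploy: verify the signature before admitting the image to the cluster
cosign verify --key cosign.pub registry.example.com/myapp:1.0
```

In practice the verification step would typically run inside an admission controller or deploy policy check rather than by hand.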
In the Runtime category, implementing a service mesh and encryption of storage are important.

Next is the SSDF. The SSDF (Secure Software Development Framework, NIST SP 800-218) is a framework that brings together several guidelines. The SSDF practices are organized into four groups: Prepare the Organization, Protect the Software, Produce Well-Secured Software, and Respond to Vulnerabilities. Each practice is defined with these elements: practice, tasks, notional examples, and references. The references cover 29 guidelines; the CNCF Software Supply Chain Best Practices paper is included in these references, but SLSA is not.

This is a summary of our guideline survey: the SSDF, SLSA, and the CNCF Best Practices paper and Security White Paper. The CNCF Software Supply Chain Best Practices paper is included in the SSDF references, and the Security White Paper has a mapping table to the SSDF, so these two guidelines are covered by the SSDF. We therefore concluded that the SSDF and SLSA are the important frameworks for supply chain security.

Next is designing a build pipeline. To identify the necessary components of a secure build pipeline, we created this table, which shows whether the CNCF guidelines and SLSA cover each SSDF practice. Using this comparison table, we examined the recommended configuration of a secure build pipeline and defined it as our scope. For example, this practice group, Prepare the Organization in the SSDF, cannot be implemented in a build pipeline, so items like creating new roles or providing role-based training are out of our scope. On the other hand, SSDF PS, Protect the Software, is very important for a build pipeline, so all of it is in our scope. As time is limited, I will skip the detailed explanation. Using this scope, we listed specific examples with reference to the SSDF references; again, as time is limited, we will skip the details.

This slide summarizes the requirements from the SSDF and SLSA. In the development environment, use automation tools, and use a builder certified by SLSA, such as GitHub Actions, Google Cloud Build, or FRSCA.
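As an illustration of the GitHub Actions route, a workflow that builds an artifact and then calls the slsa-github-generator reusable workflow to produce SLSA provenance might look roughly like this. This is a sketch under assumptions: the build command, artifact name, and pinned generator version are hypothetical, not taken from our pipeline.

```yaml
# Hypothetical workflow sketch: build an artifact, then invoke the
# slsa-github-generator generic generator to produce SLSA provenance.
name: build-with-provenance
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      hashes: ${{ steps.hash.outputs.hashes }}
    steps:
      - uses: actions/checkout@v3
      - run: make artifact.tar.gz   # hypothetical build step
      - id: hash
        # The generator expects base64-encoded sha256 subjects
        run: echo "hashes=$(sha256sum artifact.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"

  provenance:
    needs: build
    permissions:
      actions: read      # read workflow metadata
      id-token: write    # sign via Sigstore keyless
      contents: write    # upload provenance to the release
    uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v1.9.0
    with:
      base64-subjects: ${{ needs.build.outputs.hashes }}
```

The point is that the provenance job runs in the generator's trusted reusable workflow, which is what allows the result to claim SLSA Build Level 3.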
Those are the development environment recommendations. The tools that should be implemented in the build pipeline are: signing of code, containers, and data; a code review tool; SBOM and provenance generation tools; and a vulnerability scan tool. We should also store and publish software-related data such as SBOMs and provenance, and scanning and verification of OSS before use is very important. To meet all of these requirements, we implemented a build pipeline based on FRSCA.

So next is implementing the build pipeline. Hi, this is Keisuke Matsumoto. In the previous part, Kawana-san gave the survey of the frameworks and guidelines, our scope was brought into shape, and our build pipeline was designed. In this part, I'm talking about our build pipeline's frameworks, components, and architecture. Our pipeline is based on FRSCA, so I will briefly introduce FRSCA and Tekton.

First, what is FRSCA? FRSCA is a build pipeline that the OpenSSF Supply Chain Integrity Working Group participates in developing. FRSCA is also a reference implementation of the CNCF Software Supply Chain Best Practices. FRSCA requires Kubernetes, and it can be configured with the following tools. First, Tekton: Tekton is a CI/CD pipeline tool. Sigstore can sign attestations and SBOMs. SPIFFE/SPIRE provides identity for build workloads. Vault is a secret management tool where you can store secrets.

Next, I will introduce Tekton. FRSCA uses Tekton for its pipelines. What is Tekton? Tekton is a CI/CD pipeline developed as a Kubernetes-native CI/CD solution. With Tekton you can configure pipelines that build, test, and deploy applications on Kubernetes clusters. Tekton is part of the Continuous Delivery Foundation, which is one of the Linux Foundation projects. Here are the main components of Tekton. First, Tekton Pipelines: Tekton Pipelines is the building block for CI/CD workflows.
Tekton Pipelines provides Tekton's key building blocks, namely Tasks and Pipelines. Second, Tekton Triggers: Tekton Triggers provides event-trigger functions for CI/CD workflows. For example, a developer pushes a commit to a GitLab or GitHub repository; Tekton Triggers detects the push event and executes your pipeline. Third, Tekton Chains: Tekton Chains observes Tekton pipeline executions, takes snapshots, and generates, signs, and stores provenance.

Next, I'm talking about our build pipeline framework. This figure shows our build pipeline. The left-hand side of the table shows the pipeline steps and descriptions, and the right-hand side shows the frameworks each step conforms to. Step one is code management and review: we should manage source code and review code to mitigate vulnerabilities during development. Second is CI tools. Third is the builder: we should automatically build and test source code using secure build tools. Fourth is artifact management: we should create an SBOM and provenance and manage artifacts such as container images. Fifth is vulnerability scanning: we should scan for and detect vulnerabilities continuously. Finally, signature: we should sign the provenance and code and publish or store them.

Our pipeline can be realized using the tools in this table, so let me explain them. We use GitHub for code management and SonarQube to review our source code. We use Tekton as the CI tool. Kaniko is the builder, so we can build a container image using Kaniko, and the container image can be stored in a Docker registry. We use Syft as the SBOM generation tool: Syft can scan our container image and generate an SBOM from it. The SBOM can then be scanned by Grype, which detects vulnerabilities. The last step is signature: we use Tekton Chains to generate provenance, the provenance can be signed with Sigstore, and it can be recorded in our local environment using Rekor.
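The step sequence just described could be expressed as a Tekton Pipeline along these lines. This is a simplified sketch, not FRSCA's actual definition: the pipeline name, the syft/grype/deploy task names, and the parameters are assumptions, though git-clone, sonarqube-scanner, and kaniko are real Tekton catalog tasks.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: secure-build-pipeline      # hypothetical name
spec:
  params:
    - name: repo-url
    - name: image
  tasks:
    - name: clone-repo             # step 1: fetch source
      taskRef:
        name: git-clone
      params:
        - name: url
          value: $(params.repo-url)
    - name: code-review            # step 2: static analysis
      runAfter: [clone-repo]
      taskRef:
        name: sonarqube-scanner
    - name: build-image            # step 3: build with Kaniko
      runAfter: [code-review]
      taskRef:
        name: kaniko
      params:
        - name: IMAGE
          value: $(params.image)
    - name: generate-sbom          # step 4: Syft SBOM generation
      runAfter: [build-image]
      taskRef:
        name: syft                 # hypothetical task name
    - name: scan-vulnerabilities   # step 5: Grype scan
      runAfter: [generate-sbom]
      taskRef:
        name: grype                # hypothetical task name
    - name: deploy                 # final step: deploy to cluster
      runAfter: [scan-vulnerabilities]
      taskRef:
        name: deploy-to-cluster    # hypothetical task name
```

With a pipeline like this in place, Tekton Chains can observe the PipelineRun and attach signed provenance without any change to the tasks themselves.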
Here, let me touch on the main tools. The first is SonarQube. SonarQube is a source code review tool that inspects code quality continuously. It supports around 30 programming languages, and it can detect both code smells and security vulnerabilities. SonarQube can integrate into the continuous integration pipelines of DevOps platforms. In fact, SonarQube is available as a task for Tekton Pipelines, and you can find the SonarQube task YAML file on GitHub. Therefore, we selected SonarQube for our pipeline; it is a good fit.

Next, Syft and Grype. What are Syft and Grype? Syft is an SBOM generation tool, and Grype is a vulnerability scanner. Consider the following process. You prepare an artifact such as a Docker image or an archive. Syft scans this artifact and generates an SBOM from it. The SBOM can then be scanned by Grype, which detects vulnerabilities in the artifact. You can use different SBOM formats, such as CycloneDX, SPDX, and Syft's own format, and Syft supports OS packages and many language package ecosystems. Syft and Grype are available on GitHub; you can get them from these URLs.

Next, I'm talking about our pipeline architecture. Our pipeline executes the following steps. First, a developer pushes a commit to the GitLab repository. Tekton Triggers detects the push event and executes the following pipeline. This is our pipeline. Step one is the clone-repo task, which clones our source code from the GitLab repository. Next is the SonarQube scanner task, which scans our source code using SonarQube. The next task builds a container image using Kaniko and a Dockerfile. The following tasks use Syft and Grype: one generates an SBOM, and the next scans it for vulnerabilities. The last task deploys to the cluster. Each task generates artifacts, and these artifacts are stored in the metadata store.
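Run standalone, the SBOM and vulnerability scan steps of this pipeline might look like the following sketch; the image reference is hypothetical.

```shell
# Generate an SBOM from a container image in SPDX JSON format
# (registry.example.com/myapp:1.0 is a hypothetical image reference)
syft registry.example.com/myapp:1.0 -o spdx-json > sbom.json

# Scan the SBOM with Grype; exit non-zero if any high-severity
# vulnerability is found, so CI can fail the build
grype sbom:./sbom.json --fail-on high
```

Scanning the SBOM rather than re-scanning the image keeps the two steps decoupled: the SBOM is generated once at build time and can be re-scanned later as new vulnerabilities are published.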
Finally, Tekton Chains generates provenance for the build process, Sigstore signs the provenance, and the provenance and attestations are stored in the metadata store.

When we assembled our pipeline, we encountered an issue: the Docker rate limit issue. As you know, Docker Hub limits Docker image downloads, which affected our Kubernetes pods. Why does this happen? Because we use an environment with a shared outbound IP address. To address this problem, we share Docker login credentials as a Kubernetes secret with each pod. This allows us to pull images from Docker Hub. As an alternative, you could also distribute separate outbound IP addresses to solve this issue.

Now we will show the demonstration video. We configured a GitHub repository with a sample web application. I execute a push as a test. Tekton Triggers detects this push event and executes our pipeline. The result of the execution can be checked with this command, the Tekton tkn command. Let's start. Now the clone-repo task clones the source code from this repository; we will skip ahead. The next task scans the source code using SonarQube. This is the SonarQube task; it has finished. Next, Kaniko builds a container image from the Dockerfile in our GitHub repository. Next, Syft scans this container image and generates an SBOM; this task has finished. Next, Grype scans the SBOM and detects vulnerabilities. The last task deploys to the cluster.

Going back to the SonarQube task: you can find the scan results at this URL. Let's click the URL, and you can check the scanning results. Next, we can obtain the SBOM and the provenance, each as a JSON file. Let me show the SBOM file. This is the SBOM as a JSON file.
It stores package information such as names, versions, and licenses. We can also obtain the provenance as a JSON file, like this. This JSON file represents the build process using Kaniko; as you can see, you can check the build process from this file. That finishes the demonstration.

Lastly, in summary, this closes our presentation. Thank you. Are there any questions?

So, your question is about the business plan for this pipeline. We are not sure yet; we have just started this research, so we will propose it to our company. That is the business plan. Yes, thank you.