I'm Karen Bennett. I work with IEEE and ISO standards, trying to get better technical depth into some of the standards being produced out there. I have an open source background myself: I was part of the founding group that started up Red Hat. So, coming from those early, early days, it's amazing to me to come back to this community, because it has grown so much. For the last 10 or 12 years, though, my primary job has been building AI applications and the Metaverse. It was before it was called the Metaverse; it was really just 3D games and being immersed in your goggle-type devices, but now there's an actual term for it. About a year ago, I approached the Linux Foundation because I could see they were doing something called SBOMs, software bills of materials. I don't know how many of you have done an SBOM for your software, but it is an interesting way to document code. Through IEEE and in a number of keynotes and talks, we've heard that governments are looking for evidence that you've done the due diligence to know that your software works as intended. Then there's the whole phenomenon associated with ChatGPT. Unfortunately, I missed the talk this morning, but I'd call it a bit of the Wild Wild West out there, because people are throwing stuff into production that hasn't been tested at all, and that worries me. And from a standards point of view, there's no standard I could take to OpenAI and say, you're not following X, Y, and Z. So a lot of catch-up is going on. That's why I wanted to talk about the Metaverse. To me it's up and coming, or rather it's been around for a while under different names, but it's an opportunity for standards and regulation, for getting people to talk about the issues before things get to production. There are a number of things in production right now, but that's not to say they can't improve by doing an SBOM.
So again, can I get a show of hands of who's familiar with SBOMs? Yes, no, nobody? Okay, I'll go a bit deeper into that then. And I'm sure all of you have built a Metaverse. This is the dictionary definition that's out there, the IEEE glossary definition: it really is bringing the digital world and the physical world together within software. For those from Canada: I worked on a project for a kids' toy called Webkinz, which has been on the market for well over a decade. What it does is let a kid play games with whatever toy they purchased, so it has that interconnection between the physical world and the digital world, which was a first step toward what people are starting to call the Metaverse. I also have a couple of kids, and they're gamers, so I can see that the Metaverse, this whole blend of the physical world and buying things, is a phenomenon that will be there for the next generation. Okay, so: software bill of materials. Really, all it is is a document that records certain fields associated with your software. It's been in existence for a while. Kate, who runs that group at the Linux Foundation, was saying it started out with what they call traditional software. Then IEEE came to the table wanting to get some things into the SBOM structure for AI and datasets, because for those of you who have built AI applications, the key is the data and how the model gets trained. Understanding the data you have, as well as the software, is a crucial part of documenting an AI application. The Linux Foundation has been targeting cybersecurity, and I think everybody in the world, at least in this community, knows Log4j. So it really comes down to this: you need to understand all your dependencies.
And when a security attack is made on an open source component, being able to quickly figure out whether your source code has the potential for that same risk is one of the benefits of documenting a software bill of materials. The other is licensing. If I'm going to buy a piece of software, I want to know the licenses. I gave a talk two days ago about this: if you go and do a software bill of materials for ChatGPT, you will quickly see that so much of the information about how it was built is not known to the public. They do document the licensing, but unfortunately Microsoft owns the rights to anything you do with that code, if you read the T's and C's of the license. But really, unless they document these things, how are you going to know? It looks like a cool technology, they say, hey, come use our API, and you don't really know all the repercussions. So again, that's the reason software bills of materials are being strongly encouraged by the standards groups. As I build a standard (right now I'm with a group of people building one for empathetic-type technologies, things like facial recognition and biometrics), the question is: what do you need to document so that if I'm going to buy your software, I can have a clear conscience that I've got a good piece of work? That's ultimately what the SBOM is for. Can it be expanded to do more things? For sure. But it's not quite there right now; it's really just the basics that get documented, associated with your code. I also threw in some statistics for the Metaverse: it's a growing field, and if my kids are any representation of the future, it will be really important. Right now I see it in games, and I've been developing games software myself, which I think is my next slide.
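To make the idea concrete, here is a minimal sketch of what such a document might look like. The field names are simplified for illustration and are not the exact SPDX schema; the package and license values are invented examples.

```python
import json

# An illustrative software bill of materials, in the spirit of SPDX.
# Field names are simplified for illustration, not the real SPDX schema.
sbom = {
    "name": "my-metaverse-app",
    "version": "1.2.0",
    "supplier": "Example Org",
    "license": "Apache-2.0",
    "packages": [
        {"name": "log4j-core", "version": "2.14.1", "license": "Apache-2.0"},
        {"name": "numpy", "version": "1.24.2", "license": "BSD-3-Clause"},
    ],
}

def licenses_used(doc):
    """Collect every license declared anywhere in the document."""
    found = {doc["license"]}
    for pkg in doc["packages"]:
        found.add(pkg["license"])
    return sorted(found)

print(json.dumps(sbom, indent=2))
print(licenses_used(sbom))  # ['Apache-2.0', 'BSD-3-Clause']
```

Even this toy version shows the two benefits above: the dependency list is explicit, and every license in the supply chain can be read off without inspecting the code.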
I would have said the killer app right now is games, but I just did a project for a real estate company here in Canada using drones to take all the pictures of the house you want to sell, and then using virtual reality to show it to people who may not be able to go see it: foreign investors, say, or, if I'm moving to Vancouver while I currently live in Toronto, I can experience the house without physically being there. And I think all of us can see the opportunity in education and healthcare as well. I worked a few years back on a remote hospital, with all the tools and devices so that I could be a doctor in Toronto actually looking at a remote patient, maybe up in the Yukon. So it has huge opportunity in these spaces, and it's going to grow. But what I have seen with a lot of technologies is that the proprietary solutions out there don't want us open source people knowing too much about what they're building. I would like to get that changed, at least a little, by getting these SBOMs for the Metaverse. Before I get into that, there's the other thing I can see, and I saw it with AI. When I started out in the AI space, I was building an AI platform; ultimately Microsoft bought it, which is good, I guess it was a startup success story. But I could see there were no real standards out there. The team I had was building the models, using car data, for self-driving cars. And for the last couple of years I've done a lot of consulting, reviewing what happens when an AI incident occurs, like Tesla or Uber actually killing somebody with their self-driving cars. When you do a deep dive to figure out why, it's scary, actually. In the Uber case, they didn't train their AI model on any diverse data; it was all white males of a certain age bracket.
So when the self-driving car was going down the road, it hit a woman and actually killed her, but it thought she was a building, as we found when we went and looked at all the logs to figure out why it had taken the actions it had. Now, with AI there was a ton of standards out there. When I came to IEEE, I think there were 1,200 standards on AI. To me as a developer, that's way too much; I'm not going to read all that. So what we're trying to do with the Metaverse is we've created what we call a Congress, but in Linux terms it's really a steering committee: a set of people who drive what gets documented in the standards. Currently we have working groups made up of a number of people from around the world, and they come in and define the terminology. One of the things I noticed when I joined the SBOM group was that the tools people were trying to define terms in their glossary for software bills of materials in different words. And I was like, why are we doing this? Somebody has already done this before you. I was able to share that with them, because those documents are public; terminology documents within IEEE are available to anybody. But the problem with IEEE, and I will try to fix it, though it will take a lot of work, is that we refer to these documents by names like Working Group 2048, and there's no way you would have a clue what that is. They have made a number of changes to the IEEE website, but finding the information you need is probably the number one problem with engaging with a standards group. After the terminology is set, and I'm not saying it's set in stone, because as you develop the standards, more terms come up, they go about writing recommended practice guides. They have a number of recommended practice guides for VR and AR right now.
Again, 2048.1 is how most people within IEEE refer to it. In the last month or two they've created two working groups that are going to start building the rules for metadata in the Metaverse: how you would document bias, how you would calibrate the goggles. These kinds of things will go into a standard that hopefully people will read, and when you're building an app of that nature, you'll go get certified against it. I personally was building software for 20-plus years, and I had never looked at a standard in my life. So there's that gray area: unless it's going to bring value to the developers or the organizations, why would I go look at some of these standards? The thing is, having been in that universe now for a few years, I can say there's a lot of expertise that goes into building these standards. You can see that expertise by reading them, and you can also see how you might get through an audit, because I think at least in the US, we all know the auditors are coming for a lot of this software, to figure out if it's safe. Within IEEE, we use "safe" to mean both cybersecurity-safe and ethically safe, because there's a lot of software coming onto the market that could do harm to humans. In IEEE, we tend to call people humans, as opposed to the digital representation in code. And for the humans themselves, ChatGPT is a great example: it can nudge you to do things you wouldn't really want to do if you knew what it was doing. For instance, I participated in a pilot with Amazon, and it really showed how much data they had on my online buying behavior. You can have recommendation engines, and we've seen those: they give you ads, and so on. But this one would actually put a product at my front door, and then I would decide whether I wanted it or wanted to give it back.
And it had no reference to anything I had ever searched for. They did a trial of sending me five products, and I actually purchased four of them. So they knew enough about me, and about ways to nudge me, to get me to buy things that weren't even on my radar. That is the kind of place where harm can happen. I was just chatting with some folks before this about ChatGPT. There's a good number of questions going into ChatGPT these days along the lines of "I'm feeling blue" or "I want to commit suicide". If you do that on a Google search, it gives you help; ChatGPT does not, it actually tells you the best way to kill yourself. That is actually harmful, and we need to be able to figure out these kinds of things, maybe as part of an SBOM, or through another document that IEEE is going to start pushing: an impact statement. For those from Europe: especially in the area of facial recognition, it's considered a high-risk technology or use. So how do you know whether a piece of software is high risk versus low risk? That somehow needs to be communicated to us humans. So that's the IEEE side. I would encourage you, if you have any interest in the Metaverse; the teams are very diverse. For a lot of years it tended to be hardware engineers, lawyers, DevOps-type people. It has morphed quite a bit into more software-related work, but psychology as well: people who understand the human brain and how it works. To me it's been a great experience, because I get to see other sides of a product beyond just the code that gets developed at the end. But the reason I put in an abstract is that I got a notice saying the Linux Foundation was creating an Open Metaverse Foundation. And I was like, oh my God, here we go again, we're all not going to collaborate together. So first of all, the key is awareness.
So we now both know we each have an effort going on, and we need to somehow collaborate; and there are other groups out there too. In fact, if you go to the Metaverse Standards Forum, they'll tell you all the different standards being written about the Metaverse. So understanding the landscape is crucial. But we need to feed the information from these standards groups into the tools that open source developers build. The Metaverse has a very complicated architecture, similar to AI. In the games that I wrote, we used blockchain to do the purchasing, the cryptocurrency-type exchanges, and so on. You have the AR and VR experience side. You have AI in the organism you're building. You have different kinds of relationships between the physical and the digital world. So there are all these different things going on. But when I was looking at this architecture, I thought, oh my God, SPDX has started to organize its stuff. As part of this conference, there was a session on SPDX 3.0, which is essentially the language used to build SBOMs, and it's starting to organize itself in a way that matches how AI and Metaverse architectures are built. With 3.0, we now have what we call an AI profile and a dataset profile. In an SBOM, you have fields that are mandatory, and then fields that are optional, which we're going to try to encourage suppliers to fill in, because they give the consumer, or another development group, insight into what the software is all about, including the licenses. One more thing with AI: I'm in the biomedical group, and it uses hardware, and there are things that need to be documented somewhere about the hardware you use with AI.
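The profile idea can be sketched roughly like this. The split into core, AI, and dataset chunks follows the talk, but the specific mandatory field names below are my illustrative assumptions, not the actual SPDX 3.0 schema.

```python
# Sketch of SPDX-3.0-style profiles: the document is split into chunks
# (core, AI, dataset), each with its own mandatory fields. The field
# names here are illustrative assumptions, not the real SPDX 3.0 schema.
MANDATORY = {
    "core":    {"name", "version", "supplier"},
    "ai":      {"modelName", "modelType", "trainingProcess"},
    "dataset": {"datasetName", "collectionProcess", "license"},
}

def missing_mandatory(profile, entry):
    """Return the mandatory fields of a profile that the entry lacks."""
    return sorted(MANDATORY[profile] - set(entry))

# A supplier releases a model without its dataset; under a scheme like
# this they would still have to say how the model was trained.
ai_entry = {"modelName": "mortgage-risk-v2", "modelType": "gradient-boosted-trees"}
print(missing_mandatory("ai", ai_entry))  # ['trainingProcess']
```

The point of the split is that an app which ships no dataset simply omits the dataset profile, instead of dragging one huge core document around.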
With self-driving cars, you have lidar hardware. With one of the car manufacturers, we actually monitor the heart rate, facial expressions, and eye movements of the passengers to know whether the car is going too fast or too slow, to give us insight into how we should respond. All of this hardware needs to be calibrated in a certain way and have well-understood thresholds if you're building an AI application. So it's an optional field, but we've started to bring in devices, IoT devices. With the Metaverse, it's natural that there would be a package where you fill in the AR and VR hardware you're using and document it. For now, I suspect it will be an optional component, but over the longer term, as people get more familiar with this, we can change what is mandatory and what's not. And then, of course, avatars are another area in the Metaverse world: how did you create them? I have a couple of slides of ideas for the different fields that Metaverse developers should start to think about documenting along with their code. You can see that with the restructuring in the 3.0 release of SPDX, it comes in more sizable chunks; before, it was one huge core, and you had to document certain things while others were optional. Now, say I'm building an AI app of some sort, but I'm not going to release the dataset. With the large language models out there, you don't really have to release the dataset that goes with the model; you could just release the model. However, I will say that with any model that goes out, you have to explain how you trained it, so some aspect of the dataset will have to go with it. But as with everything we do in open source, all of these components, or SBOMs, can now live by themselves.
It's not one massive package. For example, I helped one of the banks in Canada build a model to predict who qualifies for a mortgage. I could change the dataset on that model; I don't have to package it all up so that it stays static forever and ever. That is the uniqueness of these Metaverse and AI apps: they're not so static anymore, like traditional software. There isn't a very formal release; it's just plop, it's out there. All of these things are going to have to come into play when you're documenting this. But again, why would you document this, if there's no value? Well, I'm telling you right now: ChatGPT in particular, and I'm sure all the LLMs are in this camp, uses a particular Python library that, very much like Log4j, somebody hacked into. Now every model out there that uses this library needs to be corrected; it needs a fix, or it needs to be investigated to see if it needs a fix. How are we as developers going to know that? Yes, we can do searches and scans, but as these apps get huge, that gets harder and harder. Having these SBOMs takes care of it. I was also able to convince the SBOM people to put in a field about standards. In the space I'm in right now, when I build a piece of software, it's probably of great interest to whoever is going to use it that I'm compliant with one of the auto-safety transportation standards, so that you know that risk has been taken out of the code base for your use. So, as I say: there's the core; there's now a build profile within SPDX; there's a security one that ties your vulnerabilities to the packages a little more tightly; and there's an AI one, a dataset one, and a hardware one. That's already there. From a Metaverse point of view, maybe there will be some tweaks we need to make.
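The Log4j-style scenario above is worth sketching: once apps ship SBOMs, finding every application that carries a newly compromised component becomes a field lookup rather than a source scan. The advisory and package data here are invented for illustration.

```python
# Sketch: matching an SBOM's package list against a security advisory.
# Advisory and package data are made up for illustration.
advisory = {"package": "log4j-core", "bad_versions": {"2.14.0", "2.14.1"}}

def affected(packages, adv):
    """Return the SBOM packages matched by the advisory."""
    return [p for p in packages
            if p["name"] == adv["package"] and p["version"] in adv["bad_versions"]]

app_sbom = [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.28.0"},
]
print(affected(app_sbom, advisory))  # the log4j-core 2.14.1 entry
```

Run over a registry of SBOMs, a check like this answers "which of our apps need the fix?" in one pass, which is exactly what was missing in the Log4j scramble.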
But I think these are the three core areas as we move forward, and hopefully some of you will join in with us. The whole extended reality profile will have to be there, the UI aspects are crucial for the Metaverse, and blockchain is another area. Myself, I've used blockchains more with financial apps in the past, not so much the Metaverse. So: thinking about what fields we would put in. By the way, these are Karen's ideas. We're going to create a working group, hopefully a Metaverse working group, to figure out exactly what we need to put into SPDX, and it will probably kick off in the next month or so. So expect to see from the Linux Foundation the ability to join in, and I would strongly encourage you to. These are the types of fields we could add to the SBOM, in particular around blockchain, that would be relevant to another developer wanting to pick up your software, or to government regulators, or whatever they're called, auditors, trying to figure out what's in your application. So these are some ideas for blockchain. And here are some for XR, extended reality: you would say what platform you actually used; you would describe the engine (hopefully more and more of these packages become open source, because a lot of them right now are proprietary solutions); you would put in some information about the rendering pipeline; and lastly, the tracking technology you used with your extended reality. With the user interface, again, I think this is a huge opportunity to start identifying the relationship between the human and the digital human.
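As one way to picture those proposed fields, here is a sketch of the blockchain and XR profiles as simple data structures. Every field name is either a proposal from the slides or my own assumption; none of this is an adopted schema, and the example values are invented.

```python
# The proposed Metaverse SBOM fields from the talk, sketched as data
# structures. These are working-group proposals, not an adopted SPDX
# schema; all field names and values are illustrative.
from dataclasses import dataclass, asdict

@dataclass
class BlockchainProfile:
    chain: str                    # ledger used for in-app purchases
    consensus_mechanism: str
    smart_contract_language: str

@dataclass
class XRProfile:
    platform: str                 # XR platform targeted
    engine: str                   # game/rendering engine
    rendering_pipeline: str
    tracking_technology: str      # e.g. inside-out head tracking

xr = XRProfile(platform="OpenXR", engine="Godot",
               rendering_pipeline="forward", tracking_technology="inside-out")
print(asdict(xr))
```

Writing the fields down this way makes the working-group question concrete: which of these should be mandatory, which optional, and which belong in a separate profile.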
This would be the area where you would put a lot of information about how your avatars are created in this space, which platform tools you used, and the different accessibility features you used, because these things are not currently in the SBOM core. So hopefully that gives you a feel. These ideas are really raw right now and will need to be refined. But the reason, as I say, is that I just don't want to see the mess we had with AI, where if you go looking for best practices, everybody who has ever written an AI application has their own: Google has one, Microsoft has one, IEEE has one. Which one do I, as a developer, adhere to? They've all got similar things in them, but with a standards group you really would like one way of doing it, so that your tools can be built. We heard a couple of keynote speakers on this, but the next phase of the goal is to get automated tools out there to do a lot of this, because too much of it is being done by a developer, a human, saying: this is what I had to do for the build, or, with AI, these are the tools I had to use to pre-process the data. We need to be able to pull all this information out automatically. With AI in particular, Google has done a great job with model cards. I don't know if folks are familiar with them, but a lot of the information can be pulled automatically if an AI app has a model card, and then you have an automatic way of getting the ingredients of your product. We also heard a number of keynotes about building SBOMs out of the CI/CD pipeline or the build process: you mark, this is a step that should go into the SBOM. The goal right now is to get that as automated as possible. As my final words, I'd say we really do need to get collaborative groups going.
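As a sketch of that automation idea, here is how a build step might derive the package inventory for an SBOM from the environment itself, using Python's standard importlib.metadata rather than hand-filled fields. The output shape is illustrative, not a formal SPDX serialization.

```python
# Sketch of SBOM automation: derive the package inventory from whatever
# is installed in the current Python environment, instead of filling it
# in by hand. A CI/CD step could run this on every build, or whenever a
# dataset-change trigger fires. Output shape is illustrative, not SPDX.
import json
from importlib import metadata

def installed_packages():
    """List every installed distribution as SBOM-style name/version pairs."""
    pkgs = [{"name": d.metadata["Name"], "version": d.version}
            for d in metadata.distributions()]
    return sorted(pkgs, key=lambda p: (p["name"] or "").lower())

sbom_fragment = {"packages": installed_packages()}
print(json.dumps(sbom_fragment, indent=2))
```

Because it reads the environment at build time, the inventory stays current even when there is no formal release cycle, which is exactly the problem with these constantly changing AI and Metaverse apps.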
In the past, standards groups and open source development haven't really worked together on a day-to-day basis. As I mentioned, I was one of the founding members of Red Hat, so I can see the importance of open source, but I can also see that at the production level there needs to be a way to audit some of this code and to follow standards where they exist, especially as software goes global, because not everybody has the same laws out there. It's getting harder and harder to build software that is global. I participate with the Chinese AI standards groups, and they have a lot of different ideas than we have in North America. Somehow that all has to get rounded up into how to build software for global audiences. But again, my heart is with tools; I started my career in the tools groups at IBM and Red Hat, and tools are key for the Metaverse and for understanding the complexity of these things. I've listed a few Metaverse apps out there; a lot of them are not open source yet. It would be great if that changed, very much like with the LLMs: maybe two months ago, there were only some weak open source ones. Now, as in the presentations I saw a few minutes ago, BLOOM is a perfectly good LLM, as is GPT-J, I think it's called. These are competitive open source models now. So as an open source community, let's start using them and making them even better. It might force some of these proprietary models to open source, or at least to adopt a license that's more usable, shall I say, than what they have right now, because for the most part they're trying to keep things a black box to the rest of the world. And with black boxes, it's very hard to know whether there are problems in the software or security breaches. It's all about transparency and making that known.
And then, as I say, if you want to help, there are two places I would love for some of you to join us. There's the SPDX group; they meet every week. You don't have to come every week, but be part of that community and put your ideas forward, especially on the Metaverse, because the developers in SPDX are typically tools developers working on hardcore operating-system-type tools. Having domain experience in AI and the Metaverse is what will add value to how these things get done. And then on the IEEE side, you can either contact me, or there are many working groups there. Now, thanks to a lot of prodding, you can get standards for free, at least around the Metaverse and AI. Hopefully that opening up of standards helps, because who wants to pay $2,000 for a standard? I sure wouldn't, and I never did in my career. But now that they're open, I can see their terminology and their ideas. One I will highly recommend: ISO and IEEE have created a new development lifecycle process for AI, and it's being extended for the Metaverse. There are different steps you need to be aware of; for example, there needs to be a development phase for training your model with data. As I was telling one of the developers: any data out there is what I refer to as dirty, and it has to be cleaned. Do not think you can use something raw; you just can't, and your models will give you predictions that are not valid. So there are different things you have to consider when building these types of apps. So far, and I follow the SEI, the Software Engineering Institute, as well, I haven't seen as in-depth a development process for people building these types of apps, so I would encourage you to go look. And thank you for listening to me; I'll open it up to questions. No questions? Then I'm going to take down all your names, and I'm going to be contacting you.
I'm just kidding. It's great that people want to learn about SBOMs and how they're used, because, as I was saying to some folks beforehand, about a year ago was the first time I ever heard what an SBOM was. And I thought, how come? I'm in this world, and nobody is talking about it. One of the Linux Foundation projects under OpenSSF is actually about getting SBOMs everywhere, so I suspect it's going to become a more common term. And again, the US government in particular (and I'm currently working with the EU as well) is going to require an SBOM for procurement of software. So folks, having an automatic way to generate these is the only way I would do it as a developer; I'm not going to sit there and fill them in by hand. One of the problems with AI and the Metaverse, and especially AI, is that the data changes. There isn't really a release cycle: unless your code fails for some big reason, and they pull it back and put out a new release, the app and the data are just constantly changing. So this idea of monitoring and creating new SBOMs on some trigger is going to have to come. Exciting times for all of us, and it will get the auditors off your back; I'm not big on having my software audited, so the simplest way to get it done is where I'm headed.

Audience: A couple of items I noticed. You've combined a Freudian slip with "Chat GDP", probably because you're in standards. In terms of ethical safety with IEEE in the use of open source, what are your feelings about autonomous weapons that are built on Metaverse systems? As you know, Palantir has just combined an LLM with a virtual kill zone.

Karen: So, I have a lot of feelings about that.
I mean, I forget which companies stepped away from the big autonomous weapons contract that the US government had. But I actually took on a contract to help test a drone that was using AI to kill terrorists. With AI apps, you're so dependent on the data that goes in. And I was hearing things (this is only between you guys and me) from some of the colonels and generals. I would say, you know, this has the risk of killing the wrong person; we had done the audit of the code. And they were like, oh, a few casualties is okay. It was very flippant to me, and I was just like, oh my God, I have to get out of this contract. They are getting better at being able to test, but with AI right now, it's extremely hard to know whether you're getting the right prediction. I also did an audit on the one firm that scraped the internet for all our faces that were on LinkedIn. Microsoft? No, wait: Microsoft did it and then said, guys, maybe this shouldn't be legal, what we're doing. After we did it. But there was somebody who did it and then sold it to law enforcement for tracking. I can't think of the name. Who was it?

Audience: Palantir. Yeah, the same company I just mentioned. It's also a Peter Thiel company, and also the main investor in OpenAI. You'll actually find that most of the people who are in charge of ethics are compromised in every possible way.

Karen: Yeah. So, I can see that it's needed; sending troops over to wherever to kill a terrorist is less risky for them if they use drones. But they're not following test and accuracy validation.

Audience: They're following Pareto principles.

Karen: Correct. And that's why, for me... I did that contract before I joined IEEE. I can even give you stories, because I worked, and I guess I still do, with three of the five core manufacturers of automated cars.
And what got me to go to a standards group was that I could see, you know, Mercedes-Benz had their developers make one set of decisions about who gets killed, and GM and Ford had totally different rules. And I was like, oh my God, as a pedestrian, am I going to have to know what make of car this is and what it's going to do? There have to be standards for this. And governments in general (it's not just the US; there are lots out there using drones and AI) need rules, and they need a valid way of testing their code. I don't know if you're a data scientist, but I managed data scientists for a lot of years, and telling me the precision is 80 percent or whatever: what does that even mean? I actually did an interview with the folks at OpenAI, and they told me their stuff was 10 percent accurate. And I'm like, what? Yeah, yeah.

Audience: Well, people also don't realize that Obama was behind the change to our amendments that allows you to be killed by a drone without having a court trial.

Karen: Oh my gosh, I did not know that.

Audience: You just need to be labeled as a terrorist. The sad thing is, I'm the head of cybernetics at the Machine Perception and Cognitive Robotics Laboratory in Florida, and they basically added weapons on top of open source systems. And so this is the state we're at now.

Karen: Yeah, I mean, it is a scary world, but unfortunately, humans are not so accurate either. So, you know, I'm a big believer in some of this stuff; it's going to get better.

Audience: The irony is that this Metaverse environment can be used to better help these systems kill those people.

Karen: Yes, yes, you're right.

Audience: No, I'm just saying the irony is that as we improve Metaverse systems, those exact same systems... well, first of all, the irony is that we have a missing set of standards for metaverses. So, for instance, the room that we're in is not defined in a metaverse.
But if you were to pass it to that gentleman, he would take the DWG files, the 2D floor plans, and the elevations, and create an actual building information model of this room. The natural transition is that this then gets fed into autonomous weapon systems so they can map the room, know where everything is, track people better, interact with them, and corner them. It would be these Metaverse systems themselves.

Karen: Yeah, I'm going to grab you to come work on the standards group with me.

Audience: Who's the chairperson for the standards group?

Karen: There are multiple. For the Metaverse one, currently it's the president of IEEE; he did his PhD in the Metaverse area, but he's looking for somebody else to take over. Yeah, I'd love to talk, and I can introduce you, but I think the next speaker is coming on, right?