Hello, everyone. Welcome to the CircuitPython Weekly for August 15, 2022. This is the time of the week where we get together to talk about all things CircuitPython. I'm Kattni, and I'm sponsored by Adafruit to work on CircuitPython.

What is CircuitPython? It's a version of Python designed to run on tiny computers called microcontrollers. CircuitPython development is primarily sponsored by Adafruit, so if you want to support them and the folks that work on CircuitPython, consider purchasing hardware from Adafruit.com.

This meeting is hosted on the Adafruit Discord server. You can join at any time by going to adafru.it/discord. We hold the meeting in the #circuitpython-dev text channel and the CircuitPython voice channel. This meeting typically happens on Mondays at 2 p.m. Eastern, 11 a.m. Pacific, except when it coincides with a U.S. holiday. In the notes document, there is a link to a calendar you can view online or add to your favorite calendar app. We also send notifications about upcoming meetings via Discord. If you'd like to receive these notifications, ask us to add you to the CircuitPythonistas Discord role.

There is a notes document to accompany the meeting and recording. It contains timestamps to go along with the video, so you can use the document to skip around and view only the parts that interest you most. The meeting tends to run 45 to 60 minutes, so this gives you the option to watch just the parts that matter to you. After each meeting, we post a link to the next week's meeting notes document in the #circuitpython-dev channel on the Adafruit Discord. Check the pinned messages to find the latest notes doc so you can add your notes for the following meeting. If you wish to participate but cannot attend, you can always leave hug reports and status updates in the document for us to read during the meeting.

This meeting is held in five parts. The first part is community news, which is a look at all things CircuitPython and Python on hardware in the community.
The second part is the state of CircuitPython, libraries, and Blinka, which is a statistical overview of the entire project. The third part is hug reports. Hug reports are an opportunity to highlight the good things folks are up to. The fourth part is status updates. This is an opportunity for us to sync up on what we've been up to: take a couple minutes and talk about what you've been doing since the last meeting, and what you'll be up to over the next week. And the fifth and final part is in the weeds. This is an opportunity for more long-form discussions. These discussions can come out of status updates, or be something you've identified ahead of time as too long for status updates.

And with that, we will get started with community news. First up, CircuitPython Day, Friday, August 19th. That would be this Friday. We have the final schedule available. I will not read the whole schedule off, but we have a panel discussion. We have development sprints. MakerMelissa is going to be doing a project build. We're doing a special edition CircuitPython-themed show and tell. Scott will be doing a CircuitPython 8 preview. There is going to be a CircuitPython Day chat with Kattni (myself), Jeff, and Dan. And Foamyguy is going to be doing a CircuitPython Day game jam stream.

A special note about the show and tell: the format is a little bit different, but the concept is exactly the same. Folks will have five minutes to discuss their projects, which is at least twice as long as the typical show and tell, because we have more time. So feel free to come by with your CircuitPython project and show it off, knowing that you will have the opportunity, if you would like it, to go into a little more detail than you would on the typical Wednesday show and tells.

There are two other events. One is Reimagining IoT Deployments with CircuitPython, from Blues Wireless.
And the other one is CircuitPython Night at i3Detroit, which is a makerspace in Ferndale, Michigan. They are doing a local CircuitPython event. You can check out the Adafruit blog for more details on this, and the final schedule is posted there as well. At this point, if you still have ideas, please let us know, but the schedule is pretty much set. So feel free to come by, and definitely, if you have projects you want to show off, join us on show and tell.

All right, next up: Python gains 2%, remains top programming language. Unstoppable Python once again ranked number one in the August updates of both the TIOBE and PYPL indexes of programming language popularity.

New browser-based micro:bit Python editor launching in September. The new micro:bit editor runs in a browser, so it's quite different to Thonny or Mu. It will be launched in September, but the beta is online now and can be used. There is a link to that in the notes.

Next up are a couple projects. A headband with a surprise LED matrix hidden inside, all programmed in CircuitPython. Here is a quote from the Twitter thread: "Made with Adafruit NeoPixel strips and an Adafruit QT Py and LiPo BFF. The diffusion layer is some black tulle wrapped in scrunched up layers, and I was really pleased with how it turned out." And finally, this is also a quote: "My little Pico step MIDI sequencer is getting better. Now you can save and load sequences while running and without missing a beat."

This community news section is a preview of the CircuitPython Weekly newsletter, which is a community-run newsletter emailed every Tuesday. The archives are available at adafruitdaily.com/category/circuitpython. It highlights the latest Python on hardware related news, including CircuitPython, Python, and MicroPython developments. To contribute your own user project, you can edit next week's draft on GitHub at github.com/adafruit/circuitpython-weekly-newsletter
and submit a pull request, or you can tag a tweet with hashtag #CircuitPython on Twitter, or you can email cpnews@adafruit.com. And that is community news.

Next up, the state of CircuitPython, libraries, and Blinka. This is a statistical overview of the project, which gives us a chance to look at the project by the numbers and get an idea of where it's at outside of what we're all up to. I will read the overall section, which covers the whole project, and then we'll talk about the core, the libraries, and Blinka individually after that. So overall, we have 57 pull requests merged by 22 authors. A few names I've not seen before are TC Franks, Takayoshi Take, RetiredWizard, Strider21, Cis, Sakura Teesfuss, and C. Co. Seagit. The rest of the names I recognize. And we had eight reviewers on those 57 pull requests, which is great. There were 45 issues closed by 10 people and 21 opened by 18 people, so we are down 24. I imagine that was almost entirely in the core, but I could be wrong. But that's where we are overall. And with that, Scott, if you're available to talk about the core, I will turn it over to you.

Yes, let me stall while I find my tab. Okay, numbers for the core. We had 34 pull requests merged, which is a lot, so awesome. We had 16 different authors, so thank you to all of our authors; I won't read off the individuals. We had five reviewers, so thank you to all of our reviewers. We have 16 open pull requests, and the oldest is 186 days old, which is better. The 300-plus-day-old one I closed and implemented, so that's cool. I will reiterate that a lot of these open pull requests are for specific boards; they have the board label on them. If you have those boards, please drop in and test and maybe polish up those pull requests, because those of us working on the core have access to Adafruit boards, but not necessarily third-party boards. So that's a hugely helpful way that people can help.
Issues-wise, we had 25 issues closed by five people and 14 opened by 12 people. So we're net down 11, which is awesome, for a total of 551 open issues. The way that we keep track of priorities for Adafruit-funded folks is through milestones. We have 35 open issues on the 8.0 milestone, which is down like 15 from last week, something like that. So we're making good progress; trying to get to 8.0 beta this week for CircuitPython Day, I think, would be cool. We have four issues not assigned to milestones, so those are the ones that we need to triage. And we have no open issues for 7.3.x, so it seems like 7.3 is doing pretty well. And that's the current state of the core.

Thanks, Scott. Next up, I will talk about the libraries. This applies to all of the Adafruit CircuitPython libraries, which is everything beginning with Adafruit_CircuitPython_, as well as a few extras such as the community bundle and our cookiecutter. Across all of these repositories, we had 23 pull requests merged from seven authors and five reviewers. I'm very excited to see that six of the closed pull requests were nearly two weeks old or older, with up to three of them being a month old. So I'm glad to see we're still getting through some older PRs. That leaves us with 27 open pull requests. We had 20 issues closed by six people and seven opened by six people, leaving us with 666 open issues. 175 of those are labeled good first issue. If you are interested in contributing to CircuitPython on the Python side of things, check out circuitpython.org/contributing. You'll find all of the open pull requests and all of the open issues listed out. And if you are new to everything, check out the good first issues. We have a guide on contributing to CircuitPython using Git and GitHub, and we are always available on Discord to help. So don't let the process intimidate you; we always want to make sure that you can contribute in a way that works for you.
In terms of library updates in the last seven days, we had one new library, which was the MAX1704x. And in terms of updated libraries, literally every library was updated. I did not list five pages of libraries in the notes; if you're interested in seeing the whole list of libraries, check out the bundle or the library reports. And with that, I will turn it over to Melissa to talk about Blinka.

Hello. So Blinka is our CircuitPython compatibility layer for MicroPython, Raspberry Pi, and other single board computers. This week we had zero pull requests merged. There are currently four open pull requests. There were zero issues closed and zero opened, and we currently have 79 open issues amongst all the repositories. There were 10,508 piwheels downloads in the last month. And we are currently at 89 boards, although I think there have been a couple since then, so they just need to be updated. And that's where we're at.

Thanks, Melissa. All right. And that is the state of CircuitPython, the libraries, and Blinka.

Next up is hug reports. Hug reports are an opportunity to call folks out for the amazing things they're doing in our community and to recognize the awesome things that folks are up to. This section is held as a round robin. I will start, and then I will go through the list, reading off folks who are text only or missing the meeting, and turning it over to folks that are here. So with that, I will get started. I have a huge list today.

First up, I have a hug report to MakerMelissa for merging a time-sensitive PR when the other person who could help was out. To Tekktrik for moving all of the libraries from setup.py to pyproject.toml, and to Eva for doing the subsequent release sweep. Also to Tekktrik for fixing the Adabot library report when we were struck by a ghost. I wish it was a joke; it's not. It turns out if someone deletes their account on GitHub, it gets replaced with a ghost.
We had never encountered this before when checking the issues and the users on them and so on. The API returns None, but the user still shows up in GitHub as a ghost, so we had to update Adabot to be able to deal with the NoneType object. But still, struck by a ghost. Okay.

Next, to Ladyada for teaching me how to use the Nordic PPK2 Power Profiler Kit, and to use esptool to create a bin of the board contents. And now is my CircuitPython Day roundup. To Paul Cutler for all the work he's put into the CircuitPython Day panel discussion, and for agreeing to take on the official CircuitPython Day introduction. To Melissa for doing her first live stream. To Liz for hosting the special edition show and tell and handling all the prep work for that. To Scott for doing a CircuitPython 8 preview. To Dan and Jeff for joining me for another chat stream this year. To Tim for doing a game jam stream and for being so flexible about the timing when I was putting together a very difficult schedule. To Tekktrik for taking on hosting the CircuitPython dev sprint and for recording a sprint promo video with me. A group hug to everyone who agreed to help Tekktrik out with the sprint. To Anne for agreeing to keep the blog and Twitter up to date with everything going on as it happens. To Mr. Certainly for agreeing to help out and moderate in the chats during the streams. A group hug to everyone planning to join us on show and tell. And to Phil for making time last week to meet with me to finalize a few things. And finally, a group hug to the community for continuing to make all of this possible.

And next up is Charles. Sorry, you're muted, Charles. Here we go. How's that? Now I can hear you, go ahead. Okay. I just wanted to give a group hug to all those people who have created CircuitPython Day. I missed the one last year, and I'm really going to make sure I do not miss the one this year.
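As an aside on the ghost-account fix described above, the guard is conceptually simple. This is a minimal sketch, not the actual Adabot code; the function name and the user object shape are assumptions for illustration.

```python
# Minimal sketch: when a GitHub account is deleted, the API can return
# None where a user object is expected, even though the web UI still
# shows the account as "ghost". Guard before touching attributes.

def user_login(user):
    """Return the user's login, substituting 'ghost' for deleted accounts."""
    if user is None:  # deleted account: API gives None
        return "ghost"
    return user.login
```

Anything iterating over issue or PR authors would then call this helper instead of reaching for `.login` directly.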
It sounds like a good bunch of items to listen to and maybe even participate in. Thank you. All right, next up is Dan.

Okay, thanks. Thanks to Paul Cutler for organizing the CircuitPython Day panel; this has been repeated several times by other people here. Thanks to Scott for working full blast on 8.0 issues before he takes his paternity leave. Thanks to Lee, who's working on a bulk analog-in feature; we've been working with them on this. It sounds really interesting, and I appreciate their being patient with us while they learn all about the insides of CircuitPython. And thanks to the Adafruit internal developers who did a big update on the Adafruit forums, updating the forum software and the software that it depended on. So it will look a little different; if you have problems, let people know. You've seen blog posts about that, but it was a lot of work and we really appreciate it. Okay, thanks.

Thanks, Dan. Next up, I have notes from David Glaude: to Foamyguy for exploring the self-hosted runner idea in his stream, and to MakerMelissa for helping with the circuitpython.org pull request.

Next up is Deshipu. I'd like to thank Jeff for working on the camera code again, so we have a completely new one now, and for helping me yesterday with my janky camera setup. And I'd like to thank Tekktrik and Kattni for rounding up people for the sprint so that we can help run that. Thank you.

Next up is Foamyguy. All right, thanks. This week I have a hug report for David Glaude for sharing the ideas about self-hosted runners and pointing to some good resources to start learning about those. To Neradoc for creating and publishing the tool called discotool for finding connected CircuitPython devices. And echoing a couple other folks: thank you to Paul Cutler for preparing and organizing the panel for CircuitPython Day, as well as the other things on CircuitPython Day. And then a group hug for everybody. Thanks.

Thanks, Foamyguy. Next up is Kmatch. Thanks, Kattni.
I've got one hug this week, not necessarily CircuitPython related. This is a hug to Shawn Hymel for his introductory videos on the FreeRTOS real-time operating system. If anybody's interested, they're a great starting point; within an hour you can know a lot. Appreciate it. Thanks. Excellent.

Next up is MakerMelissa. Hello. I've got a short section here. To Neradoc for fixing the web workflow issue with emoji characters. A hug to Liz and Kattni for getting me set up and running with StreamYard. A hug to Liz for co-hosting show and tell with me, and a group hug to everyone else. Great.

Next, I have notes from Mark Ambler: a hug report to me for organizing CircuitPython Day, to K Stilson and IAmRedacted for their work on I2C target for the RP2040, and a group hug.

Next up is Paul Cutler. Kattni, I have a hug for you for all your work on CircuitPython Day, and a group hug. Excellent.

Next up is TammyMakesThings. Thanks. So I have a hug for you, Kattni, for organizing CircuitPython Day, and for Tekktrik for coordinating the dev sprints for CircuitPython Day, and then a group hug for the community. Thanks.

All right, next up is Scott. Hello. This is my last meeting in a while, so I'm going to try to thank forwards a little bit. First, thank you to Foamyguy for taking a crack at the on-device testing; it looks promising and is starting small, which I think is the right call. So I'm excited for that. Thanks again to Kattni for organizing CircuitPython Day; I won't be able to say that next week, so I'll say it now. Thank you to everyone for participating in CircuitPython Day; I think this is going to be the best one yet, so I'm excited about that. And just an early hug for everyone who helps keep CircuitPython going even when I'm out. It's amazing to see this community continue even when I'm taking leave. So thank you all.

All right. Next up, I have notes from Tekktrik.
First up, a hug for me for help with preparing for CircuitPython Day; to Foamyguy for reviewing some PRs Tekktrik had submitted a few weeks ago; to Foamyguy again for working on the memory usage quantification issue; to the volunteers that have agreed to help out with the CircuitPython development sprint; and a group hug. And rounding it out, I have notes from Tom F, who has a hug for me, Tekktrik, and Foamyguy for patience with my growing pains working on annotations contributions. And that is hug reports.

Next up is status updates. Status updates is an opportunity for us to sync up on what we've been up to since the last meeting and what we're going to be up to until the next meeting. It's an opportunity to provide tips and tricks for stuff people are working on and to help with quick questions. And remember, if a conversation ends up extending longer than makes sense for status updates, we can move it to in the weeds and continue on. I will start, and then I will go through the list in the notes doc, similar to hug reports.

Let's see. Last week, I learned to use the Nordic PPK2 to get power graphs for guides. Previously, to do the low power templates, I had to get that information from Limor; no longer do I have to do that. I started the ESP32-S3 TFT Feather guide; we apparently missed that one, so we are tucking it in now. And I continued CircuitPython Day planning. This week: continue on the S3 TFT guide. I need to update the I2S template to not use discontinued hardware. And I'll be meeting with folks throughout the week to finalize CircuitPython Day things as needed. Then next up is the STEMMA QT update for the quad alphanumeric display backpack, and after all of that is done, a quad alphanumeric display event countdown. And this week is obviously CircuitPython Day, so Friday is all things CircuitPython, live streams galore. And that's pretty much it, a shorter list than usual, because for this S3 TFT guide we actually put in all the templates we created.
And it turns out that's a lot more work than not adding all the templates. So this one's going to take me a little longer than usual, but it is definitely an indicator of how long it's going to take moving forward, because we created those templates to put into board guides. That was the whole point, so we should be doing it moving forward. All right, next up is Dan.

Okay, I've been working on a lot of 8.0 issues, so these are kind of miscellaneous. We've changed the I2C terminology: we have a module, i2cperipheral, and a class, I2CPeripheral. The official terminology, it turns out, was recently changed to controller and target. The controller is, say, the microcontroller, and the target is, say, the I2C sensor. This does not mean the I2C peripheral on the chip, which is the electronics that talks I2C. So we're adding I2CTarget as an alias for I2CPeripheral, or really it's kind of the other way around. Both names will be in 8.0.0; they point to the same thing internally. Then in 9.0.0, we'll drop I2CPeripheral, and the name will just be I2CTarget, to correspond with the latest terminology.

I restored rainbowio to a bunch of boards because we now have space for it, and restored a few more modules to some other boards because there's space. I increased the C stack size on Espressif boards; it was 8K and we made it 16K. That should fix some stack overflow errors. Particularly, the regular expression module uses recursive calls to do things, so if you have a long string, it tends to run out of stack space. And I enabled web workflow on the Feather HUZZAH32; that was just an oversight on my part when I did the original pull request for the HUZZAH32. In MicroPython, there was a PR which fixed some floating point printing and formatting idiosyncrasies, where you'd get a lot of zeros or the numbers would seem to be off by a little bit.
That was because it was doing floating point arithmetic internally when it could use fixed point to get more exact printing. So we cherry-picked that in from MicroPython, instead of waiting for the next time we merge from MicroPython, because this is important and a nice fix, and it's simple to bring in.

In displayio, I removed support, internally and externally, for auto brightness, because we sort of had latent support. The original idea was that there would be a sensor on boards and it would adjust the brightness automatically, but it was never really implemented, and it caused various problems, so we just took it out. And then another thing that we're doing for 8.0.0 is to take out passing a PWMOut to PulseOut. Now you can only pass in a pin. In 7.0.0, you could do either, so we took the PWMOut form out in 8. Scott had a PR for that, and I went through the Learn guide and library code and fixed all of them, so they're 7.0.0 and 8.0.0 compatible now instead of being back compatible to 6.0.0. And I will continue to fix 8.0 issues. We're doing really well in terms of fixing things, down from like 50 to 35 issues, so we're getting a lot closer to 8.0.0, which is nice. Okay, that's it.

Thanks, Dan. Next up, I have notes from David Glaude, who says: in CircuitPython, I fixed "SEEED" in capital letters to "Seeed Studio" and the new naming for the XIAO board on circuitpython.org. Non-CircuitPython: tested WipperSnapper on an ESP32-S3 TFT to capture data from my SCD-30, which is a CO2 sensor.

Next up is Deshipu. Okay, so I worked a little bit on the PNG support for the imageload library, and I ran into problems with type annotations. I need to read a little bit about type annotations in Python to be able to figure out how to actually properly write those, so this will take some time and there will be a delay on that. And also, the gesture sensor I was working on I still have some problems with.
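Stepping back to Dan's note about PulseOut, the 7.0.0/8.0.0-compatible form can be sketched like this. Since board and pulseio only exist on-device, the module is passed in here so the pattern can be shown off-device; the helper name and the 38 kHz IR-style values are illustrative assumptions, not anything from the meeting.

```python
# Sketch of the PulseOut change. The 6.x-era form wrapped a PWMOut:
#     pulseio.PulseOut(pwmio.PWMOut(pin, frequency=38000, duty_cycle=2 ** 15))
# 8.0.0 keeps only the pin form, which also works on 7.0.0:

def make_pulseout(pulseio, pin):
    """Construct a PulseOut the 7.0.0/8.0.0-compatible way: pass the pin."""
    return pulseio.PulseOut(pin, frequency=38000, duty_cycle=2 ** 15)
```

On a real board this would simply be `pulseio.PulseOut(board.D5, frequency=38000, duty_cycle=2 ** 15)`.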
It works in full light, but it doesn't work, or works very poorly, in artificial light, so I still need to figure out how to properly set the registers on it to make it more reliable. That's it. All right, excellent. Next up is Foamyguy.

Right. I had limited activity in the early part of last week; I was still out on vacation. Once I did get back into the swing of things, though, what I worked on was mostly things centered around memory quantification. So I have a couple of CPython scripts that measure the size of MPY files and also the strings contained within those MPY files. You can see those get printed out automatically for each PR or push or anything like that. I also created some other scripts, which are also CPython; they run on a PC, but they connect over serial to a CircuitPython device that's plugged in, and they measure the amount of memory that's consumed when you import a specific library, using the GC module. I made a proof of concept that would trigger that memory measurement to happen from a WebSocket, so a client would connect to a WebSocket, just wait for a new trigger to come in, then do the measurement and send the result back. And then the next day after that, I tested out kind of a different approach to that using self-hosted runners for GitHub Actions, which will allow us to do those memory measurements but not really need the WebSockets or the same stuff in between.

This week, so far this morning, I've done a lot of PR reviews, and I've got a stack of others for testing this afternoon with some devices that I need to pull out. I need to take one or two more photos and then get them into the octopus guide and submit the final version today.
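The GC-based measurement Foamyguy describes could look roughly like the sketch below. The function name and structure are assumptions, not his actual scripts; the mem_free reader is passed in as a callable so the pattern is testable off-device, whereas on CircuitPython it would be gc.mem_free.

```python
import gc

def import_cost_bytes(module_name, mem_free):
    """Estimate heap bytes consumed by importing module_name.

    mem_free is a callable returning free heap bytes; on CircuitPython
    that would be gc.mem_free. Collecting before and after keeps
    uncollected garbage from skewing the numbers.
    """
    gc.collect()
    before = mem_free()
    __import__(module_name)
    gc.collect()
    after = mem_free()
    return before - after
```

A PC-side script would send this over serial to the device and read back the resulting number.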
I am going to continue working on some ideas around on-device testing and memory measurement this week, and then the other thing I know of so far is going to be working back in hack tablet land, specifically trying to troubleshoot the dot-clock display core issues that pop up when the device resets. And then, of course, CircuitPython Day is on Friday, so I'll have my stream for that. That's what I have. Thank you.

Next up is Kmatch. Thanks, Kattni. Last week I mainly worked on work and caught up with where I need to be there, and also, related, learned some about ultra-wideband positioning and a related real-time operating system to work with that. This week I hope to get back into the bowling training aid project, now that I got some sonar sensors, so I want to test if those can detect the position of a high-speed passing bowling ball. We'll see how it goes. Thanks. You are welcome. Thank you.

Next up is MakerMelissa. Hi. This last week I finished up the second phase of adding web workflow functionality to code.circuitpython.org by adding a huge update that adds all the essential stuff, and that is now merged in. I tested out the Arduino RA8875 example code for a user to verify that it's still working. And then I started working on the third phase of adding the web workflow to code.circuitpython.org. I also co-hosted my first show and tell on Wednesday. This week I'm going to be preparing for my first ever live stream this Friday by working on the code for the projects that I'll be showing. Also, I'm preparing by making sure my computer setup is working well, though I may need to have a backup in case it isn't working again, like it hasn't really been since Wednesday. I'll possibly do a little work here and there on code.circuitpython.org if I find some quick things. And I'll be co-hosting show and tell again, but this time in more of a greeter role.
And I'll be actually doing the live streaming project for CircuitPython Day on Friday at 1 p.m. Eastern time. Other than that, I've been finally walking for the first time over the last week since my surgery a couple of months ago, and that's pretty much made my energy levels plummet to almost nonexistent. And that's where I'm at. Good to hear that you're walking, though. Thank you.

All right. Next up, I have notes from Mark Ambler. He says: submitted a PR for I2C target on the RP2040; helped some community members, K Stilson and IAmRedacted, mentioned in the hug reports, work on this functionality. I have done basic tests only, as I did not have a specific use. And I found a small bug in the IS31FL3741 code that will get a PR eventually; it does not affect the glasses, only the matrix, so it's less likely to come up.

Next up is Paul Cutler. Thanks, Kattni. Last week I prepped the next two episodes of the podcast, with Brent Rubell and Deshipu, so look for those over the next few weeks. I finalized all the panel questions and wrote the first draft of the kickoff, and that's what I'll be focusing on this week: working on the script to kick off CircuitPython Day. Thanks. Excellent.

Next up is TammyMakesThings. So last week was performance appraisal week at work, and I had to do performance appraisals for all of my teams, so I didn't do any CircuitPython stuff. This week is CircuitPython Day, and so that's what I'm going to be working on. And, not CircuitPython related, but I'm excited to have been teaching myself baritone ukulele. I'm at the point now where I need some professional guidance so that I don't create bad habits in myself. So my first lesson with an actual music teacher is this Friday, and I'm excited about that. And that's what I've got. Very cool.

All right, next up is Scott. Hello. Like I said before, this is my last week before 12 weeks of paternity leave, which puts me out into like mid-November, so be aware of that.
I'm also out Thursday, because technically my partner goes back to work on Thursday. She's taking Friday off so that I can do CircuitPython Day, and I'm taking Thursday off so she can go to work on Thursday. So let me know if you want to chat before I'm out; otherwise, I'm going to try to disconnect as much as I can. On CircuitPython Day, I'm helping with the core sprints and streaming the CircuitPython 8 preview. I'll need to come up with a list for that. I need to add a move API to the web workflow; that's just about the last little piece, I think, that's definitely missing. I want to update TinyUSB, because there was a bug fix for nRF that I'm very curious to see if it'll fix a bug I've seen on nRF for years now. I'm going to get to testing code.circuitpython.org today; I'm very, very excited about that. Thank you, Melissa. And then I'm just going to bug fix as much as I can before I'm out, which is not that long. So we'll see what I can do. That's my plan.

Finally, I have notes from Tekktrik. Last week: migrated all the libraries to pyproject.toml. Some neat things came alongside the migration. For all libraries that define the __version__ attribute, it can be accessed with libname.__version__, such as adafruit_bme680.__version__, which hopefully will help with support and debugging. This wasn't the case beforehand if the library was installed from PyPI via pip. Problem matchers are fixed for the CI, so the CI should give clearer, more recognizable responses when failing, which should help alleviate some of the parsing reviewers and submitters have to do currently. Requirements now have a single home in requirements.txt, instead of two places like before. Optional dependencies only used in examples or optional features, such as Pillow, can now be put in the new optional_requirements.txt. These can be downloaded using pip install libraryname[optional].
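The __version__ access Tekktrik describes is handy for support threads. Here is a small hedged sketch; the helper function is illustrative, not part of any library.

```python
import importlib

def library_version(name):
    """Return a module's __version__ if it defines one, else None.

    Useful for debugging: after the pyproject.toml migration, installed
    CircuitPython libraries should report a real version string here.
    """
    module = importlib.import_module(name)
    return getattr(module, "__version__", None)
```

On a machine with the libraries installed, something like library_version("adafruit_bme680") would return the installed release string.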
Pure Python wheels (built distributions) are uploaded to PyPI, which should cut down on install time, since packages won't need to be built from source distributions after downloading. Source distributions are still uploaded for redundancy. Other things: submitted a PR for Adafruit IO updates, fixed the Adabot reporting errors, and submitted a draft for a fix to the circuitpython.org library infrastructure issues list, now pending a few more additions and getting it to pass CI. This week: Neradoc raised a good point that the current version string that gets replaced isn't PEP 440 compliant, so fixing that would help with manual and editable installs of the repository via pip. Also, finalize the library infrastructure fix, and host the CircuitPython Day sprint. Come and join and hack away at issues.

And that is status updates. Next up is in the weeds. In the weeds is an opportunity for more long-form discussions. I see we already have two topics, which is excellent. The way it goes is I will turn it over to the person who posted the topic, and they can talk about it, and then other folks can jump in to help out with answers and so on. If you have an in the weeds topic, please get it added while we're talking about the other topics, so we're not waiting around at the end, which is to say we won't wait around. So add your topics if you have them. First up, I'm going to turn it over to Foamyguy to talk about his in the weeds issue.

Thanks, Kattni. So I have been looking into, generally, the idea of on-device testing. Specifically, I'm starting with looking at how much memory a library takes, but we can of course branch out from there. And I've come to a bit of a fork in the road: we have two pretty high-level options for ways that this could work, so I wanted to describe them both and what I understand about the pros and cons, and see if anybody has any opinions on which way might make the most sense for us.
So one of the ways that I got this working was with GitHub Actions self-hosted runners. Basically, GitHub allows you to use your own PC or Raspberry Pi as an actions runner: you download a thing, you run a little script, and then essentially your computer is eligible to receive tasks that need to get executed inside of GitHub Actions. You can set it up such that only certain specific tasks will go to it, and not everything; in our case, the tasks that deal with on-device testing would be routed to these self-hosted runners, whereas all the normal stuff we do today, like building and building docs, would just keep using the existing VMs inside GitHub. This was pretty straightforward to set up, so in terms of pros, I did feel like it was pretty easy to get up and running. I got this tested on a Raspberry Pi with a CircuitPython device connected, and I was able to get an actions task to successfully interact with that device and spit out some information that it got from it. The infrastructure to do this already exists, obviously: we're already using GitHub Actions, we already have a bunch of stuff set up with it, adding your own Raspberry Pi or PC isn't too much more work, and it relies on that existing GitHub infrastructure. It's also very versatile. Those of you that are familiar with actions probably know that actions can do almost anything: you can set it up to just run arbitrary Linux commands, or you can clone other repos to have it run more specific actions, and stuff like that. So there's plenty of opportunity to branch outwards and evolve what specific tests we want to do on the devices as time goes on. I can start very basic and never worry about really running into a ceiling as far as what we can do on the devices, which is definitely really nice.
In terms of cons, though: I was reading around in the documentation a bit for these self-hosted runners, and it looks like GitHub really strongly recommends not using self-hosted runners except on private repositories, because pull requests, which can be made by members of the public, can trigger the actions to run. In particular, pull requests could change the actions and then still automatically trigger them to run in some cases, meaning that somebody could essentially execute arbitrary code on that self-hosted runner. In the existing infrastructure this is also true, but because those are VMs that disappear after the action is done, the risk is mitigated in that way. With a self-hosted runner, you can try to take steps to isolate it on the network, or isolate it as its own OS without other stuff on it, but it doesn't get deleted in the same way directly afterwards, so there's always some risk of it doing something and additional stuff remaining behind. One thing that I thought of: I think we had this issue pop up before, where we were having some folks put in pull requests to run malicious actions code. And I know at one point in time, at least, there was a thing where you had to approve the actions workflow if it wasn't from an existing contributor, or a member of a group with write access, or something like that; it would wait for you to approve the actions. I don't know if that's on across the board, or if GitHub does that automatically at certain times and not others. But that's something to look into and think about. If we do go this route of self-hosted runners, I would think we almost certainly want to do that, so that only existing contributors can trigger those self-hosted actions to run automatically.
So that's one bit of food for thought on that side of things. The other option, which gets us away from using the GitHub Actions self-hosted runners directly (we'd still trigger our stuff via GitHub Actions, so that portion of the workflow still works the same), is that instead of having self-hosted runners, we could create a custom piece of infrastructure that acts as a go-between between the actions tasks and our device testers, those Raspberry Pis or PCs that we hook up. We could build our own server that sits in the middle and basically waits for HTTP requests to come in. Those requests would come in from GitHub Actions, and then they can be forwarded along to the device testers via WebSockets or long polling; the server sends that trigger down to the device tester, which can then execute the test, the measurement of memory or whatever else, get the result, and return it back to the server, which can ultimately return it back to that initial requester. The pro on this side is that it's much more restricted: it's not open to just arbitrary commands of whatever you want to put into it. We basically lock it down so that only certain triggers are possible, and any other kind of trigger that somebody tries to send to it just gets ignored, so it will only ever do the specific things we set it up to do.
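As a rough sketch of this restricted go-between idea, the key property is a fixed allow-list of operations, with anything else rejected. Everything here is hypothetical (the command names, the JSON shape, the port); a real relay would also authenticate requests and actually queue work for the device testers.

```python
# Hypothetical sketch of a restricted relay server: it accepts only a
# fixed set of named test commands and ignores everything else, so a
# compromised requester cannot run arbitrary code on the device testers.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# The only operations the device testers will ever be asked to perform.
ALLOWED_COMMANDS = {"measure_memory", "mpy_size", "import_check"}

def validate_request(payload):
    """Return the command if it is allowed, else None (request ignored)."""
    command = payload.get("command")
    return command if command in ALLOWED_COMMANDS else None

class RelayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        command = validate_request(payload)
        if command is None:
            self.send_response(400)  # anything unexpected is rejected
            self.end_headers()
            return
        # A real relay would queue `command` for a device tester here
        # (via WebSockets or long polling) and wait for the result.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(json.dumps({"queued": command}).encode())

# To run the relay: HTTPServer(("0.0.0.0", 8080), RelayHandler).serve_forever()
```

The design trade-off discussed above shows up directly in the code: adding a new kind of test means touching ALLOWED_COMMANDS and the dispatch logic, which is exactly the maintenance cost of this approach.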
Which is a pro from the security side, but of course kind of a con from the evolving side of things, right? Because then, if we want to add new functionality, new possibilities, we probably will have to do some work inside of that infrastructure layer as well, to add the new type of command or whatever it is that we want to add on to it. And then of course the other major con is that it doesn't exist today, as far as I know, so we'd have to build whatever this in-between thing is, and of course maintain it and keep it running, probably hosted on a server somewhere. So I wanted to share the findings I've got so far and see if either one of those makes any more sense from the whole project's perspective. Let's see, Tekktrik says: can we have those workflows be approval-only, save sending jobs that don't affect memory size? That's a good question, yeah. And I'm not positive I have the right terminology, so forgive me if I don't; I'm going to say a task, but maybe what I mean is a job or a workflow, truthfully I'm not super clear on what the different layers are. But if there is a thing that exists today that lets you limit it to be approval-only, we could do that. It just depends on how that approval works, because it's kind of a catch-22: if that is configured inside the workflow file, they can of course just change the workflow file, push to a branch, and trigger the PR to run it. So it would need to be in settings, or somewhere inside GitHub that the user wouldn't have access to. I believe that's possible, and I think that would be best. I think the self-hosted runner seems like the much easier option. Certainly, the less work we have to do building this in-between thing, maintaining it, and hosting it somewhere, the better.
It would be pretty helpful, but I do think this effort ultimately depends on members of the community being willing to plug in their devices and set them up to be one of these runners for us, so I definitely want to make sure that we're not going to be putting those people at risk in that situation. Yeah, Tim, I do have a suggestion for that. I've been working with a big RISC-V cluster in China, and one of the things is that we cannot use the self-hosted runner code from GitHub, because that thing is written in .NET, and Microsoft is still not interested in porting .NET to RISC-V. So while I was looking for alternatives, I found the one that I put in the chat, which is called GARM, and that's actually written in Go, I think. One of the cool things is that you can actually invoke your own containers based on LXC, and there's also the infrastructure and the examples on how to run whatever type of container you prefer. And the good thing is that this actually works like the GitHub internal actions: you create a container, the container runs all of your code, and then it disappears, and when you run another job a new one gets created. So it's easier on the security side of things. You still need to put at least some protection on the outside of the network, so that you don't create a denial of service or a privacy disclosure of your network or something like that. But normally, for running code, it's nicer. So yeah, this example works more than fine, and that thing is like a complete tutorial, but if you need more help, I'll be happy to help you set one up. Okay, yeah, that's awesome. I will definitely look into this. Just to make sure I understand: it creates these containers; is that inside of a local machine, so you could run this on your local PC or something like that? Yeah. Okay.
Yeah, so let's say you have an Ubuntu machine, and that Ubuntu machine will create another container inside of it, called an LXC, which is, let's say, a step in between Docker and an almost-chroot. They're very tiny containers, even tinier than Docker, let's say. But yeah, some people have already run it with Docker containers or whatever, so it's really easy to run something on Debian, or on SUSE, or on whatever you want, as long as there's already a created container. And if you need to create a specialized container, it's as easy as creating Docker containers. So yeah, it's very easy to have it. Okay. Awesome. Yeah, I'll definitely have to do some reading up on it. I don't have too much experience with Docker and the containerizing aspect of it, but I'll definitely look into that. And that does sound like it would be ideal, if we can have it work the same way, where it creates the container and destroys it. One potential gotcha is we do still have to make sure that our container can have access to USB, basically the serial connection to the device, since ultimately that's what we're trying to interact with. But I guess there probably will be some way that could be passed through. Yeah, all of that is actually completely easy, giving it the permissions. So yeah, just send me the task: I need a container that does this and this and this, and I'll be happy to create it for you. Okay, awesome. Yeah, thank you. I really appreciate that. Let's see, David also asked one other question: basically, do we want members of the community to offer the runner by dedicating a device to it, or have only Adafruit host the infrastructure? A consideration they'll have to deal with is onboarding, saying what's connected to the runner. Yeah, which is definitely a good question.
I had imagined it, at least to start with, being just members of the community and members of the development team, probably. I think a lot of us probably have extra Raspberry Pis or computers that we could just leave connected and turned on to run those, but I do think there is a possibility down the line where it might make a difference for Adafruit to own, or at least maintain, one setup of that infrastructure. But I figured, to start with, it would probably be a community type of thing. Let's see, Keith also says: what about a different approach? The pull request workflow works as it does right now, but then once a week jobs run over everything and raise an issue if a recent pull request expands memory beyond the threshold. This would alleviate the need to have the job always ready to run, hosted on a third-party machine. But maybe it's an entirely different idea, and more of a distraction than it is helpful. I have thoughts on that front as well, Keith. It is a good idea, and it's something to keep in mind: basically scripting a bit so that, instead of trying to do this measurement or testing right whenever the pull request gets created, we have some schedule. Adabot already runs a series of reports daily, and I think a different series of reports weekly or something. We could add a report to that which basically checks all the libraries, runs the stats on them, and outputs a report somewhere, in which case we could pick it up after the fact rather than right when the PR is submitted. I do think something like that would be helpful for the existing libraries. But I do wonder: I think there will be times for PRs where, like David's mentioning, it might be too late. Somebody puts in a PR and it gets reviewed and merged before that weekly thing runs, or even daily, if we run it daily.
Before that scheduled job to do the testing runs, essentially. But I definitely think there is value in running a wholesale report against all the libraries, in addition to the work I've been doing trying to get a measurement whenever a PR is submitted. I think both of those can definitely be helpful to us. So, from my side: what is your goal of doing it on-device? Did you compare the MPY usage stats versus the on-device stuff? Yeah, so far I am just trying to print that information out, but at some point we could add filters into the actions that say, if it rises by more than X amount, then trigger the actions to fail or something. But yeah, I have it printing the actual size of the MPY file, the size of the strings within the MPY file, and then the latest one is basically doing an import on the device in the REPL, getting calls from gc.mem_alloc() before and after, and returning the result of subtracting those. Right, so we're able to compare the size of the memory that it consumes on the device, or the amount that it thinks it consumes, to the size of the MPY file. Yeah, I think my feedback would be that for this particular problem of MPY size tracking, I don't think having an on-device testing setup is what you want, because it's a huge pain. It's a huge pain to maintain devices and/or a server like that. I would really push for going toward tests like the MPY stuff, that we can automate and roll out and understand, before we think about doing on-device stuff. And I also think that on-device testing has a lot of benefits, but those benefits are really in things like: if we had a standard set of sensors that we were connected to, and we actually did want to exercise hardware peripheral stuff, I think that's where a lot of the benefit of on-device testing could be. So I guess I'm saying that this sounds really cool, and I would definitely go self-hosted actions runner if we do it.
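The before/after measurement just described can be sketched in a few lines. Note that gc.mem_alloc() only exists on CircuitPython/MicroPython; this version falls back to tracemalloc so it also runs on desktop Python, and the imported module is a stand-in for the library under test.

```python
# Sketch of the on-device memory measurement: read allocated memory
# before and after importing a library; the difference approximates the
# RAM the import consumed.
import gc

try:
    mem_alloc = gc.mem_alloc  # CircuitPython / MicroPython
except AttributeError:
    # CPython stand-in so the sketch runs anywhere
    import tracemalloc
    tracemalloc.start()
    mem_alloc = lambda: tracemalloc.get_traced_memory()[0]

gc.collect()
before = mem_alloc()
import json  # stand-in for the library under test, e.g. adafruit_bme680
gc.collect()
after = mem_alloc()

print("import consumed roughly", after - before, "bytes")
```

On a real board the test harness would run this in the REPL over the serial connection and read the printed number back, which is essentially what the proof of concept does.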
I do want to point that out as well, but I think there's a lot of value in doing the MPY analysis, automating it, and integrating it into our workflows before we spend too much time on on-device testing. Okay. In terms of that side of it, what is the best next step? Because I do have a proof of concept that will print the MPY file size; does it make sense to PR it to, like, cookiecutter or somewhere? Yeah, I think having an issue or PR on cookiecutter is really good. I think you're like me: the technical stuff is really fun, but the rollout is the tedious work. But I think a PR to cookiecutter, and starting to get it onto the repos so that we can put up that roadblock based on MPY size, would be really, really helpful. And I think in a lot of cases it'll catch what we're talking about. Would it make the most sense to just have it print into the actions output for now, or do we want to have it try to leave a comment or anything? I don't know, basically, the best way for it to report its results. I think, for me, the thing I pay attention to the most is the thing that tells you: no, you're too big. The thing that actually stops you. Any sort of information is nice to have, but the reality needs to be: you're really growing this library; are we sure that we want to do that? That sort of thing. I would say, so that it's easiest on contributors: whether you want to put it in the actions workflow is fine either way, but I think making a comment is also excellent, because not everybody knows how to read through actions, or knows to go to it even when it fails. Right. So if you were to have it also post a comment, I think that would be super important for ease of contributor interaction.
Yeah, and then the reviewer can be the person that says: hey, can we get this down? Yeah, exactly. So just pure info, no suggestions; let the reviewer make those suggestions. Yeah, I will get started working down that front, then. I'll get a PR over on cookiecutter, look into doing the comments and stuff like that, and work on at least a first stab at the logic to say: if it grew by X amount, then raise it as more important, or fail the action, or something like that. I think he's right; I think a comment's enough, because you do have human reviewers. If it's just a GitHub bot that puts up a thing saying, hey, by the way, you're adding 200K or 200 bytes of MPY size, and it's mostly in strings or something, that can be a huge heads-up. Yeah, so I think this on-device testing stuff is really interesting, and I think for the long-term health of the project we're going to need to do it. I just want to make sure that we're doing it in stages, starting with the stuff where we don't have to maintain infrastructure. I went down that road in the first year of CircuitPython, and it found some bugs and it was helpful, but it was a huge pain. I will say the process of the self-hosted runners actually made it pretty straightforward; I was very surprised and impressed with how easy that made it to set up, and with some of the possibilities surrounding that. Yeah, and I wasn't doing library testing, I was doing CircuitPython testing, so I had issues with things like Linux USB reliability, because I was resetting devices and stuff like that. Quite familiar with that, yeah. So I would say this stuff's cool, and I definitely think self-hosted runners are the way to go in the long term, but for the short term, let's focus first on getting the stuff going that doesn't require us to run separate infrastructure.
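The grew-by-X-amount check being discussed could look something like this. The sizes and the 200-byte budget are made-up numbers for illustration; a real CI job would read the sizes from mpy-cross output for the main branch and the PR branch.

```python
# Hypothetical sketch of a .mpy size budget check for CI: compare the
# compiled size on the main branch against the PR branch and flag the
# pull request when growth exceeds a budget.
GROWTH_BUDGET = 200  # bytes of .mpy growth allowed before flagging

def check_mpy_growth(main_size, pr_size, budget=GROWTH_BUDGET):
    """Return True if the PR stays within the growth budget."""
    growth = pr_size - main_size
    print(f".mpy size: {main_size} -> {pr_size} bytes ({growth:+d})")
    return growth <= budget

# Pretend sizes for illustration:
ok = check_mpy_growth(main_size=4096, pr_size=4500)
print("within budget" if ok else "grew beyond budget: flag for the reviewer")
```

Per the discussion, the result would feed a PR comment rather than hard-fail the job, leaving the final call to the human reviewer.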
Okay, sounds good. Yeah, I think that covers my question. Excellent, then I will turn it over to Feta. Thank you very much. Yeah, so I've been working with the OpenSSF, the Open Source Security Foundation, and they have a very nice tool called Scorecard. What this tool does is review the security practices of a particular repository, checking basic things like, for example, two-factor authentication, dependency pinning, and binary artifacts, to give a few examples. So yeah, I was kind of asking everybody if it's worth it to actually start working on Scorecard for the repo. Just a couple of small notes: CircuitPython already kind of does this. Right now the repository has a score of 5.5, and there's a link over there if you want to see the detailed JSON data with all of the answers Scorecard puts inside the report. The whole of Adafruit right now has an average Scorecard score of 4.3; I made a small tool that actually checks this, and it's only in use on 4.68% of the repositories. So it would be nice to actually get these scores a bit higher over time. One of the pros is that it's going to improve the security practices of the organization, and of course all of the collaborators that work here are going to learn how to work with these security practices. That, for me, is number one: it's going to remove, or at least make a little bit harder, some of the attacks on the code. This repository already has a lot of good practices. For example, CircuitPython already pins a lot of dependencies. It also has things like code reviews, which we have been doing for a long time, and, for example, having a license; all of those things are part of the evaluation. So this will actually kind of celebrate all of those good practices that are already happening. And if there's stuff that we need to learn how to do, that's probably going to end up in an onboarding guide, and that is also a very good outcome.
Some of the cons: for example, version pinning takes some time to get used to. As I mentioned, we already have version pinning; we only need to fix it in a couple of places. And this is not going to make it harder to test new versions of libraries and dependencies; it's just checking that all of the code has version pinning. It is a gradual process. Something I would like to comment on here is that the goal is not to get to a 10 from the 5.5 that we already have. This is, as I mentioned, a gradual process. The whole idea is to get a Scorecard score of something like 7, which is actually a very good score. We were just celebrating in the community the other day that there's a Python library that got to 9; that's incredibly high for a Python library. Another con, and something important, is that it's going to take non-development time from the core developers. So if I'm going to start working on this, I'm going to have to have somebody on the line so that I can ask: hey, how is this released? How can we add a GPG signature to it? That's one example. And the last con that I have is that the Best Practices badge is a dependency of Scorecard. But this is actually a good thing. For example, the Best Practices badge is the one that asks: hey, are you using two-factor authentication for your releases? And that's something we already use, for things like being in the CircuitPythonistas group inside the... I forgot the name of the tool, inside of Discord. Yeah, thank you. It was also used for the people who wanted to buy Raspberry Pi boards. So yeah, we're not strangers to things like two-factor authentication; it's just a matter of continuing to apply these practices on these repositories. Thank you. Does anybody have any comments about it? My take is it seems cool and good. I can't really sign up for time, because I'm about to take leave.
So I think I would delegate to Dan whether he wants to give you some of his time for that. Is there an HTML version of that report, the JSON report thing? Sadly, no. There is a text file that comes out of the tool, which I can put here in the chat so that you can review it. We're actually working on an HTML version of the results, but that's going very slowly; we just finished the API part, so in the past, if you wanted to see your scorecard, you actually had to run the tool yourself. But yeah, we are expecting to have one in the next month or so. Yeah, because I would love to see all the things in there, but my new-dad brain can't handle looking at the JSON. Yeah, but maybe a tool like jq might be enough for some. I mean, I also think that software supply chain stuff is really interesting and valuable. I'm a little worried about going into security land, because I don't consider CircuitPython secure, and I want to make sure we let people know that, if they are using CircuitPython in scenarios that could be security sensitive, it's on them to do the diligence to make sure it meets their requirements. So I would say we should still do it, but I think it's a tricky balance to try to let people know how secure you think something is. And that's with the web workflow and the basic auth; that is a very low bar for people getting access to your device. Yeah, so just to comment here: this is not only for the end product, this is more about the process. So for example, right now, one of the warnings the Scorecard tool is giving is that we have two binary tools: the bossac Linux and bossac OS X tools are binary artifacts in the repository. So, for example, fixing this is going to improve the security of the developers more, because if I download this on my Linux or my Mac box, I'm going to run these binary artifacts.
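Until an official HTML view exists, a few lines of Python can turn the Scorecard JSON into something readable, along the lines of the jq suggestion. The structure below (an overall "score" plus a "checks" list with per-check names, scores, and reasons) mirrors what the tool emits, but treat the exact field names, and all of the sample values, as assumptions to verify against your own report.

```python
# Sketch: summarize a Scorecard-style JSON report, lowest scores first.
# The embedded report is a made-up stand-in for a real Scorecard output.
import json

report_text = """
{
  "score": 5.5,
  "checks": [
    {"name": "Code-Review", "score": 10, "reason": "all changesets reviewed"},
    {"name": "Binary-Artifacts", "score": 8, "reason": "binaries present in repo"},
    {"name": "Pinned-Dependencies", "score": 3, "reason": "some deps unpinned"}
  ]
}
"""

report = json.loads(report_text)
print(f"overall score: {report['score']}")
# Sort so the weakest checks, the ones worth fixing first, appear on top.
for check in sorted(report["checks"], key=lambda c: c["score"]):
    print(f"  {check['name']:<20} {check['score']:>2}  {check['reason']}")
```

For a real report you would read the file the tool produced instead of the embedded string.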
And I really don't know what these artifacts contain, or who put them there in the first place; I haven't really checked. So the whole security process is going to improve the security of the developers more than the end result. A little bit of it is about checking the code itself, but mostly it's about the process of committing to GitHub and doing stuff with GitHub itself. Dan, what is your take? I think it's interesting to look at some of these things. Some of the things that we do might be outside of what it considers; for example, we don't keep artifacts on GitHub anymore, because they were so large, so we have our own mechanism for doing release artifacts: we upload them to our own private place, and I'm not sure this handles that. That's sort of a workflow kind of thing that might not be considered by this. But certainly an audit of what's going on internally is helpful, and I would like to see it made as easy as possible. And whether it's done one time, or on every release, or on PR merges, would be interesting, because it would help to catch things. So, you know, you say we have this current score, and I looked at that JSON file, and even setting aside the fact that it was JSON, I didn't understand what some of the things were. So an explanation would be helpful, and we can work on those things. If you see problems, you can open an issue about them and we can address those, and we can run this tool manually for a while to see what we might improve, I guess. Yeah. And the tool has already found two binary artifacts, which are the bossac tools. So, yeah, maybe those could be rebuilt inside the PR. And another quick thing that I didn't mention in the comments of the document is that there's already a GitHub Action to run Scorecard on your repo. So, yeah, maybe that's a good way of starting.
So we could just do a pull request adding this to the GitHub Actions, and then we can actually start very, very slowly improving this, one item at a time. So it would not be one huge code review that we just send; it would be more of a very slow process of: hey, you know, let's add GPG this month, and next month let's try to remove these bossac tools, and things like that. Yeah, the bossac tools, I think they were put there for convenience. We could probably just remove them; they're not used during the build process, as far as I know. I think they're just for deployment. Yeah, and most people don't deploy with them anymore anyway. Yeah, it's pre-UF2, probably. So we could probably just throw that out. But it's good that you found that; we can just take it out, then. Yeah, I'm open to an action for this. So if you would like to point out the problems, we can fix them. And if you want to submit a PR to run this tool, say on release (on release.yaml) or on PRs, assuming it doesn't take too long to run, which it probably doesn't, that would be great too. So, all for continuing on this; I think that would be fine. It's only going to make it better. Yeah, the other similar thing on my radar is the SPDX licensing stuff. Yeah. There's clearly a best practice for that now that we are not following, and probably something we should follow. Yeah, open to it. Good. So thank you for suggesting this, and for working on it broadly. Yep. Thank you. I think that's it; that's the end of In the Weeds, so it is time to wrap up. Let me get to my wrap-up. This has been the CircuitPython Weekly for Monday, August 15, 2022. Thank you to everyone who participated. If you would like to support Adafruit and CircuitPython, and those of us that work on CircuitPython, consider purchasing from the Adafruit shop at adafruit.com.
The video of this meeting will be released on YouTube at youtube.com/adafruit, and the podcast will be available on major podcast services. It will also be featured in the Python for Microcontrollers newsletter; visit adafruitdaily.com to subscribe. The next meeting will be held on Monday, August 22nd, as usual at 2 p.m. Eastern, 11 a.m. Pacific. This meeting is held on the Adafruit Discord, which you can join at any time by going to adafru.it/discord. To be notified about the meeting, or any changes to the time or day, you can ask to be added to the CircuitPythonistas role on Discord. And we hope to see you all next week. Thanks, everyone.