I guess it's probably for the best that the White House can arbitrarily decide these things. But generally it's just a bunch of guidelines, and we're already on the conservative side when it comes to how we're handling AI. So it's more a question of how it's going to impact the people who are, I hesitate to say adversarially using AI, but as far as our business is concerned, adversarially using AI against us, and how this might affect their decision making and what they're going to do. The right people have signed off on it, in the sense that the policy shops we usually trust to tell us when something is good have signed off and said it's good. So that's good. And I think the order also says that every executive agency needs a chief AI officer and has to start coming up with some policy toward this thing, something like that. It's all draft policy guidance at this point, but I think it could be directionally good. It's hard to tell with these things.

Cool. Chris, do you have audio working too?

Hopefully. I don't know. Nine times out of ten it's the Zoom app itself that has issues, and occasionally it doesn't like one of my cameras, but who knows. So I came in at the tail end of that, but I've been curious to ask: at the higher levels of media, have you run into instances where people are using AI to generate any kind of content? Or do most outlets have a corporate policy of no AI content of any shape or sort, unless it's part of a story where we need to generate something?

I mean, among legitimate news organizations there's certainly no formal use; there's a definite attempt to avoid any sort of use of AI to generate content. These things often also depend on the individual organization and just how much venture capital and private equity control is involved, because that will definitely do things.
But I know there have been cases in the past where reporters veer a bit toward the fabulist side and make things up. And I'm waiting for that to happen with AI as the cause, a stressed-out writer putting the blame there. Yeah, the current news item I just posted: the Guardian's editor actually yelled at Microsoft and said, hey, I need you guys to promise not to put AI bullshit into my publication.

That's interesting.

Yeah. The one place where I've seen an actual legit news organization using AI in public, I mean, G/O Media was doing it, but its legitimacy is more and more in question every day now. The other one was Gannett. Apparently they deny it, but their writers say a bunch of review articles showed up on their reviews website (they have a Wirecutter competitor) under bylines that don't belong to anyone with a LinkedIn presence. It seems unlikely these are real people, and the articles are written in the style of an AI. And Gannett said, no, we just hired some consultancy, and the writers at that consultancy might use AI. But yeah, Gannett also has private equity putting a lot of pressure on through its ownership.

I think the answer is generally: we need accuracy, and AI cannot provide that, so we try not to use it for creating content. You see it in preview images sometimes, where they're clearly stylistic. You see it in generating recommendations; obviously that's been longstanding. And we are experimenting with it in our engineering workflows using GitHub/Microsoft's Copilot, though generally that experimentation does not seem to have amounted to much. In the web developer community there's this interesting conversation happening, which is that there are certain people who are not interested in using AI at all.
There are a lot of newer developers, more junior folks, who are very interested in using AI. And there's a discourse around the idea that if your employer isn't providing you with a subscription to an AI assistant for your programming work, that's the equivalent of them not providing you with a laptop. So whether or not it's actually useful, we may end up adopting a subscription because it's as much a part of recruitment as it is about being useful. But thus far, it has not been useful. And I don't see any major media organizations using ChatGPT-style AI.

Yeah, it did happen very quickly. Though there has long been logic around token-based language replacement, or whatever the exact term for it is; I'm trying to hunt it down in my own notes, and I'm blanking on the name of the product we experimented with. But it's not really AI so much as machines that are good at understanding language, context, and sentence structure, used to build sentences around sports stats. So it knows when to pluralize, or it knows when to slot things in. That's nothing like what GPT does; it's a very, very different application that we've used in the past but aren't currently using.

Kind of interesting. I don't know yet whether I think there's a use case for chatting with a corpus. I find ChatGPT very helpful in assisting me on language tasks, or actually on programming tasks. So not generating whole pieces, but generating little filler things that I'm looking for, or context, or helping with outlines, things like that. But a bunch of people are, you know, feeding in a whole set of PDFs, or all of their material for an internal knowledge base, and then you can chat with that corpus.
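Mechanically, "chatting with a corpus" usually means retrieving the most relevant passages for a question and handing them, along with their sources, to a language model. Here is a toy sketch of just the retrieval step; the file names and passages are invented for illustration, and a real system would use embeddings and a model rather than word overlap:

```python
import string

# Toy sketch of the retrieval step behind "chat with a corpus":
# rank documents by word overlap with the question and return the
# best match along with the source it came from, so the answer
# stays traceable back to a real article.

def tokens(s: str) -> set[str]:
    """Lowercase, split, and strip punctuation from each word."""
    return {w.strip(string.punctuation) for w in s.lower().split()}

def ask_corpus(question: str, corpus: dict[str, str], k: int = 1):
    """Return the top-k (source, passage) pairs for the question."""
    ranked = sorted(
        corpus.items(),
        key=lambda kv: len(tokens(question) & tokens(kv[1])),
        reverse=True,
    )
    return ranked[:k]

corpus = {
    "archive/2021-01-20.txt": "Biden was sworn in as the 46th president.",
    "archive/2021-02-13.txt": "McConnell voted to acquit at the impeachment trial.",
    "archive/2019-07-04.txt": "Fireworks lit up the National Mall.",
}

for source, passage in ask_corpus("What did McConnell do at the impeachment trial?", corpus):
    print(f"{source}: {passage}")
# → archive/2021-02-13.txt: McConnell voted to acquit at the impeachment trial.
```

The point of returning the source next to the passage is exactly the citation behavior a journalist would need: an answer you can click through and verify.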
It seems like that would be a really good thing for news organizations to do, because they have this huge corpus of well-structured material. So you should be able to (it sounds cool, anyway; you shouldn't necessarily have to) go to a chatbot interface to the Washington Post's corpus of articles and say, I need to know more about the context between Biden and, you know, Mitch McConnell, and get a prose answer out of that. That seems like a really good application for chat.

Yeah, the problem is that right now it can still take in that entire corpus and return you a lie. One of the interesting things we saw with Copilot, now that we're doing testing, is that they're testing a chatbot. Not the one that's out now, but the one they're in the process of releasing, which should be out soon: it has a system where you can feed a repository into it, and when it responds to your chat it has citations. It says, this is where I got this from, click here and skip to that file. If we had a tool that worked like that, it might be much more possible to use. But standard ChatGPT right now is just not reliably giving you true answers, so it's not useful for journalistic organizations.
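For contrast, the older token-based, slot-filling systems mentioned earlier were far simpler: fixed templates wrapped around stats, with just enough linguistic logic to pluralize and slot values in. A minimal sketch of that kind of system (the team names and scores are invented):

```python
# Minimal sketch of template-based stats-to-sentences generation,
# the pre-GPT approach described earlier. Nothing is "generated" in
# the modern sense; the system only fills slots and pluralizes.

def pluralize(noun: str, count: int) -> str:
    """Naive pluralization: append 's' unless the count is exactly 1."""
    return noun if count == 1 else noun + "s"

def game_recap(winner: str, loser: str, w_score: int, l_score: int) -> str:
    """Fill a fixed sentence template from the box score."""
    margin = w_score - l_score
    return (
        f"{winner} defeated {loser} {w_score}-{l_score}, "
        f"a margin of {margin} {pluralize('point', margin)}."
    )

print(game_recap("Springfield", "Shelbyville", 21, 20))
# → Springfield defeated Shelbyville 21-20, a margin of 1 point.
```

Because the output is fully deterministic it can't hallucinate, but it also can't do a house sports style or add context, which is exactly the limitation that made publishers drop it.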
Yeah. Or if you were going to have it, I would think: here is the corpus of the New York Times since the 1880s, or the Wall Street Journal internally, and use it as a clipping service. We need to know this fact at this place or this time, and have it return material that has been written and fact-checked at some point in the past.

The other thing I think could potentially be useful, probably for journalists who have left bigger outlets and no longer have editorial support for a lot of the heavy lifting: you've got things like Microsoft Word that will do a spell check or underline weird grammar, and stuff like Grammarly that will give you grammar tips about what to fix. But what if you could throw in what you've written and get whatever that next higher editorial level is? Don't fact-check me, I can do that myself, but what is the broader public likely to call into question based on what I've written versus what else is out there? Give me questions, things I may have missed, things that could be incendiary or problematic, or that may be factually wrong and that I should double-check. Those kinds of editorial tools would be cool, rather than hey, just write the damn thing for me.

I was under the impression that a whole lot of local news, which of course is a dying category in this country, articles about sports scores, traffic accidents, and the weather, are largely being written by software now.

No. There was a big movement toward attempting to do this a few years back; you could see articles like that from 2016, say. But two things occurred. The first was that the articles were very bland. The technology, I think, was NLP, natural language processing, not neuro-linguistic programming, which is a whole different thing. But the
articles themselves were very bland, and the places where the statistics come from are dwindling. But also, in theory people want sports scores from all sorts of levels, down to their high school teams, and the people who want sports scores don't give a shit whether it's in a sentence or in a table. They just want to see the numbers and see who won. So it was a lot of work that a bunch of different publishers were trying to put in when people just wanted the numbers. And the same with the weather: you need a weatherman to give you context and explanations, which NLP is not going to do, and generative AI can't do accurately. But if you just want to know what the weather is, you don't need someone to turn it into a sentence. You just want the number. So most publishers dropped that technology. We still have it, and we still occasionally use it for a few very limited things, but the readers are just not interested.

It's sort of weird, too, because a lot of people don't know this, but there's actually an entirely different writing style for sports. Most publishers use AP style, apparently The Messenger uses Chicago style, and a few others use other commonly used style systems. But the sports section has always had its own style guide and rules; I think the AP has its own separate AP sports style guide. So people who want numbers don't want them in sentences, and people who want sentences don't want them built by NLP. They want them in this very particular style, which is descriptive in ways that NLP was not capable enough to accomplish, and generative AI is definitely not suited to it either. Trying to get generative AI to produce sports-style writing is difficult.

Really? You can't say, write this the way Howard Cosell would write it, to name a really ancient sports figure?

I mean, maybe. A lot of them have turned off the
ability to do that type of query, right, the write-this-like-X-would-write-it prompt. And they don't really understand what a sports style is, because they don't really understand style rules. You can't tell generative AI, write your next three paragraphs in Chicago style. Well, you could tell it that, but it doesn't know what to do. That has to do with punctuation, commas, and periods, and it's not as if, when you pull in a hundred thousand sources, they're marked with whether they're Chicago style or someone else's.

Well, they sort of are, in the sense that you can assume certain newspapers are going by their style guides, so those corpuses would be, you know, one per style, right?

But it's not like newspapers have on their front page, we write in AP style, or we write in a house style, or we write in Chicago style. One of the most fascinating things is that most newspapers have subtly different styles. The New York Times has its own style rules that are pretty different. So does the New Yorker, and so does the Washington Post. The AP is the bedrock, but our newsroom's rules are built up on top of that, and the same goes for most other major newsrooms.

Interesting. I remember, years ago, I can't find it right now, there were so many murders in the LA area that a blog showed up to basically pick up the slack left by the LA Times, which could no longer post about every murder, every death. This was at least a decade ago, I think, but that load got carried by a local blogger.

That also happens a lot: if there's a demand, there's probably a local blogger willing to do it themselves. Yeah, and that's difficult to parse through, too. What if you want to collaborate? What if you don't want to collaborate? I think, and I've probably told you all this before, one of my favorite
aggregation stories happened at the Washington Post, though not while I was working for it, back in the big local-blogger days. The DMV, the DC-Maryland-Virginia metro area, had a very strong local blogger community, and the Washington Post said: we're going to host a blogger ring, and we're just going to pull everything these blogs publish, with their permission, into an area of our site that's like the local blog section, and we'll rev-share with those bloggers. At the time (it still is, somewhat) the Washington Post had an especially conservative op-ed page. At some point during this period an op-ed writer wrote a very conservative-leaning op-ed, and one of the bloggers who was a Washington Post partner was not happy with it and published a blog post, which got aggregated onto washingtonpost.com. So there was a washingtonpost.com article page where, if you loaded it up, it would just say in bright red 42-point letters: FUCK YOU, THE WASHINGTON POST. And that stayed on washingtonpost.com for like 24 hours before they were able to get it down. So that was sort of the end of that experiment.

You still need an editor, yeah. Hall-of-shame moments in mainstream media.

Yeah, that's what it comes down to. At the end of the day you're always going to need an editor on any of these things, no matter how professionally produced or how smart the AI is; there's no getting around that. And the thing about generative AI text is that fact-checking it is harder than fact-checking a human, because sometimes it just doesn't make sense, or it makes up citations, or it directs you to something that doesn't make sense as a source. That's harder to check. The same thing happens with code, actually. This is a big problem we're facing as some of our junior developers play around with using Copilot: it
will generate code that works, in the sense that it executes and doesn't throw any errors, but it doesn't accomplish what we needed the code to accomplish. So you end up with an entirely silent failure, because it runs without erroring. Whereas if they had just tried to build it themselves and screwed something up, it would probably have thrown an error, it would probably have been closer to what we wanted, and it would have been much easier to spot. If something is supposed to, say, just add two zeros to the end of a number, and they ask the AI for that, and the AI hands back a function that adds the number to itself twice, you'll get a number back. It's all technically running. But if you don't catch that in code review, suddenly you have code in production that should be appending two zeros after the decimal point and is instead multiplying the number by itself, and you don't notice until you're like, wait, this is not what's supposed to be happening here.

So where do you think this stuff is going? It sounds like you're convinced that humans will always be in the loop writing most of these stories, because they're really necessary. Anybody think not?

Well, we've already seen actual people making up stories about things that didn't happen, shifting identities to the far right or far left. So the question is, does AI just help accelerate that even faster? Even if you have humans, the fallibility of AI comes from the fallibility of humans to begin with; the training sets introduce the same types of biases. It's just that now, instead of the touch of a lot of buttons, it's the touch of one button.

Yeah, I don't know. In some ways, the people who use it for propaganda are the ones I'm least worried about. What's much more worrying is the same thing that happened with
cryptocurrency. The people going around promoting cryptocurrency because it's a shallow scam that's obvious to see, I don't give a crap about those people; they'd be promoting some other shallow scam otherwise. The problem is the true believers looping in companies and individuals and entities who then go on to use the AI to accomplish things they think work, and then they don't, and that has impacts. Gannett probably paid a bunch of money for this consultancy to use AI to write product reviews, and now that money is burned, it's in the hole, and the time too. When you're a news organization that doesn't have a lot of revenue to begin with, that's bad. You've put yourself in a worse position, with downstream effects that are not obviously attached to AI: they're going to hire fewer people, they're going to have less stuff.

It just seems to me like we're promoting a lot of stuff around AI that it can't actually do, on the hope that one day it'll actually do it, the same way we did with cryptocurrency. And the same as was true with cryptocurrency, there is absolutely no proof that generative AI will reach the heights of the promises people are making about it. Which is crazy, because there are a lot of interesting things you can do with it, just not the things people are promising in the market.

Yeah, it's the same thing. A decade ago, big data was all the hotness, and I don't think I've heard anybody utter the phrase big data this entire year.

It's been a while, yeah. It all just got dumped into machine learning, really. Everyone who was doing big data is now doing machine learning. I mean, it's the same thing, right? What is big data for, if it's not being fed into a big algorithm that makes decisions or gives suggestions? That was always what big data was supposed to be for to begin with. So I think that's where
it's a little different. But yeah, the point is these things definitely go through fads, and what's weird is that the rate at which fads occur is accelerating, and the impact of those fads is increasing, but the efficacy of what they're promoting is not getting any better. Rather than big data, probably a better example is, I don't know, there was a big fad a little while back that I'm just blanking on.

The acceleration of the fads is real too, because I remember the shiny happy days of the late '70s, when pet rocks were all the rage, cute and lovely, and you could throw them through a window, but nobody really was doing that. And now look where we're at.

Peter, you're more optimistic about whether AI can deliver on this stuff. Where would you draw a line, or maybe what would be a good test of whether this is going to work?

It depends what you mean by this stuff; I think that's a big part of it. I certainly agree that there are a bunch of things that even today people expect ChatGPT to be able to do. Oh, I think it can answer a question, and I can rely on the answer. And they're wrong. And there's a super funny thing where I tell the people I'm teaching about ChatGPT, don't trust anything it says, it's going to make up answers and stuff like that, and then I'll show an example where I'm doing exactly that. But the human has to have enough context to know whether or not the answer is reasonable. So there's a thing where I use ChatGPT all the time to give me answers, but I'm always skeptical about the answers. So we're already in the state, as Erin kind of discussed, where people are going to assume that it can do things that it just can't. But I don't know. I'd looked up a
Twitter post from an AI product-manager type who said, here's where OpenAI is going: they're building a new kind of computer. And all of that makes a lot of sense. What he talks about is assisted intelligence, not artificial intelligence. Generative AI and conversational interfaces, I think, are a game changer. For me, the way I think about it, conversational interfaces are going to democratize access to computing resources the way we saw when we went from not having PCs to having PCs, when we went from terminal interfaces, which were obscure for most people, to GUIs, and when we went from GUIs to touchscreen interfaces. It democratizes access so people can use the magical computing stuff in the background that was previously available to specialists but not to most people. Conversational interfaces are going to continue to do that.

And then there's the ability to do correlation of stuff, to be able to ask ChatGPT, hey, I'm thinking of a song that had something-something, or, what was the name of the person who invented blah blah blah. I don't have to figure out how to frame a question; I don't have to know the answer to frame the question well. I can blunder at a question, and conversationally ChatGPT can help me find the answer.

And this is kind of back to thinking through the Washington Post corpus going back 100 or 200 years, or the New York Times corpus. Maybe it's not a thing I would have the general population chatting with, but if I were a research analyst, being able to talk my way through 200 years of newspaper articles to find the right thing, rather than trying to use keyword searches or something like that, is a game changer for that kind of stuff. Access to information and access to common sense. I don't know, it's
going to continue to grow in use and get subsumed into other interfaces and other tools and stuff like that; it's not going to be so front and center. ChatGPT right now is like a demonstration of a GUI interface. You know, it's like, oh my god, we have windows and you can drag them with a mouse. It's like an Alto or something. Okay, so I can buy an Alto for $10,000 or $20,000 or $50,000, and then what can I do with it? I guess I can run the same demo the PARC guys are doing. So what? That's where we are with ChatGPT right now. We're going to get to the part where generative AI and conversational, linguistic interfaces are just subsumed into everything. That's the thing to look for.

So one way to look at this is as step functions of progress in different subcategories of software, of anything, scientific or whatever. I've watched as neural networks went from wee little things that could barely do anything, then bump up, and bump up, and suddenly take a big leap, like what happened last December when ChatGPT came out. That was a pretty big leap. The thing I'm wondering is, where is the new plateau? I think we're right now trying to figure out what the new plateau is, because what happens is that the functionality kind of settles in, and then you assume this set of things, you take them for granted over time, et cetera.

And in few, I won't say none, of the previous step-function leaps in other technologies or other areas was the technology itself able to actually help improve the step function and broaden functionality at the top. I know the third or fourth or fifth generation of PCs was developed by people who were only able to invent things using PCs; there's a bootstrappy action of having better hardware to do more
complicated computations. In that sense it seems to me like the same thing: an AI that can, maybe or maybe not, help you do coding tasks, you can use it to build a better AI. Or, instead of sitting down for a brainstorming session with you and a whiteboard and a few other people, it could brainstorm a hundred or two hundred or a thousand different things for you to try, and then you can march through all those things and try them. That feels faster than where we were with the PC bootstrapping thing, but it feels the same, too.

But don't you think somebody's using these engines to say, hey, we've got this sort of working pretty well over here, and this working pretty well over there; what would it take to bridge those two and create a common memory model or whatever, and see what the machine comes up with as a solution for the thornier problems that would break us through our evolving perceptions of the current plateau?

I agree that that's going to happen, and it feels a lot like, you know, back in 1980, all of a sudden you had circuit simulators. So instead of building an analog circuit and then testing it, you just had the machine simulate it and went, okay, now I'm going to do ten circuits today instead of one. Same kind of thing.

Yeah. I mean, I just don't know if history has proven out that conversational interfaces are actually powerful, or what people want to use. Maybe you could argue that they're just not good enough yet, but Alexa purchases and Google Home purchases, the use of those interfaces, keep going down. And every couple of years, and this is just my perspective from working in media, we've gone through a cycle where every two years someone says, chatbots, that's the ticket. And each time in the cycle the
chatbots are more powerful, and we go through the process of building them and connecting them up, to your point, to our corpus of all of our information, or a particular subset, or election stuff. And nobody uses them. Nobody likes using them. ChatGPT is very fun to use as a toy, and it's very interesting for us as technologists who understand what we're interacting with, but does the common person want to? It's like the problem with VR shopping: yes, you could have a shopping store with virtual-reality items on the shelf, and you could walk through it in virtual reality and pick them off the shelf and put them in your cart and go to checkout. But why would you do that when you could go to Amazon and click a button? I think a lot of the uses of chat are the same thing. Why would you do it that way when, 90 percent of the time, it's better to use documentation or to look it up using an index?

And in some ways, and this gets to the problem most publishers have with AI, the abstraction of our data into these AI systems, and then the presentation of it on the search engine, is a labor issue. You're taking this stuff, you're representing it incorrectly nine times out of ten, and then you're not paying the people who are creating it. Eventually it's just going to be training on itself, because everyone who had a job writing this stuff has been fired and replaced with AI.

Yeah. Go ahead, sorry.

No, that's great. That was a really good rant, I appreciate it, and I love the skepticism. It will give me something to think about. My presumption for 20 or 30 years has been that conversation, especially spoken conversation, speech, is kind of the killer app for humans, and that text, even books and things like that, is going to fade away when we get machines that can talk better. So I have a
long-standing belief in that, which you may or may not share, which is fine. I agree that right now even ChatGPT is hard to use for things like looking up stuff or doing a relevance query; maybe I just want to go to Google and PageRank, or whatever PageRank has been replaced with in this day and age. It's a different thing. When ChatGPT gets easy enough for most people to use, which is a really funny thing to say, because it seems like it should be easy enough already, but I'm kind of literally in the business of showing people, hey, let me help you use this thing that seems like it should be easy to use. It's really counterintuitive for most people to be talking to a machine. Like, okay, do I start off talking about sports scores, and then we get into the conversation that I need to have with it, or why don't I do that? Is it just going to get confused if we're talking about the Jets and whatever, and then I need to ask it about my HVAC system? How do I have a conversation with it, what does that even mean? And why does it tell me that one plus one equals three, when it was doing so well before? All of that stuff is super confusing for people using a chatbot right now.

Having said that, I find it incredibly useful now that I know how to use a chatbot, and I had to teach myself how to do that. I find it incredibly useful and more productive to ask a fuzzy question in the context of a conversation rather than trying to rely on an index, or on keyword searching through Google, or even asking a person. It's easier to use something that's got a conversational interface once you learn the rules and regulations of using one, which right now is out of the reach of most
people. So my prediction is we're going to have another step function when Amazon finally gets generative AI chat hooked up to Alexa, because they've been trying to do that for years and haven't yet. I think that's coming in the spring, and I think it's going to blow people's minds. All of a sudden you can chit-chat with Alexa rather than asking it for the weather and to set a timer or whatever. That's going to be a big thing. I think it'll go in two directions: regular folks will figure out more how to use the chatbot, and chatbot makers will figure out how to make it easier to approach, manage expectations better, and create better outcomes for surprises and things like that. And I think we'll get to the point, just like we saw with terminal versus GUI and GUI versus touchscreen, where everybody moves to conversational interfaces with computing and information. That seems like a no-brainer to me, and I think we're going to see that in the next three, four, five years. And that's despite your skepticism; your criticisms are very astute and make a lot of sense to me.

Yeah, I don't know. I don't think it will be capable of executing on that. There's no evidence thus far. I understand your point; I just don't see it. And when we see normal users interacting with chat, I don't see it there either. It just seems to me like a bad way of organizing information. But I do think that if it is going to be successful, part of what's going to have to be figured out is how people get compensated when their stuff is in it.

Yeah, that's right, because I think they just changed their fallback... yeah.

But that's one of the core things, right? I don't know if you've read or are familiar with Brian Merchant and his book that came out recently on the Luddite
movement. He goes into the history of it, and the core of it is that the Luddites, though they've been reinterpreted differently in modern times, were not anti-technology. Yeah, that's the book. They were pro-technology, but they did not want technology to be an excuse for them to no longer be capable of making a living in the world. And the AI process is as much of a problem in that vein as automation was back then. And the thing is, it doesn't even matter if it works, because people are already using it to replace artists and writers. It doesn't matter that it's producing shitty content and bad art that's obviously AI-generated; the mechanisms of capitalism are such that the management class would rather pay less than pay for a quality product. I don't think that they were necessarily... World War One, you mean? Like the saboteurs? In general, like the movement. "Saboteur" comes from throwing a sabot, the wooden clog peasants would throw into the machinery; that's the etymology of "saboteur." I thought it was a World War One thing, when they pressed peasants to go work in factories to make munitions or whatever, but I think it was from before that. I'm not sure; we could look it up. Right, yeah. And I think that unless that problem gets solved, here's what will inevitably end up happening. Even if chat does become a mechanism that works, which I think we both agree it currently does not, but even if it does: there have been a number of studies showing that it takes very few generations of AI training on AI-generated content for it to start producing gobbledygook. And if the problem of compensating the labor that produces the original content feeding the AI is not solved, it will eventually eliminate itself, only all of the people who
previously wrote stories, and all of the companies that previously produced the content that powered the AI, will have collapsed. That's sort of semi-apocalyptic for our infoscape, and for that reason alone it's a big problem. It reminds me, I can't remember who wrote it, but somebody wrote a little tweet or blog post or Tumblr post that was like: we all hoped that automation would free us up to do creative work and art and poetry, but now it turns out they've automated the art and the poetry, and we're stuck doing the manual labor. And yeah, it's an issue. A generation of creativity could be destroyed because we would rather let Google turn the search page into a vast experiment in comprehensibility. Pete mentioned all the levels of increasing democratization. You can even look at the democratization of the blogosphere, which lets everybody publish their own stuff, or Twitter, which goes another level further and faster. But then you wonder: with all these levels of democratization happening at the same time, the world seems to be undemocratizing itself somehow. How do you juxtapose those two things to fix the problems that are happening? We're very rapidly running toward undemocracy. And I'm unclear that democracy is the best way to organize things anyway, but we're certainly hitting undo on it over and over again around the world. Well, the thing that worries me too, and this is within the world of philosophy, I forget the philosopher who made the analogy, but he said that being at war isn't the actual fighting and bullets flying; it's the thing that happens before, the slowly boiling pot. So Israel's been at war for nearly a century now, since its beginning, and everybody thinks that something happened on October 7th, but really it didn't; they were already at war before October 7th. It's just a matter of perspective. So you think about what's
happening with even democracy in America, and we're effectively in a civil war already. We just haven't gotten to the tipping point where everybody realizes we're in a civil war yet. And the question is: how can you de-escalate it before it actually turns into fist fights and guns and shooting? Although the guns and shooting have already started; we're just ignoring the fact that we've started shooting at each other. What you're asking is a relatively important question at this moment in our lives, and in the planet's life, and in humanity's life: how do we keep this from going down the tubes? That seems important. Yeah. Or how do you fix something before it's too late? If you were the king of France before the Revolution, what could you or should you have done to smooth things out, or at least to do something, if not to save your own life, then to save those around you? The problem with that and similar analogies throughout history is that by the time you get to "gosh, maybe I could have done something to stop the revolution," you have to hit undo on a whole lot of layers of things they were doing in order to keep the nobles in place. Part of the reason to move the capital out of Paris over to Versailles and to hold all those balls was basically that they wanted to drain everybody's coffers so the nobles didn't have more power. They made them do a whole bunch of expensive stuff as an arms race, to keep them poor, to keep control. So many things are about control, and it's different for every place and era in history, but it's always complicated, and there are always bitter grudges. And the winners, the ones we think of as winning that stretch of history, always felt like they were a hair's breadth from losing all along. They were like, I could be deposed at any second. And so they became paranoid and crazy, etc. Yeah, right, it's the psychohistory question. Yep. I've been
looking at Roman history again, in the second to first centuries BC. Rome, because of uprisings, started giving away the power of the Roman Senate to the peoples they had conquered, in slow fits and spurts, to prevent mass attacks on the central government. And as that's happening, there's an internal fight playing out within the Senate itself: do we call a king a king, or can we even call a king a king? As Augustus was coming to power, ostensibly as a king, he used the word princeps, "I'm the first citizen of Rome." The amount of power he had versus all the other senators was massively imbalanced, but he specifically would not use the word king to describe what he was, because he knew the word alone would have driven them all against him, and he would have been killed the same way Caesar was. So it's: I'll take these responsibilities and a title, but I still make sure the senators have enough things to keep them comfortable. And that's the same thing that seems to be playing out right now in US politics with Trump: I want to own the power and have all the things, and all the senators and congresspeople, at least on the far right, seem to be willing to give it to him as long as they stay in their positions of power, and they won't say no. We're sliding down that same slope. Ironically, it's almost even worse than that. I think the thing is that really none of them actually want power, because they don't want to make decisions or do anything. They just want to be in the visible position of power; they want the perception of power, that's the word I was looking for. Right, they want the perception of power, and they happily delegate the actual work of power to a bunch of basically anonymous people who work under them. The irony is that the Republicans who go around accusing people of being the deep
state are the ones most interested in establishing essentially that concept. And you can see it; it's especially visible at the state legislature level, where there's a tremendous problem: major conservative nonprofits and political organizations, lobbying groups, just go around writing bills and then hand them over to state congresspeople, and then the bills get passed. A couple of times there have been interviews where the state congressperson says, "I didn't read it, I just trusted the Heritage Foundation to write a good bill." They don't even care about actually exercising the power at all, because if they did, that would not be how it works. Well, it's the same thing with students in classrooms: no one wants to do the actual work. You just want the A, or the perception of the A, and the things the A gives you, rather than actually doing the work to get the knowledge. Well, I think that's connected to a change. I think that's not necessarily an indicator of a cultural trend so much as a trend in education. There's a really good book by Malcolm Harris which talks about the industrialization of the American education system. I'm going to look it up real quick... nope, maybe not Malcolm Harris. But anyway, the point it goes into is this idea that education, even K-through-12 education, which was initially set up essentially to produce skilled labor for industrialists, at some points in American history shifted more and more toward the philosophic concept that people should be educated, but has now begun swinging all the way back. And so what you're seeing in schools, especially by the time students get to college, is that they're burned out the same way that people who work burn out. They've been doing homework, they've been going to school,
they've been commuting to school; the schools start earlier and earlier and go longer and longer, the homework gets longer and longer, and the testing process gets longer and longer. And none of this is learning per se. It's very mechanized; it's about memorization. And so people don't want to participate. And then, on top of that, your reward for participation and success is immense debt that you can't pay off and can't get rid of. And that's why higher-ed enrollment is down. Yeah, or it's all shifting now toward STEM degrees, because you can get a job with a STEM degree that pays you money for things that you know. I think the English department at Harvard right now, or in 2020, had 56 professors but only 60 people enrolled, and that's an insane student-teacher ratio. And I think English as a major has lost a third of its enrollment over the last decade. So you see heavy losses, just massive cuts, in humanities departments everywhere, and I think this year alone there are an awful lot more of them, and they're going unreported. It's a problem no one's talking about generally, or if they are, they're talking about the cuts in programs that disappeared just before or right at the pandemic, and it's gotten worse and worse and worse. Eventually it's going to break. But I know of a handful of people who are already saying: don't call your school a liberal arts education anymore, you can't use that phrase, because you're not providing that as a thing. Maybe they're all vocational schools now. And a lot of places are literally looking at that. I've got a friend who was a full professor who was just retrenched at a large state school in Minnesota, and literally they've decimated the humanities departments. More likely than not, their solution is going to be to partner with another school halfway across
the state that is doing more industrial arts and welding programs and that type of training, and they're going to merge the two schools together. And it's not going to be what we would consider a Western liberal arts education of any sort; it's going to be training for physical jobs or specific types of jobs rather than anything else. And that sort of connects back to the problem: increasingly, the roles creatives would be trained for are shrinking, or their compensation is shrinking, or both, so the argument for getting a college education to do that isn't great. The lead time between training and job is also long, so it's really hard to know where to aim these days, because five years is a whole change in a lot of different industries. Yeah. Well, that was, in part, the original idea: figure out how to learn to think critically, so that regardless of what you were faced with, you could at least figure it out. A friend and I have been reading through Hutchins and Adler's book The Great Conversation, which was the first book in the 54-volume series of the Great Books of the Western World in 1952. They're looking at a lot of that, where education has been, and there's a good bit of discussion of Dewey and what Dewey thought, and how all of that bears on democracy, because at the time they were writing it they were going through World War II. The underlying idea is: if we want to have a strong democracy, we need a well-educated electorate. But we seem to have lost a huge amount of what that was in early 1952, when it came out, versus where we're at now, and no one's talking about that as a thing. When you hear about education at the government level, it's all education for specific jobs: how do we keep people employed and in jobs and doing things? In the early 1900s it was a lot easier, because, and it's only happened in the West, though it's now happening in the rest of the world,
but in the early 1900s you still had the vast majority of the world's population working as subsistence farmers. Now most of them have moved into larger cities to do physical labor, factory labor, anything but tending a farm on a day-to-day basis. But now that we've had that massive shift of everybody to the city and to work, where do you go next? What do you grow yourself into, or have we just grown so far that there's nowhere left to grow? You go to the metaverse, Chris. We're going to live in the metaverse together. No, we're not. Well, only this Zoom metaverse, and honestly it doesn't always work, so we've got to be careful. Yeah, exactly. We've run over our hour. No sign of Flansian, so... yeah, I thought he'd show up at 12 o'clock. I liked your theory; I thought that was great. I was expecting... he's on holiday in Asia somewhere. Oh, okay. Yeah, cool. So shall we wrap up our show today and head on back to the real world, which is full of these blunders? Well, we've solved it all, so what else is there to do? I'm comforted by that. Thanks, everybody. Or at least our own house... so, yeah. Yeah.