Depending on how you count, and how you read it, we are currently in the third software crisis, which shows itself in ever longer and more expensive software projects whose results also fail to meet expectations — with real-world counterparts such as overrunning, overpriced construction projects. So what comes next? Computer scientists and business people are looking at these questions and wondering what technology philosophy could contribute here. Our speaker Ayubu is a broadcaster and technology philosopher; he does radio and works with cooperatives, and in his talk he will take us by the hand and tell us what to expect from a software crisis 4.0. Ayubu, the stage is yours.

Welcome to the Chaos studio in Potsdam. I am Ayubu, and close to the end of this year's Easter DiVOC, "bridging bubbles", I would like to give you a small talk of 20 minutes with a subsequent Q&A and, if you want, a BigBlueButton session afterwards. The software crisis — that is a very familiar term; it is used in university lectures. But that only takes us up to the present, and what will happen in the future is something you may well worry about, depending on where you stand. I want to use this short time to update you on what technology philosophy has to say on the issue. So it will be 20 minutes of talk, 20 minutes of Q&A, and then the BigBlueButton round — you can see the links on the pretalx page if you would like to join us there, and there are further links on those pages for participating in the exchange. Of course this is not an issue that everyone in IT is normally confronted with; those who have studied computer science may have heard about it, and the first few phases of the software crisis are well understood — they are the topic of university lectures and so on. I would like to briefly explain these first phases, where this all came from, then talk about the way the problem is shaped at the moment, and about the approaches, hopes and dead ends that the future may hold for us. Now,
regarding history, let's start like this. In the beginning, when things were as they were — when God created computers — we were talking about mainframes that you were perhaps leasing from the makers, maybe building yourself, maybe buying, and with them came a handbook, a manual in the literal sense, weighing perhaps 100 kilos — and software was something that, please, you should write yourself and pass on from one user to the next. Things were quite different then. From that resulted, about 20 years after the introduction of the computer as we know it today, the so-called first software crisis, which led to a conference that, interestingly, took place in Garmisch in southern Germany in 1968. The issue there was what the computer scientist Dijkstra described as hardware progress outpacing software progress, and that was considered a problem. It was actually NATO that ran this conference, which shows how the responsibilities and interests were distributed: computers were mainly a military matter, and the problems were to be solved by the military. The second phase was in the 1970s and 80s, up to the point where the internet as we know it today first appeared on the horizon. There we had software engineering in a literal sense, you could say: projects that defined very exactly from the start what the objectives were and how the project was to be executed — in German there is the Pflichtenheft, the kind of requirements document in which you wrote down what you were to produce and what you used for controlling, and all that. And of course, as you know, software technology kept growing and became a non-linear process; it became too big, too slow and too complex for these structures. That was phase two of the software crisis as we are taught it today at universities. What we are still dealing with today, over the last 30 years — and this is, in an admittedly arbitrary kind of
perspective — that was phase three, where people tried to use dynamic methods to deal with the increase in complexity and the size of the structures. And these 30 years, it seems, are nearing their end, which means we are going to enter a crisis 4.0 where we don't know how things are going to be. With the beginning of the actual internet you had collaborative and simultaneous forms of work, and clearly a phase of euphoria: everything is networked, everyone can take part in a decentralized, simultaneous way, and that way we will no longer lag behind the progress in complexity — we will be ahead of the curve, as people liked to say. So the internet was going to be the source of, and solution to, all the problems, and with the old methods from phase two there was still a lot you could deal with. The older among us remember how the progression from Windows 95 to Windows Vista was managed, where everything was still quite static and security requirements were raised to a fairly new level — the last time a large project really delivered, even if users didn't quite notice it at the time. And together with that euphoria came all those very tangible setbacks and losses of control: the management of the human factor, which became more and more difficult; the working climate, as it were; the collaborative methods — sorry, the establishment of the open source idea as we know it today; the availability of libraries and the complexity of dependencies; the introduction of new layers between assembly or machine language and the person in front of the screen. All of that contributed to those last 30 years of software crisis 3.0 becoming more and more dynamic — and I remind you of historic topics such as computer-aided programming and so on. Today some people say we are in crisis 3.9, where all the countermeasures, emergency exits, dead ends and quicksands are kind of culminating, and we
don't know how long we can continue. Keywords here are standardization and lock-in: if you as a small or medium enterprise use Microsoft, that is the so-called industry standard. In the 70s the saying was "no one was ever fired for buying IBM"; it's the same today regarding Microsoft, but the problems keep getting larger. And many of us know the methods that were developed to save us from the consequences of ever-increasing complexity — agile, Scrum and so on. These were the emergency exits that became everyday tools, and they often became dead ends and quicksands and drove costs up, not to speak of the human factors and human emergencies involved. So what is the philosophical perspective? I'm not going to burden you with a lot of technical terminology from philosophy. From a non-technical, non-philosophical point of view, what you have is a crisis of trust, of control and of legitimacy. The sense of lost control over software is something everybody knows; politics is having a hard time — just think of the struggle to secure critical infrastructure and to deal with the security holes there. So all the authorities — financial, legal, moral — are experiencing a loss of trust, control and legitimacy. The philosopher sees this under the heading of ethics, and this line of thought has involved the concept of a golem for more than a century: an artificially created subject that is alive, that has to be assigned a certain ethical value too, and which you try to control. Agency is the word often used in English-language philosophy for the problems that arise when something you created yourself then acts on its own. And another term from English: moral hazard, which older people will know from real existing socialism — or, in a modern sense, the free-rider problem. As a human or a human group you don't actually act so as to optimize the common good; you pursue your
own agenda — and that is why agency comes in here. These are all issues with a long backtrack, and they are well documented; that is how philosophers work. Using this backtrack we try to conduct impact assessment from a technology-philosophy point of view, to assess what the consequences will be within a more or less defined amount of time. And that brings us to the hypothesis: the attempt to extrapolate the current problems and solution approaches, using the means of technology impact assessment, to see what might happen. The assumption that arises is this: the next phase of looking for a way out of all this complexity and loss of control could be to apply the whole AI — artificial intelligence — paradigm to software development itself. Precursors might be keywords such as low code or even no code: strategies, tools and methods that amount to software development no longer being done by humans, but rather by several layers of software — computers that you need in order to program computers. What are the known problems here, and the unknown extensions? The assumption that you need a computer to program a computer is something we've already seen in microchip production. We have the issue of the lack of accountability, not to speak of legitimacy — just think of the accountability of democratic elections, the reason we were always arguing against voting computers; the same could apply to software development. And what is the key question here? Well, there is a thought experiment, hard to contradict in philosophy, that is looking for a theoretical or empirical resolution: the observation that we still haven't managed to simulate enough neurons in a computer to even approach the complexity of the human brain — and the hypothesis, hard to contradict, of what happens if we did manage to amass the required number of neurons. Couldn't we reach a point where we surpass a certain
critical stage, where a certain kind of consciousness might arise? How, and by which criteria, will we notice that, and what happens if we do? That is where ethics will reach its next extensions regarding control and legitimacy. Shouldn't we then define machine rights as a counterpart to human rights? Would we have created a new kind of living creature? This seems very far-fetched and very marginal, but it might actually be a consequence of crisis 4.0: not just computers programming computers and becoming indispensable, but those computers that do program computers then developing a kind of consciousness. That raises the question of the image of the human and the image of the machine. How about conflicts — keyword cyber war, human–machine conflicts? How about the interface between human and machine; how do we notice that consciousness might be arising — all these questions under the keyword of transhumanism, hybrid creatures and so on. And from a macroeconomic point of view the issue is markets, democracies and human rights, transparency and machine rights. How does all that map out if evolutionary processes are in competition with each other, be it in markets or in some kind of political, hopefully democratic, processes? Do we need voting rights for machines that reproduce themselves? Is there something like a software Darwinism, and will we have certain markets that reproduce this — could they be distorted in some way? So there are economic issues here too. And with the changed role of the human: what is the indispensable-human question? That is something we as philosophers will have to think about. What happens if we have machines with a certain amount of consciousness that are supposed to program other machines — where are we then? And do we see precursors? I've mentioned keywords such as low code and no code; that seems very low-key, but could it possibly be a first indicator of what we have to look for? What may we hope, to quote Immanuel Kant — what is the optimistic
positive variety? Does it have to be dystopia, could it be a utopia, and how do we need to prepare? These are the kinds of questions that technology philosophy would have to put to an anticipated software crisis 4.0. And then the question to us as the Chaos community: what do we do? The AI problem is something we have dealt with before, and after sketching out what we are facing, let's look at the initiatives — and let's talk about it, if you want, right in the Q&A and in BigBlueButton. These are the keywords under which you can find me; let's see if in the next few months we can establish a dialogue. My colleagues from technology philosophy and I would very much welcome feedback. I also do a bit of radio — at both c-base and CCC Potsdam radio programs are being produced, so you could actually use those to put the debate on the air. If that's what you want, please let me know. Thank you — and that gets us to the Q&A.

Wow, what a talk — thank you! A virtual applause, as it gave insight into Pandora's box. I think it wasn't just software developers listening with interest; from software developers to philosophers, these are people who will find this interesting. So, to use one quote from Kant, "what may we hope" — and let me add: what do we have to fear?

Good question. I have tried to tease out what we would have to fear — what is that old Schwarzenegger movie called? Ever since the old science fiction novels of the 1920s, the scenario has been the machines turning against the humans. It doesn't have to be that way, of course. We may not be killed by some autonomous robot; it might just be a very subtle development, the loss of control in software projects taking on completely new forms that we may not even notice at first.

Okay, to link to that, the next question from the Q&A pad: what about low code and no code, as you talked about — wouldn't
that lead us to exactly this kind of golem? Is accountability of code even possible if, in a neural network, code and data are melted together?

That depends on the kind of expectations you have. Even though I have no real inside knowledge here — I have to listen to what people tell me — in the production of microchips we do know that the lowest layers are no longer really understandable by anyone. I think we are going through a very long gray area of loss of control, and it is not going to happen at a stroke. We will have to get used to it — or we will get used to it as generations die out. And of course there is always the hype of progress: people keep saying, okay, in a few years' time we will reach this kind of stage, and in the end it takes a long, long time. So I cannot really say whether what I'm trying to sketch as a kind of logical extension — if you can use that word — of the impact assessment will be here in 25 years or in 125 years; I have no idea. But the problems we have seen in phase three of the software crisis are growing on an exponential scale, it seems.

Yes, that's the impression I get too. Hardware is developing faster and faster, we are not catching up, and the faster we run, it seems, the faster we enter into a bigger crisis.

Yes. I sometimes get the question: this is all unrealistic, why would anyone do this, why would we enter into this huge struggle? Well, at the very beginning of my education I trained as an economist, and I remember the assumption of "no one would ever go to that kind of effort" — that assumption failed there as well, because the lack of experts and the lack of workforce will always lead people to say, okay, let's try a technological solution. Which doesn't have to mean that the transition into phase four consists only of great solutions; it might just as well happen that very bad solutions are deployed simply on the promise of cost savings. And I think
we shouldn't explain things with sinister conspiracies when cost savings and stupidity are enough to explain them. So it doesn't have to be the case that we get a full-blown phase four immediately; it will be more of a quicksand kind of development, a gray area that we pass through. And the argument for getting involved isn't simply "because we can" — maybe we can't — but we may believe we can, and we may believe that the cost savings are possible.

And that of course could be a kind of evolution, right? We try more and more, and that gets us deeper and deeper into the quicksand.

Could be, yes. Some people describe it that way, and the division is very arbitrary — you could of course divide it into eight or five or three phases — but if we try to extrapolate, then I think phase four is going to look as I described it.

And we see the problems now, don't we?

Oh yes, we do.

You also mentioned an aspect I would like to pick up: if our brain — the monkey brain that I brought with me — isn't even able to recognize itself in its whole depth, then the question is, will we ever be able to recognize an artificial intelligence, however it is going to look?

Yes. In the whole AI debate of the last, I think, 20 years, that was one of the main aspects. As you said, our brain structure imposes all kinds of limitations on becoming self-aware, and that will of course apply again. And not least the artists, the authors and the filmmakers are already demonstrating it to us: maybe there really is a kind of intelligence in it — not just a lower kind — and we just haven't noticed it yet. Without wanting to promote anyone: Marc-Uwe Kling — I'm just reading part two — is kind of enlightening, I think. The book you can get in black and white; no promotion here.

Yeah, I forgot the title too.

Yes — and you can deal with it in a humorous way too, of course, like Marc-Uwe Kling does, and maybe that is the only way.
Well, let's not be too pessimistic. One further question: "We always have to adapt the solution to the problem — question mark — couldn't we perhaps adapt the problem to the solution? Why do we need drones that navigate with ever more image processing if we could, quote, simply construct a pneumatic tube network? That network doesn't have to deal with other traffic patterns such as birds." So the idea is clear: just narrow the problem down and reduce it.

Yes — well, my spontaneous thought is that this hasn't really worked since the beginning of technological development. I'm not going to explain it simply with "because we can"; that might be a bit too simplistic. But as long as there is competition — and evolutionary systems are a kind of competition — people will always try to do things differently. And the whole terrible marketing problem exists here too: it's not so important that software actually has more capabilities, only that it can be sold as such — just a very broad, rough thought. So, independent of the social system, these are fundamental conditions of human evolution.

To extend this a bit further: maybe it's just this marketing machinery that is to blame for software becoming faster and faster, the programming becoming more and more difficult, and software doing things that it doesn't do well?

Well, I would go beyond marketing here. Economic systems without marketing exist — think of the Cold War, which we all have to think back to these days. There was a competition of systems there, and a kind of military innovation was tried out; it doesn't always have to be a market kind of system. You can have a bipolar, Cold War kind of model where two parties, two agents, express themselves through a scenario of mutual destruction and play it out one to one, as it were.

Yeah. Which of these probable scenarios or aspects that you are outlining do you believe will be the earliest to catch up with us?

Ah, well, my
glass ball — my crystal ball, sorry — broke two days ago. Well, first let's ask: from when do we start perceiving this as such? Some people might believe it's possible; some might say "oh, this is real AI", to pick one example, and others might say "oh, not at all". AI as a marketing term is of course ubiquitous — I like to say "artificial lower intelligence". You can run a kind of simulation like that; whether we accept it or not is a cultural and mentality question. The first experiments are 230 years old — the Mechanical Turk, we all know that story. And then came good old Leibniz with determinism and the thought that you could calculate the world with a machine — that actually carried on until the German Empire, the first German Empire. This edifice of thought is what the whole phase two was based on, when you believed you could take an engineering approach with a project sheet containing all the requirements from the start, and so on. And interestingly, the first programmers were mechanics, weren't they, dealing with the smallest kind of mechanical problems — and they were women, too. The males of the species were dealing with what we would today call the architecture, but the actual coding, the linking up of things as it was done at the time, was women's work, and it took until the early 70s for that to change. A very interesting sociological phenomenon, and there has been a lot of research on it.

A small ray of hope lit up for a while when you talked about the fact that we no longer look into the whole depth: all these libraries written in C in the 70s that we keep using — one positive aspect, maybe, is that there is a group of people still actually able to look into that, and that's the hackers. So we could tidy things up there, couldn't we?

Yes, they still exist. I still remember quite clearly when I was working in the run-up to the Y2K bug — and profiting from it, because I was still able to fix date
fields in COBOL and other things like that. It wasn't rocket science at the time, but people were actually being fetched out of retirement homes to solve these things — that's how it worked, that's what was done. And maybe we won't have any humans maintaining old code, but maybe there will be machines that are still able to do it.

Maybe picking up on that aspect: if I take artificial intelligence on one side and the Internet of Things on the other, maybe that will exponentiate the communication of machines among each other and cause an explosion of development — and, to extrapolate, maybe that might bring about a consciousness, because the human body too consists of billions of cells that communicate in a certain way.

Could be, of course. The question is when the participants in such a network come to regard their kind of perception as a consciousness. My fridge will probably not reach that stage; whether anything will reach it during my lifetime, we'll have to see. But the beginning of that phase will certainly be debatable, and there might be differences — some might accept it; consider the way the Japanese deal with robots in contrast to the way we deal with them.

Yes — and it would be a nightmare if the problem of care for the elderly could only be dealt with by robots, I don't know.

To grow old, or very old, means that my radius keeps shrinking until it consists of maybe just the room in my retirement or pension home — and if I have built the robots myself that then help me, that wouldn't be so bad, would it?

Well, then the nightmare you may fear is not mine, because I fear that a robot might kill me because it hadn't run the Windows update, or something.

Well, maybe you could take precautions by building the technological environment at your last address yourself. But we're drifting.

Yes, maybe we are. How about constraint-oriented programming — another question — with higher-order logic? I hope you can understand these terms from the
pad.

Yes, I can. These are programming-theory terms, and they are part of the path toward that — maybe on the borderline between 3.9 and 4.0. These approaches are what might lead to a situation where the next evolutionary step does look like that, because that's what results if you conduct an impact assessment on the basis of current problems.

I have another comment, talking about care for the elderly with robots: the writer thinks that is kind of good, because it would mean the emotional burden on care workers is no longer there.

Yes — it was in the 60s, I think, the ELIZA experiment; you can google it. People with mental problems, inhabitants of old people's homes, were quite ready to communicate with a computer — at the technological stage of that time — to accept the computer as a communication partner and then not feel alone anymore. And the computer would just mirror their statements back as questions, wouldn't it — so this kind of therapeutic approach.

Really, do you think so?

Yeah. "Why do you want to know how the weather is going to be?" — maybe that too. That is close to "sorry, Dave, I can't do that". And of course the good old Turing test would receive an update there: how do we recognize what kind of software entity we're dealing with?

Yes, but the test you apply is always an individual question, and also a cultural one. We'll have to see. And the question is whether, in the 3.9 phase, we will merely progress to 3.99 or actually get across the threshold.

Yes. And actually, maybe it's not a crisis of the software at all. Just to give an example: think of Siri — ELIZA is called Siri these days. If I were to activate it, ask Siri to set an alarm for me and then say "thank you" to Siri, I could imagine that many people who are not so close to technology — not like people who look at technology and hardware all the time — might see this as a kind of life form with which they interact.

Yes. Well, I did try to limit the scope of what
I'm talking about to software. It doesn't have to be that all of our reality changes in this way — that might be too ambitious a thought. But maybe in 25 or 125 years, I think, the next stage of the crisis will be there, and it might be caused by different things: maybe the artificial-lower-intelligence challenges will no longer be in step with the hardware advances, or maybe there will be a cultural conflict where people around the globe don't agree on what the next phase should or will be. And, in a very biological sense: if the latest organic storage chips are dead, maybe the only thing left that is able to read that code will be machines.

A small ray of hope concerning Siri: if Siri does set the alarm for me and I don't say thank you — I have actually experienced this twice — that caused Siri to simply crash. So, let's see.

Do we have anything else? Looking through the pad, I think the Q&A pad has been exhausted. One hint: maybe the fascination of a simpler solution — a link, solar.lowtechmagazine.com. I have no way of verifying or spelling that out here — solar.lowtechmagazine.com, a recommendation from the audience.
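As an editorial aside: the pattern-reflection trick behind ELIZA, which came up in the Q&A above — the program does not understand anything, it only mirrors the user's statement back as a question — can be sketched in a few lines. Everything below (the rules, the pronoun table, the `respond` helper) is an illustrative assumption for this transcript, not Weizenbaum's original DOCTOR script:

```python
import re

# Minimal ELIZA-style responder: first-/second-person words are swapped,
# then a matching template turns the statement into a question.
PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are",
            "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Why do you say that {0}?"),  # catch-all
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the echo sounds like it comes from the other side."""
    return " ".join(PRONOUNS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    statement = statement.rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel alone"))   # -> Why do you feel alone?
print(respond("I am tired"))     # -> How long have you been tired?
```

The point of the sketch is how little machinery is needed for people to accept the machine as a conversation partner: a handful of regular expressions and a pronoun table, no model of the world at all.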