So this is joint work with Manfred Droste, who is also from Leipzig, and Werner Kuich from Vienna. And I will be speaking mostly about omega-languages today, so infinite words. More specifically, I will be talking about simple automata, and I will directly explain what they are. So this is a model which is more or less a kind of normal form for pushdown automata. And I guess you have some idea what a pushdown automaton is. Here we have a stack, and as I told you it's about infinite words, so the input word continues to the right. I needed and used this model for a logical characterization of this language class. And the properties that I needed it to have are, first, that it has no epsilon transitions, so it is a real-time model. And additionally, I needed a restriction on the stack access. This is what you normally have in a pushdown automaton: you replace the top letter by some word, which can also be epsilon. Instead, I will allow only three stack commands, more or less exactly like in the visibly pushdown automata that we saw in the last talk. So we can pop one letter A, we can push A, or we can ignore the stack. And note that these two restrictions together are actually not trivial, because restricting the stack commands alone is fairly easy, but done naively you would introduce new epsilon transitions. So this is actually the key idea of these simple automata. Additionally, I have two more properties: except when popping a letter A, you don't know what is on top of the stack, though you can keep it there if you want. And the last property is that you start with an empty stack. So this is the simple automaton model, and it already occurred once, or at least I found it once, hidden in a proof by Blass and Gurevich, also on nested words, by the way. And it has very good properties for other proofs where you want to go from pushdown automata to somewhere else.
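As a minimal sketch of the restricted stack discipline just described (the representation and names are mine, not the talk's), the three stack commands can be modelled like this:

```python
from dataclasses import dataclass

# The three stack commands of a simple automaton: unlike a general
# pushdown automaton, a transition may only pop one symbol, push one
# symbol, or leave the stack untouched -- and only a pop reveals the
# top symbol.

@dataclass(frozen=True)
class Pop:
    symbol: str   # applies only if `symbol` is on top of the stack

@dataclass(frozen=True)
class Push:
    symbol: str   # push `symbol`; the previous top stays below it

@dataclass(frozen=True)
class Ignore:
    pass          # leave the stack as it is (top symbol stays unknown)

def apply(cmd, stack):
    """Apply a stack command to a stack (list, top = last element).

    Returns the new stack, or None if the command does not apply.
    """
    if isinstance(cmd, Pop):
        if stack and stack[-1] == cmd.symbol:
            return stack[:-1]
        return None
    if isinstance(cmd, Push):
        return stack + [cmd.symbol]
    return list(stack)  # Ignore
```

Note that `Ignore` never inspects the stack, matching the property that the automaton only learns the top symbol when it pops it.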
How do you normally prove that this automaton model is expressively equivalent to all pushdown automata? You start with an arbitrary context-free language — here it's already for omega-words, but it also works for finite words — and you take a grammar in Greibach normal form. And we need a special version of this Greibach normal form. So here is an omega-context-free grammar: it has non-terminals, terminals, and production rules, a start non-terminal, and this F here is a set of Büchi-accepting non-terminals. So we'll be talking about Büchi acceptance here. And the strict version of the Greibach normal form only allows, on the right-hand side of every production rule, that you start with a terminal and then continue with either zero, one, or two non-terminals. And here I have an example for you, a context-free grammar for omega-words; this is the language. So there are three non-terminals, and these are the production rules. And we can simulate it by a simple automaton by using the non-terminals as states. So then you have four states. S is the start state, and S is also the Büchi-accepting state in the automaton, so this will also be a Büchi acceptance condition. And then you translate the production rules into transitions. So the rules with one non-terminal are translated directly like this: if you derive M and would continue with B, you do the same in the automaton, and this hash symbol here means that you ignore the stack. And here's a second example. Then if you have a production rule with two non-terminals on the right-hand side, that means you go from S to M and you have to keep that N for later. And this is what I have here: it's a push of N. So this downward-facing arrow means we push N onto the stack, and then we can use it later. There are two more examples, where you push B and S.
And now, whenever you have a production rule with no non-terminals on the right-hand side, that actually means that you look at the stack: what is there? So for example, if you find S, then you continue in state S — so this here means pop S. And likewise, if you find N, you continue in N, and if you find B, you continue in B. And yeah, as I told you, only when you pop something do you know what was on the stack before; here you don't know what was on the stack, and here you just ignore it. And this has some good properties that I needed. The general theorem is that for every omega-context-free language, you can find a simple automaton that recognizes this language. Now I use this in a logical characterization, as I told you, and I am currently working on a weighted logic, so I need this theorem in the weighted setting. And this is actually the goal of this talk and also of the paper. So let's do that. What is the weighted setting? I think most of you know, but here is a short reminder. Normally we talk about formal languages, which are subsets of all strings. Now you can view a language from another angle: as a function from strings into true or false — either a string is in the language or it is not. And of course we can generalize this to arbitrary sets A. We will restrict these sets in a moment, but for now just think of some weight structure A; then we have more general functions. These two kinds of functions, for finite words and for infinite words, are what we call series, and the class of all series over the weight structure A is written like this. Now, for the weight structure there are different choices, but the most common choice is a semiring. So here is a short definition of what a semiring is: it's an algebraic structure with two operations.
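The grammar-to-automaton translation described above can be sketched as a small function (my own encoding of strict-GNF productions, not the talk's notation): a rule X → a maps to a pop, X → aY to an ignore, and X → aYZ to a push.

```python
# Translation from a strict Greibach-normal-form grammar to a simple
# automaton.  A production X -> a | a Y | a Y Z becomes a transition
# reading the terminal a, with exactly one stack command:
#   X -> a     : pop some symbol Z from the stack, continue in state Z
#   X -> a Y   : ignore the stack, continue in state Y
#   X -> a Y Z : push Z (to be used later), continue in state Y

def gnf_to_transitions(productions, stack_alphabet):
    """productions: list of (lhs, terminal, nts), nts a tuple of 0-2
    non-terminals.  Returns (state, letter, command, target) tuples."""
    transitions = []
    for lhs, terminal, nts in productions:
        if len(nts) == 0:
            # the continuation is whatever was saved on the stack
            for z in stack_alphabet:
                transitions.append((lhs, terminal, ("pop", z), z))
        elif len(nts) == 1:
            transitions.append((lhs, terminal, ("ignore",), nts[0]))
        else:
            y, z = nts
            transitions.append((lhs, terminal, ("push", z), y))
    return transitions
```

For example, the productions S → aSB and B → b yield one push transition and one pop transition per stack symbol.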
And addition and multiplication have neutral elements, zero and one; both operations have to be associative, and the addition has to be commutative as well. Then we have distributivity, and zero also has to be absorbing for the multiplication. Later in the talk we will also need commutative semirings, where the multiplication is commutative too. And because we are talking about omega-words, we actually need more definitions, and they are not so easy to define, but I just want to give you an intuition without going really deep into them. So for example, a complete semiring is one that can handle infinite sums: if you have an infinite sequence of values, then you can sum them up. And continuous semirings are complete and additionally guarantee the existence of certain fixed points, which we need for equation systems and least solutions. And then we will also be talking about the star and the omega operation, which for the moment you can think of as behaving just like in the formal-language framework. [Question: Can you take sums over this operation?] Yes, this is a sum over this operation. And I hope you can see the examples — they're not too small, right? So this is the example I had on the previous slide with the Boolean semiring: either a word is in the set or it is not. Or you can, for example, count solutions: how many are there? This infinity symbol means that an infinite sum might actually be infinity. Then there are tropical semirings, for example, which are continuous; and what I also like very much is this semiring here, which you can use for probabilities, so probabilistic automata fit into our setting. This gives you some idea; let's continue. So you saw that in the unweighted proof we needed the Greibach normal form, and the paper actually contains two parts. The first is a proof that the Greibach normal form exists for weighted omega-context-free languages.
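A few of the semirings mentioned on the slide can be sketched concretely (a minimal interface of my own; the structures themselves — Boolean, counting, tropical, probabilities — are the ones named in the talk):

```python
from dataclasses import dataclass
from typing import Any, Callable

# A semiring (A, +, *, 0, 1): + and * associative, + commutative,
# * distributes over +, and 0 is absorbing for *.

@dataclass(frozen=True)
class Semiring:
    plus: Callable[[Any, Any], Any]
    times: Callable[[Any, Any], Any]
    zero: Any
    one: Any

# Boolean semiring ({False, True}, or, and): the unweighted case.
boolean = Semiring(lambda x, y: x or y, lambda x, y: x and y, False, True)

# Counting semiring (N u {inf}, +, *): counts e.g. numbers of runs;
# an infinite sum may be infinity.
counting = Semiring(lambda x, y: x + y, lambda x, y: x * y, 0, 1)

# Tropical semiring (R u {inf}, min, +): shortest-path-style weights.
tropical = Semiring(min, lambda x, y: x + y, float("inf"), 0)

# Viterbi semiring ([0, 1], max, *): probabilities, as in
# probabilistic automata.
viterbi = Semiring(max, lambda x, y: x * y, 0.0, 1.0)

def run_weight(sr, weights):
    """Multiply the weights along one run of an automaton."""
    w = sr.one
    for x in weights:
        w = sr.times(w, x)
    return w
```

In the tropical semiring a run's weight is the sum of its transition weights and the sum over runs takes the minimum, which is why it models shortest paths.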
So this will be the first part, and then the second part will be about simple automata again. Well, for the Greibach normal form we of course need some notion of grammar, and weighted grammars are called algebraic systems. This is for omega-words again, and intuitively you can just think of them as systems of equations that work more or less like context-free grammars. So if you look at a context-free grammar like this, then you again have a Büchi-accepting set; here, y1 has to be derived infinitely often. We only look at leftmost derivations, and this would be the language recognized by this grammar. And you can easily translate it into an omega-algebraic system, which looks more or less the same. So now we have two variables, and we are interested in solutions of this system. Multiple solutions are possible, but we are mostly interested in those solutions that look like what you would expect from the unweighted setting. In the infinite case we call them canonical solutions, and for those we also need the continuous semirings. The first canonical solution is the one where only y1 is Büchi-accepting, and then we get a solution which looks more or less like the one we had before. And this is actually the first component of the solution; the second component is just the inside of this bracket. We call all those canonical solutions algebraic series. So this is a specific kind of series, and we write them like this, with ALG in the exponent. Okay, so now we can actually come to the first main result of the paper. You take a semiring A that is continuous, and therefore also complete, and commutative, and we need the star and omega operations. Then, for a given series, the following are equivalent. The first is that the series is actually algebraic, so it is weighted context-free.
And the second is that we can write it as a component of a canonical solution of an algebraic system, exactly as I showed you before. And the third is that we can write it as an omega-Kleene closure. Well, you might know the omega-Kleene closure from the unweighted setting, where it looks more or less the same: we can write it exactly as this sum, where we have s and t, and the s and t are basically algebraic series over finite words. And this is important, because for finite words we already have the result that the Greibach normal form is possible. A theorem like this existed already; we improved the third part a little bit in order to continue with it. But the new part is basically the fourth point, where we extend the second by an algebraic system which is now in Greibach normal form. Okay, so I have a very short proof sketch for you, which I hope gives you at least an idea of how it works. So we start from this omega-Kleene closure. As I told you, it is a sum over finite algebraic series, where the second one carries this omega in the exponent. And now, given such a sum, what we do is find algebraic systems for s and t — and they can be in Greibach normal form, because we already know that we can make them be in Greibach normal form — and then we combine them into one system. Actually, the main contribution is finding that system, but then of course you also have to prove that it is the correct one, and this system has to be in Greibach normal form as well. And here, for the talk, I will restrict it a little: I assume that L is one, so we only have one summand here, and also that s1 applied to epsilon is zero; then we just have this one equation here, which makes it simpler. And then the first step is finding the algebraic systems for s and t. So this one is for s, and I have another one for t, which looks a little bit scary.
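In symbols, the omega-Kleene closure decomposition from the third point reads roughly as follows (notation reconstructed from the description; the bound L and the names s_l, t_l are my own):

```latex
% Every omega-algebraic series decomposes as a finite sum of products
% s_l \cdot t_l^{\omega}, where each s_l, t_l is an algebraic series
% over finite words.
r \;=\; \sum_{l=1}^{L} s_l \cdot t_l^{\,\omega}
```

The restriction used in the proof sketch then corresponds to the case L = 1 with s_1(ε) = 0.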
So here we have variables x1 to xn and xi prime, ranging over one to n and one to m, and we also have these polynomials p_i, p_i prime, p_ij, and so on. What is important for you to notice is that each right-hand side has at most one variable x — together with this variable, at most two variables — and they always start with a terminal, which puts them in Greibach normal form. And now we can combine them — this is the second step — into one big system, which looks like this. And this is also in Greibach normal form: these polynomials are the same ones from before, and then you have at most one variable behind them, so this is still in Greibach normal form. And well, the proof is very algebraic, so we can actually verify the solution using linear algebra, and this means we can show that the solution of the system is exactly what we wanted. So this is the first result of the paper. Now we want to use it for the simple version of the pushdown automata, so that means I will shortly introduce what pushdown automata look like. Namely, it's a tuple where we have n states — so we'll just save the value n here — then we have a pushdown alphabet, which of course will be used for the stack. We have an initial state vector, and this actually carries values: for every state we have one series which tells us with what weight we enter the automaton at this state. And then we have a pushdown matrix, and this is actually not so easy to understand, but intuitively you can just look at it like an adjacency matrix of your automaton. If this were not a pushdown automaton but a finite automaton, then you could just draw it as a graph, and the adjacency matrix of this graph would be our matrix M.
Now that we have a pushdown automaton, we somehow enrich this graph by the contents of the stack; then you get an infinite graph, and the adjacency matrix of this graph is actually M. You see there's a Gamma-star times Gamma-star in the exponent, so this is an infinite matrix, but some restrictions make it possible to represent it finitely. And then we have an integer k, where we say that the first k states are the Büchi-accepting states — you somehow have to reorder the states to make that possible. And then the behavior is exactly as you would expect: we start with the initial state vector and then apply the matrix M, this kind of adjacency matrix, omega many times, but some of the first k states have to occur infinitely often along the run, and we start with an empty stack, exactly as we had for simple automata. Now, that last definition was for general pushdown automata, and now we want to see what changes if we make it a simple automaton.
Namely, what we have is a more restricted type for M. Here is a polynomial that doesn't allow arbitrary strings or epsilon, and this is important: it means we don't allow epsilon transitions — this was the first restriction. And the second restriction is that we only have three stack commands. Either we pop something, so we replace p by epsilon; or we leave the stack unaltered, so we exchange p by p or epsilon by epsilon — you can always think of these indices as the top letter of the stack; or, the third stack command, we push something, namely here we push p1. And these equalities show that we actually don't differentiate between p, p and epsilon, epsilon, so that means we don't know what is actually on top of the stack if we ignore the stack. And that's the basic definition. So now, one more ingredient is the simple automaton for finite words, which we actually use. This is a result from a more recent paper on weighted simple automata over finite words, and here we need the semiring to be complete and commutative; then for an algebraic series we can find a simple automaton which recognizes it, or whose behavior is exactly this series. And this is for finite words. Now we generalize it to infinite words, and this is actually the second main result of the paper. So now we need A to be continuous and commutative, and of course we also need the star and omega operations, and then for a series r we can find a simple omega-pushdown automaton with behavior exactly this r. And again I have a very short proof sketch for you, just to give you the idea. So we start with a series r — well, this should say algebraic, so r is algebraic, of course — and because it's algebraic, we can write it as an algebraic system in Greibach normal form, and then it looks like this. So this is a little bit complicated again, but it's important to note that you always start with a terminal symbol a here, and then you can have two, one, or no
non-terminals on the right-hand side, and again you have n many of those equations. And now we're taking this over to the next slide. So this is again our algebraic system, and we transform it into an automaton — and this one is simple. What we do is we actually take those n variables and convert them into n states of the automaton; the n non-terminals are also used as the pushdown alphabet. And then what we have is again our three stack commands. So we have either a push, where you replace epsilon by y_k, and you do that exactly when you have y_k as the second non-terminal on the right-hand side; you can ignore the stack, by replacing epsilon by epsilon; or you can pop a symbol, where you pop it here and then go to state k, because you found y_k on the stack, and you do that exactly for the transitions with no non-terminal on the right-hand side. And again you then use linear algebra to prove that the behavior of this automaton actually fits your omega-algebraic system. Here is a summary. So I told you what simple automata are: they don't have epsilon transitions, and they restrict the stack access, only allowing to push or pop one symbol or ignore the stack. Then, to prove the Greibach normal form for omega-algebraic systems, you use the omega-Kleene closure to somehow lift the result from finite words; and for the weighted simple automata, you use the Greibach normal form from the first theorem and also the result for finite words. And I actually simplified this result a little for the talk: the automata and also the grammars actually contain both a finite part and an infinite part, and we always look at them together, but the simpler version is also fine. Thank you for your attention. [Question about the logical characterization.] So, for the unweighted case it was not previously known, but there is a paper that we submitted last year — I think it is accepted and it will appear. It is an MSO logic and it also uses
a kind of matching relation, which looks a little bit like in the part before: we don't have intervals, but we use the matching that we also see in visibly pushdown languages. And for the weighted setting, we are currently working on it; we want to use this result to make it work there. [Question: You mentioned certain kinds of fixed points in the continuous semirings. What kind of fixed points were you talking about?] So basically you need those mostly for the finite words. What you do is look at solutions of those equations here, but of course they are recursive, and you want fixed points of them — these are the solutions. If you look at the unweighted case, then in your head you would start with, for example, the empty language, plug it into your variables, compute which words you would find, and then do that again and again. You can do the same for the algebraic systems here, in the equations, and the nice thing is that the least fixed point will behave exactly like the solution in your unweighted setting.
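The iteration described in this answer can be sketched for the unweighted (Boolean) case (a toy example of my own, not from the talk): for the equation X = aXb ∪ {ε}, start from the empty language, plug the current value into the right-hand side, and repeat; the iterates grow toward the least solution {aⁿbⁿ : n ≥ 0}.

```python
# Kleene-style fixed-point iteration for the language equation
#   X = {a} X {b}  u  {eps}        (i.e. X -> a X b | eps).
# Words are truncated at a length bound so each iterate stays finite;
# up to that bound, the iteration reaches the least solution.

def step(lang, bound):
    """One application of the right-hand side X |-> aXb u {eps}."""
    new = {""}
    for w in lang:
        if len(w) + 2 <= bound:
            new.add("a" + w + "b")
    return new

def least_solution(bound, iterations):
    lang = set()              # start from the empty language
    for _ in range(iterations):
        lang = step(lang, bound)
    return lang
```

After enough iterations the iterate stabilizes; with bound 6 the result is {ε, ab, aabb, aaabbb}, exactly the words of the least solution up to that length.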