I want to give you a quick overview of transactions and concurrency control. It's going to be super short because Hadoop took more time than I planned; I'm sorry about that. The notion of a transaction is something I hope most of you have heard of. A transaction is a unit of program execution that accesses and possibly updates multiple data items. A transaction might run entirely within a database, or it may have external actions too. Our focus here is going to be transactions within a database; transactions with external actions are discussed in the book, and you can go read that up. Now, if everything ran perfectly and there were never any failures, the job of transaction processing would be a little easier. But in reality failures do happen, so you have to deal with them. Take this transaction transferring 50 dollars from account A to account B, written in a schematic way: read account A, subtract 50 from it, write it back; similarly, read B, add 50, write B. This is not actual code. In reality the read would be a "select balance from account where account_id = ..." and the write would be an "update account set balance = ... where account_id = ...". We are abstracting away those details and just saying A is an item for which we have an identifier. So if there is a failure between writing A and writing B, we have subtracted from A but not added to B. That means money has been lost from the banking system, which is not acceptable. If you flipped it around and did B first, adding 50 to B and then subtracting 50 from A, a failure in between could result in money added to B but not subtracted from A, which is also a no-no in a bank. So failures have to be dealt with: you essentially have to undo any partial work to clean up after a failure. Another issue is concurrent execution of multiple transactions; we will come to that. We will use this same funds-transfer example throughout. With respect to this example, atomicity means that the transaction may start and fail partway through, but the system should clean up so that it appears as if either the whole transaction executed or nothing happened at all. In other words, the transaction should appear to be atomic. How do you achieve this? As I said, you have to undo partially reflected effects. By the way, "atomicity" comes from the idea that an atom was considered indivisible, the smallest unit; of course we know atoms are divisible today, but atomicity here means the transaction is indivisible. The next property is durability. Durability means that once a transaction has executed, the database cannot forget about it. Say the transaction is done with these two writes, there is a failure after that, and the database forgets the whole thing. Some user is going to be very unhappy: they did the money transfer, but the database has rolled it back. So once the database says "yes, I have completed your transaction", it should never forget it. Until it says it has completed, a failure may still lead to the transaction being forgotten; but once it has confirmed that it has finished, once it has committed the transaction, it cannot lose that information. There is also a consistency requirement. For example, in a banking system, a consistency requirement is that the money across the bank's books should balance. What I mean is, if they take money from a person and add it to an account, they have just added money to the bank, so they actually do double-entry bookkeeping, where they keep a record somewhere else that says minus that much money. Don't worry about the details. The bottom line is that whatever transaction happens, the total money in the bank should stay the same, and that is a consistency requirement from the application. It is enforced by the application, assuming that the system will run transactions atomically. Other constraints, like primary and foreign keys, the database enforces; but it cannot enforce this kind of thing, that the sum of A plus B is unchanged.
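To make the schematic concrete, here is a minimal sketch of what the transfer might look like through JDBC. The table account(account_id, balance), the helper name, and the hard-coded amount are my own assumptions for illustration, not something from the slides.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Transfer {
    // Transfer 50 from account `fromId` to account `toId` as one atomic transaction.
    static void transfer(Connection conn, int fromId, int toId) throws SQLException {
        conn.setAutoCommit(false);          // group the two updates into one transaction
        try (PreparedStatement debit = conn.prepareStatement(
                 "UPDATE account SET balance = balance - 50 WHERE account_id = ?");
             PreparedStatement credit = conn.prepareStatement(
                 "UPDATE account SET balance = balance + 50 WHERE account_id = ?")) {
            debit.setInt(1, fromId);
            debit.executeUpdate();          // a failure after this point must be undone
            credit.setInt(1, toId);
            credit.executeUpdate();
            conn.commit();                  // both updates become durable together
        } catch (SQLException e) {
            conn.rollback();                // undo the partial effects, restoring A + B
            throw e;
        }
    }
}
```

The only point of the sketch is the shape: the two updates either both commit or both roll back, which is exactly the atomicity we have been discussing.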
Now, a transaction must see a consistent database. During its execution, the database may be temporarily inconsistent, meaning we have added 50 to B but not yet subtracted 50 from A, or the other way around; temporarily, the requirement is violated. But once the transaction completes successfully, the database must be consistent again. The idea is that your transaction logic must be correct and enforce this consistency constraint: as long as the transaction sees a consistent database coming in, it will leave the database in a consistent state. That is the job of the application programmer. The database can help by enforcing some constraints, but the rest are up to the programmer. The next property is the isolation requirement. Isolation you can understand when you have two transactions running. Here is the original transaction: it subtracted 50 from A, wrote it, and then it is reading B, adding 50 to B, and writing it. Suppose that while this is running, a new transaction came in at exactly this point, read A, read B, and printed A plus B. If you look at the result of that transaction, it is clear that the sum of A plus B is not preserved. Here I am just saying A plus B, but suppose both were your accounts and, in the midst of this transfer, you ran a query asking "what is my total balance?". The transfer should not affect your total balance: before and after the transaction, your balance is the same. But if the query read in between, your total balance would come out all wrong. So this T2 has seen an intermediate state of T1, and that should not happen. Isolation means that no transaction should be able to see a database state in which another transaction has run only partially; the other transaction should appear either to have finished or not to have run at all. You can trivially ensure isolation by running transactions serially, that is, one after another, but that is very inefficient, for reasons you will see. So the bottom line is that any database system should support the ACID properties: atomicity, consistency, isolation, and durability. For lack of time I am going to skip the rest of this slide; I have already explained what is in there. Now, some terminology. When a transaction starts, it is active. At some point, if everything goes well, it will say "I want to commit now"; at that point it is said to be partially committed. The database will then take certain actions to remember everything the transaction did, and once those actions are done, it will declare the transaction committed. While the transaction is running, it may at any time realize that it cannot go ahead because there is some problem, and at that point it is said to have failed. Once it has failed, you must clean up after it, and it then becomes aborted, which means the cleanup has been done. Even after partial commit, if the system crashes, the transaction may get aborted when the system comes back up. Only if everything went fine will it go from partially committed to committed. I am going to skip some details.
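These states and the legal moves between them form a small state machine. Purely as a summary of what I just said, here is a sketch in Java; the enum and method names are mine, not from the slides.

```java
import java.util.EnumSet;
import java.util.Set;

// The transaction states from the slides and the legal moves between them.
enum TxnState {
    ACTIVE, PARTIALLY_COMMITTED, COMMITTED, FAILED, ABORTED;

    Set<TxnState> next() {
        switch (this) {
            case ACTIVE:              return EnumSet.of(PARTIALLY_COMMITTED, FAILED);
            case PARTIALLY_COMMITTED: return EnumSet.of(COMMITTED, FAILED); // a crash here can still abort
            case FAILED:              return EnumSet.of(ABORTED);           // cleaned up, then aborted
            default:                  return EnumSet.noneOf(TxnState.class); // COMMITTED and ABORTED are final
        }
    }
}
```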
Going back: why do we even want to allow transactions to run concurrently? If you have ever been in a queue behind somebody who is taking a long time to be processed, you know what it feels like. You say, look, my purchase is just one item and this person is purchasing 100 items; can you please take me ahead of him, or in parallel with him? The same situation arises in databases: you have long transactions and you have small transactions. If you are able to run transactions concurrently, you get two benefits. The first is that a small transaction does not have to wait for a big transaction to complete, and the net result is reduced average response time: the small guys wait less, and everybody is happier overall. The other is that different transactions may be doing different things. One transaction may have asked for an I/O and cannot proceed until the I/O finishes; the disk is busy, but the CPU is idle, so another transaction that needs the CPU could use it. Running things in parallel means you can improve processor and disk utilization. In particular, today you have multi-core machines with many things running in parallel, and you have multiple disks, so you absolutely need concurrent execution to exploit them. The problem is that things can go wrong, as we will see in examples, and we need a concurrency control scheme: we want to control concurrent execution so that bad things do not happen. To understand what bad things can happen and how we can prevent them, the first concept we need is the schedule. What is a schedule? A schedule is simply a sequence of instructions that specifies the chronological order, meaning the time order, in which the instructions of concurrent transactions are executed. Let us see an example, which will make it clearer. The same transaction as before is T1: it reads A, subtracts 50 from A, writes A, reads B, and so on, and finally it says commit. Commit here means it is done: the transaction has been committed by the database. Here is another transaction, T2, which is slightly different: it reads A, computes temp as 0.1 times A, that is, 10 percent of A's balance, subtracts temp from A, adds that amount to B, and then commits. So here you have two transactions which, if run serially, T1 followed by T2, pose no problem; this schedule is okay. Note that in a schedule I am not showing two operations of two transactions running at the same instant, and that is a simplifying assumption. In practice, when you have multiple cores, many things can happen truly concurrently. There is a lower layer, which the database handles, providing mutual exclusion on shared variables, to make sure that if two transactions want to access a shared variable, only one of them can access it at a given instant. So things can actually happen concurrently, but if you have two operations on the same item A, say both writing A, some lower layer ensures that the two writes will not happen together: one may happen at some instant, and a nanosecond later the other write may happen. So there is always an order, and that is the assumption. So this is a schedule, in fact a serial schedule.
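If it helps to see the abstraction as code: a schedule is nothing but a chronological list of (transaction, operation, item) entries. A minimal sketch, with names of my own choosing:

```java
import java.util.List;

// A schedule is just a chronological list of operations, each tagged with
// the transaction that issued it.
public class Schedules {
    enum OpType { READ, WRITE }
    record Op(String txn, OpType type, String item) {}

    // Serial schedule: all of T1, then all of T2.
    static final List<Op> SERIAL_T1_T2 = List.of(
        new Op("T1", OpType.READ, "A"), new Op("T1", OpType.WRITE, "A"),
        new Op("T1", OpType.READ, "B"), new Op("T1", OpType.WRITE, "B"),
        new Op("T2", OpType.READ, "A"), new Op("T2", OpType.WRITE, "A"),
        new Op("T2", OpType.READ, "B"), new Op("T2", OpType.WRITE, "B"));
}
```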
Here is another serial schedule, where T2 ran first and then T1. Is the final result going to be the same? It is not. In this schedule, A has its initial value, so you take 10 percent of the initial value and transfer it to B, and only then subtract 50. In the first schedule, on the other hand, 50 dollars have already been removed from A when T2 runs, so temp is 10 percent of A after removing 50 dollars; temp there is 5 dollars less. So the T2-first schedule actually transfers more money from A to B than the T1-first schedule: the net result of T1 followed by T2 is different from T2 followed by T1. By the way, I do not think I mentioned it, but time flows downward in these schedules: this is the first thing that happens, then this, and so forth. So the final results may differ, but there is no concurrency issue here. Both are serial schedules, so both give a correct output; maybe not the same output, but both correct. Now take the same transactions and say they are interleaved like this: T1 read A, updated A, wrote it; then T2 came in, read A, computed temp, updated A, wrote it; then T1 read B and wrote B; then T2 did the same. Is this schedule safe? It turns out it is equivalent to schedule 1, the schedule where T1 runs first and then T2. Why? T2 reads an A which has already been updated by T1, so the A value it sees is the same as if T1 had completely executed, including the update to B. The final result is therefore the same as schedule 1, and the intermediate values it reads, temp and so on, are the same; it is equivalent to schedule 1, so this one is fine. And you will notice one other thing: look at two particular operations, T2's write of A and T1's read of B. In this interleaved schedule the write of A happened first, followed by the read of B; in schedule 1 it was the other way around, because T1 fully executed before T2, so the read of B clearly happened before the write of A. Does that matter? The answer is that it does not, if you consider just these two: if you flip the order and do read B first followed by write A, it has no effect at all, because they read and write different items. Similarly, if you take write B and write A, the order is irrelevant because they write different items. How about read B and read A? Reads never conflict: even if both read the same item, flipping their order leaves the net result the same. The problem is when one operation is a read and one is a write and both touch the same item; then you have a problem, and we will see examples. So this schedule is fine; there is no problem. Now let us come to a schedule which actually has a problem, schedule 4. T1 read A and subtracted 50 from A, but has not written A yet. Meanwhile T2 comes in, reads A, computes temp, writes A, and proceeds. Now T1 writes A. What value does it write? The original A minus 50. In between, T2 has subtracted from A and written A, and this write clobbers that: whatever update T2 made to A is wiped out, and the final value of A is the original value minus 50. And what is the final value of B? 50 is added by T1, and then temp is added by T2. So we have a problem. The net result does not even preserve the integrity constraint that A plus B should remain unchanged. Furthermore, it is not equivalent to either of the serial schedules, T1 followed by T2 or T2 followed by T1. This schedule has a problem, and we should not allow it. And look at it in terms of conflicting operations. If you try to move T1's remaining operations down, T2 has a read of B and T1 has a write of B; if you swap their order the result is different, because T2 read the old B, and if you moved its read down it would read the new B. On the other hand, if you try to move T2's operations up, both transactions have a write of B, and if you swap those the final value of B changes again. So essentially you cannot swap operations to turn this into a serial schedule, where T1 followed by T2 and T2 followed by T1 are the two possible serial schedules here. You cannot do these kinds of swaps to make it serial.
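You can check all of this arithmetic by simulating the three schedules. A small sketch, assuming initial balances A = 100 and B = 100 (my own choice; the slides do not fix the values):

```java
// Assumed starting balances: A = 100, B = 100, so A + B should stay 200.
public class ScheduleDemo {
    public static void main(String[] args) {
        double a, b, temp;

        // Schedule 1: T1 then T2.
        a = 100; b = 100;
        a -= 50; b += 50;                       // T1
        temp = 0.1 * a; a -= temp; b += temp;   // T2: temp = 5
        System.out.printf("T1;T2  A=%.2f B=%.2f sum=%.2f%n", a, b, a + b); // 45, 155, 200

        // Schedule 2: T2 then T1. Different final values, but still correct.
        a = 100; b = 100;
        temp = 0.1 * a; a -= temp; b += temp;   // T2: temp = 10
        a -= 50; b += 50;                       // T1
        System.out.printf("T2;T1  A=%.2f B=%.2f sum=%.2f%n", a, b, a + b); // 40, 160, 200

        // Schedule 4: the bad interleaving. T1's write of A clobbers T2's.
        a = 100; b = 100;
        double t1a = a - 50;                    // T1: read A, compute A - 50, not yet written
        temp = 0.1 * a; a -= temp;              // T2: read A, write A  (A = 90)
        a = t1a;                                // T1: write A -- T2's update to A is lost
        b += 50;                                // T1: read B, write B
        b += temp;                              // T2: read B, write B
        System.out.printf("bad    A=%.2f B=%.2f sum=%.2f%n", a, b, a + b); // 50, 160, 210
    }
}
```

Both serial schedules preserve A + B = 200; the interleaved one ends at 210, money created out of thin air.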
So that brings us to the notion of serializability: a schedule which is not serial, but which is equivalent to a serial schedule in some way. In what way equivalent? There are two notions: conflict serializability, and another called view serializability. We are going to skip view serializability; our focus today is conflict serializability. Before we define it, I am going to do one more thing: we will ignore all operations other than read and write. Our schedule had many operations, such as A := A - 50 and so forth. From the viewpoint of conflict serializability, which the database has to enforce somehow, those operations might be happening inside an application, and the database has no idea what the application does. All the database knows is that the application read some data and wrote some data; what happened in between, it has no idea. So anything we do should be based only on the read and write operations, and that is our focus. Strictly speaking this is not quite true: if you have an SQL query that does something, the database actually knows what it does; it is not just reads and writes. But our focus will be on reads and writes, and an SQL query can be thought of as doing several reads and several writes. Now we come to the notion of a conflict. Take two instructions, one from one transaction and one from another. They conflict only if both of them access the same data item, call it Q, it does not matter which, and at least one of them is a write: if one is a read and the other is a write, they conflict, in either order; if both are writes, they conflict; but if both are reads, they do not conflict. What do I mean by conflict? I mean that if I had a schedule where one ran first and then the other, I cannot flip their order without changing the outcome. If read Q came first and I flip the order so that write Q happens first and then the read, the value that the read sees will be different, so I cannot change their order in the schedule. Similarly for a write followed by a read: if I flip the order, the result changes. For write followed by write, the final result changes: if one goes first, the final value of Q is what the other one wrote, and if the other goes first, the final value is what the first one wrote. Read and read do not conflict; they can be swapped. That is the basic intuition, and it brings us to the notion of conflict serializability.
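The conflict rule itself is small enough to state as code. A minimal sketch of the predicate, with my own names:

```java
// Two operations conflict if they belong to different transactions, touch the
// same data item, and at least one of them is a write.
public class Conflicts {
    enum OpType { READ, WRITE }
    record Op(String txn, OpType type, String item) {}

    static boolean conflict(Op p, Op q) {
        return !p.txn().equals(q.txn())
            && p.item().equals(q.item())
            && (p.type() == OpType.WRITE || q.type() == OpType.WRITE);
    }
}
```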
So, if a schedule S can be transformed into another schedule by a series of swaps of adjacent non-conflicting instructions, we say that the two schedules are conflict equivalent. And in particular, if a schedule S is conflict equivalent to some serial schedule, it does not matter which one as long as there is one, we say it is conflict serializable. What is a serial schedule? Run one transaction fully, then another, then maybe others, one after another. Now, what we have just done is actually a very, very deep concept. It may not seem that way, but it is. You will really appreciate it if you consider the state of the world before these ideas existed: people would write parallel programs and just hope they worked, without even understanding what could go wrong. In fact, this is still the situation outside of databases. Who else writes parallel programs? Many people, and when I say parallel programs I do not mean parallel queries, which have no conflicts; I mean things which also do updates. In Hadoop this is not an issue, because Hadoop is designed for queries, not for updates. But many other systems, including device drivers, operating systems, and other such software, do things which are inherently parallel, and they have had a lot of problems with concurrent accesses to data structures, and a lot of bugs. In fact, Windows was notorious for crashing, and in the end Microsoft found out that most of those crashes were caused not by the Windows code but by device driver code which other people wrote. The drivers were buggy, and they made Windows crash. So later on there was a project at Microsoft to detect bugs in driver code, and they would refuse to let you ship a driver, at least pre-installed, unless it passed those verification tests. There was a group in Microsoft Research which came up with these ideas, and they saved Microsoft a great deal of grief; Microsoft is very grateful to that group, because after that the number of Windows crashes dropped drastically. These days you think of Windows as a relatively stable platform: unless you get viruses, it rarely crashes, very different from the old days. Anyway, the point is that doing things concurrently is very difficult; things can go wrong. In the context of databases, the notion of a schedule helps us understand when something is bad and when something is acceptable, and conflict serializability is the key condition: if a schedule is not conflict serializable, we say it is bad. So we have understood conceptually what is required. The next step is for the database to have a component called the concurrency control manager, whose job is to ensure that the schedules which get generated are conflict serializable. We are going to briefly see how that is done.
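Before that, note that testing a given schedule for conflict serializability is itself mechanical. The standard tool is a precedence graph, with an edge from Ti to Tj whenever an earlier operation of Ti conflicts with a later operation of Tj; the schedule is conflict serializable exactly when this graph has no cycle. A sketch, with naming of my own, not code from the course:

```java
import java.util.*;

// Precedence-graph test: edge Ti -> Tj whenever some operation of Ti conflicts
// with a later operation of Tj. Conflict serializable iff the graph is acyclic.
public class SerializabilityTest {
    enum OpType { READ, WRITE }
    record Op(String txn, OpType type, String item) {}

    static boolean isConflictSerializable(List<Op> schedule) {
        Map<String, Set<String>> edges = new HashMap<>();
        Set<String> txns = new HashSet<>();
        for (Op op : schedule) txns.add(op.txn());
        for (int i = 0; i < schedule.size(); i++) {
            for (int j = i + 1; j < schedule.size(); j++) {
                Op p = schedule.get(i), q = schedule.get(j);
                boolean conflict = !p.txn().equals(q.txn())
                        && p.item().equals(q.item())
                        && (p.type() == OpType.WRITE || q.type() == OpType.WRITE);
                if (conflict) edges.computeIfAbsent(p.txn(), k -> new HashSet<>()).add(q.txn());
            }
        }
        // Acyclic iff we can repeatedly peel off a transaction with no incoming edge.
        boolean changed = true;
        while (changed) {
            changed = false;
            for (String t : new ArrayList<>(txns)) {
                boolean hasIncoming = false;
                for (String s : txns)
                    if (!s.equals(t) && edges.getOrDefault(s, Set.of()).contains(t)) hasIncoming = true;
                if (!hasIncoming) { txns.remove(t); changed = true; }
            }
        }
        return txns.isEmpty();  // leftovers mean a cycle: not conflict serializable
    }
}
```

On the bad example coming up, read Q by T3, write Q by T4, write Q by T3, this builds edges T3 to T4 and T4 to T3, a cycle, so the test correctly says no.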
To help you understand the notion of conflict serializability better, let us take a few more examples. Here is one which models a schedule we saw earlier: T1 reads A and writes A; then T2 reads A and writes A; then T1 reads B and writes B; and then T2 reads B and writes B. Now let us perform the swaps. T1's read of B and T2's write of A: can they be swapped? Yes; move the read of B one step up and the write of A down. Next, that read of B and T2's read of A: no conflict, both are reads, and they are on different data items anyway. So now T1's read of B has moved up above both of T2's operations. Now do the same thing with T1's write of B: write B and write A are on different items, no problem, move it up; write B and read A, no conflict, different items, move it up. The final result is read A, write A, read B, write B by T1, followed by read A, write A, read B, write B by T2: the serial schedule with T1 first, then T2. What we have shown is that a series of swaps can turn the interleaved schedule into this serial one, so it is serializable. Now here is an example which is not serializable: T3 reads Q, T4 writes Q, then T3 writes Q. Can you move T4's write of Q up? If you did, the value that T3's read sees would be different; both operations are on the same data item Q, one is a read and one is a write, so they conflict, and you cannot move it up. On the other hand, can you move T4's write down? Both are writes on the same item, so they conflict, and you should not: if you did swap them, the final value of Q would be different. Right now the final value is whatever T3 wrote, but if you moved T4's write down, the final value would be whatever T4 wrote. So there is a problem: this is not equivalent to any serial schedule. It is a bad schedule; we are unable to swap things. Now here is a small quiz question; I will not bother using the clicker, I will just let you think about it for a moment. We have two transactions: T1 does read A and write A, and in between T2 does write B. Is this conflict serializable, or is it serial? It is clearly not serial, because T2 has an instruction in between two instructions of T1; if it were serial, T2 would either run entirely first or run entirely after T1 finishes. So, is it conflict serializable? The answer is easy here: the two operations are on two separate data items, so they do not conflict. You can move the write of A up and the write of B down, and you have a serial schedule where T1 runs first, then T2. In fact, here you can do another thing: if you compare the read of A and the write of B, they also do not conflict, so you can move the write of B up, and you get a schedule where T2 runs first, followed by T1. This is interesting: there are two different serial orders, both of which are equivalent to this schedule. The point to note is that there is really no conflict between these two transactions, so the order in which they run does not matter; but if there were a conflict, the order would matter. The last two concepts I want to focus on: the first is recoverability. Let us see what happened here: T8 reads A and writes A, and at this point T8 has not yet committed. Meanwhile T9 reads A, so it reads the value which T8 wrote, and then T9 commits; it is done. After this, T8 reads B. Now, is what we have so far serializable? T8's read of B and T9's read of A do not conflict, so you can swap them; it is serializable at this point. If T8 commits now, there is no problem: the schedule is equivalent to one where T8 ran first, followed by T9. The problem is that T8 has not yet committed. What if it aborts? Why would it abort? Maybe the system crashes at this point, so T8 cannot complete, and everything T8 did has to be undone. But now see what has happened: T9 read something that T8 wrote, and after the crash, to make T8 atomic, we are undoing everything T8 did. So T9 has seen a database state which we are pretending never existed. That is bad. This schedule is not recoverable: if a failure happens at that point, T9 is in trouble.
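Recoverability can also be checked mechanically once you track who read from whom. A sketch under my own naming and encoding assumptions, not code from the course:

```java
import java.util.*;

// A schedule is recoverable if no transaction commits before every transaction
// it read from has committed. Each step is (txn, op, item), with op one of
// "r", "w", "c" for read, write, commit; item is ignored for commits.
public class RecoverabilityTest {
    record Step(String txn, String op, String item) {}

    static boolean isRecoverable(List<Step> schedule) {
        Map<String, String> lastWriter = new HashMap<>();      // item -> most recent writer
        Map<String, Set<String>> readFrom = new HashMap<>();   // txn -> txns it read from
        Set<String> committed = new HashSet<>();
        for (Step s : schedule) {
            switch (s.op()) {
                case "w" -> lastWriter.put(s.item(), s.txn());
                case "r" -> {
                    String w = lastWriter.get(s.item());
                    if (w != null && !w.equals(s.txn()) && !committed.contains(w))
                        readFrom.computeIfAbsent(s.txn(), k -> new HashSet<>()).add(w);
                }
                case "c" -> {
                    for (String w : readFrom.getOrDefault(s.txn(), Set.of()))
                        if (!committed.contains(w)) return false; // committed before its writer
                    committed.add(s.txn());
                }
            }
        }
        return true;
    }
}
```

On the T8/T9 schedule above, T9 commits while it has read from the still-uncommitted T8, so the check returns false.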
So we only want schedules where, even if a failure happens, there is no problem with respect to other transactions. You can ensure this by never reading anything which is not committed. What went wrong here is that T9 read data written by T8 while T8 had not committed: if you read uncommitted data, you are in trouble, so you should not allow that. Another, less severe, but still real issue is cascading rollback. Here is a schedule: T10 has read A, read B, and written A, and it has not committed yet; T11 has read A and written A, also not committed. That is okay: if T10 aborts, you can abort T11. Now T12 has read something that T11 wrote; if T11 aborts, that is okay too, because you can abort T12 as well, since it has not committed. So this schedule is recoverable, because none of them has committed; you can make a schedule recoverable by postponing commit operations. However, the problem is that if T10 fails, T11 has to abort, and in turn T12 has to abort: you can have many aborts, and that is not desirable either. All of these problems can be avoided by not reading uncommitted data; everything here went wrong because something read a value written by T10 before T10 committed. So the basic principle is: do not read uncommitted data. A cascadeless schedule is one where cascading rollbacks cannot occur, and any cascadeless schedule is also recoverable. Let us see one more example. Consider this schedule: T1 does read A and write B, T2 does write A and read B, interleaved. What can we say about it: is it conflict serializable? (That is a different question from cascadeless.) How do we check? T2's write of A and T1's write of B are on different items, so they can be flipped: the write of A can move down and the write of B up, and the schedule is equivalent to a serial one where T1 runs first, followed by T2. If you try to swap the other way, to move T2 first, it does not work, because T2's write of A conflicts with T1's read of A, which comes first. So it is conflict serializable. So, concurrency control is a mechanism which any database should ideally provide, and which ensures that schedules are serializable. It turns out that, while this is a good goal, most databases provide only an approximation to it. If you insist, they will ensure serializability, but in practice, in order to show that their database gives better performance, most databases compromise on serializability and offer something slightly weaker, as we will see. So there are weaker levels of consistency, and there are real motivations for them. Suppose you want an approximate total of all balances, a rough idea of how much money the bank has, by adding up the balances of all accounts. That can be done in a way which is not serializable, and maybe that is acceptable. Or take statistics for query optimization, such as how many tuples there are and how many distinct values, where you may not care about exact values; it is okay to run that transaction in a way that is not serializable. If you insist on serializability, it may block other transactions and hurt performance. So people do use these weaker levels, and the SQL standard recognized the need for them. The standard says serializable is the default, which is actually a fiction: SQL says it should be the default, but no database ships with serializable as the default. You can ask for it, but it is not the default.
Now the levels themselves; let us start at the bottom. Read uncommitted means you can read anything, including uncommitted data. This is very bad, because you can read something which is then rolled back; we saw what a bad idea that is. The next level up, read committed, says that only committed records can be read. What it does not ensure is that if you read the same value multiple times you get the same answer. Why? You read a value once; it was written by one transaction which had committed. Before you read it the next time, another transaction wrote it, and that transaction also committed. The two reads of the same record give different values at this level of consistency. Nevertheless, read committed is very widely used; it is the default which most databases support. But it is not serializable, and it can get you in trouble. The next level up is called repeatable read, which means that not only do you read only committed records, but if you read the same record again you get the same value. That seems very close to serializability, but it is not. The difference: suppose a transaction inserted two records, and a second transaction saw one of the records but never saw the other. Repeatable read only says that if you read the same record again you get the same value; seeing the first record but not the second does not violate repeatable read. But if the transactions ran serially, this could never happen: if you see one record inserted by a transaction, you must see the other records it inserted; it is impossible to see one but not another. (This is the so-called phantom problem.) Therefore, even with repeatable read you can have non-serializable executions. SQL does provide a way to set the isolation level to serializable, and supposedly, once you do that, the database will guarantee serializability. It turns out, and I forgot to update this slide, that the slide says Oracle and PostgreSQL by default support a level of consistency called snapshot isolation, which does not actually guarantee serializability. The slide needs updating because, as of a little over a year ago, if you ask PostgreSQL to run at the serializable level, it actually does run at the serializable level. Earlier versions of PostgreSQL would run at a level which is not one of the standard ones, somewhere in between, called snapshot isolation, which also does not guarantee serializability. Oracle, as far as I know, still does this even now: if you take an Oracle database and tell it to run at the serializable level and then see what it does, it actually runs at a different level which is not quite serializable, and you can get into trouble because of that. PostgreSQL also still implements a version of snapshot isolation: older versions implemented plain snapshot isolation, while the newer version implements a variant called serializable snapshot isolation, which fixes the problems with snapshot isolation. Ideally I would have liked to cover snapshot isolation in detail in the concurrency control part, but we probably do not have time, so I am just going to skim over those details later today. Now, the last part of this chapter: how do you control transactions in SQL? According to the standard, a transaction begins implicitly and finishes when you issue commit work or rollback work. In practice, databases will commit each SQL statement as soon as you execute it; this behavior is called auto-commit, and you can turn it off.
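Here is roughly what that transaction control looks like from JDBC; a minimal sketch, assuming an already-open connection, with the serializable isolation request included to show how you would ask for it:

```java
import java.sql.Connection;
import java.sql.SQLException;

// A minimal sketch of transaction control through JDBC. The connection `conn`
// is assumed to have been opened already (e.g. via DriverManager.getConnection).
public class TxnControl {
    static void runAtomically(Connection conn) throws SQLException {
        conn.setAutoCommit(false);  // stop committing after every single statement
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); // ask for full serializability
        try {
            // ... execute the statements that make up the transaction ...
            conn.commit();          // make the whole transaction durable
        } catch (SQLException e) {
            conn.rollback();        // something went wrong: undo all of it
            throw e;
        }
    }
}
```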
In JDBC, as the sketch above shows, you turn off auto-commit with connection.setAutoCommit(false). If you are building an application which actually needs atomic transactions, you had better do this: turn auto-commit off, perform the steps of your transaction, and then say commit, or rollback if something went wrong. Otherwise there is no guarantee that your application will actually be serializable: even if you told the database to run in serializable mode, the JDBC connection might still be running in auto-commit. So that is it for this chapter. There is just one question so far on transactions. The question is: what is the difference between a serial schedule and a serializable one? As I said, a serial schedule is one where transactions run one at a time, with no interleaving: a transaction starts, it runs until it finishes, and only after that can the next transaction start. Serializable, on the other hand, does not imply that things are running serially. It means the schedule is equivalent, in some sense, to a serial schedule, and therefore it cannot have concurrency problems: the effect is exactly the same as some serial schedule, so concurrency problems cannot occur. Our goal is to ensure that any schedule is serializable. We want to allow concurrent schedules, to get the benefits of concurrency, while at the same time ensuring that nothing goes wrong; and to do that, we want to show that the concurrency control scheme ensures that every schedule is serializable. That was the only chat question. I can take one or two questions, but only on serializability; nothing else at this point, please. Rajalakshmi, go ahead please, we can hear you. Sir, good afternoon sir. We have seen that transactions can run concurrently. There is a concept called locks. How can I identify which lock is currently held on which resource? So, I have not yet come to locking; I am going to cover it in the next half an hour, a very quick introduction to locking. Your question is: how do you know what locks are held in a database at a given point in time? Most databases provide some way to look at the current locks; it is database-specific, not a general-purpose thing. There are tools, front ends to Oracle for example, which will let you view the locks held in Oracle. As for pgAdmin, I do not know whether PostgreSQL actually gives you a way to see which locks are being held by whom; I am not sure about that, so I will not get further into it. Any other follow-up question? Let me take a question from chat: please explain the problems of concurrency, like dirty read, unrepeatable read, and lost update. This is a good point. I said that with a non-serializable schedule bad things can happen, and I gave some examples. People went and asked: can we classify the bad things that can happen? They came up with several categories, so let me list them, since the question has been asked. The first one is dirty read. A dirty read is the same as reading an uncommitted value; this is something we want to avoid, and the read committed level avoids this particular problem. The next problem is called unrepeatable read; well, actually, lost update, let me come to that first. A lost update happens in the following scenario. You have T1 and T2, and there is a write of A by T2. Suppose T1 then does the same thing,
that is, T1 writes A without having read A in between. What I mean by this notation is the following: if T1 had read the value written by T2 and then written A, it would have taken T2's value into account; that particular write would have been taken into account by T1. But suppose T1 did not do that. Then T2 has written something, and T1 has effectively overwritten it without ever seeing what T2 did. This is an example of a lost update: the update done by T2 is lost. You do not want this to happen; of course, the concurrency control technique had better prevent this kind of lost update, and locking is one way to prevent it. The last one is the unrepeatable read. Suppose you had the following: T1, let us say, writes A and commits; T2 reads A, say the value written by T1; then T3 comes along, reads A, writes A, and commits; and now T2 reads A again. In this situation, the value T2 read the first time was committed — we are in read committed mode, so it read that committed value. But when T2 comes back and does the read again, the current committed value of A is the one T3 wrote, so it is going to read a new value. The first time it read A, it got some value; the second time, a different value. That is an unrepeatable read. These are symptoms you can observe, which reflect the fact that you are not running in serializable mode. If you go back to the slides, the levels of consistency in SQL were defined taking into account exactly these observable symptoms: lost updates and so on. For lack of time, I am going to skip the details.
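To make the unrepeatable-read symptom concrete, here is a hedged sketch of how you might provoke it from JDBC, on a database whose read committed level behaves as described, reusing the hypothetical account(account_id, balance) table from earlier; the helper names are mine:

```java
import java.sql.*;

// Sketch of the unrepeatable-read symptom at READ COMMITTED: T2 reads the same
// row twice while T3 updates and commits in between, using two open connections.
public class UnrepeatableRead {
    static double readBalance(Connection c, int id) throws SQLException {
        try (PreparedStatement ps = c.prepareStatement(
                "SELECT balance FROM account WHERE account_id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) { rs.next(); return rs.getDouble(1); }
        }
    }

    static void demo(Connection t2, Connection t3) throws SQLException {
        t2.setAutoCommit(false);
        t2.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

        double first = readBalance(t2, 1);     // T2's first read: some committed value

        // Meanwhile T3 updates the same account; auto-commit makes it commit at once.
        try (Statement s = t3.createStatement()) {
            s.executeUpdate("UPDATE account SET balance = balance - 10 WHERE account_id = 1");
        }

        double second = readBalance(t2, 1);    // T2's second read: the value T3 wrote
        System.out.println(first == second);   // typically false: the read was not repeatable
        t2.commit();
    }
}
```

At repeatable read or serializable, the same two reads inside T2 would return the same value.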