READ Committed Isolation Level in Postgres

Recently we ran into a problem while solving a particular use case with Postgres: it was returning results that were hard to make sense of.


Postgres Version: 9.6
Table Name: test_store

  • ID as SERIAL
  • field1 as INT
  • field2 as TEXT

Isolation Level: Read Committed


  • UPDATE test_store SET field1 = field1 + 1
  • UPDATE test_store SET field1 = 0 where field1 = 10

The two queries above are launched more or less at the same time by our clients.

Initial State of the Table


After a couple of runs of the two queries mentioned above

Expected State of the Table


Final State of the Table


  • Values of field1 in the table crossed 10 and were never reinitialised to 0, which begs the question: how did the value of field1 cross 10 when we had a couple of runs of the two queries specified above?

Timelines for Query-1 and Query-2 looked as follows:

Time | Query 1  | Query 2
     ( Initial State of the Table )
T1   | Launched | Launched ( around the same time )
T2   | Launched | Launched ( around the same time )
     ( Final State of the Table )

Explaining the Behaviour

Now we started investigating the why of the issue. We learnt many things along the way, one of them being that every statement we executed ran within an implicit transaction block, even if we did not specify one on the client side. This meant that every UPDATE statement we sent from the client essentially translated to:


BEGIN;
UPDATE test_store SET field1 = field1 + 1;
COMMIT;

BEGIN;
UPDATE test_store SET field1 = 0 WHERE field1 = 10;
COMMIT;


The next step was to understand what was happening during the concurrent execution of these two UPDATE queries.

Initial State


First sequence of Transactions at T1

Transaction 1 Transaction 2
UPDATE test_store SET field1 = field1 + 1
UPDATE test_store SET field1 = 0 where field1 = 10 ( This statement was not blocked, as no target rows matched the condition field1 = 10 before the transaction began )

State of the table after the above two queries at T1


Investigating further, the Postgres documentation for READ COMMITTED says:

UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as SELECT in terms of searching for target rows: they will only find target rows that were committed as of the command start time. However, such a target row might have already been updated (or deleted or locked) by another concurrent transaction by the time it is found. In this case, the would-be updater will wait for the first updating transaction to commit or roll back (if it is still in progress).

The explanation for the queries executed at T1 is fairly simple:

  • Transaction 1 started and incremented field1 on every row; rows whose committed value was 9 now had uncommitted versions with field1 = 10.
  • In the meantime, transaction 2 started and ran the query “UPDATE test_store SET field1 = 0 where field1 = 10”, but in this transaction there were no committed rows in the table with field1 = 10. We did have dirty rows where field1 = 10, but as transaction 1 had not yet committed, those versions remained dirty and invisible. The READ COMMITTED isolation level, as the name implies, reads committed rows only.
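The visibility rule at T1 can be sketched with a toy model in Python ( the row values here are hypothetical, and this is a simplification, not Postgres's actual MVCC implementation ):

```python
# Toy model of READ COMMITTED visibility at T1 (hypothetical row
# values; a simplification, not real Postgres internals). Each row
# has a last committed value and possibly an uncommitted ("dirty")
# value written by an in-flight transaction.

committed = {1: 9, 2: 7}    # row id -> committed field1
dirty = {}                  # row id -> uncommitted field1

# Transaction 1: UPDATE test_store SET field1 = field1 + 1 (uncommitted).
for row_id, value in committed.items():
    dirty[row_id] = value + 1          # row 1 now has a dirty value of 10

# Transaction 2: UPDATE ... WHERE field1 = 10 searches only the
# committed snapshot, so the dirty 10 is invisible and nothing blocks.
targets = [row_id for row_id, value in committed.items() if value == 10]
print(targets)              # -> []
```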

Second sequence of Transactions at T2

Transaction 1 Transaction 2
UPDATE test_store SET field1 = field1 + 1
UPDATE test_store SET field1 = 0 where field1 = 10
COMMIT ( Transaction 2's UPDATE above was blocked until transaction 1 committed )


So essentially this is what happened to the UPDATE statements ( assuming nothing else was happening apart from these two transactions ) in the second sequence of transactions at T2.

UPDATE test_store SET field1 = field1 + 1

  • As this statement executed first, it took write locks on each of the entries in test_store. These were all committed entries in the test_store table, i.e. all the entries specified in the table.
  • This transaction got the required locks and hence updated the rows, incrementing field1 by 1.

UPDATE test_store SET field1 = 0 where field1 = 10

  • This statement first figured out the target rows it was going to update.
  • These target rows are selected based on the last committed state of the table, i.e. the state when transactions 1 and 2 had just started. That state looked something like this
  • In this state, the target rows for the update statement included only the row with ID = 2. Following the documentation, the update statement then tried to acquire the lock for ID = 2; as transaction 1 held the relevant write lock on ID = 2, transaction 2 blocked on that lock until transaction 1 committed ( essentially releasing all its locks ).
  • After transaction 1 committed, transaction 2 proceeded and got the relevant write lock for ID = 2. At this moment, with transaction 1 committed, the state looked something like this
  • When transaction 2 proceeded on the target row ID = 2, it rechecked the condition field1 = 10 and found that ID = 2 was not a target row after all. Hence the update statement completed successfully without updating a single row.
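The T2 interleaving in the steps above can be sketched the same way ( again with hypothetical values, as a simplification of what Postgres does ):

```python
# Toy model of the T2 recheck (hypothetical values; a simplification,
# not Postgres internals). Transaction 2 plans its target rows from
# the snapshot taken at command start, waits on transaction 1's lock,
# then re-evaluates the WHERE clause on the newly committed version.

snapshot = {2: 10}          # row id -> field1 as of command start
targets = [row_id for row_id, value in snapshot.items() if value == 10]
assert targets == [2]       # ID = 2 is picked; transaction 2 blocks on its lock

committed_after = {2: 11}   # transaction 1 commits field1 = field1 + 1

# After the lock is released, the condition is rechecked on the
# committed version; 11 != 10, so the row is skipped.
updated = [row_id for row_id in targets if committed_after[row_id] == 10]
print(updated)              # -> []
```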

Hence when the UPDATE command in transaction 2 ran, it did no updates whatsoever, even though we had a row with field1 = 10 before the transactions began and a row with field1 = 10 after they completed.
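Putting both timelines together, the runaway values can be reproduced with a small simulation ( a single hypothetical row, with the increment always committing first, as in the timelines above ):

```python
# Why field1 keeps climbing: in every round the increment commits
# first, so the reset's recheck (field1 = 10) is evaluated against a
# value that has already moved on, and the reset never fires.
# Single hypothetical row; the interleaving mirrors T1/T2 above.

field1 = 9                  # committed value before the first round

for _ in range(5):          # five rounds of the two concurrent queries
    planned = field1        # reset query picks targets from its snapshot
    field1 += 1             # increment commits first
    if planned == 10 and field1 == 10:   # target found, then rechecked
        field1 = 0          # never reached: by recheck time field1 is 11

print(field1)               # -> 14, having crossed 10 with no reset
```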


3 thoughts on “READ Committed Isolation Level in Postgres”

  1. As an Oracle veteran of 25 years, it’s always interesting to see people experience situations like this. My first database was Oracle, where the default is to never allow dirty reads; you just accept it as the norm. Many people start off using RDBMS systems that allow dirty reads of uncommitted data by default, which sometimes leads them to put severe custom locking strategies in place to handle the classic “phantom data” situation. When they come across READ COMMITTED concurrency, either by enabling it in the RDBMS or by default in something like Oracle, it’s quite a surprise. People get used to having never seen sessions blocking as they tussle over the same predicates. People panic about locking, but it’s not bad per se; locking must be handled correctly to ensure it’s used safely and effectively and doesn’t hamper performance. You wouldn’t remove the brakes on your car just ‘cos the pedal gets in the way! People try to bring the same methods and ways of working between RDBMS systems like SQL Server to Oracle; you cannot do this, as their default concurrency models work in different ways and must be respected and used effectively. That is one of the reasons why, as an Oracle veteran, I love PostgreSQL so much: it’s very similar to Oracle in the way it functions. This is a really great article on a classic RC situation, nice one!

  2. Classic transaction concurrency problem. Isolation levels and locks are one way to handle this, but another way is via SCNs, or a kind of record change version. You read first and keep the SCN of the record. While updating the record, you make sure the SCN matches the one you read in the first step, to make sure you are not updating an older version of the record. It makes the system more fault tolerant and provides higher throughput in concurrent transactions. However, it also depends on the use case.

    Nice read. Cheers.

    – Novneet
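The version-check ( SCN-style ) approach described in the comment above can be sketched as a toy in-memory example ( all names here are hypothetical; a real implementation would run something along the lines of an UPDATE with a version check in its WHERE clause and inspect the affected row count ):

```python
# Toy optimistic-concurrency sketch (hypothetical names, in-memory
# only): read a row together with its version, and apply an update
# only if the version is still the one that was read.

rows = {1: {"field1": 10, "version": 3}}

def read(row_id):
    row = rows[row_id]
    return row["field1"], row["version"]

def update_if_unchanged(row_id, new_value, expected_version):
    row = rows[row_id]
    if row["version"] != expected_version:
        return False        # stale read: caller should re-read and retry
    row["field1"] = new_value
    row["version"] += 1
    return True

value, version = read(1)    # reader sees field1 = 10, version 3
rows[1].update(field1=11, version=4)        # a concurrent writer commits first
print(update_if_unchanged(1, 0, version))   # -> False, no lost update
```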
