In this blog post, we will go over a simple, intuitive approach to atomically updating a hashmap that is distributed across multiple nodes.
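One common way to make such updates atomic (a guess at the post's theme, not necessarily its actual method) is to shard keys across nodes and guard each update with a versioned compare-and-set, retrying on conflict. A minimal in-memory sketch, with all class and method names being illustrative assumptions:

```python
import hashlib

class Node:
    """One shard of the distributed map, holding versioned values."""

    def __init__(self):
        self.data = {}  # key -> (version, value)

    def get(self, key):
        return self.data.get(key, (0, None))

    def compare_and_set(self, key, expected_version, value):
        version, _ = self.data.get(key, (0, None))
        if version != expected_version:
            return False  # a concurrent writer got there first; caller retries
        self.data[key] = (version + 1, value)
        return True

class DistributedHashMap:
    def __init__(self, num_nodes=3):
        self.nodes = [Node() for _ in range(num_nodes)]

    def _node_for(self, key):
        # Hash-based sharding: each key always lands on the same node.
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def atomic_update(self, key, fn):
        # Read-modify-write loop: retry until the CAS succeeds.
        node = self._node_for(key)
        while True:
            version, value = node.get(key)
            if node.compare_and_set(key, version, fn(value)):
                return

dmap = DistributedHashMap()
dmap.atomic_update("counter", lambda v: (v or 0) + 1)
dmap.atomic_update("counter", lambda v: (v or 0) + 1)
version, value = dmap._node_for("counter").get("counter")
print(value)  # 2
```

The retry loop is what makes the update atomic from the caller's point of view: a lost race simply re-reads and re-applies the function.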
High Number of Postgres Connections with Play + Slick
In this post, we will go through an interesting scenario we recently faced while debugging a high number of Postgres connections. I had started looking into pg_stat_activity to better understand the queries that were taking up a lot of time on Postgres. For those wondering what pg_stat_activity is, it is …
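The kind of inspection the post describes can be sketched as follows. The SQL below uses real pg_stat_activity columns (`pid`, `state`, `query`, `query_start`); the in-memory rows and the threshold are made up so the filtering logic can run without a live Postgres server:

```python
from datetime import datetime, timedelta

# A typical query against pg_stat_activity to spot long-running statements:
LONG_RUNNING_SQL = """
SELECT pid, state, query, now() - query_start AS duration
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY duration DESC;
"""

# Toy stand-in rows for pg_stat_activity (values are illustrative).
now = datetime(2024, 1, 1, 12, 0, 0)
rows = [
    {"pid": 101, "state": "active", "query": "SELECT ...", "query_start": now - timedelta(seconds=45)},
    {"pid": 102, "state": "idle",   "query": "COMMIT",     "query_start": now - timedelta(seconds=5)},
    {"pid": 103, "state": "active", "query": "UPDATE ...", "query_start": now - timedelta(seconds=300)},
]

def long_running(rows, now, threshold_seconds=60):
    """Return pids of non-idle backends running longer than the threshold."""
    return [
        r["pid"]
        for r in rows
        if r["state"] != "idle"
        and (now - r["query_start"]).total_seconds() > threshold_seconds
    ]

print(long_running(rows, now))  # only pid 103 exceeds the 60s threshold
```

Each row in pg_stat_activity corresponds to one backend process, which is why a pile-up of rows here maps directly to a high connection count.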
Improving Software Development with Learnings from LSMs
How can one improve the software development process in a startup, where time to delivery of a feature is of utmost importance? Let's understand how we can improve the feature development process by applying some learnings from LSMs (log-structured merge-trees).
Read Committed Isolation Level in Postgres
An interesting issue that occurred on the Postgres servers in our development environments
Best Practices for Creating Indexes in MySQL
MySQL is one of the most widely used databases in the world today. A lot of research and thought has gone into making MySQL + InnoDB indexes performant while keeping their space complexity in the same ballpark. In the context of this blog post, we will be going through the …
Understanding Transactions with MySQL and Slick
How full GC cycles on one JVM can impact latencies on other JVMs sharing a common MySQL RDS instance
Handling Large Amounts of Data with Parquet – Part 2
Learn about the knobs Parquet offers for tuning it to your application, so you can get the maximum benefit out of it
Handling Large Amounts of Data with Parquet – Part 1
Learn about the internal workings of Parquet and how it achieves world-class read and write performance
Exactly Once Semantics with Non-Idempotent Requests
Using PREPARE and COMMIT operations to ensure exactly-once writes or updates for non-idempotent requests.
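The core idea can be sketched in memory: the client attaches a unique request id, and both phases are no-ops when that id has already been seen, so retries never apply the write twice. The class, the id scheme, and the balance example are all illustrative assumptions, not an actual database API:

```python
class ExactlyOnceStore:
    def __init__(self):
        self.prepared = {}    # request_id -> pending delta
        self.committed = {}   # request_id -> applied delta
        self.balance = 0      # the non-idempotent state being updated

    def prepare(self, request_id, delta):
        # Retrying PREPARE with the same id is a no-op.
        if request_id in self.committed or request_id in self.prepared:
            return
        self.prepared[request_id] = delta

    def commit(self, request_id):
        # Retrying COMMIT with the same id is also a no-op, so the
        # delta is applied exactly once even if the client retries.
        if request_id in self.committed:
            return
        delta = self.prepared.pop(request_id)
        self.balance += delta
        self.committed[request_id] = delta

store = ExactlyOnceStore()
store.prepare("req-1", 100)
store.commit("req-1")
# A network timeout makes the client retry the whole request:
store.prepare("req-1", 100)
store.commit("req-1")
print(store.balance)  # applied once: 100
```

The request id is what converts a non-idempotent operation ("add 100") into an idempotent one ("apply request req-1"), which is the essence of the PREPARE/COMMIT pattern.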
Building Replicated Distributed Systems with Kafka
In this blog, we are going to understand how we can easily build fault-tolerant stateful services with Kafka. Kafka provides the concept of partitions, which can be used to sync writes among different replicas, ensuring that the primary and its replicas stay in nearly the same state.
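The replication idea above can be modeled in a few lines: a Kafka partition is approximated by an append-only list, every replica consumes from its own offset, and replaying the same ordered log drives all replicas to the same state. Class names, the key/value shape, and the offset handling are illustrative assumptions, not the Kafka client API:

```python
class Partition:
    """Toy stand-in for a Kafka partition: an ordered, append-only log."""

    def __init__(self):
        self.log = []  # append-only sequence of (key, value) writes

    def append(self, key, value):
        self.log.append((key, value))

class Replica:
    def __init__(self, partition):
        self.partition = partition
        self.offset = 0    # next log position to consume
        self.state = {}    # materialized key -> value view

    def catch_up(self):
        # Apply every write this replica has not yet seen, in log order.
        for key, value in self.partition.log[self.offset:]:
            self.state[key] = value
        self.offset = len(self.partition.log)

partition = Partition()
primary = Replica(partition)
follower = Replica(partition)

# The primary routes all writes through the partition, then applies
# them itself, exactly like any other consumer.
partition.append("user:1", "alice")
partition.append("user:2", "bob")
primary.catch_up()
follower.catch_up()

print(primary.state == follower.state)  # True: both replayed the same log
```

Because every replica applies the same writes in the same order, a follower can take over as primary after catching up on the log, which is what gives the service its fault tolerance.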