
Why JSON Came into the Picture

After 2005, applications and user requirements started growing rapidly. Hardware and software advanced, and the advent of Single Page Applications and the modern mobile and web apps we know today needed a data interchange format to function seamlessly. Technology shifted toward a new language-independent data interchange format, and that is when JSON came into the picture. JSON gained popularity quickly because it makes transferring data very easy; it is also lightweight and easy to read and understand. There are a few other reasons JSON became a buzzword after 2005:
1. Flexibility for rapid application development.
2. APIs (application programming interfaces).
3. Modern web and mobile applications.
4. The need for faster data transfer across various kinds of devices.
5. Big Data and variety in data.
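As a minimal illustration of why JSON works so well as a language-independent interchange format, here is a sketch using the Python standard library: a native object is serialized to JSON text that any JSON-aware client in any language could parse back. The payload contents are a made-up example.

```python
import json

# A language-native structure (here a Python dict) to be exchanged
payload = {"user": "alice", "active": True, "scores": [10, 20, 30]}

# Serialize to a compact, language-independent JSON string
text = json.dumps(payload)

# Any JSON-aware client can parse it back into its own native types
decoded = json.loads(text)
print(decoded["scores"])  # [10, 20, 30]
```

The same `text` string could just as easily be decoded by JavaScript, Java, or Go, which is exactly the interchange property that made JSON take off.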

Is it time to revisit PostgreSQL and MySQL with JSON support

Hi friends, this is an important time to put to productive use. The world is suffering from the Covid-19 pandemic, and everyone is confined at home; the Government of India has asked people to stay home for 21 days. So I decided to share my long-pending posts as a series. One of my favourite articles, which I wrote a year back, finally gets its chance, and I will share it as a series. Here are some topics related to the current article, "Is it time to revisit PostgreSQL and MySQL with JSON support". JSON stands for JavaScript Object Notation and was first formalized by Douglas Crockford. JSON is a data interchange format: a method of storing and transferring data. Common uses include data conversion (such as JSON to SQL) and exporting data from proprietary web or mobile apps. XML was the big buzzword of the early 2000s; JSON became the buzzword in the years that followed. Why did JSON come into the picture, and what impact does JSON have on databases?

Hybrid Database Architecture with In-Memory & Disk

Storing data in main memory gives performance benefits, but it is an expensive method of data storage. One approach to realizing the benefits of in-memory storage while limiting its cost is to store the most frequently accessed data in memory and the rest on disk. This is called a hybrid in-memory/disk architecture. Some benefits of using a hybrid architecture are listed below:
1. Performance, which is enhanced by sorting, storing, and retrieving specified data entirely in memory rather than going to disk.
2. Lower TCO from reduced hardware investment, because a less costly hard disk can be substituted for more memory.
3. Fewer points of failure through persistence: data stored and managed exclusively in RAM is at risk of being lost upon a process or server failure, so every in-memory database provides persistence to disk. This usually works by hardening a transaction log from memory to disk; different vendors implement the mechanism differently.
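The hot/cold split described above can be sketched in a few lines of Python. This is a toy model, not how any real database implements tiering: the "disk" tier here is just a second dictionary, and eviction uses simple LRU ordering.

```python
from collections import OrderedDict

class HybridStore:
    """Toy hybrid store: a small LRU-managed in-memory tier,
    with evicted entries spilled to a (simulated) disk tier."""

    def __init__(self, memory_capacity=2):
        self.memory = OrderedDict()   # hot tier (stand-in for RAM)
        self.disk = {}                # cold tier (stand-in for disk)
        self.capacity = memory_capacity

    def put(self, key, value):
        self.memory[key] = value
        self.memory.move_to_end(key)           # mark as most recently used
        while len(self.memory) > self.capacity:
            cold_key, cold_val = self.memory.popitem(last=False)
            self.disk[cold_key] = cold_val     # evict LRU entry to "disk"

    def get(self, key):
        if key in self.memory:                 # fast path: served from RAM
            self.memory.move_to_end(key)
            return self.memory[key]
        if key in self.disk:                   # slow path: promote from disk
            value = self.disk.pop(key)
            self.put(key, value)
            return value
        raise KeyError(key)

store = HybridStore(memory_capacity=2)
store.put("a", 1); store.put("b", 2); store.put("c", 3)
print("a" in store.memory)  # False: "a" was evicted to the disk tier
print(store.get("a"))       # 1: promoted back into memory on access
```

Real engines add the persistence piece on top of this: every write is also hardened to a transaction log on disk so the hot tier can be rebuilt after a crash.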

In-Memory Databases and In-Memory Engines

Nowadays almost every relational or non-relational database offers in-memory capability. Traditional database vendors provide it through an add-on in-memory engine, while others are specialized in-memory vendors. MySQL has the MEMORY table engine, SQL Server has the In-Memory OLTP engine, SAP has HANA, Oracle has TimesTen, and MongoDB has a separate in-memory storage engine. These are all version- and edition-dependent, and using them can mean extra licensing cost. Apart from that, there are popular specialized in-memory database vendors such as Redis, Aerospike, VoltDB, and ArangoDB. Thanks for reading. Please don't forget to like the Facebook page.

Why In-Memory Database

An in-memory database (IMDB) is a database whose data is stored in main memory to get faster response times. Currently the three most popular storage media for data are RAM, SSD, and HDD. RAM performs random reads/writes at around 100 MB per second; sequential reads/writes are even faster, at 1 GB per second and higher. An SSD is slower than RAM: its random access time is a fraction of a millisecond (about 1e-4 seconds), so we can randomly read around 10,000 blocks per second. An HDD's random access time is about 10 milliseconds (1e-2 seconds), which is 100 times slower than an SSD. As for price, on average each storage type is ten times more expensive than the previous one: an HDD is the cheapest, an SSD is ten times more expensive than an HDD, and RAM is ten times more expensive than an SSD. So if we need access times of 1 ms or less, the choice is RAM. Different database vendors have different size limits. There are some popular use cases where in-memory databases are used, such as session storage.
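The latency arithmetic above can be checked with a few lines of back-of-the-envelope Python. The SSD and HDD figures come from the text; the RAM access time (~100 nanoseconds) is an assumed typical value, not a measurement.

```python
# Order-of-magnitude random access times, in seconds.
# RAM value is an assumed typical figure; SSD and HDD match the text.
access_time_s = {"ram": 100e-9, "ssd": 1e-4, "hdd": 1e-2}

# Random operations per second is the reciprocal of access time
for medium, t in access_time_s.items():
    print(medium, round(1 / t), "random ops/s")

# HDD is ~100x slower than SSD per random access
print(round(access_time_s["hdd"] / access_time_s["ssd"]))  # 100
```

This is why the rule of thumb holds: an SSD delivers on the order of 10,000 random reads per second, an HDD only about 100, and RAM several orders of magnitude more than either.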

Why Oracle or SQL Server DBAs Like Postgres More Than MySQL

With Postgres we can create database server architectures similar to Oracle or SQL Server as well as MySQL, and it also offers MongoDB-like features (JSON document storage). Postgres supports all sorts of performance optimizations that we use in Oracle or SQL Server; I think that is where MySQL is lacking. PostgreSQL supports materialized views and temporary tables, whereas MySQL supports temporary tables but does not support materialized views. PostgreSQL implements the SQL standard very well, including support for "advanced" SQL features like window functions and common table expressions (now also supported in MySQL 8.0). Postgres is very innovative in how PL/pgSQL interacts with SQL, and it supports lots of advanced data types, such as (multi-dimensional) arrays and user-defined types. MySQL is only partially standards-compliant on some versions (e.g., it does not enforce CHECK constraints). PostgreSQL is widely used in large systems where read and write speeds are crucial and data needs to be validated; in addition, it supports a variety of performance optimizations.
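To make the point about enforced constraints concrete, here is a small sketch using SQLite (bundled with Python) as a stand-in: PostgreSQL enforces CHECK constraints in the same way, which is exactly the data validation behaviour that some MySQL versions historically lacked. Table and column names are made up for the example.

```python
import sqlite3

# SQLite is used here only as a convenient stand-in; PostgreSQL
# enforces CHECK constraints the same way, rejecting invalid rows.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        balance REAL NOT NULL CHECK (balance >= 0)
    )
""")

conn.execute("INSERT INTO accounts (balance) VALUES (100.0)")  # accepted

try:
    conn.execute("INSERT INTO accounts (balance) VALUES (-50.0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # the CHECK constraint blocks invalid data
```

A database that silently accepts the negative balance pushes the validation burden onto every application that writes to it, which is the risk the paragraph above alludes to.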

MongoDB Compass

MongoDB Compass allows users to easily analyze and understand the contents of their data collections and to run queries without requiring knowledge of the MongoDB query syntax. Compass provides a graphical view of the MongoDB schema by randomly sampling a subset of documents from the collection; sampling minimizes the performance impact on the database and produces results quickly. Features of Compass:
1. Real-time server stats.
2. View, add, and delete databases and collections.
3. View and interact with documents with full CRUD functionality.
4. Build and run ad hoc queries.
5. View and optimize query performance with visual explain plans.
6. Manage indexes: view stats, create, and delete.
7. Create and execute aggregation pipelines.
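Under the hood, an aggregation pipeline like the ones Compass lets you build visually is just an ordered list of stage documents. The sketch below constructs one in plain Python; the collection and field names ("orders", "status", "amount") are hypothetical, and no server is needed to see the structure.

```python
# An aggregation pipeline is an ordered list of stage documents.
# Field and collection names here are hypothetical examples.
pipeline = [
    {"$match": {"status": "shipped"}},             # filter documents
    {"$group": {"_id": "$status",                  # group and total
                "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},                      # order the results
]

# With pymongo this would run as: db.orders.aggregate(pipeline)
print(len(pipeline), "stages:", [list(stage)[0] for stage in pipeline])
```

Compass's pipeline builder shows the output of each stage as you add it, which makes it a good way to learn exactly this stage-by-stage structure.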