
Posts

Showing posts with the label performance

Database Performance Troubleshooting Methodologies and Dimensions

When you are assigned the task of optimizing a database or tuning the performance of an application, there are several dimensions along which to approach it. The causes of a slow application are too many to describe on a single page, but they can be summarized in a table. The table below lists database and application performance dimensions, their relative weight, and how to begin work on each.

Performance Dimension                   | Percentage | Process Strength | Activity Strength     | Remarks
Application Design and Business Process | 25.00%     | Long process     | Lower priority        | Module-wise activity.
Database Schema Design (Logical)        | 15.00%     | Medium           | Follow best practices | Requires short downtime; module-wise activity.
Database Maintenance                    | 15.00%     | Quick process    | Required on OLTP      | Short downtime, weekly or monthly.
Indexing                                | 15.00%     | Quick process    | Required on OLTP      | Short downtime, weekly or monthly; module-wise activity.
Server Hardware (CPU/Memory/other)      | 12.00%     | Medium process   | Follow …
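As a minimal sketch of the "Database Maintenance" and "Indexing" rows, here is what a short weekly maintenance window might run on a PostgreSQL OLTP database. The table and index names (orders, idx_orders_customer) are hypothetical, and the right commands depend on your engine and workload:

    -- Reclaim dead tuples and refresh planner statistics (hypothetical table name).
    VACUUM (VERBOSE, ANALYZE) orders;

    -- Rebuild a bloated index during the short downtime window (hypothetical index name).
    REINDEX INDEX idx_orders_customer;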

How to load a huge amount of data from CSV into Postgres

Recently I got a challenge from some developers who claimed our PostgreSQL server was performing slowly: in their opinion it was not able to load 5 lakh (500,000) records from a CSV file. They showed me their application, which used Node.js with the Sequelize ORM. When executed, it really did get stuck in the middle. 5 lakh records is a fair amount of data, but Postgres is also used for high-performance applications, so there was no question of simply concluding that it cannot handle that volume. I suggested they start by importing a small number of records: the top 6, then the top 600, then 6,000, then 60,000. They did, and the import really did stall after 60,000 records. It failed on the application side, with the program just running and running and running. So the ball landed in my court, and I started digging into the database server configuration parameters such as effective_cache_size, work_mem, shared_buffers, max_connections, wal_buffers, etc., giving them appropriate values as per best practices and the current system resources. Then re…
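A hedged sketch of both halves of this fix follows. Server-side COPY is generally far faster for bulk CSV loads than row-by-row ORM inserts, and ALTER SYSTEM can set the parameters mentioned above. The table, columns, file path, and the values shown are all illustrative assumptions, not recommendations for every machine:

    -- Bulk-load the CSV in one pass (hypothetical table, columns, and file path).
    COPY staging_orders (id, customer_id, amount, created_at)
    FROM '/var/tmp/orders.csv'
    WITH (FORMAT csv, HEADER true);

    -- Illustrative settings; actual values depend on available RAM and workload.
    ALTER SYSTEM SET shared_buffers = '2GB';       -- requires a server restart
    ALTER SYSTEM SET work_mem = '64MB';
    ALTER SYSTEM SET effective_cache_size = '6GB';
    ALTER SYSTEM SET wal_buffers = '16MB';
    ALTER SYSTEM SET max_connections = 200;        -- requires a server restart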

Postgres is also an option, not just MySQL

Last month I got a chance to optimize some queries of a PostgreSQL database in my current organization. I was totally new to this database: SQL Server, MySQL, MongoDB, and now Postgres. I never say no when some database work comes my way, and especially if the database belongs to the RDBMS family I can never say no, as I am always keen to learn new trends. My earlier opinion of Postgres was wrong. I used to think it was less popular than MySQL, but when I started working with it, I found it much better than MySQL in some scenarios, such as query optimization (using execution plans and statistics). Technically, Postgres is "better SQL", as it is more standardized and has better query planning, so you can follow the relational model more closely if it suits you. Socially, MySQL has been very popular for a long time and many people are familiar with it. Postgres is neither owned by a major conglomerate with a qu…
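As a minimal sketch of the kind of execution-plan inspection mentioned above (the table, columns, and filter are hypothetical), Postgres shows the planner's chosen plan together with actual row counts and timings:

    -- Compare estimated vs. actual rows to spot stale statistics or bad plans
    -- (hypothetical table and filter).
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT customer_id, SUM(amount)
    FROM orders
    WHERE created_at >= '2020-01-01'
    GROUP BY customer_id;

    -- Refresh planner statistics if the estimates look far off.
    ANALYZE orders;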

How to export performance metadata and statistics

The performance of a query execution (stored procedure or ad-hoc query) depends on the execution plan, and execution plans depend on the statistics histograms. We already know that query statistics are metadata of the database engine that depend on the data; every insert, update, and delete changes the stats. SQL Server allows us to export them and copy them to another server, so we can use them to reproduce production query plans while testing procedures and queries on a quality server. Steps: connect to the production server and right-click on the database. In the menu, choose Generate Scripts. Click Next and open the Advanced options, then enable Script Statistics, then Next and Finish. This will generate a script; just copy it and run it on the quality/testing server database.   Happy Reading...
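As a hedged illustration of what the exported statistics represent (the table and statistics names here are hypothetical), you can inspect a statistics histogram on either server and refresh it manually if needed:

    -- Show the header, density vector, and histogram of a statistics object
    -- (hypothetical table and statistics names).
    DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerId');

    -- Manually refresh statistics on the test server if its data changes after import.
    UPDATE STATISTICS dbo.Orders IX_Orders_CustomerId WITH FULLSCAN;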