
Posts

Showing posts from 2017

How to load a huge amount of data from CSV to PostgreSQL

Recently I got a challenge from some developers claiming that the PostgreSQL server was performing slowly. In their opinion it was not able to load 5 lakh (500,000) records from CSV files. They showed me their application, built on Node.js with the Sequelize ORM; when executed, it really did get stuck in the middle. 5 lakh records is a large amount, but PostgreSQL is also used for high-performance applications, so there is no reason to claim it cannot handle this much data. I suggested they start the import with a small number of records, say the top 6, then the top 600, then 6,000, then 60,000. They did, and the import really did stall after 60,000 records: from the application side the program just kept running and running. So the ball landed in my court, and I started digging into database server configuration parameters like effective_cache_size, work_mem, shared_buffers, max_connections, wal_buffers etc., giving them appropriate values as per best practices and the current system resources. Then re…
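One common way to speed up this kind of bulk load is PostgreSQL's COPY command rather than row-by-row ORM inserts. Here is a minimal sketch with psycopg2, where the connection details, table name, and CSV path are placeholders for illustration, not the project's actual values:

import psycopg2

# placeholder connection details; adjust for your environment
conn = psycopg2.connect(dbname="testdb", user="postgres", password="secret", host="localhost")
cur = conn.cursor()

with open("/tmp/employees.csv") as f:
    # COPY streams the whole file in one statement, far faster than per-row INSERTs from an ORM
    cur.copy_expert(
        "COPY employee (id, name, city) FROM STDIN WITH (FORMAT csv, HEADER true)", f)

conn.commit()
cur.close()
conn.close()

From psql, the \copy meta-command does the same thing from the client side without writing any code at all.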

Back up an Oracle database using RMAN

RMAN (Recovery Manager) is a backup and recovery tool supplied with Oracle databases (from version 8 onward) by Oracle Corporation. It provides database backup, restore, and recovery capabilities, addressing high availability and disaster recovery concerns.
How do you start Recovery Manager and connect to a database? Go to the Oracle bin folder, e.g. under C:\oraclexe\app\oracle\product\11.2.0\server\, and execute rman target / <enter>, or use Start > Run > rman <enter> followed by connect target / <enter>.
After connecting, how do you check the current configuration? SHOW ALL;
How do you perform a backup? BACKUP AS BACKUPSET DATABASE PLUS ARCHIVELOG; or BACKUP AS BACKUPSET DATABASE;
How do you check the backup history? LIST BACKUP SUMMARY;

Real Life Oracle Fiction over a Large data set

After being a SQL Server DBA for 4+ years, for the last 2 years I have been trying to become a cross-platform database administrator, so I am trying multiple technologies: MySQL, MongoDB, and Oracle too. Today's post is on Oracle, a real-life story of working on a large data set. I was testing CRUD and aggregations on Oracle 11g over 21 million records (21,326,826). I had an employee table holding a huge amount of data, with a city column that was null in all 21 million rows. The story begins when I tried to update these null values with some information. The column already had a secondary index: GROUP BY on city was slow, so for better response time we had created an index on the city column. The physical architecture of the database and the tablespaces is default, and the table lives in the users tablespace. We had not changed any Oracle configuration before, and I am totally new to Oracle.
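For reference, a batched update is one common way to fill that many nulls without one giant transaction. A minimal cx_Oracle sketch, where the connection string, placeholder value, and batch size are assumptions for illustration, not what actually ran in the test:

import cx_Oracle

# placeholder credentials and service name
conn = cx_Oracle.connect("scott", "tiger", "localhost/XE")
cur = conn.cursor()

# fill the null city values in chunks so undo and lock time stay manageable
while True:
    cur.execute(
        "UPDATE employee SET city = :city WHERE city IS NULL AND ROWNUM <= :batch",
        city="UNKNOWN", batch=100000)
    if cur.rowcount == 0:
        break
    conn.commit()

cur.close()
conn.close()

Dropping the secondary index on city before a bulk update of this size and rebuilding it afterwards is another common tactic, since every updated row otherwise has to maintain the index as well.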

Migrating MongoDB data to MySQL using Python

Database developers and DBAs go through the task of database migration very often across projects. Most of the time the source and destination use the same technology, for example MS SQL Server to MS SQL Server, or MySQL to MySQL. But in some challenging projects the source and destination technologies differ, for example MySQL to MongoDB. Even that situation, migrating from a structured to a non-structured database, can be handled; it is moving data from a non-structured to a structured technology that becomes a tedious task, for example MongoDB to MySQL. As a database developer I was working on one such project, where I was required to migrate data from MongoDB to MySQL. I tried the following steps at the initial level for connecting the source (MongoDB) to the destination (MySQL) using Python.
Requirements:
1. Python 2.7
2. MongoDB 3.2
3. MySQL 5.7
4. My…
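A minimal sketch of that first connection step plus a simple copy loop, assuming pymongo on the MongoDB side and the PyMySQL driver on the MySQL side; the collection, table, fields, and credentials below are placeholders, not the project's actual schema:

from pymongo import MongoClient
import pymysql

# source: a MongoDB collection (placeholder names)
mongo = MongoClient("mongodb://localhost:27017/")
src = mongo["sourcedb"]["emp"]

# destination: a MySQL table with matching columns (placeholder names)
dst = pymysql.connect(host="localhost", user="root", password="secret", database="targetdb")
cur = dst.cursor()

# copy documents one by one with parameterized inserts
for doc in src.find({}, {"_id": 0, "name": 1, "dept": 1}):
    cur.execute(
        "INSERT INTO emp (name, dept) VALUES (%s, %s)",
        (doc.get("name"), doc.get("dept")))

dst.commit()
cur.close()
dst.close()
mongo.close()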

CRUD Operations with MongoDB

CRUD operations refers to the basic Create (insert), Read, Update and Delete operations; how data gets in and out is the most common requirement for any database. Today I want to share how to do CRUD operations with MongoDB, a fast-growing NoSQL database.
1.) How to create or open a database in MongoDB: use <Database>
2.) How to list all collections: show collections
3.) Read / find / search documents.
All documents: db.emp.find()
Where Dept = "CS": db.emp.find({Dept:"CS"})
By _id: db.emp.find({"_id" : ObjectId("53eb50ab3bcd8479d0c302e3")})
Name field only: db.EmpDetail.find( { }, { name: 1 } )
4.) Search documents where Dept is "DOT" and parent is "Avinash": db.emp.find({Dept:"DOT", parent:"Avinash"})
5.) Search documents where parent is "Deepak" or Dept is "java": db.emp.find( { $or:[{parent:"Deepak"},{Dept:"java"}] } )
6.) How to update all users: db.use…
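The same reads can be issued from Python with pymongo; a small sketch assuming a local server and a hypothetical database name, mirroring the shell queries above:

from pymongo import MongoClient
from bson import ObjectId

client = MongoClient("mongodb://localhost:27017/")
db = client["test"]  # hypothetical database name

all_docs = list(db.emp.find())                        # all documents
cs_docs = list(db.emp.find({"Dept": "CS"}))           # where Dept = "CS"
one_doc = db.emp.find_one({"_id": ObjectId("53eb50ab3bcd8479d0c302e3")})
names = list(db.EmpDetail.find({}, {"name": 1}))      # name field only
either = list(db.emp.find({"$or": [{"parent": "Deepak"}, {"Dept": "java"}]}))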

MySQL error: Lock wait timeout exceeded

Cross-platform database support is really challenging for any DBA, especially for those who have been working on SQL Server for a long time, because who knows what question may come up tomorrow, from which database, version, or edition. Last week a developer came to me with an issue on one of our MySQL databases. The error she was facing was "Lock wait timeout exceeded; try restarting transaction" during an update on a large table of around 15 million records; she was updating records through a join with another table. She tried 2-3 times on production, and after 1-2 minutes the error was thrown each time. I asked her to try it on development/local with the same data size, and the result was still the same error. Then I started googling, found some promising configuration changes, and applied them at the instance level, but the result was an improvement of only a few seconds and the final output was still the error. These are the early actions we took: SET join_buffer_size = 1024 * 1024 * 42; SET innodb_write_io_threads = 16; SET innodb_l…
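One approach that often helps more than buffer tuning is splitting the big update into small transactions so each one holds row locks only briefly. A rough PyMySQL sketch of that idea, with the table names, join column, and batch size all placeholders rather than the actual schema from this incident:

import pymysql

# placeholder connection details
conn = pymysql.connect(host="localhost", user="root", password="secret",
                       database="testdb", autocommit=False)
cur = conn.cursor()

# find the primary key range once, then update in fixed-size slices
cur.execute("SELECT MIN(id), MAX(id) FROM big_table")
lo, hi = cur.fetchone()

step = 10000
for start in range(lo, hi + 1, step):
    cur.execute(
        "UPDATE big_table b JOIN lookup_table l ON b.ref_id = l.ref_id "
        "SET b.status = l.status "
        "WHERE b.id BETWEEN %s AND %s",
        (start, start + step - 1))
    conn.commit()  # short transactions keep lock waits short

cur.close()
conn.close()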

MySQL database size

Workbench is good, but it tends to hang on long-running queries or tables with millions of records. If you are experimenting with something new on large data sets, then I guess the popular error everyone faces is "Error Code: 2013. Lost connection to MySQL server during query". So this is the time to forget Workbench and come to the command shell. Today I wanted a list of the databases running on my local system along with their sizes. This is the query, which shows results very quickly from the command shell:

SELECT table_schema AS "Database",
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)"
FROM information_schema.TABLES
GROUP BY table_schema;