
Posts

Showing posts from 2018

History of MySQL from AB Corp to Cloud Database

MySQL was created by a Swedish company, MySQL AB, founded by David Axmark, Allan Larsson and Michael "Monty" Widenius. Original development of MySQL by Widenius and Axmark began in 1994, and the first version appeared on 23 May 1995. Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language.
· 23 May 1995 - First internal release
· 1996 - Version 3
  o Simple CRUD operations
  o January 1997 release
  o Windows version released on 8 January 1998 for Windows 95 and NT
  o Production release in 1998, from www.mysql.com
· 2002 - Version 4
  o MyISAM
  o Unions
  o Tracking
  o B-trees
  o Subqueries
  o Prepared statements
· 2005 - Version 5.0
  o Cursors, stored procedures, triggers, views
  o Federated Storage Engine
  o Event scheduler, partitioning, p…

Ignore a table from backup in mysql

Sometimes you need to exclude a table from a backup, for reasons like these:
1. The table is big because we keep log data in it.
2. The table simply does not need to be copied into the backup.
3. The database is being copied from production to test and some tables are not needed in the backup file.
Generally, when we create a backup in a single dump file, it consolidates the whole database schema and data; if we need to exclude a table or a set of tables, we use the "--ignore-table" option of the mysqldump command. The syntax is as below (a sketch for excluding several tables follows).
mysqldump -h <host> -u <username> -p <schema> --no-create-info --ignore-table=schema.table1 > db-data.sql
Thanks for reading. Please don't forget to like the Facebook Page: https://www.facebook.com/pages/Sql-DBAcoin/523110684456757
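To exclude more than one table, repeat the --ignore-table flag once per table; each table must be written as schema.table. A minimal sketch; the schema and table names (mydb, audit_log, session_cache) are placeholders, not from the post:

# exclude two hypothetical tables from the dump
mysqldump -h localhost -u backupuser -p mydb \
  --ignore-table=mydb.audit_log \
  --ignore-table=mydb.session_cache > db-data.sql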

How to grant all privileges in MySQL 8

Hello Friends, how are you? I hope you are doing well. The cold has started in the country, especially northern India, and I have also started with MySQL 8.0 on an AWS RDS instance. Yesterday I was trying to give full permission to a newly created user from all IPs in MySQL 8, and it started giving me error 1406. I was using the query below to grant permission, the same one we used in all earlier versions.
grant all privileges on *.* to 'youngdbauser'@'%' with grant option;
The above query gives error 1406 in MySQL 8.0. After a lot of trying and digging into the error and the MySQL portal, I created the user with an explicit CREATE USER statement first. These are the new statements, which I saw in the MySQL documentation portal.
create user 'youngdbauser'@'%' identified by 'mypass';
grant all privileges on *.* to 'youngdbauser'@'%' with grant option;
It resolved my issue. Summary: in MySQL 8 you must create the user first with CREATE USER and only then grant privileges; the old shorthand of creating the user implicitly through GRANT no longer works.
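To confirm the grants took effect, SHOW GRANTS (standard MySQL syntax, with the user name from the post) gives a quick check:

SHOW GRANTS FOR 'youngdbauser'@'%';
-- should list: GRANT ALL PRIVILEGES ON *.* TO `youngdbauser`@`%` WITH GRANT OPTION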

Waiting is over MySQL 8.0 is Available on AWS RDS

Hello friends, many MySQL developers and administrators have been waiting a long time for MySQL 8.0 on AWS RDS. Now Amazon RDS for MySQL supports MySQL Community Edition major version 8.0 in all AWS Regions. Please upgrade your instance to test the new product's performance. MySQL 8.0 is the latest major version release and offers new query functionality and enhancements for better performance, reliability, security, manageability, and international and mobile support. Here is the MySQL 8.0 version summary (a small sketch of two of these features follows the list):
MySQL Document Store.
JSON enhancements.
New window functions.
New Common Table Expressions & recursive CTEs.
Better handling of documents and JSON.
New database roles.
Index hiding, a.k.a. "invisible" indexes.
Configuration persistence.
Unicode UTF-8 encoding.
InnoDB NO WAIT & SKIP LOCKED.
Thanks for reading. Please don't forget to like the Facebook Page: YoungDBA on Facebook
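As a taste of two of those features, here is a minimal sketch against a hypothetical orders table (the table, columns and index name are made up; RANK() OVER and ALTER INDEX ... INVISIBLE are standard MySQL 8.0 syntax):

-- window function: rank each customer's orders by amount
SELECT customer_id, order_id,
       RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS rnk
FROM orders;

-- invisible index: hide an index from the optimizer without dropping it
ALTER TABLE orders ALTER INDEX idx_amount INVISIBLE;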

What is Data virtualization

Transformation in technology is creating lots of changes in every related area. Artificial Intelligence, Machine Learning and Data Science have created a new kind of concept on the data platform: "Data Virtualization". From its name it looks like it might be related to servers or networks, but it is a concept and a logical method of data management. Data virtualization is the method of manipulating data from various sources to develop a common, logical and virtual view of information so that it can be accessed by front-end solutions such as applications, dashboards and portals without having to know the data's exact storage location. What are the benefits of data virtualization?
· Data virtualization increases revenues.
· Data virtualization lowers costs.
· Data virtualization reduces risks.
· It is a much faster way to manage data.
· It complements the traditional data warehouse.
· It maximizes performance of …

MySQL Table Partitioning over cloud (Google & AWS)

The AWS documentation also says that horizontal physical partitioning of large table data is a best practice, instead of keeping it all in one physical file. There are advantages and disadvantages to using InnoDB file-per-table tablespaces, depending on your application. To determine the best approach for your application, see InnoDB File-Per-Table Mode in the MySQL documentation. We don't recommend allowing tables to grow to the maximum file size. In general, a better practice is to partition data into smaller tables, which can improve performance and recovery times. One option that you can use for breaking a large table up into smaller tables is partitioning. Partitioning distributes portions of your large table into separate files based on rules that you specify. For example, if you store transactions by date, you can create partitioning rules that distribute older transactions into separate files. Then, periodically, you can archive the historical transaction data th…
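A minimal sketch of date-based range partitioning in MySQL; the transactions table and its columns are hypothetical, but PARTITION BY RANGE is standard syntax:

CREATE TABLE transactions (
    id      BIGINT NOT NULL,
    tx_date DATE   NOT NULL,
    amount  DECIMAL(10,2),
    PRIMARY KEY (id, tx_date)  -- the partitioning column must be part of every unique key
)
PARTITION BY RANGE (YEAR(tx_date)) (
    PARTITION p2016 VALUES LESS THAN (2017),
    PARTITION p2017 VALUES LESS THAN (2018),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);

Old partitions can then be archived and dropped cheaply, for example with ALTER TABLE transactions DROP PARTITION p2016;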

What is TRY_CAST in SQL Server

This is a new function introduced with the 2016 version of SQL Server. It is used to cast a value from its existing data type to a specified target data type, returning the converted value if the operation is successful and NULL if the conversion fails. TRY_CAST is an extended version of the CAST function.
Query:
SELECT CAST('Tech-Recipes' AS INT) AS CastExample;
Result:
Msg 245, Level 16, State 1, Line 2
Conversion failed when converting the varchar value 'Tech-Recipes' to data type int.
Query:
SELECT CAST('12' AS INT) AS CastExample;
Result:
12
Query:
SELECT TRY_CAST('Tech-Recipes' AS INT) AS CastExample;
Result:
NULL
Query:
SELECT TRY_CAST('20' AS INT) AS CastExample;
Result:
20
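Because failed conversions come back as NULL instead of raising an error, TRY_CAST is handy for cleaning mixed data. A small sketch, assuming a hypothetical staging table with a varchar column val:

-- find rows whose val column is not a valid integer
SELECT val
FROM staging
WHERE val IS NOT NULL
  AND TRY_CAST(val AS INT) IS NULL;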

What is STRING_SPLIT in SQL Server

This is a new built-in table-valued function introduced in SQL Server 2016. STRING_SPLIT splits the provided input string by a specified separator character and returns the separated values in the form of a table. This is a useful function: until now you probably relied on long, unoptimized user-defined functions in most applications, whereas SQL Server now has a built-in function to split text.
Query:
SELECT * FROM STRING_SPLIT('Suman,Satyam,Shivam', ',');
Result:
Suman
Satyam
Shivam
Thanks for reading. Please don't forget to like the Facebook Page: https://www.facebook.com/pages/Sql-DBAcoin/523110684456757
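The output column is named value, which makes the function easy to use inside IN or JOIN predicates. A minimal sketch, assuming a hypothetical employees table and a comma-separated parameter @names:

DECLARE @names VARCHAR(100) = 'Suman,Satyam';

SELECT e.*
FROM employees AS e
WHERE e.name IN (SELECT value FROM STRING_SPLIT(@names, ','));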

Temporal Tables in SQL Server 2016

Temporal, or system-versioned, tables, introduced in SQL Server 2016, allow us to go back in time and see what the data in a table looked like at a given point. We can say a system-versioned temporal table is a type of user table designed to keep a full history of data changes and allow easy point-in-time analysis. This type of temporal table is referred to as system-versioned because the period of validity for each row is managed by the system. What are the uses of temporal tables? The most common business uses are: Slowly changing dimensions: temporal tables provide a simpler way of querying data that is current for a specified period of time, such as time-slicing data, a well-known problem in data-warehousing databases. Data auditing: temporal tables provide an audit trail to determine when data was modified in the "parent" table. Temporal tables allow you to query data as it was at a particular point in time to …
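A minimal sketch of creating and querying a system-versioned table; the Product table and its columns are made up, while PERIOD FOR SYSTEM_TIME and FOR SYSTEM_TIME AS OF are standard SQL Server 2016 syntax:

CREATE TABLE dbo.Product (
    ProductID INT PRIMARY KEY,
    Name      NVARCHAR(100) NOT NULL,
    Price     DECIMAL(10,2) NOT NULL,
    -- the system manages these two period columns
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));

-- what did the table look like on 1 June 2018?
SELECT * FROM dbo.Product FOR SYSTEM_TIME AS OF '2018-06-01';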

Foreign data wrapper in Postgres

A running application can have multiple databases integrated into its environment to send and receive data: sometimes several Postgres databases, sometimes other databases like SQL Server, MySQL or MongoDB. Every database has a different feature for integration: SQL Server has Linked Servers, MySQL has data federation, and Postgres has foreign data wrappers. This post is about Postgres's foreign data wrapper. What is a foreign data wrapper in Postgres? How do you access a table from another database in Postgres? Postgres has a feature which lets you create a foreign data wrapper inside Postgres, so that foreign objects feel like objects of the currently connected database. It helps you create objects that are backed by foreign data or a foreign database, so we can integrate data easily. This feature is called a foreign data wrapper. What are the components of a foreign data wrapper?
1. Foreign data wrapper extension { file_fdw, postgres_fdw }
2. Foreign database server location
3. User mapping …
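A minimal end-to-end sketch with postgres_fdw; the host, database, credentials and table names are hypothetical, but the statements themselves (CREATE EXTENSION, CREATE SERVER, CREATE USER MAPPING, CREATE FOREIGN TABLE) are standard Postgres:

-- 1. the extension
CREATE EXTENSION postgres_fdw;

-- 2. where the foreign database lives
CREATE SERVER remote_pg
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host '10.0.0.5', port '5432', dbname 'salesdb');

-- 3. which remote credentials the local user maps to
CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_pg
    OPTIONS (user 'remote_user', password 'secret');

-- a foreign table that now behaves like a local one
CREATE FOREIGN TABLE orders_remote (
    id    INT,
    total NUMERIC
) SERVER remote_pg OPTIONS (schema_name 'public', table_name 'orders');

SELECT * FROM orders_remote LIMIT 10;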

What are In-Memory OLTP and Memory Optimized Tables

In-Memory OLTP: In-Memory OLTP is an in-memory computing technology developed by Microsoft to speed up the performance of transaction-processing applications running on SQL Server databases. In-Memory OLTP is built with two core components: memory-optimized tables and natively compiled stored procedures. In-Memory OLTP, also known as 'Hekaton' and 'In-Memory Optimization', is Microsoft's in-memory processing technology introduced in 2014. In-Memory Tables: The feature is integrated into SQL Server's Database Engine and can be used in the same way as other Database Engine components. In-Memory OLTP originally shipped with SQL Server 2014. Memory-Optimized Tables: Memory-optimized tables are fully durable; like transactions on disk-based tables, transactions on memory-optimized tables are fully ACID compliant. Memory-optimized tables and natively compiled stored procedures support only a subset of Transact-SQL features. The primary storage for memory-optimized tables is main memory.
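A minimal sketch of a memory-optimized table; the table name and columns are made up, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup:

-- requires a MEMORY_OPTIMIZED_DATA filegroup in the database
CREATE TABLE dbo.SessionCache (
    SessionID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   NVARCHAR(500)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);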

DBCC Clone Database in SQL Server 2016

What is DBCC CLONEDATABASE in SQL Server 2016? During database development we often need to create an exact copy of a database without its data. This command is a fast and easy way of creating a database clone for that kind of requirement. The most important feature and benefit of a clone database is that it copies all objects, metadata and statistics from the specified source database, without any data. Here is the command to create a clone:
DBCC CLONEDATABASE (YoungDBA2016, YoungDBA2016_Clone)
What operations are processed when we start cloning a database? When we run DBCC CLONEDATABASE it does the following operations: it creates a new destination database that uses the same file layout as the source; then it takes an internal snapshot of the source database; after that it copies the system metadata from the source to the destination database; then it copies the schema of all objects; then it copies statistics for all indexes. Does it require any service packs for …
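One detail worth knowing: the clone is created read-only. A small sketch using the database names from the post, in case you want to experiment inside the clone:

-- check the clone's state (is_read_only = 1 means read-only)
SELECT name, is_read_only FROM sys.databases WHERE name = 'YoungDBA2016_Clone';

-- make it writable before testing changes in it
ALTER DATABASE YoungDBA2016_Clone SET READ_WRITE;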

Azure Data Lake Analytics service features

Azure Data Lake Analytics is the second main component of the Azure Data Lake and big-data analytics platform. These are its main features (a tiny U-SQL sketch follows the list):
It is built on Apache YARN.
Scales dynamically with the turn of a dial.
Pay by the query.
Supports Azure AD for access control, roles, and integration with on-prem identity systems.
Built around U-SQL with the power of C#.
Processes data across Azure.
Write, debug and optimize big-data apps in Visual Studio.
Multiple languages: U-SQL, Hive and Pig.
Thanks for reading. Please don't forget to like the Facebook Page: https://www.facebook.com/pages/Sql-DBAcoin/523110684456757
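For a flavour of U-SQL, here is a minimal sketch of a batch job; the input/output paths and column names are hypothetical:

// read a CSV, aggregate, and write the result back out
@rows =
    EXTRACT name string, amount int
    FROM "/input/sales.csv"
    USING Extractors.Csv();

@totals =
    SELECT name, SUM(amount) AS Total
    FROM @rows
    GROUP BY name;

OUTPUT @totals
TO "/output/totals.csv"
USING Outputters.Csv();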

What are the features of Data Lake Store?

Azure Data Lake is a key service of Microsoft's big-data platform. The two main components of Azure Data Lake are the data store and analytics, and the store is one of the most important parts.
1. Each file in ADL Store is sliced into blocks.
2. Blocks are distributed across multiple data nodes in the backend storage system.
3. With a sufficient number of backend storage data nodes, files of any size can be stored.
4. The backend storage runs in the Azure cloud, which has virtually unlimited resources.
5. Metadata is stored about each file; there is no limit to metadata either.
6. Azure maintains 3 replicas of each data object per region, across three fault and upgrade domains.
7. Each create or append operation on a replica is replicated to the other two.
8. Writes are acknowledged to the application only after all replicas are successfully updated.
9. Read operations can go against any replica.
10. Access is role-based: each file/directory has an owner and a group, and they have r, w, x permissions.
Thanks for …

What is the traditional BI and analytics process model?

If you compare the traditional BI processing model with the modern data-lake environment, you will find the processing models are totally different, in terms of schema, transformation and requirements. Before, we used schema-on-write; now we do schema-on-read. In the BI model, transformation was done after extraction, but now it is done after load. The traditional flow:
· Start with end-user requirements, to identify the desired reports and analysis.
· Define the corresponding database schema and queries.
· Identify the required data sources.
· Create an ETL pipeline to extract the required data and transform it to the target schema.
· Create reports. Analyze data.
Thanks for reading. Please don't forget to like the Facebook Page: https://www.facebook.com/pages/Sql-DBAcoin/523110684456757

What is the architecture of Azure Data Lake?

What is the architecture of Azure Data Lake? Azure Data Lake is designed with two major components, the data lake store and analytics, and broadly has the structure below:
1.) Internal system - YARN & WebHDFS. YARN for analytics, WebHDFS for Hadoop HDFS storage.
2.) Analytics - U-SQL.
3.) Compute engine - HDInsight (big-data batch processing).
4.) Azure Data Lake Store (ADLS), serving as the hyper-scale storage layer.
What can I do with Azure Data Lake Analytics? Right now, ADLA is focused on batch processing, which is great for many big-data workloads, such as:
· Prepping large amounts of data for insertion into a data warehouse.
· Processing scraped web data for science and analysis.
· Churning through text, and quickly tokenizing to enable context and sentiment analysis.
· Using image-processing intelligence to quickly process unstructured image data.
· Replacing long-running monthly batch processing with shor…

What is Data Lake and Azure Data Lake

What is a data lake? A data lake is a storage repository that holds a vast amount of raw data in its native format until it is needed. While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data. Each data element in a lake is assigned a unique identifier and tagged with a set of extended metadata tags. When a business question arises, the data lake can be queried for relevant data, and that smaller set of data can then be analyzed to help answer the question. A data lake, unlike a warehouse, maintains data in its native formats and handles the three Vs of big data (volume, velocity, and variety) while providing tools for analyzing, querying, and processing. Data lakes eliminate the restrictions of a typical data warehouse system by providing unlimited space, unrestricted file size, schema-on-read, and various ways to access data (including programming, SQL-like queries, and REST calls). What is Azure Data Lake? …

Full Text search in MongoDB

Full-text search means searching content across an entire database, or across the storage where the data is located. It is similar to how we search for content in any search application by entering certain keywords or phrases and getting back the relevant results sorted by their ranking. This is a common requirement in any large-data-set application that needs a quick and efficient search method. In this post I am sharing about text search in the MongoDB database. A text search option is available in almost every database, in the RDBMS family as well as the NoSQL family; MongoDB has something different, which is ranking (weights on attributes). Starting from version 2.4, MongoDB began with an experimental feature supporting full-text search using text indexes. This feature has now become an integral part of the product. The text search uses stemming techniques to look for the specified words in the string fields, dropping stop words like a, an, the, etc. What are the features of "MongoDB Full…
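A minimal sketch in the mongo shell; the articles collection and its fields are hypothetical, but createIndex with "text" keys, the weights option, and the $text/$meta operators are standard MongoDB:

// create a text index, weighting the title higher than the body
db.articles.createIndex(
    { title: "text", body: "text" },
    { weights: { title: 5, body: 1 } }
);

// search, returning and sorting by the relevance score
db.articles.find(
    { $text: { $search: "mongodb index" } },
    { score: { $meta: "textScore" } }
).sort({ score: { $meta: "textScore" } });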

SQL Server's 20 steps after start or restart

I always wondered what happens when we start or restart the SQL Server service from the OS services panel. There is a long list of activities that runs before the server is ready for database connections. These are below (the same messages can be read back from the error log; see the sketch after the list):
1. Server process ID allocation, along with the authentication mode.
2. Logging SQL Server messages in file 'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\Log\ERRORLOG'.
3. Registry startup parameters for the master data file, error log and master log file: -d, -e, -l.
4. SQL Server detects CPUs and cores and allocates them as per license and configuration.
5. SQL Server starts at normal priority base (=7).
6. SQL Server detects RAM and allocates it to the server as per AWE and configuration.
7. SQL Server detects the node configuration: node 0: CPU mask: 0x000000000000000f:0 Active CPU mask: 0x000000000000000f:0, for the NUMA configuration.
8. Starting up database 'master', with transactions rolled forward in database 'master' (1:0).
9. SQL Server Audit starts the audits. …
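These startup messages all land in the SQL Server error log, which you can query from T-SQL. A small sketch using sp_readerrorlog; the search string is just an example:

-- first argument: 0 = current log; second: 1 = SQL Server log (2 = Agent log)
EXEC sp_readerrorlog 0, 1, N'Starting up database';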

Database Performance Troubleshooting Methodologies and Dimensions

When you have been assigned the task of optimizing a database or tuning the performance of an application, there are various dimensions along which to approach it, because the causes of slow application performance are too many to describe on a single page. They can, however, be summarized in a table like the one below, which shows the dimensions of database and slow-running-application performance and how activity on each one starts.

Performance Dimension | Percentage Value | Process Strength | Activity Strength | Remarks
Application Design and Business process | 25.00% | Long process | Lower priority | Module-wise activity.
Database Schema Design - Logical | 15.00% | Medium | Follow best practices | Requires short downtime; module-wise activity.
Database Maintenance | 15.00% | Quick process | Required on OLTP | Short downtime, weekly or monthly.
Indexing | 15.00% | Quick process | Required on OLTP | Short downtime, weekly or monthly; module-wise activity.
Server Hardware (CPU/Memory/other) | 12.00% | Medium process | Follow …