
Posts

Showing posts from September, 2018

Foreign Data Wrapper in Postgres

A running application can have multiple databases integrated into its environment to send and receive data: sometimes several Postgres databases, sometimes other databases such as SQL Server, MySQL, or MongoDB. Each database has its own integration feature: SQL Server has Linked Servers, MySQL has data federation, and Postgres has the foreign data wrapper. This post is about Postgres's foreign data wrapper. What is a foreign data wrapper in Postgres? How do you access a table from another database in Postgres? Postgres lets you create a foreign data wrapper inside the current database, which makes remote objects feel like objects of the currently connected database. It helps you create objects that are backed by foreign data or a foreign database, so data can be integrated easily. This feature is called the foreign data wrapper. What are the components of a foreign data wrapper (see the sketch below)? 1. Foreign data wrapper extension { file_fdw, postgres_fdw } 2. Foreign server definition (the location of the remote database) 3. User mapping…
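As an illustration of those three components, here is a minimal sketch using postgres_fdw against a second Postgres database. The server name, host, credentials, and the customers table are hypothetical placeholders, not details from the post.

-- 1. Install the foreign data wrapper extension
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- 2. Define where the foreign database server lives (host/port/dbname are placeholders)
CREATE SERVER remote_pg
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote_host', port '5432', dbname 'remotedb');

-- 3. Map the local user to a user on the foreign server (placeholder credentials)
CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_pg
    OPTIONS (user 'remote_user', password 'secret');

-- Expose a remote table locally and query it like a normal table
CREATE FOREIGN TABLE customers_remote (
    id   integer,
    name text
)
    SERVER remote_pg
    OPTIONS (schema_name 'public', table_name 'customers');

SELECT * FROM customers_remote LIMIT 10;

Once the mapping exists, the foreign table behaves like a local table in queries, while the rows actually come from the remote database.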

What are In-Memory OLTP and Memory-Optimized Tables?

In-Memory OLTP: In-Memory OLTP is an in-memory computing technology developed by Microsoft to speed up transaction processing applications running on SQL Server databases. It is built on two core components: memory-optimized tables and natively compiled stored procedures. In-Memory OLTP, also known as 'Hekaton' or 'In-Memory Optimization', is Microsoft's in-memory processing technology introduced in 2014. It is integrated into SQL Server's Database Engine and can be used in much the same way as other Database Engine components; it originally shipped with SQL Server 2014. Memory-Optimized Tables: Memory-optimized tables are fully durable; like transactions on disk-based tables, transactions on memory-optimized tables are fully ACID compliant. Memory-optimized tables and natively compiled stored procedures support only a subset of Transact-SQL features. The primary storage for memory-optimized…
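To make the two components concrete, here is a minimal, hedged T-SQL sketch of a memory-optimized table and a natively compiled procedure. The table, column, and procedure names are hypothetical, and the database is assumed to already contain a MEMORY_OPTIMIZED_DATA filegroup.

-- Memory-optimized, fully durable table (hypothetical names)
CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    UserName  NVARCHAR(100) NOT NULL,
    LastSeen  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled stored procedure over the same table (sketch)
CREATE PROCEDURE dbo.usp_TouchSession @SessionId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.SessionCache
    SET LastSeen = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;
GO

DURABILITY = SCHEMA_AND_DATA keeps the rows fully durable, as described above; SCHEMA_ONLY would preserve only the table definition across restarts.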

DBCC Clone Database in SQL Server 2016

What is DBCC CLONEDATABASE in SQL Server 2016? During database development we often need an exact copy of a database without its data. DBCC CLONEDATABASE is a fast and easy way to meet this kind of requirement. Its most important feature and benefit is that it copies all objects, metadata, and statistics from the specified source database, without any data. Here is the command to create a clone: DBCC CLONEDATABASE (YoungDBA2016, YoungDBA2016_Clone). What operations are performed when we start cloning a database? DBCC CLONEDATABASE does the following: it creates a new destination database that uses the same file layout as the source, takes an internal snapshot of the source database, copies the system metadata from the source to the destination database, copies the schema for all objects, and copies the statistics for all indexes. Does it require any service packs…
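For illustration, a short hedged sketch of the command from the post plus its optional flags. The second example assumes SQL Server 2016 SP1 or later, where the NO_STATISTICS and NO_QUERYSTORE options were added; the _SchemaOnly clone name is a placeholder.

-- Clone with schema, metadata, and statistics (as shown in the post)
DBCC CLONEDATABASE (YoungDBA2016, YoungDBA2016_Clone);

-- Sketch: schema-only clone, skipping statistics and Query Store data
-- (assumes SQL Server 2016 SP1 or later)
DBCC CLONEDATABASE (YoungDBA2016, YoungDBA2016_SchemaOnly)
    WITH NO_STATISTICS, NO_QUERYSTORE;

The clone is created read-only and is intended for diagnostics and testing, not for use as a production database.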

Azure Data Lake Analytics service features

Azure Data Lake Analytics is the second main component of the Azure Data Lake big data analytics platform. Its main features are: it is built on Apache YARN; it scales dynamically with the turn of a dial; you pay by the query; it supports Azure AD for access control, roles, and integration with on-premises identity systems; it is built around U-SQL with the power of C#; it can process data across Azure; you can write, debug, and optimize big data apps in Visual Studio; and it supports multiple languages: U-SQL, Hive, and Pig (a small U-SQL sketch follows below). Thanks for reading. Please don't forget to like the Facebook page: https://www.facebook.com/pages/Sql-DBAcoin/523110684456757
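As a hedged illustration of the "U-SQL with the power of C#" feature above, here is a minimal script sketch. The input/output paths, column names, and the 1000 threshold are hypothetical placeholders.

// Read a CSV, filter and transform with C# expressions, write the result.
@orders =
    EXTRACT OrderId  int,
            Customer string,
            Amount   decimal
    FROM "/input/orders.csv"
    USING Extractors.Csv(skipFirstNRows: 1);

@bigOrders =
    SELECT OrderId,
           Customer.ToUpper() AS Customer,   // C# string method inside U-SQL
           Amount
    FROM @orders
    WHERE Amount > 1000;

OUTPUT @bigOrders
TO "/output/big_orders.csv"
USING Outputters.Csv(outputHeader: true);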

What are the features of Data Lake Store?

Azure Data Lake is a key service of Microsoft's big data platform. It has two main components, the Data Lake Store and Data Lake Analytics, and the store is one of the most important parts.
1. Each file in ADL Store is sliced into blocks.
2. Blocks are distributed across multiple data nodes in the backend storage system.
3. With a sufficient number of back-end storage data nodes, files of any size can be stored here.
4. The back-end storage runs in the Azure cloud, which has virtually unlimited resources.
5. Metadata is stored about each file, with no limit on the metadata either.
6. Azure maintains 3 replicas of each data object per region, across three fault and upgrade domains.
7. Each create or append operation on a replica is replicated to the other two.
8. Writes are acknowledged to the application only after all replicas are successfully updated.
9. Read operations can go against any of the replicas.
10. Access is role based: each file/directory has an owner and a group, and they carry r, w, x permissions.
Thanks for reading. Please don't forget to like the Facebook page: https://www.facebook.com/pages/Sql-DBAcoin/523110684456757

What is the traditional BI and analytics process model?

If you compare the traditional BI processing model with a modern data lake environment, you will find that the processing models are totally different with respect to schema, transformation, and requirements. Before, we used schema-on-write; now we use schema-on-read. In the BI model, transformation was done after extraction; now it is done after load. The traditional process is:
·   Start with end-user requirements to identify the desired reports and analysis.
·   Define the corresponding database schema and queries.
·   Identify the required data sources.
·   Create an ETL pipeline to extract the required data and transform it to the target schema.
·   Create reports and analyze the data.
Thanks for reading. Please don't forget to like the Facebook page: https://www.facebook.com/pages/Sql-DBAcoin/523110684456757

What is the architecture of Azure Data Lake?

Azure Data Lake is designed around two major components, the Data Lake Store and Data Lake Analytics, and is broadly structured as follows:
1.) Internal system: YARN & WebHDFS (YARN for analytics, WebHDFS for Hadoop HDFS storage).
2.) Analytics: U-SQL.
3.) Compute engine: HDInsight (big data batch processing).
4.) Azure Data Lake Store (ADLS) serving as the hyper-scale storage layer.
What can I do with Azure Data Lake Analytics? Right now, ADLA is focused on batch processing, which is great for many big data workloads, for example (see the tokenizing sketch below):
·   Prepping large amounts of data for insertion into a data warehouse.
·   Processing scraped web data for science and analysis.
·   Churning through text and quickly tokenizing it to enable context and sentiment analysis.
·   Using image processing intelligence to quickly process unstructured image data.
·   Replacing long-running monthly batch processing with shorter…
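To make the tokenizing bullet concrete, here is a minimal, hedged U-SQL sketch. The input/output paths, the reviews schema, and the simple space-based split are hypothetical simplifications, not details from the post.

// Split a text column into tokens for downstream context/sentiment analysis.
@reviews =
    EXTRACT ReviewId   int,
            ReviewText string
    FROM "/input/reviews.csv"
    USING Extractors.Csv();

@tokens =
    SELECT ReviewId, token
    FROM @reviews
    CROSS APPLY EXPLODE(new SQL.ARRAY<string>(ReviewText.Split(' '))) AS T(token);

OUTPUT @tokens
TO "/output/tokens.csv"
USING Outputters.Csv();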

What is a data lake, and what is Azure Data Lake?

What is a data lake? A data lake is a storage repository that holds a vast amount of raw data in its native format until it is needed. While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data. Each data element in a lake is assigned a unique identifier and tagged with a set of extended metadata tags. When a business question arises, the data lake can be queried for the relevant data, and that smaller set of data can then be analyzed to help answer the question. Unlike a data warehouse, a data lake maintains data in its native formats and handles the three Vs of big data (volume, velocity, and variety) while providing tools for analyzing, querying, and processing. Data lakes eliminate the restrictions of a typical data warehouse system by providing unlimited space, unrestricted file sizes, schema-on-read, and various ways to access data (including programming, SQL-like queries, and REST calls). What is Azure Data Lake…