
Posts

Temporal Tables in SQL Server 2016

Temporal, or system-versioned, tables, introduced in SQL Server 2016, allow us to go back in time and see what the data in a table looked like at a given point. A system-versioned temporal table is a type of user table designed to keep a full history of data changes and to allow easy point-in-time analysis. It is called system-versioned because the period of validity for each row is managed by the system.

What are the uses of temporal tables? The most common business uses are:

Slowly changing dimensions. Temporal tables provide a simpler way to query data that is current for a specified period of time, such as time-slicing data, a well-known problem in data warehousing databases.

Data auditing. Temporal tables provide an audit trail to determine when data was modified in the “parent” table, and they allow you to query data as it was at a particular point in time.
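To make this concrete, here is a minimal T-SQL sketch of a system-versioned temporal table; the table, columns, and history table name (dbo.Employee, ValidFrom, ValidTo, dbo.EmployeeHistory) are hypothetical and only illustrate the syntax.

-- Hypothetical table used only to illustrate the syntax
CREATE TABLE dbo.Employee
(
    EmployeeID  INT            NOT NULL PRIMARY KEY CLUSTERED,
    Name        NVARCHAR(100)  NOT NULL,
    Salary      DECIMAL(10, 2) NOT NULL,
    -- Period columns; their values are maintained entirely by the system
    ValidFrom   DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo     DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- Point-in-time query: the data as it looked on a given date
SELECT EmployeeID, Name, Salary
FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2016-06-01T00:00:00';

The FOR SYSTEM_TIME AS OF clause is what enables the point-in-time analysis: SQL Server transparently reads the matching row versions from the history table.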

Foreign data wrapper in Postgres

A running application can have multiple databases integrated in its environment to send and receive data: sometimes multiple Postgres databases, sometimes other databases such as SQL Server, MySQL, or MongoDB. Every database has its own feature for this kind of integration: SQL Server has Linked Servers, MySQL has data federation, and Postgres has the foreign data wrapper. This post is about Postgres's foreign data wrapper.

What is a foreign data wrapper in Postgres? How do you access a table from another database in Postgres? Postgres has a feature that lets you create a foreign data wrapper inside Postgres, so that objects belonging to the foreign data source appear as if they were objects of the currently connected database. This makes it easy to integrate the data, and the feature is called a foreign data wrapper.

What are the components of a foreign data wrapper?
1. A foreign data wrapper extension (file_fdw, postgres_fdw)
2. The foreign database server location
3. A user mapping
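As a rough sketch of how these three components fit together, the following PostgreSQL statements expose a table from a remote Postgres database; the server name, connection options, credentials, and table definition are all invented for illustration.

-- 1. The foreign data wrapper extension
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- 2. The foreign database server location (hypothetical host and database)
CREATE SERVER remote_sales
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote-host', port '5432', dbname 'salesdb');

-- 3. A user mapping from a local role to a remote user (dummy credentials)
CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_sales
    OPTIONS (user 'report_user', password 'secret');

-- A foreign table that points at the remote table
CREATE FOREIGN TABLE orders (
    order_id   integer,
    order_date date,
    amount     numeric
)
SERVER remote_sales
OPTIONS (schema_name 'public', table_name 'orders');

-- Query it like any local table
SELECT count(*) FROM orders;

Instead of defining each foreign table by hand, IMPORT FOREIGN SCHEMA can pull the remote table definitions in with one statement.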

What are In-Memory OLTP and Memory-Optimized Tables?

In-Memory OLTP

In-Memory OLTP is an in-memory computing technology developed by Microsoft to speed up the performance of transaction-processing applications running on SQL Server databases. In-Memory OLTP is built with two core components: memory-optimized tables and natively compiled stored procedures. In-Memory OLTP, also known as 'Hekaton' and 'In-Memory Optimization', is Microsoft's in-memory processing technology released in 2014.

In-Memory Tables

In-Memory OLTP is integrated into SQL Server's Database Engine and can be used in much the same way as any other Database Engine component. In-Memory OLTP originally shipped with SQL Server 2014.

Memory-Optimized Tables

Memory-optimized tables are fully durable; like transactions on disk-based tables, transactions on memory-optimized tables are fully ACID compliant. Memory-optimized tables and natively compiled stored procedures support only a subset of Transact-SQL features. The primary storage for memory-optimized tables is main memory.
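The following T-SQL is a minimal sketch, assuming a database that does not yet have a memory-optimized filegroup; the database name, container path, table, and procedure are hypothetical.

-- The database first needs a filegroup that can hold memory-optimized data
-- (database name and container path are made up)
ALTER DATABASE YoungDBA2016
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE YoungDBA2016
    ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
    TO FILEGROUP imoltp_fg;
GO

-- A fully durable memory-optimized table
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT       NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled stored procedure that reads the table
CREATE PROCEDURE dbo.usp_GetCart @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT CartId, UserId, CreatedDate
    FROM dbo.ShoppingCart
    WHERE UserId = @UserId;
END;
GO

DURABILITY = SCHEMA_AND_DATA is what makes the table fully durable; SCHEMA_ONLY would keep only the table definition across a restart.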

DBCC Clone Database in SQL Server 2016

What is DBCC CLONEDATABASE in SQL Server 2016? During database development we often need to create an exact copy of a database without its data. DBCC CLONEDATABASE is a fast and easy way to create such a clone. Its most important feature and benefit is that it copies all objects, metadata, and statistics from the specified source database, without any of the data. Here is the command to create a clone:

DBCC CLONEDATABASE (YoungDBA2016, YoungDBA2016_Clone)

What operations are performed when we start cloning a database? When we run DBCC CLONEDATABASE it does the following:
1. It creates a new destination database that uses the same file layout as the source.
2. It takes an internal snapshot of the source database.
3. It copies the system metadata from the source to the destination database.
4. It copies the schema for all objects.
5. It copies the statistics for all indexes.

Does it require any service packs?
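As a small usage sketch, reusing the database names from the command above: the clone comes up read-only, and you can confirm it from sys.databases. The commented WITH options skip statistics and Query Store data; they arrived in a later servicing update, so treat their availability as an assumption about your build.

-- Create the clone (schema, metadata, and statistics only; no data)
DBCC CLONEDATABASE (YoungDBA2016, YoungDBA2016_Clone);

-- Variant that skips statistics and Query Store data (later servicing levels)
-- DBCC CLONEDATABASE (YoungDBA2016, YoungDBA2016_Clone2) WITH NO_STATISTICS, NO_QUERYSTORE;

-- The clone is created read-only; confirm it exists and check its state
SELECT name, is_read_only, state_desc
FROM sys.databases
WHERE name = 'YoungDBA2016_Clone';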

What are the Azure Data Lake Analytics service features?

Azure Data Lake Analytics is the second main component of the Azure Data Lake and big data analytics platform. Its main features are:
1. It is built on Apache YARN.
2. It scales dynamically with the turn of a dial.
3. You pay by the query.
4. It supports Azure AD for access control, roles, and integration with on-premises identity systems.
5. It is built around U-SQL, with the power of C# (a small example follows below).
6. It can process data across Azure.
7. You can write, debug, and optimize big data apps in Visual Studio.
8. It supports multiple languages: U-SQL, Hive, and Pig.

Thanks for reading. Please don't forget to like the Facebook page: https://www.facebook.com/pages/Sql-DBAcoin/523110684456757
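To make the U-SQL point concrete, here is a hypothetical U-SQL script in the usual extract, transform, output shape; the file paths, column names, and schema are invented for illustration.

// Read a tab-separated log file from the store (hypothetical path and columns)
@searchlog =
    EXTRACT UserId   int,
            Region   string,
            Duration int
    FROM "/input/searchlog.tsv"
    USING Extractors.Tsv();

// Aggregate with SQL-like syntax backed by C# types and expressions
@result =
    SELECT Region,
           SUM(Duration) AS TotalDuration
    FROM @searchlog
    GROUP BY Region;

// Write the result back to the store as CSV
OUTPUT @result
    TO "/output/duration_by_region.csv"
    USING Outputters.Csv();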

What are the features of Data Lake Store?

Azure Data Lake is a key service of Microsoft's big data platform. The two main components of Azure Data Lake are the store and analytics, and the store is one of the most important.
1. Each file in Data Lake Store is sliced into blocks.
2. Blocks are distributed across multiple data nodes in the back-end storage system.
3. With a sufficient number of back-end storage data nodes, files of any size can be stored.
4. The back-end storage runs in the Azure cloud, which has virtually unlimited resources.
5. Metadata is stored about each file, and there is no limit to the metadata either.
6. Azure maintains three replicas of each data object per region, across three fault and upgrade domains.
7. Each create or append operation on a replica is replicated to the other two.
8. Writes are committed to the application only after all replicas are successfully updated.
9. Read operations can go against any replica.
10. Access is role-based: each file and directory has an owner and a group, and they have r, w, x permissions.

Thanks for reading.

What are the traditional BI and analytics process models?

If you compare the traditional BI processing model with the modern data lake environment, you will find that the processing models are totally different, whether in respect of schema, transformation, or requirements. Previously we used schema-on-write; now we use schema-on-read. In the BI model, transformation was done after extraction; now it is done after load. The traditional process is:
· Start with end-user requirements, to identify the desired reports and analysis.
· Define the corresponding database schema and queries.
· Identify the required data sources.
· Create an ETL pipeline to extract the required data and transform it to the target schema.
· Create reports.
· Analyze the data.

Thanks for reading. Please don't forget to like the Facebook page: https://www.facebook.com/pages/Sql-DBAcoin/523110684456757