
Posts

Showing posts with the label bigdata

ETL Features of SQL Data Platform

Whether you are building a data warehouse or a big data platform, ETL or ELT is a major part of it; you could say it plays a vital role in the data transformation life cycle. ETL/ELT is used in almost any kind of data-related operation: data migration, data transformation, business intelligence, data warehousing, big data, and analytics. ETL is Extract, Transform and Load, whereas ELT is Extract, Load, then Transform. Both have their own pros and cons, but nowadays ELT is more popular than ETL. This is because the size of the data to be transformed keeps growing and growing. ELT also supports the data virtualization concept, where the actual original data resides on the source system without any modification and transformation is minimized. SQL Server has a rich set of tools for ETL and ELT. Here I am sharing some important differences between ETL and ELT which we should know before jumping to the features.
SrlNo  Dimension  ETL  ELT
1. Technology Adoption: ETL is a well-developed process used for over 20 years; ELT is a new technology.
2. Co
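To make the difference concrete, here is a minimal Python sketch (the source data and the cleanup transform are hypothetical stand-ins for real systems): in ETL the transform happens in the pipeline before data reaches the target; in ELT the raw rows land in the target first and are transformed there.

```python
# Minimal ETL vs ELT sketch. "extract", "transform" and the in-memory
# "target" lists are illustrative stand-ins for real source/target systems.

def extract():
    # Raw rows as they exist in the source system.
    return [{"name": " alice "}, {"name": "BOB"}]

def transform(row):
    # Example cleanup: trim whitespace and normalize case.
    return {"name": row["name"].strip().title()}

# ETL: transform in the pipeline, then load only the cleaned rows.
etl_target = [transform(r) for r in extract()]

# ELT: load the raw rows as-is, then transform later inside the target.
elt_target = list(extract())
elt_target = [transform(r) for r in elt_target]

print(etl_target == elt_target)  # both paths end with the same cleaned data
```

The end result is the same; what differs is where the transformation work runs, which is why ELT scales better as data volumes grow.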

What is Data virtualization

Transformation in technology is creating lots of changes in every related area. Artificial Intelligence, Machine Learning and Data Science have created a new kind of concept in the data platform: "Data Virtualization". By the name it looks like it might be related to servers or networks, but it is a concept and a logical method of data management. Data virtualization is the method of manipulating data from various sources to develop a common, logical and virtual view of information so that it can be accessed by front-end solutions such as applications, dashboards and portals without having to know the data's exact storage location. What are the benefits of data virtualization?
· Data virtualization increases revenues.
· Data virtualization lowers costs.
· Data virtualization reduces risks.
· It is a much faster way to manage data.
· It complements the traditional data warehouse.
· It maximizes the performance of
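As a rough sketch of the idea, here is a tiny Python example: a virtual view that resolves a query against two sources at access time, without copying or modifying the underlying data. The two "sources" and their field names are hypothetical.

```python
# Data-virtualization sketch: the virtual view joins two sources lazily;
# the original data is never moved or modified. Sources are illustrative.

crm_source = [{"customer": "Acme", "region": "EU"}]
billing_source = [{"customer": "Acme", "balance": 120}]

def virtual_customer_view():
    # Build the combined rows on demand, at read time.
    balances = {row["customer"]: row["balance"] for row in billing_source}
    for row in crm_source:
        yield {**row, "balance": balances.get(row["customer"])}

# A dashboard or portal would consume this view without knowing
# where each field is physically stored.
print(list(virtual_customer_view()))
```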

What are the features of Azure Data Lake Analytics Service?

Azure Data Lake Analytics is the second main component of the Azure Data Lake and big data analytics platform. Below are its main features:
· It is built on Apache YARN.
· It scales dynamically with the turn of a dial.
· You pay by the query.
· It supports Azure AD for access control, roles and integration with on-premises identity systems.
· It is built with U-SQL, with the power of C#.
· It processes data across Azure.
· You can write, debug and optimize big data apps in Visual Studio.
· Multiple languages: U-SQL, Hive and Pig.
Thanks for reading Plz dont forget to like Facebook Page.. https://www.facebook.com/pages/Sql-DBAcoin/523110684456757

What are the features of Data Lake Store?

Azure Data Lake is a key service of the Microsoft big data platform. There are 2 main components of Azure Data Lake: the Data Store and Analytics. The store is one of the most important.
1. Each file in ADL Store is sliced into blocks.
2. Blocks are distributed across multiple data nodes in the back-end storage system.
3. With a sufficient number of back-end storage data nodes, files of any size can be stored here.
4. The back-end storage runs in the Azure cloud, which has virtually unlimited resources.
5. Metadata is stored about each file. There is no limit to metadata either.
6. Azure maintains 3 replicas of each data object per region across three fault and upgrade domains.
7. Each create or append operation on a replica is replicated to the other two.
8. Writes are committed to the application only after all replicas are successfully updated.
9. Read operations can go against any replica.
10. Access is role based. Each file/directory has an owner and a group, and they have r, w, x permissions.
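Points 6 to 9 describe a write path where an append is acknowledged only after every replica is updated, while reads may be served by any replica. The following Python sketch simulates that rule in memory (the three-replica store and its method names are illustrative, not the real ADLS implementation):

```python
# Simulated three-replica append, following the rule that a write is
# committed to the application only after all replicas are updated.

class ReplicatedFile:
    def __init__(self, replicas=3):
        self.replicas = [[] for _ in range(replicas)]

    def append(self, block):
        # Replicate the append to every copy before acknowledging;
        # if any replica failed, the write would not be committed.
        for replica in self.replicas:
            replica.append(block)
        return "committed"

    def read(self, replica_index=0):
        # Reads can be served by any replica (point 9).
        return self.replicas[replica_index]

f = ReplicatedFile()
print(f.append("block-1"))  # acknowledged only after all 3 copies exist
print(f.read(2))            # every replica sees the same data
```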

What is the traditional BI and analytics process model?

If you compare the traditional BI processing model with the modern data lake environment, you will find the processing models are totally different, in respect of schema, transformation and requirements. Before, we were using schema on write; now we are doing schema on read. In the BI model, transformation was done after extraction, but now it is done after load.
· Start with end-user requirements, to identify desired reports and analysis.
· Define the corresponding database schema and queries.
· Identify the required data sources.
· Create an ETL pipeline to extract the required data and transform it to the target schema.
· Create reports. Analyze data.
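The schema-on-write versus schema-on-read distinction can be sketched in a few lines of Python (the record layout and field names here are hypothetical): in schema on write the schema is applied when data is loaded, while in schema on read the raw data is stored untouched and each reader applies its own schema at query time.

```python
import json

# Schema on write: shape records to a fixed schema at load time.
def load_schema_on_write(raw_records, schema):
    return [{field: rec.get(field) for field in schema} for rec in raw_records]

# Schema on read: store the raw text as-is; apply the schema at query time.
def query_schema_on_read(raw_lines, schema):
    return [{field: json.loads(line).get(field) for field in schema}
            for line in raw_lines]

# The lake keeps the raw lines untouched; the reader chooses the schema.
raw_lines = ['{"user": "a", "clicks": 3, "extra": true}']
schema = ["user", "clicks"]

print(query_schema_on_read(raw_lines, schema))
```

A second consumer could query the same raw lines with a different schema (for example including "extra"), which is exactly the flexibility the traditional schema-on-write model lacks.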

What is the architecture of Azure Data Lake?

Azure Data Lake is designed with 2 major components, the Data Lake Store and Analytics, and is majorly structured as below:
1.) Internal system - YARN & WebHDFS. YARN - analytics, & WebHDFS - Hadoop HDFS storage.
2.) Analytics - U-SQL.
3.) Compute engine - HDInsight (big data batch processing).
4.) Azure Data Lake Store (ADLS) serving as the hyper-scale storage layer.
What can I do with Azure Data Lake Analytics? Right now, ADLA is focused on batch processing, which is great for many big data workloads:
· Prepping large amounts of data for insertion into a data warehouse
· Processing scraped web data for science and analysis
· Churning through text, and quickly tokenizing to enable context and sentiment analysis
· Using image processing intelligence to quickly process unstructured image data
· Replacing long-running monthly batch processing with shor

What is Data Lake and Azure Data Lake

What is a data lake? A data lake is a storage repository that holds a vast amount of raw data in its native format until it is needed. While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data. Each data element in a lake is assigned a unique identifier and tagged with a set of extended metadata tags. When a business question arises, the data lake can be queried for relevant data, and that smaller set of data can then be analyzed to help answer the question. A data lake, in other words, maintains data in its native format and handles the three Vs of big data — volume, velocity, and variety — while providing tools for analyzing, querying, and processing. Data lakes eliminate all the restrictions of a typical data warehouse system by providing unlimited space, unrestricted file size, schema on read, and various ways to access data (including programming, SQL-like queries, and REST calls). What is Azure Data La
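The "unique identifier plus extended metadata tags" pattern can be sketched in Python; the catalog structure, field names, and tag keys below are illustrative assumptions, not any specific product's API.

```python
import uuid

# Sketch of registering a data element in a lake: each element gets a
# unique identifier and a set of metadata tags that make it findable later.

catalog = {}

def register_element(payload, tags):
    element_id = str(uuid.uuid4())  # unique identifier for the element
    catalog[element_id] = {"payload": payload, "tags": tags}
    return element_id

def find_by_tag(key, value):
    # "When a business question arises, the data lake can be queried
    # for relevant data" -- here, by matching a metadata tag.
    return [eid for eid, e in catalog.items() if e["tags"].get(key) == value]

eid = register_element(b"raw bytes", {"source": "clickstream", "format": "json"})
print(find_by_tag("source", "clickstream") == [eid])  # True
```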

Full Text search in MongoDB

Full-text search means searching content across an entire database, or across the storage where the data is located. It is similar to how we search for content in any search application: entering certain string keywords or phrases and getting back the relevant results sorted by their ranking. This is a common requirement in any large-data-set application that needs a quick and efficient search method. In this post I am sharing about text search in the MongoDB database. A text search option is available in almost every database, in both the RDBMS family and the NoSQL family. MongoDB has something different, which is ranking (weights of attributes). Starting from version 2.4, MongoDB began with an experimental feature supporting full-text search using text indexes. This feature has now become an integral part of the product. The text search uses stemming techniques to look for specified words in string fields, dropping stop words like a, an, the, etc. What are the features of "Mongodb Full
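As a small sketch of what this looks like in practice, the Python snippet below builds the text-index specification and the $text query as plain dicts, so it runs without a live server; the commented lines show how the same documents would be passed to a PyMongo collection. The collection name and field weights are illustrative assumptions.

```python
# MongoDB full-text search sketch. The dicts below are the actual documents
# MongoDB expects; with PyMongo you would pass them to create_index() and
# find() as shown in the comments (a running server is assumed there).

# Text index over two string fields, with per-field weights -- the ranking
# feature mentioned above: title matches count 3x more than body matches.
index_fields = [("title", "text"), ("body", "text")]
index_options = {"weights": {"title": 3, "body": 1}}
# coll.create_index(index_fields, **index_options)

# $text query with a relevance-score projection and sort.
query = {"$text": {"$search": "data lake"}}
projection = {"score": {"$meta": "textScore"}}
# coll.find(query, projection).sort([("score", {"$meta": "textScore"})])

print(query["$text"]["$search"])  # the phrase the engine will stem and match
```

The search string is stemmed and stop words are dropped before matching, so "lakes" in a document would still match the query above.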