Software Training Institute


Azure Data Factory Training In Hyderabad

with

100% Placement Assistance

Azure Data Factory Training In Hyderabad Batch Details

(Online Training)

Trainer Name: Mr. Bharath Sreeram
Trainer Experience: 18+ Years
Next Batch Date: 18-05-2023
Course Duration: 45 Days
Training Modes: Online Training (Instructor Led)
Call us at: +91-81868 44555
Email us at: brollyacademy@gmail.com
Demo Class Details: ENROLL FOR FREE DEMO CLASS

Azure Data Factory Course Curriculum

Azure Data Engineer Services Introduction Part 1

Azure Data Engineer Services Introduction Part 2

Azure Data Engineer Services Introduction Part 3

Azure Data Engineer Services Introduction Part 4

Azure Data Engineer Services Introduction Part 5

Azure Data Engineer Services Introduction Part 6

ADF introduction 

Difference between ADF version 1 and Version 2. 

  • DataFlows and Power Query are new features in Version 2
  • DataFlows are for data transformations
  • Power Query is for data preparation and data wrangling activities
  • Building blocks of ADF:
  • -> Pipeline
  • -> Activities
  • -> Datasets
  • -> Linked Service
  • -> Integration Runtime
  • Types of Integration Runtime:
  • Azure Integration Runtime
  • Self-Hosted Integration Runtime
  • SSIS Integration Runtime

 More on Technical differences between ADF version 1 and Version 2 – Part 1

  • More on Technical differences between ADF version 1 and Version 2 – Part 2
  • Introduction to Azure Subscriptions
  • Types of Subscriptions

 -> Free Trial

-> Pay-As-You-Go

  • Why are multiple subscriptions required?
  • What are resources and Resource Groups?
  • Resource Group advantages
  • Why do multiple Resource Groups need to be created?
  • What are regions?
  • Region advantages
  • Create a Storage Account with the Blob Storage feature
  • Converting the Blob Storage feature to the Data Lake Gen2 feature
  • Create a Storage Account with Azure Data Lake Gen2 features
  • How to enable Hierarchical Namespace
  • Creating containers
  • Creating sub-directories in a container of Blob Storage
  • Creating sub-directories in a container of Data Lake Gen2 Storage
  • Uploading local files into containers/sub-directories
  • When is ADF required?
  • Create Azure SQL and play with Azure SQL – Part 1
  • Azure SQL as OLTP
  • Create an Azure SQL Database
  • Create an Azure SQL Server
  • Assign a username and password for authentication
  • Launching the Query Editor
  • Adding the client IP address to the firewall rule settings
  • Create a table
  • Insert rows into the table
  • Default schema in Azure SQL
  • Create a schema
  • Create a table in a user-created schema
  • Loading query data into a table
  • Information_schema.tables
  • Fetching all tables and views from the database
  • Columns of Information_schema.tables:

-> TABLE_CATALOG

-> TABLE_SCHEMA

-> TABLE_NAME

-> TABLE_TYPE

  • Fetching only tables from the database
  • How to create a Linked Service for Azure Data Lake
  • Possible errors while creating a Linked Service for an Azure Data Lake account
  • How to solve errors for the Azure Data Lake Linked Service
  • Two ways to solve the Linked Service connection error:

 -> Enable Hierarchical Namespace for the Storage Account

 -> Disable the Soft Delete options for BLOB and CONTAINER

  • How to create datasets for Azure Data Lake files and containers
  • Your first ADF pipeline for Data Lake to Data Lake file loading
  • Copy Data activity used in the pipeline
  • Configuring the source dataset for the Copy Data activity
  • Configuring the sink dataset for the Copy Data activity
  • Run the pipeline
  • Two ways to run a pipeline:

 -> Debug mode

-> Trigger mode

  • Two Options in Trigger mode

 -> On demand

-> Scheduling

  • How to load data from Azure Data Lake to an Azure SQL table
  • Create a Linked Service for the Azure SQL Database
  • Resolving errors while creating the Linked Service for the Azure SQL Database
  • Create a dataset for the Azure SQL table
  • Create a pipeline to load data from Azure Data Lake to the Azure SQL table
  • Helped activities
  • Copy activity
  • If the data lake file schema and the Azure SQL table schema are different, how to load using the Copy Data activity
  • Perform ETL with the "Copy Data" activity
  • Copy Data activity with the "Query" option
  • Loading selected columns and rows matching a given condition from an Azure SQL table to Azure Data Lake
  • Creating new fields based on existing columns of a table and loading them into Azure Data Lake
  • Problem statement: a file has n fields in the header, but the data has n+1 field values. How to solve this problem with the Copy Data activity
  • Solution to the above problem statement, with practical implementation
  • Get Metadata – part 1
  • Get Metadata Field List Options for Folder
  •  “Exists” Option of Field List 
  • Data type of Exists field in  “Get Metadata”  output as Boolean(true/false)
  • “Item Name” Option of Field List
  • “Item Type “ Option of Field List
  • “Last Modified” Option of Field List
  • “Child Items” Option of Field List
  • Data Type of “child Items” field  in Json Output.
  • What is each element of “Child Items” called?
  • Data type of  item of “ChildItems” Field
  • What are subfields of each item in “childItems” Field
  • Get Metadata Activity – Part 2
  • If the input dataset is a file, what are the options of the "Field List" of Get Metadata?
  • How to get the number of columns in a file?
  • How do you make sure the given file exists?
  • How to get the file name configured for the dataset?
  • How to get the back-end data object type (file or folder) of a dataset?
  • How to know when a file was last modified?
  • How to get the file size?
  • How to get the file structure (schema)?
  • How to get all of the above information for a file/folder/table with a single pipeline run
  • If the input dataset is an RDBMS table, what are the options of the "Field List" of the Get Metadata activity?
  • How to get the number of columns of a table?
  • How to make sure the table exists in the database?
  • How to get the structure of a table?
  • In this session, you will learn the answers to all the above questions practically.
  • Introduction to “Get Metadata” activity. 
  • How to fetch File System information Using “ Get metadata” Activity. 
  • How “Get metadata” activity writes output in “JSON” format. 
  • How to Configure Input dataset for “Get Metadata” activity. 
  • What is the “Field List” for “Get Metadata” activity?
  • Small introduction to “Field List” options. 
  • Importance of “Child Items” option of “Field List” in “Get Metadata” activity.
  • How to Check and understand  output of “Get Metadata” activity.
  • “childItems” field as JSON output of “Get Metadata”.
  • "childItems" data type as a collection (Array)
  • What is each element of “childItems” output
  • What is the “exists” option in “Field List”. 
  • Introduction to “Filter”  activity. 
  • Problem statement: a container in an Azure Data Lake storage account has hundreds of files; some files relate to ADF, some to employees, some to sales, and some to others such as logs.
  • How to fetch only required files from Output of  “get metadata” activity  ?
  • How to place “Filter”  activity in the pipeline. 
  • How to Connect “Get metadata” activity and “Filter” activity. 
  • What happens if we don't connect the two activities? (This is the first scenario with multiple activities in a single pipeline.)
  • How to pass output of “Get metadata” Activity to “Filter Activity”
  • “Items” field of “Filter” Activity. 
  • What is the “@activity()” function  and “@activity().output” .
  • The above output produces many fields; how to take only specific fields as input to the Filter activity.

Example: @activity('get metadata1').output.childItems
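For orientation, a minimal sketch of what the "Get Metadata" output typically looks like when "Child Items" is selected (the file and folder names here are made up for illustration):

  childItems: [ { "name": "emp.csv", "type": "File" }, { "name": "sales", "type": "Folder" } ]

Each element of childItems can then be read downstream with @item().name and @item().type.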

  • How to avoid passing unnecessary information to the next (subsequent) activity
  • Configuring the "Condition" field of the "Filter" activity.
  • How to access each element of the "childItems" output using the @item() function.
  • @item() output as a nested JSON record. How to access each field.

Example: @item().name, @item().type

  • @startswith()   function  example
  • How to check output of “Filter” activity   
  • What is the field name of the "Filter" activity output in which the required information is available?
  • @not() function example 

Get Metadata + Filter activities: how to apply a single condition and how to apply multiple conditions, using the following functions (an illustrative example follows this list):

@equals 

@greater( ) 

@greaterOrEquals( ) 

@or (C1,C2 …….) 

@and (C1,C2 …..) 

@not (equals( ))
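As an illustration of how these functions combine in the "Filter" activity (a minimal sketch; the activity name is the ADF default assumed here, and the "emp" prefix is made up):

  Items:     @activity('Get Metadata1').output.childItems
  Condition: @and(equals(item().type, 'File'), startswith(item().name, 'emp'))

A single condition would simply be @equals(item().type, 'File'); wrapping conditions in @and(), @or() or @not() is how multiple conditions are applied.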

  • Task: Get Metadata – fetch only files, and reject folders, from a given container with the "Get Metadata" and "Filter" activities.
  • Steps to achieve the above task:
  • Step 1: Create a pipeline and drag the Get Metadata activity
  • Step 2: Configure the input dataset of the container
  • Step 3: Add a Field List with the "Child Items" option
  • Step 4: Add a "Filter" activity to "Get Metadata"
  • Step 5: Configure the "Items" field, which is the input for the "Filter" activity, from the output of "Get Metadata"
  • Step 6: Configure the filter condition to take only files
  • Step 7: Run the pipeline
  • Step 8: Understand the output of the "Filter" activity
  • In which field of the JSON output is the filter output available?
  • What is the data type of the filtered output?
  • In this task you will work with the below ADF expressions
  • @activity() function
  • @activity('Get Metadata1').output
  • @activity('Get Metadata1').output.childItems
  • @item().type
  • @equals(item().type, 'File')
  • After completing this session, you will be able to implement all the above steps, answer the above questions, and use ADF expressions practically.
  • Scenario :  with  GetMetadata and Filter Activities – Part 1

Scenario :  with  GetMetadata and Filter Activities – Part 2

  • Task: bulk load of files from one storage account to another storage account (from one container to another container)
  •  Helped activities 

 -> get metadata 

 -> foreach

 -> copy data

  • Bulk load of files from one storage account to another storage account with the "Wildcard" option
  • Bulk load of files from one storage account to another storage account with the "Get Metadata", "ForEach" and "Copy Data" activities
  • Dataset parameterization
  • Bulk load of files into another storage account with parameterization
  • Copy only files whose names start with "emp" into the target container using the Get Metadata, Filter, ForEach and Copy Data activities [in the target container, the file name should be the same as the source]
  • Load multiple files into a table (Azure SQL)
  • Helped activities 
    • get metadata + filter + foreach  + copy data activities
  • Conditional split
  • Conditionally distributing files (data) into different targets (sinks) in two ways:
    • using the Filter activity
    • using the If Condition activity
    • Conditional split implementation with the "Filter" activity
    • Problem with Filter, explained
  • Conditional split with "If Condition"
  • Helped activities
    • get metadata  + filter +  for each + if condition
  • Lookup activity 
  • Migrate all tables of a database into a data lake with a single pipeline
  • Helped activities

Lookup activity, ForEach activity, and Copy Data activity with parameterization (see the expression sketch below)
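A hedged sketch of the expressions this pattern typically uses, assuming a Lookup activity named 'Lookup1' that returns table names (for example from Information_schema.tables) and a source dataset with a parameter named tableName (all names here are illustrative):

  ForEach Items:               @activity('Lookup1').output.value
  Dataset parameter tableName: @item().TABLE_NAME
  Sink file name:              @concat(item().TABLE_NAME, '.csv')

Inside the ForEach, the Copy Data activity's source dataset receives the current table name through the dataset parameter, so one pipeline migrates every table returned by the Lookup.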

  • Data Flows Introduction. 
  • ELT (Extract Load and Transform)  
  • Two DataFlows in ADF 
  • Dataflow activity in pipeline.
  • Mapping Data Flows  
  • Configuring Mapping Dataflows as Data Flow Activity in PipeLine
  • Introduction to Transformations
  • Source 
  • Sink
  • Union
  • Filter
  • Select
  • Derived Column
  • Join etc. 
  • Difference between the source and sink of the "Copy Data" activity and the source and sink of "Mapping Data Flows"
  • Source (data lake) to sink (SQL table) by using the Data Flow Source and Sink transformations
  • Extract data from an RDBMS to the data lake -> apply Filter -> Sink
  • Assignment1 
    • SQL Table to Datalake
    • Helped Transformation (Source)

Sink (Dataflow Activity)

  • Filter transformations 
  • Helped Transformations

(Source) Filter Sink (Dataflow Activity)

  • Select transformations 
  1. Rename columns 
  2. Drop columns 
  3. Reorder the columns
  • Helped Transformation 

(Source ) Select Sink (Dataflow Activity)

  • Derived column Transformation Part1 

1. You can generate a new column with a given expression

2. You can update existing column values (see the expression sketch after this subsection)

  • Helped Transformation 

Derived column 

Select Sink (Dataflow Activity) 
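A minimal sketch of Derived Column expressions for the two cases above (the emp_name and salary column names are made up; upper() is a standard Mapping Data Flow expression function):

  full_name : upper(emp_name)      (a new column derived from an existing one)
  salary : salary * 1.1            (updating an existing column's values)

In the Derived Column settings, the left-hand side is the output column name and the right-hand side is the expression.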

  •  How to clean/handle null values in data.
  • Why should we clean nulls?

1. Computational errors

2. Data loading errors into the target table (for example, a Synapse table)

  • Cleaning names (cleaning does not always mean replacing nulls with a constant value; it also means transforming and formatting the data, such as names, according to business requirements) – see the expression sketch below.
  • Helped transformation 

Derived column
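A hedged sketch of null handling in a Derived Column (the column name and the replacement value are illustrative; iif(), isNull(), coalesce() and trim() are standard Data Flow expression functions):

  name : iif(isNull(name), 'Unknown', trim(name))
  name : coalesce(name, 'Unknown')

Either form replaces nulls with a constant; combining it with functions such as trim() or upper() also covers the formatting part of the cleaning described above.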

  • Generate new columns with conditional transformations. Two options (see the expression sketch after this subsection):

 1. iif() -> nested ifs

 2. case()

  • Helped transformation 

Derived column 

  • Conditional transformation with case() function. 

emp -> derived column -> select -> sink

  • Helped transformation 

derived column 

select  sink  (Dataflow activity) 
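An illustrative sketch of the two options named above, written as Derived Column expressions (the salary thresholds and labels are made up):

  grade : iif(salary >= 50000, 'High', iif(salary >= 20000, 'Medium', 'Low'))      (nested ifs)
  grade : case(salary >= 50000, 'High', salary >= 20000, 'Medium', 'Low')          (case() with a default)

case() takes alternating condition/value pairs and an optional final default, which usually reads more cleanly than deeply nested iif() calls.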

  • Union transformation  part 1.
  • Merging two different files with the same schema by using two Source transformations and the Union transformation, writing the output into a single file.
  • Helped transformation 

Two sources

Union 

Sink 

(Dataflow activity)

  • Merging three different files with different schemas by using the Derived Column, Select and Union transformations; finally we get a single file with a common schema.
  • Helped transformation 

Derived column 

union 

select 

(source) 

sink (Dataflow activity)

  • Combining two different files by using the Derived Column, Union and Aggregate transformations, to get the branch 1 total and branch 2 total.
  • Helped transformations 

(Source) 

Derived column  

Union 

Aggregate 

(After this, watch session 70 for more on Unions.)

  • Join transformation – 5 types

 1.Inner join  

 2.Left outer join 

 3.Right outer join 

 4.Full outer join 

 (5. Cross join)

  • Helped transformation 

Joins 

Select 

Sink 

(Dataflow activity) 

Full Outer Join bug fix

How to join more than two datasets (example 3 datasets).

  • Interlinked scenario related to the 25th session.

Treat dept as project.

Task: "Summary Report"

Active employees (those already engaged in a project -> projects 11, 12, 13) -> the total salary budget of these projects; bench team (projects 20, 21) -> total salary (bench projects).

  • Helped transformations 

Joins 

Select  

Derived column 

Aggregate 

Sink 

(Dataflow activity)

  • To use the full outer join advantage (complete information, no information missing)

Task 1: Monthly sales report by using the Derived Column and Aggregate transformations (see the expression sketch below).

  • Helped transformation 

Derived column  

Aggregate 

sink
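A minimal sketch of the Derived Column expressions such a report might use before the Aggregate (the saleDate and salesAmount columns and the output names are assumptions; year(), month(), ceil(), toInteger(), toString() and concat() are standard Data Flow expression functions):

  saleYear  : year(saleDate)
  saleMonth : month(saleDate)
  saleQtr   : concat('Q', toString(toInteger(ceil(month(saleDate)/3.0))))

The Aggregate transformation then groups by these derived columns (for example by saleYear and saleQtr) and computes sum(salesAmount).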

  • Task 2: Quarterly sales report by using the Derived Column and Aggregate transformations.
  • Helped transformation 

Derived column  

Aggregate 

  • Task 3: Sales report with year as the primary group and quarter as the sub-group.
    • Helped transformation 

Filter activity 

Derived column 

Aggregate 

Sort 

Sink 

(Dataflow activity)

  • Task 4: Comparing quarterly sales reports: comparing the current quarter's sales with the previous quarter's sales.
  • Helped transformations 

Source  

Join  

Select 

Derived column

  • A real time scenario on Sales data Analytics – Part 1

A real time Scenario on Sales data Analytics – Part 2

  • More on Aggregation Transformation.
  •  Configuration Required  for  Aggregation Transformation
  • Entire-column aggregations
  • Entire-column multiple aggregations, using functions such as the following (see the sketch after this list):

-> Sum()

-> Count()

-> max()

-> min()

-> avg()
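A hedged sketch of an Aggregate transformation configuration using these functions (the dept and salary column names are made up):

  Group by:    dept
  Aggregates:  totalSalary = sum(salary)
               headCount   = count()
               maxSalary   = max(salary)
               avgSalary   = avg(salary)

Leaving "Group by" empty gives the entire-column aggregations described above; adding one or more group-by columns gives the single/multiple grouping variants.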

  • Single grouping and Single Aggregation
  • Single Grouping and Multiple Aggregations
  • Grouping by Multiple Columns with Single aggregation
  • Grouping by Multiple Columns with Multiple Aggregations.
  • Finding range Aggregation by adding  “Derived Column” transformation to “Aggregation” Transformation. 
  • Conditional Split of data. 
  • Distributing data into multiple datasets based on given condition
  • Split on options:
  1.  First Matching conditions
  2. All Matching conditions
  • When to use “First Matching Conditions”  and When to use “All Matching Conditions”
  •  An Example.        
  • A use case on the "First matching conditions" option of the "Conditional Split" transformation in Mapping Data Flows with sales data.

A use case on the "All matching conditions" option of the "Conditional Split" transformation in Mapping Data Flows with sales data (see the sketch below).

Conditional split with the Cross Join transformation, using a matrimony example.
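An illustrative sketch of Conditional Split stream conditions for a sales-data case like the ones above (the stream names and amount thresholds are made up):

  Stream highValue: amount >= 100000
  Stream midValue:  amount >= 10000
  Default stream:   (everything else)

With "First matching conditions", a row with amount = 150000 goes only to highValue; with "All matching conditions", the same row is copied to both highValue and midValue.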

  • Lookup with multiple datasets 
  • Helped transformations 

Source 

Lookup 

Sink 

(Dataflow activity) 

  • Lookup with more options
  • products as primary stream 
  • transformation as lookup stream 
  • Helped transformations 

Products 

Lookup

Broadcasting

Partitioning part 1

Partitioning part2

Partitioning part3

Exists transformation – part 1

  • Helped transformation 

(Customers) 

Exists 

 Exists transformations  – part 2

  • Helped transformation 

(Source) 

(Customers) 

Exists 

Sink 

(Dataflow activity)

  • Finding common records, records available only in the first dataset, and records available only in the second dataset.
  • All records except the common records of the first and second datasets, in a single data flow
  • How to capture data changes from source systems to Target Data warehouse  Systems. 
  • Introduction to SCD (Slowly Changing Dimensions)
  • What is SCD Type 0 and Its Limitation. 
  • What is Delta in Data of Source System.
  • What is SCD Type 1 and Its Limitation 
  • What is SCD Type 2 and How it tracks History of specific attributes of source data. 
  • Problem with SCD Type 2
  • Introduction to SCD Type 3
  • How it solves the problem of SCD Type 2
  • How SCD Type 3 maintains a recent history track
  • Limitation of SCD Type 3. 
  • Introduction to SCD Type 4 
  • How SCD Type 4 provides a complete solution to the problem of SCD Type 2
  • (Remember no SCD 5)
  • Introduction to SCD Type 6 and Its benefits.
  • How data transformations are done in ADF version 1 (without data flows)
  • Aggregate Transformation , Sort Transformation with following examples
  1. Single grouping with single aggregation 
  2. Single Grouping with multiple aggregations
  3. Multi Grouping with Multiple Aggregations. 
  4. Sort with single column
  5. Sort with multiple columns. 
  •  What is pivot transformation?
  • Difference between Aggregate transformation and Pivot Transformation 
  •   Implement pivot transformation in dataflow
  •     How to clean pivot output
  •    How to call multiple dataflows  in a single pipeline. 
  •  Why we used multiple dataflows in one single pipeline. 
  • Assignment on   Join, aggregate, pivot transformations. 
  • Finding Occupancy based on salary. 
  • Unpivot transformation. 
  • What should be the input for Unpivot? (The pivoted output file.)
  • 3 configurations:

  -> Ungrouping column

 -> Unpivoted column (column names become column values)

 -> Aggregated column expression (which row-aggregated values are turned into column values)

  • Difference between  output of  “aggregate” and “unpivot” transformation. 
  • What additionally unpivot  produces. 
  • Use case of Unpivot
  • Surrogate Key transformation, 
  • Why we should use Surrogate key.
  • Configuring Starting Integer Number for Surrogate Key
  • Scenario: your bank is "ICICI"; for every record a unique CustomerKey should be generated, as ICICI101 for the first customer, ICICI102 for the second customer, and so on. But the surrogate key gives only an integer value: 101 for the first and 102 for the second. How to handle this scenario, which is a combination of string and integer (a sketch appears after the ranking scenario below).
  • Rank Transformation in dataflows – Part 1
  • Why sorting data is required for Rank transformation 
  • Sorting options as “Ascending and Descending”
  • When to use the “Ascending” option for Ranking. 
  • When to use the “Descending” option for Ranking. 
  •  Dense Rank and How it Works
  • Non Dense Rank (Normal Rank) and How it Works. 
  • Why we should not use “surrogate key” for Ranking. 
  • What is difference between Dense Rank and Non Dense Rank(Normal Rank)
  • Limitations of Rank Transformation
  • Implementing Custom Ranking With a Real time scenario
  • Custom Ranking  implementation with Sort,  Surrogate key, new Branch , aggregate, Join transformations. 
  • Scenario : 

   -> Problem statement: a school has 100 students; one student got 90 marks, and the remaining 99 students failed and scored 10 marks. The Rank transformation of ADF data flows gives rank 1 to the student who scored 90 and rank 2 to the failed students who scored 10 marks. But the school management wants to give a gift to the top 2 rankers (the 2nd rankers failed and got an equal lowest score, so all students would get a gift). How to handle this scenario?
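For the earlier bank CustomerKey scenario (ICICI101, ICICI102, …), a minimal Derived Column sketch built on top of the Surrogate Key output (the key column name and the starting offset are assumptions):

  customerKey : concat('ICICI', toString(key + 100))

Here key is the integer produced by the Surrogate Key transformation (1, 2, 3, …), so adding 100 and prefixing the bank code yields ICICI101, ICICI102, and so on.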

  • Window transformation part 1
  • Window transformation. 
  • Cumulative Average
  • Cumulative Sum
  • Cumulative Max
  • Dense Rank for each partition. 
  • Making all rows a single window and applying cumulative aggregation (see the sketch below).
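A hedged sketch of a Window transformation set up for the cumulative aggregations and per-partition dense rank listed above (dept, salary and the output column names are illustrative):

  Over (partition by): dept
  Sort:                salary descending
  Window columns:      cumSum = sum(salary)
                       cumAvg = avg(salary)
                       cumMax = max(salary)
                       dRank  = denseRank()

To treat all rows as a single window, one common approach (an assumption here, not necessarily the exact method used in the session) is to partition over a constant derived column.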

Window transformation part 2

Window transformation part 3

Window transformation part 4

Window transformation Part 5

Window transformation Part 6

Custom Rank Implementation with Window Transformation,

  • Parse transformation in Mapping DataFlows. 
  • How to handle and parse string collection ( delimited string values )
  • How to handle and parse xml data. 
  • How to handle and parse json data
  • Converting Complex Json nested structures into CSV/Text file.
  • Complex data processing (transformations)
  • How to parse nested JSON records
  • How to flatten an array of values into multiple rows
  • If data has complex structures, what are the supported data store formats?
  • How to write into JSON format
  • How to write into a flattened file format (CSV)
  • Transformation used to process data. 
  1. Parse
  2. Flatten
  3. Derived Column
  4. Select 
  5. Sink.
  • Reading JSON data (from a vertical format)
  • Reading a single JSON record (document)
  • Reading an array of documents
  • Converting complex types into String using the Stringify and Derived Column transformations
  • Used transformations in Data Flow. 

-> source 

->stringify

->derived column

Expression :  toString(complexColumn)

         → select

         -> sink.

  • Assert Transformation
  • Setting Validation Rules for Data. 
  • Types of Assert:

-> Expect True

-> Expect Unique

-> Expect Exists

  • isError()   function to  validate a record as   “Valid”   or “Invalid”.
  • Assert Transformation part 2.
  • Assert Type “Expect Unique”
  • Rule for Null values
  • Rule for ID ranges.
  • Rule for ID uniqueness
  • hasError() function to identify which rule failed
  • Difference Between isError() and hasError()
  • Assert Transformation part 3.
  • How  and why to configure Additional streams
  • Assert type “Expect Exists”
  • How to validate a record reference available in any one of multiple additional streams. (A scenario implementation).
  • How to Validate a record reference available in all multiple additional streams.(A scenario Implementation),

Incremental data loading (part 1)

Incremental data loading (part 2)

  • Incremental data loading (part 3).
  • Implementation of Incremental load(delta load) for multiple tables with a single pipeline.

Deduplication part 1

Deduplication part 2

Deduplication Part 3 

Solving Bugs of Incremental Loading . – Part1

Solving Bugs of Incremental Loading – Part2

  • AlterRow Transformation Part – 1 
  • Removing a list of records matching a specific criterion from the sink (table)
  • AlterRow Transformation part- 2
  • Removing a given list of records from the sink with no specific criterion
  • AlterRow Transformation Part – 3. 
  • Update given List of Records in Sink. 
  • SCD (Slowly Changing Dimensions Type 1 ) Implementation. 
  • Alter Row Transformation with UpSert Action.
  • A realtime Assignment (Assignment1 and Assignment 2) on SCD. 
  • Incremental Load and SCD combination 
  • How SCD was implemented before the "Alter Row" transformation was introduced in Data Flows
  • How the Exists transformation is helpful
  • How SCD was implemented before the ADF version 2 Data Flow features
  • How stored procedures are helpful for implementing SCD
  • Combining rows of multiple tables with different schemas with a single Union Transformation. 
  • Problem with Multiple Unions in DataFlow. 
  • Types of Sources and Sinks
  • Source Types:

-> Dataset

-> Inline

  • Sink Types:

-> Dataset

-> Inline

-> Cache

  • When to use dataset and Inline
  • Advantage of Cache: reuse of the output of a transformation
  • How to write output into Cache. 
  • How to Reuse the Cached output. 
  • Scenario :   Generating Incremental IDs based on Maximum ID of Sink DataSet – Part 1
  • Scenario: Generating Incremental IDs based on Maximum ID of Sink DataSet – Part 2
  • Writing outputs into JSon format  
  •  Load Azure SQL data into Json File.
  • Cached Lookup in Sink Cache – Part 1
  • Scenario : 

 Two source files: 1. Employee, 2. Department

  The common column in both is dno (department number), which acts as the joining column or key column.

  • Task: without using the Join or Lookup transformations, join the two datasets, to increase performance, using the "Cached Lookup" of the Cache sink type.
  • Other alternatives of this Cached Lookup.
  • How to configure Key Column for Cached Lookup
  • How to access values of Lookup key in Expression Builder. 
  • lookup() function  in Expressions
  • sink#lookup(key).column
  • Cached Lookup in Sink Cache – Part 2
  • The scenario below will teach you how cached output is used by multiple transformations (an expression sketch follows the scenario).
  • Scenario:

    Single input file: Employee.

            Two transformations are required:

  1. For each employee, find his salary occupancy in his department
  2. For each employee, find his average-salary status in his department as "Above Average" or "Below Average".

    For these two transformations, the common input is the sum() and avg() aggregations grouped by department number. This should be sent to the Cache sink as a Cached Lookup.

   Each transformation output should be in a separate output file.
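A hedged expression sketch for the scenario above, assuming the department-level sum() and avg() are written to a cache sink named deptCache with dno configured as its key column (all names are illustrative; the sink#lookup(key).column pattern is the one introduced earlier):

  occupancy : salary / deptCache#lookup(dno).totalSalary
  avgStatus : iif(salary >= deptCache#lookup(dno).avgSalary, 'Above Average', 'Below Average')

Each Derived Column reads the cached aggregates through the lookup key, so the aggregation is computed once and reused by both output flows.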

  • In the above data flow, which flows are executed in parallel and which flows are executed in sequence?
  • How Spark knows the dependency between flows [using the DAG (Directed Acyclic Graph) engine]
  • Scope of the "Sink Cache" output [problem statement]
  • Using multiple Cache Sink outputs in a single transformation
  • Scenario:

The Employee table has id, name, salary, gender, dno, dname, location columns, with two records having ids 101 and 102.

New employees are placed in the data lake file newdata.txt with name, salary, gender, dno fields.

Department name and department location fields are available in the department.txt file of the data lake.

 Insert all new employees into the Employee table of Azure SQL, with an incremental ID based on the maximum id of the table, along with the department name and location.

  • Sink1 : write max(id) from Employee table  (cache sink)
  • Sink2 :  write all rows of department.txt into  Cache Sink. and Configure dno as Key Column of Lookup
  • Read data from new employees file and generate next employee id with help of surrogate key and sink1 output. 
  • Generate lookup columns department name, location from Sink2 and Load into Target Employee table Azure SQL. 
  • How two different Data Flows exchange values – Part 1
  • Scope of Cache Sink of a DataFlow.
  • What is a variable?
  • What is the difference between parameters and variables?
  • How to create a variable in a pipeline
  • Variable Data Types:

-> String

-> Boolean

-> Array

  • Create a  DataFlow with Following Transformation Sequence to find Average of Column
  1.  Source 
  2.  Aggregate (for finding Average)
  3. Sink Cache (to write Average of column)
  • How to pass Cache Sink output to Activity Output. How to Enable this feature
  • Create a Pipeline to Call this Data Flow which writes into Sink Cache
  • How to Understand Output of DataFlow Activity which writes in Sink Cache and Writes as Activity Output
  • Fields of  DataFlow Activity output.
  1. runStatus
  2. Output
  3. Sink
  4. value as Array
  • “Set Variable” Activity 
  • How to Assign Value to Variable using “Set Variable” Activity.
  • How two different Data Flows exchange values – Part 2
  • How to Assign DataFlow Sink Cache Output  to Variable .
  • Dynamic Expression to Assign Value to Variable Using “Set Variable” Activity.
  • @activity(‘dataflow’).output
  • runStatus.output.sink.value[].field
  • Understand Output of “Set Variable” Activity.
  • How two different Data Flows exchange values – Part 3
  • DataFlow Parameters
  • How to create Parameter for DataFlow
  • Fixing Default Value for DataFlow Parameter
  • How to access DataFlow Parameter
  • $<parameter_name>
  • Accessing variable value  from pipeline into DataFlow Parameter
  • @variables(‘<variable_name>’)
  • Dynamic expression to access variable value in DataFlow Parameter
  • From the above 3 sessions, you will learn how to pass the output of a cache sink to other data flows.
  • Categorical Distribution of Data into Multiple Files (foreach data group one separate file Generation ). → Part 1
  • Scenario: there is a department name column with "Marketing", "HR", "Finance" values, and these values are repeated across rows. All rows relate to these 3 categories. We need to distribute the data dynamically into 3 categories.
  • Example: all rows related to the Marketing department go into the "marketing.txt" file.
  • Find Unique Values in Categorical Column and Eliminate Duplicate Values
  • Convert Unique Column into Array of String
  • collect() function
  • Aggregate Transformation using Collect() function
  • Write Array of String into Sink Cache
  • Enable  Cache  to  “Write to Activity Output” 
  • Create a pipeline, to Call this Dataflow. 
  • Create a variable in pipeline with Array Data type 
  • Access    DataFlow Cache  Array   into   Array Variable  with “Set Variable” activity Using “Dynamic Expressions”.
  • Categorical Distribution of Data into Multiple Files (foreach data group one separate file Generation ). → Part 2
  • Array variable to Foreach
  • Configuring  input (value of Array Variable)  for Foreach Activity 
  • DataFlow sub Activity under Foreach.
  • Passing Foreach Current item to Dataflow parameter 
  • Apply Filter  with DataFlow Parameter. 
  • DataFlow Parameter as file name in Sink Transformation,
  • SCD (Slowly Changing Dimensions ) Type 2 – Part 1
  • Difference between SCD Type 1 and SCD Type 2
  • What is Delta of Source data
  • How to Capture Delta (Solution : Incremental loading)
  • Behavior of SCD Type 1
  • If a record of the delta exists in the target, what happens for SCD Type 1
  • If a record of the delta does not exist in the target, what happens for SCD Type 1
  • How to capture Deleted records from source into Delta. 
  • Behavior of SCD Type 2
  • What should happen for Delta of Source in Target
  • What should happen for old records of Target when Delta is inserted in Target
  • Importance of the surrogate key in the target table
  • What is version of a record
  • How to recognize a record is active or inactive in Target Table
  • Importance of Active Status Column in Target table to implement SCD Type 2
  • What additional columns are required for the target table to implement SCD Type 2

  • Preparing data objects and data for SCD Type 2
  • Create target table  @ Azure sql or Synapse 
  • What is an identity column in Azure SQL?
  • Identity column as surrogate key in target table
  • Possible values for active status column in target table
  • Preparing data lake file for delta

SCD (Slowly Changing Dimensions ) Type 2 – Part 2

SCD (Slowly Changing Dimensions) Type 2 – Part 3

SCD (Slowly Changing Dimensions) Type 3 – Part 1

SCD (Slowly Changing Dimensions) Type 3 – Part 2

SCD (Slowly Changing Dimensions ) Type 4 – Part 1

SCD (Slowly Changing Dimensions) Type 4 – Part 2

SCD  (Slowly Changing Dimensions) Type 6 – part 1

SCD (Slowly Changing Dimensions) Type 6 – Part2

SCD (Slowly Changing Dimensions) Type 6 – Part 3

How to write activity output into a file (data lake) – Part 1

How to write activity output into a file (data lake) – Part 2

How to write activity output into a file (data lake) – Part 3

How to write activity output into a file (data lake) – Part 4

More Information on Integration Runtime. – part 1

More Information on Integration Runtime – Part 2

Until Activity Part 1

Until Activity Part 2.

Web activity Part 1

Web Activity Part 2

Switch Activity  

Script Activity


More on Lookup Activity With Set Variable  Part – 2

More On Lookup Activity With Append Variable Activity

 Importing Data From  SNOWFLAKE   to  Azure Blob Storage   Part – 1

Importing Data From  SNOWFLAKE  to  Azure Blob Storage  Part – 2

Key Features Of Azure Data Factory Training in Hyderabad


What is Azure Data Factory?

  • Azure Data Factory is a managed cloud service built for extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.
  • It provides scale-out, serverless data integration and data transformation.
  • It is a data integration and orchestration tool that facilitates creating, scheduling and managing data pipelines in the cloud or on-premises.
  • Some of the compute services Azure Data Factory integrates with are Azure HDInsight (Hadoop), Azure Databricks, Azure Synapse Analytics, and Azure SQL Database.

About Azure Data Factory Training in Hyderabad

Azure Data Factory (ADF) is a powerful, cloud-based, fully managed, serverless ETL and data integration service for ingesting, preparing and transforming data at scale.

It provides a complete data integration and data transformation experience that can be used to move data between many business data sources, transform it at scale and write the processed data to a data store of your choice.

Azure Data Factory can connect to all of your data and processing sources, including SaaS services, file shares and other online services.

In this course, multiple industry experts teach our students the Azure Data Factory course right from scratch.

Brolly Academy provides Azure Data Factory training in Hyderabad. 

Our Azure Data Factory course is the most comprehensive, with a conceptual curriculum and hands-on practical exercises.

In this Azure Data Factory course from Brolly Academy, students learn ADF hands-on; our Azure Data Factory syllabus is outlined in the curriculum above.

Best Azure Data Factory Training In Hyderabad

Our ADF-certified trainers have a deep understanding of, and passion for, ADF technology, and their teaching style makes even the most complex concepts easy to understand.

They take extra effort to explain each topic in detail, ensuring that students enjoy a quality learning experience at Brolly Academy.

We provide an Azure Data Factory certification course and placement assistance for students at Brolly Academy.

After completing Brolly Academy's Azure Data Factory course, students receive an Azure Data Factory course certificate certifying their expertise in ADF.

Interested in learning more about the Azure Data Factory course? Join us and enroll in our comprehensive Azure Data Factory Training in Hyderabad and make the most of it.

Who uses ADF

ADF using companies

Azure data factory learning path

  • Azure data factory training online

    Brolly Academy's Azure Data Factory training in Hyderabad is also offered online, so students can learn the Azure Data Factory course at their own pace with access to our e-learning platforms. Our online course provides in-depth training in all the core concepts as well as the more advanced features. We've included a number of practical examples, case studies, and exercises that are demonstrated live by our trainers.

  • Azure data factory training videos

    Our Azure Data Factory video learning classes are engaging and informative, with videos that let you learn from experienced instructors, who are experts in their field, at your convenience. These instructional videos are customized classroom recordings available to our students, in which our trainers deliver training with live demonstrations of ADF concepts, promoting an easy learning environment.

Why Choose Us For Azure Data Factory Course?

  • Industry oriented project and Exercises

    All our projects and exercises are designed and prepared by project managers from top MNCs working in the Azure Data Factory domain. Students work on a capstone project with a real-life use case that applies whatever they have learned in this Azure Data Factory course. Our expert trainers include case studies as an active part of the training process to improve students' technical knowledge. This also gives them first-hand practical exposure that accelerates their learning progress.

  • Pre and post training technical support

    At Brolly Academy, our students are provided with industry-oriented project work and exercises. Our dedicated experts support you before and after the technical training and clear any technical doubts you have while working on a project.

  • 360° proficiency in Azure data factory

    Brolly Academy's Azure Data Factory course provides students with 360° proficiency in ADF through a combination of live instructor-led and self-paced training with real-time projects. Our students get a chance to apply all the skills they have learned through the individual modules at our institute.

  • Collaborative + interactive sessions

    Our prime goal at Brolly academy is to provide quality training and as a part of that, we have collaborative sessions where students get to interact with one another with the trainer guiding them throughout. This promotes a good learning atmosphere for our students.

  • Azure data factory placement assistance

    Our counselors help students learn job skills and gain professional polish. We assist them with everything from resume writing to interviewing techniques, and we match their skill sets with jobs that make the most sense for their career paths. In addition to resume and cover letter review, interview prep workshops are available for those seeking employment.

  • Expert ADF certified trainers

    Our trainers are ADF-certified industry professionals with years of hands-on experience. They explain each topic in detail, demonstrate concepts with real-time examples, and are available to clear doubts throughout the course.

  • Azure Data Factory certification course

    Brolly Academy provides the best Azure Data Factory course; on completion you will receive a certificate stating your accomplishment and expertise in the concepts of Azure Data Factory. The certificate is issued by Brolly Academy and is widely accepted by companies and organizations. This certification can greatly improve your career prospects, further accelerating opportunities.

  • Mock Interviews in Azure data factory

    Our mock interviews are conducted by real-time hiring managers from the ADF industry. To design these mock interviews, we searched through over 100 job listings for ADF developers and highlighted the skills required to be successful in the field.

Market Trend in ADF

Azure Data Factory review

Nimi

Brolly Academy offers good and quality training for Azure Data Factory course in Hyderabad . The instructor showed real-life projects and provided live cases to help us understand better and it was excellent training. This institute is the best place to get good knowledge about Azure Data Factory course with a very experienced trainer, you'll definitely learn from the best. Thank you for delivering the best learning experience Brolly.

Devansh

The azure adf training was very valuable for me as I already had basic knowledge of the concepts. Brolly Academy provides a great learning environment in Azure data factory training online in Hyderabad and is a great place to get practical knowledge from experienced educators. I was happy with my experience at the brolly academy and the trainer took the time to address all of my queries. I am so grateful to the staff here for their amazing services, Brolly is the best azure data factory course provider in Hyderabad.

Nuthan

Brolly Academy is the best place to learn Azure Data Factory training in Hyderabad with a positive supportive environment. They have taught this adf course in a way that anyone can learn the concepts from scratch. The mentors have adequate exposure and experience and teach you in a way that matches the industrial standards.

Shalini

I am currently working in one of my dream companies. I have done Azure Data Factory training in Hyderabad at brolly academy and it gave me so much knowledge and experience. They helped me prepare for my interviews and that's how I could confidently perform well in interviews and score a job. Thank you Brolly Academy for the excellent assistance for Azure Data Factory placement in Hyderabad.

Ramchary

I am very happy to have taken Azure Data Factory training online from Brolly Academy. This course is designed in such a way that it gives you the right knowledge to be an ace professional. Brolly Academy has made my career as a developer much more interesting. I really enjoyed the training given by Brolly Academy and I can say without any doubt that this is the best place to learn and grow professionally.

Alina

I am very satisfied with the Azure Data Factory training in Hyderabad by Brolly Academy. The trainers are very friendly and knowledgeable, and the quality of the training is good. The Azure Data Factory course content is very nice, making it easy to revise. Thanks, Brolly Academy.

Azure Data Factory Certification Course

ADF Certification details

Placements


            Azure Data Factory (ADF) Benefits

            Skills developed on completion of the Azure Data Factory Course

            Azure Data Factory Course Prerequisites

            Job possibilities in Azure Data Factory

            Who can enroll for Azure Data Factory Course?

            Anybody can learn ADF, including working professionals. Some of the professionals who can learn ADF to upgrade their skills include:

            FAQ's

            Azure Data Factory is an SSIS-compatible cloud service for data integration and transformation. ADF can also run your existing SSIS packages on the Azure platform with full compatibility.

            It is a cloud-based ETL and data integration service that allows you to create scalable workflows for moving and transforming your data.

            Azure Data Factory is easy to learn since it does not require major coding skills.

            Our expert trainers will help you to learn the Azure data factory full course and clear all your doubts by providing one-on-one doubt clearing sessions with real-time practical examples.

            You don't need major coding experience to learn the Azure Data Factory course.
             

            The prerequisites to create an Azure Data Factory are an Azure subscription and an Azure storage account.

            • You can create an ADF instance by using the Azure portal UI.
            • People from a command-line background can install Azure PowerShell or the Azure CLI, write and execute those commands, and create and manage an Azure Data Factory.
            • .NET-background people can write code using the .NET SDK (C#); whenever they run the code, it creates an Azure Data Factory for them.
            • People can write code using Python and deploy it, and an Azure Data Factory gets created in their subscription.
            • REST – Microsoft exposes REST APIs; when you send an HTTP request with specific values in the request body, that REST API creates an Azure Data Factory.
            • ARM Template (Azure Resource Manager template, deployed for example with the Azure PowerShell Az module) – ARM templates are nothing but JSON files with key-value pairs.

            The Data Factory service can be used to design data pipelines that move data, and then schedule them to run at specific intervals. This means we can choose between a scheduled or one-time pipeline mode.

            The languages used with Azure Data Factory include .NET, PowerShell, Python, and REST.
             
            The Azure Data Factory certification course cost at Brolly Academy is very affordable, so that everyone can learn the technology with ease.
             
            Yes, at Brolly Academy you will be provided with the Azure Data Factory syllabus, and you can customise it as per your requirements.
             
            Our Azure Data Factory course consists of 45 days of rigorous training in ADF concepts. It may extend, depending on how you learn the course.
             
            Brolly Academy provides 3 different types of training – Azure Data Factory online training, Azure Data Factory training videos, and Azure Data Factory classroom training.
             
            Yes, Brolly Academy provides a free demo of Azure ADF training to all students.
             
            Yes, you will get placement assistance from Brolly Academy after you complete the Azure Data Factory course.
             
            Brolly Academy provides the Azure Data Factory course at a very affordable fee but does not provide it free of cost. To know more about the fee structure, please feel free to contact us at the number given on our website.
             

            Other Relevant Courses:

            Snowflake

            Azure Data Engineer

            Azure Devops


            Enroll for Free Demo Class

            *By filling in the form, you are giving us consent to receive emails from us regarding all updates.

            Azure Data Factory Upcoming Batch