Nivas Reddy
Email: **************@*****.***
Phone: 518-***-****
Azure Data Engineer
Professional Summary:
Azure Data Engineer with 8+ years of experience building data-intensive applications and tackling challenging architectural and scalability problems.
Experience building data pipelines with Azure Data Factory and Azure Databricks, loading data into Azure Data Lake, Azure SQL Database, and Azure SQL Data Warehouse, and controlling and granting database access.
Implemented Copy activities and custom Azure Data Factory pipeline activities for in-cloud ETL processing.
Strong experience working with Azure Cloud, Azure DevOps, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure Analysis Services, Azure Cosmos DB (NoSQL), Azure HDInsight big data technologies (Hadoop and Apache Spark), and Databricks.
Designed and implemented database solutions in Azure SQL Data Warehouse.
Hands-on experience with Azure Data Factory data flow transformations.
Hands-on experience with Azure Data Factory control flow activities such as ForEach, Lookup, Until, Web, Wait, and If Condition.
Experience implementing audit notebooks.
Architected and implemented ETL and data movement solutions using Azure Data Factory and SSIS; created and ran SSIS packages in ADF V2 on the Azure-SSIS Integration Runtime.
Big data: Hadoop (MapReduce and Hive) and Spark (SQL, Streaming).
Strong understanding of Spark architecture and the Spark ecosystem, including Spark Core, Spark SQL, Spark Streaming, DataFrames, driver and worker nodes, stages, executors, and tasks.
Good understanding of Hadoop and YARN architecture and the various Hadoop daemons, such as JobTracker, TaskTracker, NameNode, DataNode, and the Resource/Cluster Manager, as well as Kafka (distributed stream processing).
Experience with DTS migration and metadata management: migrating DTS packages to SSIS with the Package Migration Wizard, Bulk Insert, and storage management.
Experienced working with different data formats: CSV, JSON, and Parquet.
Strong in data warehousing concepts and star schema and snowflake schema methodologies, with a solid understanding of business processes and requirements.
Expert in building hierarchical and analytical SQL queries that support reporting.
Analyzed source data and coordinated with the data warehouse team in developing the relational model. Designed and developed logical and physical models to store data retrieved from other sources, including legacy systems.
Experienced in developing data mappings, performance tuning, and identifying bottlenecks in sources, mappings, targets, and sessions.
Expert in creating SQL objects such as tables, complex stored procedures, triggers, views, indexes, and user-defined functions to facilitate efficient data manipulation and consistent data storage in Azure Synapse.
Experience in database design, development, data modeling, administration, ETL packages using DTS/SSIS, performance tuning, and stored procedures for database applications using MS SQL Server 2019/2017. Proficient in implementing all transformations available in the SSIS toolbox.
Experience with MS SQL Server Integration Services (SSIS) and strong T-SQL skills. Designed and developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns (see the sketch at the end of this summary).
Developed custom alerts using Azure Data Factory, SQL DB, and Logic Apps. Developed Databricks ETL pipelines using notebooks, Spark DataFrames, Spark SQL, and Python scripting.
Developed complex SQL queries using stored procedures, common table expressions (CTEs), and temporary tables to support Power BI reports.
Experienced in building data integration, workflow, and extract, transform, load (ETL) solutions for data warehousing using tools such as the SSIS Import and Export Wizard, SSIS packages, and ADF.
Programmed queries using Python and SQL scripts to pull data from different databases.
Experience handling heterogeneous data sources, including Oracle and IBM DB2 databases and XML files, using SSIS.
Experience in RDBMS database design, data warehousing, performance tuning and optimization, client requirement analysis, logical design, development, testing, deployment, and support.
Extensive hands-on working knowledge in various IT domains, including healthcare, finance, and insurance, as a SQL/MSBI programmer.
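A minimal, illustrative PySpark sketch of the multi-format extraction and aggregation work described above; all file paths, column names, and the usage metric are hypothetical, not taken from any specific engagement:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("usage-patterns").getOrCreate()

    # Extract from multiple file formats (paths are hypothetical).
    csv_df = spark.read.option("header", True).csv("/mnt/raw/usage_csv/")
    json_df = spark.read.json("/mnt/raw/usage_json/")
    parquet_df = spark.read.parquet("/mnt/raw/usage_parquet/")

    # Conform the sources to a shared schema and union them.
    cols = ["customer_id", "event_type", "event_ts"]
    events = (csv_df.select(cols)
              .unionByName(json_df.select(cols))
              .unionByName(parquet_df.select(cols)))

    # Aggregate daily event counts per customer to surface usage patterns.
    usage = (events
             .withColumn("event_date", F.to_date("event_ts"))
             .groupBy("customer_id", "event_date")
             .agg(F.count("*").alias("events_per_day")))

    usage.write.mode("overwrite").parquet("/mnt/curated/usage_patterns/")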
Technical Skills:
Languages/Tools: T-SQL, PL/SQL, Python
Cloud Services: Azure services – ADF (Azure Data Factory), Blob Storage, Data Lake, Synapse, Databricks
ETL Tools: DTS (Data Transformation Services), Informatica PowerCenter 10.x/9.x/8.x, SSIS, SSRS
Databases: MS SQL Server, Oracle 8i/9i/10g, IBM DB2, Azure SQL Database
Scripting: Shell scripting, Python, Spark
Data Modeling Tools: Microsoft Visio, Erwin 9.3/7.5
Data Modeling: ER (OLTP) and dimensional (star and snowflake schemas)
Tools & Utilities: Excel, TOAD, SQL Developer, SQL*Loader, PuTTY
Other Tools: Notepad++, Toad, SQL Navigator, Power BI, Teradata SQL Assistant, Red Gate, SonarQube
Professional Experience:
Homesite Insurance, Boston, MA May 2022 to Present
Azure Data Engineer
Responsibilities:
Analyze, design, and build modern data solutions using Azure PaaS services to support data visualization. Understand the current production state of the application and determine the impact of new implementations on existing business processes.
Extract, transform, and load data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics). Ingest data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and process it in Azure Databricks.
Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data from different sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, and to write data back in the reverse direction.
Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
Responsible for estimating cluster size and for monitoring and troubleshooting Spark Databricks clusters.
Experienced in performance tuning of Spark applications: setting the right batch interval, choosing the correct level of parallelism, and tuning memory.
Hands-on experience developing SQL scripts for automation. Created builds and releases for multiple projects (modules) in a production environment using Visual Studio Team Services (VSTS).
Worked on migrating large amounts of data from OLTP to OLAP using ETL packages.
Worked with Azure Data Factory to load Azure SQL Data Warehouse for analytics and reporting.
Developed PySpark and Python regular-expression (regex) logic in Databricks (a minimal sketch follows these responsibilities).
Integrated data from multiple internal sources into Azure SQL Server using Azure Data Factory and Azure Automation.
Used Azure SQL databases, Data Lake, and SSIS to capture features for training risk attribute models used to score and identify high-risk deals.
Moved large volumes of data from Blob Storage to Azure SQL databases using ADF.
Architected and implemented ETL and data movement solutions with Azure Data Factory and SSIS, creating and running SSIS packages in ADF V2 on the Azure-SSIS Integration Runtime.
Integrated custom visuals based on business requirements using Power BI Desktop.
Created DAX queries to generate computed columns in Power BI.
Wrote calculated columns and measures in Power BI Desktop to demonstrate strong data analysis techniques.
Expert in creating simple, parameterized, and complex reports involving sub-reports, matrix/tabular reports, charts, and graphs using SSRS in Business Intelligence Development Studio (BIDS).
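A minimal sketch of the kind of PySpark regex cleansing done in Databricks, as referenced above; the input path, column names, and pattern are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("regex-cleanse").getOrCreate()

    df = spark.read.parquet("/mnt/raw/policies/")  # hypothetical input

    # Strip non-digits with regexp_replace, then flag malformed values
    # with rlike; the 10-digit phone rule is illustrative only.
    clean = (df
             .withColumn("phone_digits",
                         F.regexp_replace("phone", r"[^0-9]", ""))
             .withColumn("phone_valid",
                         F.col("phone_digits").rlike(r"^[0-9]{10}$")))

    clean.write.mode("overwrite").parquet("/mnt/curated/policies/")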
Environment: Facets, MS SQL Server 2014/2016, SSMS, Azure Data Factory (ADF), Azure Blob, Data Lake Gen2, T-SQL, MS Excel, Azure Databricks, Azure SQL Database, Azure SQL Data Warehouse, Scala, Python, Spark SQL, MSBI (SSIS, SSAS, SSRS), data visualization, data migration, SQL Server programming, Power BI, analytic problem-solving.
Travelport, Englewood, CO August 2019 to April 2022
Azure Data Engineer
Responsibilities:
Created Dynamic Linked services and Datasets to reuse them in different pipelines by passing parameter values at runtime.
Extensively worked with Azure Storage, Azure Data Lake, Azure File Share, and Azure Blob Storage to store and retrieve data files.
Used Azure Key Vault to securely store secrets such as connection strings and database passwords, and referenced them when configuring linked services.
Created metadata-driven pipelines to dynamically load flat files and database tables from source to destination.
Migrated an on-premises traditional warehouse to a Synapse warehouse.
Developed complex SQL queries using stored procedures, common table expressions (CTEs), and temporary tables to support Power BI and SSRS reports.
Used Microsoft Power BI to build and maintain reports and dashboards, and used Power BI Desktop to develop data analyses across multiple data sources and visualize the reports.
Experience migrating SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; controlling and granting database access; and migrating on-premises databases to Azure Data Lake Store using Azure Data Factory.
Performed data analytics and engineering on multiple Azure platforms, such as Azure SQL, Azure SQL Data Warehouse, Azure Data Factory, and Azure Storage accounts, for source stream extraction, cleansing, consumption, and publishing across multiple user bases.
Designed and developed SQL objects and wrote complex SQL queries involving data analysis, data modeling, data profiling, T-SQL, and advanced functions.
Experience with Azure transformation projects and Azure architecture decision-making. Architected and implemented ETL and data movement solutions using Azure Data Factory (ADF) and SSIS.
Architect & implement medium to large scale BI solutions on Azure using Azure Data Platform services (Azure Data Lake, Data Factory, Data Lake Analytics, Stream Analytics, and Azure SQL DW.
Regularly wrote Python routines to log in to the database and fetch data (a minimal example follows these responsibilities).
Analyzed source system data and designed the star schema for the data warehouse to be developed.
Developed fact tables, dimension tables, and cubes for the OLAP warehouse using Analysis Services.
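A minimal example of the kind of Python routine used to log in to a database and fetch data, as referenced above; the driver, environment variable names, and table are hypothetical (in the actual pipelines, secrets came from Azure Key Vault):

    import os
    import pyodbc

    # Hypothetical connection details; secrets are shown as environment
    # variables purely for illustration.
    CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={os.environ['SQL_SERVER']};"
        f"DATABASE={os.environ['SQL_DATABASE']};"
        f"UID={os.environ['SQL_USER']};"
        f"PWD={os.environ['SQL_PASSWORD']};"
    )

    def fetch_rows(query: str) -> list:
        """Connect, run the query, and return all result rows."""
        conn = pyodbc.connect(CONN_STR)
        try:
            cursor = conn.cursor()
            cursor.execute(query)
            return cursor.fetchall()
        finally:
            conn.close()

    # Example call; the table name is illustrative only.
    rows = fetch_rows("SELECT TOP 10 * FROM dbo.Bookings")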
Environment: Azure SQL Server, Azure Synapse, T-SQL, SQL Server Integration Services (SSIS), SQL Server Reporting Services (SSRS), Power BI, Git, Azure Data Lake, Data Factory, Azure Blob, Azure Storage Explorer, PySpark, PowerShell, JIRA, DevOps.
Chervon Corporation, Santa Rosa, NM February 2017 to July 2019
Azure Data Engineer
Responsibilities:
Involved in the complete Software Development Life Cycle (SDLC) process by analyzing business requirements and understanding the functional workflow of information from the source system to the destination system.
Advanced extensible reporting skills using SQL Server Data Tools 2012 (SSRS).
Designed and created Report templates based on the financial data.
Developed various types of complex reports, such as drill-down and drill-through reports.
Involved in designing Parameterized Reports for generating Ad hoc reports as per client requirements.
Coordinated with the front-end team to implement logic in stored procedures and functions. Experienced in writing complex SQL queries, stored procedures, triggers, views, joins, constraints, DDL, DML, and user-defined functions to implement business logic, and created clustered and non-clustered indexes.
Developed an SCRA website.
Developed the Questionable, Error Response, Reports, and Admin modules for the SCRA website. Developed a secure web application, including a login form that authorizes and authenticates users by retrieving their login names and passwords from the database using ADO.NET.
Developed user controls according to the requirements.
Used ADO.NET connection-oriented and disconnected architectures for database connectivity. Extensively used ADO.NET objects to populate DataSet, DataGrid, and Repeater controls for displaying and manipulating records.
Created a simple, aesthetic, and consistent User Interface with shortcuts, menus, forms, and controls in VB.NET.
Dealt with complex design and maintenance issues for SQL by implementing atomicity, consistency, isolation, and durability.
Used DataGrids, DataSets, DataViews, and DataAdapters to extract data from the backend. Worked extensively on web forms and data-binding controls such as GridView, DataList, and drop-down boxes, and mapped page fields to database fields.
Used Web Forms and server controls in ASP.NET.
Created Dynamic Controls on web pages.
Developed object interfaces for relational database access from the presentation layer.
Created session objects to maintain session state between ASP.NET pages.
Successfully developed and deployed .NET applications following all coding standards.
Validated the overall functionality of the system.
The system was deployed as an ASP.NET web application using VB.NET.
Environment: T-SQL, SQL Server 2012, Microsoft SQL Server Management Studio, SQL Server Integration Services (SSIS), SSRS, SSAS, VB.NET 4.5, and ASP.NET 4.5.
Hudda Infotech Private Limited, Hyderabad, India June 2014 to November 2016
Hadoop Developer
Responsibilities:
Worked with Business Analysts in gathering requirements and translating them to technical specifications.
Involved in the analysis and profiling of source data, creating prototypes and sample reports to help with requirements and design.
Involved in performance tuning of code using execution plans and SQL Profiler, and added indexes to improve performance on tables.
Generated reports using global variables, expressions, and functions in SSRS 2012. Designed and implemented a variety of SSRS reports, such as parameterized, drill-down, ad hoc, and sub-reports, using Report Designer and Report Builder based on the requirements.
Designed and developed efficient SSIS packages for processing fact and dimension tables with complex transformations, including Fuzzy Lookup, Lookup, Merge, Merge Join, Script Component, and Slowly Changing Dimension.
Extracted large volumes of data from various data sources and loaded it into target data sources, performing different kinds of transformations using SQL Server Integration Services (SSIS).
Created SSIS packages for uploading files in different formats (Excel, Access, DBF, flat files) and data from SQL Server databases into the SQL Server data warehouse using SQL Server Integration Services (SSIS).
Used the Script Task to write custom code in VB.NET. Designed the high-level ETL architecture for overall data transfer from OLTP to OLAP with the help of SSIS.
Used SSIS packages to load data from different formats such as CSV, XML, flat files, and Excel.
Environment: MS SQL Server 2012, Team Foundation Server, Microsoft SharePoint Server, Crystal Reports, SSIS (SQL Server Integration Services), SSRS (SQL Server Reporting Services), Visual Studio 2005, Windows XP, VB.NET.