
Sahana Angadi
Senior Data Engineer
Ph: +1-682-***-**** | Email: ad4bh4@r.postjobfree.com

PROFILE SUMMARY:

•8+ years of experience in ETL job design with IBM DataStage, covering data replication, defect analysis, maintenance, production support, and release management activities.

•Experience in Microsoft technologies and Azure Data Factory, including design, development, and implementation of database systems using Azure SQL Server and MS SQL Server.

•Good experience with Azure Data Factory, Azure Databricks, Python, and PySpark.

•Designed and developed data pipelines for data ingestion and transformation using Python and Spark.

•Expert in T-SQL development, writing complex queries that join multiple tables.

•Skilled in developing and maintaining stored procedures and triggers.

•Created automated workflows using triggers.

•Worked extensively in DataStage to build end-to-end ETL flows that streamlined high-volume processing, and performed tuning that yielded measurable cost savings for the client.

•Expertise in ETL design and development, job performance tuning, and implementation of large-scale enterprise data warehousing.

•Able to design complex ETL jobs in DataStage Designer.

•Strong understanding of relational database management systems (RDBMS); proficient in SQL on Oracle and SQL Server environments, using utilities such as SQL Developer.

•Good experience writing, managing, and optimizing existing DataStage jobs on UNIX and Windows.

•Experience with the Software Development Lifecycle (SDLC) and Agile/Waterfall methodologies.

CERTIFICATIONS:

•IBM Certified Solution Developer InfoSphere DataStage v8.0 - 2013

•IBM Certified Solution Developer InfoSphere DataStage v8.5 - 2014

TECHNICAL SKILLS:

ETL Tools: IBM DataStage, Informatica (basic)

Cloud Applications: Azure Data Factory, Azure Databricks

RDBMS: Oracle, SQL Server

Languages: Python (basic), Shell scripting, Java (basic)

Operating Systems: UNIX, Windows.

EDUCATION:

Bachelor of Engineering from Visvesvaraya Technological University (VTU) with 70%.

PROJECT SUMMARY:

Client: Leidos Oct 2023 - Present

Location: Reston, VA

Role: Senior Data Engineer

Environment: Azure Cloud, Azure SQL DB, Spark, Python.

Responsibilities:

•Currently leading a data engineering project at Leidos in the healthcare domain, using Agile methodologies for project management.

•Responsible for maintaining technical and functional specifications, defining data mappings, conversion rules, and technical implementations.

•Streamlining data processing workflows to support real-time analytics, enabling timely healthcare decision-making.

•Utilizing Python and SQL for developing complex data processing scripts and queries, ensuring data accuracy and efficiency.

•Extensively involved in analyzing the data for each load (one-time and daily) and working with the business group to modify or redefine transformation and data-fixing rules due to discrepancies in the data.

•Worked on ETL (Extract, Transform, Load) in ADF.

•Created linked services for source and destination data stores.

•Integrated data between on-premises databases and Azure SQL Server using Azure Data Factory.

•Created pipelines using activity dependencies in ADF.

•Carried out tests for performance tuning and improving throughput in the Azure environment.

•Used various ADF data flow transformations such as Source, Sort, Filter, Aggregate, Window, and Rank.

•Enhancing data management and analytics using Azure SQL Database, supporting large-scale healthcare data applications.

•Extensive use of derived columns with multiple functions to manipulate the data.

•Used the Copy activity and a self-hosted integration runtime to migrate on-premises data to the Azure cloud.

•Used schedule and event-based triggers to schedule pipeline runs.

•Implemented SCD types using Data Factory pipelines, along with more complex scenarios (see the PySpark sketch after this list).

•Used pre- and post-processing steps to manipulate data before or after it flows through the ADF pipeline.

•Used Lookup, Until, and ForEach activities in ADF pipelines.

•Created pipeline parameters and dataset parameters to pass values dynamically in Azure.

•Debugged pipelines using Data Flow debug sessions.

•Used the Web activity in ADF to call web endpoints or send mail.

•Wrote queries in Azure Databricks using PySpark.

•Establishing best practices for data storage, processing, and retrieval in healthcare applications, ensuring high performance.


•Ensuring compliance with healthcare data standards and regulations through regular data audits and quality checks.
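
For illustration, a minimal PySpark sketch of the SCD Type 2 pattern referenced above, as it might run in an Azure Databricks notebook; the table and column names (dw.dim_patient, staging.patient, address) are hypothetical, not taken from the project:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # supplied automatically in Databricks

# Hypothetical tables: current dimension rows and today's staged changes.
dim = spark.table("dw.dim_patient").filter("is_current = 1")
stg = spark.table("staging.patient")

# Rows whose tracked attribute changed since the current dimension version.
changed = (stg.alias("s")
           .join(dim.alias("d"), "patient_id")
           .where(F.col("s.address") != F.col("d.address"))
           .select("s.*"))

# SCD Type 2: close out the old version and append the new one.
expired = (dim.join(changed.select("patient_id"), "patient_id")
           .withColumn("is_current", F.lit(0))
           .withColumn("end_date", F.current_date()))

new_rows = (changed
            .withColumn("is_current", F.lit(1))
            .withColumn("start_date", F.current_date())
            .withColumn("end_date", F.lit(None).cast("date")))

# In practice this write would be a Delta MERGE; a plain append is shown for brevity.
(expired.unionByName(new_rows, allowMissingColumns=True)
        .write.mode("append").saveAsTable("dw.dim_patient_changes"))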

Client: Hotfix Squared, Inc Mar 2023 - Oct 2023

Location: Scottsdale, AZ

Role: Data Engineer

Environment: Azure Cloud, Azure SQL DB, Spark, Python.

Responsibilities:

•Worked to understand business processes and coordinated with the client on a weekly basis to gather specific user requirements.

•Created specifications for ETL migration processes, finalized requirements, and prepared the specification document.

•Worked extensively as the SME for Azure Data Factory and Informatica, filling gaps during migrations and finalizing the plan for upgrading the stack to Azure.

•Converted existing Informatica mappings and workflows to Azure Data Factory pipelines and data flows.

•Worked on ETL (Extract, Transform, Load) in ADF.

•Created linked services for source and destination data stores, and integrated data between on-premises databases and Azure SQL Server using Azure Data Factory.

•Retrieved data from REST APIs using ADF and created pipelines using activity dependencies (see the Python sketch after this list).

•Carried out tests for performance tuning and improving the throughput in the Azure environment.

•Handled multiple migration challenges, such as stored procedure calls within mappings and workarounds for unconnected lookups.

•Translated the use of variables in Informatica expressions into equivalent Azure Data Factory logic using locals.

•Utilized Python and T-SQL to develop complex data processing scripts and queries.

•Created automated workflows using triggers.

•Transformed data between Excel and servers using data flows.

•Created pipelines to copy data from source to Azure targets and implemented aggregation rules in data flows.

•Created linked services and datasets, and used Get Metadata, Lookup activities, and iterative loops as required in Azure.

•Created pipeline parameters and dataset parameters to pass values dynamically in Azure.
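
For illustration, a minimal Python sketch of the REST retrieval pattern mentioned above; in the project itself this was done through ADF, and the endpoint URL, pagination scheme, and field names here are hypothetical:

import requests

# Hypothetical paginated REST endpoint; the actual retrieval used ADF activities.
url = "https://api.example.com/v1/orders"
rows, page = [], 1
while True:
    resp = requests.get(url, params={"page": page}, timeout=30)
    resp.raise_for_status()
    batch = resp.json().get("results", [])
    if not batch:
        break
    rows.extend(batch)
    page += 1

print(f"Retrieved {len(rows)} records for staging")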

Client: GAP Oct 2016 - Jan 2018

Company: Infosys Ltd

Location: India

Role: Technology Analyst

Environment: DataStage 8.5, SQL DB, UNIX scripting, Control-M, Oracle.

Responsibilities:

•Understood the business logic provided in mappings and created data-extraction queries based on it.

•Mainly involved in resolving production issues in DataStage jobs.

•Developed parallel jobs using various development/debug stages (Peek, Head & Tail, Row Generator, Column Generator, Sample) and processing stages (Aggregator, Change Capture, Change Apply, Filter, Sort, Merge, Funnel, Remove Duplicates).

•Debugged, tested, and fixed the transformation logic applied in parallel jobs.

•Created UNIX shell scripts for database connectivity and for executing queries during parallel job execution.

•Created multiple shell scripts to send notification emails to end users on the success or failure of DataStage jobs (illustrated in the sketch after this list).

•Used DataStage Director to schedule and run jobs, test and debug components, and monitor performance statistics.

•Coordinated with other teams on data reconciliations.
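
The notification scripts themselves were written in shell; purely for illustration, the same success/failure email logic in Python, assuming a hypothetical SMTP relay and recipient addresses:

import smtplib
from email.message import EmailMessage

def notify(job_name: str, status: str) -> None:
    # Hypothetical relay and addresses; the real scripts ran as UNIX shell steps.
    msg = EmailMessage()
    msg["Subject"] = f"DataStage job {job_name}: {status}"
    msg["From"] = "etl-alerts@example.com"
    msg["To"] = "data-team@example.com"
    msg.set_content(f"Job {job_name} finished with status {status}.")
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)

notify("DailyLoad_Customers", "SUCCESS")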

Client: Target Aug 2014 - Sep 2016

Company: Infosys Ltd

Location: India

Role: Senior Software Engineer

Environment: DataStage 8.5, SQL DB, UNIX scripting, Control-M, Oracle.

Responsibilities:

•The Target.com eCommerce Data Warehouse (eDW) was built as a repository for inbound files and data used to create financial reports, serving the operational, financial, and analytical needs of the Target.com business.

•Understood the business logic provided in mappings and created data-extraction queries based on it.

•Extensively worked on various parallel job stages, including the Oracle Connector, Teradata MLoad and Enterprise stages, Transformer, Join, Lookup, Merge, Sort, Filter, Aggregator, Copy, Remove Duplicates, Funnel, Change Capture, Data Set, Sequential File, and Sequence job stages.

•Separated loading jobs from extraction and transformation jobs, so that extraction and transformation jobs can be scheduled before the load window starts.

•Actively worked on Monitoring the production jobs and fixing the incident tickets that are raised.

•Developed the system testing strategy and helped set up the System Testing (ST) environment for DataStage.

•Created database stored procedures and UNIX scripts to load data into target tables; extracted data from target tables using Sqoop and processed the high-volume data per business requirements.

•Created high-level design documents for new implementations.

•Created UNIX scripts (ksh) to archive source files and delete files from the source path.

•Extensively used processing stages (Join, Funnel, Filter, Aggregator, Sort, Remove Duplicates, Copy, Transformer, Change Data Capture, Lookup), develop/debug stages (Row Generator, Peek), and file stages (Data Set, Sequential File, DB2).

•Created new projects in DataStage Administrator, adding user-defined variables at the project level, assigning specific roles to users, and making configurations.

•Developed SQL queries to validate data cleansed, transformed, and loaded using DataStage.

•Involved in the testing of the various jobs developed and maintaining the test log.

•Created UNIX scripts (ksh) for pre-load and post-load auditing of source files.

Client: Sears - Kmart Dec 2011 - Apr 2014

Company: Infosys Ltd

Location: India

Role: Software Engineer

Environment: DataStage 8.5, SQL DB, UNIX scripting, Control-M, Toad, SVN, Jira

Responsibilities:

•Design and Development of DataStage Jobs, ETL Queries for populating foundation tables.

•Created low-level and high-level design documents and unit test result documents.

•Prepared job schedules to be executed in Control-M.

•Peer-reviewed high-level and low-level design documents, DataStage jobs, and extraction queries.

•Unit testing and Bug Fixing, Reviewing and resolving Issues.

•Creation of the Implementation Plan for deployment.

•Uploaded UTPs (unit test plans) in QC.

•Ensuring quality and performance of the designed jobs.

•Fixing application bugs before the final release.

•Designed and implemented new requirements; created routines (before/after subroutines, transform functions) used across the project.

•Experienced with PX file stages, including the Complex Flat File, Data Set, Lookup File Set, and Sequential File stages.

•Experienced in using SQL*Loader and the import utility in TOAD to populate tables in the data warehouse.


