
Data Engineer SQL Server

Location:
Jersey City, NJ
Posted:
June 09, 2025


Resume:

Baskaran Jaganathan

Jersey City, New Jersey ***05 • +1-201-***-**** • ************@*****.*** • LinkedIn: https://www.linkedin.com/in/baskaran-jaganathan-09940746/

Professional Summary

14+ years of progressive development experience spanning requirement analysis, design, development, testing, infrastructure migration, re-engineering, and release coordination for data warehousing applications, delivered under both Waterfall and Agile methodologies in the pharma and banking domains

Strong experience in data ingestion, extraction from different data stacks, and data modeling using cloud tools: Snowflake, DBT (Data Build Tool), IICS, Databricks, Python, PySpark, AWS (IAM, SNS, SQS, S3), Azure Storage, and Tableau

Expertise in data migration projects from legacy systems, such as DataStage with Netezza to DBT with Snowflake, and Informatica with Oracle to Azure Databricks with PySpark

Hands-on experience creating Python programs with file-handling packages such as pandas

Expertise in design and implementation of complex business requirements involving slowly changing dimensions (SCD Types I & II), incremental data loading, and data partitioning using Informatica PowerCenter

Worked on database object creation (tables, views, stored procedures, functions, triggers) in relational databases (Oracle, MS SQL Server, Netezza, Teradata)

Hands-on experience creating ETL mappings with transformations such as Lookup, Router, Aggregator, Sorter, Filter, Update Strategy, Normalizer, Sequence Generator, Joiner, and HTTP transformations, as well as reusable mapplets.

Experience with Autosys, Control-M, and Tidal scheduling tools, as well as failure reporting, audit trails, backup and purge strategies, and performance testing

Data integration with SAP ECC and SAP APO modules using SAP Transports and PowerCenter, loading into data marts and data warehouses

Loaded data into Salesforce Veeva CRM objects with data quality checks

Skills

Databases: Oracle, MS SQL Server, Netezza, Teradata

Languages: Python, PERL

Cloud Applications: Snowflake, Data Build Tool (DBT), AWS, Azure

Scheduling Tools: Control-M, Airflow, Autosys and Tidal

Scripting: Unix Shell Scripting

Big Data Technologies: Hadoop, Spark, Hive

Testing Tools: ALM and QTEST

Other Tools: JIRA, Remedy (SSD), GitHub, GitLab, ServiceNow

Work History

Data Engineer, 04/2022 to Current

Dotcom Team LLC/USAA – San Antonio, TX

Migrated SQL code and historical data from Netezza, Informatica, DataStage, and Hive into a Snowflake database using AWS S3 storage and DBT (Data Build Tool)

The project handles data extraction and aggregate reporting of credit report information and credit risk details

Ingested data from the Experian application into the Snowflake data stack model

Created and modified shell script and Python code for QC validation before loading into stage tables (a minimal illustrative sketch follows this role's bullets)

Performed MFP registration and data loads into Data Layer 0, then created structured views

Worked on the data integration and report layers to aggregate data for the monthly load

Created new data models and migrated existing data models from DataStage into DBT (Data Build Tool)

Handled Snowflake objects (tables, views, shares, Time Travel, and zero-copy clones)

Migrated code from Dev to QA and from QA to Prod through the GitLab CI/CD pipeline

Created incidents, service requests, and change requests using ServiceNow for test and production environments
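
For illustration only, a minimal Python sketch of the kind of pre-load QC validation described in the bullets above; the file layout, column names, and checks are hypothetical assumptions, not details taken from this engagement.

import sys
import pandas as pd

EXPECTED_COLUMNS = ["account_id", "credit_score", "report_date"]  # hypothetical layout

def qc_validate(path: str) -> bool:
    """Run basic QC checks on an extract file before it is loaded to a stage table."""
    df = pd.read_csv(path)
    # Check 1: file must not be empty
    if df.empty:
        print(f"QC FAIL: {path} contains no rows")
        return False
    # Check 2: all expected columns must be present
    missing = [c for c in EXPECTED_COLUMNS if c not in df.columns]
    if missing:
        print(f"QC FAIL: missing columns {missing}")
        return False
    # Check 3: key column must not contain nulls or duplicates
    if df["account_id"].isna().any() or df["account_id"].duplicated().any():
        print("QC FAIL: null or duplicate account_id values")
        return False
    print(f"QC PASS: {len(df)} rows ready for stage load")
    return True

if __name__ == "__main__":
    sys.exit(0 if qc_validate(sys.argv[1]) else 1)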

Data Engineer, 02/2021 to 03/2022

Dotcom Team LLC/Abbott – Chicago, IL

Created Informatica mappings and workflows to extract historical data from Oracle and Teradata databases into CSV files

Worked on a code migration project to transfer the workflow process from Oracle into Azure Databricks with ADLS data lake storage

Designed and implemented scalable data pipelines using Azure Databricks for ingesting, transforming, and analyzing large volumes of structured and semi-structured data.

Created Python programs to convert SQL logic into pandas operations for file processing

Developed PySpark notebooks for complex data processing, cleansing, and business logic implementation within the Databricks workspace.
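
As a hedged illustration of the PySpark cleansing work described above (not the actual Abbott pipeline), the sketch below reads a raw extract, applies simple cleansing rules, and writes a curated output; all paths and column names are assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cleanse_example").getOrCreate()

# Read a hypothetical semi-structured extract from the data lake (path is an assumption)
raw = spark.read.json("/mnt/datalake/raw/orders/")

cleansed = (
    raw
    .dropDuplicates(["order_id"])                        # remove duplicate records
    .withColumn("order_date", F.to_date("order_date"))   # normalize date strings
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())                 # drop rows failing a basic quality rule
)

# Write the cleansed data back to the curated zone as Parquet for downstream use
cleansed.write.mode("overwrite").parquet("/mnt/datalake/curated/orders/")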

Data Engineer, 06/2019 to 01/2021

Dotcom Team LLC/Fidelity Investments – Raleigh, NC

Worked on the PI Engineering team, providing PASView application support and handling data loading issues

Created and modified data pipelines using Informatica (IICS) mappings and workflows and Oracle PL/SQL objects (SQL performance tuning, views, stored procedures, functions) based on business requirements

Created Snowflake objects and loaded data using bulk loading and Snowpipe; managed views and shares (see the bulk-load sketch after this role's bullets)

Handled CSV files using Unix shell scripting and Python programs with the pandas package

Followed Agile methodologies (Jira) to manage tasks and business requirements; changes were handled through ServiceNow
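
A minimal sketch of a Snowflake bulk load via the Python connector, in the spirit of the bulk-load bullet above; the connection parameters, stage, table, and file names are placeholders, not details from this role.

import snowflake.connector

# Connection details are placeholders; real values would come from a secrets manager
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="********",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGE",
)

cur = conn.cursor()
try:
    # Upload the local CSV to the table's internal stage, then bulk-copy it into the table
    cur.execute("PUT file:///tmp/daily_extract.csv @%DAILY_EXTRACT AUTO_COMPRESS=TRUE")
    cur.execute(
        "COPY INTO DAILY_EXTRACT "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1) "
        "ON_ERROR = 'ABORT_STATEMENT'"
    )
finally:
    cur.close()
    conn.close()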

ETL/Database Developer, 08/2014 to 05/2019

HCL America Inc/Merck – Branchburg, NJ

Responsible for development, support, and maintenance of ETL processes using Informatica PowerCenter from SAP ECC and SAP Basis into an Oracle database, and for implementing business logic to generate sales reports

Worked on changes and handled business queries on Argus Safety (a pharmacovigilance tool)

Developed various reusable mapplets and transformations; responsible for validating and fine-tuning the ETL logic coded into mappings and for creating reports in Tableau.

Extensively worked on performance tuning at all levels of the data warehouse: used lookup caches, gathered statistics, dropped indexes and re-created them after loading data to targets, and increased the commit interval.

Created Autosys JIL (Job Information Language) definitions to schedule jobs through the Autosys scheduler.

Worked extensively on infrastructure migration/upgrade projects, such as Informatica 8.x to 9.x and Oracle 12c, which involved extensive testing of 2,000 jobs

Created and updated change request, requirement specification, design specification, and configuration specification documents, strictly following the SDLC process to adhere to GDP (Good Documentation Practice) using the Remedy tool

ETL/Database Developer, 04/2010 to 07/2014

HCL Technologies/Pfizer – Chennai, India

Worked on the Quovadx Cloverleaf integration tool to integrate data from different clinical trial applications, such as PIMS (Phase 1 Management System) and LIMS (Laboratory Information Management System), backed by an Oracle/PL-SQL database

Worked on Informatica mapping, workflow, and Oracle PL/SQL development based on business requirements, and maintained user queries in EDC (an Electronic Data Capture application on an Oracle database)

Prepared change request, requirement specification, design specification, and configuration specification documents, strictly following the SDLC process to adhere to GDP (Good Documentation Practice)

Provided application support and handled business user questions on the PIMS and LIMS applications

Education

Diploma in Information Technology, 06/2004 - 06/2007

NTTF (Nettur Technical Training Foundation), Chennai, India

Bachelor of Computer Application, 05/2006 - 05/2008

Alagappa University - Karaikudi, India

Certifications

1. SnowPro Core Certification, issued March 2021

2. AWS Cloud Practitioner, issued January 2022


