
Data Integration ETL Developer

Location:
Manhattan, NY, 10007
Posted:
April 30, 2025


Resume:

Ramesh

Senior Oracle/ETL Developer

Email: ***********@*****.*** Ph: +1-415-***-****

Professional Summary:

•Highly skilled professional with 10 years of hands-on experience in analyzing, architecting, developing, and designing solutions (ETL, Data Warehouse, Data Mart, ODS) in OLTP technologies using star and snowflake dimensional modeling.

•Extensive experience in Requirement Analysis, Design, Development, Implementation, Testing, and Support of Data Warehousing and Data Integration solutions using Informatica Intelligent Cloud Services, Informatica PowerCenter, and IDQ.

•Developed integrations using Informatica Cloud Data Integration (IICS CDI service) for Google Analytics.

•Developed Informatica Cloud mappings, data tasks, taskflows, data replication tasks, and PowerCenter tasks.

•Worked with APIs/web services using Application Integration for the Rockwell supplier through a REST endpoint connector.

•Designed, Developed, and Implemented ETL processes using IICS Data integration.

•Created IICS connections using various cloud connectors in IICS administrator.

•Proficient in developing SQL against various databases such as Oracle, DB2, Hive, SQL Server, and Netezza.

•Built highly performant, scalable, enterprise-grade ETL processes to populate Ab Initio, Hadoop, and Oracle based analytics data warehouses, working in Agile teams on Windows and Linux development environments.

•Brought together data and created views of data sets stored in the Ab Initio based big data platform.

•Collaborated with business analysts and stakeholders to understand requirements and translate them into efficient integration processes.

•Architected solutions to enable real-time data synchronization and process automation, improving operational efficiency and data accuracy.

•Handled end-to-end Development and Implementation of Product (Item) Master Data Mart by taking into consideration business cases, objectives, optimization, and requirements.

•Managed code reviews and ensured that all solutions and processes are aligned to architectural/requirement specifications and standards.

•Hands-on exposure with AWS and Snowflake technologies.

•Developed mappings that perform Extraction, Transformation, and Load of source data into target schemas using various PowerCenter transformations like Source Qualifier, Aggregator, Filter, Router, Sequence Generator, Lookup, Joiner, Expression, Normalizer, and Update Strategy to meet business logic in the mappings.

•Provided training and coaching to new team members on Agile best practices and facilitated sprint planning, daily standups, demos and sprint review meetings

•Utilized tools such as AWS Glue, Azure Data Factory, and dbt to automate data extraction, transformation, and loading processes, improving data pipeline efficiency and reducing load times.

•Designed and developed data quality mappings and workflows in IDQ to validate, cleanse, and standardize customer data, improving data quality (a brief SQL sketch of this type of rule follows this summary).

•Conducted data profiling and analysis using IDQ to identify and resolve data quality issues, reducing data errors.

•Collaborated with cross-functional teams to define data quality rules and standards, ensuring compliance with organizational data governance policies.

•Automated data quality processes using IDQ, reducing manual effort and improving turnaround time for data validation.

•Extensively used cloud transformations - Aggregator, Expression, Filter, Joiner, Lookup (connected and unconnected), Rank, Router, Sequence Generator, Sorter, Update Strategy, Union Transformations

•Experience with TWS, Cisco Tidal Enterprise Scheduler, and Control-M schedulers.

•Interacted with business users, cross-functional teams, SMEs, and the BI team on a day-to-day basis regarding critical timelines, requirements, and system issues.

•Managed the team, assigned tasks, reviewed and validated code, and helped team members fix critical issues in testing and development.
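
A minimal sketch of the kind of standardization and validation logic described above, expressed as plain SQL; the actual rules were built as IDQ mappings, and the table and column names here are hypothetical placeholders.

    SELECT customer_id,
           UPPER(TRIM(customer_name))                           AS customer_name,
           REGEXP_REPLACE(phone_number, '[^0-9]', '')           AS phone_number,  -- keep digits only
           CASE WHEN REGEXP_LIKE(email, '^[^@]+@[^@]+\.[^@]+$')
                THEN LOWER(email)
                ELSE NULL
           END                                                  AS email          -- null out malformed addresses
    FROM   stg_customer;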

Education:

•Bachelor’s in computer science, JNTU Anantapur, India in 2012.

•Master’s degree in computer science, Campbellsville University, Kentucky in 2016.

TECHNICAL SKILLS:

ETL Tools: Informatica PowerCenter 9.6, 10.2, 10.5.5; Informatica Intelligent Cloud Services (IICS).

DB Query Languages: Oracle 11g, Oracle 12c, Oracle 19c, Oracle 21c and Teradata

Scheduling Tool: Control-M, AutoSys.

Databases: Oracle 11g, Oracle 21C.

Operating Systems: Windows 11, Linux

Other Tools: Windows, GCP, AWS, Snowflake, Toad, Oracle SQL Developer, JIRA.

Professional Experience:

Client: State of New York, NY Nov 2022 – Present

Role: ETL Developer / Oracle Developer

Responsibilities:

•Developed ETL programs using Informatica to implement business requirements.

•Communicated with business customers to discuss the issues and requirements.

•Installed, configured, and maintained Informatica PowerCenter, Informatica Cloud, and other Informatica suite products, ensuring optimal performance and availability for data integration needs.

•Monitored Informatica server health, job performance, and resource utilization, implementing performance tuning techniques to optimize job execution and minimize latency.

•Created shell scripts to fine tune the ETL flow of the Informatica workflows.

•Worked with dimensional data modeling concepts like star schema modeling, snowflake modeling, fact and dimension tables, and physical and logical data modeling.

•Extensively used Oracle XML SQL functions (EXTRACT, EXISTSNODE, EXTRACTVALUE, XMLSEQUENCE, APPENDCHILDXML) to generate, insert, and manipulate XML files in Oracle 11g XML DB.

•Developed a PL/SQL package to provide an interface for the SAP-ONE payment processing system.

•Designed the Contractor Management Module for the Demand Side Management Application.

•Configured the TimesTen cache groups and assigned them to cache grids for real-time access to data at the application tier.

•Built and managed ETL pipelines using AWS Glue to process and transform data from S3, RDS, and DynamoDB.

• Created a centralized data lake on AWS S3 and implemented ETL processes using Glue and Lambda for data transformation.

•Designed serverless ETL workflows using AWS Lambda, Step Functions, and Glue, reducing operational overhead.

•Utilized AWS Kinesis and Glue for real-time ETL processing, enabling instant data insights.

•Cloud Data Warehousing: Transformed and loaded data into AWS Redshift using ETL tools, enabling efficient querying and analytics.

•Responsible for SQL/PLSQL package tuning and optimization of the already existing code.

•Translated business requirements into code changes (existing procedures/triggers).

•Conducted Performance Review of Oracle Database for business-critical applications with Senior Leadership team

•Generated Statspack/AWR reports from Oracle 9i/10g databases and analyzed the reports for Oracle wait events, time-consuming SQL queries, tablespace growth, and database growth.

•Involved in SQL query tuning and provided tuning recommendations for ERP jobs and time/CPU-intensive queries.

•Used EXPLAIN PLAN, Oracle hints, and new indexes to improve the performance of SQL statements.

•Minimized space-related errors in Oracle databases by conducting monthly reviews with senior management and addressing space issues proactively.

•Designed and implemented custom KPIs and metrics using Qlik expressions, Set Analysis, and advanced scripting.

•Developed high-performance SQL queries and optimized existing queries to improve processing time and database performance.

•Designed and developed interactive dashboards and reports using OBIEE Answers and Dashboards.

•Created ad-hoc and custom reports to meet business requirements for various stakeholders.

•Implemented drill-down, drill-through, and hierarchical reporting for enhanced data analysis.

•Developed key performance indicators (KPIs) and visualizations to support decision-making.

•Implemented Dimension (SCD Type 1, SCD Type 2, CDC, and Incremental Load) and Fact load processes.

•Designed and implemented Star Schema data models for enterprise data warehouses, enabling efficient querying and reporting for business intelligence applications.

•Developed fact and dimension tables in a Star Schema structure, optimizing data storage and retrieval for faster execution of analytical queries.

•Database Design & Development: Oracle PL/SQL, triggers, stored procedures, packages, query optimization

•Data Modeling: Star and Snowflake Schemas, Dimensional Modeling, Data Marts, and Data Warehousing

•Utilized TOAD and PL/SQL tools during development of the application.

•Enforced data governance practices within Databricks, implementing role-based access controls (RBAC), encryption, and data masking to protect sensitive data and ensure regulatory compliance.

•Created reference tables, applications, and workflows and deployed them to the Data Integration Service for further execution of the workflows.

•Reviewed and analyzed functional requirements, mapping documents, problem solving and troubleshooting.

•Performed unit testing at various levels of ETL and actively involved in team code reviews.

•Identified problems in existing production data and developed one-time scripts to correct them.

•Fixed invalid mappings and troubleshot technical problems in the database.

•Developed Cloud Integration parameterized mapping templates (DB and table object parameterization) for Stage, Dimension (SCD Type 1, SCD Type 2, CDC, and Incremental Load), and Fact load processes (a brief SCD Type 2 sketch follows this list).
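
A minimal sketch of the SCD Type 2 load pattern behind these templates, expressed as plain Oracle SQL; the actual loads were implemented as parameterized IICS mappings, and the staging and dimension table names here are hypothetical.

    -- Close out the current dimension row when a tracked attribute has changed
    UPDATE dim_customer d
       SET d.current_flag = 'N',
           d.effective_end_date = TRUNC(SYSDATE)
     WHERE d.current_flag = 'Y'
       AND EXISTS (SELECT 1
                     FROM stg_customer s
                    WHERE s.customer_id = d.customer_id
                      AND (s.customer_name <> d.customer_name OR s.address <> d.address));

    -- Insert a new current row for new customers and for customers just closed out above
    INSERT INTO dim_customer (customer_key, customer_id, customer_name, address,
                              effective_start_date, effective_end_date, current_flag)
    SELECT dim_customer_seq.NEXTVAL, s.customer_id, s.customer_name, s.address,
           TRUNC(SYSDATE), DATE '9999-12-31', 'Y'
      FROM stg_customer s
     WHERE NOT EXISTS (SELECT 1
                         FROM dim_customer d
                        WHERE d.customer_id = s.customer_id
                          AND d.current_flag = 'Y');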

Environment: Data mapping, transformation, workflow optimization, AWS, S3, Informatica Data Quality (IDQ), data validation, Informatica IICS, PowerCenter, Tableau, OBIEE, SQL, Python/Shell scripting, UNIX, PL/SQL, Windows 7, ERWIN.

Client: Anthem Inc, Atlanta, Georgia Oct 2021 – Oct 2022

Role: Oracle Developer

Responsibilities:

•Coordinated with the front-end design team to provide them with the necessary stored procedures and packages and the necessary insight into the data.

•Worked on SQL*Loader to load data from flat files obtained from various facilities every day.

•Created and modified several UNIX shell Scripts according to the changing needs of the project and client requirements.

•Designed and implemented database solutions to support business applications, enhancing data accessibility and performance.

•Conducted thorough PL/SQL code reviews, ensuring adherence to best practices and optimizing existing code.

•Developed custom stored procedures and functions to automate data processing tasks, improving efficiency.

•Collaborated with cross-functional teams to gather requirements and translate them into technical specifications.

•Utilized SQL Loader for bulk data uploads, streamlining data migration from legacy systems.

•Created and maintained comprehensive documentation for database structures and processes, facilitating knowledge transfer.

•Provided production support, troubleshooting issues, and implementing solutions to minimize downtime.

•Developed and maintained multiple repositories on GitHub, ensuring clean and well-documented code.

•Created and managed feature branches for new development, followed by thorough testing and merging into the main branch.

•Conducted code reviews through GitHub Pull Requests, ensuring adherence to coding standards and best practices.

•Wrote UNIX shell scripts to process files on a daily basis: renaming files, extracting dates from file names, unzipping files, and removing junk characters before loading them into the base tables.

•Involved in the continuous enhancements and fixing of production problems.

•Generated server-side PL/SQL scripts for data manipulation and validation and materialized views for remote instances.

•Developed PL/SQL triggers and master tables for automatic creation of primary keys.

•Created PL/SQL stored procedures, functions and packages for moving the data from staging area to higher environments after successful testing and validation.

•Created scripts to create new tables, views, queries for new enhancement in the application using TOAD.

•Created indexes on the tables for faster retrieval of the data to enhance database performance.

•Involved in data loading using PL/SQL and SQL *Loader calling UNIX scripts to download and manipulate files.

•Performed SQL and PL/SQL tuning and Application tuning using various tools like EXPLAIN PLAN, SQL *TRACE, TKPROF and AUTOTRACE.

•Extensively involved in using hints to direct the optimizer to choose an optimum query execution plan.

•Used Bulk Collections for better performance and easier retrieval of data by reducing context switching between the SQL and PL/SQL engines (a brief sketch follows this list).

•Created PL/SQL scripts to extract the data from the operational database into simple flat text files using UTL_FILE package.

•Partitioned the fact tables and materialized views to enhance the performance.

•Extensively used bulk collection in PL/SQL objects for improving performance.

•Created records, tables, collections (nested tables and arrays) for improving Query performance by reducing context switching.

•Used PRAGMA AUTONOMOUS_TRANSACTION to avoid mutating-table errors in database triggers.

•Extensively used the advanced features of PL/SQL like records, tables, object types, and dynamic SQL.

•Handled errors using Exception Handling extensively for the ease of debugging and displaying the error messages in the application.
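
A minimal sketch of the bulk-collect pattern used in the performance work above; the orders and orders_archive tables and the 1000-row batch size are hypothetical placeholders.

    DECLARE
      TYPE t_order_tab IS TABLE OF orders%ROWTYPE;
      l_orders t_order_tab;
      CURSOR c_orders IS
        SELECT * FROM orders WHERE status = 'NEW';
    BEGIN
      OPEN c_orders;
      LOOP
        -- Fetch in batches to limit PGA memory and reduce SQL/PL-SQL context switches
        FETCH c_orders BULK COLLECT INTO l_orders LIMIT 1000;
        EXIT WHEN l_orders.COUNT = 0;

        -- FORALL sends the whole batch to the SQL engine in one round trip
        FORALL i IN 1 .. l_orders.COUNT
          INSERT INTO orders_archive VALUES l_orders(i);
      END LOOP;
      CLOSE c_orders;
      COMMIT;
    END;
    /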

Environment: Oracle 11g/12c/19c, SQL*Plus, TOAD, JIRA, ServiceNow, PL/SQL programming, database design, performance tuning, SQL*Loader, GitHub, SSRS, SQL Developer, shell scripts, UNIX, Windows XP.

Client: Fresenius Healthcare, NC Oct 2018 – Sep 2021

Role: Oracle Developer

Responsibilities:

•Involved in resolving issues present in the product by analyzing the cause of the issue and solving the same.

•Involved in allocation of work and verification of program specifications done by fellow team members.

•Led the development of a comprehensive database architecture for multiple environments.

•Collaborated with stakeholders to gather requirements and ensure alignment with business goals.

•Created and optimized complex SQL queries, stored procedures, and functions for data retrieval.

•Collaborated with teams using GitHub Discussions for knowledge sharing and decision-making.

•Published API documentation and user guides on GitHub Pages for better accessibility.

•Developed ETL processes to integrate data from various sources into SQL Server.

•Utilized monitoring tools to identify and resolve performance bottlenecks in database operations.

•Handled modifications to stored procedures, functions, packages, tables, and views at the back end, as well as PL/SQL coding at both the back end and the front end.

•Involved in unit testing of all changes made within the team.

•Involved in preparing documents for program change specification and program specification for the various changes that need to be made in the existing ERP Product.

•Used HTML and JavaScript for front-end coding of web applications, and Forms 6i and Reports 6i for desktop applications.

•Created many database structures such as tables, sequences, and procedures as required for resolving issues and for new enhancements.

•Led a team of two members and reviewed their code and documentation.

•Documented all the details discussed in team meetings and circulated amongst the team members through email.

•Used TOAD as an editor to work with the back-end database.

•Used MS-Project to maintain the schedule of deliverables

•Responsible for using dynamic SQL in PL/SQL, an advanced programming technique that makes applications more flexible and versatile (a brief sketch follows this list).
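
A minimal sketch of native dynamic SQL in a PL/SQL procedure; the procedure, table, and parameter names are hypothetical placeholders.

    CREATE OR REPLACE PROCEDURE purge_old_rows (
      p_table_name IN VARCHAR2,   -- table chosen at run time
      p_cutoff     IN DATE
    ) AS
      l_sql VARCHAR2(400);
    BEGIN
      -- DBMS_ASSERT validates the object name, since an identifier cannot be bound
      l_sql := 'DELETE FROM ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_name)
            || ' WHERE created_date < :cutoff';
      EXECUTE IMMEDIATE l_sql USING p_cutoff;   -- the cutoff value is bound, not concatenated
      DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' rows deleted from ' || p_table_name);
    END purge_old_rows;
    /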

Environment: Oracle 11g/12c/19c, Forms 6i, Reports 6i, PL/SQL, TOAD, ServiceNow, JavaScript, HTML, Crystal Reports 9, MS-Excel, MS-Access 2003, GitHub, Power BI, MS-Word 2003, Adobe Acrobat 9.0, MS-Project.

Client: US Bank, Chicago, IL Jan 2017 – Aug 2018

Role: ETL Developer

Responsibilities:

•Responsible for design and development of projects leveraging Informatica PowerCenter ETL tool.

•Interacted with the Business Users to analyze the Business Requirements, High Level Document (HLD) and Low-Level Document (LLD) and transform the business requirements into the technical requirements.

•Designed and Developed Oracle PL/SQL Package for initial loading and processing of Derivative Data.

•Worked with various Informatica client tools like Source Analyzer, Mapping Designer, Mapplet Designer, Transformation Developer, Informatica Repository Manager, and Workflow Manager.

•Implemented weekly error tracking and correction process using Informatica.

•Performed Data Quality checks and developed ETL and Unix Shell Script processes to ensure flow of data of the desired quality.

•Designed the detailed ETL architecture, including agents, scenarios, packages, data mapping, data extractions, transformations and validations. ETL design and implementation include stand-alone and Java EE agents, ODI data services, ODI Console, ODI studio.

•Configured LDAP to bring users and groups using SQL Authenticator.

•Created multiple maps to show student enrollment from multiple geographical locations.

•Performed geometry calculations using complex Oracle Spatial functions.

•Implemented applications using ArcGIS, JavaScript, PHP, XML, and HTML with SQL and Python.

•Skilled in data profiling to identify data anomalies, inconsistencies, and gaps in large datasets.

•Experience in integrating IDQ with Informatica PowerCenter for end-to-end data integration and quality management.

•Implementing and managing image into spatial databases, maps and related products.

•Performed Unit and Integration Testing of Informatica Sessions, Batches and Target Data.

•Worked on transformations Source Qualifier, Expression, Filter, Lookup, Router, Update Strategy, Union, and Sequence Generator.

•Optimized OLAP cube structures using Star Schema principles, supporting multidimensional analysis for complex business queries.

•Automated ETL processes to populate Star Schema tables, ensuring timely and accurate data refreshes for business users.

•Imported and integrated the Knowledge Modules for each load process and prepared the load modules based on the designed data models.

•Developed mappings for various business areas in the project by employing the available Transformations.

•Developed operational process and procedures for monitoring ODI scenarios & agents for production troubleshooting and analysis.

•Created and maintained the Data Model repository as per enterprise standards.

•Designed and Implemented Custom Data Mart and generated DDL & DMLs to migrate to different environments.

•Incremental buildout of the data to support evolving data needs by adopting the agile methodology.

•Wrote data validation scripts using complex SQL queries against Redshift.

•Developed SQL scripts and procedures for loading data from flat files and S3 (Simple Storage Service) buckets into target Redshift tables using Unix shell.

•Used Redshift analytical and windowing functions to implement complex business logic (see the validation sketch after this list).

•Performed data manipulations using various Informatica transformations such as Filter and Expression.
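
A minimal sketch of the kind of window-function validation query run against Redshift; the schema, table, and column names are hypothetical placeholders.

    -- Any rows returned indicate duplicate business keys in the target table
    SELECT *
    FROM (
        SELECT account_id,
               load_date,
               ROW_NUMBER() OVER (PARTITION BY account_id ORDER BY load_date DESC) AS rn
        FROM   stg.account_balances
    ) dedup_check
    WHERE rn > 1;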

Environment: Informatica PowerCenter 9.6, ODI, Toad, UNIX.

Client: INFO Apps Pvt. Ltd, HYD Dec 2012 – Mar 2015

Role: SQL Database Developer

Responsibilities:

•Gathered requirements from business analysts, developed physical data models using Erwin, and created DDL scripts to design the database schema and database objects.

•Performed database defragmentation and optimized SQL queries, improving database performance and loading speed.

•Involved in the development, maintenance, and enhancement of the PIX application.

•Involved in PL/SQL code review and modification for the development of new requirements.

•Developed custom stored procedures and packages to support new enhancement needs.

•Generated business reports using ref cursors returned from PL/SQL procedures (a brief sketch follows this list).

•Optimized SQL scripts using execution plans, indexes, partitions and using Database Engine Tuning Advisor.

•Wrote complex stored procedures for the data profiling process needed to define the structure of the pre-staging and staging areas.

•Involved in designing and applying constraints and rules for data integrity.
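
A minimal sketch of a PL/SQL procedure returning a ref cursor for reporting; the procedure, table, and column names are hypothetical placeholders.

    CREATE OR REPLACE PROCEDURE get_claims_report (
      p_from_date IN  DATE,
      p_to_date   IN  DATE,
      p_results   OUT SYS_REFCURSOR   -- the reporting layer fetches rows from this cursor
    ) AS
    BEGIN
      OPEN p_results FOR
        SELECT claim_id, member_id, claim_amount, claim_status
        FROM   claims
        WHERE  claim_date BETWEEN p_from_date AND p_to_date
        ORDER  BY claim_date;
    END get_claims_report;
    /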

Environment: Oracle 11g, Toad.


