
RAMESH MAGANTI

813-***-**** *******@*****.*** https://www.linkedin.com/in/mrameshkumar79/ 1260 Blackburn St, Gurnee, IL 60031

PROFESSIONAL SUMMARY

Highly accomplished Lead Oracle Database / ETL Developer with 16+ years of experience delivering high-performance, enterprise-scale data solutions across the Banking, Healthcare, Logistics, and Life Sciences domains. Recognized as a Subject Matter Expert (SME) in Oracle PL/SQL, ETL pipeline development, and data warehousing, with deep experience designing, programming, and configuring complex data processing and integration solutions. Proven ability to collaborate with business and technical stakeholders, mentor junior developers, and implement critical production systems without disrupting business operations. Skilled in the full SDLC, requirements gathering, functional/technical specification reviews, and database performance optimization, including migrating and managing enterprise workloads on AWS RDS to improve scalability, availability, and operational efficiency.

TECHNICAL SKILLS

Programming / Scripting (16+ years): PL/SQL, SQL, Python, UNIX Shell Scripting, HTML, Java, Visual Studio

Databases (16+ years): Oracle 8i–19c, Exadata, SQL Server 2005–2019, PostgreSQL

ETL / Data Engineering (12+ years): ETL pipeline design, Data Warehousing, Informatica 9.x/10.x, Oracle Data Integrator (ODI), SSIS, Kafka, Airflow, JSON/XML processing, API integration, Data Modeling (Erwin 8–10), Staging/Fact tables, Error Handling, Materialized Views, Partitioning strategies

Performance Tuning (12+ years): SQL/PL/SQL optimization, Bulk processing, FORALL, Pipelined functions, Indexing, Partition pruning, Execution plan analysis, AWR/ASH reports

Data Analysis (12+ years): Data profiling, Data validation, Data lineage, Root-cause analysis, Statistical analysis using Python (pandas, NumPy), Visualization using Power BI and Matplotlib

Reporting / BI (12+ years): Power BI, BI Publisher, SSRS, Splash BI

Automation / Scheduling (14+ years): Autosys, Control-M, Cron, Job streams, Dependency-driven workflows

Cloud / DevOps (4+ years): AWS RDS, S3, Git, GitHub, DevOps pipelines, Jenkins, CI/CD

Governance & Compliance (10+ years): Data governance, Data quality, Regulatory compliance (NIST CSF, ISO 27001, HL7, DICOM), Audit-ready documentation

Soft Skills (16+ years): Cross-functional collaboration, Requirements gathering, Production support, Incident resolution, Technical documentation

Professional Experience

Bank of America – Chicago, IL

Lead Oracle Database/ETL Developer Jun 2024 – Present

Served as Subject Matter Expert (SME) for enterprise Oracle PL/SQL & ETL solutions, leading architectural design, performance tuning, and technical guidance across cross-functional teams.

Conducted requirement gathering sessions with business/functional owners and translated business rules into technical specifications, data mapping documents, and ETL workflow designs.

Migrated and optimized enterprise data workloads using AWS services, including RDS (Oracle), S3 for DATAHUB storage, and IAM for secure access.

Designed and optimized PL/SQL packages, stored procedures, triggers, pipelined table functions, and SYS_REFCURSOR-based APIs, leveraging bulk processing, exception handling, and query tuning.
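
A minimal sketch of the pipelined, bulk-fetching pattern referenced above; the object types, table (stg_orders), and columns are hypothetical stand-ins, not actual production objects:

    -- Hypothetical object/collection types backing the pipelined function
    CREATE TYPE order_row_t AS OBJECT (order_id NUMBER, status VARCHAR2(20));
    /
    CREATE TYPE order_tab_t AS TABLE OF order_row_t;
    /

    CREATE OR REPLACE FUNCTION active_orders RETURN order_tab_t PIPELINED IS
      CURSOR c IS SELECT order_id, status FROM stg_orders WHERE status = 'ACTIVE';
      TYPE buf_t IS TABLE OF c%ROWTYPE;
      l_buf buf_t;
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_buf LIMIT 1000;  -- bulk fetch cuts context switches
        EXIT WHEN l_buf.COUNT = 0;
        FOR i IN 1 .. l_buf.COUNT LOOP
          PIPE ROW (order_row_t(l_buf(i).order_id, l_buf(i).status));
        END LOOP;
      END LOOP;
      CLOSE c;
      RETURN;
    END;
    /

    -- Consumers query the function like a table:
    -- SELECT * FROM TABLE(active_orders);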

Developed advanced PL/SQL solutions using dynamic SQL, autonomous transactions, materialized views, JSON/XML parsing, and DBMS_SCHEDULER jobs to support scalable ETL workflows.
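
For example, a scheduler job of the kind described might be registered as below; the job and procedure names are illustrative only:

    BEGIN
      DBMS_SCHEDULER.create_job(
        job_name        => 'NIGHTLY_ETL_REFRESH',  -- illustrative job name
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'etl_pkg.load_daily',   -- hypothetical package procedure
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=2', -- run nightly at 02:00
        enabled         => TRUE);
    END;
    /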

Optimized complex SQL queries using analytical/window functions, hierarchical queries, CASE/DECODE/COALESCE expressions, and partition pruning techniques.
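
A sketch combining both ideas, assuming a hypothetical txn_fact table partitioned by txn_date: the window function picks the latest row per account, and the literal predicate on the partition key lets the optimizer prune to a single partition.

    SELECT account_id, status
    FROM (
      SELECT account_id, status,
             ROW_NUMBER() OVER (PARTITION BY account_id
                                ORDER BY txn_ts DESC) AS rn
      FROM   txn_fact
      WHERE  txn_date = DATE '2024-06-01'   -- partition-pruning predicate
    )
    WHERE rn = 1;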

Performed performance tuning using AWR/ASH reports, optimizer plan analysis, indexing strategies, and query rewrites to improve system throughput and reduce processing time.

Engineered and maintained ETL pipelines in Informatica PowerCenter, integrating multi-source data (Oracle, SQL Server, REST APIs), ensuring quality, consistency, and high-volume scalability.

Automated ETL workloads using Autosys with job dependency handling, restart logic, failure alerting, and log capture for fully reliable batch operations.

Designed and implemented real-time streaming pipelines using Apache Kafka, developing producers/consumers for low-latency data exchange and event-driven integrations.

Executed unit testing, integration testing, data validation & source-to-target reconciliation for PL/SQL modules and ETL workflows, ensuring data correctness before production releases.

Implemented SQL*Loader / External Table ingestion workflows to load and transform large flat-file datasets efficiently.
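
The external-table half of that pattern might look like the sketch below (directory object, file, and table names are hypothetical); once defined, loads reduce to a set-based INSERT ... SELECT.

    CREATE TABLE ext_customers (
      customer_id  NUMBER,
      full_name    VARCHAR2(200)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY etl_dir              -- hypothetical directory object
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('customers.csv')
    )
    REJECT LIMIT UNLIMITED;

    -- Direct-path load from the flat file into staging
    INSERT /*+ APPEND */ INTO stg_customers
    SELECT customer_id, full_name FROM ext_customers;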

Built and maintained REST/SOAP API integrations supporting operational reporting and enterprise data sharing.

Utilized Git/GitHub & GitHub Copilot to accelerate code development, enforce coding standards, and reduce development cycle duration.

Documented ETL mappings, data lineage, data dictionaries, runbooks, SOPs, deployment guides, and RCA reports for internal stakeholders, auditors, and governance teams.

Collaborated with Data Governance, Security, and Compliance teams to ensure adherence to NIST CSF, ISO 27001, SOX, and internal audit policies.

Delivered BI dashboards and ad-hoc reporting using SQL*Plus, PL/SQL scripts, and Power BI to support operational and executive decision-making.

Provided end-to-end production support for critical data processing and reporting systems, including daily batch monitoring, incident resolution, RCA, service request handling, and SLA adherence.

Worked in both Agile and Waterfall SDLC models, participating in sprint planning, stand-ups, backlog grooming, UAT coordination, and release management cycles.

Key Milestones:

Reduced ETL runtime from 6 hours to 1.8 hours using bulk binds and partition pruning.

Achieved 99.9% SLA adherence in production environments.

Delivered a global DATAHUB supporting 5,000+ users in 50+ countries.

Automated on-call production support workflows with Python & DBMS_SCHEDULER.

Environment: AWS, S3, Oracle 19c/12c, Exadata, PL/SQL, Informatica 10.x, Java, HTML, CSS, Kafka, Python, JSON/XML, REST/SOAP API, Autosys, SQL*Loader, Toad, Git, DevOps, Jenkins, Airflow, Windows, UNIX

AbbVie Inc. – Chicago, IL

Lead Oracle PL/SQL/ETL Developer Mar 2023 – May 2024

Supported clinical trial data management and drug discovery systems (LSH) through advanced Oracle PL/SQL programming, ETL development, and data harmonization — improving clinical data accuracy and operational efficiency.

Engineered enterprise Data Warehouse (EDW), Data Marts, and ODS to centralize study, patient, and trial data for analytical and regulatory reporting, including workloads migrated to AWS RDS for scalable storage and performance.

Collaborated with biostatistics, data science, and regulatory teams to deliver harmonized datasets for FDA submission pipelines and advanced clinical analytics.

Ensured compliance with GxP, HL7, FDA, and internal data governance standards including data lineage documentation, audit controls, and validation workflows.

Designed, implemented, and optimized end-to-end ETL pipelines in Oracle using PL/SQL packages, procedures, functions, triggers, and materialized views for large-scale clinical and operational datasets.

Applied data modeling best practices using Erwin and Toad, designing snowflake schemas with fact and dimension tables supporting regulatory reporting and analytics.

Developed advanced analytical SQL using window functions (ROW_NUMBER, RANK, LAG, LEAD), hierarchical queries, and conditional logic to optimize analytical workloads.
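
As one illustration of the LAG pattern, a change-detection query over a hypothetical visit_log table flags only the rows where a patient's status differs from the prior visit:

    SELECT patient_id, visit_ts, status
    FROM (
      SELECT patient_id, visit_ts, status,
             LAG(status) OVER (PARTITION BY patient_id
                               ORDER BY visit_ts) AS prev_status
      FROM   visit_log                      -- hypothetical table
    )
    WHERE status <> prev_status;            -- first visit (NULL prev) is excluded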

Implemented incremental data loading, pushdown optimization, and partition pruning in Informatica PowerCenter, including integration with AWS S3 for clinical data ingestion staging.

Integrated multi-source data from Oracle, PostgreSQL, MySQL, and flat-file systems into Spotfire dashboards, and enabled analytical querying on clinical datasets stored in DATAHUB.

Designed Python scripts for flat-file preprocessing, ETL orchestration, and job monitoring — integrated with Autosys for workflow automation.

Integrated Apache Kafka with Informatica PowerCenter and PL/SQL workflows to handle streaming and batch-based clinical event processing for real-time analytics dashboards.

Strengthened DevOps and CI/CD pipelines using Git, GitHub, and Jenkins for automated version control, code validation, and deployment of PL/SQL and ETL changes across environments.

Established data harmonization and quality frameworks, enforcing business rules, validation rules, and lineage traceability across multiple studies and systems.

Key Milestones:

Reduced FDA submission cycles from 5 days to 1 day through optimized ETL pipelines.

Improved stability by 25% with proactive monitoring & error handling.

Scaled DW to 1M+ patient records across 30+ clinical trials.

Environment: AWS, S3, Oracle 19c/12c, PL/SQL, SQL, Informatica 10.x, HTML, Java, Kafka, Python, JSON, REST API, Git, DevOps, Toad, Jira, Windows, UNIX, Visual Studio.

Baker & Taylor – Charlotte, NC

Senior Oracle Database/ETL Developer Feb 2021 – Feb 2023

Reverse-engineered legacy ETL workflows and architected a high-speed Data Highway, reducing end-to-end data load times from 6+ hours to < 2 hours and improving downstream analytics availability.

Designed and developed complex PL/SQL packages, procedures, and pipelined functions to transform and publish data from staging to EDW fact/dimension tables, ensuring consistency, referential integrity, and audit traceability.
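
The staging-to-dimension publish step typically centers on an idempotent MERGE; a minimal sketch with hypothetical table and column names (real logic would add SCD handling and audit columns):

    MERGE INTO dim_product d
    USING stg_product s
      ON (d.product_nk = s.product_nk)      -- natural key match
    WHEN MATCHED THEN UPDATE
      SET d.product_name = s.product_name,
          d.updated_at   = SYSTIMESTAMP
    WHEN NOT MATCHED THEN INSERT
      (product_nk, product_name, updated_at)
      VALUES (s.product_nk, s.product_name, SYSTIMESTAMP);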

Performed requirement gathering sessions with business SMEs and data analysts to define data rules, validation logic, and reporting expectations; translated functional requirements into technical designs and ETL specifications.

Built automated SQL/Power BI reporting systems, optimizing analytical SQL queries and providing real-time performance and operational dashboards for leadership and business partners.

Implemented Shell scripting and Cron-based ETL automation, improving batch job reliability, exception notifications, and system observability.

Designed and tuned SQL*Loader and External Table data ingestion pipelines to automate high-volume batch loads from flat files into Oracle, improving throughput and reducing manual processing errors.

Migrated on-premises Oracle workloads to AWS RDS, leveraging Python for file handling, API-based data transfers, metadata processing, and pipeline orchestration.

Proficient in Python for data manipulation, ETL automation, orchestration helpers, and ad-hoc validation utilities using pandas and custom process scripts.

Provided ongoing production support, performing daily load monitoring, log analysis, performance troubleshooting, RCA resolution, and SLA adherence for mission-critical data pipelines.

Conducted data validation, regression testing, and source-to-target reconciliation to ensure correctness of transformed datasets before production release.

Developed and maintained technical documentation, including ETL mapping sheets, solution architecture diagrams, SOP/runbooks, deployment checklists, and operational workflow documentation to support cross-team knowledge transfer.

Key Milestones:

Reduced ETL cycle time by 70%, enabling near real-time analytics for business teams.

Migrated 15+ mission-critical datasets from on-premises to AWS RDS with zero data loss.

Automated 50+ Cron jobs with shell scripting, improving batch monitoring and reducing failures by 40%.

Enhanced Power BI dashboards, cutting report refresh times from 30 minutes to under 5 minutes.

Successfully integrated REST API-based external data feeds into the EDW pipeline, improving data freshness and accuracy.

Environment: Oracle 19c/12c, PL/SQL, SQL, Kafka, SSIS, JSON, AWS, REST API, Unix, Git, DevOps

Burlington Northern Santa Fe (BNSF) Railway – Dallas, TX

Senior Oracle PL/SQL/ETL Developer Dec 2018 – Jan 2021

Provided production support for data warehouse and reporting applications, monitoring daily batch cycles, troubleshooting failures, and ensuring SLA adherence across rail operations, logistics, and asset management systems.

Optimized ETL feeds by ~30% through Oracle SQL tuning, advanced indexing strategies, parallel execution, and table partitioning, improving nightly processing and report availability.

Designed and maintained enterprise data models (Snowflake Schema) using Erwin for analytics platforms supporting operational dashboards and KPI reporting.

Automated data ingestion pipelines leveraging Shell scripting and Cron scheduling, improving pipeline reliability and reducing manual interventions.

Performed data validation, source-to-target reconciliation, and regression testing across staging, warehouse, and reporting layers to ensure data accuracy and consistency.

Migrated critical SQL Server stored procedures, views, and functions into Oracle 10g+ environments, implementing performance-efficient PL/SQL equivalents while maintaining business logic integrity.
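
A sketch of the translation pattern, using a hypothetical shipments table: T-SQL's GETDATE() becomes SYSDATE, TOP (n) becomes a ROWNUM filter (10g-safe), and the bare result set is returned through a SYS_REFCURSOR.

    CREATE OR REPLACE PROCEDURE recent_shipments (p_rc OUT SYS_REFCURSOR) IS
    BEGIN
      OPEN p_rc FOR
        SELECT shipment_id, ship_date
        FROM  (SELECT shipment_id, ship_date
               FROM   shipments                 -- hypothetical table
               WHERE  ship_date >= SYSDATE - 7  -- last 7 days
               ORDER  BY ship_date DESC)
        WHERE ROWNUM <= 100;                    -- equivalent of TOP (100)
    END;
    /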

Created detailed technical design documents, ETL mapping sheets, test cases, deployment instructions, and operational runbooks to support development and cross-team onboarding.

Built and enhanced BI dashboards using Splash BI, enabling real-time operational planning and decision-making for transportation, fleet utilization, and track maintenance operations.

Participated in release management and deployment coordination, collaborating with DBAs, application teams, and infrastructure groups for system upgrades, code migrations, and configuration changes.

Resolved complex production defects, performed root-cause analysis, and implemented long-term fixes to improve system reliability and prevent recurrence.

Key Milestones:

Improved ETL throughput by 30% through partition pruning, optimizer hints, and bitmap indexing.

Automated 100+ ETL workflows with Shell scripting, reducing manual intervention by 50%.

Delivered BI dashboards for operational and executive teams, cutting decision-making latency from hours to near real-time.

Successfully migrated 10+ critical SQL Server databases into Oracle, maintaining 100% reconciliation accuracy.

Environment: Oracle 9i/12c, PL/SQL, SQL, JSON, Python, REST API, Git, SSIS

Ministry of Health – Muscat, Oman

Oracle Database Developer Mar 2012 – Nov 2018

Developed and implemented the Al-Shifa 3+ Hospital Information System deployed across 180+ hospitals and health centres, supporting patient care, laboratory, pharmacy, radiology, and blood bank operations.

Built ETL workflows using IBM Infosphere DataStage 9.x integrating clinical, diagnostic, and operational data while ensuring compliance with HL7, DICOM, and ICD coding standards.

Designed and optimized PL/SQL stored procedures, packages, and triggers to ensure accurate clinical and administrative data processing across inpatient and outpatient workflows.

Developed and enhanced Oracle Forms 9i modules for patient registration, appointment scheduling, pharmacy dispensing, and inventory management, improving operational efficiency and data quality.

Created Oracle Reports 9i for clinical summaries, billing statements, inventory status, and diagnostic analytics, enabling timely decision-making across multi-facility environments.

Performed functional, integration, regression, and data validation testing for PL/SQL modules and ETL workflows to ensure correctness, reliability, and system stability.

Executed source-to-target data verification, clinical workflow validation, and patient data accuracy checks across laboratory, radiology, billing, and EMR modules.

Authored technical documentation, including design specifications, data mapping sheets, interface specifications (HL7 messages), deployment instructions, and end-user training manuals.

Collaborated with doctors, nurses, administrators, and IT teams to gather and refine system requirements.

Key Milestones:

Successfully deployed Al-Shifa 3+ across 180+ hospitals and clinics, improving patient data accessibility and reporting consistency.

Optimized ETL workflows, reducing data load times by 50% while maintaining HL7/DICOM compliance.

Developed 50+ complex PL/SQL packages and reports used for critical operations such as patient monitoring, inventory, and blood bank management.

Implemented data validation and reconciliation frameworks, achieving 99.8% data accuracy across multiple hospital systems.

Automated healthcare reporting and analytics dashboards, enabling real-time insights for hospital administrators and decision-makers.

Environment: Oracle 11g, PL/SQL, SQL, DataStage 9.x, SSIS, Windows, Forms 9i, Reports 9i

Inter-Tech – Muscat, Oman

Oracle Database Developer Apr 2009 – Feb 2012

Designed a stores & purchase management system with automated balance computation and real-time reporting, which reduced manual errors and improved decision-making speed.

Developed PL/SQL procedures, triggers, and functions to streamline business process automation, resulting in reduced transaction processing time and enhanced system scalability.
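
A minimal sketch of the balance-automation idea, assuming hypothetical stock_txn and item_balance tables: a row-level trigger keeps the running balance current as transactions post.

    CREATE OR REPLACE TRIGGER trg_stock_balance
    AFTER INSERT ON stock_txn               -- hypothetical transactions table
    FOR EACH ROW
    BEGIN
      UPDATE item_balance                   -- hypothetical balance table
      SET    qty_on_hand = qty_on_hand + :NEW.qty_delta
      WHERE  item_id = :NEW.item_id;
    END;
    /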

Key Milestones:

Enabled real-time reporting for procurement and stock levels, improving decision-making speed by 60%.

Developed modular PL/SQL procedures and triggers, reducing transaction processing time by 40% and enhancing system scalability.

Environment: Oracle 11g, PL/SQL, SQL, Forms 6i/9i, Reports 6i/9i, Windows

EDUCATION

B.Sc. (Science) – ANGRAU University, Hyderabad, India, 2002

CERTIFICATIONS

Oracle Certified Professional (OCP) – Oracle University, 2008


