
Senior Data Engineer - Oracle Exadata Expert

Location:
Texas
Posted:
April 30, 2026

Contact this candidate

Resume:

Bhargav Reddy Modugulla — Senior Data Engineer

704-***-**** ************.*********@*****.***

PROFESSIONAL SUMMARY:

Senior Data Engineer with 5 years of experience developing and managing robust data warehousing solutions for complex enterprise environments.

Proficient in implementing, configuring, and managing Linux-based processes and infrastructure to support critical data warehousing operations and pipelines.

Skilled in identifying and implementing critical system and architecture improvements to enhance data integrity, performance, and scalability.

Adept at enhancing Linux-based toolsets, shell scripts, jobs, and processes to optimize operational efficiency and automation.

Expert in optimizing ETL and database load/extract processes, particularly within high-performance Oracle Exadata environments.

Extensive practical experience in Linux environment setup, deep understanding of Unix file systems, and advanced shell scripting for automation.

Strong programming background in Python and Perl, utilized for complex data manipulation, automation, and custom tool development.

Hands-on experience with relational databases, predominantly Oracle, ensuring high-performance data operations and query optimization.

Committed to fostering automation, driving continuous process improvement, and adhering strictly to Agile methodology principles for project delivery.

EDUCATION:

Master of Science in Computer Science (Concentration in Data Science) @ University of North Carolina at Charlotte

WORK EXPERIENCE:

Senior Data Engineer @ CVS Health Woonsocket, RI Sep 2025 – Present

Engineered and maintained sophisticated data warehousing solutions leveraging Oracle Exadata for high-performance data storage and analytics.

Implemented and enhanced complex ETL processes using Informatica, ensuring efficient data load and extraction from diverse healthcare sources.

Configured and managed Linux-based infrastructure to support critical data warehousing operations and robust data ingestion pipelines.

Developed extensive shell scripts for automating data pipeline orchestration, system monitoring, and administrative tasks across environments.
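
The kind of freshness monitoring such scripts typically perform can be sketched as follows (shown in Python rather than shell for brevity; the file path and threshold are hypothetical, not taken from the actual environment):

```python
import os
import time

def check_freshness(path: str, max_age_min: int) -> str:
    """Report whether a pipeline output file is missing, stale
    (modified more than max_age_min minutes ago), or fresh."""
    if not os.path.isfile(path):
        return f"MISSING: {path}"
    age_min = (time.time() - os.path.getmtime(path)) / 60
    return f"STALE: {path}" if age_min > max_age_min else f"OK: {path}"

# Illustrative invocation; a scheduler wrapper would alert on MISSING/STALE
status = check_freshness("/tmp/pipeline_output.dat", 60)
```

A cron wrapper or scheduler hook can act on the returned status, paging or retrying when the output is missing or stale.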

Optimized Oracle database performance through advanced SQL tuning and schema design for complex analytical queries and reporting requirements.

Enhanced various Linux-based toolsets, cron jobs, and custom processes for improved data management, reliability, and operational efficiency.

Designed and deployed robust data quality frameworks, ensuring integrity and accuracy across all data warehouse domains and applications.

Utilized Apache Airflow with Python to orchestrate intricate data flows, managing dependencies and scheduling batch processing workloads.

Identified and implemented critical system and architecture improvements to enhance data warehousing scalability, resilience, and performance.

Collaborated with cross-functional teams to understand data requirements and deliver optimized datasets for reporting and business intelligence.

Maintained comprehensive documentation for all Linux-based processes, ETL workflows, and Oracle database configurations for knowledge transfer.

Engaged in Agile sprints, contributing to continuous integration and delivery practices for data solutions while tracking tasks in JIRA.

Technologies Used: Oracle Exadata, Linux, Shell Scripting, Informatica, Python, Apache Airflow, SQL, Unix, Docker, Jenkins, Git, JIRA

Data Engineer @ Morgan Stanley New York, NY Nov 2022 – Jul 2024

Designed and optimized enterprise-scale data warehouse solutions, primarily utilizing Oracle databases for critical financial data management.

Developed and maintained robust ETL processes using Informatica PowerCenter for efficient data extraction, transformation, and loading.

Implemented advanced shell scripts to automate daily operational tasks, data validation, and system health checks on Linux platforms.

Managed and configured Linux-based servers, ensuring optimal performance and security for critical data warehousing applications and services.

Administered Oracle Exadata environments, focusing on performance tuning, capacity planning, and query optimization for financial analytics.

Enhanced existing ETL processes and database load/extract procedures, significantly reducing processing times and resource consumption.

Provided expertise in Unix file systems, including mount types, permissions, standard tools, and pipe operations.

Utilized Python for developing custom data processing scripts and integrating with various financial data sources and APIs.
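
As an illustration of this kind of custom processing, a minimal stdlib-only sketch; the payload shape and field names are hypothetical, not from any actual data source:

```python
import json
from datetime import datetime, timezone

def normalize_trades(payload: str) -> list[dict]:
    """Parse a JSON payload of trade records (hypothetical shape) and
    normalize each field: uppercase trimmed symbols, floats for notional
    amounts, and ISO-8601 UTC timestamps from epoch seconds."""
    out = []
    for rec in json.loads(payload):
        out.append({
            "symbol": rec["symbol"].strip().upper(),
            "notional": float(rec["notional"]),
            "ts": datetime.fromtimestamp(rec["epoch"], tz=timezone.utc).isoformat(),
        })
    return out

raw = '[{"symbol": " msft ", "notional": "1250.50", "epoch": 1700000000}]'
rows = normalize_trades(raw)
```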

Orchestrated complex data workflows and dependencies using Apache Airflow with Python, ensuring timely data availability for reporting.

Collaborated with data architects to design and implement system architecture improvements for scalability and reliability of data platforms.

Participated in an Agile development environment, delivering iterative enhancements to data warehouse systems and solutions.

Documented all processes, scripts, and database changes to maintain comprehensive knowledge transfer and operational continuity for the team.

Technologies Used: Oracle Exadata, Linux, Shell Scripting, Informatica, Python, Apache Airflow, SQL, Unix, Git, Jenkins, Agile

Data Engineer @ Walmart Bentonville, AR Feb 2020 – Oct 2022

Designed and developed robust ETL processes using Informatica PowerCenter for large-scale retail data processing and analytics.

Crafted and optimized complex SQL queries for efficient data extraction, transformation, and loading into target databases.
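
To illustrate the pattern, a small aggregation query run against an in-memory SQLite stand-in (the table and column names are invented for the example, not from the actual warehouse):

```python
import sqlite3

# In-memory SQLite stand-in for an ETL staging step: roll raw sales
# rows up into a per-store daily summary before loading downstream.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_sales (store_id INTEGER, sale_date TEXT, amount REAL);
    INSERT INTO raw_sales VALUES
        (1, '2022-06-01', 19.99),
        (1, '2022-06-01', 5.25),
        (2, '2022-06-01', 42.00);
""")
summary = conn.execute("""
    SELECT store_id, sale_date, COUNT(*) AS txns,
           ROUND(SUM(amount), 2) AS total
    FROM raw_sales
    GROUP BY store_id, sale_date
    ORDER BY store_id
""").fetchall()
```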

Managed and administered Oracle and MySQL relational databases, ensuring high availability and optimal performance for retail operations.

Implemented batch processing pipelines leveraging Hadoop and Hive for processing vast volumes of retail transaction data.

Executed comprehensive data cleansing, transformation, and validation routines to maintain data quality standards across platforms.

Migrated on-premise relational database data to distributed Hadoop environments for enhanced analytics capabilities and storage efficiency.

Developed and utilized Unix shell scripts to automate data ingestion, processing, and system monitoring tasks.

Implemented robust logging and error handling mechanisms within ETL workflows to ensure data pipeline reliability and traceability.
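
A minimal sketch of per-row error handling with logging, assuming a hypothetical row shape: bad records are logged and skipped rather than failing the whole batch, which keeps the pipeline reliable and the rejects traceable:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("etl")

def load_rows(rows):
    """Load rows, logging and skipping malformed records; returns a
    (loaded, failed) tuple of counts. Row shape is illustrative."""
    loaded = failed = 0
    for i, row in enumerate(rows):
        try:
            float(row["value"])  # raises KeyError/ValueError on bad rows
            loaded += 1
        except (KeyError, ValueError) as exc:
            failed += 1
            log.error("row %d rejected: %r (%s)", i, row, exc)
    return loaded, failed

counts = load_rows([{"value": "1.5"}, {"value": "oops"}, {}])
```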

Configured and managed Linux operating systems for hosting database servers and running critical data processing applications securely.

Gained practical knowledge of Unix file systems, including managing permissions, directories, and standard utility commands.
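
For illustration, a small script applying an owner/group permission model of the kind described above (the directory layout and file names are hypothetical):

```python
import os
import stat

def setup_drop_dir(dir_path: str) -> None:
    """Create a data drop directory: directory is owner rwx / group r-x,
    the manifest file is owner rw / group r, others get no access."""
    os.makedirs(dir_path, exist_ok=True)
    os.chmod(dir_path, 0o750)
    manifest = os.path.join(dir_path, "manifest.txt")
    open(manifest, "w").close()
    os.chmod(manifest, 0o640)
```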

Collaborated with business intelligence teams to understand data requirements and deliver accurate datasets for reporting and analysis.

Contributed to data warehouse design improvements, focusing on schema optimization and efficient data retrieval strategies for users.

Technologies Used: Informatica, Oracle, MySQL, Hadoop, Hive, SQL, Linux, Unix, Shell Scripting, Git

TECHNICAL SKILLS:

Programming Languages: Python, Scala, SQL, Perl, Shell Scripting

Data Warehousing: Oracle Exadata, Informatica, Data Flows, ETL, Snowflake, Redshift, Synapse

Operating Systems: Linux, Unix

Big Data Technologies: Hadoop, Spark, Hive, Databricks

Cloud Platforms: AWS (S3, EMR, Glue, Lambda), Azure (ADLS, ADF, Synapse)

Database Management: Oracle, PostgreSQL, MySQL, MS SQL Server

Orchestration & Scheduling: Apache Airflow

Version Control & DevOps: Git, GitHub, Jenkins, Docker

Collaboration & Methodologies: Confluence, JIRA, Agile, Scrum


