Satyaprasad Chittala — Senior Data Engineer
913-***-**** ***************@*****.***
PROFESSIONAL SUMMARY
Around 5 years of dedicated experience in Data Warehousing and Linux-based data engineering environments, with a focus on building robust data solutions.
Demonstrated expertise in Shell Scripting and advanced Oracle development, crucial for managing complex data warehousing infrastructure and processes.
Implemented, configured, and managed Linux-based processes, infrastructure, and toolsets to improve data warehousing operations and efficiency.
Skilled in identifying and implementing critical system and architecture improvements, ensuring optimal performance and scalability for data platforms.
Extensive experience enhancing ETL and database load/extract processes, particularly within Oracle Exadata and other relational database systems.
Adept at Python and Perl for data processing and automation, alongside practical knowledge of Unix file systems and standard command-line tools.
Strong background in ETL tools such as Informatica for data integration, and in orchestration tools such as Apache Airflow with Python.
Passionate about automation and continuous process improvement, applying Agile methodologies to deliver high-quality data engineering solutions.
Expert in building scalable data pipelines, optimizing performance, and ensuring data quality across diverse structured and semi-structured datasets.
EDUCATION
Master of Science in Computer Science @ University of Central Missouri
TECHNICAL SKILLS
Programming Languages: Python, Scala, SQL, Shell Scripting, Perl
Data Warehousing: Oracle Exadata, Informatica, Data Modeling, ETL Design, Performance Tuning, Data Flow Management
Operating Systems: Linux, Unix (file systems, permissions, pipes)
Databases: Oracle, PostgreSQL, SQL Server, Hive, Snowflake
Orchestration & Workflow: Apache Airflow, Control-M, Jenkins
Big Data Technologies: Spark, Hadoop, Kafka, EMR
Cloud Platforms: AWS (S3, EMR, Glue, Lambda), Azure (Data Factory, Synapse Analytics, ADLS)
Version Control & DevOps: GitHub, Docker, CI/CD
WORK EXPERIENCE
Senior Data Engineer @ Humana | Louisville, KY | Sep 2024 – Present
Designed and implemented robust Linux-based processes and infrastructure specifically tailored for enterprise data warehousing solutions.
Developed complex Shell Scripts to automate critical ETL processes, database loads, and data extracts within the data warehouse environment.
Provided advanced Oracle development expertise, creating and optimizing stored procedures, functions, and packages for data manipulation.
Led initiatives to identify and implement system/architecture improvements, enhancing the scalability and performance of data warehousing systems.
Engineered scalable data pipelines utilizing Informatica for ETL processes, ensuring efficient ingestion and transformation of healthcare data.
Managed and configured Oracle Exadata databases, optimizing query performance and ensuring data integrity for large-scale analytical reporting.
Enhanced various Linux-based toolsets and jobs, integrating new functionalities to streamline data flow and operational efficiency.
Orchestrated complex data workflows using Apache Airflow with Python, scheduling and monitoring daily data warehouse operations.
Implemented rigorous data quality checks and validation mechanisms within the Linux environment to ensure accuracy of healthcare data.
Automated CI/CD pipelines using Jenkins for seamless deployment of data warehousing components and shell scripts.
Collaborated with cross-functional teams in an Agile environment to deliver timely and high-quality data warehousing solutions.
Monitored and maintained Linux server environments, applying best practices for security and resource management.
Technologies Used: Linux, Shell Scripting, Oracle Exadata, Informatica, Apache Airflow, Python, SQL, AWS (S3), GitHub, Jenkins, Agile
Data Engineer @ Goldman Sachs | New York, NY | Oct 2021 – Jul 2023
Implemented and managed Linux-based infrastructure to support critical data warehousing operations for financial datasets.
Developed and maintained advanced Shell Scripts for automating data ingestion, transformation, and loading into Oracle databases.
Executed complex Oracle development, including database schema design, query optimization, and performance tuning for large tables.
Actively contributed to system and architecture improvements, enhancing the reliability and efficiency of ETL processes.
Utilized Informatica to design and build robust ETL pipelines, integrating diverse financial data sources into the data warehouse.
Managed relational databases, primarily Oracle, ensuring high availability and optimal performance for analytical workloads.
Enhanced existing Linux-based toolsets, scripts, and cron jobs to automate daily data extract and load procedures.
Developed Python scripts for data validation, complex data transformations, and integration with various enterprise systems.
Integrated Apache Airflow with Python for orchestrating intricate data pipelines and ensuring timely data delivery to stakeholders.
Maintained comprehensive documentation for Linux configurations, shell scripts, and Oracle database procedures.
Participated in Agile sprint planning and daily stand-ups, ensuring alignment with project goals and delivery timelines.
Implemented version control using GitHub for all shell scripts, Python code, and Oracle SQL objects.
Technologies Used: Linux, Shell Scripting, Oracle, Informatica, Python, Apache Airflow, SQL, Unix, GitHub, Jenkins, Agile
Junior Data Engineer @ Target | Minneapolis, MN | Nov 2019 – Sep 2021
Assisted in the implementation and configuration of Linux-based processes for retail data warehousing initiatives.
Developed and maintained basic Shell Scripts to automate routine data cleanup and file transfer operations within Unix environments.
Contributed to database development using PostgreSQL and SQL Server, focusing on schema creation and query optimization.
Supported ETL processes by enhancing database load and extract functionalities, ensuring data accuracy for reporting.
Utilized Python for data processing and scripting, assisting in the development of data transformation logic for retail analytics.
Gained practical knowledge of Unix file systems, including mount types, permissions, and utilizing standard tools like `awk` and `sed`.
Assisted in orchestrating data workflows using Apache Airflow with Python, scheduling daily data batch jobs.
Participated in identifying areas for system and architecture improvements within existing data pipelines.
Implemented data quality validation checks using SQL and Python to ensure the integrity of incoming retail data.
Collaborated with senior engineers to troubleshoot and resolve issues related to Linux-based data processing.
Adhered to Agile methodology, actively participating in team meetings and contributing to project delivery.
Maintained version control of scripts and configurations using GitHub, ensuring collaborative development practices.
Technologies Used: Linux, Shell Scripting, PostgreSQL, SQL Server, Python, Apache Airflow, SQL, Unix, GitHub, Jenkins, Agile