Bhargav Chintha — Senior Data Engineer
201-***-**** ****************@*****.***
PROFESSIONAL SUMMARY:
Senior Data Warehouse Engineer with over 5 years of experience designing, implementing, and optimizing robust Linux-based processes and infrastructure for complex data warehousing solutions. Proficient in Shell Scripting and Oracle development, including Oracle Exadata, with a focus on enhancing ETL/database load/extract processes and ensuring data integrity. Proven track record of leveraging ETL tools such as Informatica and orchestration platforms such as Apache Airflow with Python to drive automation and continuous process improvement in Agile environments. Skilled at identifying and implementing system and architecture improvements for optimal data performance and reliability across relational databases, with strong command of Python, Perl, and Unix file systems.
WORK EXPERIENCE:
Senior Data Engineer @ Axon — Seattle, Washington Jan 2024 – Present
Implemented and configured advanced Linux-based processes and infrastructure to support high-performance data warehousing operations, enhancing system efficiency significantly.
Developed and optimized complex Shell Scripts for automated data extraction, transformation, and loading (ETL) into Oracle Exadata, improving data pipeline reliability.
Managed and maintained Oracle Exadata environments, ensuring peak performance and availability for critical business intelligence and reporting systems.
Designed and executed system and architecture improvements, leading to a 20% reduction in data processing times across core data warehouse functions.
Enhanced existing ETL and database load/extract processes using Informatica, achieving greater throughput and accuracy in data ingestion.
Orchestrated intricate data workflows and dependencies using Apache Airflow with Python, automating job scheduling and monitoring for robust data pipelines.
Conducted in-depth analysis of Unix file systems to optimize storage, permissions, and use of standard tools, contributing to a secure and efficient data environment.
Collaborated with cross-functional teams to integrate new data sources and implement scalable data models within the data warehouse architecture.
Developed comprehensive documentation for Linux configurations, shell scripts, and ETL processes, facilitating knowledge transfer and system maintenance.
Provided expert guidance on data warehousing best practices and automation strategies, mentoring junior engineers in advanced data engineering techniques.
Ensured data quality and integrity by implementing robust validation rules and monitoring mechanisms throughout the data lifecycle.
Leveraged Python for custom scripting solutions, enhancing data manipulation capabilities and supporting complex analytical requirements.
Technologies Used: Azure, Linux, Oracle Exadata, Shell Scripting, Informatica, Apache Airflow, Python, SQL, Bash
Data Engineer @ Humana — Louisville, Kentucky Mar 2021 – Dec 2022
Administered and optimized Linux environment setups, ensuring robust infrastructure for data warehousing and analytics platforms.
Developed and maintained sophisticated Shell Scripts to automate routine database tasks, data cleanup, and system health checks.
Played a key role in Oracle development, including schema design, query optimization, and stored procedure creation for critical data warehouse applications.
Enhanced ETL/database load/extract processes, leading to improved data refresh rates and reduced latency for reporting.
Utilized Python extensively for data manipulation, scripting complex data transformations, and integrating various data sources into the warehouse.
Implemented and managed Unix file systems, including configuration of mount types, permissions, and standard tools to secure data assets.
Contributed to system and architecture improvements for the data warehousing platform, focusing on scalability and performance enhancements.
Collaborated within an Agile framework to deliver iterative data solutions, adapting to evolving business requirements and priorities.
Developed Perl scripts for specific data processing needs, expanding the team's capabilities for legacy system integration.
Managed relational databases, performing regular performance tuning, backup, and recovery operations to ensure data availability.
Identified opportunities for process automation, designing and implementing solutions that reduced manual effort by 15%.
Participated in code reviews and provided technical feedback, ensuring adherence to best practices for data engineering solutions.
Technologies Used: AWS, Linux, Oracle, Shell Scripting, Python, Perl, Unix, SQL, Agile
Data Engineer @ RELEX Solutions — Atlanta, Georgia Jun 2019 – Feb 2021
Designed and implemented Linux-based infrastructure components to support diverse data processing requirements and applications.
Developed efficient Shell Scripts for automating data ingestion, transformation, and validation routines within data pipelines.
Optimized ETL/database load/extract processes, significantly improving the speed and reliability of data delivery to end-users.
Managed and configured Unix file systems, ensuring proper access controls and efficient data storage management practices.
Collaborated with database administrators to fine-tune relational databases for optimal performance and query execution.
Contributed to the identification and implementation of system improvements, enhancing overall data platform stability.
Utilized Python for scripting custom data parsers and aggregators, supporting advanced analytics and reporting needs.
Assisted in the development and maintenance of data warehousing solutions, focusing on star and snowflake schema designs.
Executed data migration projects, ensuring seamless transfer of large datasets with minimal downtime and data loss.
Monitored data pipelines for anomalies and performance bottlenecks, implementing proactive solutions to maintain data flow efficiency.
Participated in the full software development lifecycle, adhering to best practices for code quality and deployment.
Supported the integration of various data sources, developing connectors and APIs to consolidate information for business intelligence.
Technologies Used: Google Cloud Platform, Linux, Shell Scripting, Python, Unix, SQL, PostgreSQL, MySQL
TECHNICAL SKILLS:
Programming Languages: Python, Perl, SQL, PL/SQL
Data Warehousing: Oracle Exadata, Relational Databases, Data Flows
Scripting & Operating Systems: Shell Scripting, Linux, Unix File Systems, Bash Scripting
ETL & Orchestration: Informatica, Apache Airflow, ETL Processes, Data Integration
Databases: Oracle, SQL Server, MySQL, PostgreSQL
Methodologies: Agile, Scrum, Waterfall
EDUCATION:
Master of Science in Information Systems @ Wilmington University