
Roopin Vipparthi

Data Engineer

********@*****.***

+1-660-***-****

Objective:

Data Engineer with over 6 years of experience in data architecture, ETL development, and data warehousing. Seeking a role to leverage my expertise in building robust data pipelines, optimizing performance, and delivering actionable data insights for enhanced analytics and decision-making.

Professional Summary:

-Proficient in ETL development, data integration, and data transformation using Apache NiFi, SAP BODS, Apache Airflow, and custom scripts.

-Strong knowledge of data modeling techniques, data warehousing best practices, and data structure design.

-Hands-on experience with big data frameworks, including Hadoop and Apache Spark, for processing and analyzing large datasets.

-Proficient in SQL, Python, and Java for data manipulation, transformation, and automation.

-Experienced in cloud data storage and processing on AWS and Azure, managing data lakes and data warehousing in the cloud.

-Knowledgeable in data security best practices, encryption techniques, and compliance requirements, with a track record of implementing data security measures.

-Proven ability to perform data quality assessments, identify and address data issues, and conduct database performance tuning for optimal query response times.

-Skilled in automating data processes, reducing manual interventions, and improving overall data pipeline efficiency.

-Collaborative mindset, with experience working with cross-functional teams, data analysts, and data scientists to design effective data models and enable advanced analytics.

-Strong problem-solving and troubleshooting skills in data integration, transformation, and data quality enhancement.

-Led the design and implementation of a data warehouse, improving data accessibility and reporting capabilities.

-Provided mentoring and guidance to a team of junior data engineers, fostering professional growth and knowledge sharing.

Technical Skills:

-ETL Development: Proficient in designing, developing, and maintaining complex ETL processes. Experience with SAP BODS, Apache NiFi, Apache Airflow, and custom scripts for data integration (a minimal sketch follows this list).

-Data Modeling and Warehousing: Strong knowledge of data modeling techniques and data warehousing best practices. Skilled in data structure design and schema optimization.

-Big Data Technologies: Hands-on experience with big data frameworks, including Hadoop and Apache Spark. Proficient in processing and analyzing large datasets.

-Programming Languages: Proficient in SQL, Python, and Java for data manipulation, transformation, and automation.

-Cloud Platforms: Well-versed in cloud data storage and processing on AWS and Azure. Expertise in managing data lakes and data warehousing in the cloud.

-Data Security and Encryption: Knowledge of data security best practices, encryption techniques, and compliance requirements. Implemented data security measures to protect sensitive information.

-Data Quality and Performance Tuning: Proven ability to perform data quality assessments, identify and address data issues, and conduct database performance tuning for optimal query response times.
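
For illustration, a minimal ETL sketch in Python; the file, column, and table names are hypothetical, and pandas with a local SQLite database stand in for a production source and warehouse:

    import sqlite3
    import pandas as pd

    # Extract: read raw records from a CSV export (hypothetical path)
    raw = pd.read_csv("orders_export.csv")

    # Transform: standardize column names, drop duplicates, derive a total
    raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
    raw = raw.drop_duplicates(subset="order_id")
    raw["order_total"] = raw["quantity"] * raw["unit_price"]

    # Load: append the cleaned rows into a warehouse staging table
    with sqlite3.connect("warehouse.db") as conn:
        raw.to_sql("stg_orders", conn, if_exists="append", index=False)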

Professional Experience:

Data Engineer Adobe, Tulsa, OK Jun 2022 – Present

-Led ETL process improvements, reducing data processing time by 15% and ensuring data accuracy.

-Collaborated with data analysts and scientists to develop data models, enabling advanced analytics and insights.

-Automated data processes using SAP BODS, Apache NiFi, and Apache Airflow, streamlining data integration and reducing manual interventions (see the DAG sketch after this role).

-Implemented data quality checks, monitoring, and alerts, maintaining data integrity.

-Migrated and managed data on AWS and Azure, resulting in cost savings and improved scalability.

-Worked with Snowflake for cloud-based data warehousing, optimizing data storage and performance for scalable data processing and analytics.

-Designed and implemented a scalable data warehouse solution on AWS Redshift, increasing data accessibility by 20% and cutting reporting time by 25%.

-Worked with AWS Glue to automate data integration and transformation processes, reducing manual ETL tasks by 25%.

-Implemented AWS CloudWatch and CloudTrail for real-time monitoring and logging, ensuring compliance with data security best practices.

-Worked with Hadoop to process large datasets, utilizing Apache Hive for querying and Apache HBase for NoSQL data storage to support real-time data analytics.

-Leveraged Amazon RDS for relational databases, ensuring high availability and optimal performance for transactional systems.
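
A simplified sketch of the kind of Airflow DAG used for this automation; the DAG id, schedule, and task bodies are illustrative placeholders, not the production pipeline (assumes Airflow 2.4+):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Pull new records from the source system (placeholder)
        print("extracting")

    def transform():
        # Apply cleansing and business rules (placeholder)
        print("transforming")

    def load():
        # Write the results to the warehouse (placeholder)
        print("loading")

    with DAG(
        dag_id="daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t3 = PythonOperator(task_id="load", python_callable=load)
        t1 >> t2 >> t3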

Senior Data Engineer ChenMed, Miami Gardens, FL July 2021 – June 2022

-Designed and implemented a data warehouse, improving data accessibility and enabling faster reporting (20% improvement).

-Managed complex ETL workflows for diverse business intelligence requirements, ensuring data accuracy and optimizing performance.

-Utilized SAP BODS to automate data extraction and transformation processes, improving ETL workflows and ensuring real-time data availability.

-Optimized ETL processes using Snowflake Snowpipe to automate real-time data ingestion, ensuring timely data updates for downstream business processes.

-Automated data extraction and transformation using AWS Glue, enhancing data pipeline efficiency and improving data processing time by 30%.

-Utilized Hadoop Distributed File System (HDFS) to store and manage massive amounts of data with high availability and fault tolerance, reducing downtime by 20%.

-Collaborated with the data science team to build machine learning pipelines, enhancing decision-making and efficiency.

-Used Snowflake’s Streams and Tasks to efficiently track data changes and automate incremental ETL jobs, enhancing the speed and accuracy of data processing workflows (see the sketch after this role).

-Conducted data profiling and analysis, reducing data anomalies by 20% through data cleansing and standardization.

-Mentored junior data engineers, enhancing the team's capabilities and productivity.
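
A minimal sketch of the Streams-and-Tasks pattern, issued here through the Snowflake Python connector; the account credentials, warehouse, and table names are all hypothetical:

    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",  # hypothetical credentials
        user="etl_user",
        password="...",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    cur = conn.cursor()

    # A stream records inserts/updates/deletes on the source table
    cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE raw_orders")

    # A task polls the stream and loads only changed rows, i.e. incremental ETL
    cur.execute("""
        CREATE TASK IF NOT EXISTS load_orders
          WAREHOUSE = ETL_WH
          SCHEDULE = '5 MINUTE'
          WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
        AS
          INSERT INTO curated_orders
          SELECT order_id, amount, updated_at FROM orders_stream
    """)
    cur.execute("ALTER TASK load_orders RESUME")  # tasks start suspended

Data Engineer Dcube, Hyderabad Apr 2019 – Dec 2020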

-Orchestrated data pipeline workflows using Apache NiFi and Apache Airflow, improving data integration efficiency and ensuring real-time data availability.

-Collaborated with business stakeholders to define data requirements, developed data solutions aligned with business goals, and enabled data-driven decisions.

-Performed database performance tuning, reducing query response times by 25% and enhancing system efficiency.

-Used Hadoop’s YARN for resource management and job scheduling, optimizing cluster performance and resource allocation by 15% (see the Spark-on-YARN sketch after this role).

-Collaborated with data scientists to integrate Hadoop-based processing with machine learning pipelines to enhance predictive analytics.

-Implemented Snowflake’s Secure Data Sharing to ensure secure and efficient sharing of data across departments and with external partners.

-Conducted performance tuning of Snowflake queries, optimizing them for faster execution and reducing query response time by 20%.

-Designed and maintained cloud-based solutions, using AWS EC2 and Snowflake for scalable data processing, reducing overall processing time for data-heavy tasks by 20%.

-Managed data source connections, including APIs, databases, and flat files, to ensure uninterrupted data flow and increased data availability.
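
A short sketch of the Spark-on-YARN setup described above; the application name, executor sizing, and HDFS paths are illustrative assumptions:

    from pyspark.sql import SparkSession

    # Submit to the YARN resource manager; executor sizing is what gets tuned
    spark = (
        SparkSession.builder
        .appName("daily_aggregates")  # hypothetical job name
        .master("yarn")
        .config("spark.executor.instances", "4")
        .config("spark.executor.memory", "4g")
        .config("spark.dynamicAllocation.enabled", "true")
        .getOrCreate()
    )

    # Typical pattern: read from HDFS, aggregate, write back to HDFS
    events = spark.read.parquet("hdfs:///data/events")  # hypothetical path
    daily = events.groupBy("event_date").count()
    daily.write.mode("overwrite").parquet("hdfs:///data/daily_counts")

Education: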

-Master's in Computer Science, Northwest Missouri State University

Certifications:

-SnowPro Core Certification, Snowflake, Feb 2025

-AWS Certified Data Engineer - Associate, AWS, Jan 2025


