
Senior Data Engineer - Cloud ETL/ELT Architect

Location:
Far North, OH, 43240
Salary:
80000
Posted:
March 20, 2026


Resume:

Abhishek A

Email: *************@*****.***

Mobile: 312-***-****

LinkedIn: linkedin.com/in/abhishek-arigela-8097981b5

Data Engineer

PROFESSIONAL SUMMARY

Designs scalable batch and streaming pipelines across cloud platforms, applying Python, SQL, Spark, and Snowflake to deliver governed, reliable, analytics-ready enterprise data products.

Builds resilient ELT and ETL workflows with Airflow, Databricks, Azure Data Factory, and AWS Glue, improving data availability, quality, lineage, governance, and scalability enterprise-wide.

Develops dimensional models, lakehouse architectures, and warehouse integrations supporting reporting, governance, compliance, and business intelligence across healthcare, finance, and public-sector data ecosystems.

Collaborates with analysts, engineers, and stakeholders to automate ingestion, optimize performance, enforce security standards, and operationalize trusted datasets for enterprise decision-making workflows at scale.

Applies strong written and oral communication skills to enhance team collaboration and streamline project updates.

Brings a passion for automation and continual process improvement, boosting operational efficiency and reducing errors.

TECHNICAL SKILLS

Cloud Platforms - AWS (EC2, Lambda, Glue, S3, Kinesis, IAM, EKS, Redshift), Azure (ADF, Synapse, Azure SQL, Entra ID, Key Vault), GCP (BigQuery, GKE, Cloud Storage)

Infrastructure as Code (IaC) - Terraform, Ansible, ARM Templates, Bicep, CloudFormation, Jenkins, Azure DevOps

Monitoring and Incident Response - New Relic, AWS CloudWatch, Azure Monitor, ServiceNow, RCA, SLA Management

Security and Compliance - IAM, Encryption, NIST 800-53, CIS Benchmarks, PCI-DSS, RBAC, Key Vault, Audit Logging

CI/CD and DevOps - Jenkins, GitHub Actions, Git, GitLab, CodePipeline, CI/CD Pipelines, Shell Scripting

Programming & Scripting - Python, SQL, Bash, PowerShell, Perl

Data Engineering - AWS Glue, Azure Data Factory, DBT, Apache Kafka, Spark, Hive, GCP Dataflow, Informatica

Databases - Redshift, Snowflake, Azure SQL, PostgreSQL, MongoDB, MySQL, Oracle, Oracle Exadata

Dashboards and Visualization - Power BI, Tableau, Looker, AWS QuickSight

System Administration - Linux-based processes, Unix file systems

PROFESSIONAL EXPERIENCE

State of Ohio Department of Medicaid May 2023 – Present

Data Engineer

Analyzed source-to-target mappings and business rules for Medicaid data domains, translating complex requirements into governed warehouse structures supporting policy, operations, reporting, and stakeholder transparency.

Modernized legacy integration workflows with SQL Server, SSIS, and Python, improving processing stability, maintainability, and delivery of curated datasets for public-sector stakeholders statewide.

Established scalable data models and master data practices, enabling consistent cross-program analytics, stronger reference data integrity, reusable reporting assets, and governance standardization enterprise-wide.

Implemented Azure Synapse, Power BI, and SQL-based transformations to support Medicaid reporting, improving accessibility of trusted metrics for program oversight, compliance, and decision-making statewide.

Monitored pipeline performance, failure recovery, and scheduling dependencies across enterprise workflows, ensuring dependable data movement, timely issue remediation, sustained service continuity, and operational visibility.
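
The failure-recovery pattern described in this bullet can be sketched in plain Python; the function, task, and parameter names below are illustrative, not drawn from any specific project:

```python
import time

def run_with_retries(task, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run a pipeline task, retrying with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure for remediation
            sleep(base_delay * 2 ** (attempt - 1))  # back off before retrying

# Hypothetical flaky extract that succeeds on the third attempt.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source error")
    return "batch-ok"

result = run_with_retries(flaky_extract, max_attempts=5, sleep=lambda s: None)
# result == "batch-ok" after two retried failures
```

In practice an orchestrator such as Airflow provides this retry behavior declaratively per task; the sketch only shows the underlying pattern.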

Engineered a robust Data Warehousing solution using Oracle and Informatica, enhancing data processing speed by 40% and enabling real-time analytics for critical business insights.

Orchestrated system/architecture improvements for Oracle Exadata and Unix file systems, achieving 99.95% reliability and a 30% reduction in downtime for enterprise applications.

Pioneered automation of backend-focused database load/extract processes with Perl and Linux-based processes, reducing manual intervention by 70% and improving data accuracy by 25%.

Norton Healthcare May 2022 – April 2023

Data Engineer

Engineered healthcare data pipelines with Azure Data Factory, Python, and SQL, unifying clinical and operational datasets for accurate reporting, interoperability, governed access, and reliability.

Streamlined Spark and Databricks transformations for large-scale healthcare records, improving data availability, schema consistency, trusted analytics delivery, and cross-functional stakeholder alignment across programs enterprise-wide.

Orchestrated resilient ETL workflows with Apache Airflow and Delta Lake, supporting scalable ingestion, audit-ready lineage, dependable downstream business intelligence consumption, and stronger operational resilience.

Validated data quality controls, reconciliation rules, and metadata standards across warehouse layers, strengthening regulatory alignment, issue resolution speed, confidence in reporting outputs, and transparency.
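
A reconciliation rule of the kind this bullet describes can be illustrated with a minimal, self-contained sketch; the key name `claim_id` and the sample rows are hypothetical:

```python
def reconcile(source_rows, target_rows, key):
    """Compare source and target extracts on a business key and report gaps."""
    src = {r[key] for r in source_rows}
    tgt = {r[key] for r in target_rows}
    return {
        "missing_in_target": sorted(src - tgt),      # loaded rows that were dropped
        "unexpected_in_target": sorted(tgt - src),   # rows with no known source
        "matched": len(src & tgt),
    }

source = [{"claim_id": 1}, {"claim_id": 2}, {"claim_id": 3}]
target = [{"claim_id": 1}, {"claim_id": 3}, {"claim_id": 4}]
report = reconcile(source, target, key="claim_id")
# report["missing_in_target"] == [2]; report["unexpected_in_target"] == [4]
```

Real warehouse reconciliation would typically compare row counts, checksums, or hashes per partition rather than materializing keys in memory, but the control logic is the same.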

Configured secure role-based access, encryption, and monitoring across Azure and Snowflake environments, protecting sensitive healthcare datasets while sustaining reliable analytic operations and compliance readiness.

Modernized Agile delivery practices by standardizing Linux environment setup and relational database workflows, boosting project delivery speed by 35% and improving cross-functional collaboration.

Architected toolsets and scripts for data flows and data warehouses, achieving a 50% reduction in data processing time and enhancing data-driven decision-making capabilities.

Synchrony June 2020 – December 2021

Data Engineer

Architected cloud-native ingestion pipelines with Python, Spark, Airflow, and Snowflake, consolidating financial data sources into trusted warehouse layers for faster downstream analytics delivery.

Optimized ELT workflows across AWS Glue, S3, and Redshift, reducing pipeline bottlenecks while improving data quality, lineage visibility, operational reliability, and governance controls enterprise-wide.

Integrated batch and near-real-time datasets through Kafka and Databricks, enabling governed customer reporting, reusable curated datasets, stronger platform scalability, and broader analytics adoption organization-wide.

Standardized dimensional models and SQL transformations for enterprise reporting, strengthening consistency across lending analytics, dashboard consumption, compliance-oriented validations, and business-facing data trust enterprise-wide.

Automated CI/CD deployment patterns with Git, GitHub Actions, and Terraform, improving release repeatability, infrastructure governance, secure platform change management, and environment stability across platforms.

Quantified the impact of standard tooling and Unix pipes on file-system workloads, tuning mount types and permissions to deliver a 20% improvement in system performance and security.

Drove process improvements through clear written and oral communication and a focus on automation and continual improvement, enhancing team productivity by 40% and project success rates by 30%.

EDUCATION

Master's in Management Information Systems - Northern Illinois University

Bachelor of Technology - Christ University
