
Data Engineer

Location:
Chicago, IL
Posted:
July 31, 2025


Resume:

Name: Ashok Bahatam

Email: *****.*********@*****.*** Phone: +1-813-***-****

Data Engineer

PROFESSIONAL SUMMARY:

•6+ years of experience in Information Technology, including hands-on experience as a Data Engineer specializing in cloud data solutions, ETL development, and data pipeline automation.

•Experience in designing and implementing scalable data pipelines and optimizing data processing workflows using Azure.

•Strong knowledge of and experience in building and managing cloud-based data warehouses with Snowflake and Azure Synapse, ensuring fast, efficient, and cost-effective data storage and access.

•Skilled in implementing real-time data streaming and event-driven architecture using Azure Event Hubs to support high-throughput and low-latency data processing.

•Advanced experience in data processing and transformation with Databricks, Apache Spark, and PySpark, delivering scalable and high-performance data workflows (see the sketch after this summary).

•Led migration projects from on-premises systems to cloud-based architectures, improving data processing and reducing operational costs.

•Designed and deployed machine learning models in Databricks to support predictive analytics and drive data-driven business decisions.

•Proficient in automating data orchestration and ETL processes using Apache Airflow, ensuring reliable and efficient data workflows.

•Strong expertise in infrastructure provisioning using Terraform and CloudFormation, ensuring consistent, reproducible, and scalable cloud deployments.

•Experienced in setting up and maintaining CI/CD pipelines using Jenkins, Azure DevOps, and Git, enhancing collaboration and streamlining software development processes.

•Adept at monitoring and troubleshooting cloud environments using Azure Monitor and Stackdriver, ensuring optimal system performance and uptime.

•In-depth knowledge of data security and compliance, including implementing IAM and Key Vault and enforcing PII and PCI compliance across systems.

•Experience with source control systems such as Git and Bitbucket, and with Jenkins for CI/CD deployments.

•Good knowledge of RDBMS concepts (Oracle 12c/11g, MS SQL Server 2012) and strong SQL query-writing skills (using Erwin and SQL Developer), including stored procedures, triggers, and complex SQL queries involving inner and outer joins across multiple tables.

•Excellent knowledge of Unit Testing, Regression Testing, Integration Testing, User Acceptance Testing, Production implementation, and Maintenance.

•Excellent communication and collaboration skills, with a proven ability to work effectively in cross-functional teams to meet business goals and technical requirements.

•Ability to review technical deliverables, mentor and drive technical teams to deliver quality products.

•Demonstrated ability to communicate and gather requirements, partner with Enterprise Architects, Business Users, Analysts, and development teams to deliver rapid iterations of complex solutions.

•Versatile and adaptable, with a strong drive to stay up to date with the most recent technological advancements.
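
Illustrative sketch (not drawn from any specific project below): a minimal PySpark batch transformation of the kind run on Databricks; the table and column names are hypothetical placeholders.

# Minimal PySpark sketch of a Databricks-style batch transformation.
# Table and column names (raw_orders, curated.orders_daily) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

orders = (
    spark.read.table("raw_orders")                      # hypothetical source table
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
)

daily = (
    orders.groupBy("order_date", "region")
    .agg(F.count("*").alias("order_count"),
         F.sum("order_amount").alias("total_amount"))
)

daily.write.mode("overwrite").saveAsTable("curated.orders_daily")   # hypothetical target table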

TECHNICAL SKILLS:

Cloud Platforms: Azure
Data Warehousing: Snowflake, Azure Synapse
Data Processing: Databricks, Apache Spark, Kafka
ETL Tools: Azure Data Factory, SSIS, Informatica
Version Control: Git
Machine Learning: Databricks, PySpark
Monitoring Tools: Azure Monitor, Stackdriver
Database Languages: SQL
Security & Compliance: IAM, Key Vault, PII, PCI compliance
Business Intelligence & Reporting: Azure Synapse, Snowflake
CI/CD Tools: Jenkins, Azure DevOps

PROFESSIONAL EXPERIENCE:

Client: Principal Financial, Des Moines, IA Sep 2023 to Present Role: Data Engineer

Responsibilities:

•Led the migration of on-premises systems to Azure to improve data processing and reduce operational costs.

•Architected a Snowflake data warehouse solution for centralized storage and improved data access.

•Integrated real-time data streaming with Kafka, enabling seamless flow of data between microservices.

•Developed predictive analytics capabilities by deploying machine learning models in Databricks.

•Leveraged Databricks and Apache Spark for large-scale data transformations, ensuring high performance and scalability.

•Implemented event-driven architecture using Azure Event Hubs for low-latency data streaming and real-time processing.

•Built and automated data orchestration workflows using Apache Airflow to enhance ETL processing efficiency (see the sketch after this section).

•Ensured high availability and resilience of cloud resources by implementing Terraform for infrastructure as code.

•Configured CI/CD pipelines with Jenkins and Git to streamline the deployment of new features and updates.

•Employed Terraform and Git for consistent versioning and automated deployment across cloud environments.

•Set up custom monitoring dashboards with Azure Monitor to track performance of cloud-based services.

•Developed and optimized complex SQL queries to improve reporting speed and analytical performance.

Environment: Azure, Snowflake, Azure Synapse, Databricks, Apache Spark, Kafka, Azure Event Hubs, Apache Airflow, Terraform, Git, Jenkins, PySpark, SQL
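
Illustrative sketch (DAG id, task callables, and target names are hypothetical, not taken from the engagement above): a minimal Apache Airflow DAG of the daily extract-and-load pattern described in the orchestration bullet.

# Minimal Airflow sketch of a daily extract-and-load workflow.
# DAG id, task callables, and target names are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_stage(**context):
    # Placeholder: pull the day's records from the source system into a staging area.
    pass

def load_to_snowflake(**context):
    # Placeholder: copy the staged files into a Snowflake table.
    pass

with DAG(
    dag_id="daily_snowflake_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_stage", python_callable=extract_to_stage)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)
    extract >> load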

Client: Western & Southern Financial Group, Cincinnati, OH Jun 2021 to Aug 2023 Role: Data Engineer

Responsibilities:

• Designed and implemented automated data pipelines using Azure Data Factory to enhance ETL workflows.

• Utilized ADF for seamless ETL processing and Snowflake integration.

• Migrated on-premises legacy data warehouses to Snowflake, improving performance and storage efficiency.

• Architected a business intelligence solution using Azure Synapse Analytics, enabling fast analytics from diverse sources.

• Built machine learning models within Databricks to support predictive business analytics.

• Engineered Apache Spark data workflows to process large datasets at scale, achieving 30% faster processing times.

• Implemented Kafka for event streaming, enabling high-throughput communication across multiple services (see the streaming sketch after this section).

• Set up Azure Event Hubs to enable event-driven architecture for processing high-volume streaming data.

• Automated ETL tasks with Apache Airflow, reducing manual intervention and improving system reliability.

• Managed infrastructure provisioning using Terraform to ensure consistent and scalable cloud deployments.

• Integrated version control systems like Git with Jenkins to enhance collaboration and streamline CI/CD pipelines.

• Set up Stackdriver and Azure Monitor to continuously monitor cloud resources and ensure uptime and performance.

Environment: Azure, Snowflake, Azure Synapse, Databricks, Apache Spark, Kafka, Azure Event Hubs, Apache Airflow, Terraform, Git, Jenkins, PySpark, SQL
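
Illustrative sketch (broker address, topic, and paths are hypothetical): a minimal PySpark Structured Streaming job consuming events from Kafka, the general pattern behind the event-streaming bullets above.

# Minimal PySpark Structured Streaming sketch for consuming Kafka events.
# Broker address, topic, and checkpoint/output paths are hypothetical;
# the spark-sql-kafka connector must be available on the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("checkpointLocation", "/tmp/checkpoints/transactions")
    .option("path", "/tmp/output/transactions")
    .outputMode("append")
    .start()
)

query.awaitTermination()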

Client: Corteva, Indianapolis, Indiana Jul 2019 to May 2021 Role: Data Engineer

Responsibilities:

• Analyzed business requirements and prepared detailed specifications that followed the project guidelines required for development.

• Created parameterized pipelines in Azure Data Factory to handle dynamic source ingestion from SAP and Salesforce.

• Integrated HubSpot and Eloqua APIs into Azure pipelines for automated campaign data extraction.

• Tuned PostgreSQL table partitions and optimized joins for faster processing of marketing analytics datasets.

• Built custom Azure Functions to cleanse and normalize JSON data from multiple third-party APIs (see the sketch after this section).

• Engineered end-to-end ETL solutions for lead generation, campaign tracking, and CRM sync operations.

• Connected Azure Logic Apps with email and notification systems for real-time alerting of data quality issues.

• Developed web analytics dashboards by blending Google Analytics and CRM data in Power BI.

• Enabled event-driven architecture for real-time data capture using Azure Functions and Event Grid.

• Leveraged Salesforce metadata APIs for schema mapping and automated sync validation.

• Implemented Hotjar data collection workflows for behavior-driven segmentation in retargeting campaigns.

• Applied role-based security and parameterized datasets in ADF for controlled access to marketing data.

• Monitored and optimized pipeline execution using Azure Monitor and Log Analytics.

• Collaborated with marketing and sales teams to align data workflows with campaign strategies and KPIs.

Environment: Azure Data Factory, Azure Functions, Logic Apps, PostgreSQL, SQL Server, Salesforce, SAP, HubSpot, Eloqua, Power BI, ETL Pipelines, Google Analytics, Hotjar, REST APIs, Azure Monitor, CI/CD, Event Grid, JSON, Git
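
Illustrative sketch (incoming field names and normalization rules are hypothetical): a minimal HTTP-triggered Azure Function in Python that cleanses and normalizes third-party JSON, the general pattern behind the Azure Functions bullet above.

# Minimal Azure Functions (Python) sketch for cleansing third-party JSON.
# The incoming field names and normalization rules are hypothetical.
import json
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    try:
        payload = req.get_json()
    except ValueError:
        return func.HttpResponse("Invalid JSON payload", status_code=400)

    # Normalize inconsistent field names and trim/lowercase string values.
    record = {
        "email": (payload.get("email") or payload.get("Email") or "").strip().lower(),
        "campaign_id": payload.get("campaignId") or payload.get("campaign_id"),
        "source": (payload.get("source") or "unknown").strip().lower(),
    }
    return func.HttpResponse(json.dumps(record), mimetype="application/json")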


