
Cloud Data Engineer AWS/Azure - Spark & ETL Specialist

Location:
Hyderabad, Telangana, India
Posted:
March 24, 2026


Resume:

Poonkgodi ***********@*****.*** +1-437-***-****

Professional Summary

Cloud Data Engineer with 7+ years of experience in data engineering and an additional 7+ years of experience as an Associate in Banking & Financial Services, delivering scalable, secure, and high-performance data solutions across cloud and enterprise environments. Extensive expertise in AWS and Azure data platforms, ETL/ELT development, data warehousing, and large-scale data processing using Spark, Python, and SQL. Strong background in modernising legacy systems, enabling cloud migrations, and building governed analytical platforms for banking, financial services, and enterprise analytics. Proven ability to work closely with business, product, and engineering teams to translate complex requirements into reliable, compliant, and production-ready data solutions.

Core Technical Skills

Cloud Platforms: AWS (S3, Glue, Lambda, Redshift, Step Functions, CloudWatch, IAM), Azure (Data Factory, Azure Data Lake, Synapse Analytics, Databricks)

Data Engineering & Processing: ETL/ELT, Data Warehousing, Data Modelling, Batch & Near Real-Time Processing, Apache Spark (PySpark), Apache Airflow

Databases & Storage: Amazon Redshift, Azure Synapse Analytics, SQL Server, DB2, Enterprise Data Warehouses, Data Lakes, Data Marts

Programming & Querying: Python, SQL, PySpark

ETL & Integration Tools: AWS Glue, Azure Data Factory, Informatica, SSIS

DevOps, Monitoring & Automation: Azure DevOps CI/CD, AWS CloudWatch, Job Scheduling, Workflow Orchestration

Analytics & BI Enablement: Tableau, Power BI, Curated Analytical Datasets, Self-Service Analytics

Security, Governance & Compliance: IAM, Data Encryption, Role-Based Access Control, Data Quality, Validation & Audit Controls

Professional Experience

Cloud Data Engineer Client: Seismic, Toronto, ON January 2025 – Present

Project: Cloud-Native Analytics Platform for Content Enablement

Developing a cloud-based analytics and reporting platform enabling near real-time insights into content usage, customer engagement, and sales performance across enterprise clients.

Design and maintain highly scalable AWS-based data pipelines ingesting structured, semi-structured, and event-driven data from multiple SaaS and internal systems.

Develop and optimise ETL and ELT workflows using AWS Glue, PySpark, and AWS Lambda to process high-volume datasets with low latency.

Implement efficient data lake architectures on Amazon S3 using partitioning, lifecycle policies, and compression strategies to improve performance and control costs.

Build and manage analytical datasets in Amazon Redshift, optimising schema design, distribution styles, and sort keys for complex analytical workloads.

Orchestrate end-to-end workflows using AWS Step Functions and Apache Airflow, ensuring fault tolerance, retries, and dependency management.

Apply data quality frameworks including validation rules, anomaly detection, and reconciliation checks across ingestion and transformation layers.

Monitor pipeline health and performance using Amazon CloudWatch, implementing alerts and operational dashboards.

Collaborate with product managers, analysts, and business stakeholders to translate analytics requirements into scalable data models.

Enforce data security and governance through IAM roles, encryption at rest and in transit, and fine-grained access controls.

Deliver curated, well-documented datasets optimised for Tableau and ad-hoc analytics.
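The Hive-style date partitioning described above typically lays datasets out under `year=/month=/day=` key prefixes on S3 so query engines can prune partitions. A minimal sketch in plain Python; the bucket and dataset names are hypothetical, not taken from the project:

```python
from datetime import date

def partition_prefix(bucket: str, dataset: str, d: date) -> str:
    """Build a Hive-style partition prefix so engines such as Glue or
    Athena can prune by date instead of scanning the whole dataset."""
    return (f"s3://{bucket}/{dataset}/"
            f"year={d.year:04d}/month={d.month:02d}/day={d.day:02d}/")

# Route a daily batch to its partition (illustrative names)
prefix = partition_prefix("analytics-lake", "content_events", date(2025, 3, 14))
# prefix == "s3://analytics-lake/content_events/year=2025/month=03/day=14/"
```

Pairing prefixes like this with S3 lifecycle policies lets older partitions transition to cheaper storage classes automatically.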

Senior Data Engineer Client: Arcadia, Chennai, India February 2023 – October 2024

Project: Enterprise Data Warehouse Modernisation

Led the modernisation of an enterprise data warehouse to support advanced analytics, operational reporting, and business intelligence across multiple business units.

Designed and implemented end-to-end data ingestion and transformation pipelines using Azure Data Factory, Azure Data Lake, and Azure Synapse Analytics.

Developed scalable Spark-based transformation logic using Azure Databricks and PySpark for large and complex datasets.

Migrated legacy on-premises ETL workflows to Azure cloud platforms, improving scalability, reliability, and data availability.

Designed and maintained dimensional and fact-based data models to support enterprise-wide analytics and reporting.

Implemented data validation, reconciliation, and audit controls to ensure accuracy, completeness, and consistency across data layers.

Optimised SQL queries, Synapse workloads, and Databricks jobs to improve processing performance.

Enabled curated datasets for Power BI dashboards and self-service analytics.

Automated deployment, monitoring, and scheduling using Azure DevOps CI/CD pipelines.

Provided production support and troubleshooting while meeting defined SLAs.
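Dimensional models like those above commonly track attribute history with a Type 2 slowly changing dimension. A minimal, library-free sketch of the expire-and-insert step, assuming an `is_current` flag; the column names are illustrative, not the actual warehouse schema:

```python
def scd2_apply(dim_rows, incoming, key, tracked):
    """Type 2 SCD update: when a tracked attribute changes, expire the
    current dimension row and append the new version as current."""
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for rec in incoming:
        cur = current.get(rec[key])
        if cur is None:
            out.append({**rec, "is_current": True})      # brand-new key
        elif any(cur[c] != rec[c] for c in tracked):
            cur["is_current"] = False                    # expire old version
            out.append({**rec, "is_current": True})      # insert new version
    return out

dim = [{"cust_id": 1, "segment": "retail", "is_current": True}]
dim = scd2_apply(dim, [{"cust_id": 1, "segment": "private"}], "cust_id", ["segment"])
# dim now holds two versions: the expired "retail" row and a current "private" row
```

In Synapse or Databricks the same pattern is usually expressed as a `MERGE` statement; the Python version just makes the row-versioning logic explicit.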

ETL Developer / Data Engineer Cognizant Technology Solutions August 2019 – January 2023

Client: Northern Trust

Project: Cloud-Enabled Enterprise Data Platform

Role: Data Engineer October 2021 – January 2023

Designed cloud-ready ETL pipelines using Python, SQL, and Spark to process high-volume banking and asset servicing data.

Supported migration of selected workloads from on-premises systems to cloud-based storage and compute platforms.

Built reusable and modular transformation frameworks to standardise processing across custody, fund accounting, and investment data domains.

Enhanced enterprise data warehouse schemas to support analytics for trade processing, asset valuation, and client reporting.

Implemented data validation, reconciliation, and audit checks to ensure regulatory compliance and data accuracy.

Delivered analytics-ready datasets to support business intelligence and regulatory reporting initiatives.
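A reusable transformation framework like the one mentioned above can be modelled as small composable steps applied uniformly across feeds. A hedged sketch; the step functions and feed name are illustrative, not the actual framework:

```python
from functools import reduce

def pipeline(*steps):
    """Compose row-level transforms into one callable, so the same
    standard steps can be reused across custody and fund-accounting feeds."""
    return lambda row: reduce(lambda r, step: step(r), steps, row)

def normalise_currency(row):
    # Standardise ISO currency codes to upper case
    return {**row, "currency": row["currency"].upper()}

def add_audit_stamp(row):
    # Tag each row with its originating feed for audit checks
    return {**row, "source": "custody_feed"}

transform = pipeline(normalise_currency, add_audit_stamp)
out = transform({"trade_id": "T1", "currency": "usd"})
# out == {"trade_id": "T1", "currency": "USD", "source": "custody_feed"}
```

The same composition idea maps directly onto chained PySpark `DataFrame.transform` calls when the rows live in Spark rather than plain dicts.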

Project: Enterprise Data Integration & Reporting Platform

Role: ETL Developer August 2019 – September 2021

Developed and maintained ETL workflows using Informatica and SSIS for core banking, custody, and risk management systems.

Integrated data from transactional systems, reference data platforms, and third-party financial feeds.

Implemented complex data transformation, enrichment, and standardisation logic to meet regulatory and management reporting requirements.

Loaded curated datasets into SQL Server data warehouses and data marts for risk analytics and compliance reporting.

Performed ETL performance tuning to meet strict batch processing SLAs.

Conducted data reconciliation and validation in collaboration with QA and business teams.
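Reconciliation of the kind described above usually compares row counts and summed control totals between source and target before a batch is signed off. A minimal stdlib sketch, with the amount field name assumed for illustration:

```python
def reconcile(source_rows, target_rows, amount_field="amount"):
    """Compare row counts and control totals between source and target;
    non-zero differences are what a batch-control report would flag."""
    count_diff = len(source_rows) - len(target_rows)
    total_diff = (sum(r[amount_field] for r in source_rows)
                  - sum(r[amount_field] for r in target_rows))
    return {"count_diff": count_diff, "total_diff": round(total_diff, 2)}

src = [{"amount": 100.0}, {"amount": 250.5}]
tgt = [{"amount": 100.0}, {"amount": 250.5}]
result = reconcile(src, tgt)
# result == {"count_diff": 0, "total_diff": 0.0}  -> batch balances
```

In practice monetary totals would use `decimal.Decimal` rather than floats to avoid rounding drift over large batches.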

Associate – Banking & Financial Services January 2012 – April 2019

Western Union Engineering Tech Pvt Ltd

Project: iWatch – Next Generation Case Management System

Contributed to the development of AML, fraud detection, and regulatory compliance solutions for global money transfer operations.

Implemented online case creation and transaction monitoring workflows using MSMQ and integrated with TIBCO systems.

Led initiatives to track multiple transaction attempts and identify suspicious activities for US regulatory reporting.

Delivered regulatory reporting solutions supporting large-scale consumer transaction submissions.

Developed PCI-compliant components to securely mask sensitive credit card data.

Supported KYC and KYC-PEP initiatives for customer verification and compliance.

Performed unit testing, system integration, deployments, and ongoing production support.
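PCI-style masking as described above typically retains at most the last four digits of a card number for display. A simple sketch of that rule in Python (the production component was C#.NET; this is illustrative only):

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number, keeping only the last four digits,
    in line with PCI DSS display rules (at most first six / last four)."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]

masked = mask_pan("4111-1111-1111-1111")
# masked == "************1111"
```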

Technologies: C#.NET, ASP.NET, WCF, Entity Framework, ADO.NET, SQL Server, SSIS, WPF, LINQ, DB2, XML, SOAP UI, LDAP, Okta, API Security


