
Senior Data Engineer - Global Cloud Platforms & Pipelines

Location:
Vijayawada, Andhra Pradesh, India
Posted:
April 30, 2026


Virat Chowdary

Email: *****************@*****.***

Mobile: 512-***-****

LinkedIn: www.linkedin.com/in/virat-c-7915421a6

Senior Data Engineer

PROFESSIONAL SUMMARY

Architected scalable global data platforms across AWS, Azure, and GCP, delivering governed pipelines, reliable warehouses, and analytics-ready datasets supporting enterprise reporting and AI initiatives.

Optimized batch and streaming processing with Spark, Kafka, Airflow, and Databricks, improving data quality, observability, and timeliness for critical business decisioning and regulatory compliance.

Automated infrastructure and deployments through Terraform, CI/CD, Git, and Kubernetes, standardizing environments, reducing operational friction, measurably strengthening security controls, and enabling secure, repeatable releases.

Enhanced stakeholder insights by modeling Snowflake, BigQuery, Redshift, and Synapse datasets, powering Tableau, Power BI, and KPI dashboards through trusted, documented semantic layers.

Enhanced customer relations by fixing bugs, resulting in a 20% increase in customer satisfaction and a 15% reduction in support tickets within three months.

TECHNICAL SKILLS

Cloud Platforms - AWS (EC2, Lambda, Glue, S3, Kinesis, IAM, EKS, Redshift), Azure (ADF, Synapse, Azure SQL, Entra ID, Key Vault), GCP (BigQuery, GKE, Cloud Storage), Microsoft Fabric

Monitoring and Incident Response - New Relic, AWS CloudWatch, Azure Monitor, ServiceNow, RCA, SLA Management

Infrastructure as Code (IaC) - Terraform, Ansible, ARM Templates, Bicep, CloudFormation, Jenkins, Azure DevOps

Security and Compliance - IAM, Encryption, NIST 800-53, CIS Benchmarks, PCI-DSS, RBAC, Key Vault, Audit Logging, Audit Compliance

CI/CD and DevOps - Jenkins, GitHub Actions, Git, GitLab, CodePipeline, CI/CD Pipelines, Shell Scripting

Programming & Scripting - Python, SQL, Bash, PowerShell

Data Engineering - AWS Glue, Azure Data Factory, DBT, Apache Kafka, Spark, Hive, GCP Dataflow, Data Pipelines, Traceability, Automation, Analytics Lifecycle

Dashboards and Visualization - Power BI, Tableau, Looker, AWS QuickSight

Databases - Redshift, Snowflake, Azure SQL, PostgreSQL, MongoDB, MySQL

PROFESSIONAL EXPERIENCE

Morgan Stanley June 2024 – Present

Senior Data Engineer

Engineered Azure Data Factory pipelines ingesting retail feeds into Synapse and Delta tables, improving freshness, enforcing schema checks, and enabling reliable downstream BI consumption.
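
The schema checks mentioned above can be sketched as a small validation step run on each ingested record before it lands in Synapse or Delta tables. This is a minimal, hypothetical illustration: the field names and types below are assumptions, not the actual retail feed contract.

```python
from datetime import datetime

# Assumed feed contract for illustration only; the real contract is not
# specified in the resume.
EXPECTED_SCHEMA = {
    "order_id": str,
    "store_id": int,
    "amount": float,
    "event_time": str,  # ISO-8601 timestamp, parsed below
}

def validate_record(record: dict) -> list:
    """Return a list of schema violations for one ingested record."""
    errors = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    # Reject timestamps that do not parse, so date-based loads stay safe.
    if isinstance(record.get("event_time"), str):
        try:
            datetime.fromisoformat(record["event_time"])
        except ValueError:
            errors.append("unparseable event_time")
    return errors
```

In a pipeline, records returning a non-empty error list would be routed to a quarantine table rather than blocking the whole load.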

Streamlined Spark and Databricks transformations on Azure, curating standardized dimensions and facts, reducing rework, accelerating feature delivery for finance stakeholders, and strengthening analytics reproducibility.

Integrated Kafka event streams with batch lake processing, consolidating clickstream and orders data, improving timeliness to near real time, and supporting operational dashboards for merchandising teams.
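
Consolidating stream and batch views of the same events typically comes down to a last-write-wins merge keyed on an event identifier. The sketch below shows that merge in plain Python; field names (`event_id`, `event_time`) are illustrative assumptions, and in production this logic would run as a Spark merge rather than in-memory.

```python
def consolidate_events(batch_rows, stream_rows, key="event_id", ts="event_time"):
    """Merge batch-lake rows with late-arriving stream rows, keeping the
    newest version of each event (last-write-wins by timestamp)."""
    latest = {}
    for row in list(batch_rows) + list(stream_rows):
        k = row[key]
        # ISO-8601 strings compare correctly lexicographically.
        if k not in latest or row[ts] > latest[k][ts]:
            latest[k] = row
    # Sort by key so downstream loads are deterministic.
    return sorted(latest.values(), key=lambda r: r[key])
```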

Validated Snowflake extracts and SQL-based reconciliations against source systems, resolving anomalies quickly, improving trust for executives, and minimizing reporting disruptions during monthly close cycles.
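
The reconciliation described above boils down to comparing per-entity aggregates between source and warehouse. A simplified sketch, assuming the aggregate queries have already run on both sides and returned dictionaries of totals:

```python
def reconcile(source_totals: dict, target_totals: dict, tolerance: float = 0.01):
    """Compare per-entity totals between a source system and a warehouse
    extract, returning entities whose values diverge beyond tolerance."""
    anomalies = {}
    # Check every entity present on either side, so dropped rows surface too.
    for entity in source_totals.keys() | target_totals.keys():
        src = source_totals.get(entity, 0.0)
        tgt = target_totals.get(entity, 0.0)
        if abs(src - tgt) > tolerance:
            anomalies[entity] = {"source": src, "target": tgt, "diff": src - tgt}
    return anomalies
```

A real check would issue `SUM`/`COUNT` queries against both systems first; the tolerance guards against benign floating-point drift.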

Governed CI/CD deployments for Azure workloads with Git and Terraform, enforcing IaC standards, reducing configuration drift, improving audit readiness, and strengthening enterprise access controls.

Enhanced the analytics lifecycle by optimizing data pipelines as a senior solution engineer, resulting in a 30% increase in data processing efficiency.

Improved system reliability by conducting audit compliance and testing performance as a Solution Engineer, achieving a 20% reduction in error rates.

Increased client satisfaction by supporting business units' technical queries and providing post-installation follow-ups, leading to a 15% boost in service ratings.

PwC May 2023 – May 2024

Data Engineer

Designed AWS Glue jobs reading S3 landing zones, transforming audit datasets into Redshift models daily, improving query performance, and supporting repeatable client assurance deliverables.

Orchestrated EMR Spark pipelines for high-volume ingestion, enforcing partitioning and Delta conventions, improving throughput, and enabling consistent downstream analytics for secure regulated client engagements.
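
Partitioning conventions like those enforced above usually mean Hive-style `key=value` paths so Spark and other engines can prune partitions at read time. A small sketch of the path builder; the bucket prefix and layout are illustrative assumptions, not the engagement's actual convention.

```python
from datetime import date

def partition_path(prefix: str, ds: date) -> str:
    """Build a Hive-style date partition path (year=/month=/day=).
    Zero-padded months and days keep lexicographic ordering correct."""
    return f"{prefix}/year={ds.year}/month={ds.month:02d}/day={ds.day:02d}"
```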

Standardized API integrations and JSON parsing for third-party sources, enriching curated marts, reducing manual effort, and improving data completeness during reporting for distributed teams.
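
Standardizing JSON parsing across third-party sources often starts with flattening nested payloads into dotted column names before loading a curated mart. A minimal sketch under that assumption; the payload shape is hypothetical.

```python
def flatten(obj: dict, parent_key: str = "", sep: str = ".") -> dict:
    """Flatten a nested JSON payload into dotted column names.
    Nested dicts recurse; lists are kept as-is for a downstream explode."""
    items = {}
    for key, value in obj.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items
```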

Migrated legacy SQL workflows into cloud-native AWS pipelines with CI/CD and Terraform, reducing operational risk, improving deployment consistency, and accelerating release cadence for stakeholders.

Hardened IAM roles and least-privilege access for AWS data services, improving compliance posture, limiting exposure, and supporting secure collaboration with external partners across accounts.

Drove strategic growth by designing and installing customized software and hardware solutions and strategizing with business units, resulting in a 25% increase in operational efficiency.

Advanced system accuracy by implementing traceability and automation, achieving a 40% reduction in manual errors.

Enhanced client engagement by creating customized solutions and drafting and delivering presentations, resulting in a 30% increase in project adoption rates.

Walmart June 2020 – July 2022

Data Engineer

Analyzed trading and risk datasets on GCP, landing curated extracts into BigQuery securely, improving discoverability, enabling faster investigations, and supporting auditable governance-aligned analytics workflows.

Modeled conformed schemas in BigQuery and Snowflake, aligning definitions for enterprise KPIs, reducing metric drift, and enabling consistent Tableau reporting across certified business lines.

Delivered Spark-based enrichment pipelines integrating Kafka events with reference data on GCP, improving timeliness, supporting near real-time dashboards, and informing accurate, actionable operational decisions.

Monitored Airflow schedules and pipeline health metrics with daily alerting, triaging failures quickly, proactively improving reliability, and protecting downstream analytics service-level expectations for stakeholders.
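
The health-metric alerting described above reduces to comparing task runtimes against per-task SLA thresholds, the same check an Airflow SLA callback performs. A pure-Python sketch; the task names and thresholds are illustrative assumptions.

```python
def sla_breaches(task_runs: dict, sla_minutes: dict) -> list:
    """Flag pipeline tasks whose latest run exceeded its SLA.
    `task_runs` maps task name -> runtime in minutes; `sla_minutes` maps
    task name -> allowed minutes. Tasks without an SLA are never flagged."""
    return sorted(
        task for task, minutes in task_runs.items()
        if minutes > sla_minutes.get(task, float("inf"))
    )
```

In practice this check runs on scheduler metadata and its output feeds a paging or ticketing integration.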

Refined SQL and T-SQL queries for complex reconciliations, improving performance, reducing compute waste, accelerating month-end analytics deliverables under strict deadlines for internal finance teams.

Optimized interoperability by integrating internal enterprise platforms with external RTO systems, leading to a 35% improvement in data synchronization.

Strengthened compliance by meeting with clients and managing regulatory reporting, achieving a 50% reduction in audit discrepancies.

Facilitated strategic decisions by providing executive insights to the business unit, resulting in a 20% increase in informed decision-making.

EDUCATION

Master's in Information Technology Management - University of Texas

Bachelor's in Computer Science - SRM Institute of Science and Technology


