
Senior Cloud Data Engineer | Multi-Cloud Analytics Expert

Location:
Sugar Land, TX, 77478
Posted:
December 01, 2025



Sravanthi K

Email: *****************@*****.***

Phone: 469-***-****

LinkedIn: www.linkedin.com/in/sravanthi-k-28121a103

PROFESSIONAL SUMMARY:

8+ years of experience as a Cloud Data Engineer designing scalable data pipelines and multi-cloud analytics platforms.

Expertise in AWS, GCP, and Azure, with strong knowledge of Python, PySpark, SQL, and ETL development.

Skilled in building and automating cloud infrastructure using Terraform and CI/CD pipelines.

Experienced in real-time streaming, orchestration, and data warehousing for large-scale datasets.

Proficient in cost optimization, monitoring, and production-grade system reliability.

Adept at designing data systems that support analytics, reporting, and machine learning workflows.

CORE TECHNICAL SKILLS:

Programming: Python, PySpark, SQL, Pandas

Cloud Platforms:

AWS: S3, Glue, Lambda, Redshift, Athena, RDS, DynamoDB, Step Functions, CloudWatch, IAM, VPC

GCP: BigQuery, Dataflow, Cloud Storage, Pub/Sub, Cloud SQL

Azure: Data Factory, Synapse, Blob Storage

Big Data: Apache Spark, Hadoop

Data Engineering: ETL/ELT Pipelines, Data Lakes, Data Warehousing, Star/Snowflake Modeling

Infrastructure & Automation: Terraform, CloudFormation, Jenkins, GitHub Actions, Docker

Databases: PostgreSQL, MySQL, MS SQL Server, Snowflake, MongoDB

Streaming: Kinesis, Firehose, Pub/Sub

Monitoring: Prometheus, Grafana, CloudWatch

Visualization: QuickSight, Power BI, Data Studio

Version Control: Git, GitHub, GitLab

PROFESSIONAL EXPERIENCE:

Optum Aug 2024 – Present

Senior Software Engineer

Responsibilities:

Built and maintained ETL/ELT pipelines using AWS Glue, Lambda, and Python to process high-volume clinical and claims data.

Modeled Redshift datasets using star and snowflake schemas for analytics and BI reporting.

Developed monitoring and alerting using CloudWatch logs, metrics, and dashboards.

Used Java/Scala-based Spark libraries to optimize Glue/EMR transformations.

Automated AWS environments using Terraform (VPC, IAM, S3, RDS, Glue, Lambda).

Built Spark jobs on AWS Glue/EMR for large-scale transformations.

Integrated EMR-based pipelines with Step Functions for orchestration.

Ensured data quality through schema validation, governance rules, and automated checks.

Supported ML feature pipelines by preparing curated datasets.
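The automated data-quality checks described above can be sketched as a small record validator; the schema fields below are illustrative only, not from any actual claims dataset:

```python
from typing import Any

# Hypothetical required schema for a claims record (field names are
# illustrative, not taken from a real Optum dataset).
CLAIMS_SCHEMA = {"claim_id": str, "member_id": str, "amount": float}

def validate_record(record: dict[str, Any],
                    schema: dict[str, type] = CLAIMS_SCHEMA) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    for field, expected in schema.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"wrong type for {field}: expected {expected.__name__}")
    return errors
```

In a real pipeline a check like this would run per batch, routing failing records to a quarantine location instead of the warehouse.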

Staples Inc Dec 2022 – Nov 2023

Senior Software Engineer

Responsibilities:

Designed batch and near-real-time ETL workflows using Python, PySpark, and Airflow.

Developed Spark pipelines in AWS and GCP for batch and streaming datasets.

Used Step Functions and Lambda for orchestration of cross-cloud workflows.

Used Terraform and CloudFormation to standardize and automate infrastructure deployments.

Built Jenkins CI/CD pipelines for data applications and Python services.

Worked with Java/Scala Spark code modules for batch ETL tuning and pipeline integration.

Implemented streaming pipelines using Kinesis and Firehose for real-time event ingestion.

Improved performance through schema optimization and efficient partition strategies.

Used Node.js for lightweight automation scripts, API integrations, and serverless functions.
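The streaming-ingestion work above typically involves batching events to respect the Kinesis PutRecords limits (500 records and 5 MB per request). A simplified sketch of such a batching helper, not production code:

```python
from typing import Iterable, Iterator

def batch_events(events: Iterable[bytes], max_records: int = 500,
                 max_bytes: int = 5 * 1024 * 1024) -> Iterator[list[bytes]]:
    """Group serialized events into batches that fit one PutRecords call."""
    batch: list[bytes] = []
    size = 0
    for event in events:
        # Start a new batch when the next event would exceed either limit.
        if batch and (len(batch) >= max_records or size + len(event) > max_bytes):
            yield batch
            batch, size = [], 0
        batch.append(event)
        size += len(event)
    if batch:
        yield batch
```

Each yielded batch would then be passed to a single `put_records` call via the boto3 Kinesis client.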

TCS Oct 2021 – Nov 2022

Software Engineer

Responsibilities:

Developed ETL pipelines using Python to ingest data into S3, RDS, Snowflake, and Azure Blob Storage.

Tuned SQL queries and database structures in PostgreSQL, Redshift, and Synapse.

Built reusable Terraform modules for S3, IAM, RDS, and Azure services.

Designed Snowflake data models and optimized pipelines using ANSI SQL.

Built scalable data ingestion on EMR/Spark for large datasets.

Built Snowflake ingestion workflows with Python and Snowpipe.

Optimized ingestion of multi-terabyte datasets using parallel processing and optimized file formats.
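The parallel-processing approach mentioned above can be sketched as fanning independent files out to a worker pool; `process` here is a hypothetical per-file loader, and the helper assumes files can be ingested independently:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable

def ingest_files(paths: Iterable[str], process: Callable[[str], int],
                 max_workers: int = 8) -> int:
    """Ingest files in parallel; `process` loads one file and returns its
    row count. Returns the total rows ingested across all files."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return sum(pool.map(process, paths))
```

Threads suit I/O-bound loads (object storage reads, COPY commands); a CPU-bound transform would use a process pool or Spark instead.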

NVIDIA Nov 2018 – Mar 2021

Software Engineer

Responsibilities:

Implemented Spark workloads for processing IoT telemetry at scale.

Enhanced data reliability and compliance using schema enforcement and governance controls.

Managed Kubernetes deployments, autoscaling, and resource utilization.

Developed Flask APIs to serve processed datasets to internal teams.

Implemented monitoring using CloudWatch, Prometheus, and Grafana.

Created Terraform modules for reusable, automated infrastructure provisioning.

Improved ingestion reliability with retries, error handling, and schema enforcement.
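The retry and error-handling pattern above can be sketched as a small exponential-backoff wrapper (illustrative only; real pipelines would also distinguish retryable from fatal errors):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(fn: Callable[[], T], attempts: int = 3,
                 base_delay: float = 0.0) -> T:
    """Call fn, retrying on exception with exponential backoff.
    base_delay defaults to 0 so this sketch runs instantly."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** attempt)
```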

Google Jul 2016 – Oct 2018

Software Engineer

Responsibilities:

Built ETL pipelines using Python and BigQuery for product analytics.

Designed partitioned and clustered BigQuery tables to reduce cost and speed up queries.

Automated data validation and aggregation tasks at scale.

Managed CI/CD pipelines using GitHub Actions for automated deployments.

Configured AWS EC2, VPC, IAM, and security policies for internal tools.

Built Python producers to stream real-time events into Kinesis Data Streams.

Reduced data latency by optimizing batching and ingestion strategies.

Implemented governance policies to ensure data quality, security, and compliance.
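The partitioned and clustered table design above follows BigQuery's standard DDL (`PARTITION BY DATE(...) CLUSTER BY ...`). A small helper sketching the pattern; the table and column names are hypothetical:

```python
def partitioned_table_ddl(table: str, columns: list[tuple[str, str]],
                          partition_col: str, cluster_cols: list[str]) -> str:
    """Build a BigQuery CREATE TABLE statement with date partitioning and
    clustering, the layout used to cut scan costs and speed up queries."""
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns)
    return (
        f"CREATE TABLE {table} ({cols}) "
        f"PARTITION BY DATE({partition_col}) "
        f"CLUSTER BY {', '.join(cluster_cols)}"
    )
```

Queries filtered on the partition column then scan only matching partitions, which is where the cost reduction comes from.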

EDUCATION:

Master of Science in Information Technology, University of the Cumberlands

Bachelor of Technology in Computer Science, Jawaharlal Nehru Institute of Technology, Hyderabad, India


