
Senior Data Engineer with 6+ Years Experience

Location:
Chandler, AZ
Posted:
January 14, 2026

Contact this candidate

Resume:

Lakshmi Chandu Battina

602-***-**** *************.*******@*****.*** linkedin.com/in/lakshmi-chandu-battina/

Professional Summary

Senior Data Engineer with 6+ years of software engineering experience, including 3+ years owning production-grade data platforms. Experienced in building and operating large-scale batch and streaming data pipelines using Apache Spark, Kafka, Scala, and SQL across AWS and Azure. Proven track record of delivering reliable data systems in regulated financial environments (US Bank) and high-scale global platforms (Amazon), with a strong focus on ELT design, data quality, and SLA-driven operations.

Experience

Senior Data Engineer May 2023 - Present

US Bank Remote - USA

• Tech Stack: Scala, Spark, Git, Cassandra, Kafka, Docker, Splunk, Rancher, Microsoft Azure, PySpark, Databricks, Delta Lake, PostgreSQL, DynamoDB, SQL, ServiceNow, Jira, Azure Synapse

• Responsibilities:

• Owned end-to-end design, deployment, and monitoring of 15+ ETL pipelines on Kubernetes (Rancher, Docker) and Databricks, ensuring high reliability, scalability, and compliance in a regulated banking domain and improving data processing efficiency by 40%.

• Engineered high-volume data ingestion pipelines from Kafka topics (payloads of 10KB+, 1-month retention), transforming messages into Parquet and storing them securely in Azure Data Lake for downstream analytics (see the ingestion sketch after this list).

• Architected scalable data models in PostgreSQL and DynamoDB, and built reusable file connectors (FixedWidth, Delimited, Parquet) with schema validation, enabling efficient querying and reliable ETL processes.

• Automated monitoring, alerting, and reporting workflows using Datadog and Splunk, maintaining 99.9% uptime and proactively resolving potential pipeline failures.

• Strengthened code quality, testing, and on-call support by applying advanced SQL and Scala transformations, writing ScalaTest unit and integration tests, and collaborating across teams to ensure system stability and regulatory compliance.
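As an illustration of the Kafka-to-Parquet ingestion pattern referenced above, the following Scala sketch uses Spark Structured Streaming to read a Kafka topic and land the payloads as Parquet files in Azure Data Lake. The broker address, topic name, and abfss:// paths are hypothetical placeholders, not details taken from this resume.

import org.apache.spark.sql.SparkSession

object KafkaToParquetSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-ingest-sketch").getOrCreate()

    // Read the raw Kafka stream; broker and topic are hypothetical placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "example-topic")
      .option("startingOffsets", "latest")
      .load()

    // Kafka delivers key/value as binary; cast the payload to a string for downstream parsing.
    val parsed = raw.selectExpr(
      "CAST(key AS STRING) AS key",
      "CAST(value AS STRING) AS payload",
      "timestamp")

    // Write micro-batches as Parquet to an ADLS Gen2 container (paths are placeholders).
    parsed.writeStream
      .format("parquet")
      .option("path", "abfss://lake@example.dfs.core.windows.net/raw/example-topic")
      .option("checkpointLocation", "abfss://lake@example.dfs.core.windows.net/checkpoints/example-topic")
      .start()
      .awaitTermination()
  }
}

In practice the checkpoint location is what gives the file sink restart safety and exactly-once output, which is why it is configured alongside the target path.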

Data Engineer Aug 2022 – Apr 2023

Amazon Seattle, Washington

• Tech Stack: Python, Scala, SQL, Apache Spark, PySpark, Hadoop, Hive, Databricks, Delta Lake, AWS (S3, Glue, Lambda, Kinesis, EMR, Redshift, SageMaker, Bedrock), Step Functions, API Gateway, Git

• Responsibilities:

• Owned the end-to-end design and delivery of high-performance ETL pipelines using Python, SQL, Spark, Databricks, Delta Lake, and AWS, processing millions of live streaming events daily, improving reliability by 35% and enabling data-driven decisions across Twitch.

• Partnered with data scientists, engineers, and business stakeholders to define and standardize core business metrics, translating complex requirements into actionable, analytics-ready datasets using Python, SQL, and AWS cloud-native data solutions for real-time insights.

• Directed development of reusable data cubes and optimized data architectures with Databricks, Delta Lake, and AWS services (S3, Glue, Redshift, Lambda, Kinesis, EMR), simplifying access to complex streaming datasets, accelerating cross-team analytics, and driving measurable business impact.

• Automated data validation, monitoring, and reporting workflows using Python scripts, Spark jobs, and AWS orchestration tools, improving accuracy by 25% and strengthening consistency and compliance across critical datasets while reducing manual intervention (see the validation sketch after this list).
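As a minimal sketch of the automated data validation mentioned above, the Scala snippet below runs two common checks (no nulls in a required column, no duplicate keys) against a Spark DataFrame and fails fast if either check breaks. The column names and sample dataset are hypothetical and are not taken from this resume.

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col

object ValidationSketch {
  // Fails the job when a required column contains nulls or when key values repeat.
  def validate(df: DataFrame, requiredCol: String, keyCol: String): Unit = {
    val nullCount = df.filter(col(requiredCol).isNull).count()
    val dupCount  = df.count() - df.dropDuplicates(keyCol).count()
    require(nullCount == 0, s"$nullCount null values found in column $requiredCol")
    require(dupCount == 0, s"$dupCount duplicate values found in key column $keyCol")
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("validation-sketch").getOrCreate()
    import spark.implicits._

    // Hypothetical stand-in for an analytics-ready events table.
    val events = Seq((1L, "play"), (2L, "pause"), (3L, "stop")).toDF("event_id", "event_type")
    validate(events, requiredCol = "event_type", keyCol = "event_id")

    spark.stop()
  }
}

Checks like these are typically wired into the orchestration layer so that a failed validation blocks downstream loads instead of silently publishing bad data.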

Software Engineer July 2018 – Dec 2020

Aeodita Tech Pvt. Ltd Visakhapatnam, India

• Tech Stack: Java (Spring Boot), Elasticsearch, Kibana, Maven, Hibernate, AngularJS, PostgreSQL, GitLab, Design Patterns, JSON

• Developed reusable UI components, improving user engagement by 30%.

• Integrated Elasticsearch/Kibana for real-time monitoring, reducing incident response time by 35%.

• Deployed RESTful microservices on Docker and Amazon ECS, reducing deployment times by 50%.

• Optimized Java backend performance, improving throughput by 25% and reducing CPU usage under load.

Technical Skills

Data Engineering: Apache Spark, PySpark, Kafka, Hive, Airflow, Databricks
Cloud Platforms: AWS (Glue, S3, Lambda, Kinesis, EMR, Redshift), Azure (ADF, ADLS, Databricks)
Programming: Python, Scala, SQL, Java

CI/CD and DevOps: Jenkins, GitLab, Docker, Kubernetes (Rancher), Terraform
Databases: PostgreSQL, DynamoDB, Cassandra, MongoDB
Monitoring and Logging: Splunk, Datadog, CloudWatch

Education

Arizona State University Jan 2021 – July 2022

Master of Science in Information Technology, GPA 4/4 Tempe, AZ

Gitam University Jun 2016 – Mar 2020

Bachelor’s in Computer Science, GPA 8.17/10 Visakhapatnam, India


