
Senior Data Engineering Leader with Cloud Expertise

Location:
Santa Monica, CA, 90403
Posted:
February 17, 2026

Resume:

Karthik Sistla

Assistant Vice President Data Engineer

Fort Mill, SC | +1-508-***-**** | *********@*****.*** | LinkedIn

PROFESSIONAL SUMMARY

Results-driven AVP of Data Engineering with 12+ years of experience leading high-performing teams in designing and implementing scalable data solutions. Proven track record of managing teams of 10+ engineers delivering enterprise-level ETL pipelines, cloud migrations, and data infrastructure projects. Expert in Python/PySpark development, AWS/Azure cloud platforms, and modern data stack technologies (Airflow, Snowflake, DBT). Strong background in translating business requirements into technical solutions while ensuring data quality, security, and compliance. Adept at stakeholder management, cross-functional collaboration, and driving continuous process improvements.

PROFESSIONAL EXPERIENCE

MOODY'S RATINGS, Charlotte, NC
Assistant Vice President, Data Engineering | July 2022 - October 2025

Lead and mentor a team of 12 data engineers in designing and delivering enterprise-scale data integration and application development projects across diverse technology stacks and architectures

Architect and implement production-grade ETL pipelines using Apache Airflow, integrating data from APIs, databases, and message queues into SQL Server and a Snowflake data lake, processing 10M+ records daily

Spearhead cloud migration initiatives, successfully migrating 20+ ETL processes to AWS Glue with PySpark optimization, improving processing performance by 40% through multi-node distributed computing

Design and develop scalable microservices architecture using AWS Lambda, Kafka event streams, and database triggers for real-time data processing and event-driven ETL workflows

Delivered a proof of concept designing and optimizing PySpark ETL pipelines on Databricks, leveraging Delta Lake, auto-optimize, and cluster autoscaling to process multimillion-row datasets with improved reliability and cost efficiency

Develop reusable Databricks notebooks and jobs for batch and near real-time data processing, integrating with AWS S3, Bronze–Silver–Gold layer patterns, and CI/CD workflows to standardize data engineering best practices across teams

Lead the modernization effort transitioning a monolithic codebase to a microservices architecture, improving maintainability and enabling independent service deployment and scaling

Implement ELT processes using Apache NiFi data pipelines and DBT for version-controlled transformations, improving data lineage visibility and code maintainability

Establish monitoring and alerting infrastructure using AWS CloudWatch, Datadog, Grafana, and Splunk, achieving 99.9% pipeline uptime and reducing incident response time by 60%

Introduce software quality practices including SonarQube quality gates and pytest unit testing framework, reducing production defects by 35%

Develop complex stored procedures and functions in SQL Server and Oracle for API consumption, optimizing query performance by up to 50%

LENDINGTREE, LLC, Charlotte, NC
Data Engineering Manager | September 2017 - July 2022

Managed a team of 8 data engineers responsible for integrating revenue data from 12+ departments across diverse data sources, ensuring data accuracy and serving as the company's single source of truth

Led cross-functional technical initiatives, coordinating with multiple teams to deliver high-impact data solutions under aggressive deadlines while maintaining quality standards

Designed and maintained enterprise reporting infrastructure using Power BI, Tableau, and SSRS, creating executive dashboards and reconciliation reports consolidating data from SQL Server, MySQL, PostgreSQL, Snowflake, and SQLite

Built scalable Python-based ETL pipelines processing millions of records daily into data lakes for advanced analytics and machine learning applications

Implemented machine learning models for revenue forecasting using time series analysis, improving quarterly and annual revenue prediction accuracy by 25%

Developed a recommendation engine using decision tree algorithms trained on historical data lake information to personalize product suggestions based on customer demographics and behavior

Architected and maintained a real-time data integration platform using SSIS, Kafka, NiFi, Spark, and Python APIs, consolidating data from Snowflake, Hadoop, MongoDB, SQL Server, MySQL, PostgreSQL, EventHub, Salesforce, and partner portals

Established data quality framework identifying root causes of data inaccuracies and implementing permanent solutions, reducing data discrepancies by 45%

Collaborated with data scientists, analysts, and audit teams to support predictive model development and ensure SOX compliance

INTEGRACONNECT, Madison, MS
Data Engineer | October 2016 - September 2017

Developed and optimized healthcare data maintenance systems, implementing source control technologies to prevent data inconsistencies across environments

Built comprehensive ETL processes using SSIS and Informatica to integrate patient data from multiple sources into reporting and analytical systems

Created Power BI and SSRS dashboards for multiple departments to monitor patient information and identify data anomalies

Collaborated with business stakeholders on requirements gathering and delivered robust technical solutions supporting data science initiatives

ENVIRONMENTAL DATA RESOURCES, Sheldon, CT
Software Engineer Intern | November 2015 - August 2016

Conducted POC projects evaluating cloud solutions and data flow optimization for improved system efficiency

Built performance visualizations using Power BI, Tableau, and Kibana to support business case development

Researched and recommended cloud architecture improvements for enterprise data processing needs

DEXTER LABS INC., Hyderabad, India
Data Engineer | February 2012 - December 2014

Designed and implemented database optimization strategies, improving query performance and system responsiveness

Migrated inline queries to stored procedures, enhancing security and database performance

Developed automated data loading APIs, eliminating manual data entry processes

Implemented access control and source control systems to maintain data integrity across development environments

EDUCATION

Master of Information Technology, Bentley University, Waltham, MA

Master of Information Systems, Swinburne University, Melbourne, Australia

Bachelor of Technology, Jawaharlal Nehru Technological University, India

TECHNICAL SKILLS

Languages & Scripting: Python, PySpark, T-SQL, PL/SQL, Shell Scripting, Spring Boot (Java), C/C++, HTML, CSS, JavaScript

Databases: SQL Server, PostgreSQL, MySQL, Oracle, Snowflake, MongoDB, BigQuery, Redshift, Sybase, SQLite, NoSQL

ETL/ELT Tools: Apache Airflow, AWS Glue, SSIS, Informatica, DBT, Apache NiFi, MuleSoft

Cloud Platforms: AWS (Lambda, Glue, CloudWatch, Kinesis, MSK, S3), GCP, Microsoft Azure, Snowflake

Big Data & Streaming: Apache Spark, Kafka, Hadoop, AWS Kinesis Firehose, EventHub

BI & Visualization: Power BI, Tableau, Looker, SSRS, Kibana, Microsoft Excel

Development & DevOps: Git, GitHub, GitLab, Docker, Jenkins, SonarQube, Jira

Monitoring & Analytics: Datadog, Grafana, Splunk, FullStory, CloudWatch

Methodologies: Agile, Scrum, Waterfall, Kanban, CI/CD


