Mounika Nadendla
Email: *****************@*****.***
Mobile: 513-***-****
LinkedIn: www.linkedin.com/in/mounika-nadendla-04273819a
Senior Data Engineer
PROFESSIONAL SUMMARY
Data Engineer with 4+ years of experience designing and delivering scalable data pipelines, ETL/ELT workflows, and cloud-native architectures across diverse industries.
Proficient in Azure (ADF, Databricks, Synapse, Event Hub) and AWS (Glue, Redshift, Lambda, Kinesis) ecosystems, with hands-on expertise in big data processing and optimization.
Skilled in developing data lakehouse solutions using Delta Lake and ADLS Gen2, enabling structured bronze–silver–gold architecture for governance, lineage, and high-performance analytics.
Strong background in building real-time streaming pipelines and event-driven architectures to support analytics, dashboards, and machine learning use cases.
Adept in SQL, Python, and PySpark for large-scale transformations, advanced partitioning, and performance tuning across enterprise data platforms.
Led cross-functional team coordination and mentorship, fostering a cohesive work environment and skill growth.
Guided teams to achieve project goals, resulting in a 20% increase in efficiency.
Exercised leadership to inspire team collaboration, enhancing overall project outcomes by 15%.
TECHNICAL SKILLS
Cloud And Data Platforms - AWS (S3, EC2, Redshift, Glue), GCP (BigQuery, Dataflow, GKE), OCI (ADW, GoldenGate, Data Integration), data infrastructure, Microsoft Fabric, OpenShift
Databases And Warehousing - Snowflake, Oracle, SQL Server, PostgreSQL, MySQL, MongoDB, Teradata, data architecture, query techniques, data artifacts
Programming - Python, Scala, SQL (T-SQL, PL/SQL), Shell Scripting, algorithmic concepts
Big Data And ETL - Spark, Hadoop, Hive, HBase, MapReduce, Pig, Airflow, Talend, Informatica, SSIS, DataStage, Luigi, Prefect, Oozie, ETL patterns, ETL tools, ETL automation
Streaming - Kafka, Flink, Spark Streaming, Kinesis, GCP Pub/Sub, OCI Streaming, logging data
Visualization - Power BI, Tableau, QuickSight, QlikView, SSRS, Cognos, Excel, Seaborn, Plotly, data visualizations, data insights, Tableau Prep
DevOps - Docker, Kubernetes, Jenkins, Git, Terraform, Service Level Agreements, Unity Catalog, CI/CD pipelines, CI/CD automation, containers, containerized deployments
Data Governance - Collibra, Oracle Data Catalog, Great Expectations, HIPAA, GDPR, CCPA, ISO 27001, security model, privacy requirements, governance processes
Software Architecture - large-scale architecture initiatives, enterprise rollouts
Technical Support And Troubleshooting - performance troubleshooting and issue resolution
Tools And Platforms - Alteryx, RapidMiner
PROFESSIONAL EXPERIENCE
CVS Health May 2024 – Present
Senior Data Engineer
Designed and implemented scalable ETL/ELT pipelines on Google Cloud using Dataflow, Dataproc, and Cloud Composer (Airflow) to process structured and unstructured datasets from multiple sources.
Developed and optimized data warehouses and lakehouses using BigQuery and Cloud Storage, enabling high-performance analytics and reducing query costs by 30% through partitioning and clustering strategies.
Built real-time streaming pipelines leveraging Pub/Sub, Dataflow, and BigQuery to deliver low-latency insights for critical business applications and machine learning use cases.
Migrated legacy on-premises data systems to GCP-native architectures, ensuring improved scalability, reliability, and cost efficiency while applying Terraform and Deployment Manager for infrastructure automation.
Collaborated with data scientists and analysts to integrate Vertex AI, Looker, and BigQuery ML into production workflows, enabling advanced predictive modeling and self-service analytics.
Implemented robust data governance, monitoring, and security frameworks on GCP, ensuring compliance with organizational and regulatory requirements using IAM, Cloud Monitoring, and Data Catalog.
Led data preparation and orchestration efforts, enhancing data accuracy by 25% and reducing processing time by 15% through workflow automation.
Architected robust data processing automation, optimizing performance by 30% and ensuring high code quality.
Provided operational insights and guided teams, driving a 20% increase in project delivery efficiency.
Developed complex PL/SQL scripts and utilized Alteryx and RapidMiner to streamline data transformations, reducing manual effort by 40%.
Cisco September 2022 – July 2023
Data Engineer
Engineered scalable ETL/ELT pipelines using AWS Glue, EMR (Spark), and Step Functions, enabling batch and near real-time data processing across multi-terabyte datasets.
Designed and maintained data lake and warehouse architectures on Amazon S3, Redshift, and Athena, leveraging partitioning, compression, and Spectrum to optimize query performance and reduce costs.
Built event-driven streaming pipelines with Kinesis Data Streams, Firehose, and Lambda, supporting real-time ingestion and analytics for high-volume transactional systems.
Automated infrastructure deployment with Terraform and CloudFormation, establishing reusable templates for serverless, containerized, and data processing workloads across AWS accounts.
Integrated machine learning workflows by enabling data pipelines that connected SageMaker, Redshift, and S3, streamlining model training, monitoring, and deployment at scale.
Implemented end-to-end security, monitoring, and governance frameworks using AWS IAM, CloudWatch, CloudTrail, and Lake Formation, ensuring compliance with enterprise and regulatory standards.
Created dynamic data visualizations and ETL patterns, facilitating real-time analytics and reducing data processing time by 30%.
Leveraged Tableau Prep and OpenShift for advanced data visualization and containerized deployments, improving data accessibility by 35%.
Spearheaded ETL automation projects to enhance data integration processes, achieving a 50% reduction in data latency.
Managed and automated CI/CD pipelines, decreasing deployment time by 25% and improving release frequency.
Led large-scale architecture initiatives and enterprise rollouts, significantly enhancing system scalability and reliability across the organization.
Kapital Information Technologies Private Limited June 2020 – August 2022
Data Engineer
Designed and orchestrated end-to-end ETL/ELT pipelines in Azure Data Factory (ADF), leveraging parameterization, dynamic pipelines, and metadata-driven frameworks to process multi-source structured and semi-structured datasets.
Built data lakehouse architectures using Azure Data Lake Storage Gen2 (ADLS) and Delta Lake, implementing bronze–silver–gold zones to support lineage, governance, and optimized query performance for analytical workloads.
Developed and optimized data models in Azure Synapse Analytics using partitioning, indexing, and materialized views, enabling faster query execution and reducing reporting time by 40%.
Designed real-time ingestion pipelines with Azure Event Hub and Stream Analytics, integrating with Databricks (PySpark/Scala) for streaming transformations and pushing enriched data into Synapse and Power BI dashboards.
Conducted data profiling, cleansing, and enrichment using Databricks and SQL, applying advanced transformations (windowing, ranking, incremental loads) to improve data quality and trustworthiness.
Implemented governance, monitoring, and security best practices across Azure using Azure Purview (Data Catalog), Role-Based Access Control (RBAC), and Key Vault, ensuring compliance with enterprise and regulatory standards.
Migrated existing Azure Data Factory and Synapse workloads into Microsoft Fabric, optimizing cost and simplifying governance under OneLake unified storage.
Implemented containers and containerized deployments to optimize resource utilization and reduce infrastructure costs by 20%.
Troubleshot and resolved performance issues, increasing system uptime by 15% and ensuring seamless operations in a shared services environment.
Collaborated with scrum teams to improve task dependency tuning and scheduling scalability, resulting in a 30% improvement in project timelines.
Established enterprise-level governance to ensure compliance and consistency across projects, fostering a culture of accountability and transparency.
EDUCATION
Master’s in Information Technology - University of Cincinnati
Bachelor’s in Information Technology - Shri Vishnu Engineering College for Women