Poojitha Kodati
Email: ******************@*****.***
Mobile: +1-940-***-****
LinkedIn: www.linkedin.com/in/poojithakodati/
Senior Data Engineer
PROFESSIONAL SUMMARY
Data Engineer with 4+ years of experience delivering cloud-native pipelines, governed warehouses, and scalable ELT solutions across Azure, AWS, and GCP environments for analytics programs.
Hands-on expertise building Python, SQL, Spark, Databricks, Airflow, dbt, and Snowflake workflows that improve ingestion reliability, transformation efficiency, and downstream analytics readiness.
Strong background implementing data modeling, lineage, metadata, quality controls, and CI/CD practices that support secure reporting, compliance, and trusted decision-making across business functions.
Experienced aligning healthcare, consulting, and financial data platforms with orchestration, integration, visualization, and governance requirements across complex delivery programs.
Mentored and guided teams to enhance problem-solving skills, resulting in a 20% increase in project efficiency.
Collaborated with cross-functional teams to streamline processes, boosting overall productivity by 15%.
Exercised leadership to inspire team innovation, leading to a successful product launch ahead of schedule.
TECHNICAL SKILLS
Cloud Platforms - AWS (EC2, Lambda, Glue, S3, Kinesis, IAM, EKS, Redshift), Azure (ADF, Synapse, Azure SQL, Entra ID, Key Vault), GCP (BigQuery, GKE, Cloud Storage)
Infrastructure as Code (IaC) - Terraform, Ansible, ARM Templates, Bicep, CloudFormation, Jenkins, Azure DevOps
Monitoring and Incident Response - New Relic, AWS CloudWatch, Azure Monitor, ServiceNow, RCA, SLA Management
Security and Compliance - IAM, Encryption, NIST 800-53, CIS Benchmarks, PCI-DSS, RBAC, Key Vault, Audit Logging
CI/CD and DevOps - Jenkins, GitHub Actions, Git, GitLab, CodePipeline, CI/CD Pipelines, Shell Scripting, automation pipeline management
Programming & Scripting - Python, SQL, Bash, PowerShell, PL-SQL
Data Engineering - AWS Glue, Azure Data Factory, dbt, Apache Kafka, Spark, Hive, GCP Dataflow, ETL and data integration pipelines
Databases - Redshift, Snowflake, Azure SQL, PostgreSQL, MongoDB, MySQL
Dashboards and Visualization - Power BI, Tableau, Looker, AWS QuickSight, Tableau Prep
Containers and Orchestration - OpenShift, containerized deployments (EKS, GKE)
Data Analytics Tools - Alteryx, RapidMiner
PROFESSIONAL EXPERIENCE
UnitedHealth Group July 2024 – Present
Data Engineer
Architected Azure Data Factory and Azure Databricks pipelines ingesting claims and membership data, improving trusted reporting availability for care, finance, and compliance stakeholder teams.
Engineered Azure Synapse and SQL transformation workflows that standardized source mappings, strengthened data quality validation, and accelerated downstream dashboard consumption for reporting teams.
Optimized ADLS Gen2 and Delta Lake ingestion layers for batch processing, enabling scalable historical retention, lineage visibility, and reusable datasets for healthcare analytics initiatives.
Automated CI/CD deployments through Azure DevOps, Terraform, and Git, reducing release friction, improving environment consistency, and supporting dependable delivery of production integrations.
Standardized metadata, dimensional modeling, and Power BI semantic assets, increasing traceability and creating governed data products that improved decision-making for operational stakeholders across departments.
Architected data preparation and processing automation frameworks, improving data accuracy and processing speed by 40% across enterprise-scale data initiatives in multiple departments.
Orchestrated workflow automation and pipeline management, reducing manual intervention by 70%, increasing team productivity, and enabling faster enterprise rollouts.
Engineered large-scale architecture initiatives for scalability and reliability, accommodating a 200% increase in user demand without performance degradation.
Pioneered performance optimization and code quality enhancements, boosting system efficiency by 35% and reducing error rates, improving reliability across the shared-services environment.
Accenture June 2023 – December 2023
Data Engineer
Integrated AWS Glue, Amazon S3, and Redshift pipelines for client data domains, increasing availability of curated datasets for consulting-driven reporting programs and analytics needs.
Orchestrated EMR, Spark, and Airflow workflows across AWS services, improving ingestion scheduling, exception handling, and dependable processing for complex transformation requirements in delivery programs.
Established dbt, SQL, and Python quality controls across warehouse models, strengthening trusted metrics and enabling reusable data products for business intelligence initiatives and reporting.
Validated API, JSON, XML, and SFTP integrations within AWS architectures, reducing manual reconciliation effort and improving consistent data exchange across distributed systems for clients.
Refined CloudFormation, Lambda, and Athena processes for operational monitoring, accelerating issue resolution and supporting scalable analytics delivery across multi-client consulting programs and data needs.
Quantified scalability improvements and operational insights, enabling data-driven decision-making and achieving 25% faster response times with enhanced enterprise-level governance.
Mentored and guided teams through complex challenges, resulting in a 30% increase in project delivery speed and successful cross-functional initiatives.
Modernized PL-SQL and Alteryx data integration pipelines, enabling seamless data flow, improving data accuracy, and reducing processing times by 60%.
Engineered RapidMiner and Tableau Prep solutions for advanced data analytics, delivering actionable insights that improved business decision-making capabilities by 45%.
Birlasoft August 2021 – May 2023
Jr. Data Engineer
Analyzed GCP source systems and business requirements, designing BigQuery datasets that improved analytics readiness, semantic consistency, and reporting usability for business stakeholders across functions.
Configured Dataflow, Pub/Sub, and Cloud Storage pipelines for batch and streaming workloads, enabling scalable ingestion patterns and faster insight delivery for analysts and managers.
Streamlined Looker, Tableau, and Power BI semantic layers through SQL optimization and dimensional modeling, improving dashboard responsiveness and self-service data analytics adoption for teams.
Governed metadata, lineage, and catalog practices across GCP assets, improving traceability, stakeholder confidence, and sustainable handoffs for analytics and reporting teams in delivery programs.
Consolidated dbt, Snowflake, and Databricks transformations supporting cross-cloud reporting use cases, creating reusable metrics and governed datasets for operational analytics teams and finance users.
Orchestrated containerized deployments on OpenShift, achieving 99.9% uptime and enhancing system reliability for enterprise-scale data initiatives.
Built ETL and data integration pipelines that automated complex data workflows, reducing processing times by 50% and improving task dependency tuning.
Troubleshot and resolved performance issues within scrum teams, enhancing system reliability by 30% and ensuring seamless enterprise rollouts.
Mentored and guided teams, fostering collaboration and innovation and improving team efficiency by 40% across cross-functional initiatives.
EDUCATION
Master's in Computer Science – University of Central Missouri, Warrensburg
Bachelor's in Electrical and Electronics Engineering – Lakireddy Bali Reddy College of Engineering