Deekshitha Vasamsetty
+1-216-***-**** **************@*****.***
PROFESSIONAL SUMMARY
Data Engineer with 6+ years of experience designing and maintaining scalable, production-grade data pipelines using Python, Spark, and AWS. Expertise in developing robust ETL/ELT workflows that ensure data accuracy in healthcare environments. Skilled in architecting HIPAA-compliant solutions and collaborating with cross-functional teams to enhance analytics and machine learning initiatives. Proven ability to deliver reliable, cloud-native systems that drive strategic insights.
TECHNICAL SKILLS
• Programming Languages: Java, Python, SQL, Shell Scripting, Scala
• Cloud-based Services: GCP (BigQuery, Vertex AI, GCS, Pub/Sub, GKE, Dataflow, Cloud Build), AWS, Azure
• Data Engineering Tools: Apache Kafka, Apache Spark, Dataflow, Airflow, Beam, TensorFlow, PyTorch, DBT, Data Modeling
• Databases: BigQuery, Firestore, MongoDB, PostgreSQL, MySQL, Oracle, Redshift
• DevOps & CI/CD: Jenkins, GitHub Actions, Docker, Kubernetes, Helm, Terraform
• Monitoring: GCP Monitoring & Logging, ELK Stack, Prometheus, Grafana
• Security: OAuth 2.0, JWT, Spring Security, IAM (GCP/AWS)
• ETL & Messaging: Pub/Sub, Kafka, RabbitMQ, Spark Streaming, REST APIs, Batch Processing, Real-time Processing
• Version Control & Testing: Git, JUnit, Mockito, Cypress
• Software Development Life Cycle: Documentation, Coding Standards, Code Reviews, Source Control Management, Build Processes, Testing, Operations
• Business Intelligence Tools: Tableau, Power BI, Excel
• Core Competencies: Analytical Thinking, Healthcare Data Integration, Data Governance
PROFESSIONAL EXPERIENCE
[24]7.ai Nov 2021 - Jul 2022
Data Engineer
• Engineered Apache Kafka pipelines with Avro serialization for real-time event ingestion, supporting enriched transaction data flows and enabling precise risk evaluation.
• Created Spark-based ETL jobs in AWS Glue and Databricks, utilizing SQL for data transformation to produce analytics-ready formats, reinforcing data quality checks and optimizing reporting processes.
• Implemented streaming ingestion from AWS Kinesis and Kafka topics into Redshift and S3, enhancing real-time processing capabilities with an emphasis on data quality and reduced latency for critical applications.
• Developed RESTful APIs using Python and SQL to expose curated datasets to internal analytics teams and external clients, improving data accessibility for customized reporting and visualization use cases.
• Utilized CI/CD practices with Jenkins, Docker, and Git alongside Airflow for scheduling and managing batch processing pipelines, ensuring streamlined integration and production-ready deployments.
• Integrated TensorFlow and PyTorch scoring models via REST APIs, deploying Dockerized services on Kubernetes clusters to support scalable, low-latency model inference.
• Automated document classification and processing pipelines using AWS Lambda, S3 event triggers, and Textract APIs, reducing manual workload and expediting documentation processing for regulatory compliance.
• Built real-time dashboards using the ELK stack and AWS CloudWatch to monitor service performance, detect anomalies, and visualize throughput across microservices and streaming pipelines.
• Designed RBAC policies and IAM roles in AWS for secure access to Lambda functions, S3 buckets, and Redshift clusters, ensuring adherence to internal security and audit protocols.
• Leveraged AWS CloudFormation templates and Terraform to provision reproducible cloud infrastructure for development, testing, and production data pipelines.
• Integrated monitoring and alerting for ingestion jobs using Prometheus, Grafana, and AWS CloudWatch metrics, minimizing downtime and enhancing incident response.
• Managed schema registry for Kafka pipelines with Avro, ensuring backward compatibility and robust data validation across producers and consumers.
• Maintained operational runbooks and architecture diagrams for microservices and pipelines, supporting efficient cross-functional onboarding and reducing knowledge gaps.
• Conducted unit and integration tests using JUnit and Mockito, ensuring robust microservice code quality, reducing defects during regression, and maintaining data transformation integrity with DBT.
Intelenet Global Services Jun 2019 - Sep 2021
Data Engineer
• Created production-grade ETL pipelines using Spark and AWS Glue, leveraging SQL for data transformation to process structured and semi-structured data into optimized formats, and loading results into S3 and Redshift with rigorous data quality measures.
• Leveraged AWS Glue and PySpark for schema enforcement and comprehensive data quality checks, automating validations to minimize manual interventions during batch processing workflows.
• Designed and developed RESTful APIs using Python and SQL, exposing curated datasets via secure endpoints integrated with Apigee Gateway to facilitate reliable data consumption by downstream systems.
• Built Apache Kafka pipelines for asynchronous messaging between services, ensuring event durability and high-throughput processing within distributed microservice architectures.
• Implemented API authentication using OAuth 2.0 and API keys, ensuring secure data access and controlled usage for third-party partner applications.
• Wrote custom serializers and deserializers for Kafka topics to handle Avro-formatted payloads, improving schema compatibility and ensuring backward support for multiple consumers.
• Automated build and deployment processes using Jenkins pipelines, Docker, and Git for continuous integration and zero-downtime production releases of data ingestion jobs and microservices.
• Applied test-driven development practices using JUnit, Mockito, and Cypress, constructing unit and integration tests for APIs and batch jobs to reduce production defects and ensure regression safety.
• Integrated schema registry solutions to manage Kafka topic evolution, ensuring safe data consumption and compatibility for downstream consumers.
• Designed logging and alerting strategies using CloudWatch and the ELK stack, enhancing monitoring of job statuses and operational visibility.
• Collaborated with cross-functional teams to define data contracts and API specifications, accelerating the delivery of new data products and partner integrations.
• Participated in Agile sprints by contributing to design sessions, peer code reviews, and the delivery of production-ready pipelines aligned with evolving business and data analytics requirements.
Wipro Oct 2015 - Jun 2019
Junior Data Engineer
• Migrated legacy ETL processes to Spark-based pipelines using PySpark and Kubernetes, reducing processing time and improving scalability for large-scale batch analytics workloads in production environments.
• Built data ingestion workflows in Spark to process transactional records and load curated outputs into MongoDB and MySQL, supporting reporting, auditing, and reconciliation use cases.
• Developed Java and Python microservices for real-time processing of order and payment data, enabling integration with internal systems and downstream APIs across the financial data pipeline.
• Created REST APIs using Flask and Spring Boot to expose internal fraud detection results, reducing latency and allowing seamless communication with front-end platforms and other microservices.
• Containerized services with Docker and orchestrated them using Kubernetes to automate deployment, scaling, and fault tolerance across multiple dev and staging clusters.
• Designed and implemented schema validation for JSON payloads across microservices, improving data integrity and reducing ingestion errors from third-party integrations.
• Built internal API gateways to control and monitor service access, applying authentication mechanisms using tokens and role-based permission enforcement.
• Conducted unit testing and validation of Spark jobs and APIs using PyTest, JUnit, and Postman to ensure consistent functionality and meet release quality standards.
• Participated in sprint planning, backlog grooming, and code reviews with senior engineers, gaining exposure to Agile development practices and continuous integration workflows.
• Documented pipeline logic, data transformations, and integration workflows to ensure maintainability and support onboarding of new engineers to existing systems.
EDUCATIONAL DETAILS
University of the Cumberlands, United States
Master of Science, Information Systems Security
Jawaharlal Nehru Technological University, India
Bachelor of Science, Computer Science