Zoya
Email: ***********@*****.***
Phone: +1-443-***-****
Summary:
Python Full-Stack Developer with 6+ years of experience delivering scalable, cloud-native applications and distributed backend systems. Expertise includes architecting microservices, designing high-performance REST APIs, and implementing asynchronous, event-driven workflows using Python frameworks such as FastAPI, Django, and Flask. Skilled in integrating modern frontend technologies, including React and Angular, to build seamless end-to-end solutions.
Highly proficient in AWS and GCP ecosystems, containerization with Docker, orchestration using Kubernetes, and the development of automated CI/CD pipelines. Strong background in secure system design, authentication and authorization mechanisms, logging and monitoring strategies, and optimized database modeling across relational and NoSQL systems.
Hands-on experience with advanced data and streaming technologies such as Apache Kafka, PySpark, TensorFlow, and AWS Glue, enabling the creation of robust, data-driven platforms. Demonstrated ability to incorporate AI/ML components including model training, inference pipelines, and data preprocessing workflows into production-grade applications to enhance automation, intelligence, and system performance.
Recognized for delivering clean, maintainable code, implementing industry best practices, and leveraging emerging technologies to strengthen system reliability and scalability.
Skills:
Programming: Python, SQL, JavaScript
Python Frameworks: FastAPI, Django, Flask, PySpark
AI/ML: TensorFlow, PyTorch, Scikit-learn, MLflow
Cloud & DevOps: AWS, Docker, Kubernetes (EKS), Terraform, GitHub Actions, Jenkins
Data Engineering: Databricks, Spark, AWS Glue, ETL Pipelines
Databases: PostgreSQL, MySQL, MongoDB, Redis, Snowflake
APIs & Microservices: REST APIs, GraphQL, Microservices Architecture
Streaming & Messaging: Kafka, RabbitMQ
Frontend: React, Next.js, Angular
Security: OAuth2, JWT, IAM
Monitoring: CloudWatch, OpenSearch
Visualization: Tableau, Dash, Plotly
Client: Stripe – CA, USA
Python Full Stack Developer Mar 2023 – Present
Client Summary: Stripe is a leading global payments infrastructure provider powering real-time online transactions, fraud detection, financial automation, and developer-centric payment solutions for enterprises and startups.
Project Summary: Building and enhancing full-stack financial dashboards, API-driven payment systems, authentication modules, and microservices powering real-time transaction workflows. Developed frontend dashboards integrated with scalable Python-based backend services.
Roles & Responsibilities
Developed Python applications for data-intensive use cases, integrating Pandas, NumPy, and SQLAlchemy for efficient data manipulation and transformation.
Designed and implemented scalable applications using object-oriented programming (OOP) principles including inheritance, polymorphism, and abstraction to improve code reusability and maintainability.
Designed, developed, and maintained scalable Python applications and automation scripts for data processing, ETL workflows, and system integration.
Built high-throughput data processing services supporting real-time transaction ingestion, feature computation, and fraud-signal extraction.
Designed and deployed end-to-end ML pipelines for fraud detection, anomaly detection, and payment-risk scoring using TensorFlow, PyTorch, scikit-learn, and Hugging Face.
Designed and deployed real-time fraud detection models using Python, PyTorch, and Scikit-learn integrated with AWS Lambda, SQS, DynamoDB, and API Gateway.
Designed, deployed, and managed containerized applications on Kubernetes clusters (EKS) to ensure high availability and scalability.
Built and optimized end-to-end ETL pipelines using Databricks, PySpark, Snowflake, and AWS Glue, ensuring reliable ingestion, transformation, and analytics.
Implemented distributed data pipelines leveraging Spark SQL, Delta Lake, and Auto Loader in Databricks for large-scale batch and streaming data.
Architected and optimized SQL queries, stored procedures, and triggers across PostgreSQL, MySQL, and Oracle PL/SQL, improving query performance and scalability.
Developed and deployed Python RESTful APIs using Flask, Django, and FastAPI, integrating seamlessly with frontends and external systems.
Implemented secure authentication and authorization using OAuth2, JWT, and AWS API Gateway to ensure compliance with governance and data security standards.
Implemented Helm charts and Kubernetes manifests for seamless application deployments and version control.
Designed and deployed microservices architectures with Python, Docker, and Kubernetes, ensuring high availability and fault tolerance in AWS environments.
Built modular Python libraries and reusable classes to streamline data processing and automation workflows across multiple projects.
Automated deployments and infrastructure provisioning using Terraform, Jenkins, GitHub Actions, and AWS CodePipeline, streamlining CI/CD processes.
Conducted code reviews, unit testing, and integration testing with Pytest, PyUnit, Jest, and React Testing Library, maintaining high code quality and coverage.
Built NoSQL data pipelines leveraging MongoDB, Cassandra, and Redis, optimizing aggregation pipelines, indexing strategies, and cache performance.
Built and optimized CI/CD pipelines (Jenkins/GitHub Actions) for automated container builds and deployments into EKS.
Developed real-time data processing pipelines with Kafka, PySpark, and PostgreSQL, improving event-driven workflows and reducing processing latency.
Integrated AWS-native services including Lambda, S3, DynamoDB, RDS, SNS, and CloudFormation for serverless and scalable data workflows.
Designed asynchronous task processing with Celery, RabbitMQ, and Redis, reducing API response times and improving throughput.
Built interactive dashboards and reporting tools using Flask, Dash, React.js, Next.js, and Angular, enabling real-time financial and operational analytics.
Developed and optimized fraud-scoring models with advanced feature engineering, hyperparameter tuning, and experiment tracking using MLflow and Weights & Biases.
Deployed batch and real-time ML models using Kubernetes (EKS), Docker, Triton Inference Server, BentoML, and ONNX for low-latency production inference.
Built reusable Python libraries for data validation, feature transformation, aggregation logic, and fraud-rule computation used across multiple ML teams.
Developed data security and compliance frameworks with AWS Lake Formation, IAM policies, and encryption protocols, ensuring governance across sensitive datasets.
Implemented serverless architectures using AWS Lambda and API Gateway, reducing operational costs while maintaining performance.
Designed and optimized ETL pipelines for healthcare and financial systems, automating claims, billing, and reconciliation workflows.
Deployed containerized applications on AWS EKS, integrating CI/CD pipelines for continuous delivery and scalability.
Configured centralized logging and monitoring with Amazon OpenSearch, CloudWatch, CloudTrail, and VPC Flow Logs, enabling real-time observability and auditing.
Utilized MLflow, Apache Airflow, and AWS SageMaker to build MLOps workflows, supporting deployment, retraining, and monitoring of machine learning models.
Developed real-time EDI monitoring dashboards and automated HL7/EDI transaction processing using Python and Kafka for healthcare data exchange.
Integrated GraphQL APIs with React and Next.js applications to optimize client-server communication and frontend performance.
Integrated third-party and internal Stripe APIs to enhance fraud detection signals, user risk profiling, and transaction enrichment pipelines.
Designed dashboards and visualizations using Plotly, Dash, and Matplotlib to present model performance, fraud-capture rates, and anomaly trends to stakeholders.
Built multi-modal ML models incorporating device fingerprints, text signals, behavioral analytics, and transactional metadata to improve fraud-capture accuracy.
Applied 12-factor app principles in designing microservices, ensuring fault tolerance, scalability, and cloud-native best practices.
Collaborated with cross-functional teams including data engineers, analysts, and business stakeholders to deliver efficient, governed, and high-performing solutions.
Ensured data governance, compliance, and security across AWS pipelines by implementing encryption, role-based access, and auditing mechanisms.
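The OOP design work described above (inheritance, polymorphism, abstraction) can be illustrated with a minimal, hypothetical sketch; the class and method names here are invented for this example and are not taken from any client codebase:

```python
from abc import ABC, abstractmethod


class PaymentProcessor(ABC):
    """Abstract base class: subclasses share a common charge() contract."""

    def __init__(self, fee_rate: float) -> None:
        self.fee_rate = fee_rate

    @abstractmethod
    def charge(self, amount_cents: int) -> int:
        """Return the total charged, including the processor's fee."""

    def fee(self, amount_cents: int) -> int:
        # Shared helper inherited by all subclasses.
        return round(amount_cents * self.fee_rate)


class CardProcessor(PaymentProcessor):
    def charge(self, amount_cents: int) -> int:
        # Percentage-based fee on top of the base amount.
        return amount_cents + self.fee(amount_cents)


class BankTransferProcessor(PaymentProcessor):
    def charge(self, amount_cents: int) -> int:
        # Bank transfers here use a flat fee instead of a percentage.
        return amount_cents + 25


def total_charged(processors, amount_cents: int) -> int:
    # Polymorphism: callers depend only on the abstract interface.
    return sum(p.charge(amount_cents) for p in processors)
```

Because `total_charged` works against the abstract interface, new processor types can be added without changing any calling code.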
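As a toy illustration of the fraud-signal extraction mentioned above, a simple z-score flag over transaction amounts might look like the sketch below. This is a minimal stand-in only; production fraud models draw on far richer features than raw amounts:

```python
import numpy as np


def zscore_anomalies(amounts, threshold: float = 3.0):
    """Return indices of amounts more than `threshold` standard
    deviations from the sample mean (a crude anomaly signal)."""
    x = np.asarray(amounts, dtype=float)
    mu, sigma = x.mean(), x.std()
    if sigma == 0:
        # All values identical: nothing can be anomalous.
        return np.array([], dtype=int)
    z = np.abs((x - mu) / sigma)
    return np.flatnonzero(z > threshold)
```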
Client: Newfront – CA, USA
Python Developer Aug 2020 – Feb 2023
Client Summary: Newfront is an insurance and risk-management technology platform providing data-driven insights and automation for underwriting, claims, and risk modeling.
Project Summary: Built internal full-stack insurance automation tools, document workflows, dashboards, and underwriting management applications. Designed backend microservices and interactive frontend systems for brokers and underwriters.
Roles & Responsibilities
Designed, developed, and implemented Python software architectures for the extraction, storage, and presentation of data in accordance with specified requirements.
Executed application development tasks, addressed performance concerns, and managed software maintenance.
Designed, developed, and maintained scalable Python applications and scripts for automation, data processing, and system integration.
Built and optimized end-to-end ETL pipelines using Databricks, PySpark, and GCP BigQuery, ensuring reliable ingestion, transformation, and analytics.
Implemented distributed data pipelines leveraging Spark SQL, Delta Lake, and Databricks notebooks for high-volume structured and unstructured data.
Developed and deployed RESTful APIs with Flask, Django, and FastAPI, enabling seamless integration between services and client applications.
Developed and optimized RESTful APIs and microservices in Python, ensuring high performance, scalability, and maintainability.
Designed and optimized SQL queries, stored procedures, and indexing strategies in PostgreSQL, MySQL, and BigQuery for large-scale financial and operational datasets.
Developed NoSQL solutions using MongoDB, Cassandra, and Couchbase, implementing partitioning, replication, and change streams for high availability.
Monitored and troubleshot Kubernetes clusters using kubectl, CloudWatch, and Prometheus/Grafana, improving reliability and uptime.
Built microservices architectures with Python, Docker, and Kubernetes (GKE), ensuring high availability, scalability, and fault tolerance.
Automated deployments and infrastructure provisioning using Terraform, GitHub Actions, and Cloud Build, streamlining CI/CD workflows.
Conducted code reviews, unit testing, and integration testing with Pytest, coverage reports, and CI/CD pipelines, ensuring code reliability and maintainability.
Integrated Redis for caching, rate limiting, and session management, reducing API response times and preventing abuse.
Designed and implemented data warehousing solutions with Snowflake and GCP BigQuery, leveraging partitioning, clustering, and time travel for advanced data management.
Developed real-time streaming applications using Apache Spark and Kafka, improving data processing speed and throughput.
Developed interactive dashboards and data visualization tools using Django, Flask, React.js, Next.js, and Three.js for advanced analytics.
Designed event-driven and domain-driven architectures for scalable financial and healthcare systems using Python and microservices principles.
Configured and maintained containerized applications using Docker and Kubernetes (GKE), optimizing resource utilization and reliability.
Implemented CI/CD pipelines for automated builds, testing, and deployments using GitHub Actions and GCP Cloud Build.
Developed real-time EDI/HL7 processing pipelines with Python and Kafka for healthcare and financial transaction systems.
Optimized trade processing workflows with Python and Golang, reducing execution times and improving system efficiency.
Integrated GraphQL APIs with React and Next.js frontends, optimizing client-server communication and data-fetching strategies.
Implemented business process automation using Camunda BPMN 2.0, streamlining operational workflows.
Collaborated with cross-functional teams including data engineers, analysts, and product stakeholders to deliver efficient, governed, and scalable GCP-based solutions.
Ensured data governance, compliance, and security by applying encryption, IAM policies, and role-based access control across all GCP services.
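A minimal sketch of the extract-transform-load pattern described above, using stdlib sqlite3 as a self-contained stand-in for the actual Databricks/BigQuery targets (the table and column names are hypothetical):

```python
import sqlite3


def run_etl(raw_rows, conn):
    """Toy ETL: validate raw (policy_id, premium) rows, then load them."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS policies "
        "(policy_id TEXT PRIMARY KEY, premium REAL)"
    )
    cleaned = []
    for policy_id, premium in raw_rows:
        try:
            value = float(premium)   # transform: coerce premium to float
        except (TypeError, ValueError):
            continue                 # drop rows that fail validation
        if value >= 0:
            cleaned.append((policy_id.strip(), value))
    conn.executemany("INSERT OR REPLACE INTO policies VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)


conn = sqlite3.connect(":memory:")
loaded = run_etl([(" P1 ", "100.5"), ("P2", "oops"), ("P3", "-5")], conn)
```

The same extract/validate/load shape carries over to distributed engines; only the sink and the scale change.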
Python Developer – Internship Jan 2020 – Jul 2020
Roles & Responsibilities
Built CRUD REST APIs using Flask and Django.
Developed React components for internal dashboards.
Integrated Elasticsearch for advanced search functionality.
Performed frontend debugging using Chrome DevTools.
Wrote unit tests using Pytest and Jest.
Created ETL scripts for data migration and cleanup.
Implemented data validation and schema enforcement using Pydantic.
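The validation and schema-enforcement pattern from the last bullet, sketched here with a stdlib dataclass so the example is self-contained (the actual work used Pydantic models; the field names are illustrative):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Transaction:
    tx_id: str
    amount_cents: int

    def __post_init__(self):
        # Schema enforcement: reject malformed records at construction
        # time, the same guarantee a Pydantic model provides.
        if not self.tx_id:
            raise ValueError("tx_id must be non-empty")
        if not isinstance(self.amount_cents, int) or self.amount_cents < 0:
            raise ValueError("amount_cents must be a non-negative integer")
```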
Client: Greenlight – CA, USA
Data Analyst Nov 2017 – Dec 2019
Client Summary: Greenlight is a financial technology company providing financial analytics, digital banking insights, and real-time transaction intelligence for customers.
Project Summary: Developed advanced analytics dashboards, automated reporting pipelines, and ETL systems that powered financial risk insights, spending behavior analytics, and performance metrics.
Roles & Responsibilities
Built analytics dashboards using Python, SQL, Tableau, and Pandas.
Automated ETL pipelines for financial transaction data using Python scripts and SQL stored procedures.
Created forecasting models using time-series methods (ARIMA and Prophet).
Designed data validation scripts for data quality and integrity checks.
Optimized SQL queries and built complex analytical datasets in MySQL and PostgreSQL.
Built KPI computation engines for financial reporting.
Developed anomaly detection scripts for transaction irregularities.
Built API data ingestion tools using Python (Requests, Asyncio).
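A minimal sketch of the async ingestion pattern described above, with the HTTP call simulated so the example is self-contained (the endpoint name is hypothetical; the real tools used Requests and Asyncio against live APIs):

```python
import asyncio


async def fetch_page(endpoint: str, page: int) -> list:
    """Stand-in for an HTTP call; simulates latency then returns rows."""
    await asyncio.sleep(0.01)
    return [{"endpoint": endpoint, "page": page}]


async def ingest(endpoint: str, pages: int) -> list:
    # Fan out page fetches concurrently, then flatten the batches.
    # asyncio.gather preserves the order of the awaitables it is given.
    batches = await asyncio.gather(
        *(fetch_page(endpoint, p) for p in range(pages))
    )
    return [row for batch in batches for row in batch]


rows = asyncio.run(ingest("/transactions", 3))
```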
Data Analyst – Internship April 2017 – Oct 2017
Roles & Responsibilities
Performed data preprocessing, feature generation, and trend analysis.
Created automation scripts for daily/weekly data workflows.
Extracted and transformed datasets using SQL and Python.
Developed internal visualizations for executive reporting.
Degree: Bachelor of Science in Management Information Systems
College/University: University of Maryland Global Campus