Ravali P
Contact: +1-470-***-****
Title: AI/ML Engineer/Data Scientist/GenAI Engineer
E-Mail: **********@*****.***
LinkedIn: https://linkedin.com/in/ravali-pulipati-008a0a283
Summary:
8+ years of IT experience, including 1+ year as an AI/ML Engineer, designing, developing, and deploying intelligent systems across the Retail, Healthcare, Banking, and Financial domains.
Proven expertise in end-to-end ML lifecycle — data acquisition, feature engineering, model development, validation, deployment, monitoring, and retraining at scale.
Proficient in Python (2.x/3.x) and the scientific Python stack (NumPy, Pandas, SciPy, Matplotlib, scikit-learn), with deep knowledge of frameworks such as TensorFlow, PyTorch, XGBoost, and LightGBM.
Strong foundation in supervised, unsupervised, and deep learning — regression, classification, clustering, ensemble methods, SVM, CNNs, RNNs, and transformer-based architectures.
Designed and deployed Generative AI solutions (LLMs, text generation, document understanding, and image generation) using AWS Bedrock, Hugging Face, LangChain, and custom transformer models.
Developed and integrated NLP pipelines (BERT, GPT, spaCy, NLTK) for entity recognition, topic modeling, and sentiment analysis across large-scale text datasets.
Hands-on experience with time-series forecasting (ARIMA, GARCH) and graph-based learning, enabling predictive analytics for business-critical KPIs.
Built automated feature engineering and data processing pipelines using PySpark, SQL, and Airflow, operationalizing structured and unstructured data ingestion.
Designed and implemented MLOps workflows with MLflow, DVC, Kubeflow, SageMaker Pipelines, ensuring reproducibility, governance, and seamless CI/CD integration.
Deployed scalable ML APIs and microservices via Flask, FastAPI, Docker, and Kubernetes, supporting low-latency model inference and monitoring.
Established robust CI/CD pipelines using Jenkins, GitHub Actions, and AWS CodePipeline for automated build, test, and deployment of AI/ML workloads.
Proficient in AWS (EC2, S3, SageMaker, Lambda), Azure (ML Studio, Blob Storage), and GCP (Vertex AI, BigQuery) for cloud-native ML model training and deployment.
Implemented real-time model monitoring using Prometheus, Grafana, and Datadog, setting up alerts for data drift, concept drift, and performance decay.
Experienced in big data processing using Hadoop, Hive, and Spark (PySpark, Spark SQL) for distributed computation and scalable data engineering.
Expert in data visualization and storytelling, building interactive dashboards using Tableau, Power BI, Plotly, and Python visualization libraries for actionable insights.
Collaborated with cross-functional teams to translate business objectives into ML-driven outcomes, improving retention, fraud detection, and demand forecasting.
Strong communicator with the ability to bridge business and technical teams, presenting analytical findings and AI strategies to executive leadership.
Passionate about innovation in Generative AI, LLM fine-tuning, and AI-driven automation to create measurable business value and operational excellence.
Technical Skills:
Programming Languages
Python, Java, R
ML Frameworks
Scikit-learn, TensorFlow, PyTorch, XGBoost, LightGBM
Data Processing
Pandas, NumPy, Dask, PySpark, SQL
Feature Engineering
PySpark, Featuretools, Scikit-learn, SQL
MLOps & Lifecycle Tools
MLflow, DVC, TFX, Kubeflow, SageMaker Pipelines, Airflow
Model Deployment
Flask, FastAPI, Docker, Kubernetes, SageMaker, Vertex AI, Azure ML
NLP & LLMs
NLTK, spaCy, Hugging Face Transformers, BERT, GPT, LangChain
Computer Vision
OpenCV, YOLO, CNNs, Detectron2
Version Control
GitHub, GitLab, Bitbucket
Experiment Tracking
MLflow, Weights & Biases, Neptune.ai
Cloud Platforms
AWS (SageMaker, S3, Lambda), GCP (Vertex AI, BigQuery), Azure (ML Studio, Blob Storage)
Databases
PostgreSQL, MySQL, MongoDB, Cassandra, Snowflake, Redshift
Big Data & Streaming
Apache Kafka, Hadoop, Spark, AWS Kinesis
Visualization
Matplotlib, Seaborn, Plotly, Power BI, AWS QuickSight
Scripting & Automation
Bash, Python, PowerShell, Groovy
DevOps/CI/CD
Jenkins, GitHub Actions
Monitoring & Observability
Prometheus, Grafana, Datadog, ELK Stack, SageMaker Model Monitor
AI
Generative AI, Chatbots, Foundation Models
Professional Experience:
Georgia State University – Atlanta, GA Aug 2025 – Present
AI/ML Engineer – Python (Research Assistantship)
Technologies: Python, PyCharm, FastAPI, GenAI, LLMs, LangChain, AWS S3, GPT-4o, GPT-3.5, Image & Text Generation, PostgreSQL & pgvector.
Responsibilities:
Designed and deployed end-to-end machine learning pipelines for real-time classification, regression, NLP, and recommendation systems using Python, Scikit-learn, TensorFlow, and PyTorch.
Built scalable RAG-based GenAI applications leveraging LangChain, Hugging Face Transformers, and custom LLM integrations for knowledge retrieval and enterprise automation.
Fine-tuned and optimized Large Language Models (LLMs) using LoRA, QLoRA, TRL, and Axolotl, achieving reduced latency and improved domain-specific accuracy.
Engineered prompt-optimization and evaluation strategies to enhance LLM reasoning, coherence, and contextual understanding in production environments.
Developed automated document parsing and vectorization pipelines for chunking, embedding, and persisting data into vector databases such as FAISS and OpenSearch.
Designed and implemented semantic agents and knowledge interfaces enabling intelligent retrieval and reasoning across enterprise knowledge bases.
Delivered multimodal AI solutions combining NLP and computer vision techniques to address document analysis, OCR, and contextual understanding tasks.
Architected and optimized data ingestion and ETL workflows using PySpark, SQL, and Airflow to support model training and real-time analytics pipelines.
Built personalized recommendation systems using collaborative and content-based filtering, increasing customer engagement and retention.
Developed dynamic pricing algorithms leveraging demand forecasting and competitor data to improve margin optimization and business profitability.
Integrated and deployed ML models with cloud-native services including AWS (S3, Lambda, SageMaker, Bedrock), GCP (Vertex AI), and Azure ML Studio for scalable, low-latency inference.
Containerized and deployed AI workloads using Docker and orchestration platforms like Kubernetes (EKS/GKE) for fault-tolerant and distributed operations.
Applied MLOps best practices using CI/CD pipelines, Terraform, and model registries for automated training, deployment, and monitoring.
Built and deployed AI chatbots and assistants using customized AWS Bedrock agents integrated with RAG-based knowledge retrieval and OpenSearch vector stores.
Integrated Llama and other foundation models into enterprise portals for workflow mapping and intelligent process automation.
Implemented LLM guardrails at both prompt and output levels to mitigate hallucination, toxicity, and off-topic responses while ensuring compliance and safety.
Established model monitoring and retraining pipelines to maintain accuracy, detect data drift, and sustain performance during high-volume or seasonal traffic cycles.
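The chunk-embed-retrieve flow behind the RAG pipelines described above can be sketched in plain Python. This is a minimal illustration only: a bag-of-words vector stands in for the real transformer embeddings, and brute-force cosine search stands in for the FAISS/OpenSearch index; the function names are illustrative, not the production API.

```python
import math
import re

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping word windows, as in a RAG ingest step."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text: str) -> dict[str, int]:
    # Bag-of-words vector standing in for a neural sentence embedding
    # (assumption: the real pipeline used a transformer encoder + FAISS).
    vec: dict[str, int] = {}
    for token in re.findall(r"[a-z]+", text.lower()):
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

In the real pipeline the retrieved chunks would be stuffed into the LLM prompt as grounding context; the sketch stops at retrieval.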
Optum Global Solutions (India) Pvt. Ltd. – Hyderabad, India Apr 2022 – Nov 2024
Python Engineer
Technologies: Python, Keras, PyTorch, TensorFlow, LLMs, LangChain, CNNs, AWS, AWS Glue, AWS Bedrock, Anthropic, OpenAI, Llama, Pinecone, FAISS
Responsibilities:
Partnered with cross-functional data science, ML, and platform teams to gather requirements and design AI-driven services, model workflows, and intelligent user journeys.
Built and deployed Python-based APIs for ML inference and data processing using FastAPI and Flask, defining OpenAPI schemas with Pydantic and implementing OAuth2 and JWT authentication.
Integrated AI/ML microservices with enterprise and third-party systems via REST APIs, webhooks, and RBAC-secured token exchanges, ensuring secure and scalable integrations.
Designed and automated data ingestion and ETL pipelines in Python and Databricks, orchestrated model-training workflows with Apache Airflow, and implemented data versioning.
Developed data validation, schema enforcement, and quality monitoring layers to ensure reproducibility and accuracy of training datasets.
Engineered stateful ML pipelines enabling multi-step data transformations, contextual model execution, and adaptive retraining.
Implemented automated testing for ML APIs and pipelines using pytest, requests, and load-testing frameworks to ensure performance, scalability, and model reliability.
Established CI/CD pipelines using Git, Docker, Kubernetes, and Jenkins/GitHub Actions for continuous integration, model deployment, and environment synchronization.
Instrumented AI microservices with structured logging, distributed tracing, and custom metrics dashboards for SLO/SLA tracking and model latency analysis.
Performed performance profiling and optimization of ML inference services using cProfile, async I/O (asyncio), and Redis-based caching, improving throughput and latency.
Designed and secured ML endpoints and data flows with role-based access control, secrets management, input validation, and detailed audit logging.
Authored runbooks, enablement guides, and ML service documentation to streamline handoffs and improve operational readiness.
Built data transformation and analytics layers in Python, integrating insights and predictions into BI tools like Tableau and Power BI for business reporting.
Conducted A/B and canary deployments for new model versions, analyzing production metrics (latency, drift, accuracy, user engagement) to guide rollouts.
Collaborated with data engineers and MLOps teams to align on training schedules, infrastructure SLAs, and cloud deployment strategies.
Refactored legacy Python ML modules into modular, testable components, adding type hints, pre-commit checks, and linting for code consistency.
Implemented asynchronous, message-driven ML pipelines using Celery and Kafka to handle large-scale, event-based data and retraining tasks.
Deployed containerized ML microservices to AWS ECS/EKS, employing blue-green and rollback strategies for safe and seamless releases.
Created monitoring and incident-response playbooks with alerting and observability pipelines, reducing time-to-detect and resolve issues in production ML systems.
Persistent Systems Limited – Pune, India May 2019 – Apr 2022
Python Developer
Responsibilities:
Translated client requirements and project objectives into clear technical specs, user stories, and sprint deliverables.
Framed business problems as data and service workflows in Python, defining inputs, outputs, and success criteria.
Automated data pipelines and reporting with Python and Databricks notebooks, reducing manual effort and turnaround time.
Led daily syncs with management and client stakeholders to align milestones, surface risks, and ensure smooth transitions.
Built robust data preparation modules for NLP datasets in Python using Pandas, NumPy, scikit-learn/SciPy; validated, cleansed, and verified data integrity; visualized with Matplotlib/Seaborn.
Developed Python consumers to extract system-generated OLAP logs from Kafka topics with reliable offset management and retry handling.
Implemented PySpark streaming jobs to process application log data from Kafka and persist curated outputs for downstream use.
Wrote Python integrations to AWS S3 (boto3) for secure storage and lifecycle management of logs and artifacts.
Delivered text-mining and statistical modeling workflows on log data; exposed features and summaries for analytics and downstream services.
Built spaCy-based NER pipelines to extract entities from logs and packaged them as reusable Python components.
Produced ad-hoc analyses and operational reports; tracked experiments and metrics in MLflow and surfaced insights via Power BI.
Created CI/CD pipelines and automation scripts in Jenkins to build, test, containerize, and deploy Python services.
Authored PySpark jobs on Azure Databricks and integrated curated datasets with Power BI dashboards for consumption.
Implemented MLOps and job orchestration with Airflow and MLflow; containerized training/inference services with Docker; deployed workloads on AWS SageMaker when required.
Built RESTful APIs with FastAPI; defined Pydantic schemas and OpenAPI docs to enable reliable front-end/back-end communication.
Designed endpoints for authentication, data retrieval, and data mutations with input validation, RBAC, and audit logging to ensure secure operations.
Added structured logging, metrics, and alerting to monitor services in production and speed up incident detection and resolution.
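The consume-with-retry pattern behind the Kafka log consumers above (commit the offset only after processing succeeds, with bounded exponential-backoff retries) can be sketched against a stubbed consumer. The stub and handler are illustrative assumptions; the real consumers used kafka-python against live topics.

```python
import time

class StubConsumer:
    """In-memory stand-in for a Kafka consumer (assumption: real code used kafka-python)."""
    def __init__(self, messages):
        self.messages = list(messages)
        self.committed = -1  # last committed offset

    def poll(self):
        nxt = self.committed + 1
        return (nxt, self.messages[nxt]) if nxt < len(self.messages) else None

    def commit(self, offset):
        self.committed = offset

def consume(consumer, handler, max_retries=3, backoff=0.01):
    """Process records with at-least-once semantics: commit only on success."""
    processed = []
    while (record := consumer.poll()) is not None:
        offset, value = record
        for attempt in range(max_retries):
            try:
                processed.append(handler(value))
                consumer.commit(offset)  # commit only after the handler succeeds
                break
            except Exception:
                time.sleep(backoff * 2 ** attempt)  # exponential backoff
        else:
            break  # retries exhausted; offset stays uncommitted for reprocessing
    return processed
```

Because the offset is committed only after the handler returns, a crash mid-batch replays the uncommitted record on restart, which is the reliability property the log pipelines needed.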
IFACT Technologies – Hyderabad, India Aug 2017 – May 2019
Python Developer
Responsibilities:
Built and maintained RESTful APIs with FastAPI/Flask, defined Pydantic schemas, and implemented auth (OAuth2/JWT).
Wrote clean, modular Python using type hints, dataclasses, and PEP 8 standards; documented code with docstrings.
Developed ETL/data pipelines with Pandas and SQL; scheduled jobs via Airflow/cron; added basic data validation.
Designed and queried relational databases (PostgreSQL/MySQL), optimized SQL, and used SQLAlchemy/psycopg2 for persistence.
Created unit/integration tests with pytest and pytest-mock, targeting high code coverage and reliable CI checks.
Debugged and profiled services using logging, pdb, and cProfile; reduced latency and memory footprint where needed.
Containerized apps with Docker and shipped them via CI/CD (GitHub Actions/Jenkins); managed environment-specific configs and secrets.
Implemented async I/O with asyncio and added Redis caching/queues for throughput and resiliency.
Produced clear API docs with OpenAPI/Swagger, including example requests/responses for consumers.
Added observability: structured logs, metrics, and basic alerts; tracked error rates and request latency.
Followed Git best practices (branching, pull requests, code reviews) and collaborated in Agile/Scrum ceremonies.
Triaged and fixed defects; participated in on-call rotations with root-cause analysis and postmortems.
Built small CLI tools and automation scripts to streamline repetitive tasks for the team.
Enforced basic security: input validation, least-privilege access, secure secret handling, and dependency scanning.
Wrote concise technical docs/runbooks and demoed features to stakeholders at sprint reviews.
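The async-I/O-plus-caching pattern mentioned above can be sketched with asyncio and an in-memory TTL cache standing in for Redis. The backend call, keys, and payload are illustrative assumptions; the real services used redis-py against a shared Redis instance.

```python
import asyncio
import time

class TTLCache:
    """In-memory TTL cache standing in for Redis (assumption, not the production store)."""
    def __init__(self, ttl: float = 30.0):
        self.ttl = ttl
        self.store: dict[str, tuple[float, object]] = {}

    def get(self, key):
        hit = self.store.get(key)
        if hit and time.monotonic() - hit[0] < self.ttl:
            return hit[1]
        return None  # miss or expired

    def set(self, key, value):
        self.store[key] = (time.monotonic(), value)

cache = TTLCache()
CALLS = {"count": 0}  # counts real backend hits, to show the cache working

async def fetch_user(user_id: str) -> dict:
    """Return user data, skipping the slow backend on a cache hit."""
    cached = cache.get(user_id)
    if cached is not None:
        return cached
    CALLS["count"] += 1
    await asyncio.sleep(0.01)  # stands in for a database or HTTP round trip
    result = {"id": user_id, "plan": "basic"}
    cache.set(user_id, result)
    return result

async def main():
    # Two distinct users fetched concurrently, then a cached repeat for u1.
    await asyncio.gather(fetch_user("u1"), fetch_user("u2"))
    return await fetch_user("u1")
```

The event loop overlaps the two initial backend waits, and the repeat lookup never touches the backend, which is the throughput win the caching layer provided.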
Education:
Bachelor of Technology (B.Tech.), Jawaharlal Nehru Technological University – Kakinada, India, 2017