Jai Prakash
AI / ML Engineer
Phone: 469-***-****
Email: **********.****@*****.***
PROFESSIONAL SUMMARY:
AI/ML Engineer with 5+ years of experience designing and deploying machine learning models, NLP solutions, and scalable data pipelines for production environments.
Proficient in Python, PyTorch, TensorFlow, Hugging Face, LangChain, and modern deep learning/NLP frameworks, with hands-on experience building end-to-end ML solutions.
Experienced in MLOps and productionization, using Docker, Kubernetes, CI/CD pipelines, and cloud platforms such as AWS and Azure to automate training, deployment, and monitoring.
Developed ML-driven microservices and intelligent applications including embeddings-based search, recommendation pipelines, and real-time inference systems.
Strong expertise in large-scale data engineering, integrating data ingestion, preprocessing, feature engineering, and model serving within distributed and cloud-native architectures.
TECHNICAL SKILLS:
Machine Learning / NLP: PyTorch, TensorFlow, Hugging Face, BERT, GPT, T5, Scikit-learn, LangChain
MLOps & DevOps: Docker, Kubernetes, Jenkins, GitHub Actions, MLflow, FastAPI
Cloud & Data: AWS (EC2, Lambda, S3), Azure (ADF, ML), Databricks, Snowflake, Spark
Databases: PostgreSQL, MongoDB, Redis, Pinecone, Weaviate
Programming & APIs: Python, SQL, FastAPI, Flask, REST APIs
PROFESSIONAL EXPERIENCE:
Client: UHG
Role: AI/ML Engineer
Duration: Sep 2024 – Present
Description: UnitedHealth Group is a leading healthcare and insurance provider managing large-scale clinical, claims, and administrative data. The organization focuses on improving patient outcomes through intelligent data processing, AI-driven automation, and digital transformation. Their analytics and engineering teams work extensively with machine learning, NLP, and cloud-native platforms to enhance care delivery and operational efficiency.
Responsibilities:
Designed and deployed NLP models (BERT, GPT, T5) for processing medical documents, claim notes, and summarization workflows.
Built RAG-based clinical knowledge retrieval using LangChain, Pinecone, and OpenAI APIs.
Developed scalable AI microservices using FastAPI and deployed them to AWS ECS / Lambda with Docker containers.
Implemented CI/CD pipelines in Jenkins and GitHub Actions to automate model training, packaging, and deployment.
Designed embeddings pipelines using Milvus and Chroma for similarity search across patient records.
Collaborated with data engineering teams to integrate Spark and Databricks pipelines into ML workflows.
Client: Sutherland, India
Role: ML Engineer
Duration: Oct 2021 – Jul 2023
Responsibilities:
Designed and implemented advanced machine learning solutions using Python, Spark, Hadoop, and AWS—including distributed Random Forest models and predictive analytics pipelines supporting large-scale data processing.
Developed end-to-end data architecture components, creating conceptual, logical, physical, and canonical data models to standardize data formats and enable seamless integration across enterprise ecosystems.
Executed complex SQL-based data engineering tasks, extensively working with MS SQL Server to retrieve, validate, transform, and generate clinical datasets with high accuracy and performance.
Built and optimized ETL workflows by performing rigorous testing and loading structured and unstructured data from multiple sources into warehouse layers using MLDM and dimensional modeling principles.
Leveraged advanced Python and ML libraries (Pandas, NumPy, Scikit-learn, NLTK, Matplotlib, Seaborn) to develop algorithms for classification, regression, clustering, and statistical analysis, driving actionable insights.
Utilized a broad big-data technology stack including Spark, Scala, Hadoop, HBase, Kafka, Spark Streaming, and MLlib to build scalable data pipelines, streaming solutions, and analytical models across distributed environments.
Client: Nelito Systems Pvt. Ltd, India
Role: ML Engineer
Duration: Jul 2019 – Sep 2021
Responsibilities:
Developed ML models for product recommendations, demand forecasting, and customer segmentation.
Cleaned and transformed large retail datasets using Pandas, NumPy, and Scikit-learn.
Implemented end-to-end ML pipelines on AWS using EC2, S3, and SageMaker.
Designed Tableau dashboards for business reporting and ML monitoring.
Built classification models (Logistic Regression, SVM, Random Forest) for analytics and decision workflows.
Integrated ML workflows with GitLab for version control and automated deployments.
EDUCATION:
Master’s in Computer Information Systems, Harrisburg University of Science and Technology.