
Generative AI & Full-Stack Engineer

Location:
Fort Worth, TX
Posted:
February 12, 2026


Resume:

Jagadeesh Vadlamudi

Gen AI/ML Engineer

+1-405-***-****

***************@*****.***

LinkedIn: https://www.linkedin.com/in/jaga-deesh-3a8204172/

PROFESSIONAL SUMMARY:

Generative AI & Full-Stack Engineer with 6 years of experience architecting, developing, and supporting enterprise-grade applications across multiple domains, including healthcare, finance, insurance, and retail.

Proficient in Python, JavaScript, TypeScript, React.js, Node.js, FastAPI, Flask, and Django for building scalable full-stack and AI-driven web applications.

Developed AI copilots for enterprise workflows leveraging OpenAI function calling and LangChain tools for CRM, ERP, and ITSM systems.

Expertise in prompt evaluation and LLM benchmarking (RAGAS, EvalFlow, LLM-as-a-Judge) for model reliability and hallucination control.

Delivered domain-specific fine-tuned LLMs using LoRA, PEFT, and QLoRA for cost-efficient deployment on Azure and AWS Bedrock.

Skilled in integrating external APIs (SAP, Salesforce, ServiceNow, Jira) into conversational AI pipelines for end-to-end task automation.

Hands-on experience integrating Large Language Models (LLMs) such as OpenAI, Anthropic, LLaMA, and Falcon into production-grade applications.

Experienced in designing multi-agent architectures (AutoGen, CrewAI, LangGraph) for complex enterprise automation.

Proficient in vector database optimization and RAG performance tuning using FAISS, Pinecone, and ChromaDB for low-latency retrieval.

Implemented LLMOps pipelines integrating PromptLayer, PromptFlow, and Vertex AI Monitoring for scalable LLM lifecycle management.

Adept at AI Governance and Ethical AI implementation, ensuring data privacy, compliance, and model transparency.

Implemented CI/CD pipelines using Jenkins, GitHub Actions, and Azure DevOps, ensuring seamless integration, automated testing, and zero-downtime deployments.

Experienced in data engineering and ETL workflows, connecting enterprise data sources, APIs, and vector databases for AI model enrichment and analytics.

Strong understanding of MLOps and LLMOps practices — including model retraining, versioning, and performance monitoring with MLflow, Weights & Biases, and EvidentlyAI.

Proficient in prompt engineering, fine-tuning, and instruction tuning for custom domain models and intelligent automation solutions.

Delivered end-to-end AI copilots, knowledge assistants, and enterprise chatbots by combining React, LangChain Agents, and OpenAI Function Calling (a brief illustrative sketch of the function-calling pattern follows this summary).

Skilled in developing Retrieval-Augmented Generation (RAG) pipelines using LangChain, FAISS, Pinecone, and ChromaDB for contextual and domain-specific AI systems.

Strong background in frontend engineering using React.js, Next.js, Redux, HTML5, CSS3, and RESTful APIs for creating responsive and intuitive user interfaces.

Experience with backend APIs and microservices architecture using FastAPI, Flask, and Node.js/Express, integrating with databases like PostgreSQL, MySQL, and MongoDB.

Adept at legacy system modernization and application support, maintaining and upgrading systems built on .NET, Java, and Oracle PL/SQL environments.

Skilled in cloud platforms (AWS, Azure, GCP) and containerization tools (Docker, Kubernetes) for deploying and managing full-stack and AI applications at scale.

Experienced in application performance optimization, incident management, and production support ensuring SLA compliance and high system availability.

Collaborative team player skilled in working with cross-functional teams — developers, data scientists, and business analysts — to translate technical designs into robust, real-world solutions.
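
The copilot bullet above references OpenAI function calling; the following is a minimal, hypothetical sketch of that pattern using the OpenAI Python SDK. The create_ticket tool, its fields, and the model choice are illustrative assumptions, not artifacts of any engagement described in this resume.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical ITSM tool schema the model may choose to call.
    tools = [{
        "type": "function",
        "function": {
            "name": "create_ticket",
            "description": "Open an ITSM ticket on behalf of the user.",
            "parameters": {
                "type": "object",
                "properties": {
                    "summary": {"type": "string"},
                    "priority": {"type": "string", "enum": ["low", "medium", "high"]},
                },
                "required": ["summary"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "My VPN is down; please raise a high-priority ticket."}],
        tools=tools,
    )

    # If the model elected to call the tool, its name and JSON arguments are
    # returned for the application to execute against the real ITSM API.
    call = resp.choices[0].message.tool_calls[0]
    print(call.function.name, call.function.arguments)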

TECHNICAL SKILLS:

Programming Languages

Python, R, SQL, Java, C#

Databases

MySQL, MongoDB, MS SQL Server, Neo4j

Gen AI & Machine Learning

Transformers, LangChain, LlamaIndex, Diffusers, PEFT, Accelerate, Gradio, NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn, Plotly, SciPy, Statsmodels, XGBoost, LightGBM, CatBoost, TensorFlow, Keras, PyTorch, spaCy, NLTK

Cloud Technologies

Azure (OpenAI, Machine Learning, Cognitive Search, Cognitive Services, Databricks, Kubernetes Service, Functions, Logic Apps, Synapse Analytics), AWS (Bedrock, SageMaker, ECS, EKS, CloudFront, API Gateway, Lambda, DynamoDB, S3, RDS, Aurora)

Web Development

Django, Flask, FastAPI, pytest, Selenium, Gunicorn, Uvicorn, Pydantic

MLOps Technologies

MLflow, DVC, Optuna, Grafana, Prometheus, Airflow

LLM Providers

OpenAI (GPT-3, GPT-3.5, GPT-4, ChatGPT, Codex, DALL·E, Whisper, CLIP), Anthropic (Claude 1, Claude 2, Claude 3), Meta (LLaMA), DeepSeek, Hugging Face (BERT, GPT-2, T5, RoBERTa, DistilBERT)

Frameworks/Libraries

PyTorch, TensorFlow, Hugging Face Transformers, LangChain, LlamaIndex, FastAPI, Flask, Streamlit, Gradio

RAG & LLMOps Tools

LangChain, LlamaIndex, Haystack, AutoGen, DSPy, PromptLayer, PromptFlow

PROFESSIONAL EXPERIENCE:

Client: Baylor Scott & White Health, Dallas, TX. Jun 2024 – Present

Role: Gen AI Engineer

Responsibilities:

Architected and deployed enterprise-grade Agentic AI solutions on Azure AI and GCP Vertex AI, automating claims intake, underwriting analysis, and policy validation workflows using LangChain, LangGraph, OpenAI GPT-4, and AI Platform.

Built multi-agent orchestration systems using AutoGen and LangGraph, coordinating task-specific sub-agents for document validation and claim adjudication.

Integrated PromptFlow pipelines on Azure for automated prompt testing, drift analysis, and evaluation scoring.

Implemented RAG evaluation metrics (Precision@K, Recall@K, RAGAS) to assess contextual accuracy and retrieval quality.

Designed custom embeddings with InstructorXL and OpenAI text-embedding-3-large for improved semantic recall and domain adaptability.

Automated continuous retraining workflows using Vertex AI Pipelines and MLflow for versioned model improvement.

Deployed adaptive caching mechanisms for API-based model inference, reducing latency and cost across AI endpoints.

Built end-to-end RAG (Retrieval-Augmented Generation) pipelines integrating LangChain, FAISS, Vertex AI Pipelines, and BigQuery ML for contextually accurate document retrieval and policy summarization (an illustrative retrieval sketch appears after this role's Environment line).

Designed and implemented intelligent agents using LangChain Agents and LangGraph, enabling multi-step reasoning workflows for risk scoring, compliance validation, and financial audit reporting.

Developed custom LLMs using TensorFlow Extended (TFX), Vertex AI AutoML, and fine-tuned them on enterprise datasets for personalized recommendations and predictive insights.

Integrated AI pipelines with Cloud Functions, Cloud Run, Pub/Sub, and other cloud-native services, ensuring scalable and event-driven execution across enterprise systems.

Developed RESTful APIs using FastAPI, implementing AAD authentication, RBAC, and secure enterprise integration.

Monitored and optimized AI agent performance using LangSmith, Vertex AI Model Monitoring, and automated retraining pipelines with CI/CD (Cloud Build, GitHub Actions, Terraform).

Containerized AI workflows with Docker and deployed on GKE/Azure Kubernetes Service (AKS) for high availability and scalability.

Leveraged Neo4j and vector databases for semantic search, embedding-based reasoning, and context-aware document retrieval.

Applied advanced prompt engineering techniques including Chain-of-Thought (CoT), few-shot prompting, and tool-augmented reasoning to reduce hallucination and improve decision quality.

Ensured compliance with HIPAA, GDPR, and enterprise governance, implementing PII masking, encryption, and ethical AI principles.

Collaborated with cross-functional teams (data scientists, product owners, risk analysts) to translate business rules into scalable Agentic AI solutions.

Environment: Python, FastAPI, React.js, LangChain, LangGraph, OpenAI GPT-4, Azure AI Search, Azure Data Factory, RAGAS, LangSmith, Docker, Kubernetes (AKS), GitHub Actions, Pinecone, FAISS, Redux, JavaScript, HTML5/CSS3, Azure Blob Storage, MLOps, LLMOps, RAG Pipelines, Unsupervised Learning (K-Means, PCA).
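
As referenced above, a minimal, hypothetical sketch of the LangChain + FAISS retrieval pattern. The two-document corpus, embedding model, and prompt wording are illustrative assumptions; it assumes the langchain-openai, langchain-community, and faiss-cpu packages are installed.

    from langchain_openai import OpenAIEmbeddings, ChatOpenAI
    from langchain_community.vectorstores import FAISS

    # Stand-in policy corpus; a real pipeline would chunk documents first.
    docs = [
        "Policy A covers outpatient imaging and lab work.",
        "Policy B excludes elective procedures.",
    ]
    store = FAISS.from_texts(docs, OpenAIEmbeddings(model="text-embedding-3-large"))

    question = "Does Policy A cover imaging?"
    # Retrieve the top-k most similar chunks and pack them into the prompt.
    context = "\n".join(d.page_content for d in store.similarity_search(question, k=2))

    llm = ChatOpenAI(model="gpt-4")
    answer = llm.invoke(
        f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    )
    print(answer.content)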

Client: U.S. Bank, Minneapolis, MN. Mar 2023 – May 2024

Role: AI/ML Engineer

Responsibilities:

Integrated AI observability tools (LangFuse, LangSmith, Grafana) for tracing LLM call chains and response reliability.

Implemented data synthesis workflows using synthetic data generation (GPT-4 + Faker) for bias mitigation and model robustness.

Enhanced document intelligence using multimodal LLMs (LLaVA, CLIP) to parse scanned KYC and financial forms.

Developed enterprise-grade LLM-driven AI agents using OpenAI GPT-4, LLaMA, Claude, and LangChain, enabling autonomous document verification, customer support, and banking query resolution.

Designed RAG pipelines leveraging LlamaIndex, Azure AI Search, and embedding-based semantic retrieval for regulatory-compliant decision support.

Engineered Agentic AI workflows using LangGraph and CrewAI to automate complex loan-approval reasoning and compliance verification.

Developed fine-tuning pipelines (LoRA, QLoRA, PEFT) for lightweight adaptation of GPT-4 and Claude models to banking datasets (an illustrative LoRA configuration sketch appears after this role's Environment line).

Built intelligent chatbot interfaces using LangChain for omnichannel customer support, integrating with KYC, account, and loan workflows.

Developed predictive models (XGBoost, LightGBM, PyTorch) for credit risk, loan default prediction, and customer retention analytics.

Implemented NLP pipelines using spaCy, NLTK, and transformers for document summarization, entity extraction, and automated compliance workflows.

Integrated AI/ML workflows with real-time streaming systems via Apache Kafka, Airflow, and Spark Streaming for fraud detection and operational alerts.

Fine-tuned LLMs such as LLaMA, Mistral, and Falcon on domain-specific data to improve summarization, intent classification, and AI-driven recommendations.

Built semantic search solutions using vector databases (Pinecone, ChromaDB, Weaviate) and graph-based reasoning with Neo4j and GraphRAG.

Implemented MLOps pipelines with Docker, Kubernetes, and Azure ML, automating model deployment, monitoring, version control, and CI/CD workflows.

Developed scalable APIs using FastAPI and Azure Functions, integrating AI agents into enterprise systems securely.

Applied unsupervised learning (K-Means, Hierarchical Clustering) for customer segmentation and fraud pattern detection.

Led cross-functional collaboration with compliance, governance, and risk management teams to ensure ethical AI practices and adherence to GDPR and PCI DSS standards.

Environment: Python, PyTorch, TensorFlow, LangChain, LlamaIndex, Azure AI Search, Kafka, Airflow, Spark Streaming, FastAPI, Docker, Kubernetes, Pinecone, ChromaDB, LightGBM, XGBoost, Scikit-learn, Tableau, Dash, SHAP, LIME, MLOps, Prompt Flow, RAG Pipelines, NLP, SQL, Azure Cloud, AWS EMR, Pandas, NumPy.
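
As referenced above, a minimal, hypothetical sketch of attaching a LoRA adapter with Hugging Face PEFT. The base checkpoint and hyperparameters are illustrative assumptions (the named Llama 2 checkpoint is gated; any causal LM works).

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; substitute any causal LM
    model = AutoModelForCausalLM.from_pretrained(base)
    tokenizer = AutoTokenizer.from_pretrained(base)

    config = LoraConfig(
        r=8,                                  # low-rank dimension of the adapter
        lora_alpha=16,                        # scaling applied to adapter updates
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    # Only the adapter weights train, typically well under 1% of the base model.
    model.print_trainable_parameters()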

Client: NTT DATA, Hyderabad, India. Sep 2020 - Aug 2022

Role: AI/ML Engineer

Responsibilities:

Implemented LLM-driven synthetic data generators to simulate patient scenarios for privacy-preserving training.

Leveraged Hugging Face Hub CI/CD and MLflow for reproducible medical model lifecycle management.

Created clinical entity recognition (NER) and text summarization models leveraging BERT, BioBERT, and LlamaIndex to support physician decision-making.

Engineered RAG-based knowledge retrieval systems integrated with LangChain and FAISS, enabling fast access to internal guidelines and research literature.

Built time-series forecasting models using LSTMs and Prophet to optimize patient appointment scheduling and resource utilization.

Developed deep learning architectures (CNN, RNN, Transformer) for diagnostic image classification and clinical report automation.

Designed and deployed end-to-end ML pipelines using Python, TensorFlow, and PyTorch for predictive analytics in claims processing and patient risk assessment.

Built NLP-driven data extraction pipelines using Hugging Face Transformers, spaCy, and NLTK to structure unstructured EHR and clinical note data.

Developed fraud detection models for claims processing using XGBoost, Scikit-learn, and SMOTE, reducing false positives and improving audit precision.

Integrated Agent-based LLMs for clinical decision support using LangChain Agents combined with MedPaLM-like domain fine-tuning.

Developed automated summarization and report drafting pipelines for radiology and lab data using GPT-3 and BioGPT models.

Deployed AI/ML models as RESTful APIs using Flask, FastAPI, and Docker, orchestrated through Kubernetes clusters on AWS EKS.

Automated data pipelines using Apache Airflow and Dataiku DSS, integrating data ingestion, transformation, and model retraining workflows.

Applied PCA and clustering algorithms (K-Means, DBSCAN) to segment patient data and uncover hidden trends for operational research insights.

Integrated real-time analytics pipelines with Kafka and Spark Streaming, improving fraud detection latency and anomaly alerting accuracy.

Developed custom dashboards using Tableau and Plotly Dash to visualize risk scores, prediction confidence, and model drift trends for business stakeholders.

Implemented explainable AI (XAI) techniques using SHAP and LIME to interpret high-risk predictions and ensure model transparency for compliance teams (an illustrative SHAP sketch appears after this role's Environment line).

Leveraged AWS SageMaker, Lambda, and EC2 for scalable model training, deployment, and monitoring in HIPAA-compliant environments.

Implemented MLOps frameworks using MLflow, DVC, and GitHub Actions to streamline CI/CD, experiment tracking, and version control for ML assets.

Performed extensive feature engineering, model validation, and performance tuning using cross-validation, AUC-ROC, and F1-score metrics.

Collaborated with cross-functional teams (Data Engineering, DevOps, Product) to deliver production-grade ML services aligned with cloud-native best practices.

Ensured data security, model governance, and regulatory compliance (HIPAA, SOC2) in all data processing and model deployment workflows.

Environment: Python, TensorFlow, PyTorch, Scikit-learn, XGBoost, Hugging Face, Spark, Dataiku, Airflow, AWS SageMaker, Docker, Kubernetes, MLflow, FastAPI, Pandas, NumPy, Git, RESTful APIs, Linux, Tableau, Postman, VS Code
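
As referenced above, a minimal, hypothetical SHAP sketch for a tree-based fraud-style classifier. The synthetic features and labels are illustrative stand-ins, not clinical or claims data.

    import numpy as np
    import shap
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))            # stand-in claim features
    y = (X[:, 0] + X[:, 2] > 0).astype(int)  # synthetic fraud label

    model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

    # TreeExplainer computes exact per-feature contributions for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    print(shap_values[0])  # contribution of each feature to the first prediction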

Client: L Brands, Hyderabad, India. Jul 2019 – Aug 2020

Role: Data Scientist

Responsibilities:

Engineered personalized product recommendation systems using Collaborative Filtering, Gradient Boosting, and Matrix Factorization to enhance customer retention.

Implemented data preprocessing and cleansing pipelines using Pandas, NumPy, and Seaborn, processing millions of records for model readiness.

Developed data parsing algorithms in Python and R to automate ingestion, transformation, and distribution of large-scale structured and unstructured datasets.

Applied LDA, Naive Bayes, and clustering algorithms (K-Means, Hierarchical) to segment customers based on behavioral and demographic attributes (an illustrative K-Means sketch appears after this role's Environment line).

Built NLP models using NLTK and spaCy to extract sentiment and key product feedback insights, improving customer satisfaction analytics.

Designed and developed predictive machine learning models using Python, R, and Scikit-learn to analyze global retail data and identify customer purchasing patterns.

Built Bayesian HMM, XGBoost, SVM, and Random Forest models to forecast product demand and detect purchasing anomalies across multiple regions.

Applied transformer-based forecasting models (Informer, Temporal Fusion Transformer) for retail demand prediction.

Built automated GenAI-powered insights generation modules using GPT-3 to summarize customer trends for business reports.

Developed cross-market sentiment classifiers using multilingual BERT and fine-tuned sentiment models for localized insights.

Created interactive dashboards and data visualizations in Tableau and R Shiny, delivering actionable insights to business stakeholders.

Used SAS Enterprise Guide (EG) and Base SAS procedures (PROC SQL, PROC REPORT, PROC TABULATE) for automated statistical reporting and data validation.

Leveraged SPARQL and XML queries for data linkage and metadata enrichment, improving data consistency across systems.

Conducted feature engineering and applied Regression techniques (Linear, Logistic) to predict financial performance and optimize pricing strategies.

Applied Normalization and De-normalization techniques in SQL Server, Oracle, and Teradata databases for optimal ETL performance and query speed.

Designed ETL workflows for data extraction, loading, and transformation from multi-source systems, ensuring high-quality datasets for analytics.

Performed data lineage analysis to trace and audit transformations, ensuring compliance with enterprise data governance standards.

Created automated reporting scripts in Python for data access, manipulation, and report generation, replacing manual workflows and saving analyst time.

Conducted exploratory data analysis (EDA) with Matplotlib, Seaborn, and Pandas Profiling to identify trends, outliers, and correlations.

Collaborated with global insight managers (UK, Australia, Canada) to align analytics models with regional market requirements and reporting standards.

Presented insights via Tableau dashboards, summarizing customer segmentation and predictive analytics for executive decision-making.

Environment: Python (Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn, NLTK), R Studio, SAS EG, SQL Server, Oracle, Teradata, Tableau, Shiny, ETL, SPARQL, XML, Machine Learning, NLP, Data Visualization, Data Lineage, Git, Linux.
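
As referenced above, a minimal, hypothetical K-Means segmentation sketch on synthetic RFM-style features. The data, cluster count, and feature names are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    # Stand-in recency/frequency/monetary features for 500 customers.
    rfm = rng.gamma(shape=2.0, scale=50.0, size=(500, 3))

    X = StandardScaler().fit_transform(rfm)  # scale before distance-based clustering
    km = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)
    print(np.bincount(km.labels_))           # customer count per segment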


