NYMISHA I
940-***-**** **********@*****.*** https://www.linkedin.com/in/nymishai/
Professional Summary:
Highly skilled AI/ML Engineer and Data Scientist with around 5 years of experience designing, developing, and deploying enterprise-grade analytics and machine learning solutions across the banking, healthcare, and e-commerce domains. Strong expertise in Python, advanced SQL, Java 8/11, and JavaScript, with hands-on experience in predictive modeling, customer segmentation, behavioral analytics, churn prediction, CLV forecasting, and recommendation systems. Proven experience in LLM development, fine-tuning, and deployment, including Retrieval-Augmented Generation (RAG) pipelines built with LangChain, LangGraph, Hugging Face, TensorFlow, PyTorch, and Keras. Adept at building scalable data pipelines, ETL workflows, and microservices-based architectures using FastAPI, Flask, Spring Boot, Kafka, REST, and gRPC. Experienced with relational, NoSQL, and vector databases such as PostgreSQL, MongoDB, ChromaDB, FAISS, and Milvus. Strong background in cloud-native deployments across Azure, AWS, and GCP, leveraging Docker, Kubernetes, and CI/CD pipelines (Jenkins, GitLab CI/CD, Azure DevOps) for automation and reproducibility. Skilled in developing interactive dashboards and reports in Power BI, integrating CRM and transactional data to deliver actionable business insights. A collaborative team player with strong experience in Agile/Scrum environments, passionate about transforming complex data into measurable business value, improved customer engagement, and operational efficiency.
Technical Skills:
Programming & Scripting: Python, SQL (Advanced), Java, R, JavaScript, Bash
Cloud Platforms & DevOps: Azure (Azure Machine Learning, AKS, Data Factory, Synapse, Databricks), AWS (S3, Glue, Redshift, EMR, SageMaker, Lambda), GCP (BigQuery, Dataflow, Vertex AI, Cloud Run), Docker, Kubernetes, Jenkins, GitLab CI/CD, Azure DevOps
Frontend Development: React.js, Redux, HTML5, CSS3, Material UI, Tailwind CSS, Bootstrap
Databases & Vector Stores: PostgreSQL, MySQL, MongoDB, ChromaDB, FAISS, Milvus
Version Control & Collaboration: Git, GitHub, GitLab, Bitbucket, Jira, Confluence
Authentication & Security: JWT, RBAC, OAuth 2.0, GCP Secret Manager, Azure IAM, AWS IAM
Performance Optimization & Monitoring: Azure Monitor, Cloud Logging, Cloud Monitoring, Application Insights
Microservices & Messaging: Microservices Architecture, RESTful APIs, gRPC, Apache Kafka
Testing & QA: JUnit, Mockito, Selenium, PyTest
Frameworks & Libraries: Spring Boot, Spring MVC, Spring Data, Spring Security, scikit-learn, TensorFlow, PyTorch, Keras, Hugging Face Transformers, TFX, LangChain, LangGraph, FastAPI, Flask
Reporting & Documentation: Power BI, Swagger
Tools & IDEs: Visual Studio Code, IntelliJ IDEA, PyCharm, Jupyter Notebook, Google Colab
Web Services & APIs: REST, gRPC, GraphQL, JSON, XML
Professional Experience:
Client: City National Bank, Los Angeles, CA Aug 2024 – Present Role: Data Scientist – AI/ML
Responsibilities:
Developed supervised and unsupervised machine learning models for customer segmentation, risk analysis, and behavioral analytics using Python and SQL.
Designed feature engineering pipelines to prepare high-quality, ML-ready datasets from transactional and CRM data sources.
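A minimal sketch of the segmentation and feature-engineering flow described above, assuming a hypothetical transactional extract (file and column names are illustrative, not from the engagement):

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Hypothetical transactional extract; customer_id and amount are placeholder columns.
    tx = pd.read_csv("transactions.csv")

    # Simple RFM-style features aggregated per customer.
    features = tx.groupby("customer_id").agg(
        frequency=("amount", "count"),
        monetary=("amount", "sum"),
    )

    # Scale, then cluster into k segments (k would be chosen via elbow/silhouette analysis).
    X = StandardScaler().fit_transform(features)
    features["segment"] = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)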
Built Retrieval-Augmented Generation (RAG) pipelines integrating Azure OpenAI with vector databases (FAISS, ChromaDB) and Azure AI Search to deliver accurate, compliant financial insights.
Integrated ML and GenAI outputs into dashboards and reports, enabling business users to track KPIs, trends, and model performance for data-driven decisions.
Applied advanced prompt engineering and retrieval strategies (hybrid search, re-ranking, top-k tuning) to reduce hallucinations and improve factual grounding in financial use cases.
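A minimal sketch of the vector-retrieval step behind such a RAG pipeline, using FAISS with an open-source embedding model as a stand-in for the Azure OpenAI embeddings (corpus and query are toy examples):

    import faiss
    import numpy as np
    from sentence_transformers import SentenceTransformer

    # Toy corpus; in production, chunks come from the document-ingestion pipeline.
    docs = ["Wire transfer limits are ...", "Overdraft policy states ...", "KYC requires ..."]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model
    emb = model.encode(docs, normalize_embeddings=True)

    index = faiss.IndexFlatIP(emb.shape[1])  # inner product equals cosine on normalized vectors
    index.add(np.asarray(emb, dtype="float32"))

    # Top-k retrieval; retrieved chunks would then be re-ranked and grounded into the prompt.
    q = model.encode(["What are the wire transfer limits?"], normalize_embeddings=True)
    scores, ids = index.search(np.asarray(q, dtype="float32"), 2)
    print([docs[i] for i in ids[0]])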
Leveraged GitHub Copilot to accelerate development of ML pipelines, GenAI services, infrastructure code, and CI/CD scripts, improving delivery speed and code consistency.
Deployed models in cloud environments using CI/CD pipelines and containerization best practices.
Developed and fine-tuned supervised and unsupervised ML models using scikit-learn, TensorFlow, and PyTorch, improving prediction accuracy and explainability in regulated financial environments.
Implemented model evaluation frameworks using accuracy, precision, recall, F1-score, and ROC-AUC to ensure reliable model performance.
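A minimal sketch of such an evaluation helper for a binary classifier (the function name and dictionary shape are illustrative):

    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, roc_auc_score)

    def evaluate(y_true, y_pred, y_prob):
        # ROC-AUC needs predicted scores/probabilities, not hard labels.
        return {
            "accuracy":  accuracy_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred),
            "recall":    recall_score(y_true, y_pred),
            "f1":        f1_score(y_true, y_pred),
            "roc_auc":   roc_auc_score(y_true, y_prob),
        }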
Automated data preprocessing, model training, and validation workflows to improve development efficiency and reproducibility.
Integrated ML predictions into reporting and visualization tools to support data-driven decision-making for business users.
Built NLP solutions for document classification, text analytics, and automated data extraction to support banking operations.
Worked closely with business analysts, data engineers, and product teams to align AI solutions with regulatory and compliance standards.
Designed and deployed a distributed LLM-agent execution framework using the Model Context Protocol (MCP), JSON-RPC 2.0 tool contracts, and a message-driven async work-queue pattern (RabbitMQ/AMQP-inspired) to standardize big-data workflows; processed 50K records in under 5 minutes.
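A minimal sketch of that pattern: JSON-RPC 2.0 envelopes fanned out to async workers through an in-process queue (method names and batch sizes are hypothetical; the production system used a RabbitMQ/AMQP-style broker rather than asyncio.Queue):

    import asyncio, json, uuid

    def make_request(method, params):
        # JSON-RPC 2.0 envelope used as the tool contract.
        return {"jsonrpc": "2.0", "id": str(uuid.uuid4()),
                "method": method, "params": params}

    async def worker(queue):
        while True:
            req = await queue.get()
            # A real worker would dispatch to the tool named in req["method"].
            print("handled", json.dumps(req)[:60])
            queue.task_done()

    async def main():
        queue = asyncio.Queue()
        workers = [asyncio.create_task(worker(queue)) for _ in range(4)]
        for i in range(10):
            await queue.put(make_request("records.process", {"batch": i}))
        await queue.join()
        for w in workers:
            w.cancel()

    asyncio.run(main())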
RAG-Driven SQL Automation with AI Agents: Built Retrieval-Augmented Generation agents that scan roughly 30 tables, pair reasoning LLMs with lightweight "flash" LLMs, write optimized SQL, and auto-generate data-rich reports with minimal user input.
Developed secure, high-performance RESTful APIs using FastAPI and Flask to expose ML and GenAI models for real-time and batch inference across banking applications.
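A minimal sketch of such an inference endpoint in FastAPI (schema fields and route are placeholders; the model call is stubbed to keep the sketch self-contained):

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Features(BaseModel):
        # Illustrative input schema; real features come from the banking domain.
        balance: float
        tenure_months: int

    @app.post("/predict")
    def predict(f: Features):
        # In production this would call model.predict_proba on the validated features.
        score = 0.5  # stub
        return {"churn_probability": score}

Assuming the file is named app.py, this runs locally with: uvicorn app:app --reload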
Created GenAI and machine learning "Asset Optimizer" agents that recommend interventions on trade discounting, corporate loans, and bank assets, raising Return on Tangible Common Equity (RoTCE).
End-to-End Ownership: owned the full lifecycle, from business case and architecture to model development and cloud deployment, using Kafka, RabbitMQ, OpenShift, Maven, and containerized microservices.
Collaborated with diverse stakeholders to unify data sources, standardize processes, and ensure compliance, reliability, and scalability across banking environments.
Contributed to the modernization of Erica, the bank’s AI-powered virtual assistant, supporting large-scale, customer-facing GenAI deployments serving millions of users.
Designed and implemented LLM-based RAG pipelines on Azure OpenAI, integrating Azure AI Search (vector + hybrid retrieval) to deliver accurate, compliant financial responses.
Built scalable document ingestion and embedding pipelines with metadata enrichment to support high-throughput, low-latency semantic search across enterprise knowledge bases.
Applied Copilot suggestions to speed up Terraform/YAML configurations, Dockerfiles, and CI/CD pipeline scripts for GCP deployments.
Developed production-grade GenAI microservices using Python and FastAPI, deployed on Azure Kubernetes Service (AKS) with high availability and zero-downtime rollouts.
Established MLOps best practices including CI/CD pipelines, MLflow-based experiment tracking, and real-time monitoring of latency, quality, and model drift.
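A minimal sketch of the MLflow experiment-tracking piece (experiment, parameter, and metric names are illustrative):

    import mlflow

    mlflow.set_experiment("churn-model")
    with mlflow.start_run():
        mlflow.log_param("max_depth", 6)
        mlflow.log_metric("roc_auc", 0.91)
        # mlflow.sklearn.log_model(model, "model") would also register the trained artifact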
Partnered with product, compliance, security, and risk teams to ensure responsible AI adoption aligned with regulatory, privacy, and governance requirements.
Researched and piloted emerging Azure AI and GenAI capabilities to accelerate enterprise-scale adoption of LLM-driven solutions.
Environment: Python, SQL, scikit-learn, TensorFlow, PyTorch, NLP, LLMs, LangChain, RAG, FastAPI, Flask, Docker, Kubernetes (AKS), Azure OpenAI, Azure AI Search, FAISS, ChromaDB, PostgreSQL, CI/CD, MLflow, GitHub Copilot, Power BI, Agile/Scrum
Client: Innova Solutions, Hyderabad, India June 2021 – Aug 2023 Role: AI Software Engineer
Responsibilities:
Designed and developed scalable AI/ML applications supporting analytics, automation, and intelligent decision-making use cases.
Built ETL workflows to ingest, cleanse, and transform large datasets from multiple source systems using Python and SQL.
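A minimal sketch of such an extract-cleanse-load step with pandas and SQLAlchemy (connection string, tables, and columns are placeholders):

    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:pass@host/db")  # placeholder DSN

    raw = pd.read_sql("SELECT * FROM raw_orders", engine)             # extract
    clean = (raw.dropna(subset=["order_id"])                          # cleanse
                .assign(amount=lambda d: d["amount"].astype(float)))  # transform
    clean.to_sql("orders_clean", engine, if_exists="replace", index=False)  # load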
Developed machine learning models for classification, regression, and clustering to support business insights and forecasting.
Implemented NLP pipelines for text preprocessing, sentiment analysis, and document analytics using scikit-learn and deep learning frameworks.
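A minimal sketch of a scikit-learn sentiment pipeline of the kind described above (the two-example corpus is a toy stand-in for real labeled data):

    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = ["great service, very helpful", "slow, buggy, and frustrating"]  # toy data
    labels = [1, 0]  # 1 = positive sentiment

    clf = Pipeline([("tfidf", TfidfVectorizer()),
                    ("model", LogisticRegression())])
    clf.fit(texts, labels)
    print(clf.predict(["really great experience"]))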
Created modular ML services and exposed inference endpoints through REST APIs for enterprise system integration.
Optimized model performance through feature selection, hyperparameter tuning, and validation techniques.
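A minimal sketch of the hyperparameter-tuning step using grid search with cross-validation (estimator, grid, and toy data are illustrative):

    from sklearn.model_selection import GridSearchCV
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=200, random_state=0)  # toy data

    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [100, 200], "max_depth": [4, 8]},
        scoring="roc_auc", cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)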
Assisted in building and testing machine learning and NLP models using TensorFlow, PyTorch, and Keras.
Deployed ML workloads using Docker and cloud platforms, ensuring scalability, fault tolerance, and operational stability.
Collaborated with data engineers, QA teams, and product owners to deliver reliable AI solutions following Agile/Scrum practices.
Contributed to technical documentation, design specifications, and knowledge transfer to support long-term maintainability.
Contributed to LLM experimentation and POCs using Hugging Face and LangChain frameworks.
Helped integrate vector databases such as FAISS or ChromaDB for semantic search and embedding storage.
Supported application availability and performance through basic load-balancing concepts and monitoring.
Assisted with containerization and deployment using Docker and basic Kubernetes configurations.
Participated in Agile ceremonies, contributing to sprint planning, code reviews, and continuous improvement initiatives.
Environment: Python, SQL, Java, scikit-learn, TensorFlow, Keras, NLP, Machine Learning, ETL Pipelines, RESTful APIs, FastAPI, Flask, Docker, Jenkins, Git, Azure, AWS, PostgreSQL, MySQL, MongoDB, Power BI, Linux, Agile/Scrum, Jira
Client: HCL Technologies Ltd, Hyderabad, India Sep 2020 – May 2021 Role: Software Engineer
Responsibilities:
Supported the development of data-driven applications by building data processing and analytics components using Python and SQL.
Assisted in designing and implementing machine learning models for basic predictive and analytical use cases.
Performed data analysis, validation, and preprocessing to ensure data quality and consistency across systems.
Supported AI/ML initiatives through data analysis, preprocessing, and model validation activities.
Assisted in developing and testing Python-based ML scripts and APIs.
Helped deploy and monitor applications using containerization and cloud services.
Applied JWT-based authentication, role-based access control (RBAC), and secure credential management using GCP Secret Manager and IAM policies.
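A minimal sketch of the JWT-plus-RBAC check using PyJWT (the secret would be fetched from GCP Secret Manager at runtime; claim and role names are illustrative):

    import jwt  # PyJWT

    SECRET = "fetched-from-gcp-secret-manager"  # placeholder; never hard-code secrets

    token = jwt.encode({"sub": "user-1", "role": "analyst"}, SECRET, algorithm="HS256")

    def require_role(token, allowed_roles):
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # verifies signature
        if claims.get("role") not in allowed_roles:
            raise PermissionError("insufficient role")
        return claims

    print(require_role(token, {"analyst", "admin"}))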
Collaborated with team members in Agile environments to meet delivery timelines.
Developed backend services and APIs to integrate analytics and ML outputs into enterprise applications.
Collaborated with senior engineers and data scientists to enhance model accuracy and system performance.
Assisted with deployment and monitoring of applications in cloud and on-prem environments.
Followed SDLC and Agile methodologies, contributing to sprint planning, defect resolution, and continuous improvement.
Documented technical workflows, data logic, and implementation details for support and audit purposes.
Used Cloud Logging and Error Reporting to monitor application health and troubleshoot production issues.
Environment: Python, SQL, Java, Basic Machine Learning, Data Analysis, ETL Processes, REST APIs, Docker (Basic), Git, Jenkins, Oracle, MySQL, Linux, Agile Methodologies, Jira, SDLC
Education:
Master's in Computer Science, Texas Tech University, Lubbock, Texas – Sept 2025
Bachelor's in Computer Science, Andhra University, India – June 2021