Jason Layton
Senior AI/ML Engineer
*****************@*******.*** 480-***-**** Phoenix, AZ 85044
PROFESSIONAL SUMMARY
Senior Machine Learning / MLOps Engineer with 8+ years of experience designing, deploying, and maintaining production-grade machine learning systems across the full model lifecycle. Specialized in building scalable, secure, and highly reliable ML infrastructure supporting real-time inference, monitoring, and continuous delivery. Extensive experience developing end-to-end ML pipelines in Python, SQL, TensorFlow, and PyTorch, with strong expertise in containerized deployments (Docker, Kubernetes), infrastructure automation with Terraform, and CI/CD pipelines using GitHub Actions.
CORE SKILLS
Machine Learning & AI: TensorFlow, PyTorch, scikit-learn, SciPy, NumPy, Pandas, NLP (Natural Language Processing), LSTM, LangChain, LangGraph, OpenAI API, Claude, RAG (Retrieval-Augmented Generation), Conversational AI, Agentic Workflows, Predictive Modeling, End-to-End ML Lifecycle Management
MLOps & DevOps: Docker, Kubernetes, Terraform, CI/CD (GitHub Actions, Jenkins), Model Versioning, Monitoring & Logging, Automated Model Deployment, Scalable ML Infrastructure
Software Engineering & Programming: Python, Go, C/C++, Rust, JavaScript, MATLAB, Selenium, Web Scraping, JSON, React, ExpressJS, Django, FastAPI
Data Engineering & Databases: SQL, NoSQL
Data Visualization: Matplotlib
Cloud Platforms: AWS, GCP, Azure
Operating Systems & Environments: Linux, Windows
WORK EXPERIENCE
Senior AI/ML Engineer, Jefferies | 09/2023 – 10/2025 | New York, NY
• Designed and delivered a production-grade, AI-powered analytics platform combining speech recognition, NLP, and LLM-based systems, deploying real-time inference services and scalable ML solutions in a regulated enterprise environment.
• Built and optimized end-to-end machine learning pipelines covering data ingestion, preprocessing, feature extraction, model training, validation, deployment, monitoring, and continuous improvement using Python and deep learning frameworks (PyTorch, TensorFlow).
• Implemented CI/CD pipelines for machine learning systems using Docker, Kubernetes, and Jenkins, enabling automated testing, versioning, and deployment of ML models across development, staging, and production environments.
• Developed Retrieval-Augmented Generation (RAG) pipelines integrating vector databases, Amazon OpenSearch, and LLM frameworks (LangChain) to enable low-latency search, summarization, and knowledge retrieval over large volumes of structured and unstructured audio and text data.
• Deployed scalable ML infrastructure on AWS, using managed services for secure storage, embedding retrieval, and real-time querying to ensure high availability, fault tolerance, and performance at enterprise scale.
AI/ML Engineer, Johnson & Johnson | 11/2021 – 08/2023 | New Brunswick, NJ
• Spearheaded end-to-end lifecycle management of machine learning solutions for an AI-powered patient care platform, spanning data ingestion, preprocessing, model training, validation, deployment, monitoring, and continuous optimization in a regulated healthcare environment.
• Designed and deployed production-grade predictive models (logistic regression, random forests, deep learning) using Python, TensorFlow, and PyTorch to analyze clinical and patient-generated health data, improving outcome prediction accuracy by 10% through rigorous feature engineering and hyperparameter optimization.
• Integrated machine learning models with Electronic Health Records (EHR) systems, enabling real-time clinical decision support, secure data exchange, and seamless embedding of AI insights into existing clinical workflows while adhering to HIPAA and healthcare data governance standards.
• Architected and implemented Retrieval-Augmented Generation (RAG) pipelines combining vector databases with fine-tuned LLMs to support low-latency natural language search and summarization across structured EHR data, unstructured clinical notes, and medical literature.
• Built and maintained scalable ML infrastructure on AWS, leveraging S3, EMR, and Kubernetes to support distributed data processing, real-time inference endpoints, and high-availability deployments for clinical use cases.
Software Engineer, Badger Technologies | 08/2019 – 10/2021 | Nicholasville, KY
• Architected and maintained real-time, production-grade machine learning systems processing multimodal data (text, images, structured events) to deliver low-latency, personalized recommendations with a focus on scalable inference, reliability, and performance optimization.
• Designed and implemented Retrieval-Augmented Generation (RAG) architectures integrating vector databases with fine-tuned large language models (LLMs) to support semantic search, conversational AI, and knowledge retrieval, reducing hallucinations and improving response accuracy in enterprise environments.
• Built end-to-end data and ML pipelines using Python, Spark, and SQL, supporting data ingestion, preprocessing, feature engineering, model training, validation, and retraining from on-premise systems to AWS-based data platforms (S3, EMR, OpenSearch).
• Developed, trained, and deployed machine learning models using TensorFlow, PyTorch, and scikit-learn, exposing models through RESTful APIs to integrate AI capabilities directly into downstream applications and services.
• Implemented CI/CD pipelines for machine learning systems using Jenkins, Docker, and Kubernetes, enabling automated testing, versioning, deployment, and rollback across development, staging, and production environments.
• Established monitoring, logging, and observability for ML services and pipelines, tracking latency, throughput, model accuracy, and data drift to ensure system reliability and continuous improvement.
Software Engineer, HCLTech | 08/2017 – 07/2019 | Dallas, TX
• Designed and deployed full-stack applications using a modern, consistent tech stack: Python (Django/FastAPI) and Node.js (Express.js) for backend services, React for responsive frontend interfaces, and PostgreSQL for relational database management.
• Architected and implemented RESTful APIs and a microservices-based backend, using an API gateway (Spring Cloud Gateway) to manage routing, security, and monitoring for service-to-service communication.
• Built scalable, resilient cloud-native applications on Microsoft Azure, leveraging managed compute, storage, and serverless services to ensure high availability and performance.
• Engineered event-driven communication between distributed services using message brokers such as RabbitMQ, improving system decoupling and reliability.
• Containerized applications with Docker, orchestrated deployments with Kubernetes, and established CI/CD pipelines with Jenkins to automate testing, integration, and delivery.
• Maintained robust development practices with Git/GitHub for version control and Agile (Jira) for project management, ensuring high code quality through comprehensive testing with JUnit, Mockito, and Selenium.
EDUCATION
Master of Science in Computer Engineering
Arizona State University | 2015 – 2016 | Tempe, AZ
Bachelor of Science in Computer Engineering
Arizona State University | 2011 – 2015 | Tempe, AZ
CERTIFICATES
Project Management Professional (PMP)
AWS Certified Machine Learning - Specialty