Role – AI/ML Engineer
Location – Phoenix, AZ
Tech Stack – Python, SQL, Docker & Kubernetes, FastAPI, Flask, MLOps, Machine Learning, LLMs, LangChain or a similar orchestrator, Vector DB or similar, GCP, Google AutoML, Vertex AI & Build tools

AI/ML Engineer – Design and implement scalable, MLOps-ready data pipelines for data ingestion, processing, and storage.
Experience deploying models with MLOps tools such as Vertex AI Pipelines, Kubeflow, or similar platforms.
Experience implementing and supporting end-to-end Machine Learning workflows and patterns.
Expert-level programming skills in Python and experience with Data Science and ML packages and frameworks.
Proficiency with containerization technologies (Docker, Kubernetes) and CI/CD practices.
Experience working with large-scale machine learning frameworks such as TensorFlow, Caffe2, PyTorch, Spark ML, or related frameworks.
Experience with and knowledge of the most recent advancements in Gen AI, including Gemini, OpenAI, and Claude, as well as exposure to open-source Large Language Models (LLMs). Experience building AI/ML products using technologies such as LLMs, neural networks, and others.
Experience with Retrieval-Augmented Generation (RAG) and supervised tuning techniques.
Strong distributed systems skills and knowledge.
Development experience with at least one public cloud provider, preferably GCP.
Excellent analytical, written, and verbal communication skills.