At Virtue AI, we believe the future of AI is in our hands: its success will depend on our ability to keep it safe and secure. We're seeking an experienced Applied Machine Learning Engineer to champion that effort.
We're looking for someone who is passionate about AI safety and security, driven to deliver value, and eager to make an impact. If you're interested in working alongside an ambitious, high-caliber team helping to ensure AI is safe and secure, come join us. At Virtue AI, we work hard and hold ourselves accountable for our work. Great outcomes demand sustained effort from exceptional individuals dedicated to ambitious goals.
Position Overview
As an Applied Machine Learning Engineer at Virtue AI, you will play a pivotal role in developing and implementing novel large models and algorithms for code guardrails. Your work will directly advance our products and services and drive innovation across the industry.
Responsibilities
Build LLM-based agents for code-related security tasks, such as red-teaming (generating malware and cyber attacks for testing) and blue-teaming (detecting vulnerabilities and attacks)
Fine-tune customized code LLMs for guardrail purposes with a focus on reducing model size
Apply efficient inference methods to reduce model latency
Evaluate guardrail models and agents
Requirements
Applicants who satisfy requirements 1 and 2, requirements 3 and 4, or both are highly encouraged to apply for this position.
1. Proficiency in programming languages such as Python, along with expertise in LLM libraries and frameworks such as PyTorch, DeepSpeed, OpenRLHF, and vLLM
2. Experience in LLM fine-tuning, especially fine-tuning code LLMs such as Code Llama and Qwen-Coder
3. Proficiency in software security, including static and dynamic program analysis; familiarity with the most common security vulnerabilities and prevalent cyber attacks; fluency in popular programming languages, including Python, C, C++, and Java
4. Experience in building LLM-based agents for security tasks, such as vulnerability detection and system-level defenses (e.g., sandboxing, sanitization)