PHAN NHAT LINH
Da Nang, Viet Nam ● ***********@*****.*** ● +84-829****** ● DoB: 17/03/2002
PROFILE SUMMARY
As an AI Engineer and researcher with extensive experience at leading AI companies in Vietnam, I specialize in semantic retrieval models and advanced model optimization techniques. I have fine-tuned large language models (LLMs) and deployed them in real-world applications, consistently delivering innovative solutions. Known for analytical and problem-solving skills, I thrive in dynamic environments where cutting-edge AI technology is at the forefront.
LinkedIn: https://www.linkedin.com/in/linh11/
GitHub: https://github.com/Linhvjc
Hugging Face: https://huggingface.co/linhphanff
EDUCATION
Computer Science University of Greenwich Da Nang, Viet Nam SEPTEMBER, 2020 – MAY, 2024
70% tuition scholarship
Top 3 students majoring in information technology in the Fall semester of 2022
The best student majoring in information technology in the Fall semester of 2023
The best student majoring in information technology in the Spring semester of 2024
Bachelor's degree with high distinction, GPA: 3.90 (UoG percentage: 72.8%)
PROFESSIONAL EXPERIENCE
AI Engineer / AI Researcher FTECH CO., LTD Da Nang, Viet Nam SEPTEMBER, 2022 – APRIL, 2025
Researched and implemented AutoGen to develop multi-agent systems for customer care support, using a hand-off strategy.
Designed and implemented comprehensive training and evaluation datasets for Large Language Models (LLMs), specializing in Question and Answer systems within the gaming domain. Executed fine-tuning processes to enhance model performance and domain relevance.
Conducted in-depth research on Reinforcement Learning methodologies with a focus on aligning output responses. Explored novel approaches to optimize interaction quality and model alignment in real-world applications.
Engineered instructional datasets and fine-tuned LLMs for applications in historical contexts. Focused on improving model accuracy and relevance by tailoring data and training processes to historical narratives and queries.
Analyzed and assessed LLM performance to derive actionable insights for ongoing model enhancements. Implemented evaluation frameworks to systematically measure and improve model efficacy and user interaction quality.
Investigated memory retention and knowledge integration capabilities of LLMs, emphasizing in-context learning and Retrieval-Augmented Generation (RAG) techniques. Evaluated the effectiveness of memory augmentation strategies to enhance model adaptability and contextual awareness.
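A minimal sketch of the retrieval step behind the RAG work described above: the query and document chunks are assumed to be already embedded (the `top_k_chunks` helper and its inputs are illustrative, not the production system).

```python
import numpy as np

def top_k_chunks(query_vec: np.ndarray, chunk_vecs: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k chunks most similar to the query.

    Hypothetical RAG retrieval sketch: cosine similarity between a
    query embedding and a matrix of chunk embeddings, highest first.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity per chunk
    return np.argsort(-sims)[:k]      # top-k indices, descending similarity
```

The selected chunks would then be prepended to the LLM prompt as retrieved context.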
Developed models for semantic similarity leveraging contrastive learning techniques. Implemented custom strategies to address false negatives and applied bespoke loss functions to improve model precision and recall.
Customized training pipelines to increase batch size for contrastive learning, improving computational efficiency and model convergence. Applied advanced techniques such as Memory Bank and MoCo to expand batch processing capabilities.
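The queue-based negative expansion mentioned above can be sketched as follows. This is a simplified, MoCo-style illustration (class and function names are mine, not the private codebase): a fixed-size queue of past keys supplies negatives for an InfoNCE loss, decoupling the effective negative count from the batch size.

```python
import torch
import torch.nn.functional as F

class NegativeQueue:
    """Illustrative MoCo-style FIFO queue of normalized key embeddings."""

    def __init__(self, dim: int, size: int = 4096):
        self.queue = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, keys: torch.Tensor) -> None:
        # Overwrite the oldest entries with the newest keys.
        n = keys.shape[0]
        idx = torch.arange(self.ptr, self.ptr + n) % self.queue.shape[0]
        self.queue[idx] = F.normalize(keys, dim=1)
        self.ptr = (self.ptr + n) % self.queue.shape[0]

def info_nce_loss(q: torch.Tensor, k_pos: torch.Tensor,
                  queue: NegativeQueue, temperature: float = 0.05) -> torch.Tensor:
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)  # one positive per query
    l_neg = q @ queue.queue.t()                   # negatives from the queue
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.shape[0], dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, labels)
```

In practice the queue would be filled by a momentum encoder, and false-negative filtering would mask queue entries known to match the query.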
Researched and applied Knowledge Distillation methods to streamline model architecture and optimize performance while preserving high-quality outputs. Focused on balancing model efficiency with computational demands.
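A hedged sketch of response-based knowledge distillation as described above (Hinton-style; the exact setup used at FTECH is private): the student matches the teacher's temperature-softened distribution while also fitting the hard labels.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
                      targets: torch.Tensor, temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Weighted sum of a soft (teacher-matching) and hard (label) loss."""
    # Soft targets: KL between temperature-scaled distributions;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```

Tuning `temperature` and `alpha` trades off how closely the smaller student tracks the teacher versus the labels.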
Addressed challenges associated with long-context processing by exploring chunking strategies and methods such as Hi-Transformer, BigBird, and Nomic embeddings. Enhanced model capabilities to handle extended context lengths effectively.
Investigated and implemented advanced loss functions, including focal loss and margin loss, to refine classification model performance and address class imbalance issues.
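For reference, the focal loss mentioned above can be written in a few lines (an illustrative PyTorch version of Lin et al.'s formulation, not the production code): the `(1 - p_t)^gamma` factor down-weights easy examples so training focuses on the hard, often minority-class ones.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0) -> torch.Tensor:
    """Focal loss for multi-class classification with class imbalance."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                    # probability of the true class
    return ((1 - p_t) ** gamma * ce).mean()  # down-weight easy examples
```

With `gamma=0` this reduces to ordinary cross-entropy; larger `gamma` suppresses well-classified examples more aggressively.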
Utilized Weights & Biases (W&B) for comprehensive experiment tracking, hyperparameter tuning, and model registry. Ensured systematic management of experiments and optimization of model configurations.
AI Research Intern VinAI Ha Noi, Viet Nam FEBRUARY, 2022 – AUGUST, 2022
Assessed existing methods for optimizing NLP models, with a focus on reducing model size without compromising performance.
Created and implemented compact yet powerful NLP models using techniques such as pruning, quantization, and distillation.
Fine-tuned hyperparameters and applied reduction methods, such as knowledge distillation, to keep models lightweight while maintaining desired performance levels.
Performed rigorous experiments on various datasets to validate the performance of optimized models, ensuring their effectiveness in real-world applications.
AI Trainer Remotasks / Outlier Remote JANUARY, 2023 – PRESENT
Participated in projects aimed at improving the writing and Python coding capabilities of generative AI models.
Worked on various Vietnamese writing projects and Python code training tasks to train and enhance AI models.
Ranked responses generated by AI models, composed short stories on given topics to enhance AI creativity, and assessed the factual accuracy of AI-produced text.
Contributed to multiple high-impact AI training projects, ensuring quality and accuracy in AI-generated content and code, and improving AI models through detailed feedback and creative input.
Gained an advanced understanding of AI model training, sharpened Vietnamese writing and editing skills, and built experience evaluating and improving AI-generated content.
SKILLS
Machine Learning, Deep Learning
PyTorch, FastAPI, AutoGen, TaskingAI, Langfuse
Argilla, Weights & Biases (W&B)
Python, JavaScript, C#
Problem-Solving, Self-learning
ACHIEVEMENTS
At FTECH CO., LTD:
The best Semantic Similarity model with long context in the Vietnamese language domain (private model).
Others:
Top 2 most-downloaded Semantic Similarity model in Vietnam, published at linhphanff/semantic-base-vi (February 2024)
Top 3 Google Developer Student Hackathon – AI Challenge 2023
Top 3 in Blockchain technology among all teams registered nationwide across the FPT Education system (2022)
TOEIC score (2021): 600