
Data Scientist, ML & Backend Lead (Conversational AI)

Location:
Tempe, AZ
Salary:
$160,000
Posted:
April 15, 2026

Contact this candidate

Resume:

Alberto Olmo

Phone: 719-***-**** | Email: ********@***.*** | Website: aolmo.github.io | Google Scholar: Alberto Olmo

SKILLS

Languages: Python, Java, C++, SQL, Linux bash, PHP, JavaScript
Libraries: PyTorch, TensorFlow, Keras, NumPy, OpenCV, Pandas, Matplotlib, Scikit-learn, Scikit-image, SciPy, Bootstrap
Tools: GCP, Azure, Figma, Jira, Git, Linux, LaTeX

EXPERIENCE

Vianai Systems Inc. Jan. 2023 – Present

Data Scientist & Scrum Leader Palo Alto, CA

Led backend development of a conversational Q&A system enabling context-aware interactions with the platform, improving user engagement and experience.

Led backend development of a feature that automatically selects appropriate data sources during conversations based on contextual information, eliminating the need for manual user selection and achieving 90%+ accuracy.

Led backend development of a feature that removed unnecessary and error-blocking messages and implemented accurate fallback responses.

Led backend development of a suggested-question generation pipeline for financial conversations, providing accurate, context-based, executable follow-ups.

Implemented a pipeline allowing customers to create and manage custom questions not previously available in the system.

Implemented a process enabling customers to configure preloaded conversations for specific users and roles. Contributed to backend implementation and feature planning.

National Laboratory of the Rockies (formerly NREL) May 2022 – Aug. 2022

Machine Learning Research Intern Golden, CO

Led research and development of a physics-informed autoencoder for turbulent flow data, incorporating constraints such as incompressibility and enstrophy to improve physical fidelity of compressions.

Achieved a 12x speedup in model training convergence and enhanced interpretability with gradient-based explainability techniques, without sacrificing reconstruction accuracy.

Published work: Olmo, A., Zamzam, A., Glaws, A., & King, R. (2023). Physics-Driven Convolutional Autoencoder Approach for CFD Data Compressions: Preprint. Presented at the Machine Learning and Physical Sciences Workshop at the 36th Conference on Neural Information Processing Systems (NeurIPS).

Arizona State University Aug. 2018 – Dec. 2022

Graduate Research Assistant Phoenix, AZ

Researcher at the Yochan Laboratory under the supervision of Prof. Subbarao Kambhampati. Publications include evaluations of reasoning capabilities of large language models (LLMs), as well as explainability and bias in generative adversarial networks (GANs) and large image classifiers.

Published works in: Artificial Intelligence Journal (AIJ), International Conference on Machine Learning (ICML), International Conference on Automated Planning and Scheduling (ICAPS), and Neural Information Processing Systems (NeurIPS).

University of Phoenix May 2021 – Aug. 2021

Business Analytics Intern Phoenix, AZ

Researched and used machine learning models for user categorization using non-PII graph and tabular data, leveraging algorithms such as logistic regression, neural networks, and clustering.

EDUCATION

Arizona State University Aug. 2018 – Dec. 2022

Computer Science Ph.D. – 3.9/4.0 Phoenix, AZ

Coursework: Statistical Machine Learning, Planning/Learning Methods in AI, Human-Aware AI, Fundamentals of Statistical Learning.

Dissertation: Investigated failure modes of large-scale models, including large language models, generative adversarial networks, and image classifiers [1]. Published in leading venues such as Artificial Intelligence Journal (AIJ) [2], ICAPS [3], and NeurIPS [4].

Awards: Fully funded Ph.D. as a Graduate Research Assistant at the Yochan Laboratory.

University of Colorado at Colorado Springs Aug. 2016 – Dec. 2017

Computer Science M.S. – 4.0/4.0 Colorado Springs, CO

Coursework: Computer Vision, Operating Systems, Design and Analysis of Algorithms, Software Project Management, Computer/Network Security.

Thesis: Implementation of a variational autoencoder that sped up computation of the PixelCNN generative neural network by 8x [5].

Awards: Received the Balsells Grant, which fully funded the master’s degree and covered all living expenses.

Universitat Autònoma de Barcelona Aug. 2016 – Dec. 2017

Computer Science B.S. – 8.5/10 Barcelona, Spain

Coursework: Calculus, Statistics, Algebra, Databases, Artificial Intelligence, Software Engineering, Company Organization and Management

Research: Worked on the Periscope and ELASTIC projects for the CAOS department.

Awards: Received an Undergraduate Research Assistant grant for contributions to the Computer Architecture and Operating Systems department. Awarded the Student of the Year grant for the first academic year at UAB.

LINKS

LinkedIn: alberto-olmo · Google Scholar: Alberto Olmo · Website: aolmo.github.io/

RESEARCH

[1] Alberto Olmo. Analyzing Failure Modes of Inscrutable Machine Learning Models. PhD thesis, Arizona State University, 2022.

[2] Niharika Jain, Alberto Olmo, Sailik Sengupta, Lydia Manikonda, and Subbarao Kambhampati. Imperfect ImaGANation: Implications of GANs exacerbating biases on facial data augmentation and snapchat face lenses. Artificial Intelligence, 304:103652, 2022.

[3] Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. GPT3-to-plan: Extracting Plans from Text Using GPT-3. In ICAPS Workshop on Knowledge Engineering for Planning and Scheduling (KEPS), 2021. https://icaps21.icaps-conference.org/workshops/KEPS/Papers/KEPS_2021_paper_8.pdf

[4] Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. Large Language Models Still Can’t Plan (a benchmark for LLMs on planning and reasoning about change). In NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022.

[5] Alberto Olmo. Autoregressive Density Estimation in Latent Spaces. Master’s thesis, University of Colorado Colorado Springs, Kraemer Family Library, 2017.

[6] Alberto Olmo, Sailik Sengupta, and Subbarao Kambhampati. Not all failure modes are created equal: Training deep neural networks for explicable (mis)classification. In ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning, 2020. https://sites.google.com/view/udlworkshop2020/accepted-papers

[7] Zahra Zahedi, Alberto Olmo, Tathagata Chakraborti, Sarath Sreedharan, and Subbarao Kambhampati. Towards understanding user preferences for explanation types in model reconciliation. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 648–649. IEEE, 2019.

[8] Karthik Valmeekam, Matthew Marquez, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. PlanBench: An extensible benchmark for evaluating large language models on planning and reasoning about change. Advances in Neural Information Processing Systems, 36:38975–38987, 2023.

[9] Alberto Olmo, Ahmed Zamzam, Andrew Glaws, and Ryan King. Physics-driven Convolutional Autoencoder Approach for CFD Data Compressions. In NeurIPS Machine Learning and the Physical Sciences Workshop, 2022. https://ml4physicalsciences.github.io/2022/files/NeurIPS_ML4PS_2022_138.pdf

[10] Karthik Valmeekam, Sarath Sreedharan, Matthew Marquez, Alberto Olmo, and Subbarao Kambhampati. On the Planning Abilities of Large Language Models (a critical investigation with a proposed benchmark). arXiv preprint arXiv:2302.06706, 2023.

[11] Sarath Sreedharan, Alberto Olmo, Aditya Prasad Mishra, and Subbarao Kambhampati. Model-free Model Reconciliation. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019. https://doi.org/10.24963/ijcai.2019/83

[12] Alberto Olmo. Extensió del plugin de sintonització de paràmetres MPI de l’eina PTF [Extension of the MPI parameter-tuning plugin for the PTF tool]. 2016.


