
Assistant Engineer

Location:
Seattle, WA
Posted:
February 21, 2021


Resume:

Chase Lee

********@**.**********.*** 206-***-****

EDUCATION

University of Washington – Seattle, WA Expected Graduation: March 2022 Master of Science, Computer Science. GPA: N/A

University of Washington – Seattle, WA 2018 – 2020 Bachelor of Science, Computer Science & ACMS (Discrete Math & Algorithms). Major GPA: 3.84

Relevant Courses: Deep Learning, Machine Learning, Statistical Methods in Computer Science, Natural Language Processing & Capstone, Distributed Systems, Artificial Intelligence, Algorithms, Complexity, Data Structures & Parallelism

SKILLS

Proficient with: Python, Java, PyTorch, scikit-learn, pandas, NumPy

Experienced with: C, C++, R, LaTeX, MATLAB

EXPERIENCE

Graduate Research Assistant

Noah’s Ark – Seattle, WA January 2021 – Present

Analyzing knowledge base probes to discover which templates lead to better performance when retrieving knowledge from language models such as BERT, GPT-2, and ELMo
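This style of probing is often done with cloze templates. The sketch below is illustrative only (the template strings and slot names are hypothetical, not the project's actual templates): each relation template has a subject slot and an object slot, and the object slot is replaced with the masked-LM mask token so the model can be scored on filling it.

```python
# Hypothetical LAMA-style cloze templates for knowledge base probing.
# "[X]" marks the subject slot; "[Y]" becomes the mask the model must fill.
templates = [
    "[X] is the capital of [Y].",
    "[X] was born in [Y].",
]

def fill(template, subject, mask_token="[MASK]"):
    """Turn a relation template into a cloze query for a masked LM."""
    return template.replace("[X]", subject).replace("[Y]", mask_token)

print(fill(templates[0], "Paris"))  # Paris is the capital of [MASK].
```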

Creating simple semantics-preserving transformations that break language models, degrading their performance on the knowledge base probing task, using Python and PyTorch

Graduate Teaching Assistant – Systems Programming

University of Washington – Seattle, WA January 2021 – Present

Teaching low-level programming principles and the implementation of data structures in C and C++

Leading weekly sections of 20–25 students from a class of 150–200. Additionally, hosting weekly office hours to explain concepts to students and help them with homework assignments

Software Engineer Intern

Kernel Labs – Seattle, WA March 2020 – December 2020

Helped bootstrap a privacy-compliance start-up

Used Trafilatura to scrape the web and build a dataset of around 20,000 HTML pages. Developed NLP software that determines whether a company’s privacy policy complies with data privacy law using an XLNet model; achieved a score on the OPP-115 dataset comparable to the Polisis paper

Natural Language Processing Capstone

University of Washington – Seattle, WA March 2020 – June 2020

As a senior-level capstone, reproduced three metrics used to measure contextualization in word embedding models for BERT, GPT-2, and ELMo

Adjusted the self-similarity metric to take variance into account. The adjusted metric showed BERT to be more contextualized than GPT-2 by a z-score of 0.3, whereas the original metric showed GPT-2 to be more contextualized than BERT by a z-score of 0.4
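The baseline self-similarity metric (before any variance adjustment) can be sketched as the average pairwise cosine similarity between a word's contextualized vectors across contexts. This is an illustrative reimplementation under that standard definition, not the capstone's actual code:

```python
import numpy as np

def self_similarity(reps):
    """Average pairwise cosine similarity between the contextualized
    representations (n_contexts x dim) of a single word."""
    normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sims = normed @ normed.T          # all pairwise cosine similarities
    n = len(reps)
    # Exclude the n self-pairs on the diagonal, average over distinct pairs.
    return (sims.sum() - n) / (n * (n - 1))
```

A word whose vectors barely change across contexts scores near 1; a highly context-sensitive word scores lower.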

Built a static embedding from BERT using PCA, taking the first principal component to represent each word. This outperformed GloVe static embeddings on spam classification with an SVC
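The distillation step described above can be sketched as follows: collect a word's contextualized vectors from many contexts, then keep only the first principal component as that word's static vector. The helper name and shapes are assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

def static_embedding(contextual_vecs):
    """Collapse one word's contextualized vectors (n_contexts x dim)
    into a single static vector: the first principal component."""
    pca = PCA(n_components=1)
    pca.fit(contextual_vecs)
    return pca.components_[0]  # unit-length direction in embedding space

# Hypothetical usage: 50 contextual vectors of dimension 8 for one word.
rng = np.random.default_rng(0)
vec = static_embedding(rng.normal(size=(50, 8)))
```

Such per-word static vectors can then be fed to a downstream classifier (e.g. an SVC) in place of GloVe embeddings.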


