
Philadelphia, Pennsylvania, United States
November 14, 2018



Tiankai (Carl) Guo

**** ******* *****, *********, ** 22201, 612-***-****,


I am an analytical fast learner with five years of experience at Georgetown University, where I interpreted and analyzed data for a variety of projects. I have proficient knowledge of statistics, mathematics, and analytics across multiple fields and professional applications. In addition, my education and years of advanced research have given me a strong command of statistical models and analytics tools for analyzing big data effectively. I am eager to apply my skills in statistics, data mining, and optimization to deliver data-driven solutions to industrial problems and to develop innovative ways of building commercial strategic initiatives.

INDUSTRY EXPERIENCE

Deloitte Research Fellow Apr. 2017 – Apr. 2018

• Topic: For-Profit Colleges — Are they worth the time and money, or are people better off taking the traditional route?

• Focused on civic and political engagement; formed hypotheses based on a literature review.

• Scripted, cleaned, and organized the data with Python and SQL; performed quantitative analysis and built models in R and Python to predict engagement levels, applying machine-learning methods including regression and tree-based models.

Peace Corps, Washington, DC Sep. 2017 – Jan. 2018

• Data Analysis and Evaluation Intern

• Collected, analyzed (quantitatively and qualitatively), visualized, and reported data related to Volunteer Recruitment and Selection activities, strengthening the measurement and evaluation of Volunteer Recruitment and Selection performance and programs.

Georgetown University Jan. 2018 – Present

Research Assistant, School of Foreign Service

• Text mining and natural language processing analysis of Kenya Parliamentary Debates

Teaching Assistant

• Graduate course "Massive Data Fundamentals": held a weekly three-hour lab; designed and graded homework.

EDUCATION

Georgetown University, Washington, DC (GPA 3.7/4.0) Sep. 2016 – May 2018

• Master of Science in Data Science

• Related Coursework: Data Visualization (Python, R, Tableau), Probability Modeling/Statistical Computing (R), Optimization (R, Python), Massive Data (Hadoop, Spark, AWS), Statistical Learning (R), Structures and Algorithms (Python), Natural Language Processing (Python), Computer Vision (Python), Advanced Database.

University of Minnesota, Twin Cities Jan. 2012 – Jan. 2016

• Bachelor of Arts in Mathematics, Actuarial Specialization

• Related Coursework: Actuarial Math, Econometrics, Finance, Mathematics of Options, Futures, and Derivative Securities, Insurance, C++/C, Algorithms and Data Structures


PROJECTS

Analysis of Residential Housing Prices in Washington, DC

• Explored the relationships between home prices and various factors. Link:

Handwritten Digits Classification by Neural Network

• Implemented a one-hidden-layer artificial neural network in R to solve the multi-class classification problem.

TripAdvisor Reviews Text Mining on the Cloud

• Analyzed customer reviews for insights to help shape the strategic direction of several hotel companies.

• Handled a large-scale JSON database; wrote MapReduce programs and performed topic modeling with PySpark on AWS EMR.

Sentiment Analysis of IMDB Movie Reviews (Natural Language Processing Project)

• Implemented two sentiment-analysis approaches in Python: a traditional bag-of-words model and a deep-learning model based on Google's word2vec tools.

• Built models that accurately predict whether a review is thumbs-up or thumbs-down.

Restaurant Finder Application

• Developed an application that ranks the restaurants that are geographically close to a user-specified address and best match users’ cuisine preferences.
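A ranking of this kind could be sketched as follows; the haversine distance, the scoring weights, and the restaurant schema below are illustrative assumptions, not the application's actual design:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def rank_restaurants(user_loc, preferred_cuisines, restaurants, distance_weight=1.0):
    """Rank restaurants by a distance-based score; a cuisine match lowers the score.

    `restaurants` is a list of dicts with 'name', 'lat', 'lon', 'cuisine' keys
    (an assumed schema for illustration). Lower score ranks higher.
    """
    def score(r):
        dist = haversine_km(user_loc[0], user_loc[1], r["lat"], r["lon"])
        match_bonus = 5.0 if r["cuisine"] in preferred_cuisines else 0.0
        return distance_weight * dist - match_bonus
    return sorted(restaurants, key=score)

# Toy data for demonstration (coordinates near downtown Washington, DC):
restaurants = [
    {"name": "Thai Spot", "lat": 38.908, "lon": -77.043, "cuisine": "Thai"},
    {"name": "Pizza Place", "lat": 38.905, "lon": -77.041, "cuisine": "Italian"},
]
ranking = rank_restaurants((38.907, -77.042), {"Thai"}, restaurants)
```

In practice the user-specified address would first be geocoded to a (lat, lon) pair, and the weighting between distance and cuisine preference would be tuned to taste.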


SKILLS

• Computer skills: Python, R, SQL, MS Office, MATLAB, C++/C, Java, Stata, SAS, HTML, Spark, and Tableau.

• Proficient in data analysis and programming with R and Python.

• Experienced with big data tools in Hadoop Ecosystem, including HDFS, MapReduce, Hive, Pig, and Spark.

• Experienced in working with Amazon Web Services, using EC2 for computing and S3 as a storage mechanism.

• Experienced with web scraping and APIs.

• Personal Website:
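As an illustration of the API experience listed above, here is a minimal sketch of fetching and parsing a JSON API response with the Python standard library; the endpoint URL and the response schema ('items'/'title' fields) are hypothetical assumptions:

```python
import json
from urllib.request import urlopen

API_URL = "https://api.example.com/listings"  # hypothetical endpoint

def fetch_listings(url=API_URL):
    """Fetch and decode a JSON API response (performs a network call)."""
    with urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def extract_titles(payload):
    """Pull the 'title' field from each record; the 'items' key is an assumed schema."""
    return [item["title"] for item in payload.get("items", [])]

# Offline demonstration on a canned response, so no network access is needed:
sample = '{"items": [{"title": "Data Scientist"}, {"title": "Analyst"}]}'
titles = extract_titles(json.loads(sample))
```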
