
DATA SCIENTIST

Location: Austin, TX
Posted: March 07, 2023


Resume:

Cesar Contreras

Senior Data Scientist

P: 878-***-**** E: ads6bc@r.postjobfree.com

Professional Profile

●Sr. Data Scientist and ML Engineer with 8+ years of experience in applying Machine Learning, Deep Learning, Statistical Modeling, Data Mining, Data Visualization, and Data/Business Analytics to solve complex business problems

●Proficient in managing the entire data science project life cycle and experienced across all phases of the software development life cycle (SDLC), working within Agile and Scrum methodologies

●Skilled in transforming business concepts and needs into mathematical models, designing algorithms, and building and deploying custom business intelligence software solutions; successfully built models with deep learning frameworks such as TensorFlow, PyTorch, and Keras.

●Adept at applying statistical analysis and machine learning techniques to live data streams from big data sources using Spark and Scala.

●Hands-on expertise in the application of statistical learning methods including regression analysis, forecasting, decision trees, random forest, classification, cluster analysis, support vector machines, and naive Bayes techniques.

●Demonstrated excellence in advanced statistical and predictive modeling techniques to build, maintain, and improve real-time decision systems.

●Excellent at understanding new subject matter domains and designing and implementing effective, novel solutions for use by other experts.

●Strong interpersonal and analytical skills, with the ability to multitask and adapt while managing risk in high-pressure environments; a creative problem solver who thinks logically and pays close attention to detail.

Technical Skills

Libraries:

NumPy, SciPy, Pandas, Theano, Caffe, scikit-learn, Matplotlib, Seaborn, Plotly, TensorFlow, Keras, NLTK, PyTorch, Gensim, urllib, BeautifulSoup4, PySpark, PyMySQL, SQLAlchemy, MongoDB, sqlite3, Flask, Deeplearning4j, EJML, dplyr, ggplot2, reshape2, tidyr, purrr, readr, Apache Spark.

Machine Learning Techniques:

Supervised Machine Learning Algorithms (Linear Regression, Logistic Regression, Support Vector Machines, Decision Trees and Random Forests, Naïve Bayes Classifiers, K-Nearest Neighbors), Unsupervised Machine Learning Algorithms (K-Means Clustering, Gaussian Mixtures, Hidden Markov Models, Autoencoders), Imbalanced Learning (SMOTE, ADASYN, NearMiss), Deep Learning (Artificial Neural Networks), Machine Perception

Analytics:

Data Analysis, Data Mining, Data Visualization, Statistical Analysis, Multivariate Analysis, Stochastic Optimization, Linear Regression, ANOVA, Hypothesis Testing, Forecasting, ARIMA, Sentiment Analysis, Predictive Analysis, Pattern Recognition, Classification, Behavioral Modeling

Natural Language Processing:

Document Tokenization, Token Embedding, Word Models, Word2Vec, FastText, Bag-of-Words, TF-IDF, BERT, ELMo, LDA

Programming Languages:

Python, R, SQL, Java, MATLAB, and Mathematica

Applications:

Machine Language Comprehension, Sentiment Analysis, Predictive Maintenance, Demand Forecasting, Fraud Detection, Client Segmentation, Marketing Analysis, Cloud Analytics in cloud-based platforms (AWS, MS Azure, Google Cloud Platform)

Deployment:

Continuous improvement of project processes, workflows, and automation, with ongoing learning and achievement

Development:

Git, GitHub, GitLab, Bitbucket, SVN, Mercurial, Trello, PyCharm, IntelliJ, Visual Studio, Sublime, JIRA, TFS, Linux

Big Data and Cloud Tools:

HDFS, Spark, Google Cloud Platform, MS Azure Cloud, SQL, NoSQL, Data Warehouse, Data Lake, HiveQL, AWS (Redshift, Kinesis, EMR, EC2, Lambda)

Professional Experience

Sr. Data Scientist and ML Engineer (NLP Engineering) - Etsy (REMOTE) Oct 2021 – Present

(Etsy, Inc. is an American e-commerce company focused on handmade or vintage items and craft supplies. These items fall under a wide range of categories, including jewelry, bags, clothing, home décor and furniture, toys, art, as well as craft supplies and tools.)

Worked as a Sr. Data Scientist and NLP Engineer at Etsy, applying NLP to sentiment analysis of customer reviews to measure how satisfied customers were with products and the overall shopping experience, helping the company identify areas for improvement and make data-driven decisions. Analyzed customer search queries to identify frequently used keywords, then optimized product titles and descriptions to include those keywords and improve product visibility in search results. Also analyzed customer reviews to identify a product's most frequently mentioned features and incorporated that information into descriptions to make products more visible. Developed and implemented NLP models using deep learning techniques, resulting in a 20% increase in customer satisfaction scores and a 15% increase in sales.

Utilized NLP techniques such as sentiment analysis with BERT, text classification with ULMFiT, named entity recognition with spaCy, and text summarization with GPT-2 to gain insight into customer satisfaction (sentiment analysis score), improve customer experience, and increase sales.
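
Below is a minimal sketch of the sentiment-analysis step using a pre-trained BERT-family model through the Hugging Face Transformers pipeline; the model name and sample reviews are illustrative placeholders rather than the actual production setup.

```python
# Minimal sketch: scoring customer reviews with a pre-trained BERT-family
# sentiment model via Hugging Face Transformers. The model name and the
# sample reviews are illustrative placeholders, not the production setup.
from transformers import pipeline

# A DistilBERT model fine-tuned on SST-2 is a common off-the-shelf choice
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reviews = [
    "Beautiful handmade necklace, shipped quickly and well packaged.",
    "The print arrived damaged and the seller never replied.",
]

for review, result in zip(reviews, sentiment(reviews)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}
    print(f"{result['label']:<8} {result['score']:.3f}  {review}")
```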

Deployed models on AWS SageMaker for easy scalability and accessibility.

Analyzed customer search queries and product descriptions using NLP and deep learning techniques, such as keyword extraction with TF-IDF, embedding with Word2Vec, topic modeling with LDA, and language translation with OpenNMT, to improve search engine optimization (SEO) and product recommendations, resulting in increased visibility and sales.
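
As a rough illustration of the keyword-extraction step, the sketch below ranks terms from a handful of made-up search queries by TF-IDF weight using scikit-learn; the queries and parameters are placeholders, and the Word2Vec and LDA steps are not shown.

```python
# Minimal sketch of keyword extraction with TF-IDF using scikit-learn.
# The sample search queries are invented for illustration only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

queries = [
    "handmade silver necklace for women",
    "vintage silver ring",
    "personalized necklace gift for mom",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(queries)
terms = np.array(vectorizer.get_feature_names_out())

# Rank terms by their average TF-IDF weight across all queries
avg_weights = np.asarray(tfidf.mean(axis=0)).ravel()
top = avg_weights.argsort()[::-1][:5]
for term, weight in zip(terms[top], avg_weights[top]):
    print(f"{term:<25} {weight:.3f}")
```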

Deployed models on AWS Elasticsearch for improved search capabilities and AWS Comprehend for natural language processing.

Utilized tools such as TensorFlow, PyTorch, NLTK, and spaCy to implement NLP and deep learning techniques in e-commerce applications.

Achieved significant improvements in customer satisfaction (sentiment analysis score), sales (product recommendation accuracy), and search engine optimization (SEO improvement) by utilizing NLP and deep learning techniques in e-commerce projects and deploying them on AWS for easy scalability and accessibility, using AWS services such as SageMaker, Elasticsearch, Comprehend, etc.

Built strong relationships with stakeholders by effectively communicating project progress and results, and presenting data-driven recommendations.

Used Tableau, Power BI, and Plotly for visualizations.

Lead Data Scientist (AI/ML Engineering) – Humana (REMOTE) Mar 2020 - Oct 2021

(Humana Inc. is the third largest health insurance provider in the US)

Led a team to integrate OCR technology into the healthcare provider's existing workflow, resulting in a more streamlined and efficient process for data entry and claims processing. Utilized OCR technology to extract text from various types of healthcare documents, such as medical records, prescriptions, lab reports, and insurance claims to automate data entry and streamline claims processing.

Developed OCR models that can extract data from different healthcare forms, such as prescriptions, lab reports, and patient information forms, reducing manual data entry by 75%.
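
For illustration only, the sketch below extracts raw text from a scanned form with off-the-shelf Tesseract (via pytesseract) and pulls out simple field/value pairs; the custom deep-learning OCR models described here are not reproduced, and the file path and regex are placeholders.

```python
# Baseline sketch only: extracting text from a scanned healthcare form with
# Tesseract (via pytesseract). This is a simpler stand-in for the custom
# CNN/RNN OCR models described above; the path and pattern are placeholders.
import re

import pytesseract
from PIL import Image

image = Image.open("scanned_lab_report.png")  # placeholder path
raw_text = pytesseract.image_to_string(image)

# Naive post-processing: pull out lines that look like "Field: value" pairs
fields = dict(
    re.findall(r"^([A-Za-z ]+):\s*(.+)$", raw_text, flags=re.MULTILINE)
)
print(fields)
```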

Utilized OCR technology to extract text from insurance claims, reducing the number of denied claims by 20% and increasing the efficiency of the claims process.

Implemented an OCR-based system to automatically extract patient information from medical documents, which streamlined the data entry process and improved patient care.

Developed and trained OCR models using deep learning techniques such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for extracting text from various types of healthcare documents, including medical records, prescriptions, and lab reports.

Utilized natural language processing (NLP) techniques such as named entity recognition (NER) and text classification to extract structured data from unstructured text, improving the accuracy of the OCR system.
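
A minimal sketch of the NER step is shown below using spaCy's small general-purpose English model on an invented sentence; a production healthcare system would more likely use a domain-specific model.

```python
# Minimal sketch of named entity recognition with spaCy on unstructured,
# clinical-style text. The sentence is invented; a healthcare deployment
# would typically use a domain-specific model rather than this general one.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp(
    "John Smith was prescribed 20 mg of Lipitor on March 3, 2021 "
    "at Memorial Hospital."
)

for ent in doc.ents:
    print(ent.text, ent.label_)
```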

Implemented a hybrid OCR system that combines traditional rule-based OCR with deep learning-based OCR for improved accuracy and flexibility in handling different types of documents.

Developed custom NLP models for extracting specific information from medical documents such as patient demographics, diagnosis, and treatment information

Utilized OCR and NLP models to automatically extract the codes and information required for claims processing, reducing the need for manual intervention and increasing the speed and accuracy of claims processing.

Implemented connectionist temporal classification (CTC) loss function to improve the performance of RNN-based OCR models for recognizing text in variable-length documents.
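
The sketch below shows how CTC loss is typically wired up in PyTorch for a sequence recognizer; the tensor shapes, vocabulary size, and random inputs are illustrative stand-ins for the actual OCR model's outputs.

```python
# Minimal sketch of applying CTC loss to the output of an RNN-based text
# recognizer in PyTorch. Shapes and vocabulary size are illustrative; the
# encoder that would produce `log_probs` is omitted.
import torch
import torch.nn as nn

T, N, C = 50, 4, 28  # time steps, batch size, classes (27 characters + blank)

# Stand-in for recognizer output: (T, N, C) log-probabilities
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)
targets = torch.randint(1, C, (N, 12), dtype=torch.long)  # label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients would flow back into the (omitted) recognizer
print(loss.item())
```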

Created ensembles of CNN and RNN models for text recognition in images, resulting in a significant increase in the accuracy and robustness of the OCR system.

Utilized transfer learning techniques to fine-tune pre-trained CNN and RNN models on a healthcare-specific dataset, resulting in a significant reduction in training time and an increase in accuracy.
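
As a hedged illustration of the fine-tuning approach, the sketch below freezes a pre-trained ResNet-18 backbone from torchvision and trains a new classification head; the class labels and the random batch are placeholders for the healthcare-specific dataset.

```python
# Minimal sketch of transfer learning: fine-tune a pre-trained ResNet-18 on
# a document-image classification task. The classes and the random batch are
# placeholders; the healthcare-specific dataset is not reproduced here.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # e.g. prescription / lab report / claim / other (illustrative)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained backbone and train only the new classification head
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch (replace with a DataLoader)
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```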

Deployed OCR and NLP models on Azure Kubernetes Service (AKS) for scalable and efficient processing of healthcare documents.

Utilized Azure Machine Learning (AML) for model deployment, monitoring, and management of the OCR and NLP models in a production environment.

Implemented Azure Functions to trigger the OCR and NLP models for real-time processing of incoming healthcare documents.

Utilized Azure Blob Storage for storing and managing large amounts of healthcare documents and the OCR output.

Set up Azure Log Analytics for monitoring and troubleshooting the OCR and NLP models in production.

Used Azure Cognitive Services such as Computer Vision and Text Analytics to enhance the OCR and NLP models' capabilities.

ML Engineer (Computer Vision) - Micron Technology, Austin, TX Sep 2018 - Feb 2020

Stereolithography nano-machined stacks are produced with a 3D printing technology that uses a laser to solidify layers of resin into a finished product. Used computer vision and machine learning techniques to detect and predict flaws in these stacks.

●Expertise in image processing and analysis, including techniques for feature extraction and image classification.

●Familiarity with deep learning frameworks such as TensorFlow, PyTorch, or Caffe.

●Experience in training and evaluating deep learning models using large datasets.

●Knowledge of neural networks, specifically CNNs and their architectures such as ResNet, Inception, DenseNet, etc.

●Experience with transfer learning, which is the process of using pre-trained models as a starting point for further training on new datasets.

●Experience with the latest CNN models and architectures such as EfficientNet, YOLO, and Mask R-CNN.

●Ability to develop and implement computer vision-based solutions for quality control and manufacturing process optimization.

●Strong programming skills in Python and experience with data analysis tools such as Pandas, NumPy, and Scikit-learn.

●Experience with data visualization tools like Matplotlib, Seaborn, and Plotly.

●Strong understanding of statistics and the ability to apply statistical analysis to large data sets.

●Used Spark MLlib, Spark's machine learning library, to build and evaluate different models (see the sketch after this list).

●Developed machine learning algorithms utilizing Caffe, TensorFlow, Scala, Spark MLlib, R, SciPy, Matplotlib, NLTK, Python, scikit-learn, etc.

●Developed and enhanced statistical models by leveraging best-in-class modeling techniques.

●Experience with machine learning engineering, including designing and implementing ML workflows, and building and deploying ML models in production environments.

●Experience with MLOps (machine learning operations) including model versioning, monitoring, and maintaining ML systems in production.

●Implemented CI/CD pipelines to automate data pre-processing, model building, and model deployment stages.

●Implemented unit testing and integration testing.

●Experience with containerization and orchestration tools such as Docker and Kubernetes.

●Worked with project team representatives to ensure that logical and physical ER/Studio data models were developed in line with corporate standards and guidelines.

●Excellent written and verbal communication skills, with the ability to explain complex technical concepts to a non-technical audience.

●Built strong relationships with stakeholders by effectively communicating project progress and results, and presenting data-driven recommendations.
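
The following is a minimal sketch of the kind of Spark MLlib pipeline referenced earlier in this list; the feature columns, label name, and data path are invented placeholders rather than Micron's actual schema.

```python
# Minimal sketch of building and evaluating a model with Spark MLlib.
# Column names and the data path are placeholders, not the actual schema.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("flaw-detection").getOrCreate()
df = spark.read.parquet("hdfs:///data/stack_features.parquet")  # placeholder

assembler = VectorAssembler(
    inputCols=["layer_thickness", "exposure_time", "defect_density"],  # placeholders
    outputCol="features",
)
rf = RandomForestClassifier(labelCol="is_flawed", featuresCol="features", numTrees=100)
pipeline = Pipeline(stages=[assembler, rf])

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
predictions = model.transform(test)

auc = BinaryClassificationEvaluator(labelCol="is_flawed").evaluate(predictions)
print(f"Test AUC: {auc:.3f}")
```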

Data Scientist - Santander Bank, Boston, MA Jan 2017 - Sep 2018

(Santander Bank’s principal market is the northeastern United States. It offers an array of financial services and products including retail banking, mortgages, corporate banking, cash management, credit card, capital markets, trust and wealth management, and insurance.)

Worked with a team of statisticians, mathematicians, engineers, computer scientists, and experts in econometrics and Artificial Intelligence (Machine Learning and Deep Learning) specialized in advanced analytics for the Risk division at Santander. We developed quantitative models to support decision-making processes through the discovery and communication of meaningful patterns in the data.

●Built Artificial Neural Network models to detect anomalies and fraud in transaction data.

●Stratified imbalanced data to ensure fair representation of the minority class in all data sets used for cross-validation of the model.

●Consulted with regulatory and subject matter experts to gain a clear understanding of information and variables within data streams.

●Extracted data from Hive databases on Hadoop using Spark through PySpark.

●Used R’s dplyr for data manipulation, as well as ggplot2 for data visualization and EDA.

●Utilized Scikit-Learn, SciPy, Matplotlib, and Plotly for EDA and data visualization.

●Used Scikit-Learn's model selection framework to perform hyper-parameter tuning using the GridSearchCV and RandomizedSearchCV algorithms.

●Developed unsupervised K-Means and Gaussian Mixture Models (GMM) from scratch in NumPy to detect anomalies (see the sketch after this list).

●Employed a heterogeneous stacked ensemble of methods for the final decision on which transactions were fraudulent.

●Deployed the model as a Flask API served from a Docker container.

●Evaluated the performance of the model using a confusion matrix, accuracy, recall, precision, and F1 score, paying particular attention to recall.

●Utilized Git for version control on GitHub to collaborate and work with the team members.
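
Below is a minimal from-scratch K-Means implementation in NumPy, as referenced earlier in this list; the random data stands in for transaction features, and the anomaly-scoring step built on top of the clusters is omitted.

```python
# Minimal from-scratch K-Means in NumPy. Real anomaly detection would add
# feature scaling and a distance-based anomaly score; this sketch shows only
# the core clustering loop on stand-in data.
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each point to the nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster is empty
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

X = np.random.default_rng(1).normal(size=(500, 2))  # stand-in transaction features
centroids, labels = kmeans(X, k=3)
print(centroids)
```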

Data Scientist – Webtrends, Portland, Oregon Oct 2014 - Dec 2016

(Webtrends is a digital intelligence and analytics platform that provides detailed data and insights on customer behavior and engagement.)

As a Data Scientist at Webtrends, analyzed the performance of marketing campaigns across multiple channels, such as email, social media, and paid advertising, and applied machine learning techniques to build models that predict which content or products are most likely to interest a given user.

Developed and implemented machine learning models for market campaign analysis, resulting in a 15% increase in click-through rates and a 10% increase in conversion rates for clients

Utilized natural language processing techniques to analyze customer feedback and identify key drivers of customer satisfaction and loyalty, which were used to personalize marketing campaigns and improve customer retention

Conducted statistical analysis and A/B testing to optimize email and social media campaigns, resulting in a 25% increase in open rates and a 20% increase in engagement
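
As an illustration of the underlying significance testing, the sketch below runs a two-proportion z-test on invented open-rate counts using statsmodels; the numbers are placeholders, not actual campaign results.

```python
# Minimal sketch of an A/B test on email open rates using a two-proportion
# z-test (statsmodels). The counts are invented for illustration only.
from statsmodels.stats.proportion import proportions_ztest

opens = [1450, 1320]    # variant A, variant B opens (placeholders)
sends = [10000, 10000]  # emails sent per variant (placeholders)

z_stat, p_value = proportions_ztest(count=opens, nobs=sends)
print(f"open rates: A={opens[0] / sends[0]:.3%}, B={opens[1] / sends[1]:.3%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```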

Worked with large datasets to identify patterns and trends in customer behavior, and used data visualization techniques to present findings to clients and internal teams

Collaborated with cross-functional teams to implement data-driven solutions for website personalization and targeted marketing campaigns

Built and maintained a data pipeline to collect, clean, and process large amounts of web and marketing data for analysis.

Developed and deployed models to production using technologies such as Python, R, SQL, Tableau, and AWS.

Developed and trained models using various algorithms such as Random Forest, XGBoost, Neural Networks, etc.

Built dashboards and reports to track key performance metrics and provide insights to stakeholders.

Maintained and updated the data science infrastructure and tools.

Communicated and presented data-driven insights and recommendations to stakeholders and clients.

Stayed up-to-date with the latest advancements in the field of data science and machine learning in the context of market campaign analysis and personalization.

Educational Credentials

Bachelor of Engineering – ITESO

Certifications - AWS Architecture, Core Services


