
Location:
Visakhapatnam, Andhra Pradesh, India
Posted:
March 04, 2020


Leeladhar

Data Scientist adb4ym@r.postjobfree.com

469-***-****

SUMMARY:

A Data Scientist with a broad and continually growing skill set, able to tailor solutions to business problems and to mine hidden value from large sets of structured, semi-structured, and unstructured data by applying machine learning techniques and exploratory analysis.

Hands on Experience: Claim Classification, Health Insurance Premium Prediction, Customer Churn, Sales Forecasting, Market Mix Modeling, Survival Analysis, Customer Classification, Recommendation Systems, Text Mining, Sentiment Analysis

Machine Learning Algorithms: Regression (Linear Regression, Logistic Regression), Decision Trees (CHAID, CART, Random Forest), Time Series Forecasting (ARIMA, ARIMAX, Holt-Winters), Cluster Analysis (K-means, K-means++), KNN, SVM, CNN

Toolkits: Python, R, TensorFlow, Keras, Seaborn, Matplotlib, Hive, PySpark, Oracle, HBase, PostgreSQL, Unix Shell Scripting

Around 6 years of rich experience in machine learning algorithms, data mining techniques, and natural language processing.

Agile Certified Professional.

Worked end-to-end: gathering business requirements, pulling data from different data sources, data wrangling, implementing machine learning algorithms, deploying models, and presenting end results to clients.

Mined and analyzed huge datasets using Python and R. Created an automated data-cleansing module in Python using a supervised learning model.

Worked with data manipulation packages such as Pandas, NumPy, SciPy, and NLTK.

Implemented various statistical tests such as ANOVA, A/B testing, Z-test, and t-test for various business cases.
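A minimal sketch of this kind of hypothesis testing with SciPy; the groups, metric, and effect sizes are synthetic placeholders, not actual project data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=500)   # e.g. baseline metric for an A/B test
variant = rng.normal(loc=103.0, scale=15.0, size=500)   # e.g. treatment group

# Two-sample t-test (Welch's variant, no equal-variance assumption)
t_stat, p_value = stats.ttest_ind(control, variant, equal_var=False)
print(f"t-test: t={t_stat:.3f}, p={p_value:.4f}")

# One-way ANOVA across three illustrative groups
group_c = rng.normal(loc=101.0, scale=15.0, size=500)
f_stat, p_anova = stats.f_oneway(control, variant, group_c)
print(f"ANOVA: F={f_stat:.3f}, p={p_anova:.4f}")
```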

Worked with text analytics libraries and embeddings such as Word2Vec and GloVe.
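A small hedged sketch of training word embeddings with gensim's Word2Vec (gensim 4.x API assumed); the toy corpus is purely illustrative.

```python
from gensim.models import Word2Vec

corpus = [
    ["claim", "denied", "due", "to", "missing", "documents"],
    ["customer", "filed", "a", "new", "claim", "for", "reimbursement"],
    ["premium", "payment", "received", "for", "the", "policy"],
]

# Train a tiny skip-gram model on the toy corpus
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=50)
print(model.wv["claim"][:5])                  # first few dimensions of the "claim" vector
print(model.wv.most_similar("claim", topn=2)) # nearest neighbors in the tiny vocabulary
```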

Knowledge of Seq2Seq models, Bag of Words, Beam Search, and other natural language processing (NLP) concepts.

Experienced with hyperparameter tuning techniques such as Grid Search and Random Search.
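An illustrative sketch of grid and random search with scikit-learn; the estimator and parameter ranges are assumptions, not the actual project settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Exhaustive grid search over a small parameter grid
param_grid = {"n_estimators": [100, 300], "max_depth": [5, 10, None]}
grid = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5, scoring="roc_auc")
grid.fit(X, y)
print("Grid search best params:", grid.best_params_)

# Random search samples a fixed number of candidate settings
param_dist = {"n_estimators": list(range(100, 501, 100)), "max_depth": [3, 5, 10, None]}
rand = RandomizedSearchCV(RandomForestClassifier(random_state=0), param_dist,
                          n_iter=10, cv=5, scoring="roc_auc", random_state=0)
rand.fit(X, y)
print("Random search best params:", rand.best_params_)
```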

Performed outlier analysis with various methods such as Z-score analysis, linear regression, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), and Isolation Forest.
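A minimal sketch of the outlier-detection methods named above (Z-score, DBSCAN, Isolation Forest) on synthetic 2-D data; the thresholds and contamination rate are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(300, 2)),     # inliers
               rng.uniform(-6, 6, size=(15, 2))])   # scattered outliers

# 1) Z-score rule: flag points more than 3 standard deviations from the mean
z_outliers = (np.abs(stats.zscore(X)) > 3).any(axis=1)

# 2) DBSCAN: points labeled -1 are treated as noise/outliers
db_outliers = DBSCAN(eps=0.5, min_samples=5).fit_predict(X) == -1

# 3) Isolation Forest: predictions of -1 mark anomalies
iso_outliers = IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == -1

print(z_outliers.sum(), db_outliers.sum(), iso_outliers.sum())
```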

Knowledge of PostgreSQL and Unix shell scripting. Designed and developed a wide variety of PostgreSQL modules and shell scripts with a focus on optimization.

Solid knowledge of SQL and relational databases (Oracle, SQL Server, gpadmin).

Worked with Tableau visualization to create business reports with key performance indicators (KPIs).

Experienced with DevOps tools such as Docker containers and Jenkins.

Worked with DevOps teams to support deployments by writing Python code for custom logic, following the Infrastructure-as-Code approach.

PROFESSIONAL EXPERIENCE:

CIGNA, Connecticut Nov 2018 – Till Date

Data Scientist Researcher

Role Summary: CIGNA is a leading health care insurance provider across the US. The role involves exploring different data solutions and use cases, and implementing various models that help the business grow continuously and yield profitable insights.

Responsibilities:

Worked with claim classification models to reduce workloads for the Core Operations team.

Explored and created new data sets and evaluated several data science workflow platforms for future applications.

Designed and implemented workflow methodologies for the claim predictions API and created wrapper classes to pull the required data.

Worked on various regression models to predict invoice premiums from highly non-linear data.

Created various models, including SVM with an RBF kernel, Random Forest, Extra Trees, Multi-Layer Perceptron neural networks, KNN, and Ridge Regression.

Worked with K-fold cross-validation and other model evaluation techniques throughout different projects.
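An illustrative K-fold cross-validation sketch; the estimator and data are placeholders rather than the actual claim-prediction model.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=2000, n_features=30, random_state=1)

# Evaluate the model on 5 shuffled folds and report per-fold and mean AUC
cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv, scoring="roc_auc")
print("Fold AUCs:", scores.round(3), "mean:", scores.mean().round(3))
```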

Implemented a CNN model to process documents coming from downstream systems and identify a set of images.
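A hedged sketch of a small Keras CNN for document-image classification; the input shape, layer sizes, and number of classes are assumptions, not the production architecture.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),           # grayscale document scans (assumed size)
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(5, activation="softmax"),       # e.g. 5 document/image categories (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)  # training data not shown
```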

Created a seq2seq model to leverage natural language processing for claim-scenario texts.

Worked with text extraction modules such as Tesseract to extract text from various documents and process the text with NLTK.
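An illustrative OCR-plus-NLP pipeline of the kind described above; the file path is a hypothetical placeholder, and pytesseract plus the NLTK resources are assumed to be installed.

```python
import pytesseract
from PIL import Image
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

# Extract raw text from a scanned page (hypothetical file name)
text = pytesseract.image_to_string(Image.open("scanned_claim_page.png"))

# Basic NLTK processing: tokenize, lowercase, keep alphabetic tokens, drop stopwords
tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
filtered = [t for t in tokens if t not in stopwords.words("english")]
print(nltk.FreqDist(filtered).most_common(10))
```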

Created a clustering algorithm to discover various combinations of data for QA regression scenarios and automate the daily claim regression testing.

Significantly reduced the time the data team takes to supply data for regression test cases by using a Random Forest classifier to classify the clustered data per scenario.

AVON – New York May 2016 – Nov 2018

Data Scientist

Role Summary: AVON runs a chain (multi-level) marketing strategy, which spreads its data across multiple countries. The job involved creating machine learning models for various business problems, including customer satisfaction analysis, regional sales target groups, sales forecasting for underperforming products, sentiment analysis, regression models for predictive analysis, classification, reinforcement learning, and clustering.

Responsibilities:

Created an Automated Ticket Routing algorithm for the offshore team using Natural Language processing and other machine learning algorithms.

Analyzed and significantly reduced customer churn using machine learning to streamline risk prediction and intervention models.

Worked with K-means, K-means++ and hierarchical clustering algorithms for customer classification.
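A minimal customer-segmentation sketch using K-means++ initialization and agglomerative (hierarchical) clustering; the features and cluster counts are illustrative assumptions.

```python
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering

# Synthetic stand-in for scaled customer features
X, _ = make_blobs(n_samples=600, centers=4, n_features=5, random_state=7)
X = StandardScaler().fit_transform(X)

kmeans_labels = KMeans(n_clusters=4, init="k-means++", n_init=10, random_state=7).fit_predict(X)
hier_labels = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(X)

print("K-means cluster sizes:", [int((kmeans_labels == k).sum()) for k in range(4)])
print("Hierarchical cluster sizes:", [int((hier_labels == k).sum()) for k in range(4)])
```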

Performed outlier analysis with various methods such as Z-score analysis, linear regression, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), and Isolation Forest.

Used cross-validation to test models on different batches of data, optimizing them and preventing overfitting.

Worked with PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis) and other dimensionality reduction techniques on various classification problems with linear models.
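An illustrative dimensionality-reduction sketch with PCA and LDA feeding a linear classifier; the dataset and component counts are placeholders, not the actual AVON data.

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)  # small public dataset as a stand-in

# PCA keeps directions of maximum variance; LDA keeps class-discriminative directions
pca_clf = make_pipeline(PCA(n_components=5), LogisticRegression(max_iter=2000))
lda_clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2), LogisticRegression(max_iter=2000))

print("PCA + LR accuracy:", cross_val_score(pca_clf, X, y, cv=5).mean().round(3))
print("LDA + LR accuracy:", cross_val_score(lda_clf, X, y, cv=5).mean().round(3))
```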

Worked with sales forecast and campaign sales forecast models such as ARIMA, Holt-Winters, Vector Autoregression (VAR), and Autoregressive Neural Networks (NNAR).
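A hedged forecasting sketch with ARIMA and Holt-Winters from statsmodels; the synthetic monthly series and model orders stand in for the actual sales and campaign data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly sales with trend + yearly seasonality + noise
idx = pd.date_range("2015-01-01", periods=48, freq="MS")
rng = np.random.default_rng(3)
sales = pd.Series(100 + np.arange(48) * 2
                  + 10 * np.sin(np.arange(48) * 2 * np.pi / 12)
                  + rng.normal(0, 3, 48), index=idx)

arima_fit = ARIMA(sales, order=(1, 1, 1)).fit()
hw_fit = ExponentialSmoothing(sales, trend="add", seasonal="add", seasonal_periods=12).fit()

print("ARIMA 6-month forecast:\n", arima_fit.forecast(steps=6).round(1))
print("Holt-Winters 6-month forecast:\n", hw_fit.forecast(6).round(1))
```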

Experimented with predictive models including Logistic Regression, Support Vector Machines (SVC), and reinforcement learning to prevent retail fraud.

Worked with ETL developers to increase the data inflow standards using various preprocessing methods.

Worked with Survival Analysis for customer dormancy rates, periods and inventory management.
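A minimal customer-dormancy survival-analysis sketch using the lifelines library; the durations and dormancy flags are synthetic placeholders.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(11)
df = pd.DataFrame({
    "months_active": rng.exponential(scale=12, size=500).round(1),  # time until dormancy or censoring
    "went_dormant": rng.integers(0, 2, size=500),                   # 1 = dormancy event observed
})

# Kaplan-Meier estimate of how long customers stay active
kmf = KaplanMeierFitter()
kmf.fit(durations=df["months_active"], event_observed=df["went_dormant"])
print("Median active period (months):", kmf.median_survival_time_)
print(kmf.survival_function_.head())
```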

Built an automated chatbot as a customer-service upgrade to better assist online customers, using text classification and a knowledge base.

Responsible for the design and development of advanced R/Python programs to prepare, transform, and harmonize data sets for modeling.

Worked with Tableau to present data visually and better describe problems and their solutions.

Developed classification models of user behavior based on website activity to create/enhance buying stage classifications.

Deep knowledge of scripting and statistical programming languages such as Python. Advanced SQL skills for working efficiently with very large datasets. Able to deal with non-standard machine learning datasets.

Infosys Ltd – India, Client: Adidas Apr 2015 – May 2016

Data Analyst

Role Summary: The purpose of this role is to support both data engineers and data scientists in their day-to-day tasks. This includes exploration of various internal and external data sources, integration and preparation of data for consumption in advanced analytics, as well as the execution of analytical tasks themselves.

Responsibilities:

Provided Data Science Process Support in daily tasks.

Explored data sources (e.g., clickstream, mobile tracking, social media) in terms of relevance, data quality, availability, technical accessibility, coverage, and completeness, and took the steps suggested by the senior data scientist upon presenting the findings.

Modeled new data points into the analytical data model, documented semantics, and specified relations to existing data points.

Specified and developed ETL/ELT processes from data sources, including Hadoop and SQL Server, to the final format in the analytical database.

Created specific data marts for data science access and analytical applications, and simplified views for business-user access.

Specified data quality rules for individual data domains to ensure data quality and reliability.

Interacted with Global IT horizontal organizations and service partners (internal and external) to drive implementation and testing

Developed a go-live plan and handed over new data streams to the Service Manager for each new enhancement.

Handled analytical tasks under the guidance of Senior Data Scientist (statistical programming, e.g., in R)

Investigated and resolved technical and data consistency issues (as reported by monitoring teams) and coordinated with data owners.

Made data available on the fly for ad hoc analytics and prototyping (FastTrack, bypassing modeling and structured implementations)

Performed most reporting tasks using various reporting tools based on each department's preference.

Infosys Ltd – India, Client: Travis & Perkins May 2014 – Apr 2015

Support Data Engineer

Role Summary: As a Data Analyst, worked with the Plumbing and Heating marketing teams to provide data enabling marketing activity and reporting. Delivered accurate sales and customer data for direct marketing activity to achieve business objectives. Supported the day-to-day data elements of direct marketing activity (mail, email, phone, SMS), ensuring these were produced to the desired standards and delivered to the appropriate team/agency by agreed deadlines. Worked closely and communicated effectively with marketing and multi-channel teams to scope requirements and recommend improvements to customer selections and testing to achieve campaign objectives. Reported and analyzed the performance of direct marketing activity using appropriate tools, and regularly identified trends in customer trading behavior, customer information, and marketing activity.

Responsibilities:

Worked with a large number of marketing teams across the UK, performing requirement gathering and analysis.

Monitored and resolved data flow issues on a daily basis. Also created views for the reporting team to use for the daily marketing numbers.

Worked with the reporting team to resolve data discrepancies and apply logical data corrections occurring throughout the reports.

Worked with MicroStrategy tool to implement sales report models for daily business users.

Also implemented an automated report distribution program for the routine daily tasks of generating and delivering reports.

Worked in the reporting team to create multi-customer trend-pattern analyses and designed an effective, interactive report model for higher management.

Used data mining techniques for outlier detection and created an algorithm to connect the patterns between customer trends.

Resolved 18% of performance issues by tuning very complex modules.

Designed and developed automated troubleshooting programs to minimize support team intervention, achieving a 19% improvement.

Education:

Valparaiso University – Master's in Information Technology, Computing Track, GPA 3.9, Dec 2017


