email@example.com linkedin.com/in/aditya-tanikanti7 812-***-****
Master of Science in Data Science December 2016
School of Informatics and Computing, Indiana University Bloomington, Bloomington, IN, USA
Key Courses: Intro to Statistical Inference, Data Mining, Sentiment Analysis, Real World Data Science, Big Data Analytics.
Bachelor of Engineering in Computer Technology March 2012 Yeshwantrao Chavan College of Engineering (YCCE), Rashtrasant Tukdoji Maharaj Nagpur University Nagpur, India
Key Courses: Applied Mathematics, Algorithm Design and Analysis, Data Structures, Software Engineering, DBMS, OOP.
TECHNICAL SKILLS
Programming Languages: R, Python, Java/J2EE (Struts 1.2, Spring MVC, REST, Hibernate), SPARQL, C/C++.
ETL/Reporting/BI Tools: Informatica, KNIME, Tableau, MS Excel, VOSviewer, MicroStrategy.
Databases/Big Data Frameworks: Oracle 11g/12c, Hadoop, MongoDB, MySQL, TOAD, MS Access.
Operating Systems: Windows, Unix (Solaris).
Libraries: D3.js, NumPy, pandas, scikit-learn, Theano, Matplotlib, ggplot2, Gensim.
Machine Learning/Statistical Concepts: Supervised Learning (Regression, Classification, SVM, KNN), Unsupervised Learning (Clustering, Market Basket Analysis), Neural Networks, Ensemble Methods, Recommendation Systems, Text Mining/Sentiment Analysis, Hypothesis Testing, MANOVA.
Organization: Indiana University Bloomington, Bloomington, IN, USA August 2016-Present
Profile: Associate Instructor
Co-Instructor for an undergraduate Python programming course (Information Infrastructure I). [Technologies: Python]
Organization: Jewelry Television, Knoxville, TN, USA June 2016-August 2016
Profile: Data Intern
Worked with the Database Administration team on installing, upgrading, and maintaining Oracle 11g and Oracle 12c databases.
Wrote and optimized SQL queries and implemented table partitioning.
Gained introductory experience with the Informatica ETL tool, building ETL transformations with it, and learned the MicroStrategy BI tool.
Organization: Wipro Technologies, Pune, MH, India June 2012-June 2015
Profile: Software Engineer
Performed duties of Scrum Master, responsible for coordination, requirements derivation, and implementation of projects in an Agile framework using Jira.
FINANCE WORKSTATION (FWS, Credit Suisse): Developed, as part of a 15-member team, a one-stop web portal used by Credit Suisse employees primarily for generating and displaying reports and making transactions.
MATCH (MasterCard): Collaborated with a team of four on MATCH (Member Alert to Control High Risk Merchant System), a web-based risk-management application designed for issuers of MasterCard-powered debit/credit cards to detect fraudulent merchants, i.e. those who misuse these cards.
ACADEMIC PROJECTS
Movie Recommendation System: Built a mini movie-recommendation system on the MovieLens dataset using collaborative and content-based filtering over the historical record of items users have viewed. [Tools: R, Tableau]
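The core idea of user-based collaborative filtering can be sketched briefly; the ratings table below is invented for illustration (the project itself used the MovieLens data and R, not this Python code):

```python
# Illustrative user-based collaborative filtering on a tiny hand-made ratings
# table (hypothetical data, not the MovieLens set used in the project).
import math

# ratings[user][movie]; 0.0 means "not yet rated".
ratings = [
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
]

def cosine(a, b):
    """Cosine similarity between two users' rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def predict(user, movie):
    """Similarity-weighted average of other users' ratings for the movie."""
    num = den = 0.0
    for other, row in enumerate(ratings):
        if other == user or row[movie] == 0.0:
            continue
        sim = cosine(ratings[user], row)
        num += sim * row[movie]
        den += abs(sim)
    return num / den if den else 0.0

# User 0 hasn't rated movie 2; the most similar user (user 1) rated it 1.0,
# so the prediction comes out low.
print(round(predict(0, 2), 2))
```

The prediction leans toward the taste of the most similar user, which is the behavior collaborative filtering exploits.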
Comparative Study of Handcrafted and Traditional Features for Sentiment Analysis: Inspired by the Yelp Dataset Challenge, performed a comparative study of three classification algorithms (Support Vector Machines, Bernoulli Naïve Bayes, and Long Short-Term Memory networks) on word-embedding, bag-of-words, and handcrafted feature representations to predict the sentiment of Yelp business reviews. [Tools: Python, MongoDB]
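Of the three classifiers compared, Bernoulli Naïve Bayes over bag-of-words features is the simplest to sketch; the four training reviews below are invented, not Yelp data:

```python
# Toy Bernoulli Naive Bayes sentiment classifier over bag-of-words features.
# Training reviews are made up for the example (1 = positive, 0 = negative).
import math

train = [("great food loved it", 1), ("amazing service great", 1),
         ("terrible food hated it", 0), ("awful service terrible", 0)]

vocab = sorted({w for text, _ in train for w in text.split()})
classes = (0, 1)
n = {c: sum(1 for _, y in train if y == c) for c in classes}

# count[c][w]: number of class-c training reviews containing word w.
count = {c: {w: 0 for w in vocab} for c in classes}
for text, y in train:
    for w in set(text.split()):
        count[y][w] += 1

def predict(text):
    """Pick the class with the highest Bernoulli log-likelihood."""
    present = set(text.split())
    best, best_score = None, -math.inf
    for c in classes:
        score = math.log(n[c] / len(train))  # class prior
        for w in vocab:
            p = (count[c][w] + 1) / (n[c] + 2)  # Laplace-smoothed P(w | c)
            score += math.log(p if w in present else 1.0 - p)
        if score > best_score:
            best, best_score = c, score
    return best

print(predict("great service"), predict("terrible food"))  # → 1 0
```

Unlike multinomial Naïve Bayes, the Bernoulli variant also scores the *absence* of each vocabulary word, which is why the inner loop runs over the full vocabulary rather than only the words present.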
KNIME Clinical Data Query Analysis (sponsored by Eli Lilly & Co.): Analyzed clinical trial and query data, reporting reasons for missing data, incongruences within the data, and trends in free-text data. [Tools: KNIME, Oracle SQL Developer]
Stock Market Prediction System Using Data Mining Techniques: Used technical indicators such as the Relative Strength Index, Exponential Moving Average, and Moving Average Convergence Divergence (MACD) to predict the next day's opening stock price, and performed a comparative study of traditional data mining techniques such as SVM, decision trees, adaptive boosting, and ARIMA time-series analysis. [Tools: R, Python]
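Two of the indicators named above, EMA and RSI, can be computed in a few lines; the closing-price series here is invented, and the project's actual feature pipeline (in R and Python) is not reproduced:

```python
# Illustrative EMA and RSI computations on an invented closing-price series.
def ema(prices, span):
    """Exponential moving average: recent prices weighted more heavily."""
    alpha = 2 / (span + 1)
    value = prices[0]
    for price in prices[1:]:
        value = alpha * price + (1 - alpha) * value
    return value

def rsi(prices):
    """Relative Strength Index over the whole series (0-100 scale)."""
    gains = losses = 0.0
    for prev, cur in zip(prices, prices[1:]):
        change = cur - prev
        if change > 0:
            gains += change
        else:
            losses -= change
    if losses == 0:
        return 100.0  # all moves were up
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

closes = [100.0, 102.0, 101.0, 103.0, 104.0]
print(round(ema(closes, 3), 2), round(rsi(closes), 2))
```

Such indicator values, computed over a rolling window, become the input features for the classifiers and the ARIMA baseline in a study like this one.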
K-Means Clustering Using MapReduce: Created a sequence file from a directory of text documents, tokenized each document, and generated a TF-IDF vector per document from the sequence file; applied a MapReduce K-means algorithm to form clusters. [Tools: Hadoop, Java]
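The algorithmic core of that pipeline can be sketched on a single machine; the documents and seed centroids below are made up, and the assign/update steps stand in for what the project expressed as Hadoop MapReduce jobs in Java:

```python
# Single-machine sketch: tokenize, build TF-IDF vectors, then K-means with
# explicit "map" (assign) and "reduce" (recompute centroid) steps.
import math
from collections import Counter

docs = ["the cat sat", "the cat ran", "stocks rose today", "stocks fell today"]
tokens = [d.split() for d in docs]

vocab = sorted({t for doc in tokens for t in doc})
df = Counter(t for doc in tokens for t in set(doc))  # document frequency

def tfidf(doc):
    """TF-IDF vector for one tokenized document over the shared vocabulary."""
    tf = Counter(doc)
    return [tf[t] / len(doc) * math.log(len(docs) / df[t]) for t in vocab]

vectors = [tfidf(doc) for doc in tokens]

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vecs, seeds, iters=10):
    """K-means; fixed seed centroids keep this sketch deterministic."""
    centroids = [list(s) for s in seeds]
    for _ in range(iters):
        # "Map": assign each vector to its nearest centroid.
        clusters = [[] for _ in centroids]
        for v in vecs:
            nearest = min(range(len(centroids)), key=lambda i: sqdist(v, centroids[i]))
            clusters[nearest].append(v)
        # "Reduce": recompute each centroid as its cluster's mean.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(col) / len(members) for col in zip(*members)]
    return [min(range(len(centroids)), key=lambda i: sqdist(v, centroids[i])) for v in vecs]

labels = kmeans(vectors, seeds=[vectors[0], vectors[2]])
print(labels)  # the "cat" documents share one cluster, the "stocks" documents the other
```

The map/reduce split mirrors the distributed version: assignment is embarrassingly parallel per document, and the centroid update is an aggregation keyed by cluster id.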