
Advanced big data analytics developer

Location:
Los Angeles, CA
Posted:
February 14, 2018


Praneeth Uma Venkata Madduri

469-***-**** ac4g1t@r.postjobfree.com www.linkedin.com/in/praneethmadduri1991

PROFESSIONAL SUMMARY

* ***** ** **** ********** as a Data Engineer involved in the entire data science project life cycle, including Data Acquisition, Data Cleaning, Data Manipulation, Data Mining, Data Validation, and Data Visualization. Experienced in analyzing, cleaning, wrangling, coding, implementing, testing, and maintaining database solutions in Teradata, SQL, and Big Data, and ETL solutions using Informatica. Experienced in machine learning algorithms such as linear regression, logistic regression, decision trees (CART), random forests, SVM, k-nearest neighbors, naïve Bayes, Bayesian networks, k-means clustering, neural networks, recommender system design, and more. Strong skills in statistical methodologies such as hypothesis testing, ANOVA, PCA, and correspondence analysis using Python, SAS, and R. Graduated with a Master's in Business Analytics from UTD, with coursework emphasizing statistical analysis, predictive modeling, forecasting, machine learning, and big data. Looking forward to working with and contributing to teams that rely on scalable applications to process structured and unstructured data on big data platforms and use machine learning, NLP, and data analytics for discovery and decision making.

EDUCATION

The University of Texas at Dallas May 2017

Master of Science (MS) – Business Analytics

Jawaharlal Nehru Technological University, Kakinada May 2012

Bachelor of Technology – Electronics and Communications Engineering

TECHNICAL SKILLS

Languages: Python, R, SAS, SQL, PL/SQL, Teradata, MATLAB, Unix

Tools: Informatica, QlikView, Tableau, SSIS, SSRS, Business Objects, Control-M, TWS, Microsoft Excel, MS SQL Server

Data Mining Techniques: Decision Trees, Association Analysis, Naïve Bayes, K-NN, PCA, Ensemble methods, etc.

Statistical Techniques: Linear, Multiple, Logistic, and Probit Regression, ANOVA, Hypothesis Testing, etc.

Concepts: ETL Services, Data Warehousing, Text Mining/NLP, Report/Dashboard Creation, Machine Learning Algorithms

Big Data Skills: Hadoop, MapReduce, Hive, Spark, Pig, Kafka, Sqoop

PROFESSIONAL EXPERIENCE

AT&T, Los Angeles, United States November 2017 – Present

Advanced Big Data Analytics Developer – Contractor

• Performed a POC to implement big data processing applications to collect, clean, and normalize large volumes of open data from the Hadoop Data Lake.

• Worked on clustering and classification of data using machine learning algorithms to detect overpayments in the TAM (Tower Asset Management) system.

• Involved in building an ML engine over the Hadoop Data Lake between REM and Accounts Payable to detect overpayments, projected to ultimately save $2B.

• Involved in migrating the entire data analytics platform from SAS to RCloud (AT&T's open-source project, rcloud.social) to achieve a cloud-based big data analytics solution.

Tata Consultancy Services, Hyderabad, India December 2014 – March 2016

Data Science Engineer – CLIENT (JP Morgan Chase & Co)

• Built and optimized prediction models for CCB critical jobs and automated credit risk modeling with machine learning algorithms using Python.

• Validated and selected models using k-fold cross-validation and error metrics, and worked on optimizing models for higher accuracy.

• Performed data extraction using Hive and MapReduce and was involved in migrating the ETL process to Pig.

• Built an ETL pipeline from Teradata to the Hadoop ecosystem using Sqoop to complete a few business flows, and was involved in building an acknowledgment system to report process completion on Hadoop.

• Created data visualizations using Tableau, R ggplot2, Python matplotlib, MS Visio, and PowerPoint; reported weekly progress and presented final results to partners.

• Involved in designing data models using various advanced machine learning algorithms, and in visualizing and reporting the results in support of strategic decision making.

• Developed a quick decision-making platform using big data technologies for faster analytics reports and ad-hoc calculations, improving processing time by 50%.

Tata Consultancy Services, Hyderabad, India March 2013 – December 2014

Business Intelligence and Analytics Developer – CLIENT (JP Morgan Chase & Co)

• Involved in designing, developing, implementing, and supporting solutions on ETL Informatica 8.x/9.x and databases (SQL and Teradata).

• Implemented data migration of a banking-domain client database from DB2 to the Teradata platform, resulting in cost reductions of up to 20%.

• Performed performance tuning at the database level to improve ETL load timings, decreasing run times by 25%.

• Supported live production and handled more than 10,000 tickets within the established ETAs.

• Aligned data migration and integration teams to streamline project objectives and refine transformation processes, resulting in decreased friction and a 50% improvement in coordination.

• Built resilient data architecture to enable analytics and ad-hoc reporting for Chase.

• Developed mappings to build staging tables, SCD Type 2 dimensions, and facts in snowflake as well as starflake schemas, transforming data elements and channeling AML-KYC feed extracts from upstream systems into the Chase data warehouse (ICDW), with Teradata as the target, to comply with fraud-prevention requirements.

• Analyzed information system needs, evaluated end-user requirements, custom-designed solutions, and troubleshot complex information systems.

• Worked on deployment activities such as migrating Informatica code and database objects from one environment to another.

• Migrated the complete architecture from EDW to ICDW and cut costs by $5M+.

SELECTED ACADEMIC PROJECTS

Future Work Force Project- Fidelity Investments January 2017 - April 2017

• Helped identify data correlations that could prevent business incidents, and explored how to further leverage business subject matter experts through an "on-demand" platform.

Recommender System using Apache Spark August 2016 – December 2016

• Employed user-based collaborative filtering to recommend and promote new music to listeners based on their audio playlists.

Accident Analyzer using SAS January 2016 - April 2016

• Vehicular accidents are not as random as they appear, so we built a smart recommendation and data analysis system in SAS to suggest remedies and reduce such occurrences in the future.

SELECTED ACHIEVEMENTS

• Top 10 teams, Excelsior (2016): Ericsson Datamining Hackathon, Dallas TX

• On the Spot Award, Tata Consultancy Services (2015): For leading the team and achieving objectives within ETA.

• Star of the quarter, Tata Consultancy Services (2014): For outstanding delivery at workplace.

• Service Selection Board Interview Finalist, Indian Army and Navy (2012): Finalist out of 500 candidates.

• Top 1%, EAMCET (2008): Prestigious Engineering Entrance Examination by Government of India.
