
Big Data Developer with professional experience in Hadoop and Spark.

Location: Bowling Green, OH
Posted: June 20, 2016


Resume:

Sri Sandeep Velichety

Email: *******.*********@*****.***

Phone: 419-***-****

Address: 216 S Mercer Road, Apt 11, Bowling Green, Ohio 43402

LinkedIn: https://www.linkedin.com/in/velichety-sri-sandeep-b231722a

Summary:

Passionate about IT, with extensive experience in building scalable, data-driven applications leveraging a wide array of technologies, including Mainframe, ETL, Analytics, and Hadoop; actively seeking opportunities in Analytics and BI.

Experience in building applications on Mainframes.

Expert in ingesting data into the Big Data ecosystem.

Profound knowledge of Retail Merchandise Assortment, Planning, and Pricing operations.

Skills and Certifications:

Big Data Tools - Hive, Pig, MapReduce, Spark, HDFS

Certifications - Python (Coursera), IBM DB2 Associate, Certificate of Engineering Excellence in Big Data Analytics and Optimization from Carnegie Mellon University

Scripting Languages - R, Python, Bash Shell Scripting

Business Tools and Databases - SQL, Teradata, MySQL, DB2, SQL Server, SAS Enterprise Miner

Analytics Expertise - Visualizations and Storyboarding, Data Mining, Text Mining, Big Data, Web

Coursework - Data Mining, Regression, Big Data Using Spark, Time Series, Business Intelligence, Database Management Systems, Optimization, Exploratory Data Analysis, Project Management

Education:

Master of Science in Analytics - Graduation Date: June 2016

Bowling Green State University, Bowling Green, OH

Bachelor of Science in Information Technology - Graduation Date: May 2010

Jawaharlal Nehru Technological University, Hyderabad, India

Academic Projects:

Performed various analysis tasks on 40 years of NCDC data comprising 70 million records.

Worked with Spark and Scala, Hive, and Impala for the major data analysis (an illustrative sketch follows these project bullets).

Created a Stock Market Portfolio Building application that takes daily S&P 500 data and forecasts the price of each share using a tailor-made model, creates risk buckets based on volatility, and selects shares for each bucket with individual budget allocation percentages.

Extracted features that affect a store's sales using statistical techniques. Analyzed store-level daily sales data for over 1,115 stores across 2 years, consisting of 1 million records.
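
The following is a minimal, illustrative sketch of the kind of Spark (Scala) analysis referenced above. The input path, file layout, and column names (station, date, temperature) are assumptions made for illustration, not details taken from the original project.

// Illustrative sketch only: a Spark (Scala) job in the spirit of the NCDC analysis above.
// The input path and column names are assumptions, not taken from the original project.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object NcdcMaxTemperature {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("NCDC yearly max temperature (sketch)")
      .getOrCreate()

    // Assumed CSV layout: station, date (yyyyMMdd), temperature (tenths of a degree Celsius)
    val readings = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/ncdc/*.csv")   // hypothetical path

    // Derive the year from the date column and compute the maximum temperature per year
    val maxByYear = readings
      .withColumn("year", substring(col("date").cast("string"), 1, 4))
      .groupBy("year")
      .agg(max(col("temperature") / 10.0).alias("max_temp_c"))
      .orderBy("year")

    maxByYear.show(50, truncate = false)
    spark.stop()
  }
}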

Professional Experience:

Module Lead/Developer – Tata Consultancy Services (November 2013-June 2015)

Acted as an SME and module lead in the migration of applications from Mainframes and the Teradata warehouse to Big Data.

Worked with the Teradata and Mainframe teams to ingest data into Hadoop.

Created migration scripts for the conversion of Mainframe data to Big Data (an illustrative sketch follows this role's bullets).

Built a warehouse in Big Data and loaded the required data into its tables.

Built UNIX scripts to load data into HDFS.

Provided maintenance and support until the application reached stability.

Created reports for the business users.
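
Below is a minimal, illustrative sketch of the kind of load step described in this role, assuming a pipe-delimited Mainframe/Teradata extract has already been landed on HDFS (for example, by a UNIX load script). The paths, delimiter, and table name are hypothetical.

// Illustrative sketch only: one way the migration loads described above could look in Spark (Scala).
// The extract path, delimiter, and table name are assumptions, not the original implementation.
import org.apache.spark.sql.{SaveMode, SparkSession}

object MainframeExtractToHive {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("Mainframe extract to Hive (sketch)")
      .enableHiveSupport()
      .getOrCreate()

    // Assumed pipe-delimited extract already landed on HDFS
    val extract = spark.read
      .option("header", "true")
      .option("sep", "|")
      .csv("hdfs:///landing/mainframe/pricing_extract/")   // hypothetical path

    // Write into a warehouse table; overwrite keeps reruns idempotent
    extract.write
      .mode(SaveMode.Overwrite)
      .saveAsTable("warehouse.pricing_extract")   // hypothetical database.table

    spark.stop()
  }
}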

Analyst and Developer – Tata Consultancy Services (February 2011-November 2013)

Maintained Assortment Planning and Pricing applications on Mainframes.

Streamlined the existing processes for improved productivity.

Migrated legacy applications and sunset various components.

Performed regular health checks of the databases for smooth and stable functionality.


