Data Project

Location:
Bengaluru, KA, 560001, India
Posted:
February 11, 2016


Chinna Reddeiah. B

Mobile: +91-805******* E-Mail: acthsy@r.postjobfree.com

Professional Summary:

Having 2.3 years of extensive experience, including 1 year of experience in Big Data and Big Data analytics using Hadoop.

Having experience on IBM BigInsights, Cloudera 4.x, and Hortonworks clusters.

Having hands-on experience using Hadoop ecosystem components such as HDFS, Hive, Pig, Sqoop, Impala, Hue, Flume, and Solr.

Having hands-on experience writing MapReduce jobs through Hive and Pig.

Having experience importing, sorting, and exporting data between different systems and HDFS using Sqoop.

Using Hadoop ecosystem components for storing and processing data; exported data into Tableau using live connections.

Having experience using Oozie to define and schedule jobs.

Having experience with storage and processing in Hue, covering all Hadoop ecosystem components.

Configured and worked with an 11-node Hadoop cluster; installed and configured Hadoop, HBase, Hive, Pig, Sqoop, and Flume.

Working experience in building and deploying Java web applications in Windows and Linux environments.

Experience with web server configuration and management (Apache).

Experience in MySQL and Oracle.

Professional Experience:

Working with FlintQube Infoways Pvt Ltd as a Software Engineer from Jan 2013 to date.

Worked at Valycon IT Solutions as a Service Desk Engineer from July 2012 to Dec 2012.

Technical Skills:

Hadoop Framework : HDFS, MapReduce, Pig, Sqoop, Impala, Hue, Flume, Hive, Oozie, Solr, and ZooKeeper

Databases : HBase, MongoDB, Oracle 10g

Application Servers : Apache, WebLogic, Tomcat

Programming Languages : C++, Core Java

Operating Systems : Windows family and Linux

Education Qualification:

B.Tech (Electrical & Electronics Engineering) - Jawaharlal Nehru Technological University - Anantapur, Andhra Pradesh.

Project Experience:

Project #1

Client: Tapiture.com Duration: September 2014 to date

Project: Online Shopping

Environment:

Cloudera 4.x, Hadoop, MapReduce, HDFS, Hive, Impala, Sqoop, Tableau.

Description:

Tapiture is one of the leading online shopping sites in the US. I was involved as a Hadoop developer in a team of six. Terabytes of data are generated by the e-commerce portal; we use Hadoop to bring that data into a single place (HDFS) and process it with Hive to extract price and product information. Once extracted, that information is shared for further analysis through Tableau.
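
The Hive step described above is essentially a per-product aggregation. A minimal Python sketch of that grouping follows; the schema and sample rows are hypothetical stand-ins for the project's actual tables, and the logic mirrors what a Hive GROUP BY would compute.

```python
# Illustrative records standing in for rows processed by Hive; the schema is assumed.
sales = [
    {"product": "phone", "price": 299.0},
    {"product": "phone", "price": 279.0},
    {"product": "laptop", "price": 899.0},
]

def price_summary(rows):
    """Aggregate per-product price info, like a GROUP BY in the Hive step."""
    summary = {}
    for row in rows:
        prices = summary.setdefault(row["product"], [])
        prices.append(row["price"])
    return {p: {"min": min(v), "max": max(v), "avg": sum(v) / len(v)}
            for p, v in summary.items()}

print(price_summary(sales))
```

The resulting per-product summary is what would then feed a Tableau live connection.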

Responsibilities:

•Involved in analyzing the system and business requirements.

•Involved in importing data from Oracle to HDFS using Sqoop.

•Involved in writing Hive queries to load and process data in HDFS.

•Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.

•Involved in working with Impala for the data retrieval process.

•Exported data from Impala to the Tableau reporting tool and created dashboards on a live connection.

•Performed sentiment analysis on reviews of the products on the client's website.

•Exported the resulting sentiment analysis data to Tableau for creating dashboards.
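
The sentiment analysis step above can be sketched as a minimal lexicon-based scorer; the word lists and sample reviews below are illustrative, not the project's actual model or data.

```python
# Minimal lexicon-based sentiment scoring sketch; word lists are illustrative.
POSITIVE = {"good", "great", "excellent", "love", "fast"}
NEGATIVE = {"bad", "poor", "slow", "broken", "hate"}

def score_review(text):
    """Label a review by comparing positive vs. negative word counts."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

reviews = [
    "Great product, fast delivery!",
    "Broken on arrival, poor packaging.",
    "It is a phone.",
]
labels = [score_review(r) for r in reviews]
print(labels)  # ['positive', 'negative', 'neutral']
```

In the actual pipeline this scoring would run over review data in Hadoop before the results were exported to Tableau.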

Project #2

Client: LogRhythm Networks Duration: October 2014 to date

Project: Network Log Analysis

Environment:

Oracle 10g, Hadoop, HBase, MySQL, Pig, Flume, MapReduce, Sqoop, Hive

Operating system: Ubuntu and CentOS

Description:

Network Log Analysis is a project to analyze the data in the company's network logs. Analysis of the data collected from network devices helps increase network security. The system converts logs from various network devices and host devices into detailed reports: top sources, top destinations, top unique pairs, top services, country-wise top sources and destinations, blacklisted IPs, and user bandwidth. The ultimate goal of the network analysis is to watch for network attacks. In this project, we do the following on the backend:

•Store the log files in HDFS.

•Load the data into Hive tables.

•Summarize the data with Hive and Pig.

•Generate the reports.

•Send notifications to customers.
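
The front of this pipeline, turning raw log lines into structured CSV records before Hive loads them, can be sketched as the per-record logic a MapReduce mapper would apply. The log line format here is an assumption, since the actual device formats are not given.

```python
import csv
import io
import re

# Hypothetical log line format; real device logs would differ.
LOG_RE = re.compile(
    r"(?P<ts>\S+) (?P<src>\S+) -> (?P<dst>\S+) (?P<svc>\S+) (?P<bytes>\d+)"
)

def logs_to_csv(lines):
    """Parse raw log lines and emit CSV text (the per-record conversion step)."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["timestamp", "source", "destination", "service", "bytes"])
    for line in lines:
        m = LOG_RE.match(line)
        if m:  # skip malformed lines rather than fail the job
            writer.writerow([m["ts"], m["src"], m["dst"], m["svc"], m["bytes"]])
    return out.getvalue()

sample = ["2014-10-01T00:00:01 10.0.0.1 -> 8.8.8.8 dns 120"]
print(logs_to_csv(sample))
```

In the real job this conversion ran as a MapReduce program over files already stored in HDFS; here it is a single-process stand-in.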

Responsibilities:

•Created an 11-node Hadoop cluster environment using the Cloudera CDH4 version.

•Involved in importing data in various formats, such as JSON and XML, into the HDFS environment.

•Involved in configuring Hadoop ecosystem components (MapReduce, Hive, Sqoop, Flume, HBase, Pig) and in their performance testing and benchmarking.

•Implemented a MapReduce program to convert log files to CSV format.

•Involved in transferring data from post log tables into HDFS and Hive using Sqoop.

•Involved in creating Hive tables, then applying HiveQL on those tables for data validation.

•Developed and tested MapReduce jobs in Java to analyze the data.

•Implemented Hive partitions and Hive joins.

•Involved in cluster maintenance, cluster monitoring, and troubleshooting.

•Involved in managing and reviewing data backups and log files.

•Involved in implementing Flume for loading logs into HDFS.

•Involved in HBase configuration and the HBase Java API.
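
The top-N reports this project produces (top sources, top destinations, top unique pairs) all reduce to a count-and-rank aggregation of the parsed log records. A minimal Python sketch, with illustrative records in place of real log data:

```python
from collections import Counter

# Illustrative (source, destination) pairs standing in for parsed log records.
records = [
    ("10.0.0.1", "8.8.8.8"),
    ("10.0.0.2", "8.8.8.8"),
    ("10.0.0.1", "1.1.1.1"),
    ("10.0.0.1", "8.8.8.8"),
]

def top_n(rows, key, n=2):
    """Count occurrences of key(row) and return the n most common."""
    return Counter(key(r) for r in rows).most_common(n)

top_sources = top_n(records, key=lambda r: r[0])
top_pairs = top_n(records, key=lambda r: r)
print(top_sources)  # [('10.0.0.1', 3), ('10.0.0.2', 1)]
```

In the project itself this summarization was done with Hive and Pig over HDFS; the sketch shows only the shape of the computation.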


