
Data Project

Location:
Hyderabad, Telangana, India
Posted:
February 13, 2016


Resume:

Deepika.B

Email ID: actiik@r.postjobfree.com

Contact Number: +919*********

Career Objective

Seeking a challenging position in the software industry where my talent and hard work can add value to the growth of the organization while advancing my own career.

Professional Summary

2+ years of IT experience in gathering and analyzing customers' technical requirements, and in development, management, maintenance, and production-support projects on platforms such as Spark, Hadoop, and Java.

Extensive working experience with Apache Spark modules such as Spark Core and Spark SQL.

Capable of processing large sets of structured, semi-structured, and unstructured data, and of supporting systems application architecture using Hadoop.

Extensive knowledge of Hadoop architecture and its ecosystem.

Experience with data flow languages such as Pig Latin, including developing Pig scripts and UDFs.

Experience with the HiveQL query language and Hive UDFs.

Experience with schema design using NoSQL stores such as HBase.

Hands-on experience migrating RDBMS data to HDFS and Hive using Sqoop.

Developed Pig data workflows to process very large datasets.

Knowledge of Apache Spark and Scala.

Involved in Hive-HBase and Hive-MongoDB integration to meet business requirements.
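To illustrate the Hadoop processing model referred to above, the classic MapReduce word count can be sketched in plain Python. This is a stand-in for an actual Hadoop job, not code from the projects below; the input data is illustrative.

```python
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, as a Hadoop Mapper would.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the counts per word, as a Hadoop Reducer would.
    return {word: sum(values) for word, values in grouped.items()}

lines = ["big data big pipelines", "data pipelines"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 2}
```

On a real cluster the shuffle is handled by the framework and each phase runs in parallel across nodes; the per-record logic is the same.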

Work Experience

Working as a Software Engineer at Info Stairs Technologies since September 2013.

Technical Skills

Hadoop ecosystem : MapReduce, Pig, Hive, Sqoop, Oozie, Flume, Spark, Scala, YARN, Kafka

NoSQL Databases : HBase, MongoDB

Programming Languages : Core Java

Frameworks : Hadoop MapReduce, Spark

Web Technologies : XML, JSON, HTML

Scripting & Query Languages : HiveQL, SQL, Python, Unix/Linux shell

Databases : Oracle 10g, MySQL

Operating Systems : Linux/Unix, Windows XP, 7, 8.1, Cloudera

Development Tools : Eclipse, EditPlus

Educational Summary

B.Tech from Gokul Institute of Science and Technologies, Bobbili, Vizianagaram, JNTUK.

Organizational Experience

PROJECT #2:

Project : V3 Product

Duration : January 2015 to present

Environment : Spark Core, Spark SQL, Hadoop, HDFS, Hive, Pig, Sqoop

Organization : Vitech Asia Ltd

Client : Vitech, USA

Description:

Vitech Systems Group is one of the world's leading providers of administration software to pension, investment, insurance, and health plan administrators. V3's mission is to help organizations improve the efficiency and effectiveness of their operations while increasing service levels and broadening service offerings. This is done via the V3 System, Vitech's enterprise software platform, which addresses a broad array of administration requirements and includes native workflow, imaging, and self-service capabilities.

Responsibilities:

Implemented features using Spark Core and Spark SQL.

Developed Unix shell scripts for creating reports from Hive data.

Designed and built Hadoop solutions for big data problems.

Fully involved in the requirement analysis phase: requirement gathering, impact analysis, and system study.

Involved in collecting data from different data sources using Sqoop.
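The Spark SQL and Hive reporting work described above typically centers on grouped aggregation queries. The sketch below shows the general shape of such a query, run against SQLite purely for illustration; the table and columns are hypothetical, not from the actual V3 project.

```python
import sqlite3

# In the real pipeline this query would run through Spark SQL or HiveQL;
# SQLite is only a stand-in to demonstrate the query shape.
# The "claims" schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (plan TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?)",
    [("pension", 100.0), ("pension", 50.0), ("health", 75.0)],
)

report = conn.execute(
    """SELECT plan, COUNT(*) AS n, SUM(amount) AS total
       FROM claims
       GROUP BY plan
       ORDER BY total DESC"""
).fetchall()
print(report)  # [('pension', 2, 150.0), ('health', 1, 75.0)]
```

In practice the result set of such a query would then be written out by a shell script as the Hive-based report.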

PROJECT #1:

Project Name : Data processing using NoSQL databases

Role : Team Member

Environment : Hadoop, Apache Pig, Hive, Sqoop, UNIX, HBase, and MySQL.

Hardware : Virtual Machines, UNIX.

Team Size : 15

Description:

The purpose of the project is to analyze the effectiveness and validity of controls, to store the terabytes of log information generated by the source providers as part of that analysis, and to extract meaningful information from it. The solution is based on the open-source big data software Hadoop. The data is stored in the Hadoop file system and processed using MapReduce jobs, which in turn involves getting the raw data, processing it to obtain controls and redesign/change-history information, extracting various reports from the controls history, and exporting the information for further processing.

Responsibilities:

Worked on setting up Pig, Hive, and HBase on multiple nodes and developed solutions using Pig, Hive, HBase, and MapReduce.

Developed Sqoop scripts to enable interaction between Pig and MySQL.

Developed MapReduce applications using Hadoop, MapReduce programming, and HBase.

Involved in developing Pig scripts.

Developed UNIX shell scripts for creating reports from Hive data.

Fully involved in the requirement analysis phase.
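HBase development of the kind listed above usually turns on row-key design, since HBase stores rows in lexicographic key order. A common pattern is a composite key with a reversed, zero-padded timestamp so that the newest rows for an entity sort first in a scan. A minimal sketch in plain Python; the entity names and bound are illustrative, not from this project:

```python
MAX_TS = 10**13  # assumed upper bound on epoch milliseconds, used to reverse sort order

def row_key(entity_id: str, ts_millis: int) -> str:
    # Composite key: entity id + reversed, zero-padded timestamp.
    # Because HBase sorts rows lexicographically, the most recent
    # event for an entity comes first in a scan over its key prefix.
    return f"{entity_id}#{MAX_TS - ts_millis:013d}"

keys = sorted([
    row_key("user42", 1_600_000_000_000),
    row_key("user42", 1_700_000_000_000),  # newer event
])
print(keys[0])  # the newer event sorts first
```

Zero-padding matters here: without it, string comparison would not match numeric order and the scan order would break.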

Place: Hyderabad

Deepikarani B.

Date :
