RESUME
NAME: Aruna Edupalli
E-mail: acu0tq@r.postjobfree.com
Mobile: +91-984*******
OBJECTIVE:
With 2 years of experience in the IT industry, I am looking for a challenging position in Big Data technologies in which I can put my diverse skill set to work.
PROFESSIONAL SUMMARY:
Good knowledge of application development involving analysis, development, and maintenance of various applications using Hadoop
Experience with Apache Hadoop ecosystem components such as Hive, Pig, and MapReduce (a word-count sketch follows this summary)
Exposure to the query programming model of Hadoop (Hive and Pig)
Understanding of the NameNode, DataNode, JobTracker, TaskTracker, and Secondary NameNode daemons
Executed jobs in Hadoop local mode, pseudo-distributed mode, and fully distributed (cluster) mode for production
Good knowledge of core Java
Capable of picking up new technologies with a minimal learning curve
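As an illustration of the MapReduce experience above, here is a minimal word-count job in Java, the canonical Hadoop example. It is only a sketch: the input and output paths come from the command line, and nothing here is taken from the projects listed below.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        // Mapper: emit (word, 1) for every token in the input line.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reducer (also used as combiner): sum the counts per word.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();
            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }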
TECHNICAL SKILLS:
Technologies : Apache Hadoop, core Java
Hadoop ecosystem : Hive, Pig, HBase, MapReduce, Sqoop, Spark, Flume
Database : SQL, Oracle, HBase
Operating System : Windows, Linux, Ubuntu, CentOS
IDEs : NetBeans, Eclipse, MyEclipse
Productivity Tools : MS Office, OpenOffice, LibreOffice
File Formats : ORC, Parquet, CSV, Avro
EDUCATIONAL QUALIFICATIONS:
B.Tech from PACE Institute of Technology & Sciences in 2012 with an aggregate of 70%.
Intermediate from S.S.N Junior College with an aggregate of 86%.
SSC from P.V.R.M.G.H. School with an aggregate of 73.33%.
PROFESSIONAL EXPERIENCE:
Presently working as a Junior Software Engineer at Solix Technologies Pvt. Ltd., from August 2014 till date.
PROJECT#1
Title : GMPC
Tools used : Hadoop 1.0, Hive, Pig
Description :
This project deals with semi-structured data from a music player site. The data describes users from different cities logging into the site, with the date and time of each login. The task is to analyse this data and generate a report, for example login counts per city, as in the sketch below.
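A minimal sketch of the kind of report query used here, run against HiveServer2 over JDBC from Java. The host, credentials, and the user_logins table are assumptions for illustration, not the project's actual schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LoginReport {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC driver; host, port, and credentials are placeholders.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection con = DriverManager.getConnection(
                         "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = con.createStatement()) {
                // Hypothetical table user_logins(user_id, city, login_ts) built from the raw logs.
                ResultSet rs = stmt.executeQuery(
                        "SELECT city, COUNT(*) AS logins FROM user_logins "
                        + "GROUP BY city ORDER BY logins DESC");
                while (rs.next()) {
                    System.out.println(rs.getString("city") + "\t" + rs.getLong("logins"));
                }
            }
        }
    }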
PROJECT#2
Title : LinkedIn profiles
Tools used : HDFS, Hive, Pig, Flume
Description :
This project contains LinkedIn users' profile data in unstructured format. The data is pulled from the source using Flume (a configuration sketch follows) and analysed to produce reports according to the client's specification.
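A hypothetical Flume agent configuration for this kind of ingestion: a spooling-directory source feeds an HDFS sink through a memory channel. The directory and HDFS paths are assumptions; such an agent would be started with flume-ng agent --name agent1 --conf-file <file>.

    # Hypothetical single-agent setup: spool downloaded profile files into HDFS.
    agent1.sources  = src1
    agent1.channels = ch1
    agent1.sinks    = sink1

    # Source: watch a local spool directory for new files.
    agent1.sources.src1.type = spooldir
    agent1.sources.src1.spoolDir = /data/linkedin/incoming
    agent1.sources.src1.channels = ch1

    # Channel: buffer events in memory.
    agent1.channels.ch1.type = memory
    agent1.channels.ch1.capacity = 10000

    # Sink: write the raw events into HDFS as plain files.
    agent1.sinks.sink1.type = hdfs
    agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/data/linkedin/raw
    agent1.sinks.sink1.hdfs.fileType = DataStream
    agent1.sinks.sink1.channel = ch1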
PROJECT#3
Title : System Analytics
Tools used : HDFS, Hive
Description :
This project contains unstructured data from a portal website covering its various users. The unstructured records are parsed into structured form and then analysed to generate reports; a parsing sketch follows.
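A small Java sketch of the unstructured-to-structured step: raw portal lines are matched with a regular expression and written out as tab-delimited rows that a Hive table could read directly. The line format shown is hypothetical.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.PrintWriter;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class PortalLogParser {
        // Hypothetical raw line format: "2014-08-01 10:15:22 user=jdoe city=Chennai"
        private static final Pattern LINE = Pattern.compile(
                "(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}) user=(\\S+) city=(\\S+)");

        public static void main(String[] args) throws Exception {
            try (BufferedReader in = new BufferedReader(new FileReader(args[0]));
                 PrintWriter out = new PrintWriter(args[1])) {
                String line;
                while ((line = in.readLine()) != null) {
                    Matcher m = LINE.matcher(line);
                    if (m.matches()) {
                        // Tab-delimited output that a Hive external table can read directly.
                        out.println(m.group(1) + "\t" + m.group(2) + "\t" + m.group(3));
                    }
                }
            }
        }
    }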
PROJECT#4
Title : EDMS
Version : 6.5, 7.0
Domain : Database
Tools used : HDFS, Hive, HBase, Sqoop, Solr server, JDK 1.7, JSP, Servlet, Oracle
Role : Hadoop Developer
Description :
This project is an Enterprise Database Management System (EDMS) that deals with all types of data, databases, and their management. As a Hadoop Developer, my role is to extend the existing database-management modules with Big Data functionality. This involves migrating data from any source database to the Hadoop ecosystem according to client requirements: transferring data from a traditional RDBMS to HDFS, Hive, Pig, or HBase, and moving formal files such as CSV and XML into the Big Data stack as well (a migration sketch follows). The product provides data migration, data security, and data archiving functionality.
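A sketch of one such table migration driven from Java through Sqoop's programmatic entry point (available in Sqoop 1.4.x). The Oracle connection string, credentials, table, and target directory are all assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.sqoop.Sqoop;

    public class OracleToHdfsImport {
        public static void main(String[] args) {
            // All connection details below are placeholders, not real credentials.
            String[] sqoopArgs = {
                "import",
                "--connect", "jdbc:oracle:thin:@//dbhost:1521/ORCL",
                "--username", "app_user",
                "--password", "secret",
                "--table", "CUSTOMERS",
                "--target-dir", "/data/edms/customers",
                "--num-mappers", "4"
            };
            // runTool parses the arguments and runs the import as a MapReduce job.
            System.exit(Sqoop.runTool(sqoopArgs, new Configuration()));
        }
    }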
Roles and Responsibilities:
Basic knowledge of cluster setup
Integrated the HDFS, Hive, HBase, Sqoop, and Solr modules into the EDMS product
Added validation and migration functionality to the EDMS source code in Java
Well versed in HDFS CLI commands
Wrote Java code to migrate tables from traditional databases to the Big Data stack
Wrote code to read and create different file formats such as ORC and Parquet (see the sketch after this list)
Wrote Java code to transfer files from the local system to the Hadoop platform
Performed manual testing of the tool's migration performance
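A combined sketch of the last two items above: copying a local file onto HDFS with the FileSystem API, then writing a small ORC file with the orc-core writer. The paths and the two-column schema are assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
    import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
    import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
    import org.apache.orc.OrcFile;
    import org.apache.orc.TypeDescription;
    import org.apache.orc.Writer;

    public class HdfsAndOrcSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // 1. Copy a local file onto HDFS; both paths are placeholders.
            FileSystem fs = FileSystem.get(conf);
            fs.copyFromLocalFile(new Path("/tmp/export.csv"),
                                 new Path("/user/edms/staging/export.csv"));

            // 2. Write a small ORC file with a hypothetical two-column schema.
            TypeDescription schema = TypeDescription.fromString("struct<id:bigint,name:string>");
            Writer writer = OrcFile.createWriter(new Path("/user/edms/staging/sample.orc"),
                    OrcFile.writerOptions(conf).setSchema(schema));
            VectorizedRowBatch batch = schema.createRowBatch();
            LongColumnVector id = (LongColumnVector) batch.cols[0];
            BytesColumnVector name = (BytesColumnVector) batch.cols[1];
            for (int r = 0; r < 5; r++) {
                int row = batch.size++;
                id.vector[row] = r;
                name.setVal(row, ("row-" + r).getBytes("UTF-8"));
            }
            writer.addRowBatch(batch);
            writer.close();
        }
    }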
DECLARATION:
I hereby declare that the above-furnished information is true to the best of my knowledge and belief.
Place :
Date : (E.ARUNA)