
Data Management

Location: Hyderabad, Telangana, India
Salary: 20% hike on current salary
Posted: February 10, 2016


P KRISHNA KISHORE

Mobile No: +91-996******* E-Mail: actg5a@r.postjobfree.com

Professional Summary:

5+ years of overall IT experience in application development with Java and Big Data (Hadoop).

Exclusive experience in Hadoop and its components such as HDFS, MapReduce, Hive, Apache Pig, Sqoop, Flume, and HBase.

Good working knowledge of MapReduce and Hive.

Involved in writing Pig scripts to reduce job execution time.

Executed projects using Java/J2EE technologies such as Core Java, Servlets, JSP, and Hibernate.

Experience in developing and deploying web applications on Tomcat 7.0.

Excellent communication, interpersonal, and analytical skills, with a strong ability to perform both as part of a team and individually.

Exceptional ability to learn new concepts.

Hard working and enthusiastic.

Professional Experience:

Working as a Hadoop Developer at Phore Technologies, Hyderabad, India since September 2013.

Worked as a Software Developer at Vedanta Aluminium Ltd, Lanjigarh, Odisha for 3 years.

Educational Qualifications:

Pursuing MCA (Master of Computer Applications) at J.N.T. University.

B.Sc. (Computer Science) from Osmania University, Hyderabad, in 2010.

Skill Set:

Languages : Java, JavaScript, HTML, MapReduce, Sqoop, Pig, Hive, HBase

J2EE Technologies : JSP, Servlets, and JDBC

Servers : IBM WebSphere Application Server 7.0, WebLogic, and Apache Tomcat

Frameworks : Hibernate and Hadoop

Java IDEs : Eclipse

Databases : Oracle (DDL, DML, DCL, and PL/SQL)

Projects Profile:

Project 1:

Title : Re-Platforming to Hadoop

Client : Target, Minneapolis, Minnesota, USA

Environment : Hadoop, Java, Oracle

Duration : Dec 2014 to date

Role : Hadoop Developer

Description:

The purpose of the project is to store terabytes of log information generated by the e-commerce website and extract meaningful information out of it. The solution is based on the open-source Big Data software Hadoop. The data is stored in the Hadoop file system and processed using MapReduce jobs, which in turn include getting the raw HTML data from the websites, processing the HTML to obtain product and pricing information, extracting various reports from the product pricing information, and exporting the information for further processing.

This project mainly involves re-platforming the existing system, which runs on Web Harvest (a third-party JAR) and an Oracle DB, to Hadoop, which can process very large data sets (terabytes and petabytes of data) and thereby meet the client's requirements amid increasing competition from other retailers.
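For illustration only, the following is a minimal sketch of the kind of MapReduce job described above. It assumes a simple tab-separated record layout (productId, retailer, price); the class names, field layout, and paths are hypothetical and not taken from the actual project code.

// Minimal sketch of a MapReduce job over tab-separated crawl records of the
// form: productId <TAB> retailer <TAB> price. All names are illustrative.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MinPriceJob {

    // Emits (productId, price) for every well-formed input line.
    public static class PriceMapper
            extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\t");
            if (fields.length < 3) {
                return; // skip malformed records
            }
            try {
                double price = Double.parseDouble(fields[2]);
                context.write(new Text(fields[0]), new DoubleWritable(price));
            } catch (NumberFormatException ignored) {
                // skip records with an unparsable price
            }
        }
    }

    // Keeps the minimum price seen for each productId.
    public static class MinPriceReducer
            extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text key, Iterable<DoubleWritable> values, Context context)
                throws IOException, InterruptedException {
            double min = Double.MAX_VALUE;
            for (DoubleWritable v : values) {
                min = Math.min(min, v.get());
            }
            context.write(key, new DoubleWritable(min));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "min-price");
        job.setJarByClass(MinPriceJob.class);
        job.setMapperClass(PriceMapper.class);
        job.setReducerClass(MinPriceReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}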

Roles and Responsibilities:

Moved all crawl data flat files generated from various retailers to HDFS for further processing.

Involved in transferring files from the OLTP server to the Hadoop file system and in writing queries with HiveQL.

Used Sqoop to import and export data between the Oracle database and HDFS.

Installed and configured Hive, processed and analyzed data from Hive tables using HiveQL, and wrote Hive UDFs (a small illustrative UDF sketch follows this list).

Involved in creating Hive tables, loading them with data, performing map joins and arithmetic operations, and writing Hive queries that run internally as MapReduce jobs.

Wrote the results to output tables in HDFS along with the final reports.

Participated in solution and code review meetings within the team.

Fully involved in the requirement analysis phase.
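The sketch below illustrates the kind of Hive UDF mentioned above, assuming the classic org.apache.hadoop.hive.ql.exec.UDF API. The class name and normalization logic are hypothetical, not the project's actual UDF.

// Hypothetical Hive UDF that normalizes a raw price string such as "$1,299.00"
// to a plain numeric string.
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public class NormalizePriceUDF extends UDF {
    public Text evaluate(Text rawPrice) {
        if (rawPrice == null) {
            return null;
        }
        // Strip currency symbols, commas, and whitespace.
        String cleaned = rawPrice.toString().replaceAll("[^0-9.]", "");
        return cleaned.isEmpty() ? null : new Text(cleaned);
    }
}

In practice, the compiled class would be packaged into a JAR, added to the Hive session, and registered with CREATE TEMPORARY FUNCTION before being used in HiveQL queries.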

Project 2:

Title : Re-hosting of Web Intelligence

Client : Bank of America, Charlotte, NC, USA

Role : Hadoop Developer

Duration : Dec 2013 to Nov 2014

Platforms & Skills : HDFS, Map Reduce, Apache PIG, Hive and Sqoop

Description:

Analyze prices of items on e-commerce websites such as Amazon, Walmart, and Target using web crawling. Product and model data is pulled from the e-commerce websites to the local file system and shipped to HDFS for higher-level processing using Pig. Invalid data is dumped to the local file system and purged through cron jobs. The goal is to find the best price offered for a given product and model across all e-commerce websites for the given feed. Valid and processed data is exported to an Oracle database.

Responsibilities and Contributions:

Set up the Hadoop cluster on virtual machines.

Set up passwordless SSH for Hadoop.

Set up Pig.

Copied the web-crawled data into an HDFS location.

Wrote Pig scripts that take log files as input, parse the logs, and structure them in a tabular format for effective querying on the log data (a sketch of this flow, driven from Java, follows this list).

Debugged and performance-optimized the code.

Exported the data to Oracle using Sqoop.
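The project used standalone Pig scripts; to keep the examples in Java, the sketch below drives equivalent Pig Latin through Pig's embedded PigServer API. The relation names, field layout (ts, product, price), and HDFS paths are hypothetical assumptions, not the actual project artifacts.

// Sketch of driving log-parsing Pig logic from Java via the embedded PigServer API.
import java.io.IOException;

import org.apache.pig.PigServer;

public class LogParsePigDriver {
    public static void main(String[] args) throws IOException {
        PigServer pig = new PigServer("mapreduce"); // use "local" for testing
        // Load raw crawl logs from HDFS and project them into a tabular form.
        pig.registerQuery(
            "raw = LOAD '/data/crawl/logs' "
          + "AS (ts:chararray, product:chararray, price:double);");
        // Keep only records with a usable price and group per product.
        pig.registerQuery("valid = FILTER raw BY price IS NOT NULL;");
        pig.registerQuery("byProduct = GROUP valid BY product;");
        pig.registerQuery(
            "bestPrice = FOREACH byProduct GENERATE group AS product, MIN(valid.price) AS min_price;");
        // Write the structured result back to HDFS for the downstream Sqoop export.
        pig.store("bestPrice", "/data/crawl/best_price");
    }
}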

Project 3:

Title : eForecaster

Client : Birlasoft, India

Duration : May 2012 to Aug 2013

Environment : Java, Oracle, Eclipse (IDE), Tomcat (Server).

Description:

The project automates the timesheet submission process for Birlasoft consultants working on various client projects across multiple locations and generates reports on the same. The application also enables employees to apply for leave before creating timesheets. While creating a timesheet, leave days and holidays are filled in automatically to prevent employees from entering inappropriate data; all other days are marked as present.
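As a rough illustration of the auto-fill rule described above, the sketch below pre-fills a week of timesheet entries. The class, method, and status names are hypothetical and only stand in for the application's actual logic.

// Hypothetical sketch of the timesheet auto-fill rule: leave days and holidays
// are pre-filled so the employee cannot enter conflicting data; all remaining
// days default to "Present".
import java.time.LocalDate;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class TimesheetAutoFill {

    public static Map<LocalDate, String> fillWeek(LocalDate weekStart,
                                                  Set<LocalDate> approvedLeave,
                                                  Set<LocalDate> holidays) {
        Map<LocalDate, String> statusByDay = new LinkedHashMap<>();
        for (int i = 0; i < 7; i++) {
            LocalDate day = weekStart.plusDays(i);
            if (holidays.contains(day)) {
                statusByDay.put(day, "Holiday");   // pre-filled, not editable
            } else if (approvedLeave.contains(day)) {
                statusByDay.put(day, "Leave");     // pre-filled, not editable
            } else {
                statusByDay.put(day, "Present");   // default for all other days
            }
        }
        return statusByDay;
    }
}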

Responsibilities:

Designed and implemented the code.

Responsible for creating interactive web pages using JSP and Servlets.

Implemented client side validations using JavaScript.

Involved in writing database queries (see the JDBC sketch after this list).
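For illustration, the sketch below shows the kind of database query code involved, using plain JDBC with a PreparedStatement. The table and column names (timesheet, emp_id, week_start), connection URL, and credentials are hypothetical placeholders, not the project's actual schema.

// Hypothetical JDBC lookup of a consultant's timesheet entries for a given week.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TimesheetDao {
    private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder
    private static final String QUERY =
        "SELECT work_date, hours, status FROM timesheet WHERE emp_id = ? AND week_start = ?";

    public void printWeek(String empId, java.sql.Date weekStart) throws SQLException {
        try (Connection con = DriverManager.getConnection(URL, "app_user", "app_password");
             PreparedStatement ps = con.prepareStatement(QUERY)) {
            ps.setString(1, empId);
            ps.setDate(2, weekStart);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s %s %s%n",
                        rs.getDate("work_date"), rs.getBigDecimal("hours"), rs.getString("status"));
                }
            }
        }
    }
}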

Project 4:

Title : IBSAS (Iris Based Security Automation System)

Client : Worth Technologies

Duration : Oct 2010 to April 2012

Technologies : Core Java, JDBC, Servlets and JSP

Application Server : WebLogic Server

IDE : Eclipse

Database : Oracle

Role : Developer

Description:

The generic components are designed to be reusable in any similar project. Under Generic Component Development (GCD), four generic components were identified and developed: List Population, Document Management, Mail Management, and User Management.

The List Population component acts as a mediator between the data model and the user interface; it is a reusable component that can be used in any project. The Document Management component handles activities such as uploading a document, replacing an existing document, deleting a document, and obtaining the hyperlink of a document in any application.

Mail Management deals with the management of mail (both single and bulk mail) and can also be reused across applications. User Management handles activities such as creating users under a group, adding a group to another group, and moving a user from one group to another.
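To make the component idea concrete, the sketch below shows one possible Java interface for the Document Management component; the method names simply mirror the activities listed above and are not the project's actual API.

// Hypothetical reusable interface for the Document Management generic component.
import java.io.InputStream;

public interface DocumentManager {
    /** Uploads a new document and returns its identifier. */
    String upload(String name, InputStream content);

    /** Replaces an existing document's content, keeping the same identifier. */
    void replace(String documentId, InputStream newContent);

    /** Deletes a document. */
    void delete(String documentId);

    /** Returns a hyperlink (URL) at which the document can be accessed. */
    String getHyperlink(String documentId);
}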

Roles and Responsibilities:

Performed analysis of the application and derived requirements.

Worked on application development and programming.

Worked on several JSPs using JavaScript and div-based layouts for handling business flows.

Worked on CSS and HTML for designing new static pages.

Implemented JDBC programming for server-side database connectivity.

Abstracted several classes using interfaces for the multimodal biometric interaction.

Implemented servlets for business logic and JSP pages for dynamic content presentation (a minimal servlet sketch follows this list).

Worked on developing the Registration module for this application.

Performed Unit testing for this application.
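The sketch below illustrates the servlet-plus-JSP split mentioned above: a servlet carries out the business logic and forwards to a JSP for presentation. The servlet name, request parameters, and JSP path are hypothetical, not the project's actual classes.

// Hypothetical servlet that handles a user-registration form post and forwards
// to a JSP for presentation.
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RegistrationServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String userName = request.getParameter("userName");
        String group = request.getParameter("group");

        // Business logic would go here, e.g. persisting the user via JDBC
        // and assigning the user to the requested group.
        request.setAttribute("message", "User " + userName + " added to group " + group);

        // Hand presentation off to a JSP page.
        request.getRequestDispatcher("/registrationResult.jsp").forward(request, response);
    }
}

The servlet would be mapped to a URL pattern in web.xml (or via annotations), with the JSP responsible only for rendering the request attributes.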

Thank you, P. KRISHNA KISHORE


