
Developer Data

Location:
Edison, NJ
Salary:
70000
Posted:
July 02, 2016

Contact this candidate

Resume:

Sartaj Singh Oberoi

* ******** ****,

Edison, NJ 08820

908-***-****

acvjm4@r.postjobfree.com

PROFILE SUMMARY

●2 years of experience in Big Data (Hadoop), with hands-on experience in MapReduce, HDFS, Pig, Hive, HBase, Sqoop, etc.

●Completed 5 Hadoop-related projects.

●A Cloudera Certified Hadoop Developer (License: 100-014-640).

●In-depth understanding of Object-Oriented Programming (Core Java, C++ basics).

●Experience in front-end web development and design using HTML, CSS, and JavaScript.

●Experience working in high-pressure team environments with intense development activity on multiple projects simultaneously.

●Excellent communication and interpersonal skills.

TECHNICAL COMPETENCIES

Programming Languages : Java, C++, PHP

Big Data and its Tools : Hadoop, HDFS, MapReduce, Pig, Hive, HBase, Sqoop.

Databases : MySQL, NoSQL (HBase).

Servers : Apache Tomcat.

Web Development : HTML, CSS, JavaScript, jQuery.

IDEs : Eclipse, Sublime Text.

OS : Windows, Linux (Ubuntu 14.04, 15.10).

EDUCATION

August 2012 - July 2016

GGSIPU, New Delhi, INDIA - Bachelor of Technology

Major: Computer Science and Engineering

EXPERIENCE

GTBIT, New Delhi - Hadoop Developer (Lead Developer)

Jan 2016 – May 2016

Project: Stock Analysis.

Description: This project was developed as a "fortune teller" for stocks, combining machine learning with Hadoop. Custom-written algorithms gave the application a sense of how to determine the output, taking into account multiple aspects such as stock value, market environment, tenders, and historical data from past years.

Responsibilities:

●I worked with the historical data sets and gathered new information on any tenders passed or deals placed. We started with 5 companies and gradually expanded coverage.

●I worked on ingesting the data into the system, using HDFS, Hive, and MapReduce.

●My mentors and I also designed the supporting algorithm so that it had to be built only once but could be deployed on multiple data sets.

●The algorithm took around 70-80 days to deploy, while the data was stored in HBase.

●We achieved a success rate of roughly 68%, which was higher than expected. The rate was limited because the application was still in its early stages; accuracy was expected to rise as the data grew.

Stock IO Technologies, New Delhi - Hadoop Developer (Sub-team developer)

Jan 2016 - March 2016

Project: Historical Stock Data Processing.

Description: This project aimed to gather the historical data of multiple stocks over the last 15 years so that a graph of each stock's performance over the years could be prepared. Over 250 companies were included when the project started; for each company, the yearly averages of its high, low, and closing values were computed and plotted across the 15-year span.

Responsibilities:

●I was responsible for retrieving the stock data of over 80 companies.

●I was in charge of putting the data into HBase and using it to get the values required.

●Four other members and I automated this process with a script written to simplify the work. The script included Sqoop commands (to load the data into the databases) and other queries that retrieved the values for all the required years.

●This data was further handled by another team who plotted it on a graph using Tableau.
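The per-company yearly averaging described above can be sketched in plain Java. This is only an illustrative sketch of the computation, not the actual production pipeline (which used Sqoop and HBase); the record and class names are hypothetical.

```java
import java.util.*;

// Illustrative sketch: per-year averages of a stock's high, low, and
// closing values, as described in the project. Names are hypothetical.
public class YearlyStockAverages {

    // One daily quote for a single company.
    public record Quote(int year, double high, double low, double close) {}

    // Returns, per year, the averages {avgHigh, avgLow, avgClose}.
    public static Map<Integer, double[]> average(List<Quote> quotes) {
        // year -> running {highSum, lowSum, closeSum, count}
        Map<Integer, double[]> sums = new TreeMap<>();
        for (Quote q : quotes) {
            double[] s = sums.computeIfAbsent(q.year(), y -> new double[4]);
            s[0] += q.high(); s[1] += q.low(); s[2] += q.close(); s[3]++;
        }
        Map<Integer, double[]> avgs = new TreeMap<>();
        sums.forEach((year, s) ->
            avgs.put(year, new double[] { s[0] / s[3], s[1] / s[3], s[2] / s[3] }));
        return avgs;
    }

    public static void main(String[] args) {
        List<Quote> quotes = List.of(
            new Quote(2015, 110, 90, 100),
            new Quote(2015, 130, 110, 120),
            new Quote(2016, 150, 130, 140));
        // 2015 averages: high 120.0, low 100.0, close 110.0
        System.out.println(Arrays.toString(average(quotes).get(2015)));
    }
}
```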

EduPristine, New Delhi – Jr. Hadoop Developer

Feb 2015 - Jan 2016

Project: Area Mapper

Description: Area Mapper was a project initiated to catalog various places (banks, schools, hospitals, restaurants, etc.) in an area. It was developed primarily for Maharashtra and Karnataka. We generated reports from the data, dividing it by area, PIN code, or even by the name of a bank or other specified entity.

Responsibilities:

●I was part of the original team that decided the structure of the system and how it should work.

●I traveled frequently, as the team was required to map many places along with their ratings and the services provided.

●I helped develop the Pig script, keeping it updated and running it every second day.

●Loaded over 270 GB of data from a relational database (MySQL) into HDFS, HBase, etc.

●Shared reports with my team at the end of each week.

EduPristine, New Delhi- Jr. Hadoop Developer

Jun 2014 - Jan 2015

Project: Info Feed processing.

Description: This project, started in June 2014, collected information on the various employees in the company who interacted with outside customers. The main purpose was to gather the contact numbers of all customers an employee had contacted or been contacted by, along with the messages sent by the employees.

Responsibilities:

●I was responsible for writing custom Mapper and Reducer jobs to retrieve data.

●The data was kept in HBase, from which the Mappers read their records.

●Also helped build a JSP site that fetched this data for each employee ID.

●I also worked on Front end web development during this project.

●Processed almost 0.6 GB of employee data daily.
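The shape of the custom Mapper/Reducer logic described above can be sketched in dependency-free Java: map each call or message record to an (employee ID, customer number) pair, then reduce by grouping per employee. The real jobs ran on Hadoop MapReduce over HBase; the field names here are hypothetical, for illustration only.

```java
import java.util.*;
import java.util.stream.*;

// Dependency-free sketch of the map/reduce shape: emit
// (employeeId, customerNumber) pairs, then group into the set of
// distinct customer contacts per employee.
public class ContactAggregation {

    public record CallRecord(String employeeId, String customerNumber) {}

    public static Map<String, Set<String>> contactsPerEmployee(List<CallRecord> records) {
        // "Map" phase: project each record to its (key, value) pair.
        // "Reduce" phase: group by employeeId, collecting distinct numbers.
        return records.stream().collect(Collectors.groupingBy(
            CallRecord::employeeId,
            TreeMap::new,
            Collectors.mapping(CallRecord::customerNumber,
                               Collectors.toCollection(TreeSet::new))));
    }

    public static void main(String[] args) {
        List<CallRecord> records = List.of(
            new CallRecord("E1", "555-0100"),
            new CallRecord("E1", "555-0101"),
            new CallRecord("E1", "555-0100"),   // duplicate contact, deduplicated
            new CallRecord("E2", "555-0102"));
        // {E1=[555-0100, 555-0101], E2=[555-0102]}
        System.out.println(contactsPerEmployee(records));
    }
}
```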

GTBIT, New Delhi - Hadoop Developer (Lead Developer)

Jan 2016 – May 2016

Project: College Result Progress.

Description: This project was made for our college to compare the results of each batch/branch against the performance of the previous one. The generated report shows how the quality of education has affected the grades of a batch and, in turn, the placement of students in companies.

Responsibilities:

●I collected data for the past 3 years (4th-year results only) from the University’s website.

●I took the initiative to create a system that can ingest large amounts of student data for my college, so that performance could be compared over a longer period of time.

●Our main goal was to show the change in results (positive or negative).

●Other developers on my team and I built a system that retrieved the grade values of various batches, compared them with the previous year’s results, and plotted a graph.

●The results are yet to be evaluated.
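The batch-over-batch comparison described above reduces to computing the signed change between consecutive batch averages. A minimal sketch, with made-up illustrative values:

```java
// Minimal sketch of the positive/negative result-change computation:
// the signed difference between each batch's average grade and the
// previous batch's. Input values are made up for illustration.
public class ResultProgress {

    // change[i] > 0 means batch i+1 improved over batch i.
    public static double[] batchOverBatchChange(double[] batchAverages) {
        double[] change = new double[batchAverages.length - 1];
        for (int i = 1; i < batchAverages.length; i++) {
            change[i - 1] = batchAverages[i] - batchAverages[i - 1];
        }
        return change;
    }

    public static void main(String[] args) {
        double[] averages = { 70, 74, 70 };              // average marks per batch
        double[] delta = batchOverBatchChange(averages);
        // [4.0, -4.0]: an improvement, then a decline
        System.out.println(java.util.Arrays.toString(delta));
    }
}
```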


