Manoj Tanga
Email: **********@*****.***
Phone: +1-217-***-****
Summary:
Experienced Hadoop developer with a strong foundation in distributed storage systems such as HDFS and HBase in CDH and HDP environments. Excellent understanding of the complexities associated with big data, with experience developing modules and code in MapReduce, Sqoop, Hive, and Pig to address those complexities.
A quick learner, passionate about working with Hadoop and related technologies.
Education:
Master of Science in Computer Science December 2016
University of Illinois at Springfield, Springfield, IL GPA: 3.58/4.00
Bachelor of Science in Electronics and Communication Engineering April 2015
Jawaharlal Nehru Technological University, Hyderabad, India GPA: 3.50/4.00
Technical Skills:
Big Data Ecosystems: Apache Hadoop, HDFS, MR1, YARN (MR2), Hive, HBase, Sqoop, Flume, Oozie, Zookeeper, Pig, Apache Kafka.
Hadoop Distributions: Cloudera (CDH), Hortonworks (HDP).
Programming Languages: Java, C, SQL, UNIX Shell Scripting, Assembly Language (8085/8086).
Databases: Oracle, MySQL, NoSQL.
IDE and Tools: Eclipse, Tableau.
Operating Systems: Windows, Linux, UNIX.
Cloud: Microsoft Azure, Amazon Web Services (AWS).
Professional Experience:
University of Illinois, Springfield, IL August 2016 to December 2016
Graduate Assistant
Responsibilities:
Imported data from Oracle and other RDBMS sources into HDFS using Sqoop and processed it.
Analyzed large data sets by running Hive queries.
Helped design efficient Hadoop data flows to reduce processing time.
Gained hands-on knowledge of running Hadoop distributions in the cloud on both Azure and AWS.
Created Hive external tables, added partitions, and tuned Hive query performance.
Loaded data from HDFS into the Pig shell, wrote Pig Latin scripts for data transformations, and stored the results back to HDFS.
Used Oozie workflow engine to manage interdependent Hadoop jobs and to automate several types of Hadoop jobs.
Worked hands-on with ETL processes and developed Hive scripts for extracting, transforming, and loading data into other data warehouses.
Wrote MapReduce Java programs to analyze log data from large-scale data sets.
Scheduled automated tasks with Oozie for loading data into HDFS.
Assisted in exporting analyzed data to relational databases using Sqoop.
Involved in various performance tuning activities.
Continuously monitored and managed the Hadoop cluster through its web UI.