SEGUN ADELANKE
Email: *************@*****.***
Contact no: 713-***-****
Objective:
Seeking a challenging solution-development position with a strong emphasis on Hadoop and Java technologies, where I can apply my current skill set, ability to learn quickly, 2 years of experience, and desire to define and create the best possible solutions, becoming an invaluable asset to the company.
Professional Summary:
2 years of experience in Java, Hadoop development, and Hadoop administration.
Built real-time Big Data solutions using HBase, handling billions of records.
Implemented a Hadoop-based data warehouse, integrating Hadoop with Enterprise Data Warehouse systems.
Built scalable, cost-effective solutions using Cloud technologies
Provisioned, installed, configured, monitored, and maintained Hadoop/HDFS, MapReduce, HBase, Pig, Sqoop, Amazon Elastic MapReduce (EMR), Accumulo, YARN, Flume, Oozie, Hive, Cassandra, and Spark.
Provided hardware architectural guidance, planned and estimated cluster capacity, and created roadmaps for Hadoop cluster deployment.
Added new nodes to existing clusters and recovered from NameNode failures.
Decommissioned and commissioned nodes on a running cluster.
Installed various Hadoop ecosystem components and Hadoop daemons.
Recovered from node failures and troubleshot common Hadoop cluster issues.
Scripted Hadoop package installation and configuration to support fully automated deployments.
Supported Hadoop developers and assisted in optimizing MapReduce jobs, Pig Latin scripts, Hive scripts, and HBase ingest as required.
Areas of Expertise:
Big Data Ecosystems: MapReduce, HDFS, HBase, Hive, Pig, Sqoop, Oozie, Flume
Programming Languages: Java, R, Pig Latin, HiveQL, SQL, C, C++
Scripting Languages: JSP & Servlets, JavaScript, and Bash
Databases: NoSQL (HBase, Cassandra), MySQL
Tools: Eclipse, NetBeans, Cloudera
Platforms: Windows (2000/XP), Ubuntu, CentOS
Work Experience:
Texas Southern University, Houston, TX: Office of Information Technology (Storage, Hadoop Appliance) May 2014 - Present
Roles: Principal Hadoop Developer
Project Details:
Environment: Hadoop, CentOS, HDFS, Hive, Sqoop, Flume, Pig, and HBase
Role: Hadoop Developer
Team Size: 3
Project Description:
Worked with the Big Data Storage Team (acquired from Parascale) that was building the TSU Big Data Storage appliance.
Virtualized Hadoop in Linux KVM to provide a safer, scalable analytics sandbox on the appliance.
Developed an HDFS plugin to interface with a proprietary file system. Advised file system engineers on Hadoop-specific optimizations.
Implemented component-based tests and benchmarks.
Worked with the support team on Hadoop performance tuning, configuration, optimization, and job processing.
Coordinated with the technical team on Hadoop installation and production deployment of software applications for maintenance.
Formulated procedures for planning and executing system upgrades for all existing Hadoop clusters.
Academic Qualification:
M.Sc. (Hons), Texas Southern University, Houston, Texas, June 2016
B. Agr. Tech. (Hons), Federal University of Technology, Akure, Nigeria, Aug 2008