
Mohammed Zeeshan Ali
Hadoop Administrator
Chicago, Illinois | 847-***-**** | ade34m@r.postjobfree.com | zeeshanali.488 | www.linkedin.com/in/mohammed-zeeshan-ali-2816784a

PROFESSIONAL SUMMARY:

Over 8 years of experience with an emphasis on Big Data technologies, covering administration, development, engineering, and design of Java-based enterprise applications, with extensive knowledge of multiple scripting and programming languages and excellent problem-solving skills.

CORE SUMMARY OF QUALIFICATIONS:

Excellent understanding, knowledge, and hands-on experience with the major components of the Hadoop ecosystem (via Cloudera Manager and the Ambari UI), including Big Data distributed storage and processing tools such as Apache Hadoop, HDFS, MapReduce, YARN, Hive, HBase, Spark, ZooKeeper, Oozie, Sqoop, and Flume.

Administration, management, monitoring, debugging, capacity planning, and performance tuning of Hadoop clusters using the Cloudera, MapR, and Hortonworks distributions.

Experience in designing, installing, and configuring the complete Hadoop ecosystem.

Added services such as Pig, Hive, HBase, Flume, Sqoop, Kafka, Oozie, and ZooKeeper.

Experience with Hadoop query interfaces such as Hue, including setting up workflow jobs.

Experience in managing cluster resources by implementing the Fair and Capacity Schedulers.

Monitored Hadoop cluster metrics through Ambari.

Used Icinga and Ganglia for additional, more comprehensive cluster monitoring.

Enhanced cluster reliability by configuring High Availability (HA) for the NameNode and ZooKeeper.
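
A minimal sketch of checking and exercising NameNode HA from the command line (the NameNode service IDs nn1 and nn2 are hypothetical):

  # Check which NameNode is currently active
  hdfs haadmin -getServiceState nn1
  hdfs haadmin -getServiceState nn2

  # Manually fail over from nn1 to nn2 (e.g., before maintenance)
  hdfs haadmin -failover nn1 nn2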

Performed log-file debugging to produce root cause analyses for various failures.

Ingested log data from various sources into HDFS using Flume and Kafka.

Worked on NoSQL databases including HBase, Cassandra and MongoDB.

Expertise in Hive performance tuning.

Experience in importing/exporting data with Sqoop between HDFS and RDBMS.
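
An illustrative sketch of a typical Sqoop round trip of this kind (the JDBC URL, table names, and HDFS paths are hypothetical):

  # Import a relational table into HDFS with 4 parallel mappers
  sqoop import \
    --connect jdbc:mysql://dbhost:3306/sales \
    --username etl_user -P \
    --table orders \
    --target-dir /data/landing/orders \
    --num-mappers 4

  # Export processed results back to the RDBMS
  sqoop export \
    --connect jdbc:mysql://dbhost:3306/sales \
    --username etl_user -P \
    --table orders_summary \
    --export-dir /data/out/orders_summary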

Experience in developing ETL processes using Hive, Sqoop, and the MapReduce framework.

Familiarity and experience with data warehousing and ETL tools like Talend.

Experience in understanding the security requirements for Hadoop and integrating with Kerberos authentication infrastructure.

Experienced in configuring Apache Ranger security policies.

Knowledge of Cassandra read and write paths and internal architecture.

EDUCATION:

Master of Science, Computer Science Engineering
Chicago State University, Illinois, USA (2016-2018). Cumulative GPA: 4.00

Bachelor of Technology, Electronics and Communication Engineering
JNTU, Telangana, India (2007-2011). Cumulative GPA: 2.95

TECHNICAL SKILLS:

Hadoop Ecosystem: Hadoop, HDFS, YARN, MapReduce, Hive, Pig, ZooKeeper, Sqoop, Oozie, Spark, Kafka, Storm, Kerberos, Flume.

Web Technologies: HTML, XHTML, XML, XSL, CSS, JavaScript

Server-Side Scripting: Shell, Perl, and Python.

Relational Databases: Oracle, Microsoft SQL Server, MySQL, DB2.

Programming Languages: C, Java, Python, SQL

NoSQL Databases: HBase, MongoDB, Cassandra.

Hardware: Cisco M3 servers, Dell servers, HP ProLiant.

SDLC Methodology: Agile (SCRUM), Waterfall.

Operating Systems: Windows, CentOS, UNIX, Linux (RHEL 6.9 & 7.6).

PROFESSIONAL EXPERIENCE:

Hadoop Administrator, Bank of America, Chicago, Illinois (US), Sep 2018 - Present

Responsibilities:

Installed, configured and monitored Hadoop Clusters using Cloudera.

Worked on setting up Hadoop cluster for the Production Environment.

Analyzed services and made recommendations to optimize performance of cluster and its services.

Upgraded the Cloudera distribution from CDH 5.11 to CDH 5.12.

Added services such as Hive, Spark, Sqoop, Flume, Oozie, and ZooKeeper.

Configured High availability for the Spark History Server.

Commissioned and decommissioned DataNodes based on proactive monitoring.
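
A minimal sketch of a graceful decommission, assuming the exclude file configured via dfs.hosts.exclude lives at /etc/hadoop/conf/dfs.exclude (the hostname is hypothetical):

  # List the DataNode in the exclude file, then tell the NameNode to re-read it
  echo "datanode07.example.com" >> /etc/hadoop/conf/dfs.exclude
  hdfs dfsadmin -refreshNodes

  # Watch until the node reports "Decommissioned" before stopping it
  hdfs dfsadmin -report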

Configured YARN queues using Fair Scheduler to assign optimum resources to each team.
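
One way such queues can be expressed is through the Fair Scheduler allocation file (fair-scheduler.xml); the queue names, weights, and app limits below are illustrative, not the actual production configuration:

  <allocations>
    <!-- analytics gets twice the share of spare capacity -->
    <queue name="analytics">
      <weight>2.0</weight>
      <maxRunningApps>20</maxRunningApps>
    </queue>
    <queue name="etl">
      <weight>1.0</weight>
    </queue>
  </allocations>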

Assigned Sentry authorization policies on Hive.

Loaded log data into HDFS using Flume and Kafka.
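
For illustration, a minimal single-agent Flume configuration of the kind used for such ingestion (the agent name, log path, and HDFS path are hypothetical):

  a1.sources = r1
  a1.channels = c1
  a1.sinks = k1

  # Tail an application log as the source
  a1.sources.r1.type = exec
  a1.sources.r1.command = tail -F /var/log/app/app.log

  # Write events into date-partitioned HDFS directories
  a1.sinks.k1.type = hdfs
  a1.sinks.k1.hdfs.path = hdfs://nameservice1/data/logs/%Y-%m-%d
  a1.sinks.k1.hdfs.fileType = DataStream
  a1.sinks.k1.hdfs.useLocalTimeStamp = true

  a1.channels.c1.type = memory
  a1.sources.r1.channels = c1
  a1.sinks.k1.channel = c1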

Wrote CBO scripts to ingest Hive metrics, recording and graphically representing daily activity trends.

Worked on Hadoop clusters capacity planning and rebalancing.
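
Rebalancing of this kind is typically driven by the HDFS balancer; a minimal sketch (the 10% threshold is illustrative):

  # Move blocks until every DataNode is within 10% of the cluster's mean utilization
  hdfs balancer -threshold 10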

Monitored and Debugged Hadoop jobs/Applications running in production.

Implemented disaster recovery for the cluster, with two replication schedules running from the primary cluster to the DR cluster.

Troubleshot, managed, and patched the cluster.

Managed log files while debugging and producing root cause analyses (RCA).

Used Ganglia (Medusa) for cluster monitoring and Nagios for component-level monitoring.

Provided ad-hoc queries and data metrics to the Business Users using Hive.

Created partitioned external tables in Hive.
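
A minimal sketch of such a table, issued through Beeline (the JDBC URL, columns, and location are hypothetical):

  beeline -u "jdbc:hive2://hiveserver:10000" -e "
    CREATE EXTERNAL TABLE IF NOT EXISTS web_logs (
      ip STRING, url STRING, status INT)
    PARTITIONED BY (dt STRING)
    STORED AS PARQUET
    LOCATION '/data/raw/web_logs';
    ALTER TABLE web_logs ADD IF NOT EXISTS PARTITION (dt='2019-01-01');"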

Set up quotas and created ACL groups, adding members as needed.
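
A minimal sketch of setting HDFS quotas and a group ACL (the paths, limits, and group name are hypothetical):

  # Cap a project directory at 1M names and 10 TB of raw space
  hdfs dfsadmin -setQuota 1000000 /user/analytics
  hdfs dfsadmin -setSpaceQuota 10t /user/analytics

  # Grant a group read/write/execute via an HDFS ACL, then verify
  hdfs dfs -setfacl -m group:analytics:rwx /data/shared
  hdfs dfs -getfacl /data/shared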

Documented and managed failure/recovery.

Maintained and backed up metadata.

Involved in back-up, recovery processes and capacity planning for Cassandra Cluster.

Provided monthly updates on the health of the cluster to higher management.

Developed integration scripts between Cassandra and Oracle using Sqoop for bulk loads.

Environment: CDH 5.12.1, Hive, Impala, Spark2, Kafka, Flume, Sqoop, Cassandra.

Hadoop Administrator, AT&T Corporate Office, Schaumburg, Illinois, May 2017 – Aug 2018

Responsibilities:

Worked on the Hortonworks distribution of Hadoop.

Supporting 3 different EDL PROD clusters as a Hadoop administrator

Providing support to DEV, UAT, and Prod clusters

On-boarding New users, teams and applications

Working with end-users to troubleshoot and resolve incidents with data accessibility

Implementing Storage Policies for Hot, Warm, and Cold based on the use cases
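
A minimal sketch of applying such a policy with the HDFS storage-policy commands (the path is hypothetical):

  # Pin an archive directory to ARCHIVE-class storage and verify
  hdfs storagepolicies -setStoragePolicy -path /data/archive -policy COLD
  hdfs storagepolicies -getStoragePolicy -path /data/archive

  # Migrate existing blocks to match the new policy
  hdfs mover -p /data/archive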

Cluster maintenance, High Availability, HDFS support and maintenance, including commissioning and decommissioning of nodes

Investigation and resolution of application and data issues in Hadoop clusters

Automating day-to-day activities for HDFS and Hive operations

Worked with the data engineering team to support development and deployment of Spark and Hadoop jobs

Performed Hadoop upgrades, patches with proper backup plans to avoid any data loss

Recommend and implement standards and best practices related to cluster administration

Contributed to the architecture design of the cluster to support growing demands and requirements.

Environment: HDP 2.6.2, ZooKeeper, HDFS, YARN, MapReduce2, Spark, Hive, Hive LLAP, HBase, Kerberos, Red Hat, Shell Scripting, Ranger.

Hadoop System Admin, SugarCRM, Raleigh, NC, Apr 2014 – Apr 2016

Responsibilities:

Solid understanding of the Hadoop Distributed File System.

Responsible for the implementation of Hadoop infrastructure

Installation, configuration and upgrading of Hortonworks distribution of Hadoop.

Configured replication with replica set factors, priority, and server distribution.

HDFS support, cluster monitoring, and performance tuning of the Hadoop ecosystem.

Worked on UNIX operating systems to handle system tasks related to Hadoop clusters.

Enabled security/authentication using Kerberos and ACL authorizations using Apache Sentry.
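
A minimal sketch of provisioning a Kerberos service principal and authenticating with it (the realm, host, and keytab names are hypothetical):

  # Create a service principal and export its keytab
  kadmin -q "addprinc -randkey hdfs/node01.example.com@EXAMPLE.COM"
  kadmin -q "xst -k hdfs.keytab hdfs/node01.example.com@EXAMPLE.COM"

  # Obtain a ticket from the keytab and confirm
  kinit -kt hdfs.keytab hdfs/node01.example.com@EXAMPLE.COM
  klist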

Analyzed vendor suggestions/recommendations for the environment and designed the implementation.

Performed short- and long-term system/database planning and capacity planning.

Strong communication and analytical skills; a quick learner, organized, and self-motivated.

System Administrator, Conversant Software Technology Private Ltd (India), Apr 2011 – Jan 2014

Responsibilities:

Worked as a System Administrator on Linux/UNIX platforms.

Installed and maintained Linux servers and monitored system metrics and logs.

Administered Red Hat Linux servers for several functions, including managing Apache/Tomcat servers, MySQL databases, and firewalls in both development and production.

Performed common system administration tasks, including adding users and creating file systems.
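
A minimal sketch of such routine tasks (the username, device, and mount point are hypothetical):

  # Create a user with a home directory and set a password
  useradd -m -s /bin/bash jdoe
  passwd jdoe

  # Create and mount an ext4 file system on a new disk
  mkfs.ext4 /dev/sdb1
  mkdir -p /data
  mount /dev/sdb1 /data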

Maintained server, network, and support documentation including application diagrams.

Added, removed, and updated user account information and reset passwords.

Strong knowledge of Linux commands.

Kept track of appropriate software and upgraded software packages.

Developed and maintained system documentation, libraries, and procedural documents.


