
Manager Hadoop

Location:
Austin, TX
Posted:
April 09, 2020


Chandra Harsha Theegela Contact: 937-***-****

Email: adcqrp@r.postjobfree.com

●IT professional with more than 5 years of experience in Hadoop administration and Linux administration.

●Experience in the design, installation, configuration, and management of Cloudera and Hortonworks Hadoop distributions.

●Experience in building and deploying multi-node, shared/multi-tenant production and development clusters with different Hadoop components (Hive, Pig, Sqoop, Oozie, Flume, HBase, ZooKeeper) using Cloudera Manager and Apache Ambari.

●Experience in building production Kafka clusters on the Hortonworks distribution and enabling Kerberos on the clusters.

●Installing and configuring Kafka clusters and monitoring them using Grafana, Opera, and Splunk.

●Configuring Kerberos and integrating with Active Directory

●Providing security for Hadoop clusters with Kerberos, Active Directory/LDAP, and TLS/SSL, and performing dynamic tuning to keep clusters available and efficient.

●Expertise in managing, monitoring, and administering multi-hundred-node Hadoop clusters on different distributions such as Cloudera CDH and Hortonworks HDP.

●Experience in managing cluster resources by implementing the Fair Scheduler and Capacity Scheduler.

●Worked with Flume for collecting the logs from log collector into HDFS

●Experience managing, performance tuning, and patching Kafka clusters in Linux environments.

●Designed and implemented Kafka topics, configuring them in new Kafka clusters across all environments.

●Experience in tuning and monitoring Spark real-time streaming jobs.

●Hands-on experience in analyzing log files for Hadoop and ecosystem services and finding root causes.

●Experienced with different big data compression techniques like LZO, GZIP, and Snappy

●Involved in customer interactions, business user meetings, vendor calls, and technical team discussions to make the right design and implementation choices and to provide best practices for the organization.

●Worked on disaster management and recovery on Hadoop clusters.

●Experience in performing minor and major upgrades on large multi-tenant production clusters

●Decommissioning and commissioning DataNodes on running Hadoop clusters.
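For illustration, DataNode decommissioning on a running cluster typically follows the HDFS exclude-file pattern; this is a sketch only, where the hostname and exclude-file path are placeholders and must match the dfs.hosts.exclude setting in hdfs-site.xml:

```shell
# Add the node to the HDFS exclude file (path is an assumption; it must
# match the dfs.hosts.exclude property configured in hdfs-site.xml).
echo "datanode05.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read the include/exclude lists; the node enters
# "Decommission In Progress" while its blocks re-replicate elsewhere.
hdfs dfsadmin -refreshNodes

# Watch progress until the node reports "Decommissioned", then it can be
# stopped and removed from the cluster.
hdfs dfsadmin -report | grep -A 1 "datanode05"
```

Commissioning is the reverse: remove the host from the exclude file (and add it to the include file if one is used), run `-refreshNodes` again, and start the DataNode service on the host.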

●Installation, patching, upgrading, tuning, configuring, and troubleshooting of Linux-based operating systems (Red Hat and CentOS) and virtualization across a large set of servers.

●Experience with system integration, capacity planning, performance tuning, system monitoring, system security, operating system hardening and load balancing

●Experience in managing and scheduling cron jobs, such as enabling system and network logging on servers for maintenance, performance tuning, and testing.
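Maintenance jobs of that kind are commonly expressed as crontab entries; a hypothetical fragment (paths, times, and retention period are placeholders):

```shell
# Hypothetical crontab fragment for routine maintenance; paths and
# schedules below are illustrative, not from any specific system.
cat <<'EOF' > /tmp/maintenance.cron
# rotate and compress application logs nightly at 01:30
30 1 * * * /usr/sbin/logrotate /etc/logrotate.d/app >> /var/log/logrotate-app.log 2>&1
# purge files older than 14 days from a scratch area every Sunday at 03:00
0 3 * * 0 find /data/tmp -type f -mtime +14 -delete
EOF
# install with: crontab /tmp/maintenance.cron   (replaces the user's crontab)
grep -c '^[0-9]' /tmp/maintenance.cron   # count the scheduled entries
```

The five leading fields are minute, hour, day-of-month, month, and day-of-week; redirecting stdout and stderr to a log file keeps cron from mailing output.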

●Job monitoring using the Resource Manager, taking necessary action when a job is not progressing by killing it if it is hung or increasing its priority in real time.

●Hands-on experience in Linux admin activities on RHEL and CentOS.

●Experience monitoring and troubleshooting issues with Linux memory, CPU, OS, storage and network.

●Handled high disk utilization by performing periodic and on-demand cleanups.

●Good experience in setting up Linux environments: passwordless SSH, creating file systems, disabling the firewall and SELinux, and installing Java.
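A typical host-preparation sequence of that kind might look like the following sketch (the hostname and device are placeholders; disabling the firewall and SELinux is shown only because the bullet mentions it, and is generally appropriate only on isolated cluster networks):

```shell
# Passwordless SSH: generate a key pair with no passphrase and push the
# public key to a peer host (hostname is a placeholder).
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
ssh-copy-id user@node02.example.com

# Disable the firewall and SELinux (RHEL/CentOS 7 style).
systemctl disable --now firewalld
setenforce 0                                             # runtime only
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Create and mount a filesystem on a new disk (device is a placeholder).
mkfs.xfs /dev/sdb1
mkdir -p /data01 && mount /dev/sdb1 /data01
```

A matching entry in /etc/fstab would be needed for the mount to survive a reboot.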

●Worked with Cloudera Support/IBM, raising cases whenever we were unable to resolve an issue in-house and joining WebEx calls with them to resolve it.

●Hands-on experience with ServiceNow, IPcenter, and Nagios.

●Expert in setting up SSH, SCP and VSFTP connectivity between UNIX hosts

EDUCATION:

Master’s in Information Technology (2019) from University of the Cumberlands, Kentucky.

Master’s in engineering from University of Dayton, Ohio.

Bachelor of Technology in Engineering from S.R.M University, Chennai, Tamil Nadu.

CERTIFICATIONS:

Red Hat Certified System Administrator (RHCSA). [Cert. No. 170-036-339]

Docker Certified Associate. [Cert. No. 11298398]

SKILL SET:

Operating Systems

Red Hat Linux (6.2, 7.2, Enterprise Linux), CentOS (6.x, 7.x), Windows Server 2003/2008

Hadoop Framework

HDFS, MapReduce, Pig, Hive, Flume, Kafka, YARN, Sqoop, ZooKeeper, Oozie, Hue, Spark, HBase

Hadoop Distributions

Cloudera Distribution of Hadoop (CDH), Hortonworks (HDP)

Virtualization

VMware

Containerization Orchestrators

Kubernetes, Docker Swarm

Network Load Balancer

F5-load balancer

Processes

Incident Management, Change Management

PROFESSIONAL EXPERIENCE:

SNG Infotech Jan 2019 –Present

Client: VISA

Austin, TX

ROLE: Sr. Software Administrator

Cloudera

Built a multi-tenant dev cluster as well as the Kafka production cluster

Performed Hadoop Administration on production Hadoop clusters (Cloudera)

Handled all levels of system administration, automation, and security on large Multi-tenant Hadoop Clusters

Worked on installation, configuration, and optimizing, supporting and monitoring Hadoop clusters

Performed major maintenance, such as master role migrations (NameNode, Hive Metastore, ZooKeeper server, Resource Manager, CM services), and minor maintenance, such as CDH and CM upgrades, on the existing Hadoop clusters

Worked on installing and configuring Kerberos for authentication and Sentry for authorization

Worked on Enabling TLS for Cloudera Manager Agents

Configured a multi-Cloudera Manager dashboard in dedicated mode to help support teams monitor all production clusters in one place

Enabled HA for Oozie and the Hive Metastore, and enabled load balancing across HiveServer2 hosts

Performed Tuning and Increased Operational efficiency on a continuous basis

Installed non-Hadoop services such as ESP, NDM, and WLM on the production servers

Ensured production scalability and stability of large production clusters

Performed optimization and capacity planning of large multi-tenant production clusters

Performed Hadoop cluster capacity planning and optimized clusters to meet SLAs

Resource management in multi-tenant clusters using Fair-Scheduler

Installed and Worked with Monitoring tools like Splunk, Grafana and Opera

Performed Kernel Patching on data nodes using BMC tools

Scheduled and triggered scripts using BMC blade logic

Environment: HDFS, MapReduce, YARN, HBase, Hive, Kafka, Spark, Kerberos, Pig, Sqoop, Solr, Impala, HDFS encryption, Cloudera Manager services.

Hortonworks:

Built a cluster from scratch on HDP 3.1 and Ambari 2.7, with services including Kafka, Spark, HDFS, Hive, Ranger, YARN, and ZooKeeper

Managed 40+ production clusters, including dedicated clusters for Kafka, Spark, and HBase as well as multi-tenant clusters

Reviewed Linux and Hadoop resiliency to failures and implemented HA functionality where possible: NameNode, Resource Manager, etc.

Integrated Ranger with Kafka and Ambari with LDAP

Installed MIT Kerberos on the cluster and later migrated it to AD

Experience in debugging on disk level and network level issues

Experience in configuring alerts and incidents to ServiceNow using the Ambari API

Worked with application teams to onboard their applications on the cluster and resolve issues

Fine-tuned performance parameters on the clusters per application requirements

Environment: HDFS, Tez, Yarn, Hive, Kafka, Spark, Kerberos, Ranger, Ambari, Hortonworks.

I.T.Hippies L.L.C Jan 2016 – Jan 2019

Client: Statefarm Insurance Company

BLOOMINGTON, IL

ROLE: Hadoop Administrator

Responsibilities:

Experience in the end-to-end process of Hadoop cluster setup: installation, configuration, and monitoring of the Hadoop cluster in Cloudera.

Experience in installing, configuring, supporting, and managing Hadoop clusters using Apache and Cloudera (CDH4, CDH5) distributions with YARN.

Responsible for cluster maintenance, commissioning and decommissioning DataNodes, cluster monitoring and troubleshooting, and managing and reviewing data backups and Hadoop log files.

Installation of various Hadoop Ecosystems and Hadoop Daemons.

Configured various property files like core-site.xml, hdfs-site.xml, mapred-site.xml based upon the job requirement.

Involved in loading data from UNIX file system to HDFS.

Experience in deploying versions of MRv1 and MRv2 (YARN).

Provisioning, installing, configuring, monitoring, and maintaining HDFS, Yarn, HBase, Flume, Sqoop, Spark, Kafka, Oozie, Pig, Hive.

Implemented the Capacity Scheduler on the YARN Resource Manager to share cluster resources among users' MapReduce jobs.

Installation of various Hadoop ecosystem components like Hue, Kafka, Hive, Pig, HBase, etc.

Hands-on experience in deploying and managing multi-node development, testing, and production Hadoop clusters with different Hadoop components (Hive, Pig, Sqoop, Kafka, Oozie, Flume, ZooKeeper, HBase) using Cloudera Manager and Hortonworks Ambari.

Installed and configured Spark on multi node environment.

Monitored workload, job performance and capacity planning.

Expertise in recommending hardware configuration for Hadoop cluster.

Involved in designing and implementation of secure Hadoop cluster using Kerberos.

Installing, Upgrading and Managing Hadoop Cluster on Cloudera distribution.

Managing and reviewing Hadoop and HBase log files.

Experience in creating Kafka topics and Solr collections.
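As a sketch of what topic and collection creation looks like, the commands below use placeholder names, counts, and hosts; the `--zookeeper` flag reflects older Kafka releases (newer ones use `--bootstrap-server`), and `solrctl` is the Cloudera-packaged Solr admin tool:

```shell
# Create a Kafka topic with 6 partitions, replicated 3 ways
# (topic name and ZooKeeper host are placeholders).
kafka-topics.sh --create \
  --zookeeper zk01.example.com:2181 \
  --topic orders-events \
  --partitions 6 \
  --replication-factor 3

# Verify the partition/replica layout.
kafka-topics.sh --describe --zookeeper zk01.example.com:2181 --topic orders-events

# Create a Solr collection with 3 shards and 2 replicas per shard
# (collection name is a placeholder).
solrctl collection --create orders_logs -s 3 -r 2
```

Replication factor is capped by the number of live brokers, so a 3-way replicated topic needs at least three brokers.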

Cluster maintenance as well as creation and removal of nodes using Cloudera Manager

Performance tuning of Hadoop clusters and Hadoop MapReduce routines.

Administration, installation, upgrading, and management of Hadoop distributions (CDH4, CDH5, Cloudera Manager), Hive, and HBase.

Experience in analyzing Log files for Hadoop and eco system services and finding root cause.

Environment: Hadoop, HDFS, Hive, YARN, Flume, Kafka, Spark, Impala, Sqoop, Oozie, HBase, Shell Scripting, Ubuntu, Red Hat Linux, Hue, Solr, Cloudera.

I.T.Hippies L.L.C Aug 2015 – Jan 2019

BLOOMINGTON, IL

ROLE: Technical Consultant

Responsibilities:

Providing strategic advice on using technology to achieve goals.

Understanding customer requirements and business objectives.

Designing IT systems and networks, ensuring the right architecture and functionality.

Evaluating, monitoring, and reviewing installations to minimize technical issues and providing resolutions for them.

Installation, Configuration & Upgrade of Linux (RHEL 6/7).

Installation and configuration of Red Hat Enterprise Linux and Windows Virtual machines for the disaster recovery project.

Installed ESXi operating systems on the physical blades for VMware virtualization.

Created custom image/template for Red Hat Enterprise Linux on vCenter via vSphere client.

Created Host profiles and applied host profiles to the ESXi hosts.

Migrated virtual machines across the ESXi hosts within a cluster using vMotion.

Performed configuration of virtual disks, including adding virtual disks to virtual machines and resizing disks in vCenter for Red Hat 6.x/7.x and Windows servers.

Deployed and de-provisioned virtual machines using a cloud resource management tool and vCloud Automation Center.

Configured disk volumes using Logical Volume Manager (LVM) on Linux servers.
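The usual LVM flow for that kind of task is sketched below; the device node, volume group, logical volume names, and sizes are all placeholders:

```shell
# Initialize a raw disk as an LVM physical volume (device is a placeholder).
pvcreate /dev/sdc

# Create a volume group on it, then carve out a 50 GB logical volume.
vgcreate datavg /dev/sdc
lvcreate -n applv -L 50G datavg

# Put a filesystem on the LV and mount it.
mkfs.xfs /dev/datavg/applv
mkdir -p /app && mount /dev/datavg/applv /app

# Later growth: extend the LV and resize the filesystem in one step.
# lvextend -r -L +10G /dev/datavg/applv
```

The advantage over raw partitions is that volumes can span disks and be grown online without repartitioning.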

Package Management using YUM and RPM utilities.

Set up YUM repositories on both server and client.
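On the client side, pointing a host at a local repository comes down to a `.repo` file like the following sketch (the repo ID, URL, and example package are hypothetical; `gpgcheck=0` is shown only for brevity and is not recommended in production):

```shell
# Hypothetical local repository definition for /etc/yum.repos.d/.
cat > /etc/yum.repos.d/local.repo <<'EOF'
[local-base]
name=Local Base Packages
baseurl=http://repo.example.com/rhel7/base/
enabled=1
gpgcheck=0
EOF

yum clean all && yum repolist    # refresh metadata and confirm the repo shows up
yum install -y httpd             # install a package from it (example package)
rpm -q httpd                     # verify the installed version with RPM
```

The server side is typically a web server exporting a directory of RPMs over which `createrepo` has been run to generate the metadata.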

Booted systems into different run levels and started and stopped services running on the VMs.

Troubleshot and resolved vFabric GemFire issues, made configuration changes as needed, and monitored the GemFire agents and GemFire data fabrics using the GFMON tool.

Perform User and Group administration on Linux servers via Apache LDAP

Performed backup of virtual machines using the Avamar backup utility.

Activation of servers from checkout pool to active pool by configuring data group list on the F5 load balancer.

Performed site/disaster recovery using the F5 load balancer.

Performed Remote administration using Remote Desktop Connection and HP-ILO (Integrated Lights-out) for physical servers.

Managed network troubleshooting of TCP/IP applications, including Ethernet, IP addressing and subnetting, and routing.

Followed ITIL processes for Configuration Management, Change Management, Problem Management, and Incident Management.

Responsible for day-to-day operations involving monitoring, health checks, remote administration of systems in production environment.

Provided 24x7 admin support for our virtual environment via the Remedy ticketing system.

Environment: Tomcat 6.0.x/7.0.x/8.0.x on Red Hat Linux 6.0/7.0, Apache web server, RabbitMQ, WMQ 7.x, IBM HTTP Server, LDAP, F5 BIG-IP load balancer, Tivoli Performance Viewer, Log Analyzer, Heap Analyzer, UNIX, RHEL, Windows, HPSM, IPcenter (monitoring)

GDN INFOTECH, Inc, Indianapolis, IN Feb 2015 – May 2015

ROLE: Systems Administrator

Responsibilities:

Installed Red Hat Enterprise Linux Server 5.3 and 5.4 on HP blade servers using Kickstart installation.

Installed and maintained several Linux and Windows servers on a virtual environment.

Created mount points for Server directories and mounted these directories on the Servers.

Created Collaborative work directories (group shares) and setup Access Control Lists for directories.

Diagnosed and troubleshot software- and hardware-related problems.

Installed and deployed operating system and security related patches and fixes.

Developed Shell scripts for automating the batch jobs.

Performed LVM and file system management, user account management, data backups, and user logon support.

Involved in testing of products and documentation of necessary changes required in this environment.

Tracked and assigned all calls coming into the support line.

Provided off-hours on-call assistance to end-users.


