
Hadoop Administration

Location:
Tampa, FL
Posted:
March 24, 2017



Jayakanth

Email-Id: aczgkv@r.postjobfree.com

Phone: 281-***-****

Professional Summary:

Over 9 years of experience in Hadoop administration and support, middleware administration, and Data Warehouse/Data Mart development and implementation, with good knowledge of client/server technologies.

3 years of experience in Hadoop administration as an SME. Responsibilities include cluster pre-deployment, node selection, software installation, configuration, memory-parameter tuning, cluster maintenance, and OS-level, HDFS, and Hive performance tuning.

Hands-on experience in configuration, administration, and monitoring of the HBase, Hive, Oozie, Hue, Tez, and Spark components of the Hadoop ecosystem, with knowledge of the MapReduce/HDFS framework.

Good knowledge of Hadoop cluster capacity planning and of commissioning and decommissioning nodes in a timely manner.

Hadoop cluster deployment and monitoring with Ambari (version 2.x).

Successfully upgraded Hortonworks HDP from 2.2.4.2 to 2.3.6.

Hands-on experience in installation, configuration, support, and management of Hadoop clusters using the Apache, Cloudera (CDH3, CDH4), and YARN distributions.

Experience in troubleshooting MapReduce issues and working with other teams to determine root cause.

Worked with business intelligence, ETL, and application teams to troubleshoot complex problems impacting customers' ability to perform required business functions.

Orchestrated Hive queries and Sqoop, Pig, and Flume scripts scheduled through Autosys.

Identified bugs in existing jobs by analyzing the data flow, evaluating the transformations, and fixing the defects.

Implemented security and configuration for LDAP, Netscape Directory Server, and SiteMinder single sign-on.

Proficient in shell scripting and UNIX performance tuning, with proven skills in UNIX system administration.

Experience with monitoring tools such as Nagios, Ganglia, and Wily Introscope.

Experience creating physical volumes, volume groups, and logical volumes using LVM on Linux and HP-UX.

Troubleshooting and configuration, including networking and iptables, hostname resolution, and passwordless SSH login.

Experience in middleware administration using Oracle/BEA WebLogic and IBM WebSphere Application Server in heterogeneous environments such as UNIX/Linux, Solaris, and Windows NT, covering installation, configuration, application deployment, and performance tuning, including troubleshooting and maintenance.
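The passwordless SSH setup mentioned above can be sketched roughly as follows. This is a minimal illustration, not a hardening guide: the remote hostname and user (`datanode01`, `hadoop`) are placeholders, the key is generated into a throwaway directory, and the copy step is only echoed so the sketch runs without a live host.

```shell
#!/usr/bin/env bash
# Sketch: key-based (passwordless) SSH login to a cluster node.
# "datanode01" and the "hadoop" user are hypothetical placeholders.
set -euo pipefail

KEYDIR="$(mktemp -d)"        # demo location; normally this is ~/.ssh
KEY="$KEYDIR/id_rsa"

# 1. Generate a key pair with an empty passphrase (suitable for automation).
ssh-keygen -t rsa -b 4096 -N "" -f "$KEY" -q

# 2. Install the public key on the remote node's authorized_keys.
#    (Echoed here so the sketch runs without a reachable host.)
echo "ssh-copy-id -i $KEY.pub hadoop@datanode01"

# 3. After step 2, 'ssh hadoop@datanode01' should log in without a password.
ls -l "$KEY" "$KEY.pub"
```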

Education:

Master of Science in MIS, University of Houston-Clear Lake, Houston, Texas.

Bachelor of Technology in Electrical and Electronics Engineering, JNT University, India.

Technical Skills:

Operating Systems

Solaris, Red Hat LINUX, HP-UX, Windows 98/2000/NT/XP.

Hadoop and components

Hortonworks HDP 2.2.4/2.3.6, CDH3/4, Ambari 2.2.0, Cloudera Manager, HDFS, Hive, Hue, HBase, ZooKeeper, Flume, Pig, Sqoop, Spark

Programming Languages

C, C++, SQL

Application Server

BEA WebLogic Server 6.x/7.x/8.x/9.x/10.x, Apache Tomcat, IBM WebSphere

Web Server

Oracle HTTP Server, Apache 2.x, iPlanet Web Server, IBM HTTP Server

Scripting Languages

WLST, UNIX Shell Scripting (Bourne, Korn, C and Bash), Perl

Monitoring Tools

Wily Introscope, JMeter, Mercury LoadRunner.

Database

Oracle 8i/9i/10g, MS SQL Server 7.x/2000, MySQL and DB2.

Networking & Protocols

TCP/IP, Telnet, HTTP, HTTPS, FTP, SNMP, LDAP, DNS, DHCP.

EMPLOYMENT HISTORY:

Worked as Sr. Hadoop Admin/SME at WellCare, Tampa, FL, Oct 2014 – Present.

Worked as Hadoop Administrator at BCBS, Dallas, TX, Dec 2013 – Oct 2014.

Worked as SAP Administrator at ExxonMobil, Fairfax, VA, Mar 2011 – Dec 2013.

Worked as Oracle Fusion Middleware/WebLogic Admin at Verizon, Irvine, CA, May 2009 – Mar 2011.

Worked as WebLogic Admin at HP Enterprise Services LLC, Rancho Cordova, CA, May 2008 – Apr 2011.

Professional Experience:

Client: Wellcare, Tampa, FL Oct 2014 – Present

Role: Sr. Hadoop Administrator/ SME

Responsibilities:

Hands-on installation, configuration, maintenance, monitoring, performance tuning, and troubleshooting of Hortonworks Hadoop clusters across enterprise Development, UAT, and Production environments, from design through build and ongoing support.

Responsible for designing and implementing Hadoop high availability for the NameNode, HiveServer2, and the MySQL metastore.

Installed and configured Hortonworks distribution HDP 2.2.x/2.3.x with Ambari.

Configured Ambari views such as Capacity Scheduler, Hive, Tez, and Files.

Troubleshot MapReduce issues and worked with other teams to determine root cause.

Provided technical guidance during the SDLC for data ingestion.

Implemented the Capacity Scheduler to share cluster resources among users' MapReduce jobs.

Responsible for MySQL metadata backups to external servers via automation scripts.
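An automated metadata backup script of the kind described above might look like the following sketch. The host, database, user, and backup destination are all hypothetical placeholders; the script is only written out and syntax-checked here, not executed, since it needs a live MySQL server.

```shell
#!/usr/bin/env bash
# Sketch of an automated MySQL metadata backup script. Host, database,
# user, and destination are placeholders; mysqldump is not invoked here.
set -euo pipefail

BACKUP_SCRIPT="$(mktemp)"
cat > "$BACKUP_SCRIPT" <<'EOF'
#!/usr/bin/env bash
# Dump the metastore database and ship it to an external backup server.
set -euo pipefail
STAMP=$(date +%Y%m%d_%H%M%S)
OUT="/backup/metastore_${STAMP}.sql.gz"
mysqldump --single-transaction -h metastore-db -u backup_user hive_metastore \
  | gzip > "$OUT"
scp "$OUT" backupserver:/var/backups/hive/   # external server is a placeholder
EOF
chmod +x "$BACKUP_SCRIPT"

# Validate the script's syntax without running it.
bash -n "$BACKUP_SCRIPT" && echo "syntax OK"
```

A script like this would typically run from cron on the metastore host, with credentials supplied via a protected option file rather than on the command line.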

Administered and configured Hadoop ecosystem tools such as Hive, HBase, Flume, Spark, Pig, and Sqoop.

Responsible for design and implementation of the HDP upgrade process with minimal downtime.

Successfully upgraded the Hortonworks Hadoop distribution stack from 2.2.4 to 2.3.6.

Monitored and controlled local file system disk space usage and log files, cleaning logs with automated scripts.
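A log-cleanup script of the kind mentioned above can be sketched as below. The directory is a temporary one so the sketch runs anywhere; the 7-day retention window is an assumed policy, not one stated in this resume.

```shell
#!/usr/bin/env bash
# Sketch: automated log cleanup, deleting *.log files older than 7 days.
# Demonstrated against a temporary directory so it runs anywhere.
set -euo pipefail

LOGDIR="$(mktemp -d)"

clean_old_logs() {   # usage: clean_old_logs <dir> <days>
  find "$1" -name '*.log' -type f -mtime +"$2" -delete
}

touch "$LOGDIR/current.log"
touch -d '10 days ago' "$LOGDIR/ancient.log"   # GNU touch; simulate an old log

clean_old_logs "$LOGDIR" 7

ls "$LOGDIR"   # only current.log should remain
```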

Installed and configured Kerberos to secure the cluster.

Integrated the cluster with LDAP and enabled single sign-on for nodes, Hue, and Ambari.

Integrated Hadoop with BI tools such as SAS, Tableau, and IBM Cognos.

Expertise in Hadoop cluster tasks such as adding and decommissioning nodes without affecting running jobs or data.

Installed, configured, and managed Hadoop clusters spanning multiple racks using automation tools such as Puppet.

Implemented DRBD-based NameNode replication to avoid a single point of failure.

Managed and reviewed Hadoop log files.

Monitored system activities and fine-tuned system parameters and configurations to optimize performance and ensure system security.

Monitored the Hadoop cluster using tools such as Nagios, Ganglia, Grafana, and Ambari Metrics.

Set up Sqoop connectors and JDBC drivers for Teradata, Oracle, and MySQL databases.

Scheduled cron jobs for file system checks using fsck and ran the balancer for uniform load across the nodes.
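The cron-driven fsck and balancer maintenance above can be sketched as follows. The log paths, balancer threshold, and schedule are assumptions; the `hdfs` commands are written to a script and syntax-checked rather than executed, since they need a live cluster.

```shell
#!/usr/bin/env bash
# Sketch: weekly HDFS maintenance via cron -- fsck report plus balancer run.
# The hdfs commands are written to a script (not executed here), and a
# sample crontab line is shown.
set -euo pipefail

MAINT="$(mktemp)"
cat > "$MAINT" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# File system health report (read-only check of block integrity).
hdfs fsck / > /var/log/hadoop/fsck_$(date +%F).log 2>&1
# Rebalance blocks; 10 = allowed disk-usage spread (percent) between nodes.
hdfs balancer -threshold 10 >> /var/log/hadoop/balancer_$(date +%F).log 2>&1
EOF
chmod +x "$MAINT"
bash -n "$MAINT" && echo "syntax OK"

# Hypothetical crontab entry: run every Sunday at 02:00.
echo "0 2 * * 0 $MAINT"
```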

Worked on resolving production issues, documenting root cause analysis, and updating tickets in ITSM.

Organized meetings with the teams mentioned above to capture their requirements as inputs, designed the plan and the steps involved, and obtained user approvals.

Environment: Hortonworks (HDP 2.2.4/2.3.6), Ambari 2.x, RHEL6.8, Informatica BDE, Attunity, Greenplum, HDFS, Hive, Pig, Spark, Sqoop, Flume, Tableau, Cognos, SAS, Oracle 11, Linux shell scripting.

Client: BCBS, Dallas, TX Dec 2013 – Oct 2014

Role: Hadoop Administrator

Responsibilities:

Responsible for implementation and ongoing administration of Hadoop Cluster.

Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.

Working with data delivery teams to setup new Hadoop users. This job includes setting up Linux users, setting up Kerberos principals and testing HDFS, Hive, Pig and MapReduce access for the new users.

Enterprise performance tuning of Hadoop clusters and Hadoop MapReduce routines.

Monitored Hadoop cluster job performance, connectivity, security, and capacity planning.

Configured Hive, Pig, Impala, Sqoop, Flume, and Oozie in CDH 5.

Managed and reviewed Hadoop log files; performed file system management and monitoring.

Performed a major upgrade from CDH3 to CDH4.

Collaborated with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.

Scheduled and managed Oozie jobs to automate sequences of rotational activities.
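Oozie scheduling of the kind described above is typically driven by a coordinator application plus a properties file. The sketch below uses placeholder hosts and HDFS paths; the `oozie` CLI call is only echoed, since it requires a running Oozie server.

```shell
#!/usr/bin/env bash
# Sketch: submitting a recurring job via an Oozie coordinator. NameNode,
# JobTracker, server URL, and HDFS paths are hypothetical placeholders.
set -euo pipefail

PROPS="$(mktemp)"
cat > "$PROPS" <<'EOF'
nameNode=hdfs://namenode:8020
jobTracker=jobtracker:8032
oozie.coord.application.path=${nameNode}/user/hadoop/apps/rotation-coord
start=2014-01-01T00:00Z
end=2014-12-31T00:00Z
EOF

# Submit and start the coordinator (requires a running Oozie server):
echo "oozie job -oozie http://oozieserver:11000/oozie -config $PROPS -run"
```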

Decided on the security and access-control model for the cluster and data protection.

Tested the production cluster before and after Hadoop installation for HA and performance.

Planned requirements for migrating users to production in advance to avoid last-minute access issues.

Planned data topology, rack topology, and resource availability for users to share as required.

Planned and implemented data migration from the existing staging cluster to the production cluster.

Installed and configured Hadoop ecosystem components such as MapReduce, Hive, Pig, Sqoop, HBase, ZooKeeper, Fuse, and Oozie.

Supported MapReduce Programs and distributed applications running on the Hadoop cluster.

Prepared multi-cluster test harness to exercise the system for performance, failover and upgrades.

Ensured data integrity using fsck and other Hadoop system admin tools to check for block corruption.

Performed a POC on cluster backup using distcp, Cloudera Manager BDR, and parallel ingestion.

Implemented commissioning and decommissioning of data nodes.
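Decommissioning a DataNode, as mentioned above, generally means listing the host in the HDFS exclude file and refreshing the NameNode's node list. The hostname and file path below are placeholders, and the `hdfs` command is echoed rather than executed so the sketch runs without a cluster.

```shell
#!/usr/bin/env bash
# Sketch: decommission a DataNode via the HDFS exclude file. The hostname
# and file path are placeholders; the hdfs command is echoed, not executed.
set -euo pipefail

EXCLUDE_FILE="$(mktemp)"   # normally the file named by dfs.hosts.exclude
NODE="datanode07.example.com"

# 1. Add the node to the exclude list (idempotently).
grep -qxF "$NODE" "$EXCLUDE_FILE" || echo "$NODE" >> "$EXCLUDE_FILE"

# 2. Tell the NameNode to re-read the include/exclude lists; the node then
#    shows "Decommission In Progress" while its blocks are re-replicated.
echo "hdfs dfsadmin -refreshNodes"

cat "$EXCLUDE_FILE"
```

Commissioning is the mirror image: remove the host from the exclude file (or add it to the include file) and run the same refresh.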

Performed various configurations, including networking and iptables, hostname resolution, and passwordless SSH login.

Implemented the Kerberos security authentication protocol for the existing cluster.

Continuously monitored and managed the Hadoop cluster through Ganglia and Nagios.

Environment: Hadoop CDH3/4, HDFS, MapReduce, YARN, Hive, Pig, Flume, Oozie, Sqoop, Cloudera Manager, MySQL.

Client: ExxonMobil, Fairfax, VA Mar 2011 – Dec 2013

Role: SAP Data Services Administrator

Responsibilities:

Migrated legacy SAP (Tiger, R/2, R/3, 3.1H and Oracle manufacture) systems data to SAP ECC 6.0 IS-Oil & Gas.

Extensively used SAP Data Services Designer and Management Console.

Created jobs using SAP Data Services to extract and transform data from source systems into the target SAP system.

Implemented business rules and validations and restructured the data for the target SAP system.

Provided business and IT users with pre-load and post-load validation reports after validating source data and transformations.

Used various de-duplication techniques, Data Quality transforms, and other logic.

Worked with business and functional resources to create mapping documents for data migration of MM and PTP transaction objects.

Used SQL tools such as Toad to validate the data and extract enriched/valid/invalid reports.

Migrated code from local/central repositories to repositories in other environments.

Extensive experience migrating data across three cutover sessions; involved in three go-lives.

Responsible for the full migration cycle of objects across modules, including SAP technical and functional assistance on the project.

Worked with the SAP configuration and support teams on knowledge transfer and load issues.

Environment: SAP BO Data Services 3.5, SAP, Toad 10.5, BusinessObjects XI R3, Oracle 10g/9i, IDocs, LSMW flat files, Windows XP

Client: Verizon wireless, Irvine, CA May 2009 – Mar 2011

Role: Oracle Fusion Middleware/ Weblogic Administrator

Responsibilities:

Experience in installation and configuration of Oracle Enterprise Performance Management (Hyperion) 11g, Oracle Discoverer 11g, SOA Suite 11g, WebCenter Suite 11g, Identity Management Suite 11g, Universal Content Management 11g, OBIEE, and ODI.

Supported the existing production environment on a rotation basis, covering legacy systems, WebLogic 8.x, Documentum, and CrossLogix.

Experience in installation and configuration of Oracle Service Bus/AquaLogic Service Bus.

Experience in installation and configuration of the SOA Suite and in troubleshooting issues on it.

Experience with Oracle Fusion Middleware 11g (versions 11.1.1.2/11.1.2) and Oracle WebLogic (version 10.3.3+).

Troubleshot and diagnosed issues, drove problem resolution, and performed performance tuning.

In-depth knowledge of domains, clustering, high availability, firewalls, load balancers, and routers.

Expertise in deploying Enterprise JavaBeans and J2EE applications (WAR, JAR, and EAR).

Involved in load testing using the Oracle Load Testing tool and worked on various load test cases.

Good knowledge of JVM heap sizing, garbage collection, out-of-memory errors, server hangs, stuck threads, and thread and heap analysis.

Proficient in backup and recovery procedures for Oracle WebLogic Server.

Created related documentation for Oracle Fusion Middleware 11g.

Experience in architecting and installing FMW 11g based on Clustered environments.


Comprehensive experience in production support environment, UAT, testing and development.

Best-practice experience in administration, installation, and configuration of Oracle FMW 11g.

Provided regular status reports on a weekly basis.

Provided knowledge transfer to client personnel.

ENVIRONMENT: WebLogic 10.3.3/10.3.4, Oracle Fusion Middleware 11g, Oracle Enterprise Linux 5.x, Hyperion 11g, Oracle Discoverer 11g, SOA 11g, IDM 11g, Oracle Load Testing, ODI 11g, Oracle HTTP Server, UCM 11g, JRockit 1.6

Client: HP Enterprise Services LLC, Rancho Cordova, CA May 2008 – Apr 2011

Role: Weblogic Administrator

Responsibilities:

Installed, Configured and Administered WebLogic 8.x in clustered environments.

Responsible for handling the Support Incident Database - the online ticketing system.

Configured domains, clusters, JDBC connection pools, data sources, and JMS servers.

Deployed and troubleshot JAR, WAR, and EAR files in clustered environments.

Monitored mission-critical applications using tools such as Introscope and SiteScope.

Designed, configured, and tested IBM MQ 6.0 as a foreign JMS provider with WebLogic Server.

Configured LDAP using Netscape Directory Server for user authentication.

Used the Configuration Wizard extensively to create and manage WebLogic domains.

Installed WebLogic service packs and patches.

Involved in migrating the setup from WebLogic 8.1 to 9.x.

Configured reliable HTTP session management for clustered applications, along with DNS, FTP, and virtual hosts.

Installed and Configured Sun Java Web Server 6.1 to proxy with WebLogic Server.

Provided backup procedures and recovery steps for protection against the most common failure scenarios.

Installed and configured JBoss 4 and integrated Apache Web Server to work with the Application Server.

Configured BIG IP F5 Load balancer to provide high scalability, availability and reliability.

Managed software releases and controlled source code on UNIX (Solaris) using ClearCase.

Deployed and undeployed both JBoss services and custom applications.

Successfully deployed Oracle IDM on JBoss 4/Oracle 10g to manage the identities of employees.

Renewed WebLogic certificates and installed new certificates for the newly created regions.

Developed scripts for automatic startup and shutdown of Admin Server and Managed Servers.
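Start/stop automation scripts of the kind described above often wrap the WebLogic-supplied domain scripts. In this sketch, `DOMAIN_HOME` and the server name are hypothetical, and the underlying WebLogic scripts are echoed rather than executed so the sketch runs without a WebLogic installation.

```shell
#!/usr/bin/env bash
# Sketch of a start/stop wrapper for WebLogic managed servers. DOMAIN_HOME
# and server names are placeholders; commands are echoed, not executed.
set -euo pipefail

DOMAIN_HOME="/opt/bea/user_projects/domains/mydomain"   # hypothetical path

wl_ctl() {   # usage: wl_ctl start|stop <managed-server-name>
  case "$1" in
    start) echo "nohup $DOMAIN_HOME/bin/startManagedWebLogic.sh $2 > /tmp/$2.out 2>&1 &" ;;
    stop)  echo "$DOMAIN_HOME/bin/stopManagedWebLogic.sh $2" ;;
    *)     echo "usage: wl_ctl start|stop <server>" >&2; return 1 ;;
  esac
}

wl_ctl start ManagedServer1
wl_ctl stop  ManagedServer1
```

In practice such wrappers also check the Admin Server is up first and verify the managed server's state afterwards (e.g. via WLST) before reporting success.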

Tuned TCP/IP, JVMs, garbage collection, the Java stack, and native threads.

Resolved complex issues in subsystems such as plug-ins, security, JDBC, clusters, EJB, web apps, JMS, server and domain migrations, and JDBC drivers.

ENVIRONMENT: WebLogic 8.x/9.x, Apache 2.0 Server, IBM MQ 6.0, Oracle 9i/10g, JBoss 4, Oracle IDM, Sun Web Server 6.1, Red Hat Linux 4, F5 Load Balancer, JDBC, JNDI, JMS, Introscope, SiteScope, SiteMinder, LDAP.


