Entry Level Data Platform

Location:
Austin, TX
Salary:
135000
Posted:
July 14, 2023

Raghuram Pola
Senior Site Reliability Engineer

Mobile: +1-512-***-****

Email: adyalk@r.postjobfree.com
LinkedIn: https://www.linkedin.com/in/raghuram-pola-95795b12b/

Professional Summary: Seeking all-level managerial assignments in Hadoop and system administration/technical support with a growth-oriented organization.

Big Data Hadoop and UNIX Administration:

• 12+ years of extensive experience with Hadoop and cloud technologies across operating systems including Solaris, Linux, AIX, and HP-UX.

• Hands-on experience in installing, configuring, supporting, and managing Hadoop clusters using Apache Hadoop, Hortonworks Data Platform (HDP), Cloudera (CDH5), and Cloudera Data Platform (CDP) distributions.

• Hadoop cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.

• Managing user/group administration through Ansible Tower; user security administration.

• Involved in Hadoop cluster environment administration, including adding and removing cluster nodes, cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.

• Adding new nodes to an existing cluster and recovering from NameNode failure.

• Creating file systems, disk partitioning, and troubleshooting.

• Scheduling and executing regular system management activities, including system reboots and performing system backups and restores.

• Strong knowledge of the HDFS framework, MapReduce and YARN concepts, and the ZooKeeper mechanism.

• Involved in installing Hadoop ecosystem components.

• Hands-on experience with Ambari and Hortonworks Data Platform (HDP) upgrades.

• Extensive knowledge of troubleshooting Spark job failures.

• Troubleshooting Spark out-of-memory issues at driver and executor nodes; reviewing thread dumps.

• Installing customized Spark and adding the services through Cloudera and Hortonworks.

• Reviewing long-running Spark jobs and allocating resources to jobs.
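Resource allocation for Spark jobs, as reviewed above, is typically set with submit-time flags. A dry-run sketch (all values and the jar name are hypothetical; the command is printed rather than executed, since it needs a live YARN cluster):

```shell
#!/bin/sh
# Dry-run sketch: print a spark-submit invocation with explicit memory sizing.
run() { echo "WOULD RUN: $*"; }

run spark-submit \
    --master yarn --deploy-mode cluster \
    --driver-memory 4g \
    --executor-memory 8g \
    --executor-cores 4 \
    --num-executors 10 \
    --conf spark.executor.memoryOverhead=1g \
    app.jar
```

Raising `--executor-memory` and `spark.executor.memoryOverhead` is a common first step when executors are killed for exceeding container memory limits.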

• Checking Spark jobs for infrastructure issues versus application issues.

Educational Qualifications:

• Master of Computer Applications (M.C.A.) from Osmania University.

• Bachelor of Science (B.Sc. Computers) from Osmania University.

Professional Experience

• Currently working as Technical Lead at Wipro Technologies (Austin, Texas & Hyderabad) from April 2, 2018 to present.

• Worked as Technical Services Specialist at CGI, Hyderabad, from January 8, 2018 to March 30, 2018.

• Worked as Technical Specialist Services at IBM, Hyderabad, from January 30, 2017 to January 3, 2018.

• Worked as Associate Consultant (L3 Support) with HCL Technologies Ltd, Hyderabad, from May 18, 2012 to January 25, 2017.

• Worked as Engineer Ops with Tech Mahindra Ltd, Noida, from December 13, 2010 to May 10, 2012.

Professional Certifications:

• AWS Certified Solutions Architect - Associate (Validation Number VN8V39XDCFEE15WQ)

• AWS Certified Data Analytics - Specialty (Validation Number MN9Z2XXBLBB11WCR)

• Sun Certified System Administrator (SCSA) for Solaris 10, Parts 1 & 2.

• ITIL v3 certified.

• Veritas Storage Foundation and High Availability 5.0 for UNIX certified.

• Attended professional training for VCS by Symantec.

• AZ-104 certified.

DevOps Tools: Ansible

• Installation of Ansible master and client servers.

• Running ad-hoc commands and creating inventories.

• Creating playbooks per environment requirements.

• Installing and managing Ansible Tower.

• Executing playbook templates and troubleshooting failed jobs.

• Running ad-hoc commands from Tower and testing the prod, dev, and test environments.
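The inventory work described above starts from a static inventory file. A minimal sketch (all host and group names are hypothetical; the ansible commands themselves are shown as comments, since they need reachable hosts):

```shell
#!/bin/sh
# Write a minimal static inventory for a hypothetical Hadoop environment.
cat > /tmp/inventory.ini <<'EOF'
[hadoop_workers]
worker01.example.com
worker02.example.com

[hadoop_masters]
master01.example.com
EOF

# Ad-hoc commands and playbook runs would then target a group, e.g.:
#   ansible hadoop_workers -i /tmp/inventory.ini -m ping
#   ansible-playbook -i /tmp/inventory.ini site.yml --limit hadoop_workers
echo "inventory written to /tmp/inventory.ini"
```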

Amazon Web Services:

Creation of EMR clusters and adding services as per requirements.

Creation of S3 buckets and Elastic Load Balancers.

Expertise with AWS tools (EC2, S3, VPCs, RDS).

Experience with clustering / load-balanced solutions.

Troubleshooting Hadoop job failures on AWS clusters and Spark job failures.
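The S3 and EMR work above can be sketched as a dry run with the AWS CLI (bucket, path, and cluster parameters are hypothetical; commands are printed rather than executed, since they need live AWS credentials):

```shell
#!/bin/sh
# Dry-run sketch: print AWS CLI commands instead of executing them.
run() { echo "WOULD RUN: $*"; }

BUCKET="s3://example-data-bucket"   # hypothetical bucket
LOCAL="/data/export"                # hypothetical local path

run aws s3 cp "$LOCAL" "$BUCKET/export/" --recursive   # local -> S3
run aws s3 cp "$BUCKET/export/" "$LOCAL" --recursive   # S3 -> local
run aws emr create-cluster --name demo --release-label emr-6.9.0 \
    --applications Name=Spark Name=Hadoop \
    --instance-count 3 --instance-type m5.xlarge
```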

Distributed copying of files from S3 to local and local to S3.

RedHat Linux and Oracle Linux

Installation of the RedHat Linux operating system.

Creating user accounts and maintaining them as per request.

Managing and controlling system services using the service and chkconfig commands.

Installation, configuration, and administration of Logical Volume Manager (LVM).

Creating a yum repository using the RHEL OS CD or a RHEL ISO image.

Troubleshooting network connectivity problems, user login problems, and server boot issues.

Creating, managing, and administering PVs, VGs, and LVs using Logical Volume Manager.

Implementing RAID 0, 1, and 5 using LVM as per requirements.

Device labeling, bringing disks under LVM control, and making volumes and file systems.

Creating, extending, and reducing volume groups (VGs) online.

Importing and exporting volume groups across systems with common storage.

Creating, resizing, restarting, and removing logical volumes (LVs).

Configuring disaster recovery of servers and the implementation plan.

Managing LVM/SVM/VxVM/VCS/ZFS/LDOMs/Zones on Solaris & Linux operating systems.
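The RAID-level trade-offs behind the LVM work above come down to simple capacity arithmetic. A sketch with hypothetical disk counts and sizes:

```shell
#!/bin/sh
# Usable capacity for 4 x 500 GB disks under different RAID levels.
disks=4
disk_gb=500

raid0_gb=$(( disks * disk_gb ))         # striping: full capacity, no redundancy
raid1_gb=$(( disks * disk_gb / 2 ))     # mirroring: half the raw capacity
raid5_gb=$(( (disks - 1) * disk_gb ))   # one disk's worth goes to parity

echo "RAID0=${raid0_gb}GB RAID1=${raid1_gb}GB RAID5=${raid5_gb}GB"
```

This is why RAID 5 is often chosen when capacity matters and RAID 1 when write performance and simple recovery matter.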

Good knowledge of virtualization with VMware ESXi.

Technical Skills:

Work Profile & Responsibilities:

Working as SRE with the Apple client from Wipro, April 2, 2018 to present. Infrastructure size: 70 clusters. Environment: HDP, Cloudera, and CDP clusters.

• Working in an SRE role on the big data Hadoop platform.

• Cluster build activities and migrations.

• Managed multiple clusters of 2000, 1500, and 1000 nodes.

• Providing hardware architectural guidance, planning and estimating cluster capacity, and creating roadmaps for Hadoop cluster deployment.

• Hands-on experience in installing, configuring, supporting, and managing Hadoop clusters using Apache, HDP distributions, and Cloudera (CDH5, CDH6, CDP).
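Capacity estimates like those described above can be roughed out with simple arithmetic. A sketch (the data volume, replication factor, and headroom figures are hypothetical; replication factor 3 is the HDFS default):

```shell
#!/bin/sh
# Rough raw-storage estimate for an HDFS cluster.
data_tb=100        # logical data to store
replication=3      # HDFS default replication factor
headroom_pct=25    # spare capacity for growth and temporary data

raw_tb=$(( data_tb * replication * (100 + headroom_pct) / 100 ))
echo "Raw HDFS capacity needed: ${raw_tb} TB"
```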

Hadoop Ecosystem: HDFS, MapReduce, Pig, Hive, HBase, Flume, Oozie, Sqoop, ZooKeeper, Spark

Hadoop Stack: Hortonworks Data Platform (HDP) distributions, Cloudera (CDH5), Cloudera Data Platform (CDP)

DevOps Tools: Ansible, Docker, Git, Puppet, Jenkins, CI/CD tools, Arches deployment tool, Wiggles

OS & Veritas: RedHat/CentOS/Debian/Fedora/Oracle Linux (5.0, 6.0 & 7.0), Solaris 8, 9, 10 & 11, Solaris Zones, ZFS, SUSE Linux, LDOMs; Veritas Volume Manager, Veritas Cluster

Hardware: Sun workstations, Sun Ultra Enterprise servers (E2900, E4800, E4900), SunFire servers (X4170/V240/V440/V480/280R/V880/E6800/E6900/12K/15K/25K/T5140), HP DL380 G5, HP Blade 460c G1, Cisco UCS C240-M3 and x86 servers, HP, Dell, and IBM model servers

Networking: TCP/IP, NFS, NIS, automount, DNS, JumpStart

Tools: Camcs, Infoblox, Splunk, PowerBroker, PowerKeeper, Netcool, Jira, BMC Remedy, ServiceNow (SNOW), Ambari, Centrify, Autosys

DNS: Infoblox

Cloud: AWS, Azure & Oracle Cloud basics

• Hadoop cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.

• Installed and configured multi-node, fully distributed Hadoop clusters.

• HDP to CDP migrations.

• Responsible for managing data coming from different sources.

• Expertise in Hadoop cluster environment administration, including adding and removing cluster nodes, cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.

• Adding new nodes to an existing cluster and recovering from NameNode failure.

• Decommissioning and commissioning nodes on a running cluster.

• Managing node connectivity and security on the Hadoop cluster.

• Experienced in managing and reviewing Hadoop log files.

• Troubleshooting build and deployment failures.

• Troubleshooting Jenkins job failures associated with AODC and AWS clusters.

• Troubleshooting Jenkins job failures associated with Kubernetes cluster failures.

• Identifying deployment failures, coordinating with different application teams, and fixing the issues.

• Creating and monitoring new Jenkins jobs and troubleshooting issues.

• Keeping kernel patching of all cluster hosts (NameNode, DataNode, ResourceManager, ZooKeeper, HBase) up to date.
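Decommissioning a node, as described above, commonly goes through an excludes file plus a NameNode refresh. A sketch (the hostname and path are hypothetical; the path must match dfs.hosts.exclude in hdfs-site.xml, and the live-cluster steps are shown as comments):

```shell
#!/bin/sh
# Add the host to the HDFS excludes file.
EXCLUDES=/tmp/dfs.exclude
echo "worker07.example.com" >> "$EXCLUDES"

# On a live cluster the NameNode would then be told to re-read it:
#   hdfs dfsadmin -refreshNodes
# and the node watched until it reports "Decommissioned":
#   hdfs dfsadmin -report
echo "excludes file updated"
```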

Worked as Technical Services Specialist at CGI on the Bell project, Hyderabad, from January 8, 2018 to March 30, 2018.

• Took knowledge transfer from the team regarding environment setup and the different technologies in use.

• Worked on Hadoop, Oracle Cloud, and OS issues related to Hadoop clusters.

• Designed and executed ETL operations utilizing the appropriate Hadoop stack.

• Wrote Ansible YAML playbooks as per requirements.

Worked as Technical Specialist Services at IBM with the ASEA Brown Boveri client from January 30, 2017 to January 3, 2018. Infrastructure size: 6 clusters. Environment: Cloudera, Hortonworks, and vanilla Hadoop distributions.

• Involved in Hadoop cluster environment administration, including adding and removing cluster nodes, cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.

• Adding new nodes to an existing cluster and recovering from NameNode failure.

• Decommissioning and commissioning nodes on a running cluster.

• Built Hadoop clusters from scratch in a “start small and scale quickly” approach.

• Implemented the Capacity Scheduler for efficient utilization of cluster resources.

• Performance tuning of the Hadoop architecture and working on Hadoop issues.

• User creation and installation of single- and multi-node Hadoop clusters.

Worked as Hadoop & UNIX Admin at HCL Comnet, Hyderabad, from May 18, 2012 to January 25, 2017, with the VISA client (2012-2014) and the Toyota Finance client (2015-2017). Infrastructure size: 4 clusters. Environment: Cloudera, Hortonworks, and vanilla Hadoop distributions.

• Providing 24x7 support to the VISA client for Singapore & USA across more than 8 clusters and around 2,000 servers.

• Good at writing Linux shell scripts for deployments and configuration.

• Installing and configuring the Splunk forwarder to forward logs from the NameNode, JobTracker, and other relevant Hadoop services.

• Server hardening and decommissioning of Linux and Solaris servers.

• Server builds on VMware ESX hosts: LUN mapping, IP assignment, VMware Tools upgrades, and vMotion of servers. Creating and cloning templates.

• Hands-on experience with OS upgrades, firmware upgrades, and OS patching (SVM) on midrange, entry-level, and high-end servers.

• Hands-on experience with Live Upgrade patching on Solaris servers.

• OS hardening on servers per VISA standards.

• Linux kernel patching, with rollback if a new kernel failure occurs.

• Installing packages using the rpm, yum, and up2date commands.

• Involved in data center migrations: new installations and application-related installations such as NetBackup and Java upgrades per user requirements.

• Communicating with various clients, scheduling and implementing tasks, and powering off and decommissioning old servers.

• Hands-on experience troubleshooting boot issues.

• Package and patch management: adding and deleting packages and patches according to requirements.

• Increasing file systems in ZFS; importing, exporting, and scrubbing pools.

• Troubleshooting GRUB boot issues.

• Coordinating with hardware vendors (Oracle, Symantec, HP, ECS & Cisco) and replacing components where necessary.

• Linux kernel patching through rpm and yum; firmware upgrades through GUI mode. NetBackup client installation on Solaris and Linux servers.

• Kernel patching in UFS and ZFS environments (Solaris 10 UFS and ZFS, Solaris 11 ZFS).

Worked as UNIX Admin (L3 Support) at Tech Mahindra Ltd with the AT&T client, Noida, from December 13, 2010 to May 10, 2012.

Responsibilities:

• Providing 24x7 support to the AT&T client (USA) for more than 12,000 servers.

• Firmware upgrades on midrange, entry-level, and high-end servers.

• Handling P1 issues and outage calls, and interacting with AT&T clients and PSA.

• Hands-on experience migrating Solaris OS (5.8 and 5.9 to 5.10) and RedHat Linux (5.0 to 6.0 & 7.0) using PlateSpin.

• Hands-on experience migrating VxVM (4.0 to 5.0 and 6.0) and VCS (4.0 to 5.0 and 6.0) in test and production environments.

• Package and patch management: adding and deleting packages and patches according to requirements.

• System administration, maintenance, and monitoring of various day-to-day operations.

• Experience with file system issues and disk management; scheduled tasks using crontab and at jobs.

• Hands-on experience with JumpStart installation on Solaris.

• Troubleshooting NIC issues and managing SMF services.

• Backups using SUN and other utilities such as tar and gzip, plus FTP, Telnet, RSH, and SSH.

• Disk mirroring and RAID implementation using Solaris Volume Manager and Veritas Volume Manager; resizing volumes and increasing file systems in VxVM.

• Adding, removing, and modifying service groups and resources.

• Adding and deleting nodes in the existing cluster.

• Working experience with Veritas clusters: bringing service/resource groups online/offline and switching service groups between cluster nodes.

• Installation of user application agents and Oracle Server/Client/Agent.

• Configuring NICs and troubleshooting NIC issues.

• Performance tuning, such as monitoring virtual memory and adding additional swap space as needed.
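Scheduled tasks like those mentioned above are driven by crontab entries. A sketch (the script path and schedule are hypothetical; installing the entry is shown as a comment, since it changes the host's cron state):

```shell
#!/bin/sh
# A crontab line: minute hour day-of-month month day-of-week command.
# This one would run a hypothetical backup script at 02:30 every Sunday.
CRON_LINE='30 2 * * 0 /opt/scripts/weekly_backup.sh >> /var/log/backup.log 2>&1'
echo "$CRON_LINE" > /tmp/crontab.sketch

# On a real host it would be installed with:
#   crontab /tmp/crontab.sketch    (or edited interactively via: crontab -e)
echo "wrote cron sketch"
```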

Personal Details

Marital Status : Married

Languages known : English, Hindi and Telugu

Visa Status : H1B


