
Unix Administrator / Cyber Security

Location:
Othon P. Blanco Municipality, Quintana Roo, 77965, Mexico
Posted:
March 11, 2023

Giri Nadarajah

Professional Training/Qualifications:

Bachelor of Computer Mathematics from Carleton University, Ottawa – 2002

Hortonworks HDP Certified Administrator (HDPCA)

Cloudera Certified Administrator for Apache Hadoop (CCAH)

Sun Certified Solaris Administrator (SCSA)

Sun Certified Network Administrator (SCNA)

IBM Certified System Administrator (WebSphere Application Server v6.1)

Technical Skills:

Big Data Ecosystem: Hadoop, MapReduce, HDFS, Hive, Pig, Sqoop, Ranger, Apache Avro, Mahout, Solr, PuTTY, Oozie, Flume, Knox, HBase, ZooKeeper, NiFi, MiNiFi, Zeppelin, Hortonworks DPS, HDP, HDF, Grafana, and Dr. Elephant

Ticket Tracking Systems: Remedy, ServiceNow, APPLIX Ticketing System, and Service Center

Operating Systems: Windows 7/2008/2012, Red Hat 5/6.x, Solaris 9/10/11

Monitoring: HP OpenView, Ganglia, Nagios, uptime, TWS

Virtualization: VMware, LVM, ZFS, LDOM, Containers (Solaris 8 & 9)

Directory Server: Active Directory, iPlanet/Sun ONE Directory, IBM Tivoli Directory

Naming Services: NIS/YP, DNS, LDAP, HDFS

Software: Microsoft Excel, MS Office, MS PowerPoint, MS Visio, MS Outlook, MS Access

Professional Experience:

Rogers Telecom, Toronto, ON August 2017 - Present

Big Data Admin

Drive Proof of Concept (POC) and Proof of Technology (POT) evaluations on interoperable technology platforms

Devised Big Data strategy with a comprehensive design and roadmap, which included innovations for the next generation of technology.

Enabled encryption in NPE environments and documented the steps for executing it in the Prod environment (Ranger KMS)

Delivered proof of concept for LLAP and DataPlane Service (DPS) and proposed solutions

SME for managing large-scale server infrastructure on Linux/RHEL-based operating systems with Hortonworks Hadoop (HDP) and the Hadoop stack, including HDFS, Pig, Hive, MapReduce, Sqoop, Flume, Spark, Kafka and others.

Provided Hive export/import scripts for QA and Dev (see the sketch below)
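
A minimal sketch of such a script, assuming beeline connectivity and purely illustrative host, database and table names (prod-hive, qa-hive and sales_db.orders are placeholders):

    #!/usr/bin/env bash
    # Sketch only: copy one Hive table from Prod to QA via EXPORT/IMPORT.
    # All hostnames, ports and object names are placeholders.
    set -euo pipefail

    DB="sales_db"                              # assumed database
    TABLE="orders"                             # assumed table
    STAGE="/tmp/hive_export/${DB}/${TABLE}"    # HDFS staging path

    # Export table data and metadata to an HDFS staging directory on the source cluster
    beeline -u "jdbc:hive2://prod-hive:10000/${DB}" -e "EXPORT TABLE ${TABLE} TO '${STAGE}';"

    # Copy the staging directory to the target cluster
    hadoop distcp "hdfs://prod-nn:8020${STAGE}" "hdfs://qa-nn:8020${STAGE}"

    # Import on the target (QA/Dev) cluster
    beeline -u "jdbc:hive2://qa-hive:10000/${DB}" -e "IMPORT TABLE ${TABLE} FROM '${STAGE}';"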

Proposed NPE strategy and procedures

Worked with NPE automation to sync all the environments and provided the solution

Collect Ambari blueprint data daily in all environments

If there are any configuration changes within the same environment, an email notification is sent to the team.

Configuration changes are compared between development and production, and between QA and production (a sketch of the blueprint collection and comparison follows below)
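
A minimal sketch of the daily blueprint pull and cross-environment comparison, using Ambari's blueprint export endpoint; the Ambari hosts, cluster names, credentials and mail recipient are placeholders:

    #!/usr/bin/env bash
    # Sketch only: export each cluster's configuration as an Ambari blueprint and diff against Prod.
    set -euo pipefail

    OUT_DIR="/var/log/blueprints/$(date +%F)"
    mkdir -p "${OUT_DIR}"

    # Pull the blueprint (cluster configuration) from each environment
    for ENV in dev qa prod; do
      curl -s -u admin:"${AMBARI_PASS}" -H 'X-Requested-By: ambari' \
        "http://ambari-${ENV}:8080/api/v1/clusters/${ENV}_cluster?format=blueprint" \
        > "${OUT_DIR}/${ENV}_blueprint.json"
    done

    # Compare Dev and QA against Prod; mail the team if anything differs
    for ENV in dev qa; do
      if ! diff -u "${OUT_DIR}/prod_blueprint.json" "${OUT_DIR}/${ENV}_blueprint.json" \
           > "${OUT_DIR}/${ENV}_vs_prod.diff"; then
        mail -s "Ambari config drift: ${ENV} vs prod" bigdata-team@example.com \
          < "${OUT_DIR}/${ENV}_vs_prod.diff"
      fi
    done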

Provide onsite support during business hours.

Jointly agree on the Deliverables with the designated Rogers manager

Build scripts to automate routine tasks and infrastructure/configuration automation with orchestration tools such as Puppet.

Attend Status Meetings and provide updates on the Deliverables

Identify and report risks and issues to the designated Rogers manager in a timely manner

Track and report hours to the designated Rogers manager in a timely manner.

Employed best practices to design, configure, tune and secure Hadoop cluster using Ambari.

Performed capacity planning and managed capacity utilization to ensure high availability and multi-tenancy of multiple Hadoop clusters.

Provided technical input to network architecture (TCP/IP, DNS, WINS, DHCP, AD, etc.) and datacenter teams during project solution design, development, deployment and maintenance phases.

Troubleshoot day-to-day issues on multiple Hadoop clusters.

Assisted with preparing and reviewing vendor SOWs

Worked closely with hardware and software vendors to design an optimal environment for Big Data.

HDFS file system management and monitoring (see the sketch below).
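
A few of the routine checks behind this item, shown as a sketch (the /data path is illustrative):

    # Cluster capacity, per-DataNode usage and dead/decommissioning nodes
    hdfs dfsadmin -report | head -n 25

    # File system consistency: missing, corrupt and under-replicated blocks (summary at the end)
    hdfs fsck / | tail -n 30

    # Largest consumers under an illustrative /data area, sorted by size in bytes
    hdfs dfs -du -s '/data/*' | sort -n -r | head -n 10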

Responsible for the administration of new and existing Hadoop infrastructure.

Work closely with Enterprise Data and infrastructure, network, database, business intelligence and application teams to ensure business applications are highly available and performing within agreed-upon service levels.

DBA responsibilities included pushing DDLs to the production environment, working with the Enterprise Data Enabling team on implementation, software installation and configuration, database backup and recovery, and database connectivity and cyber security

Working with end clients on ODBC and JDBC connectivity issues (see the sketch below)
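
A typical first-pass check when an end client reports a JDBC/ODBC failure, sketched with placeholder host, port and realm and assuming a Kerberized cluster:

    # Confirm the client has a valid Kerberos ticket
    klist

    # Confirm the HiveServer2 port is reachable before suspecting the driver
    nc -zv hiveserver2.example.com 10000

    # Exercise the JDBC path end to end with beeline (the same surface most BI/ODBC tools hit)
    beeline -u "jdbc:hive2://hiveserver2.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM" \
      -e "SELECT 1;"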

In charge of setup, configuration and security for Hadoop clusters using Kerberos

Accountable for storage, performance tuning and volume management of Hadoop clusters and MapReduce routines.

Working with HDP 2.6.x architecture to implement best practices

Adding new nodes into PROD/Dev/QA and configuring Hadoop clusters integrated with Kerberos security

Documenting project design and test plan for various projects landing on Hadoop platform

Telus, Edmonton, AB August 2016 – August 2017

Senior Hadoop Administrator

Technical Lead and SME on IT projects and/or worked with other groups as part of project teams. Familiar with tools like MS Project, Visio, Excel, and Word to produce technical documentation including dataflow diagrams.

HDFS File system management and monitoring.

HDFS support and maintenance.

User provisioning for data lake users in Prod and Non-prod

Manage and analyze Hadoop log files

Responsible for the administration of new and existing Hadoop infrastructure.

Work closely with Enterprise Data and infrastructure, network, database, business intelligence and application teams to ensure business applications are highly available and performing within agreed-upon service levels.

DBA responsibilities included pushing DDLs to the production environment, working with the Enterprise Data Enabling team on implementation, software installation and configuration, database backup and recovery, and database connectivity and cyber security

Working with end clients on ODBC and JDBC connectivity issues

In charge of setup, configuration and security for Hadoop clusters using Kerberos

Accountable for storage, performance tuning and volume management of Hadoop clusters and MapReduce routines.

Working with HDP 2.6.x architecture to implement best practices

Adding new nodes into the data lake and configuring Hadoop clusters integrated with Kerberos security on Hortonworks (HDP 2.4.2.0 to 2.6.x) on the Linux platform

Involved with building hosts (Kickstart, PXE boot) for automation and configuration management using orchestration tools such as Puppet

Create Knox and Ranger policies and integrate them with Kerberos (see the sketch below)
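
One way to script Ranger policy creation is its public REST API; a minimal sketch, in which the Ranger host, credentials, service name, HDFS path and group are all placeholders:

    # Sketch only: create an HDFS read/execute policy for a group via the Ranger REST API
    curl -u admin:"${RANGER_PASS}" -H 'Content-Type: application/json' \
      -X POST "http://ranger.example.com:6080/service/public/v2/api/policy" \
      -d '{
            "service": "cluster_hadoop",
            "name": "datalake_landing_read",
            "resources": { "path": { "values": ["/data/landing"], "isRecursive": true } },
            "policyItems": [ {
              "accesses": [ { "type": "read", "isAllowed": true },
                            { "type": "execute", "isAllowed": true } ],
              "groups": [ "analysts" ]
            } ]
          }'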

Documenting project design and test plan for various projects landing on Hadoop platform

Performance tuning of Hadoop clusters and Hadoop MapReduce routines

Experience monitoring overall infrastructure security and availability, and monitoring of space and capacity usage including Hadoop, Hadoop clusters, Hadoop APIs and the Hadoop stack including HDFS, Pig, Hive, MapReduce, Sqoop, Flume, Spark, Kafka and others

Monitor data lake connectivity, security, performance and file system management

Conduct day-to-day administration and maintenance work on the datalake environment

New technologies are tested upon business request (POCs).

Provide technical inputs during project solution design, development, deployment and maintenance phases

Work closely with hardware & software vendors, design & implement optimal solutions

Assist and advise network architecture (TCP/IP, DNS, WINS, DHCP, AD, etc...) and datacenter teams during hardware installations, configuration and troubleshooting

First point of contact for vendor escalation: tickets are created with the required information and followed up until the issue has been resolved.

Provide guidance and assistance for administrators in such areas as server builds, operating system upgrades, capacity planning, performance tuning.

CIBC Bank, Toronto, ON March 2011 – July 2016

Hadoop Platform Engineer / Administrator

Installed and configured Hadoop clusters integrated with Kerberos security managed by Cloudera

Working with data delivery teams to setup new Hadoop users.

Setting up Linux users, manually setting up Kerberos keytabs and principals, and testing HDFS, Pig, Hive and MapReduce access for the new users (see the sketch below)
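
A minimal sketch of that onboarding flow; the realm, admin principal, keytab path and username are placeholders:

    # Sketch only: provision a new Hadoop user and smoke-test access
    USER=jdoe
    REALM=EXAMPLE.COM

    # Linux account plus HDFS home directory (run with HDFS superuser rights)
    useradd -m "${USER}"
    hdfs dfs -mkdir -p "/user/${USER}"
    hdfs dfs -chown "${USER}:${USER}" "/user/${USER}"

    # Kerberos principal and keytab
    kadmin -p admin/admin@"${REALM}" -q "addprinc -randkey ${USER}@${REALM}"
    kadmin -p admin/admin@"${REALM}" -q "ktadd -k /etc/security/keytabs/${USER}.keytab ${USER}@${REALM}"

    # Smoke-test HDFS and Hive access as the new user
    kinit -kt "/etc/security/keytabs/${USER}.keytab" "${USER}@${REALM}"
    hdfs dfs -ls "/user/${USER}"
    beeline -u "jdbc:hive2://hiveserver2:10000/default;principal=hive/_HOST@${REALM}" -e "SHOW DATABASES;"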

Install Knox and Ranger and integrate them with Kerberos and LDAP

Documenting project design and test plan for various projects landing on Hadoop platform

Cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Ambari and other tools

Work closely with platform and other engineering teams to set service-level expectations for big data projects

Performed several upgrades on Cloudera distribution of Hadoop using Ambari

Performance tuning of Hadoop clusters and Hadoop MapReduce routines

Screen Hadoop cluster job performances and capacity planning

Migrated existing data to Hadoop from RDBMS (SQL Server & Oracle) using Sqoop for processing (see the sketch below).
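
A representative Sqoop import, sketched with placeholder connection details, source table and Hive target:

    # Sketch only: import an Oracle table into a Hive staging database
    sqoop import \
      --connect "jdbc:oracle:thin:@//oradb.example.com:1521/ORCLPDB" \
      --username etl_user -P \
      --table CUSTOMERS \
      --split-by CUSTOMER_ID \
      --num-mappers 4 \
      --hive-import \
      --hive-database staging \
      --hive-table customers \
      --compress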

Implement best practices to configure and tune Big Data environments, application and services, including capacity scheduling

Experience monitoring overall infrastructure security and availability, and monitoring of space and capacity usage including Hadoop, Hadoop clusters, Hadoop APIs and the Hadoop stack including HDFS, Pig, Hive, MapReduce, Sqoop, Flume, Spark, Kafka and others

Responsible for loading and managing unstructured and semi-structured data coming from different sources into the Hadoop cluster using Flume.

Collaborating with application teams to install operating system and Hadoop updates, patches, version upgrades when required

Build Unix shell scripts to automate routine tasks and infrastructure/configuration automation.

Knowledge of Hadoop architecture and ecosystem components such as HDFS, Job Tracker, Task Tracker, NameNode, DataNode, YARN and the MapReduce programming paradigm

Experience with deploying Hadoop in a VM as well as physical server environment

Monitor Hadoop cluster connectivity and security and File system management

Database backup and recovery and Database connectivity and security

Perform capacity planning based on Enterprise project pipeline and Enterprise Big Data roadmap

Provide technical inputs during project solution design, development, deployment and maintenance phases

Work closely with hardware & software vendors, design & implement optimal solutions

Assist and advise network architecture (TCP/IP, DNS, WINS, DHCP, AD, etc...) and datacenter teams during hardware installations, configuration and troubleshooting

Experience handling critical production changes under structured change management guidelines (open tickets and design and implementation documents followed by ECM, which approves the change and is used by the rest of the LOB to triage production issues)

Create, accept and update tasks; open routine, U1 and U4 tickets; present production changes on behalf of the Big Data team in the weekly meeting (TAB/CAB); and answer any questions about the changes

Provide guidance and assistance for administrators in such areas as server builds, operating system upgrades, capacity planning, performance tuning.

UNIX Administrator

Standalone Systems:

Device configuration

Disks, physical, logical slices, and Format

Mounting, Create and Maintaining File Systems (Linux and Unix)

Scheduled Process Control (SPC)

The Boot PROM and System Boot Process

Administration of Software Packages and Patches (RPM, YUM)

High-Availability environments and fail-over techniques

VERITAS

VERITAS Volume Manager and VCS installation and configuration.

Procedure for LUN addition and file system creation/resize in VxVM (see the sketch below)
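
A minimal sketch of that procedure; the disk group, disk access name, volume name and growth size are placeholders:

    # Make the new LUN visible to VxVM and confirm it appears
    vxdctl enable
    vxdisk list

    # Initialize the disk and add it to the existing disk group
    vxdisksetup -i emc0_1234
    vxdg -g appdg adddisk appdg05=emc0_1234

    # Grow the volume and its VxFS file system in one step, then verify
    vxresize -g appdg appvol +20g
    df -h /app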

Multipath health check

Clearing faults, onlining/offlining resources

Freeze / Unfreeze service group

Basic, advanced Troubleshooting

Incident Management and Change Management

Work with EIM for PROD – Sev 1 /Sev2 issues

Open U1, U4 changes

Work with LOB to get downtime, and verification for the changes

Incident bridge process / conference call / bridge calls

Escalating to/engaging other teams (ITS/AO/ACS/DCIS)

Scotia Capital (Scotiabank), Toronto, ON August 2007 – January 2011

Systems Administrator

Support and troubleshoot UNIX and MS Windows issues via fax, the APPLIX Ticket System and over the phone

Provided prompt, effective and day-to-day technical support to clients via phone, email and in person

User account creation in NIS and local environments, as well as NFS file systems in all environments (PROD, DEV, UAT) under bank policies

Used the vi editor to edit the passwd, shadow, group, netgroup and auto_home files to create, remove or change user accounts (see the sketch below)
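
A minimal sketch of the NIS side of that workflow on the master server; the username is a placeholder and the file locations follow common Solaris defaults:

    # Edit the NIS source files (passwd, shadow, group, netgroup, auto_home)
    vi /etc/passwd /etc/shadow /etc/group /etc/netgroup /etc/auto_home

    # Rebuild the NIS maps and push them to the slave servers
    cd /var/yp && make

    # Confirm the new account is visible to NIS clients
    ypmatch jdoe passwd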

Worked with the development team to test new projects or update applications in UAT and production environments

Troubleshot user connectivity and server issues over the phone and escalated to the appropriate external resolver group electronically

Responsible for creating, removing or changing Sybase, Oracle database user accounts and lock, unlock and reset password for existing users

Local Servers Administrator

Familiarity with standard UNIX command line tools

Create/remove/change 500 Sun local server accounts via fax, email or ticket system request

Responsible for resetting passwords for Sun server accounts

Created local directories for users and granted permissions

Maintain and update all local UNIX user accounts in the database

Responsible for backup and cleanup of password and shadow files (see the sketch below)
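
A minimal sketch of such a backup-and-cleanup job; the backup directory and retention period are placeholders:

    #!/usr/bin/env bash
    # Sketch only: keep dated copies of the account files and prune old backups
    set -euo pipefail

    BACKUP_DIR=/var/backups/accounts
    DATE=$(date +%F)
    mkdir -p "${BACKUP_DIR}"

    cp -p /etc/passwd "${BACKUP_DIR}/passwd.${DATE}"
    cp -p /etc/shadow "${BACKUP_DIR}/shadow.${DATE}"
    chmod 600 "${BACKUP_DIR}/shadow.${DATE}"

    # Remove backups older than 30 days
    find "${BACKUP_DIR}" -type f -mtime +30 -delete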

CareFirst BlueCross BlueShield, Columbia, MD, USA

UNIX/ LDAP Administrator July 2005 – February 2007

Migrated Directory Server v5.1 to v5.2 and installed all required patches to run the directory server properly

Monitoring configuration files in the database for the database cache hit ratio of entries in the directory server.

Reconfiguring and tuning new instances of Tivoli Directory Server v5.1/5.2 with backup files

Monitoring replication status and maintaining replica and master synchronization in order to maintain the integrity of searches on replica consumers (see the sketch below).
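
A product-agnostic spot-check for replica drift is to compare a known entry's modifyTimestamp on the master and each consumer; the hosts, bind DN and entry DN below are placeholders:

    # Sketch only: compare an entry's last modification time across master and replica
    for HOST in ldap-master.example.com ldap-replica1.example.com; do
      echo "== ${HOST} =="
      ldapsearch -h "${HOST}" -p 389 -D "cn=Directory Manager" -w "${LDAP_PASS}" \
        -b "uid=jdoe,ou=people,dc=example,dc=com" -s base modifyTimestamp
    done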

Creating complex replication streams by enabling replication on replicas with replication agreements on master/hub directory servers.

Creating security permissions by creating rules, realms and policies within multiple policy servers to protect resources stored on web servers.

Evaluated new software/hardware products, including enhancements, upgrades and fixes to existing software products

IBM WebSphere Application Server v6.0 packaging and installation in an enterprise environment (browser, HTTP server, plug-in, firewall, database servers, WebSphere MQ, load balancing)

Install, verify, and troubleshoot WebSphere Application Server and create profiles (see the sketch below)
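
A minimal sketch of profile creation and verification, using the v6.1-style manageprofiles.sh tooling (earlier 6.0 installs shipped wasprofile.sh instead); the install root, profile, node and cell names are placeholders:

    WAS_HOME=/opt/IBM/WebSphere/AppServer

    # Create a stand-alone application server profile
    "${WAS_HOME}/bin/manageprofiles.sh" -create \
      -profileName AppSrv01 \
      -templatePath "${WAS_HOME}/profileTemplates/default" \
      -profilePath "${WAS_HOME}/profiles/AppSrv01" \
      -nodeName appNode01 -cellName appCell01 -hostName "$(hostname -f)"

    # Verify the installation level and start the server
    "${WAS_HOME}/bin/versionInfo.sh"
    "${WAS_HOME}/profiles/AppSrv01/bin/startServer.sh" server1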

Implement security policies and protect WebSphere resources

Create clusters and cluster members

Create and configure DRS (Data Replication Service) replication domains

Responsible for WebSphere backup/restore and archive configuration tasks

Install and configure IBM HTTP Web Server


