

Stamford, CT
October 21, 2019



Prathyusha K

Sr. Cloud/DevOps Engineer

347-***-**** LinkedIn:

Certified AWS Developer Associate with 7+ years of IT experience and excellent knowledge of Configuration Management, Continuous Integration and Continuous Delivery (CI/CD), Build and Release, and Linux and systems administration, with a major focus on the Azure and Amazon Web Services (AWS) cloud platforms. Strong grounding in the principles and best practices of Software Configuration Management (SCM) in Agile, Scrum and Waterfall methodologies.


Hands-on experience in the Azure cloud: worked on Azure web applications, App Services, Azure SQL Database, Azure Blob Storage, Azure Functions, Virtual Machines, the Fabric Controller, Azure AD, Azure Data Factory, Azure Service Bus and Notification Hubs. Proficient in using Azure Service Fabric to package, deploy and manage scalable, reliable microservices and containers.

Experienced in migrating on-premises storage to Microsoft Azure using Azure Site Recovery and Azure Backup, and in deploying Azure IaaS virtual machines and cloud services (PaaS role instances) into secure VNets and subnets with Azure internal load balancers.

Extensively worked with various Azure services such as Web Roles, Worker Roles, Azure Websites, Caching, Azure SQL, networking services, API Management and Active Directory (AD), advocating for, maintaining and monitoring the infrastructure. Maintained the Azure Active Directory (AAD) infrastructure with periodic auditing, troubleshooting and performance tuning.

Expertise in migrating existing v1 (classic) Azure infrastructure to v2 (ARM), scripting and templating the whole end-to-end process. Migrated on-premises workloads to Azure by building the Azure disaster recovery environment, Azure Recovery Services vault and Azure Backup from scratch using PowerShell scripts.

Experience in integrating Spinnaker across AWS EC2 instances in multiple Availability Zones of AWS VPCs to ensure continuous delivery and to monitor application deployments.

Proficient as a cloud administrator, involved in configuration of web apps, Function Apps, VNet integration, HCM, Application Gateway, Application Insights, Active Directory, Azure Key Vault, and encryption and security on Azure using ARM templates and PowerShell scripts.

Experience in creating a Log Analytics workspace and enabling the cluster add-on, leveraging integrated Azure Kubernetes Service monitoring to determine whether requests are failing and to inspect Kubernetes events and logs. Also monitored Kubernetes cluster health using Prometheus and Grafana.

Expertise in eliminating redundant manual infrastructure work by creating CloudFormation templates using the AWS Serverless Application Model (SAM). Deployed RESTful APIs using API Gateway that trigger Lambda functions.
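The API Gateway-to-Lambda pattern described above can be sketched as a minimal handler. This is an illustrative shape, not code from the actual project; the route and response fields are assumptions:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    The `path` field and JSON body shape are illustrative placeholders;
    a real handler would dispatch on route and method.
    """
    path = event.get("path", "/")
    body = {"message": "ok", "path": path}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

API Gateway's proxy integration passes the full request as `event` and expects exactly this `statusCode`/`headers`/`body` return shape.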

Extensively worked on infrastructure development and operations, designing and deploying with AWS services such as EC2, Kinesis, Route 53, DNS, ELB, EBS, AMI, IAM, VPC, S3, RDS, Elastic Beanstalk, CloudFront, CloudTrail, DynamoDB and CloudWatch monitoring.

Hands-on experience in managing infrastructure on AWS with a focus on high availability, fault tolerance and auto scaling using Terraform templates, along with continuous integration and continuous deployment with AWS Lambda and AWS CodePipeline.

Hands-on experience with Docker and Kubernetes in the AWS cloud. Used EKS to manage containerized applications through its nodes, ConfigMaps, selectors and Services, and deployed application containers as Pods.

Developed microservice onboarding tools leveraging Python and Jenkins, allowing easy creation and maintenance of build jobs and deployment of services in Kubernetes.

Experience in building and deploying application code using the Kubernetes CLI, kubectl, alongside kubelet, kubeadm and Kubespray, and scheduling jobs with kube-scheduler. Managed Kubernetes charts using Helm and created reproducible builds of Kubernetes applications.
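Deployment tooling like the above often wraps kubectl in a small script. A hedged sketch (the manifest path and namespace are placeholders; `kubectl apply` and `--namespace` are real kubectl syntax):

```python
import subprocess

def kubectl_apply(manifest, namespace="default", dry_run=True):
    """Build (and optionally run) a `kubectl apply` command for a manifest.

    `manifest` and `namespace` are illustrative inputs. With dry_run=True
    the command list is returned instead of executed, so no cluster or
    kubectl binary is required.
    """
    cmd = ["kubectl", "apply", "-f", manifest, "--namespace", namespace]
    if dry_run:
        return cmd
    return subprocess.run(cmd, check=True, capture_output=True, text=True)
```

Keeping command construction separate from execution makes the wrapper easy to test and to log before it touches the cluster.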

Proficient in performing continuous delivery in a microservice infrastructure with the Amazon cloud, Docker and Kubernetes. Managed containers with Docker by writing Dockerfiles and setting up automated builds on Docker Hub.

Extensively used Terraform to version infrastructure reliably and created infrastructure on Azure. Also created resources using Azure Terraform modules and automated infrastructure management. Used Terraform to map more complex dependencies and identify network issues.

Experienced with Terraform key features such as infrastructure as code, execution plans, resource graphs and change automation, and used Auto Scaling to launch cloud instances while deploying microservices.

Experience working with Ansible Tower to create projects, inventory files and templates and to schedule jobs. Wrote Ansible playbooks, with Python over SSH as the wrapper, to manage configurations of Azure nodes, and tested playbooks on Azure virtual machines.

Experience in automating various infrastructure activities such as continuous deployment, application server setup and stack monitoring using Ansible playbooks, and integrated playbooks with Rundeck and Jenkins.

Worked with Chef Enterprise hosted on-premises; installed workstations and bootstrapped nodes using Knife and the Berkshelf dependency manager. Wrote recipes and cookbooks in Ruby and uploaded them to the Chef server. Managed on-site OS, applications, services and packages using Chef for AWS EC2, S3, Route 53 and ELB with Chef cookbooks, and automated testing of Chef recipes and cookbooks with Test Kitchen/ChefSpec.

Hands-on experience in configuring Jenkins by identifying and installing the required plug-ins. Wrote Groovy scripts to configure build jobs and build pipelines, and created a Jenkins master/slave configuration to run multiple parallel builds through a build farm.

Extensive experience in installing, configuring and administering the Jenkins CI tool on Linux machines. Set up Jenkins CI/CD pipeline configurations for all microservice builds out to the Docker registry, then deployed to Kubernetes, creating and managing Pods with Kubernetes.

Designed end-to-end automation of infrastructure and continuous delivery of applications by integrating CloudFormation scripts, Jenkins, AWS, and Chef cookbooks and recipes. Designed and developed an automated deployment system using Chef and Jenkins.

Expertise in managing Nexus artifact repositories for Maven artifacts and dependencies. Configured and administered Nexus Repository Manager for Git repositories and builds.

Experience working with version control systems such as Git and Subversion, using source code management client tools such as VisualSVN, Git Bash, GitHub, GitLab, Bitbucket and other command-line applications.

Excellent hands-on experience with monitoring tools such as Nagios, Splunk and Grafana. Worked with Nagios and Splunk for load balancing, integration, monitoring and checking the health of applications.

Experience in setting up and managing the ELK (Elasticsearch, Logstash & Kibana) stack to collect, search and analyze log files across servers, performed log monitoring, and created geo-mapping visualizations using Kibana in integration with AWS CloudWatch and Lambda.

Hands-on experience using JIRA as a bug-tracking system. Configured various workflows, customizations and plug-ins for the JIRA bug/issue tracker, and integrated Jenkins with JIRA/GitHub to track change requests and bug fixes and to manage tickets for the corresponding sprints.

Experience in scripting languages such as Python, Ruby, Perl, Shell and Bash, and familiar with storage, networking and PowerShell commands. Experienced in creating automated PowerShell scripts for web app deployment.

Expertise in file system concepts such as LVM, SVM and VxVM: creating new file systems, growing and shrinking file systems, mounting and unmounting file systems, and troubleshooting disk space issues. Involved in system analysis and performance monitoring of Red Hat Linux.



Tools Used

Cloud Environments: Microsoft Azure, Amazon Web Services

Configuration Management: Ansible, Ansible Tower, Chef, Puppet

Build Tools: ANT, Maven, Gradle

CI/CD Tools: Jenkins, Bamboo, Spinnaker

Monitoring Tools: Splunk, Nagios, CloudWatch, Elasticsearch, Logstash, Kibana (ELK)

Container Tools: Docker, Kubernetes

Scripting/Programming Languages: Python, Shell (PowerShell/Bash), Ruby, YAML, JSON, Perl, Groovy, JavaScript, C, PHP, Java/J2EE, .NET, Spring, Spring MVC, REST web services

Version Control Tools: Git, SVN (Subversion), Bitbucket, GitLab

Operating Systems: Windows, UNIX, RHEL, CentOS, Ubuntu, Solaris

Databases: SQL Server, MySQL, Oracle, NoSQL, MongoDB, DynamoDB, Cassandra

Change Management: Remedy, ServiceNow

Testing/Ticketing Tools: Jira, Selenium, SonarQube

Web/Application Servers: Apache Tomcat, WebLogic, Oracle Application Server

Virtualization Tools: Oracle VirtualBox, VMware, vSphere, Vagrant


Client: Deutsche Bank- New York

Role: Sr. Cloud/DevOps Engineer (October 2018 to Present)


Involved in migrating the application from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) by converting the existing solution to a Windows Azure Worker Role, then configuring ARM, VMs, networking, and private static and public IP addresses.

Configured Azure Automation DSC configuration management to assign permissions through RBAC, assign nodes to the proper automation accounts and DSC configurations, and alert on any changes made to nodes and their configuration.

Created and configured HTTP triggers in Azure Functions, with Application Insights for monitoring, and performed load testing on the applications using VSTS. Used a Python API for uploading all agent logs into Azure Blob Storage; used Azure Blob to access required files and Azure Storage Queues to communicate between related processes. Configured the Azure load balancer to distribute incoming traffic to virtual machines in the Azure cloud.
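The log-upload step above typically maps local log files to blob names before handing them to the Azure SDK. A minimal sketch of that mapping (directory layout and the `agent-logs` prefix are assumptions; the actual SDK upload call is omitted so this runs anywhere):

```python
import os

def collect_log_blobs(log_dir, prefix="agent-logs"):
    """Map local *.log files under log_dir to (blob_name, local_path) pairs
    for upload to Azure Blob Storage. The naming scheme is illustrative."""
    pairs = []
    for root, _dirs, files in os.walk(log_dir):
        for name in sorted(files):
            if name.endswith(".log"):
                local = os.path.join(root, name)
                # Blob names use forward slashes regardless of the local OS.
                rel = os.path.relpath(local, log_dir).replace(os.sep, "/")
                pairs.append((f"{prefix}/{rel}", local))
    return pairs
```

Each pair would then be passed to the Blob client's upload method in the real script.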

Responsible for creating and managing Azure AD tenants, managing users and groups, and configuring application integration with Azure AD. Integrated on-premises Windows AD with Azure AD and configured multi-factor authentication (MFA) and federated single sign-on (SSO).

Migrated data from on-premises SQL Database servers to Azure SQL Database servers by designing Azure Data Factory pipelines using the Azure Data Factory Copy tool and self-hosted integration runtimes.

Implemented a greenfield project leveraging Docker and Azure Kubernetes Service (AKS), including configuration standards, infrastructure with secure networking, and CI/CD pipelines. Used Azure Kubernetes Service to deploy a managed Kubernetes cluster in Azure.

Created Azure cloud environments using Kubernetes that support DEV, TEST and PROD. Implemented a production-ready, load-balanced, highly available, fault-tolerant, auto-scaling Kubernetes infrastructure and microservice container orchestration.

Developed ARM templates to deploy all Azure infrastructure with nested templates, shared templates, and logic to support environment sizing, environment types and naming standardization. Migrated core networking infrastructure to Terraform to better align with enterprise tooling; this included gateways, VNets, subnets and NSGs.

Worked on deployment automation of all the Microservices to pull the image from the private Docker registry and deploy to Docker swarm cluster using Ansible.

Implemented blue/green deployments with zero downtime, where the current environment is replicated to the latest version with Kubernetes to resolve bug fixes, and traffic is redirected to it once the issues are resolved.

Configured Kubernetes replication controllers to run multiple pods, such as the Jenkins master server, across multiple minions, and managed Kubernetes charts using Helm.

Extensively worked on developing APIs using Kubernetes to manage and specify the number of container replicas running the actual servers in the cloud environment. Scheduled, deployed and managed container replicas onto a node cluster using Kubernetes.

Configured applications that run multi-container Docker applications by utilizing Docker Compose tool which uses a file configured in YAML format. Used Kubernetes to manage containerized applications using its nodes, Config-maps, Selector, Services and deployed application Containers as pods.

Set up Jenkins as a service inside the Docker Swarm cluster to reduce failover downtime to minutes and to automate Docker container deployment without using a configuration management tool.

Proficient in writing Dockerfiles following best practices, along with creating Docker images, Docker resource limiting, Docker container management, Docker volumes, container-based DBs and services, Docker Artifactory (JFrog) configuration and setup, and Docker container snapshots.

Developed Ansible playbooks to automate the infrastructure and deployment process. Managed build and deployment scripts written in YAML with Ansible, and triggered the jobs using Jenkins to promote builds across all environments.

Used Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage change.

Used Ansible to manage web applications, config files, databases, commands, users, mount points and packages, and used ansible-galaxy to create Ansible roles that can be reused multiple times across the organization, calling these reusable roles through the requirements.yml file.

Involved in writing various custom Ansible playbooks for deployment orchestration and developed Ansible Playbooks to simplify and automate tasks. Protected encrypted data needed for tasks with Ansible Vault.

Integrated Docker container-based test infrastructure to Jenkins Continuous Integration test flow and set up the build environment integrating with Git and Jira to trigger builds using Webhooks and run jobs on Slave Machines.

Built end to end CI/CD Pipelines in Jenkins to retrieve code, compile applications, perform tests and push build artifacts to Nexus Repository and Deploy to orchestrate changes across servers and components.

Worked with Git forks, tagging, and handling merge requests and notifications. Set up the Git repos for Jenkins build jobs.

Implemented the use of Nagios and Splunk tools for monitoring and analyzing the network loads on the individual machines and automated the testing of web applications using Selenium Web driver.

Created dashboards and visualizations using Splunk, Grafana and Nagios for performance and activity monitoring and setting up Splunk to capture and analyze data from various layers Load Balancers, Webservers and application servers.

Coded utility scripts in PowerShell to modify XML configuration files dynamically during the release process, and used PowerShell scripts to handle various SharePoint admin jobs such as backup, restoration, and solution install/deploy.
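The release-time config edits described above (done in PowerShell on the project) can be sketched in Python for portability. The `appSettings`/`add` structure is the common .NET config shape; the key names here are placeholders:

```python
import xml.etree.ElementTree as ET

def set_app_setting(xml_text, key, value):
    """Update (or add) an <add key="..." value="..."/> entry inside an
    appSettings-style XML config and return the modified document text.
    Illustrative of release-time config rewriting, not project code."""
    root = ET.fromstring(xml_text)
    settings = root.find("appSettings")
    for node in settings.findall("add"):
        if node.get("key") == key:
            node.set("value", value)
            break
    else:
        # Key not present yet: append a new <add/> element.
        ET.SubElement(settings, "add", {"key": key, "value": value})
    return ET.tostring(root, encoding="unicode")
```

A release pipeline would call this per environment, e.g. flipping an `Env` setting from QA to PROD before deployment.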

Involved in developing custom scripts using Python, Shell to automate the deployment process and for Task scheduling, Systems backups for RHEL.

Used GIT as a source code management tool for creating local repo, cloning the repo, adding, committing, pushing the changes in the local repo, saving changes for later (Stash), recovering files, branching, creating tags, viewing logs.

Performed installation, configuration, upgrades, package administration and support for Linux systems on the client side using the Red Hat Satellite network server, and worked on patch installation and upgrades on Red Hat Linux servers using RPM & YUM.

Environment: Microsoft Windows Azure, Azure AD, Azure SQL, Azure Network, web applications, Kubernetes, Virtual Machines, Ansible, Jenkins, Docker, Python, PowerShell, Microsoft Azure Storage, SonarQube, Groovy, Maven, Git, GitLab, ELK, Splunk, Jira, Nexus, Tomcat, GitHub, Linux

Client: Sirius XM Radio Inc Washington, DC

Role: Cloud/DevOps Engineer (March 2017 to September 2018)


Migrated existing AWS infrastructure to a serverless architecture (AWS Lambda) deployed via Terraform. Implemented AWS Lambda functions to run scripts in response to CloudWatch events on an Amazon DynamoDB table or S3 bucket and to HTTP requests through Amazon API Gateway, invoking the code via API calls made with the AWS SDKs.

Worked on the Amazon Aurora database service in the AWS cloud and implemented automatic machine disaster recovery on AWS, setting up databases using RDS and storage using S3 buckets and Amazon Glacier by configuring instance backups to S3, and deployed instances across multiple Availability Zones to ensure fault tolerance and high availability.

Worked on custom domains, record sets and DNS health checks to route traffic with Amazon Route 53 for applications hosted in the AWS environment, and managed users and groups using AWS IAM.

Created and managed AWS Cloud Formation Stack using VPC, subnets, EC2 instances, ELB, S3 and integrated it with CloudTrail. Versioned CloudFormation templates are stored in GIT, visualized CloudFormation templates as diagrams and modified them with the AWS CloudFormation Designer.

Orchestrated and migrated CI/CD processes using CloudFormation and Terraform templates, containerized with Docker and set up in Vagrant, AWS and Amazon VPCs.

Involved in launching Docker containers on pods on top of the multi-node Kubernetes cluster in an AWS environment with the help of KOPS and kubectl.

Created cloud instances and Docker deployments from tagged Docker builds published to Amazon ECR through HashiCorp Terraform. Built a Kubernetes POC for container orchestration.

Built automated Docker microservice deployment pipelines and build jobs for web/mobile user application services through CI/CD pipelines on AWS EC2 with ECS and ECR. Deployed web/mobile user application microservices using Node.js, Ruby and Python on AWS Elastic Beanstalk with AWS S3, PostgreSQL RDS and VPC.

Wrote Terraform templates and Chef recipes and pushed them to the Chef server for configuring EC2 instances, and deployed code into the required environments using AWS CodeDeploy.

Developed and managed many servers utilizing both traditional and cloud-oriented providers (AWS) with the Chef platform, and wrote cookbooks for various DB configurations to modularize and optimize project configuration.

Ran Ansible playbooks written in YAML using the EC2 Systems Manager Run Command for running complex workloads on AWS and managing large groups of instances for better security, performance and reliability. Created Ansible playbooks to provision Apache web servers, Tomcat servers, Nginx, Apache Spark and other applications.
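Running playbooks through Systems Manager amounts to building a `send_command` parameter set against an Ansible-capable SSM document. A hedged sketch of that parameter assembly (the document name `AWS-ApplyAnsiblePlaybooks` is the AWS-managed one to the best of my knowledge; the instance IDs, S3 path and `site.yml` file are placeholders, and the actual boto3 call is omitted):

```python
def ssm_run_ansible_command(instance_ids, playbook_url):
    """Assemble the parameters for an SSM send_command call that applies
    an Ansible playbook bundle fetched from S3. Values are illustrative."""
    return {
        "DocumentName": "AWS-ApplyAnsiblePlaybooks",
        "InstanceIds": list(instance_ids),
        "Parameters": {
            "SourceType": ["S3"],
            "SourceInfo": ['{"path": "%s"}' % playbook_url],
            "PlaybookFile": ["site.yml"],
        },
    }
```

The resulting dict would be splatted into `ssm_client.send_command(**params)` in the real automation.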

Used various services of AWS for this infrastructure. Used EC2 virtual servers to host Git, Jenkins and configuration management tool like Ansible. Converted slow and manual procedures to dynamic API generated procedures.

Configured Jenkins to obtain code from GitLab, analyze it using SonarQube/Sonar Scanner, build it using Maven, run Selenium tests, and store the successfully built artifacts in the Nexus artifact repository.

Wrote Maven scripts to automate build processes and managed the Maven repository using Nexus, using it to share snapshots and releases. Built end-to-end CI/CD pipelines in Jenkins to retrieve code, compile applications, perform tests and push built artifacts, orchestrating changes across servers and components.

Hands-on experience migrating build.xml into pom.xml to build applications using Apache Maven. Enhanced the existing Jenkins continuous integration system and official nightly builds and managed them solely. Installed multiple plugins for smooth build and release pipelines.

Streamed AWS CloudWatch Logs to Splunk by triggering AWS Lambda and pushing events to Splunk for real-time analysis and visualization.

Hands-on writing Bash and Python scripts to supplement the automation provided by Ansible and Terraform, for tasks such as encrypting the Amazon EBS (Elastic Block Store) volumes backing AMIs and scheduling Lambda functions for routine AWS tasks.
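The EBS-encryption step above typically means rewriting an AMI's block-device mappings with `Encrypted` set before re-registering or copying the image. A minimal sketch of that transform (the mapping shape follows the EC2 API; the copy/register call itself is omitted):

```python
def encrypt_block_device_mappings(mappings, kms_key_id=None):
    """Return a copy of an AMI's block-device mappings with every EBS
    volume flagged Encrypted=True. kms_key_id optionally selects a CMK;
    omitting it leaves the account default key in play."""
    out = []
    for m in mappings:
        m = dict(m)  # shallow copy so the input mappings stay untouched
        ebs = dict(m.get("Ebs", {}))
        if ebs:
            ebs["Encrypted"] = True
            if kms_key_id:
                ebs["KmsKeyId"] = kms_key_id
            m["Ebs"] = ebs
        out.append(m)
    return out
```

The rewritten mappings would then be passed to the EC2 image-copy or register call in the real script.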

Created the ELK stack (Elasticsearch, Logstash, Kibana) for log management within EC2, set up log analysis for AWS logs using ELK, and managed searches, dashboards, custom mappings and data automation.

Scheduled cron jobs and system backups, and performed Linux OS installation by setting up a Kickstart file, attaching the OS revision disk, deploying Kickstart, requesting an IP address, registering with the Satellite server, installing VM Tools and assigning a new network adapter on RHEL 6/7 using vCenter and Nutanix.

Environment: AWS, Lambda, Jenkins, S3 bucket, Route53, SQL, AWS Kinesis, Terraform, AWS CloudFormation, AWS DynamoDB, AWS Code Pipeline, AWS ECS, AWS EKS, AWS ECR, Chef, AWS ALB, Microservices, Docker, Kubernetes, Ansible, Vagrant, Jira, Apache Tomcat, JFrog.

Client: CVS Health, Woonsocket, RI

Role: DevOps Engineer (July 2015 to December 2016)


Launched AWS EC2 Cloud Instances using Amazon Web Services (Linux/ Ubuntu/RHEL) and Configured the launched instances with respect to specific applications. Created Snapshots and Amazon Machine Images (AMI's) for mission-critical production servers for backup.

Expertise in creating an AWS Virtual Private Cloud (VPC) with multiple subnets, and deployed application and database servers with different security groups, network ACLs and NAT gateways to serve security purposes.

Maintained DNS records using Route 53. Used AWS Route 53 to manage DNS zones and give public DNS names to elastic load balancer IPs.

Worked with the Knife command-line tool for creating recipes and cookbooks and bootstrapping nodes, and worked with the Chef Supermarket.

Responsible for delivering an end-to-end continuous integration and continuous delivery system for the products in an agile development approach using Puppet and Jenkins.

Developed Puppet modules to automate deployment, configuration and lifecycle management of key clusters, and wrote Puppet manifests for deploying, configuring and managing collectd for metric collection and monitoring.

Implemented a continuous delivery pipeline with Docker, Jenkins and GitHub. Whenever there is a change in GitHub, the continuous integration server automatically attempts to build a new Docker container from it.

Responsible for installation and configuration of Jenkins to support various Java builds and Jenkins plugins to automate continuous builds and publishing Docker Images to the Nexus Repository.

Implemented the docker-maven-plugin in the Maven POM to build Docker images for all microservices, and later used a Dockerfile to build the Docker images from the Java JAR files.

Set up Jenkins server and build jobs to provide Continuous Automated builds based on Polling the Git source control system during the day and periodic scheduled builds overnight to support development needs using Jenkins, Git, and Maven.

Worked on the NoSQL database MongoDB for replica set setup and sharding; also experienced in managing replica sets. Installed, configured and managed monitoring tools such as Nagios and Splunk for resource/network monitoring.

Used SonarQube for continuous inspection of code quality and to perform automatic reviews of code to detect bugs. Automated Nagios alerts and email notifications using Python scripts and executed them through Chef.
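The Nagios email automation mentioned above usually boils down to composing a notification from the alert's host/service/state fields and handing it to an SMTP server. A hedged sketch of the compose step (field names mirror common Nagios macros; addresses are placeholders, and the SMTP send is left out so this runs offline):

```python
from email.message import EmailMessage

def build_alert_email(host, service, state, detail, sender, recipient):
    """Compose the notification email a Nagios event handler would send.
    Inputs correspond to macros like HOSTNAME, SERVICEDESC, SERVICESTATE."""
    msg = EmailMessage()
    msg["Subject"] = f"[Nagios] {state}: {service} on {host}"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(
        f"Host: {host}\nService: {service}\nState: {state}\n\n{detail}"
    )
    return msg
```

A wrapper script would then pass the message to `smtplib.SMTP(...).send_message(msg)`.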

Configured dashboards for RDS in Elasticsearch, Logstash & Kibana (ELK). Used ELK to set up real-time logging and analytics for continuous delivery pipelines and applications.

Created Python scripts to fully automate AWS services, including web servers, ELB, CloudFront distributions, databases, EC2 and database security groups, and application configuration; the scripts create stacks and single servers, or join web servers to stacks.
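One representative piece of such automation is generating security-group ingress rules for a web tier. A minimal sketch (the rule shape follows the EC2 `AuthorizeSecurityGroupIngress` API; which ports and CIDRs apply is a stand-in for the real script's inputs, and the API call itself is omitted):

```python
def web_tier_ingress_rules(allowed_cidrs):
    """Build EC2 security-group ingress rules opening HTTP and HTTPS to
    the given CIDR blocks. Port choices are illustrative of a web tier."""
    rules = []
    for port in (80, 443):
        rules.append({
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": c} for c in allowed_cidrs],
        })
    return rules
```

Generating the rule list separately from the API call keeps the policy reviewable and testable before anything touches the account.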

Used Docker to configure a Postgres Docker image and a Nexus proxy repository with SSL configuration for secure connections.

Implemented AWS cloud services and wrote Bash, Perl and Python scripts.

Responsible for configuring and maintaining a Squid server on Linux. Deployed Java applications to Apache Tomcat application servers.

Environment: AWS EC2, CloudFront, Docker, Jenkins, Chef, Ruby, Ansible, ELB, Splunk, Nagios, Maven, Subversion (SVN), GitHub, Bitbucket, Linux, ELK, Jira, RHEL, Terraform

Client: FedEx Denver, CO

Role: Build & Release Engineer (August 2013 to Feb 2015)


Developed and implemented the build and deployment process using Bamboo in various environments such as QA, UAT and PROD.

Administered Bamboo servers which include install, upgrade, backup, adding users, creating plans, installing the local/remote agent, adding capabilities, performance tuning, troubleshooting issues, and maintenance.

Set up continuous integration and formal builds using Bamboo with the Artifactory repository, and resolved update, merge and password authentication issues in Bamboo and JIRA.

Used Puppet to manage web applications, config files, databases, users, mount points and packages, and developed Puppet scripts to push patches and files and to control configuration drift.

Used Puppet for creating scripts, deployments for servers, and managing changes through Puppet master server on its clients.

Integrated Puppet with Apache in the Linux AWS Cloud environment using Puppet automation, developed load tests, monitored suites in Python, and integrated puppet modules into Jenkins jobs for CI/CD framework.

Took responsibility for administering the SVN servers, including installation, upgrades, backups, adding users, creating repositories/branches, merging, writing hook scripts, performance tuning, troubleshooting and maintenance. Implemented a Git mirror of the SVN repository, which enables users to use both SVN and Git.

Worked on client-side hooks such as Git commit and merge hooks, and server-side hooks that run on network operations such as receiving pushed Git commits.

Reviewed existing manual Software builds, developed scripts to automate repeated tasks that are more susceptible to errors and risk using Perl and Shell Scripting.

Administered the Nexus server, including installation, upgrades, repository maintenance, performance tuning and troubleshooting.

Worked on high-volume crash collecting and reporting system, built with Python. Performed dispatcher role to distribute tasks assigned to the team.

Involved in building and configuring Red Hat Linux Servers using Kickstart server as required for the project. Maintained maximum uptime and maximum performance capacity for enterprise prod, QA, and UAT/staging.

Environment: Puppet, SVN, GIT, ANT, Jira, Perl, Shell, Bamboo, RHEL, Windows, Nexus.

Client: Atos Syntel Chennai, India

Role: Sr. Linux Administrator (June 2012 to July 2013)


Configured and installed Red Hat and CentOS Linux servers on both virtual machines and bare-metal installations.

Worked in the infrastructure team on installation, configuration and administration of CentOS and RHEL.

Worked on UNIX, Red Hat Linux ES 3.0, Linux desktop, SUSE Linux Enterprise Server 9.0, AIX 5.2/5.1/4.3 and Ubuntu.

Installed and configured Red Hat Cluster, Veritas Cluster Server and Veritas NetBackup, Apache 1.3.x, Tomcat, WebLogic 9/10 and JBoss.

Solid network and systems troubleshooting experience with HTTP/HTTPS, SFTP, FTP, NFS, SMB, SMTP, SSH, NTP, TCP/IP, internet security and encryption.

Worked on Volume management, Disk Management, Software RAID solutions using VERITAS volume manager and Solaris volume Manager.

Troubleshot Linux network and security issues and captured packets, using tools such as iptables, firewalls, TCP Wrappers and Nmap.

Executed LVM tasks such as creating physical volumes, volume groups, logical volumes and file systems.

Worked on deployment of Routers, Switches, Hubs, Firewalls, IDS, load balancers, VPN Concentrators and worked on volume/File system management using LVM.

Involved in building Linux VM's using VM templates and kick start servers to build multiple servers over the network.

Environment: Solaris, Yum, RPM, Routers, Switches, Load balancers, VPN, Windows, Linux, WLST, Nexus.

Client: Inforaise Hyderabad, India

Role: Linux Administrator (July 2011 to May 2012)


Installed firmware upgrades and kernel patches, and performed system configuration and performance tuning on Unix/Linux systems.

JumpStart and Kickstart OS installation; DNS, DHCP, SMTP, Samba, NFS, FTP, SSH and LDAP integration.

Customized and compiled the Linux kernel according to requirements. Good knowledge of networking concepts and various communication protocols. Created and maintained new VM boxes using Linux virtual machine templates.

Finalized configuration files and set file permissions. Created crontab entries to perform system backups and automated administration tasks using scripting on Linux systems.

Worked with the Nagios Core monitoring tool for alerting on servers and switches, sending data over the network through specific plugins. Performed activities using Nagios on both Linux and Windows systems.

Set up full networking services and protocols on Solaris, including NIS/NFS, DNS, SSH, DHCP, TCP/IP, applications, and print servers to ensure optimal networking, application and printing functionality. Deployed the latest patches for Sun, Linux and application servers.

Wrote shell scripts for automated deployments, especially handling all tasks before kicking off WLST scripts or admin console deployments.

Responsible for keeping the servers up and running, as well as providing direct user support for any technical issues related to Linux and Windows systems.

Environment: LDAP, Linux, LVM, RAID, TCP, ACL, Bash, SSH, Java, Shell, API
