
AWS Cloud Configuration Management

Location:
Chicago, IL
Salary:
$70
Posted:
October 18, 2023


Resume:

Name: Jawahar Reddy

Email: ad0g2c@r.postjobfree.com

Phone: +1-414-***-****

PROFESSIONAL SUMMARY:

Over 9 years of IT experience as a DevOps Engineer, AWS Architect & Developer, Azure Developer & Administrator, and Linux System Administrator, with application development work on server-based operating system kernel configurations on Red Hat Linux, CentOS, SUSE, and Ubuntu 12.x/13.x. Tuned kernel parameters and troubleshot system and performance issues.

Experience with serverless/PaaS technologies (API Gateway, Lambda, DynamoDB, S3, etc.).

In-depth knowledge of AWS cloud services such as EC2, S3, RDS, VPC, CloudFront, Route 53, CloudWatch, OpsWorks, IAM, SQS, SNS, and SES.

Expertise in DevOps, release engineering, configuration management, and cloud infrastructure automation, including Amazon Web Services (AWS), Ant, Maven, Jenkins, Chef, SVN, and GitHub.

Worked on the Microsoft Azure (public) cloud to provide IaaS support to clients; created virtual machines through PowerShell scripts and the Azure Portal.

Developed a customer-facing web application using ASP.NET 4.0 (C#) and converted data to XML files.

Implemented DevOps tool suites such as Git, Ant, Maven, Jenkins, JFrog Artifactory, CircleCI, Docker, Docker Swarm, Kubernetes, Nexus repository, Chef, Ansible, CloudWatch, and Nagios in traditional and cloud environments.

Provisioned highly available EC2 instances using Terraform and CloudFormation, and wrote new plugins to support new functionality in Terraform.

Excellent understanding and knowledge of NoSQL databases such as MongoDB, HBase, and Cassandra.

Involved in converting a classic ASP web application to ASP.NET MVC 5 and AngularJS.

Hands-on with JavaScript development tools such as webpack, npm, Grunt, and GraphQL, and testing tools such as Jest and Mocha.

Well versed in big data on AWS cloud services, i.e., EC2, S3, Glue, Athena, DynamoDB, and Redshift.

Extensively worked on Jenkins and Docker for continuous integration and end-to-end automation of all builds and deployments.

Expert in Azure Kubernetes Service (AKS); implemented end-to-end creation, configuration, and deployments to AKS.

Utilized the Azure APIM DevOps Resource Kit to extract ARM templates of APIs created in Azure APIM.

Installed and configured Foreman with Puppet automation for auto-provisioning Linux machines in OpenStack and VMware environments.

Used Azure, Python, and Ansible for various configuration management activities after the infrastructure-as-code activities were completed.

Focused on cloud-native solutions such as Kubernetes, Istio etc.

Experience in using New Relic to track the changes across CI/CD pipeline and infrastructure.

Monitored and tracked Splunk performance problems, handled administration, and opened tickets with Splunk.

Extensively worked with Change tracking tools like JIRA, BMC ServiceDesk and ITSM.

Proven experience in migrating IBM mainframe-based legacy applications into highly available and reliable CI/CD pipelines.

Integrated SonarQube testing with build tools to support zero-downtime deployments and make the build-and-deploy pipeline effective and efficient. Involved in sprint meetings, retrospectives, and grooming and planning sessions.

Involved in using Terraform to migrate legacy and monolithic systems to Amazon Web Services.

Creating CI/CD pipelines, deployment planning and performing deployment tasks using automation.

Experienced in setting up AWS relational databases such as Aurora, MySQL, and MSSQL, and the NoSQL database DynamoDB.

Experience in Performance tuning and benchmarking of Hadoop Cluster.

Involved in HBase setup and storing data into HBase, which will be used for further analysis.

Developed a framework for converting existing PowerCenter mappings to PySpark (Python and Spark) jobs.

Experience in Security integration of Hadoop Cluster.

Developed ASP.NET MVC 4 application in Test Driven Development environment using Microsoft Test as the Testing framework.

Hands-on experience on installing and configuring IBM WebSphere MQ, Message Broker and MQ MFT.

Highly involved in data architecture and application design using cloud and big data solutions on AWS and Microsoft Azure.

Expertise in automation tools such as Selenium WebDriver, Selenium IDE/RC, Selenium Grid, Java, Jenkins (continuous integration, regression tests), Maven (regression tests), Eclipse, Cucumber, TestNG (regression tests), and JUnit.

Experience in writing code in Perl to develop and deploy continuous test cases, in combination with CI tools like Jenkins.

Experience working with software development life cycles and Agile programming and Agile Ops methodologies.

Hands-on experience with monitoring tools such as Prometheus and Dynatrace, and worked with Apache Kafka and ZooKeeper.

Automated configuration management and deployments using Ansible playbooks and YAML. Experienced in developing Ansible roles and Ansible Playbooks for the server configuration and Deployment activities.
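The Ansible work described above typically takes the shape of a playbook like the following minimal sketch; the host group, package, and template names here are hypothetical examples, not taken from any actual project.

```yaml
# Illustrative Ansible playbook: install and configure nginx on a web host group.
# Group name, package, and template paths are example values.
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.yum:
        name: nginx
        state: present

    - name: Deploy site configuration from a Jinja2 template
      ansible.builtin.template:
        src: templates/site.conf.j2
        dest: /etc/nginx/conf.d/site.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```

Roles package tasks, handlers, and templates like these into reusable units that can be applied per server group.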

Experience in configuration management, change/release/build management, and support and maintenance under Unix/Linux platforms (Red Hat and CentOS).

Experienced in scaling Amazon RDS, MySQL, MongoDB, DynamoDB instances vertically and horizontally for high availability.

Technical Skills:

Operating Systems: UNIX, Red Hat Enterprise Linux, Ubuntu, Windows 98/NT/XP/Vista/7/8

SCM Tools: Subversion, Git, and Perforce

Build Tools: Apache Ant, Maven, Bazel, CMake, Gradle

CI Tools: Jenkins, AnthillPro, Bamboo

Repositories: Nexus, Artifactory

Configuration Management Tools: Chef, Puppet, Ansible

Web Service Tools: JBoss, Apache Tomcat, IntelliJ IDEA, Oracle WebLogic, IBM WebSphere, IIS Server, Tibco FTL

Languages/Utilities: Shell script, Apache Ant script, batch script, Ruby, Perl, C, Python, Core Java

Networking: TCP/IP, NIS, NFS, DNS, DHCP, WAN, SMTP, LAN, FTP/TFTP

Technologies: AWS (EC2, S3, EBS, ELB, Elastic IP, RDS, SNS, SQS, IAM, VPC, CloudFormation, Route 53, CloudWatch), Microsoft Azure, and Rackspace OpenStack

Databases: SQL Server, Oracle, DB2, and Teradata

Cloud Platforms: AWS, Microsoft Azure, GCP, PCF

Monitoring and Profiling Tools: Nagios and Splunk

Containerization Tools: Docker, Docker Swarm, Kubernetes

Cyber Security Vendor Equipment/Technology: Palo Alto firewalls configuration and management at scale, Cortex XDR, GlobalProtect, WildFire, Prisma Access, Prisma Cloud

Professional Experience:

Client: Motorola Solutions, Allen, TX Aug 2021 – Present

Role: Site Reliability Engineer (SRE) / Azure DevOps Engineer

Responsibilities:

Built Jenkins pipelines to drive all microservice builds out to the Docker registry and deploy them to Kubernetes; created and managed pods using Kubernetes.

Wrote Docker Compose scripts to deploy an AWS performance and compliance monitoring stack consisting of toolsets such as Grafana, Prometheus, Telegraf, InfluxDB, and Chronograf.
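A monitoring stack like the one above is usually expressed as a Compose file along these lines; this is a minimal sketch showing only two of the services, and the image tags, ports, and volume paths are example values.

```yaml
# Illustrative Docker Compose file for a Prometheus/Grafana monitoring stack.
# Image tags, ports, and host paths are placeholders.
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml   # scrape config supplied alongside
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=change-me   # example only; use a secret in practice
    ports:
      - "3000:3000"
    depends_on:
      - prometheus
```

Telegraf, InfluxDB, and Chronograf would be added as further `services:` entries in the same file.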

Created PySpark data frames to bring data from DB2 to Amazon S3.

Experience in scripting languages such as Python, Ruby, Perl, shell, and Bash, and familiar with storage, networking, and PowerShell commands. Experienced in creating automated PowerShell scripts for web app deployment.

Experience with capacity planning, continuous integration and application deployment using Jenkins and Subversion (SVN) and GIT for version control, Maven and Ant for Building and Packaging.

Expertise in analyzing and correlating events through Splunk search strings and operational strings.

Implemented a production-ready, load-balanced, highly available, fault-tolerant Kubernetes infrastructure with Rancher, kops, and EKS.

Worked with security team to install Istio and configured proxy rules for routing connections between microservices.

Experienced in Golang microservices using channels, goroutines, interfaces, and various frameworks.

Experienced in Apache Spark for implementing advanced procedures like text analytics and processing using the in-memory computing capabilities written in Scala.

Assisted in integrating DevSecOps pipeline components, including a code repository, an artifact repository, a security assessment platform, and an orchestrated integration and delivery platform, to enable automated application building, testing, securing, and deployment.

Responsibilities included design of UNIX and Linux operating systems, extensive use of shell scripting, software packaging, code debugging.

Used Istio service mesh to implement dynamic service discovery and traffic management including traffic shadowing, traffic splitting, and service-to-service communication reliability.

Experience in VMware Tanzu products, including Tanzu Application Service (PCF) and RabbitMQ.

Migrated feature flag functionality from LaunchDarkly to Kameleoon.

Created Infrastructure in a Coded manner (Infrastructure as Code) using Terraform.

Was part of the team that designed and integrated capabilities to establish a DevSecOps pipeline, utilizing lab and cloud resources to design, build, test, and evaluate functional components and technologies.

Hands on working experience with Jenkins continuous integration Tools including installation, configuration of jobs, pipelines, security set up etc.

Strong understanding of SDLC and business analysis practices, and good knowledge of Six Sigma, ITIL, and CMMI processes.

Deployed and monitored scalable infrastructure on Amazon Web Services (AWS) and handled configuration management using Chef.

Installation of Istio for service mesh.

Created and configured AWS EC2 instances, S3 buckets, ELB, Route 53 DNS entries, VPCs, security groups, CloudFormation templates, CloudWatch monitoring alarms, etc.
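The CloudFormation and CloudWatch work mentioned above can be sketched in a single template like the following; the AMI ID, instance type, and alarm threshold are placeholder values chosen for illustration.

```yaml
# Illustrative CloudFormation template: one EC2 instance, a security group,
# and a CloudWatch CPU alarm. All identifiers and thresholds are examples.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0

  WebInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI
      InstanceType: t3.micro
      SecurityGroupIds:
        - !Ref WebSecurityGroup

  CpuAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: CPU over 80% for 5 minutes
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Dimensions:
        - Name: InstanceId
          Value: !Ref WebInstance
      Statistic: Average
      Period: 300
      EvaluationPeriods: 1
      Threshold: 80
      ComparisonOperator: GreaterThanThreshold
```

Deploying the template as a stack lets the instance, its security group, and its alarm be created, updated, and deleted as one unit.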

Developed installer scripts using Maven, Python for various products to be hosted on Application Servers.


Extensive experience in building CI/CD pipelines using Hudson, Bamboo, Jenkins, and TeamCity for end - to-end automation for all builds and deployments.

Experience working with service mesh like Istio for advanced service discovery of microservices running in the cluster.

Hands-on experience in analyzing log files for Hadoop and ecosystem services and finding root causes.

Used C#.NET as language to develop code behind business logic.

Wrote manifests and Ruby scripts to customize Puppet configuration as per requirements.

Understood business thresholds and set up alerts and monitoring using Splunk and other tools such as Hubble.

Extensively worked on developing and managing IBM mainframe (COBOL, DB2, VSAM, JCL, CICS, and IMS DB) and Cognos PowerHouse 4GL (QTP, QUICK, QUIZ, and COM procedures) based applications.

Developed using Angular 6.0 and set up Jasmine/Karma with PhantomJS to unit test the application.

Wrote templates for Azure infrastructure as code using Terraform to build staging and production environments. Integrated Azure Log Analytics with Azure VMs to monitor and store log files and track metrics, and used Terraform to manage different infrastructure resources across cloud, VMware, and Docker containers.

Experience in developing and designing POCs using Scala and deployed on the Yarn cluster, compared the performance of Spark, with Hive and SQL/Teradata.

Provide guidance to development team working on PySpark as ETL platform.

Coordinated with users on Teradata Query Optimization. Assisted FDL business user on writing well-tuned SQL queries against the EDW to maximize user experience without impacting other users.

In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node and MapReduce concepts.

Implemented Security Scans like Static and Dynamic Application testing at each layer of DevOps life cycle and converted the existing DevOps methodologies/workflows to DevSecOps model.

Established infrastructure and service monitoring using Prometheus and Grafana.

Worked with Apache, Unix/Linux, web applications, website management, and scripting languages.

Experience in creating Docker containers leveraging existing Linux containers and AMIs, in addition to creating Docker containers from scratch.

Configured vSphere 7.0 with Tanzu and enabled the Kubernetes (k8s) data domain.

Demonstrated end-to-end troubleshooting skills by leading complex support calls with vendors (JFrog, VMware Tanzu) to improve the overall performance of CI/CD tools.

Used JavaScript, React, GraphQL, Python, Django, S3, Postgres for the creation of “Business Delivery Service”.

Created various Parser programs to extract data from Autosys, Business Objects, XML, Informatica, Java, and database views using Scala.

Working experience in Map Reduce programming model and Hadoop Distributed File System.

Experience in using Scala for coding the components in Play and Used Maven to build and generate code analysis reports.

Implemented PowerShell scripts in Azure Automation to fix known issues from OMS.

Hands-on experience in AWS services such as EC2, S3, ELB, RDS, SQS, EBS, VPC, AMI, SNS, CloudWatch, CloudTrail, CloudFormation, AWS Config, Auto Scaling, CloudFront, IAM, and Route 53.

Exposure to all aspects of the software development life cycle (SDLC), such as analysis, planning, development, testing, implementation, and post-production analysis of projects.

Installed and managed security reporting tools to monitor Active Directory changes. Planned and managed all migrations and upgrades related to Active Directory and domain controllers.

Managed Azure infrastructure: Azure Web Roles, Worker Roles, SQL Azure, Azure Storage, and Azure AD licenses. Performed virtual machine backup and recovery from a Recovery Services vault using Azure PowerShell and the Portal.

Experience in designing Azure virtual networks and implementing Point-to-Site VPN, VNet-to-VNet VPN, VNet peering, and network security groups (NSGs).

Managed and reviewed Hadoop log files.

Performed installation and managed Grafana dashboards to visualize metrics collected by Prometheus. Responsible for setting up and configuring the monitoring and metric-gathering system around Prometheus and Grafana.

Skilled in support activities, analysis, technical design, testing, trouble shooting for both batch & online application programs of mainframe applications.

Performed gap analysis for the modules in production, conducted feasibility studies, performed impact analysis for proposed enhancements, and identified risks and project impacts.

Set up and maintained logging and monitoring subsystems using tools like Elasticsearch, Fluentd, Kibana, Prometheus, Grafana, and Alertmanager.

Expertise in Java technologies and servers such as Servlets, JSP, XML, WebLogic, Apache, Jetty, Ruby on Rails, Tomcat, and JBoss.

Collaborating with application teams to install operating system and Hadoop updates, patches, version upgrades.

Responsible for helping develop appropriate ITIL-based policies and processes for incident, problem, and change management.

Designed and configured Azure Virtual Networks (VNets), subnets, Azure network settings, DHCP address blocks, DNS settings, security policies and routing.

Worked on loading CSV/TXT/DAT files using Scala/Java in the Spark framework, processing the data by creating Spark DataFrames and RDDs, and saving the files in Parquet format in HDFS to load into fact tables using the ORC reader.

Implemented the Actions class in Selenium to handle mouse and keyboard actions.


Designed and implemented RPA tasks using the Blue Prism framework (connectors, VBOs, ACI, Process Studio, Control Room, and System Manager) to update the provider directory from Excel and website data through screen scraping and OCR (optical character recognition) of printed material.

Integration of Application with monitoring tool New Relic for complete insight and proactive monitoring.

Experienced in writing Python, YAML, Ruby, and shell scripts to automate deployment and scheduling.

Centralized monitoring and logging for systems running in the cloud and on-premises, using tools such as New Relic and Splunk.

Performed extensive back-end validations, including designing and executing SQL queries using Oracle SQL Developer and SQL Server for impact analysis purposes.

Worked on Docker containerization and maintained Docker images and containers.

Set up databases using RDS and storage using S3 buckets, configuring instance backups to S3. Prototyped a CI/CD system with GitLab on GKE, utilizing Kubernetes and Docker as the runtime environment for the CI/CD system to build, test, and deploy.

Experienced in shell scripting using ksh, bash, Perl, Ruby and Python to automate system administration jobs.

Created Splunk Search Processing Language (SPL) queries, reports, and dashboards.

Managed AWS infrastructure as code using Terraform.

Good expertise at using Selenium synchronization with conditional and unconditional (implicit, explicit) wait statements.

Developed environments of different applications on AWS by provisioning on EC2 instances using Docker, Bash and Terraform.

Focused on automation, containerization, and integration monitoring and configuration management.

Experience in Server infrastructure development on AWS Cloud, extensive usage of Virtual Private Cloud (VPC), CloudFormation JSON template, CloudFront, ElastiCache, Redshift, SNS, SQS, SES, IAM, EBS, ELK, Auto Scaling, DynamoDB, Route53, and CloudTrail.

Used ADO.NET and data objects such as DataAdapter, DataReader, DataSet, and DataTable for consistent access to SQL data sources.

Client: AT&T, Appleton, WI July 2019 – July 2021

Role: Sr. DevOps/AWS Engineer

Responsibilities:

Experience writing data APIs and multi-server applications to meet product needs using Golang.

Experience with AWS services EC2, VPC, ASG, EBS, ELB, S3, Route 53, DynamoDB, RDS, SNS, CFT, CloudWatch, and CloudFront on private, public, and hybrid cloud infrastructure.

Expertise in migrating key systems from on premise hosting to Amazon Web Services (AWS).

Implemented Dynatrace Managed end to end and deployed OneAgent on various landscape technologies.

Worked on a continuous integration (CI)/continuous delivery (CD) pipeline for Azure cloud services using Chef.

Used Kubernetes to cluster Docker containers in the runtime environment throughout the CI/CD pipeline.

Migrated on premises lower environments to Cloud SQL and GCE in GCP cloud to streamline OLTP.

Created containers for APIs using Docker on Linux for deployment to Rancher Server.

Managed Ansible Playbooks with Ansible modules, implemented CD automation using Ansible, managing existing servers and automation of build/configuration of new servers.

Worked on migrations from Hudson to Jenkins and from ClearCase to GitHub.

Extensive experience in Amazon Web Services (IaaS) migration tasks such as creating AWS VMs, storage accounts, VHDs, and storage pools, migrating on-premises servers to AWS Cloud, and creating availability sets in AWS.

Configured LDAP for VMware Tanzu PCF for multiple applications.

Experience in architecting and Configuring public/private cloud infrastructures utilizing Amazon Web Services (AWS) including EC2, Elastic Load - balancers, Elastic Container Service (Docker Containers), S3, CloudFront, RDS, DynamoDB, VPC, Direct-Connect, Route53, CloudWatch, CloudFormation, IAM.

Initiated the migration of source code from ClearCase to GitHub across the organization.

Implemented Dynatrace on different cloud technologies like AWS, Azure and GCP. Worked on Dynatrace APM End to End implementation.

Onboarded more than 100 .NET projects to VSTS and configured builds and releases.

Experience in installation, configuration, deployment and management of enterprise applications using WebSphere Application server 8.x/7.x/6.x/, WebSphere Portal Server 8.0/7.0/, WebSphere Process Server (BPM) 8.5/7.0 and WebSphere MQ 7.x/8.x on various platforms like AIX, Linux and Windows 2003.

Hands on experience on Unified Data Analytics with Databricks, Databricks Workspace User Interface, Managing Databricks Notebooks, Delta Lake with Python, Delta Lake with Spark SQL.

Used AWS Glue for data transformation, validation, and cleansing.

Involved in designing and deploying multiple applications utilizing almost the entire AWS stack (including EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, and IAM), focusing on high availability, fault tolerance, and auto scaling with AWS CloudFormation.

Extensive experience with the GoLang language and integrating various stacks including Java, JavaScript, AJAX, jQuery, AngularJS, ReactJS, NodeJS, Angular, Bootstrap, JSON, XML and Python.

Experience in developing applications for android operating system using Android Studio, Eclipse IDE, XML, Android SDK and ADT plugin.

Working with AWS services such as EC2, VPC, RDS, CloudWatch, CloudFront, Route 53, etc.

Extensive experience in web development and application development using Visual Studio .NET technologies such as C#, ASP.NET MVC 5, ASP.NET, ADO.NET, XML, Web Services, WCF, and WPF.

Used the AWS Glue catalog with crawlers to get data from S3 and perform SQL query operations.

Worked on Angular with TypeScript and other latest client-side technologies including ReactJS, ES6, Gulp, NodeJS, RxJS, Angular CLI, Webpack, Chrome DevTool, Karma and Jasmine.

Good understanding of Spark Architecture with Databricks, Structured Streaming. Setting Up AWS and Microsoft Azure with Databricks, Databricks Workspace for Business Analytics, Manage Clusters In Databricks, Managing the Machine Learning Lifecycle.

Experience in developing Spark applications using Spark-SQL in Databricks for data extraction, transformation, and aggregation from multiple file formats for Analyzing& transforming the data to uncover insights into the customer usage patterns.

Wrote microservices to export/import data and schedule tasks, using Spring Boot, Spring, and Hibernate in the microservices and the Swagger API for documenting them.

Good experience in writing Helm charts and Kubernetes YAML files for deployment of microservices into Kubernetes clusters.
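A Kubernetes deployment manifest of the kind described above typically looks like the following sketch; the service name, image, replica count, and port are hypothetical values of the sort a Helm chart would template.

```yaml
# Illustrative Kubernetes Deployment for a microservice.
# Name, image, replicas, and probe path are placeholder values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
  labels:
    app: orders-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
```

In a Helm chart, fields such as the image tag and replica count would be pulled from `values.yaml` rather than hard-coded.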

Implemented Atlassian tool upgrades and partnered with other IT staff to coordinate infrastructure maintenance and system migrations.

Maintained cloud infrastructure using AWS EC2, S3, CloudWatch, CloudFormation, and Route 53, and created monitors, alarms, and notifications for EC2 hosts using CloudWatch.

Built Subversion from source code to meet custom requirements and led migrations from UberSVN to Subversion and its administration.

Designed the ETL code using SAP Data Services to implement Type II Slowly Changing Dimensions with Surrogate keys.

Performed regular and periodic backups/replications of VMs using vRange and involved in P2V and V2V conversion with the help of VMware Converter in different environments.

Worked on front-end development using HTML, CSS, JavaScript, jQuery, and JSON.

Analyzed application performance using Dynatrace during testing.

Expertise in using AWS S3 to stage data and to support data transfer and data archival. Experience in using AWS Redshift for large-scale data migrations using AWS DMS and implementing CDC (change data capture).

Developed and managed cloud VMs with AWS EC2 command-line clients and the management console, and implemented DNS service through Route 53 on ELBs to achieve secured connections via HTTPS.

Managed containers using Docker by writing Dockerfiles, set up automated builds on Docker Hub, and installed and configured Kubernetes.

Migrated on-premises infrastructure to AWS Cloud using the rehost ("lift and shift") methodology and developed a continuous integration pipeline using Jenkins to deploy a multitude of applications utilizing AWS services including VPC, EC2, S3, RDS, IAM, Elastic Load Balancing, Auto Scaling, CloudFront, Elastic Beanstalk, and CloudWatch, focusing on high availability, fault tolerance, and auto-scaling.

Client: Walgreens, Killeen, TX Oct 2016 – June 2019

Role: DevOps Engineer

Responsibilities:

Worked on Keystone service on OpenStack to manage the users and groups by assigning the role and policies as per project.

Monitored and fine-tuned system and network performance for server environments running Red Hat Linux, Ubuntu, and Solaris.

Worked on creating the Docker containers and Docker consoles for managing the application life cycle.

Worked in Git implementation containing various Remote repositories for a single application

Created log rotate scripts to clear up space on the servers.

Seeking DevOps opportunities to extend expertise in continuous integration and deployment practices. Experience in DevOps, builds, AWS services, and Salesforce on Linux and Windows environments.

Worked with developers to resolve the git conflicts while merging the feature branches.

Set up SQL Azure firewall and created and managed SQL Server Azure databases.

Created a complete release process document explaining all steps involved in the release process.

Updated database tables by running database scripts.

Provisioned the servers (RHEL/Ubuntu) as per the request of the development and operations.

Configured and maintained Jenkins to implement the CI process and integrated the tool with GIT and Maven to schedule the builds.

Experience in migrating on-premises RHEL and Windows servers to cloud platforms including AWS EC2 and Azure.

Setup/Managing Linux Servers on EC2, EBS, ELB, SSL, Security Groups, RDS, and IAM.

Installed, configured, and maintained Samba, Apache Tomcat, WebSphere, and JBoss servers in AIX and Linux environments.

Troubleshot and resolved build failures due to infrastructure issues, reducing them by 95% and stabilizing the build process.

Developed applications using HTML5, CSS, and Material controls to render HTML, and used TypeScript to code the application logic.

Setting up the auto deployment process for different applications in different environments and implementing the auto deployment process.

Deployed static content to Apache web servers and applications to the Tomcat application server.

Designed and implemented Chef, including internal best practices and automated cookbook CI; proficient in deploying and supporting applications on WebSphere, Tomcat, and WebLogic application servers.

Set up a centralized logging mechanism (ELK with Filebeat) based on Docker.

Extensively worked with Scheduling, deploying, managing container replicas onto a node using Kubernetes and experienced in creating Kubernetes clusters work with Helm charts running on the same cluster resources.

Experience managing multiple Project Object Model (POM) files using Maven, in the multi-tier environment with parent and child POM dependencies.

Experience in open-source Kafka, ZooKeeper, and Kafka Connect.

Wrote Ant and Maven scripts to automate the build process, and configured Bamboo to run builds in all non-production and production environments.

Administered and implemented the CI tools Hudson/Jenkins for automated builds with the help of build tools such as Ant, Maven, and Gradle.

Involved with setting up continuous integration and daily builds using Jenkins with the Artifactory repository manager.

Bootstrapped automation scripting for virtual servers using VMware clusters.

Client: TransUnion, Chicago, IL Feb 2014 – Sep 2016

Role: Linux Administrator

Responsibilities:

Installed, configured, and maintained Debian/Red Hat servers at multiple data centers.

Configured Red Hat Kickstart for installing multiple production servers.

Installed Red Hat Satellite 6 Server.

Installed, configured, and administered DNS, LDAP, NFS, NIS, and Sendmail on Red Hat Linux/Debian servers.

Installed, configured, and maintained Active Directory on Windows Server 2008.

Migrated from SUSE Linux to Ubuntu Linux.

Created audit reports using aureport.

Configured and maintained OVM.

Configured, maintained, and installed OEL.

Installed, configured, and administered VMware on IBM blade servers.

Experience working with production servers at multiple data centers.

Experience using the BMC BladeLogic Client Automation tool.

Installed Checkpoint firewalls to secure networks.

Experience in migrating consumer data from one production server to another over the network with the help of Bash and Perl scripting.

Used Puppet for system monitoring and automation.

Installed and configured monitoring tools such as Munin and Nagios for monitoring network bandwidth and hard drive status.

Developed and supported the Red Hat Enterprise Linux-based infrastructure in the cloud environment.

Configured Azure VMs for Windows systems.

Experience with using Centrify.

Developed automation scripting in core Python using Puppet to deploy and manage Java applications across Linux servers.

Education: Bachelor's in Computer Science from Alliance University, Bangalore, India (2012).


