DevOps Automation Engineer

Location:
Dallas, TX
Salary:
$70/hr
Posted:
February 21, 2024

Contact this candidate

Resume:

Anwar Hussain Shaik.

Senior Azure DevOps Engineer | Phone: 469-***-**** | Email: ad3s7o@r.postjobfree.com

LinkedIn: https://www.linkedin.com/in/anwar-shaik-036a5316

SUMMARY

** ***** ** ** *****ience, which includes 6 years of experience as a DevOps automation and CI/CD (Continuous Integration/Continuous Deployment) engineer working on DevOps/Agile operations processes and developer tooling (code review, unit-test automation, build and release automation, environment management), along with Kubernetes cluster administration.

4 years of experience as a Linux Administrator. Experienced in Linux system administration, build engineering, and the release-management process, including end-to-end code configuration, building binaries, deployments, and the entire life-cycle model for enterprise applications and cloud implementations tailored to the needs of the environment under a DevOps culture.

In-depth knowledge of DevOps management methodologies and production deployment configurations. Skilled in software development life cycles and Agile programming methodologies. Implemented blue-green deployment techniques using Spinnaker, reducing release downtime and making rollbacks simple in the event of problems.

Designed and created Helm charts to expedite the deployment of Kubernetes applications and guarantee consistency across environments.

Proficient in using VSTS for version control, including Git repositories.

Experienced in using PowerShell for Infrastructure as Code, especially with tools like Azure Resource Manager (ARM) templates and AWS CloudFormation. Experienced in Ruby, PowerShell, and Unix/macOS shell scripting.

Expertise in configuring Azure Web Apps, Azure App Services, Azure Application Insights, Azure Application Gateway, Azure DNS, Azure Traffic Manager, analyzing Azure networks with Azure Network Watcher, and implementing Azure Site Recovery, Azure Stack, Azure Backup, and Azure Automation.

Well versed in building and deploying applications to different environments such as QA, UAT, and production by developing utilities in Shell and Python scripting.

Created infrastructure in AWS and Azure using Infrastructure as Code with the help of Terraform scripts.

Created Azure Automation Assets, Graphical runbook, PowerShell runbook that will automate specific tasks, deployed Azure AD Connect, configuring Active Directory Federation Service (AD FS) authentication flow, ADFS installation using Azure AD Connect, and involved in administrative tasks that include Build, Design, Deploy of Azure environment.

Competent with essential DevOps tools like Docker, Git, Bitbucket, Jenkins, Maven, ADO pipelines, Terraform, and Ansible.

Set up CI/CD pipelines with Jenkins, with hands-on experience building freestyle, pipeline, and multi-branch pipeline jobs through a Jenkinsfile.

Defined dependencies and plugins in the Maven pom.xml for various activities and integrated Maven with Git to manage and deploy project-related tags.

Strong background in .NET development, including proficiency in C#.

Experience with .NET Core and its cross-platform capabilities.

Worked on containerizing OpenStack services in Docker using Ansible.

Authored pom.xml and build.xml files, performed releases with the Maven and Ant release plugins, managed artifacts, and performed .NET builds.

Knowledge of ASP.NET for web applications and services.

Experienced in Java, Python, and Ruby development, including testing with standard test frameworks, dependency-management systems, and Java garbage-collection fundamentals.

Strong scripting skills in languages such as JavaScript, PowerShell, or Python.

Expertise in using PowerShell DSC (Desired State Configuration) for configuring and maintaining consistent server states.

Wrote Chef recipes for deployments to internal data-center servers; reused and modified the same recipes to deploy directly to Amazon EC2 instances.

Strong knowledge of Windows PowerShell for writing new scripts and refactoring existing ones.

Strong understanding of object-oriented programming (OOP) principles and design patterns in Java.

Managed the Maven environment by setting up local, remote, and central repositories with the required settings in the Maven configuration files.

Experienced with containerization using Docker, generating Docker images from a Dockerfile and pushing those images to private registries.

Well experienced in using Kubernetes and Docker as the runtime environment for the CI/CD system to build, test, and deploy.

Proficient in designing, implementing, and maintaining ELK Stack solutions for log aggregation, analysis, and visualization.

Extensive experience with AEM, including installation, configuration, and administration.

Adept at creating and managing CI/CD pipelines for AEM applications.

Experienced in prominent AWS services like CloudWatch, CloudTrail, and CloudFormation. Worked on Azure and AWS DevOps tools like ADO pipelines and AWS CodePipeline to build continuous-integration/continuous-delivery workflows using AWS CodeBuild and AWS CodeDeploy, and with many other AWS tools to build and deploy a microservices architecture using ECS.

Created AWS CloudFormation templates to create custom-sized VPCs, subnets, EC2 instances, ELBs, and security groups. Worked on a tagging standard for proper identification and ownership of EC2 instances and other AWS services like CloudFront, CloudWatch, RDS, S3, Route 53, SNS, SQS, and CloudTrail.
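A CloudFormation template like those described above can be sketched with only the Python standard library; the resource names, CIDR ranges, and tag values here are illustrative assumptions, not details from any actual project.

```python
import json

# Hypothetical sketch: a minimal CloudFormation template (VPC, subnet, EC2
# instance with an ownership tag) built as a plain dict and emitted as JSON.
def make_template():
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Custom-sized VPC with one subnet and a tagged EC2 instance",
        "Resources": {
            "AppVpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": "10.0.0.0/16"},
            },
            "AppSubnet": {
                "Type": "AWS::EC2::Subnet",
                "Properties": {
                    "VpcId": {"Ref": "AppVpc"},
                    "CidrBlock": "10.0.1.0/24",
                },
            },
            "AppInstance": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": "t3.micro",
                    "SubnetId": {"Ref": "AppSubnet"},
                    # Tagging standard for identification and ownership
                    "Tags": [{"Key": "Owner", "Value": "devops-team"}],
                },
            },
        },
    }

if __name__ == "__main__":
    print(json.dumps(make_template(), indent=2))
```

The `Ref` intrinsic wires the subnet to the VPC and the instance to the subnet, which is what keeps the template self-contained.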

Orchestrated and migrated CI/CD processes using CloudFormation and Terraform templates, and containerized the infrastructure using Docker across Azure and AWS VPCs.

Designed and implemented CI/CD pipelines using VSTS to automate the build, test, and deployment processes.

Proficient in ServiceNow development and configuration.

Experience with ServiceNow IT Service Management (ITSM) and IT Operations Management (ITOM) modules.

Expertise in using Ansible to manage web applications, config files, databases, commands, users, mount points, and packages, and to assist in building automation policies.

Configured Spinnaker to monitor and display important deployment metrics using integrations with tools such as Prometheus and Grafana.

Contributed to the Grafana open-source community by sharing knowledge, submitting bug fixes, and participating in discussions to improve the platform's functionality and usability.

Automated dashboard creation and maintenance using Grafana's APIs and scripting languages such as Python or Bash, reducing manual effort and ensuring consistency across environments.
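As a hedged sketch of that dashboard automation: Grafana's HTTP API accepts a JSON payload at `/api/dashboards/db` shaped roughly like the one below. The panel contents, folder id, and dashboard title are illustrative assumptions.

```python
import json

# Hypothetical sketch: build the JSON payload Grafana's dashboard API expects.
def build_dashboard_payload(title, panels, overwrite=True):
    return {
        "dashboard": {
            "id": None,          # None asks Grafana to create a new dashboard
            "title": title,
            "panels": panels,
            "schemaVersion": 16,
        },
        "folderId": 0,
        "overwrite": overwrite,  # replacing drift keeps environments consistent
    }

payload = build_dashboard_payload(
    "Node health", [{"type": "graph", "title": "CPU usage"}]
)
body = json.dumps(payload).encode()  # POST this to <grafana>/api/dashboards/db
```

Generating payloads from code, rather than hand-editing in the UI, is what makes the dashboards reproducible across environments.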

Experience using tools like Ansible, Puppet, or Chef for Windows infrastructure automation.

Profound experience in Configuring and management of cloud infrastructure using Azure which includes virtual machine, App service & AKS, Amazon Web Services (AWS) including EC2, Auto Scaling, EBS, Elastic Beanstalk, S3, VPC, Elastic Load Balancer, Cloud Watch, Cloud Formation, SNS, IAM, and SES.

Managed Azure and AWS storage infrastructure systems and configured cloud platform components such as multi-site and cross-site virtual networks, VMs, AWS Active Directory, and Elastic Load Balancers.

Optimized volumes and EC2 instances, set up security groups, and created multi-AZ VPC instances.

Performed capacity planning and performance analysis, including uptime and response-time analysis.

Ample knowledge of using Tomcat and IIS application servers for deployments.

Experienced with MS SQL Server database tasks (DML and DDL executions, log rotations, tablespace adjustments, client administration, and so forth).

Responsible for all aspects of the Software Configuration Management (SCM) process, including code compilation, packaging, deployment, release methodology, and application configurations.

Focused on high availability, fault tolerance, and auto scaling in CloudFormation. Created snapshots and Amazon Machine Images (AMIs) of instances for backup and for creating clone instances.

Exposure to Remedy, ALM, JIRA tracking tools for tracking defects and changes for Change Management.

Created guidelines and documentation for team members on effective Git usage.

Integrated Git with collaboration tools like Jira, Confluence, or Slack for enhanced communication and project management.

Major strengths include familiarity with multiple software systems; the ability to learn new technologies quickly and adapt to new environments; and being a self-motivated, focused, adaptive team player and quick learner with excellent interpersonal, technical, and communication skills.

Proficient in designing, implementing, and managing Redwood deployments for large-scale applications, ensuring high availability, scalability, and fault tolerance.

Extensive experience using Terraform to define Redwood infrastructure components as code, enabling automated provisioning and configuration management of Redwood environments.

Proficient in containerizing Redwood applications using Docker and managing them at scale with Kubernetes, ensuring efficient resource utilization and easy application deployment and management.

Certifications

Certified Azure Administrator

Certified Kubernetes Administrator

Certified AWS Developer – Associate

Education Qualification

Bachelor's in Computer Science, GITAM University, Visakhapatnam.

Technical Skills

Operating Systems

Linux, Windows, Unix

Languages

YAML, Shell, JavaScript, Python, Java, .NET, PowerShell

Databases

MySQL, MongoDB, PL/SQL, DynamoDB, PostgreSQL

Artifactory Repositories

ACR, ECR, JFrog Artifactory

CM Tools

Ansible, Terraform, Chef, Puppet

Web/Application Servers

Tomcat, IIS

Build Tools

Maven, .NET

CI Tools

Azure DevOps Pipelines, AWS CodePipeline, Jenkins, GitHub Actions

Containerization &amp; Orchestration

Docker, Kubernetes.

SCM Tools

Atlassian Bitbucket, Azure Repos, GitLab

Bug Reporting Tools

JIRA, ServiceNow

System Monitoring Tools

Splunk, CloudWatch, Prometheus, Log Analytics Workspace

Methodologies

Agile/Scrum, Waterfall

IAC tools

Terraform

Cloud

Amazon Web Services: EC2, ECS, S3, ELB, Auto Scaling, KMS, Elastic Beanstalk, VPC, Direct Connect, Route 53, CloudTrail, Lambda

Azure: VM, Blob Storage, VMSS, App Service, ACR, AKS

OpenStack

Experience

Client: North Carolina Health and Human Services, Raleigh, NC Duration: July 2022 to Present

Role: DevOps Engineer / Azure

Responsibilities:

Created CI/CD pipelines for the deployment of services and tools to the Kubernetes cluster hosted on bare metal.

Deployed CNFs on Kubernetes clusters using Helm charts and the TCA tool.

Created value files based on test deployments done on test clusters and promoted them to production clusters.

Installed and configured the ELK Stack on the environment to ship logs from applications hosted on a cluster.

Managed Azure infrastructure: Azure Web Roles, Worker Roles, VM Role, Azure SQL, Azure Storage, Azure AD licenses, and virtual-machine backup and recovery from a Recovery Services vault using Azure PowerShell and the Azure Portal.

Responsible for implementing containerized applications on Azure Kubernetes Service (AKS), which handles cluster management.

Provided training and support to team members on Grafana best practices, data visualization techniques, and dashboard design principles.

Designed, implemented, and maintained Grafana dashboards to monitor the performance and health of infrastructure components such as servers, databases, and network devices.

Utilized ELK Stack to analyze logs and troubleshoot issues quickly, improving system reliability and reducing downtime.

Configured and automated Azure DevOps pipelines and Jenkins pipelines/build jobs for continuous integration and deployments from Dev through Production environments.

Managed Redwood application configurations effectively using tools like Ansible or Puppet, ensuring consistency and reliability across different environments and deployments.

Implemented security best practices and compliance standards (such as HIPAA, GDPR) in Redwood environments, ensuring data protection, access control, and regulatory compliance.

Utilized VSTS DevOps for Infrastructure as Code (IaC) with tools like ARM templates, Terraform, or Ansible.

Worked on continuous-integration and continuous-delivery jobs for several teams in dev and test environments using Shell and Groovy.

Set up Bugzilla on a Linux VM to keep track of bugs in the deployment cycle and environment issues.

Created an automation script to back up Bugzilla data nightly and store the backups in a specific storage location.
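A minimal sketch of such a nightly backup script, assuming the Bugzilla data lives in a plain directory; the paths and file-name scheme are illustrative, not taken from the actual setup.

```python
import tarfile
import time
from pathlib import Path

# Hypothetical sketch: archive a data directory into a timestamped tar.gz
# under a backup location (which a cron entry would run nightly).
def backup(data_dir, backup_dir, stamp=None):
    stamp = stamp or time.strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"bugzilla-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # arcname roots the archive at the directory name, not the full path
        tar.add(data_dir, arcname=Path(data_dir).name)
    return archive
```

The timestamp in the file name keeps successive nightly runs from overwriting each other.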

Enabled security parameters using the ACL and gossip-encryption-key features on Consul.

Created CI/CD pipelines that integrate with Vault and retrieve secrets for use in pipeline jobs.

Worked on automating cron jobs that schedule dev, model, and prod jobs and disable each job after execution, as self-service for developers.

Restricted user and service-account access to jobs on Jenkins by assigning and managing roles, for security in development and test environments.

Scripted AEM tasks and workflows for efficiency and repeatability.

Monitored and resolved disk-space issues on nodes connected to Jenkins in dev and test environments.

Generated Jenkins reports on jobs executed for each business channel over a given period, to aid metrics reviews.

Created hooks on Bitbucket repositories to aid automation of Jenkins jobs.

Expertise in using tools like Maven and Gradle for building Java applications.

Created jobs to handle deployments to F5 load-balanced environments in dev.

Configured Fortify static code analysis on Azure DevOps/Jenkins jobs in dev and test.

Configured Azure Key Vault services for development teams to handle secrets in dev, test, and production environments, using both the UI and the CLI in Jenkins jobs.

Configured on-prem servers on Jenkins to aid dev and test deployments for several teams; managed and maintained credentials on Jenkins.

Created Bitbucket projects and repositories based on the taxonomy standards set by the architecture department. Migrated repositories and Jenkins jobs from Git and SVN to Bitbucket.

Integrated Git with Continuous Integration/Continuous Deployment (CI/CD) pipelines for seamless delivery.

Created and managed the Azure and AWS cloud infrastructure for applications from various channels in the organization using Terraform.

Monitored certificates on applications maintained in the cloud to track expiring certs and renew them before they cause application failures.
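A hedged sketch of that certificate-expiry check: the helper below parses the `notAfter` string format returned by Python's `ssl.getpeercert()` (e.g. `'Jun  1 12:00:00 2030 GMT'`) and computes days remaining. The renewal action itself, and how endpoints are enumerated, are omitted as deployment-specific.

```python
from datetime import datetime, timezone

# Format of the 'notAfter' field in ssl.getpeercert() output.
NOT_AFTER_FMT = "%b %d %H:%M:%S %Y %Z"

def days_until_expiry(not_after, now):
    """Days from `now` (a tz-aware datetime) until the cert expires.

    Negative means the certificate is already expired.
    """
    expires = datetime.strptime(not_after, NOT_AFTER_FMT).replace(
        tzinfo=timezone.utc
    )
    return (expires - now).days
```

A monitoring loop would call this per endpoint and alert when the result drops below some renewal window (say 30 days).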

Configured Azure VM & EC2 instances and launched the new instances with the same configuration using AMIs.

Provisioned VM & EC2 instances of different types by creating security groups and managing EBS volumes.

Worked on Auto Scaling to provide high availability of applications and EC2 instances based on application load, using CloudWatch in AWS.

Designed and implemented disaster recovery strategies and backup solutions for Redwood applications and data, ensuring business continuity and data integrity in the event of failures or disasters.

Deployed artifacts to staging and production environments from artifact tools like ECR and ACR; built Docker images and published them to a DTR repo.

Optimized SQL queries and database performance for efficient data retrieval and processing.

Worked closely with development teams to optimize SQL queries, database schema, and overall database performance.

Monitored deployed applications using performance-monitoring tools like the ELK Stack and Grafana.

Managed and monitored cloud resources and services in AWS using CloudWatch.

Created alarms and trigger points in CloudWatch based on thresholds, monitoring the servers' performance, CPU utilization, and disk usage in dev and test environments.
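The CloudWatch-style threshold logic behind such alarms can be sketched as a small pure function. The "N consecutive breaching periods" rule mirrors CloudWatch's evaluation-periods behavior, but this is an illustrative simplification, not the service's actual algorithm.

```python
# Hypothetical sketch: decide an alarm state from a series of metric
# datapoints (e.g. per-minute CPU utilization percentages).
def alarm_state(datapoints, threshold, evaluation_periods):
    recent = datapoints[-evaluation_periods:]
    if len(recent) < evaluation_periods:
        # Not enough periods observed yet to make a decision.
        return "INSUFFICIENT_DATA"
    # ALARM only when every one of the last N periods breaches the threshold,
    # which filters out short spikes.
    return "ALARM" if all(v > threshold for v in recent) else "OK"
```

Requiring consecutive breaches is what keeps a one-minute CPU spike from paging anyone.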

Monitored Kubernetes cluster jobs and performance.

Worked on upgrading the Kubernetes cluster and commissioning and decommissioning nodes and pods.

Environment: Jira, Confluence, Bitbucket, Jenkins, AWS Cloud (VPC, EC2, S3, Auto Scaling, ECS, CloudWatch, Elasticsearch), Azure Cloud (VM, Blob, VMSS, VNET, AKS), Kibana, Linux (RHEL), Windows, Terraform, Python, Shell scripting, GitLab CI/CD, Kubernetes, Vault, YAML, TCA, Grafana

Client: Walmart Global Tech, Sunnyvale, CA Duration: June 2020 to June 2022

Role: AWS DevOps Engineer

Responsibilities:

Automated Build and Deployment process-setup Continuous Integration and Continuous Deployment of applications onto different environments like Dev, QA, and Production.

Designed, deployed, and maintained application servers on AWS infrastructure using services like EC2, S3, Glacier, VPC, Lambda, Route 53, SQS, IAM, CodeDeploy, CloudFront, RDS, and CloudFormation.

Implemented various services in AWS, like VPC, Auto Scaling, S3, CloudWatch, and EC2.

Worked with different AWS EC2 instance types, created AWS AMIs, managed volumes, and configured security groups.

Experienced in creating AWS AMIs; used HashiCorp Packer to create and manage them.

Created CloudFormation templates using JSON.

Developed Chef Recipes to configure, deploy and maintain software components of the existing infrastructure.

Proficient in automation tools such as PowerShell, Ansible, Puppet, or Chef for configuration management.

Implemented alerting and notification mechanisms within Grafana to proactively identify and respond to critical issues, minimizing downtime and service disruptions.

Deployed Docker Engines on AWS platforms to containerize multiple applications; dockerized applications, including packaging, tagging, and pushing images to the JFrog/Nexus Artifactory.

Used Maven to build RPMs from source code checked out from a Subversion repository, with Jenkins and Artifactory as the repository manager.

Stayed current with emerging Grafana features, updates, and industry trends to continuously enhance monitoring capabilities and drive operational excellence.

Implemented automation of AWS infrastructure via Terraform and Jenkins, and software and service configuration via Ansible playbooks.

Implemented security best practices in ServiceNow configurations.

Utilized Python for creating custom monitoring scripts and plugins to collect, analyze, and visualize system performance metrics.

Leveraged VSTS DevOps to enhance team collaboration, manage work items, and track project progress.

Worked with CloudWatch to monitor environment instances for operational and performance metrics during load testing.

Designed data models used in data-intensive AWS Lambda applications that perform complex analysis, creating analytical reports for end-to-end traceability, lineage, and definitions of key business elements from Aurora.

Worked with AWS S3 to create buckets and configure them with logging, tagging, and versioning.

Created trigger points and alarms in CloudWatch based on thresholds and monitored logs via metric filters.

Used JFrog Artifactory to store and maintain the artifacts in the binary repositories and push new artifacts by configuring the Jenkins project using Jenkins Artifactory Plugin.

Hands-on experience in data modeling for Cassandra, optimizing schema design, and query performance tuning.

Implementing backup and recovery strategies for Cassandra databases to ensure data integrity and availability.

Used the AWS CLI to suspend an AWS Lambda function and to automate backups of ephemeral data stores to S3 buckets and EBS.

Experienced in continuous integration and continuous deployment using CI tools like Jenkins and Hudson, automating deployments to the JBoss and Tomcat application servers.

Ran MongoDB in a container built from a Dockerfile and linked it with a new client container to access the data.

Mirrored the Docker images required for Spinnaker from external registry to private Docker Registry.

Maintained containers running on cluster nodes managed by OpenShift (Kubernetes).

Maintained single- and multi-container pod storage inside a node of an OpenShift (Kubernetes) cluster.

Planned, implemented, and tested a new data center for disaster-recovery migrations; worked in an Agile/Scrum environment with daily stand-up meetings.

Updated Maven scripts to use the Artifactory repo instead of local repositories.

Worked on the AWS IAM service, creating users and groups, defining policies and roles, and configuring identity providers.

Worked with Terraform key features such as Infrastructure as Code, execution plans, resource graphs, and change automation.

Used OpenShift with a Dockerfile to build images and upload them to the Docker registry.

Created Jenkins on top of Kubernetes in a team environment to remove dependencies on other teams.

Maintained cloud infrastructure using a combination of Jenkins, Ansible, and Terraform to automate the CI/CD pipeline in AWS.

Implemented robust monitoring solutions for DynamoDB, utilizing tools like CloudWatch to track performance metrics, set alarms, and respond to incidents proactively.

Integrated DynamoDB logging into centralized logging systems for comprehensive visibility.

Implemented security best practices for PostgreSQL, including user access controls, encryption, and compliance with relevant standards (e.g., GDPR, HIPAA).

Experienced with event-driven and scheduled AWS Lambda functions to trigger various AWS resources.

Involved in writing a Java API for AWS Lambda to manage some of the AWS services.

Automated the web application testing with Jenkins and Selenium.

Automated the continuous integration and deployments CI/CD using Jenkins, Docker, Ansible and AWS Cloud Templates.

Used Jenkins pipelines to drive all microservice builds out to the Docker registry, then deployed to Kubernetes by creating pods managed with Elastic Kubernetes Service (EKS).

Environment: Amazon Web Services, EKS, Jenkins, Ansible, Kubernetes, Python, PowerShell, Jira, WebLogic, UNIX, VMware, Artifactory, Shell, Perl, JSON, Docker, PostgreSQL, Cassandra, Git, GitHub, Bitbucket, ELK.

Client: Truist Bank, NC. Duration: Sep 2018 - May 2020

Role: Cloud DevOps Engineer

Responsibilities:

Involved in Migration to AWS and implemented the Serverless architecture using the Various AWS services like AWS API Gateway, CloudWatch, Elasticsearch, SQS, DynamoDB, Lambda Functions, CloudFormation, S3, etc.

Automated the infrastructure in cloud Automation using AWS Cloud Formation templates, Serverless Application Model Templates and deployed the infrastructure using Jenkins.

Designed, developed, and deployed the complete CI/CD pipeline on the cloud and managed services on AWS.

Implemented and maintained CI/CD pipelines for AEM applications, ensuring seamless and reliable releases.

Created various stacks in CloudFormation which includes services like Amazon EC2, Amazon S3, API Gateway, Amazon RDS, Amazon Elastic Load Balancing, Athena.

Provisioned Azure VMs and EC2 instances using ARM/SAM templates.

Involved in Architectural design and implemented the CloudFormation Templates for the whole AWS infrastructure.

Worked with Auth0 and JSON web tokens for authentication and authorization security configurations using Node.js.

Implemented forking/mirroring of HTTP requests to the AWS Cloud and on-prem servers using the mirror module in NGINX Plus.

Created CloudWatch alarms to monitor application performance and live traffic, throughput, latencies, and error codes, and to notify users via SNS.

Utilized New Relic data to identify and address performance bottlenecks in applications and infrastructure.

Worked on AWS Lambda to run the code in response to events, such as changes to data in an Amazon S3 bucket, HTTP requests using AWS API Gateway, and invoked the code using API calls made using AWS SDKs.

Involved in setting up Ansible and Terraform and installing/upgrading them to the latest versions.

Used Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.

Hands-on experience with Amazon EKS to manage Containers as a Service (CaaS) and simplify Kubernetes deployments in AWS.

Used Amazon EKS to create Kubernetes workers through the EKS wizard, and Puppetized the Linux configurations: services (NGINX Plus, Keepalived, Chrony, Cassandra), audits, Yum updates, patches, application and middleware servers (WSO2 ESB, Identity, Gateway), users, cron jobs, NFS, and share mounts.

Set up monitoring solutions for Cassandra clusters using tools such as Prometheus, Grafana, or DataStax OpsCenter.

Worked with Ansible to deploy both OpenStack and non-OpenStack components, like the Nagios infrastructure, Resource Orchestrator, and different applications on KVM VMs.

Worked with fully automated Ansible and VMware vCenter environments for OpenStack and non-OpenStack deployments; used Nagios to monitor OpenStack and non-OpenStack services and wrote plugins for monitoring services.

Customized and developed Puppet modules and Ruby templates for applications like New Relic, NGINX Plus, SVN Mirror, RabbitMQ, DB patching, backups, and updates. Worked extensively on high-performance load-balancer servers like NGINX; installed and configured the NGINX service for load balancing and reverse proxying of incoming traffic.

Implemented Infrastructure as Code using tools like AWS CloudFormation or Terraform to automate DynamoDB table provisioning and configuration.

Created SSL and digital certificates for secure communication between servers using OpenSSL and keytool.

Installed, configured, and maintained SonarQube for static code analysis, code coverage, and quality gates.

Developed and implemented software release management strategies for various applications according to the agile process.

Integrated ELK Stack with DevOps tools like Jenkins, Ansible, and Docker for seamless automation and continuous monitoring.

Installed Chef Server Enterprise on premises, set up the workstation, bootstrapped nodes using knife, and automated testing of Chef recipes/cookbooks with Test Kitchen and ChefSpec.

Automated build, test, and deployment processes for Windows applications.

Experience in deploying PostgreSQL in containerized environments using Docker and orchestrating with Kubernetes.

Managed PostgreSQL instances within container orchestration platforms for scalability and ease of maintenance.

Integrated DynamoDB into CI/CD pipelines to ensure consistent and reproducible deployments.

Ensured smooth communication between ELK and other systems for a comprehensive DevOps workflow.

Developed, Supported, and Monitored the application in case of any Production issues and worked on-call support.

Integrated VSTS with collaboration tools like Microsoft Teams or Slack for improved communication.

Environment: AWS, RHEL (6, 7), Ansible, Jenkins, Windows, Bitbucket, SonarQube, NGINX Plus (R16), WSO2 ESB (4.9.0), New Relic, Splunk, YAML, JSON, JMeter, ELK, Shell scripting.

Client: Wells Fargo Home Mortgage, Des Moines, IA Duration: June 2017 to Aug 2018

Role: Build Release Engineer

Responsibilities:

Created and maintained user accounts in Red Hat Enterprise Linux (RHEL) and other operating systems.

Troubleshot and maintained TCP/IP, Apache HTTP/HTTPS, SMTP, and DNS applications.

Configured NIS, DNS, NFS, Samba, Sendmail, LDAP, TCP/IP, FTP, and remote-access Apache services in Linux and Unix environments.

Migrated different projects from Perforce to SVN.

Performed NIC bonding on Linux systems for redundancy.

Diagnosed and resolved problems associated with DNS, DHCP, VPN, NFS, and Apache.

Created Bash/shell scripts to monitor system resources and perform system maintenance.
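One such resource-monitoring script can be sketched in a few lines with the standard library; the 90% threshold and the paths are illustrative assumptions.

```python
import shutil

def pct_used(total, used):
    """Percentage of capacity in use, rounded to one decimal."""
    return round(100.0 * used / total, 1)

# Hypothetical sketch: warn when a filesystem crosses a usage threshold,
# the kind of check a cron job would run and email about.
def check_disk(path="/", threshold=90.0):
    usage = shutil.disk_usage(path)
    pct = pct_used(usage.total, usage.used)
    status = "WARN" if pct >= threshold else "OK"
    return status, pct
```

Splitting the arithmetic into `pct_used` keeps the threshold logic testable without touching a real filesystem.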

Created and updated documentation for the current patching process; coordinated with lines of business to schedule patching.

Implemented and managed infrastructure deployments using tools like Terraform with Python scripts.

Installed, tested and deployed monitoring solutions with Splunk services.

Resolved configuration-based issues in coordination with infrastructure support teams.

Maintained and managed assigned systems and resolved Splunk-related issues for administrators.

Skilled in deploying, configuring and administering Splunk clusters.

Installed and configured servers using Red Hat Linux Kick Start method.

Used puppet configuration tool for some recurring tasks as per request.

Integrated AEM with CI/CD tools to automate testing, building, and deployment processes.

Integrated New Relic APM (Application Performance Monitoring) to track and analyze application performance.

Set up New Relic Infrastructure to monitor servers, containers, and cloud-based resources.

Created Linux Virtual Machines using VMware Virtual Center, creating VM Templates, and troubleshooting all Virtualization related issues.

Automated data lake deployment processes to improve efficiency and reduce errors.

Implemented infrastructure as code (IaC) practices to ensure consistency and repeatability in data lake environments.

Expertise in remote-access and application-virtualization technologies such as Xen, VMware ESX, and ESXi on Linux.

Implemented strategies to scale the ELK Stack.


