Post Job Free

Configuration Management US Citizen

Location:
Norwalk, CT
Salary:
$140
Posted:
September 08, 2025

Contact this candidate

Resume:

RAZA SHEIKH – US Citizen

Email: **********@*****.***

Mobile: +1

Professional Summary:

IT professional with 10+ years of experience as a DevOps AWS Engineer, AWS/DevOps/ML Engineer, Build and Release Engineer, and System Administrator.

Extensive experience with migrations from on-premises infrastructure to the cloud.

Worked on automation tools, primarily in the areas of DevOps, Continuous Integration and Continuous Delivery/Deployment (CI/CD) pipelines, and build and release management.

Have a good understanding of both operations and development, enabling quick delivery.

Solid experience in automating, configuring, and deploying instances on Amazon Web Services (AWS), including EC2, ELB, Auto Scaling, S3, VPC, Route53, CloudWatch, CloudTrail, AMI, IAM, Security Groups, SNS, and Roles.

Experienced in setting up Amazon EC2 instances, Virtual Private Clouds (VPCs), and security groups; setting up databases in AWS using RDS, storage using S3 buckets, and configuring instance backups to S3.
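
As an illustrative sketch of this kind of EC2 provisioning (not taken from this resume — the AMI, subnet, and security-group IDs below are placeholders), a Boto3 script might separate the request-building logic from the live API call:

```python
def instance_request(ami_id, subnet_id, sg_id, name):
    """Build run_instances parameters for a single tagged t3.micro."""
    return {
        "ImageId": ami_id,
        "InstanceType": "t3.micro",
        "MinCount": 1,
        "MaxCount": 1,
        "SubnetId": subnet_id,
        "SecurityGroupIds": [sg_id],
        "TagSpecifications": [
            {
                "ResourceType": "instance",
                "Tags": [{"Key": "Name", "Value": name}],
            }
        ],
    }


def provision(ami_id, subnet_id, sg_id, name, region="us-east-1"):
    """Launch the instance (live AWS call; requires credentials)."""
    import boto3  # imported lazily so the builder above works offline

    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.run_instances(**instance_request(ami_id, subnet_id, sg_id, name))
    return resp["Instances"][0]["InstanceId"]
```

Keeping the parameter builder pure makes it easy to unit-test the configuration without touching AWS.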

5+ years of hands-on experience with Azure and a strong understanding of Azure capabilities and limitations, primarily in the IaaS space.

Architecture and implementation experience with medium-to-complex on-premises to Azure migrations.

Experience in using configuration management tools like Chef, Puppet and Ansible.

Worked on Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, spin up nodes in AWS, and proactively manage changes using Ansible Playbooks.

Experience writing Chef cookbooks covering Chef Server, Chef Automate, Chef Workstation, Chef Nodes, Chef Client, and other Chef components.

Experience configuring and managing the Puppet master server, creating and updating modules, and pushing them to Puppet clients. Worked with the Puppet Dashboard and PuppetDB for configuration management.

Administered the entire Docker setup, including cluster management, and forwarded traffic from Docker containers to EC2 instances.

Created and managed a Docker deployment pipeline for custom application images in the cloud

Set up Jenkins jobs that create a Docker image after each successful build, launch the server with that image, and maintain image tagging as a rollback strategy.
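
One common tagging scheme behind this kind of rollback strategy (sketched here as an assumption, not a detail from this resume) is to push an immutable per-build tag alongside a moving `latest` tag:

```python
def image_tags(repo, build_number, git_sha):
    """Return the tags pushed for one build: an immutable build tag plus 'latest'.

    Keeping the immutable tag around is what makes rollback possible:
    redeploying an older build is just re-launching its tag.
    """
    build_tag = f"{repo}:{build_number}-{git_sha[:7]}"
    return [build_tag, f"{repo}:latest"]
```

A Jenkins job would compute these from its build number and the checked-out commit, then `docker build`/`docker push` each tag.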

Expertise in designing and developing efficient, reusable, and reliable backend software in C and C++, with multithreading on UNIX and Linux platforms using the Boost library and STL containers such as sets, maps, lists, stacks, and queues, plus data structures and algorithms.

Linux (RedHat 6/7), UNIX, and IBM AIX system administration: server installations, configurations, upgrades, and migrations on Linux and AIX.

Experienced in setting up a Continuous Delivery environment using Docker and continuous build and delivery tools.

Familiar with Kubernetes for orchestration and management of containers across multiple server hosts

Experience in administering, deploying, and managing RedHat, Ubuntu, and CentOS servers.

Experienced in building deployment and automated solutions using shell scripting and Python.

Efficient in building and configuring the image using Docker.

Understanding of TCP/IP, data networks (LAN/WAN) and IP tables

Working with Kubernetes to deploy applications.

Ability to deploy, manage, and operate scalable, highly available, and fault-tolerant systems.

Experience working with various Python integrated development environments such as Vim, PyCharm, and Atom.

Education details:

University of Punjab

Bachelor's in Computer Science, 2014

Skills

Cloud Platforms: AWS (EC2, S3, VPC, RDS, Lambda, ECS, EKS, CloudFormation, CloudWatch), Azure

DevOps Tools: Terraform, Ansible, Jenkins, GitHub Actions, CodePipeline, CodeBuild, Docker, Kubernetes

CI/CD Pipelines: Jenkins, GitLab CI/CD, AWS CodePipeline

Configuration Management: Ansible, Chef, Puppet

Monitoring & Logging: CloudWatch, Prometheus, Grafana, ELK Stack

Programming Languages & Scripting: C, C++, Bash, Python, PowerShell

Version Control: Git, Bitbucket

Security & Compliance: IAM, KMS, Secrets Manager, Security Groups, GuardDuty

Operating Systems: Linux (Amazon Linux, Ubuntu, CentOS), Windows Server

Professional Experience:

Client: Early Warning Services, Remote Jun 2022 - Present

Role: Senior DevOps Engineer / Site Reliability Engineer

Responsibilities:

Configured Auto Scaling groups and Elastic Load Balancers for launching EC2 instances using CloudFormation templates; configured Ansible to manage AWS environments and automate the build process.

Experience using AWS Lambda to execute code in response to triggers from AWS services such as S3, DynamoDB, Kinesis, SNS, and CloudWatch.
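
For illustration (a minimal sketch, not code from this resume), an S3-triggered Lambda receives the standard S3 event payload and can process each record like this:

```python
def handler(event, context):
    """Minimal Lambda entry point for an S3 put trigger.

    Collects (bucket, key) for every record in the event, logs each
    object, and reports how many records were processed.
    """
    objects = [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]
    for bucket, key in objects:
        print(f"new object: s3://{bucket}/{key}")
    return {"processed": len(objects)}
```

The same handler shape applies to DynamoDB, Kinesis, or SNS triggers; only the record structure inside `event["Records"]` differs per source.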

Understanding of secure-cloud configuration, (CloudTrail, AWS Config), cloud-security technologies (VPC, Security Groups, NACL) and cloud-permission systems (IAM).

Developed strategy to migrate Dev/Test/Production from an enterprise VMware infrastructure to the IaaS Amazon Web Services (AWS) Cloud environment.

Designed and deployed VLAN segmentation for network isolation, improving security posture and optimizing traffic flow within data centers and field sites.

Integrated CLI commands Jenkins and GitLab CI pipelines for artifact management, service validation, and post-deployment testing.

Created Terraform modules that invoke CloudFormation during Terraform deployments to gain finer control or capabilities Terraform lacks.

Functioned as part of a firewall and security team in support of Checkpoint firewalls, Zscaler Proxy, Juniper portals, SecAuth, OpenLDAP, and Active Directory.

Automated BIOS/UEFI provisioning workflows using vendor-specific tools (e.g., Dell Command Configure, Lenovo WMI) to streamline bare-metal deployment.

Wrote CLI-based deployment scripts integrating Git, Jenkins pipelines, and Ansible to streamline builds and multi-environment rollouts.

Managed different infrastructure resources, like physical machines, VMs and Docker containers using Terraform.

Hands-on experience with cloud automation, containers, and PaaS (Cloud Foundry), provisioned with Terraform.

Designed, deployed, maintained, and led the implementation of cloud solutions using Azure and its underlying technologies.

Integrated GPU support into automated provisioning workflows using Ansible and shell scripts for consistent configuration across edge nodes.

Migrated services from on-premises to Azure cloud environments; collaborated with development and QA teams to maintain high-quality deployments.

Designed Client/Server telemetry adopting the latest monitoring techniques.

Worked on Continuous Integration (CI)/Continuous Delivery (CD) pipelines for Azure Cloud Services using Chef.

Converted existing Terraform modules that had version conflicts to invoke CloudFormation during Terraform deployments, gaining finer control or missing capabilities.

Tuned BIOS boot priorities and firmware settings to support PXE/network boot for automated OS imaging.

Integrated Terraform with Jenkins to manage the AWS Infrastructure using Terraform plugin.

Used Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage changes.

Created hardware RAID arrays and LVMs and administered them on RedHat Linux.

Configured RedHat systems over the network, implemented automated tasks through cron jobs, and resolved tickets on a priority basis.

Monitored VPN uptime and DNS query performance; performed root cause analysis on connectivity issues to minimize downtime.

Installation of patches and packages using RPM and yum in RedHat Linux.

Setup, configured, and debugged network configurations for RedHat servers and workstations

Installed RedHat Linux using kickstart.

Integrated Linux CLI tools with CI/CD pipelines to support secure deployment of IAM scripts, policies, and configurations.

Documented standard BIOS/UEFI configuration procedures for lab and production systems to ensure consistency across fleet deployments.

Used Ansible server and workstation to manage and configure nodes. Well versed with Ansible Playbooks, modules and roles.

Planned and implemented a disaster recovery Jenkins pipeline that spins up complete end-to-end infrastructure and deploys code using Helm charts in different regions for high availability without interruptions.

Client: Medtronic, Remote Oct 2020 - Jun 2022

Role: DevOps/AWS Engineer

Responsibilities:

Defining EC2 Instance Security Groups, configuring Inbound / Outbound rules and tagging AWS EC2 resources.

Used Agile methodology throughout the project; involved in weekly and daily release management.

Worked with a strong team of architects and backend developers to gather functional and non-functional requirements.

Developed Dynamic ColdFusion Grids and HTML Grid to populate the Data.

Remote administration and system configuration through scripting, Linux CLI (Bash, Perl, Python), SaltStack (Puppet), and Rackspace (OpenStack) console.

Configured VLAN tagging and segmentation across network interfaces to isolate traffic for secure lab and field deployments.

Built Azure environments by deploying Azure IaaS Virtual machines (VMs) and Cloud services (PaaS)

Microsoft Visual SourceSafe (VSS) was used for source code maintenance across the whole team.

Developed re-usable templates using ColdFusion.

Built and maintained CI/CD pipelines with Jenkins and GitHub Actions, automating previously manual deployments, which eliminated recurring configuration errors and cut production release time from hours to under 30 minutes.

Managed Helm-based Kubernetes deployments for scalable microservices and reduced downtime during updates

Implemented monitoring using AWS CloudWatch and X-Ray to track system metrics, trace requests, and identify performance bottlenecks.

Performed hands-on configuration and management of AWS services via the dashboard and the CLI.

Configured Cisco ASA and Checkpoint firewall layers securing existing Data Center infrastructure. Migrated information security from Cisco PIX to ASA5500 with LAN-failover platform.

Troubleshot boot-time issues by modifying UEFI/Legacy boot modes, resolving conflicts between secure boot policies and unsigned drivers.

Validated thermal and power profiles of GPU systems in field deployments to ensure reliable performance under constrained environments.

Configured and maintained site-to-site and client VPNs using OpenVPN and IPsec, enabling secure remote access for distributed teams and edge devices.

Supported customer with configuration and maintenance of PIX and ASA firewall systems.

Created reusable Terraform modules to manage networking, computing, and IAM resources

Automated server provisioning and configuration with Ansible and Chef to reduce drift and speed up deployments

Wrote Python and Bash scripts to automate backups, snapshots, and failovers across regions
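
A backup-automation script of this kind typically separates snapshot selection from deletion so the retention logic can be tested offline. The following is an assumed sketch (retention window and region are placeholders, not from this resume):

```python
from datetime import datetime, timedelta, timezone


def stale_snapshot_ids(snapshots, keep_days=7, now=None):
    """Return IDs of snapshots whose StartTime is older than keep_days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=keep_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]


def prune_snapshots(keep_days=7, region="us-east-1"):
    """Delete stale self-owned EBS snapshots (live AWS call; run with care)."""
    import boto3  # imported lazily so the selection logic is testable offline

    ec2 = boto3.client("ec2", region_name=region)
    snaps = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
    for snapshot_id in stale_snapshot_ids(snaps, keep_days):
        ec2.delete_snapshot(SnapshotId=snapshot_id)
```

Run on a schedule (cron or a CloudWatch Events rule), this keeps a rolling window of recent snapshots per region.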

Led performance tuning by optimizing Lambda functions, SQL queries, and API Gateway throughput

Optimized AWS costs by implementing serverless solutions and using AWS Cost Explorer and Trusted Advisor

Created Git branching strategies and tag-based workflows for versioned deployments.

Troubleshot and resolved IP conflicts, VLAN misconfigurations, and DNS failures through detailed packet analysis and log inspection.

Implemented security best practices with IAM roles, KMS encryption, and audit logging.

Installed and configured NVIDIA drivers and CUDA toolkits on Ubuntu-based systems; validated compatibility with TensorRT, cuDNN, and container runtimes.

Leveraged Boto3 SDK to build custom Python scripts that automate interactions with AWS services

Wrote Python scripts implementing Lambda functions. Created an API as a front door for applications to access data or functionality from backend services running on EC2 or code running on Lambda.

Created Python (Boto) scripts integrated with the Amazon API to control instance operations.
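
A typical instance-operations script of this sort filters instances by tag and then issues the control call. The sketch below is illustrative only (the `Environment` tag convention is an assumption, not a detail from this resume):

```python
def ids_by_tag(reservations, key, value):
    """Collect instance IDs whose tags match key=value from describe_instances output."""
    ids = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if tags.get(key) == value:
                ids.append(instance["InstanceId"])
    return ids


def stop_environment(env, region="us-east-1"):
    """Stop every instance tagged Environment=<env> (live AWS call)."""
    import boto3  # lazy import keeps the tag-filter logic testable offline

    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:Environment", "Values": [env]}]
    )
    ids = ids_by_tag(resp["Reservations"], "Environment", env)
    if ids:
        ec2.stop_instances(InstanceIds=ids)
```

A matching `start_environment` would call `ec2.start_instances` with the same ID list, which is how tag-driven start/stop scheduling is usually wired up.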

Designed, built and coordinated an automated build & release CI/CD process using Gitlab, Jenkins and Puppet on hybrid IT infrastructure.

Involved in designing and developing Amazon EC2, Amazon S3, Amazon RDS, Amazon Elastic Load Balancing, Amazon SWF, Amazon SQS, and other services of the AWS infrastructure.

Running build jobs and integration tests on Jenkins Master/Slave configuration.

Managed Servers on the Amazon Web Services (AWS) platform instances using Puppet configuration management.

Involved in maintaining the reliability, availability, and performance of Amazon Elastic Compute Cloud (Amazon EC2) instances.

Conducted systems design, feasibility, and cost studies and recommended cost-effective cloud solutions such as Amazon Web Services (AWS).

Client: NTT Data, NC Jan 2019- Sep 2020

Role: DevOps Engineer

Responsibilities:

•Installing and configuring Apache and supporting them on Linux production servers.

•Creating, Updating, and maintaining the NIS databases. Creating NIS Clients.

•Setting up the Ansible control machine (RHEL7) and configured the remote host inventories via SSH.

•Worked closely with other development and operations teams to understand complex product requirements and translated them into automated solutions.

•Worked on Azure Active Directory.

•Involved in AWS architectural design to provision the AWS resources.

•Involved in writing Packer scripts to generate machine images for AWS.

•Managed AWS cost cutting by writing an Ansible playbook for auto start/stop of AWS resources at particular times of day, triggered from Jenkins.

•Provided test-driven development for Ansible using Serverspec; wrote spec tests to check whether servers are configured correctly.

•Set up Serverspec locally and wrote test cases to check the configuration and idempotency of the remote servers.

•Versioned the playbooks on the source code management tool GitHub.

•Used Jira for tracking and ticketing.

•Created trigger points and alarms in CloudWatch based on thresholds and monitored logs via metric filters.
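
For illustration, a threshold-based CloudWatch alarm like those described above can be defined by building the `put_metric_alarm` parameters in one place (the alarm name, period, and threshold below are assumed placeholders):

```python
def cpu_alarm_params(instance_id, threshold=80.0, sns_topic_arn=None):
    """Build put_metric_alarm parameters for sustained high CPU on one instance."""
    params = {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,            # 5-minute datapoints
        "EvaluationPeriods": 2,   # must breach for two consecutive periods
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }
    if sns_topic_arn:
        params["AlarmActions"] = [sns_topic_arn]
    return params
```

A live script would pass the result to `boto3.client("cloudwatch").put_metric_alarm(**params)`; wiring in an SNS topic ARN turns the alarm into a notification.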

•Worked on the AWS IAM service, creating users and groups, defining policies and roles, and configuring identity providers.

•Used the Kubernetes dashboard to access the cluster via its web-based user interface and implemented microservices on the Kubernetes cluster.

•Maintained containers running on cluster nodes managed by OpenShift Kubernetes.

•Used OpenShift with a Dockerfile to build images, then uploaded the created images to the Docker registry.

•Ran Jenkins on top of Kubernetes in a team environment to remove dependencies on other teams.

•Worked on open-source development tools like Docker Containers, Mesos and Kubernetes.

•Implemented effective data sizing of the ELK cluster based on data flow and use cases.

•Documented network architecture, VPN setups, firewall policies, and VLAN assignments for operational transparency and knowledge sharing.

•Expertise in JIRA for issue tracking and project management. Experienced with installing and configuring the Nexus Repository Manager for sharing artifacts within the company. Also supported and developed tools for integration, automated testing, and release management.

•Responsible for managing GCP services such as Compute Engine, App Engine, Cloud Storage, VPC, Load Balancing, BigQuery, and firewalls, with application monitoring using Google Stackdriver.

•Involved in research of the project application architecture to support/resolve build, compile, and test issues.

Environment: ANT, Maven, Subversion (SVN), Chef, Docker, Vagrant, EC2, Ansible, JIRA, Linux, RHEL, SNS, SQS, Kubernetes, Shell/Perl scripts, Bitbucket, Python, TFS, SCM, API, Git, Jenkins, Tomcat, Java, Azure TFS, Azure VSTS, Visual Studio, Visual Studio Code, Git Bash.

•Generating the SAS tokens for the storage accounts to be accessed and configuring access to the storage account by limiting selected networks.

•Involved in continuous integration and continuous deployment system with Jenkins on Kubernetes container environment, utilizing Kubernetes and Docker for the runtime environment for the system to build and test and deploy.

•Used Docker for packaging applications and designed the entire cycle of application development and used Virtualized Platforms for Deployment of containerization of multiple apps.

Client: BATTELLE, OH Mar 2017 - Dec 2018

Role: DevOps Engineer

Responsibilities:

Designed and implemented cloud migration projects focused on automating legacy on-premises system migrations to AWS using Infrastructure as Code (IaC) with CloudFormation and Terraform, facilitating rapid, repeatable deployment of resilient cloud environments.

Integrated DNS with Active Directory and other directory services to provide seamless service discovery and authentication for network resources.

Developed complex Python automation scripts and employed scripting languages to create scalable pipelines integrating AWS CodePipeline, Lambda, and S3 for continuous deployment and real-time data processing workflows.

Engineered secure, scalable VPC peering and VPN connections to seamlessly integrate hybrid cloud infrastructures while maintaining stringent adherence to cybersecurity principles and regulatory requirements.

Leveraged Python Celery distributed task scheduling to orchestrate asynchronous job queues critical for transforming large-scale data sets, improving data flow consistency and processing throughput.

Applied deep knowledge of AWS IAM to design fine-grained, role-based access control (RBAC) policies that ensured secure identity and access management aligned with corporate governance and security compliance frameworks.

Conducted detailed environment health checks using Lambda-triggered CloudWatch Events, automating remediation of detected anomalies and improving system reliability and uptime.

Utilized cost optimization techniques such as rightsizing EC2 instances and automating resource cleanup, delivering measurable reductions in monthly AWS billing without sacrificing performance or availability.

Collaborated effectively with software development teams adopting containerized microservices architectures, integrating Docker and ECS with AWS services to enhance deployment agility and scalability.

Authored and delivered comprehensive training sessions and documentation for cloud automation best practices, empowering IT staff to independently manage and optimize cloud resources.

Coordinated security audits and compliance reviews, demonstrating proactive governance and adherence to cybersecurity standards through meticulous IAM policy enforcement and infrastructure hardening.

Client: HPE, TX Apr 2015 - Feb 2017

Role: DevOps Engineer

Responsibilities:

Configured and implemented storage blobs and Azure Files: created storage accounts, configured the Content Delivery Network (CDN) and custom domains, and managed access and storage access keys.

Experience with Windows Azure services like PaaS and IaaS, and with Azure Blob storage (Page and Block). Well experienced in deployment and configuration management and virtualization.

Developed and supported Software Release Management procedures. Also experienced with Subversion; proposed and implemented a branching strategy.

Used Docker for setting Azure Container Registry with Docker and Docker-compose and actively involved in deployments on Docker using Kubernetes.

Used Azure Kubernetes Service (AKS) to deploy a managed Kubernetes cluster in Azure; created an AKS cluster in the Azure portal and with the Azure CLI, and used template-driven deployment options such as Resource Manager templates and Terraform.

Responsible for Deploying Artifacts in GCP platform by using Packer.

Configured servers to host Team Foundation Server (TFS) instance to set up and manage Continuous Integration (CI) using Team Foundation (TF) Build Service.

Built hybrid mobile apps for iOS using Apache Cordova, Backbone.js, and jQuery Mobile.

Worked closely with software developers and DevOps to debug software and system problems

Responsible for Administering and Monitoring Visual Studio Team System (VSTS), taking backups and consolidating collections at the time of migration from one version of VSTS to another.

Defined dependencies and plugins in Maven pom.xml for various activities and integrated Maven with GIT to manage and deploy project related tags.

Responsible for designing and deploying the best SCM processes and procedures with GitHub and Git. Familiar with analyzing and resolving conflicts related to merging of source code in ClearCase.

Used Apache Kafka for importing real time network log data into HDFS.

Improved the performance of SQL Scripts by using Object Role Modelling methodology.

Used Docker in the automation pipeline and production deployment, and implemented a master/slave setup to improve Jenkins performance. Used Jenkins for Continuous Integration and deployment to the Tomcat application server.

Profound experience in designing strategies to increase the velocity of development and release for continuous integration, delivery, and deployment using technologies such as Bamboo and Jenkins. Also experienced in using SCM tools like Git, Subversion (SVN), and TFS on Linux platforms, maintaining tags and branches across multiple environments.


