
DevOps Engineer, Cloud Engineer, Site Reliability Engineer

Location:
Chicago, IL
Posted:
December 03, 2025


SYED ABDULLAH AL HASNI

+1-779-***-**** *************@*****.*** https://www.linkedin.com/in/syed-abdullah-al-hasni-8a7b4929b/

PROFESSIONAL SUMMARY

Accomplished Cloud DevOps Engineer with extensive expertise in automating and optimizing cloud infrastructure on AWS and Azure. Experienced in migrating on-premises applications to cloud platforms, building advanced CI/CD pipelines, and orchestrating containerized applications with Kubernetes for efficient deployment. Proven ability to improve system reliability through solutions that combine diverse cloud technologies and DevOps practices. Committed to operational excellence, with the goal of supporting organizational growth and innovation through effective use of modern DevOps strategies and collaborative approaches.

TECHNICAL SKILLS

• Operating systems: Linux (Red Hat 6/7, CentOS 8), Solaris 9/8, HP-UX 11.11

• Web/Application Servers: WebLogic, WebSphere, Apache Tomcat, JBoss

• IaC & Config Management Tools: Terraform, Ansible, Chef, Puppet

• CI/CD & Build Tools: Jenkins, Azure DevOps, CircleCI, Maven, ANT, Gradle

• Version control tools: Git, Bitbucket

• Containerization tools: Docker, Kubernetes, OpenShift

• Networking/Protocol: TCP/IP, NIS, NFS, DNS, DHCP, SMTP, FTP/SFTP, HTTP/HTTPS, NDS, Cisco Routers/Switches, WAN, LAN

• Scripting: Python, Ruby, Bash, PowerShell, PHP, JSON

• Virtualization Technologies: VMware ESX/ESXi, Windows Hyper-V, PowerVM, VirtualBox, Citrix Xen

• Cloud Environments: AWS, Microsoft Azure, GCP

• Databases: MySQL, AWS RDS

• Programming/Web Technologies: Shell scripting, Bash, Java, Python

• Monitoring/Reporting tools: Nagios, Splunk, CloudWatch, Grafana

• Ticketing tools: JIRA

PROFESSIONAL EXPERIENCE

T-Mobile Jan 2024 - Present

Cloud DevOps Engineer Texas, United States

• Worked with the operations team to optimize Azure security protocols and develop targeted remediation strategies, improving process efficiency, and built automation with PowerShell scripts and JSON templates for ongoing enhancement of Azure services.

• Designed, planned, and migrated existing on-premises applications to the Azure cloud (ARM); configured and deployed Azure Automation scripts using Azure Stack services and utilities, with a focus on automation.

• Developed and maintained build pipelines using tools like Webpack, Gulp, or Grunt to compile TypeScript and JavaScript code into optimized bundles for deployment.

• Provided training and documentation to educate team members on best practices for monitoring and log management, empowering them to effectively utilize monitoring tools and troubleshoot issues independently.

• Leveraged Azure Kubernetes services to deploy and manage Kubernetes clusters via Azure CLI and the portal, incorporating Terraform and Resource Manager templates for efficient, template-driven deployment; facilitated the creation of virtual machines and broader infrastructure within the Azure cloud environment.

• Implemented Continuous Integration/Continuous Deployment (CI/CD) pipelines using Jenkins, GitLab CI/CD, or CircleCI to automate the testing, building, and deployment of TypeScript and JavaScript applications.

• Designed and deployed several applications using the full AWS stack, including EC2, Route 53, S3, EKS, VPC, and more, while resolving Terraform module version issues with CloudFormation for improved control and capabilities, ultimately boosting scalability and reliability.

• Implemented Amazon EMR clusters to facilitate seamless and efficient transfer of large volumes of data between on-premises systems and cloud storage, leveraging EMR's distributed processing capabilities to significantly accelerate data transfer speeds and reduce transfer latency.

• Developed and optimized complex SQL queries on Snowflake to support analytical dashboards and reporting for real-time business insights.

• Used AWS Athena to run serverless queries on large datasets stored in S3, reducing infrastructure overhead and improving query efficiency.

• Designed and implemented data lifecycle management policies using S3 Lifecycle Rules to automate data archival, deletion, and tiering based on access patterns and compliance requirements.
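
A policy of that shape can be sketched with boto3, the AWS SDK for Python; the bucket prefix, day thresholds, and rule ID below are illustrative assumptions, not details from the original deployment:

```python
# Hypothetical S3 lifecycle policy: move objects under logs/ to infrequent
# access after 30 days, to Glacier after 90, and delete them after 365.
LIFECYCLE_CONFIG = {
    "Rules": [
        {
            "ID": "archive-and-expire-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

def apply_lifecycle(bucket_name, config=LIFECYCLE_CONFIG):
    """Apply the lifecycle configuration to a bucket (requires AWS credentials)."""
    import boto3  # deferred so the policy above can be inspected without the SDK
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket_name, LifecycleConfiguration=config
    )
```

The same rules can equally be expressed in Terraform or a CloudFormation template; the dictionary shape matches the S3 `PutBucketLifecycleConfiguration` API either way.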

• Led the automation of server configuration and management using Ansible and Cobbler, significantly enhancing deployment efficiency by automating the build of new servers and managing existing ones, which reduced manual errors and streamlined operations.

• Designed and implemented backend microservices using Golang, utilizing gRPC and Protocol Buffers for high-performance, language-agnostic service communication.

• Integrated gRPC services with Protocol Buffers to enable fast, type-safe communication between distributed components in a microservices architecture.

• Implemented service discovery, load balancing, and authentication within Golang-based gRPC microservices using tools like Envoy and Consul.

• Wrote Ansible playbooks with Python SSH as the wrapper to manage the configuration of AWS nodes, tested the playbooks on AWS instances using Python, and ran Ansible scripts to provision dev servers.

• Implemented GitLab's infrastructure-as-code capabilities for provisioning cloud resources and managing server configurations, enhancing DevOps practices.

• Integrated API-related tasks into CI/CD pipelines using automation tools (e.g., Jenkins, GitLab CI) to ensure seamless deployments and updates.

• Managed version control with Git within Copado, enabling seamless collaboration, tracking, and rollback capabilities for development teams.

• Integrated Copado with Jira for end-to-end visibility and traceability of user stories, ensuring compliance with agile sprint goals.

• Configured and optimized Copado deployment flows including data deployment, test automation, and validations, reducing manual intervention.

• Installed Docker using the toolbox and created, tagged, and pushed custom Docker container images, improving deployment efficiency and enabling seamless container management for faster application updates.

• Maintained Docker containers and images for runtime environments using containerization tools, which enhanced system reliability and facilitated the development of distributed cloud systems with Kubernetes.

• Utilized Amazon S3 as a reliable and cost-effective object storage solution to store and manage static assets, application binaries, and large datasets in DevOps workflows.

• Designed, implemented, and managed JFrog Artifactory repositories to efficiently store and manage artifacts.

• Automated infrastructure activities such as Continuous Deployment, Application server setup, and stack monitoring using Ansible Playbooks with Jenkins, enhancing deployment efficiency

• Configured Jenkins as a build engine to deploy applications across DEV, QA, UAT, and PROD environments. Developed microservice onboarding tools using Python and Jenkins, streamlining build job creation and Kubernetes deployment.

• Configured and managed Source code using GIT and resolved code merging conflicts in collaboration with application developers and provided a consistent environment. Implemented continuous integration using Jenkins and GIT.

• Integrated Datadog with CI/CD pipelines to automate metric collection and analysis throughout the software development lifecycle, enhancing monitoring and performance insights

• Implemented data backup and restore strategies for DynamoDB using AWS Backup, Data Pipeline, or custom scripts to ensure data durability and disaster recovery readiness.
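
One common shape for such a custom script is an on-demand DynamoDB backup with a timestamped name; a minimal boto3 sketch, where the helper and naming scheme are hypothetical rather than the production tooling:

```python
from datetime import datetime, timezone

def backup_name(table_name, now=None):
    """Build a timestamped backup name, e.g. 'orders-backup-20250101T000000'."""
    now = now or datetime.now(timezone.utc)
    return f"{table_name}-backup-{now:%Y%m%dT%H%M%S}"

def create_table_backup(table_name):
    """Create an on-demand DynamoDB backup (requires AWS credentials)."""
    import boto3  # deferred import so the naming helper stays dependency-free
    client = boto3.client("dynamodb")
    return client.create_backup(
        TableName=table_name, BackupName=backup_name(table_name)
    )
```

Restores then go through `restore_table_from_backup` with the ARN returned by `create_backup`, or through AWS Backup plans when retention is managed centrally.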

• Installed and configured Jenkins Plugins to support the project-specific tasks and Automated Deployment of builds to different environments using Jenkins.

• Utilized GIT and GitHub repositories to maintain source code, deploying code through Travis and Terraform to Amazon Web Services, ensuring efficient code management and deployment processes

• Designed and automated ETL workflows using Apache Airflow, orchestrating complex data pipelines with dependency management and retry logic for fault tolerance.

• Built serverless data transformation jobs using AWS Glue and PySpark to clean, enrich, and load structured and semi-structured data into S3 and Redshift.

• Processed terabyte-scale datasets on AWS EMR using Spark, optimizing cluster configurations for cost-effective and high-performance data processing.

• Worked on version control systems like Subversion and Git, and used source code management client tools such as Git Bash, GitHub, ClearCase, Git GUI, and other command-line applications for branching, tagging, and maintenance in UNIX and Windows environments.

Datamatics Apr 2020 - Nov 2021

AWS DevOps Engineer India

• Utilized AWS services like IAM, VPC, EC2, and S3 to enhance cloud infrastructure, improving system scalability and security

• Developed Puppet modules with Jenkins for CI/CD of managed products, enhancing deployment efficiency and reliability, and wrote install scripts in Python using Puppet helper functions

• Developed and implemented efficient CI/CD pipelines with AWS CodePipeline and CodeBuild that automated and streamlined the deployment process for containerized applications on ECS Fargate, significantly reducing deployment time and minimizing errors.

• Resolved update, merge, and password authentication issues in Jenkins and Jira, improving system reliability, and maintained inventory tracking with Jenkins, setting alerts for full servers

• Developed a pipeline workflow that created Dockerized development environments, triggering builds in Jenkins and storing new Docker images, which streamlined the development process

• Worked on Terraform for creating stacks of VPCs, ELBs, Security groups, SQS queues, and S3 buckets in AWS and updated the Terraform Scripts based on the requirement regularly.

• Wrote Ansible playbooks, the entry point for Ansible provisioning, in which automation is defined through tasks in YAML format, and ran Ansible scripts to provision dev servers.

• Installed and configured GIT to implement an agile-friendly branching strategy, providing continuous support and maintenance for software builds, which improved development workflow efficiency

• Maintained build-related scripts developed in ANT, Python, and Shell. Modified build configuration files including ANT's build.xml.

• Executed federated queries across distributed data sources using Presto, enabling unified analytics across heterogeneous data environments.

• Implemented partitioning, clustering, and caching strategies in Snowflake to significantly improve query performance and reduce compute costs.

• Automated user story promotion, regression testing, and sandbox deployments using Copado, reducing deployment errors and cycle time.

• Utilized Copado's metadata deployment tools to streamline and secure complex Salesforce org releases.

• Maintained and monitored compliance gates, quality gates, and security scans in Copado pipelines to meet enterprise-level standards.

• Wrote ANT scripts to automate the build process; responsible for automated scheduled and emergency builds and releases using ANT scripts for enterprise (J2EE) applications, and worked on the Microsoft .NET technology stack.

• Worked on the installation and configuration of the monitoring tool Nagios. Deployed Nagios server and configured Nagios client through Nagios plugin (NRPE) using chef cookbooks and recipes.

• Used JavaScript for client-side validations and was involved in developing JSPs for developing the view of the applications.

• Created versioned .proto files to define data schemas, ensuring backward compatibility and seamless API evolution.

• Collaborated with frontend and backend teams to define and maintain protobuf contracts across services in a CI/CD environment.

• Built unit and integration tests for Golang gRPC services using Go's native testing framework and mocked protobuf interfaces.

• Wrote PowerShell scripts for .NET application deployments, service installs, and Windows patches/upgrades.

• Collaborated with security teams to configure Splunk for compliance monitoring and reporting, integrating it with other tools to enhance system visibility and security

• Improved speed, efficiency, and scalability of the continuous integration environment by automating processes with Python, Ruby, shell, and PowerShell scripts, leading to faster deployment cycles

• Performed database performance tuning and query optimization on WebLogic, improving application response times. Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT, ensuring successful deployment of web applications and database templates

• Integrated Airflow with AWS Glue and EMR to manage data ingestion, transformation, and scheduling across multiple environments.

• Monitored and debugged data pipeline failures using Airflow logs and AWS CloudWatch, ensuring consistent SLAs and data quality.

• Managed Linux infrastructure using Puppet to enhance efficiency and speed, resulting in streamlined operations

• Installed and configured Tomcat and JBoss on Linux servers, improving application deployment and server performance

Bosch Nov 2018 - Mar 2020

DEVOPS ENGINEER / SRE India

• Provisioned infrastructure using CloudFormation templates with AWS services like S3, EC2, and ELB, improving deployment speed and resource management

• Maintained a scalable environment for application servers, using the command-line interface to set up and administer the DNS system in AWS with Route 53.

• Configured and managed multiple AWS instances, optimized security groups, ELB, AMIs, and Auto Scaling for cost-effectiveness and system availability; developed Lambda functions for automated code deployment and monitoring changes in S3 and DynamoDB tables, improving system reliability

• Created and managed AWS CloudFormation stacks using VPCs, subnets, EC2 instances, ELB, and S3, and integrated them with CloudTrail. Stored versioned CloudFormation templates in Git, visualized them as diagrams, and modified them with the AWS CloudFormation Designer.

• Designed and deployed applications using a wide range of AWS services, including EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, and IAM, enhancing system resilience through improved high availability, fault tolerance, and auto-scaling, resulting in increased operational efficiency and performance.

• Created a relational MySQL database schema and configured storage capacity, backup retention, disaster recovery, and security groups; added read replicas via the RDS console to offload read traffic from the primary instance.

• Created EBS volumes to store persistent data and used snapshots to reduce the impact of failures; taking point-in-time snapshots provided the ability to back up Amazon EBS volumes to S3.

• Worked on AWS Lambda to run code in response to events, such as changes to data in an Amazon S3 bucket, Amazon DynamoDB table, and HTTP requests using AWS API Gateway, enhancing system responsiveness and automation
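
A handler for that S3-trigger pattern can be sketched as follows; the event parsing matches the standard S3 notification format, while the actual per-object processing is left as a placeholder:

```python
import urllib.parse

def lambda_handler(event, context=None):
    """Minimal sketch of an S3-triggered Lambda: extract (bucket, key)
    pairs from the notification event. Real processing goes where noted."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # ... process the object here (e.g. copy, transform, index) ...
        processed.append((bucket, key))
    return {"processed": processed}
```

The same handler shape applies to DynamoDB Streams and API Gateway triggers; only the record structure being unpacked changes.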

• Created metadata-driven ETL pipelines in Glue for dynamic schema handling, reducing manual intervention and increasing scalability.

• Deployed custom applications using Chef, executed schema updates with Liquibase, and coordinated everything with Jenkins. Configured and maintained Jenkins to implement the CI process and integrated it with Maven to schedule builds, improving deployment efficiency and system integration

• Implemented Docker to wrap up the final code and set up development and testing environments using Docker Hub, Docker Swarm, and Docker Container Network, improving deployment efficiency

• Wrote templates for AWS infrastructure as code using Terraform to build staging and production environments, orchestrated and migrated CI/CD processes using CloudFormation and Terraform templates, and set up infrastructure in Vagrant, AWS, and VPCs, enhancing deployment consistency

• Used HashiCorp Vault for encryption as a service, providing encryption and decryption capabilities for sensitive data: generating and managing encryption keys, encrypting data at rest or in transit, and securely storing the encryption keys.

• Designed branching strategies for version control systems such as GitHub, ClearCase, and Stash, and developed Git hooks for local-repository code commit and remote-repository code push functionality.

• Implemented Mono Repos for enhanced Continuous Integration/Continuous Deployment (CI/CD), allowing for streamlined workflows by combining multiple projects into one pipeline, so changes can be collectively tested and deployed while minimizing the management complexity of separate environments.

BNP Paribas May 2017 - Oct 2018

DEVOPS ENGINEER / SRE India

• Created and maintained highly scalable and fault-tolerant multi-tier AWS environments spanning across multiple availability zones using Terraform, improving system reliability and uptime

• Managed IAM policies integrated with Active Directory, enhancing security in GCP and AWS by ensuring compliance with security standards and improving overall access management protocols.

• Created a shared VPC with different tags in a single GCP project and used it across all projects; supported the AWS team in setting up the IPSec tunnel between Google Cloud and AWS networking infrastructure, which streamlined network management and improved connectivity

• Installed, configured, and managed Jenkins and created various build and deployment jobs; installed several plugins in Jenkins to support the tools required for project implementation.

• Worked on troubleshooting the build issues during the Jenkins build process and developed build and deployment scripts using ANT as build tools in Jenkins to move from one environment to another environment.

• Installed Jenkins for Continuous Integration and wrote Windows Shell script for an end-to-end build and deployment automation and used Jenkins to automate most of the build-related tasks.

• Administered and Engineered Jenkins for managing nightly Build, Test, and Deploy chain, GIT with Development/Test/Production Branching Model for weekly releases.

• Automated data ingestion pipelines to load data into Snowflake from S3 and external sources using Python and SQL scripts.
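
Loads of that kind typically reduce to issuing Snowflake COPY INTO statements over staged S3 files; a small Python helper that builds one, where the table, stage, and file-format names are illustrative assumptions:

```python
def build_copy_statement(table, stage_path, file_format="csv_format"):
    """Build a Snowflake COPY INTO statement for loading staged S3 files.
    The table, stage, and file-format names passed in are illustrative."""
    return (
        f"COPY INTO {table} "
        f"FROM @{stage_path} "
        f"FILE_FORMAT = (FORMAT_NAME = '{file_format}') "
        f"ON_ERROR = 'ABORT_STATEMENT'"
    )
```

The generated SQL would then be executed through the Snowflake Python connector's `cursor.execute`, scheduled by the same Python scripts that stage the source files.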

• Created views and materialized queries in Athena and Presto for data exploration, cleansing, and aggregation by cross-functional teams.

• Built Puppet Enterprise modules using the Puppet DSL to automate infrastructure provisioning and configuration management across existing infrastructure.

• Improved deployment processes by using Ant and Maven to create build.xml and pom.xml files, while configuring and deploying applications to WebLogic and WebSphere servers, leading to more streamlined and efficient software delivery.

• Monitored Jenkins, Jira, Bitbucket, and Nexus logs daily to successfully identify and troubleshoot deployment issues; provided comprehensive support to application teams by resolving DevOps tool-related problems, ultimately improving overall operational efficiency and enabling more reliable system performance.

• Maintained the Red Hat Satellite for infrastructure management to keep Red Hat Enterprise Linux environments and other Red Hat infrastructure running efficiently.

• Configured and managed the Red Hat Linux kernel, memory upgrades, and swap area; performed Red Hat Linux Kickstart and Sun Solaris Jumpstart installations; configured DNS, DHCP, NIS, NFS, and other network services on Sun Solaris 8/9.

• Conducted data backups and rollbacks using Copado's snapshot and environment management features to ensure release safety.

• Created and maintained custom Copado templates and automation scripts for frequent deployment tasks, boosting operational efficiency.

EDUCATION DETAILS

Trine University Mar 2022 - Dec 2023

Masters, Information Studies Michigan, United States

Nawab Shah Alam Khan College of Engineering and Technology Aug 2016 - Sep 2020

Bachelors, Computer Science Hyderabad, India


