
Muheez O.

Sr. AWS DevOps Engineer / Architect

Southfield, MI 48075 | Phone: 313-***-****; Email: ad0qz9@r.postjobfree.com

PROFESSIONAL SUMMARY

•AWS professional with around 10 years of experience in cloud DevOps and automation, and 4 years of data analysis experience.

•Proficient in Linux and Windows environments.

•Skilled with Jenkins, Terraform, and CloudFormation.

•Project Management Professional (PMP) with experience in both Agile/Scrum and Waterfall.

•AWS Cloud technology with services like EC2, S3, VPC, Glue, Athena, Glacier, ELB, CloudWatch, IAM, CloudFront, CloudFormation, Route53, Cognito, Direct Connect, and Transit Gateway (TGW).

•Azure Cloud technology with services like VMs, Storage, Load Balancer, Monitor, Networking, Resource Manager, CDN, ExpressRoute, and DNS.

•Third-party DevOps and automation tools like Terraform, Jenkins, Ansible, Puppet, Chef, and Git.

•Database services, both SQL (RDS) and NoSQL, including Postgres, MySQL, SQL Server, Oracle, MariaDB, DynamoDB, and MongoDB.

•Experienced in migrating applications to the Cloud.

•Proficient in scripting with Python (boto3), Bash (AWS CLI), and Groovy to automate operations and routine tasks, delivering consistency and robustness (see the sketch below).
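
For illustration, a minimal Python (boto3) sketch of this kind of routine-task automation (snapshotting tagged EBS volumes; the tag names and region are hypothetical):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

    def snapshot_tagged_volumes(tag_key="Backup", tag_value="daily"):
        """Snapshot every EBS volume carrying the given (hypothetical) tag."""
        paginator = ec2.get_paginator("describe_volumes")
        pages = paginator.paginate(
            Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
        )
        for page in pages:
            for vol in page["Volumes"]:
                snap = ec2.create_snapshot(
                    VolumeId=vol["VolumeId"],
                    Description=f"Automated snapshot of {vol['VolumeId']}",
                )
                print("Started snapshot", snap["SnapshotId"])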

•Implemented containerization with Docker and Kubernetes, enabling efficient deployment and management of containerized and serverless applications.

•Skilled in deploying and managing CI/CD pipelines using both Azure and AWS native tools as well as third-party tools like GitHub Actions, GitLab Runners, and Jenkins.

•Experience in building infrastructure using IAM, API Gateway, CloudTrail, CloudWatch, SQS, Kinesis, Lambda, Fargate, Elastic Beanstalk, and Redshift.

TECHNICAL SKILLS

Cloud – AWS, VMware (on prem), Azure.

Containers, Orchestration & Configuration Management – Docker, Kubernetes, Ansible, Puppet.

IaC – Terraform, CloudFormation.

CI/CD – Jenkins, GitHub Actions, GitLab Runner, Argo CD.

Monitoring – ELK Stack, Nagios, Prometheus, Splunk, Grafana, Datadog, CloudWatch.

Source Control – Git, GitHub, GitLab, Bitbucket, SVN.

Programming Languages – Python, SQL, Bash, Groovy, C#.

Markup – JSON, XML, HCL, YAML.

Operating Systems – Linux/Unix, Windows, macOS.

SDLC – Agile, Scrum, Waterfall methodologies.

Databases – Postgres, MySQL, SQL Server, Oracle, DynamoDB, DocumentDB, MongoDB.

PROFESSIONAL EXPERIENCE

DevOps Engineer / AWS Architect

General Motors, Detroit, MI, May 2020 to Present

General Motors (GM), in full General Motors Company, formerly General Motors Corporation, is an American corporation that was the world’s largest motor-vehicle manufacturer for much of the 20th and early 21st centuries. It operates manufacturing and assembly plants and distribution centers throughout the United States, Canada, and many other countries.

•Used AWS and Azure core technologies such as EC2/Compute, S3/Storage, and VPC/Networking in server-based, serverless, and containerized environments.

•Involved in the design and implementation of CI/CD pipelines using Git and Jenkins that serve the purpose of provisioning and operating test as well as production environments.

•Facilitated IT enterprise Architecture across the organization’s transformation programs.

•Monitored and optimized automated build and continuous software integration processes, ensuring timely resolution of build/release failures.

•Used SQL statements (SELECT, UPDATE, INSERT, DELETE) and worked with all major RDS objects including tables, views, indexes, stored procedures, functions, and users, and managed security with user management and appropriate grants and revokes.

•Performed data analysis using SQL and Python with technologies including RDS, Redshift, Glue, and Athena (sketched below).
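
As an illustration of the Athena side of this work, a minimal boto3 sketch that runs a query and polls for the result (database name and S3 output location are hypothetical):

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    def run_athena_query(sql, database="analytics_db",
                         output="s3://example-athena-results/"):
        """Start an Athena query and block until it finishes."""
        qid = athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={"Database": database},
            ResultConfiguration={"OutputLocation": output},
        )["QueryExecutionId"]
        while True:  # simple polling; production code would back off
            state = athena.get_query_execution(
                QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(1)
        if state != "SUCCEEDED":
            raise RuntimeError(f"Query {qid} finished in state {state}")
        return athena.get_query_results(QueryExecutionId=qid)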

•Achieved high availability and fault tolerance for RDS (Postgres) using geographically diverse read replicas (see the sketch below).
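
A minimal sketch of provisioning a geographically diverse read replica with boto3 (instance identifiers, account number, and regions are hypothetical):

    import boto3

    # The replica is created in a different region than the source instance.
    rds = boto3.client("rds", region_name="us-west-2")

    resp = rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica-west",  # hypothetical name
        SourceDBInstanceIdentifier=(
            "arn:aws:rds:us-east-1:123456789012:db:app-db"  # hypothetical ARN
        ),
        SourceRegion="us-east-1",  # lets boto3 presign the cross-region call
    )
    print(resp["DBInstance"]["DBInstanceStatus"])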

•Utilized Kubernetes and Docker to deliver microservices in a containerized environment.

•Performed backend programming with Python (boto3) and Bash (AWS CLI).

•Supported both SQL and NoSQL databases in the Cloud.

•Prepared and updated comprehensive technical documentation to outline the design of entire projects, facilitating seamless collaboration and knowledge sharing within the organization.

•Worked on setting up app services in Azure using PaaS infrastructure for applications.

•Used Terraform to define IaC using HCL files and reusable modules.

•Used Jenkins to automate the deployment of applications and services.

•Used Datadog and Grafana to create rich dashboards across our multi-Cloud setup.

•Maintained ELK (Elasticsearch, Logstash, Kibana) systems.

•Extended Jira's functionality using Groovy scripting through plugins like ScriptRunner, creating advanced workflows and scripted fields.

•Supported AWS Glue as a data integration service to move data from multiple sources for analytics, machine learning (ML), and application development.

•Configured output log files and connected them to Splunk for monitoring.

•Collaborated closely with software development and testing team members to design and develop robust solutions that meet client requirements for functionality, scalability, and performance.

•Actively engaged in cross-functional collaboration, working with development team members to analyze evolving client requirements and propose system solutions.

•Maintained a strong focus on security, implementing best practices to ensure the protection of assets and data both at rest and in transit.

•Proposed and led a team of four in the successful migration from a monolithic architecture to microservices using Lambda, Fargate, and Azure Functions, driving improved scalability and flexibility.

Lead DevOps Engineer

Ameex Technologies, Schaumburg, Illinois, December 2018 - May 2020

Ameex Technologies is a digital transformation and delivery partner helping clients ideate, design, build and deploy next generation, deeply integrated solutions.

•Designed, built, configured, and deployed Amazon Web Services for multiple applications using the AWS stack (CloudFormation, CloudWatch, SQS, IAM, EC2, Route53, VPC, S3, RDS), with a focus on high availability, fault tolerance, and auto-scaling.

•Led a team of five DevOps Engineers interacting with the applications, operations, and support functions.

•Implemented robust security practices in AWS, including multi-factor authentication, access key rotation (sketched below), role-based permissions, strong password policies, configuration of security groups (SGs) and network access control lists (NACLs), and management of S3 bucket policies and ACLs.
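
For the access-key-rotation piece, a minimal boto3 sketch (the 90-day threshold is an assumed policy; IAM allows at most two keys per user, so this assumes the user currently holds one):

    from datetime import datetime, timedelta, timezone

    import boto3

    iam = boto3.client("iam")
    MAX_AGE = timedelta(days=90)  # assumed rotation policy

    def rotate_stale_keys(user_name):
        """Issue a fresh access key and deactivate any key past MAX_AGE."""
        keys = iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]
        for key in keys:
            if datetime.now(timezone.utc) - key["CreateDate"] > MAX_AGE:
                new = iam.create_access_key(UserName=user_name)["AccessKey"]
                # Deliver the new credentials to the owner out of band,
                # then retire the old credential.
                iam.update_access_key(
                    UserName=user_name,
                    AccessKeyId=key["AccessKeyId"],
                    Status="Inactive",
                )
                print("Deactivated", key["AccessKeyId"],
                      "replaced by", new["AccessKeyId"])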

•Integrated automated builds with the deployment pipeline, and upgraded, migrated, and integrated Jira and the Architect Framework.

•Used the SolarWinds network mapper for a live view of the application network and to provide updates.

•Used Grafana to query and support data sources, customizing the interface to compose dashboards from a variety of panels.

•Applied Business Process Flow, Business Process Modeling, Business Analysis, and multiple testing methodologies to ensure efficient project management.

•Used the Jenkins Groovy API to build and configure scripts executed through the Groovy shell.

•Developed custom images with Docker and Docker Compose, publishing to Docker Hub and ECR, orchestrating multiple local containers, and developing production-grade continuous workflows for multiple images.

•Used Terraform, Terraform Modules, and HashiCorp Vault to deliver a fully automated and managed Infrastructure as Code solution.

•Used Ansible to build a comprehensive CI/CD automation pipeline.

•Used Datadog for real-time insights into the performance of applications, servers, and services, identifying and troubleshooting issues before they became critical.

•Built and configured log files monitored by Splunk.

•Monitored end-to-end infrastructure using CloudWatch, leveraging SNS for notifications (see the sketch below).
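
A minimal boto3 sketch of this pattern: a CloudWatch alarm that notifies an SNS topic (instance ID, topic ARN, and thresholds are hypothetical):

    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm when average CPU stays above 80% for two 5-minute periods.
    cw.put_metric_alarm(
        AlarmName="ec2-high-cpu",  # hypothetical name
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )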

•Built secure and scalable Virtual Private Clouds (VPCs) with private and public subnets, and established connectivity to on-prem data centers (via Direct Connect) and remote offices (via VPN).

•Used Python (boto3) and Bash (AWS CLI) to automate many previously manual tasks.

•Optimized costs by leveraging reserved instances, selecting and adjusting EC2 instance types based on resource needs, implementing S3 storage classes and lifecycle policies (sketched below), and utilizing autoscaling capabilities.
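
A minimal boto3 sketch of the lifecycle-policy piece (bucket name, prefix, and day counts are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    # Tier objects down to cheaper storage classes, then expire them.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-logs-bucket",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "tier-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }]
        },
    )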

•Created detailed documentation of complex build and release processes for post-release activities using JIRA workflows and release notes.

•Managed Security Assessment and Authorization (SA&A) process to support continuous monitoring activities in accordance with NIST requirements and guidelines.

AWS DevOps Engineer / Architect

Mondelez International, Chicago, IL, June 2017 – December 2018

Mondelez International, Inc., styled as Mondelēz International, is an American multinational confectionery, food, beverage, and snack food holding company based in Chicago.

•Designed and implemented automated server build management, monitoring, and deployment solutions across multiple platforms and tools, including Amazon EC2, Jenkins nodes/agents, and SSH.

•Collaborated closely with development teams, leveraging a range of AWS services such as Kinesis, Lambda, SQS, SNS, and SWF to identify and resolve application issues effectively.

•Maintained databases in the cloud, including RDS and EC2-based databases.

•Ensured the smooth installation, configuration, and management of GitHub repositories.

•Used Terraform HCL code to define cloud and on-prem resources, configuring and managing all infrastructure throughout its lifecycle.

•Oversaw development of training content for issues related to IT Cybersecurity.

•Configured performance and security alert monitoring systems using CloudWatch and CloudTrail, enabling proactive monitoring and ensuring the overall security of the cloud infrastructure.

•Built cloud-native microservices infrastructure using Docker and Kubernetes.

•Responsible for designing and deploying ELK clusters (Elasticsearch, Logstash, Kibana).

•Integrated GitHub and Bitbucket with Jenkins through various plugins, scheduling and managing multiple jobs in the build pipeline, facilitating streamlined and automated deployment processes.

•Managed network settings, including Route53, DNS, ELB, IP Address, and CIDR configurations, optimizing performance and maintaining reliable connectivity.

•Developed highly available and resilient applications by leveraging AWS features such as Multi-AZ deployments and read replicas, improving reliability and robustness.

•Provided comprehensive storage solutions utilizing AWS services like S3, EBS, Glacier, and others, tailoring them to the specific requirements of the applications.

•Adhered to best practices and ensured the successful deployment and debugging of cloud initiatives throughout the development lifecycle.

•Maintained and improved continuous integration and continuous delivery processes, enabling the efficient and reliable deployment of applications.

•Performed troubleshooting and issue resolution within Kubernetes clusters, ensuring the smooth operation and performance of the infrastructure.

•Implemented event-driven and scheduled AWS Lambda functions, effectively triggering various AWS resources and enabling efficient, automated workflows (see the sketch below).
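
For illustration, a minimal sketch of an event-driven Lambda handler of this kind, reacting to S3 object-created events (the processing itself is a placeholder):

    import json
    import urllib.parse

    def handler(event, context):
        """Log each newly created S3 object named in the triggering event."""
        records = event.get("Records", [])
        for record in records:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            # Placeholder for real processing of the new object.
            print(json.dumps({"bucket": bucket, "key": key}))
        return {"processed": len(records)}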

•Involved in performing data migration from on-premises environments into AWS, ensuring a seamless transition and data integrity.

DevOps Engineer

Unisys Corporation, Salt Lake City, UT, August 2016 – June 2017

Unisys Corporation is an American multinational information technology (IT) services and consulting company founded in 1986 and headquartered in Blue Bell, Pennsylvania. The company provides digital workplace, cloud applications & infrastructure, enterprise computing, and business process services.

•Provided technical expertise to Linux and AWS support teams for new product and service releases, ensuring smooth transitions and effective utilization of new technologies.

•Consulted with clients and internal stakeholders, guiding architectural design considerations to optimize system performance, scalability, and security.

•Built a robust document management system on the cloud using Lambda, Elasticsearch, containers, Python and Java code, S3, and DynamoDB, enabling efficient document storage, retrieval, and management (sketched below).
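
A minimal sketch of the storage path in such a system: the document body goes to S3 and its metadata is indexed in DynamoDB (bucket and table names are hypothetical):

    import uuid

    import boto3

    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("documents")  # hypothetical table

    def store_document(body, metadata, bucket="example-doc-store"):
        """Upload a document to S3 and index its metadata in DynamoDB."""
        doc_id = str(uuid.uuid4())
        key = f"docs/{doc_id}"
        s3.put_object(Bucket=bucket, Key=key, Body=body)
        table.put_item(Item={"doc_id": doc_id, "s3_key": key, **metadata})
        return doc_id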

•Monitored and managed Linux systems in a complex multi-server solutions environment, proactively identifying and resolving issues to maintain system stability and availability.

•Deployed a classic web app to AWS ECS containers, leveraging autoscaling, HTTP load balancing, and auto-healing to ensure scalability and resilience of the application.

•Wrote Terraform code in HashiCorp Configuration Language (HCL) to describe the cloud infrastructure.

•Collaborated with internal teams and external clients, effectively communicating and coordinating project requirements, updates, and support using various communication channels.

•Worked in the Edge browser to write notes and doodle directly on webpages.

•Applied core technologies such as Apache/Nginx, MySQL/PostgreSQL, Varnish, Pacemaker, CRM Clustering, Kubernetes, ELK (Elasticsearch, Logstash, Kibana), and Redis to develop and optimize system infrastructure.

•Utilized Ansible/Ansible Tower as a powerful configuration management tool, automating daily tasks, deploying critical applications, and managing changes efficiently.

•Operated and maintained systems running on AWS, ensuring their availability and performance. Deployed built artifacts to the application server using Maven and integrated Maven builds with Jenkins for seamless deployment workflows.

•Embraced the principles of Infrastructure-as-Code (IaC), building and maintaining an IaC codebase using Puppet, Terraform, and Ansible. Ensured consistency and repeatability in infrastructure provisioning.

•Deployed development, QA, and production environments using Terraform variables, managing Terraform code with the Git version control system. Created reusable Terraform modules, such as Compute and Users, to streamline environment provisioning.

•Automated daily tasks using Bash (Shell) scripts, ensuring operational efficiency, and reducing manual efforts. Documented environment changes and performed log analysis to troubleshoot issues.

•Installed and configured a multi-node Cassandra cluster, conducted failure induction, created keyspaces and tables, and accessed data from clients, gaining proficiency in Cassandra and the big data tech stack (see the sketch below).
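
For illustration, a minimal sketch of creating a keyspace and table and writing a row with the Python DataStax driver (node addresses and schema are hypothetical):

    # Requires the DataStax driver: pip install cassandra-driver
    from cassandra.cluster import Cluster

    cluster = Cluster(["10.0.0.11", "10.0.0.12", "10.0.0.13"])  # hypothetical nodes
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS metrics
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS metrics.readings (
            sensor_id text, ts timestamp, value double,
            PRIMARY KEY (sensor_id, ts)
        )
    """)
    session.execute(
        "INSERT INTO metrics.readings (sensor_id, ts, value) "
        "VALUES (%s, toTimestamp(now()), %s)",
        ("sensor-1", 42.0),
    )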

•Utilized Azure as an external cloud service for managing Microsoft Active Directory.

•Applied built-in customization and third-party add-ons to extend JIRA functionality as needed to meet client requirements.

AWS DevOps Analyst

Target Corporation, Minneapolis, Minnesota, January 2013 – August 2016

Target Corporation is an American retail corporation headquartered in Minneapolis, Minnesota. It is the seventh largest retailer in the United States, and a component of the S&P 500 Index. The company is one of the largest American-owned private employers in the United States.

•Created the DevOps CI/CD Pipeline in AWS supporting the Data Analysis functions at Target.

•Utilized third-party tools including Git, Jenkins and some of the earliest Terraform incarnations for IaC.

•Streamlined data analysis and reporting processes by creating Excel documents that pulled metrics data, presented it to stakeholders, and provided concise explanations of the best placement for needed resources, enabling informed decision-making.

•Monitored CPU, memory, hardware, and software, including RAID, physical disks, multipath, filesystems, and networks, using the Nagios monitoring tool.

•Coordinated statistical data analysis, design, and information flow, ensuring the accuracy and integrity of data throughout the organization.

•Coordinated with cross-functional teams to establish seamless data pipelines and efficient information exchange.

•Actively participated in requirements meetings and data mapping sessions to gain a deep understanding of business needs and align technology solutions accordingly.

•Leveraged SQL to develop tables, views, and materialized views, enabling efficient data management and retrieval.

•Conducted thorough research and resolved issues related to data-flow integrity into databases, troubleshooting and implementing effective solutions to maintain the accuracy and reliability of data.

•Transformed project data requirements into comprehensive data models, providing a solid foundation for data management and analysis, and ensuring data consistency, scalability, and ease of integration with existing systems.

•Evaluated industry trends and analyzed competitive environments to gain insights and assess the effectiveness of current strategies, enabling proactive decision-making and the identification of opportunities for improvement and innovation.

•Compiled, evaluated, and reviewed engineered data for internal systems, contributing to the enhancement of overall system performance and efficiency by analyzing data, identifying patterns, and providing actionable insights to drive business outcomes.

EDUCATION

•Bachelor’s in Electrical / Electronic Engineering from Ferris State University, USA

CERTIFICATIONS

•AWS Cloud Practitioner

•Microsoft Certified Professional

•Project Management Professional (PMP)

•HashiCorp Terraform


