HAIDER SHAGHATI

Phone: 309-***-**** Email: ad5wy7@r.postjobfree.com

AWS CLOUD ENGINEER/ DEVOPS ARCHITECT

SUMMARY

Dynamic Cloud Solutions Architect and seasoned DevOps Engineer with a 12-year track record in cloud and DevOps roles and 24+ years in the IT industry, specializing in cloud infrastructure, automation, and scalable architecture. Brings deep expertise in AWS, underpinned by a thorough understanding of DevOps practices and tools. Known for leveraging a broad spectrum of cloud services, third-party DevOps utilities, and advanced monitoring solutions to engineer resilient, efficient, and secure IT environments. Demonstrates a strong foundation in scripting, site reliability engineering, and the SDLC, with a proven ability to navigate the complexities of cloud architecture and DevOps methodologies to deliver innovative solutions.

Core Competencies:

•Cloud Engineering: Mastery of essential AWS services, including but not limited to EC2, S3, Lambda, ELB, VPC, and CloudFormation. Expert in crafting scalable, fault-tolerant cloud infrastructures that optimize performance and cost.

•DevOps Practices: Hands-on experience with a suite of DevOps tools (Git, Jenkins, Ansible, Puppet, Terraform) to automate build, deployment, and configuration management processes. Proficient in implementing CI/CD pipelines that streamline software delivery cycles.

•Monitoring & Reliability: Skilled in deploying and configuring advanced monitoring solutions (Splunk, Prometheus, Datadog, Nagios) to ensure high availability and reliability. Embraces the principles of Site Reliability Engineering and the Well-Architected Framework to enhance system resilience and user satisfaction.

•Scripting & Automation: Proficient in Python, Bash, and Groovy, enabling the automation of routine tasks, improving efficiency and reducing the margin for error.

•Security & Compliance: In-depth knowledge of security best practices, including the use of AWS CloudWatch, GuardDuty, and CloudTrail for real-time security monitoring. Experienced in managing Certificate Authorities, KMS, and PKI frameworks to ensure the secure transmission and storage of sensitive data.

•Containerization & Orchestration: Advanced skills in deploying containerized applications using Docker, with orchestration via Kubernetes, to facilitate seamless scaling and management of containerized workloads.

TECHNICAL SKILLS

•DevOps & Containerization Skills:

•Continuous Integration/Deployment: Jenkins, AWS (CodeCommit, CodeDeploy, CodeBuild)

•Container Solutions: Kubernetes (EKS), Docker, Amazon ECS/ECR

•System Configuration: Ansible

•Code Management: Git

•Infrastructure Provisioning: Terraform

•Team Collaboration: Slack, UrbanCode

•Logging: ELK Stack, Splunk

•Artifact Storage: JFrog Artifactory

•Cloud Expertise:

•Platforms: AWS (including S3, Lambda, MediaConvert)

•Cloud Services Proficiency: AWS (CloudWatch, CloudTrail)

•Programming Proficiency:

•Languages: Java, Python, JavaScript, Kotlin, Go, HTML, CSS, C++, SQL

•Scripting: Python, Node.js, UNIX Shell

•Operating Systems: Unix/Linux (Ubuntu, CentOS, Amazon Linux), Windows (including MS Server)

•Software Development Tools:

•IDEs: Eclipse, Visual Studio Code, IntelliJ IDEA

•Remote Collaboration: Amazon Chime, Zoom, Slack

•Networking Knowledge:

•Protocols: TCP/IP, FTP, SSH, SNMP, DNS, DHCP

•Hardware: Cisco Routers/Switches, understanding of WAN, LAN, NAS, SAN

•Web & Server Management:

•Servers: Apache Tomcat, JBoss, Apache2

•Development Frameworks: Java Spring

•Database Management:

•SQL: Microsoft SQL Server, MySQL, PostgreSQL, Amazon RDS

•NoSQL: MongoDB, Cassandra

•AI Understanding:

•Gemini, Prompt Engineering, Get Cody, ChatGPT & Microsoft Autopilot.

PROFESSIONAL EXPERIENCE

STATE FARM, BLOOMINGTON, ILLINOIS Apr 2022 - Present

DevOps & Security Architect

State Farm is the largest property and casualty insurance provider, and the largest auto insurance provider, in the United States. I crafted and elevated a resilient AWS cloud ecosystem, blending Terraform automation with cutting-edge AWS services for dynamic, scalable applications, and championed a seamless transition from traditional to cloud-based operations, fortifying development and deployment with Docker, Kubernetes, and CI/CD methodologies to improve infrastructure agility and reliability.

•Designing, configuring, and deploying highly available, fault-tolerant, and auto-scaling AWS infrastructure for multiple applications utilizing a variety of AWS services including EC2, Route53, VPC, S3, RDS, CloudFormation, CloudWatch, SQS, and IAM.

•Conducting thorough static and dynamic security assessments within Linux environments using tools like Veracode, aligning with DevOps principles to strengthen application safety.

•Automating infrastructure deployment through Infrastructure as Code (IaC) methodologies using tools such as AWS CodePipeline and Terraform to streamline workflows and ensure consistency.

•Implementing Docker Swarm and Docker Compose to automate application development and deployment processes, ensuring reliability and consistency.

•Creating and managing CloudFormation templates (JSON and YAML) for efficient provisioning and management of AWS services.
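
A minimal sketch of this kind of template-driven provisioning, launching a stack from an inline YAML body via boto3; the stack name, bucket resource, and configured AWS credentials are illustrative assumptions, not details from the role:

```python
import boto3

# Tiny illustrative template: one versioned S3 bucket.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
"""

cfn = boto3.client("cloudformation")

# Create the stack, then block until CloudFormation reports success.
cfn.create_stack(StackName="demo-app-stack", TemplateBody=TEMPLATE)
cfn.get_waiter("stack_create_complete").wait(StackName="demo-app-stack")
print("Stack created")
```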

•Engineering scalable and fault-tolerant staging and production environments across multiple availability zones using Terraform templates.

•Designing, building, and maintaining scalable and reliable data pipelines that extract, transform, and load (ETL) data from various sources into AWS data storage solutions such as Amazon S3, Amazon Redshift, and Amazon RDS.
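
A hedged sketch of one such extract-transform-load step landing curated CSV data in S3; the file name, bucket, and field schema are placeholder assumptions:

```python
import csv
import io
import boto3

def transform(row: dict) -> dict:
    # Example transformation: normalize a field and drop empty values.
    row["state"] = row.get("state", "").strip().upper()
    return {k: v for k, v in row.items() if v}

def etl_to_s3(src_path: str, bucket: str, key: str) -> None:
    # Extract from a local CSV, transform row by row, load into S3.
    buf = io.StringIO()
    with open(src_path, newline="") as f:
        reader = csv.DictReader(f)
        writer = csv.DictWriter(buf, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            writer.writerow(transform(row))
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buf.getvalue())

etl_to_s3("claims.csv", "example-etl-bucket", "curated/claims.csv")
```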

•Leading the creation of development, staging, production, and disaster recovery environments with a focus on optimizing deployments and resource utilization.

•Developing and implementing security architectures for AWS environments, including designing secure network configurations, access controls, and encryption mechanisms.

•Managing the migration of on-premises applications to the cloud, leveraging features like ELBs and Auto-Scaling policies to enhance scalability, elasticity, and availability.

•Utilizing configuration management tools such as Chef Cookbooks and Ruby scripts to automate infrastructure installation and configuration tasks.

•Maintaining highly available clustered and standalone servers through Ansible scripting and configuration management.

•Leading Kubernetes deployments, including the creation of stateful sets, network policies, dashboards, and Helm charts for efficient cluster management.
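
An illustrative check on such a cluster using the official kubernetes Python client (a library choice assumed here; kubectl and Helm were the primary tools named above), reporting stateful-set readiness per namespace:

```python
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config
apps = client.AppsV1Api()

# Report readiness of every stateful set across all namespaces.
for sts in apps.list_stateful_set_for_all_namespaces().items:
    ready = sts.status.ready_replicas or 0
    print(f"{sts.metadata.namespace}/{sts.metadata.name}: "
          f"{ready}/{sts.spec.replicas} ready")
```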

•Integrating data from disparate sources, including databases, streaming platforms, and third-party APIs, to create unified and comprehensive datasets for analysis and reporting.

•Setting up and managing security monitoring tools such as AWS CloudWatch, AWS Config, and AWS Security Hub to detect and respond to security incidents in real-time, including conducting incident investigations and implementing remediation measures.

•Automating build activities using Maven and Jenkins jobs, managing Jenkins for seamless infrastructure management, and facilitating regular builds.

•Implemented robust AWS security groups to strictly control traffic flow to EC2 instances, significantly improving our cloud environment's overall security posture.
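
A minimal sketch of the kind of rule this describes, permitting HTTPS only from a single office range; the group ID and CIDR below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS from one trusted CIDR only.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office"}],
    }],
)
```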

•Led the development of automated solutions for server provisioning, monitoring, and deployment across various platforms (EC2, Jenkins Nodes, SSH). This significantly reduced manual work and boosted operational efficiency.

•Leveraged Groovy scripting and Jira plugins to automate workflows and create custom fields, significantly improving project management capabilities.

•Installed and configured Ansible to automate deployments from Jenkins repositories to various environments (Integration, QA, Production). This streamlined pipelines and accelerated time-to-market for new features.

•Utilized Ansible playbooks and CI tools (Rundeck, Jenkins) to automate infrastructure tasks like continuous deployment, application server setup, and stack monitoring, leading to more consistent and reliable infrastructure operations.

•Designing and implementing data models and schemas optimized for analytics and reporting requirements, considering factors such as query performance, data granularity, and scalability.

•Automating security tasks and processes using AWS services like AWS Lambda, AWS CloudFormation, and AWS Systems Manager to improve efficiency and maintain a consistent security posture across environments.
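
A hedged sketch of one such automation: a Lambda handler that revokes a world-open SSH rule. The EventBridge/CloudTrail trigger and the event shape are assumptions for illustration:

```python
import boto3

def handler(event, context):
    # Assumed event shape: CloudTrail AuthorizeSecurityGroupIngress
    # delivered via EventBridge.
    group_id = event["detail"]["requestParameters"]["groupId"]
    ec2 = boto3.client("ec2")
    # Remove the offending 0.0.0.0/0 SSH rule.
    ec2.revoke_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )
    return {"remediated": group_id}
```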

•Orchestrated seamless deployments using a suite of AWS services like EC2, Route53, S3, RDS, and IAM. This optimization resulted in faster deployments and better resource utilization.

•Developed automation templates for deploying both relational and NoSQL databases (MSSQL, MySQL, Cassandra, MongoDB) in AWS environments, ensuring efficient database management across projects.

•Configured AWS Elastic Load Balancers (ELB) for auto-scaling based on traffic patterns, and managed multi-tier and multi-region architectures using AWS CloudFormation. This ensured our applications were highly available and scalable.
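
A sketch of a target-tracking policy of the kind described, scaling on ALB request count per target; the auto-scaling group name and resource label are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the group to hold ~1000 requests per target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="req-per-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            "ResourceLabel": "app/web-alb/123abc/targetgroup/web-tg/456def",
        },
        "TargetValue": 1000.0,
    },
)
```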

•Integrated automated build pipelines with deployment workflows to facilitate seamless upgrades, migrations, and integrations of Jira with other tools (SVN, Artifactory, Jama, Jenkins).

•Customized Jira to align with specific project requirements and industry standards. Additionally, provided best practice guidance and ensured consistency with company policies.

•Providing security training and awareness programs for AWS users and stakeholders to educate them on security best practices, policies, and procedures, and to promote a culture of security within the organization.

CONOCOPHILLIPS, HOUSTON, TEXAS Aug 2020 – Mar 2022

Lead Cloud/ DevOps Engineer

ConocoPhillips is an independent upstream oil and gas company headquartered in Houston, Texas, United States. With a keen focus on optimizing cloud infrastructure and promoting collaboration, I have successfully implemented various AWS services such as Kinesis, Lambda, SQS, SNS, and SWF to pinpoint and resolve application issues, ensuring smooth and reliable performance.

•Utilized AWS services like Kinesis, Lambda, SQS, SNS, and SWF to pinpoint and resolve application issues, ensuring smooth and reliable application performance.

•Leveraged RDS and EC2-based databases for seamless cloud operation and data integrity.

•Installed, configured, and maintained the GitHub repository to facilitate efficient version control and collaboration among team members.

•Implemented performance and security alert monitoring using CloudWatch and CloudTrail, enabling real-time insights into system health and potential security threats for a more secure cloud environment.
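
An illustrative CloudWatch alarm of this kind, paging an SNS topic on sustained high CPU; the instance ID and topic ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU exceeds 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-1",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```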

•Configured AWS Elastic Load Balancers (ELB) for auto-scaling based on application traffic patterns and managed multi-tier and multi-region architectures using AWS CloudFormation, ensuring high availability and scalability of applications.

•Integrated automated build pipelines with deployment workflows, facilitating seamless upgrades, migrations, and integrations of Jira with other Atlassian applications and toolsets like SVN, Artifactory, Jama, and Jenkins.

•Integrated GitHub and Bitbucket with Jenkins using plugins and scheduled multiple jobs in the build pipeline, streamlining processes for faster software delivery.

•Oversaw network settings (Route53, DNS, ELB, IP addressing, CIDR configurations) to guarantee optimal performance and functionality, minimizing downtime and improving user experience.

•Utilized AWS services like multi-AZ, read replicas, ECS, and others to develop highly available and resilient applications, ensuring maximum uptime and reliability for critical business applications.

•Implemented data governance policies and security controls to ensure data integrity, confidentiality, and compliance with regulatory requirements such as GDPR, HIPAA, and PCI DSS.

•Managed Docker containers on Kubernetes and successfully migrated containerized environments from ECS to Kubernetes Cluster for improved resource utilization and scalability.

•Offered various storage solutions (S3, EBS, EFS, Glacier) to cater to diverse data storage needs while ensuring data accessibility, durability, and security.

•Deployed applications onto their respective environments using Elastic Beanstalk, simplifying the deployment process and ensuring consistency across different environments.

•Utilized AWS DataSync to seamlessly migrate petabytes of data from on-premises to AWS Cloud, ensuring minimal disruption to ongoing operations while capitalizing on the scalability and durability of cloud storage solutions.

•Developed and implemented data quality monitoring and validation processes to ensure the accuracy, completeness, and consistency of data stored and processed on AWS.

•Managed continuous integration and continuous delivery processes, facilitating rapid and reliable delivery of software updates and enhancements, thereby accelerating time-to-market for new features.

•Resolved issues within Kubernetes clusters, leveraging deep technical expertise and analytical skills to ensure the smooth operation of containerized environments.

•Applied knowledge of Web Services, API Gateways, and application integration development and design to enhance application performance and functionality, delivering superior user experiences.

•Developed and implemented event-driven and scheduled AWS Lambda functions to trigger various AWS resources, automating routine tasks and improving operational efficiency.
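
A sketch of a scheduled (EventBridge cron) Lambda of this kind; the "owner" tag policy it enforces is an assumed example, not a documented rule from the role:

```python
import boto3

def handler(event, context):
    ec2 = boto3.client("ec2")
    paginator = ec2.get_paginator("describe_instances")
    to_stop = []
    # Find running instances missing an "owner" tag.
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for res in page["Reservations"]:
            for inst in res["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if "owner" not in tags:
                    to_stop.append(inst["InstanceId"])
    if to_stop:
        ec2.stop_instances(InstanceIds=to_stop)
    return {"stopped": to_stop}
```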

THE HOME DEPOT, ATLANTA, GA Apr 2018 – Jul 2020

AWS Data/ DevOps Engineer

The Home Depot, Inc., is an American multinational home improvement retail corporation that sells tools, construction products, appliances, and services, including fuel and transportation rentals. I led the charge in fostering innovation by collaborating across teams, ensuring preparedness for new releases and emerging technologies, while providing expert architectural guidance to align solutions with business objectives.

•Implemented data encryption mechanisms utilizing AWS Key Management Service (KMS) to safeguard sensitive data both at rest and in transit, ensuring compliance with data security and privacy regulations.
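
A minimal KMS round trip illustrating envelope-style protection of a small secret; the key alias and payload are placeholders:

```python
import boto3

kms = boto3.client("kms")

# Encrypt under a customer-managed key (alias is a placeholder).
ciphertext = kms.encrypt(
    KeyId="alias/app-data",
    Plaintext=b"account-number-123",
)["CiphertextBlob"]

# Decrypt; KMS resolves the key from the ciphertext metadata.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"account-number-123"
```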

•Employed AWS Glue for seamless schema evolution and versioning, enabling updates to data schemas and structures without disruption to downstream applications or analytics processes.

•Collaborated with Linux and AWS support teams to ensure preparedness for new product releases and the adoption of emerging technologies, fostering a culture of continuous learning and advancement.

•Provided expert guidance on architectural design considerations to clients and internal stakeholders, ensuring that solutions aligned optimally with business objectives and technological capabilities.

•Developed a robust Cloud-based Document Management System using Lambda, Elasticsearch, Containers, Python, Java codes, S3, and DynamoDB, enhancing document organization and accessibility within the organization.

•Monitored and maintained Linux systems in a complex multi-server environment, ensuring their stability, security, and optimal performance to uphold critical business operations.

•Deployed classic web applications to AWS ECS containers and managed scalable and resilient applications using Instance Group, Autoscaler, HTTP Load balancer, and Autohealing, guaranteeing high availability and performance across varying workloads.

•Fostered collaboration and transparency between internal teams and external clients through effective communication via face-to-face meetings, phone calls, emails, web portals, and intranet platforms.

•Ensured robustness and scalability of deployed solutions by implementing core technologies including Apache/Nginx, MySQL/PostgreSQL, Varnish, Pacemaker, CRM Clustering, Kubernetes, ELK (Elasticsearch, Logstash, Kibana), and Redis.

•Enhanced operational efficiency and reliability through the utilization of Ansible/Ansible Tower as a configuration management tool for automating daily tasks, rapid deployment of critical applications, and proactive change management.

•Managed systems running on AWS, deploying built artifacts to the application server using Maven, and integrating Maven builds with Jenkins for streamlined build and deployment processes.

•Implemented Infrastructure-as-Code (IaC) principles and maintained an IaC codebase using Puppet, Terraform, and Ansible, enabling consistent and reproducible infrastructure deployments.

•Established Dev, QA, and Prod environments using Terraform variables, managed Terraform code with Git version control system, and defined Terraform modules for Compute and Users to ensure consistency and scalability across environments.

•Automated daily tasks using Bash (Shell) scripts, documented changes in the environment and server configurations, and analyzed error logs and user logs to promptly identify and address issues, ensuring system stability and reliability.
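
The originals were Bash; as an equivalent Python sketch of the log-analysis piece, this counts ERROR lines per service in a syslog-style file (the path and log format are assumptions):

```python
import re
from collections import Counter

# Matches syslog-style lines such as "sshd[1234]: ... ERROR ...".
PATTERN = re.compile(r"(\w[\w.-]*)\[\d+\]:.*\bERROR\b")

def error_counts(path: str) -> Counter:
    counts = Counter()
    with open(path, errors="replace") as f:
        for line in f:
            m = PATTERN.search(line)
            if m:
                counts[m.group(1)] += 1
    return counts

for service, n in error_counts("/var/log/syslog").most_common(10):
    print(f"{service}: {n} errors")
```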

•Installed a multi-node Cassandra cluster, simulated failure scenarios, created keyspaces and tables, and accessed them from the client with Cassandra and Big Data Tech Stack, facilitating efficient handling of large-scale data storage and processing requirements.
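
A sketch of the keyspace/table workflow using the DataStax cassandra-driver (pip install cassandra-driver); contact points, keyspace, and schema below are illustrative:

```python
from cassandra.cluster import Cluster

# Contact points are placeholder node addresses.
cluster = Cluster(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS inventory
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS inventory.items (
        sku text PRIMARY KEY,
        qty int
    )
""")

# Parameterized insert, then read it back from the client.
session.execute("INSERT INTO inventory.items (sku, qty) VALUES (%s, %s)",
                ("abc-1", 42))
print(session.execute("SELECT * FROM inventory.items").one())
```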

TRUIST FINANCIAL CORPORATION, CHARLOTTE, NORTH CAROLINA Sep 2016 – Mar 2018

DevOps Automation Engineer

Truist Financial Corporation is a purpose-driven financial services company committed to inspiring and building better lives and communities. I successfully orchestrated the deployment of an ASP.NET web application on AWS, configuring IIS and application pools for seamless operation, while also establishing automated CI/CD pipelines, optimizing deployment processes, and championing DevOps practices to foster collaboration and enhance code quality.

•Successfully deployed an ASP.NET application on AWS by configuring IIS and application pools for smooth operation.

•Established an automated build and deployment process for the application, laying the foundation for a robust CI/CD system.

•Designed and implemented automated server provisioning using Chef for consistent and scalable infrastructure.

•Installed Tomcat instances and managed deployments using Puppet manifests, enabling efficient application management and scalability.

•Championed DevOps practices such as CI/CD, testing, and monitoring, fostering collaboration between development and operations teams.

•Developed a robust test environment and utilized JUnit testing to enhance code quality and enable smoother development cycles.

•Integrated builds and configured deployment pipelines with Jenkins and SSH for streamlined deployments.

•Configured CloudTrail for monitoring user activity and managed product releases across various environments.

•Managed source code repository, build processes, and tools to ensure version control and consistency in software releases.

•Utilized AWS RDS for efficient data storage and retrieval for the organization's applications.

•Modified the SCM database for accuracy and integrity based on user requests.

•Provided deployment services to development teams, ensuring smooth and efficient software releases.

•Collaborated with the Release Manager to improve build automation and redefine processes for software builds, patching, and reporting.

•Automated daily tasks with Bash scripts, documented changes, and analyzed logs to ensure system stability and reliability.

•Administered servers using SSH and utilized Nginx and Apache Tomcat for optimal performance and availability of deployed applications.

NISSAN, FRANKLIN, TENNESSEE Jan 2015 – Aug 2016

AWS Cloud Engineer

Nissan produces a diverse range of vehicles, including sedans, SUVs, trucks, electric vehicles, and commercial vehicles. As a lead architect, I spearheaded the design and implementation of highly scalable production systems for Nissan on AWS, focusing on load balancers, caching, and distributed architectures. By leveraging robust monitoring solutions like CloudWatch and CloudTrail, we ensured proactive performance and security management, safeguarding Nissan's critical infrastructure.

•Architected and constructed highly scalable production systems for Nissan on AWS, specializing in load balancers, caching (Memcached), and distributed architectures (master/slave) to efficiently manage high traffic volumes.
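
A cache-aside sketch with pymemcache (the client library is an assumption; any memcached client follows the same pattern), where the server address and backing lookup are placeholders:

```python
from pymemcache.client.base import Client

cache = Client(("cache.internal", 11211))  # placeholder host

def load_from_database(product_id: str) -> bytes:
    # Stand-in for the real backing-store lookup.
    return f"product-record-{product_id}".encode()

def get_product(product_id: str) -> bytes:
    # Cache-aside: try memcached first, fall back to the database.
    key = f"product:{product_id}"
    value = cache.get(key)
    if value is None:
        value = load_from_database(product_id)
        cache.set(key, value, expire=300)  # 5-minute TTL
    return value

print(get_product("A-100"))
```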

•Implemented robust monitoring solutions utilizing CloudWatch and CloudTrail for proactive performance and security management, safeguarding Nissan's critical infrastructure.

•Spearheaded successful cloud migrations by meticulously analyzing and strategically transitioning legacy on-premises applications to AWS, ensuring a seamless transition and optimized performance for an enhanced user experience.

•Utilized AWS provisioning tools (EC2 and ECS) to effectively manage Nissan's cloud infrastructure, establishing a continuous integration and delivery (CI/CD) environment for accelerated development and deployment cycles.

•Played a pivotal role in crafting a secure and performant VPC design for Nissan's cloud environment, optimizing network configuration with subnets, availability zones, and adherence to security best practices.

•Advocated for DevOps practices by automating tasks across environments using a variety of tools (Ansible, Bash/Python scripts), facilitating automated builds, deployments, and releases to streamline development lifecycles.

•Streamlined code management and containerization by overseeing Git repositories and crafting Docker containers for optimized Linux environments, alongside Amazon Machine Images (AMIs) for Nissan's applications.

•Configured resilient network settings utilizing Route53, DNS, Elastic Load Balancers (ELBs), IP addresses, and CIDR blocks to ensure optimal connectivity for Nissan's applications.

•Ensured adherence to best practices throughout the development lifecycle for cloud initiatives, promoting quality and efficiency in deploying and debugging cloud solutions.

•Contributed to the AWS community by actively engaging with customers in the AWS Containers Area of Depth Technical Feedback Community, providing education on containerization solutions.

•Optimized application deployments through Elastic Beanstalk and leveraged event-driven AWS Lambda functions to dynamically allocate resources based on specific events, ensuring efficient resource utilization.

•Facilitated seamless data migration from on-premises environments to AWS, resolving application issues using a combination of services such as Amazon Kinesis, Lambda, SQS, SNS, and SWF.
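
A minimal Kinesis producer sketch for the streaming piece of such a migration; the stream name and record shape are assumptions:

```python
import json
import boto3

kinesis = boto3.client("kinesis")
event = {"source": "onprem-app", "type": "order_created", "order_id": "A-1001"}

kinesis.put_record(
    StreamName="migration-events",
    Data=json.dumps(event).encode(),
    PartitionKey=event["order_id"],  # keeps one order's events in order
)
```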

•Established a robust CI/CD pipeline by integrating Jenkins with GitHub and Bitbucket using various plugins to orchestrate multiple jobs within the build pipeline, ensuring a seamless and automated development process.

•Provided expert troubleshooting and support for Kubernetes clusters, databases (RDS and EC2-hosted), and storage solutions (S3, EBS, EFS, and Glacier) to maintain optimal performance and address issues within Nissan's cloud environment.

•Engineered highly available applications for Nissan by leveraging AWS services like multi-AZ deployments, read replicas, and ECS, ensuring business continuity and minimizing downtime.

HULU, SANTA MONICA, CA Jan 2013 – Dec 2014

AWS Build & Release Engineer

As an AWS Build and Release Engineer at Hulu, I led Java application construction and deployment across stages, optimizing build processes with Jenkins on Linux. I integrated Maven, Perl, and Bash Shell for streamlined automated build scripts.

•Directed the planning and execution of Java application construction and deployment across various development stages, including development, integration, and user acceptance testing (UAT) environments.

•Established Jenkins on Linux platforms, configuring both primary and secondary builds to enable simultaneous processing and enhance build efficiency.

•Created automated build scripts utilizing Maven, Perl, and Bash Shell, catering to quality assurance (QA), staging, and production deployment needs.

•Employed Bash and Python scripting to automate system administration tasks, resulting in notable improvements in operational efficiency and consistency.
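
One such task, sketched in Python with the standard library: alerting when any mount exceeds a usage threshold (the mount list and threshold are assumptions):

```python
import shutil

MOUNTS = ["/", "/var", "/home"]
THRESHOLD = 0.90  # warn above 90% used

for mount in MOUNTS:
    usage = shutil.disk_usage(mount)
    fraction = usage.used / usage.total
    if fraction > THRESHOLD:
        print(f"WARNING {mount} at {fraction:.0%}")
```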

•Provided guidance to developers on effective Git branching strategies, labeling conventions, and conflict resolution techniques to optimize code management practices.

•Implemented a centralized Maven repository using Nexus, ensuring efficient dependency management and version control integration with Git.

•Established and managed automated build systems leveraging Jenkins, ClearCase, and Perl/Python scripts, effectively streamlining the Continuous Integration/Continuous Deployment (CI/CD) pipeline.

•Developed branching strategies for Subversion to maintain code stability and proficiently address user issues.

•Utilized WLST scripts to deploy WebLogic application artifacts, maintaining and optimizing Linux environments to ensure optimal application performance.

•Supervised the implementation of Configuration Management and Change Management policies on Linux systems to enforce centralized control and compliance standards.

•Facilitated Release Management meetings to foster collaboration and ensure smooth coordination between teams, thereby enabling successful deployments.

•Designed and implemented Subversion metadata elements to effectively manage release versions and coordinated cross-team releases for efficient project delivery.

•Actively participated in change control meetings, obtaining necessary approvals for deployments during minor and major release events.

ALTERYX INC., IRVINE, CA Jan 2007 – Dec 2012

Data Analyst

Alteryx, Inc. is a US software company headquartered in Irvine, CA, with a development hub in Broomfield, CO, and global offices. Their products enable data science and analytics, aiming to democratize advanced analytics automation for all data workers.

•Data Acquisition and Ingestion:

•Collaborated with engineers and data scientists to identify and acquire relevant data sources across various formats and locations.

•Ensured data quality and cleanliness through processes like data wrangling and transformation.

•Documented data lineage to track the origin and transformation steps of the data.

•Data Analysis and Exploration:

•Utilized SQL and, where needed, other querying languages to explore and analyze datasets.

•Performed statistical analysis and data visualization to identify trends, patterns, and anomalies.

•Developed dashboards and reports to communicate findings to technical and non-technical audiences.

•Problem Solving and Communication:

•Partnered with domain experts (e.g., fraud investigators, financial analysts) to understand their specific questions and challenges.

•Translated business problems into actionable data analysis tasks.

•Communicated insights effectively through presentations, reports, and visualizations tailored to the audience.

•Foundry Expertise:

•Gained proficiency in the Foundry platform used for data integration, analysis, and visualization.

•Utilized Foundry's features to explore, analyze, and present data efficiently.

•Stayed up to date on the latest functionalities and best practices within Foundry.

•Adherence to Security Protocols:

•Followed strict data security protocols, as the work involved highly sensitive data.

•Maintained data privacy and confidentiality throughout the data analysis process.

•Continuous Learning:

•Stayed updated on the latest data analysis techniques, data visualization tools, and industry trends.

•Remained adaptable, learning new skills and technologies relevant to data analysis.

CONSTELLATION, NEW YORK, NY Feb 2001 – Dec 2006

System Administrator

As a System Administrator at Constellation, I played a crucial role in supporting the company's operations in electricity generation, transmission, and distribution.

•Installed and configured databases to meet stakeholder needs and enhance ROI.

•Designed and deployed host computers and implemented automated monitoring systems for optimal performance; oversaw procurement decisions for hardware, software, and IT communication components.

•Managed infrastructure upgrades, resolved end-user issues, and ensured smooth operations.

•Facilitated enterprise architecture initiatives, defined database standards, and maintained data integrity and security through backup and recovery protocols.

•Utilized SQL scripts to automate processes and managed databases such as MySQL.

•Applied proficiency in VMware and SAN management, scripting in Shell, Perl, and Python, and troubleshooting of protocols such as DNS, HTTP, LDAP, SMTP, and SNMP to provide production support and maintain system functionality.

EDUCATION

AWS DevOps Bootcamp, Virginia Tech University, VA

Bachelor's degree in Computer Engineering, University of Technology, Baghdad, Iraq


