
AWS DevOps Engineer

Location:
Fremont, CA
Posted:
October 27, 2022


SHAHAB ARIAN

(***) ***-****

Professional Profile

11+ years of overall IT experience, including 8+ years in AWS, DevOps, Continuous Integration, Continuous Deployment, and cloud implementations.

Experience analyzing system functionality and designing, developing, and implementing QA testing strategies for e-commerce, financial, insurance, retail, accounting, inventory, and health-insurance applications, spanning web-based, point-of-sale, and client/server systems, using manual and automated testing.

Professional experience configuring and deploying instances on AWS, Azure, and cloud environments.

In-depth experience in Amazon AWS cloud services (EC2, S3, EBS, ELB, CloudWatch, Elastic IP, RDS, SNS, SQS, Glacier, IAM, VPC, CloudFormation, Route53) and in managing security groups on AWS.

Migrated applications to the AWS cloud. Involved in DevOps processes for build and deploy systems.

Created Python scripts to fully automate AWS services, including web servers, ELB, CloudFront distributions, databases, EC2 and database security groups, S3 buckets, and application configuration. The scripts create stacks, create single servers, or join web servers to existing stacks.
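
A minimal boto3 sketch of this kind of AWS automation, assuming placeholder resource names and IDs (the security group, VPC, AMI, and bucket below are illustrative, not values from the original project):

```python
import boto3

REGION = "us-west-1"  # placeholder region

ec2 = boto3.client("ec2", region_name=REGION)
s3 = boto3.client("s3", region_name=REGION)

# Create a security group for the web tier and open HTTP.
sg = ec2.create_security_group(
    GroupName="web-sg-example",          # placeholder name
    Description="Allow HTTP from anywhere",
    VpcId="vpc-0abc1234def567890",       # placeholder VPC ID
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Launch a single web server into the new group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=[sg["GroupId"]],
)

# Create an S3 bucket for application configuration.
s3.create_bucket(
    Bucket="example-app-config-bucket",  # bucket names must be globally unique
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
```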

Well versed in building and deploying applications using DevOps practices such as Continuous Integration (CI) and Continuous Deployment/Delivery (CD), with CI tools such as Jenkins, Ansible, and VSTS.

Work with configuration management tools such as Puppet and Ansible.

Work with Containerization tools such as Docker and Kubernetes.

Expertise with Monitoring tools like CloudWatch, Nagios and Zabbix.

Expertise in preparing Test Plans, Use Cases, Test Scripts, Test Cases, and Test Data.

Thorough knowledge of SQL.

Experienced in Defect Management using Test Director, Quality Center, ALM, TFS, VSTS, and MTM.

Diverse experience in Information Technology with emphasis on Quality Assurance, Manual Testing, and Automated Testing using QuickTest Professional/UFT, LoadRunner, WinRunner, Telerik Studio, Selenium, Protractor, UI Automation, Test Director/Quality Center, Rational Suite, ALM, and Microsoft Test Manager.

Proven QA experience on Agile teams, with extensive knowledge of Agile software testing.

Professional Skills

Programming & Scripting

Unix shell scripting, Java, SQL, Python, Parquet, Pig, ORC, Microsoft PowerShell, C, C#, VBA.

Databases

SQL and NoSQL databases and file systems: Apache Cassandra, Apache HBase, MongoDB, Oracle, SQL Server, HDFS

Cloud Environment

AWS, Azure, GCP, OpenStack, Heroku

Network Protocols

SMTP, SNMP, ICMP, TCP/IP, FTP, TELNET, UDP, RIP, iSCSI, Fibre Channel, NIS, NFS, DNS, DHCP, Cisco Routers/Switches, WAN, LAN, NAS, SAN

DevOps

CI/CD, Jenkins, Git, GitHub, Bitbucket, GitLab CI, CircleCI, Ansible, Chef, Puppet, Terraform, Docker, Kubernetes

Project Methods

Agile, Kanban, Scrum, DevOps, Continuous Integration, Test-Driven Development, Unit Testing, Functional Testing, Design Thinking, Lean Six Sigma

IaC

Terraform, CloudFormation

Data Visualization & Monitoring

CloudWatch, Prometheus, Grafana, ELK, Tableau, PowerBI, Nagios, Athena

Professional Experience

AWS DevOps Engineer September 2021 – Current

Lucid Newark, CA

Project Description

Worked on automating the weekly process of updating electric vehicle software with new releases.

Project Points

Worked with Agile practices using CI/CD pipelines, with Jenkins for continuous integration.

Deployed AWS resources using AWS CloudFormation.

Wrote and updated Groovy scripts for Jenkins.
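
As an illustration of Jenkins Groovy scripting, a minimal declarative Jenkinsfile sketch; the stage commands and deploy script are hypothetical placeholders, not the original pipeline:

```groovy
// Minimal declarative Jenkinsfile; build and deploy commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'    // assumes a Maven project
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'
            }
        }
        stage('Deploy') {
            when { branch 'main' }           // deploy only from main
            steps {
                sh './deploy.sh'             // hypothetical deploy script
            }
        }
    }
}
```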

Automated AWS EC2/VPC/S3/SQS/SNS processes using Python and Bash Scripts.

Deployed AWS Lambda code from Amazon S3 buckets; created a Lambda deployment function and configured it to receive events from an S3 bucket.
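
A minimal Python sketch of an S3-triggered Lambda handler of the kind described; the logging logic is illustrative, not the original function:

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Handle S3 ObjectCreated events: log each new object's size."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        head = s3.head_object(Bucket=bucket, Key=key)
        print(json.dumps({
            "bucket": bucket,
            "key": key,
            "size_bytes": head["ContentLength"],
        }))
    return {"statusCode": 200}
```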

Created Python scripts to automate AWS services: ELB, CloudFront distributions, ECS, databases, EC2 and database security groups, S3 buckets, and application configuration.

Created Docker images using Dockerfiles, worked with Docker container snapshots, removed images, managed Docker volumes, and set up Docker hosts.
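
A minimal Dockerfile sketch along these lines; the base image, port, and entrypoint are placeholders, not the original build:

```dockerfile
# Illustrative Dockerfile; base image, port, and entrypoint are placeholders.
FROM python:3.9-slim

WORKDIR /app

# Copy and install dependencies first so this layer caches well.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```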

Set up CI/CD pipelines in alignment with standard processes of software lifecycle development and testing to ensure pipelines were fully functional and met requirements before implementation to production.

Configured the Git plugin to provide integration between Git and Jenkins.

Prepared Linux support teams for new product and service releases and for the adoption of new technologies.

Utilized the JFrog platform on AWS for faster, more efficient software releases.

Utilized the Parasoft service virtualization tool to simulate services during testing.

Performed functional unit tests, regression tests, stress tests, and integration tests.

Applied static analysis to C++ code.

Deployed onto development and production environments.

Developed best practice standards, architectures, and procedures to support the utilization of cloud infrastructure services.

Redesigned the code-promotion process and automated it by leveraging Groovy and the Confluence APIs.

Created advanced workflows, conditions, and scripted fields, and extended Jira's functionality via Groovy scripting through plugins such as ScriptRunner.

AWS DevOps Engineer April 2020 – September 2021

Cricket Wireless Atlanta, GA

Project Description

Integrated DevOps across multiple IT systems, combining customer transaction and interaction data, to deploy a CI/CD pipeline.

Project Points

Design, configure, and deploy Amazon Web Services (AWS) for multiple applications using the AWS stack (EC2, Route53, S3, RDS, CloudFormation, CloudWatch, SQS, IAM), focusing on high availability, fault tolerance, and auto scaling.

Handle migration of on-premises applications to the cloud and create the cloud resources to enable it. Use critical AWS tools, ELBs, and Auto Scaling policies for scalability, elasticity, and availability.

Write CloudFormation templates (CFT) in JSON and YAML to build AWS services under the Infrastructure-as-Code paradigm.
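
A minimal CloudFormation sketch in YAML, assuming a hypothetical parameterized S3 bucket; the original templates are not reproduced here:

```yaml
# Minimal CloudFormation template (YAML); names and values are placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Description: Example versioned S3 bucket managed as Infrastructure-as-Code
Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, qa, prod]
    Default: dev
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'example-app-${Environment}-artifacts'  # must be globally unique
      VersioningConfiguration:
        Status: Enabled
Outputs:
  BucketArn:
    Value: !GetAtt AppBucket.Arn
```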

Design AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT gateways, ensuring successful deployment of web applications and database templates. Utilize the AWS CLI to automate backups of ephemeral data stores to S3 buckets and EBS, and create nightly AMIs of mission-critical production servers as backups.

Use Jenkins as the continuous integration (CI) tool to deploy Spring Boot microservices to AWS Cloud and Pivotal Cloud Foundry (PCF) using buildpacks.

Use CloudWatch Logs to move application logs to S3 and create alarms on exceptions raised by applications.

Create Terraform templates for custom-sized VPCs and NAT subnets to deploy web applications and databases. Use Terraform as Infrastructure-as-Code, with execution plans, resource graphs, and change automation, and make extensive use of Auto Scaling launch-configuration templates to launch Amazon EC2 instances when deploying microservices.
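
An illustrative Terraform sketch of a custom-sized VPC with a NAT gateway; CIDRs, names, and tags are placeholders, and the aws_eip argument assumes a recent AWS provider:

```hcl
# Illustrative Terraform sketch; CIDRs, names, and tags are placeholders.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "example-vpc" }
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.0.0/24"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

# Elastic IP plus NAT gateway so private subnets can reach the internet.
resource "aws_eip" "nat" {
  domain = "vpc" # "vpc = true" on older AWS provider versions
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}
```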

Implement Terraform modules to deploy applications across multiple cloud providers and manage the resulting infrastructure. Build Jenkins jobs that create AWS infrastructure from GitHub repos containing Terraform code.

Manage the Git repository, code merging, and production deployments. Maintain shell scripts supporting Maven builds. Create and modify build configuration files, including pom.xml. Use Maven as the build tool on Java projects to produce build artifacts from source code.

Integrate Jenkins with Git build automation tools and a Nexus repository for pushing successfully built code. Use Jenkins to automate and schedule build processes, along with Shell, Python, and Perl scripts to automate routine jobs. Apply CI/CD tooling to implement a Blue-Green deployment methodology, reducing downtime in the production environment.

Integrate SonarQube with Jenkins for continuous code-quality inspection. Implement functional tests in Java using the JUnit and Cucumber frameworks.

Use a hybrid cloud environment for application deployments with OpenStack, working within the cloud on integration processes. Use OpenStack to build cloud labs for application deployment in testing environments.

Build new OpenStack deployments via Ansible playbooks for deployments and bug fixes, and manage them in the production environment.

Automate infrastructure activities such as continuous deployment and application server setup using Ansible playbooks, and integrate Ansible with Jenkins. Refine automation components with scripting and configuration management (Ansible).

Write Ansible playbooks, with Python SSH as the wrapper, to manage configurations of AWS nodes, and test the playbooks on AWS instances using Python. Run Ansible scripts to provision dev servers.
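
A minimal Ansible playbook sketch of this style of configuration management; the host group and package are placeholders, not the original playbooks:

```yaml
# Illustrative Ansible playbook; the host group and package are placeholders.
- name: Configure web tier on AWS nodes
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.yum:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```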

Use Kubernetes to deploy, scale, load-balance, and manage Docker containers with multiple namespaced versions, and for automated deployment, scaling, and management of containerized applications across clusters of hosts.
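
A minimal Kubernetes Deployment manifest sketch showing a namespaced, replicated container; the image, namespace, and labels are placeholders:

```yaml
# Illustrative Kubernetes Deployment; image, namespace, and labels are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
  namespace: example-dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0
          ports:
            - containerPort: 8080
```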

Manage Kubernetes charts using Helm: create reproducible builds of Kubernetes applications, manage Kubernetes manifest files, and manage releases of Helm packages.

Orchestrate Docker images and containers using Kubernetes, creating the master and nodes. Create a private cloud using Kubernetes that supports DEV, TEST, and PROD environments. Write scripts to create Kubernetes clusters with RHEL and Vagrant. Define Kubernetes services to discover and proxy requests to the appropriate minion.

Configure Kubernetes replication controllers to run multiple pods, such as the Jenkins master server, across multiple minions.

Assist stakeholders hands-on with Splunk to design and maintain production-quality data, dashboards, and various applications.

Work with Prometheus to monitor live traffic, logs, memory utilization, disk utilization, and other factors important to deployment.

Document system configurations, instances, OS, AMI build practices, backup procedures, troubleshooting guides, and keep infrastructure and architecture drawings current with changes. Implement release schedules, communicate the release status, create roll-out plans, track project milestones, and prepare reports using Jira.

Technology: Java, Linux, Maven, Nexus, Ansible, SonarQube, Jenkins, Terraform, Kubernetes, Docker, Nginx, Splunk, Git, AWS, OpenStack, PCF, EC2, Route 53, S3, VPC, EMR, SQS, Auto Scaling, ELB, shell scripts, Unix/Linux environment.

Cloud ENGINEER November 2017 – April 2020

US XPRESS ENTERPRISES Chattanooga, TN

Project Description

US Xpress, a provider of a wide variety of transportation solutions, collects approximately a thousand data elements, ranging from fuel usage to tire condition to truck engine operations to GPS information, and uses this data for optimal fleet management to drive productivity, saving millions of dollars in operating costs.

Project Points

Designed and implemented an effective and efficient CI/CD flow that gets code from dev to prod with high quality and minimal manual effort.

Built and deployed Docker containers to break a monolithic app into microservices, improving developer workflow, increasing scalability, and optimizing speed.

Automated resources in AWS such as EC2 and S3 using Ansible playbooks (YAML) and Jenkins.

Consulted and contributed to system architecture (AWS).

Administered AWS services (IAM, VPC, Route 53, EC2, S3, ELB, CodeDeploy, RDS, ASG, CloudWatch) and Terraform.

Analyzed systems and performed usability testing to ensure performance and reliability and to enhance scalability.

Set up security requirements within AWS Cloud.

Deployed code to Tomcat servers via Maven builds using Git and Jenkins.

Performed debugging to optimize code and automate routine tasks.

Automated the deployment of Docker images to Docker registry using Jenkins.

Automated the deployment of codes and applications using Git, GitHub, and Jenkins.

Worked in an Agile environment.

Supported multiple clients and internal teams on an as-needed basis.

Cloud Infrastructure Engineer September 2015 – November 2017

COSTCO Issaquah, WA

Project Description

Like other big-box retailers, Costco tracks what its members buy and when, and the information it collects can even keep customers from getting sick. Case story: a California fruit-packing company warned Costco about possible listeria contamination in its stone fruits (peaches, plums, and nectarines); rather than send a blanket warning to everyone who had shopped at Costco recently, Costco was able to notify the specific customers who had purchased those items. Costco also helped the Centers for Disease Control pinpoint the source of a salmonella outbreak back in 2010. This project involved data pipelines that capture supply-chain data and correlate it with purchase data.

Project Points

Provisioned AWS EC2 Instances and ECS Clusters.

Designed and built scalable production systems (load balancers, Memcached, master/slave architectures).

Analyzed legacy On-Prem applications and worked on design strategy and final migration to AWS Cloud.

Part of a team that created a VPC environment, including server instances, storage instances, subnets, and availability zones.

Set up alert monitoring for performance and security using tools such as CloudWatch and CloudTrail.

Performed DevOps engineering tasks such as automating, building, deploying, managing, and releasing code from one environment to another, tightly maintaining CI and CD.

Installed, configured, and managed GitHub repositories.

Created Docker containers to leverage existing Linux containers and AMIs in addition to creating Docker containers from scratch.

Educated customers about containerization solutions as part of the AWS Containers Area of Depth Technical Feedback Community.

Wrote CloudFormation templates (CFT) in YAML and JSON to build AWS services under the Infrastructure-as-Code paradigm.

Deployed applications to their respective environments using Elastic Beanstalk.

Used event-driven and scheduled AWS Lambda functions to trigger various AWS resources.

Performed data migration from on-premises environments into AWS.

Supported the business development lifecycle (Business Development, Capture, Solution Architect, Pricing and Proposal Development).

Helped solve application problems using services like Amazon Kinesis, AWS Lambda, Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Workflow Service (Amazon SWF).

Set up network configuration and managed Route53, DNS, ELB, IP addresses, and CIDR configurations.

Designed technically compliant and secure cloud solutions, as well as value-based, on-demand services to facilitate the effective transition and migration of projects and programs into a unique and adaptive cloud environment.

Deployed and debugged cloud initiatives in accordance with best practices throughout the development lifecycle.

Developed high-availability, resilient applications using AWS services such as Multi-AZ deployments, read replicas, and ECS.

Configured, deployed, and managed Docker containers using Kubernetes.

Worked with version control tools such as Git and GitHub.

Configured plugins on Jenkins to integrate with GitHub and Bitbucket, and scheduled multiple jobs in the build pipeline.

Helped configure and set up networking within Kubernetes by applying Ingress rules and the cloud controller manager as required by the cluster.

Performed troubleshooting and resolved issues within Kubernetes cluster.

Set up and supported databases in the cloud (e.g., RDS and Databases on EC2).

Provided other storage solutions such as S3, EBS, EFS, and Glacier.

Strong knowledge of web services, API gateways, and application integration development and design.

Successfully migrated containerized environment from ECS to Kubernetes Cluster.

Used AWS DataSync to migrate petabytes of data from on-premises to AWS Cloud (worked on the prototype, design, and implementation).

AWS Engineer September 2013 – September 2015

X-FAB SEMICONDUCTORS USA Lubbock, TX

Project Description

Worked with large datasets from databases utilizing SQL.

Project Points

Provisioned and decommissioned EC2 Instances as requested by other developers and testers.

Migrated on-premises databases to AWS RDS or EC2 instances.

Maintained and configured user accounts for dev, QA, and production servers, and created IAM roles so that EC2, RDS, S3, CloudWatch, and EBS resources could communicate with each other.
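
A minimal boto3 sketch of the IAM pattern described (a role EC2 instances can assume to reach S3); the role name and managed policy below are illustrative assumptions:

```python
import json

import boto3

iam = boto3.client("iam")

ROLE = "example-ec2-s3-read"  # placeholder role name

# Trust policy that lets EC2 instances assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName=ROLE, AssumeRolePolicyDocument=json.dumps(trust_policy))

# Attach an AWS-managed policy granting read-only S3 access.
iam.attach_role_policy(
    RoleName=ROLE,
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# An instance profile is what EC2 actually attaches at launch.
iam.create_instance_profile(InstanceProfileName=ROLE)
iam.add_role_to_instance_profile(InstanceProfileName=ROLE, RoleName=ROLE)
```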

Migrated and implemented multiple applications from on-premises to the cloud using AWS services such as SMS, DMS, CloudFormation, S3, Route53, Glacier, EC2, RDS, SQS, SNS, Lambda, and VPC.

Deployed and maintained AWS data warehouses (Redshift and Snowflake).

Built and configured a virtual data center in the Amazon Web Services cloud to support Enterprise Data Warehouse hosting, including a Virtual Private Cloud (VPC), public and private subnets, security groups, route tables, and an Elastic Load Balancer.

Built servers using AWS: imported volumes, launched EC2 and RDS instances, created security groups, and configured auto scaling and Elastic Load Balancers (ELBs) within the defined Virtual Private Cloud.

Designed AWS CloudFormation templates to create multi-region web applications and databases.

Configured, deployed, and managed Docker containers using Kubernetes.

Used ETL services such as Glue for data masking, data quality, data replication, data virtualization, and master data management.

Created and managed DynamoDB tables as required for business use.
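
A minimal boto3 sketch of DynamoDB table creation; the table name and key schema are illustrative, not the original design:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Table name and key schema are illustrative placeholders.
dynamodb.create_table(
    TableName="example-orders",
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "order_date", "KeyType": "RANGE"},   # sort key
    ],
    BillingMode="PAY_PER_REQUEST",  # no capacity planning needed
)
```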

Used AWS Data Pipeline for ETL to move data between AWS resources and from on premise to AWS.

Enabled performance and high availability of applications by configuring elastic load balancers and auto scaling groups.

Performed troubleshooting and resolved issues within Kubernetes cluster.

Managed user access to AWS resources using Identity Access Management (IAM).

Used cost-effective systems and processes, such as EC2 instances, Elastic Load Balancers, auto scaling, and AMIs, to build and manage virtual servers on Amazon Web Services.

Created AMI templates.

Used monitoring tools such as CloudWatch and CloudTrail to set up security and auditing strategies for compliance.

Part of a team that set up a Virtual Private Cloud.

Database Administrator September 2011 – September 2013

Walgreens Atlanta, GA

Project Description

Worked with large datasets from databases utilizing SQL.

Project Points

Used the Data Pump utility for export and import. Wrote scripts to back up databases. Maintained archive logs for databases. Streamlined backup procedures and implemented RMAN for backup and disaster recovery.
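
A minimal shell-plus-RMAN sketch of such a backup script, assuming placeholder ORACLE_SID, Oracle home, and log paths:

```bash
#!/bin/bash
# Illustrative nightly RMAN backup; SID, home, and log path are placeholders.
export ORACLE_SID=ORCL
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1

$ORACLE_HOME/bin/rman target / log=/u01/backup/rman_$(date +%Y%m%d).log <<EOF
RUN {
  BACKUP DATABASE PLUS ARCHIVELOG;
  DELETE NOPROMPT OBSOLETE;
}
EXIT;
EOF
```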

Refreshed and replicated databases from production to testing using Exp/Imp.

Performed table partitioning. Created index-organized tables and locally managed tablespaces.

Planned and configured disk layout, mount points, and capacity requirements for servers.

Provided maintenance of user accounts, privileges, profiles, and roles.

Involved with OLTP relational databases system analysis, design, implementation, and management.

Upgraded/migrated databases from Oracle 10g to 11g and completed cross-platform migrations.

Created logical and physical design and installed and upgraded databases.

Managed data files, control files, redo log files, tables, and indexes, and applied constraints for performance and availability.

Worked on user management (mainly on grants, privileges, roles, and profiles).

Designed and developed SQL procedures and performed query performance tuning.

Performed hot and cold backup and recovery using RMAN and UNIX Scripts.

Monitored database and SQL performance using Statspack, OEM, and EXPLAIN PLAN, and tuned problem areas to enhance performance.

Monitored database and system performance.

Configured the Event Manager to notify support personnel.

Worked on application and database security (data access, passwords, network, etc.).

Handled daily production problems/change requests with Oracle.

Performed daily database administration tasks: user management, space monitoring, performance monitoring and tuning, alert log monitoring, and backup monitoring.

Performed physical backup (hot and cold) as well as logical backup (export/import).

Monitored tablespace issues and created tablespaces and added data files.

Involved in session management and in monitoring critical wait events and blocking sessions in the database.

Altered database parameters and completed tuning tests.

Exported and imported database objects to copy them from one database to another.

Linux System Administrator May 2010 – September 2011

Menards Eau Claire, WI

Project Description

Worked on Linux LAMP-based application deployments in a customer-facing environment.

Project Points

Maintained and troubleshot Linux-based systems and provisioned Linux systems in virtual environments.

Applied user-management functions to ensure appropriate security permissions.

Analyzed and interpreted system logs and error messages.

Installed software and supported installations with maintenance and troubleshooting activities.

Worked on Linux in an open-source environment.

Built LAMP servers on demand for customers.

Managed system processes and scheduled jobs using the Cron utility.
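
An illustrative crontab sketch of this kind of job scheduling; the script names and paths are placeholders:

```bash
# Illustrative crontab entries (crontab -e); script paths are placeholders.
# min hour dom mon dow  command
0 2 * * *   /usr/local/bin/nightly_backup.sh >> /var/log/nightly_backup.log 2>&1
*/5 * * * * /usr/local/bin/check_disk_space.sh
30 6 * * 1  /usr/local/bin/weekly_report.sh
```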

Patched OS on a quarterly cycle to limit downtime and ensure servers were up to date.

Ensured AWS cost optimization by rightsizing EC2 instances, scheduling bedtime scripts, and purchasing reserved instances, which helped the organization reduce costs.
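
A minimal boto3 sketch of a "bedtime" script that stops running instances outside business hours; the Schedule=bedtime tag convention is an assumption, not the original implementation:

```python
import boto3

ec2 = boto3.client("ec2")

def stop_bedtime_instances():
    """Stop running instances tagged Schedule=bedtime (tag is an assumption)."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["bedtime"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
        print(f"Stopped: {instance_ids}")

if __name__ == "__main__":
    stop_bedtime_instances()
```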

Performed system bottleneck checks and troubleshooting related to I/O, CPU, and memory using system utilities.

Monitored and maintained adequate file system space for the operating system and application files.

Experienced with server-side technologies such as Apache, Nginx, and HAProxy.

Education & Training

York University

Bachelor’s in Information Technologies

Certifications

AWS Certified Solutions Architect

Oracle Database Certified

Linux Certified

Qualys Security Certified


