Tamu Tes
AWS Cloud/DevOps Architect
Contact: 636-***-****; Email: **********@*****.***
Profile Summary
An accomplished AWS Cloud Engineer and DevOps Architect with over 16 years of comprehensive IT experience, including 15+ years specializing in AWS DevOps and cloud technologies. Demonstrated expertise in system architecture, leveraging deep industry insights to drive substantial revenue growth, enhance network efficiencies, and secure $13M in new business. Proven track record of boosting sales productivity by 34% and reducing security incidents by 25%.
Key Skills:
Amazon AWS Cloud Services: Expert in EC2, S3, EBS, ELB, CloudWatch, ECS, EKS, RDS, SNS, SQS, Lambda, IAM, VPC, CloudFormation, Control Tower, CodePipeline, Glue, and ETL Jobs.
Azure DevOps: Proficient in Azure Repos, Azure Pipelines, and Azure Artifacts for system building and deployment.
Containerization & Orchestration: Skilled in Docker and Kubernetes, enhancing deployment efficiency and scalability.
Configuration Management: Utilized Ansible for consistent configuration management across the infrastructure.
Monitoring & Alerting: Implemented robust monitoring systems with Prometheus, Grafana, CloudWatch, Check Point CloudGuard, and Splunk for proactive performance management.
CI/CD Pipelines: Established CI/CD workflows using GitHub Actions, Jenkins, AWS CodeCommit, CodeBuild, CodeDeploy, and CodePipeline, facilitating streamlined software delivery.
Infrastructure as Code (IaC): Advocated for IaC principles using Terraform to automate AWS resource provisioning and configuration.
Key Achievements:
Successfully implemented DevOps methodologies and executed seamless cloud migrations, delivering innovative solutions to complex technical challenges.
Enhanced application deployment and orchestration processes, improving efficiency and scalability through Docker and Kubernetes.
Played a critical role in ensuring a robust and secure infrastructure by effectively managing AWS security groups.
Contributed to the resilience of the cloud environment through diligent management of security measures and best practices.
Improved system reliability by leveraging CloudWatch for monitoring and managing cloud resources and applications.
Spearheaded the adoption of Infrastructure as Code (IaC) principles, promoting scalability, consistency, and version control.
Orchestrated end-to-end DevOps processes, including CI/CD and automated testing, resulting in accelerated software delivery cycles and improved collaboration across development and operations teams.
Professional Skills:
In-depth knowledge of Agile and DevOps practices with experience in guiding teams of various sizes through planning, development, rollout, and migration phases.
Strong communication and leadership skills, with a proven ability to drive cross-functional team collaboration and ensure successful project outcomes.
Adept at proactive identification and resolution of performance bottlenecks and potential issues within AWS environments.
With a rich background in cloud technologies and a commitment to continuous improvement, I am dedicated to driving innovation and excellence in DevOps practices to achieve organizational goals and deliver exceptional results.
TECHNICAL SKILLS
DevOps & Containerization: Jenkins, AWS (CodeCommit, CodeDeploy, CodeBuild), Kubernetes (EKS), Docker, Amazon ECS/ECR, Ansible, Git, Terraform, Slack, Urban Code, ELK Stack, Splunk, JFrog Artifactory
Cloud Expertise: AWS (including S3, Lambda, MediaConvert, CloudWatch, and CloudTrail)
Programming Proficiency: Java, Python, JavaScript, Kotlin, Go, HTML, CSS, C++, SQL, Node.js, UNIX Shell
Operating Systems: Unix/Linux (Ubuntu, CentOS, Amazon Linux), Windows (including MS Server)
Software Development Tools: Eclipse, Visual Studio Code, IntelliJ IDEA, Amazon Chime, Zoom, Slack
Networking Knowledge: TCP/IP, FTP, SSH, SNMP, DNS, DHCP, Cisco Routers/Switches, understanding of WAN, LAN, NAS, SAN
Web & Server Management: Apache Tomcat, JBoss, Apache2
Development Frameworks: Java Spring
Database Management: Microsoft SQL Server, MySQL, PostgreSQL, Amazon RDS, MongoDB, Cassandra
AI Tools: Gemini, ChatGPT, Cody, Microsoft Autopilot; Prompt Engineering; AI Fundamentals
Work Experience
DevSecOps Architect Centene, St. Louis, MO Jan 2023 – Present
As a DevSecOps Architect at Centene Corporation, I specialize in architecting and managing robust AWS environments, including creating and maintaining CloudFormation templates for provisioning. I design secure network configurations and encryption measures to fortify AWS setups, deploying fault-tolerant infrastructure with Kubernetes and Terraform. I conduct thorough security assessments in Linux environments, automate deployments with AWS CodePipeline and Terraform, and optimize resource usage across environments. I also implement Docker for application management, manage cloud migrations for scalability, and automate tasks with Chef, Ansible, and Ruby scripts to ensure efficient infrastructure operations.
Created and managed CloudFormation templates (JSON and YAML) for efficient provisioning and management of AWS services.
Developed and implemented security architectures for AWS environments, including secure network configurations, access controls, and encryption mechanisms.
Designed, configured, and deployed highly available, fault-tolerant, and auto-scaling AWS infrastructure for various applications using services such as EC2, Route53, VPC, S3, RDS, CloudFormation, CloudWatch, SQS, and IAM.
Led Kubernetes deployments, including the creation of stateful sets, network policies, dashboards, and Helm charts for efficient cluster management.
Managed and optimized Amazon EKS clusters, ensuring high availability, security, and performance, integrating them seamlessly into existing AWS environments.
Automated deployment and scaling of microservices using Kubernetes and Helm charts, enhancing operational efficiency and scalability.
Engineered scalable, fault-tolerant staging and production environments across multiple availability zones using Terraform templates.
Leveraged AWS Shield, AWS WAF, and AWS NACL to implement robust DDoS protection, web application firewall rules, and network-level access controls, respectively.
Utilized Veracode and other security tools to conduct thorough static and dynamic security assessments in Linux environments, ensuring compliance with industry standards and best practices.
Automated infrastructure deployment using Infrastructure as Code (IaC) methodologies with AWS CodePipeline and Terraform, ensuring streamlined workflows and consistency.
Implemented Docker Swarm and Docker Compose to automate application development and deployment, ensuring reliability and consistency.
Integrated data from various sources, including databases, streaming platforms, and third-party APIs, to create comprehensive datasets for analysis and reporting.
Managed the migration of on-premises applications to the cloud, leveraging ELBs and Auto-Scaling policies to enhance scalability, elasticity, and availability.
Designed, built, and maintained scalable, reliable data pipelines to extract, transform, and load (ETL) data into AWS storage solutions such as Amazon S3, Amazon Redshift, and Amazon RDS.
Led the creation of development, staging, production, and disaster recovery environments, focusing on optimizing deployments and resource utilization.
Configured AWS Elastic Load Balancers (ELB) for auto-scaling based on traffic patterns, managing multi-tier and multi-region architectures using CloudFormation to ensure high availability and scalability.
Managed containerized applications using Docker and deployed them on Amazon ECS and EKS, enhancing deployment speed, flexibility, and resource efficiency.
Utilized Chef Cookbooks and Ruby scripts for automating infrastructure installation and configuration tasks.
Maintained highly available clustered and standalone servers through Ansible scripting and configuration management.
Set up and managed security monitoring tools like AWS CloudWatch, AWS Config, and AWS Security Hub to detect and respond to security incidents in real-time, conducting investigations and implementing remediation measures.
Customized Jira to align with specific project requirements and industry standards, providing best practice guidance and ensuring consistency with company policies.
Installed and configured Ansible to automate deployments from Jenkins repositories to various environments (Integration, QA, Production), streamlining CI/CD pipelines and accelerating time-to-market for new features.
Configured DevOps monitoring and alerting systems with AWS CloudWatch, CloudTrail, and ELK stack to proactively identify and resolve infrastructure and application issues.
Orchestrated seamless deployments using AWS services like EC2, Route53, S3, RDS, and IAM, optimizing for faster releases and better resource utilization.
Automated build activities with Maven and Jenkins jobs, administering Jenkins to facilitate regular, reliable builds and consistent infrastructure management.
Implemented robust AWS security groups to strictly control traffic flow to EC2 instances, significantly improving overall cloud environment security.
Led the development of automated solutions for server provisioning, monitoring, and deployment across platforms (EC2, Jenkins Nodes, SSH), reducing manual work and boosting operational efficiency.
Leveraged Groovy scripting and Jira plugins to automate workflows and create custom fields, enhancing project management capabilities.
Utilized Ansible playbooks and CI tools (Rundeck, Jenkins) to automate tasks like continuous deployment, application server setup, and stack monitoring, leading to more consistent and reliable operations.
Provided security training and awareness programs for AWS users and stakeholders.
Designed and implemented data models and schemas optimized for analytics and reporting, considering query performance, data granularity, and scalability.
Automated security tasks and processes with Python and Bash across AWS services such as Lambda, CloudFormation, and Systems Manager, improving efficiency and maintaining a consistent security posture (an illustrative sketch follows this section).
Developed automation templates for deploying relational and NoSQL databases (MSSQL, MySQL, Cassandra, MongoDB) in AWS environments, ensuring efficient database management.
Developed DevOps strategies for backups and disaster recovery using AWS Backup, RDS snapshots, and S3, ensuring data durability and system resilience.
Integrated automated build pipelines with deployment workflows to facilitate seamless upgrades, migrations, and integrations of Jira with other tools (SVN, Artifactory, Jama, Jenkins).
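The following is a minimal, illustrative Python (boto3) sketch of the kind of security automation described above: auditing security groups for ingress rules open to 0.0.0.0/0. It is not code from the Centene environment; the region, port list, and Lambda wiring are assumptions.

# Sketch: flag security group rules that allow unrestricted ingress (0.0.0.0/0)
# on sensitive ports. Region and port list are illustrative assumptions.
import boto3

SENSITIVE_PORTS = {22, 3389, 3306}  # SSH, RDP, MySQL -- hypothetical policy

def find_open_ingress(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg.get("IpPermissions", []):
                open_to_world = any(
                    r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
                )
                if open_to_world and perm.get("FromPort") in SENSITIVE_PORTS:
                    findings.append((sg["GroupId"], perm["FromPort"]))
    return findings

def lambda_handler(event, context):
    # Could be wired to an EventBridge schedule; here it simply reports findings.
    return {"open_rules": find_open_ingress()}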
Cloud & DevOps Architect BNY Mellon, New York City, NY Apr 2020 – Dec 2022
As BNY Mellon's Cloud & DevOps Architect, I orchestrated the implementation of event-driven AWS Lambda functions and developed robust data quality monitoring processes to protect the integrity of AWS-hosted data. Leveraging AWS capabilities such as multi-AZ deployments and ECS, I ensured high availability for applications. I managed CI/CD pipelines for accelerated software updates, ran workloads on RDS and EC2-based databases, deployed applications with Elastic Beanstalk, and optimized network settings. I enhanced cloud security through rigorous performance and security monitoring with CloudWatch and CloudTrail.
Designed and implemented event-driven AWS Lambda functions to automate tasks across AWS resources (an illustrative sketch follows this section).
Developed data quality monitoring processes for AWS-hosted data accuracy and completeness.
Utilized AWS services (multi-AZ, read replicas, ECS) for resilient applications with high availability.
Managed continuous integration and delivery processes to accelerate software updates.
Employed RDS and EC2-based databases for seamless cloud operations and data integrity.
Deployed applications using Elastic Beanstalk across environments for consistent deployment processes.
Managed the GitHub repository for efficient version control and team collaboration.
Implemented performance and security monitoring using CloudWatch and CloudTrail to enhance cloud security.
Configured AWS Elastic Load Balancers (ELB) for auto-scaling based on traffic patterns, utilizing AWS CloudFormation for multi-tier architectures.
Applied expertise in Web Services, API Gateways, and integration development for enhanced application functionality.
Optimized network settings (Route53, DNS, ELB, IP Address, CIDR configurations) to minimize downtime and improve user experience.
Built and optimized DevOps CI/CD pipelines with AWS CodePipeline, CodeBuild, and Jenkins to streamline code deployments, ensuring consistent and reliable delivery across environments.
Utilized AWS services such as Kinesis, Lambda, SQS, SNS, and SWF to identify and resolve application issues, ensuring consistent performance.
Implemented data governance policies and security controls for data integrity and compliance.
Orchestrated the migration of containerized environments to Kubernetes for improved scalability.
Integrated automated build pipelines with deployment workflows for streamlined software upgrades and integrations with Jira and other tools.
Led DevOps initiatives to design and maintain Infrastructure as Code (IaC) using tools like Terraform and AWS CloudFormation, automating resource provisioning and configuration for seamless scalability.
Provided storage solutions (S3, EBS, EFS, Glacier) tailored to diverse data needs, ensuring accessibility and security.
Leveraged AWS DataSync for seamless data migration to AWS Cloud, optimizing storage scalability.
Troubleshot Kubernetes cluster issues to maintain optimal containerized environment operation.
Implemented CI/CD pipelines for Kubernetes applications using Jenkins and AWS CodePipeline, facilitating continuous integration and deployment processes.
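Below is a minimal Python/boto3 sketch of an event-driven Lambda pattern like the one described above: the function reacts to S3 object-created events and tags new objects for downstream processing. The bucket wiring and tag names are hypothetical, not BNY Mellon specifics.

# Sketch of an event-driven Lambda: tag objects as they land in S3 so downstream
# jobs can pick them up. Tag names are illustrative assumptions.
import urllib.parse
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.put_object_tagging(
            Bucket=bucket,
            Key=key,
            Tagging={"TagSet": [{"Key": "ingest-status", "Value": "pending"}]},
        )
    return {"processed": len(records)}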
Lead DevOps Automation Engineer Direct Energy, Houston, TX Jul 2017 – Mar 2020
As a Lead DevOps Automation Engineer at Direct Energy in Houston, TX, I spearheaded DevOps practices, implementing CI/CD pipelines and fostering collaboration between development and operations. I increased security and accountability by configuring CloudTrail for user activity monitoring and oversaw product releases across multiple environments. After successfully deploying an ASP.NET application on AWS, I tuned IIS and application pools for performance, administered servers with SSH, Nginx, and Apache Tomcat, and established robust CI/CD pipelines to enable faster, more reliable releases.
Championed DevOps practices, fostering collaboration between development and operations by implementing CI/CD, automated testing, and monitoring.
Increased security and accountability by configuring CloudTrail for user activity monitoring and managing product releases across various environments (an illustrative sketch follows this section).
Led successful deployment of an ASP.NET application on AWS, configuring IIS and application pools for optimal performance and scalability.
Maintained optimal application performance and availability by administering servers using SSH and leveraging Nginx and Apache Tomcat.
Established a robust CI/CD pipeline that leverages Docker containers for efficient application packaging and deployment on Kubernetes clusters. This approach streamlined deployments, ensured consistent environments, and facilitated faster rollbacks.
Maintained data integrity by modifying the SCM database for accuracy based on user requests.
Supported development teams with deployment services, ensuring smooth and timely releases.
Ensured version control & consistent software releases by managing source code repository, build processes, and tools.
Optimized application performance by utilizing AWS RDS for efficient data storage and retrieval.
Designed & implemented automated server provisioning with Chef, ensuring consistent, scalable infrastructure for efficient application deployments.
Streamlined application management & scalability by installing Tomcat instances & managing deployments with reusable Puppet manifests.
Enhanced code quality and development cycles by developing a robust test environment and utilizing JUnit testing.
Integrated builds and configured deployment pipelines with Jenkins and SSH, enabling efficient and automated deployments.
Collaborated with Release Manager to improve build automation & redefine processes for efficient software builds, patching, and reporting.
Enhanced system stability and reliability by automating daily tasks with Bash scripts, documenting changes, and analyzing logs.
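As an illustration of CloudTrail-based user activity monitoring, the sketch below uses Python and boto3 to pull recent events for a given IAM user. The username, region, and lookback window are placeholders, not details from the Direct Energy environment.

# Sketch: list recent CloudTrail events for one IAM user to support activity audits.
from datetime import datetime, timedelta, timezone
import boto3

def recent_user_events(username, hours=24, region="us-east-1"):
    ct = boto3.client("cloudtrail", region_name=region)
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    events = []
    paginator = ct.get_paginator("lookup_events")
    pages = paginator.paginate(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": username}],
        StartTime=start,
        EndTime=end,
    )
    for page in pages:
        for e in page["Events"]:
            events.append((e["EventTime"], e["EventName"]))
    return events

if __name__ == "__main__":
    # "deploy-svc-account" is a hypothetical IAM user name.
    for when, name in recent_user_events("deploy-svc-account"):
        print(when, name)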
Cloud/Data Engineer Macy’s, New York City, NY May 2014 – Jun 2017
As a Cloud/Data Engineer at Macy’s, I deployed web applications on AWS ECS containers with scalability and high availability using Instance Group, Autoscaler, HTTP Load Balancer, and Autohealing. I ensured data security compliance with AWS KMS encryption and managed AWS systems efficiently with Maven and Jenkins integration. Using Terraform and Git, I established Dev, QA, and Prod environments for consistency and scalability. I developed a Cloud-based Document Management System with Lambda, Elasticsearch, Python, Java, S3, and DynamoDB, and implemented AWS Glue for seamless schema evolution.
Deployed web applications to AWS ECS containers and managed scalable applications using Instance Group, Autoscaler, HTTP Load Balancer, and Autohealing for high availability.
Implemented AWS Key Management Service (KMS) for data encryption at rest and in transit, ensuring compliance with security regulations.
Managed AWS systems, deployed artifacts using Maven, and integrated builds with Jenkins for streamlined processes.
Established Dev, QA, and Prod environments with Terraform and Git version control, ensuring consistency and scalability.
Developed a Cloud-based Document Management System using Lambda, Elasticsearch, containers, Python, Java, S3, and DynamoDB for enhanced document organization (a minimal indexing sketch follows this section).
Utilized AWS Glue for seamless schema evolution and versioning, enabling updates without disrupting downstream processes.
Monitored and maintained Linux systems in a multi-server environment to ensure stability, security, and optimal performance using Prometheus & Grafana.
Provided expert architectural guidance to ensure solutions aligned with business goals and technological capabilities.
Collaborated with Linux and AWS support teams to prepare for new releases and adopt emerging technologies, fostering continuous learning.
Implemented Infrastructure-as-Code (IaC) principles using Puppet, Terraform, and Ansible for consistent and reproducible deployments.
Ensured robustness and scalability of solutions using core technologies like Apache, Nginx, MySQL, PostgreSQL, Varnish, Pacemaker, Kubernetes, ELK (Elasticsearch, Logstash, Kibana), and Redis.
Installed and configured a multi-node Cassandra cluster, managing keyspaces and tables for efficient data storage and processing with Big Data technologies.
Improved operational efficiency with Ansible/Ansible Tower for task automation, rapid application deployment, and proactive change management.
Automated daily tasks with Bash (Shell) scripts, maintained environment configurations, and analyzed logs to resolve issues promptly.
Facilitated collaboration and transparency between teams and clients through effective communication channels.
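The sketch below illustrates the indexing idea behind the Document Management System bullet: a Lambda handler that records metadata for each uploaded document in DynamoDB. The table name, key schema, and attributes are hypothetical, not Macy's specifics.

# Sketch: index S3 document uploads into a DynamoDB table for later lookup.
import urllib.parse
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("document-index")   # assumed table name

def lambda_handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        size = record["s3"]["object"].get("size", 0)
        table.put_item(
            Item={
                "doc_id": key,          # assumed partition key
                "bucket": bucket,
                "size_bytes": size,
                "status": "uploaded",
            }
        )
    return {"indexed": len(records)}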
AWS Cloud Engineer Tenet Healthcare, Dallas, TX Dec 2011 – Apr 2014
As an AWS Cloud Engineer with Tenet Healthcare, I led cloud migration initiatives by transitioning legacy applications to AWS, ensuring optimized performance and minimal downtime. I established and managed CI/CD pipelines to streamline deployments, architected highly scalable systems with a focus on load balancing and high availability, and implemented robust monitoring and security solutions to safeguard infrastructure. Through proactive troubleshooting and DevOps automation, I enhanced operational efficiency and enabled seamless cloud-based application performance.
Worked on successful cloud migrations, meticulously analyzing and strategically transitioning legacy applications to AWS for a seamless user experience and optimized performance.
Established a CI/CD pipeline using AWS provisioning tools (EC2) to manage the cloud infrastructure and accelerate development and deployment cycles.
Led architecture and deployment of highly scalable production systems on AWS, specializing in load balancing, caching, and distributed architectures for efficient high-traffic management.
Safeguarded critical infrastructure by implementing robust monitoring solutions for proactive performance and security management (an illustrative alarm sketch follows this section).
Contributed to the AWS community by educating customers on containerization solutions.
Designed a secure and performant VPC for the cloud environment, optimizing network configuration and adhering to strict security best practices.
Engineered highly available applications by leveraging AWS services (multi-AZ deployments, read replicas) for business continuity.
Implemented DevOps practices by automating tasks across environments (Ansible, Bash/Python scripts) for streamlined development lifecycles (automated builds, deployments, releases).
Configured a resilient network infrastructure (Route53, DNS, ELBs, IP addresses, CIDR blocks) for optimal application connectivity.
Promoted quality and efficiency in cloud deployments by ensuring adherence to best practices throughout the development lifecycle.
Facilitated seamless data migration from on-premises environments to AWS, resolving application issues with services like SQS and SWF.
Automated the development process with a robust CI/CD pipeline using Jenkins and GitHub/Bitbucket plugins.
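The following is a small Python/boto3 sketch in the spirit of the proactive monitoring described above, creating a CloudWatch CPU alarm for an EC2 instance. The instance ID, SNS topic ARN, and thresholds are placeholders.

# Sketch: alarm when average CPU stays above a threshold for three 5-minute periods.
import boto3

def create_cpu_alarm(instance_id, sns_topic_arn, threshold=80.0, region="us-east-1"):
    cw = boto3.client("cloudwatch", region_name=region)
    cw.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                 # 5-minute evaluation periods
        EvaluationPeriods=3,        # alarm after 15 minutes above threshold
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],
    )

if __name__ == "__main__":
    # Placeholder instance ID and SNS topic ARN.
    create_cpu_alarm("i-0123456789abcdef0", "arn:aws:sns:us-east-1:123456789012:ops-alerts")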
AWS Build & Release Engineer Chevrolet, Detroit, MI Jan 2010 – Nov 2011
As an AWS Build & Release Engineer at Chevrolet, I led the deployment of Java applications across development, integration, and UAT environments using Jenkins on Linux. I automated builds with Maven, Perl, and Bash Shell for QA, staging, and production. Implemented Subversion strategies for release management and coordinated cross-team deployments. Established a centralized Maven repository with Nexus for streamlined dependency management and integrated Git version control. Applied CM policies on Linux for centralized control. Led Release Management meetings to ensure seamless team collaboration and deployment success.
Implemented Jenkins on Linux platforms, configuring primary & secondary builds to support concurrent processing and enhance build efficiency.
Developed automated build scripts using Maven, Perl, and Bash Shell to meet quality assurance (QA), staging, and production deployment requirements (an illustrative sketch follows this section).
Engineered Subversion metadata elements to manage release versions effectively and coordinated cross-team releases to facilitate efficient project delivery.
Established a centralized Maven repository with Nexus to streamline dependency management and integrate version control with Git.
Implemented Configuration Management and Change Management policies on Linux systems to enforce centralized control and compliance standards.
Orchestrated the planning and execution of Java application development and deployment across multiple stages, including development, integration, and user acceptance testing (UAT) environments.
Managed automated build systems with Jenkins, ClearCase, and Perl/Python scripts, effectively optimizing the Continuous Integration/Continuous Deployment (CI/CD) pipeline.
Participated actively in change control meetings, securing necessary approvals for deployments during minor and major release events.
Designed Subversion branching strategies to ensure code stability and efficiently address user issues.
Deployed WebLogic application artifacts using WLST scripts, maintaining and optimizing Linux environments for peak application performance.
Conducted Release Management meetings to promote collaboration and ensure seamless coordination between teams for successful deployments.
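Since this role combined Maven builds with Perl/Python scripting, here is a brief, hypothetical Python sketch of a build-and-stage helper: it runs the Maven build and copies the resulting artifacts to a staging directory. Paths and module layout are assumptions, not the actual Chevrolet build scripts.

# Sketch: run a Maven build, then stage the produced JARs for deployment.
import shutil
import subprocess
from pathlib import Path

def build_and_stage(project_dir="app", staging_dir="staging"):
    # Run a standard Maven build; fail loudly if it does not succeed.
    subprocess.run(["mvn", "clean", "package"], cwd=project_dir, check=True)

    Path(staging_dir).mkdir(exist_ok=True)
    # Pick up whatever JARs the build produced and stage them for deployment.
    for artifact in Path(project_dir, "target").glob("*.jar"):
        shutil.copy2(artifact, staging_dir)
        print(f"staged {artifact.name}")

if __name__ == "__main__":
    build_and_stage()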
Data Analyst Tableau Software, Seattle, WA Jan 2008 – Dec 2009
Tableau Software is a leading data visualization company that empowers organizations to see and understand their data.
Collaborated with engineers and data scientists to acquire and verify data from diverse sources.
Ensured data quality through rigorous data wrangling and transformation processes.
Documented data lineage to track data origin and transformation steps.
Utilized SQL and other querying languages for comprehensive data analysis (an illustrative sketch follows this section).
Performed statistical analysis and data visualization to identify trends and anomalies.
Developed dashboards and reports for clear communication of insights.
Translated business problems into actionable data analysis tasks.
Communicated findings effectively through tailored presentations and reports.
Gained proficiency in Alteryx Foundry for efficient data integration and analysis.
Adhered to strict data security protocols to protect sensitive information.
Stayed current with the latest data analysis techniques, tools, and industry trends.
Embraced continuous learning and adaptability to enhance data analysis skills.
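To illustrate the kind of SQL-driven trend analysis referenced above, here is a self-contained Python sketch using the standard-library sqlite3 module; the sales table and its columns are hypothetical examples rather than actual Tableau data.

# Sketch: monthly revenue by region -- the kind of aggregate behind a trend dashboard.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (order_date TEXT, region TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('2009-01-15', 'West', 120.0),
        ('2009-01-20', 'East', 80.0),
        ('2009-02-03', 'West', 150.0);
""")

query = """
    SELECT substr(order_date, 1, 7) AS month, region, SUM(amount) AS revenue
    FROM sales
    GROUP BY month, region
    ORDER BY month, region;
"""
for month, region, revenue in conn.execute(query):
    print(month, region, revenue)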
EDUCATION
Bachelor of Science in Information Technology
East Tennessee State University (ETSU), Johnson City, Tennessee