
Senior AWS DevOps Engineer

Location:
Manhattan, NY, 10001
Posted:
April 22, 2024


DENNIS ORSINI

Contact: 680-***-**** Email: ad46yo@r.postjobfree.com

SENIOR AWS/CLOUD ENGINEER & DEVOPS ARCHITECT

PROFILE SUMMARY

•Strategic professional with nearly 10 years of experience as a Senior Cloud Engineer and DevOps Architect, proficient in a wide range of AWS and GCP services.

•Specialized in the strategic design of cloud solutions, applying industry-leading practices to create scalable, resilient, and secure infrastructures across both AWS and GCP. Demonstrates a keen ability to tailor cloud architectures that align with business needs, ensuring cost efficiency and performance optimization.

•Demonstrates extensive expertise in Kubernetes, including cluster management through Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE), configuring pod networks, provisioning dynamic storage, and orchestrating containers for seamless application scalability and availability; proficiently deployed microservices on Kubernetes, ensuring efficient application scaling, management, and deployment.

•Proficient in designing and configuring networking infrastructures, including Virtual Private Cloud (VPC), Transit Gateway, VPN, Network Security Groups (NSG), Route 53 for domain name system (DNS) management, CloudFront for content delivery network (CDN) acceleration, and robust monitoring with CloudWatch.

•Skilled in crafting IAM roles and policies to enforce granular access controls and bolster security across cloud environments (a brief illustrative sketch follows this summary).

•Adept at integrating with Active Directory and AWS Directory Services for centralized identity management solutions.

•Played a key role in the implementation of DevOps methodologies such as Continuous Integration (CI) and Continuous Deployment/Delivery (CD) using popular tools like Jenkins, Ansible, and Puppet.

•Skilled in utilizing build tools such as Maven, Gradle, and Ant for efficient application building.

•Comprehensive expertise in Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, and Pulumi.

•Proficient in managing Auto Scaling Groups for EC2 Instances and Virtual Machines, coupled with expertise in Infrastructure as Code (IaC) services.

•Skillfully deployed applications using AWS Lambda and Google Cloud Functions, ensuring efficient and scalable deployments.

•Well-versed in configuring and deploying to application servers such as Nginx, Apache Tomcat, and J2EE servers.

•Experienced in AWS deployment services like Elastic Beanstalk, OpsWorks, and CloudFormation for efficient and scalable deployments.

•Proficient in scripting with Python, Bash, and Groovy scripting languages, enabling automation and efficient operations.

•Experienced in building automation runners in GitLab to build Docker images.

•Successfully managed the installation and setup of Splunk, Prometheus, Datadog, and Nagios for effective log monitoring and infrastructure management in highly available configurations, and utilized the NPM Registry and Docker Registry for package management.

•In-depth understanding of SDLC methodologies, including Agile/Scrum and Waterfall, with rich experience in full-stack development.

•Proficient in working with Certificate Managers & Authorities, Key Management Services (KMS), and Public Key Infrastructure (PKI) ensuring secure service delivery.

•Rich experience with security practices including IAM & Active Directory, CloudWatch/Azure Monitor, Guard Rails, and CloudTrail.

•Experienced in utilizing version control systems like Git for source code management and optimal branching and merging strategies aligned to the CI/CD pipeline.

•Proficient in working with configuration management tools like Ansible and Puppet for efficient management and configuration of the environment.

•Worked on containerization tools such as Docker and orchestration with Kubernetes, enabling efficient deployment and management of applications.
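
Illustrative sketch only (Python with boto3): a minimal example of the granular IAM role and policy work referenced in the summary above. The role name, bucket ARN, and policy contents are hypothetical placeholders, not artifacts from any listed engagement.

import json
import boto3

iam = boto3.client("iam")

# Trust policy: only EC2 instances may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permission policy: read-only access to a single (hypothetical) bucket.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-app-bucket",
            "arn:aws:s3:::example-app-bucket/*",
        ],
    }],
}

role = iam.create_role(
    RoleName="app-read-only-role",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="app-read-only-role",
    PolicyName="s3-read-only",
    PolicyDocument=json.dumps(read_only_policy),
)
print("Created role:", role["Role"]["Arn"])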

TECHNICAL SKILLS

DevOps and Containerization:

•CI/CD: Jenkins, AWS CodeCommit, CodeDeploy, CodeBuild

•Container Orchestration: Kubernetes (EKS, AKS), Docker, ECS, ECR

•Configuration Management: Ansible

•Version Control: Git

•Infrastructure as Code: Terraform

•Collaboration Tools: Slack, UrbanCode

•Log Management: ELK Stack, Splunk

•Artifact Management: JFrog Artifactory

Cloud Technologies:

•Cloud Platforms: AWS (S3, Lambda, MediaConvert)

•Familiarity with Public Cloud Services

•Experience with AWS services like CloudWatch, CloudTrail, S3

Programming Languages:

•Proficient in Java, Python, JavaScript

•Familiar with Clojure(Script), Kotlin, Go, HTML, CSS, C++, SQL

Scripting and Operating Systems:

•Scripting Languages: Python, Node.js, UNIX Shell Scripting

•Operating Systems: Unix/Linux (Ubuntu, CentOS, Amazon Linux), Windows (MS Server, Windows Server)

Software Development Tools:

•IDEs: Eclipse, Visual Studio Code, IntelliJ IDEA

•Collaboration Tools: Amazon Chime, Zoom, Slack

Networking and Protocols:

•Familiarity with various network protocols including TCP/IP, FTP, SSH, SNMP, DNS, DHCP

•Experience with Cisco Routers/Switches, WAN, LAN, NAS, SAN

Web and Application Servers:

•Application Servers: Apache Tomcat, JBoss, Apache2

•Frameworks: Java Spring

Database Technologies:

•SQL Databases: Microsoft SQL Server, MySQL, PostgreSQL, Amazon RDS

•NoSQL Databases: MongoDB, Cassandra

•Data Warehouse: Amazon Redshift

Machine Learning and AI:

•Familiarity with ML and AI concepts, including TensorFlow, Scikit-learn

•Experience with data analysis libraries like Matplotlib

•Exposure to Web App development and Probability concepts

PROFESSIONAL EXPERIENCE

DevOps Architect

AXA Insurance Co., New York, NY Nov’22-Present

Project Summary: In the project at AXA Insurance Co., I spearheaded efforts to optimize the cloud infrastructure, streamline deployment processes, and ensure smooth application delivery. I led the comprehensive design, configuration, and deployment of AWS infrastructure across multiple applications, with a strong emphasis on high availability, fault tolerance, and auto-scaling, and I implemented automation strategies that enhanced efficiency and reliability, contributing significantly to the project's success.

Deliverables:

•Orchestrated the design, configuration, and deployment of AWS infrastructure for multiple applications, prioritizing high availability, fault tolerance, and auto-scaling.

•Leveraged essential AWS services including EC2, Route53, VPC, S3, RDS, CloudFormation, CloudWatch, SQS, and IAM to ensure robust cloud environments.

•Implemented AWS CodePipeline and proficiently crafted CloudFormation JSON templates within Terraform for seamless infrastructure-as-code deployment, streamlining development workflows and enhancing efficiency.

•Spearheaded the automation of application development and deployment processes through Docker Swarm and Docker Compose, establishing standardized procedures and enhancing deployment reliability.

•Developed CloudFormation Templates (CFT) in both JSON and YAML formats, adhering to the infrastructure-as-code paradigm for building and managing AWS services efficiently (a brief deployment sketch follows this list).

•Engineered staging and production environments using Terraform Templates, ensuring scalability and fault tolerance across multi-tier AWS environments spanning multiple availability zones.

•Led the creation of Dev, Staging, Prod, and DR environments using Terraform scripts, with a keen focus on debugging and troubleshooting to optimize deployment processes.

•Managed the seamless migration of on-premises applications to the cloud, leveraging critical AWS tools such as ELBs and Auto-Scaling policies to enhance scalability, elasticity, and availability.

•Implemented Chef Cookbooks and wrote recipes in Ruby Script to automate infrastructure installation and configuration across environments. Employed Chef alongside Python and AWS CloudFormation Templates for efficient cloud deployment.

•Maintained highly available clustered and standalone server environments using Ansible for scripting and configuration management, refining automation components through Ansible scripts.

•Spearheaded Kubernetes deployment initiatives, developing stateful sets, network policies, dashboards, and Helm charts to streamline cluster management. Crafted OpenShift/Kubernetes templates for various applications like Jenkins, Kafka, Cassandra, and Grafana.

•Designed and implemented Puppet scripts to orchestrate the installation of stack components such as LXC containers, Docker, Apache, Postgres, PHP, Python virtual environments, SonarQube, Nexus 2/3, WildFly/JBoss applications, and Django applications.

•Automated build activities through Maven pom.xml files and Jenkins jobs, administering and engineering Jenkins for seamless infrastructure management and weekly builds.
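
Illustrative sketch only (Python with boto3; not an artifact from the AXA project): how a CloudFormation template like those described above can be deployed programmatically. The stack name, template file, and tag values are hypothetical placeholders.

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Read a (hypothetical) YAML template authored under the IaC paradigm above.
with open("network-stack.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="example-network-stack",       # hypothetical stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],   # permit IAM resources in the template
    Tags=[{"Key": "Environment", "Value": "staging"}],
)

# Block until the stack reaches CREATE_COMPLETE, or raise if creation fails.
cfn.get_waiter("stack_create_complete").wait(StackName="example-network-stack")
print("Stack created")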

Cloud Architect

McKesson Corporation, Irving, TX Sep’20-Oct’22

Project Summary: In the McKesson Corporation project, I played a substantial role in enhancing cloud infrastructure deployment processes and refining project management practices. By implementing best practices and streamlining workflows, I facilitated the company's transition to a more efficient, scalable, and reliable cloud environment.

Deliverables:

•Implemented robust AWS Security Groups to meticulously control traffic to EC2 instances, bolstering the overall security posture of our cloud environment (see the sketch after this list).

•Orchestrated seamless application deployments on AWS, leveraging a suite of services including EC2, Route53, S3, RDS, and IAM. This optimization resulted in faster deployment cycles and improved resource utilization.

•Led the development of fully automated server provisioning, monitoring, and deployment solutions across platforms such as Amazon EC2, Jenkins nodes/agents, and SSH, resulting in a significant reduction in manual intervention and greater operational efficiency.

•Leveraged Groovy scripting and Jira plugins like Script Runner to extend Jira's functionalities, enabling advanced workflows and custom fields to enhance project management capabilities within the organization.

•Installed and configured the Ansible server and clients to automate deployment processes from Jenkins repositories to target environments such as Integration, QA, and Production; this streamlined deployment pipelines and led to faster time-to-market for new features.

•Utilized Ansible playbooks and CI tools like Rundeck and Jenkins to automate infrastructure activities such as Continuous Deployment, Application Server setup, and Stack Monitoring, improving consistency and reliability in our infrastructure operations.

•Developed automation templates for deploying relational and NoSQL databases, including MSSQL, MySQL, Cassandra, and MongoDB, in AWS environments, ensuring efficient database management across different projects.

•Configured AWS Elastic Load Balancers (ELB) for auto-scaling based on application traffic patterns and managed multi-tier and multi-region architectures using AWS CloudFormation, ensuring high availability and scalability of our applications.

•Integrated automated build pipelines with deployment workflows, facilitating seamless upgrades, migrations, and integrations of Jira with other Atlassian applications and external toolsets such as SVN, Artifactory, Jama, and Jenkins.

•Applied industry-standard methodologies like Business Process Flow, Business Process Modeling, Business Analysis, and various testing methodologies to ensure efficient project management practices and successful project outcomes.

•Customized Jira to align with specific project requirements and industry standards, while also providing guidance on best practices and standardization to end-users and leadership ensuring consistency and adherence to company policies.

•Evaluated and provided technical expertise for upgrading existing production systems, enhancing security measures, and optimizing database configurations, contributing to improved system performance, security, and stability.
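
Illustrative sketch only (Python with boto3; hypothetical VPC ID and CIDR range, not values from the McKesson environment): restricting an EC2 security group to inbound HTTPS, as in the security-group hardening bullet at the top of this list.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group in a (hypothetical) VPC.
sg = ec2.create_security_group(
    GroupName="web-https-only",            # hypothetical group name
    Description="Allow inbound HTTPS only",
    VpcId="vpc-0123456789abcdef0",         # hypothetical VPC ID
)

# Permit only TCP 443 from an assumed internal CIDR block.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "internal clients"}],
    }],
)
print("Security group:", sg["GroupId"])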

Lead Cloud/Data Engineer

Shell, Houston, TX Jan’18-Aug’20

Project Summary: In a transformative project at Shell, I played a pivotal role in advancing the organization's digital transformation journey. By leveraging cutting-edge technologies and implementing innovative strategies, I bolstered agility, scalability, and reliability in the company's cloud operations and spearheaded initiatives aimed at optimizing infrastructure, automating processes, and enhancing overall performance.

Deliverables:

•Designed and implemented data pipelines using Amazon Kinesis to ingest, process, and analyze streaming data in real-time, enabling timely insights and actionable intelligence for business decision-making.

•Leveraged RDS and EC2-based databases in the cloud, ensuring seamless operation and data integrity.

•Installed, configured, and maintained the GitHub repository to facilitate efficient version control and collaboration among team members.

•Implemented performance and security alert monitoring using CloudWatch and CloudTrail, enabling real-time insights into system performance and potential security threats, thereby enhancing overall cloud security posture.

•Utilized AWS Glue for data cataloging, ETL (Extract, Transform, Load) processing, and data preparation tasks, ensuring data consistency, quality, and compliance with regulatory standards.

•Implemented data partitioning and indexing strategies in RDS and EC2-based databases to optimize query performance and reduce latency, enhancing the overall efficiency of data retrieval operations.

•Integrated GitHub and Bitbucket with Jenkins using various plugins and scheduled multiple jobs in the build pipeline, streamlining development and deployment processes for faster delivery of software updates and enhancements.

•Oversaw network settings, including Route53, DNS, ELB, IP address, and CIDR configurations, ensuring optimal performance and functionality while minimizing downtime and improving user experience.

•Utilized AWS Services such as Multi-AZ, Read replicas, ECS, and other related services to develop highly available and resilient applications, ensuring maximum uptime and reliability for critical business applications.

•Managed Docker containers on Kubernetes and successfully migrated containerized environments from ECS to Kubernetes Cluster, optimizing resource utilization and scalability.

•Offered various storage solutions, including S3, EBS, EFS, Glacier, and others as required, catering to diverse data storage needs while ensuring data accessibility, durability, and security.

•Deployed applications onto their respective environments using Elastic Beanstalk, simplifying the deployment process and ensuring consistency across different environments.

•Utilized AWS DataSync to seamlessly migrate petabytes of data from on-premises to AWS Cloud, ensuring minimal disruption to ongoing operations while capitalizing on the scalability and durability of cloud storage solutions.

•Managed continuous integration and continuous delivery processes, facilitating rapid and reliable delivery of software updates and enhancements, thereby accelerating time-to-market for new features.

•Resolved issues within Kubernetes clusters, leveraging deep technical expertise and analytical skills to ensure the smooth operation of containerized environments.

•Applied knowledge of Web Services, API Gateways, and application integration development and design to enhance application performance and functionality, delivering superior user experiences.

•Developed and implemented event-driven and scheduled AWS Lambda functions to trigger various AWS resources, automating routine tasks and improving operational efficiency (a brief handler sketch follows).
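
Illustrative sketch only (a generic Python pattern, not code from the Shell project): an event-driven AWS Lambda handler that reacts to S3 object-created notifications and republishes a summary to a hypothetical SNS topic.

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:object-events"  # hypothetical topic


def handler(event, context):
    """Triggered by S3 ObjectCreated notifications."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"].get("size", 0)

        # Forward a compact notification so downstream consumers can react.
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="New object uploaded",
            Message=json.dumps({"bucket": bucket, "key": key, "size": size}),
        )
    return {"processed": len(records)}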

AWS Data/DevOps Engineer

Lockheed Martin Corporation, Bethesda, MD Feb’16-Dec’17

Project Summary: In this project, I played a key role in enhancing the cloud infrastructure and deployment processes. I optimized the company's cloud environment for improved performance, scalability, and reliability. My contributions led to smoother deployment processes, reduced downtime, and increased overall efficiency, ultimately driving the project's success and delivering tangible benefits to the organization.

Deliverables:

•Worked with Linux and AWS support teams to ensure readiness for new product releases and the adoption of emerging technologies, fostering a culture of continuous learning and improvement.

•Collaborated with clients and internal stakeholders to provide expert advice on architectural design considerations, ensuring optimal solutions aligned with business objectives and technological capabilities.

•Designed and built a robust Document Management System on the Cloud using Lambda, Elasticsearch, containers, Python and Java code, S3, and DynamoDB, enhancing document organization and accessibility for the organization.

•Designed and implemented data encryption mechanisms using AWS Key Management Service (KMS) to protect sensitive data at rest and in transit, ensuring compliance with data security and privacy regulations (a brief KMS sketch appears after this list).

•Utilized AWS Glue for schema evolution and versioning, enabling seamless updates to data schemas and structures without disrupting downstream applications or analytics processes.

•Monitored and managed Linux systems in a complex multi-server environment, ensuring their stability, security, and optimal performance to support critical business operations.

•Deployed classic web applications to AWS ECS containers and managed scalable and resilient applications utilizing Instance Group, Autoscaler, HTTP Load balancer, and Autohealing, ensuring high availability and performance under varying workloads.

•Facilitated effective communication between internal teams and external clients using various communication channels such as face-to-face meetings, phone calls, emails, web portals, and intranet platforms, fostering collaboration and transparency.

•Implemented core technologies including Apache/Nginx, MySQL/PostgreSQL, Varnish, Pacemaker, CRM Clustering, Kubernetes, ELK (Elasticsearch, Logstash, Kibana), and Redis, ensuring robustness and scalability of deployed solutions.

•Utilized Ansible/Ansible Tower as a configuration management tool to automate daily tasks, deploy critical applications rapidly, and proactively manage changes, enhancing operational efficiency and reliability.

•Operated and maintained systems running on AWS, deploying built artifacts to the application server using Maven, and integrating Maven builds with Jenkins for streamlined build and deployment processes.

•Applied principles of Infrastructure-as-Code (IaC) and built and maintained an IaC codebase using Puppet, Terraform, and Ansible, enabling consistent and reproducible infrastructure deployments.

•Deployed Dev, QA, and Prod environments using Terraform variables, managed Terraform code with the Git version control system, and defined Terraform modules for Compute and Users to ensure consistency and scalability across environments.

•Automated daily tasks using Bash (Shell) scripts, documented changes in the environment and server configurations, and analyzed error logs and user logs to identify and address issues promptly, ensuring system stability and reliability.

•Installed a multi-node Cassandra cluster, simulated failure scenarios, created keyspaces and tables, and accessed them from the client with Cassandra and Big Data Tech Stack, enabling efficient handling of large-scale data storage and processing requirements.

•Utilized GCP and Azure for a Proof of Concept event registration app, leveraging features such as AppServer Instances, Azure Active Directory, Functions, and CDN, demonstrating the versatility and capabilities of different cloud platforms for potential future deployments.
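
Illustrative sketch only (Python with boto3; hypothetical key alias, not the project's actual key-management code): encrypting and decrypting a small payload with AWS KMS, as referenced in the data-encryption bullet earlier in this list.

import boto3

kms = boto3.client("kms", region_name="us-east-1")
KEY_ID = "alias/example-data-key"   # hypothetical CMK alias

# Encrypt a small secret (KMS Encrypt handles payloads up to 4 KB;
# larger data would normally use envelope encryption with a data key).
ciphertext = kms.encrypt(
    KeyId=KEY_ID,
    Plaintext=b"database-password-placeholder",
)["CiphertextBlob"]

# Decrypt it again; KMS infers the key from the ciphertext metadata.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext == b"database-password-placeholder")   # True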

DevOps Engineer

BlackRock, New York City, NY Mar’15-Feb’16

Project Summary: In a targeted project aimed at modernizing the organization's software development and deployment practices, I played a diligent role in driving the transformation. By implementing cutting-edge technologies and best practices, I streamlined our processes, improving efficiency and agility. My contributions led to a more streamlined development pipeline and faster deployment cycles.

Deliverables:

•Worked with the team and successfully deployed an ASP.NET web application on AWS infrastructure by setting up and configuring IIS (Internet Information Services) and application pools, ensuring seamless operation in a cloud environment.

•Implemented an automated build and deployment process for the application, laying the foundation for a robust continuous integration and continuous deployment (CI/CD) system, enhancing agility and reliability in software delivery.

•Designed and implemented fully automated server build management, monitoring, and deployment using the Chef configuration management tool, enabling consistent and scalable infrastructure provisioning.

•Installed Tomcat instances and managed multiple application configurations by creating Puppet manifest files, facilitating efficient management and scalability of application deployments.

•Implemented various DevOps practices including continuous integration, continuous delivery, continuous testing, and continuous monitoring, fostering collaboration and efficiency across development and operations teams.

•Developed a robust test environment that reduced integration issues and improved code quality, ensuring smoother development and deployment cycles.

•Performed JUnit testing and deployments using multiple Jenkins plugins, integrated builds using ANT and Maven as build tools, and configured Jenkins pipelines with SSH for continuous deployments, optimizing software delivery processes.

•Configured CloudTrail to monitor API activity of users, enhancing security and compliance measures, and managed the release cycle of the product across various environments including Development, QA, UAT, and Production (a brief CloudTrail query sketch follows this list).

•Managed source code repository, build and release configurations/processes, and tools to support daily development, testing, and production builds, ensuring version control and consistency in software releases.

•Managed and analyzed scalable data using AWS RDS (Relational Database Service), ensuring efficient data storage and retrieval for the organization's applications.

•Modified the Software Configuration Management (SCM) database for software lifecycle process flow, user permissions, access, and file attributes in response to user requests, ensuring the accuracy and integrity of SCM processes.

•Provided deployment services to development teams from initial development through production deployments, facilitating smooth and efficient software releases.

•Worked closely with the Release Manager to improve build automation and reduce bottlenecks in the delivery pipeline, redefined processes, and implemented tools for software builds, patch creation, release tracking, and reporting.

•Automated daily tasks using Bash (shell) scripts, documented changes in the environment and each server, and analyzed error logs, user logs, and /var/log messages, ensuring system stability and reliability.

•Administered local and remote servers using SSH daily, and utilized Nginx and Apache Tomcat web servers for application deployment, ensuring optimal performance and availability of deployed applications.
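
Illustrative sketch only (Python with boto3; generic filters, not a production configuration): querying recent CloudTrail events to review user API activity, in the spirit of the CloudTrail monitoring bullet above.

from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Look up console logins over the past day (hypothetical filter choice).
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    MaxResults=50,
)

# Print a compact audit line per event for quick review.
for event in response["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])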

Data Analyst

Protiviti, San Ramon, California Oct’14-Mar’15

As a Data Analyst, I contributed to the comprehensive processing, analysis, and interpretation of data to deliver actionable insights that drive informed decision-making. I gained hands-on experience using Excel to extract, manipulate, and visualize data, ensuring its suitability for reporting to senior leadership.

•Parsed, sanitized, and restructured data from various sources and combined it into the required formats

•Used operational details and intended outcomes to determine appropriate analyses and data requirements

•Conducted analyses, summarized results, and wrote reports and presentations to provide actionable insights to a lay audience

•Provided assessment through quantitative and qualitative analyses

•Evaluated and executed assessment strategies requiring collection metrics related to program objectives

•Used Excel to extract data from CSV and other delimited text formats, restructured it as needed, and updated existing summary information for presentation to senior leadership (a brief pandas sketch of this kind of workflow follows this list)

•Collected and synthesized qualitative vignettes that provide support for data interpretations

•Employed superior written communication to create meaningful stories by combining quantitative (e.g., correlation, trends over time) and qualitative data

•Worked with stakeholders to ensure assessment products have the appropriate focus and provide an accurate representation of the environment
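
Illustrative sketch only (Python with pandas; hypothetical file and column names): the kind of CSV extraction and summary restructuring described above, done here in code rather than Excel.

import pandas as pd

# Load a delimited text export (delimiter and columns are assumptions).
df = pd.read_csv("program_metrics.csv", sep=",", parse_dates=["report_date"])

# Clean and restructure: drop incomplete rows, then summarize by program and month.
summary = (
    df.dropna(subset=["program", "metric_value"])
      .assign(month=lambda d: d["report_date"].dt.to_period("M"))
      .groupby(["program", "month"], as_index=False)["metric_value"]
      .mean()
      .rename(columns={"metric_value": "avg_metric"})
)

# Export an updated summary table for leadership reporting.
summary.to_csv("program_summary.csv", index=False)
print(summary.head())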

ACADEMIC CREDENTIALS & CERTIFICATIONS

•Bachelor of Science (Computer Science) from CUNY Brooklyn College, Brooklyn, NY

•Bootcamp Full-stack Software Development from NYC Tech Talent Pipeline

•JavaScript Algorithms and Data Structure Certificate, FreeCodeCamp


