
DevOps Engineer (Cloud)

Location:
Arlington, TX
Posted:
December 10, 2023


SAI PAVAN K

Sr. Cloud DevOps Engineer

+1-216-***-****

ad1uuk@r.postjobfree.com LinkedIn

PROFESSIONAL SUMMARY

Experienced IT professional with over 7 years of hands-on experience as an Azure Cloud DevOps engineer, possessing a strong command of multi-cloud environments including Microsoft Azure and Amazon Web Services (AWS). Proficient in deploying resources through Infrastructure as Code using tools such as Terraform and CloudFormation. Demonstrated expertise in containerization and orchestration technologies, including Docker and Kubernetes, as well as configuration management tools like Ansible, Chef, and Puppet.

●Extensive experience with Continuous Integration tools (Jenkins, Hudson, Bamboo) to automate software development and deployment processes.

●Experience using Subversion in Linux environments and utilizing Build Forge and Jenkins for enterprise-scale infrastructure configuration and application deployments.

●Experience with security operations, including proactive threat hunting and incident response; implemented active network monitoring with Moloch, which provides a powerful interface for analyzing traffic data and identifying threats.

●Strong experience creating ANT/Maven build scripts with Ansible for deployment, optionally running unit tests; actively involved in project planning, requirement management, release management, and user-interface benchmarking for different platforms.

●Experience in private cloud and hybrid cloud configurations, patterns, and practices in Windows Azure and SQL Azure, and in Azure web and database deployments.

●Good knowledge of Microsoft Azure cloud computing services (PaaS, IaaS) for building, testing, deploying, and managing applications and services through Microsoft's global network of managed data centers.

●Designed, configured, and deployed Microsoft Azure for a multitude of applications utilizing the Azure stack (including MFA, Compute, Web & Mobile, Blobs, Resource Groups, Azure SQL, Cloud Services, and ARM), focusing on high availability, fault tolerance, and auto-scaling (an illustrative provisioning sketch follows this list).

●Expertise in DevOps, release engineering, configuration management, cloud infrastructure, and automation, including Amazon Web Services (AWS), Ant, Maven, Jenkins, Chef, SVN, GitHub, Serena products, Tomcat, Nginx, JBoss, and Linux.

●Experience migrating infrastructure and applications from on-premises to AWS and from cloud to cloud, such as AWS to Microsoft Azure.

●Good knowledge of Bash (shell) and Perl, with exposure to Python scripting widely used for web development, data analysis, and machine learning.

●Extensively worked with change-tracking tools like BMC Remedy, JIRA, and HP Service Center; knowledge of IIS and hands-on experience with WebSphere, JBoss, and WebLogic deployments.

●Experience with ticketing and project management tools like JIRA, Azure DevOps, Bugzilla, OTRS, ServiceNow, and HPQC.

●Hands-on experience working with many AWS services such as EC2, Lambda, CloudWatch schedulers, API Gateway, Auto Scaling, and security groups.

●Experience in using Nexus and Artifactory Repository Managers for Maven and Ant builds which are used for promoting releases from development to production.

●Experience implementing Release/Change Management processes and helping teams adopt them for successful delivery of software products.

●Used scripting languages such as Python, Ruby, Perl, and Bash; configuration management tools Chef, Ansible, and CFEngine; and cloud services AWS and Azure.

●Worked on Docker clustering on Mesos with Marathon and experience with Kubernetes, Mesos, and Docker Swarm.

●Used Jenkins pipelines to drive all microservice builds out to the Docker registry and then deploy to Kubernetes; created and managed pods using Kubernetes (a condensed sketch of this flow follows this list).
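
The bullet on the Azure stack above references provisioning several Azure services together; the following is a minimal Azure CLI sketch of that kind of workflow (resource group, blob storage, and an Azure SQL database). All names, the location, and the SKUs are hypothetical placeholders, and the script assumes an authenticated az session and an SQL_ADMIN_PASSWORD variable in the environment.

    #!/usr/bin/env bash
    # Hypothetical resource names; assumes `az login` has already been run.
    set -euo pipefail

    RG="rg-demo-app"                # placeholder resource group
    LOC="eastus2"
    SA="stdemoapp$RANDOM"           # storage account names must be globally unique
    SQL_SERVER="sql-demo-app-srv"
    SQL_DB="appdb"

    # Resource group as the deployment boundary
    az group create --name "$RG" --location "$LOC"

    # Blob storage for application assets
    az storage account create --name "$SA" --resource-group "$RG" \
      --location "$LOC" --sku Standard_LRS --kind StorageV2
    az storage container create --account-name "$SA" --name app-assets

    # Azure SQL logical server and database (admin password supplied via environment)
    az sql server create --name "$SQL_SERVER" --resource-group "$RG" \
      --location "$LOC" --admin-user sqladmin --admin-password "$SQL_ADMIN_PASSWORD"
    az sql db create --resource-group "$RG" --server "$SQL_SERVER" \
      --name "$SQL_DB" --service-objective S0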
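
The last bullet describes driving microservice builds to a Docker registry and deploying them to Kubernetes from Jenkins; below is a condensed Bash sketch of the steps a Jenkins stage would typically run. The registry, service, namespace, and deployment names are hypothetical, and it assumes the container inside the Deployment is named after the service.

    #!/usr/bin/env bash
    # Placeholder names; normally invoked from a Jenkins pipeline `sh` step.
    set -euo pipefail

    REGISTRY="registry.example.com/platform"   # hypothetical Docker registry
    SERVICE="orders-service"                   # hypothetical microservice
    TAG="${BUILD_NUMBER:-local}"               # Jenkins exposes BUILD_NUMBER
    IMAGE="${REGISTRY}/${SERVICE}:${TAG}"

    # Build and publish the container image
    docker build -t "$IMAGE" .
    docker push "$IMAGE"

    # Roll the new image out to the existing Kubernetes Deployment and wait
    kubectl --namespace prod set image "deployment/${SERVICE}" "${SERVICE}=${IMAGE}"
    kubectl --namespace prod rollout status "deployment/${SERVICE}" --timeout=180s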

TECHNICAL SKILLS:

Cloud: Amazon Web Services, Microsoft Azure

Configuration Management: Chef, SaltStack, Ansible, Puppet, CFEngine

Monitoring Tools: Azure Monitor, Nagios, AppDynamics, Datadog, CloudWatch, Splunk

Build Tools: MSBuild, Ant, Maven, Gradle, Make, Docker, Kubernetes

Operating Systems: Windows, Linux, Unix, Ubuntu, iOS

Version Control Tools: Subversion (SVN), Git, Bitbucket

CI/CD Tools: Jenkins, Azure DevOps, CircleCI

Repository Management: Cloudsmith, JFrog Artifactory, Pulp, Nexus

Programming/Scripting Languages: Shell, Python, Ruby, Groovy, Bash, YAML, JSON, XML, PowerShell

Web/Application Servers: Tomcat, WebLogic, WebSphere, NGINX

Databases: SQL Server, NoSQL databases, MySQL, DB2

IaC Tools: Terraform, CloudFormation, ARM Templates, Pulumi, Azure Bicep

Virtualization Technologies: VMware, Tanzu, Hypervisor, Vagrant, Oracle VM VirtualBox

WORK EXPERIENCE:

Client: Wells Fargo, San Francisco, CA May 2022 – Present

Role: Sr. Cloud DevOps Engineer

Responsibilities:

●Interacted with client teams to understand deployment requests and coordinated with the Development, Database Administration, QA, and IT Operations teams to ensure there were no resource conflicts.

●Led the conception and creation of scalable orchestration architectures with Azure Functions, resulting in streamlined business processes and heightened operational efficiency.

●Assisted and collaborated with cross-functional teams in the migration of an on-premises application to Azure, resulting in increased availability and faster response times for end users.

●Built, managed, and continuously improved the build infrastructure for global software development engineering teams, including implementation of build scripts, continuous integration infrastructure, and deployment tools.

●Implemented Azure DevOps pipelines for continuous integration and delivery of cloud-based applications, modernizing the software development process.

●Led initiatives to optimize operational efficiency and support decision-making within the financial services sector by implementing Azure Custom Vision for image classification.

●Implemented a disaster recovery plan for crucial applications using Azure site recovery and automated the critical process using Terraform scripting.

●Proficient in automating secure and scalable device provisioning using IoT Hub Device Provisioning Service (DPS); skilled in seamless provisioning of numerous devices across different IoT hubs, drawing on the service's quickstarts and tutorials.

●Experienced in deploying models using Deployment APIs within Azure OpenAI Resources, enabling effective model selection for tasks like API calls and text generation, crucial for various natural language processing applications.

●Applied microservice architecture principles, resulting in expedited development cycles, enhanced scalability, simplified maintenance, and an overall strengthening of cloud infrastructure reliability.

●Designed Terraform and Cloud Build pipelines for infrastructure as code, standardizing deployments of Google Kubernetes Engine clusters, BigQuery datasets, and Virtual Private Networks.

●Orchestrated the design, development, and deployment of Azure Resource Manager templates for efficient provisioning and management of cloud resources (an ARM deployment sketch follows this list).

●Worked with Azure DevOps services such as Azure Repos, Azure Boards, and Azure Test Plans to plan work, collaborate on code development, and build and deploy applications.

●Applied DevSecOps within an agile framework, augmenting the speed of application delivery with built-in security so that releases ship quickly without compromising protection.

●Executed the implementation and enforcement of security best practices for Azure Virtual Machines, involving network segmentation, access controls, and encryption, in order to fortify defenses against unauthorized access and mitigate the risk of data breaches.

●Integrated Azure Databricks with Azure Data Lake Storage, enabling the creation of efficient data pipelines, and utilized Databricks notebooks and Apache Spark to perform data transformations.

●Experience crafting and executing event-driven architectures with Azure Functions, triggered by events such as HTTP requests, Azure Blob Storage events, Azure Service Bus messages, and scheduled timers.

●Working experience with Terraform to automate provisioning and lifecycle management of cloud resources, increasing productivity by automating tasks and eliminating manual configuration (an illustrative Terraform workflow sketch follows this list).

●Orchestrated the integration of Okta identity and access management solutions into Azure environments, resulting in the reinforcement of security measures and access controls.

●Implemented robust authentication and authorization mechanisms for Azure Functions, prioritizing secure access and data protection by leveraging Azure Active Directory, OAuth, and custom authentication providers.

●Experienced in using Azure Kubernetes Service (AKS) to run production-grade Kubernetes, allowing enterprises to reliably deploy and run containerized workloads across private and public clouds.

●Implemented DevSecOps practices to integrate security into the software development lifecycle (SDLC), ensuring secure and resilient software delivery, and maintained security automation tools and scripts such as static code analysis, vulnerability scanning, and security testing frameworks.

●Integrated Azure Log Analytics with Azure Automation, Azure Logic Apps, and Power Automate to automate tasks, resulting in a substantial enhancement of operational efficiency within the Azure environment.

●Implemented Helm Secrets to securely manage sensitive configuration data, such as passwords and API keys, ensuring the protection of this critical information within AKS.

●Demonstrated expertise in seamlessly integrating various Azure services, such as Azure Functions, Azure Logic Apps, Azure App Service, Azure Storage, and Azure SQL Database, to create comprehensive and interconnected solutions.

●Leveraged Infrastructure as Code principles to define and manage container infrastructure using Docker Compose files and Kubernetes YAML manifests.

●Experience with container-based deployments using Docker, working with Docker images, Docker Hub, Docker registries, and Kubernetes.

●Understanding of Docker networking concepts, including creating and managing Docker networks for specific use cases so that containers can communicate with each other and the outside world (see the networking sketch after this list).

●Designed and executed comprehensive strategies for migrating on-premises databases to AWS, ensuring minimal downtime, data integrity, and optimal performance.

●Worked on setting up Kubernetes dashboards with AAF and kubeconfig; created a private cloud using Kubernetes with complete control and security over environments supporting DEV, TEST, and PROD.

●Experience setting up build and deployment automation for Terraform scripts using Jenkins, and documented all post-deployment issues in the Post Deployment Issue Log.

●Developed ETL pipelines to efficiently load and transform data into Amazon Redshift from various sources, including Amazon S3, RDS databases, and external APIs.

●Hands on experience in customizing Splunk dashboards, visualizations, configurations, reports, Indexers and search capabilities using customized Splunk queries.

●Expertise in building object-oriented applications using C++ and Java, and writing shell and Perl scripts on UNIX to perform a wide range of tasks such as managing files, manipulating text, and executing jobs.
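
For the ARM template bullet above, this is a minimal sketch of validating and deploying a template at resource-group scope with the Azure CLI; the resource group, template, and parameter file names are hypothetical.

    #!/usr/bin/env bash
    # Placeholder resource group and file names; assumes an authenticated az session.
    set -euo pipefail

    RG="rg-demo-app"

    # Validate before deploying
    az deployment group validate --resource-group "$RG" \
      --template-file azuredeploy.json --parameters @azuredeploy.parameters.json

    # Deploy with a timestamped deployment name for traceability
    az deployment group create --resource-group "$RG" \
      --name "app-$(date +%Y%m%d%H%M%S)" \
      --template-file azuredeploy.json --parameters @azuredeploy.parameters.json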
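
The Terraform bullets above describe automating provisioning and lifecycle management through a pipeline; the following is an illustrative Bash wrapper of that workflow as it might run from a Jenkins job. The configuration directory, workspace, and var-file names are placeholders, not an actual project layout.

    #!/usr/bin/env bash
    # Placeholder paths and names; assumes Terraform is installed and cloud
    # credentials are provided by the CI environment.
    set -euo pipefail

    TF_DIR="infra/azure"          # hypothetical Terraform configuration directory
    WORKSPACE="prod"
    VAR_FILE="prod.tfvars"

    cd "$TF_DIR"
    terraform fmt -check                     # fail fast on unformatted code
    terraform init -input=false
    terraform workspace select "$WORKSPACE" || terraform workspace new "$WORKSPACE"
    terraform validate
    terraform plan -input=false -var-file="$VAR_FILE" -out=tfplan
    terraform apply -input=false tfplan      # apply exactly what was planned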
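
As a companion to the Docker networking bullet above, this short sketch creates a user-defined bridge network so containers can resolve each other by name; the network and container names and images are arbitrary examples.

    #!/usr/bin/env bash
    # Example names/images only.
    set -euo pipefail

    docker network create --driver bridge app-net

    # "api" is reachable from other containers on app-net by its container name
    docker run -d --name api --network app-net nginx:alpine

    # A throwaway client container reaches the backend via the network's built-in DNS
    docker run --rm --network app-net alpine:3 \
      wget -qO- http://api/ >/dev/null && echo "api reachable by name over app-net"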

Environment: Jenkins, Azure, Bamboo, Artifactory, Terraform, Git, Python, Maven, Docker, Kubernetes, Splunk, AWS.

Client: JP Morgan, Austin, TX Sept 2019 – Apr 2022

Role: DevSecOps Engineer

Responsibilities:

●Led a team in designing and executing a hybrid cloud solution to seamlessly combine the client's existing on-premises infrastructure with cloud-based resources.

●Implemented CI/CD pipelines on AWS, leveraging AWS DevOps tools such as AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy.

●Designed and implemented Amazon Redshift clusters, factoring in data distribution styles, sort keys, and compression methods to optimize query performance and storage utilization.

●Integrated IAM with AWS Organizations, establishing a centralized identity and access management framework across multiple AWS accounts, streamlining user provisioning and policy enforcement.

●Designed and deployed Amazon EKS clusters spanning multiple regions and availability zones to bolster high availability and disaster recovery capabilities.

●Orchestrated the delivery of specific application component versions to target environments using Jenkins. Tracked inventory and set alerts for server capacity constraints.

●Implemented encryption-at-rest using AWS Key Management Service (KMS) to protect sensitive data stored in Amazon S3, Amazon EBS, and databases like Amazon RDS, ensuring data confidentiality even in the event of unauthorized access (an SSE-KMS upload sketch follows this list).

●Executed comprehensive machine learning pipelines within SageMaker, encompassing activities from data pre-processing and model training to hyperparameter optimization and model deployment.

●Built end-to-end serverless ETL pipelines using AWS Glue, AWS Lambda, and Amazon S3, automating data extraction, transformation, and loading for improved data quality and analytics readiness.

●Designed and executed IAM strategies aligned with the principle of least privilege, constructing granular permissions and policies to ensure the robust security of AWS resources (a scoped-policy sketch follows this list).

●Designed and managed Amazon VPCs to create isolated network environments, implementing subnets, security groups, and Network ACLs for enhanced security.

●Collaborated with DevOps, development, and security teams to architect and maintain AWS-backed Kubernetes solutions, fostering agile and efficient cross-functional workflows.

●Implemented robust content security measures by seamlessly integrating Amazon CloudFront with SSL/TLS certificates.

●Developed Bash automation scripts to streamline infrastructure management tasks such as provisioning instances, updating AMIs, and managing backups, reducing manual overhead.

●Established custom CloudWatch metrics and dashboards to monitor cloud resource utilization, performance, and key business metrics (see the metric-and-alarm sketch after this list).

●Demonstrated expertise in utilizing Apigee analytics dashboards to analyze API traffic patterns, latency metrics, and errors. Identified optimizations to enhance developer experience.

●Integrated Kubernetes deployments into AWS CodePipeline and CodeDeploy, orchestrating automated application delivery that ensured consistent rollouts and minimized downtime.

●Leveraged deep understanding of Azure App Service to design, deploy, and manage web applications, ensuring optimal performance, scalability, and availability.

●Designed and enforced Kubernetes Pod Security Policies to enhance containerized workload security by defining and controlling security settings, effectively mitigating risks and strengthening the security posture.

●Integrated Amazon Redshift seamlessly with business intelligence tools like Tableau and Power BI, enabling end-users to create meaningful visualizations and reports.

●Utilized EC2 to deploy scalable virtual server fleets and ELBs for high availability. Implemented granular IAM access policies and automated scaling with optimized ASGs.

●Instrumented EC2, RDS, and Lambda resources with custom CloudWatch metrics to monitor real-time usage patterns and trigger alarms.

●Implemented cluster scaling strategies to effectively manage dynamic workloads and skillfully configured Auto Scaling groups for Amazon EKS nodes.

●Demonstrated a high level of proficiency in Kubernetes orchestration, expertly deploying, scaling, and managing containerized applications.

●Showcased adaptability by using Terraform to provision resources across varied AWS regions and alternate cloud providers.

●Developed Ansible playbooks for automated application deployment and configuration, fostering collaboration between development, operations, and security teams.

●Employed Nagios to monitor build statuses and gain insights into code lines, branching, merging, integration, and versioning concepts.
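
For the KMS encryption-at-rest bullet above, this is a small sketch of uploading an object to S3 with SSE-KMS and then verifying the encryption setting; the bucket name, object key, and KMS key alias are placeholders.

    #!/usr/bin/env bash
    # Placeholder bucket/key/alias; assumes AWS CLI credentials with access to both.
    set -euo pipefail

    BUCKET="demo-secure-bucket"
    KMS_KEY_ALIAS="alias/app-data"     # hypothetical customer-managed key

    # Upload with server-side encryption under the KMS key
    aws s3 cp report.csv "s3://${BUCKET}/reports/report.csv" \
      --sse aws:kms --sse-kms-key-id "$KMS_KEY_ALIAS"

    # Confirm the object's encryption and key
    aws s3api head-object --bucket "$BUCKET" --key reports/report.csv \
      --query '{SSE: ServerSideEncryption, KeyId: SSEKMSKeyId}'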
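
As a companion to the least-privilege IAM bullet above, this sketch creates a narrowly scoped read-only policy for a single S3 prefix and attaches it to a role; the bucket, prefix, policy, and role names are all hypothetical.

    #!/usr/bin/env bash
    # Placeholder names; assumes permissions to manage IAM.
    set -euo pipefail

    POLICY_DOC='{
      "Version": "2012-10-17",
      "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetObject"],
         "Resource": "arn:aws:s3:::demo-secure-bucket/reports/*"},
        {"Effect": "Allow",
         "Action": ["s3:ListBucket"],
         "Resource": "arn:aws:s3:::demo-secure-bucket",
         "Condition": {"StringLike": {"s3:prefix": ["reports/*"]}}}
      ]
    }'

    # Create the policy and capture its ARN
    POLICY_ARN=$(aws iam create-policy --policy-name ReportsReadOnly \
      --policy-document "$POLICY_DOC" \
      --query 'Policy.Arn' --output text)

    # Attach it to the consuming role (hypothetical role name)
    aws iam attach-role-policy --role-name reports-consumer-role \
      --policy-arn "$POLICY_ARN"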
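
For the custom CloudWatch metrics bullet above, this sketch publishes a custom data point and creates an alarm on it; the namespace, metric, dimension, threshold, and SNS topic ARN are invented for illustration.

    #!/usr/bin/env bash
    # Placeholder namespace/metric/topic; assumes AWS CLI credentials.
    set -euo pipefail

    # Push one data point (e.g. from a cron job or application hook)
    aws cloudwatch put-metric-data --namespace "Demo/Payments" \
      --metric-name FailedSettlements --unit Count --value 3 \
      --dimensions Environment=prod

    # Alarm when the 5-minute sum crosses a threshold, notifying an SNS topic
    aws cloudwatch put-metric-alarm --alarm-name payments-failed-settlements \
      --namespace "Demo/Payments" --metric-name FailedSettlements \
      --dimensions Name=Environment,Value=prod \
      --statistic Sum --period 300 --evaluation-periods 1 \
      --threshold 10 --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts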

Environment: AWS, Jenkins, Chef, Puppet, Docker, Terraform, Ansible, Git, Nagios, Kubernetes, Apigee.

Client: Kone Corporations, Chennai, India Nov 2017 – Aug 2019

Role: Build & Release Engineer

Responsibilities:

●Designed and configured AWS CodePipeline workflows, integrating technologies such as Docker containers, Jenkins, and Terraform to automate the end-to-end build, test, and deployment processes.

●Created and maintained Continuous Build and Continuous Integration environments in scrum and agile projects.

●Worked on building and deploying Java code through Hudson to automate builds and deployments, relying on its plugin ecosystem for functionality and compatibility with the technologies and tools in use.

●Experienced in seamlessly integrating custom automation scripts, developed in Bash and Python, with AWS Lambda to optimize operational efficiency.

●Created and modified uDeploy workflow configurations and templates with the COT team, automated the build and deployment process using uDeploy, created uDeploy templates for components and applications, and onboarded around 200 apps into uDeploy to achieve continuous integration.

●Understanding of Kubernetes container runtime options, including Docker and CRI-O, and ability to configure and manage the runtime environment for containerized applications.

●Knowledge of Kubernetes cluster management, including using tools like kubeadm and kops to provision and manage Kubernetes clusters (a kubeadm bootstrap sketch follows this list).

●Utilized AWS Parameter Store and AWS Secrets Manager to ensure secure storage and effective management of sensitive configuration data and confidential credentials.

●Established a site-to-site VPN connection between the data center and AWS, and performed troubleshooting and monitoring of Linux servers on AWS using Zabbix and Splunk.

●Responsible for monitoring AWS resources using CloudWatch and application resources using Nagios, and integrated an Artifactory repository server to handle multiple external and internal project binaries.

●Experience with build automation using Hudson, Artifactory, and Gradle as an active part of the DevOps team; developed Puppet modules to automate IaaS on both Windows and Linux (including SQL Server, Relic, etc.), with Artifactory for artifact management and HPE/VMware as the cloud platform.

●Worked on configuring baselines, branches, and merges in SVN, and on automation processes using shell and batch scripts. Maintained the overall SVN architecture as an admin, including setting up the branching process, creating user accounts, and maintaining user access across the organization (see the SVN branching sketch after this list).

●Worked with System Administrators to upgrade multiple environments for various application releases including setup/configuration of Jboss Clusters across Linux (Ubuntu) platforms.

●Experience with Chef Solo and ability to use it for local infrastructure configuration management, without requiring a Chef Server.

●Automated patch management for EC2 instances utilizing AWS Systems Manager (SSM), proactively applying critical security patches to bolster system security (a patch-run sketch follows this list).

●Strong knowledge of AWS database services such as RDS and Aurora, and experience deploying and managing applications on AWS using DevOps tools such as CodePipeline and CodeDeploy.

●Worked on configuring Puppet Master Servers and installing Puppet client software on Linux servers. Deployed Puppet, Puppet Dashboard, and Puppet DB for configuration management to existing infrastructure.

●Developed ANT and Maven build scripts for maintaining test automation builds and a Java-based library to read test data from XML and properties files using JUnit and load it into Selenium.

●Worked closely with Development, QA, and other teams to ensure automated test efforts were tightly integrated with the build system, and assisted in fixing errors during deployments and builds.

●Troubleshot network, memory, CPU, swap, and file-system issues, as well as TCP/IP, NFS, DNS, and SMTP, on Linux servers.

●Worked on PROD releases every fortnight and worked closely with the DEV and DB support teams to fix issues that occurred during deployment. Created and managed Change Requests for all non-prod environments, production releases, and patches.
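
For the kubeadm bullet above, this is a minimal sketch of bootstrapping a single control-plane cluster; the pod CIDR is an example value and the CNI manifest path is a placeholder for whichever network plugin is chosen. It assumes the node already has kubeadm, kubelet, and a container runtime installed, and is run as root.

    #!/usr/bin/env bash
    # Example values only; run on the intended control-plane node as root.
    set -euo pipefail

    # Initialise the control plane (pod CIDR must match the CNI you install)
    kubeadm init --pod-network-cidr=10.244.0.0/16

    # Give the current user a kubeconfig for kubectl
    mkdir -p "$HOME/.kube"
    cp /etc/kubernetes/admin.conf "$HOME/.kube/config"

    # Install a CNI plugin of your choice (manifest path is a placeholder)
    kubectl apply -f cni-plugin.yaml

    # Print the join command to run on each worker node
    kubeadm token create --print-join-command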
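
As a companion to the SVN branching bullet above, this sketch cuts a release branch from trunk with a cheap server-side copy and later merges it back; the repository URL and branch name are hypothetical.

    #!/usr/bin/env bash
    # Placeholder repository URL and branch name.
    set -euo pipefail

    REPO="https://svn.example.com/repos/app"

    # Cut a release branch from trunk (server-side copy, no working copy needed)
    svn copy "$REPO/trunk" "$REPO/branches/release-1.4" \
      -m "Create release-1.4 branch from trunk"

    # Later: merge the release branch back into a trunk working copy
    svn checkout "$REPO/trunk" trunk-wc
    cd trunk-wc
    svn merge "$REPO/branches/release-1.4"
    svn commit -m "Merge release-1.4 back into trunk"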
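
For the SSM patch management bullet above, this is a one-command sketch that runs the AWS-RunPatchBaseline document against instances selected by a tag; the tag key/value and the S3 bucket for command output are placeholders.

    #!/usr/bin/env bash
    # Placeholder tag and bucket; assumes instances are SSM-managed.
    set -euo pipefail

    aws ssm send-command \
      --document-name "AWS-RunPatchBaseline" \
      --targets "Key=tag:PatchGroup,Values=prod-web" \
      --parameters 'Operation=Install' \
      --max-concurrency "25%" --max-errors "1" \
      --output-s3-bucket-name demo-ssm-command-logs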

Environment: Linux/Unix (Red Hat, Ubuntu), Shell Scripting, SVN, Maven, ANT, Java/J2EE, Jenkins, Puppet, AWS, JUnit, Jira, uDeploy.

Client: Value Labs, Hyderabad, India Jan 2016 – Sept 2017

Role: Linux Systems Engineer/Administrator

Responsibilities:

●Experience in creating and managing user and group accounts, passwords, permissions, disk space allocations, and process monitoring in CentOS and Red Hat Linux (see the account-management sketch after this list).

●Installation, configuration, maintenance, and support of Red Hat Linux 4.0/5.0 and Solaris 7/8/9; installed and configured Samba for heterogeneous platforms; installed, configured, and maintained local and network-based printers.

●Experience with Ansible roles, with the ability to create and reuse roles that encapsulate infrastructure tasks and configurations using Ansible modules, and to create custom Ansible modules to automate specific infrastructure tasks or interact with APIs.

●Proficient in troubleshooting Ansible issues and debugging Ansible playbooks, roles, and modules; understanding of task delegation and the ability to delegate tasks to specific hosts or groups based on inventory or tags.

●Familiarity with Ansible integrations with other tools such as Jenkins, Git, and Slack, and ability to integrate Ansible automation with other DevOps tools. Knowledge of Git branching strategies and ability to create and manage Git branches for feature development, bug fixes, and hotfixes.

●Designed, orchestrated, and upheld Linux servers, fostering peak performance and robust security by implementing system tuning, fine-tuning kernel parameters, and proficiently managing firewalls.

●Expertise in scripting languages such as Groovy, Shell, Python, and Ruby for automation and customization of Jenkins. Experience with Jenkins security, including configuring access control and implementing security best practices; able to troubleshoot and debug issues in Jenkins, including analyzing logs and identifying root causes.

●Understanding of software development life cycle (SDLC) processes and agile methodologies, and ability to tailor Jenkins workflows accordingly.

●Worked with netstat, prstat, and iostat monitoring commands. Implemented file sharing on the network by configuring NFS on the systems to share essential resources (an NFS export/mount sketch follows this list).

●Applied robust security hardening practices, encompassing SELinux policy enforcement, firewall rule management, and routine system patching, to fortify and protect critical systems against potential vulnerabilities.

●Installed Oracle patches and performed troubleshooting; created and modified application-related objects; and created profiles, users, and roles while maintaining system security.
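
For the account-management bullet above, this is a small sketch of creating a group and user with password ageing and a home-directory disk quota; the names and limits are placeholders, and it assumes a quota-enabled filesystem and root privileges.

    #!/usr/bin/env bash
    # Placeholder user/group/limits; run as root on a quota-enabled /home.
    set -euo pipefail

    groupadd developers
    useradd -m -s /bin/bash -G developers jdoe

    # Force a password change at first login, then expire passwords every 90 days
    passwd -e jdoe
    chage -M 90 -W 7 jdoe

    # Soft/hard block limits (in KB) and no inode limit on /home
    setquota -u jdoe 5000000 5500000 0 0 /home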
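
For the NFS bullet above, a brief sketch of exporting a directory from a server and mounting it from a client; the subnet, paths, export options, and hostname are example values, and the server half assumes a Red Hat-style nfs-server service.

    #!/usr/bin/env bash
    # Example paths/subnet/hostname; run the server half as root on the NFS server.
    set -euo pipefail

    # --- on the NFS server ---
    mkdir -p /srv/shared
    echo "/srv/shared 10.0.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
    exportfs -ra                          # re-read /etc/exports
    systemctl enable --now nfs-server

    # --- on a client ---
    mkdir -p /mnt/shared
    mount -t nfs nfs-server.example.com:/srv/shared /mnt/shared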

Environment: Linux, Oracle 10g, netstat, NOC, Virtual Machines, Ansible, Jenkins, Git, APIs, Python, Groovy.


