GOPI REDDY
**********.****@*****.***
DEVOPS ENGINEER
PROFESSIONAL SUMMARY:
DevOps Engineer with 5+ years of hands-on experience delivering high-quality, production-grade code and infrastructure. Skilled in debugging complex system issues, reviewing code for quality, and applying best practices to optimize CI/CD pipelines and infrastructure. Currently exploring ways to apply software engineering knowledge to support AI systems through expert feedback and training.
Experience across Azure and AWS platforms, specializing in infrastructure automation, CI/CD pipeline development, containerization, and cloud-native deployment strategies.
Proficient in infrastructure provisioning using Terraform, with deep expertise in deploying and managing cloud resources such as VMs, VNETs, NSGs, Load Balancers, Firewalls, AKS, ECS, and F5 Load Balancers (BYOL) in Azure and AWS.
Skilled in configuring and managing CI/CD pipelines using Azure DevOps, Jenkins, and GitHub Actions, enabling automated build, test, and deployment workflows for .NET and Java-based applications.
Expertise in containerization and orchestration using Docker, Kubernetes (AKS/EKS), and Helm, enabling scalable, resilient, and secure microservices deployment in production.
Experienced in monitoring and alerting using tools such as Azure Monitor, Prometheus, Grafana, CloudWatch, and Splunk, with a focus on real-time observability and incident response.
Developed and maintained automation scripts using Python, Bash, and PowerShell, supporting tasks like deployment validation, infrastructure configuration, log parsing, and MLOps workflow integration (a brief illustrative sketch follows this summary).
Implemented DevSecOps practices by integrating SAST/DAST tools into CI/CD pipelines, and managing secrets and access using Azure Key Vault, IAM, and RBAC for secure cloud environments.
Experience with NoSQL databases like Cosmos DB and DynamoDB, integrating them into cloud-native apps to support scalable, low-latency data access.
Configured automated backup policies and implemented disaster recovery (DR) procedures for cloud resources and critical application data using Azure Backup and S3 lifecycle policies, and participated in DR drills to validate recovery time objectives (RTOs) and ensure business continuity.
Strong collaboration, debugging, and troubleshooting skills demonstrated through cross-functional teamwork, Root Cause Analysis (RCA) participation, and involvement in Blue/Green and Canary deployments.
Proficient in Agile methodologies, with extensive use of Jira, Confluence, and ServiceNow for sprint planning, documentation, change requests, and deployment tracking across enterprise projects.
Solid understanding of the SDLC, with hands-on experience in Unix/Linux system administration, including VMware-based provisioning, configuration, patching, and monitoring, as well as managing backend services and integrating with RESTful APIs across development, test, and production environments.
Implemented GitOps workflows using ArgoCD to manage Kubernetes deployments, enabling declarative infrastructure, automated sync, and rollback capabilities; maintained version-controlled Helm charts and promoted consistent, auditable deployment practices across environments.
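A minimal sketch of the kind of Python log-parsing automation referenced above; the log path, error pattern, and threshold are hypothetical placeholders for illustration, not values from an actual project.

```python
#!/usr/bin/env python3
"""Illustrative log-parsing check; paths, patterns, and thresholds are assumed."""
import re
import sys
from pathlib import Path

LOG_FILE = Path("/var/log/myapp/app.log")    # hypothetical application log
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL)\b")
MAX_ERRORS = 10                              # arbitrary alert threshold


def count_errors(log_path: Path) -> int:
    """Count lines matching the error pattern in the given log file."""
    if not log_path.exists():
        print(f"log file not found: {log_path}", file=sys.stderr)
        return 0
    with log_path.open(encoding="utf-8", errors="replace") as handle:
        return sum(1 for line in handle if ERROR_PATTERN.search(line))


if __name__ == "__main__":
    errors = count_errors(LOG_FILE)
    print(f"{errors} error lines found in {LOG_FILE}")
    # A non-zero exit lets a pipeline step fail or alert on high error rates.
    sys.exit(1 if errors > MAX_ERRORS else 0)
```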
EDUCATION
Master’s in Computer and Information Sciences from the University of South Dakota
TECHNICAL SKILLS
Cloud Platforms
AWS (EC2, S3, EKS, ECS, IAM, CloudFormation), Azure (VMs, IAM, Key Vault, Azure DevOps, VNet, Firewall), GCP
Containerization & Orchestration
Docker, Kubernetes, Helm, Amazon EKS
Configuration Management & IaC
Ansible, Terraform, AWS CloudFormation
CI/CD Tools
Azure DevOps, GitHub Actions, Jenkins
Source Code Management
Git, GitHub, GitLab, Azure Repos
Monitoring & Logging
ELK Stack (Elasticsearch, Logstash, Kibana), Azure Monitor, Prometheus, Grafana, Splunk, CloudWatch
Scripting & Programming Languages
Shell Scripting (Bash), Python, Groovy, PowerShell, JSON, YAML, Java, C
Build & Deployment Tools
Maven, Jenkins, SonarQube
Operating Systems
Linux (RedHat, CentOS, Ubuntu), Windows (7/8/10/Server), VMware
Databases
MySQL, MongoDB, DynamoDB, RDS
PROFESSIONAL EXPERIENCE
Magellan Health, Frisco, TX Jun 2023 - Present
DevOps Engineer
Responsibilities:
Provisioned Azure Resource Groups, Virtual Machines, and Storage Accounts using modular Terraform code, leveraging reusable variables, remote backends, and environment-specific workspaces in Azure DevOps to ensure consistent, version-controlled and scalable infrastructure deployments across staging and production environments.
Provisioned and configured VNets, Subnets, and Network Security Groups (NSGs) using Terraform, implementing secure, segmented network architectures by defining dynamic IP addressing, custom route tables, and environment-specific variables to support scalable and isolated cloud environments.
Deployed advanced networking services including Azure Firewalls, Application Gateways, and Azure Front Door using Terraform resource modules, enabling centralized traffic management, web application firewall (WAF) policies, and secure ingress/egress routing to support enterprise application deployments.
Collaborated with the network engineering team to provision and migrate F5 Load Balancer deployments from PAYG to BYOL, customizing Terraform configurations and securely managing licensing variables to ensure minimal downtime and uninterrupted service availability.
Deployed and configured Azure Load Balancers, including an SFTP load balancer, to ensure high availability and uninterrupted access to virtual machines, and implemented health probes, backend pools, and load balancing rules using Terraform to distribute traffic efficiently and maintain service continuity.
Managed Azure IAM to enforce Multi-Factor Authentication (MFA) and implemented RBAC across subscriptions and resource groups; created custom roles to resolve access issues in deployments and configured Key Vault policies to securely manage secrets, keys, and certificates with strict access controls.
Implemented Azure Active Directory (AAD) authentication for Single Sign-On (SSO) to strengthen access control, and automated cloud resource provisioning for containerized applications using Azure CLI, improving deployment efficiency and security.
Diagnosed and resolved issues in Kubernetes deployments by analyzing kubectl logs, pod events, and cluster metrics, performed controlled rollbacks, and coordinated with CI/CD systems to ensure stable release recovery after failed or misconfigured deployments.
Created and managed Helm charts for deploying microservices to Azure Kubernetes Service (AKS), enabling repeatable and version-controlled deployments through templated configurations and integrated the workflows with GitHub Actions to automate deployments in production environments.
Dockerized legacy and modern applications by creating and managing optimized Docker images and deployed them as containers in Azure Kubernetes Service (AKS) pods, enabling scalable, resilient, and cloud-native applications with streamlined deployment workflows and efficient resource utilization.
Participated in solution design and architectural blueprinting for microservice-based deployments on AKS, incorporating high availability, autoscaling, and observability best practices.
Designed and implemented CI/CD pipelines using GitHub Actions for C#/.NET-based applications, leveraging GitHub as the source code repository to automate the build, testing, and deployment of REST APIs, streamlining infrastructure automation and facilitating seamless service integration.
Managed Git repos by implementing effective branching strategies, resolving merge conflicts, and contributing to shared codebases by reviewing PRs, applying linting standards, and integrating with Azure DevOps to enforce code quality and streamline CI/CD processes.
Automated infrastructure configuration and centralized management using Ansible by developing reusable playbooks to manage Linux package installations and service configurations, reducing manual effort, minimizing configuration drift, and ensuring consistency across production environments.
Utilized Azure OpenAI and GitHub Copilot to accelerate DevOps automation by generating and optimizing YAML configurations for CI/CD pipelines and supported Blue/Green deployments with minimal downtime and fast rollback in Azure environments.
Managed Palo Alto Next-Gen Firewalls to secure cloud traffic, set up custom security rules, and controlled access across Azure cloud environments to support Zero Trust architecture.
Collaborated with AI platform teams to integrate Azure OpenAI capabilities into CI/CD workflows, enabling GenAI-powered automation for YAML pipeline generation and validation.
Utilized Python for developing components within MLOps workflows and for scripting automation tasks such as infrastructure provisioning, deployment validation, and log analysis, enhancing operational efficiency and reducing manual effort across environments (see the illustrative sketch at the end of this section).
Utilized Azure Cosmos DB as a scalable NoSQL database to support high-performance, low-latency data access, and integrated with ServiceNow for managing incident tracking, change requests, and deployment approvals, ensuring operational continuity and compliance across Azure environments.
Monitored application and infrastructure health using Azure Monitor, Prometheus, and Grafana, configuring dashboards and alerts to track key metrics, and actively participated in troubleshooting and debugging production issues by analyzing logs to ensure system stability and uptime.
Utilized Jira for agile tracking and Confluence for documenting deployment processes and runbooks; configured Azure Traffic Manager for intelligent traffic routing and high availability across multiple regions.
Environment Tools: Terraform, Azure DevOps, GitHub Actions, Docker, Kubernetes (AKS), Helm, Azure Monitor, Prometheus, Grafana, Ansible, Python, PowerShell, Bash, Azure Active Directory, Key Vault, Jira, Confluence, ServiceNow
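A minimal Python sketch of the kind of post-deployment validation described in the bullets above; the health-check URL, retry budget, and timeouts are assumptions for illustration, not values from the actual pipelines.

```python
#!/usr/bin/env python3
"""Post-deployment health-check sketch; the URL and limits are assumed placeholders."""
import sys
import time
import urllib.error
import urllib.request

HEALTH_URL = "https://myapp.example.com/healthz"  # hypothetical service endpoint
RETRIES = 10                                      # assumed retry budget
DELAY_SECONDS = 15                                # assumed wait between attempts


def service_is_healthy(url: str) -> bool:
    """Return True if the endpoint answers HTTP 200 within a short timeout."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    for attempt in range(1, RETRIES + 1):
        if service_is_healthy(HEALTH_URL):
            print(f"healthy after {attempt} attempt(s)")
            sys.exit(0)
        print(f"attempt {attempt}/{RETRIES} failed; retrying in {DELAY_SECONDS}s")
        time.sleep(DELAY_SECONDS)
    # A non-zero exit signals the CI/CD stage to halt promotion or trigger rollback.
    sys.exit(1)
```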
UST Global, India Jun 2021 – Jul 2022
DevOps Engineer
Responsibilities:
Automated infrastructure provisioning on AWS using Terraform, deploying resources like EC2, S3, Route 53, RDS, KMS, Security Groups, Elastic Load Balancers, and custom VPCs with subnets, ensuring consistent, repeatable, and scalable environment setups across multiple environments.
Supported application hosting and deployment on AWS infrastructure following SRE principles, focusing on high availability, resilience and network security by utilizing firewalls, custom VPC configurations, auto-scaling and routing mechanisms to ensure scalable system performance.
Configured AWS IAM roles and user permissions to enforce least-privilege access across environments and deployed EC2 instances using custom AMIs and attached EBS volumes for persistent storage, supporting scalable infrastructure setups aligned with application needs.
Managed DNS and traffic routing with AWS Route 53, utilized AWS S3 for asset storage and backups within CI/CD pipelines and DynamoDB as a scalable NoSQL database to enable fast and reliable data access with minimal latency.
Containerized applications using Docker on Amazon ECS, and explored Amazon EKS for orchestrating services with Kubernetes, creating and managing task definitions, container images, and load-balanced services to support scalable and resilient microservice architectures.
Used AWS SNS and SQS to enable asynchronous communication between microservices, configuring SNS topics for event-based system alert notifications and integrating SQS queues with AWS Lambda to automate downstream processing, enhancing system scalability in a cloud-native environment (see the sketch after this section).
Configured and maintained CI/CD pipelines in Jenkins for Maven-based Java applications by integrating SonarQube for automated code quality checks and JUnit for unit testing, storing build artifacts in the Nexus repository, and deploying them on Apache Tomcat servers, resulting in reduced deployment time and improved consistency across environments.
Integrated SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing) tools into Jenkins CI/CD pipelines to enforce secure coding practices, improve code quality, and ensure security compliance prior to production deployments.
Designed and supported event-driven microservices using AWS Lambda for scalable, cost-efficient compute, triggered via API Gateway to expose RESTful APIs, with DynamoDB as the backing NoSQL store for fast, low-latency data access.
Managed the design, structure, and maintenance of Git repositories across multiple development projects using GitHub supporting git branch management, XML configurations, merge conflict resolution, and code integration workflows for CI/CD pipelines.
Designed and managed Apache Kafka clusters on AWS using Amazon MSK (Managed Streaming for Apache Kafka), enabling scalable, fault-tolerant, low-latency data streaming across distributed microservices in a secure cloud environment.
Integrated middleware components such as Apache Kafka, AWS API Gateway, and service mesh to support distributed microservice communication and ensure reliable data streaming.
Deployed and managed the ELK Stack (Elasticsearch, Logstash, Kibana) on AWS to centralize log collection and visualization for cloud-based applications, integrating with AWS CloudWatch Logs for seamless data ingestion, and built Kibana dashboards to monitor system health and application performance.
Participated in incident management and monitoring by analyzing logs and system metrics in Splunk and AWS CloudWatch to detect and diagnose issues, and tracked resolution workflows through Jira to ensure accountability, timely communication, and root cause identification across teams.
Implemented and maintained network protocols such as TCP/IP, DNS, DHCP, and SNMP, and utilized netstat for troubleshooting connectivity issues, ensuring stable and secure network operations.
Developed and optimized Bash and PowerShell scripts to automate repetitive DevOps tasks such as service monitoring, log rotation, and system cleanup, reducing manual intervention and enhancing operational efficiency.
Collaborated with cross-functional teams to enhance system reliability through SRE practices and integrated DevSecOps principles into CI/CD workflows, demonstrating strong communication, adaptability, and problem-solving skills in fast-paced production environments.
Environment Tools: Terraform, AWS (EC2, S3, RDS, Route 53, IAM, KMS, DynamoDB, Lambda, ECS, EKS, CloudWatch, SNS, SQS), Jenkins, Maven, Kafka, SonarQube, JUnit, Nexus, GitHub, Docker, Kubernetes, ELK Stack, Splunk, Bash, PowerShell, Jira, Apache Tomcat.
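A minimal boto3 sketch of the SNS/SQS pattern described above; the topic ARN and queue URL are hypothetical placeholders, and in the described setup the downstream consumer was an AWS Lambda rather than this polling loop.

```python
"""Illustrative SNS/SQS wiring with boto3; the ARN and queue URL are placeholders."""
import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:system-alerts"               # hypothetical
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/alert-queue"   # hypothetical


def publish_alert(detail: dict) -> None:
    """Publish an event-style alert to the SNS topic."""
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(detail), Subject="system-alert")


def drain_queue_once() -> None:
    """Poll the subscribed SQS queue once, process messages, and delete them."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5
    )
    for message in response.get("Messages", []):
        body = json.loads(message["Body"])  # SNS envelope for SNS-fed queues
        print("processing:", body.get("Message", body))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])


if __name__ == "__main__":
    publish_alert({"service": "orders", "status": "degraded"})
    drain_queue_once()
```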
TCS, India Jun 2019 – May 2021
Junior DevOps Engineer
Responsibilities:
Utilized Git for version control operations (pull, push, commit) to manage Terraform code, and executed plan, apply, and destroy commands in test environments using VS Code under the guidance of senior team members, gaining a practical understanding of infrastructure automation.
Managed and maintained Linux servers across Ubuntu, CentOS, and cloud environments, performing installations and configurations and documenting installation processes and Bash-based administration steps.
Performed software package installations and security rule configurations on Ubuntu and RHEL systems, and validated Java application deployments on Apache Tomcat servers across test and non-production environments, ensuring readiness and consistency in deployments.
Deployed and managed virtual machines using Google Compute Engine, configured firewall rules, and set up basic networking to support test environments.
Gained hands-on exposure to Jenkins CI/CD workflows, learning how automation pipelines are structured, triggered, and monitored in production and non-production environments.
Tested and validated end-to-end CI/CD pipelines using Maven and the Nexus repository for build automation and artifact management, conducted unit testing with JUnit, and enforced code quality checks through SonarQube to ensure reliable build and deployment workflows.
Created and deployed a serverless web application using Google Cloud Functions and Cloud Storage, integrating with Cloud Pub/Sub for basic event-driven processing.
Monitored system health and application performance using New Relic by configuring custom dashboards and alerting rules for CPU utilization, memory consumption, and thread activity, which helped identify performance bottlenecks and resolve response time issues.
Developed automation scripts using Bash and Perl to streamline routine tasks including system health checks, service restarts, and log file management, improving consistency and reducing manual intervention (see the sketch after this section).
Applied object-oriented programming (OOP) principles, including encapsulation, inheritance, and modular design, while supporting the development, validation, and deployment of Java-based applications in CI/CD pipelines.
Gained exposure to cloud IAM policies and storage services and developed an understanding of standard security practices such as SSH key management, user access control, and basic compliance measures through hands-on learning.
Conducted network troubleshooting using tools like ping, traceroute, netstat, and tcpdump for traffic analysis and utilized top, htop, and vmstat to monitor system performance.
Participated in Root Cause Analysis (RCA) sessions to understand and learn troubleshooting workflows for critical production issues using Jira, gaining practical exposure to log analysis, system behavior, and service dependencies in production environments.
Worked within an Agile environment, actively participating in daily stand-ups, sprint planning, and team syncs to stay aligned on progress and contribute to collaboration efforts.
Environment Tools: Git, Terraform, Jenkins, Maven, Nexus, JUnit, SonarQube, Apache Tomcat, Bash, Perl, Ubuntu, CentOS, RHEL, New Relic, Jira, VS Code, SSH, OOP (Java), Linux utilities (ping, traceroute, netstat, tcpdump, top, htop, vmstat)
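A minimal Python sketch of the routine health-check and log-cleanup tasks described above (the originals were Bash and Perl scripts); the directory path and thresholds are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Illustrative system health-check and log-cleanup script; paths and limits are assumed."""
import shutil
import time
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")      # hypothetical log directory
MAX_AGE_DAYS = 14                     # assumed retention window
DISK_ALERT_PERCENT = 85               # assumed disk-usage alert threshold


def check_disk_usage(path: str = "/") -> None:
    """Print a warning if disk usage on the given mount exceeds the threshold."""
    usage = shutil.disk_usage(path)
    percent_used = usage.used / usage.total * 100
    status = "WARN" if percent_used >= DISK_ALERT_PERCENT else "OK"
    print(f"[{status}] {path} is {percent_used:.1f}% full")


def prune_old_logs(log_dir: Path, max_age_days: int) -> None:
    """Delete *.log files older than the retention window."""
    cutoff = time.time() - max_age_days * 86400
    if not log_dir.is_dir():
        return
    for log_file in log_dir.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            print(f"removing stale log: {log_file}")
            log_file.unlink()


if __name__ == "__main__":
    check_disk_usage("/")
    prune_old_logs(LOG_DIR, MAX_AGE_DAYS)
```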
Gopi Reddy
Newark, DE
**********.****@*****.***
06/26/2025
Hiring Committee
Library Information Technology
University of Michigan
Ann Arbor, MI
Dear Hiring Committee,
I am writing to formally express my sincere interest in the DevOps Engineer position within the Library IT Architecture and Engineering (A&E) team at the University of Michigan Library. After reviewing the responsibilities and mission of your department, I am enthusiastic about the opportunity to contribute my experience and capabilities to an institution whose values of academic excellence and public service I deeply admire. It would be a professional privilege to dedicate my skills in service of the University’s mission, and I am wholeheartedly committed to pursuing this role above all other opportunities.
With over five years of hands-on experience as a DevOps Engineer, I have designed, deployed, and maintained secure and scalable infrastructure across enterprise environments. My background includes managing container orchestration platforms like Kubernetes, implementing infrastructure as code using tools such as Terraform and Ansible, and automating deployment pipelines across cloud and on-premise systems. I have administered Linux-based systems, written robust Bash and Python scripts for operational efficiency, and collaborated cross-functionally to deliver stable, high-performing services to both internal and external users.
Beyond the technical alignment, I am especially drawn to the University of Michigan’s commitment to digital preservation, open access, and scholarly impact. The chance to contribute to infrastructure that supports research, archives, and public-facing services resonates strongly with my desire to apply engineering for public good. Moreover, the collaborative and inclusive ethos of your team is deeply aligned with my personal values and professional approach.
As someone who thrives in mission-driven environments, I am eager to bring my problem-solving mindset, automation-first philosophy, and dedication to operational excellence to your Architecture and Engineering team. I am also excited about the opportunity to grow within an academic setting that embraces technological innovation while upholding timeless educational values.
Thank you for your time and consideration. I have attached my resume and would be honored to discuss how my background and passion align with the goals of the University of Michigan Library. I am ready and willing to fully commit to this role and contribute meaningfully from day one.
Sincerely,
Gopi Reddy