
DevOps Engineer - Cloud Automation & CI/CD Expert

Location:
Greensboro, NC, 27403
Salary:
90000
Posted:
December 12, 2025


Resume:

Sai Sudha Mandula
DevOps Engineer

*************@*****.*** | +1-339-***-**** | LinkedIn | Portfolio

PROFESSIONAL SUMMARY

Results-driven DevOps Engineer with 5+ years of experience in automating CI/CD pipelines, containerizing microservices, and building scalable, cloud-native architectures. Adept at integrating AWS, GCP, and Azure environments, enhancing deployment speed by 50%, and optimizing data and backend performance by up to 40%. Skilled in leveraging Power BI for data visualization, Atlassian tools for agile delivery, and monitoring platforms like Prometheus and Grafana for proactive system health management. Passionate about delivering robust, efficient, and self-healing infrastructure solutions that empower business agility.

TECHNICAL SKILLS

• DevOps & Automation: GitLab CI/CD, Jenkins, Docker, Kubernetes, Terraform, SonarQube, Vault

• Cloud Platforms: AWS (EC2, S3, Lambda, IAM, RDS, CloudFormation), GCP (BigQuery, Compute Engine), Azure

• Programming & Scripting: Python, Java, Bash, Shell, JavaScript

• Data Engineering & Databases: ETL, SQL Tuning, Data Modeling, Amazon Redshift, Vertica, MySQL, PostgreSQL, MongoDB

• Monitoring & Logging: Prometheus, Grafana, Splunk, RabbitMQ

• Analytics & Visualization: Power BI, Tableau, Looker

• API & Backend Development: REST APIs, Microservices, Spring Boot, Node.js

• Version Control & Tools: Git, GitHub, GitLab, Bitbucket, JIRA, Confluence (Atlassian Suite)

• Operating Systems: Linux (Ubuntu), Windows, macOS

PROFESSIONAL EXPERIENCE

Software/DevOps Engineer | IBM | Suwanee, GA | Jan 2024 – Present

• Designed and implemented GitLab CI/CD pipelines automating build, validation, and deployments across multi-cloud environments.

• Containerized microservices with Docker and Kubernetes, increasing system uptime and fault tolerance by 35%.

• Built and maintained automated ETL pipelines using Python and Bash for high-volume data ingestion and transformation (a simplified sketch follows this role's bullets).

• Optimized BigQuery data structures, improving query performance and reducing compute cost by 40%.

• Deployed centralized monitoring with Grafana, Splunk, and Prometheus, reducing pipeline outages through early anomaly detection.

• Implemented Power BI dashboards for real-time metrics and business insights, enabling data-backed decision-making.

• Strengthened data reliability through automated validation, regression testing, and reconciliation frameworks.
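To make the ETL bullet above concrete, here is a minimal Python sketch of a single batch ingest-and-transform step; the directories, column names, and output file are hypothetical placeholders for illustration, not details of the actual IBM pipeline.

# Minimal batch ETL step: ingest raw CSV files, apply a transformation,
# and write a cleaned file ready for loading into a warehouse table.
# All paths and column names are hypothetical placeholders.
import csv
import glob
from pathlib import Path

RAW_DIR = Path("/data/raw")              # hypothetical landing zone
OUT_FILE = Path("/data/clean/events.csv")

def transform(row):
    # Example transformation: normalize casing and drop rows with bad amounts.
    if not row.get("amount"):
        return None
    try:
        row["amount"] = f'{float(row["amount"]):.2f}'
    except ValueError:
        return None
    row["customer"] = row.get("customer", "").strip().title()
    return row

def run():
    rows = []
    for path in glob.glob(str(RAW_DIR / "*.csv")):
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                cleaned = transform(row)
                if cleaned:
                    rows.append(cleaned)
    OUT_FILE.parent.mkdir(parents=True, exist_ok=True)
    with open(OUT_FILE, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["customer", "amount"],
                                extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    run()

In practice a step like this would be scheduled by the CI/CD or orchestration layer and write to a staging table rather than a local file.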

DevOps Engineer | RedHat | India | June 2019 – July 2022

• Built end-to-end CI/CD pipelines with GitLab and Jenkins, cutting deployment times and increasing release efficiency by 45%.

• Implemented Terraform-based Infrastructure as Code (IaC) for scalable AWS and Azure resource provisioning.

• Transformed monolithic applications into containerized microservices, improving scalability and recovery time.

• Integrated SonarQube and Vault to automate code quality checks and enforce secure secrets management (see the sketch following this role).

• Deployed Atlassian tools (JIRA, Confluence) for agile sprint tracking and cross-team collaboration.

• Established centralized observability with Splunk and Grafana, improving issue resolution time by 30%.
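The secrets-management bullet above follows a common pattern: CI jobs pull credentials from Vault at run time instead of hard-coding them. A small sketch using the open-source hvac client is shown below; the Vault address, secret path, and key names are assumptions for illustration only.

# Fetch deployment credentials from HashiCorp Vault (KV v2 engine) so that
# CI jobs never store secrets in the repository. Address, path, and key
# names are hypothetical placeholders.
import os
import hvac

def get_deploy_credentials():
    client = hvac.Client(
        url=os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200"),
        token=os.environ["VAULT_TOKEN"],   # injected by the CI runner
    )
    if not client.is_authenticated():
        raise RuntimeError("Vault authentication failed")
    # Read a secret from the KV v2 engine (mounted at the default 'secret/' path).
    response = client.secrets.kv.v2.read_secret_version(path="ci/deploy-credentials")
    return response["data"]["data"]        # e.g. {"aws_access_key_id": "..."}

if __name__ == "__main__":
    creds = get_deploy_credentials()
    print("Loaded secret keys:", sorted(creds))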

EDUCATION

University of Bridgeport, Bridgeport, CT | Sept 2022 – Dec 2023
Master's in Computer Science

PROJECT

Self-Healing Kubernetes Platform with Automated Remediation

• Created a custom Kubernetes Operator in Python to monitor pod states and restart unhealthy workloads automatically (simplified sketch after this project's bullets).

• Integrated ArgoCD to enforce declarative GitOps deployments with instant rollback capabilities.

• Deployed Helm charts for configurable, version-controlled microservice releases.

• Implemented Prometheus & Alertmanager rules to detect anomalies and execute remediation scripts.

• Automated incident notifications to reduce manual on-call effort and prevent recurring service outages.
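As a deliberately simplified illustration of the operator's remediation logic, the loop below watches pod events with the official Kubernetes Python client and deletes pods stuck in CrashLoopBackOff so their Deployment reschedules a fresh replica; the namespace, restart threshold, and decision rules are hypothetical, not the production values.

# Simplified controller loop: watch pod events and delete unhealthy pods so
# their ReplicaSet/Deployment recreates them. Namespace and threshold are
# illustrative assumptions.
from kubernetes import client, config, watch

NAMESPACE = "apps"            # hypothetical target namespace
MAX_RESTARTS = 5              # hypothetical restart threshold

def is_unhealthy(pod):
    for cs in (pod.status.container_statuses or []):
        waiting = cs.state.waiting if cs.state else None
        if waiting and waiting.reason == "CrashLoopBackOff":
            return True
        if cs.restart_count >= MAX_RESTARTS:
            return True
    return False

def main():
    config.load_incluster_config()        # use load_kube_config() outside the cluster
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, namespace=NAMESPACE):
        pod = event["object"]
        if event["type"] == "MODIFIED" and is_unhealthy(pod):
            # Deleting the pod lets its controller schedule a replacement.
            v1.delete_namespaced_pod(pod.metadata.name, NAMESPACE)

if __name__ == "__main__":
    main()

Deleting the pod rather than patching it keeps the remediation declarative: the Deployment's desired state drives the replacement, which is the behavior GitOps tooling such as ArgoCD expects.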

Intelligent Log Analytics Platform for Predictive Alerting

• Designed a centralized log ingestion pipeline using Fluentd, Kafka, and Elasticsearch to collect logs from distributed applications.

• Applied Python-based anomaly detection models (Isolation Forest) to predict performance degradation and resource exhaustion (see the sketch after this project).

• Automated log enrichment and pattern analysis using AWS Lambda and CloudWatch Logs Insights.

• Built interactive Power BI dashboards to visualize anomaly trends, system metrics, and predictive insights for DevOps teams.

• Integrated alert notifications with Slack and JIRA, automatically creating incident tickets for high-risk anomalies.

• Achieved a 30% reduction in unplanned downtime through predictive maintenance alerts and early detection.
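To make the Isolation Forest bullet concrete, here is a minimal scikit-learn sketch that trains on a window of per-minute metrics and flags an anomalous sample; the feature set, sample values, and contamination rate are hypothetical placeholders rather than the production model.

# Train an Isolation Forest on recent per-minute metrics and flag minutes
# whose pattern looks anomalous. Features and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: error_rate, p95_latency_ms, cpu_percent (one row per minute).
history = np.array([
    [0.01, 120, 35],
    [0.02, 130, 40],
    [0.01, 115, 33],
    [0.02, 140, 42],
])

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(history)

latest = np.array([[0.20, 900, 95]])        # a suspicious-looking minute
if model.predict(latest)[0] == -1:          # -1 means "anomaly"
    print("Anomaly detected: raise predictive alert / open JIRA ticket")

A flagged sample would then feed the Slack/JIRA notification step described above, turning the prediction into an incident ticket.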

CERTIFICATIONS

AWS Certified DevOps Engineer


