
Cloud DevOps Engineer

Location:
Denton, TX
Posted:
October 14, 2025

Contact this candidate

Resume:

BHANU VENKATA MANIKANTA MOTUPALLI

Sr. Cloud DevOps Engineer

LinkedIn: https://www.linkedin.com/in/bhanuvenkatamanikantamotupalli

Phone: 940-***-****

Email: ******************************@*****.***

PROFESSIONAL SUMMARY

Over 10 years of experience as a Platform Engineer, specializing in designing, implementing, and managing secure, scalable, and compliant cloud platforms across the healthcare, financial services, telecommunications, retail/e-commerce, and automotive industries.

Expertise in cloud infrastructure automation using Terraform, Ansible, and ARM templates, ensuring consistent and scalable deployments across environments.

Proficient in containerization and orchestration using Docker, Kubernetes, Helm, ISTIO, and OpenShift for deploying and managing containerized applications.

Skilled in building and maintaining CI/CD pipelines using Jenkins, GitLab CI, Argo CD, and Azure Pipelines, integrating Azure Key Vault and AWS CodeCommit for secure automation.

Extensive experience with Microsoft Azure and AWS, including AKS, ECS, EC2, S3, RDS, VPC, and CloudFormation, to support high-availability and scalable platforms.

Implemented monitoring and observability solutions using Prometheus, Grafana, Datadog, Splunk, and Dynatrace for real-time performance tracking and issue resolution.

Enforced security and compliance with HIPAA, HITRUST, and PCI-DSS using Azure Policies, AWS IAM, HashiCorp Vault, and mutual TLS for secure service communication.

Designed and implemented MLOps pipelines using Azure Machine Learning (AML), Kubeflow, and MLflow for training, deploying, and monitoring machine learning models.

Built self-service tools and reusable Helm charts and Terraform modules to streamline developer workflows and accelerate application deployments.
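Illustrative sketch of the kind of reusable Terraform module described above (module layout, names, and values are hypothetical, not from any client codebase):

```hcl
# modules/aks_cluster/variables.tf
variable "cluster_name" {
  type = string
}

variable "node_count" {
  type    = number
  default = 3
}

# modules/aks_cluster/main.tf -- minimal AKS cluster with a default node pool
resource "azurerm_kubernetes_cluster" "this" {
  name                = var.cluster_name
  location            = "eastus"
  resource_group_name = "rg-platform"
  dns_prefix          = var.cluster_name

  default_node_pool {
    name       = "default"
    node_count = var.node_count
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

Teams would then consume the module with a short `module "aks" { source = "./modules/aks_cluster" cluster_name = "payments-dev" }` block, keeping cluster standards in one place.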

Designed and implemented data pipelines using AWS S3, Azure Data Lake, Python ETL scripts, and Mesos-managed EC2 clusters for advanced analytics and machine learning.

Defined and implemented SLOs, SLIs, and KPIs to measure and improve platform reliability, including MTTR and availability.

Architected high-availability cloud infrastructures using AWS CloudFormation and Terraform, ensuring business continuity and data resilience.

Automated Kubernetes cluster deployments using KOPS and Ansible, writing playbooks for cluster setup and management.

Configured ISTIO service mesh and Nginx Ingress Controller for secure and efficient traffic routing in Kubernetes clusters.
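A minimal sketch of the Istio mutual-TLS enforcement mentioned above (the namespace name is hypothetical):

```yaml
# Enforce strict mTLS for all workloads in a namespace
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments
spec:
  mtls:
    mode: STRICT
```

With `STRICT` mode, sidecars reject any plaintext traffic, so only mesh-authenticated services can communicate.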

Integrated Splunk with ServiceNow and AWS CloudWatch for centralized log management, alerting, and incident resolution.

Implemented Dynatrace for end-to-end application performance monitoring, including Real User Monitoring (RUM) to optimize user experience.

Designed and implemented ServiceNow solutions tailored to organizational needs, leveraging ITSM modules such as Incident, Change, Problem, and Service Catalog.

Proficient in UNIX shell/Bash scripting, AutoSys scheduling, and batch processing for production environments.

Tracked and managed tasks, defects, and project progress using Jira, ensuring timely delivery of platform initiatives.

Expertise in using build tools like Maven, Ant, and Gradle for building deployable artifacts.

Proficient in version control with Git, GitHub, Azure Repos, Bitbucket, and GitLab.

Experience with configuration management tools like Chef and Puppet.

Skilled in managing databases such as SQL Server, MySQL, NoSQL, MongoDB, DynamoDB, Cassandra, and Data Lake.

Configured and managed web/application servers like Apache Tomcat, Nginx, IIS, WebLogic, and Kafka.

Implemented testing and code quality tools like SonarQube, Selenium, Veracode, and X-Ray.

Worked with ticketing tools such as ServiceNow, Bugzilla, and Mingle for issue tracking and project management.

Proficient in additional AWS services like Lambda, Kinesis, Elastic Beanstalk, CloudTrail, Direct Connect, SQS, and SNS.

Experienced with additional Azure services like Azure Functions, Azure Blob Storage, Azure Monitor, and Azure Log Analytics.

Skilled in programming languages such as Python, Java, Ruby, .NET, YAML, JSON, Golang, PowerShell, and Groovy.

Extensive experience with operating systems like Linux, RHEL, and Windows Server.

CERTIFICATIONS

Microsoft Certified Azure Administrator Associate.

Certified Kubernetes Administrator.

AWS Certified Developer – Associate.

TECHNICAL SKILLS

Title

Tools Used

Cloud Environments

Microsoft Azure, Amazon Web Services (AWS)

AWS

EC2, S3, Lambda, RDS, ECS, ECR, EKS, CloudFormation, IAM, VPC, CloudWatch, Kinesis, Elastic Beanstalk, Autoscaling, CloudTrail, AWS Direct Connect, Route53, SQS, SNS

Azure

VM, App Services, Azure Repos, Azure Pipelines, Azure Boards, Azure Kubernetes Service (AKS), Azure Container Registry (ACR), Azure Functions, Azure Blob Storage, DevOps Services, Azure Monitor and Log Analytics, Networking Services

Configuration Management

Ansible, Chef, Puppet

Build Tools

ANT, Maven, Gradle

CI/CD Tools

Jenkins, Argo CD, Azure Pipelines, GitLab, GitHub Actions

Monitoring Tools

Splunk, Dynatrace APM, CloudWatch, ELK, Grafana, Prometheus, Datadog

Container Tools

Kubernetes (EKS, AKS), OpenShift, ECS, Docker

Scripting/Programming Languages

Python, Java, Shell (Bash), Ruby, .NET, YAML, JSON, Golang, PowerShell, Groovy

Version Control Tools

Git, GitHub, Azure Repos, Bitbucket, GitLab

Operating Systems

UNIX, Linux, RHEL, Windows Server

Databases

SQL Server, MySQL, NoSQL, S3, MongoDB, DynamoDB, Cassandra, Data Lake

Ticketing Tools

Jira, ServiceNow, Bugzilla, Mingle

Testing / Code Quality

Selenium, SonarQube, Veracode, X-Ray

Web/Application Servers

Apache Tomcat, Nginx, IIS, httpd, WebLogic, Kafka

Virtualization Tools

Oracle VirtualBox, VMware, vSphere, Vagrant

Infrastructure as Code

Terraform, ARM Templates, CloudFormation

WORK EXPERIENCE

Client: JP Morgan Chase, Austin, TX Aug 2023 – Present

Role: Azure DevOps Engineer / Platform Engineer

Project Title: Enterprise Cloud Platform Modernization

Project Description: Worked on JP Morgan Chase’s Enterprise Cloud Platform Modernization to migrate legacy banking systems to Azure. Developed secure, scalable CI/CD pipelines using Azure DevOps, Terraform, and GitHub Actions. Automated provisioning of Azure resources such as AKS, App Services, Key Vault, and Azure SQL using Terraform and Bicep. Implemented RBAC, Azure Policies, and governance controls to meet security and compliance standards. Integrated Azure Monitor, Log Analytics, and Application Insights for centralized observability. Enabled blue/green and canary deployments using Helm and Kustomize in AKS environments.

Responsibilities:

Designed and implemented a secure, scalable cloud platform on Microsoft Azure to support credit card transaction processing and fraud detection systems, leveraging AKS, ACR, and Azure Active Directory (AAD) for identity management.

Built MLOps pipelines using Azure Machine Learning (AML), Kubeflow, and MLflow to train, deploy, and monitor machine learning models for real-time fraud detection, integrating Feature Store for real-time and batch feature serving.

Implemented AWS cloud stack, including EC2, RDS, S3, IAM, VPC, and CloudWatch, ensuring high availability, security, and compliance for financial applications.

Designed and implemented AWS application hardening practices, including IAM policy enforcement, network segmentation, security groups, and encryption for sensitive banking data.

Configured ArgoCD for GitOps-based continuous delivery, ensuring synchronization of application configurations stored in Git repositories with Kubernetes clusters.
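A minimal sketch of the kind of Argo CD Application manifest used for GitOps delivery as described above (repository URL, paths, and names are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: fraud-detection
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/fraud-detection.git
    targetRevision: main
    path: deploy/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: fraud-detection
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
```

With automated sync enabled, the cluster state continuously converges on what is committed in Git.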

Deployed and managed Kubernetes (EKS) clusters to run and scale containerized banking applications, ensuring reliability and fault tolerance.

Implemented HashiCorp Vault for secure storage and management of cryptographic keys, passwords, and API tokens, ensuring compliance with PCI-DSS and other regulatory standards.

Deployed and managed enterprise applications on AKS/EKS clusters secured with IAM roles, RBAC, and network policies to meet SOC2 and PCI compliance requirements.

Set up Datadog, Prometheus, and Grafana for real-time monitoring and alerting, and configured Splunk for log aggregation and anomaly detection to proactively identify security threats.

Designed and optimized Databricks clusters for fraud detection and analytics, integrating Delta Lake for scalable data storage.

Designed and deployed AWS infrastructure across multi-account environments with VPCs, Route53, Transit Gateways, and NAT Gateways for secure global banking workloads.

Configured and optimized database services (RDS Aurora, DynamoDB) with multi-region replication to ensure cross-region availability and disaster recovery.

Migrated legacy on-premises systems to Azure using Terraform and Docker, modernizing infrastructure and improving scalability and maintainability.

Built monitoring and alerting solutions using CloudWatch, Prometheus, and Grafana for proactive incident detection and resolution.

Defined and enforced AWS governance standards for resource tagging, cost optimization, and compliance tracking.

Led migration of on-premises workloads to AWS, modernizing infrastructure for scalability, resiliency, and operational efficiency.

Supported container security best practices including image scanning, RBAC, and network policies in Kubernetes.

Troubleshot Istio service mesh issues including sidecar injection, mutual TLS, and traffic routing across services deployed on Kubernetes.

Developed CI/CD pipelines using GitHub Actions, Azure Pipelines, and ArgoCD to support GitOps-based deployments for containerized workloads.

Built and administered Kubernetes (EKS) clusters with autoscaling, RBAC, and network policies for high-security financial applications.

Designed and implemented disaster recovery and high availability solutions for Azure Databricks and AKS environments, including data replication, backup, and failover mechanisms.

Configured Istio service mesh for secure service-to-service communication, mutual TLS, and traffic shaping across critical banking microservices.

Configured and managed Kafka clusters on AWS (EC2) and Azure for handling high-throughput real-time data streams.

Integrated SonarQube for code quality analysis and ELK Stack for centralized logging and analytics, improving system stability and performance.

Configured Istio ingress gateways and JWT authentication policies to secure APIs exposed externally to partners.

Designed cross-region failover strategies with AWS ELB, Route53, and RDS Multi-AZ for business-critical financial applications.

Integrated Airflow with AWS Secrets Manager and Parameter Store to securely manage credentials for Databricks, S3, and RDS connections.

Implemented security best practices with IAM policies, security groups, and compliance with PCI-DSS standards.

Implemented encryption and decryption mechanisms using HashiCorp Vault's transit secrets engine to protect sensitive data in transit and at rest.

Managed Docker container logistics, including image creation, registry optimization (ECR), and container lifecycle management.

Implemented CI/CD pipelines using Harness, integrating security scans, approvals, and automated rollbacks for critical microservices.

Developed Python and Java-based Kafka producers and consumers for publishing and consuming real-time fraud detection events.

Designed and implemented Azure Kubernetes Service (AKS) clusters, configuring networking, security, and scaling policies to support high-traffic credit card transaction systems.

Implemented AWS Well-Architected Framework principles across applications, including security, performance efficiency, and operational excellence.

Enforced application hardening policies including VPC segmentation, security group restrictions, IAM least-privilege roles, and encryption (KMS, TLS, S3 SSE).

Configured and scripted AWS networking logistics with VPC peering, Direct Connect, security groups, and routing automation.

Deployed services with Helm charts, enabling version-controlled, standardized, and scalable releases across environments.

Architected production-resilient infrastructure with multi-AZ failover and cross-region DR solutions for core payment systems.

Automated DR drills for RDS and EKS clusters, validating recovery time objectives (RTO) and recovery point objectives (RPO).

Utilized Ansible to automate system operations, including server provisioning, configuration management, and application deployments, reducing manual effort and errors.

Implemented Azure Monitor and Log Analytics to track platform performance, set up custom alerts, and troubleshoot issues in real-time.

Designed and deployed Azure Databricks environments for advanced analytics and big data processing, integrating with Azure Data Lake Storage (ADLS) and Azure SQL Data Warehouse.

Configured Azure Artifacts for package management, setting up feeds for npm, Maven, and other package types to streamline dependency management.

Designed and implemented single sign-on (SSO) and multi-factor authentication (MFA) using Azure Active Directory (AAD), enhancing platform security.

Automated the deployment of Cloud Native Functions (CNF) on AKS using Helm charts, enabling rapid scaling of serverless workloads.

Automated database provisioning, schema migrations, and replication strategies using Terraform and Liquibase for financial transaction systems.

Configured cross-region replication and failover between AWS RDS instances for disaster recovery and business continuity.

Integrated Splunk with Azure services for centralized log management, enabling advanced querying, alerting, and reporting capabilities.

Automated the deployment of TensorFlow, PyTorch, and Scikit-learn models in production environments using Docker and Kubernetes.

Implemented Azure Cosmos DB for globally distributed database solutions, ensuring low-latency access to credit card transaction data.

Designed and implemented Azure Data Factory pipelines for data ingestion, transformation, and migration from on-premises systems to Azure Data Lake.

Utilized Kafka for real-time data streaming and event processing, enabling real-time fraud detection and transaction monitoring.

Collaborated with cross-functional teams to design and implement Azure Blueprints for standardized and compliant cloud environments.

Environment: Azure DevOps, AWS, EC2, RDS, S3, IAM, VPC, EKS, Bamboo, Terraform, Azure SQL, Azure Active Directory, Jenkins, Python, Git, Bitbucket, Ansible, Azure Services, Docker, Azure Databricks, Azure Key Vault, SonarQube, Argo CD, Azure Kubernetes Service (AKS), Azure Container Registry (ACR), CI/CD pipelines, Datadog, HashiCorp Vault, OpenShift Container Platform, .NET, ISTIO, ELK stack, Azure Log Analytics, Azure Pipelines, Nginx, Prometheus & Grafana, Splunk, Kafka, Azure Cosmos DB, Migration, Jira.

Client: Kaiser Permanente, Oakland, CA Jan 2021 – July 2023

Role: AWS DevOps Engineer / SRE

Project Title: CloudOps Enablement for Healthcare Services

Project Description: Kaiser Permanente modernized its healthcare platform by migrating legacy systems to AWS Cloud to improve scalability, security, and reliability. As an AWS DevOps Engineer/SRE, I designed and implemented automated CI/CD pipelines using Jenkins, GitHub Actions, and AWS CodePipeline. I automated infrastructure provisioning with Terraform and managed containerized deployments on Amazon EKS using Helm. I configured monitoring and alerting tools like CloudWatch, Prometheus, and Grafana to ensure high availability and quick incident response. I implemented SRE practices including SLOs and error budgets and enforced HIPAA compliance through security best practices.

Responsibilities:

Installed and administered Jenkins CI for Ant and Maven builds, and managed the installation, configuration, and administration of RDBMS and NoSQL tools such as DynamoDB.

Designed and implemented scalable cloud infrastructure on AWS using Terraform and AWS CloudFormation, provisioning resources such as EC2 instances, RDS Aurora, S3 buckets, and VPCs to support healthcare applications.

Automated Kubernetes cluster deployments using KOPS and Helm charts, ensuring consistent and repeatable configurations across environments.

Engineered Databricks clusters for healthcare analytics, implementing secure data lake integrations and HIPAA compliance.

Automated ETL workflows with Apache Airflow, orchestrating pipelines for EHR and patient data.

Containerized legacy healthcare applications using Docker and deployed them on AWS ECS and Red Hat OpenShift, leveraging Kubernetes constructs such as Pods, Services, and ConfigMaps.

Built end-to-end CI/CD pipelines using Jenkins, GitLab CI, and AWS CodePipeline to automate build, test, and deployment processes.

Provisioned Databricks workspaces using Terraform, integrating with AWS Lake Formation and S3 for secure data lake access.

Integrated Argo CD with CI/CD pipelines to enable GitOps-based continuous delivery, automatically synchronizing application configurations with Kubernetes clusters.

Configured Prometheus, Grafana, and AWS CloudWatch for real-time monitoring of platform performance, application health, and resource utilization.

Set up Elasticsearch, Fluentd, and Kibana (EFK) for centralized logging and log analysis, enabling proactive issue resolution.

Enforced HIPAA and HITRUST compliance by implementing AWS IAM policies, Security Groups, and VPCs for resource isolation.

Configured HashiCorp Vault for secure storage of cryptographic keys, passwords, and API tokens.

Implemented mutual TLS for secure service-to-service communication within Kubernetes clusters using ISTIO.

Designed and implemented data pipelines for ML teams using AWS S3, Python ETL scripts, and Mesos-managed EC2 clusters.

Integrated Kafka with Kubernetes-based microservices (EKS, AKS) to decouple services and handle asynchronous processing.

Provided reusable Helm charts and Terraform modules to streamline application deployments.

Configured Splunk for log aggregation, alerting, and anomaly detection, integrating it with ServiceNow for incident management.

Automated infrastructure provisioning and app deployment using Terraform, ARM templates, and Ansible in both Azure and AWS environments.

Created Python and Shell scripts to support data migrations, system health checks, and deployment automation in cloud and hybrid environments.
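The health-check scripts mentioned above might look something like this minimal sketch (metric names and thresholds are hypothetical; real checks would pull live values from the monitoring stack):

```python
def evaluate_health(metrics: dict[str, float],
                    thresholds: dict[str, float]) -> list[str]:
    """Return an alert message for each metric exceeding its threshold."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(f"{name} at {value:.1f}% exceeds limit {limit:.0f}%")
    return alerts

# Example: disk is over its limit, CPU is not
alerts = evaluate_health(
    {"disk_used_pct": 92.5, "cpu_used_pct": 40.0},
    {"disk_used_pct": 85, "cpu_used_pct": 90},
)
```

A script like this would typically run on a schedule and feed its alerts into Splunk or ServiceNow rather than print them.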

Configured AWS networking logistics, including VPNs, Transit Gateways, and route automation for hybrid healthcare data flow.

Designed production-resilient environments with multi-AZ replication and cross-region failover for healthcare APIs.

Automated RDS backup, snapshot, and restore workflows with Terraform integrated into Harness CI/CD.

Tuned EKS performance with node scaling policies, pod affinity rules, and optimized Docker image pipelines.

Migrated and managed enterprise workloads on multi-cloud platforms (Azure, AWS, GCP), with a focus on AKS and cloud networking configurations.

Implemented FluxCD for continuous reconciliation of manifests in multi-region EKS deployments, ensuring rapid disaster recovery readiness.

Defined and implemented SLOs, SLIs, and KPIs to measure and improve platform reliability, including MTTR and availability.

Automated cluster operations including node scaling, image builds, and rolling updates using Ansible, Terraform, and OpenShift CLI (oc) tools.

Monitored OpenShift cluster health using Prometheus, Grafana, and the EFK stack, and defined alerting rules for pod failures, high memory/CPU usage, and platform issues.

Designed and deployed VPCs with subnets, route tables, NAT gateways, and VPN connections.

Set up AWS Route53 health checks and routing policies (e.g., Weighted, Failover) to ensure high availability of critical services.

Monitored and optimized AWS costs using AWS Cost Explorer and CloudWatch, ensuring efficient resource utilization.

Conducted database performance tuning on AWS RDS and Aurora clusters, reducing backup times by 30%.

Designed cloud cost savings plans in AWS, optimizing EC2 usage, RDS instance rightsizing, and S3 lifecycle rules.

Managed database stack including PostgreSQL and DynamoDB, ensuring replication and high availability.

Ensured disaster recovery readiness by implementing region failover and DR exercises for AWS RDS and EKS workloads.

Automated infrastructure provisioning and configuration management using Ansible, writing playbooks for deployment, server setup, and stack monitoring.

Configured RBAC roles for clinicians, developers, and operations teams, enforcing least-privilege access at namespace and cluster level.

Integrated Ansible with Jenkins to automate infrastructure activities, including continuous deployment and application server setup.

Managed Docker container snapshots, attached to running containers, removed images, and managed directory structures for efficient container lifecycle management.

Set up AWS Virtual Private Cloud (VPC) and Database Subnet Groups for isolation of resources within Amazon RDS Aurora DB clusters.

Implemented SonarQube for developer code quality checks, established quality gates, and designed gate thresholds by muting/unmuting rules.

Automated Kubernetes cluster deployments using Ansible, writing playbooks for cluster setup and management.

Wrote Kubernetes YAML files for deploying microservices into Kubernetes clusters, adhering to 12-factor application principles.
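A condensed sketch of the kind of Kubernetes Deployment manifest described above (service name, image, and values are hypothetical; config is injected via environment variables per 12-factor principles):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: claims-api
  labels:
    app: claims-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: claims-api
  template:
    metadata:
      labels:
        app: claims-api
    spec:
      containers:
        - name: claims-api
          image: registry.example.com/claims-api:1.4.2
          ports:
            - containerPort: 8080
          env:
            - name: LOG_LEVEL
              value: "info"   # config via environment, not baked into the image
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
```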

Configured JUnit coverage reports and Integration Test cases as part of the build process in GitLab Runner.

Scheduled, deployed, and managed container replicas onto nodes using Kubernetes, ensuring efficient resource utilization.

Configured ISTIO service mesh on Kubernetes clusters and implemented mutual TLS for secure internal service-to-service communication.

Deployed Kubernetes clusters on top of Amazon EC2 Instances using KOPS, managing local clusters and deploying application containers.

Set up development and production data pipelines for ML teams on Mesos-managed EC2 clusters with Marathon Docker Management.

Transformed data stored in AWS S3 using Python ETL scripts for advanced analytics and reporting.
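A simplified sketch of the kind of Python ETL transform described above (field names are hypothetical; in production the CSV would be read from and written back to S3 via boto3 rather than passed in as a string):

```python
import csv
import io

def transform_usage_records(raw_csv: str) -> list[dict]:
    """Parse raw CSV records, drop incomplete rows, and normalize
    fields for downstream analytics."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row.get("patient_id") or not row.get("visit_date"):
            continue  # skip records missing required identifiers
        rows.append({
            "patient_id": row["patient_id"].strip(),
            "visit_date": row["visit_date"],
            "charge_usd": round(float(row.get("charge", "0") or 0), 2),
        })
    return rows

raw = (
    "patient_id,visit_date,charge\n"
    "P001,2021-03-04,120.456\n"
    ",2021-03-05,50\n"        # dropped: no patient_id
    "P002,2021-03-06,\n"      # kept: missing charge defaults to 0
)
records = transform_usage_records(raw)
```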

Designed and implemented ServiceNow solutions tailored to organizational needs, leveraging ITSM modules such as Incident, Change, Problem, and Service Catalog.

Performed administrative tasks such as user management, role-based access control (RBAC), and license management for Splunk environments.

Integrated Splunk with other IT operations tools and platforms (e.g., Nagios, ServiceNow, AWS CloudWatch) for streamlined monitoring and troubleshooting.

Implemented Dynatrace for end-to-end application performance monitoring, enabling real-time visibility into application health and performance.

Conducted DR exercises validating SOC2 and HIPAA requirements, including encrypted multi-region failover for PHI workloads.

Defined and implemented SLOs and SLIs for critical services, establishing measurable targets for reliability and performance.

Managed SLAs to ensure the delivery of services met agreed-upon performance standards and availability targets.

Developed and tracked key SRE KPIs, including MTTR (Mean Time to Recovery), availability, incident frequency, and error rate.

Implemented KPI dashboards to provide real-time visibility into system performance and reliability metrics.

Environments: Ansible, Apache Tomcat, AWS, AWS CodePipeline, Argo CD, AWS Secrets Manager, Chef, CI/CD Pipeline, CloudCheck, CloudFormation, CloudWatch, Confluence, Cost Explorer, Docker, Dynatrace, Elastic Container Registry (ECR), Elastic Kubernetes Service (EKS), ELK Stack, GitLab, GitHub, Git, Helm Charts, IAM, Jenkins, JIRA, Migration, Nagios XI, OpenShift, Prometheus, Python, ServiceNow, SonarQube, Splunk, Terraform.

Client: Cox Communications, Atlanta, GA April 2018 – Dec 2020

Role: DevOps Engineer

Project Title: NextGen Cloud Infrastructure Automation

Project Description: Led the automation of Cox Communications’ cloud infrastructure using Terraform and AWS CloudFormation, enabling scalable and consistent environment provisioning. Developed CI/CD pipelines with Jenkins to streamline application deployment and reduce manual errors. Implemented containerization with Docker and orchestration via Kubernetes for faster, reliable releases. Automated configuration management using Ansible to ensure environment consistency. Integrated monitoring solutions like AWS CloudWatch and Prometheus for proactive system health tracking. This project improved deployment speed by 40% and infrastructure provisioning time by 60%, enhancing overall operational efficiency.

Responsibilities:

Established a Continuous Delivery pipeline with Docker, Jenkins, and GitHub. Installed and configured Jenkins to support various Java builds, automated continuous builds using Jenkins plugins, and published Docker Images to the Nexus Repository.

Implemented SonarQube for continuous inspection of code quality and automated Nagios alerts and email notifications using Python scripts executed through Chef.

Set up AWS infrastructure for telecom workloads with VPCs, Route53, and Transit Gateways supporting millions of daily users.

Automated provisioning of EC2, S3, and RDS with Terraform, including modularized IaC for EKS cluster deployments.

Built and managed EKS clusters with Helm-based deployments, securing traffic with network policies.

Optimized Docker container logistics by automating build pipelines, version tagging, and vulnerability scanning.

Designed Databricks ETL pipelines for telecom usage data, enabling near real-time billing and reporting.

Administered GitHub Enterprise, automating repo creation, role-based access, and CI/CD policy enforcement.

Deployed Kubernetes clusters and Docker containers for large-scale, distributed applications with automated scaling.

Migrated CI/CD workflows from Bamboo to Jenkins and GitHub, improving pipeline standardization and developer productivity.

Defined and enforced Kubernetes RBAC roles, network policies, and Pod Security Standards to align with SOC2 audit controls.

Automated secrets rotation with Vault and integrated it into GitOps pipelines, ensuring ephemeral credentials across clusters.

Implemented monitoring solutions (CloudWatch, Prometheus, ELK) for proactive troubleshooting.

Hardened AWS applications using IAM security, RBAC, and encryption.

Scripted AWS networking logistics, configuring cross-region peering, routing tables, and NACLs to optimize traffic flow.

Architected production environments with multi-AZ RDS clusters and cross-region replication for telecom billing services.

Automated DR failover testing with Terraform-driven scripts to validate backup and recovery processes.

Built monitoring pipelines with Prometheus, CloudWatch, and Harness dashboards for proactive fault detection.

Developed Python/Ansible automation for repository audits and secure migration from Bitbucket.

Conducted database tuning on Oracle and MySQL clusters, applying sharding and replication strategies.

Utilized Git for source code version control, integrated with Jenkins for CI/CD pipeline, and managed user management with Maven and Ant build tools.

Implemented compliance monitoring frameworks that mapped Kubernetes audit logs and Istio telemetry into SOC2 reporting dashboards.

Installed, configured, and managed Monitoring Tools such as Nagios for Resource Monitoring and Network Monitoring.

Managed infrastructure servers from SCM to GitHub and Chef.

Extensively worked with the distributed version control system Git.

Managed database stack including DynamoDB and Aurora, ensuring replication and disaster recovery.

Implemented security best practices including IAM policies, audit logging, and network segmentation.

Collaborated with the development team to generate deployment profiles (jar, war, ear) using Ant Scripts and Jenkins.

Used Maven dependency management system to deploy snapshot and release artifacts to Nexus, facilitating artifact sharing across projects.

Implemented CI/CD Automation Process using CI Tool Jenkins, CD Tool Docker.

Installed, updated, diagnosed, and troubleshot the issue tracking and project management application JIRA in support of agile methodology. Created and configured new JIRA projects and maintained existing ones.

Managed servers built on Linux, Solaris, and Windows platforms using the Chef Configuration management tool.

Created Deployment notes in collaboration with the Local SCM team and released Deployment instructions to Application Support.

Environments: Docker, Jenkins, GitHub, Nexus, SonarQube, Nagios, Python, CI/CD pipeline, Chef, Red Hat Enterprise Linux (RHEL), AWS, EC2, RDS, S3, IAM, VPC, EKS, Bamboo, CloudWatch, Prometheus, ELK, Apache Web Server, WebSphere Application Server, Sun Solaris, Test Kitchen, ChefSpec, Knife, Docker-Maven plugin, Maven, Git, Ant, JIRA.

Client: Ford Motors, Dearborn, Michigan July 2016 – Mar 2018

Role: Linux Administrator

Project Title: Ford Vehicle Data Integration and Monitoring System

Project Description: Managed and maintained the Linux-based infrastructure for Ford’s Vehicle Data Integration system, enabling real-time telemetry and diagnostic data collection from connected vehicles. Ensured high availability, security, and performance of Red Hat Enterprise Linux servers supporting critical data pipelines. Automated deployment, monitored system health, and resolved incidents to support Ford’s predictive maintenance and vehicle analytics initiatives.

Responsibilities:

Deployed and managed AWS infrastructure using CloudFormation, EC2, and S3; worked across SaaS, PaaS, and IaaS environments.

Built Docker-based deployment pipelines using Jenkins; integrated Git for source control and implemented CI/CD with Maven, Python scripts, and nightly builds.


