
Customer Service Configuration Management

Location:
Irving, TX
Salary:
150-160
Posted:
June 13, 2025

Contact this candidate

Resume:

RAMESH KUMAR

*************@*****.*** 940-***-**** Texas

Cloud Architect (GCP & AWS)

PROFESSIONAL SUMMARY

Over 19 years of overall IT experience, including 15 years as a skilled AWS, Azure, and GCP Cloud DevOps Architect. Hands-on experience supporting, automating, and optimizing mission-critical deployments in AWS, leveraging configuration management, CI/CD, and DevOps processes.

Experience in DevOps and cloud solutions (CI/CD) design and implementation, covering Software Configuration Management, Change Management, build automation, Release Management, and AWS/Azure/PCF/DevOps work in large and small software development organizations.

Experience using build automation tools and Continuous Integration concepts with tools like Gradle, Maven, and Jenkins.

Hands-on experience with Java and Java frameworks (Spring Boot, Spring Batch, Spring Cloud, etc.).

Strong experience in microservice design and integration implementation.

Deep understanding of architecture and design patterns.

Expertise in designing and implementing event-driven architectures and integrating data, APIs, and systems.

Experience in integration with multiple COTS applications and databases.

Set up Continuous Integration for major releases in Jenkins and Azure DevOps.

5+ years of experience working with Linux.

Familiarity with Kubernetes and cloud platforms like AWS, Azure, OCI, GCP.

Experience in infrastructure, networking, storage (databases, file systems, block storage, blob storage), and DevOps scripting.

Expertise in DGX Cloud development.

Customer Service: experience in customer service/support.

Experience with SLURM and HPC (High-Performance Computing).

Proficiency in cloud services, skilled in GCP (Virtual Private Cloud, Load Balancing, Cloud IAM, Compute Engine, Cloud Storage, Cloud Functions) and AWS.

Led migrations from legacy systems to GCP.

Deep experience with Google Cloud Platform (GCP) components, including Cloud Run, GKE (Google Kubernetes Engine), GCS (Google Cloud Storage), DataFlow, BigQuery, Pub/Sub, Cloud SQL, and more. Strong understanding of cloud services such as IAM (Identity and Access Management), Networking, Cloud Spanner, Cloud Storage, Pub/Sub

Hands-on experience in migration projects.

Lead critical solution design decisions and architect systems to meet integration, scalability, and security requirements. Design and implement event-driven architectures, data integration platforms, event streaming architectures, and API-driven systems.

Utilize a deep understanding of architecture and design patterns to create highly available, reliable, and efficient systems. Design integrations for multiple COTS (Commercial off-the-shelf) applications and multiple databases (both SQL and NoSQL).

Architect and design API-centric solutions and microservices. Ensure the design of APIs, services, and integrations follows industry best practices and addresses specific business requirements.

Collaborate effectively with cross-functional teams, gathering integration requirements, creating specification documents, mapping specifications, and providing high-level and detailed design documentation. Lead technical teams during design and implementation phases.

Lead a team of cloud engineers and drive the execution of GCP migrations, ensuring that all migration activities are completed on time and in line with the defined strategy.

Experience in network diagrams, schematics, and documentation.

Experience troubleshooting and resolving network and infrastructure issues.

TECHNICAL SKILLS

DevOps Tools: Git, GitHub, SVN, Maven, ANT, Gradle, Nexus Repository, SonarQube, Jenkins, Puppet, Chef, Ansible, Docker, Kubernetes, Nagios.

Infrastructure automation: Terraform

Cloud and Infrastructure: AWS, Azure, GCP, PCF and On-Premises

Databases: RDBMS, MySQL, Oracle, Teradata, PostgreSQL, and DB2

Scripting Languages: Bash, Perl, Python, Ruby, Golang, Unix shell scripting, Terraform.

Container Tools: Kubernetes, Docker, OpenShift.

CI/CD Tools: Azure DevOps, Jenkins/Hudson, and GitHub Actions

Monitoring 24/7 Tools: Grafana, Nagios, Splunk, AWS CloudWatch, ELK, AppDynamics, Azure Application Insights.

Ticketing/Bug Tracking: Azure DevOps (ADO), JIRA, JUnit, JMeter, TestFlight, ServiceNow.

Networking: TCP/IP, NFS, DNS, VPN, WAN, HTTP, LAN, FTP/TFTP, VMware, Nexus switch, IP Networking, F5 load balancer.

Application Servers and Web Servers: WebLogic, WebSphere, Tomcat, Apache HTTP Server, and IIS.

Operating Systems: Windows, Linux/Unix, macOS.

CERTIFICATION/EDUCATION

Solutions Architect Associate certified by Amazon Web Services.

Microsoft Azure architect technologies certified by Microsoft Corporation.

Oracle Business Intelligence 10.1.3 (1Z0-526), certified by Oracle Corporation.

SCJP – Sun Certified Java Professional

ITIL - V3 Certified

Project Management Professional (PMP) – Certified

Master of Computer Applications (MCA), Madurai Kamaraj University, Tamil Nadu, India – 2002

PROFESSIONAL EXPERIENCE

Cloud Architect (GCP & AWS), ADP, Dallas, Texas

March 2024 – Present

Implemented AWS solutions using EC2, S3, RDS, IAM, Redshift, Lambda, Security Groups, EBS, Elastic Load Balancer, Auto Scaling groups, SNS, optimized volumes, and CloudFormation templates.

Designed, configured and managed public/private cloud infrastructures utilizing Amazon Web Services (AWS), including EC2, Virtual Private Cloud (VPC), public and private subnets, Security Groups, route tables, Elastic Load Balancer, CloudWatch and IAM.

Design highly available, fault-tolerant, multi-region architectures to meet performance and DR objectives.

Lead end-to-end migration of complex multi-tier applications to cloud environments with minimal downtime.

Evaluate legacy infrastructure and define cloud adoption roadmaps and modernization strategies.

Automate infrastructure provisioning and configuration using Terraform, AWS CloudFormation, and GCP Deployment Manager.

Maintain version-controlled, reusable infrastructure templates for consistent deployments.

Implement cloud security best practices, including IAM policies, encryption, firewall rules, and audit logging.

Ensure compliance with standards like HIPAA, SOC 2, GDPR, and internal governance policies.

Monitor and analyze cloud usage to identify cost-saving opportunities.

Implement auto-scaling, rightsizing, reserved instances, and lifecycle policies to control cloud spend.

Work closely with DevOps, development, security, and data teams to align cloud solutions with business goals.

Provide architecture reviews, mentorship, and guidance to internal teams and stakeholders.

Drive continuous improvement by applying industry best practices to enhance performance and scalability.

Design and build robots, including hardware and software systems; work on sensors, actuators, power supply, and control systems.

Develop software that controls robotic systems, including algorithms for robot navigation, manipulation, perception, and decision-making.

Design the systems through which robots move, act, and react.

Develop AI algorithms.

Work on making AI systems understand, interpret, and generate human language.

Design systems with multiple autonomous agents that can interact and collaborate.

Migrated 9 microservices to Google Cloud Platform and have one more big release planned with 4 more microservices.

Experience using Tomcat, JBoss/WebLogic, and IBM WebSphere application servers for deployment.

At GTRI, implemented infrastructure as code (IaC) using tools like Terraform and AWS CloudFormation to automate the provisioning and management of cloud resources.

Branching, Tagging, Release Activities on Version Control Tools: SVN, GitHub.

Designed and created multiple deployment strategies using CI/CD Pipeline and configuration management tools like Spinnaker with remote execution to ensure zero downtime and shortened automated deployments.

Used DataDog to monitor a variety of systems and applications, including servers, databases, and cloud services, in order to identify and troubleshoot performance issues.

Installation, configuration and administration of Docker and Kubernetes, managing Kubernetes using OpenShift.

Developed CI/CD/CO pipeline libraries to deploy Helm charts in AWS EKS using Fargate profiles.

Responsible for site reliability engineering and automation of monitoring and troubleshooting of Red Hat applications using an ALM tool suite including New Relic, Grafana, Prometheus, Sumo Logic, and PagerDuty.

Work closely with executives, managers, and leaders in scaling and developing business value streams, launching Agile Release Trains (ARTs), and integrating Development with Operations (DevOps/DevSecOps) to set up the delivery pipeline through to releases.

Deployed applications in Red Hat OpenShift Container Platform and monitored logs using Datadog.

Streamlined application deployment on cloud-native platforms using OpenShift, driving operational efficiency and cost-effectiveness.

Installation, configuration and administration of IBM WebSphere Application Server V7.0 on Linux and AIX.

Worked on creating continuous delivery pipelines using Spinnaker and Kubernetes.

Design of cloud architectures for customers looking to migrate or develop new PaaS, IaaS, or hybrid solutions utilizing Amazon Web Services (AWS). Used the Pandas library for statistical analysis. Worked on Python OpenStack APIs.

Installation, configuration and management of RDBMS and NoSQL tools such as MySQL.

Implemented AWS high availability using AWS Elastic Load Balancing (ELB), which balanced traffic across instances in multiple Availability Zones.

Wrote CloudFormation templates for Docker and ECS clusters with EC2 and Fargate launch types.
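
As an illustrative sketch only (cluster name, capacity providers, and outputs are examples, not the actual production templates), a CloudFormation template for an ECS cluster with Fargate capacity providers can be built as a Python dict and serialized to JSON:

```python
import json

def ecs_cluster_template(cluster_name: str) -> dict:
    """Build a minimal CloudFormation template (as a dict) for an
    ECS cluster with Fargate capacity providers. Illustrative only:
    real templates would add task definitions, services, and IAM roles."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": f"ECS cluster {cluster_name} (EC2/Fargate)",
        "Resources": {
            "Cluster": {
                "Type": "AWS::ECS::Cluster",
                "Properties": {
                    "ClusterName": cluster_name,
                    "CapacityProviders": ["FARGATE", "FARGATE_SPOT"],
                },
            }
        },
        "Outputs": {
            "ClusterArn": {"Value": {"Fn::GetAtt": ["Cluster", "Arn"]}}
        },
    }

# Serialize for use with `aws cloudformation deploy --template-file ...`
template_json = json.dumps(ecs_cluster_template("demo-cluster"), indent=2)
```

Generating templates programmatically like this keeps them version-controlled and lets one function stamp out consistent dev/test/prod variants.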

Conducted data analysis and visualization using DataDog to identify patterns, trends, and anomalies in large datasets, helping to inform business decisions.

Created and configured pipelines for Canary Deployments and implemented New Relic monitoring for Canary deployments, ensuring smooth and reliable testing of new features.

Senior Cloud DevOps Engineer, Verizon, Dallas, Texas

Feb 2022 - March 2024

•Designed and developed a robust Virtual Network infrastructure on GCP.

•Provisioned GCP resources using GCP Resource Manager (Cloud Deployment Manager) templates and Terraform.

•Managed the transition of services to GCP, covering service and network architecture design, data migration, and automation.

•Created and maintained GCP landing zones and GCP Blueprints setups.

•Constructed cloud infrastructure utilizing Terraform scripts.

•Developed complex Ansible playbooks for the deployment of Docker engine and Docker swarm clusters on GCP.

•Integrated Cosmos DB with GCP Cloud Functions.

•Enhanced hybrid cloud storage capabilities by integrating Google Cloud Dataflow with on-premises storage systems and GCP Cloud Storage.

•Leveraged Google Cloud Dataflow's features for data compression and encryption to optimize data transfer performance.

•Developed a custom automated Python script for migrating on-premises data to GCP Cloud Storage.
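
A minimal, network-free sketch of the planning half of such a migration script (the upload itself would use the google-cloud-storage client's `Bucket.blob(name).upload_from_filename(path)`; the `migrated` prefix is a made-up example):

```python
from pathlib import Path

def plan_uploads(local_root: str, prefix: str = "migrated") -> list:
    """Walk a local directory tree and map each file to a destination
    GCS blob name under `prefix`. Returns (local_path, blob_name)
    pairs; the actual upload step is deliberately left out so this
    sketch runs anywhere without credentials."""
    root = Path(local_root)
    plan = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            # Use POSIX-style separators for blob names regardless of OS
            blob_name = f"{prefix}/{path.relative_to(root).as_posix()}"
            plan.append((str(path), blob_name))
    return plan
```

Separating the planning step from the transfer step makes the migration resumable and easy to dry-run before touching the bucket.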

•Executed ETL operations using GCP Dataflow.

•Implemented Google Cloud Disaster Recovery for robust disaster recovery solutions.

•Conducted comprehensive DR testing exercises using Google Cloud Disaster Recovery.

•Implemented stringent security protocols in GCP public cloud.

•Developed Google Cloud Policy solutions to validate whether GCP resources meet defined standards.

•Maintained comprehensive documentation on system configurations and security guidelines.

•Architected and developed serverless applications using GCP Cloud Functions.

•Implemented event-driven architectures leveraging GCP Cloud Functions.

•Developed and maintained Python scripts for automating data tasks using GCP Cloud Functions.

•Engineered and managed CI/CD pipelines using Google Cloud Build and GCP Resource Manager templates.

•Established Google Cloud Build CI pipelines.

•Designed and maintained CI/CD pipelines for Node.js applications.

•Created Google Cloud Build Pipelines with YAML.

•Specialized in GCP Kubernetes Engine (GKE).

•Worked with GCP Kubernetes Engine (GKE) for managing Docker Containers.

•Deployed application code using Kubernetes tools

Environment: GCP Resource Manager (Cloud Deployment Manager) templates, Virtual Machines, Cloud Storage, Cloud Functions, Google Cloud Build, GCP Dataflow, GCP BigQuery, Cloud Security Command Center, Cloud Key Management Service, Identity-Aware Proxy (Cloud IAP), GCP Kubernetes Engine (GKE), Terraform, Ansible, Docker, Docker Swarm, Cosmos DB, Google Artifact Registry, Kubernetes, Helm, minikube, YAML, Python 3.6, Google Cloud SDK, and GCP Blueprints.

Sr Cloud Architect, Conduent Business Services, Atlanta, GA

May 2019 to Feb 2022

Conduent Inc. is an American business services provider company headquartered in Florham Park, New Jersey. It was formed in 2017 as a divestiture from Xerox. The company offers digital platforms for businesses and governments. As of 2021, it had over 31,000 employees working across 22 countries. Conduent delivers digital business solutions and services spanning the commercial, government and transportation spectrum – creating exceptional outcomes for its clients and the millions of people who count on them.

Leveraged various AWS solutions like EC2, S3, IAM, EBS, Elastic Load Balancer (ELB), Security Groups, Auto Scaling and RDS in CloudFormation JSON templates.

Defined AWS Lambda functions for making changes to Amazon S3 buckets and updating Amazon DynamoDB tables.
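
A hedged sketch of that pattern (the field names and table schema are illustrative, not the actual production code); the table object is injected so it can be a boto3 DynamoDB Table inside Lambda or a fake in tests:

```python
def handle_s3_event(event: dict, table) -> int:
    """Minimal Lambda-style handler: for each S3 record in the event,
    write the object's key, bucket, and size into a DynamoDB-like
    table. `table` would be boto3.resource("dynamodb").Table(...)
    in a real deployment; attribute names here are examples."""
    count = 0
    for record in event.get("Records", []):
        s3 = record["s3"]
        table.put_item(Item={
            "key": s3["object"]["key"],
            "bucket": s3["bucket"]["name"],
            "size": s3["object"].get("size", 0),
        })
        count += 1
    return count
```

Injecting the table dependency keeps the event-parsing logic unit-testable without AWS credentials.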

Created snapshots and Amazon machine images (AMI) of the instances for backup and created Identity Access Management (IAM) policies for delegated administration within AWS.

Hands-on experience spinning up new infrastructure using Terraform templates.

Experienced in Cloud automation using Terraform templates.

Experience in setting up the build and deployment automation for Terraform scripts.

Developed a Python script that uses Artifactory access tokens to transfer images from the Docker registry.
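
A sketch of the token-authenticated registry access (host, repository, and tag are made-up examples; actually sending the request with `requests.get(url, headers=headers)` is omitted to keep this network-free):

```python
def registry_manifest_request(registry: str, repo: str, tag: str, token: str):
    """Build the URL and headers for a Docker Registry v2 manifest GET,
    authenticated with a bearer token (e.g. an Artifactory access
    token). Only request construction is shown; the caller would pass
    the result to an HTTP client."""
    url = f"https://{registry}/v2/{repo}/manifests/{tag}"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.docker.distribution.manifest.v2+json",
    }
    return url, headers
```

The `/v2/<name>/manifests/<reference>` path and the manifest media type follow the Docker Registry HTTP API v2 convention.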

Automated creating Lambda functions in AWS to check VPC endpoints and Elastic Beanstalk versions using Terraform.

Using DevOps tools such as Chef, Ansible, Jenkins, Maven, ANT, SVN, GIT, and Docker.

Experience with container-based deployments using Docker, working with Docker images, Docker Hub and Docker-registries and Kubernetes.

Involved in development of test environment on Docker containers and configuring the Docker containers using Kubernetes.

Installation, Administration, Upgrading, Troubleshooting Console Issues & Database Issues for AppDynamics.

Identified critical applications for system resource utilization (CPU, memory, threads, etc.); JVM heap size was monitored using AppDynamics.

Worked with various PCF components like the OAuth2 server and login server to provide identity management, and the Cloud Controller to configure deployment of applications. Worked with the blob store for storing and managing application code packages and buildpacks.

Implemented microservices on the PCF platform built upon Spring Boot services. Managed the lifecycle of containers and processes.

Good knowledge of Infrastructure as Code using Terraform and CloudFormation; worked on creating Terraform templates for dev, test, staging, and production.

Created Terraform modules to create instances in AWS and automated process of creation of resources in AWS using Terraform.

Experience writing HTTP RESTful web services and SOAP APIs in Golang.

Strong working knowledge of developing RESTful web services and microservices using Golang.

Experience writing data APIs and multi-server applications to meet product needs using Golang.

Ensured successful architecture and deployment of enterprise-grade PaaS solutions using Pivotal Cloud Foundry (PCF), as well as proper operation during initial application migration and new development.

Used AWS Route53 to route traffic between different Availability Zones. Deployed and supported Memcached/AWS ElastiCache, then configured Elastic Load Balancing (ELB) for routing traffic between zones.

Used IAM to create new accounts, roles, groups, and policies, and developed critical modules like generating Amazon Resource Names (ARNs) and integration points with DynamoDB and RDS.

Automated various infrastructure activities like Continuous Deployment, Application Server setup, Stack monitoring using Ansible Playbooks and has integrated Ansible with Concourse CI.

Wrote CI/CD pipelines in Groovy scripts to enable end-to-end setup of build and deployment using Concourse CI.

Maintained JIRA for tracking and updating project defects and tasks ensuring successful completion of tasks in a sprint.

Environment: AWS, Azure, PCF, S3, EC2, ELB, IAM, RDS, VPC, Data Factory, Databricks, SES, SNS, EBS, windows, Cloud Trail, Auto Scaling, Chef, Jenkins, Maven, JIRA, Linux, Java, Kubernetes, OpenShift, Terraform, Docker, AppDynamics, Nagios, ELK, SonarQube, Nexus, JaCoCo, JBOSS, Nginx, PowerShell, Bash, Ruby, and Python.

DevOps and Azure Cloud Engineer, Tenet HealthCare, Dallas TX

Jan 2017 – May 2019

Description: Tenet Healthcare Corporation is a for-profit multinational healthcare services company based in Dallas, Texas, United States. Through its brands, subsidiaries, joint ventures, and partnerships, including United Surgical Partners International, the company operates 65 hospitals and over 450 healthcare facilities. Tenet's integrated system across the greater Los Angeles County and Orange County area includes four acute care hospitals, ambulatory surgery centers, clinics, and ancillary services.

Worked on network services in the cloud as part of the Infrastructure-as-a-Service offering. Created VPCs, subnets, and VPC peering connections across different VPCs.

Followed security best practices for applications deployed on AWS and Azure clouds.

Infrastructure automation using Terraform.

Maintained highly available production environments on AWS using load balancers, Auto Scaling groups, Route53, and RDS with Multi-AZ.

Created snapshots, Custom AMIs, golden images using pipelines and managing EBS volumes.

Worked on the RDS service to manage different databases and scheduled maintenance of snapshots at rest.

Knowledge of developing microservices using Golang.

Implemented some REST services using Golang with a microservices architecture.

Access management using IAM users and roles; defined custom JSON policy statements for different identities.
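
A sketch of the custom-JSON-policy-statement approach, assuming a read-only S3 use case (the bucket name and action list are examples, not actual policies):

```python
def readonly_bucket_policy(bucket: str) -> dict:
    """Compose a least-privilege IAM policy document granting read-only
    access to one S3 bucket. Illustrative: real policies would be
    tailored per identity and may add conditions."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    # ListBucket applies to the bucket; GetObject to its keys
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

policy = readonly_bucket_policy("app-logs")
```

Generating policies from a function rather than hand-editing JSON keeps the statements consistent across identities.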

Worked on AWS S3 simple storage and applied S3 lifecycle rules to transition objects between storage classes for cost efficiency.
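
A sketch of one such lifecycle rule, shaped the way boto3's `put_bucket_lifecycle_configuration` expects rules (the prefix and day thresholds are illustrative, not the actual retention policy):

```python
def tiering_lifecycle_rule(prefix: str) -> dict:
    """One S3 lifecycle rule that tiers objects down over time:
    Standard-IA at 30 days, Glacier at 90, expiry at 365. A list of
    such rules goes under LifecycleConfiguration={"Rules": [...]}."""
    return {
        "ID": f"tier-{prefix.rstrip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 365},
    }
```

Each prefix gets its own rule, so hot paths can keep Standard storage while logs and archives tier down automatically.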

Worked on AWS ECS Fargate cluster setup; created task definitions and services.

Automated deployment to ECS using Jenkins pipelines.

Monitoring of Kubernetes using ELK.

Infrastructure-as-Code automation using Terraform on the AWS public cloud platform.

CI/CD Pipeline setup using Jenkins and Azure pipeline scripts.

Configuring the Docker containers and creating Docker files for different environments.

Experienced in GitLab CI and Jenkins for CI and end-to-end automation of the build and deployment (CI/CD) process.

Used Docker containers to eliminate a source of friction between development and operations.

Installation, Configuration, and administration of VMware.

Managed Azure DevOps build and release pipelines. Set up new repos and managed permissions for various Git branches. Deployed microservices, including provisioning the Azure environment.

Developed and Maintained the SSAS cubes for planners.

Environment: AWS, Azure, AppDynamics, Ansible, Puppet, Red Hat, windows, VMware, GIT, Shell Scripting, Jenkins, SonarQube, Nexus.

DevOps Engineer and Middleware Admin, CitiBank, Irving TX,

May 2015 – Jan 2017

Description: Citibank, N. A. is the primary U.S. banking subsidiary of financial services multinational Citigroup. Citibank was founded in 1812 as the City Bank of New York and later became the First National City Bank of New York. The bank recently introduced new development technologies and their corresponding operational processes.

Interacted with client teams to understand client deployment requests.

Coordinate with Development, Database Administration, QA, and IT Operations teams to ensure there are no resource conflicts.

Worked closely with project management to discuss code/configuration release scope and how to confirm a successful release.

Build, manage, and continuously improve the build infrastructure for global software development engineering teams including implementations of build Scripts, continuous integration infrastructure and deployment tools.

Managed code migration from Git and StarTeam/Subversion repositories.

Implemented continuous integration using Jenkins.

Worked in container-based technologies like Docker, Kubernetes and OpenShift.

Installed, configured, and managed monitoring tools such as Splunk, Nagios, and Graphite for resource, network, and log-trace monitoring.

Using Jira, Confluence as the project management tools.

Configured application servers (Apache Tomcat) to deploy the code.

Installation, configuration, and setup of the Docker container environment.

Created a Docker image for a complete stack, created a mechanism via a Git workflow to push code into the container, and set up a reverse proxy to access it.

Used Kubernetes to deploy, load balance, scale, and manage Docker containers across multiple namespaces and versions.

Experience in cloud infrastructure management and implementation, with working experience on various Azure services like Compute (Web Roles, Worker Roles), Azure Websites, Caching, SQL Azure, NoSQL, Storage, Network services, Azure Active Directory, Scheduling, Auto Scaling, and PowerShell automation.

Deployed Azure IaaS virtual machines (VMs) and PaaS role instances (Cloud Services) into secure VNets and subnets; designed VNets and subscriptions to conform to Azure network limits.

Prototyped a CI/CD system with Git on GKE, utilizing Kubernetes and Docker as the runtime environment for the CI/CD systems to build, test, and deploy.

Designed and Developed Bamboo Build deployments on Docker containers.

Worked on installation and configuration of Chef server and Chef-client (Nodes).

Repaired broken Chef Recipes and corrected configuration problems with other Chef objects.

Installed applications and load balance packages on different server using Chef.

Developed unit and functional tests in Python and Ruby.

Developed and maintained Shell scripts for build and release tasks.

Environment: WebLogic, WebSphere GIT, Maven, Gradle, Python, Ruby, Bamboo, Shell, Jenkins, JIRA, Azure, Kubernetes, Docker.

Middleware Admin (WebLogic), Loblaw Companies Limited – New York

Jan 2014 – April 2015

Interacted with client teams to understand client deployment requests.

Coordinate with Development, Database Administration, QA, and IT Operations teams to ensure there are no resource conflicts.

Worked closely with project management to discuss code/configuration release scope and how to confirm a successful release.

Build, manage, and continuously improve the build infrastructure for global software development engineering teams including implementations of build Scripts, continuous integration infrastructure and deployment tools.

Deployed the Build Packages into Application Server- WebLogic.

Worked on the ServiceNow tool for ticket tracking and monitoring our work; handled different types of incidents, requests, and change tasks.

Environment: WebLogic, WebSphere, PCF, GIT, Maven, ServiceNow.

Sr Cloud Engineer, GE Health Care (ITO) USA

Jan 2011 to Jan 2014

Design scalable, reliable, and cost-effective AWS cloud infrastructure solutions based on business requirements. Architect solutions that leverage AWS services such as EC2, S3, RDS, VPC, IAM, and Lambda. Define and implement best practices for cloud architecture, including fault tolerance, high availability, and disaster recovery.

Implement automation scripts to streamline deployment and management processes. Develop and maintain monitoring systems to ensure the health and performance of cloud infrastructure.

Ensure cloud infrastructure complies with industry standards and regulations. Implement security best practices, including identity and access management, encryption, and network security.

Work closely with cross-functional teams, including software developers, operations, and product managers, to support business needs. Provide guidance and support for cloud-related issues and initiatives.

Stay updated with the latest cloud technologies and best practices. Proactively identify areas for improvement in existing infrastructure and processes, and implement enhancements to optimize performance and cost-efficiency.

Software Engineer, Capital One

Nov 2004 – Jan 2011

Work closely with cross-functional teams, including designers, other developers, and stakeholders, to deliver high-quality software solutions.

Troubleshoot and resolve performance and security issues identified during testing, ensuring applications run efficiently and securely.

Design and implement efficient APIs to facilitate communication between different components of the application.

Develop user-friendly interfaces, ensuring responsiveness and cross-platform optimization for mobile devices.

Create and manage servers and databases, ensuring robust functionality and seamless integration with front-end elements.

Create and maintain scalable and secure full-stack applications, utilizing technologies such as Java, JavaScript, HTML, PHP, and C#.


