
Spring Boot Cloud Solutions

Location: McKinney, TX
Posted: July 31, 2025


Rohini Ravi

+1-551-***-**** ***********@*****.*** Plano, Texas

Summary

AWS Technical Developer & Architect with 13+ years of professional experience advancing team velocity by introducing common branching strategies, automating build/integration checks, and simplifying deployment patterns. Creates, monitors, and maintains scalable, cost-effective cloud environments and helps companies define and meet business objectives. Designed and built cloud service models, including Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service, using CloudFormation, Terraform, Python (Boto3), Spring Boot, and Go, with Kubernetes for orchestration and CloudFront, Cognito, and Fargate serverless computing services.

Technical Skills

Programming Languages: Python, Java, SQL

Scripting: Shell, Bash, PowerShell, Python, Ruby

Versioning Tools: GitHub, Git, Bitbucket, SVN

Automation Tools: Jenkins

Infrastructure as Code: CloudFormation, Terraform, Python (Boto3)

Configuration Management Tools: Ansible, Chef, Puppet

Databases: MySQL, MSSQL, PostgreSQL, Oracle, MongoDB

Virtualization: VMware, Docker, Kubernetes

Monitoring Tools: CloudWatch, Splunk, Datadog

Cloud Technologies: AWS, Azure

Bug Tracking Tools: Bugzilla, Atlassian JIRA, Remedy

Repository Managers: Nexus, JFrog

Methodologies: Agile, Waterfall, Scrum

Operating Systems: Linux, Unix, Windows, macOS

Professional Experience

USAA Jan 2024 to Present

AWS Architect

AWS Developer/Architect for a microservices environment; team of 10

Tools/Languages Utilized: GitHub, SonarQube, AWS, JFrog, Datadog, Spring Boot, JDK, Go, CloudFront, ELK logs, qTest, Pitest

●Built a solution for a microservices-based environment, developing code in a shared JAR that integrates downstream APIs' metrics into Datadog for monitoring (a minimal sketch of this pattern follows this list)

●Used GitHub integrated with AWS Lambda services to run pipelines; logs were monitored in CloudWatch and ELK

●Worked on the JFrog repository for artifact management

●Used JDK 17 for the latest Spring Boot code and brought unit test coverage and Pitest mutation coverage to 100%

●Resolved code smells and delivered code merged to master in a shared JAR repository, so that metrics such as ELK availability and latency can be monitored in Datadog through the shared JAR called by the consuming API

●Wrote Go code to perform a similar downstream-metrics integration with Datadog for monitoring purposes

●Used IntelliJ plugins for Go and Terraform; the Terraform configuration creates the infrastructure environment and a Datadog dashboard, and enables the Go code to send metrics to Datadog using the distribution metric type

●Delivered stories on time, with the qTest link updated on each story after merging code to the master/develop branch

●Work with business SMEs and tech leads to develop in the AWS microservices-based architecture and deliver JIRA stories as part of SAFe methodology

●Work on the Lambda wrapper and pod orchestration as part of the pipelines
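
The downstream-metrics pattern above is proprietary (a shared JAR in Java and a Go port), so the following is only a minimal, hypothetical Python sketch of the same idea: reporting latency as a Datadog distribution metric and availability as counters via the local DogStatsD agent. The metric names and tags are illustrative, not the actual project's naming scheme.

    import time

    from datadog import initialize, statsd  # datadogpy client

    # Assumes a local Datadog agent listening for DogStatsD traffic.
    initialize(statsd_host="127.0.0.1", statsd_port=8125)

    def call_downstream(invoke):
        """Time a downstream API call and report latency and availability."""
        start = time.monotonic()
        try:
            response = invoke()
            # DISTRIBUTION metric type, as in the Go bullet above.
            statsd.distribution("downstream.latency_ms",
                                (time.monotonic() - start) * 1000,
                                tags=["api:accounts", "env:dev"])
            statsd.increment("downstream.success", tags=["api:accounts"])
            return response
        except Exception:
            statsd.increment("downstream.failure", tags=["api:accounts"])
            raise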

Accenture Feb 2022 to Dec 2023

AWS Data Engineer

Migration from a legacy to a modern microservices-based architecture.

Client: Ecopetrol Role: AWS Engineer Team of 20

Environment: Angular, serverless, GitHub, SonarQube, Docker, DynamoDB, AppSync, CloudFront

●Worked on a serverless application model solution to migrate data from Azure to AWS and designed applications using the Fargate model to run Docker container images

●Practical implementation of AWS cloud technologies including EC2, EBS, S3, VPC, RDS, SES, ELB, EMR, ECS, CloudFront, CloudFormation, ElastiCache, CloudWatch, Redshift, Lambda, SNS, DynamoDB, and Kinesis

●Created AWS launch configurations based on customized AMIs and used them to configure Auto Scaling groups; implemented AWS solutions using EC2, S3, RDS, VPC, DynamoDB, Step Functions, Lambda, Glue, Glue Data Catalog, EventBridge, CloudFront, Cognito, AppSync, Lake Formation, ECR, ECS, EKS, Route 53, EBS, Elastic Load Balancer, and Auto Scaling groups

●Responsible for the proper functioning of DEV/TEST/STG/PROD environments for these applications

●Installed, configured, and administered the CI tool Jenkins for automated builds

●Worked extensively with Jenkins for continuous integration and end-to-end automation of all builds and deployments; created Puppet manifests and modules to automate system operations

●Created monitors, alarms, and notifications for EC2 hosts using CloudWatch (a sample alarm sketch appears after this list)

●Deployed and monitored scalable infrastructure on AWS, managing the environment with Puppet; designed and implemented a CI/CD pipeline system: configuring Jenkins servers and nodes and TFS, creating required scripts (Python Boto3), CodeDeploy, and CodeBuild, and creating/configuring VMs (Windows/Linux) and FSx

●Monitored servers using Nagios, Splunk, CloudWatch, and ELK

●Responsible for configuring, integrating, and maintaining all environment and production PostgreSQL databases within the organization

●Worked with Terraform to automate VPCs, ELBs, security groups, SQS queues, and S3 buckets, and continued replacing the rest of the infrastructure (a boto3 analogue of these resources is also sketched after this list)

●Worked with key Terraform features, IaC, execution plans, resource graphs, state files, and configuration files, and extensively used Auto Scaling launch configuration templates for launching EC2 instances while deploying microservices

●Built a Terragrunt project to keep the Terraform configuration DRY across multiple Terraform modules; worked with Terraform templates and modules to automate virtual machines and deployed virtual machine scale sets in the production environment

●Experience developing scalable solutions using the NoSQL databases Cassandra and MongoDB

●Created Docker images using a Dockerfile; worked on Docker container snapshots, removing images, and managing Docker volumes

●Administered developer access with IAM by creating groups and roles, using trusted relationships for services, and administering proper multi-tenant applications

●Created ANT scripts and extended existing ANT scripts for deployment of applications to Hudson; virtualized servers using Docker for test- and dev-environment needs

●Automated configuration using Docker containers; coordinated between AppDynamics support and application owners by raising CRs to find root causes

●Created multiple Python, Bash, shell, and Ruby scripts for various application-level tasks; monitored applications and servers through Nagios; administered Apache and Nginx web servers
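
As context for the CloudWatch bullet above, here is a minimal, hypothetical boto3 sketch of a CPU alarm on an EC2 host with an SNS notification action; the alarm name, instance ID, and topic ARN are placeholders, not values from the actual project.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm when average CPU over two consecutive 5-minute periods exceeds 80%.
    cloudwatch.put_metric_alarm(
        AlarmName="web-1-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )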
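
The infrastructure in the Terraform bullet was declared in .tf files that are not reproduced here; as an illustrative analogue only, this hypothetical boto3 sketch creates the same resource types (VPC, security group, SQS queue, S3 bucket) with placeholder names.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    sqs = boto3.client("sqs", region_name="us-east-1")
    s3 = boto3.client("s3", region_name="us-east-1")

    # Network: a VPC and a security group allowing inbound HTTPS.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
    sg_id = ec2.create_security_group(
        GroupName="app-sg", Description="App tier", VpcId=vpc_id
    )["GroupId"]
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
    )

    # Messaging and storage.
    sqs.create_queue(QueueName="orders-queue")
    s3.create_bucket(Bucket="example-artifacts-bucket")  # us-east-1: no LocationConstraint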

Wipro Sep 2018 to Feb 2022

Data Engineer/Technical Lead

Clients: Energy Australia, Levis, Northern Trust, Mercedes, Philips

Environment: AWS, Boto3, CloudFormation templates, Jenkins, Ansible, Terraform, Python, shell scripts, Groovy, NLP, NLU, Chatbot, Classifier

●Created cloud architectures, including private, public, and hybrid architectures and IaaS, PaaS, and SaaS models, for any domain

●Deployed Wipro's in-house Chatbot and Classifier framework solutions in the AWS Cloud, exposed through ServiceNow as a chat assistant for one of the customers

●Worked on service catalog management for incident creation, knowledge documents, and other items (a minimal incident-creation sketch follows this list)

●Experience defining new architectures and ability to drive projects from an architecture standpoint

●Experience driving enterprise-grade cloud services adoption across medium-to-large organizations with proven success; identified continuous service improvement and automation areas and prepared strategy scripts that help automate manual tasks and improve service quality and reliability

●Expertise in analysis, design, and development of ETL programs per business specifications, and assistance in troubleshooting complex business performance issues

●Experience architecting, deploying, and managing cost-effective and secure AWS environments across multiple zones and regions, leveraging AWS services such as EC2, S3, RDS, and VPC

●SME for technical and architectural Informatica DW/BI; domain experience in healthcare, logistics, life sciences, pharmacy, clinical trials, telecom, finance/investment, operations, process improvement, and re-engineering

●Expert in the ETL tool Informatica, including components such as PowerCenter, PowerMart, PowerConnect, Designer, Workflow Manager, Workflow Monitor, Repository Manager, and Repository Server Administration Console

●Hands-on experience using Informatica Cloud services such as data synchronization, data replication, mapping configuration, and object migration from Dev to SIT, UAT, and Prod environments

●Hands-on experience performing migrations from on-premises to public cloud solutions; successfully managed, developed, and implemented various Cloud/DWBI projects covering end-to-end (HLD/LLD) development and maintenance, model optimization, workflow schedule automation, and technical solution delivery using the Tableau and OBIEE analytical tools

●Expertise in technical architecture and data modeling, and re-design of existing models for strategic solutions

●Proven track record of solving performance issues in any kind of environment using innovative methods, strategic solutions, and ideas

●Collaborate with product development teams and senior designers to develop architectural requirements that ensure client satisfaction with the product

●Ability to build relationships and hold business-level discussions with the business and technical-level discussions with internal and external technical team members
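
As a companion to the service catalog bullet above, here is a minimal, hypothetical sketch of creating an incident through the ServiceNow Table API; the instance URL, credentials, and field values are placeholders, and the actual framework integration is not public.

    import requests

    INSTANCE = "https://example.service-now.com"

    # POST to the Table API to open an incident record.
    resp = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=("integration_user", "********"),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={
            "short_description": "Chat assistant: user-reported outage",
            "urgency": "2",
            "category": "inquiry",
        },
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json()["result"]["sys_id"])  # sys_id of the new incident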

Cognizant Dec 2010 to Jun 2018

Senior Software Engineer/ AWS Engineer

Clients: AbbVie, Abbott

Environment: Kubernetes services, Container Services, Model management, Terraform, Docker, Python, Django, GitHub

●Created and tested scalable Python code

●Determined requirements for application integrations

●Connected applications to third-party website programs and services

●Conducted tests on various applications

●Debugged and added enhancements to computer systems and applications

●Reviewed client requests and applied the necessary technical updates

●Used server-side logic to integrate user-facing features and elements

●Analyzed data storage solutions and integrated them with different applications

●Improved functionality of existing databases through reprogramming strategies

●Attended daily status calls with the client, BAs, and third-party vendors

●Led the defect triage call and performed end-to-end and integration testing of the Humira site, coordinating with more than four third-party vendors

●Sent out daily and weekly status reports

●Conducted defect triage calls with the developers

●Participated in GxP criticality assessments, FMEA (Failure Mode Effects Analysis), and gap analysis

●Experience administering Jenkins for continuous integration (CI), delivery, and build automation

●Improved the speed, efficiency, and scalability of the continuous integration environment, automating wherever possible using Python and PowerShell scripts

●Led the definition of processes, standards, and guidelines for architecting data platforms, data estates, and modern ELT/ETL, compute, storage, and consumption for BI and data science workloads as part of the architecture function

●Supported delivery teams as a senior technical leader in a hands-on capacity, removing blockers by creating demos and proofs of concept and troubleshooting with other engineers

●Designed, deployed, and managed a continuous integration system, including automated testing and automated notification of results, using technologies such as Terraform, CloudFormation, Docker, and Serverspec

●Recommended hardware and software to the organization according to the needs of the project and organization

●Ability to quickly understand a client's business needs, make impactful recommendations, and adjust as those needs change

●Created and maintained Continuous Integration Process Documentation.

●Built CloudFormation templates to automate functions on EC2 instances in AWS; the templates can be reused across environments, applications, and AMIs, and simplify provisioning and management of EC2 instances, RDS, and VPC on AWS (a stack-launch sketch appears after this list)

●Implemented AWS services such as EC2, Elastic Load Balancing (ELB), Route 53, S3, CloudFront, SNS, RDS, and IAM in existing projects; used Jenkins as the continuous integration tool: creating new jobs, managing required plug-ins, configuring jobs, selecting source code management tools, build triggers, build systems, and post-build actions, scheduling automatic builds, and sending build reports

●Configured AWS IAM and security groups in public and private subnets in the VPC

●Set up and built AWS infrastructure resources, VPC, EC2, S3, IAM, EBS, ELB, security groups, Auto Scaling, and RDS, in CloudFormation templates

●Configured and verified connections to RDS databases running the MySQL engine

●Created AWS Route 53 records to route traffic between different regions

●Configured AWS Identity and Access Management (IAM) groups and users for improved login authentication (an IAM administration sketch also follows this list)

●Built application and database servers using AWS EC2, created AMIs, and used RDS for Oracle DB

●Created DEV, QA, PROD, STAGING, and UAT environments in AWS from scratch

●Involved in end to end testing of modules.
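
For the CloudFormation bullets above, here is a minimal, hypothetical boto3 sketch of launching a reusable template with per-environment parameters; the stack name, template URL, and parameter names are placeholders rather than the actual project's values.

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Launch one stack per environment from the same template.
    cfn.create_stack(
        StackName="app-dev",
        TemplateURL="https://s3.amazonaws.com/example-templates/app.yaml",
        Parameters=[
            {"ParameterKey": "Environment", "ParameterValue": "dev"},
            {"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"},
        ],
        Capabilities=["CAPABILITY_NAMED_IAM"],  # template creates IAM resources
    )
    cfn.get_waiter("stack_create_complete").wait(StackName="app-dev")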
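
And for the IAM bullet, a minimal, hypothetical boto3 sketch of the group/user administration described; the group, policy, and user names are illustrative.

    import boto3

    iam = boto3.client("iam")

    # A developers group with an AWS managed policy, plus one member.
    iam.create_group(GroupName="developers")
    iam.attach_group_policy(
        GroupName="developers",
        PolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
    )
    iam.create_user(UserName="jdoe")
    iam.add_user_to_group(GroupName="developers", UserName="jdoe")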

Education & Certifications

Anna University (2006 - 2010)

Bachelor of Engineering (Computer Science)

AWS Certified Solutions Architect - Associate

Accomplishments

1. Received accolades for a seamless production release of the energy and gas consumption migration into AWS services.

2. Worked with the Accenture Partner Amazon team and the leadership team to convert the project into a productized solution that can be used across the oil and gas industry.

3. Received the best associate award and appreciation from the client.

4. Received a $100 reward for seamlessly deploying the solution into the production environment without any risks.

5. Developed a macro-based calculator to simplify nuclide age calculation; awarded a $25 cash prize from the client for providing the easiest tool, which removes most of the manual time involved in the calculation process.


