Specialist Security DevOps Engineer

Location:
Round Rock, TX
Posted:
May 03, 2023

Resume:

Professional Summary:

Experienced IT professional with 7+ years as a Cloud DevOps Engineer, with experience developing and managing cloud infrastructure and microservice deployments. Worked in Agile, Scrum, and Waterfall methodologies, with expertise in automation, build and release management, and software configuration management.

Experience with IaaS, PaaS, FaaS, and SaaS cloud computing models and implementing them on major public cloud providers such as AWS, Microsoft Azure, and Google Cloud Platform.

Worked on AWS Lambda to run code without provisioning servers and implemented cloud solutions using various AWS services such as EC2, VPC, S3, Glacier, EFS, Lambda, Directory Service, CloudFormation, OpsWorks, CodePipeline, CodeBuild, CodeDeploy, Elastic Beanstalk, RDS, Data Pipeline, DynamoDB, and Redshift.

Involved in designing and deploying an extensive application utilizing almost the entire AWS stack (including IAM, EC2, S3, Route 53, ELB, CodeCommit, CodeBuild, CodeDeploy, RDS, Glue, DynamoDB, SNS, SQS, CloudFormation, and EBS), focusing on high availability, fault tolerance, and auto-scaling in the AWS cloud.

Optimized AWS service costs and developed serverless architectures utilizing Lambda functions, Step Functions, Athena, Glue, S3, CloudWatch, and metrics, and implemented them for public and private websites.
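
A minimal Python sketch of the kind of serverless query automation described above, assuming boto3 is available, an existing Athena database, and an S3 results location; the database and bucket names are placeholders, not values from this resume.

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

def run_athena_query(sql, database="analytics_db", output="s3://example-athena-results/"):
    """Start an Athena query and poll until it reaches a terminal state."""
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )
    query_id = execution["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return query_id, state
        time.sleep(2)  # wait before polling the query status again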

Worked on Azure Storage: storage accounts, blob storage, and managed and unmanaged disks. Responsible for web application deployments over cloud services (web and worker roles) on Azure using Visual Studio and PowerShell.

Wrote Azure infrastructure-as-code templates using Terraform to build staging and production environments, and integrated Log Analytics with Azure VMs to monitor and store log files and track metrics.

Worked with GCP services including Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Dataproc, BigQuery, GCS buckets, Cloud Dataflow, Pub/Sub, Cloud Shell, the GSUTIL and BQ command-line utilities, and Stackdriver monitoring, as well as IaaS, execution plans, resource graphs, and Terraform change automation.

Worked with Google App Engine for deploying and scaling Java web apps, and with GCP services such as Compute Engine, load balancing, auto-scaling, and VPC to design secure, scalable, flexible systems that can handle both anticipated and unforeseen load variations.

Worked with Terraform to map complex dependencies and identify network issues, using its key capabilities such as infrastructure as code, execution plans, resource graphs, and change automation. Used Terraform to design infrastructure for AWS application workflows.

Worked on Docker application containerization, creating Docker images and containers, using a Docker Registry to store images, Docker Hub as a cloud-based registry, and Docker Swarm to manage containers.

Worked on building, installing, and administering Docker containers and images for web servers and application servers such as Apache Tomcat, and integrated them with Amazon RDS MySQL databases.

Extensively utilized Docker and Kubernetes to safely run and deploy applications, speeding up the build and release process. Created Kubernetes objects such as deployments, replica sets, pods, and services to administer the cluster.

Extensive experience scheduling, deploying, and maintaining Kubernetes container replicas on a node, as well as constructing Kubernetes clusters. Used Kubernetes to deploy, scale, load balance, and manage Docker containers with multiple namespaced versions, and created a Docker container-based test environment.
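
A short sketch of cluster administration with the official Kubernetes Python client, assuming kubeconfig access to a cluster; the deployment name "web-frontend" is a hypothetical example, not one from this resume.

from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. a Minikube, AKS, or EKS context).
config.load_kube_config()
apps = client.AppsV1Api()

# List deployments in a namespace, then scale one of them.
for dep in apps.list_namespaced_deployment(namespace="default").items:
    print(dep.metadata.name, dep.spec.replicas)

apps.patch_namespaced_deployment_scale(
    name="web-frontend",
    namespace="default",
    body={"spec": {"replicas": 3}},  # desired replica count
)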

Developed Kubernetes pod definitions, deployments, and Helm charts under version control to complete deployment approaches, and worked with Minikube to deploy and run Kubernetes locally.

Worked on Apache Mesos as an orchestrator for apps and services, as well as on OpenShift to create new projects and services for load balancing, add them to routes so they are reachable externally, debug pods through SSH and logs, and modify build configurations, templates, image streams, and so on.

Worked with Ansible as a configuration management tool, creating Ansible playbooks in YAML to provision infrastructure and using the Ansible control server to deploy playbooks to target machines to reduce downtime. Used the Terraform DSL to automate instance provisioning, configuration, and patching.

Established Plugins for deployment pipelines, such as the Ansible plugin with Jenkins, enabling software installation and deployment process automation in target environments (QA and Production).

Worked on Chef Infra, bootstrapping nodes, and node convergence in Chef SCM. Experience configuring Chef repositories and workstations; also skilled with data bags, attributes, cookbooks, recipes, and templates.

Deployed a Chef server in AWS, where Chef cookbooks were used to automate different infrastructure components.

Performed full Puppet Enterprise installations and Puppet agent configuration, installing Puppet Enterprise and connecting Puppet agents in AWS.

Worked with multiple agile development teams to standardize branching and tagging of code in our repository and preserve codebase integrity through Subversion (SVN), Git, Bitbucket, ClearCase, and TFS.

Provided day-to-day Git support for many projects, established branching strategies to maintain Git repositories, and maintained SCM solutions such as GitHub, Bitbucket, GitLab, and Azure Repos.

Worked on build automation tools such as Apache Ant and Maven, including tags, tasks and goals, dependencies, and coordinates for creating pom.xml and build.xml files.

Worked on continuous integration tools such as GitHub Actions, GitLab CI, Bamboo, TeamCity, and Argo CD for timely builds, code coverage, and test execution and integrated SonarQube for code coverage.

As part of the continuous development and deployment process, set up Jenkins as the CI tool, integrated the Maven plugin as the build tool, and stored the built artifacts in JFrog Artifactory.

Worked on connecting Jenkins with Docker using the CloudBees Docker plugin and the Kubernetes pipeline plugin, and installed, configured, and managed the Jenkins CI tool on Windows and Linux platforms.

Worked with Splunk data mining, log file reporting, Searching and Reporting modules, Knowledge Objects, Administration, Dashboards, Clustering, and Forwarder Management.

Worked with monitoring tools such as Dynatrace, AppDynamics, Datadog, and Nagios to identify and resolve infrastructure issues, using handlers to automatically restart failed applications.

Worked on querying and installing RDBMS such as Oracle, MySQL, SQL Server, DB2, and PostgreSQL using SQL for data integrity, as well as NoSQL technologies such as MongoDB and Cassandra.

Worked with virtualization technology such as VMWare and Virtual Box for generating virtual machines and provisioning environments, as well as Tomcat and Apache web servers for deployment and hosting tools.

Worked on frameworks such as Django and AngularJS and web technologies such as JSON, JavaScript, jQuery plugins, HTML, CSS, XML, and Node.js, as well as Python and Java design, coding, debugging, reporting, data analysis, and web application configuration management and development.

Worked on building various automation scripts to automate manual processes, deploy apps, and handle build scripts/versioning, utilizing open-source libraries and Python, Shell, Bash, Groovy, and PowerShell scripts.

Worked on shell scripting using sh, ksh, bash, and Perl for performance monitoring of CPU and network usage.

Worked with several issue-tracking tools such as Jira, Remedy, Azure Boards, ClearQuest, Bugzilla, and ServiceNow, as well as handling all bugs and modifications in a production environment.

Skills Matrix:

AWS Cloud Services

EC2, VPC, S3, Route 53, SNS, IAM, CloudFront, EBS, ELB, ECS, CDK, EKS, Lambda, CloudFormation, CloudWatch, Auto Scaling, SQS, Elastic Beanstalk, etc.

Microsoft Azure

VM, Active Directory, ARM, App Service, AKS, ACR, Blob storage, Azure SQL, Azure monitor, Azure Functions, Cosmos DB, etc.

Google Cloud Platform

Compute Engine, Cloud Functions, BigQuery, GCR, GKE, Data Proc, DataFlow, App Engine, Knative, Cloud storage, Cloud Datastore, etc.

Infrastructure as Code (IaC)

CloudFormation, ARM, Terraform

Containerization Tools

Docker, Kubernetes, OpenShift

Configuration Management Tools

Chef, Ansible, Puppet

Build Tools

Maven, Ant, Gradle

Continuous Integration Tools

Jenkins, Bamboo, GitLab CI, GitHub Actions, TeamCity

SCM/Version Control Tools

GitHub, SVN, Bitbucket

Artifactory Repositories

Sonatype Nexus, JFrog

Logging and Monitoring Tools

Nagios, Splunk, Prometheus, Grafana, LogDNA, ELK, AppDynamics, Dynatrace, Datadog

Databases

Oracle, MongoDB, SQL Server, MS SQL, MySQL, NOSQL, Cosmos DB, PostgreSQL, Cassandra, Snowflake

Scripts/ Languages

HTML, Bash, Shell Scripting, Ruby, Groovy, YAML, Python, Java, Perl, PL/SQL

Bug Tracking Tools

Jira, Bugzilla, HP-ALM, Remedy, ServiceNow

Virtualization Tools

Oracle VM VirtualBox, VMware, Vagrant

Servers

Apache Tomcat, WebLogic, WebSphere, JBoss, WildFly, IIS, and Nginx

Operating System

UNIX, Linux (Ubuntu, RHEL, CentOS), Windows

Professional Experience:

MedPro Group Feb 2022 – Present

Sr. Cloud DevOps Engineer

MedPro Group is the national leader in customized insurance. As a Sr. Cloud DevOps Engineer on the Cloud Platform team, I am responsible for planning and designing the project's migration from on-premises to the cloud, as well as integrating and maintaining major applications.

Roles & Responsibilities:

Designed and implemented cloud infrastructure solutions that meet business requirements and comply with security and compliance standards, enforcing cloud architecture and engineering standards and patterns, and worked cross-functionally with teams to define cloud infrastructure requirements, priorities, and roadmaps.

Designed and implemented AWS Control Tower solutions to manage multiple AWS accounts and enforce organizational compliance policies, and created and managed AWS Control Tower landing zones, including the definition of account structures, guardrails, and blueprints.

Configured and customized AWS Control Tower features, such as AWS Config, AWS CloudTrail, and AWS Organizations.

Configured and administered AWS IaaS solutions such as EC2 instance creation using VPCs and AMIs, RDS, S3 buckets, Glacier, CloudWatch, CloudTrail, CloudFront, SQS, SNS, and Route 53.

Designed roles and policies for specialist security teams. Created IAM roles for new users using AWS IAM and revised policies for Dev and Prod users to govern resource access. Used AWS Lambda to run code without managing servers and to trigger code execution through S3 and SNS.
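
A minimal Python sketch of an S3-triggered Lambda handler like the ones described above; the event fields follow the standard S3 notification shape, and the processing shown (logging each object) is illustrative only.

import json

def lambda_handler(event, context):
    """Minimal handler for an S3-triggered Lambda: log each uploaded object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps("processed")}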

Designed and deployed several apps that used practically the entire AWS stack, including SNS, SQS, Lambda, and Redshift, emphasizing high availability, and managed CloudTrail logs and objects within each bucket.

Worked on Route 53 for AWS instances and deployed DNS services on ELBs through Route 53 to provide HTTPS-secured connections, manage DNS zones, and assign public DNS names to elastic load balancer IPs.

Created AWS CLI scripts to automate data store backups to S3 buckets and EBS, as well as custom AMIs of essential production servers as backups. Configured AWS network architecture with VPCs, subnets, internet gateways, NAT gateways, and route tables.
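
A minimal Python (boto3) sketch of the AMI-backup automation described above; the "Backup=true" tag convention and region are assumptions for illustration.

from datetime import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running instances tagged for backup ("Backup=true" is an assumed tag convention).
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Backup", "Values": ["true"]},
             {"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        # Create an AMI without rebooting the production server.
        image = ec2.create_image(
            InstanceId=instance_id,
            Name=f"backup-{instance_id}-{datetime.utcnow():%Y%m%d%H%M}",
            NoReboot=True,
        )
        print("Created AMI", image["ImageId"], "for", instance_id)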

Configured and migrated data from on-premises data warehouses to AWS Redshift, optimizing query performance and tuning database parameters.

Worked with SSM agents pre-baked into AMIs using Packer and Ansible. Wrote custom IAM policies to construct IAM instance profiles that allow SSM agents to access S3 buckets hosted in a separate VPC, and created Python scripts to ship logs from Sumo Logic to the S3 bucket.
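
A heavily hedged Python sketch of shipping an exported log payload into S3: the export URL and credentials below are placeholders and do not represent a specific Sumo Logic API; only the boto3 S3 upload reflects a real AWS call.

import boto3
import requests

# Placeholder values for illustration only.
EXPORT_URL = "https://api.sumologic.example/logs/export"
ACCESS_ID, ACCESS_KEY = "example-id", "example-key"

s3 = boto3.client("s3")

def ship_logs_to_s3(bucket="example-log-archive", key="sumologic/export.json"):
    """Fetch a log export over HTTPS and store it in an S3 bucket."""
    response = requests.get(EXPORT_URL, auth=(ACCESS_ID, ACCESS_KEY), timeout=60)
    response.raise_for_status()
    s3.put_object(Bucket=bucket, Key=key, Body=response.content)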

Designed and implemented monitoring and alerting solutions for Kafka using tools such as Dynatrace, Splunk, and AWS CloudWatch, to ensure high availability and proactively identify and resolve issues.

Created an Azure Cloud environment to host migrated IaaS VM and PaaS role instances for refactored apps and databases using a predetermined capacity and architectural plan.

Worked on deploying and managing Kubernetes clusters using AKS in the Azure cloud to create, configure, and scale Kubernetes clusters on demand. Wrote and deployed Kubernetes manifests and Helm charts in AKS.

Configured Azure infrastructure components such as Virtual Networks, Load Balancers, and Azure Container Registry. Used Azure cloud services such as Azure SQL Database, Azure Cosmos DB, and Azure Cache for Redis that can be used as data storage for Kubernetes applications.

Managed Azure infrastructure, including Azure Web Roles, Worker Roles, VM Roles, Azure SQL, Azure Storage, AD licenses, and VM backup; created and deployed virtual machines on Azure; and created and managed virtual networks to connect servers, composing ARM templates for the same cloud platform.

Created and maintained automated ETL pipelines using Azure Data Factory and Azure Databricks, ensuring data accuracy and consistency across different data sources and destinations.

Designed and implemented infrastructure as code (IaC) solutions using Terraform, enabling the deployment and management of AWS resources in a consistent and automated way.

Configured Terraform to provision and manage a wide range of AWS resources, including EC2 instances, S3 buckets, VPCs, and RDS databases.

Implemented Terraform for complex deployments, such as multi-region deployments, hybrid cloud architectures, and container orchestration with Kubernetes.

Used Helm to manage Kubernetes charts and produce repeatable builds of Kubernetes applications, and handled Kubernetes manifest files and Helm package releases. Worked with Kubernetes to deploy, scale, load balance, and manage Docker containers with multiple namespaced versions using Helm charts.

Developed a blueprint for cloud provisioning in which all microservices are delivered to the Docker registry and deployed to Kubernetes, where pods are created and managed.

Created Docker images from a Dockerfile, worked on Docker container snapshots, deleted images, and managed Docker container and volume directory structures. Implemented the docker-maven-plugin in the Maven POM.

Worked on Docker Hub, building and managing many Docker images, particularly for middleware deployments and domain settings for all microservices, and later used a Dockerfile to build the Docker images.

Integrated Ansible with Jenkins to offer automation and continuous integration via Jenkins, as well as implemented Jenkins Workflow and Plugins for repeated Docker deployments of multi-tier apps, artifacts.

Worked with Ansible playbooks to provision virtual and physical instances, manage configuration, patch, and deploy software. Maintained playbooks using Ansible roles, Ansible Galaxy, and various modules, writing YAML to configure files on remote servers.

Used Jenkins to manage the weekly build, test, and deploy chain; integrated Jenkins with Git for Dev, Test, and Prod branching models for weekly releases; implemented continuous integration triggered on check-in for applications; and set up GitHub webhooks to trigger builds on commit, push, merge, and pull request events.

Created custom Jenkins jobs and pipelines containing Bash shell scripts and used the AWS CLI to automate infrastructure provisioning while implementing a CI/CD framework in a Linux environment with Jenkins and Maven, saving build JAR/WAR artifacts in repositories such as Nexus and Artifactory.

Worked on creating Jenkinsfiles to build Jenkins pipelines that push all microservice builds to the Docker registry, which are subsequently deployed on Kubernetes.

Set up the ELK stack (Elasticsearch, Logstash, and Kibana) to analyze and visualize system and application logs. Configured Splunk forwarders to detect SSL certificate expirations, and configured Nagios to monitor the network latencies the environment experiences between systems.
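
A short Python sketch of the kind of log analysis such an ELK setup enables, using the official Elasticsearch client; the endpoint, index pattern, and field names are placeholders, and the query shape assumes the 8.x client.

from elasticsearch import Elasticsearch

# Placeholder endpoint and index pattern for illustration only.
es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="app-logs-*",
    query={"bool": {
        "must": [{"match": {"level": "ERROR"}}],
        "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
    }},
    size=20,
)

for hit in response["hits"]["hits"]:
    print(hit["_source"].get("message"))  # print the log message for each error hit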

Developed custom Dynatrace extensions using the Dynatrace API to automate monitoring and alerting workflows, such as detecting and alerting on failed transactions or detecting security threats in real-time.

Integrated the Dynatrace with ServiceNow by creating a closed-loop incident management process that automatically creates and tracks tickets based on alerts and assigns them to the appropriate team for resolution.

Created native Splunk roles and users and scheduled Splunk reports and alerts to monitor system health and performance, and handled deployment, configuration, and maintenance for standardized platforms.

Experience in developing and implementing large-scale, Object-Oriented, high-performance Web-based Client-Server applications using Java and J2EE technologies, including Lambdas, Streams, Observables, and Completable Futures.

Designed and implemented systems based on N-tier distributed architecture using JAVA/J2EE technologies such as Core Java, Multithreading, Collections Framework, Java I/O, JDBC, Hibernate, Spring Framework, Spring Batch, Struts Framework, JSP, jQuery, and XML including XSL, XSLT, and XML Beans.

Developed a number of REST web services and Web APIs for a variety of web and mobile apps, and designed and implemented APIs for mobile and web interfaces by composing REST-based services.

Experience designing and developing SOAP web services, as well as integrating them with existing systems.

Involved in the configuration of the report server and Report Manager scheduling, in addition to assigning rights to different levels of SQL Server Reporting Services (SSRS) users.

Worked on deployment, build scripts and automated solutions using scripting languages like Shell, Perl, Groovy and worked on JavaScript frameworks such as Angular JS and jQuery.

Created detailed documentation for the complex build and release process, post-release activities, JIRA workflows, and release notes, and customized JIRA workflows, permissions, and schemes.

Environments: AWS, Azure, Docker, Kubernetes, Git, Ansible, Jenkins, Terraform, Maven, Dynatrace, Ubuntu, Jira, ELK, MySQL, REST, SOAP, MongoDB, Apache Tomcat, Nginx, Java, JavaScript, NodeJS, Bash, Python, Shell, ServiceNow, etc.

Ally Finance May 2021 - Feb 2022

Cloud DevOps Engineer

Ally Financial is a bank holding company based in Detroit, Michigan. As a DevOps Engineer on the Development and Operations team, I was responsible for software integration, configuration, building, automating, managing, and releasing code from one environment to another, as well as server deployment.

Roles & Responsibilities:

Implemented an automated build and deployment procedure for the software, reworked the user interface, and ultimately developed a continuous integration system for the base environment of all our products.

Worked on migrating legacy applications to GCP by implementing and managing GCP services such as Compute Engine, Cloud Storage, BigQuery, VPC, Stackdriver, Load Balancing, and IAM, and lowered Compute Engine costs in GCP based on service consumption.

Configured firewall rules in GCP environments to allow or restrict traffic to and from VM instances depending on a given configuration, and used GCP Cloud CDN (content delivery network) to serve content from GCP cache locations, significantly improving user experience and latency while utilizing policy management.

Worked on building server-side code for Google Cloud Platform (GCP)-based apps, creating robust high-volume production applications, rapidly developing prototypes, and orchestrating microservice containers.

Designed and implemented complex data models and ETL processes using BigQuery and Dataflow to process, transform, and load petabyte-scale datasets. Built real-time streaming analytics pipelines with BigQuery and Cloud Pub/Sub to enable near-instantaneous data processing and analysis.
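
A minimal Python sketch of a BigQuery transformation step like those described above, using the google-cloud-bigquery client; the project, dataset, and table names are hypothetical and application-default credentials are assumed.

from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT user_id, COUNT(*) AS events
    FROM `example-project.analytics.raw_events`
    WHERE DATE(event_ts) = CURRENT_DATE()
    GROUP BY user_id
"""

# Run the query and write the results to a destination table.
job_config = bigquery.QueryJobConfig(
    destination="example-project.analytics.daily_user_events",
    write_disposition="WRITE_TRUNCATE",
)
job = client.query(sql, job_config=job_config)
job.result()  # wait for the query job to finish
print("Loaded rows into analytics.daily_user_events")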

Integrated essential Terraform elements such as infrastructure as code (IaC), execution plans, resource graphs, and change automation, and built new plugins to accommodate new Terraform capabilities.

Worked regularly on the AWS platform and its functions, such as EC2, VPC, AMI, RDS, S3, Route53, IAM, CloudFormation, CloudFront, and CloudWatch.

Implemented ELB as a single point of contact for clients, increasing application availability by transparently allowing the addition or removal of multiple EC2 instances across one or more AZs without disrupting the overall flow of traffic.

Worked on Docker and Kubernetes on cloud providers, from helping developers build and containerize their applications through continuous integration and continuous deployment to deploying on either public or private clouds.

Implemented Docker Engine and Docker Machine to deploy microservices-oriented environments for scalable applications, shipping, running, and deploying applications securely to speed up build and release engineering.

Created Docker images using a Dockerfile, worked on Docker container snapshots, removed images, and managed directory structures for Docker containers and volumes.

Worked on application containerization using a Kubernetes cluster and created a Platform-as-a-Service (PaaS) environment for web servers and an automated Kubernetes cluster.

Deployed and configured Chef server including bootstrapping of Chef-Client nodes for provisioning and created roles, recipes, cookbooks and uploaded them to Chef-server. Managed on-site OS, applications, services, packages as well as AWS services like EC2, S3, and VPC with Chef cookbooks.

Implemented Chef using knife commands to manage nodes, recipes, cookbooks, attributes, and templates. Used ruby scripting for creating cookbooks comprising all resources for automating Chef.

Automated weekly releases using Maven scripting for compiling Jar/War/Rar files while debugging and storing builds in the Maven repository.

Created repositories, branches, and tags in GitHub and assisted developers in resolving merge conflicts, code reviews, merging to the main branch, creating local branches, and pushing/pulling any changes to the remote.

Built virtual repositories in Artifactory for project and release builds, and managed Maven repositories to share snapshots and releases of internal projects using JFrog Artifactory.

Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins along with Shell scripting for the automation of the routine jobs.

Successfully designed and developed a Java multithreading-based collector, parser, and distributor process to collect, parse, and distribute data under high incoming traffic.

Developed various commands and helper classes using core Java, mainly following multithreading concepts and MVC, along with design patterns such as Factory, Singleton, Data Access Object, Session Facade, and Business Delegate.

Created and maintained automated dashboards and reports in Dynatrace to provide real-time visibility into application performance and user experience, using advanced features such as anomaly detection and root cause analysis.

Created native Splunk roles and users and scheduled Splunk reports and alerts to monitor system health and performance, and handled deployment, configuration, and maintenance for standardized platforms.

Worked on writing various automation scripts to automate manual tasks, deploy applications, and handle build scripts/versioning using open-source libraries and Python-based scripting, and used JavaScript, web services, and other web technologies to integrate ServiceNow with internal and external systems and tools.
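
A hedged Python sketch of a ServiceNow integration of the kind mentioned above, using the ServiceNow Table API over HTTPS; the instance URL, credentials, and field values are placeholders, not details from this engagement.

import requests

# Placeholder instance and credentials for illustration only.
INSTANCE = "https://example.service-now.com"
AUTH = ("integration.user", "example-password")

def create_incident(short_description, urgency="2"):
    """Create an incident through the ServiceNow Table API and return its number."""
    response = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={"short_description": short_description, "urgency": urgency},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]["number"]

print(create_incident("Deployment job failed on prod-web-01"))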

Expertise in designing SOAP and REST apex web services classes and testing them using tools like SOAP UI and Workbench.

Worked on SQL Server Analysis Services (SSAS), including the creation of cubes and the creation of MDX queries and used SQL Server Reporting Services to create parameterized, linked, drilldown, and sub reports.

Actively participated in determining the database schema, schema changes, and implementation plans for schema releases in coordination with developers and data modelers, in line with organizational policies.

Maintained WebSphere and WebLogic servers on Linux platforms and set up the development, testing, and staging environments for ongoing application development.

Developed metrics dashboards and advanced filters in JIRA to offer performance metrics and status reports to end-users and business executives.

Environment: GCP, AWS, Terraform, Docker, Kubernetes, Chef, Jenkins CI/CD, Maven, Git, SonarQube, Splunk, Dynatrace, Java, JavaScript, MySQL, SQL, MongoDB, WebLogic, WebSphere, Bash, Python, Ruby, Shell, Jira, ServiceNow, etc.

Vertex Nov 2017 - Dec 2020

DevOps Engineer

Vertex Pharmaceuticals is a biopharmaceutical company headquartered in Boston, Massachusetts. I was part of the development team and developed an enhancement that allows users to quickly understand and use the program, integrating two distinct but related functions into a single feature.

Roles & Responsibilities:

Worked on writing Python/Bash scripts to automate AWS and co-located infrastructure provisioning, data processing operations, and system management duties.
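
A minimal Python (boto3) provisioning sketch of the kind of automation described above; the AMI ID, key pair, security group, and subnet IDs are placeholders for illustration.

import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# All identifiers below (AMI, key pair, security group, subnet) are placeholders.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    KeyName="example-keypair",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    SubnetId="subnet-0123456789abcdef0",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "qa-app-server"}],
    }],
)

instances[0].wait_until_running()  # block until the instance reaches the running state
print("Provisioned", instances[0].id)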

Contributed to developing the Graylog infrastructure's Elastic Container Service architecture (image repositories, service architecture, EC2 container instance sizing, etc.) and workflow.

Defined and configured CloudWatch alarms and triggers to implement auto-scaling policies, and used the AWS CLI for EC2 creation, S3 uploads, and authenticated downloads.
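
A short Python (boto3) sketch of a CloudWatch alarm wired to an auto-scaling action, as described above; the Auto Scaling group name and scaling policy ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# The Auto Scaling group name and scaling policy ARN below are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="qa-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "qa-app-asg"}],
    Statistic="Average",
    Period=300,                 # evaluate 5-minute averages
    EvaluationPeriods=2,        # require two consecutive breaches before alarming
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"],
)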

Responsible for managing network security using load balancers, auto-scaling, security groups, and network ACLs (NACLs).

Maintained 4-5 different testing/QA environments and set up the production environment on AWS, as well as keeping track of live traffic, logs, memory use, disk utilization, and other metrics essential for deployment.

Installed, configured, and managed Puppet master and agents as well as wrote custom Modules and Manifests.

Utilized Jenkins for configuration management of hosted instances within AWS.

After the production release, release branches were merged into the trunk, and conflicts that emerged during the merge were handled in Subversion and GIT.

Implemented and enhanced existing scripts developed in Java, Shell, Perl, Ruby, and Python.

Built software packages on Red Hat Linux (RPM) and Solaris (datastream package format), and performed administration, monitoring, maintenance, troubleshooting, and performance tuning.

Worked with configuring WebSphere Global Security for access to the WebSphere Admin console.

Deployed and troubleshot JAR, WAR, and EAR files on WebSphere/WebLogic/JBoss servers.

Involved in WebLogic migrations, with experience reviewing SiteMinder server logs and diagnosing login and authorization issues, and extensive experience with plug-ins for WebLogic/iPlanet/JBoss servers.

Environments: AWS, Puppet, SVN, Git, Maven, Jenkins, Python, WebSphere, WebLogic, JBoss, Unix, Shell, Bash, Ruby, Perl, Java, etc.

Kering June 2015- Nov 2017

System Engineer/Administrator

Kering is a global firm established in France that specializes in luxury goods. I worked on the administration team as a systems administrator, managing and monitoring all installed systems and infrastructure, and configuring, testing, and maintaining operating systems, application software, and system management tools.

Roles & Responsibilities:

Created a VMware environment by installing VMware ESXi on System x hardware and handled planned data center power maintenance and upgrade activities.

Provisioned VMs and installed Linux and Windows OS as per the client's requirement and provided OS support to the clients.

Configured the web servers like Apache with Tomcat servers and installed all the dependencies and environmental properties within the Linux and Windows environment.

Maintained the performance and effectiveness of computers for collections using Apache Tomcat.

Responsible for Design/Install/Engineer for Apache Tomcat middleware platforms on Unix and Windows.

Setup minimum baseline standards for Apache HTTP and Tomcat before delivering platforms to middleware/dev-ops/development teams.

Worked exclusively on Tomcat 7.2.x to manage web applications' performance and security optimization.

Worked with the team on upgrading application servers, Java, and SiteMinder as part of vulnerability management and remediation.

Deployed applications on Tomcat/WebSphere application servers, proactively coordinating with the application team for on-time releases/deployments and concluding application releases.

Patched WebLogic/JBoss application servers and troubleshot issues in the production process.

Involved in production environment support and WebLogic/JBoss server stalling and crash issues. Tuned heap size parameters and the JVM garbage collector for WebLogic/JBoss application servers.

Deployed WebSphere Application Server for Java/J2EE applications. Created and maintained MQ objects such as queue managers and remote/local queues, and used JMSAdmin to integrate WebSphere applications with MQ Series. Handled application installation, configuration, and deployment.

Environment: VMware, AWS, Azure, ESXi, Linux, Windows, Shell, Apache Tomcat, WAR, JAR, TCP/IP, Confluence, Siebel CRM, Java, J2EE, SMP, BRM, SCP.

Shravya Chandra

Email: ***********@*****.***

Contact: 956-***-****


