
Senior Performance Engineer and Architect

Location:
Seabrook, MD, 20706
Posted:
January 30, 2026


Resume:

SUMMARY

An IT professional with **+ years of experience in all phases of the software development life cycle, specializing in performance engineering and in architecting environments for automation and performance evaluation across domains such as Utilities, Retail, Healthcare, Banking, Finance, Shipping, Sales, Legal, and Education. Expertise includes automation / performance engineering and architecture for distributed, multi-clustered systems, whether on-premises or cloud-based (Azure, AWS, PCF), for applications developed for Web (UI), Web Services (WSDL / REST), Mobile (iOS / Android), Mainframe, ERP, CRM, the Java Spring Boot framework, and Microservices, following Agile practices with automated CI/CD pipelines.

Expert in multi-platform (on-premises and cloud) capacity-planning techniques, architecture practices, and standards across Cloud, Windows, Linux / UNIX, VMware, and Mainframe environments. This includes defining goals and scope and creating sprint plans with performance-testing tasks; running daily scrums, removing blockers, and keeping the team focused; monitoring and tracking testing progress and reporting gathered results to stakeholders; helping analyze results, identify bottlenecks, and support performance tuning (JVM / database); and facilitating retrospectives, documenting lessons, and sharing knowledge.

Extensive experience working with state-of-the-art load-testing tools: Apache JMeter, OpenText LoadRunner, Tricentis NeoLoad, Gatling, Performance Center, and Quality Center in Micro Focus ALM, for applications developed in J2EE environments with Microservices, Spring Boot, REST APIs, Kubernetes, Azure Cloud, Kafka, DevOps, TDD, Git, SQL, Node.js, and MySQL.

Experienced with automation frameworks built on Selenium and TestNG in combination with Sparks, Cucumber, and Postman, and with CI/CD pipeline automation tied to source repositories.

Experienced in monitoring server performance using APM tools such as CA Wily Introscope, AppDynamics, Splunk, Kibana, SiteScope, LPAR2RRD, Dynatrace, New Relic, Azure Insights cloud monitoring, AWS Kubernetes, and SolarWinds during test executions to identify the potential root causes of bottlenecks.

Expertise in Waterfall, Agile (SAFe / JIRA), and DevOps (Jenkins, GitHub, Bitbucket) processes for quality assurance in CI/CD environments, automating pipelines for various kinds of testing: smoke, baseline, and regression runs of Performance, Functional, GUI, Security, System Integration (SIT), Software Validation, UAT, Batch, and Interface testing.

Administered and moderated defect management to track defect status and assign the responsible team to get defects resolved, following Agile practices.

Successfully facilitated corrective and preventive actions on behalf of the business, team, product, and project to achieve the desired goals.

EDUCATION:

2003: Master of Computer Applications (M.C.A.), University of Pune, Maharashtra, India.

1999: Bachelor of Science (Physics / Mathematics)

TECHNICAL PROFICIENCIES

Performance Tools

Apache JMeter, HP / Micro Focus / OpenText LoadRunner, Gatling, ALM, Quality Center, Performance Center, VuGen, Controller, Analysis 9.5–12.5x, StormRunner, Tricentis NeoLoad, SeeTest (Experitest)

APM Tools

Azure APM, CA Wily, AppDynamics, Datadog, Dynatrace, SolarWinds, Aurgus, New Relic, Splunk, VMware, HP Diagnostics, SiteScope, Adlex, ELK (Kibana), Mainframe Reflection Workspace 15.4, LPAR2RRD

Cloud

Microsoft Azure, AWS, Pivotal Cloud Foundry (PCF)

DevOps

Jenkins, GitHub, Bitbucket, Azure DevOps, CI/CD pipelines, OpenShift, PuTTY, CyberArk

Automation Tools

Selenium, UFT, SoapUI, Fiddler, Postman, Karate Framework

Web/App Servers

WebSphere, WebLogic, JBoss, JRun, Apache Tomcat

Database

SQL Server, Oracle 8i/9i/11i, Sybase 11, DB2, UDB, MySQL, PostgreSQL; NoSQL (MongoDB, Cassandra)

Methodology

Agile (Kanban / SAFe, JIRA), OOP, UML (OOD / OOA), SAD

IDE

WSAD, JBuilder, Eclipse, Quip, IntelliJ, EditPlus, WordPad, Notepad++, VS Code

Languages

C, C++, Java/J2EE, PL/SQL

Protocols

HTTP, HTTPS, RTE, Java over HTTP, FTP, SMTP, SOAP, REST (Micro-Services)

Operating Systems

Windows NT/9x/XP/7/8/10 Professional, UNIX, Linux 6.0/7.0, Ubuntu 16.0

Professional Experience

US Bank / Javen Technologies Mar 2024 – Jun 2025

Role: Lead Performance Engineer / Architect

Projects: Money Movement, Bill Pay, Internal Transfers, External Transfers (Wires, ACH, A2A), Mobile Check Deposit, Apply Card / Apply Loan, Costco Payment Processing

RESPONSIBILITIES MANAGED

Collected and understood the non-functional and performance requirements.

Understood the workload model and application architecture to derive the user navigation flows and their integration points.

Prepared a detailed test plan.

Coordinated with the application team and collected metrics from server logs in Splunk and Datadog / AppDynamics to calculate the workload for each flow.
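
Workload calculations like the one above usually reduce to per-user pacing plus Little's Law. A minimal sketch with hypothetical numbers (none taken from the engagements described here):

```python
import math

def pacing_per_user(target_tph: float, vusers: int) -> float:
    """Seconds between iteration starts for one vuser so the whole
    pool hits a target transactions-per-hour rate."""
    return 3600.0 * vusers / target_tph

def vusers_needed(target_tps: float, avg_response_s: float, think_time_s: float) -> int:
    """Little's Law: concurrency = throughput x time in system."""
    return math.ceil(target_tps * (avg_response_s + think_time_s))

# Hypothetical: 9000 transactions/hour spread over 50 vusers
# -> each vuser starts an iteration every 20 s.
print(pacing_per_user(9000, 50))      # 20.0
# 2.5 TPS with 2 s response + 8 s think time -> 25 vusers.
print(vusers_needed(2.5, 2.0, 8.0))   # 25
```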

Created performance test scripts for the identified flows in Mobile (iOS / Android) and Web UI using JMeter and LoadRunner.

Executed performance tests with user load simulated from mobile devices (iOS / Android) and the Web UI.

Created and monitored Datadog / AppDynamics dashboards to analyze end-to-end response times, the response-time breakdown across APIs and other integrated systems, and CPU, memory, and JVM utilization to identify bottlenecks.

Coordinated with the DevOps team to obtain access and to build, deploy, and run batch jobs for the test runs that depended on them; also coordinated and configured the load balancer via auto-scaling setup in the cloud.

Analyzed server (front-end, back-end) logs for each cluster layer to identify slowness or delays in end-to-end request response times.

Analyzed heap / thread dumps to identify the root cause of memory leaks and to determine whether JDBC connections were causing issues.
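
Thread-dump triage of this kind can be partly scripted. A minimal sketch that counts JVM thread states in a jstack-style dump; the dump text below is synthetic, and the only format assumption is the standard `java.lang.Thread.State:` line HotSpot emits:

```python
import re
from collections import Counter

def thread_state_counts(dump: str) -> Counter:
    """Count JVM thread states in a jstack-style dump by matching
    lines such as: java.lang.Thread.State: BLOCKED (on object monitor)."""
    return Counter(re.findall(r"java\.lang\.Thread\.State: (\w+)", dump))

# Tiny synthetic dump (a real one comes from `jstack <pid>` or kill -3):
dump = """\
"http-nio-8080-exec-1" #42 daemon prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"http-nio-8080-exec-2" #43 daemon prio=5
   java.lang.Thread.State: RUNNABLE
"http-nio-8080-exec-3" #44 daemon prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"""
print(thread_state_counts(dump))  # Counter({'BLOCKED': 2, 'RUNNABLE': 1})
```

A spike of BLOCKED threads on one monitor (for example, a JDBC connection pool) is a common signature of the contention issues described above.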

Identified and analyzed slow requests and traced their exception backtraces in the logs using profiler tools for deep-dive investigation.

Analyzed browsers and .HAR files from the browser's developer tools to compare and identify the captured network requests for each transaction.
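
.HAR analysis like this is easy to script once the capture is exported from the browser's Network tab. A minimal sketch, assuming the standard HAR 1.2 layout (`log.entries[].time` in milliseconds); the sample data is synthetic:

```python
def slowest_requests(har: dict, top_n: int = 3):
    """Return (time_ms, url) tuples for the slowest entries in a HAR capture."""
    entries = har["log"]["entries"]
    timed = [(e["time"], e["request"]["url"]) for e in entries]
    return sorted(timed, reverse=True)[:top_n]

# Minimal synthetic HAR; a real one would be loaded with
# json.load(open("capture.har")) after "Save all as HAR" in the browser.
har = {"log": {"entries": [
    {"time": 120.0, "request": {"url": "https://example.com/api/accounts"}},
    {"time": 2300.0, "request": {"url": "https://example.com/api/transfer"}},
    {"time": 45.0, "request": {"url": "https://example.com/static/app.js"}},
]}}
for ms, url in slowest_requests(har, 2):
    print(f"{ms:8.1f} ms  {url}")
```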

Analyzed the test results and documented a detailed report covering response times, the application's scalability, and the performance (CPU and memory) of the application, web, and database servers.

Initiated triage and technical troubleshooting sessions with the respective teams.

Provided recommendations and guidance to engineering teams on where issues lay and how to fix them.

Maintained test results and records on the Confluence wiki.

Tracked, assigned, and managed tasks and defects using the JIRA dashboard.

Coordinated with different agile teams (Application (UI & API), DevOps, DBA, and Security) on environment changes and upgrades.

Set up Jenkins configurations to execute jobs for each test, covering execution and report generation in the CI/CD pipeline.

Added Windows batch commands / shell commands to Jenkins builds to run executions, generate results, and collect log files.

Configured servers / Docker to collect backend logs so JMeter could deliver live execution status via Grafana.

Set build and post-build actions in Jenkins jobs.

Provided sign-off for every application before production release.

ENVIRONMENT: Apache JMeter 5.4.1, Gatling, OpenText LoadRunner, Grafana, Elasticsearch / InfluxDB, Datadog, ELK (Kibana), AppDynamics, Jenkins, AWS, Azure, Spring Boot Microservices, Java/J2EE framework, Unix (PuTTY, CyberArk), MongoDB, Cassandra, PostgreSQL, Kafka, SaaS, CaaS, PaaS, OpenShift, Postman, Confluence wiki, JIRA, Agile SAFe, GitHub, GitHub Copilot, Bitbucket, CA LISA, Selenium, IntelliJ, React JS, React Native

Wipro Limited Oct 2020 – Feb 2024

Role: Lead Performance Engineer / Architect

Projects:

Arizona Public Service, Phoenix, AZ - MAXIMO Automation, Power Plan

Ferguson Enterprises, Newport News, VA - Salesforce CRM Integrations, Manhattan WMS, Leads / Opportunities / Quotes / Orders

Best Buy, Richfield, MN - TMO – Retail, ADDON, DOTCOM, AcPack - ATT New Payment, ADDM_Agreement, TMO_CAP Postpaid, ATG Prepaid, ECM Connectivity – Print & Email

RESPONSIBILITIES MANAGED

Collected and understood the non-functional and performance requirements.

Coordinated with the application team and collected metrics from server logs to identify the business-critical traversal flows to be tested and the percentage of load for each flow.

Set up repositories using Bitbucket (Git) and Azure DevOps, including local master / branch setup for the mobile applications under test.

Used Databricks' built-in support for charts and visualizations in both Databricks SQL and notebooks.

Used PySpark to frame Databricks SQL for visualization, analyzing DataFrames with a combination of Python and SQL.

Created Azure Monitor dashboards to track memory, CPU utilization, and JVM performance.

Involved in JVM / database tuning and analyzed heap / thread dumps to identify root causes.

Created Jenkins jobs for each test, covering execution and report generation in the CI/CD pipeline.

Added Windows batch commands / shell commands to Jenkins builds to run executions, generate results, and collect log files.

Set build and post-build actions in Jenkins jobs.

Coordinated with the DevOps team to obtain access and to build, deploy, and run batch jobs for the test runs that depended on them.

Built and configured a Jenkins CI/CD pipeline to execute JMeter load tests with Grafana monitoring, storing results on an independent Unix server (similar to Docker).

Created JMeter scripts for Web (UI), SOAP, and REST APIs and configured scenarios based on requirements.

Set up JMeter command lines with scenario, result, and report logs to execute load tests in non-GUI mode.
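
A non-GUI JMeter invocation of this kind can be assembled as below; the flags `-n`, `-t`, `-l`, `-e`, `-o`, and `-J` are standard JMeter command-line options, while the file names and property names are placeholders:

```python
import shlex

def jmeter_cli(jmx: str, results_jtl: str, report_dir: str, props: dict) -> str:
    """Build a JMeter non-GUI command line: -n (non-GUI), -t (test plan),
    -l (results log), -e/-o (HTML report at end of run), -J (properties)."""
    parts = ["jmeter", "-n", "-t", jmx, "-l", results_jtl, "-e", "-o", report_dir]
    for key, value in props.items():
        parts.append(f"-J{key}={value}")
    return shlex.join(parts)

cmd = jmeter_cli("billpay.jmx", "results.jtl", "html_report",
                 {"threads": 50, "rampup": 300})
print(cmd)
# jmeter -n -t billpay.jmx -l results.jtl -e -o html_report -Jthreads=50 -Jrampup=300
```

In a Jenkins job this string would typically be run as an "Execute shell" build step, with `results.jtl` and `html_report/` archived as build artifacts.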

Generated and analyzed JMeter test results / HTML reports using the Simple Data Writer listener.

Configured and mapped Elasticsearch / InfluxDB servers to act as the backend listener target for JMeter, supporting Grafana monitoring.
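
One common way to wire a backend listener without hard-coding server details is to reference JMeter properties from the test plan via `__P()` (e.g. `${__P(influx.url)}` in the listener's URL field) and supply them in `user.properties`. A sketch that renders such a fragment; the property names (`influx.url`, etc.) are illustrative choices, not JMeter built-ins:

```python
def influx_backend_props(influx_url: str, app: str, measurement: str = "jmeter") -> str:
    """Render a user.properties fragment whose keys a test plan's
    backend listener fields can read via __P() functions."""
    lines = [
        f"influx.url={influx_url}",
        f"influx.application={app}",
        f"influx.measurement={measurement}",
    ]
    return "\n".join(lines) + "\n"

props = influx_backend_props("http://influx:8086/write?db=jmeter", "billpay")
print(props, end="")
```

Keeping the endpoint in properties lets the same .jmx run against dev, perf, and CI-hosted InfluxDB instances without editing the test plan.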

Used Red Hat OpenShift and Kubernetes in an AWS environment to monitor the server logs of the deployed application.

Set up AWS monitoring to collect back-end server response data during test execution.

Created Micro Focus LoadRunner scripts for retail / utility applications and configured scenarios based on requirements.

Created NeoLoad scripts for the Salesforce Web UI and configured scenarios based on requirements.

Monitored and analyzed server performance for APIs in Azure Cloud Monitoring / Splunk / ELK (Kibana) during tests to identify potential bottlenecks.

Used Kafka monitoring to verify that clusters met the expected SLAs.

Used Fiddler / Wireshark to capture, analyze, and troubleshoot requests / responses and the data passed in packets.

Used Dynatrace with PurePath to monitor and analyze results at each layer, focusing on the 95th-percentile line for each layer.
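
The "95% line" figures referenced above can be reproduced offline from raw samples. A sketch using the simple nearest-rank convention (JMeter's percentile columns follow a similar scheme); the timings are hypothetical:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the value below which pct% of the
    sorted samples fall."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical response times in ms for one layer:
times = [120, 95, 480, 130, 110, 2300, 140, 125, 160, 105]
# With only 10 samples the 95% line is the slowest sample.
print(percentile(times, 95))  # 2300
print(percentile(times, 50))  # 125
```

Comparing the median against the 95% line, as here (125 ms vs 2300 ms), is the usual way to spot a long-tail latency problem at one layer.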

Received sign-off for the requested REST microservice flows identified as critical business transactions.

Configured Gatling to run performance testing for APIs.

Prepared user profiles for the identified business flows.

Developed simulation scripts for web applications using JMeter 5.4.1.

Completed a POC for microservices using Gatling.

Collected historical CPU utilization data for shared processor pools, partitions, and systems using a Splunk dashboard.

Prepared a detailed test plan.

Analyzed the test results and documented a detailed report covering response times, the application's scalability, and the performance (CPU and memory) of the application, web, and database servers.

Initiated triage and technical troubleshooting working sessions with the respective teams.

Maintained test results and records on the Confluence wiki.

Tracked, assigned, and managed tasks and defects using the JIRA dashboard.

Coordinated with different teams (Application, DevOps, DBA, and Security) on environment changes and upgrades.

ENVIRONMENT: Apache JMeter 5.4.1, Gatling, Micro Focus LoadRunner, Tricentis NeoLoad, Salesforce, Manhattan WMS, IBM MAXIMO, Microservices, Grafana, Elasticsearch / InfluxDB, Azure APM monitor, AWS EC2, Apache Spark (Databricks), PySpark, Dynatrace, Jenkins, Azure DevOps, Bitbucket, Spring Boot, Java/J2EE framework, JBoss 7.1.5, Apache 2.4.23, UNIX (PuTTY, CyberArk), Oracle, Cosmos DB, Kafka, SaaS, CaaS, PaaS, OpenShift, Kubernetes, Postman, Confluence wiki, JIRA, Agile SAFe, GitHub, WireMock, LISA, Selenium, IntelliJ, Slack, Tomcat

Independent Contractor July 2016 – Dec 2019

Role: Sr. Performance Engineer

Clients & Projects

SJI Energy & Gas, Hammonton NJ – CC&B, Maximo, MWM/ORS, GIS

Master Card, NYC, NY – Pre-Checkout, Checkout

TJX, Marlborough, MA – Manhattan WMS, Global WMS

SSA, Gwynn Oak, MD – MCS

Salesforce.com, San Francisco, CA – AR Ops, PSE – SubCo: Admin / PM / Consultant / Apex Record Sharing, Customer Success Console (CSC), Big Top 9, Replace Eloqua

BCBS’s CareFirst (FEPOC), Washington, DC - Enrollment Audit / Imaging, Provider Data Enhancement (PDE), Claims Audit Remediation, IRS’s Code 6055, Claims Audit Monitoring Tool (CAMT)

Global Data Management Inc. Oct 2014 – July 2016

Role: Performance Engineer

Clients: BCBS’s CareFirst (FEPOC), Washington, DC July 2016 – Dec 2016

Projects: Enrollment Audit / Imaging, Provider Data Enhancement (PDE), Claims Audit Remediation, IRS’s Code 6055, Claims Audit Monitoring Tool (CAMT)

Clients: JPMC, Wilmington, DE

Projects: Enterprise Testing Services (JETS): This project provides mainframe application and infrastructure production-simulation testing; it was formerly called DR Testing or AIR (Advanced Internal Recovery) Testing.

Wipro Limited Aug 2011 – Sep 2013

Role: Lead Performance Engineer

Clients: Citigroup, Irving, TX

Projects: Rainbow USA (RUSA)

RESPONSIBILITIES MANAGED

Collected and understood the non-functional and performance requirements.

Coordinated with the application team and collected metrics from server logs to identify the business-critical traversal flows to be tested and the percentage of load for each flow.

Planned the schedule to run batch jobs and the test runs that depended on them.

Prepared a detailed test plan.

Prepared the traversal flow document for the identified business flows.

Prepared user profiles for the identified business flows.

Developed simulation scripts for web / mobile applications using VuGen or JMeter for UI / REST API-based microservices.

Executed smoke tests to validate the scripts, environment, and data.

Executed scenarios (test runs) in StormRunner (AWS Performance Center) to determine the scalability, stability, capacity, and response times of the identified transactions.

Monitored server performance with AppDynamics and ELK during tests to identify potential bottlenecks.

Collected historical CPU utilization data for shared processor pools, partitions, and systems using an ELK dashboard.

Created a Splunk dashboard to monitor the performance of each architecture layer.

Monitored server performance using Dynatrace APM during tests to identify potential bottlenecks.

Analyzed the test results and documented a detailed report covering response times, the application's scalability, and the performance (CPU and memory) of the application, web, and database servers.

Initiated technical troubleshooting working sessions with the respective teams.

Conducted Root Cause Analysis (RCA) sessions on system incidents.

Coordinated with different teams (Dev, Tech-Ops, DBA, and Security) on environment changes and upgrades.

Supported the site / application for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning.

Collected environment infrastructure and architectural communication details.

Understood the application, network, and database components along with third-party services.

Recommended environment scaling based on the application and its supporting microservices.

Assigned / allocated the number of instances, RAM, and disk size to recommend capacity.

Recommended performance environment scaling to the Infrastructure team.
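
Instance-count recommendations like those above typically come from arithmetic of the following shape; the throughput figures and the 70% headroom target are illustrative assumptions, not values from any of these engagements:

```python
import math

def instances_needed(peak_tps: float, per_instance_tps: float,
                     headroom: float = 0.7) -> int:
    """Size an instance pool so that at peak load each instance runs
    at no more than `headroom` (e.g. 70%) of its measured capacity."""
    return math.ceil(peak_tps / (per_instance_tps * headroom))

# Hypothetical: 400 TPS peak, each instance sustains 60 TPS,
# keep instances below 70% utilization -> 10 instances.
print(instances_needed(400, 60))  # 10
```

The headroom factor is the knob that absorbs traffic spikes and the loss of one instance during deployments or failures.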

ADMINISTRATIVE RESPONSIBILITIES

Moderated defect management calls to track defect status and assign the responsible team to get defects resolved, following Agile practices.

Organized daily / weekly status calls, meeting / conference invites, and conference bridge allocation requests.

Mentored new team members and assigned and reviewed scripting and analysis tasks.

ENVIRONMENT: HP LoadRunner, StormRunner (AWS Performance Center), ALM 12.0/12.55/12.57, VuGen, MF Performance Center 12.50, MF Quality Center 12.50, AppDynamics, Adlex, Jenkins, Java/J2EE, WebSphere, Web Services, SolarWinds, LPAR2RRD, Mainframe Reflection Workspace 15.4, VBA, MQ, AWR Reports

AIT Global Inc. July 2003 – Sep 2010

Role: Programmer / Analyst

Clients & Projects

NYK Line INC, Secaucus, NJ – IOCM (In-bound Out-bound Custom Module) Canada

U.S. BLS (Dept. of Labor), Washington, DC - Credentials Administrative Systems (CAS), Internet Data Collection facility (IDCF)

Thomson West, Eagan, MN - Westlaw to Novus, Admin Decisions

AISSMS COE, Pune, India - College Management System

COD Dehuroad, Pune, India - Payroll System

ROLES AND RESPONSIBILITIES

Implemented J2EE patterns, viz. MVC, Business Delegate, Façade, Session Factory, and DAO.

Created JSPs for the user interface; also created Action Forms to pass data and maintain session state.

Created Action classes to process the data received from Action Forms for the next action.

Configured struts-config.xml to map Action classes, Action Forms, and JSPs.

Created new parsers as filter classes and wired those filters into the base filter class.

Wrote business methods in stateless session beans, acting as a Session Façade, to fetch data from the DAO layer.

Wrote DAO components as a Java service layer interfacing with the databases.

Used connection pooling and DataSource lookup in the DAO layer, following J2EE design patterns.

Built the logging framework for each MVC layer using the Log4j API.

Wrote the Ant script to compile and run the code.

Tested code with JUnit for each release on local and DEV blades.

Developed code to handle synchronization, multithreading, and object pooling.

Made build requests for each release and version.

Tested the application manually and interacted with QA teams to sort out defects in the application as well as system performance issues.

Monitored the application and used profiler tools to dig out coding or server issues.

Worked with the performance engineering team to understand issues flowing from the application to the servers, avoiding memory leaks and garbage collection problems.

Involved in supporting the database migration from Siebel to Oracle.

ENVIRONMENT: J2EE, Java 1.5, JSP 2.0, Struts 1.2, XML, DTD, Eclipse 3.1.2, JBoss 4.0.3, Log4j, JUnit, FTP, HTTP, Unix, Oracle 10g, Siebel, Perl, Windows XP

