
Performance Engineer Testing

Location:
Boston, MA, 02109
Posted:
May 21, 2025

Sam Moussaoui

(Mobile) (***) *** - ****

Email : ************@*****.***

www.linkedin.com/in/sam-a-moussaoui-7516182

SUMMARY

Principal Performance Engineer | Observability | Cloud-Scale Systems | CI/CD | Streaming Data

Dynamic and impact-driven Principal Performance Engineer with 15+ years of experience scaling and optimizing high-throughput, mission-critical systems across financial, cloud, and cybersecurity domains. Proven ability to drive root-cause analysis, streamline cloud-native applications, and lead large-scale performance initiatives in remote-first environments. Adept with observability tools (Dynatrace, Splunk, Datadog), stream processing (Kafka, Cribl), and cloud infrastructure (AWS, Azure, Docker). A strong communicator and team mentor who thrives in cross-functional teams, and knows how to ship code, crush bottlenecks, and even throw in a solid goat pun when needed.

Core Competencies & Technical Skills:

Performance Monitoring & APM: Dynatrace, Datadog, Splunk, Synthetic Monitoring.

Performance Engineering & Testing: JMeter, LoadRunner, SQL, SolarDB.

Performance Testing & Optimization: Stress Testing, Load Testing, Endurance Testing, Workload Design, and Root Cause Analysis.

Cloud & Infrastructure: Azure, AWS, Docker, Kubernetes, Cloud Services, Application Server Tuning.

Programming & Scripting: Java, Python, Bash, SQL (Linux environments).

DevOps & CI/CD: Jenkins, GitHub Actions, GitLab, Git, Automation Pipelines.

Database Optimization & Query Tuning: SQL, Indexing, Query Optimization.

Databases: SQL Server, Oracle, PostgreSQL, MySQL, Redis

Stakeholder Leadership: Cross-functional collaboration, vendor management, incident resolution.

Performance Modeling & Analytics: Statistical and analytical models for workload characterization, bottleneck analysis, anomaly detection, and scalability assessment, ensuring efficient and reliable software systems under diverse conditions.
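The anomaly-detection modeling mentioned above can be illustrated with a minimal sketch: flagging response-time samples whose z-score exceeds a threshold. The 3-sigma cutoff and the sample data are illustrative defaults, not values from any specific engagement.

```python
import statistics

def detect_anomalies(latencies_ms, z_threshold=3.0):
    """Flag latency samples whose z-score exceeds the threshold.

    A minimal sketch of statistical anomaly detection on performance
    data; the 3-sigma threshold is a common default, assumed here.
    """
    mean = statistics.mean(latencies_ms)
    stdev = statistics.pstdev(latencies_ms)
    if stdev == 0:
        return []  # flat series: nothing can be anomalous
    return [
        (i, x) for i, x in enumerate(latencies_ms)
        if abs(x - mean) / stdev > z_threshold
    ]

# A 1000 ms spike among steady 100 ms samples is flagged:
# detect_anomalies([100] * 30 + [1000]) -> [(30, 1000)]
```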

EXPERIENCE

Principal Performance Engineer/Architect, Charles River Development (a State Street company)

Aug 2018 – June 2024

Managed cross-functional teams to define, design, execute, and oversee end-to-end performance testing projects, ensuring the financial trading platform CRD met client SLAs and was ready for onboarding with new software versions or migration to Azure cloud.

Reduced User Response Times by 35%: Eliminated bottlenecks (CPU, Memory, I/O), improving user interactions and application speed.

Boosted SQL Performance by 15%: Optimized SQL execution, reducing query times and improving application responsiveness.

Optimized Batch Job Performance by 20%: Enhanced reporting and daily transactions, reducing elapsed times and increasing productivity.

Test Development & Strategy: Developed a web services API testing framework for CRD's high-traffic financial application, improving user response times, speeding data retrieval, and ensuring production readiness and efficiency.

Led Cross-Functional Teams: Drove performance improvements on Microsoft Azure for CRD trading apps, improving performance, scalability and reliability.

Cut Performance Testing Engagement by 60%: Reduced client performance testing engagement time from 3 months to 1 month, accelerating delivery.

CI/CD Integration: Integrated performance tests into CI/CD pipelines on Azure, enabling continuous delivery while maintaining performance benchmarks.

Root Cause Analysis: Utilized Dynatrace to monitor system performance, identify bottlenecks, and collaborate with developers to resolve critical issues.

Developed custom monitoring solutions using Dynatrace RESTful APIs to enhance performance monitoring capabilities, enabling successful diagnosis of performance issues.
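The custom Dynatrace REST API monitoring described above might look like the following sketch, which queries the public Dynatrace Metrics API v2 and flattens the response into (timestamp, value) pairs. The environment URL and token are placeholders; only the endpoint shape follows the documented API.

```python
import json
import urllib.request

DT_BASE_URL = "https://example.live.dynatrace.com"  # hypothetical environment URL
DT_API_TOKEN = "dt0c01.XXXX"                        # placeholder API token

def query_metric(metric_selector, resolution="1m"):
    """Fetch a metric series via the Dynatrace Metrics API v2.

    Uses the documented /api/v2/metrics/query endpoint with an
    Api-Token header; URL and token above are placeholders.
    """
    url = (f"{DT_BASE_URL}/api/v2/metrics/query"
           f"?metricSelector={metric_selector}&resolution={resolution}")
    req = urllib.request.Request(
        url, headers={"Authorization": f"Api-Token {DT_API_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_datapoints(payload):
    """Flatten a Metrics API v2 response into (timestamp, value) pairs."""
    points = []
    for result in payload.get("result", []):
        for series in result.get("data", []):
            points.extend(zip(series["timestamps"], series["values"]))
    return points
```

In practice, `extract_datapoints(query_metric("builtin:host.cpu.usage"))` would feed a dashboard or alerting check; the parsing step is shown separately so it can be exercised without network access.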

Utilized Google Analytics to analyze user behavior and performance metrics, leading to a 30% improvement in system responsiveness through optimized user flows and reduced bottlenecks.

Utilized Wireshark and tcpdump to troubleshoot network issues on a dedicated leased line, identifying and resolving bottlenecks and improving system performance by 20%.

App Performance Monitoring: Implemented various dashboards for application monitoring using Dynatrace APM.

Utilized Datadog and Splunk to monitor and analyze system performance, identifying bottlenecks and optimizing application performance, improving system responsiveness.

Team Leadership & Development: Led and mentored high-performing teams, fostering a collaborative environment that enhanced productivity and operational efficiency.

Software Engineer II – Performance, Sophos, Inc.

Jan 2018 – July 2018

Built Performance Engineering Practice: Established a performance engineering methodology for cloud-native security products, integrating best practices into AWS-hosted environments and CI/CD pipelines using Jenkins, Gatling, and AWS CLI.

Improved Scalability & Reliability: Led performance testing efforts that uncovered architectural bottlenecks, resulting in a 20% improvement in system scalability and reduced production incidents.

Enabled Continuous Testing: Developed automated regression and load testing pipelines, boosting performance coverage and reducing performance-related defects in releases by 30%.

Enhanced Observability & Debugging: Introduced custom telemetry and fine-grained monitoring for microservices, accelerating root-cause analysis and driving resolution of CPU, memory, and thread-level inefficiencies.

Cross-Functional Collaborations: Worked closely with SRE, QA, and development teams to define non-functional requirements, establish SLAs, and ensure performance metrics were part of the Definition of Done.

Principal Performance Engineer, Kronos Incorporated

Jan 2016 – July 2017

Built Cloud Performance Strategy on GCP: Led performance engineering for a newly migrated architecture on Google Cloud Platform, establishing baseline metrics and iteratively tuning both infrastructure and applications through structured test cycles.

Modeled Real-World Workloads with Synthetic Data: Designed and executed large-scale performance scenarios using LoadRunner, simulating real user behaviors with synthetically generated test data to mimic production usage patterns and validate system readiness.

Achieved 30% System Performance Gains: Profiled and optimized key backend services, improving system responsiveness, scalability, and resource utilization across a highly concurrent user base.

Developed Full-Lifecycle Performance Methodology: Introduced a performance strategy spanning planning through deployment, aligning KPIs with SLAs and integrating performance checkpoints into the development process.

Cross-Team Enablement & Advocacy: Acted as a performance subject-matter expert across development, QA, and infrastructure teams, embedding insights into CI/CD pipelines and improving delivery velocity and confidence.

Principal Performance Engineer, Carbon Black, Inc.

May 2015 – Nov 2015

Engineered Performance Strategy for Security Streaming Platform: Designed and led performance testing for a real-time threat detection system ingesting massive telemetry from thousands of endpoint agents across enterprise environments.

Validated Scalability Across Data Ingestion Pipeline: Simulated production-scale streaming scenarios using REST APIs and SolarDB, ensuring reliable throughput, low-latency parsing, and high-volume data handling under continuous load.

Reduced Query Latency & Boosted Throughput: Tuned ingestion and retrieval paths for large telemetry datasets, improving write performance by 20% and enabling faster incident triage in time-sensitive security operations.

Ensured Lightweight Endpoint Monitoring: Validated the low resource footprint of endpoint agents through rigorous CPU/memory profiling, ensuring real-time event capture without degrading host system performance.

Established Continuous Regression Pipeline: Built a daily performance regression suite for key security workflows, integrating with CI/CD to monitor trends, detect degradations early, and uphold SLAs for incident response systems.

IBM Corporation

Dec 2007 – April 2015

DevOps / Senior Cloud Performance Engineer

Jan 2015 – April 2015

Pioneered Performance Delivery on IBM Bluemix: Spearheaded the automation of performance test environments across pre-prod, staging, and production using Jenkins and Infrastructure-as-Code on IBM Bluemix Cloud, tailored for WebSphere Application Server deployments.

Integrated CI/CD with Scalable Performance Testing: Embedded performance benchmarks and regression suites into the CI/CD pipeline, enabling continuous validation of non-functional SLAs and rapid identification of performance degradations.

Led Production Deployments with Post-Validation Strategy: Executed live deployments of cloud data services and implemented post-deployment performance validations, ensuring seamless transitions and SLA compliance under real workloads.

Senior Performance Engineer – Information Server Suite (ETL, Data Quality, Governance)

Dec 2007 – Jan 2015

Led performance strategy, testing, and scalability efforts across IBM's Information Server Suite (ETL, Data Quality, Governance), focusing on complex, multi-tier clustered systems. Drove performance from design to delivery, supporting both product development and enterprise clients with mission-critical infrastructure.

Key Projects & Achievements

DataStage Lineage Engine Optimization:

- Reduced JVM heap usage by 92% via leak diagnostics and GC tuning.

- Identified and refactored high-CPU methods, improving execution time by 20%.

- Improved SQL execution time by 50% using IBM Optim Query Tuner.

- Enabled high-complexity DataStage job publishing to drop from hours to minutes.

Operations Console Scalability Project:

- Removed scaling bottlenecks across compute nodes in clustered environments.

- Improved memory heap efficiency by 48.5%, reducing query times and GC pause times.

- Decreased REST API and web UI response times via smarter object batching.

- Published IBM developerWorks white paper on best practices.

Logging Services Performance & Stability:

- Enhanced throughput across IBM WebSphere Application Server and IBM DB2 layers.

- Eliminated memory leaks in Log View UI components and redesigned for scale.

- Removed single points of failure and optimized I/O with SAN + SSD tuning.

- Delivered white paper tuning guidance to customer deployments worldwide.

DataStage Designer Scalability & API Performance:

- Scaled DS Designer thick-client operations from 43 to 100 concurrent users through GC tuning and memory optimization.

- Designed and executed performance test suites using IBM Rational Performance Tester (RPT) for nightly API benchmarking, catching regressions and bottlenecks early in dev/staging cycles.

Team Leadership & Customer Engagement

Led multiple cross-functional teams of 6+ engineers across major initiatives.

Delivered enterprise performance POCs, deployment architecture, and capacity planning for clients including Walmart, Target, Lowe's, Allstate, BOA, and UPS.

Helped secure multi-million-dollar licensing deals ($4.4M, $20M) through high-stakes performance advocacy and solution design.

Recognized by IBM Executive Leadership: Delivered strategic value across performance initiatives and client POCs—earning direct recognition from corporate VPs for excellence in product scalability, field delivery, and customer enablement.

Senior Software QA Engineer, Ascential (an IBM company)

2004 – 2007

Spearheaded performance testing for Logging Service infrastructure, proactively mitigating scalability and reliability risks.

Designed and implemented a comprehensive Java-based testing framework utilizing JUnitPerf and JMeter, creating realistic workloads and synthetic large-volume datasets to identify bottlenecks, drive improvements, and continuously expand regression test suites.
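The framework described above was Java-based (JUnitPerf/JMeter); as an illustration of the same idea, a concurrent virtual-user harness can be sketched as follows. The task under test, user count, and iteration count are all hypothetical stand-ins.

```python
import statistics
import threading
import time

def run_virtual_users(task, num_users=10, iterations=5):
    """Drive `task` from concurrent virtual users and collect latencies.

    A minimal sketch of a workload harness: each thread plays one
    virtual user, timing every call; parameters are illustrative.
    """
    latencies, lock = [], threading.Lock()

    def user():
        for _ in range(iterations):
            start = time.perf_counter()
            task()
            elapsed = time.perf_counter() - start
            with lock:  # guard the shared results list
                latencies.append(elapsed)

    threads = [threading.Thread(target=user) for _ in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    latencies.sort()
    return {
        "count": len(latencies),
        "mean_s": statistics.mean(latencies),
        "p95_s": latencies[int(0.95 * (len(latencies) - 1))],
    }
```

A real harness would replace `task` with an HTTP request or service call and feed the percentile summary into a regression gate, mirroring the bottleneck-hunting loop the bullet describes.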

Identified a critical point of failure in the logging event streaming architecture, resulting in a design rework that introduced buffer agents, reducing contention, increasing throughput, and lowering latency.

Developed and executed detailed test plans and automated tests, ensuring defect-free releases.

Managed offshore team in India responsible for continuous development of Java frameworks and execution of performance testing cycles.

Senior Middleware Performance Engineer, National Leisure Group

2002 – 2004

Optimized high-traffic enterprise applications ensuring 24/7 reliability and scalability for major travel-industry clients such as Expedia, Travelocity, Booking.com, and Sabre.

Identified and resolved critical application code issues, significantly enhancing scalability, reliability and performance.

Achieved 92% reduction in API response times, enhancing transaction processing efficiency.

Led J2EE application migration to JBoss, optimizing JVM and I/O configurations.

Designed automated CI/CD pipelines for nightly regression testing and real-time performance monitoring.

Software QA Lead – Product Development, C-bridge Internet Solutions

1999 – 2002

Led the performance testing strategy and execution for C-bridge distributed applications.

Designed and implemented large-scale lab for benchmarking N-tier J2EE applications.

Conducted load, stress, and scalability testing using automated tools, analyzing results to optimize application performance.

Developed custom test harnesses in Java to simulate concurrent virtual users for Java/CORBA framework testing.

Led development of Order Management Application on WebLogic server, highlighting J2EE platform expertise.

Supervised and mentored QA engineers, ensuring timely deliverables.

Instituted and deployed new QA processes, enhancing team efficiency.

EDUCATION

Master of Engineering Management, Tufts University, Medford, MA

Bachelor of Science in Computer Engineering, Wentworth Institute of Technology, Boston, MA

PUBLICATIONS

External to IBM:

- "Configuration and Tuning Guidelines for IBM InfoSphere DataStage Operations Console" (July 2012)

http://www.ibm.com/developerworks/data/library/techarticle/dm-1207configtuningdatastage/index.html

Internal to IBM:

- "End-to-End Deployment of IS 8.5 in a Linux WebSphere App Server Clustering Environment"

- "IS 8.1 DS Designer Scalability on AIX White Paper" (adopted by multiple IS 8.1 customers for tuning)

AWARDS & RECOGNITIONS

Dream Team Member Award for outstanding performance on large customer account ($4.4M license fees, 2012)

Senior Manager's Operations Efficiency Recognition Award (2011)

Champion Member of the Information Server Linux Platform (2007)

Information Server Bravo Award (2007)


