
Backend Developer Java

Location:
Pennsauken, NJ
Posted:
October 14, 2025


KAVYA NETHA PENDEM

Sr. Java Backend Developer

Email: **************@*****.***

Phone: +1-551-***-**** LinkedIn: linkedin.com/in/kavya-netha-pendem-8211a2178

Professional Summary

Core Java & Backend Expertise

10+ years of experience as a Backend Developer specializing in Java (8–17), with strong focus on building reliable, scalable, and high-performance enterprise applications across banking, insurance, healthcare, and telecom domains.

Proficient in Object-Oriented Programming, Collections, Streams, Generics, and Multithreading, delivering concurrent backend services capable of handling high-volume transactions under strict SLAs.
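The Streams-based aggregation mentioned above can be sketched as follows. This is a minimal illustration, not code from any of the projects below; the `Txn` record and the account/amount fields are hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative transaction record; field names are assumptions for this sketch.
public class TxnSummary {
    public record Txn(String account, double amount) {}

    // Group transactions by account and total their amounts with the Streams API.
    public static Map<String, Double> totalsByAccount(List<Txn> txns) {
        return txns.stream()
                .collect(Collectors.groupingBy(
                        Txn::account,
                        Collectors.summingDouble(Txn::amount)));
    }

    public static void main(String[] args) {
        var txns = List.of(new Txn("A", 10.0), new Txn("A", 5.0), new Txn("B", 2.5));
        System.out.println(totalsByAccount(txns));
    }
}
```

`Collectors.groupingBy` with a downstream `summingDouble` collapses a high-volume list into per-key totals in a single pass, which is the shape of most of the aggregation work described here.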

Skilled in applying design patterns (DAO, Factory, Singleton, Observer) and domain-driven design to build maintainable backend systems aligned with business rules.

Experienced in JVM tuning, garbage collection optimization, and thread management, ensuring consistent application performance in production.

Adept at modular backend architecture, ensuring each service is reusable, loosely coupled, and resilient to failures.

Microservices & API Development

Expertise in Spring Boot microservices, exposing secure and reusable REST APIs with standardized validation, structured error handling, and comprehensive exception frameworks.

Hands-on experience in Spring Security, OAuth2, JWT, and LDAP, implementing strong authentication and role-based authorization for sensitive business workflows.

Skilled in Spring MVC, WebFlux, and Spring Batch, enabling synchronous, reactive, and scheduled processing to support diverse enterprise use cases.

Designed and implemented workflow orchestration logic for backend services, supporting fraud detection, claims validation, and clinical alerts with configurable rules.

Strong knowledge of idempotency, retry mechanisms, and backoff strategies, ensuring backend reliability and preventing duplicate transactions in distributed systems.
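A retry-with-exponential-backoff policy of the kind referenced above can be sketched in a few lines. The attempt limit and base delay here are arbitrary example values, not figures from the resume.

```java
import java.util.concurrent.Callable;

// Minimal retry sketch: retry a failing call with exponentially growing delays.
public class RetryPolicy {
    public static <T> T callWithRetry(Callable<T> call, int maxAttempts, long baseDelayMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    // Exponential backoff: baseDelayMs * 2^(attempt - 1)
                    Thread.sleep(baseDelayMs << (attempt - 1));
                }
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated transient failure: succeeds on the third attempt.
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("transient failure");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

In a distributed system this is paired with idempotent handlers, so that a retry which succeeds after a partially applied first attempt cannot produce a duplicate transaction.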

Data & Messaging Systems

Extensive experience in Hibernate/JPA ORM with optimized entity mappings, caching strategies, and query tuning for relational databases such as Oracle, SQL Server, PostgreSQL, and IBM DB2.

Expertise in NoSQL databases including MongoDB, Cassandra, and Redis, managing unstructured datasets, caching frequently accessed records, and supporting fast lookups.

Skilled in building event-driven architectures using Apache Kafka, Kafka Streams, JMS, and Apache Camel, ensuring reliable message delivery with exactly-once processing.

Designed Redis and Coherence caching layers to reduce database load and improve response times by up to 25–40%, enabling faster decision-making for end users.

Experienced in implementing Elasticsearch indexes with analyzers and filters, supporting sub-second searches for claims data, fraud records, and compliance logs.

Quality, Performance & Compliance

Advocate of Test-Driven Development (TDD) and Behavior-Driven Development (BDD) with JUnit, Mockito, Cucumber, and REST Assured, ensuring reliable code and reducing regression defects.

Conducted performance and load testing with JMeter, tuning JVM memory, thread pools, and SQL queries to maintain SLA compliance during peak transaction volumes.

Implemented structured logging, correlation IDs, and audit trails, improving observability and reducing mean time to resolution (MTTR) by 30–40%.

Deep understanding of regulatory frameworks (PCI-DSS, HIPAA, SOX, AML), embedding encryption, masking, tokenization, and audit logging into backend workflows to ensure zero audit findings.

Achieved measurable backend improvements including 40% faster claims processing, 34% higher fraud detection accuracy, 25% quicker data retrieval, 30% cost savings, and 35% fewer production defects by optimizing backend services.

Technical Skills

Programming Languages: Java (8–17), SQL, PL/SQL, Python, Scala

Frameworks & APIs: Spring Boot, Spring MVC, Spring WebFlux, Spring Security, Spring Batch, Spring Integration, Hibernate/JPA, REST, SOAP (CXF, Axis2)

Databases: Oracle 11g/19c, SQL Server, PostgreSQL, IBM DB2, MongoDB, Cassandra, Redis

Messaging & Integration: Apache Kafka, Kafka Streams, JMS, Apache Camel

Caching & Search: Redis, Oracle Coherence, Elasticsearch

Testing Tools: JUnit, Mockito, REST Assured, Cucumber, JMeter

Build & Version Control: Maven, Gradle, Git, SVN

Deployment & Servers: IBM WebSphere, Oracle WebLogic, Apache Tomcat

Logging & Monitoring: Splunk, ELK Stack, Prometheus, Grafana, Dynatrace

Security & Compliance: OAuth2, JWT, LDAP, WS-Security, PCI-DSS, HIPAA, SOX, AML

Methodologies: Agile, Scrum, Test-Driven Development (TDD), Behavior-Driven Development (BDD)

EDUCATION

Bachelor’s in Computer Science — Haindavi Degree College, Osmania University — 2013

PROFESSIONAL EXPERIENCE

Client: TD Bank, New Jersey

Role: Sr. Java Backend Developer | Feb 2024 – Present

Project: Fraud Detection and Alerting System

Project Description: Built an advanced fraud detection and alerting system to analyze high-volume banking transactions in real time. The system was designed to identify suspicious activity, generate alerts, and assist compliance teams in ensuring regulatory alignment. The platform integrated with multiple banking applications, handled large datasets, and provided resilient backend services to support critical fraud detection operations.

Responsibilities

Designed and implemented Spring Boot microservices that supported modular and reusable backend services for fraud detection workflows, ensuring that each service handled a clearly defined responsibility with minimal coupling.

Developed Java 17 concurrent components using multithreading and the Executor framework to process simultaneous fraud validation requests efficiently, maintaining high throughput under peak transaction loads.
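A fan-out pattern like the one described above can be sketched with the Executor framework. The validation rule (flagging amounts over a threshold) and the names are illustrative assumptions, not the bank's actual logic.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: validate many requests concurrently on a fixed-size pool.
public class ParallelValidator {
    public static List<Boolean> validateAll(List<Integer> amounts, int threads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Boolean>> futures = new ArrayList<>();
            for (int amount : amounts) {
                // Hypothetical rule: pass transactions at or below a threshold.
                futures.add(pool.submit(() -> amount <= 10_000));
            }
            List<Boolean> results = new ArrayList<>();
            for (Future<Boolean> f : futures) results.add(f.get());
            return results;
        } finally {
            pool.shutdown(); // release worker threads once all tasks are submitted
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(validateAll(List.of(500, 20_000, 9_999), 4));
        // [true, false, true]
    }
}
```

A bounded pool keeps throughput high under peak load while capping thread count, which is what keeps this kind of service stable under transaction spikes.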

Created robust RESTful APIs using Spring MVC to enable communication between fraud detection services and external banking systems, ensuring consistency in request validation and error handling.

Applied Spring Security to enforce authentication and authorization across backend endpoints, ensuring secure access for internal fraud teams and protecting sensitive financial data.

Leveraged JPA and Hibernate for data persistence and transaction management, building optimized entity mappings to handle large volumes of fraud case records without performance degradation.

Tuned Oracle 19c database queries with indexing and partitioning strategies to accelerate fraud case retrieval and reduce latency for analyst investigations.

Built Apache Kafka consumers and producers to handle fraud event ingestion, ensuring reliable message delivery with proper partitioning and offset management.

Implemented Kafka Streams pipelines to process transaction data in real time, transforming and aggregating records for downstream fraud analytics.

Designed idempotent service layers that prevented duplicate fraud alerts, ensuring data accuracy and avoiding unnecessary escalations in compliance workflows.
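The core of an idempotent layer like this is a seen-key check before any side effect. The sketch below uses an in-memory map as a stand-in; in a real deployment the registry would live in shared storage (e.g. Redis) so all instances agree on what has been processed.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Idempotency sketch: emit each alert at most once per event key.
public class IdempotentAlerts {
    private final Map<String, Boolean> seen = new ConcurrentHashMap<>();
    private int emitted = 0;

    // Returns true only the first time a given event key is processed.
    public synchronized boolean emitAlert(String eventKey) {
        if (seen.putIfAbsent(eventKey, Boolean.TRUE) != null) {
            return false; // duplicate: already handled, suppress the alert
        }
        emitted++; // side effect (alert emission) happens exactly once per key
        return true;
    }

    public int emittedCount() { return emitted; }

    public static void main(String[] args) {
        IdempotentAlerts alerts = new IdempotentAlerts();
        System.out.println(alerts.emitAlert("txn-42")); // true
        System.out.println(alerts.emitAlert("txn-42")); // false (duplicate suppressed)
    }
}
```

`putIfAbsent` makes the check-and-record step atomic, so even concurrent redeliveries of the same event cannot both pass the guard.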

Used Apache Camel integration patterns to orchestrate communication between legacy systems and modern services, enabling seamless data flow across the fraud detection ecosystem.

Developed transaction management logic with Spring and Hibernate to maintain data integrity, applying rollback strategies for failed transactions.

Optimized connection pooling and caching configurations in the backend to handle fluctuating fraud transaction loads and maintain consistent service performance.

Created Redis caching layers to store frequently accessed reference data, improving fraud validation response times and reducing database load.
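The caching pattern here is cache-aside: check the cache, fall back to the database on a miss, then populate the cache. This sketch substitutes a `ConcurrentHashMap` for Redis so it stays self-contained; the loader function stands in for the database call.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache-aside sketch; a ConcurrentHashMap stands in for Redis here.
public class CacheAside<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private int misses = 0;

    public synchronized V get(K key, Function<K, V> loader) {
        V cached = cache.get(key);
        if (cached != null) return cached;   // hit: no database call
        misses++;
        V loaded = loader.apply(key);        // miss: load from the source of truth
        cache.put(key, loaded);              // populate for subsequent reads
        return loaded;
    }

    public int missCount() { return misses; }

    public static void main(String[] args) {
        CacheAside<String, String> cache = new CacheAside<>();
        cache.get("ref-1", k -> "loaded-" + k); // miss: loader invoked
        cache.get("ref-1", k -> "loaded-" + k); // hit: served from cache
        System.out.println(cache.missCount());  // 1
    }
}
```

With read-heavy reference data, most requests become cache hits, which is where the database-load reduction and faster validation responses come from.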

Designed audit logging mechanisms to record every critical backend operation, ensuring compliance with AML and SOX regulations.

Conducted code reviews and peer programming sessions with junior developers to enforce best practices in Java coding standards, design patterns, and backend architecture.

Wrote detailed JUnit and Mockito unit tests to validate service logic, ensuring comprehensive coverage and reducing production bugs.

Automated integration testing using REST Assured to validate fraud detection APIs against expected contract responses across multiple environments.

Performed performance testing with JMeter, analyzing transaction throughput and tuning JVM configurations to reduce garbage collection overhead.

Implemented asynchronous message handling in fraud validation services, improving responsiveness and enabling scalable processing of high-volume fraud requests.

Designed batch processing jobs with Spring Batch to handle scheduled fraud data reconciliation tasks, ensuring data consistency across systems.

Partnered with database administrators to design partitioned fraud transaction tables, reducing retrieval times by 25% and improving analyst productivity.

Built exception handling frameworks across microservices to ensure clear logging, consistent error codes, and proper notification of failures.

Ensured regulatory compliance by embedding strict data validation, masking, and encryption mechanisms at the service layer, aligning with PCI-DSS standards.

Designed domain-driven service models that represented fraud cases, alerts, and transaction workflows in a structured and maintainable manner.

Implemented retry and backoff strategies in service calls to external systems, ensuring resiliency during transient failures and reducing downtime risks.

Developed workflow orchestration logic in Java to support complex fraud case routing rules, ensuring alerts were prioritized correctly for analysts.

Strengthened backend observability by implementing structured logging and correlation IDs, enabling traceability of fraud events across distributed services.
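The correlation-ID mechanism can be sketched with a `ThreadLocal` carrying one ID through a request's call chain so every log line ties back to the same fraud event. Production code would typically use SLF4J's MDC for this; the plain `ThreadLocal` below just illustrates the mechanism without the logging dependency.

```java
// Correlation-ID sketch: one ID per request, prefixed onto every log line.
public class CorrelationContext {
    private static final ThreadLocal<String> CORRELATION_ID = new ThreadLocal<>();

    public static void set(String id) { CORRELATION_ID.set(id); }
    public static void clear() { CORRELATION_ID.remove(); }

    // Prefix a log message with the current request's correlation ID.
    public static String logLine(String message) {
        String id = CORRELATION_ID.get();
        return "[" + (id == null ? "no-correlation-id" : id) + "] " + message;
    }

    public static void main(String[] args) {
        set("fraud-evt-123");
        try {
            System.out.println(logLine("scoring transaction"));
            System.out.println(logLine("alert raised"));
        } finally {
            clear(); // always clear, so pooled threads don't leak IDs across requests
        }
    }
}
```

Grepping logs for one ID then reconstructs the full path of a single event across services, which is what drives the MTTR improvements cited below.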

Practiced Test-Driven Development (TDD) by writing tests before implementing backend logic, ensuring cleaner code design and reduced defects.

Reduced average fraud dashboard response times by 20% through backend query optimization and caching strategies that minimized database bottlenecks.

Improved fraud detection accuracy by 34% by integrating graph-based relationship checks within fraud validation microservices.

Lowered fraud case retrieval times by 25% through Oracle query tuning and optimized entity relationships in Hibernate mappings.

Achieved 40% reduction in mean time to resolution (MTTR) for backend incidents by embedding structured logging and alerting in fraud microservices.

Reduced infrastructure costs by 30% by optimizing backend memory consumption, tuning thread pools, and eliminating redundant data processing pipelines.

Environment: Java 17, Spring Boot, Spring MVC, Spring Security, Spring Batch, Spring Integration, Hibernate/JPA, REST APIs, Apache Kafka, Kafka Streams, Apache Camel, Oracle 19c, Oracle 11g, Redis, JUnit, Mockito, REST Assured, JMeter, Git, Maven, Agile/Scrum

Client: Assurant Insurance, New York

Role: Java Backend Developer | Sep 2022 – Jan 2024

Project: Dynamic Claims Orchestration and Event Automation

Project Description: Focused on building a dynamic claims orchestration platform that automated the intake, validation, and settlement of insurance claims. It streamlined workflows between multiple insurance systems, provided reliable backend integrations, and supported compliance requirements across claims operations.

Responsibilities

Designed and developed Spring Boot microservices in Java 11 to manage claims intake, validations, and approvals, ensuring modularity and scalability of backend workflows across multiple systems.

Created RESTful APIs for secure and reliable communication between claims services and external applications, implementing detailed input validation and structured error handling to reduce integration defects.

Applied concurrency and multithreading techniques in critical service components, enabling simultaneous processing of claim events while maintaining consistency and integrity.

Leveraged Spring Security to enforce robust authentication and authorization mechanisms, ensuring that sensitive claims data was only accessible by authorized users.

Integrated JPA and Hibernate ORM for persistence operations, designing optimized entity relationships to manage complex claims data with high efficiency and accuracy.

Tuned SQL Server queries by introducing indexes, query hints, and optimized joins, which improved the response time for claims lookups and history retrieval.

Migrated legacy database schemas into redesigned structures that supported partitioning and better query performance, reducing load on production databases.

Implemented Kafka-based event pipelines to orchestrate asynchronous claim events such as submissions, approvals, and settlements, ensuring high reliability in message delivery.

Configured Kafka partitions, batching, and consumer groups to scale backend processing and maintain consistent performance during seasonal surges in claims activity.

Designed retry and error recovery strategies in messaging flows to handle transient system failures, ensuring that claim events were never lost or duplicated.

Built Apache Camel integrations to connect with external insurance systems, enabling reliable data flow between policy management, fraud detection, and payment modules.

Developed batch processing jobs using Spring Batch to handle large sets of claim records for reconciliation, validation, and periodic reporting.

Applied idempotent message handling in backend services, preventing duplicate settlements and ensuring accuracy in claim orchestration workflows.

Designed audit logging and traceability features that captured every backend operation, ensuring compliance with insurance audit and regulatory requirements.

Partnered with database teams to implement stored procedures and triggers for data integrity, ensuring consistency across claims-related transactions.

Conducted root cause analysis for performance bottlenecks in backend services, tuning JVM heap configurations and connection pools to achieve higher stability.

Designed data models for MongoDB to store unstructured claims records, enabling efficient retrieval of documents such as customer histories and attachments.

Implemented Redis caching strategies to accelerate retrieval of frequently accessed claim reference data, reducing average API response times.

Strengthened backend error handling frameworks, ensuring that unexpected failures were logged with meaningful messages and did not disrupt critical processing.

Practiced Test-Driven Development (TDD) by writing JUnit and Mockito test cases before service implementation, ensuring reliable code and reducing defects.

Automated integration testing with REST Assured to validate backend service contracts and prevent regression issues across microservices.

Performed load and performance testing using JMeter, ensuring that claims orchestration services maintained SLAs under peak transaction volumes.

Designed Elasticsearch indexes to enable fast and flexible search capabilities for claims history, empowering analysts to quickly retrieve relevant case data.

Implemented workflow orchestration logic to prioritize claims processing, ensuring that urgent or high-value claims were handled first.

Secured backend SOAP integrations with WS-Security and XSD validation, maintaining compatibility with legacy insurance systems while modernizing workflows.

Collaborated with compliance and audit teams to align backend workflows with insurance regulatory standards, ensuring that claims data remained secure and compliant.

Mentored junior developers on backend best practices, guiding them in Java, Spring Boot, Hibernate, and Kafka development patterns to improve delivery quality.

Authored detailed runbooks, troubleshooting guides, and operational playbooks to support backend services, streamlining production support readiness.

Improved claims processing throughput by 40% by optimizing Kafka brokers, batching strategies, and consumer group configurations.

Reduced average claim settlement cycle time by 30% through backend automation and workflow orchestration, eliminating redundant manual validations.

Achieved a 25% improvement in API response times by tuning SQL queries, applying Redis caching, and optimizing Hibernate entity mappings.

Lowered production regression defects by 35% by enforcing TDD practices, contract testing, and comprehensive JUnit and Mockito coverage.

Enhanced claims search efficiency by 50% by implementing Elasticsearch indexes with custom analyzers and optimized filters for high-volume datasets.

Environment: Java 11, Spring Boot, Spring MVC, Spring Security, Hibernate/JPA, REST APIs, Kafka, Apache Camel, Spring Batch, SQL Server, MongoDB, Redis, Elasticsearch, JUnit, Mockito, REST Assured, Cucumber, JMeter, Git, Maven, Agile/Scrum

Client: Capital Group, California

Role: Java Backend Developer | Jan 2021 – Aug 2022

Project: Enterprise Collaboration Platform for Investment Operations

Project Description: Delivered a secure collaboration and integration platform for investment operations. The system was designed to unify portfolio, trading, and research workflows, streamline communication between multiple financial applications, and ensure compliance across data exchanges.

Responsibilities

Designed and implemented Spring Boot microservices in Java 8 to provide backend services for portfolio management, trading workflows, and research collaboration, ensuring modular design and reusability across multiple business domains.

Developed secure RESTful APIs to expose collaboration services to external systems, applying strict input validation, response standardization, and robust exception handling for consistent service reliability.

Applied multithreading and concurrency controls in backend processing components to handle high-volume trading and research data, improving throughput and stability during peak market hours.

Leveraged Hibernate ORM with JPA for persistence operations, building optimized mappings that supported both transactional data and reference datasets with minimal performance overhead.

Designed and fine-tuned Elasticsearch indexes to enable fast search across investment documents, financial records, and compliance data, supporting quick decision-making for analysts.

Built Kafka-based event pipelines to orchestrate asynchronous communication between microservices, ensuring reliable message processing and smooth data flow between collaboration modules.

Implemented Spring Security with OAuth2 and LDAP integration to enforce user authentication and role-based authorization, securing backend endpoints in compliance with investment security standards.

Configured Apache Camel connectors with error handling and retries to integrate backend services with legacy trading applications and regulatory systems, enabling seamless interoperability.

Designed Redis caching strategies to accelerate data retrieval for frequently used endpoints, reducing backend latency and ensuring smoother analyst interactions with the system.

Created audit logging mechanisms across microservices to capture every sensitive backend action, ensuring full traceability for compliance audits and regulatory checks.

Conducted root cause analysis for backend performance bottlenecks and applied JVM tuning, connection pooling optimization, and query restructuring to improve transaction stability.

Practiced Test-Driven Development (TDD) by writing JUnit and Mockito test cases, improving unit test coverage and ensuring clean and maintainable backend code.

Automated API integration tests with REST Assured, validating interoperability between collaboration services and minimizing the risk of regression issues across environments.

Performed load testing with JMeter on backend APIs to validate system resilience under high data ingestion rates, tuning Kafka brokers and database queries to meet performance SLAs.

Implemented data encryption policies for sensitive collaboration records, ensuring compliance with financial data protection regulations and eliminating unauthorized access risks.

Built workflow orchestration logic to manage approval processes and task routing across investment teams, reducing manual interventions and ensuring timely decisions.

Improved API response times by 25% through a combination of Redis caching, Hibernate optimizations, and fine-tuned SQL queries.

Enhanced system reliability by 30% by designing retry logic, exception frameworks, and idempotent transaction handling for backend services.

Reduced mean time to resolution (MTTR) by 35% through structured logging, correlation IDs, and proactive monitoring of backend collaboration services.

Achieved 40% improvement in backend throughput during peak usage periods by optimizing Kafka consumer groups, batching logic, and partition strategies.

Lowered regression defects by 30% through disciplined TDD practices, JUnit coverage, and automated contract testing with REST Assured.

Environment: Java 8, Spring Boot, Spring MVC, Spring Security, Hibernate/JPA, REST APIs, Apache Kafka, Apache Camel, Elasticsearch, Redis, SQL Server, JUnit, Mockito, REST Assured, JMeter, Git, Maven, Agile/Scrum

Client: Kaiser Permanente, California

Role: Java Backend Developer | Nov 2019 – Dec 2020

Project: Proactive Health Data Streaming and Alert Management

Project Description: Aimed to deliver a real-time patient monitoring and clinical alert system that captured and processed vital health data. The platform supported proactive alerts, automated workflows, and reliable data pipelines, ensuring clinicians could respond quickly to emergencies. It was designed with resilience, compliance, and high throughput in mind, enabling healthcare providers to improve patient safety and align with regulatory standards.

Responsibilities

Designed and developed Spring Boot microservices in Java 8 to process patient monitoring data, ensuring modular services could be reused across multiple clinical workflows.

Built RESTful APIs that securely exposed patient data services to external hospital applications, implementing strict request validation and structured exception handling for reliability.

Implemented Spring WebFlux reactive APIs to handle large volumes of concurrent patient monitoring events, significantly reducing alert latency during critical situations.

Engineered Kafka Streams pipelines with replay and idempotency logic, enabling real-time alert generation while ensuring accurate and consistent processing of patient health data.

Applied transaction management with JPA and Hibernate to maintain reliable persistence of patient records, ensuring data integrity across multiple medical systems.

Tuned PostgreSQL queries through indexing, partitioning, and optimization techniques, reducing query execution times and improving responsiveness for high-volume clinical datasets.

Designed MongoDB collections to manage unstructured health data such as medical histories and diagnostic records, improving flexibility in handling complex clinical cases.

Implemented Redis caching layers to accelerate frequent patient lookups, improving the responsiveness of critical alert services and reducing strain on the database.

Developed JMS messaging pipelines to decouple backend components, ensuring reliable delivery of patient alerts across emergency and monitoring departments.

Created SOAP service integrations with WS-Security and schema validation to maintain compatibility with hospital legacy systems while supporting modernization efforts.

Applied Spring Security with OAuth2 and LDAP integration to enforce robust access control, ensuring sensitive health data was protected in compliance with HIPAA regulations.

Designed audit logging and traceability mechanisms across backend services, ensuring that every alert and data transaction was captured for compliance and investigations.

Conducted unit testing with JUnit and Mockito, practicing Test-Driven Development (TDD) to ensure cleaner code, high test coverage, and reduced regression defects.

Automated integration testing with REST Assured to validate API contracts between backend services, reducing risks of failures during production deployments.

Performed load testing with JMeter, analyzing system throughput under high patient admission volumes and tuning JVM configurations, Kafka brokers, and database connections to ensure stability.

Improved system responsiveness by 25% by optimizing database queries, enhancing caching strategies, and reducing unnecessary backend calls.

Enhanced alert processing accuracy by 30% by implementing idempotency checks and validation rules in Kafka Streams pipelines.

Achieved a 35% reduction in mean time to resolution (MTTR) for backend incidents by embedding structured logging, correlation IDs, and clear exception handling.

Increased system scalability by 40% by designing concurrency-safe components and optimizing consumer group configurations in Kafka.

Eliminated compliance-related audit issues through proactive backend logging, encryption, and strict adherence to HIPAA standards.

Environment: Java 8, Spring Boot, Spring WebFlux, Spring Security, Hibernate/JPA, REST APIs, Apache Kafka, Kafka Streams, JMS, PostgreSQL, MongoDB, Redis, SOAP (WS-Security), JUnit, Mockito, REST Assured, JMeter, Git, Maven, Agile/Scrum

Client: Zurich Insurance, New York

Role: Java Backend Developer | Mar 2018 – Oct 2019

Project: Adaptive Predictive Analytics and Log Monitoring

Responsibilities

Designed and implemented Spring Boot microservices in Java 8 that unified log ingestion workflows, replacing fragmented legacy modules and ensuring long-term maintainability of backend services.

Built RESTful APIs to expose core analytics and monitoring services, standardizing communication across insurance applications while enforcing strict validation and exception handling.

Developed Apache Kafka consumers to reliably ingest high-volume log streams, ensuring resilience with offset management, partition strategies, and error-handling mechanisms.

Implemented Apache Spark Streaming jobs with replay and deduplication logic to process real-time logs, enabling predictive anomaly detection and improving system intelligence.

Applied Hibernate ORM with L2 caching and query optimizations, ensuring backend services handled large datasets efficiently while reducing database latency.

Optimized Cassandra clusters with compaction strategies and MongoDB TTL policies to enforce data retention, aligning with strict insurance compliance requirements.

Built and configured JMS messaging pipelines to decouple backend log processing services, improving system stability during spikes in ingestion volume.

Integrated SOAP APIs with CXF and Axis2, applying schema validation and WS-Security policies to support secure and standards-compliant communication with external partners.

Designed and implemented Coherence caching strategies for hot read-heavy scenarios, significantly reducing database load and ensuring SLA adherence.

Conducted performance testing with JMeter, tuning JVM garbage collection, Hibernate cache strategies, and SQL joins to achieve consistent SLA compliance during peak usage.

Enhanced Splunk and ELK-based dashboards by delivering enriched backend log data, enabling faster troubleshooting and improved correlation of system anomalies.

Improved query response times by 20% by tuning database joins, caching strategies, and optimizing Hibernate mappings, resulting in faster analytics processing.

Increased system anomaly detection accuracy by 30% by leveraging Spark pipelines with predictive analytics logic, reducing false positives and improving monitoring precision.

Reduced downtime incidents by 25% by implementing resilient Kafka and JMS messaging pipelines that stabilized backend log processing under heavy load.

Lowered mean time to resolution (MTTR) by 35% by embedding structured logging, audit trails, and correlation IDs across backend services, improving troubleshooting efficiency.

Achieved full compliance audit readiness through backend encryption, audit logging, and strict enforcement of regulatory data retention rules.

Environment: Java 8, Spring Boot, Spring MVC, Hibernate/JPA, Apache Kafka, Apache Spark Streaming, JMS, Cassandra, MongoDB, Oracle WebLogic, SOAP (CXF, Axis2), Coherence Cache, Splunk, ELK, JUnit, Mockito, JMeter, Git, Maven, Jenkins, Agile/Scrum

Client: Airtel, Hyderabad, India

Role: Java Backend Developer | Jul 2013 – Oct 2016

Project: Autonomous Event-Driven Telecom Monitoring

Responsibilities

Designed and implemented Struts MVC modules and backend workflows that automated the handling of telecom monitoring events, reducing operator errors and improving overall process accuracy.

Developed Spring and Hibernate-based persistence layers for secure transaction management, ensuring consistent storage and retrieval of telecom event data across distributed systems.

Built RESTful APIs and SOAP services with WSDL and UDDI standards to integrate with external telecom vendors and systems, enabling seamless interoperability across multiple network partners.

Applied DAO, Factory, and Singleton design patterns to backend modules, improving modularity, maintainability, and long-term scalability of the monitoring platform.

Engineered JMS pipelines with durable subscriptions and retry mechanisms, ensuring reliable delivery of high-volume telecom events while maintaining SLA-driven alerting.

Optimized Oracle 8i and IBM DB2 queries through indexing, caching, and connection pooling, which improved query performance and reduced delays during peak telecom traffic.

Tuned application servers on IBM WebSphere and Oracle WebLogic by adjusting thread pools, JDBC settings, and JVM heap configurations, ensuring system resilience under fluctuating workloads.

Designed and implemented LDAP-based authentication and RBAC policies, securing access to the backend telecom monitoring services.


