Software Developer - Front Office

Location:
San Jose, CA
Posted:
May 23, 2025

Summary

●Software Developer with * years of IT experience in the development, analysis, and testing of backend microservice and client/server applications

●Skilled at driving measurable business impact; dedicated and hardworking, with a strong internal drive to deliver high-quality solutions

Skills

●Programming Languages: Java, GO, SQL, Python

●Java/J2EE Technologies: Spring Framework, REST Web services, Maven, Apache Camel, JPA, JDBC, Hibernate, JUnit, Mockito

●Databases: MongoDB, MySQL, Oracle, Postgres

●Search & Log Analytics: ELK stack, Apache Solr, Splunk

●Queue-Based Technologies: Apache Kafka, Tibco EMS

●CI/CD: Jenkins, GitHub Actions

●Others: Docker, AWS, GCP, Red Hat OpenShift, Apache Tomcat, Unix shell, Jira, Git

Work Experience

Nomura - San Jose, California April 2021 – Present

Role: Software Developer

TROY-CFTC: Processes investment trades booked from the Front Office system and runs them through RegTech business logic to report trades to the regulator, The Depository Trust and Clearing Corporation (DTCC).

●Develop a multi-module Maven application using the Spring Framework and Apache Camel to integrate multiple Spring-based components that consume data from Tibco EMS queues

●Build a transformation module that extracts data from trade FpML (XML) documents using XSLT, driven by intraday/snapshot mode, asset class (Credit, Equity, Commodity, Interest Rate, Foreign Exchange), and product type

●Build a trade-reportability framework with reusable Spring components that configure rules by category and evaluate whether a trade is reportable to the regulator

●Design and implement a trade-enricher module that enriches reportable fields by invoking Nomura internal application endpoints such as Takara and TVS for organizational and business-entity information

●Build a trade-state report generator module that produces file submissions (600+ fields) from a Microsoft SQL Server database using Spring Batch processing

●Implemented the Apache Camel selective-consumer mechanism to maintain message sequence in a multi-threaded application supporting the trade lifecycle, improving throughput 10x (see the route sketch after this list)

●Provided enhanced business solutions for production issues and incorporated team-member feedback in ways that could be validated against the accepted application design
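
A minimal sketch of the selective-consumer routing described above, assuming Tibco EMS is reached through Camel's JMS component; the queue name, the assetClass message property, the XSLT stylesheet, and the downstream endpoint are hypothetical placeholders rather than the actual TROY-CFTC routes.

import org.apache.camel.builder.RouteBuilder;

/**
 * Illustrative Camel route: consumes trade messages from an EMS (JMS) queue
 * with a JMS message selector (selective consumer), so this route only picks
 * up the trades it is responsible for and preserves per-category ordering.
 */
public class TradeIntakeRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Selective consumer: the broker delivers only messages whose
        // 'assetClass' property matches (property name is hypothetical).
        from("jms:queue:TRADE.INBOUND?selector=assetClass='Credit'")
            .routeId("credit-trade-intake")
            // Transform the FpML payload before handing it downstream
            .to("xslt:classpath:xslt/credit-trade.xsl")
            .to("direct:reportability-check");
    }
}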

Cisco - San Jose, California Mar 2017 – Mar 2021

Role: Software Developer

Project Name: Software Supply Chain (Order Fulfillment and Software Licensing)

Process electronic delivery of software license documents such as Right to Use, Subscription, and Claim documents containing PAK (Product Activation Key) information and End User License Agreements, ensuring the correct number of licenses is generated for the ordered quantity

●Work on the design and development effort to refactor an application written in procedural SQL into a backend microservice built on the Spring Framework

●Implement a module that performs Bill of Materials validations to determine the fulfillment status of orders, and expose this feature as a REST endpoint

●Set up an Elasticsearch data-sync framework that indexes data from an Oracle database into Elasticsearch using Apache Camel, increasing the throughput of application queries (see the sync sketch after this list)

●Implement a logging framework and spin up Filebeat as a sidecar to the application pod running on the OpenShift platform, shipping logs through Logstash to Elasticsearch
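
A minimal sketch of the Oracle-to-Elasticsearch sync described above, assuming the Camel SQL and elasticsearch-rest components; the table, flag column, cluster name, index name, and host address are hypothetical.

import org.apache.camel.builder.RouteBuilder;

/**
 * Illustrative data-sync route: polls not-yet-indexed rows from Oracle,
 * converts each row to JSON, and indexes the document into Elasticsearch.
 */
public class OrderIndexSyncRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Poll rows flagged as unindexed; mark them indexed after consumption
        from("sql:SELECT * FROM ORDERS WHERE INDEXED = 0"
                + "?onConsume=UPDATE ORDERS SET INDEXED = 1 WHERE ORDER_ID = :#ORDER_ID")
            .routeId("oracle-to-elasticsearch-sync")
            .marshal().json()                        // row map -> JSON document
            .to("elasticsearch-rest://orders-cluster"
                + "?operation=Index&indexName=orders&hostAddresses=localhost:9200");
    }
}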

Project Name: Cisco Log Intelligence Platform

●Build a logging solution on the Elasticsearch stack that lets internal applications index logs to the cluster, enabling client teams with critical business insights

●Design dashboards and reports covering key performance metrics, reducing report-generation time by 30%; configure alerts for application surveillance and 100% availability

●Design configuration files for Elasticsearch, Logstash, Kibana, Apache Kafka, and Filebeat, and spin up the cluster for data indexing

●Hands-on experience maintaining Elasticsearch clusters; implement a sharding strategy to distribute load across data nodes and avoid memory contention (see the index-settings sketch below)
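
A minimal sketch of the sharding idea above using the Elasticsearch high-level REST client; the host, index name, and shard/replica counts are hypothetical and would be sized to the actual data nodes.

import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.common.settings.Settings;

/**
 * Illustrative index setup: an explicit shard/replica layout spreads primary
 * shards across data nodes so no single node becomes a memory hotspot.
 */
public class LogIndexSetup {

    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {

            CreateIndexRequest request = new CreateIndexRequest("app-logs-000001");
            request.settings(Settings.builder()
                    .put("index.number_of_shards", 6)     // spread primaries across data nodes
                    .put("index.number_of_replicas", 1)); // one copy for availability

            client.indices().create(request, RequestOptions.DEFAULT);
        }
    }
}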

Project Name: Data Indexing and Search Platform

●Build a microservice backed by an Apache Solr cluster to support a customer's large, non-transactional enterprise backend database, using a high-performance cache and parallel indexing across data nodes to improve data retrieval by 40%

●Develop the schema for the user data types along with Solr configuration files containing the request handler, search component, and query parser

●Identify and configure the Solr index schemas for data elements based on client priorities

●Develop an auto-suggest component in the Solr configuration file that uses only the built-in index (see the SolrJ query sketch below)
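
A minimal sketch of calling an auto-suggest handler from SolrJ; the Solr URL, collection, handler path, and suggester name are hypothetical, and the suggester itself is configured server-side in solrconfig.xml as described above.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

/**
 * Illustrative SolrJ call against a /suggest request handler.
 */
public class SuggestClient {

    public static void main(String[] args) throws Exception {
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/products").build()) {

            SolrQuery query = new SolrQuery();
            query.setRequestHandler("/suggest");    // handler defined in solrconfig.xml
            query.set("suggest.q", "rout");         // partial user input
            query.set("suggest.dictionary", "productSuggester");

            QueryResponse response = solr.query(query);
            // Suggestions come back under the 'suggest' section of the response
            System.out.println(response.getSuggesterResponse().getSuggestedTerms());
        }
    }
}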

Education

●Master of Science, Computer Science: Western Illinois University, USA - 2017

●Bachelor of Technology, Computer Science: Jawaharlal Nehru Technological University, India - 2014


