
Project Data

Location:
Toronto, ON, Canada
Posted:
December 01, 2017


Resume:

Hands-On Architect - Canadian and European citizen. SC Clearance.

Skills Big Data: Scala, Spark, Akka, Play; Distributed Computing: Coherence, GemFire; AI: Machine Learning, Deep Learning, Neural Networks, JRules, Blaze, Drools, PROLOG, Lisp, Smalltalk; Languages: Scala, Scalaz, Java 8, Guava, AWS, C++, C#; SDN/VNF/NFV, D-NFV, MANO.

Certifications Functional Programming in Scala – Odersky/Coursera (92% with distinction) Oct. 2014

C# - .NET Proficiency Certification by TeckChek May 2009

Project Management Certification by Brainbench, Aug. 2006

Enterprise J2EE Architect Authorization by Derrico Computers, Jan. 2004

EJB/J2EE Certification by Brainbench, Cert. ID3289262 Oct. 2001

SUN Java 2 Platform Certification, Cert. FA4DTT19C2 Dec. 2000

Author of “Intelligent NFV MANO Orchestration of VNFM Events using the Scala Rule Engine” - 2017

“Implementing MANO Virtual Network (VNF, CP, VLink, Path, Graph, NS) in Scala” - 2016

“Scala Sigmoid Neurons recognizing Hand-written Colored Letters via Deep Learning” - 2016

“Self-re-architecting of Scala Deep Learning Hidden Layers for Performance Boost” - 2016

“The 25 Scala Code-Review and Performance-Optimization / Idioms Points” - 2015

“The 8 approaches for Coherence Caches Loading and how to choose the right one” - 2013

“Visitor, Observer, and Chain-of-Responsibilities Patterns in Oracle Coherence Grid” - 2009

“The 30 Performance Checks for Oracle Coherence Cache Data Grid” - 2009

“The 25 Steps for Developing an ILOG JRules Enterprise Web Service” - 2008

“The 60 Best Practices of Development with Weblogic Integration (WLI)” - 2005

“ILOG JRules vs. PROLOG – forward with Rete or backward with Backtracking” - 2005

“The 85 Oracle Performance Bottleneck Checks” - 2004

Recognitions COMDEX Best Application in the World Trophy, with the Merrill Lynch team;

RON technology candidate of the year list, with over 500 member companies;

Individual Liberty Mutual Award for critical project contributions, with IBM

Training Santa Clara, CA Building Streaming Apps with Akka, Spark, Kafka (Lightbend) - 2017

Toronto, CA Flume vs. Spark – modes of operation and differences (Apache) - 2016

San Fran, CA Fast Data Analytics with Spark (Typesafe) - 2015

Detroit, MI AWS Reactive Apps for AWS Streams/S3 with Amazon Lambda - 2015

Toronto, CA Functional Programming in Java 8 - 2014

Toronto, CA Functional Programming in Scala (Odersky-Coursera) - 2014

Toronto, CA Big Data Map/Reduce with Hadoop /HBase - 2012

Detroit, MI Oracle Fusion 12.0 development with WLS 12 and Coherence 3.7 - 2011

Scottsdale, AZ Oracle Coherence (Tangosol) Performance Optimizations - 2009

Toronto, CA Oracle UCM (Stellent) Content Management Best Practices - 2008

Singapore, SG JSF/Ajax development for Weblogic Portal 9.2 (Ajax4Jsf) - 2007

Singapore, SG Integrating Weblogic Components with Aqua Logic ESB - 2006

Education Ph.D. Thesis in Artificial Intelligence: "CONTEXT Business-Rules-Engine AI System"

Thesis published in Japan, Germany, France, Greece, and Hungary by IEEE, ACM.

MSc. Mathematical Modeling in Computer Science, from Sofia University.

Bachelor in Mathematics, from Sofia University.

FINANCIALS Barclays Capital, Scottrade, Bonddesk Group, Merrill Lynch, Fidelity Investments, Bear Stearns,

CIBC, Toronto Stock Exchange, Capital One, Liberty Mutual, CIBC Trust, Aetna, Option One.

TELECOMS HUAWEI, Apple Computer, British Telecom, Vodafone, Nokia, AT&T, Cingular Wireless, Telus

FORTUNE500 Oracle, Barclays Capital, KOHLs, Lloyds, Kuoni, Shell, Apple, BEA Systems, IBM Global Solutions, IBM Corporation and Labs, Nokia, Vodafone, Liberty Mutual, Accenture, Intuit, Toronto Stock Exchange, Telus Communications and Edmonton Tel, CIBC Trust, CIBC Investments - Wood Gundy, Fidelity Investments.

FOR: V-Frameworks INDUSTRY: Cloud Security / V-Networks Platforms

Tampa, FL Tools: Scala /Akka / Scalaz / Shapeless, SDN, VNF, NFV, D-NFV, MANO

April’17 – Oct’17 Project: SDN/NFV Platform Orchestration Architecture Review and redesign

Re-designed V-Frameworks’ (a spin-off from a major Telco) VNF Platform to optimize the real-time mapping of Virtual Elements in a Distributed Environment (D-NFV) to real network elements. Expanded Blue Planet’s NFVI guidelines to use the Scala Rule Engine and its unification/backtracking mechanism in the Scheduler in order to boost utilization of the network resources. Orchestration is based on a price tag associated with every Network Service as part of the NS-Barcode at the Ingestion Point (plain Kafka-based distributed VNF data ingestion, similar to the one built for the GM Labs IoT project in Detroit). Applied two-plane scheduling: first within the Virtual Plane, with different NSs competing for Common Virtual Resources (CVRs) in a distributed environment (in the prototype, VNFs only in a cluster of JVMs), and second within the Real Network Plane, with different NSs competing for real-time mapping to the available real network elements.
Streamlined the RESTful integration using NiFi data routing with TOSCA templates, converting the nested NFV tuples into Shapeless HLists for traversing the heterogeneous resources. Changed the static lists of Virtual Network Service elements to a Scalaz enum typeclass to apply Scalaz’s traversable functionality, fit the Shapeless implicit HLists, and allow productElement(i) direct access. Switched the Scala Options to Scalaz Optional (not to be confused with the Java 8 Optional) to ease the creation of option values. Fixed the Akka communication between the layers of the event handler and reduced the number of layers in the process. Moved away from Spark Streaming to Akka Streaming to meet performance requirements (Spark is slow in the streaming part - basically 1 full second of latency) and adopted Akka Persistence to match the platform.
Extended “The 25 Scala code-review points” during the code review with complexity check-points and ran the main modules through it, including Control Paths, Dependencies, State Management, O(n) performance, and cyclomatic complexities. Designed a Big-data/IoT stress-performance-testing facility with NS-Service data generators pushing partition-keyed data into a Kafka cluster defaulting to 10 Topics with 10 Partitions over 10 Brokers and 10 Consumer Groups of 10 for every topic. The generators push data to both Kafka and a 10-node Cassandra cluster to test different Big-data implementations: Kafka/Scala; Cassandra/Scala; Kafka/Spark; Cassandra/Spark; Kafka/Akka; Cassandra/Akka. Selected the Kafka/Akka solution for the particular dataflow.
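The two-plane, price-tag-driven scheduling described above can be sketched in plain Scala. This is a minimal illustration, not the V-Frameworks code: the case class, its fields, and the greedy admission policy are all assumptions.

```scala
// Hypothetical sketch: Network Services carry a price tag (from the NS-Barcode)
// and compete for a limited pool of Common Virtual Resources (CVRs).
case class Ns(id: String, priceTag: Double, demand: Int)

object PriceTagScheduler {
  // Greedy policy (an assumption): admit the highest-paying services first
  // until the shared capacity is exhausted; skip services that no longer fit.
  def schedule(pending: Seq[Ns], capacity: Int): Seq[Ns] =
    pending.sortBy(-_.priceTag).foldLeft((Seq.empty[Ns], capacity)) {
      case ((admitted, left), ns) if ns.demand <= left =>
        (admitted :+ ns, left - ns.demand)
      case (acc, _) => acc
    }._1
}
```

In the two-plane model the same pass would run twice: once over virtual CVRs, then again over the available real network elements.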

FOR: HUAWEI Labs INDUSTRY: Cloud / Virtual Networks Security

Munich, Germany Tools: Scala /Akka / Scalaz /Scala Rule Engine, SDN, VNF, NFV, MANO Microservices

August’16 – Feb’17 Project: "Virtual Networks Intelligent Policy (MANO/Orchestration)”

Lead Architect of the Huawei Intelligent Policy Engine for the MANO Virtual Network Services. Scala/Akka agile implementation of the components below, using the Scala Rule Engine. Developed a dozen Generators for IP Addresses, semi-specified Servers, and VNFs with their corresponding Connection Points, Paths, Virtual Links, Forwarding Graphs, etc., from an external file specification. Developed the Control Console for Rules Editing in Scala Swing, as well as the Specification Language for all the elements of the Virtual Network Services, allowing any kind of Virtual Network Service to be built automatically, including services with criss-cross VNFs. Designed and developed the Events Service, which provides the communication between the Virtual Network Elements. This includes MANO with its Orchestrator and the Policy Engine, the Broadcasting facility, the Events Map, and the Events Facade - all in plain Scala with two layers of Akka Actors handling the Event Broadcasting for different subscriptions, all exposed as Microservices (REST). Designed the structure of the Access Control rules and the Policies they belong to, used by the Scala Rule Engine - and implemented them in Scala along with a Rules Builder whose factory methods are overloaded five ways on the number of arguments. Nested Scala Generators for VNF resources, Firewall Rules, Policies, NS Forwarding Graphs & Paths, etc. Nested case-class definition of a Network Service containing a set of VNFs, VLinks, CPs, FGraphs, VPaths, etc. over OpenStack, with chained Scala Futures in a for-comprehension via ‘onSuccess’, injecting static NFV context via the Scalaz Reader monad while passing the dynamically-changing Load context via the Scalaz State monad. Designed the active rules (resulting in changes to network components, such as increasing the throughput of a Firewall), ready to be processed by the Rule Engine.
Created a COMBO communication module with adapters from event-driven to request/response and vice versa, allowing flexible integration with 3rd-party components. Created the Scala NFV unit tests with ScalaTest, crafted all the Design Diagrams, and wrote the Design Documents (both HLD and LLD).
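The nested case-class Network Service with chained Futures in a for-comprehension can be illustrated with a minimal sketch. The case-class fields and the two allocation steps are invented for the example, and modern `map`/`flatMap` chaining stands in for the deprecated `onSuccess` callback mentioned above.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Illustrative-only model of the nested Network Service case classes;
// these field names are assumptions, not the Huawei schema.
case class VNF(name: String)
case class VLink(from: String, to: String)
case class NetworkService(vnfs: Seq[VNF], links: Seq[VLink])

// Two asynchronous provisioning steps (stubbed here with immediate Futures).
def allocateVnfs(): Future[Seq[VNF]] = Future(Seq(VNF("firewall"), VNF("router")))
def wireLinks(vnfs: Seq[VNF]): Future[Seq[VLink]] =
  Future(vnfs.sliding(2).collect { case Seq(a, b) => VLink(a.name, b.name) }.toSeq)

// Each step runs only after the previous Future succeeds.
val ns: Future[NetworkService] =
  for {
    vnfs  <- allocateVnfs()
    links <- wireLinks(vnfs)
  } yield NetworkService(vnfs, links)
```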

FOR: Fortress INDUSTRY: Cyber Security

Orlando, FL Tools: Scala / Akka / Scala Rule Engine, Deep Learning, Cyber-Security tools

April’16 – August’16 Project: "Cyber Security Vulnerability Management - Platform" Architecture 2.0”

Re-architected the Cyber-Security Management platform from Python/MySQL to Scala/Akka/Akka Streaming, increasing performance more than 200-fold on a 10-node cluster over the same testing sample - that is, from O(n^3) to O(log N).

Four layers of Actor communication via tell/forward messages, starting with a single Command & Control Central Actor (C3). On top of running the Pool of First-Layer Actors, which distributes the Cyber-Security-Tools load, the C3 also manages the System Monitoring Actor and the Conditional-Run Actors. Using a pool of Scala Rule-Engine Akka Actors for conditional security-tool runs (including cross-company-dependency runs, DNS health, port scan, platform scan, protocol scan, geo-location, etc.), the Monitoring Actor dynamically spins off new nodes upon reaching a pre-set threshold. Created a Security-exposure back-propagation Actor (from the same Scala Rule Engine Actor round-robin pool but with a different rule set) triggering Big-data broadcast messages to the affected Communication Actors on the level below. Designed and implemented a prototype in plain Scala without Akka, but with a dynamically-controlled ForkJoin Pool over the parallel collections for increased performance. All operations are implemented as a revolving series of parallel mappings - even the allocation of the pools is done via parallel mapping of the Factory Function over a List.par.
Using a multi-layered set of Sigmoid Neurons (plain Scala implementation) for function-fitting the influence of a Partner's Cyber-Security failures on the primary company's Security Index, with back-propagation of the error and gradient descent (stochastic SGD via mini-batches). See the article on Colored Hand-written Letter recognition, based on a pilot project for teaching the team via both horizontal and vertical (hidden layers/depth) expansion of the Neural Network. Redid the implementation using Breeze, the Scala linear-algebra/statistics library (linalg, stats, gradientAt, etc.). Re-designed the app for AWS, moving from Akka Streams to AWS Kinesis streams with lazy AWS Lambdas (cost minimization) for the triggers in the Kinesis streaming data from the Cyber-Security tools.
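A single sigmoid neuron with one gradient-descent step captures the core of the plain-Scala function-fitting mentioned above. This is an illustrative reconstruction, not the Fortress code; the quadratic cost and the learning rate are assumptions.

```scala
// Minimal sketch of a sigmoid neuron trained by gradient descent.
object SigmoidNeuron {
  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  // One stochastic-gradient step on sample (x, target) with learning rate eta,
  // for a quadratic cost: delta = dC/dz = (out - target) * sigmoid'(z).
  def step(w: Array[Double], b: Double, x: Array[Double],
           target: Double, eta: Double): (Array[Double], Double) = {
    val z     = w.zip(x).map { case (wi, xi) => wi * xi }.sum + b
    val out   = sigmoid(z)
    val delta = (out - target) * out * (1 - out)
    (w.zip(x).map { case (wi, xi) => wi - eta * delta * xi }, b - eta * delta)
  }
}
```

Back-propagation extends the same delta computation layer by layer; mini-batch SGD averages the step over a small batch of samples.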

FOR: GM Labs INDUSTRY: Automotive IoT

Detroit, MI Tools: Scala, Akka, Java 8, Scala Rule-Engine, CEP, JSON, Reactive RxScala, IoT

June’15 – Feb’16 Project: "Scala/Akka Real-time Danlaw Connected-Car Events Processing Platform"

Lead IoT Platform Architect for the Microservices-based Platform handling automotive onboard sensor information. Data streaming from over 2 million cars equipped with Live Onboard Data Transmission Devices connects to the Platform Server via sockets and, after reformatting, hits a farm of Kafka Topics (the Topics mimicking the caches in the initial Oracle Coherence POC with Cloudera/Hadoop data ingestion). Processing is done in a Docker/Mesos resource-pooling environment ready for cloud deployment (AWS), with Jira for the Agile development (Epics/Stories/Sprints) and the Twitter Scala guidelines. Implemented and benchmarked different Scala Distributed Rule-Based Modules (Scala-Rules-Actor Router, Sharding, and a Spark Cluster in the CEP Rule Engine) based on the Scala Rule Engine. Alternative implementation via Scala Decision Tables (direct facts-to-results links). Since the initial PoC in ThingWorx lacked CEP and rule-based capabilities, designed and implemented a CEP component with a Cumulative Rules engine connected to the Spark cluster (over Hadoop and Kafka topics), streaming 4-dimensional event tuples. Alternative CEP implementations via Reactive Streams (Akka Streams Scala Template with Flow / Sink / FlowMaterializer) and the standard Spark Streams with filters on specific TAGs and Cassandra persistence, feeding the bridge hashmap to the Rule Engine (with rule triggers/polling/request-based updates in the CEP engine). Redesigned the processing with an innovative foldLeft(Z)(f) over the result lists, with the function (f) containing both Filters (discarding irrelevant entries) and Closures (collecting side-indicators). Prepared the ScalaTest entries for border-value testing. Alternative solution with a double layer of Akka Rule Actors, the bottom layer actively refreshing the accumulator values from the Spark RDDs for the top layer of Actors. Replaced the count/max/min/etc. map-reduce GroupByKey Spark queries with ReduceByKey.
Geographic (state-based) Elasticsearch 2.0 (Lucene) on the Spark RDDs (indexed JSONs). Finalized the Processing Big-data Architecture as a triple-distributed framework (1. incoming stream-tagged splitting over a farm of Rule Modules; 2. the Rule Modules controlling a RoundRobin Router Pool of Akka Rule Actors; 3. the Rule Set itself being split and distributed across the Rule Modules by the same tag that splits the incoming input stream). All Akka Rule Actors in a pool load and use the same partitioned Rules subset. Only the request-based read-through method is implemented (plain Scala); the triggers/polling method is enabled for an active backend (grid), where continuous queries, listeners, etc. are available. Implemented an ObservableMap for the Scala HashMaps to mimic the grid behavior and avoid polling. Re-implemented the CEP using RxScala (reactive streams) on top of RxJava (including creating Observables from Spark RDDs - both via filters and collect calls) and processed the streams in an asynchronous (non-blocking) way, merging streams from cars in the same fleet before subscription. Implemented a concise fleet reporter using groupBy(identity) on the List of participating vehicles, combined with mapValues on size.
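The foldLeft-with-filters-and-closures pass and the groupBy(identity) fleet reporter mentioned above reduce to idioms like the following (the event shape and values are invented for illustration):

```scala
// Hypothetical event type; in the project the events came from Kafka/Spark.
case class CarEvent(vin: String, speedKmh: Int)

val events = List(CarEvent("V1", 80), CarEvent("V2", 190), CarEvent("V1", 60))

// foldLeft(Z)(f): one pass whose function both filters out irrelevant events
// and, via a closure over `alerts`, collects a side-indicator.
var alerts = 0
val relevant = events.foldLeft(List.empty[CarEvent]) { (acc, e) =>
  if (e.speedKmh > 180) { alerts += 1; acc } // discard + record indicator
  else acc :+ e                              // keep relevant events
}

// Concise fleet reporter: groupBy(identity) buckets equal IDs,
// mapValues(_.size) turns each bucket into a count.
val perVin: Map[String, Int] =
  events.map(_.vin).groupBy(identity).mapValues(_.size).toMap
```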

FOR: IronShield INDUSTRY: Security / Cryptography

Oxford, UK Tools: Scala, Akka, Play, Spring/ XD, Web Sockets, Encryption, Java 8

Dec’14 – May’15 Project: "Scala/Play Implementation of Conference Security Protection"

Implementation engagement to code a Scala version of a fully protected Conference Room Application in the Play Framework (MVC over Akka), using the company's Secure Protocol Communication layer over Web Sockets. The encoding algorithm makes the 40-year-old public/private asymmetric cryptography, with all its inherent performance problems, as obsolete as it should be. Security is made theoretically unbreakable - the NSA would be just as helpless to decode the dataflow as the designers and the implementer are. The Application class renders both the encoded message and the actual decoded message, activated via JS through the routes control file, with the JS requests handled by the Web Sockets interface using Scala Lists. Implementation used Scala mapping to parallelize the processing and Scala foldLeft / mkString to assemble the messages in the onMessage method. Broadcasting via the standard notifyAll. Multiple messages merge into the same encoded dataflow, and the same message splits across different dataflows, thus deflecting brute-force key-search attempts. Tests prepared in simple ScalaTest (FunSuite). Made the implementation embeddable in layered encryption as a component of a larger standard encryption framework and designed stream-based POC versions via Spring XD streams (Spark) and Java 8 streams with aggregate operations. Established the coding standards around the Twitter “Effective Scala” guide.
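The map-then-foldLeft/mkString assembly in onMessage amounts to something like the sketch below. The encode function here is a trivial placeholder, since the real secure-protocol codec is proprietary and not shown.

```scala
// Placeholder transform standing in for the proprietary encoding step.
def encode(chunk: String): String = chunk.reverse

// Map each chunk through the codec (this mapping was parallelized in the
// project), then fold the results into the outgoing message.
// chunks.map(encode).mkString would be equivalent.
def assemble(chunks: List[String]): String =
  chunks.map(encode).foldLeft("")(_ + _)
```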

FOR: KOHLs INDUSTRY: Retail

Santa Clara, CA Tools: Coherence 3.7, Scala, Spark, ATG commerce, OEM, Spring, Play Fmwk

Jul’14 – Nov’14 Project: "Coherence Re-design and Performance tuning of KOHLs Commerce"

Re-architected the company's RESTful OpenAPI from EHCache into Coherence. Sized the static caches (Product, Inventory, Price, Promotion, etc.). Coherence-style push-replication POC on 2 EHCache DCs through a hole in the Firewall via TCP sockets on the hole port, eliminating the buffers causing the trouble in the Coherence Push-Replication operation. This also eliminated the time discrepancy of buffered replication, since all put operations are written to both ports - the cluster-definition port and the Firewall-hole port. Just for testing purposes, configured the cluster-definition port to be the same in both Data Centers, both going through the Firewall hole and forming a WAN cluster (impossible in Coherence). Designed the Dynamic Data Caches (Customer / Basket / Order / Billing / Shipping) and their cooperation via utility-cache listeners, off-loading the listeners to worker threads to keep them from becoming a bottleneck. Designed 4 business-specified POCs in the new Coherence environment to measure the different types of load-related performance, with Scala Array data generators using parallel map functions to adjust the business-dictated value boundaries. Created a Spark Scala shell POC over a Customer RDD with a filter transformation and reduce and collect actions on the resulting RDD (count). Designed a Relevancy Cache keyed by ProductID, providing direct access to a Set of relevant products to be fed into a business-rule engine for context filtering. Experimented with a reduced number of Server Nodes to measure the influence of network chatter on performance. Adjusted the socket buffers, high-units, and JVM parameters to fit the memory requirements and the network capacity. Compared Coherence and Spark operations and integrated Coherence with Spark to make Spark fault-tolerant and avoid the Backing Map Coherence performance bottleneck, with 2-way converters (Java – RDD – Java) to serve the Spark – Coherence – Spark data flow. Implemented a Spark version of “writeBehind” where the bulk operation goes from Spark to Coherence rather than to disk. Web-tier (MVC) alternative approach to ATG via Play over Akka actors, to match the Scala alternative platform.
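A Relevancy Cache keyed by ProductID, as described above, is essentially a map from a product to its related-product set; here is a toy model with invented IDs (the real cache was a distributed Coherence cache, not an in-memory Map):

```scala
// Illustrative model: ProductID -> set of related product IDs,
// ready to be fed into a business-rule engine for context filtering.
type ProductId = String

val relevancyCache: Map[ProductId, Set[ProductId]] = Map(
  "sku-100" -> Set("sku-101", "sku-102"),
  "sku-101" -> Set("sku-100")
)

// Direct keyed access; unknown products yield an empty relevance set.
def relevantFor(id: ProductId): Set[ProductId] =
  relevancyCache.getOrElse(id, Set.empty)
```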

FOR: Kuoni Global Travel INDUSTRY: Travel / Retail

London, UK Tools: Coherence 3.7, ATG Commerce, Endeca, Scala, Infinispan, JSON, Spring.

Jan’14 – May’14 Project: "Coherence Re-design of Kuoni World-wide Reservation System"

Complete re-design and implementation of the Hotel Reservation and Management system, moving away from JBoss/Infinispan and Apache Cassandra to Oracle Coherence. Designed and prototyped the caches needed to hold the data for the entire reservation system. Designed and implemented an asynchronous solution for a single-Gateway architecture, with input and output caches operated via Cache Listeners and single-service co-located caches cooperating via backing-map queries in the Pricing Engine for the RatePlan / RateRule caches, de-normalizing the Property - Contracts relationships in the process and allowing proper operation of the extensions for the 3 different Pricing Engines (Property, DI, DS). Built an outside data-updater application for reflecting database changes from external applications, eliminating the need for a GoldenGate adapter. Designed and implemented a synchronous solution for multiple Gateways, wrapped in a Java Service, distributing the searches after the ATG Commerce layer between the Endeca Engine in front of the Coherence layer and the Coherence apps via the Lucene Search Engine. Prototyped different cache loaders in Scala for convenient data generation during the prototyping stages of the project. Designed an XML-fragments cache needed for high-performance assembly of XML pieces from different providers in a fast, string-based way. Eliminated a major bottleneck in operations by re-designing the event handlers during massive data re-loading.
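The string-based XML-fragments cache boils down to storing pre-rendered fragments per provider and concatenating them, avoiding repeated XML parsing. A minimal sketch with invented keys and fragments (the real cache was a Coherence cache, not a mutable Map):

```scala
// Pre-rendered XML fragments keyed by provider.
val fragmentCache = scala.collection.mutable.Map[String, String](
  "provider-a" -> "<rates currency=\"GBP\"/>",
  "provider-b" -> "<availability rooms=\"12\"/>"
)

// Fast string-based assembly: look up each provider's fragment (skipping
// missing ones) and concatenate inside a wrapper element.
def assembleDocument(providers: Seq[String]): String =
  providers.flatMap(fragmentCache.get).mkString("<quote>", "", "</quote>")
```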

FOR: Barclays Capital INDUSTRY: Financial

New York, NY Tools: Coherence 3.7, Coherence*Extend, Google Guava, Guice, Spring Framework

Sept ‘13– Dec’13 Project: "Barclays Capital Risk Management System Extensions"

Risk Management augmentations and business/performance improvements on Barclays' Risk Management System - one of the biggest of the 15 Coherence projects I have worked on, with over 200 packages and 3000 classes. Created a Monitoring Service on the Valuation Results cache via a Utility Cache whose dynamic contents control the process. The monitor releases another batch of Valuation Runs to the engine while it is still processing the end of the previous submissions (five different implementations: MapListener/notification, Thread join, polling at a configurable interval, Thread ExecutorService/Futures, and MapTrigger). Reduced the JMX contention by extending the MBean Filters with specific types of Caches, lowering the refresh timeout and increasing the refresh expiry to reduce contention on the locks over the common mutex in the MBeanConnector. Increased the tenured space of the JVMs to accommodate larger JMX buffers. Created a Polymorphic Cache for storing hierarchies of objects (no direct support in Coherence for that) with trigger-filtering and dynamic downcasting. Designed and implemented a Dynamic-Inheritance cache for timely destruction of the caches created on the fly for handling the Reference and Valuation Contexts, thus easing the space contention on the tenured heap. Created a set of Entry Processors and nested filters to use in the process, using Google's Guice Module/Config and Java Dependency Injection. Created a set of self-cleaning and self-destroying caches with an expiration enforcer for the dynamically-created Reference and Valuation Context Caches - both intraday and EOD - using Google's Guava Collections for extended functionality. Wrapped the collections to avoid the serialization problem when passing through Coherence*Extend proxies and used readCollection / Codec for templated, typed POF reads.
Created a dual implementation reading the stats from the database via the Spring JDBC Framework, enabling switching between the Coherence Caches and the Monitoring Database.
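Of the five monitor implementations listed above, the polling variant is the simplest to sketch. Here a plain ConcurrentHashMap stands in for the Coherence Valuation Results cache, and all names and the batch-release rule are illustrative assumptions.

```scala
import java.util.concurrent.ConcurrentHashMap

// Polling monitor sketch: once enough valuation results have landed,
// release the next batch of runs while the tail is still processing.
class BatchMonitor(results: ConcurrentHashMap[String, String],
                   batchSize: Int,
                   releaseNext: () => Unit) {
  // Called on a configurable interval by a scheduler (omitted here).
  // Returns true when the next batch was released.
  def pollOnce(): Boolean =
    if (results.size >= batchSize) { releaseNext(); true } else false
}
```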

FOR: Scottrade/Oracle INDUSTRY: Financial

St. Louis, MO Tools: Coherence 3.7, Coherence*Extend, Coherence*Web, OEM, NIO

Sept'13 Project: "Coherence Extend / Coherence Web Trading Modules"

Back at Oracle with another financial client (Scottrade) for a short Coherence architecture review. Reviewed the two Coherence Modules of the Online Trading Application, including Client, Proxy, and Cache Server configurations and scripts, plus monitoring facilities and alternatives. Optimized the thread-pool sizes on both the Proxies and the Cache Nodes, corrected the memory assignment on the Proxies, adjusted both the connection and request time-outs, and enabled outside modifications through System Properties. Redesigned the topology of horizontal and vertical scalability: relocated the Proxies onto a single machine, reduced the number of Server Nodes, and increased the number of Proxies while reducing the thread-pool sizes to cut CPU contention and context switches. Provided alternatives for NIO extension of the JVM memory and for searches with the Lucene Engine. Arranged the main parameters to monitor in Grid Control into 7 groups and planned the transition to Grid Control 12c with cloud and SQL Server support.

FOR: CF Retail Chain INDUSTRY: Retail Chain

Atlanta, GA Tools: Coherence 3.7, Spring, REST, Amazon Cloud, OpenID, ATG, AWS

June'13 - Sept.'13 Project: "Universal Identity Management in Amazon Cloud"

Architected and designed the new Identity Management system in a Cloud environment (Amazon EC2 and AWS) with the Amazon Cache (ElastiCache). Created a POC with an Elastic UserProfile and ApplicationAccess set of dynamically-extendable cooperating caches, first in Coherence, then in Memcached (ElastiCache) - wrapped in a Proxy service with a custom API Gateway in front to filter access from both Apache Tomcat and internal components on the App tier. Designed active Chained Cache Loaders via Triggers, and Distributed Authorization via the extendable Elastic ApplicationAccess Caches (each authorization fluctuating between application-centric and centralized). Private Session-Management Cache for the Web Tier (Tomcat) for multi-server utilization, accessible from different tiers. Federated Sign-On (Yahoo, Google, Facebook, Twitter, LinkedIn) using OpenID and OAuth. Authentication and authorization via AccessIQ, SAML, and UnboundID, communicating via REST API calls. Implementation in Spring Tool Suite with the ROO console, with ESAPI, MVC, Mobile, and Spring Data add-ons through ROO. Deployment via Amazon Elastic Beanstalk with CloudWatch monitoring. Security enhancements against injection, session manipulation, XSS, direct references, and Internet bots. POCs with alternative NoSQL environments (Cassandra).

FOR: Capital One INDUSTRY: Financial

Richmond, VA Tools: Coherence 3.7, WebLogic Portal, Oracle, Spring, P13N, REST, NIO

Feb.'13 - May'13 Project: "Performance Tuning of Universal Portal Framework System"

Reviewed the WebLogic Portal P13N configurations with Coherence as Cache Provider - with Spring Portlet MVC and Spring-aware Coherence CacheFactory, along with Spring HTTP Invoker (for Objects) and Spring Burlap (for XML docs). Prepared the Capacity-planning and Topology-changes Table for the Coherence Resources needed as the load increases. Evaluated the Remote-Node alternatives - Extend vs. Cluster Push Replication. Prototyped the Push-replication leak-fix with Double-push replication. Re-organized the Near Cache memory to use Overflow Map with NIO memory extension to ease the GC contention. Switched to Paged NIO to optimize the NIO Eviction. Pro-con considerations of using Coherence*Web for session data instead of having "manual" consumer-producer caches with delayed session-expiration. Optimized all 4 P13N Config files for smooth access to Coherence through the Weblogic Portal with Coherence as Cache Implementation. Created a farm of independent Coherence Clusters over the Paged NIO facilities for operating in silo. Cleaned-up the binary unit calculators for both BDB and NIO memory. Re-focused the JMX-based Monitoring facilities to restrict the amount of jmx-related load while keeping the key indicators active. Optimization of the explicit Locking via Entry Processors for Status updates. Documented the testing findings and the performance improvements on both Session and VCR caches. Prepared custom cross-caches monitoring and flush utility.

FOR: Grid Fluid INDUSTRY: Financial Middle-tier Infrastructure

Oxford, UK Tools: Coherence 3.7, JRules, Prolog, GemFire, CEP, Spring, Prolog/Scala/Akka

Oct.'12 - Jan. '13 Project: "Performance Rule-based High-frequency Grid Trading System"

For a financial spin-off company, designed a Spring-based Coherence Grid integration with two rule-based systems: forward-chained (ILOG JRules) for the algorithmic rule-based trading, and backward-chained (Prolog implemented in Scala/Akka) for the Matching Engine (see the BondDesk Fixed-Income project in 2011-2012, where we used portions of this Coherence integration technique to achieve massive performance improvements). Utilized the internal synchronized IlrContext thread pool for double distribution (nodes plus threads) while using a private thread pool for the consolidated Matching Engine - both integrations done via Entry Processors. JRules consumes only references to the objects in the Coherence Caches. All caches are loaded by the backward-chained Matching Engine, hooked to the built-in Unification Mechanism. Dynamic JRules ruleset re-loading, based on pre-specified alerts, with dynamic transfer of rules from one IlrContext to another. A unique project providing a smooth division of labor between two different types of Rule Engines, indirectly cooperating through a set of Coherence Caches to provide robust CEP event processing. Attempted and ruled out a GemFire re-implementation due to GemFire's "single cache in a single JVM" architecture.
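The division of labor above hinges on the chaining direction: forward chaining fires rules from known facts, while backward chaining starts from a goal and recursively proves the sub-goals of matching rules. A toy backward-chaining prover illustrates the latter (no unification, no cycle detection; purely illustrative, not the production engine):

```scala
// A rule says: `head` holds if every goal in `body` can be proven.
case class Rule(head: String, body: List[String])

// Backward chaining: a goal is proven if it is a known fact, or if some
// rule concludes it and all of that rule's body goals are provable.
// NOTE: recursive rules would loop - a real engine tracks visited goals.
def prove(goal: String, facts: Set[String], rules: List[Rule]): Boolean =
  facts.contains(goal) ||
    rules.exists(r => r.head == goal && r.body.forall(prove(_, facts, rules)))
```

Prolog adds unification on top of this scheme, binding variables while matching goals against rule heads, which is what the Matching Engine's built-in Unification Mechanism refers to.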

FOR: Apple Comp. INDUSTRY: Computer / Telco

Cupertino, CA Tools: Coherence 3.7, Push Replication, Golden Gate, DB-reflection, GemFire, AWS

March'12 - Oct.'12 Projects: "Performance Adjustments", "Apple Fault-Tolerant Coherence Environment"

Multiple Coherence projects (see below).

New project: Critical performance improvements in several Coherence projects within Apple: OBS (Order), PMT (Payments), CAS (Customer Service). Changed and expanded the JVM and network parameters to make the cluster coarse-grained, thus reducing the internal communication in the WKA-list-based cluster. Reduced the amount of Entry Locking causing contention. Reduced the number of nodes per VM (VMware) to avoid the swapping performed by Linux, which in turn was causing Nodes Leaving. Changed the configuration and startup scripts to adjust the Units, optimize the number of partitions, and tune the cache topology. Created the Coherence Interaction Models for all 3 applications, leading to Coherence processing modifications.


