
C# Capacity Planning, Enterprise Pub/Sub Software Systems

Location:
Queens, NY
Posted:
March 31, 2025


Resume:

Kafka SME

Fiserv Berkeley Heights, New Jersey Oct. 2022 – present

Architected the capacity-planning use case for a T-Bill aggregation, pricing and futures lakehouse at 400 msgs per second:

CUSIP key, 5 states per key, 1 KB Avro record size, 10 state updates; measured and identified the SLAs; identified latency during normal operations and latency during TM failure; prepared the baseline to avoid backpressure and added 15% headroom for catching up during recovery; factored in 10, 20 and 50% growth in data ingestion; factored in periodic checkpointing for data resiliency; factored the size of each record, throughput, distinct keys and a full checkpoint every minute (to meet SLAs); factored Kafka source and sink per Flink TM, accounting for keyBy and network shuffle, window size, 20 pods overall; calculated usage by number of bytes, 600 MB per machine; did the mathematics for data egress and ingress; factored the window with RocksDB and S3 as backup for checkpointing; calculated how much state is being checkpointed (bytes x windows x 15M keys); calculated the checkpoint interval from the checkpoint size; final analysis: EKS setup, 3 CPUs per pod, 2 pods per instance; calculated bandwidth, SSD drive, CPU, RAM and network bandwidth
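
The sizing walk-through above reduces to a few multiplications; below is a back-of-the-envelope Python sketch using only the figures quoted in the bullet (400 msg/s, 1 KB Avro records, 5 states per key, ~15M keys, a full checkpoint every minute, 15% recovery headroom, 10/20/50% growth). Everything else is plain arithmetic, not the production model.

# Illustrative capacity-planning arithmetic for the Flink/Kafka use case above.
RECORD_BYTES      = 1_000          # 1 KB Avro record
MSGS_PER_SEC      = 400
STATES_PER_KEY    = 5
DISTINCT_KEYS     = 15_000_000
CHECKPOINT_SEC    = 60             # full checkpoint every minute (SLA-driven)
RECOVERY_HEADROOM = 1.15           # +15% to catch up after a failure

ingress_bytes_per_sec = RECORD_BYTES * MSGS_PER_SEC
state_bytes           = RECORD_BYTES * STATES_PER_KEY * DISTINCT_KEYS
checkpoint_mb_per_s   = state_bytes / CHECKPOINT_SEC / 1e6

print(f"ingress            : {ingress_bytes_per_sec / 1e6:.2f} MB/s")
print(f"state size         : {state_bytes / 1e9:.1f} GB per full checkpoint")
print(f"full checkpoint I/O: {checkpoint_mb_per_s:.0f} MB/s sustained to S3")
for growth in (1.10, 1.20, 1.50):
    planned = ingress_bytes_per_sec * growth * RECOVERY_HEADROOM
    print(f"planned ingress at {int((growth - 1) * 100)}% growth: {planned / 1e6:.2f} MB/s")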

Experience with the Visual Studio IDE for building out C# Kafka producer and consumer clients; producers set up test topics with a million messages apiece; wrote C# scripts to ascertain consumer lag, offset maintenance and integrity, analyze gaps in partition/consumer balance, print out specific messages, and report message counts by partition and topic; SSL encryption and SASL/SCRAM authentication
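
The actual scripts above were written in C#; purely as an illustration, the same consumer-lag check in Python with the confluent_kafka client looks roughly like the sketch below (broker address, group id and topic name are placeholders).

import sys
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({"bootstrap.servers": "localhost:9092",
                     "group.id": "lag-checker",
                     "enable.auto.commit": False})

topic = sys.argv[1] if len(sys.argv) > 1 else "test-topic"
metadata = consumer.list_topics(topic, timeout=10)
partitions = [TopicPartition(topic, p) for p in metadata.topics[topic].partitions]

for tp in consumer.committed(partitions, timeout=10):
    low, high = consumer.get_watermark_offsets(tp, timeout=10)
    committed = tp.offset if tp.offset >= 0 else low   # negative offset means no commit yet
    print(f"partition {tp.partition}: end={high} committed={committed} lag={high - committed}")

consumer.close()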

Troubleshot Flink jobs with excessive backpressure: examined the networking topology, JProbe statistics, GC and I/O, and Flink's credit-based flow control between operators; examined isBackPressured, idleTimeMsPerSecond and backPressuredTimeMsPerSecond, observed tasks blocked waiting for output buffers, and compared busy versus idle time; utilized the Flink Web UI backpressure metrics alongside bytes received, bytes sent, parallelism and start time; looked for root causes – data skew, network bottlenecks and under-provisioned Flink data sinks; utilized JProbe and flame graphs to isolate hot threads, thread contention and partition overlay; observed numRecordsOut, numRecordsIn, numBytesLocal and numRecordsRemote; adjusted exclusive network buffers for optimized throughput and observed buffer timeouts
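
A minimal sketch of pulling the backpressure-related task metrics named above from the Flink REST API with Python, assuming a reachable JobManager and Flink 1.13+ metric names; the URL is a placeholder.

import requests

FLINK_URL = "http://localhost:8081"

for job in requests.get(f"{FLINK_URL}/jobs").json()["jobs"]:
    detail = requests.get(f"{FLINK_URL}/jobs/{job['id']}").json()
    for vertex in detail["vertices"]:
        metrics = requests.get(
            f"{FLINK_URL}/jobs/{job['id']}/vertices/{vertex['id']}/metrics",
            params={"get": "backPressuredTimeMsPerSecond,idleTimeMsPerSecond,busyTimeMsPerSecond"},
        ).json()
        # Each entry is {"id": <metric name>, "value": <value>}; print per-vertex summary.
        print(vertex["name"], {m["id"]: m.get("value") for m in metrics})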

Troubleshot Flink jobs with checkpoint failures – observed the relationship between the job manager and task managers with multiple slots and the snapshot store; analyzed job timeouts, VLJs/VLSs and high rates of state change; analyzed backpressure versus throttled checkpoints not advancing quickly enough; examined I/O writes to checkpoint storage; found root causes of excessive buffering and large Flink checkpoints (FLIP-182); found root causes of multiple timers firing simultaneously (no backoff interval); examined heavy loads on external data sources (S3) which caused intermittent backpressure and checkpoint failures; examined job summary statistics for job and task incompletes, end-to-end duration, checkpoint size and processed in-flight data; diagnosed checkpoint delays – start delays, task alignment durations, unbalanced and balanced subtask behavior; found and solved root causes of checkpoint failures (synchronous I/O, low parallelism between tasks); adjusted execution.checkpointing.timeout; addressed DDoS-like load on external systems (S3) and high I/O contention as bottlenecks

Utilized entropy injection to improve the scalability of external sink connections (S3)

Refactored Flink streaming jobs for fault tolerance using state snapshots and judicious use of checkpoint barriers and barrier alignment; performed failure analysis of failed tasks and recovery from checkpoints/task manager slots; drilled down into tasks – map, keyBy, apply, sink – with slot component analysis; instituted and observed a recovery strategy for all Flink workers, including positions of input streams; wrote a PyFlink job to find all active checkpoints in a Flink stream (proprietary); a second version accounted for multiple inputs into a Flink operator; ran experiments on barrier alignment and triggering snapshots; performed a tradeoff analysis between exactly-once and at-least-once; analyzed unaligned barriers and their relationship to long checkpoint times and checkpoint timeouts; wrote a PyFlink job examining barrier forwarding and the capture of in-flight data
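
A minimal PyFlink sketch of the checkpointing knobs discussed above – checkpoint interval and timeout, exactly-once versus at-least-once mode, and unaligned checkpoints; the specific values are illustrative assumptions, not the production settings.

from pyflink.datastream import StreamExecutionEnvironment, CheckpointingMode

env = StreamExecutionEnvironment.get_execution_environment()
# Exactly-once barriers every 60 s; swap to AT_LEAST_ONCE to trade consistency for latency.
env.enable_checkpointing(60_000, CheckpointingMode.EXACTLY_ONCE)

cfg = env.get_checkpoint_config()
cfg.set_checkpoint_timeout(120_000)              # fail checkpoints stuck on slow S3 writes
cfg.set_min_pause_between_checkpoints(30_000)    # let backpressured operators drain
cfg.set_max_concurrent_checkpoints(1)
cfg.enable_unaligned_checkpoints()               # barriers overtake in-flight buffers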

At scale – Flink jobs at 300,000 msgs per second (1 KB Avro records) – utilized RocksDB incremental checkpointing, observed memtable and SST file stagnation and turnover, and analyzed edge cases of space amplification from stale data (tombstones)
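
A minimal sketch of enabling the incremental RocksDB state backend referenced above, assuming the Flink 1.13+ PyFlink API; the S3 path is a placeholder.

from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.state_backend import EmbeddedRocksDBStateBackend

env = StreamExecutionEnvironment.get_execution_environment()
# Incremental checkpoints upload only changed SST files instead of the full state.
env.set_state_backend(EmbeddedRocksDBStateBackend(enable_incremental_checkpointing=True))
env.enable_checkpointing(60_000)
env.get_checkpoint_config().set_checkpoint_storage_dir("s3://example-bucket/flink/checkpoints")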

Troubleshot Flink jobs for excessive latency and sluggishness – measured skew between the system clock and the latest watermark (custom PyFlink job) and developed a custom histogram metric; utilized Flink's latency-tracking paradigm – injecting latency markers (operators and sources -> multiple histograms); optimized Flink jobs for low latency via custom, smart serialization (avoiding Kryo, using POJOs); reduced network buffer timeouts, auto-watermarking intervals and the checkpointing interval; utilized pre-aggregation of windows, disabled barrier alignment, optimized GC (JProbe) and made smart use of RocksDB
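
The proprietary PyFlink job is not reproduced here; as an illustration, the core of the watermark-skew check described above can be sketched as a ProcessFunction that compares the wall clock with the operator's current watermark.

import time
from pyflink.datastream.functions import ProcessFunction

class WatermarkLag(ProcessFunction):
    """Emits (record, lag_ms); lag is wall-clock time minus the current watermark.
    Before the first watermark arrives the lag is meaninglessly large."""

    def process_element(self, value, ctx):
        watermark = ctx.timer_service().current_watermark()
        lag_ms = int(time.time() * 1000) - watermark
        yield value, lag_ms

# usage (illustrative): lag_stream = events.process(WatermarkLag())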

Thorough network analysis of Flink streaming jobs impacted by poor network design – control plane (dispatcher, resource manager, scheduler and checkpoint coordinator; Akka) versus Task Manager task slots on the data plane (data: Netty); observed credit-based flow control over TCP/IP; observed channel credits versus the floating buffer pool; observed and conducted experiments on the network configuration options taskmanager.network.memory.buffers-per-channel and taskmanager.network.memory.floating-buffers-per-gate, plus min/max parameters

Tuned Flink jobs for latency issues by examining watermarking metrics, pre-aggregation (for optimization), network timeouts and serialization strategies (using POJOs instead of JSON/default Kryo); used object mappers (POJO optimization), removed unnecessary data fields, and introduced RocksDB stateful stores

Mitigated problematic skew in Flink TM slots (underutilization versus overutilization) – pinpointed the root cause: hot keys and hot spots in the task managers; applied coping strategies using finer-grained keys and/or local-global two-phase aggregation; salted keys to add randomness and spread the keys more evenly; mitigated event-time (watermark) skew – observed partition splits, multiple keyBys, and joins of streams or tables; utilized rate-limiting strategies – increased checkpointing (data integrity) and thread analysis for deserializing schemas (CPU- and I/O-bound)
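
A minimal sketch of the salting / local-global two-phase aggregation idea above, assuming the PyFlink 1.13+ windowed DataStream API; the (key, count) layout, salt range and toy source are illustrative assumptions, not the production job.

import random

from pyflink.common import Time
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.window import TumblingProcessingTimeWindows

SALT_BUCKETS = 16
env = StreamExecutionEnvironment.get_execution_environment()
events = env.from_collection([("AAPL", 1), ("AAPL", 1), ("MSFT", 1)])   # (key, count) stand-in
window = TumblingProcessingTimeWindows.of(Time.seconds(60))

# Phase 1: key by (key, salt) so one hot key is spread across up to SALT_BUCKETS subtasks.
partials = (events
            .map(lambda e: (e[0], random.randrange(SALT_BUCKETS), e[1]))
            .key_by(lambda e: (e[0], e[1]))
            .window(window)
            .reduce(lambda a, b: (a[0], a[1], a[2] + b[2])))

# Phase 2: drop the salt and key by the original key to merge the partial counts.
totals = (partials
          .map(lambda p: (p[0], p[2]))
          .key_by(lambda p: p[0])
          .window(window)
          .reduce(lambda a, b: (a[0], a[1] + b[1])))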

Downloaded the current Apache distribution of Flink (version 1.12) from https://flink.apache.org/; set up a dual platform on Windows 10 Server and RHEL on an AWS EC2 instance; set up and established a compatible JVM/JRE, Java 11 distribution (Oracle); set up the Job Manager – actor system, scheduler, checkpointing and the job client; designed and established a 5-node (EC2 medium) Flink cluster; started and tested the Flink daemons (ps -aef)

Deployed Apache Flink 1.12 on AWS; launched a 5-node EMR (Elastic MapReduce) test cluster; reinstalled Flink on EMR; executed and tested Flink on EMR and EMR-YARN, started a batch YARN session; performed graceful shutdowns of the Flink job cluster; researched and tested Flink on EMR 5.3+ and Flink with an AWS S3 bucket

Successfully integrated Kafka topics (created from scratch) with the Apache Flink Kafka Connector; set up Maven, Scala and Flink version dependencies; set up the Kafka/Flink consumer properties file, created and tested the Flink consumer source and implemented the Flink job using the DataStream/Batch Processing API – utilized the various transformations – filter, join, reduce on grouped datasets, rebalance; utilized the console-producer CLI to generate initial Kafka topic data; designed broadcast variables; set up the Flink Graph API for the 12 Federal Reserve wire-transfer branches, 1M+ transactions per day; designed special debugging traces for Table API operator joins, unions and intersects
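
The integration above used the Java/Scala DataStream API; purely as an illustration, wiring a Kafka topic into a Flink source looks roughly like the PyFlink sketch below (broker, topic and group id are placeholders, and the matching flink-connector-kafka jar must be on the classpath).

from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors import FlinkKafkaConsumer

env = StreamExecutionEnvironment.get_execution_environment()
# e.g. env.add_jars("file:///path/to/flink-sql-connector-kafka.jar")

source = FlinkKafkaConsumer(
    topics="wire-transfers",
    deserialization_schema=SimpleStringSchema(),
    properties={"bootstrap.servers": "localhost:9092", "group.id": "flink-demo"})

stream = env.add_source(source)
stream.filter(lambda record: "FRB" in record).print()
env.execute("kafka-to-flink-demo")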

Instituted Flink best practices for logging in Flink applications (Logback/Log4j 2); utilized and instituted the ParameterTool (via the CLI) and Flink properties files; altered and configured the flink-conf.yaml file to expose JMX metrics for the dual Prometheus clusters – Flink and Kafka; instituted separate Grafana dashboards (observability GUI panels) for Kafka and Flink, both running on separate AWS EC2 instances; researched the Flink REST API, custom backpressure monitoring and Flink CEP

Designed experiments around the Asynchronous Barrier Snapshot (Chandy-Lamport algorithm), savepoints (Flink/application version control) and watermarks for stateful fault tolerance and exactly-once processing; utilized PyFlink for throwaway experiments on fault tolerance using Kafka topics as durable data sinks (proprietary)

Sr Flink Developer

Google Corporation Mountain View, California Feb. 2021 – Oct. 2022

Set up operating parameters for Flink applications – managed savepoints via the CLI and REST API, bundled and deployed Flink applications in containers, controlled task scheduling and chaining, defined slot-sharing groups, tuned checkpoints, configured state backends and recoverable state; monitored Flink clusters and applications using the Flink UI and metrics

Established development and testing of the Flink platform for streaming applications – deployment modes: standalone cluster, Docker, Kubernetes; set up HA standalone and HA Kubernetes setups; integrated Hadoop HDFS

Configured Flink jobs for Java and classloading, CPU, main memory and network buffers, disk storage, checkpointing, state backends and security

Designed and implemented external Flink data sources and sinks to AWS S3 and Cassandra, applying consistency guarantees for idempotent and transactional writes; worked with and implemented the Apache Kafka source connector, Kafka sink connector, S3 filesystem source and sink connectors and the Cassandra sink connector; implemented a custom source connector, set up watermarks, functions and timestamps; implemented a custom sink connector – idempotent and transactional, using asynchronous (non-blocking) methods

Implemented stateful functions: declared keyed state via the runtime context, implemented operator list state using the ListCheckpointed interface, connected broadcast state and CheckpointedFunction state; designed failure recovery for stateful applications and specified unique operator identifiers; defined maximum parallelism for keyed-state operators; designed the backend, state primitives, avoidance of leaky state and interoperability; updated Flink applications while modifying existing state, removed state from Flink operators, architected queryable state and exposed queryable state to external applications (Kafka)
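
A minimal PyFlink sketch of declaring keyed state via the runtime context, as described above; the field layout and metric are illustrative assumptions.

from pyflink.common.typeinfo import Types
from pyflink.datastream.functions import KeyedProcessFunction, RuntimeContext
from pyflink.datastream.state import ValueStateDescriptor

class LatestValue(KeyedProcessFunction):
    """Keeps one piece of keyed ValueState per key and emits (key, previous, current)."""

    def open(self, runtime_context: RuntimeContext):
        self.latest = runtime_context.get_state(
            ValueStateDescriptor("latest", Types.DOUBLE()))

    def process_element(self, value, ctx):
        previous = self.latest.value()        # None the first time a key is seen
        self.latest.update(value[1])
        yield ctx.get_current_key(), previous, value[1]

# usage (illustrative): stream.key_by(lambda r: r[0]).process(LatestValue())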

Manipulated and designed time-based and window operations – configured time characteristics, assigned timestamps and generated watermarks; analyzed watermarks for latency and for completeness; designed process functions such as TimerService and timers, factored in side outputs and co-process functions; defined window operators using built-in assigners, applied windows to functions and customized window operators; designed stream joins with time – interval join versus window join; provisioned for late data – dropping late events versus redirecting them (Kafka topic) and updating results to include late events; set up log files for these discrepancies
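
A minimal sketch of the timestamp/watermark assignment described above, using the PyFlink WatermarkStrategy API; the event-time field name and the 5-second out-of-orderness bound are assumptions for illustration.

from pyflink.common import Duration, WatermarkStrategy
from pyflink.common.watermark_strategy import TimestampAssigner

class EventTimeAssigner(TimestampAssigner):
    def extract_timestamp(self, value, record_timestamp):
        return value["event_time_ms"]          # assumed field name

strategy = (WatermarkStrategy
            .for_bounded_out_of_orderness(Duration.of_seconds(5))   # tolerate 5 s of disorder
            .with_timestamp_assigner(EventTimeAssigner()))

# usage (illustrative): timed = raw_stream.assign_timestamps_and_watermarks(strategy)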

Set up the Flink execution environment to read a JSON input stream, apply map and keyBy transformations, analyze the results and execute; worked with transformations – basic, keyed-stream, multi-stream and distributed; set the parallelism; worked with supported data types and created type information for data types versus explicit definition; defined keys and referenced fields by field position, field expression and key selector; implemented functions – function classes, lambdas and rich functions

Set up Flink applications from Git, imported into and installed IntelliJ IDEA, ran examples in the IDE, debugged applications in the IDE, and bootstrapped Flink in a Maven/Gradle project

Deep dive into the architecture of Flink 1.14.x – 1.15.x – analyzed the components of Flink setup, application deployment, task execution, HA setup, data transfer in Flink (Avro vs Kryo), and metrics in credit-based flow control versus task chaining; thorough analysis of event-time processing – timestamps, watermarks, watermark propagation and event time, assignment and generation; analysis of state management – operator state, keyed state and state backends; analysis of checkpoints, savepoints and state recovery; analysis of consistent checkpoints and recovery strategies from a consistent checkpoint; conducted experiments with Flink's checkpointing algorithm and analyzed the performance implications of inconsistent versus consistent checkpointing

Set up Flink stream fundamentals in dataflow programming with dataflow graphs, data parallelism and graph parallelism; set up data-exchange strategies, factoring latency and throughput; ran experiments with various Flink operations on streams and formulated impact analyses; differentiated between wall-clock time, processing time and event time in applications; analyzed watermarks versus processing-time and event-time distinguishing factors; rectified task failures

Kafka AWS SME

Pratt and Whitney Atlanta, Georgia Nov. 2018 – Jan. 2021

Set up a POC with Kafka brokers refactored into individual nodes/pods; created the Kubernetes client cluster with StatefulSets and Persistent Volumes; utilized kubectl to access the deployment of the pods and to access etcd; created the YAML files and ConfigMaps for the Kafka and Zookeeper pods; created the VPC, subnets, security groups, 3 AZs and special IAM permissions; designed and implemented custom Bash scripts to access Zookeeper metrics (ruok, etc.) by znode; modified the ConfigMaps for each znode to include JMX; integrated the Confluent Operator into PKS; built similar Kafka infrastructure using EKS; created Helm charts for the Kafka and Zookeeper artifacts (YAML files); created a custom health-management report covering JVM tuning, SLAs, Kafka hardware, Kafka benchmarking, staging practices, Control Center monitoring of Kafka/Zookeeper, REST Proxy, multi-tenancy deployment, operational excellence, and source/sink connector parameterization.

Custom KStreams microservice applications written in Java 8 with Maven (pom.xml) using the high-level DSL and KStream API; designed different-sized windows based upon the event time of existing Kafka topics carrying flight-engine metrics – PSI, temperature ranges from 500 to 1000 degrees, inbound and outbound air-flow rates, aviation fuel consumption per second, altitude in feet above sea level, outside air temperature and RPM – all in JSON format at 10 JSON records per second. Applied various filters based upon temperature, pressure and altitude; created state and state transformations of KStreams; transformed KStreams into KTables, where the KTables captured the latest event change of each JSON parameter; designed various joins between KStreams and KTables; captured all specific changes of a given temperature, pressure, G-force or aviation-fuel reading; ensured and validated that the SerDes operations were consistent with the Kafka framework (serialize -> K,V (binary) -> deserialize); validated the KStreams applications via the kafka-streams-test-utils module for logic correctness (dev and QA); utilized the Processor API to create test topologies (KStream mappings and transformations); architected workarounds for handling dedups from the Kafka streams, with impact analysis of using Flink watermarking; created custom topics for injecting multiple dedups (2, 4, 8) over multiple partitions, stress-testing algorithms inside the Kafka Processor API (Java thick client)

Created, designed and wrote a custom Kafka configuration tool in Java 8 using the Eclipse Neon.1 IDE. A JPanel provides custom input textboxes for message sizes, message transmission rates (per second, minute and day), compression rates and codecs (GZIP, etc.), number of topics, number of partitions, number of cores/threads, RAM size, RAID versus JBOD, percentage of overlap of the input data streams, and growth factor in data volume (5, 10, 20, 25%). Output includes the number of Kafka brokers, number of Zookeeper znodes, disk volume sizes, RAM size and number of CPUs/cores, based upon AWS EC2 instance types (D, M, I, etc.). The current version of KCAD (Kafka Computer Aided Design) will factor in security-module integration (Kerberos 5) and a visual aid that plots the Kafka brokers and Zookeeper instances into a JPEG/PNG file.
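
The KCAD tool above is a Java/Swing application; the Python sketch below only illustrates the kind of broker-count arithmetic such a tool performs. The per-broker throughput, retention, replication and growth defaults are assumptions, not KCAD's actual heuristics.

import math

def size_cluster(msg_bytes, msgs_per_sec, growth=1.25, replication=3,
                 retention_hours=72, broker_mb_per_sec=50, broker_disk_tb=4):
    """Rough broker count from throughput and retention, mirroring KCAD-style inputs."""
    ingress_mb_s = msg_bytes * msgs_per_sec * growth / 1e6
    replicated_mb_s = ingress_mb_s * replication
    disk_tb = replicated_mb_s * 3600 * retention_hours / 1e6
    brokers = max(3, math.ceil(max(replicated_mb_s / broker_mb_per_sec,
                                   disk_tb / broker_disk_tb)))
    return {"ingress_MB_s": round(ingress_mb_s, 1),
            "replicated_MB_s": round(replicated_mb_s, 1),
            "disk_TB": round(disk_tb, 1),
            "brokers": brokers}

print(size_cluster(msg_bytes=1_000, msgs_per_sec=100_000))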

Utilized Terraform to create AWS EC2 instances with IP addresses, DNS names and templates; modified AWS templates used to create the dev Kafka environment of 5 brokers and 3 Zookeeper nodes, the QA cluster of 8 brokers and 5 Zookeeper nodes, and prod with 10 brokers and 5 Zookeeper nodes. Automated the Kafka property files and the activation sequence of Kafka components; modified the Log4j logging to point to a specific directory (/tmp/kafka files). Installed the Ansible control server; created roles and playbooks; created the YAML files via vim; debugged the changes made to the Kafka and Zookeeper servers; created the access paths to each server via ssh-keygen – an SSH key for each server; embedded Linux commands into the roles/playbooks for convenient monitoring and troubleshooting of the Linux clusters (EC2 instances).

Created a 7-node Kafka cluster on Kubernetes in AWS (N. Virginia region). Created EKS Kubernetes with 7 nodes; installed kubectl and kafkacat; created 3 Zookeeper nodes/pods; created the YAML files and ran kubectl with different port numbers; created the Kafka service/pods (7); created test Kafka topics plus a test producer and a test consumer. Created a performance harness with vmstat and iostat, and a loop to create 1,000 to 50,000 topics; tabulated the performance degradation of Kafka running in an orchestration environment – Kafka (stateful) versus Kubernetes (stateless). Also monitored the impact upon JVM (OpenJDK) heap sizes (new versus old generation, G1 garbage collection). Developed a POC for the Istio service mesh for Kubernetes orchestration of worker/pod topologies.
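
A hedged sketch of the topic-scaling harness described above: create batches of test topics with confluent_kafka's AdminClient so broker behavior can be measured as topic counts grow. The original harness used shell tooling; broker address and naming scheme here are placeholders.

from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

def create_test_topics(count, prefix, partitions=3, replication=3):
    topics = [NewTopic(f"{prefix}-{i}", num_partitions=partitions,
                       replication_factor=replication) for i in range(count)]
    for topic, future in admin.create_topics(topics).items():
        future.result()          # raises if the broker rejected the topic

for n in (1_000, 10_000, 50_000):
    create_test_topics(n, prefix=f"perf-{n}")   # capture vmstat/iostat between steps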

Integrated custom Java connectors and custom Kafka producers using the Producer API. Designed and wrote custom Java consumers with the Java Consumer API; utilized the Eclipse Neon.3 IDE for Java. Created the Java classes, methods, routines and packages and used the Java debugger to validate the logic; created the jar files and incorporated them into the Kafka classpath. Designed Java performance benchmark code using n x m matrix multiplication. Implemented the GCEasy JVM monitor and monitored the frequency of garbage collection and the ratio of new- versus old-generation Java objects. Installed the json-data-generator for the QA and dev Kafka clusters and adjusted its parameters for 3,000 to 20,000 messages per second.

Created the AWS EC2 instances for the Kafka brokers, Kafka connectors for Oracle, SQL Server, S3 and Hadoop (HDFS/Hive), Schema Registry, KSQL and Zookeeper (3). Modified over 30 scripts and properties files. Calculated data intake at 450 Mb/second peak throughput – the topology included 5 Kafka brokers (500,000 messages per second per broker).

Installed Python 3.7 on the local desktop – assisted data scientists in creating custom code for analyzing flight-performance data of the F-35 and F-22 Raptor, covering over 100 different engine-performance parameters. Used IDLE to eliminate source TAB characters and ensure proper spacing; used online Python interpreters to debug code, with extensive use of print statements as necessary. Developed correlation coefficients to mathematically determine possible relationships between temperature, pressure, density of aviation fuel, oxygen level, altitude of the plane, etc.; utilized Matlab to plot the results with milliseconds as the tick marks. Utilized pip to install utilities from wheels (whls). Extensive debugging and troubleshooting using Anaconda/Jupyter notebooks and interactive testing with cells.
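
A minimal Python sketch of the correlation analysis described above, using pandas; the CSV path and column names are illustrative placeholders, not the actual F-35/F-22 telemetry schema.

import pandas as pd

frame = pd.read_csv("engine_telemetry.csv")          # millisecond-resolution samples
cols = ["temperature", "pressure", "fuel_density", "oxygen_level", "altitude"]
corr = frame[cols].corr(method="pearson")            # pairwise correlation coefficients
print(corr.round(3))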

Set up JMX metrics for Kafka and Zookeeper. Monitored the G1 garbage collection algorithm; tuned the young and old partitions of the Java heap; monitored the ratio between GC pauses and productive time of the Kafka Java applications. Executed kafka-producer-perf-test and kafka-consumer-perf-test and examined the impact of message volume and size upon the Java heap space. Installed JConsole on selected Kafka brokers for detailed analysis of various JMX metrics – frequency of garbage collection in relation to Java application method invocations.

Lead Kafka Architect

Liberty Mutual Insurance Dover, New Hampshire May 2015 – Oct. 2018

Taught a custom 3-day Kafka training course to the client's DevOps engineers covering operation, security, logging, performance tuning, best practices, the consumer rebalance algorithm, leader/follower and topic allocation strategies, Kafka Tool usage, system/systemctl startup scripts, adding/deleting topics and partitions, Zookeeper command-line processing, Kafka properties files and their use for tuning and maintenance, how to set up robust benchmarks for consumers and producers, setting up custom Kafka installs for Apache and Confluent Open Source, analysis of the Kafka zero-copy algorithm for consumers, how to set up a custom partitioner instead of round-robin allocation, how to activate JMX parameters for Kafka and Zookeeper, how to set up Bash scripts for log scanning across the entire Kafka cluster, and monitoring the size of data logs. Set up RESTful interfaces, Kafka Schema Registry, and Kafka Connect to a MySQL database as the persistence layer.

Created the Kafka-Flink integration plan incorporating Confluent Enterprise Kafka version 0.11 and the data Artisans streaming version of Flink, with support for Kubernetes container technology. Transformed the use case of 2,000,000 IoT devices emitting 2K messages per second over a 24x7 time frame. Devised the Kafka topics through a custom partitioner, adding a salt to improve the randomness and disperse messages evenly across all partitions. Devised the DR plan using MirrorMaker between two cities, Dover, NH and Boston, MA. Worked with site reliability engineers to use portions of Netflix Chaos, a framework for injecting random errors into the Big Data framework, and for a 360-degree view of the various logs distributed in the Big Data topology; established smart cron jobs under RHEL 7 and centralized reporting and functionality for Level 1 and 2 support.

Integrated the Flink Kafka Consumer so Flink jobs listen on specific Kafka topics. Designed Flink savepoints to replay with different versions of Flink applications – replaying the same stream of data across different application logic. Tackled critical issues involving backpressure, adjusting the size of the memory buffers to handle changes in the velocity of the streaming data. Tested and implemented the Kafka consumer offset with the Flink map task. Worked with the ElasticSearch architect to build out the Flink ElasticsearchSink connector in Java. Implemented the Flink checkpoint feature for exactly-once delivery. Modified the Kafka consumers to emit watermarks (digital tracking) to promote in-flight topic tracing and message accountability through special Java methods. Designed and implemented the ElasticSearch Kafka data connector for the Kafka consumer. Used Java profilers to discern hot spots in Java methods within Java packages using the Kafka APIs.

Downloaded the latest stable Spark 2.2.1 binaries from Apache.org; overlaid the Hortonworks 2.4 distribution of Hadoop/HDFS/HBase via Ambari; installed Java and Scala 2.11. 75 nodes – 32 cores, 30 TB of disk each (6 TB disks at 10,000 rpm), 2 GB SSD for the Linux OS (Red Hat 7.3 Enterprise edition), 10 Gb Ethernet backbone. Master-case analysis over 600+ GB of customer data on car accidents, police reports and demographic information from state DMVs; utilized MLlib regression models (2/3-pass in-memory algorithms across multiple Spark workers); leveraged the DataFrame API for Tungsten memory-management optimization instead of the classic JVM (young/Eden) allocation. Resolved the cross-licensing issues with the numerical programming package Breeze for optimized numerical processing. Utilized the Power Iteration Clustering algorithm to find logical groups based on age, sex, geography, type of car and length of driving for insurance-coverage characteristics. Downloaded Mesos from a mirror and submitted Spark batch jobs into Mesos 1.0.0. Established and designed Spark test jobs for incremental performance testing (1 MB, 10 MB, 100 MB, 1 GB, 10 GB) on the dev and QA Spark clusters, varying the number of cores and RAM available to each Spark worker. Installed Datadog to monitor the Spark clusters across dev, QA and prod and the performance of the Spark jobs. Refactored Spark batch jobs to mitigate excessive shuffling, with heuristic salting of keys to prevent overloading certain Spark nodes and to reduce CPU and I/O contention.
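
A hedged PySpark sketch of the DataFrame/MLlib regression pattern described above; the input path, feature columns and label are placeholders, not the actual claims/DMV schema or the production model.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("claims-regression").getOrCreate()
claims = spark.read.parquet("s3://example-bucket/claims/")      # DataFrame, Tungsten-managed

assembler = VectorAssembler(
    inputCols=["driver_age", "vehicle_age", "annual_miles"], outputCol="features")
model = LinearRegression(featuresCol="features", labelCol="claim_amount") \
    .fit(assembler.transform(claims))
print(model.coefficients, model.intercept)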

Led the screening effort to pick the best architects in the industry for building out the associate Big Data Solutions Architect role – including candidates from Walt Disney, MasterCard and Coca-Cola. Looked for candidates with deep knowledge of data streaming, windowing and watermarks with Kafka and Flink. Created a 50-point skills evaluation and a Big Data use-case and design scenario for evaluating candidate strengths and weaknesses.

Conceived and designed an 80-page Kafka playbook for operating and tuning high-performance, low-latency messaging systems – tuning broker, producer and consumer nodes, and Java GC.

Created a 110-page cookbook on using Datadog and creating custom performance dashboards for Zookeeper; created a 100-page cookbook for creating Kafka performance dashboards.

Conceived and created a 30-page architecture design document for integrating the Kafka consumer with a Hadoop batch gateway via Hortonworks release 2.4 Ambari REST services. Integrated the Kafka consumer with SPNEGO/Kerberos 5 authentication services; wrote a custom document for Kafka–Hadoop data transfer and integration.

Installed the Confluent 3.2.1 Kafka broker and Operations Dashboard; ran and modified the Kafka performance load against the Datadog Zookeeper and Kafka dashboards. Built custom Docker dashboards for 500+ Docker containers on the Red Hat Linux 7.3 kernel; integrated Kubernetes as the orchestration engine for the Docker containers.

Set up AWS MSK (Managed Streaming for Apache Kafka); remapped the data ingestion (3 V's) to appropriate sizing parameters (EC2 instance types); integrated Datadog to ascertain SLAs; instance pricing considered: kafka.m5.2xlarge at $0.84/hour, kafka.m5.4xlarge at $1.68/hour, kafka.m5.8xlarge at $3.36/hour.

Top-level analysis and design of the Kafka broker, Kafka producer, Kafka consumer, Schema Registry (100 schemas), and Kafka Connect for Cassandra and Hadoop HDFS; utilized the 0.11 release of the API for the producers and consumers. Set up the replication factor and logical compensation for automated consumer rebalancing. Utilized MirrorMaker for setting up DR between two data centers. Established the JVM criteria for the Kafka brokers, producers and consumers. Set up the Kafka metrics using JProbe and Confluent Control Center release 3.1. Devised durable commit markers for Kafka consumers – custom Java code. Designed optimal consumer groups based on use case, topic type and number of Kafka brokers. Conducted Netflix Chaos site reliability engineering (SRE) tests on the dev platform – dropping partitions, brokers and consumers. Set up the Zookeeper cluster (3 and 5 nodes) and utilized the four-letter commands for Zookeeper systems administration.

Software Engineer Positions

Fortune 500 Corporations 1974-2015

Architect-level consulting with Bank of America, Allstate Insurance, Sumitomo Mitsui Investment Bank, Motorola, Marriott International Hotels, Chen Yu Enterprises, IBM Thomas J. Watson Research Center, Salomon Brothers Investment Bank and L.F. Rothschild.

Various systems development and design staff positions with Fortune 500 corporations including Merrill Lynch, AT&T, General Motors, Chrysler, NCR Corporation, Allied Signal/Bendix, Itochu Corporation (NKK), JP Morgan Chase/Banc One, Salomon Brothers and L.F. Rothschild.

Education

Fairleigh Dickinson University MBA in Management Information Systems

Rutherford, New Jersey

Wayne State University Master’s in Computer Science – emphasis in large scale software system design

Detroit, Michigan

Wayne State University Bachelor’s in Applied Mathematics – partial differential equations, game theory

Detroit, Michigan

Certifications

Oracle/Sun Certified Solutions Architect

AWS Certified Solutions Architect Associate (early 2024)


