
Full Stack Front End

Location:
Crafton, PA, 15205
Posted:
August 20, 2023


Yoongeun Kim (Brian Kim)

**** **** ***** **** # **

PITTSBURGH, PA 15205-4805

(408) 800-8743

ady2ts@r.postjobfree.com

Skype: live:windkim60

LinkedIn: www.linkedin.com/in/yoongeun-kim-b2988889

Legal Status: U.S. Citizen

Professional Summary:

Experienced in a wide range of fields, including computer networking (TCP/IP), Java, JavaScript, Angular.js, Node.js, Ruby on Rails, Delphi, C#, C/C++, Visual Basic, Python, Perl, and Linux kernel programming.

Experienced in both front-end and back-end development.

Long DevOps experience, including Kubernetes, Docker, Elasticsearch, Apache Spark, Hadoop, Jenkins, GitHub, Apache Airflow, Concourse, and IBM Connect:Direct.

Experienced in converting legacy code to microservices.

Experienced with Apache Kafka and Redis in-memory caching.

Tools:

Linux: Ubuntu, CentOS, Red Hat

Network tools: Wireshark

Cloud platform: Amazon Web Services (AWS)

Languages: Java, JavaScript, Perl, Python, Ruby on Rails, C#, Visual Basic, C/C++

Software: Kubernetes, Docker, Spark, Redis, Kafka, Jenkins, GitHub, S3, EC2

Skills & Experience

EMPLOYMENT HISTORY:

Role: DevOps Engineer & Java Full Stack Developer

Eli Lilly, Indianapolis IN 08/2022 – 05/2023 (Python, Elasticsearch, Jenkins, Ansible, Automation with Apache Airflow)

Eli Lilly is one of the largest pharmaceutical companies in the world. It manages billions of documents for employees, customers, and drugs, so quickly retrieving documents, updating them, and removing old, unnecessary ones is very important for the company. The eArchive management system was created for this purpose: documents are searched with Elasticsearch and updated by Python programs. Jenkins also automates many tasks, scripted in Ansible, Bash, XML, and YAML, and it manages AWS, Azure, and several databases.

Project 1: Write a Java program that receives data from the front end, executes the Elasticsearch query, and sends the result back to the front end.

Project 2: Create Elasticsearch indexes and index patterns for the specified documents.

Project 3: Write Jenkins scripts to run many tasks.

Project 4: Automate jobs using Python and Apache Airflow.
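As a small illustration of Project 1, the query body such a program might send to Elasticsearch can be built like this. The index fields ("category", "updated_at") and values are hypothetical, not Lilly's actual schema:

```python
import json

def build_expired_docs_query(category, cutoff_date):
    # Bool query: match the document category and filter on a date range,
    # the shape used to find old documents due for update or removal.
    return {
        "query": {
            "bool": {
                "must": [{"match": {"category": category}}],
                "filter": [{"range": {"updated_at": {"lt": cutoff_date}}}],
            }
        }
    }

body = build_expired_docs_query("drug-label", "2020-01-01")
# The JSON body would be POSTed to /<index>/_search or /<index>/_delete_by_query.
print(json.dumps(body, indent=2))
```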

Role: DevOps Engineer & Java Developer

Comcast Cable Communications, Philadelphia PA 10/2021 – 08/2022 (GitHub, Concourse, Docker, Kubernetes, Elastic Stack, and Automation)

Comcast is a very large telecommunications company. Customers report device malfunctions and complaints, and the devices themselves report problems automatically to Comcast servers, so the system must handle many thousands of alerts and errors per day. It therefore processes these errors and complaints automatically and systematically.

Project 1: Build microservices using Spring Boot

Divide the large service into small microservices based on domain-driven architecture and implement each microservice with Spring Boot. Build Docker images, deploy the services into Kubernetes namespaces, and connect the services using an Ingress controller or node ports.

Project 2: Build Docker images and deploy them into the Kubernetes cluster.

Error messages are gathered with Logstash and the relevant information is extracted by a Python program. The Python program is packaged as a Docker image and deployed into a Kubernetes namespace.

Project 3: Set up an ingress controller for Ingress

Container orchestration requires connections between services, using node ports or an ingress controller to reach programs outside. Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. For the Ingress resource to work, the cluster must have an ingress controller running; I set one up for the Kubernetes cluster.
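For illustration, an Ingress resource of the kind described can be expressed as a plain manifest; here built as a Python dict of the sort one would apply via the Kubernetes client. The name, host, and service here are invented, not Comcast's actual configuration:

```python
def make_ingress(name, host, service, port):
    # Minimal networking.k8s.io/v1 Ingress manifest routing HTTP
    # traffic for `host` to the in-cluster service `service:port`.
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": name},
        "spec": {
            "rules": [{
                "host": host,
                "http": {"paths": [{
                    "path": "/",
                    "pathType": "Prefix",
                    "backend": {"service": {"name": service,
                                            "port": {"number": port}}},
                }]},
            }]
        },
    }

manifest = make_ingress("alerts-ingress", "alerts.example.com", "alert-svc", 8080)
```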

Role: DevOps Engineer

Experian, Costa Mesa CA 04/2021 – 10/2021 (Model Verification for Extremely Large-Scale Data Processing)

Experian plc is an Anglo-Irish multinational consumer credit reporting company. Experian collects and aggregates information on over 1 billion people and businesses, including 235 million individual U.S. consumers and more than 25 million U.S. businesses. It gathers, analyzes, and processes data in ways others can't: it helps individuals take financial control and access financial services, helps businesses make smarter decisions and thrive, helps lenders lend more responsibly, and helps organizations prevent identity fraud and crime.

It has built about 200 models for this job.

Project: Model Verification

The following are the steps performed for model verification:

a. Generate a YAML file from the given model; the developer then modifies it.

b. Generate a C file for the given model on Linux, build the .so file, and copy it to Windows.

c. Modify the Scala program to run the given model.

d. Compile the Java and Scala sources together with the .so file built by gcc.

e. The above steps produce a .jar file, which is copied to S3.

f. Create the manifest file for the given model in S3.

g. Open Spark and submit the job for many millions of PINs. The source data files are in Parquet format.

h. Open the Spark shell and compare the source data with the generated data using a Scala program.

i. If something does not match, find the root cause of the mismatch.

j. If necessary, modify the generated C file and recompile.

k. Compare the source data with the generated data again and reduce the mismatches until the number of mismatched PINs is 0.
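The comparison loop in the last steps amounts to joining the two datasets on the PIN and counting disagreements. A toy sketch of that check, in Python rather than the Scala actually used, with made-up records:

```python
def count_mismatches(source, generated):
    # Both inputs: dict mapping PIN -> model score.
    # Returns the PINs whose generated score differs from the source score.
    return sorted(pin for pin, score in source.items()
                  if generated.get(pin) != score)

source = {"p1": 700, "p2": 645, "p3": 810}
generated = {"p1": 700, "p2": 650, "p3": 810}
print(count_mismatches(source, generated))  # ['p2'] -> investigate, fix, recompile
```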

Role: Java Full Stack developer

PNC Bank, Pittsburgh PA 03/2020 – 03/2021 (Very Large-Scale Batch Processing and JSP, MVC, RESTful Web Service Implementation) Java Lead Developer

PNC Bank is one of the largest banks in the USA. It handles more than 5 million concurrent users and produces more than 100 million bank documents per week, so batch processing of these documents takes a very long time. Our team's responsibility was to make this job faster and more reliable.

Project 1: Speed up batch processing by using parallel processing.

PNC used 1 batch server and 1 app server for this job. My duty was to connect 6 batch servers and 6 app servers using IBM Connect:Direct, point-to-point (peer-to-peer) file-based integration middleware for 24x365 unattended operation that provides assured delivery and high-volume, secure data exchange within and between enterprises, and then to run the servers in parallel. This increased the speed 6 times.

Configured the Connect:Direct servers from scratch.

Wrote the processing statements in the IBM Connect:Direct process language.

Created 6 queues to run the servers in parallel.

Wrote Python scripts to send the data from the batch servers to the app servers.

Project 2: Use RxJS and Angular for asynchronous computing

Built templates for the front-end interface and components within them for data updating, using Angular. Created event streams from the front end to the back end using RxJS: an Observable sends events to any downstream observers, parallel streams are combined when necessary, the database is populated from the stream data, and a buffer is provided for the event emitter.
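The pattern just described, observables pushing events downstream with parallel streams merged into one, is language-agnostic; here is a minimal Python stand-in for the RxJS mechanism, with invented event names:

```python
class Stream:
    # Minimal observable: observers subscribe, and every emitted
    # event is pushed to all downstream observers.
    def __init__(self):
        self._observers = []

    def subscribe(self, fn):
        self._observers.append(fn)

    def emit(self, event):
        for fn in self._observers:
            fn(event)

def merge(*streams):
    # Combine parallel streams into one, as with RxJS merge().
    out = Stream()
    for s in streams:
        s.subscribe(out.emit)
    return out

ui_events, server_events = Stream(), Stream()
buffer = []                                      # buffer for the event emitter
merge(ui_events, server_events).subscribe(buffer.append)
ui_events.emit("click")
server_events.emit("balance-updated")
print(buffer)  # ['click', 'balance-updated']
```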

Project 3: Handle very large-scale batch processing.

PNC runs extremely large batch jobs based on an event-driven architecture. The batch processing often produces errors and stops, and the whole process then has to restart from the beginning. My job was to find the cause of these problems and fix them so that processing runs without interruption.

Project 4: Convert Kafka Streams into a Java repository

Read the data streams from Kafka Streams and store the data in a repository for faster processing.

Project 5: Publish and subscribe to messages using a message broker

Publishers send messages to the message broker, and subscribers consume them through the broker. This is the basic building block of an event-oriented architecture.
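A toy in-memory broker showing that publish/subscribe flow (a real deployment would use Kafka or a similar broker; the topic name here is made up):

```python
from collections import defaultdict

class Broker:
    # In-memory message broker: publishers send to a topic, and
    # every subscriber of that topic consumes the message.
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("documents", received.append)   # subscriber
broker.publish("documents", {"doc_id": 42})      # publisher
print(received)  # [{'doc_id': 42}]
```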

Project 6: Build a service and service interface in the domain

Create an isolated domain that keeps no references to any other object. Map the domain objects to Java classes using JPA/Hibernate. Create the data classes, the interface, and the implementation, and connect them to the repository. One service in the domain is then complete.
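The layering just described, a domain object behind a repository and a service implementation whose public methods form the interface, might look like this in outline. This is a Python stand-in for the JPA/Hibernate Java classes, with invented names:

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Domain object: isolated, holds no references to other domain objects.
    account_id: str
    balance: int

class AccountRepository:
    # Stands in for the JPA/Hibernate-backed repository.
    def __init__(self):
        self._store = {}

    def save(self, account):
        self._store[account.account_id] = account

    def find(self, account_id):
        return self._store.get(account_id)

class AccountService:
    # Service implementation wired to the repository; its public
    # methods form the service interface exposed by the domain.
    def __init__(self, repo):
        self._repo = repo

    def deposit(self, account_id, amount):
        account = self._repo.find(account_id)
        account.balance += amount
        self._repo.save(account)
        return account.balance

repo = AccountRepository()
repo.save(Account("a-1", 100))
print(AccountService(repo).deposit("a-1", 50))  # 150
```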

Role: DevOps Engineer

Apple, Cupertino CA 07/2018 – 09/2019 (System Automation and Deployment for iCloud; Java Full Stack Development)

iCloud is Apple's cloud system, through which Apple provides many different services. I belonged to the iCloud team for the iMail service, which runs on more than 1,000 servers plus 20 testing servers. I managed more than 1,000 servers for performance and system automation.

Project: Writing Jenkins scripts in Ruby; Git and Bash development

My duty: Write the Jenkins scripts in Ruby and act on every Git modification in the remote repository, so that Jenkins builds a fresh version from the repository and provides a fresh iCloud installation for customers. Every day a new system is built and deployed to more than 1,000 production servers.

Role: Java Microservice Developer

Harman, Novi MI 01/2017 – 07/2018 (System automation with Perl and Java; transition to a microservice-based system)

Project 1: Build a REST API using Node.js

Build the request handler.

Build the pre-processing chain for authentication control.

Build the router that dispatches each request.

Build the controller for each specific resource.

Build the representation layer that is visible to the client app.

Build the response handler that sends the response back to the client.
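The handler chain above (pre-processing for auth, routing, controller, response) can be sketched as plain functions. This Python outline mirrors the Node.js structure; the route, token, and resource names are hypothetical:

```python
def authenticate(request):
    # Pre-processing chain: reject requests without a valid token.
    return request.get("token") == "secret"   # illustrative check only

def items_controller(request):
    # Controller for the /items resource; the returned dict is the
    # representation layer visible to the client app.
    return {"status": 200, "body": ["item1", "item2"]}

ROUTES = {("GET", "/items"): items_controller}  # router table

def handle(request):
    # Request handler -> auth -> router -> controller -> response.
    if not authenticate(request):
        return {"status": 401, "body": "unauthorized"}
    controller = ROUTES.get((request["method"], request["path"]))
    if controller is None:
        return {"status": 404, "body": "not found"}
    return controller(request)

print(handle({"method": "GET", "path": "/items", "token": "secret"})["status"])  # 200
```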

Project 2: Build the domains for the service-oriented architecture and create event-based services.

Project 3: Convert the existing system to microservices.

This requires a layered architecture.

Project 4: Deploy the microservices into Docker containers and set up routing between the services.

Project 5: Implement event-based microservices using Kafka

The service produces events.

The events are sent to Kafka.

Other microservices consume the events by listening to the topic.

Events are serialized with Apache Avro.

Role: DevOps and System Security Engineer

CVS/Omnicare, Cincinnati OH 05/2016 – 11/2016 (Refactoring code using Delphi 10.1)

The Common Weakness Enumeration (CWE), targeted at developers and security practitioners, is a formal list of software weakness types.

Project: Memory leak (CWE-404) and hard-coded sensitive information (CWE-259)

Description: There are over 900 tables in the Oracle database, and the application calls the tables and stored procedures very often. Whenever it connects to the database, it uses a hard-coded password and ID number, and it uses SSNs to search records in the database. If an attacker reads the system's memory, the attacker can learn all of this sensitive data about CVS customers, so I changed the program to remove these vulnerabilities.

Role: Java Full stack Developer

North Pacific Inc, Irvine CA 06/2015 – 04/2016 (Java, JavaScript, and Perl) Software Architect

My duty: Develop many different Java programs for the project (JPA, Hibernate, J2EE, etc.).

Role: Network Engineer for kernel development on Linux (C, C++)

IST Inc, Aliso Viejo CA 01/2010 – 05/2015

IST Inc is a network software development firm. Its main product is WTCP, an optimization of TCP that uses the extremely difficult TCP slicing technology. The product consists of 3 parts: WTCP (C) on the blade server, the ONM Server (C++) on a Linux server, and the ONM Client (Delphi) on Windows 7. Its main customers are large telecom companies seeking to reduce wireless telecom traffic. Its basic idea was provided by two famous former UC Irvine professors, Tatsuya Suda and Wei Tsai. IST worked with Korea Telecom (Korea) and NTT Docomo (Japan) on testing and implementing the system. I managed the onshore and offshore teams together for very-large-volume network traffic testing at the core network of KT, which covers 0.8 million customers in Korea.

Project: Kernel Modifications and Testing

Description: Implemented WTCP following the theoretical guideline and verified it in various network environments.

Estimated the optimal bandwidth for the network using inter-packet arrival time and inter-packet departure time, which lets the network choose the optimal window size for TCP traffic.

This enables the network to achieve maximum bandwidth utilization.
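A toy version of that estimation: if packets of known size are sent back-to-back, their inter-arrival spacing at the receiver bounds the bottleneck bandwidth, and the bandwidth-delay product then gives a window size. The numbers below are illustrative, not the actual WTCP algorithm:

```python
def estimate_bandwidth(packet_bytes, inter_arrival_s):
    # Packet-pair estimate: bottleneck bandwidth ~ packet size / spacing.
    return packet_bytes / inter_arrival_s        # bytes per second

def optimal_window(bandwidth_bps, rtt_s):
    # Bandwidth-delay product: bytes in flight needed to fill the pipe.
    return bandwidth_bps * rtt_s

bw = estimate_bandwidth(1500, 0.001)     # 1500 B every 1 ms -> ~1.5 MB/s
print(round(optimal_window(bw, 0.1)))    # 100 ms RTT -> 150000 bytes in flight
```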

Role: Java Developer

E & J (Ellen & Joan), Orange County CA 05/2003 – 12/2009

An e-commerce start-up for digital image processing (photograph transmission) and selling photographs over the internet.

Project 1: Implement domain-driven design.

Project 2: Implement a service-oriented architecture.

EDUCATION

Master's in Computer Science, PENNSYLVANIA STATE UNIVERSITY: University Park, PA, USA

Projects

Constructed compilers for the C and Pascal languages using YACC.

Implemented a distributed file system

It is an updated version of the Amoeba distributed system.

Amoeba is a distributed operating system developed by Andrew S. Tanenbaum and others at the Vrije Universiteit Amsterdam. The aim of the Amoeba project was to build a timesharing system that makes an entire network of computers appear to the user as a single machine.

B.A. in Computer Science, UNIVERSITY OF IOWA: Iowa City, Iowa, USA

The projects below verify some of Fürer's algorithms.

He was my thesis adviser.

https://en.wikipedia.org/wiki/F%C3%BCrer's_algorithm

MY THESIS:

Coloring of random graphs.

Approximated the minimum-degree spanning tree to within one of the optimal degree.

Designed an efficient NC algorithm for finding Hamiltonian cycles in dense directed graphs.

Fürer's algorithm is an integer multiplication algorithm for very large integers with very low asymptotic complexity; it can be optimized by using the inverse Ackermann function instead of the iterated logarithm. It was designed in 2007 by the Swiss mathematician Martin Fürer of Pennsylvania State University as an asymptotically faster algorithm (when analysed on a multitape Turing machine) than its predecessor, the Schönhage–Strassen algorithm published in 1971.


