Post Job Free

Hadoop

Location:
United States
Salary:
70,000
Posted:
August 26, 2015


Resume:

GUOFENG JIN

**** ******** ***, **********, *******

**************@*****.***

Ph: 424-***-****

OBJECTIVE

Dedicated system engineer with one and a half years of combined management and development experience, familiar with Oracle Database and Linux systems. Through Hadoop, became interested in cloud computing and big data; resigned from a previous position and moved to the United States to complete a UCLA certificate program covering statistics, Hadoop, machine learning, and artificial intelligence. Seeking a position in the field of big data and cloud computing.

Detailed Information

Full Name Guofeng Jin

Contact 424-***-****

E-mail address **************@*****.***

Current Location Los Angeles

Experience 1.5 years

Work Authorization OPT

Relocation Yes

Travel Yes

WORK EXPERIENCE

IT Infrastructure System Engineer

January 2013 - September 2014

Samsung SDS, Beijing, China

Computer/IT Services

EDUCATION

Bachelor's Degree, Electronic Information Engineering, September 2009 - June 2013

Civil Aviation University of China, Tianjin

GPA: 3.4

Certification, System Analyst

January 2015 - September 2015

UCLA, Los Angeles

GPA: 3.6

CERTIFICATION

Oracle 11g DBA OCP Certification

October 2013

Oracle

Hive

May 2015

IBM

Spark

June 2015

IBM

SKILLS

Eclipse (Java) Beginner

Oracle Database Intermediate

Linux (Red Hat) Intermediate

SAP Basis Intermediate

Hadoop (MapReduce) Intermediate

Hive Beginner

SQL Server Intermediate

HBase Beginner

Python Beginner

Spark Beginner

VMware Intermediate

LANGUAGES

Chinese - Mandarin Native

English Advanced

Korean Native

PROFESSIONAL MEMBERSHIPS / AFFILIATIONS

51cto

CSDN

GitHub

Coursera

Udacity

edX

SAP Basis Forum (SCN)

Hacker Union

HONORS & AWARDS

BeiDou Cup (Science fiction)

ISO 9001 (Internal Auditor Training)

RESPONSIBILITIES

Migrated a NetWeaver 7.3 / ERP 6 Enhancement Package 7 system from Windows Server to Linux. Set up SAP Solution Manager to manage, upgrade, and tune the SAP landscape more efficiently.

Maintained a distributed Linux system of more than 20 nodes. Migrated databases from the production environment to the test environment. Performed data ingestion, transformation, storage, and retrieval using Sqoop and Flume.

Worked on implementing a queryable Hadoop cluster using HAWQ, Hive, and Impala. Built personalized customer experiences from SAP customer data using Spark.

Used Spark Core for log analytics.
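The log analytics described above can be sketched as follows. This is an illustrative example only, not code from the actual project: it uses a hypothetical log format and plain Python to mirror Spark Core's map/reduceByKey pattern, since it does not assume a running Spark cluster.

```python
from collections import Counter

# Hypothetical log lines; a real Spark job would read these from HDFS into an RDD.
logs = [
    "2014-08-01 10:00:01 ERROR disk full",
    "2014-08-01 10:00:02 INFO job started",
    "2014-08-01 10:00:03 ERROR disk full",
    "2014-08-01 10:00:04 WARN slow node",
]

def log_level(line):
    """Extract the log level (third field), as a Spark map() step would."""
    return line.split()[2]

# Plain-Python equivalent of:
#   rdd.map(log_level).map(lambda lvl: (lvl, 1)).reduceByKey(lambda a, b: a + b)
level_counts = Counter(log_level(line) for line in logs)
print(dict(level_counts))
```

In a real Spark job the same per-level counting would be distributed across the cluster; the transformation chain in the comment shows the RDD form of the same logic.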

KNOWLEDGE

Strong understanding of system capacity and of the basics of memory, CPU, operating systems (Linux), storage, and networking.

Excellent understanding of Hadoop architecture and its components, including HDFS, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce programming paradigm.
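The MapReduce paradigm mentioned above can be illustrated with the classic word-count example. This is a minimal pure-Python sketch of the pattern (not actual Hadoop code): the mapper emits (word, 1) pairs, a simulated shuffle groups pairs by key, and the reducer sums each group.

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit a (word, 1) pair for each word, as a Hadoop Mapper would.
    for word in line.lower().split():
        yield (word, 1)

def reducer(word, counts):
    # Reduce phase: sum the counts for one key, as a Hadoop Reducer would.
    return (word, sum(counts))

def mapreduce(lines):
    # Shuffle/sort: group intermediate pairs by key before reducing,
    # mimicking the framework's shuffle between the map and reduce phases.
    pairs = sorted(p for line in lines for p in mapper(line))
    return dict(reducer(key, [c for _, c in grp])
                for key, grp in groupby(pairs, key=itemgetter(0)))

print(mapreduce(["big data big cluster", "data node"]))
```

In real Hadoop the map and reduce functions run on separate nodes and the framework performs the shuffle; the local simulation keeps the same three-phase structure.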

Overall knowledge of big data technology trends, vendors, and products. Program in Scala and Java.

Experience with Scala in addition to an object-oriented language such as Java. Large-scale data processing experience using Spark and Hadoop/MapReduce.

INTERESTS

Big Data, Hadoop, Cloud Computing, Internet of Things

REFERENCES


