Post Job Free

Python, Matlab, Pytorch, Keras, Tensorflow, Theano, C/C++, Latex

Location:
Gilbert, AZ, 85297
Posted:
September 24, 2020

Contact this candidate

Resume:

YIKANG LI

*** *. **** *** *****, Tempe, AZ *****

+1-480-***-**** ********@***.***

RESEARCH INTEREST

My research interests focus on computer vision and machine learning. Currently, my research concentrates on applying deep learning methods to image and video processing, including zero-shot learning for image-attribute classification, semantic retrieval of image-tag pairs, spatio-temporal localization and recognition of actions and events in videos, image and video retrieval through compact binary codes, semantic embedding of videos, and event recognition in very long and complex video sequences.

PROJECTS

Very Long Video-based Complex Event Recognition since Jan 2019

Lead the project for recognizing complex events in very long videos with only event-level labels available.

Proposed a new RNN unit with a cosine-similarity-based pooling function. Additionally, applied a bilinear reweighting function to enhance frame correlation.

Video Activity Recognition via Visual Reasoning and Plan Recognition since Jan 2019

Participate in the ONR-supported project for classifying action distributions utilizing both low-level visual features and high-level logical and planning features.

Provide support for extracting both hand-crafted and deep-learning-based low-level visual features.

Edge Computing: Pruning Neural Networks Jan 2017 - May 2017

Participated in the project for developing a new strategy for re-training a pruned neural network.

Designed and provided the architecture and strategy for neural network retraining.

Social Network Image Retrieval for Specific Groups Jan 2017 - Aug 2017

Participated in the project for classifying and retrieving images from social networks for specific groups.

Provided support for object detection in images utilizing Fast R-CNN.

MSR Video to Language Challenge May 2016 - July 2016

Led the project for captioning untrimmed video clips with human-like annotations.

Designed a multi-modal architecture to encode both video and text sequences and decode the corresponding captions from video features.

Image and Video Based Q&A Feb 2016 - June 2016

Participated in the project for answering questions asked about image or video contents.

Provided support for extracting deep-learning-based video features and designed a skip-thought-based question-answer pair classification model.

EDUCATION

Arizona State University, Tempe, USA 2014 - Now

Ph.D., Electrical Engineering with a focus on Computer Science, advised by Prof. Baoxin Li

George Washington University, Washington, D.C., USA 2011 - 2013

M.Sc., Electrical Engineering, advised by Prof. Kie-Bum Eom & Prof. Murray Loew

Xi'an Jiaotong University, Xi'an, China 2007 - 2011

B.Sc., Electrical Engineering

WORKING EXPERIENCE

Research Intern May - Aug 2019

OPPO Company

Worked on a project on semantic embedding and retrieval for image and multi-tag pairs with transformer-based models.

Utilized a SENet and a multi-head self-attention mechanism in this project.

Proposed two related patents.

Teaching & Research Assistant Since 2015

Ira A. Fulton Schools of Engineering & School of CIDSE

Invited lecturer for CSE 591: Intro to Deep Learning in Visual Computing (Spring 2018)

SKILLS

Programming: Python, Matlab, PyTorch, Keras, TensorFlow, Theano, C & C++, LaTeX

PUBLICATIONS

Yikang Li*, T. Yu*, B. Li, "RhyRNN: Rhythmic RNN for Recognizing Events in Long and Complex Videos", European Conference on Computer Vision (ECCV) 2020.

T. Yu, Yikang Li, B. Li, "Learning Diverse Features via Determinantal Point Process", International Conference on Learning Representations (ICLR) 2020.

Yikang Li, T. Yu, B. Li, "Recognizing Video Events with Varying Rhythm", arXiv:2001.05060.

Y. Zha, Yikang Li, T. Yu, S. Kambhampati, B. Li, "Plan-Recognition-Driven Attention Modeling for Visual Recognition", Workshop on Plan, Activity, and Intent Recognition at AAAI 2019.

Yikang Li*, T. Yu*, B. Li, "Simultaneous Event Localization and Recognition for Surveillance Video", IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS) 2018, Oral.

Y. Zha, Yikang Li, S. Gopalakrishnan, B. Li, "Recognizing Plans by Learning Embeddings from Observed Action Distributions", International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2018.

Yikang Li*, P.L.K. Ding*, B. Li, "Mean Local Group Average Precision (mLGAP): A New Performance Metric for Hashing-based Retrieval", arXiv preprint arXiv:1811.09763.

Yikang Li*, P.L.K. Ding*, B. Li, "Training Neural Networks by Using Power Linear Units (PoLUs)", arXiv preprint arXiv:1802.00212.

P.S. Chandakkar, Yikang Li, P.L.K. Ding, B. Li, "Strategies for Re-Training a Pruned Neural Network in an Edge Computing Paradigm", IEEE International Conference on Edge Computing (EDGE) 2017.

Yikang Li, S. Hu, B. Li, "Video2Vec: Learning Semantic Spatio-Temporal Embeddings for Video Representation", IEEE International Conference on Pattern Recognition (ICPR) 2016.

Yikang Li, S. Hu, B. Li, "Recognizing Unseen Action in a Domain-Adaptive Embedding Space", IEEE International Conference on Image Processing (ICIP) 2016.

Z. Tu, J. Cao, Yikang Li, B. Li, "MSR-CNN: Applying Motion Salient Region Based Descriptors for Action Recognition", IEEE International Conference on Pattern Recognition (ICPR) 2016.

* indicates equal contribution
