Baladitya Yellapragada adzlx5@r.postjobfree.com

EDUCATION

PhD in Vision Science, 2021, University of California, Berkeley
BS in Electrical Engineering & Computer Science (EE&CS), 2013, University of California, Berkeley
BS minor in Physics, 2013, University of California, Berkeley

WORK EXPERIENCE

Senior Computer Vision Engineer, Akasa (Dec ’22-May ’23) South San Francisco, CA
Stack: PyTorch, Python, OpenCV, scikit-learn/scikit-image, Pandas, Matplotlib, Feature Extraction

• Expanded model detection capability from 1 to 8 classes of webpage UI elements for a larger medical website processing system shared across 6 hospital groups (each serving 1M+ patients).

• Reduced expensive labeling effort by 90% by lowering the minimum number of labels required per detection class.

• Defined project metrics for model capability so we could accurately determine (1) when to promote a model from research to production, avoiding another 3 months of development, and (2) which classes had features that could be learned and transferred across clients, further reducing labeling time.

Computer Vision Engineer, Clover Therapeutics (Jan ’22-Jun ’22) Jersey City, NJ
Stack: PyTorch, Python, OpenCV, scikit-learn/scikit-image, Feature Extraction

• Designed and implemented from scratch pixel-wise segmentation of a retinal disease symptom (geographic atrophy from age-related macular degeneration, AMD), first for grayscale and then for RGB color retinal images, as an automatic aid that let ophthalmologists detect the disease 85% faster.

• Developed a custom technique to augment labeled RGB training data from 60 expertly labeled images to 6,000 semi-expertly labeled images through iterative inference and filtering of 18k unlabeled images. This extra data (1) increased segmentation performance from 0.65 to 0.8 mIoU and (2) sped up expensive creation of future expert labels by 30x, and the process scales to even more images.

Graduate Researcher, Yu Lab [Computer Vision] @ UC Berkeley (Aug ’16-Dec ’21) Berkeley, CA
Stack: PyTorch, Python, MATLAB, OpenCV, scikit-learn/scikit-image, Pandas, Matplotlib, Feature Extraction, Caffe

• Published 4 papers, 1 in review.

• (Pre-publication) First author; implemented a self-supervised neural network that classifies 12 zebra finch call categories with performance comparable to established supervised bioacoustic classifiers modeled on real bird brain activity; showed that audio filters learned independently by the two models formed automatic groups with similar audio-processing behavior.

• (Published) First author; implemented and compared self-supervised and supervised neural networks to classify varying severity scales of macular degeneration from 100k retinal images, with a focus on achieving performance comparable to established baselines. Our goals were (1) to create a fast diagnostic aid requiring minimal expert labels to start, and (2) to show the clinical versatility and explainability of the model's behavior, which led to the discovery of image groups driven by other subject physiology and imaging artifacts.

• (Published) First author; developed a novel approach to self-driving using a variant of an in-house self-supervised neural network. Surpassed the established offline baseline for the driving dataset and explained network behavior through visualizations.

• (Published) First author; developed a novel way of probing hidden motion filters learned by a self-driving neural network. Tracked how driving behavior was sensitive to different egomotion properties, and which neurons in each layer contributed to each.

• Developed a new visual tool to probe the global topology of the network features using a manifold-unfolding algorithm.

Lead Computer Vision Intern, Kiwicampus (May ’18-Aug ’18) Berkeley, CA
Stack: Caffe/TensorFlow, Python, OpenCV, Feature Extraction

• As lead, implemented a self-driving CNN from an NVIDIA white paper, increasing full-autonomy time from 15 seconds to 3 minutes for a food-delivery robot designed for sidewalk driving and obstacle avoidance in clear daytime conditions.

Research Intern, NTT Innovation Institute (May ’15-Aug ’15) Palo Alto, CA
Stack: Python

• Built a prototype device for controlling PowerPoint slides with the mind, using real-time EEG data to monitor a subject's attention and reliably move slides forward or backward.

PUBLICATIONS

• (Pre-publication) Comparing Artificially and Biologically Learned Audio Features for Classifying Zebra Finch Call Types. Yellapragada et al.

• Self-Supervised Feature Learning and Phenotyping for Assessing Age-Related Macular Degeneration Using Retinal Fundus Images. Yellapragada et al. Ophthalmology Retina, July 2021. https://www.sciencedirect.com/science/article/pii/S2468653021002062

• Motion Selectivity of Neurons in Self-Driving Networks. Yellapragada et al. POCV Workshop @ ECCV, Sept 2018. http://pocv18.eecs.berkeley.edu/papers/6.pdf

• The Natural Statistics of Blur. Sprague, Yellapragada, Banks, et al. Journal of Vision, Aug 2016. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5015925/

• Environmental Protection and Agency: Motivations, Capacity, and Goals in Participatory Sensing. Willet, Yellapragada, et al. ACM CHI Conference on Human Factors in Computing Systems, 2017. https://dl.acm.org/doi/10.1145/3025453.3025667

OTHER RELEVANT EXPERIENCE

Rotation Student, Banks Lab @ UC Berkeley (Aug ‘15-Jan ‘16) Berkeley, CA

Research Asst, Gallant Lab @ UC Berkeley (Jan ‘14- May ‘15) Berkeley, CA

Applications Programmer, Feller & Adesnik Labs @ UC Berkeley (Aug-Dec ’13) Berkeley, CA

Research Intern, Gallant Lab @ UC Berkeley (March ‘13-September ‘13) Berkeley, CA

UC BERKELEY LEADERSHIP / ACTIVITIES

Pioneers in Engineering (PiE): Mentor (Feb ’14-April ’16)

• Mentored a team of 6 underprivileged high school students from a continuation school (for suspended/expelled students) in Oakland through the PiE robotics competition.

PiE: Director (May ’12-May ‘13)

• Oversaw the entire 80-person organization to create a high school robotics competition from the ground up, providing 300 high school students with complete robotics kits and training 100 college-student mentors who guided the high school teams every week. Required $50,000 in annual fundraising.

PiE: Technical Coordinator (May ’11-May ’12)

• Oversaw the 30-person committee consisting of Software, Electrical, and Mechanical teams to design, develop, test, and manufacture 20 complete robotics kits.

PiE: Software Lead (Jun ’10-May ’11)

• Oversaw a 5-person subgroup to create the software for each of the 20 robotics kits provided, covering both the abstracted backend and the student-facing interface.

TECHNICAL COURSEWORK

Johns Hopkins Coursera – Statistical Analysis of fMRI Data (with Distinction)
Vision Science – Neural Computational Models; Applied Statistics in Neuroscience; seminars on Ocular Health and Function, Visual Sensitivity, and Visual Pathway Computations
EE&CS – Computer Vision; Computational Optics; Machine Learning; Graduate Algorithms; Discrete Probability & Combinatorics; Optimization in Models; Undergraduate Algorithms; Quantum Computing; Electricity & Magnetism; Transistor Physics; Operating Systems; Artificial Intelligence; Machine Structures; Network Communication; Systems and Signals; Microelectronics; Discrete Math and Probability in Computer Science; iPhone Programming
Physics – Atomic Physics; Adv. Electricity & Magnetism; Adv. Quantum Mechanics II; Adv. Quantum Mechanics I; Honors Mechanics; Honors Thermodynamics and Electrodynamics
Other – Cognitive Science: Perception; Industrial Engineering: Discrete Event Simulation
Audited – Topology Theory; reading group on Modern Computer Vision; General Engineering Optimization Theory

HONORS

• Qualcomm Undergraduate (QUEST) Grant Recipient, Spring 2013

• UC Berkeley Regents’ and Chancellor’s Scholarship, 2009-2013

• UC Berkeley Cal Alumni Leadership Award, 2011-2012


