Samvid Jhaveri
215 NE 28th St. Apt 4401, Oklahoma City, OK 73105 | +1-203-***-**** | ******.*******@*******.*** | samvid95.github.io/website
TECHNICAL SKILLS
• Proficient Languages: C, C++, C#, Java, Python, HTML/CSS, JavaScript
• Development Tools: Unity3D, VRTK, MRTK, ARCore, Vuforia, D3.js, OpenGL, three.js, A-Frame
• Headset Experience: HTC Vive, Oculus Rift, Microsoft Mixed Reality
• Computer Vision Tools: Deep Learning, TensorFlow, TPU, OpenCV, AutoML (GCP)
WORK EXPERIENCE
AR/VR Specialist & Computer Vision Engineer, Baker Hughes [Python, Unity, AR, CV] Dec 2018 – Present
• Used Unity and the NVIDIA Video Codec SDK to build a cross-platform remote-rendering and streaming service, boosting HoloLens rendering capacity from roughly 100K to 100M; deployed the solution in safety and training applications for Baker Hughes.
• Collaborated with a surgeon to develop novel markerless face tracking, using an RGBD camera to overlay CT-scan data on the human head during surgical procedures. Built a novel camera rig to map among the coordinate systems of the camera, the HoloLens, and other devices (alignment sketch below).
• Scanned and annotated roughly 200,000 images to build an Automatic Defect Recognition (ADR) solution for one of the product companies; trained the models with AutoML and TensorFlow, accelerated on TPUs.
• Built a computer vision model-serving application for the ADR project with FastAPI in a Dockerized container (serving sketch below).
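A minimal sketch of what such a serving endpoint can look like, assuming a hypothetical TensorFlow SavedModel exported to ./adr_model with a 224x224 RGB input; the path, input size, and output key are illustrative, not the production service:

    # Hypothetical FastAPI inference endpoint for an image classifier.
    import io

    import numpy as np
    import tensorflow as tf
    from fastapi import FastAPI, File, UploadFile
    from PIL import Image

    app = FastAPI()
    model = tf.saved_model.load("adr_model")  # assumed model path
    infer = model.signatures["serving_default"]

    @app.post("/predict")
    async def predict(image: UploadFile = File(...)):
        # Decode the upload and resize to the model's assumed input size.
        raw = await image.read()
        img = Image.open(io.BytesIO(raw)).convert("RGB").resize((224, 224))
        batch = np.asarray(img, dtype=np.float32)[None] / 255.0
        outputs = infer(tf.constant(batch))
        # The output key depends on how the model was exported.
        scores = next(iter(outputs.values())).numpy().tolist()
        return {"scores": scores}

Inside the container this would typically run under an ASGI server, e.g. uvicorn main:app --host 0.0.0.0 --port 8000.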
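The surgical-overlay bullet above mentions mapping among several coordinate systems; one standard way to recover such a mapping from matched 3D points is the Kabsch algorithm. A minimal NumPy sketch with made-up marker positions (the actual rig calibration is not shown):

    # Recover the rigid transform (R, t) that maps camera-frame points
    # into the HoloLens frame from matched 3D correspondences.
    import numpy as np

    def rigid_transform(src, dst):
        """Solve for R, t such that dst ~= src @ R.T + t (Kabsch)."""
        src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_mean - R @ src_mean
        return R, t

    # Illustrative: the same three rig markers seen from both devices.
    camera_pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.0, 0.1, 1.2]])
    hololens_pts = np.array([[0.5, 0.2, 0.0], [0.6, 0.2, 0.0], [0.5, 0.3, 0.2]])
    R, t = rigid_transform(camera_pts, hololens_pts)
    mapped = camera_pts @ R.T + t  # camera points expressed in HoloLens frame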
Graduate Student Researcher, UC Santa Cruz [Wearable Tech, VR] Jan 2017 – Sep 2018
• Developed a social VR platform aimed at increasing focus and social interaction.
• Deployed wearable technologies in the Battlestar Galactica LARP (by Eleventh Hour Production) at Dexcon ’17.
ACADEMIC & RESEARCH PROJECTS
Messy Classifier [Unity3D, HoloLens, IoT, Computer Vision, Adafruit IO] May 2020 – Jul 2020
• Built a fun project that classifies whether my couch is messy or clean, with the larger goal of remotely interfacing a camera with any augmented reality headset and sending inference responses to any listening device.
• Controlled the camera's position and rotation through a virtual camera placed in augmented reality space, ran deep learning models on the camera feed, and sent the results back to the HoloLens via a novel notification system (inference sketch below).
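A minimal sketch of the remote inference loop described above, assuming a hypothetical Keras model file couch_classifier.h5, placeholder Adafruit IO credentials, and a hypothetical feed key couch-status that the HoloLens app subscribes to:

    # Capture frames, classify them, and publish the result to Adafruit IO.
    import time

    import cv2
    import numpy as np
    import tensorflow as tf
    from Adafruit_IO import Client

    aio = Client("AIO_USERNAME", "AIO_KEY")        # placeholder credentials
    model = tf.keras.models.load_model("couch_classifier.h5")  # assumed model
    cap = cv2.VideoCapture(0)                      # the remote camera

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Preprocess to the model's assumed 224x224 RGB input.
        rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
        batch = rgb.astype(np.float32)[None] / 255.0
        messy_prob = float(model.predict(batch, verbose=0)[0][0])  # sigmoid output
        # Any listener on the feed (e.g., the HoloLens app) reacts to this.
        aio.send_data("couch-status", "messy" if messy_prob > 0.5 else "clean")
        time.sleep(5)                              # poll every few seconds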
Ba-Ke-Neko (Ghost Cats) - Social VR [Unity3D, Photon Networking, VRTK] Jan 2017 – Aug 2018
• Explored social superpowers in VR by creating a multiplayer environment where users explore a new space using other people as their light sources. Programmed tracking of all user activity throughout each session, including scale changes.
• Funded by the Mozilla Foundation.
Neurocave [three.js, Unity, C#] May 2018 – Aug 2018
• Created a tool to visualize brain network graphs and how different regions of the brain are interconnected.
• Neurologists can upload their own data to visualize, interact with, and compare different brain models concurrently.
EDUCATION
Master of Science in Computer Software and Media Applications, University of California, Santa Cruz Aug 2018
• Courses: Data Visualization, Game Design, AI in Games, User Evaluation Technologies, Social Emotional Technologies, Computer Vision, Costumes as Game Controllers, Interactive Computer Graphics
PUBLICATIONS
• Designing Future Social Wearables with Live Action Role Play (LARP) Designers. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (p. 462). ACM.
• Visualising the Landscape of Human-Food Interaction Research. In Proceedings of the 2018 ACM Conference Companion Publication on Designing Interactive Systems (DIS 2018), June 9–13, Hong Kong.
• Making Sense of Human-Food Interaction. Bertran, F. A., Jhaveri, S. N., Lutz, R., Isbister, K., & Wilde, D. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 678). ACM, May 2019.
ACHIEVEMENTS
• Scene Sampler: Game showcased at IndieCade 2017 and Come Out and Play (NY).
• Best Collaboration Award: Received for collaborative efforts on the Instagrade project.