
Computer Vision, Robotics, Machine Learning, Mechatronics, Controls

Location: Brooklyn, NY
Posted: August 14, 2023


AJAYKUMAAR SIVACOUMARE

+1-917-***-**** | adyxut@r.postjobfree.com | LinkedIn | GitHub

Robotics graduate from NYU with hands-on experience developing and implementing computer vision and ML techniques. Passionate about creating sustainable robotics solutions and seeking full-time roles in robotics and computer vision.

EDUCATION

New York University, Brooklyn, NY May 2023

Master of Science, Mechatronics, Robotics and Automation GPA: 3.8
Relevant coursework: Reinforcement Learning, Robot Localization and Navigation, Adv. Mechatronics, Robot Perception

National Institute of Technology, Puducherry, India June 2021
Bachelor of Technology, Electrical and Electronics Engineering GPA: 3.6
Relevant coursework: Circuit Theory, Intelligent Techniques, Power Electronics, Control Systems

TECHNICAL SKILLS

Programming Languages: Python (6+ years), C/C++ (4 years), MATLAB (6 years)
Software: PyTorch/TensorFlow, OpenCV, ROS/ROS2, Linux, MuJoCo, Gazebo, RViz, Fusion 360, Autodesk Eagle, PLC, Flask/Django, Docker
Hardware: Raspberry Pi, Arduino, ESP, Propeller, UR16e robot arm
Additional Skills: Pose Estimation, Machine Learning, Reinforcement Learning, Computer Vision, Multi-camera Calibration, Sensor Fusion

EXPERIENCE

Research Assistant – Bipedal Gait, Applied Dynamics and Optimization Lab (Prof. Joo H. Kim), NYU, USA Feb 2022 – Present

● Calibrate a stereo camera using OpenCV and employ multi-view geometry for depth estimation and feature tracking of bipeds (see the sketch after this role's bullets)

● Design a Reinforcement Learning controller (Soft Actor-Critic) with PyTorch for bipedal stability using MuJoCo simulation

● Develop and optimize automation programs in MATLAB to generate stability regions, increasing productivity by 310%
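A minimal sketch (Python, OpenCV) of the stereo calibration and disparity-to-depth step referenced above; the board size, square size, and image paths are assumptions, not the lab's actual code.

import glob
import cv2
import numpy as np

BOARD = (9, 6)      # inner checkerboard corners (assumption)
SQUARE = 0.025      # square size in metres (assumption)

# 3D reference points of one checkerboard view
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, c_l = cv2.findChessboardCorners(gl, BOARD)
    ok_r, c_r = cv2.findChessboardCorners(gr, BOARD)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(c_l)
        right_pts.append(c_r)

# Per-camera intrinsics, then the stereo extrinsics (R, T) between the two views
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, gl.shape[::-1], None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, gr.shape[::-1], None, None)
_, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, gl.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)

# Rectification and block matching; the rectification remap step is omitted for brevity
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, gl.shape[::-1], R, T)
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(gl, gr).astype(np.float32) / 16.0
depth = cv2.reprojectImageTo3D(disparity, Q)[:, :, 2]     # Z in metres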

Production Intern, SOOD LLC, NYU, USA Jul 2023 – Present

● Research, brainstorm, and prototype ideas involving RFID sensors for advancements in wearable technology

● Design and 3D print CAD parts with SolidWorks/Fusion 360 to test a new product

Tech Team Lead, Third Eye, NYU, USA Jan 2022 – Dec 2022

● Led a team of three in developing a theft-detection program using computer vision and OpenCV

● Implemented and tested person re-identification algorithms with instance segmentation and encoder-decoders

● Shortlisted among the top 5 teams in the Innovention Challenge, securing a $1,000 funding prize

Developer Intern (ML-Ops), Tact Labs, Canada Mar 2021 – Jun 2021

● Created web applications with Flask backed by databases (SQLite, MongoDB) (see the sketch after this role's bullets)

● Built and deployed more than 7 machine learning models as web applications on Heroku within a one-month timeframe

● Coded new chatbots for questions on Canadian immigration, Tact Labs' repository, and AWS using Rasa
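A minimal Flask serving sketch of the kind referenced above, assuming a pickled scikit-learn-style model and placeholder route/file names; the deployed apps' actual structure is not reproduced here.

import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)

# Assumed artifact name; any pickled estimator with a predict() method works
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON such as {"features": [5.1, 3.5, 1.4, 0.2]}
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": str(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)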

Drone Development Engineer, EXOR Robotics, Puducherry, India Aug 2020 – Oct 2020

● Designed a custom PCB in Autodesk Eagle for the drone, integrating an IMU and an altitude sensor

● Engineered the ESPcopter, a drone that can be remotely controlled over Wi-Fi through a mobile app

PUBLICATIONS

AI quadruped robot for assisting the visually impaired

● Built spider-type quadruped robot prototype using Python with ROS, OpenCV, Raspberry Pi, Pi camera and ESP32

● Integrated an object detector (MobileNet-SSD), a face recognizer (Local Binary Pattern Histogram), and a Rasa chatbot server with three modes of HRI: voice control, gyroscope, and buttons (see the detection sketch after this list)

● Published at IECON 2021, Annual Conference of the IEEE Industrial Electronics Society
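A rough sketch of the MobileNet-SSD detection step via OpenCV's DNN module, assuming the standard Caffe deployment files; the robot's actual pipeline (ROS nodes, LBPH face recognition, Rasa server) is not shown.

import cv2

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")   # assumed file names
frame = cv2.imread("frame.jpg")
h, w = frame.shape[:2]

# MobileNet-SSD expects 300x300 inputs, mean-subtracted and scaled
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()    # shape (1, 1, N, 7): [_, class, confidence, x1, y1, x2, y2]

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        cls = int(detections[0, 0, i, 1])
        box = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
        print(cls, round(float(confidence), 2), box.tolist())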

SELECTED PROJECTS

Person Recognition without Facial Features (Python)

● Tested algorithms for Region of Interest extraction (semantic segmentation, Mask R-CNN) and feature description/matching (SIFT, ORB); see the sketch after this project's bullets

● Utilized neural-network-based encoder-decoder approaches for pattern and feature extraction

● Employed a U-Net architecture to extract pre-defined features for re-identification without racial bias
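An illustrative sketch of one ROI-extraction-plus-matching variant, assuming a torchvision Mask R-CNN (torchvision >= 0.13) for the person mask and ORB for description/matching; the project's exact models and training are not shown.

import cv2
import numpy as np
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def person_descriptors(path, score_thr=0.8):
    # Segment the highest-scoring person (COCO label 1) and describe it with ORB
    bgr = cv2.imread(path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    keep = (out["labels"] == 1) & (out["scores"] > score_thr)
    if not keep.any():
        return None
    mask = (out["masks"][keep][0, 0] > 0.5).numpy().astype(np.uint8)
    roi = cv2.bitwise_and(bgr, bgr, mask=mask)     # keep only the segmented person
    orb = cv2.ORB_create(1000)
    _, desc = orb.detectAndCompute(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), None)
    return desc

# Match descriptors between two views; more good matches -> more likely the same person
d1 = person_descriptors("view1.jpg")
d2 = person_descriptors("view2.jpg")
assert d1 is not None and d2 is not None
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
print("good matches:", len(matches))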

Pose Estimation of a Quadrotor (MATLAB)

● Implemented an Extended Kalman Filter and optical flow (KLT tracker) program to estimate the 3D pose and velocity of a quadrotor from video data, with 97% accuracy (see the EKF skeleton after this project's bullets)

● Fused inertial data and vision-based estimates using an Unscented Kalman Filter to capture the system's nonlinear dynamics, with RANSAC for outlier rejection
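A generic EKF predict/update skeleton (shown in Python rather than the project's MATLAB); f, h, and their Jacobians stand in for the quadrotor process and measurement models.

import numpy as np

def ekf_predict(x, P, f, F_jac, Q, u, dt):
    # Propagate the state and covariance through the (nonlinear) process model f
    x_pred = f(x, u, dt)
    F = F_jac(x, u, dt)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, h, H_jac, R):
    # Correct with a measurement z, e.g. optical-flow velocity or tracked features
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P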

Robot Perception Projects (Python)

● Utilized RANSAC and the Iterative Closest Point (ICP) algorithm in Open3D to align point clouds, reducing the mean-squared error from 63 to 7.5 (see the sketch after this project's bullets)

● Calibrated a camera using vanishing points and Zhang's method and applied it to tag-based augmented reality

● Implemented an autoencoder with TensorFlow for dimensionality reduction and visualized the embeddings using t-SNE
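A rough Open3D alignment sketch for the ICP step above; the file names, correspondence threshold, and identity initialization are placeholders (a RANSAC-based global registration would normally supply the initial transform).

import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("source.pcd")   # assumed file names
target = o3d.io.read_point_cloud("target.pcd")

threshold = 0.05      # max correspondence distance (assumption)
init = np.eye(4)      # placeholder for a RANSAC global-registration estimate

# Point-to-point ICP refinement; o3d.pipelines.registration also provides
# registration_ransac_based_on_feature_matching for the coarse initial alignment
result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

source.transform(result.transformation)
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)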

Robot Arm Control – UR16e (Python, UR console)

● Programmed the UR16e robot arm for trajectory following, pick-and-place, and tele-operation from a Jupyter Notebook (see the sketch after this project's bullets)

● Modified the code to perform repetitive tasks in a human-in-the-loop setup while enforcing safety measures
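A heavily simplified sketch of commanding a UR arm by streaming URScript over the secondary client interface; the IP address and joint targets are placeholders, and the actual project used the UR console plus a Jupyter Notebook workflow.

import socket
import time

ROBOT_IP = "192.168.1.10"   # placeholder
PORT = 30002                # UR secondary client interface accepts raw URScript

home = [0.0, -1.57, 1.57, -1.57, -1.57, 0.0]   # example joint angles in radians

with socket.create_connection((ROBOT_IP, PORT), timeout=5) as s:
    # movej(q, a, v) is a standard URScript joint move with acceleration/velocity limits
    s.send(f"movej({home}, a=0.5, v=0.25)\n".encode())
    time.sleep(5)   # crude wait; a real setup would monitor the robot state instead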


