Mohan Manchala
****@**.****.***, 925-***-****, https://www.linkedin.com/in/mohanmanchala
OBJECTIVE
Recent graduate with a strong background in machine learning, data mining, and programming, seeking a full-time position as a Software Development Engineer.
EDUCATION
Master of Science, Computer Science
School of Computing, University of Utah, Salt Lake City, UT May 2016
GPA: 3.78/4.00
Bachelor of Science + Master of Science, Electrical Engineering
Department of Electrical Engineering, Indian Institute of Technology, Hyderabad July 2014
GPA: 7.5/10.00
RELEVANT COURSES
Machine Learning, Natural Language Processing, Data Visualization, Advanced Image Processing, 3D Computer Vision, Advanced Algorithms, Image and Video Processing, Data Structures, Multi-Agent Systems, Computer Networks, Data Mining.
SKILLS
Programming Languages: C/C++, Python, Java, MATLAB, R, JavaScript, HTML, CSS.
Operating Systems: Linux, Windows, Mac.
Tools/Libraries: Scikit, OpenCV, NLTK, Stanford NLP, SQL, D3.js, CUDA, OpenCL, OpenGL.
Others: MS Visual Studio, Code::Blocks, MS Office, Blender, Photoshop.
EXPERIENCE
Research Assistant, Utah Center for Advanced Imaging Research, University of Utah
Classification of Melanoma in Dermoscopic Skin Images August 2015 – present
Developed a hybrid skin lesion segmentation method and extracted features based on the ABCD rule and GLCM.
Trained an artificial neural network classifier, achieving up to 90% accuracy on test data.
Implemented a deep learning based approach using pre-trained Convolutional Neural Networks (CNNs).
Employed GPUs and MATLAB's Parallel Computing Toolbox to extract deep feature responses from the fully connected layers of the AlexNet CNN on a large skin image dataset, and trained an SVM classifier.
Coordinated with dermatologists at the University of Utah to obtain data and assess the developed classification system.
SELECTED PROJECTS
Automated Recognition of Organ Sub-components for Segmentation Seeding Jan 2015 – May 2015
Developed a machine learning based technique to detect organ sub-components for automatic segmentation seeding.
Worked on volumetric CT images and trained Haar cascade 2D AdaBoost detectors for the organ sub-components.
Implemented a majority voting technique to eliminate detected false positives and accurately localize the organ position.
Achieved 95% accuracy on training data and 90% accuracy on test data for detecting the pulmonary trunk bifurcation.
Mumbai Pothole Visualization Sep 2015 – Dec 2015
Developed an interactive visualization of potholes and associated data for the city of Mumbai in JavaScript.
Visualized pothole locations on Google Maps using the Google Maps API.
Implemented a Reingold-Tilford tree using D3.js to navigate the map and search for potholes in various areas of Mumbai.
Implemented a stacked area chart with tooltip and brush to compare the number of potholes across selected areas and time ranges.
Question Answering (QA) System for Reading Comprehension Sep 2015 – Dec 2015
Developed a QA system for reading comprehension in Python as part of an NLP course project.
Extracted probable answer sentences from the passage using a modified word-matching algorithm.
Implemented separate rules for each question type (e.g., Who, What) to trim the answers from the extracted sentences.
Employed NLP libraries and tools such as NLTK and the Stanford NLP toolkit, including coreference resolution.
Multi-Robot SLAM Using Salient Landmarks Sep 2014 – Dec 2014
Implemented an Extended Kalman Filter based multi-robot SLAM system in MATLAB.
Solved the correspondence problem using unique salient landmarks, which enable the maps of different robots to be cooperatively fused by transforming them into a global coordinate frame via an affine transformation.
Matched redundant landmarks using a variant of the nearest-neighbor algorithm and merged them by covariance intersection.
Bachelor's Thesis: Landmark-Based Navigation in Next-Generation Vehicles Jan 2014 – May 2014
Developed a landmark-based navigation system that the user can interact with naturally through hand gestures.
Segmented the hand using skin color information in the YCrCb color space and tracked it with the CAMShift tracking method.
Defined gestures from hand features such as finger count, palm center, and hand motion, and trained a decision tree classifier.
Presented a saliency-based landmark identification technique for video frames using global contrast saliency.
Focused on identifying building landmarks and matched them using SURF features to validate the navigation.