
Data Software Engineer

Location:
Vijayawada, AP, India
Posted:
March 28, 2018


Resume:

Sai Dilip Reddy Kiralam

Mobile: 784-***-**** Email: *****************@*****.***

PROFESSIONAL SUMMARY:

•3 years of experience at Aadhya Analytics Pvt. Ltd. as a Big Data Application Developer.

•0.8+ years of experience at the Research and Rehabilitation Center of Usharama.

•Good programming skills in Java, C#, Python, and shell scripting.

•Worked on Big Data technologies such as Storm, Spark, Kafka, Elasticsearch, and Pig.

•Knowledge of and hands-on experience with RDBMS databases such as PostgreSQL.

•Domain knowledge of web technologies and data formats such as XML and JSON.

•Good knowledge of the Tomcat web server.

•Good knowledge of Linux/UNIX operating environments and Shell/Bash scripting.

•Domain knowledge of computer networks, IPv4 address classes, and subnetting.

Certifications:

•Machine Learning by Stanford University on Coursera. Certificate earned on October 2, 2016.

•Big Data Fundamentals on Big Data University (IBM). Certificate earned in August 2015.

•Certified by the IBM Career Education Program in the course IBM CE Big Data & Analytics with IBM InfoSphere BigInsights.

Work Experience:

Aadhya Analytics Pvt. Ltd., Gannavaram, Andhra Pradesh (June 2015 - Present)

Big Data Application Developer

Software Engineer

LinkedIn URL: https://www.linkedin.com/in/dilipbobby/

Academic Qualification:

2017 Master of Engineering in Computer Science & Engineering with 70%

from DJR College of Engineering & Technology (affiliated to JNTUK), AP.

2015 Bachelor of Engineering in Electronics & Computer Engineering with 70%

from Usha Rama College of Engineering & Technology (affiliated to JNTUK), AP.

2011 Intermediate from Andhra Loyola College, Vijayawada, Andhra Pradesh.

Secured 63.1% (Higher Secondary)

2009 SSC with an aggregate of 78.3% from St. Joseph's High School, Gunadala,

Vijayawada, AP.

Technical Skill Set:

OS Platforms: UNIX, Linux (Ubuntu, CentOS, SUSE), Windows

Databases: MySQL, PostgreSQL

Programming: C, Java, C#, Python 3

Application Server: Tomcat

Web Tools: HTML, XML, JSON

Big Data Ecosystem: Hadoop, MapReduce, HDFS, Hive, Pig, Sqoop, Kafka, Storm, Machine Learning, Spark

INDUSTRIAL PROJECTS (June 2015 - Present)

Title: Design and Development of Machine Learning Services using Neural Networks

Description:

Developed neural network models for services such as sentiment analysis, emotion detection, and relation extraction using the TensorFlow and Keras Python packages, and built API services on top of the models using the Flask framework.

Responsibilities:

•Cleaned and filtered unstructured data.

•Prepared neural network models on AWS GPUs.

•Worked with different neural network architectures such as feedforward networks, RNNs, and LSTMs.

Environment: TensorFlow, Keras, pandas, Flask, Jupyter Notebook.
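The resume does not include the model or API code itself; the following is a minimal, illustrative sketch of the pattern described above, assuming a toy Keras text classifier wrapped in a Flask endpoint. The endpoint name, vocabulary size, and training samples are invented for illustration.

# Minimal sketch: a toy Keras sentiment classifier served through a Flask API.
# The endpoint name, vocabulary size, and training data are illustrative only.
import numpy as np
from flask import Flask, jsonify, request
from tensorflow.keras.layers import Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

texts = ["great product", "awful service", "loved it", "terrible experience"]
labels = np.array([1, 0, 1, 0])  # 1 = positive, 0 = negative

tokenizer = Tokenizer(num_words=1000)
tokenizer.fit_on_texts(texts)
x = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=10)

model = Sequential([
    Embedding(input_dim=1000, output_dim=16),
    GlobalAveragePooling1D(),
    Dense(16, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=10, verbose=0)

app = Flask(__name__)

@app.route("/sentiment", methods=["POST"])
def sentiment():
    # Tokenize the incoming text and return the model's positive-class probability.
    text = request.get_json()["text"]
    seq = pad_sequences(tokenizer.texts_to_sequences([text]), maxlen=10)
    score = float(model.predict(seq, verbose=0)[0][0])
    return jsonify({"positive_probability": score})

if __name__ == "__main__":
    app.run()

A client would POST JSON such as {"text": "loved it"} to /sentiment and receive a probability in the response.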

Title: Development of Machine Learning Services for Different Scenarios

Description:

Developed machine learning models for services such as sentiment analysis, spam detection, and readability assessment using Java machine learning frameworks, to extract hidden information from social media data and various types of text-based documents.

Responsibilities:

•Analysed unstructured data.

•Filtered the data and prepared the machine learning models.

•Worked with different machine learning classification and regression algorithms.

Environment: Java machine learning frameworks (OpenNLP, GATE).
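As an illustration of the classification workflow described above, here is a minimal sketch in Python with scikit-learn; the project itself used Java frameworks (OpenNLP, GATE), and the sample messages and labels below are invented.

# Illustrative sketch of a text-classification workflow (spam detection).
# The project used Java frameworks (OpenNLP, GATE); this Python/scikit-learn
# version only demonstrates the general approach. Sample data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now", "Meeting moved to 3pm",
    "Claim your reward today", "Lunch tomorrow?",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# TF-IDF features feeding a logistic-regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(messages, labels)

print(clf.predict(["Free reward, claim now"]))  # classify a new message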

Title: ETL of JSON to Structured Data using Apache Storm

Description:

Processed JSON data from different social media sources (Twitter, Facebook, news media) using Apache Storm, performing ETL jobs such as converting the data into structured form, aggregating and correlating it, and storing it in PostgreSQL.

Environment: Kafka, Storm, PostgreSQL.
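A minimal sketch of the ETL idea follows, written in Python with kafka-python and psycopg2 rather than an Apache Storm topology; the topic name, table schema, and field names are assumptions for illustration.

# Minimal sketch of the ETL idea: consume JSON events from Kafka, flatten the
# fields of interest, and store them as rows in PostgreSQL. The real pipeline
# used an Apache Storm topology; topic, table, and field names here are assumed.
import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "social_media_posts",                      # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

conn = psycopg2.connect(dbname="analytics", user="etl", password="secret")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS posts (
        source TEXT, author TEXT, created_at TIMESTAMP, text TEXT
    )
""")
conn.commit()

for message in consumer:
    event = message.value
    # Flatten the nested JSON into a structured row.
    cur.execute(
        "INSERT INTO posts (source, author, created_at, text) VALUES (%s, %s, %s, %s)",
        (event.get("source"), event.get("user", {}).get("name"),
         event.get("created_at"), event.get("text")),
    )
    conn.commit()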

Title: Design of a Recommendation Engine for Social Networking Content Based on Readability Index

Description:

Based on a person's posts, writing standard, and the keywords used in the content, developed a recommendation model that predicts similar types of posts, or recommends similar people, that a particular user may be interested in.

Responsibilities:

•Analysed unstructured data.

•Filtered the data and computed the readability index of each of a person's posts.

•Prepared recommendation models using collaborative filtering (CF) and SVD algorithms.

Environment: Python, pandas, scikit-learn.
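The following is a minimal sketch of the recommendation idea: score posts with a simple readability proxy, build per-user features, reduce them with SVD, and recommend across the most similar users. The readability formula, feature choices, and toy data below are assumptions; the project used CF and SVD on real post content.

# Minimal sketch: readability-based user features, SVD reduction, and
# similarity-based recommendation. All data and the readability proxy are
# invented for illustration.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def readability_proxy(text):
    # Crude proxy: average words per sentence plus average word length.
    sentences = [s for s in text.split(".") if s.strip()]
    words = text.split()
    return len(words) / max(len(sentences), 1) + sum(len(w) for w in words) / len(words)

posts_by_user = {
    "alice": ["Short post. Easy words here.", "Another simple note."],
    "bob": ["A considerably more elaborate composition with lengthier vocabulary."],
    "carol": ["Brief text. Plain style.", "Tiny update."],
}

users = list(posts_by_user)
# One row per user: mean readability, post count, and total words (toy features).
features = np.array([
    [np.mean([readability_proxy(p) for p in posts]),
     len(posts),
     sum(len(p.split()) for p in posts)]
    for posts in posts_by_user.values()
])

svd = TruncatedSVD(n_components=2)      # latent factors
latent = svd.fit_transform(features)
similarity = cosine_similarity(latent)  # user-to-user similarity

for i, user in enumerate(users):
    scores = similarity[i].copy()
    scores[i] = -np.inf                 # exclude the user themself
    print(f"Recommend posts from {users[int(np.argmax(scores))]} to {user}")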

Languages Known: English, Telugu, Hindi.

Place: SIGNATURE:

Date:


