Snehal Thakur
+91-997******* E-Mail: acxm2w@r.postjobfree.com
Professional Summary:
1.8+ years of experience with Big Data Hadoop and its components, including HDFS, Hive, Sqoop, MapReduce, Pig and HBase.
Good working knowledge of Hive and Sqoop.
Involved in writing Hive scripts to reduce job execution time.
Good team player with strong interpersonal skills; a quick learner.
Able to work on tight schedules and efficient in meeting deadlines.
Always eager to acquire valuable new skills and improve my knowledge.
Knowledge of different file formats such as ORC, Avro, JSON and Parquet.
Good knowledge of Core Java.
Knowledge of Flume and NoSQL databases.
Knowledge of Spark and Scala.
Professional Experience:
Currently working with Alivetech Services, Nagpur, as a Software Engineer since March 2015.
Qualifications:
M.Tech with 73% from Wainganga College of Engineering, RTMNU, March 2015.
B.Tech with 68% from Smt. Radhikatai Pandav College of Engineering, RTMNU, June 2012.
Diploma with 80% from Maharashtra State Board of Technical Education, Maharashtra, April 2009.
S.S.C with 75% from School of Secondary Education, Maharashtra, March 2006.
Technical Skills:
Big Data Ecosystem : HDFS, Hive, Sqoop, Pig, HBase, MapReduce
Programming Languages : Java, C, C++
Web Technologies : HTML
Database : Oracle, MySQL
IDE : Eclipse, NetBeans
Servers : Apache Tomcat
Configuration Tool : Subversion (SVN), Git
Operating System : Windows, Linux
Project Details:
Project Name : Web Intelligence
Environment : Hadoop, Pig, Hive, Sqoop, Java, UNIX, MySQL, Spring MVC
Duration : June 2015 to date
Role : Hadoop Developer
Description:
The Web Intelligence project comprises a technical architecture for data analysis and seamless data delivery to the end user.
The current BI project was migrated to new-generation technology.
This migration delivers maximum improvement with minimal support effort.
It also enables ad hoc report generation, which is not possible in the current GUI.
Roles and Responsibilities:
Extracted crawl-data flat files generated by various retailers into HDFS for further processing.
Extracted the data from Teradata into HDFS using Sqoop.
Created Hive external tables to store the processed results in a tabular format.
Developed Hive scripts for end user / analyst requirements to perform ad hoc analysis.
Very good understanding of partitioning and bucketing concepts in Hive; designed both managed and external Hive tables to optimize performance.
Moved all log/text files generated by various products into HDFS location.
Experience in handling SequenceFile, RCFile, Avro and HAR file formats.
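The external-table and partitioning work above can be sketched in HiveQL. This is an illustrative sketch only; the table, column, and HDFS path names are hypothetical, not taken from the project:

```sql
-- Hypothetical external table over crawl data already landed in HDFS,
-- partitioned by load date so ad hoc queries scan only the relevant days.
CREATE EXTERNAL TABLE IF NOT EXISTS retailer_crawl (
  retailer_id STRING,
  product_id  STRING,
  price       DECIMAL(10,2),
  crawled_at  TIMESTAMP
)
PARTITIONED BY (load_date STRING)
STORED AS ORC
LOCATION '/data/web_intelligence/retailer_crawl';

-- Register a newly loaded partition, e.g. after a Sqoop import or file drop.
ALTER TABLE retailer_crawl ADD IF NOT EXISTS
  PARTITION (load_date = '2016-01-15');
```

Because the table is external, dropping it removes only the metadata and leaves the underlying HDFS files intact, which suits data produced by upstream jobs such as Sqoop imports.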
Personal Profile:
Date of Birth : 15-Jan-1991
Language Proficiency: English, Hindi, Marathi
Personal Strength : Diligent, Good Team Player
Marital Status : Single