SIDDHARTH SHAH
Union City, CA ***** +1-408-***-**** **********@*****.*** https://www.linkedin.com/in/siddharth-shah-10921582
OBJECTIVE
To secure a position where I can contribute my skills and abilities to the growth of the organization while building my professional career.
EDUCATION
California State University, East Bay: Master of Science, Computer Science
GPA: 3.713/4.0
MARCH-2016
CHARUSAT University, India: Bachelor of Technology, Information Technology
GPA: 8.11/10.0
MAY-2014
RELEVANT COURSE WORK
Analysis of Algorithms
Distributed Systems
Compiler Design
Computer Architecture
Cryptography & Network Security
Artificial Intelligence
Systems Programming
Operating Systems Design
Data Warehouse & Data Mining
Parallel Computing
Advanced Computing
Data Structures
TECHNICAL EXPERTISE
Languages: C, C#, Core Java, ASP.NET, PHP, JavaScript, Assembly Language (MIPS), Shell Scripting, CSS, HTML
Database: Oracle SQL, MySQL, NoSQL
Operating Systems: Windows, Linux/Unix, Macintosh
Tools: Microsoft Visual Studio, Eclipse, VMware Workstation, Cisco Packet Tracer, LogicWorks, NetBeans, Dreamweaver
Technology: Hadoop, MapReduce, HDFS, YARN, HBase, Hive, Pig, Sqoop, ZooKeeper, Amazon Web Services, Windows Azure
PROJECTS
Apache Hadoop Ecosystem implementation 03/2016 - 06/2016
Configured single-node and multi-node Hadoop clusters on Ubuntu 12.04 using VMware Workstation
Implemented Word Count and searching/sorting programs using MapReduce
Used the HBase NoSQL database to handle unstructured data, with HDFS as the underlying storage
Designed a data warehouse and created partitioned tables using Hive
Created UDFs and developed HiveQL queries
Developed Pig scripts to compute the highest/lowest temperature of each year and load the output files into HDFS
Imported data from an RDBMS to HDFS, HBase, and Hive, and exported data from HDFS and Hive back to the RDBMS using Sqoop and the MySQL Connector
Provided cluster coordination services through ZooKeeper
Added and removed nodes from the Hadoop cluster without affecting running jobs or data
Scheduled file-system checks using fsck and ran the balancer to keep load uniform across the nodes
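The Word Count job above follows the standard MapReduce pattern: a map step that emits (word, 1) pairs and a reduce step that sums counts per key. As a minimal illustration, the same logic can be sketched in plain Java without the Hadoop Mapper/Reducer APIs or runtime (the class and method names here are illustrative, not from the original project):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Plain-Java sketch of MapReduce word-count logic: the "map" step
// tokenizes the input, and the "reduce" step sums the count per word.
public class WordCountSketch {
    static Map<String, Integer> wordCount(String text) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String token : text.toLowerCase().split("\\s+")) {
            if (token.isEmpty()) continue;        // skip blank tokens
            counts.merge(token, 1, Integer::sum); // reduce: sum per key
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = wordCount("to be or not to be");
        System.out.println(counts.get("to")); // prints 2
        System.out.println(counts.get("or")); // prints 1
    }
}
```

In the actual Hadoop version the map and reduce steps run on separate cluster nodes, with HDFS providing the shared storage.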
MapR Hadoop Cluster Administration 01/2016 – 03/2016
Installed a multi-node virtual cluster on RHEL 6.5 using the EC2 and S3 services of Amazon Web Services (AWS)
Verified and tested the cluster using RWSpeedTest, TeraGen, and TeraSort
Configured cluster storage and created volumes using the MapR Control System (MCS)
Ingested data from the local file system into MapR-FS using the NFS protocol
Monitored and maintained the cluster using the MCS console
32-bit CPU Architecture 01/2015 - 03/2015
Designed a 32-bit CPU in LogicWorks using encoders, decoders, buffers, shifters, registers, and a clock
Main components were the Program Counter, ALU, Data Memory, Register File, Program Memory, and Immediate Register, all connected through the control block
Used the Tower of Hanoi puzzle to benchmark the CPU's performance
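Tower of Hanoi is a natural CPU stress test because moving n disks requires 2^n - 1 moves, so the workload grows exponentially with a single parameter. The assembly-level version run on the CPU is not shown in the original; a minimal Java sketch of the recursion looks like:

```java
// Recursive Tower of Hanoi: returns the number of moves needed to
// transfer n disks from peg 'from' to peg 'to' using peg 'via'.
// The move count is always 2^n - 1.
public class Hanoi {
    static long solve(int n, char from, char to, char via) {
        if (n == 0) return 0;
        long moves = solve(n - 1, from, via, to); // park n-1 disks aside
        moves += 1;                               // move the largest disk
        moves += solve(n - 1, via, to, from);     // restack n-1 disks
        return moves;
    }

    public static void main(String[] args) {
        System.out.println(solve(3, 'A', 'C', 'B'));  // prints 7
        System.out.println(solve(10, 'A', 'C', 'B')); // prints 1023
    }
}
```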
Cloud Configuration 06/2013 – 11/2013
Configured a private cloud on VMware Workstation using the free, open-source Eucalyptus software
The cloud consisted of five components: Cloud Controller, Cluster Controller, Node Controller, Walrus Controller, and Storage Controller, all interconnected over a LAN
The cloud provides an interface for users to self-service provision and configure compute, network, and storage resources
EXPERIENCE
.NET DEVELOPER INTERN SUN SOFTWARE PVT LTD. DEC 2013 – MAY 2014
Worked in a team to develop a B2B cloud application for a client
The application was built on a three-tier architecture
Developed the front end in C# ASP.NET and the back end on SQL Server 2014
Hosted the application on Windows Azure to take advantage of cloud benefits