
Ashish Gupta

ac8efi@r.postjobfree.com

408-***-****

linkedin.com/in/ashish-gupta-a633a579

Summary

• 8 years of experience in the IT industry, including 5 years in Big Data technologies

• Sound knowledge of the Hadoop ecosystem, including Spark, MapReduce, HDFS, Sqoop, Pig, and Flume

• Proficient in developing Apache Spark jobs in Scala

• Experience as a backend developer in the Scala programming language

• Experience with current cloud technologies such as Docker and Kubernetes

• Sound experience in writing MapReduce jobs and customizing their various components

• Well versed in test-driven MapReduce development using the MRUnit framework

• Completed official Cloudera training on Hadoop and Spark technologies

• More than 4 years of experience in Java application development and 2 years of Scala backend development

• Familiar with SOAP web services frameworks

• Sound understanding of and hands-on experience with object-oriented and functional programming concepts and their implementation

Skills

Big Data: Apache Spark, Hadoop, HDFS, MapReduce, Sqoop, Pig, Flume

Programming Languages: Scala, Java

Scripting Languages: XML, HTML, Shell scripting

Software / Tools: Docker, Kubernetes, Git, Jira, SVN, Jenkins, Maven, IntelliJ

Education

Ajay Kumar Garg Engineering College 2006 - 2010

Uttar Pradesh, India

B.Tech., Computer Science, A

Experience

Cloud Engineer Jan 2018 – Present

Apple Inc. 1 year 1 month +

Environment: Docker, Kubernetes, Scala, GoLang, UNIX Shell Scripting

Project Summary:

“GBI-OPS” is one of the infrastructure and operations support teams at Apple; we provide the hardware, the variety of software/tools, and the support needed by other teams. For many years, applications have been running on bare metal and VMs, but with the advent of cloud infrastructure and technologies, the whole infrastructure picture has changed. As a team, we are performing the roles and responsibilities below to make the transition to this new world smoother.

Responsibilities:

• Migrating existing applications to the cloud by packaging them as Docker images

• Creating auto-scalable, load-balanced, production-ready deployments of the migrated applications

• Deploying common infrastructure support tools (such as Prometheus) that are shared across all apps, for centralized usage such as metrics collection

• Developed a scheduling solution using a NATS priority queue and Scala for the ETL jobs migrated from VMs to the cloud (an illustrative sketch follows this list)

• Researched, set up, and customized helper tools (such as Istio, Ambassador, Envoy, and Consul) in our Kubernetes cloud environment for easy, standard, and extensible usage
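
The resume does not include the scheduler code itself; purely as an illustration of the approach named in the scheduling bullet above, here is a minimal Scala sketch of a worker that receives ETL job triggers over NATS using the jnats client. The server URL, subject name, queue group, and the runEtlJob hook are hypothetical placeholders, and the real implementation (including its priority handling) is not shown.

```scala
import java.nio.charset.StandardCharsets
import java.util.concurrent.CountDownLatch

import io.nats.client.{Connection, Dispatcher, Message, MessageHandler, Nats}

// Minimal sketch: a worker that pulls ETL job triggers from a NATS subject.
object EtlJobWorker {
  def main(args: Array[String]): Unit = {
    // Placeholder server URL for illustration only.
    val nc: Connection = Nats.connect("nats://localhost:4222")

    val handler = new MessageHandler {
      override def onMessage(msg: Message): Unit = {
        val jobId = new String(msg.getData, StandardCharsets.UTF_8)
        println(s"Received ETL job trigger: $jobId")
        // runEtlJob(jobId)  // hypothetical hook into the actual ETL logic
      }
    }

    // Subscribing with a queue group spreads job triggers across worker instances.
    val dispatcher: Dispatcher = nc.createDispatcher(handler)
    dispatcher.subscribe("etl.jobs", "etl-workers")

    // Keep the worker alive; a real service would have proper lifecycle handling.
    new CountDownLatch(1).await()
  }
}
```

One way to approximate priority handling in a setup like this could be to use separate subjects per priority level and drain the higher-priority subject first; the actual design used in the project is not described in the resume.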

Big Data Developer Feb 2016 – Jan 2018

Apple Inc. 1 year 11 months

Environment: Pig Latin 0.8/0.11, Java 1.6, Cloudera CDH3/4, UNIX Shell Scripting, Hadoop ecosystem, MapReduce, Spark, Scala

Project Summary:

“iCloud-Reporting” is a reporting tool that captures iCloud metadata and processes it to generate output that can be used by the business. It works on the principle of ETL (Extract, Transform, Load). Metadata for iCloud users of different applications from all over the world comes into the Hadoop clusters and is processed by scheduled MapReduce jobs, Pig jobs, and some migrated Spark jobs at different frequencies (hourly, daily, and monthly) to generate KPIs (Key Performance Indicators), which are then loaded into a database. These KPIs are afterwards shown on a dashboard as reports for the business team.

Responsibilities:

• Writing Pig scripts and MapReduce jobs and performing validations

• Developed Spark jobs to generate business KPIs (a simplified sketch of this style of job follows this list)

• Loading data files to and from the HDFS cluster

• Hands-on experience maintaining jobs in the production environment

• Job development and production deployments

• Stats generation as per the client’s requests

• History generation for previous years' data
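
The actual iCloud-Reporting jobs are not part of this resume; as a simplified, hypothetical sketch of the kind of Spark KPI job referenced above, the following Scala snippet aggregates hourly event counts per application. The input path, the column names (app, event_time), and the output location are invented for illustration, and the modern DataFrame API shown here stands in for whichever Spark version the project actually ran on.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Sketch of an hourly KPI aggregation over raw metadata events.
object HourlyUsageKpi {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("hourly-usage-kpi").getOrCreate()

    // Hypothetical input: one row per metadata event, with an app name and timestamp.
    val events = spark.read.parquet("/data/icloud/metadata/")  // placeholder path

    // KPI: number of events per application per hour.
    val hourlyCounts = events
      .withColumn("hour", date_trunc("hour", col("event_time")))
      .groupBy("app", "hour")
      .agg(count(lit(1)).as("event_count"))

    // In the real pipeline these KPIs would be loaded into a database for the dashboard;
    // here the sketch simply writes files.
    hourlyCounts.write.mode("overwrite").parquet("/data/icloud/kpi/hourly_counts/")

    spark.stop()
  }
}
```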

Big Data Developer Oct 2015 – Feb 2016

Infogain Corp 5 months

Environment: Cloudera 4.7.1, Hadoop, Spark, Scala

Project Summary:

“Customer Experience Centre” is an Android and iOS app for improving the customer experience and troubleshooting of registered devices. We generated various business KPIs from the CSV-format log data produced by the application, to help the business by giving them deeper insight into their customers' behaviour.

Responsibilities:

• Configured the cluster and scheduled the jobs

• Loading data files to and from the HDFS cluster

• Developed Spark Jobs to generate business KPIs

• Generated weekly and monthly KPI data according to the requirements (an illustrative sketch follows this list)
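
As with the previous project, no original code appears in the resume; the following hedged Scala sketch only illustrates the general shape of a Spark job that reads CSV application logs and produces a weekly KPI. The path and column names (device_model, event_time) are hypothetical, and the csv reader shown assumes a reasonably recent Spark release rather than the Cloudera 4.7.1-era stack listed above.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Sketch of a weekly KPI derived from CSV application logs.
object WeeklyDeviceKpi {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("weekly-device-kpi").getOrCreate()

    // Hypothetical CSV logs exported by the mobile app, one row per event.
    val logs = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/data/cec/app_logs/")  // placeholder path

    // KPI: number of events per device model per week.
    val weekly = logs
      .withColumn("week", date_trunc("week", to_timestamp(col("event_time"))))
      .groupBy("device_model", "week")
      .agg(count(lit(1)).as("event_count"))

    weekly.write.mode("overwrite").parquet("/data/cec/kpi/weekly/")
    spark.stop()
  }
}
```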

Big Data Developer Oct 2015 – Feb 2016

Infogain Corp 5 months

Environment: Cloudera 4.7.1, Hadoop, Spark Core, Spark MLlib, Spark SQL, Apache Zeppelin, Scala

Project Summary:

“Call Centre Voice Sentiment Analysis” was a POC we did to build a proprietary tool that can automatically listen to, understand, and rate customer-agent interactions based on the voice recordings (in WAV format) saved by call centres for training and evaluation purposes. This rating can then help the business understand and improve their customer satisfaction index.

Responsibilities:

• Loaded the audio WAV files onto the HDFS cluster

• Developed a Spark application to read the WAV files, convert them to text in parallel, and then perform sentiment analysis on that text

• Trained and saved a Spark MLlib classification model to detect the appropriate sentiment (an illustrative sketch of this step follows this list)

• Stored the sentiment output as Parquet files and loaded it into the Apache Zeppelin UI using Spark SQL
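
The trained model itself is naturally not part of the resume; purely as a hedged illustration of the classification step described above, the following Scala sketch fits a simple text-classification pipeline with the Spark ML API and writes scored output as Parquet. The input path, column names (text, label), and the choice of logistic regression are assumptions for the sketch; the speech-to-text step and the exact algorithm used in the POC are not shown.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

// Sketch of a text sentiment classifier trained on labelled call transcripts.
object SentimentModelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sentiment-model").getOrCreate()

    // Hypothetical labelled transcripts: call text plus a 0/1 sentiment label.
    val labeled = spark.read.parquet("/data/calls/labeled_transcripts/")  // placeholder path

    // Simple pipeline: tokenize, hash to term frequencies, fit a classifier.
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
    val lr = new LogisticRegression().setLabelCol("label").setFeaturesCol("features")
    val model = new Pipeline().setStages(Array(tokenizer, hashingTF, lr)).fit(labeled)

    // Persist the fitted model and the scored output as Parquet for downstream Spark SQL / Zeppelin use.
    model.write.overwrite().save("/models/call_sentiment")
    model.transform(labeled)
      .select("text", "prediction")
      .write.mode("overwrite")
      .parquet("/data/calls/scored/")

    spark.stop()
  }
}
```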

Hadoop Developer Apr 2014 – Sep 2015

Elisa Oyj (Finland) 1 year 6 months

Environment: Java, Hadoop, MapReduce, HDFS, Sqoop, UNIX, Git, Jenkins, JIRA, Pig

Project Summary:

“Viihde” was an IPTV-based service with various online portals for sales and support. The data coming from the various online portals, mobile apps, and customer care centres was stored in HDFS and then extracted and refined. The useful data was then used for OLAP purposes by the business analysts.

Responsibilities:

• Developed MapReduce programs for various business requirements (analysis of web logs, DTH logs, and CDR data)

• Analyzed various web logs to generate business KPI results

• Wrote MapReduce programs to identify patterns and KPIs (most viewed bandwidths, most used customer portals, best campaigns by usage)

• Generated KPI data monthly, weekly, and annually according to need

• Used Sqoop to transfer data between Hadoop and relational databases

• Created Pig scripts that helped market analysts spot emerging trends by comparing fresh data with EDW reference tables and historical metrics (a Spark-based sketch of this comparison idea follows this list)
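
The Pig scripts themselves are not included in the resume; purely to illustrate the comparison idea in the last bullet, here is a hedged Spark/Scala analogue (not the original Pig Latin) that joins hypothetical fresh usage data against an EDW baseline table. All paths, column names (portal, visits, baseline_visits), and the threshold are invented for the sketch.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Spark analogue of the Pig comparison: flag portals whose fresh usage outpaces the EDW baseline.
object TrendComparisonSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("trend-comparison").getOrCreate()

    // Hypothetical inputs: fresh daily usage data and an EDW reference table of historical baselines.
    val fresh = spark.read.parquet("/data/viihde/usage/daily/")            // placeholder path
    val reference = spark.read.parquet("/data/edw/portal_usage_baseline/") // placeholder path

    // Compare current usage per portal against the historical baseline to spot emerging trends.
    val trends = fresh
      .groupBy("portal")
      .agg(sum("visits").as("current_visits"))
      .join(reference, Seq("portal"))
      .withColumn("lift", col("current_visits") / col("baseline_visits"))
      .filter(col("lift") > 1.2)  // arbitrary illustrative threshold

    trends.write.mode("overwrite").parquet("/data/viihde/trends/")
    spark.stop()
  }
}
```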

Scala Developer Feb 2011 – Mar 2014

Elisa Oyj (Finland) 3 years 2 months

Environment: Apache Wicket, Multi-threading, Google Guice, Scala, UNIX, svn, Jenkins, JIRA, Java, Java Spring

Project Summary:

“Billing-Integration System” was a middle layer between the billing module and the CRM. As a team member responsible for development and support of the billing integration applications, I enhanced and maintained them.

Responsibilities:

• Worked as a Scala developer handling various service requests raised against bugs found in the application code

• Developed new application logic in Scala for the backend of the Wicket UI

• Developed new portals and application functionality to support enhancement requirements

Java Developer Feb 2011 – Mar 2014

Elisa Oyj (Finland) 3 years 2 months

Environment: Java, SOAP web services, UNIX

Project Summary

“MobileInternetBillingPlatform” was used to bill the mobile-type payments made by end users through the service provider's payment gateway. I maintained and enhanced the already functional in-house Mobile Internet Billing application.

Responsibilities:

• Design and development of new interfaces and features for the MIBP application

• End-to-end production environment setup and application deployment

Java Developer Feb 2011 – Mar 2014

Elisa Oyj (Finland) 3 years 2 months

Environment: Java, UNIX, SOAP web services, Apache Wicket, Spring JDBC

Project Summary:

“Tarkki” was the name of a document archival system used at Elisa. It was based on SOAP web services and used a MySQL database to store and retrieve metadata for customer invoices and e-letters. The actual invoices were stored in an archival system from which they were retrieved. I performed application enhancements and handled service requests for the Tarkki application.

Responsibilities:

• Worked as a Java developer handling various service requests raised against bugs found in the application code

• Developed new authentication functionality in the application UI to grant access to the e-letters and Invoices from a newly developed application, without creating a session.


