Hadoop Developer Data Migration

Location:
Coimbatore, Tamil Nadu, India
Posted:
May 27, 2021

Resume:

Siddharthan Raajaamani

***, ****** ******, *******.*********-638701 +91-814******* admq8x@r.postjobfree.com

Summary:

An intuitive software engineer and dedicated professional who has helped clients solve IT problems involving transactional and analytical data, modernization, and building sustainable systems using cutting-edge technologies and tools.

Highlights:

5+ years of IT experience with Tata Consultancy Services and Barclays Limited in the design, development and implementation of application modernization and data migration solutions, building data pipelines, and delivering data transformation and data analysis solutions

Expertise in technologies such as Java, Scala, Unix-Shell scripting, SQL, HiveQL, Spark Streaming, Kafka

Solid understanding of Big Data principles and deep working knowledge of Spark, Hadoop, HBase, Kafka, Sqoop, Hive, Oozie, NiFi, Kudu, etc.

Data engineering and analytical solutions for delivering location-based insights and demographics from geo-location data using NiFi, Kafka, HDFS, Spark and Scala

Obtained Cloudera certification as a Certified Hadoop and Spark Developer

Successfully implemented a real-time offer management solution using HDFS, Spark, HBase and Scala on a Hortonworks cluster (see the HBase sketch after this list)

End-to-end project implementation of an Enterprise Data Lake by sourcing various systems for future consumption

Worked at the client location (Malaysia) and developed code to user expectations using Big Data technologies; received appreciation for the effort

Expertise in the areas of application development, application transformation, application migration, data migration, re-engineering, re-hosting, and business rules extraction processes

Developed a reusable data quality framework using Spark and Hadoop technologies and implemented it successfully in production

Excellent communication, people management, technical and analytical skills
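Purely as an illustration of the real-time offer management highlight above, below is a minimal sketch of a single HBase write using the standard HBase client API. The table name, row key and column layout are hypothetical, and the connection assumes an hbase-site.xml on the classpath; this is a sketch of the pattern, not the project's actual code.

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object OfferWriter {
  def main(args: Array[String]): Unit = {
    // Picks up the cluster settings from hbase-site.xml on the classpath.
    val connection = ConnectionFactory.createConnection(HBaseConfiguration.create())
    val table = connection.getTable(TableName.valueOf("offers")) // placeholder table

    try {
      // Row key and columns are illustrative only.
      val put = new Put(Bytes.toBytes("customer-12345"))
      put.addColumn(Bytes.toBytes("o"), Bytes.toBytes("offer_id"), Bytes.toBytes("OFFER-987"))
      put.addColumn(Bytes.toBytes("o"), Bytes.toBytes("valid_until"), Bytes.toBytes("2021-06-30"))
      table.put(put)
    } finally {
      table.close()
      connection.close()
    }
  }
}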

Technical Skills:

Scala and Java

Hadoop, Apache Spark, HDFS, MapReduce, Streaming

Apache Hive, HBase, Phoenix, SQL, NoSQL, Kudu

Sqoop, Flume

Kafka, MQ Hub

Oozie, Control-M, Tivoli

MicroStrategy Reporting Tool

Hortonworks/Cloudera platform

GitHub

DevOps

Linux, UNIX, Windows

Work History:

BARCLAYS GLOBAL SERVICE CENTER, Chennai:

Feb 2020 – Till Date – Hadoop/Spark Developer

This project builds a platform to support real-time use cases at an industrialized scale. The platform enables the flow of real-time streaming data from the various channels through which customers interact with the bank. Work covers design and architectural patterns for real-time data ingestion, processing, and consumption for ML, analytics, and reporting dashboards.

Technology stack of the Real-Time Warehouse solution implemented as below:

Data ingestion through BIW exporter

Data Processing – Spark-based data processing pipelines using Envelope, a configuration-driven Apache Spark framework (a hand-written streaming sketch follows the skills line below)

Data Storage – Kafka, Kudu and HDFS

Job Scheduling using Tivoli job scheduler

Data Access - Impala

Skills: Hadoop, Spark 2.0, Streaming, Java, Scala, Tivoli, Hive 1.2, Unix, Kafka, Kudu
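In the project this pipeline is expressed as an Envelope configuration; purely as an illustration, here is a minimal hand-written Spark Structured Streaming sketch of the same flow in Scala. The brokers, topic, event schema, checkpoint path, Kudu master and table name are all placeholders, and the sketch assumes Spark 2.4+ (for foreachBatch) with Scala 2.11 and the kudu-spark connector on the classpath.

import org.apache.kudu.spark.kudu.KuduContext
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

object RealTimeWarehouseIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("rtw-ingest").getOrCreate()

    // Hypothetical event schema; the real feed arrives via the BIW exporter.
    val schema = new StructType()
      .add("event_id", StringType)
      .add("customer_id", StringType)
      .add("channel", StringType)
      .add("event_ts", TimestampType)

    // Read raw events from Kafka as a streaming DataFrame.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // placeholder brokers
      .option("subscribe", "customer_events")            // placeholder topic
      .load()
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Upsert each micro-batch into Kudu so Impala can query it immediately.
    val kuduContext = new KuduContext("kudu-master:7051", spark.sparkContext)
    val query = events.writeStream
      .option("checkpointLocation", "/tmp/rtw-checkpoint") // placeholder path
      .foreachBatch { (batch: DataFrame, _: Long) =>
        kuduContext.upsertRows(batch, "impala::rtw.customer_events") // placeholder table
      }
      .start()

    query.awaitTermination()
  }
}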

STANDARD CHARTERED, Malaysia (TCS):

Apr 2017 – Dec 2019 – Hadoop/Spark Developer

BANCA is the policy and transaction recording system in Standard Chartered Bank. In this project, the Enterprise Data Management Platform sources the policy-related data from Oracle, and the data is catered for MicroStrategy reports on a real-time/near-real-time basis. This was implemented for 3 countries.

Parsing the incoming XML data dynamically using Java (see the parsing sketch after this list)

Capturing and processing banking & insurance data in real time using Spark

Loading data into Hadoop using Spark/Hive

Job scheduling and workflow monitoring using Control-M

Data copy and processing using Apache NiFi

Developing reports using MicroStrategy (MSTR)

Design and development of Spark jobs scheduled through Control-M

Skills: Hadoop 2.6.4, Spark 1.6, Streaming, Java, Scala, Control-M, Hive 1.2, UNIX, Sqoop 1.4.6, Kafka, Falcon, MicroStrategy 10.3, Apache NiFi
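The XML parsing in this project was done in Java; purely as an illustration of the idea, here is a minimal Scala sketch that parses staged XML documents and appends them to a Hive table with Spark. The Policy fields, element names, HDFS path and Hive table are hypothetical, and the sketch assumes the Spark 2.x API (the project listed Spark 1.6) plus the scala-xml module.

import org.apache.spark.sql.SparkSession
import scala.xml.XML

// Hypothetical flattened policy record; the real BANCA layout is not shown here.
case class Policy(policyId: String, holder: String, premium: Double)

object PolicyXmlLoader {
  // Parse one XML document into a Policy; element names are illustrative only.
  def parse(xmlString: String): Policy = {
    val doc = XML.loadString(xmlString)
    Policy(
      policyId = (doc \ "policyId").text,
      holder   = (doc \ "holder").text,
      premium  = (doc \ "premium").text.toDouble
    )
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("banca-xml-loader")
      .enableHiveSupport() // needed to write managed Hive tables
      .getOrCreate()
    import spark.implicits._

    // One XML document per file, staged on HDFS (placeholder path).
    val rawXml = spark.sparkContext
      .wholeTextFiles("/data/banca/incoming/*.xml")
      .map { case (_, content) => content }

    // Append the parsed records to a Hive table feeding the MicroStrategy reports.
    rawXml.map(parse).toDF()
      .write.mode("append")
      .saveAsTable("edm.banca_policies") // placeholder table
  }
}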

AP Nielsen CDAR - QMS, Chennai (TCS):

Dec 2016 – Mar 2017 – Hadoop/Spark Developer

The project involves the analysis, design and development of the QMS application, which is used to measure the performance of auditors and supervisors. Auditors collect the required data using handheld terminals (HHT). The processed data is retained in HDFS and maintained in different tables, and from the results, different dashboards are prepared to view the performance of the auditors and supervisors.

Events are captured in real time using Spark Streaming and processed to store the data in HBase for monitoring purposes.

Java code to decrypt the files using a dynamically generated AES key provided through a Java API (see the decryption sketch after this list)

Requirements developed using Spark in the Scala programming language, tested and implemented within the given time (sprint).

Skills: Hadoop 2.6.4, Spark 1.6, Java 1.7, Scala 2.10, Hive 1.2
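Purely as an illustration of the decryption step above, below is a minimal Scala sketch using the standard javax.crypto API. The cipher mode (CBC with PKCS5 padding), file paths, and the way the key and IV are supplied are assumptions; in the project the AES key was generated dynamically and provided through a Java API.

import java.nio.file.{Files, Paths}
import javax.crypto.Cipher
import javax.crypto.spec.{IvParameterSpec, SecretKeySpec}

object HhtFileDecryptor {
  // Decrypt one encrypted HHT file with the supplied AES key and IV.
  def decrypt(encryptedPath: String, keyBytes: Array[Byte], ivBytes: Array[Byte]): Array[Byte] = {
    val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding") // assumed mode/padding
    cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(keyBytes, "AES"), new IvParameterSpec(ivBytes))
    cipher.doFinal(Files.readAllBytes(Paths.get(encryptedPath)))
  }

  def main(args: Array[String]): Unit = {
    // Placeholder key and IV; the real key comes from the provider's Java API.
    val key = Array.fill[Byte](16)(1.toByte)
    val iv  = Array.fill[Byte](16)(0.toByte)
    val plain = decrypt("/data/hht/audit_20170101.enc", key, iv) // placeholder file
    Files.write(Paths.get("/data/hht/audit_20170101.dat"), plain)
  }
}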

AP Nielsen Data Quality, Chennai (TCS):

Dec 2015 – Dec 2016 – Hadoop/Spark Developer

The Nielsen Data Quality framework is an application that ensures the quality of the data processed in the ETL. It can be plugged into any stage of the ETL, checks the quality of data against different parameters, and publishes the results as different dashboards. My role in the project was Scala/Spark developer on the data quality framework, including optimizing the code; developed the Spark program using Scala and implemented it in production (a minimal sketch of such a check appears below).

Skills: Hadoop 2.6.4, Spark 1.6, Scala 2.10, Hive 1.2
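Purely as an illustration of the pattern, here is a minimal Scala sketch of a reusable column-quality check; the real framework checks many more parameters. The source table, checked columns and metrics table are placeholders, and the sketch is written against the Spark 2.x API (the project listed Spark 1.6).

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col

// One row of quality metrics per checked column.
case class ColumnQuality(column: String, rowCount: Long, nullCount: Long, distinctCount: Long)

object DataQualityChecks {
  // Compute simple completeness and uniqueness metrics for the given columns.
  def profile(df: DataFrame, columns: Seq[String]): Seq[ColumnQuality] = {
    val total = df.count()
    columns.map { c =>
      ColumnQuality(
        column        = c,
        rowCount      = total,
        nullCount     = df.filter(col(c).isNull).count(),
        distinctCount = df.select(c).distinct().count()
      )
    }
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("dq-check").enableHiveSupport().getOrCreate()
    import spark.implicits._

    // Placeholder staging table and checked columns.
    val stage = spark.table("retail.sales_stage")
    val metrics = profile(stage, Seq("store_id", "upc", "sales_amount")).toDF()

    // Persist the metrics so the quality dashboards can be built on top of them.
    metrics.write.mode("append").saveAsTable("dq.column_metrics") // placeholder table
  }
}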

Education:

Bachelor of Engineering in Electronics and Communication, Coimbatore Institute of Technology, 2015

Certifications:

Cloudera Certified Hadoop and Spark Developer (CCA175), Hortonworks Certified Hadoop Developer

Domain Knowledge:

Banking and Retail


