Data Customer

Location: Jersey City, NJ
Salary: 140,000/year
Posted: April 07, 2019


Biswajit Kundu

Profile

**+ years of experience encompassing a wide range of skill sets and roles, including 4.5 years in Hadoop.

Extensive experience with analysis, design, development, customization, and implementation of software applications using Apache Spark, MapReduce, Java, PIG, Oozie, Sqoop, Hive, HBase, HCatalog, Storm, Kafka, Apache Geode, Hortonworks Platform, and Cloudera Platform.

Extensive experience with analysis, design, development, customization, and implementation of software applications using WBI Message Broker, WebSphere MQ Series, WBI Adapters, IBM WebSphere ILOG JRules, TIBCO BW, TIBCO Administrator, and TIBCO EMS.

Proficient in analyzing and translating business requirements to technical requirements and architecture.

Knowledge of object-oriented programming.

Experience in leading teams; has handled multiple roles: Technical Architect, Technical Lead, Designer, Developer, and System Admin.

Good communication and interpersonal skills; self-motivated, quick learner, team player.

Skill Set

Hadoop: MapReduce, Java, PIG, Sqoop, Oozie, Hive, HBase, HCatalog, Storm, Kafka, Phoenix, Falcon, Shell Scripting, YARN, Elasticsearch, Talend BigData Edition 6.0.1, Spark 2.3, AWS S3, AWS EC2, AWS CI/CD, AWS Lambda, AWS DynamoDB, AWS RDS, AWS SQS, AWS SNS, AWS CloudWatch, AWS CloudTrail, AWS Elastic Beanstalk, AWS SAM

EAI: WBI Message Broker (5.0, 6.0, 7.0), IBM Integration Bus 9.0, WebSphere MQ (5.3, 6.0, 7.0, 7.5), WBI Adapters, Qpasa, IBM WebSphere ILOG JRules 7.1.1, TIBCO BW 6.1, TIBCO Administrator 6.1, TIBCO EMS 6.1

Languages: Java, Scala, C, ESQL

RDBMS: Oracle 10g, DB2 9.7

O/S: Linux, Unix, HP-UX, AIX, Windows XP/2007 Server

Source Control: SVN, VSS, MKS, RCC, PVCS, RTC

Tools: Visio, XMLSPY, SQL Navigator, Toad

Professional Experience

Jun 2017 – Till Date, Technical Architect HCL America Inc.

Customer: One of the Largest Investment Banks in Switzerland.

For any investment bank, measuring risk is crucial both for the business and for compliance with government regulatory authorities. The VaR (Value at Risk) engine is the platform that allows Risk Officers to slice and dice historical data to compute VaR, Stress, SVaR, etc. The risk analytics platform aggregates historical data to calculate risk measures for different hierarchies of the financial institution and sends them to downstream applications for reporting.

Worked with TIBCO DataSynapse GRID developers to understand the existing calculations.

Implemented a Parquet writer in Java to handle dynamic schemas and write Parquet files from TIBCO DataSynapse GRID to HDFS.

Migrated existing calculations to Spark using the Spark SQL Java APIs.

Used Spark UDFs and UDAFs in Java for aggregation of PnL (Profit and Loss) vectors.

Used Parquet files to leverage performance benefits from partition pruning, column projection, and predicate pushdown (see the sketch below).
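
A minimal sketch of this access pattern with the Spark Java API follows. The path, the partition column (cob_date), and the column names are hypothetical, and a built-in aggregate stands in for the custom Java UDAF that aggregated the PnL vectors.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.collect_list;

    public class PnlAggregationSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("VaRAggregation").getOrCreate();

            // Partition pruning: only partitions matching the filter on the (hypothetical)
            // cob_date partition column are read. Column projection and predicate pushdown
            // further reduce the amount of Parquet data scanned.
            Dataset<Row> pnl = spark.read().parquet("/data/risk/pnl")
                    .where(col("cob_date").equalTo("2017-06-30"))
                    .select("book_id", "pnl_vector");

            // collect_list is a stand-in; the actual job used a custom Java UDAF
            // to aggregate PnL vectors per book.
            Dataset<Row> aggregated = pnl.groupBy("book_id")
                    .agg(collect_list(col("pnl_vector")).alias("pnl_vectors"));

            aggregated.show(10, false);
            spark.stop();
        }
    }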

Team Size : 6

Apr 2017 – May 2017, Technical Architect HCL Technologies, India

Customer: BigData CoE (Hadoop)

Worked with the HCL sales team to provide technical support and build PoCs for prospective customers.

Feb 2017 – Mar 2017, Technical Architect HCL America Inc.

Customer: One of the Largest Courier Service Providers in the USA.

On a day-to-day basis the customer wants to determine the caging probability of any shipment across the globe. To find the caging probability they have developed a PMML model. Data for any shipment in their system is passed to the PMML model and the caging probability is computed. The probability helps them plan shipments in such a fashion that they minimize caging time and save revenue.

Performed system analysis for shipment data, caging factors, and PMML scoring.

Developed a streaming application (Storm topology) in Java to compute the caging probability of shipments.

Designed and developed a solution to score caging probability using Java, Storm, Kafka, HBase, and JPMML (see the sketch below).
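
A minimal sketch of how such a topology can be wired with the Storm Java API and the storm-kafka-client spout follows. The broker address, topic, and field names are hypothetical, and a placeholder score stands in for the JPMML model evaluation and the HBase write.

    import org.apache.storm.Config;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.kafka.spout.KafkaSpout;
    import org.apache.storm.kafka.spout.KafkaSpoutConfig;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    public class CagingTopologySketch {

        // Placeholder bolt: the real bolt evaluated the PMML model with JPMML
        // and persisted the caging probability to HBase.
        public static class ScoringBolt extends BaseBasicBolt {
            @Override
            public void execute(Tuple tuple, BasicOutputCollector collector) {
                String shipmentJson = tuple.getStringByField("value"); // Kafka record value
                double probability = 0.5;                              // stand-in for the PMML score
                collector.emit(new Values(shipmentJson, probability));
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                declarer.declare(new Fields("shipment", "caging_probability"));
            }
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical broker address and topic name
            KafkaSpoutConfig<String, String> spoutConfig =
                    KafkaSpoutConfig.builder("kafka-broker:9092", "shipment-events").build();

            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("shipments", new KafkaSpout<>(spoutConfig), 2);
            builder.setBolt("scoring", new ScoringBolt(), 4).shuffleGrouping("shipments");

            StormSubmitter.submitTopology("caging-probability", new Config(), builder.createTopology());
        }
    }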

Team Size : 5

Jun 2016 – Jan 2017, Technical Architect HCL Great Britain, UK

Customer: One of the Prestigious Banks in the UK.

The daily debit and credit transaction details are captured on the mainframe and extracted to Teradata for storage and fraud analysis. Fraud analysis was done in SAS by querying Teradata. As per their projections, they expected to host 70 TB of data within 2 years, and at that volume Teradata was no longer a cost-viable option for storing the data. The customer had adopted the Hortonworks Hadoop Distribution (HDP 2.3) as their BigData platform, so they opted for Hive storage for their fraud analysis.

Performed system analysis for daily debit and credit transaction extract from Mainframe

Developed a Spark application in Java to validate and load mainframe extract data into Hive.

Developed a program to load mainframe file extracts into HDFS using the HDFS REST API.

Designed and developed Sqoop jobs to migrate data from Teradata to Hive

Analyzed SAS queries to identify Hive partition key

Designed and developed a Java application to parse and load data from files into Hive ORC tables (see the sketch at the end of this list).

Designed the exception handling and logging approach to adhere to the customer's central logging process.
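
A minimal sketch of loading a validated extract into a partitioned Hive ORC table with the Spark Java API follows; the file format, landing path, column names, database, table, and partition column are all placeholders, not the project's actual layout.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;
    import org.apache.spark.sql.SparkSession;

    public class MainframeExtractLoadSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("MainframeExtractToHive")
                    .enableHiveSupport()
                    .getOrCreate();

            // Hypothetical landing path and column names
            Dataset<Row> txns = spark.read()
                    .csv("/data/landing/daily_txn_extract")
                    .toDF("account_id", "txn_type", "amount", "txn_date");

            // Basic validation: drop records missing mandatory keys before loading
            Dataset<Row> valid = txns.filter("account_id IS NOT NULL AND txn_date IS NOT NULL");

            // Write to an ORC-backed Hive table, partitioned by the key identified from the SAS queries
            valid.write()
                    .mode(SaveMode.Append)
                    .format("orc")
                    .partitionBy("txn_date")
                    .saveAsTable("fraud.daily_transactions");

            spark.stop();
        }
    }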

Team Size : 12

May 2016 – Jun 2016, Technical Architect HCL Technologies, India

Customer: Renowned provider of health insurance to nearly 260,000 employers and individuals throughout New Jersey, USA.

The customer received data related to health checkups, doctor visits, and medical tests from different hospitals, medical test centers, and doctors, and processed it for analytics.

Designed ODS from HL7 Data Model.

Developed Spark processing to load data from source files into Phoenix tables (see the sketch at the end of this list).

Developed Spark processing to map from the HL7 data model to the ODS data model.
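
The loading itself was driven from Spark, but at its simplest the Phoenix write path is a JDBC UPSERT through the Phoenix driver. The sketch below assumes a hypothetical ZooKeeper quorum, table, and columns.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class PhoenixUpsertSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical ZooKeeper quorum; Phoenix exposes HBase tables through JDBC
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 PreparedStatement ps = conn.prepareStatement(
                         "UPSERT INTO ODS.PATIENT_VISIT (VISIT_ID, PATIENT_ID, VISIT_DATE) VALUES (?, ?, ?)")) {
                ps.setString(1, "V-1001");
                ps.setString(2, "P-42");
                ps.setDate(3, java.sql.Date.valueOf("2016-05-15"));
                ps.executeUpdate();
                conn.commit(); // Phoenix connections do not auto-commit by default
            }
        }
    }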

Team Size : 4

Jan 2016 – April 2016, Technical Architect HCL Technologies, India

Customer: Largest Global Distribution System (GDS) provider for airline bookings in North America.

The customer sources data from different GDSs around the globe and processes them. Each GDS has its own specific structure, so to perform analytics on data from all the GDSs, the GDS-specific data needed to be transformed into a common structure. This common-structure data is then processed to find the most recent activity, booking, cancellation, and updates (the Netting Process) against a PNR number. Every day the netted data is fed to the analytics system.

Designed and developed Talend jobs to transform GDS-specific structures into the common structure and load them into Hive.

Developed Pig scripts to load data from Hive and apply the netting logic to produce the netted data output structure.

Used Talend BigData Edition to map GDS-specific structures to the common structure and Pig scripts to implement the netting logic.

Team Size : 6

Jan 2015 – December 2015, Lead Consultant HCL Technologies, India

Customer: 3rd Largest Telecom Service Provider in USA

This project was the second phase of my previous project. After setting up the Hadoop platform, the customer wanted the data processing and reporting to be built on the Hadoop platform instead of Teradata, with reporting created from Hive and HBase for their different business tracks such as SCM, Finance, and Marketing.

Developed pig script to source data from Hive and HBase.

Developed Pig UDFs in Java (see the sketch at the end of this list).

Developed a generic MapReduce job in Java to implement SCD2 on HBase.

Used Apache Phoenix to expose HBase tables as relational data source.

Used Sqoop to load data from HDFS into Teradata staging tables.

Developed shell script to dispatch preparation output to Teradata using Teradata MLoad scripts

Developed a Java program to log exceptions in HBase.

Used Oozie email action for failure notification.

Developed a generic Oozie workflow that chains the different actions needed to run a preparation job.
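
A minimal example of the shape such a Pig UDF takes is shown below; the function name and logic are hypothetical, not the actual UDFs from the project. The jar is registered in the Pig script with REGISTER and the class is then called like a built-in function inside a FOREACH ... GENERATE.

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Hypothetical UDF: normalizes a free-text market code before it is joined to reference data.
    public class NormalizeMarketCode extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null; // Pig treats the result as null for missing input
            }
            return input.get(0).toString().trim().toUpperCase();
        }
    }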

Team Size : 250

May 2014 – Jan 2015, Lead Consultant HCL Technologies, India

Customer: 3rd Largest Telecom Service Provider in USA

The customer was setting up their BigData platform on a Hortonworks cluster (200 nodes). They needed data to be loaded into the Hadoop HDFS environment from sources such as file systems, databases, social media, and even another Hadoop cluster. Essentially, they wanted to move away from costly Informatica, which had been responsible for loading data into Teradata. HCL developed the Data Ingestion Framework (DIF), a configurable application to ingest data from different sources into Hadoop and then load it from Hadoop into the Teradata staging layer.

Designed and developed the Data Ingestion Framework’s backend module to generate Oozie workflows, shell scripts, and properties files.

Developed Pig script for DIF’s record id generation module and data cleansing module.

Developed Hive script for DIF’s Hive Partition creation module.

Developed Sqoop jobs to import data from DB sources and to export data to Teradata.

Developed Java programs using the Hadoop APIs (HDFS, HBase) to handle data files on HDFS and write logs into HBase (see the sketch at the end of this list).

Developed Java MapReduce for Data Cleansing and Format validation.

Developed Oozie workflow to execute the above mentioned actions.
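
A minimal sketch of writing a log entry to HBase with the HBase Java client, as referenced above; the table name, column family, and row-key scheme are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class IngestionLogWriter {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("dif_ingestion_log"))) {
                // Row key: feed name plus run timestamp, so runs of the same feed sort together
                Put put = new Put(Bytes.toBytes("customer_feed|" + System.currentTimeMillis()));
                put.addColumn(Bytes.toBytes("log"), Bytes.toBytes("status"), Bytes.toBytes("SUCCESS"));
                put.addColumn(Bytes.toBytes("log"), Bytes.toBytes("records_loaded"), Bytes.toBytes("125000"));
                table.put(put);
            }
        }
    }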

Team Size : 14

Feb 2014 – Apr 2014, Lead Consultant HCL Technologies, India

Customer: One of the Largest Fast Food Chains in the USA

Worked as a Hadoop developer to convert a mainframe-based daily TLD application to a Hadoop-based solution. This mainframe offloading helped the customer process data for 3,000 stores per day compared to 700 stores per day.

Developed PIG scripts for Pre-edit Process, Edit Process, Sales Validation Process and Disagg Process.

Developed an Oozie workflow to orchestrate these processes.

Developed a Java and shell script based application to automate the testing process and match Hadoop test results against mainframe test results.

Team Size : 6

Jan 2014 – Feb 2014, Lead Consultant HCL Technologies, India

Customer: AIG Japan

Worked as a WBIMB subject matter expert to document the AIG SOA environment. Worked at the customer's premises to understand customer requirements and communicate the knowledge to the offshore team.

Got the knowledge transfer from customer on their SOA platform.

Prepared AIG SOA Landscape document.

Team Size : 3

July 2012 – Jan 2014, Lead Consultant HCL Technologies, India

Customer: BigData CoE (Hadoop)

Worked as a CoE member to learn Hadoop-based technologies and develop PoCs, including Hadoop-based PoCs to support the presales team.

Learned Apache Pig, Apache Hadoop, Apache Oozie, Apache Sqoop.

Completed the MongoDB Java Developer certification.

Team Size : 4

Mar 2011 – July 2012, Lead Consultant HCL Technologies, Malaysia

Customer: Biggest Bank in Malaysia as well as in South East Asia

Worked as a WBIMB Team Lead to migrate the existing OS/2-based BTS teller system to the S1-based Branch Front End (BFE) application.

Creating Technical Specification Document, developing Message Flows, Message Sets.

Leveraged the existing BTS system and provided an ESB approach to integrate the host application with the S1 ET BFE application.

Participated in functional workshop with business analysts to gather business requirement.

Created IFX data mapping from application request XML to backend host application fields.

Participated in performance analysis of BFE applications.

Leveraged and enhanced a common component called CIC (designed and developed by IBM) that is used to interact with the ESB. CIC orchestrates the invocation of different backend applications, consolidates their responses, and sends the result to the S1 ET application.

Designed the FC component, responsible for transforming ET requests into backend-specific CWF messages and, upon receiving responses, transforming them into XML responses sent back to the S1 ET application through CIC.

Developed Business Rules using IBM ILog JRule.

Prepared ANT scripts for deployment of message flows and message sets in different Broker environments.

Prepared MQSC scripts to create MQ objects for SIT, UAT and Production environments.

Provided AIG (Application Installation Guide) to setup ESB and ILog JRule components.

Tracking Defects and PCRs in SIT, UAT and Production environment.

Leading team to provide support in SIT, UAT and Production environment.

Environment: WBI Message Broker 7.0, WebSphere MQ 7.0, DB2 9.7, AIX, IBM ILOG JRules 7.1.1.

Team Size : 14

May 2010 – Mar 2011, Senior Consultant HCL Technologies, India

Customer: CoE

Worked as WBIMB and TIBCO Senior consultant to develop TIBCO GI based LEAF System

Created TIBCO BW based services to extract events and logs from the LEAF DB.

Created TIBCO GI based screens to display the information extracted from the LEAF DB.

Developed sample Message Flows for HCL Rapid Framework.

Environment: WBI Message Broker 6.1, WebSphere MQ 6.0, DB2 9.0, AIX, TIBCO EMS, TIBCO BW, TIBCO GI

Team Size : 5

Feb 2010 – May 2010, Specialist Dell Perot Systems, India

Customer: One of the Leading Healthcare-Providing Non-Profit Organizations in the US

Worked as a WBIMB specialist to migrate DataGate interfaces to WBIMB 6.0.

Created Technical Specification Document.

Developed Message Flows, Message Sets.

Executed unit test cases and compared with expected result.

Environment: WBI Message Broker 6.0, WebSphere MQ 6.0, DB2 8.1, AIX.

Team Size : 16

Apr 2008 – Jan 2010, Technical Expert Zensar Technologies, India

Customer: One of the Leading Insurance Companies in South Africa

Worked as a WBIMB Specialist providing guidance and day-to-day help to a fairly new WBIMB team. Was responsible for analysis, development, testing, and implementation of Message Flows and Message Sets for new implementations and enhancements of existing interfaces. Onshore in South Africa from June 2008.

Defined, developed, and implemented Message Flows and Message Sets per business requirements to integrate the frontend (PinBALL) application interface with backend mainframe applications.

Prepared Message Sets from COBOL copybooks and XSDs.

Developed JSP application to start and stop message flows from web based application using WMB 6.0 Configuration Manager Java API.

Wrote a Java application for day-to-day MQ monitoring using the MQ PCF Java API (see the sketch at the end of this list).

Designed and developed common error handling, automated error reprocessing and exception accounting application

Designed and developed Message accounting application.

Helped the WBIMB and MQ admin teams with their day-to-day administration tasks.
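
A minimal sketch of the kind of PCF inquiry such a monitoring application issues is shown below, using the MQ PCF Java classes (package names vary across MQ client versions); the host, port, and channel are hypothetical.

    import com.ibm.mq.constants.CMQC;
    import com.ibm.mq.constants.CMQCFC;
    import com.ibm.mq.headers.pcf.PCFMessage;
    import com.ibm.mq.headers.pcf.PCFMessageAgent;

    public class QueueDepthMonitor {
        public static void main(String[] args) throws Exception {
            // Hypothetical host, port, and server-connection channel
            PCFMessageAgent agent = new PCFMessageAgent("mqhost", 1414, "SYSTEM.ADMIN.SVRCONN");
            try {
                // Inquire the current depth of every local queue
                PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q);
                request.addParameter(CMQC.MQCA_Q_NAME, "*");
                request.addParameter(CMQC.MQIA_Q_TYPE, CMQC.MQQT_LOCAL);
                request.addParameter(CMQCFC.MQIACF_Q_ATTRS,
                        new int[] { CMQC.MQCA_Q_NAME, CMQC.MQIA_CURRENT_Q_DEPTH });

                for (PCFMessage response : agent.send(request)) {
                    String queue = response.getStringParameterValue(CMQC.MQCA_Q_NAME).trim();
                    int depth = response.getIntParameterValue(CMQC.MQIA_CURRENT_Q_DEPTH);
                    System.out.printf("%-48s depth=%d%n", queue, depth);
                }
            } finally {
                agent.disconnect();
            }
        }
    }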

Environment: WBI Message Broker 6.0, WebSphere MQ 6.0, DB2 8.1, Solaris, Windows XP, Compass 6.8, z/OS.

Team Size : 12

Oct 2007 – Apr 2008, Developer/Designer IBM India, India

Customer: One of the Prestigious Banks in UK

As a Developer, was responsible for analysis, development, testing, and implementation of Message Flows for a generic component called Pseudo Choreographer, used to implement interfaces that need orchestrated external service invocation.

Developing Message Flows, defining implementation logic.

Preparing test cases and testing in local environment.

As a Designer, was responsible for analyzing the requirements, defining implementation logic, and preparing the detailed technical design document.

Designed all the different components of Pseudo Choreographer.

Defined low level design for all the different components of Pseudo Choreographer.

Defined input and output message structure for all the components of Pseudo choreographer.

Designed the adapter for web-service-enabled systems interacting with Pseudo Choreographer.

Environment: WBI Message Broker 6.0, WebSphere MQ 6.0, Oracle 10g, AIX, Windows XP.

Team Size : 9

Nov 2004 – Oct 2007, Developer/Designer/Admin Support IBM India, India

Customer: One of the Leading Automotive Groups in US

As a Developer, was responsible for analysis, development, testing, and implementation of Message Flows and Message Sets, and for configuring Adapters for new implementations and enhancements of existing interfaces.

Developing Message Flows, Message Sets, configuring JDBC and SAP Adapter.

Testing and preparing test pack for interfaces assigned.

Delivered new and complex high-quality solutions to clients in response to varying business requirements.

Responsible for managing scope, planning, tracking, and change control aspects of the project.

As a Designer, was responsible for analyzing requirements and for testing and implementing new interfaces and enhancements of existing interfaces.

Translated customer requirements into formal requirements and design documents, established specific solutions, and led the programming and testing efforts that culminated in client acceptance of the results.

Utilized in-depth functional and technical experience with Message Broker and WBI Adapters, in conjunction with industry and business skills, to deliver solutions to the customer.

As Admin Support, was responsible for analyzing any failure or problem in the Production environment and fixing it or providing a resolution. Onshore in the US for approximately 1.5 years.

Analyzing any production problems and resolving the Remedy Tickets.

Installation and upgrade of MQ on Windows 2003 Server.

Worked on Shell programs and MQ C programs to automate MFGPro transactions between all the plants in Asia Pacific Region and the Centralized Data Center.

Worked as a key team member for QA and Production Promotions and deployments.

Worked as a key team member for configuring JDBC Adapters and SAP Adapter in QA and Production environments.

Defining user roles and activities to Broker using Configuration Manager Proxy API.

Creating and Maintaining MQ Objects like Queues, Channels, and Process Definitions. Executing, Maintaining and Monitoring MQ Triggers.

Deploying bar files using Toolkit.

Creating and registering DSNs.

Creating Subscriptions using RFHUtil.

Preparing Qpasa email alerts and Qpasa Dashboard Views for the Business owners.

Environment: WBI Message Broker 5.0, WBI Message Broker 6.0, WebSphere MQ 5.3, WebSphere MQ 6.0, Oracle 9/10g, HP-UX, Windows 2003, SUSE Linux.

Achievements: Received a Special Recognition Award from the client for exceeding customer expectations through contributions to the i-TRADE implementation team.

Team Size : 12

Training:

WebSphere Business Integration Message Broker at IBM India

WebSphere MQ Series at IBM India

Personal Profile

Name: Biswajit Kundu

Nationality: Indian

Highest Qualification: Master of Computer Application

University: National Institute of Technology, Durgapur (DU)


