
SHAKTI PRASAD MOHAPATRA

(Cloudera Certified Administrator for Apache Hadoop (CCAH))

Address: **** ****** ******, *** ****, Glen Allen VA 23060

Contact: 1-804-***-**** | Email: aco3i3@r.postjobfree.com

EXPERIENCE SUMMARY

9 years of experience in the Information Technology industry, including 4+ years in client-facing roles focused on the Banking and Financial Services and Insurance domains, having worked for Fortune 500 clients such as Capital One and Travelers with multiple skill sets including Mainframe, Hadoop, ETL, Java, and Data Virtualization.

SPECIAL SKILLS

Cloudera Certified Administrator for Apache Hadoop (CCAH)

M102 certified (MongoDB for DBAs)

Experienced in working with Big Data and the Hadoop Distributed File System (HDFS).

Hands-on experience with Hadoop ecosystem components such as Hive, Pig, Sqoop, MapReduce, Flume, Oozie, HBase, Phoenix, and Impala.

Strong knowledge of Hadoop, HDFS, Hive, and Pig.

Proficient in building Hive, Pig, and MapReduce scripts.

Experienced with Data Virtualization tools such as JBoss Data Virtualization and Teiid.

Having worked in multiple roles such as designer/developer, tech lead, BSA, and technology evaluator has given me strong confidence in adapting to new technologies and taking on challenging roles.

Strong exposure to IT consulting, software project management, team leadership, and the design, development, implementation, maintenance/support, and integration of enterprise software applications, as well as technology evaluations through proofs of concept.

Strong experience on projects involving mobile applications in Digital and Card Processing.

Extensive experience working on projects with Agile methodology, including story writing, sprint planning, providing updates through stand-ups, and participating in sprint retrospectives with the feature team.

Experienced in using VersionOne to manage Agile projects.

Strong experience on both Development and Maintenance/Support projects.

Good team player with excellent communication skills.

TECHNICAL SKILLS

Technologies: Big Data/Hadoop, Mainframe legacy systems, Ab Initio, Data Virtualization

Tools / Languages: CDH 4.2, Hive, Pig, Sqoop, Flume, Impala, COBOL, JCL, Java MapReduce, JBoss DV, Teiid Designer

Databases: MongoDB, DB2 V10.0

Operating Systems: Linux, Mainframe z/OS

Database Tools: Phoenix, OMEGAMON, SPUFI, DB2ADM, QMF, File Manager for DB2, Data Studio, ERwin, JBoss Studio

Scheduling Tools: Oozie, CA 7, Control-M

File Management & Version Control Tools: File Manager, CAFM, Endevor, ChangeMan

PROJECT PROFILE:

Employer: Cognizant Technology Solutions | Client: Capital One

Role: Senior Associate (5+ years)

Data Virtualization POC:

The objective of this POC is to leverage Data Virtualization tools to create an abstraction layer on top of disparate data sources such as DB2, Teradata, Hadoop, and web services without physically moving the data.

Roles and Responsibilities:

Technologies & Tools used: JBoss Studio 8.0.0, JBoss DV 6.1 Beta, Teiid Designer, Eclipse, DB2, Teradata, Hadoop, Hive, HDFS.

Activities:

Evaluating JBoss Teiid.

Creating connection profiles for various data sources such as DB2, Teradata, Hive, and flat files through JDBC/ODBC connections.

Creating, deploying, and executing VDBs to test the abstraction layer (a JDBC query sketch follows this list).

Evaluating Apache Jena and RDB2RDF mapping to create a symbolic layer on top of the abstraction layer (work in progress).
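Illustrative only: a minimal Java sketch of how a deployed VDB could be queried over Teiid's JDBC transport during this kind of evaluation. The VDB name, host, credentials, and view/column names are hypothetical placeholders, not actual project artifacts.

```java
// Minimal sketch: querying a deployed VDB through Teiid's JDBC driver.
// Requires the Teiid JDBC client jar on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VdbSmokeTest {
    public static void main(String[] args) throws Exception {
        Class.forName("org.teiid.jdbc.TeiidDriver");
        // Teiid JDBC URL format: jdbc:teiid:<vdb-name>@mm://<host>:<port>
        // "customer_vdb", host, and credentials below are placeholders.
        String url = "jdbc:teiid:customer_vdb@mm://dv-host:31000";
        try (Connection conn = DriverManager.getConnection(url, "dvuser", "dvpass");
             Statement stmt = conn.createStatement();
             // "CustomerView" stands in for a virtual view federating DB2/Teradata/Hive sources.
             ResultSet rs = stmt.executeQuery(
                     "SELECT customer_id, product_code FROM CustomerView")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " | " + rs.getString(2));
            }
        }
    }
}
```

The point of such a test is that the same SQL works regardless of whether the underlying rows live in DB2, Teradata, or Hive, since the VDB resolves the federation at query time.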

Customer Exposure View (CEV)

As part of the Customer Exposure View (CEV) project, Capital One is building an application that will provide a 360-degree customer view to the Capital One customer support representatives. This will help customer support with real-time decision making, since they can access details about the customer such as:

Which products the customer uses

The customer's payment activities for all products

How the customer is performing

Roles and Responsibilities:

Technologies used: Hive, Pig, Java MapReduce, Unix shell scripting, Control-M

Activities:

Filling the data lake with customer data from various sources.

Defining the ETL process using Pig and Java MapReduce (a mapper sketch follows this list).

Loading the data into Hive.
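A minimal sketch of the kind of Java MapReduce step mentioned above: a map-only pass that keys raw payment records by customer id before the cleaned output is loaded into Hive. The pipe-delimited layout, field positions, class names, and paths are illustrative assumptions, not the actual CEV feed format.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PaymentRecordJob {

    /** Map-only pass: key each raw payment record by customer id. */
    public static class PaymentRecordMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Source feed assumed to be pipe-delimited: custId|product|amount|postedDate
            String[] fields = line.toString().split("\\|");
            if (fields.length < 4) {
                return; // skip malformed rows rather than failing the job
            }
            context.write(new Text(fields[0].trim()), line);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "cev-payment-format");
        job.setJarByClass(PaymentRecordJob.class);
        job.setMapperClass(PaymentRecordMapper.class);
        job.setNumReduceTasks(0);                    // map-only: formatted records go straight to HDFS
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // raw feed location (placeholder)
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // staging dir later exposed to Hive
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

A reducer is omitted here; in practice the formatted output in HDFS would then be exposed to Hive as an external or managed table.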

Enterprise Data Store Enhancements & Maintenance

Database administration and maintenance for various applications using databases within the EDS (Enterprise Data Store) as well as ADS (Application Data Store) environments, which serve as a near-real-time data store for consumption of operational data by various real-time services as well as by client-facing and front-end applications within Capital One.

Roles and Responsibilities:

Understand system requirements, high-level design, and the data model, and transform them into a database design.

Coordinate with the project team to ensure an optimized table design approach is followed.

Arrange high-level design and PDM reviews with the platform team.

Design the DB2 objects across Dev and QA regions.

Review queries in batch and online programs (stored procedures) to ensure optimal performance.

Review Load, Unload, and other utility jobs.

Prepare and review pre-production documents.

Prepare production scripts and work with the production DBA to perform the changes in the production environment.

Work with the production DBA to schedule image copy jobs, RUNSTATS, and REORG jobs as required (a catalog check sketch follows this list).

Support QS testing.

Support performance testing and production implementations.

Monitor query performance using OMEGAMON.
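A minimal sketch, assuming a DB2 for z/OS catalog, of the kind of pre-check that could accompany the RUNSTATS/REORG scheduling noted above: listing tables in a schema whose statistics look stale. The JDBC URL, credentials, schema name, and 30-day threshold are illustrative assumptions.

```java
// Minimal sketch: flag tables whose catalog statistics are older than 30 days
// before asking the production DBA to schedule RUNSTATS.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StaleStatsCheck {
    public static void main(String[] args) throws Exception {
        // Host, port, location, and credentials are placeholders.
        String url = "jdbc:db2://db2host:446/DSNLOC01";
        try (Connection conn = DriverManager.getConnection(url, "dbauser", "dbapass");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT NAME, CARDF, STATSTIME " +
                     "FROM SYSIBM.SYSTABLES " +
                     "WHERE CREATOR = ? AND STATSTIME < CURRENT TIMESTAMP - 30 DAYS")) {
            ps.setString(1, "EDSPROD"); // assumed schema name
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s  rows=%.0f  last RUNSTATS=%s%n",
                            rs.getString("NAME"), rs.getDouble("CARDF"),
                            rs.getTimestamp("STATSTIME"));
                }
            }
        }
    }
}
```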

Operational Data Store Service Definition Factory (ODS SDF)

This project involves developing solutions for the Capital One Cards, Bank, Financial, and Digital Services lines of business that involve the ODS. Often these solutions are part of enterprise-wide IT implementations that target improvements to Capital One's business. Being a data store, the ODS is required by most of these projects to store data that could be provided by any of the Capital One internal systems or its partners and consumed by the same through real-time or batch processes. Technically, ETL processes and mainframe processes are developed to load and retrieve the data.

Roles and Responsibilities:

Leading the offshore team.

Preparing project statistics and estimates; analysis, design, coding, and reviews.

Unit testing the online and batch applications.

Supporting system testing, performing implementation tasks, and performing validations during project deployment.

Reviewing complex business and system requirements.

Analyzing the requirements and coordinating with multiple stakeholders to design/propose an optimal solution using Mainframe/ETL technologies.

Reviewing the detailed design and implementation plan with clients and the external project team.

Reviewing system test cases, supporting integration and system testing, and driving project implementation.

Employer: L&T Infotech Ltd. | Client: Travelers

Role: Developer/Support Analyst (3 years 7 months)

Personal Line Information Platform (PLIP)

PLIP is a data warehouse containing data related to personal lines auto insurance. It contains both current and historical data pertaining to auto insurance.

The following are the major tasks performed on the system:

Collecting data from various upstream sources.

Formatting the collected data.

Storing it in the database.

Extracting data from the database to various downstream systems on a monthly and quarterly basis.

This project also involves enhancement activities for the various processes that store data/policies pertaining to auto insurance in the PLIP (Personal Lines Information Platform) database. There are mainly four processes: selecting auto insurance data from the file sent from the CARS DB, formatting the data in the way PLIP can accept, splitting it state-wise, and finally inserting/updating the database. The information stored in PLIP is used by various downstream systems such as FIN DB, PRICING, ISO Auto, and Automart for financial calculations, reporting, and audit purposes.
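A minimal sketch of the state-wise split step described above, written in Java purely for illustration (the production processes were mainframe based). The record-length check, the state-code offset, and the output file naming are hypothetical assumptions about the layout.

```java
// Minimal sketch: split a formatted PLIP feed into one file per state
// in a single pass, keeping one writer open per state code.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class StateSplitter {

    public static void split(Path formattedFeed, Path outDir) throws IOException {
        Map<String, PrintWriter> writers = new HashMap<>();
        try (BufferedReader in = Files.newBufferedReader(formattedFeed)) {
            String record;
            while ((record = in.readLine()) != null) {
                if (record.length() < 12) continue;        // skip short/garbage rows
                String state = record.substring(10, 12);   // assumed state-code offset
                writers.computeIfAbsent(state, s -> open(outDir, s)).println(record);
            }
        } finally {
            writers.values().forEach(PrintWriter::close);
        }
    }

    private static PrintWriter open(Path outDir, String state) {
        try {
            // Hypothetical output naming: one file per state, e.g. PLIP_VA.dat
            return new PrintWriter(Files.newBufferedWriter(outDir.resolve("PLIP_" + state + ".dat")));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        split(Path.of(args[0]), Path.of(args[1]));
    }
}
```

Keeping one writer per state means the whole feed is split in a single pass before the insert/update step loads each state file into the database.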

The most important downstream system is ISO AUTO. This system extracts the necessary information from the PLIP DB, reformats the extracted data in various steps/processes, and reports the valid data to the Insurance Services Office (ISO), a regulatory authority in the United States. The data submitted to ISO is for compliance, and any delay or non-compliance would lead to a heavy penalty for the customer.

The system involves various daily, weekly, monthly, quarterly and yearly jobs which are scheduled

in CA 7 to keep the system up and running.

Roles and Responsibilities:

Coding and unit testing; scheduling and monitoring of various batch jobs.

Preparing the scheduling calendar on a yearly basis. Monitoring the scheduled and on-demand jobs. Fixing any production abends, preparing the emergency jobs/procs, and raising requests to restart them.

Handling ad hoc requests from clients, which involves analysis of existing processes and providing data requested by the client or other external institutions.

Handling enhancements to existing processes. Enhancement work involves gathering requirements, preparing test cases, and then coding, unit testing, supporting system testing, and supporting production deployment.

CERTIFICATIONS

Certification Name Year

M102: MongoDB for DBAs 2015

Cloudera Certified Administrator for Apache Hadoop (CCAH) 2014

IBM Certification for Database Administrator, DB2 10 for z/OS (DB2 612) 2013

IBM DB2 UDB V8.1 Family Fundamentals (DB2 700) 2010

DB2(R) Universal Database V8.1 (DB2 703) 2009

IBM Database Administrator DB2 UDB V8.1 (DB2 702) 2008

Property and Liability Insurance Principles from AICPCU (INS 21) 2007

ACHIEVEMENTS

Award Name Organization Year

Above & Beyond Cognizant Technology Solutions 2011

Achiever of the Month L&T Infotech Ltd. 2009

Spot Excellence Award L&T Infotech Ltd. 2008

Spot Excellence Award L&T Infotech Ltd. 2007

EDUCATION

Bachelor of Engineering in Computer Science & Engineering, Orissa Engineering College, Bhubaneswar, Odisha, India, 2006.


