
Data Life Insurance

Location:
Des Moines, IA
Posted:
December 08, 2016

Resume:

Kodanda Ramudu N

Email: acxtim@r.postjobfree.com Mobile: +1-515-***-****

OBJECTIVE:

To pursue a challenging career in a progressive organization that offers opportunities to apply my skills toward the growth of the organization, while preparing myself, in the long run, to take on greater responsibilities within the company.

Summary:

A passionate IT professional with more than 7 years of IT experience in the analysis, design, development and implementation of enterprise data warehouse projects, and familiar with Big Data concepts.

Currently working as a Tech Lead at Cognizant Technology Solutions. As a tech lead, I have received good appreciation from clients for handling unforeseen complex issues and delivering projects on time with quality deliverables.

Certified Informatica 8.6/9.x professional (DI PowerCenter Developer 9.x Specialist).

Good experience using complex transformations such as Java, XML, Joiner, Aggregator, Update Strategy and Lookup transformations.

Good experience with advanced Informatica concepts such as partitioning, concurrent execution of workflows, calling one workflow from another, and using the Debugger.

Expertise in root cause analysis and bug fixes for production issues.

Worked with heterogeneous sources and targets like Teradata, Oracle, SQL Server, DB2 and Flat files.

Good exposure to COBOL files (VSAM) and complex XML files.

Experience in developing UNIX Shell Scripts for various requirements.

Good knowledge of the Control-M scheduling tool.

Proficient in understanding business requirements and translating them into technical requirements and test strategies.

Interacted with clients to get clarifications on business requirements and managed the offshore team.

Participated in data modeling and in preparing high-level and low-level design documents.

Strong in data warehousing concepts: fact tables, dimension tables, and star and snowflake schema methodologies.

Very good understanding of the Agile Scrum model and the SDLC.

Knowledge of installing and configuring Informatica and Oracle, and of Hadoop pseudo-distributed mode installation.

Very good knowledge of and hands-on experience with Big Data ecosystem tools such as Hadoop, MapReduce, Hive and Pig Latin.

Experience across domains such as Insurance, Banking and Financial Services, and Health Care.

TECHNICAL SKILLS:

Operating Systems : Linux/Unix, Windows

Languages : Oracle SQL, PL/SQL, Unix Shell Scripting and JAVA

Databases : Teradata, Oracle 10g/11g and DB2.

ETL Tools & Utilities : Informatica 8.6/9.x, Teradata loading utilities, SQL Developer, Teradata SQL Assistant (Queryman), PuTTY, Control-M and Lotus Notes.

Big Data Eco system : HDFS, Map Reduce, Pig and Hive.

Domain Knowledge : Insurance, Health Care, Banking and Financial Services

Experience Summary:

Dec 2009 – Feb 2014 Cognizant Technology Solutions DW Developer DI

Mar 2014 – May 2015 Cognizant Technology Solutions Sr. Developer

May 2015 – Till Date Cognizant Technology Solutions Tech Lead

ACADEMICS:

M.C.A. from Sri Venkateswara University Campus, Tirupati, during 2006-2009 with 81%.

B.Sc. from Sri Krishna Devaraya University, Anantapur, during 2003-2006 with 83%.

Certifications & Awards:

LOMA 280 – Insurance Fundamentals.

Informatica Certified Developer (ICS) – DI Power Center Developer 9.x Specialist (Certificate No. 004-000375)

Received the “Spot of Excellence Award” from CTS for addressing a production issue.

Project #1 Claims Ad-hoc Reporting

Project Description:

Claims Settlement Information System is SFG’s risk analysis and settlement processing system. The objective of the project is to take data from multiple admin systems and new business systems and integrate and load it into the Enterprise Data Warehouse and the Claims Data Mart. Finally, reports are created based on claim paid-to-date, policy issue date and other criteria.

Roles and Responsibilities:

Worked closely with business users and business analysts to understand the business requirements.

Worked with the data modeler to design a dimensional model that best fits the requirements.

Prepared the high-level design.

Created Informatica mappings to load data from different sources such as ODS, Fiserv, SWAP and CSI into the data warehouse and the data mart.

Used Java transformations to encrypt the sensitive data.

Used XML transformations to extract data from XML files into the staging database.

Used connected and unconnected lookups to look up values in relational tables.

Designed and developed the mapping that generates the parameter file dynamically (see the sketch after this list).

Prepared unit test cases and executed them.

Worked on ETL code promotion from Dev to QA/Test/Acceptance.

Extensively involved in fine-tuning mappings and sessions for better performance and throughput.

Defined the job flow that loads the data into the Enterprise Data Warehouse and the Claims Data Mart.

Scheduled the jobs in Control-M.
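The dynamic parameter file mentioned above was implemented as an Informatica mapping; purely as an illustration of the artifact involved, below is a minimal shell sketch that builds a PowerCenter-style parameter file before a workflow run. The folder, workflow, session and parameter names are hypothetical.

#!/bin/bash
# Minimal sketch: build a PowerCenter parameter file for a daily load (hypothetical names).
RUN_DATE=$(date +%Y%m%d)
PARAM_FILE=/infa/param/wf_claims_daily_${RUN_DATE}.parm

{
  echo "[CLAIMS_DM.WF:wf_claims_daily.ST:s_m_load_claims]"   # folder.workflow.session header
  echo "\$\$RUN_DATE=${RUN_DATE}"                            # mapping parameter
  echo "\$\$SRC_SYSTEM=ODS"                                  # mapping parameter
  echo "\$PMSourceFileDir=/data/inbound/claims"              # session-level override
} > "${PARAM_FILE}"

echo "Parameter file created: ${PARAM_FILE}"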

Project #2 Specialty Markets – Inbound Client: Sammons Financial Group

Project Description:

Sammons Financial Group has implemented the intelligent e-App product for specialty markets final expense products. The e-App provides a data feed in the form of 1203 XML files for pre-defined events. 1203 XML files are ACORD-standard XML data feed files used to send data to distributors. SFG, as the consumer, takes the 1203 XML files and loads them into the ODS system for further reporting.

Roles and Responsibilities:

Worked closely with business users and business analysts to understand the business requirements.

The ACORD XSD is a very complex schema to import into Informatica as a source definition; researched and found a solution to import the XSD into Informatica.

Prepared the high-level design and presented it to the architects for sign-off.

Used XML transformations to parse the XML files and loaded the data into the staging database.

The 1203 XML files contain many views and domain tables; analyzed the ACORD model XSD, identified the required domain tables and views, and created the corresponding table structures to load the data.

Owned the complete project and developed it end to end.

Developed Informatica mappings and workflows to extract data from the 1203 XML files for automated feeds, and tuned the mappings for better performance.

Developed ETL mappings to mask sensitive data.

Extensively involved in fine-tuning mappings and sessions for better performance and throughput.

Managed code migration from Dev to Test, Test to UAT, and UAT to Production.

Project #3 SRS Invoicing Client: Sammons Financial Group

Project Description:

Sammons Retirement Solutions (SRS) is a member of Sammons Financial Group. SRS provides simple, innovative and straightforward solutions that can help individual investors live well in retirement. These solutions include mutual funds and annuities. SRS maintains the mutual funds offered by different companies, and at the end of each month and quarter it creates invoices with average daily balance and fee details. The objective of the project is to create invoices for each company based on fund type and fee type.

Roles and Responsibilities:

Worked with business users and business analysts to understand the business requirements.

Worked closely with data modelers and prepared the data model (snowflake schema) for the project.

Prepared the high level design and presented the same to architects to get sign-off.

Owned the complete project and developed it end to end.

Developed Informatica mappings and workflows to extract data from CSV files for automated feeds, and tuned the mappings for better performance.

Wrote stored procedures to populate the calendar and to pull data from fact and dimension tables based on each company's invoicing frequency.

Extensively involved in fine-tuning mappings and sessions for better performance and throughput.

Worked with the SSRS team to create the reports and owned the SSRS reports for further enhancements as per the business requirements.

Educated the business team on the new system and explained how to feed the data to get accurate invoices.

Project #4 MASAGA-Keystone; Nationwide

Title : MASAGA-Keystone; Nationwide

Client : CoreLogic Inc.

Role : DW Developer DI

Team Size : 4

Environment : Informatica 9.0, Unix, Oracle, Shell Scripting, PuTTY, Control-M, SQL Developer

Duration : Jan 2012 – Till Date

Description :

MASAGA stands for "Moving Away from Software AG ASAP." Data extracts are provided from DIABLO, and Informatica is used for the extraction, transformation and load (ETL) process to load them into different product databases. DIABLO places the extract files on the Informatica file system, and the ETL process looks for these county-specific extract files every hour to load them into the target product databases, along with statistics for the entire load in BMD. This product provides data for applications that are an undisputed resource for searching and locating real estate information: one resource for all the in-depth property and ownership information customers need. Customers get genuine information by hitting the nation's largest, most accurate database, covering more than 3,085 counties and representing 97 percent of all U.S. real estate transactions. Property records, tax assessments, property characteristics and parcel maps are obtained from tax assessor and county recorder offices across the nation.

KEY RESPONSIBILITIES

Maintain file extract loading information in a centralized repository.

Provide a new way to track file loading processes. ETL process information for property tax roll and transaction data is currently not maintained by the Product Database Management and Infrastructure team.

Efficiently load data into the target databases as extracts are acquired from the Diablo database.

Track any issues that occur during the ETL process.

Provide easy maintenance of the metadata-driven loading process.

Halt or amend the ETL process depending on the severity of the errors that occur.

Dynamically alter the ETL process in order to accommodate different issues that occur.

Provide Management with insight into the ETL processes in order to identify and resolve issues that may occur.

Produce reports detailing the specific files that are loaded throughout the day, counties that have been updated or refreshed, the number of records that have been updated or inserted and errors that have occurred.

BMD's main requirement is to support reporting that occurs on an ad hoc basis.

Contributions:

As an individual contributor, my contributions were:

Developed various mappings to extract data from various sources, involving flat files and relational sources.

Extensively used various transformations such as Source Qualifier, Aggregator, Filter, Router, Lookup (unconnected), Expression, Sequence Generator, Java and Update Strategy.

Prepared unit test cases and executed them.

Created Oracle stored procedures for temp table creation, index creation, table swap, analyze, delta-table bulk insert/delete and statistics loading; these are executable in parallel across multiple schemas on multiple databases (see the sketch after this list).

Extensively involved in Fine-tuning of Informatica Code (mapping and sessions), Stored Procedures, SQL to obtain optimal performance and throughput.
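As a rough illustration of running the stored procedures across schemas in parallel (referenced in the contribution above), below is a minimal shell sketch driving SQL*Plus. The connect details, package and table names are assumptions, not the project's actual objects.

#!/bin/bash
# Minimal sketch: run swap/index/analyze procedures for several schemas in parallel.
# DB_TNS and DB_PASSWORD are assumed to be exported in the environment; all object names are hypothetical.
SCHEMAS="COUNTY_A COUNTY_B COUNTY_C"

for SCHEMA in ${SCHEMAS}; do
  sqlplus -s "${SCHEMA}/${DB_PASSWORD}@${DB_TNS}" <<EOF &
WHENEVER SQLERROR EXIT FAILURE
EXEC pkg_load_utils.create_temp_table('PROPERTY_DELTA');
EXEC pkg_load_utils.create_indexes('PROPERTY_DELTA');
EXEC pkg_load_utils.swap_and_analyze('PROPERTY', 'PROPERTY_DELTA');
EXIT
EOF
done
wait   # block until every schema's background run has finished
echo "Stored procedure runs completed for: ${SCHEMAS}"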

Project #5 MLS Direct Feed Load

Title : MLS Direct Feed Load

Client : Core Logic Real Estate

Role : Developer

Technology : Informatica 9.0, Unix, Oracle, Shell Scripting, PuTTY, Control-M, SQL Developer

Duration : Jan 2012 – Till Date

Description :

CoreLogic recognizes that MLS providers, brokers and agents need their IT solutions to do more than ever before: provide innovative non-dues revenue opportunities and enhance real estate data. It is more than just real estate listings; it is robust MLS information technology, all in one place. The real estate industry is constantly evolving, often requiring system modifications or changes that can affect operations on a small or large scale, and CoreLogic provides a full range of Professional Services for MLS that enable customers to stay in step with the times. The process loads data into the Keystone MLS (Oracle) database used by the K2 Realist and Realist Classic applications. The source is the AGDB server (SQL Server), which holds data for multiple MLS boards; data is fetched from the AGDB MLS boards that are active for the application. Initially, the fetched data is loaded into a temp table in the target database, and the incremental load then runs every two hours using the temp table. After the final incremental load, index creation, table swap and analyze run, followed by a clone to the production server.

KEY RESPONSIBILITIES

Objective of the Project:

Data in AGDB server (Source) resides in multiple databases in SQL server.

Need to accommodate complete refresh along with incremental data.

Need to have control over the Boards data being loaded into target.

Historical data is not maintained.

Need a flag to add new boards' data as per the client's mail.

The Keystone MLS database will be spatial enabled.

The data between the KEYSTONE search servers and loading server will be synchronized using Oracle Streams.

Frequency is 24 hours a day, 7 days a week: main fetch, load and deltas for that day.

Contribution:

Finalized the approach for fetching data from multiple databases, with restartability and high performance.

Worked on Transaction Control to create multiple files, one per MLS board.

Implemented parallel fetch at the session level and multi-database fetches using the SQL transformation.

Validated fetched file data against the previous fetch and the source data in case of discrepancies.

Used trigger files with Event Wait tasks to hold processing until dependent jobs complete.

Created Linux scripts to validate file counts and trigger mails with details in the mail body (see the sketch after this list).

Loaded the data into a temp table and created basic indexes that help with delta deletes and updates.

Tuned the performance of incremental fetches from multiple sources for delta runs, with mail alerts for any connection issues.

The 4 to 6 delta fetches and temp-table loads per day had to perform well, which involved multiple POCs.

Created indexes using stored procedures, swapped the loaded table with the actual table, and analyzed the data.

Generated reports on demand and added databases (MLS_BOARDS) based on client requests.
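A minimal sketch of the kind of Linux validation-and-mail script referenced above; the directories, the expected-count source and the recipient address are hypothetical.

#!/bin/bash
# Minimal sketch: validate the number of fetched MLS board files and mail the result.
# Paths, the expected-count source and the recipient are hypothetical.
FETCH_DIR=/data/mls/fetch/$(date +%Y%m%d)
EXPECTED=$(wc -l < /data/mls/config/active_boards.lst)    # one line per active board
ACTUAL=$(ls -1 "${FETCH_DIR}"/*.dat 2>/dev/null | wc -l)

if [ "${ACTUAL}" -ne "${EXPECTED}" ]; then
  mailx -s "MLS fetch count mismatch for $(date +%Y-%m-%d)" etl-support@example.com <<EOF
Expected board files : ${EXPECTED}
Fetched board files  : ${ACTUAL}
Fetch directory      : ${FETCH_DIR}
Please check the fetch session logs before the delta load starts.
EOF
  exit 1
fi
echo "File count validation passed (${ACTUAL} files)."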

Project #6 Medco Health Care Solutions

Title : Medco Enterprise Data warehouse System (MEDW).

Client : Medco Health Care Solutions

Technology : Informatica 8.6, Unix, Oracle, Shell Scripting, PuTTY, Control-M, SQL Developer

Duration : Aug 2011 – Jan 2012

Description :

Medco is a health plans and providers company in the USA. The Medco Pharmacy can help clients improve their plan's clinical effectiveness and deliver a level of patient safety, in addition to helping meet clients' financial objectives.

Medco Health Solutions provides a wide spectrum of benefit design options for appropriately sharing costs between the plan and enrollees, and for providing proper incentives to encourage preferential use of more cost-effective treatments and pharmacies.

The scope of the project is to extract claims records from the legacy system into the staging area (Teradata). Using Informatica, the data is then loaded from the staging area into the data warehouse for Decision Support Systems and marketing departments to make business-impact decisions.

Teradata utilities such as BTEQ, FastLoad and FastExport are used to extract data from the legacy system.

Mappings and workflows are developed to extract the data from the Teradata system and load it into the data warehouse.

KEY RESPONSIBILITIES

Worked with business users and business analysts for requirements gathering and business analysis.

Developed Informatica mappings and workflows to extract data from files in disparate formats (fixed-width, delimited, etc.) for automated feeds, and tuned the mappings for better performance.

Used various transformations such as Source Qualifier, Aggregator, Filter, Router, Lookup (unconnected), Expression, Sequence Generator, Java and Update Strategy.

Wrote and executed Teradata scripts using BTEQ, FastExport and FastLoad (see the sketch after this list).

Gained experience with Teradata triggers and macros.

Extracted data from flat files in various formats and from databases, and consolidated it into the Oracle database.

Extensively involved in fine-tuning mappings and sessions for better performance and throughput.
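As a rough sketch of the shell-wrapped BTEQ extracts referenced above; the TDPID, credentials, table and file names are hypothetical, not the project's actual objects.

#!/bin/bash
# Minimal sketch: export one day's claims from the Teradata staging area with BTEQ.
# TD_PASSWORD is assumed to be exported in the environment; all names are hypothetical.
EXPORT_FILE=/data/stage/claims_$(date +%Y%m%d).txt

bteq <<EOF
.LOGON tdprod/etl_user,${TD_PASSWORD};
.EXPORT REPORT FILE=${EXPORT_FILE};
SELECT claim_id
     , member_id
     , claim_amt
FROM   stg_db.claims_stage
WHERE  load_dt = CURRENT_DATE;
.EXPORT RESET;
.LOGOFF;
.QUIT;
EOF

echo "BTEQ export completed: ${EXPORT_FILE}"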

Project #7 Manulife Japan Life Insurance

Title : MLJ Life Insurance

Environment : Informatica 8.6, DB2, Unix, Shell Scripting, PuTTY, Control-M

Duration : Dec 2009- July 2011

Description :

Manulife Life Insurance, Japan is one of the first foreign life insurance companies to establish operations in Japan, entering the market in 1901. Manulife re-entered Japan in 1999, laying the foundation for the establishment of Manulife Life Insurance Company.

The MLJ Life Insurance system supports traditional and non-traditional products, administration, accounting, valuation, tax, agency, claims, and reinsurance. The system provides the facility for entering and editing all application information, assesses elementary underwriting risk, and either issues a policy automatically or refers the case to an underwriter for further review based on company-defined underwriting limits and automated decision criteria.

KEY RESPONSIBILITIES

Created the Source and Target Definitions in Informatica Power Center Designer

Analyzed the specification document and prepared the low-level design document.

Created Reusable Transformations and Mapplets to use in Multiple Mappings

Developed and updated documentation of processes and system.

Wrote Triggers and Stored Procedures using PL/SQL for Incremental updates

Prepared UNIX Shell Scripts

POC #1 Big Data – Hive POC

Proof of Concept: Processing Data Mart job logs.

Introduction:

Our client develops mortgage products and maintains the data for analysis and better property evaluations. As part of data storage and maintenance, thousands of jobs run every day, pulling data from different sources and storing it in a centralized data warehouse. During these runs, jobs may fail for different reasons, such as data issues, server issues or network issues.

Approximately 10,000 log files are generated per day. Analyzing this huge number of files and capturing the failure details for further querying is not a simple task.

The client asked us to analyze all these files and report the frequency of issues, the reason for each issue, the action taken as part of the resolution, when a particular failure happened earlier, and what the impact was.

Proposed Solution:

After analyzing the client requirements, we came up with the design below.

Gather all the job logs from the warehouse system and place them in the Hadoop Distributed File System (HDFS).

Since the log file data needs to be analyzed for error details, Hive was chosen.

Using Hive, load the log file data into Hive-managed partitioned/bucketed tables; these are the master tables (see the sketch after this list).

Query the master tables to identify the various errors.

Develop a dashboard table that holds the error type, error details, frequency and the impact of each error on production and customers.
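A minimal sketch of this proposed flow, assuming hypothetical HDFS paths, table and column names, and pre-parsed pipe-delimited log records (bucketing is omitted for brevity):

#!/bin/bash
# Minimal sketch: land the day's job logs in HDFS and load them into a partitioned Hive master table.
# Paths, table and column names are hypothetical; logs are assumed to be pre-parsed into pipe-delimited records.
RUN_DT=$(date +%Y-%m-%d)
HQL=/tmp/job_log_load_${RUN_DT}.hql

hdfs dfs -mkdir -p /data/raw/job_logs/run_dt=${RUN_DT}
hdfs dfs -put /var/dw/job_logs/*.log /data/raw/job_logs/run_dt=${RUN_DT}/

cat > ${HQL} <<EOF
CREATE TABLE IF NOT EXISTS job_log_master (
  job_name    STRING,
  error_type  STRING,
  error_text  STRING,
  log_ts      STRING
)
PARTITIONED BY (run_dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
STORED AS TEXTFILE;

LOAD DATA INPATH '/data/raw/job_logs/run_dt=${RUN_DT}/'
  INTO TABLE job_log_master PARTITION (run_dt='${RUN_DT}');

-- Error frequency that would feed the dashboard table.
SELECT error_type, COUNT(*) AS failure_cnt
FROM   job_log_master
WHERE  run_dt = '${RUN_DT}'
GROUP  BY error_type;
EOF

hive -f ${HQL}

In the full design, the aggregate results would be inserted into the dashboard table described above rather than only selected.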

PERSONAL DETAILS

Name : Kodanda Ramudu N

Date of Birth : Jun 6th 1986

Gender : Male

Marital status : Married

Nationality : Indian

Email Id : acxtim@r.postjobfree.com

Mobile : +1-515-***-****


