
Life Insurance Data

Location:
Mundelein, IL
Posted:
March 01, 2020


Resume:

Trinath Chittiprolu

Mobile: 614-***-****

Email: adb2zq@r.postjobfree.com

Professional Experience

13 years of professional IT experience in the analysis, design, development, deployment and maintenance of critical software and big data applications.

3+ years of hands-on experience across the Hadoop ecosystem, with extensive experience in Big Data technologies such as Spark, HDFS, YARN, Hive and Oozie.

Knowledge of Hadoop architecture and its components, including HDFS, Job Tracker, Task Tracker, Resource Manager, Name Node, Data Node and MapReduce concepts.

Extensively worked with Sqoop, Hadoop, Hive and Spark to import and export data across systems with varied data sources, targets and formats.

Built Spark applications using Python (PySpark) based on business requirements.

Developed Spark applications using UDFs, DataFrames and Hive SQL to write data into Hive tables and retrieve it back.

Implemented Spark SQL and Hive queries alongside programmatic data manipulation with DataFrames in Python.
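
A minimal PySpark sketch of this write-and-read-back pattern; the table and column names are illustrative, not taken from any of the projects below:

    from pyspark.sql import SparkSession

    # Hive support lets Spark SQL write to and read from managed Hive tables.
    spark = (SparkSession.builder
             .appName("hive-write-read-sketch")
             .enableHiveSupport()
             .getOrCreate())

    # Illustrative source data; in practice this would come from HDFS or Sqoop extracts.
    policies = spark.createDataFrame(
        [(1001, "LIFE", 250.0), (1002, "AUTO", 120.5)],
        ["policy_id", "product", "premium"])

    # Write into a Hive table, then retrieve the data back with Spark SQL.
    policies.write.mode("overwrite").saveAsTable("policies_stage")
    spark.sql("SELECT policy_id, premium FROM policies_stage WHERE premium > 200").show()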

Used SVN for version control to commit changes and pull the latest code from the repository.

Worked with multiple clients, with expertise in the insurance, transportation and banking industries.

Experienced in Agile methodology; completed Scrum Master certification.

Played multiple roles – Scrum master, Tech lead, Developer, and Requirements Analyst.

Led daily stand-ups and Scrum ceremonies for two Scrum teams. Worked with product owners to groom the backlog and plan sprints. Tracked, escalated and removed impediments. Reported at daily Scrum of Scrums meetings. Tracked burndown, issues and progress in VersionOne. Worked with component teams to resolve issues.

Collaborated with members of the Product, Business and Engineering teams to develop and maintain product backlogs.

Data integration expertise in developing ETL mappings with Informatica PowerCenter 9.x from disparate data sources such as flat files, WSDL, VSAM and XML files.

Experience in Incremental Aggregation, Dynamic Lookups, Transaction Control and Constraint Based Loading, Target Load Plan, Partitioning, Persistent Caching.

Experience in mainframe technologies, including COBOL, JCL, DB2, VSAM, CICS and web services.

Worked on creating DB2 stored procedures.

Strong, competitive problem solver able to evaluate multiple approaches; skillful, creative and innovative in developing efficient logic and code.

As a subject matter expert, periodically communicated operational costs and drove future initiatives by producing cost-benefit analyses.

A good team player with the ability to solve problems and to organize and prioritize multiple tasks.

Technical Skills

Operating System

Windows, Linux, MVS, z/OS, OS/390

Hadoop/Big Data

HDFS, MapReduce, Spark, HIVE, Sqoop, Impala, HBase, Hue

Languages

Python, Spark, SQL, JCL, COBOL, REXX

Databases

Oracle 11g, SQL Server, IBM DB2

Distributed platforms

Cloudera

ETL Tools

Informatica PowerCenter 9.x, SSIS

Tools

PyCharm, File-AID, File Manager, SPUFI, QMF, ChangeMan, IBM Debugger, CA Workload Automation ESP, SOAP, TFS & UCDP

Analytical Tool

Power BI, Qlik

Others

MobaXterm, Putty, WinSCP, SVN, Autosys, GitHub

Employment History

August 2019 – Present

AbbVie

December 2010 – August 2019

IBM Global Services

July 2005 – November 2010

Keane

Education

Bachelor of Technology in Information Technology, Arulmigu Meenakshi Amman College of Engineering (Anna University), Chennai, India, 2005.

Certification

CSM (Certified Scrum Master)

INS 21 Insurance Certified Professional.

IBM DB2 Certified (703) Application Developer

Awards & Recognitions

IBM GIC Onsite Talent Award for consecutive years

Year 2016 and 2017

IBM Eminence and Excellence award for consecutive years

Year 2013 and 2014

Keane DI3 Award (Delivery Integration*Innovation*Inspiration)

August 2010

Keane, K-Pin Award -2009

February 2009

Professional Experience

AbbVie, North Chicago, IL - Big Data Engineer

Project Name : Operations Business Insight (OBI)

Client : AbbVie

Start Date : 19 Aug 2019

End Date : Present

Role : Sr. Hadoop and Spark Developer

Project Description:

OBI delivers business analytics and data management capabilities across all operational units (Quality, Purchasing, Finance and Central Services). The project develops use cases, defined together with the business, to enable differentiation through analytics, using a data approach with a simple and streamlined data centralization process.

Project Responsibilities:

Involved in the complete big data flow of the application, from ingesting upstream data into HDFS to processing and analyzing the data in HDFS.

Used the Operational Data Lake (ODL) platform to load a single source of data across the OBI functional units.

Imported and exported data with Sqoop between relational database systems / SAP applications and HDFS.

Created partitioned and bucketed Hive tables in Parquet file format with Snappy compression, then loaded data into the Parquet Hive tables from Avro Hive tables.
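
A sketch of the table layout described above, with hypothetical database, table and column names standing in for the project schemas:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # Partitioned, bucketed Parquet table with Snappy compression.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS odl.claims_parquet (
            claim_id BIGINT,
            amount   DOUBLE
        )
        PARTITIONED BY (load_date STRING)
        CLUSTERED BY (claim_id) INTO 16 BUCKETS
        STORED AS PARQUET
        TBLPROPERTIES ('parquet.compression' = 'SNAPPY')
    """)

    # Dynamic partitioning lets one INSERT populate every load_date partition
    # while copying the data over from the Avro staging table.
    spark.conf.set("hive.exec.dynamic.partition", "true")
    spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")
    spark.sql("""
        INSERT OVERWRITE TABLE odl.claims_parquet PARTITION (load_date)
        SELECT claim_id, amount, load_date FROM odl.claims_avro
    """)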

Handled large datasets during the ingestion process itself using partitioning, Spark in-memory capabilities, Spark broadcasts, and effective and efficient joins and transformations.

Developed PySpark applications using DataFrames, Spark SQL, broadcast variables, UDFs and Hive SQL.
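
A compact sketch of these building blocks together, using a broadcast join of a small dimension table and a simple Python UDF; the table and column names are assumptions:

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    facts = spark.table("odl.purchase_orders")   # large fact table (hypothetical)
    dims  = spark.table("odl.vendor_dim")        # small dimension table (hypothetical)

    # UDF with illustrative cleanup logic; built-in functions are preferred when they suffice.
    normalize = F.udf(lambda code: code.strip().upper() if code else None, StringType())

    enriched = (facts
                .join(F.broadcast(dims), on="vendor_id", how="left")
                .withColumn("vendor_code", normalize(F.col("vendor_code"))))

    # Expose the DataFrame to SQL and aggregate with Spark SQL.
    enriched.createOrReplaceTempView("enriched_po")
    spark.sql("SELECT vendor_code, COUNT(*) AS po_count "
              "FROM enriched_po GROUP BY vendor_code").show()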

Analyzed existing SQL scripts and designed the solution to implement them in PySpark.

Developed shell scripts to run Hive scripts in Hive and Impala.

Involved in Hive table creation, data loading, and visualization of the data using Qlik.

Developed Hive queries to process the data and generate data cubes for visualization.
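
One way such a cube can be produced (WITH CUBE pre-aggregates every combination of the grouping columns so the visualization layer does not have to); the table and columns here are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # Pre-compute totals for every combination of plant and product_line,
    # including the grand total, and persist them for the dashboards.
    spark.sql("""
        SELECT plant, product_line, SUM(order_qty) AS total_qty
        FROM odl.operations_orders
        GROUP BY plant, product_line WITH CUBE
    """).write.mode("overwrite").saveAsTable("odl.operations_orders_cube")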

Developed Spark scripts using Python and shell commands as per the requirements.

Created an Autosys workflow for the PySpark application to extract data and load it into a partitioned Hive table.

Performed SQL joins across Hive tables to prepare input for the Spark batch process.

Provided support for the QA team during the test cycle and fixed defects in the PySpark application.

Environment: Hadoop, Python, HDFS, YARN, HIVE, Sqoop, Linux shell scripting, Spark SQL, ServiceNow, Autosys.

MetLife, Wilmington, DE - Big Data Engineer, IBM

Project Name : PCTS - Personal Closeout Tracking System

Client : MetLife

Start Date : 05 Feb 2017

End Date : 16 Aug 2019

Role : Hadoop and Spark Developer

Project Description:

Metropolitan Life Insurance Company, known as MetLife, is one of the largest global providers of insurance, annuities and employee benefit programs. It provides a full range of life insurance, dental, disability, annuity, auto and home products. MetLife has approximately 100 million customers worldwide, has operations in nearly 60 countries and holds leading market positions in the United States, Japan, Latin America, Asia, Europe, the Middle East and Africa.

Personal Closeout Tracking System (PCTS) is the application that deals with Admin data. Currently the contract list is generated monthly and sent to the Admin system to extract the life and group level information, including the rate records. After receiving the data from the Admin system, PCTS creates different reports based on business functionality. The system currently runs in a scripted environment using vendor code, and we are performing the migration using web services and XML.

Project Responsibilities:

Analyzed legacy application modules and extracted logic to implement in PySpark, along with new requirements provided by the business.

Developed PySpark applications using DataFrames, Spark SQL, broadcast variables, UDFs and Hive SQL.

Loaded data from HDFS into Hive tables using a PySpark application.
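
A minimal sketch of this HDFS-to-Hive load, assuming a delimited landing file and a hypothetical target table:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # Read the landed file from HDFS and append it to a Hive table.
    closeout = (spark.read
                .option("header", "true")
                .csv("hdfs:///data/pcts/landing/closeout/"))
    closeout.write.mode("append").saveAsTable("pcts_closeout_detail")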

Handled large datasets during the ingestion process itself using partitioning, Spark in-memory capabilities, Spark broadcasts, and effective and efficient joins and transformations.

Analyzed existing SQL scripts and designed the solution to implement them in PySpark.

Wrote Hive queries to extract the processed data.

Worked on a POC to load various structured files coming from the mainframe into Hive tables using a single PySpark program.
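
A sketch of how a single PySpark program can handle several fixed-width mainframe extracts; the layouts (field name, 1-based start position, length), paths and table names are assumptions standing in for the real copybook layouts:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # One layout per extract: field name -> (start position, length).
    layouts = {
        "contracts": {"contract_id": (1, 10), "status": (11, 2), "face_amount": (13, 11)},
        "rates":     {"contract_id": (1, 10), "rate_code": (11, 4), "rate": (15, 9)},
    }

    def load_fixed_width(path, layout, target_table):
        raw = spark.read.text(path)  # whole record arrives in a single "value" column
        cols = [F.substring("value", start, length).alias(name)
                for name, (start, length) in layout.items()]
        raw.select(cols).write.mode("overwrite").saveAsTable(target_table)

    for name, layout in layouts.items():
        load_fixed_width("/data/mainframe/" + name, layout, "pcts_" + name)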

Involved in Hive table creation, data loading, and visualization of the data using Power BI.

Developed Spark scripts using Python and shell commands as per the requirements.

Created an Oozie workflow for the PySpark application to extract data and load it into a partitioned Hive table.

Used the Spark API over Hadoop YARN as the execution engine for data analysis with Hive, and after processing and analyzing the data in Spark SQL, delivered it to the BI team for report generation.

Performed SQL joins across Hive tables to prepare input for the Spark batch process.

Developed Oozie workflows for scheduling and orchestrating the ETL process.

Reviewed the PySpark code developed by the offshore team.

Provided support for the QA team during the test cycle and fixed defects in the PySpark application.

Created mainframe programs and jobs to stage data on Linux for HDFS data movement.

Trained the offshore team to develop PySpark applications and Hive SQL to retrieve and manipulate the data.

Managed multiple projects in parallel with the offshore team across both Big Data and ETL applications.

Participated in daily Scrum meetings to review progress and was active in making the Scrum meetings more productive.

Environment: Python, Spark SQL, Hadoop YARN, Spark Streaming, Hive, Hue, Oozie, Maestro, HDFS, Zeppelin, SSIS, SQL Server and UNIX

Nationwide, Columbus, OH - Team Lead, IBM

Project Name : Customer Service Billing Application (CSB)

Client : Nationwide

Start Date : 13 Dec 2010

End Date : 31 Jan 2017

Role : Team Lead

Project Description:

Nationwide is one of the largest insurance and financial services organizations. The company operates regional headquarters in Des Moines, Iowa; San Antonio, Texas; and Gainesville, Florida. Nationwide consists of three core businesses: domestic property and casualty insurance; life insurance and retirement savings; and asset management. It provides a full range of insurance and financial services, including auto, homeowners and commercial insurance, life insurance, annuities, retirement plans, mutual funds and employer-related administrative services, and is also engaged in strategic investments.

CSB is the application that provides billing solutions for Nationwide Insurance (NI). It comprises the New Business, Statement, Receipt, Reconciliation, Renewal, Cancellation, Reinstatement, General Ledger and Endorsement processes. Nationwide acquired Allied Insurance and wants to generate and process the Allied Insurance policies from NBP, so the business decided to use NBP as the go-forward billing system and the Allied Insurance system as the policy system.

Project Responsibilities:

As an advisory subject matter expert, interacted directly with clients to gather requirements and translate them into high-level designs.

Translated customer requirements into formal requirements and design documents, established specific solutions, and led the efforts, including programming and testing, that culminated in client acceptance of the results. Delivered new, complex, high-quality solutions to clients in response to varying business requirements.

Worked as Tech Lead for offshore and onsite developers on the requirements, design and development of the CSB application.

Led daily stand-ups and Scrum calls. Worked with product owners to groom the backlog and plan sprints. Tracked, escalated and removed impediments. Reported at daily Scrum of Scrums meetings. Tracked burndown, issues and progress in VersionOne. Worked with component teams to resolve issues.

Communicated progress to senior management through burndown charts. Monitored quality through metrics and mentored the team through the project management processes.

Provided technical leadership to production support team.

Worked on creating DB2 stored procedures.

Worked with the MultiLoad, FastLoad, FastExport and TPT utilities to load and unload data to and from the Teradata database.

Used Informatica PowerCenter client tools such as the Designer, Workflow Manager and Workflow Monitor to extract data from source systems such as Oracle, flat files and the Salesforce application into a staging database and then load it into a single data warehouse.

Developed standard and reusable mappings and mapplets using various transformations like Expression, Sorter, Aggregator, Joiner, Router, Lookup (Connected and Unconnected), Filter, Update Strategy, Sequence Generator, Normalizer and Rank.

Experience in creating Reusable Tasks (Sessions, Command, Email) and Non-Reusable Tasks (Decision, Event Wait, Event Raise, Timer, Assignment, Control).

Developed shell scripts for Daily and monthly Loads and scheduled jobs using UNIX and ESP.

Transitioned changes to Run/Support team for each Production release and provide initial problem troubleshooting and resolution for Production Failures.

Performed code migration into the release packages through Harvest and ChangeMan and ran the package-level audits.

Reduced elapsed processing times to meet required SLAs.

Monitored and scheduled jobs using CA-7 and ESP.

Reviewed deliverables of all phases in SDLC i.e. Design document, Code Review, Unit & System test results and Deployment plans.

Responsible for Estimation & Tracking of the Maintenance requests.

Responsible for team building, and for mentoring and coaching new team members to make them productive in a short span of time.

Environment: Informatica 9.5, Teradata, Oracle 11g, UNIX, ESP

CSX, Jacksonville, FL - Team Lead, Keane

Project Name : Terminal Yard Management (TYMS)

Client : CSX

Start Date : 15 Jul 2005

End Date : 20 Nov 2010

Role : Team Lead

Project Description:

TYMS is an integrated Transport Management System for CSX, whose primary responsibility includes delivery of commodities via trains, rail cars, trailers, containers etc. CSX services the customer's needs by picking up the commodity at the customer's industry, switching rail into train, moving the train to destination and finally delivering the commodity to the customer or to another railroad. TYMS was designed to maintain a perpetual track-to-track inventory of cars within a yard and assist yard forces in the process that accomplishes the handling of rail traffic through a switching classification yard.

Project Responsibilities:

As a Team Lead and Analyst played a key role in various phases of the project, viz., Estimations, Enhancements and Implementations using Agile project management methodology.

Coordinated between the offshore team, the client project team and the project manager. Provided technical and business guidance to offshore team members.

Worked closely with business analysts in gathering requirements and analyzing feasibility and timelines.

Assigned tasks to offshore resources and helped them with their analysis.

As a senior developer, coded many programs using CICS, WebSphere MQ and DB2 stored procedures.

Developed and modified existing COBOL applications to send messages to the presentation layer (Java) using message queuing and MQI calls.

Analyzed, designed and developed JAVA-COBOL-CICS-IMS applications for Reengineering and SOA adoption.

Wrote application programs to transfer user data from ShipCSX to mainframe systems using COBOL/MQ Series to complete user requests.

Set up NDM (Connect:Direct) connections and FTP and SFTP mailboxes for file transmission between CSX and outside vendors.

Participated in application testing for newly created COBOL/MQ series modules.

Created, modified and enhanced jobs (JCL) and procs as part of the integration.

Designed complex mappings for date chaining and deduplication before loading the data into the target database.

Performed code migration into the release packages through ChangeMan and ran the package-level audits.

Environment: z/OS, COBOL, CICS, DB2, UNIX, SPUFI, XML, JCL, VSAM, TSO/ISPF, File Manager, FTP, Endevor, CA-7, File-AID, IBM Debugger, MQ Series, InterTest and Xpediter

Trainings and Certifications:

Project management at IBM USA

REXX tool process at IBM USA

Agile process at IBM USA

ETL tool- Informatica at IBM USA

Teradata – database concepts at IBM USA

Scrum master certification at IBM USA

References: Available upon request.


