RAMAKOTESWARARAO YAKKALA
Contact: +1-732-***-****, E-Mail:****.*******@*****.***
LinkedIn Profile: https://www.linkedin.com/in/ramakoteswararao-yakkala-b1484b75/
JOB OBJECTIVE
To secure a position with a reputable organization where I can exercise and maximize my ETL, Big Data, and DW & BI skills to meet business objectives.
PROFESSIONAL SUMMARY
With 16+ years of overall IT experience, I have worked in various roles across a varied technology stack on integration/migration projects, end-to-end delivery of Big Data customer-journey projects, and data warehousing projects. This experience includes development and operations of applications for Verizon, T-Mobile, USAA, M&T Bank and MMC.
PROFILE SUMMARY
Strong command of Data Warehousing & Business Intelligence project management involving technologies such as Informatica, Oracle and Teradata, concepts such as client/server, grid and cloud, and approaches such as dimensional, logical and physical data modeling.
Knowledge of advanced concepts in ETL, ELT, data mining/cleansing/profiling/masking, BI analytics, Big Data, Hadoop, visualization tools, Python, R, etc.
Extensively used ETL methodology for data extraction, transformation and loading in a corporate-wide ETL solution using Informatica DEI/BDM 10.4.0/10.2.2/10.2.1/10.2.0, Informatica PowerCenter 10.5.1/10.1.0/9.6/9.5.1/9.1.0/8.5/7.1 and Informatica IICS.
Developed scripts that support the smooth flow of files and processes in Cloud Data Integration (CDI) and Cloud Application Integration (CAI).
Created mapping tasks and TaskFlows based on requirement in CDI/CAI.
Created Processes, Sub-Processes and Reusable Processes, reusable service connectors and reusable app connections to Oracle Web Services, to read files from local and SFTP folders using File Parsers and to write to target folders.
Created multiple taskflows for loading data from different sources to Salesforce using the Salesforce connector with Data Synchronization Tasks and Mapping Tasks, using the Bulk API and Standard API as required.
Worked on databases like Oracle 19c,12c,11g/10g/9i, Teradata 16/15/14/13, SQL Server 2018, Netezza, Snowflake DB.
Extensive experience using SQL, PL/SQL, SQL*Plus, SQL*Loader, Unix Shell Programming/Scripting and Hadoop.
Knowledge in SQL, PL/SQL, Visio, Erwin, DataStage, Spark, Python, Scala.
Good knowledge in Cloud Platforms like Informatica IICS, Snowflake, AWS, Azure Data Factory.
Experience in loading data into the Teradata database; implemented Teradata Parallel Transporter connections and stored procedures as part of performance tuning.
Involved in Complete ETL code migration to Teradata from Oracle RDBMS.
Interacted with Management to identify key dimensions and Measures for business performance.
Extensively worked on Data migration, Data Cleansing and Data Staging of operational sources using ETL process and providing data mining features for Data warehouses.
Optimized mappings using various optimization techniques such as Pushdown Optimization (PDO).
Implemented performance tuning techniques on Targets, Mappings, Sessions and SQL statements.
Exposure to dimensional data modeling, including star schema and snowflake schema modeling.
Extensive experience in Relational database/data warehouse environment: Slowly changing dimensions (SCD), operational data stores, data marts.
Proficient in all phases of the Software Development Life Cycle (SDLC), including requirements definition, system implementation, system testing and acceptance testing, Production Deployment and Production Support.
Used job scheduling tools such as Control-M, UC4 (Appworx), Cisco Tidal, Autosys and Automic.
Substantial experience in Telecommunications, Financial and Banking Domain.
Excellent communication, interpersonal and analytical skills, with a strong ability to perform individually as well as part of a team.
EDUCATION
B.E. in ECE (Electronics and Communication Engineering) from Madras University, graduated in 2002.
TECHNICAL SKILL - SET
ETL Tools
Informatica DEI/BDM 10.4.0/10.2.2/10.2.1/10.2.0, Informatica PowerCenter 10.5.1/10.1.0/9.6.1/9.5.1/9.1.0/8.x/7.1 (Designer, Workflow Manager, Workflow Monitor, Repository Manager, Integration Services, Repository Services, Administration Console), Informatica IICS
Data Modeling Tools
Microsoft Visio 2007/2000, Erwin 7.0/4.1
Databases
Oracle 19c/12c/11g/10g/9i/8i/8.0/7.x, Teradata 16/15/14/13, SQL Server 2018, MS Access, Hive, Snowflake DB
Tools
Toad, SQL Navigator, SQL*Loader, MS Access Reports, Informatica Data Analyzer, Teradata Utilities (MLoad, FLoad, Teradata SQL Assistant)
Languages
SQL, PL/SQL, C, Python, Unix Shell Scripting/Sed, Scala
Reporting Tools
Cognos 11.x, Tableau 2020.x
Scheduling Tools
Control-M, UC4 (Appworx), Cisco Tidal, Automic, Autosys
Other Tools
Hadoop (HDFS, Pig, Sqoop, Hive, HBase), Spark, MongoDB, Flume, Kafka
Operating Systems
HP-UX, IBM AIX 4.2/4.3, MS-DOS 6.22, Windows 7, Windows NT 4.0, Sun Solaris 5.8, Unix and Red Hat Linux 7.2.2.
Cloud Technologies
Informatica IICS, Snowflake, AWS, Azure Data Factory (ADF)
CERTIFICATIONS AND TRAININGS SUMMARY
Informatica Certified Developer, Certificate No: 292412
Oracle SQL Developer Certification.
Work/Professional Experience
Client : MMC (Marsh & McLennan Companies) USA
Duration : May’2023 till date
Project Name : CIS (Customer Information Services)
Role : Data Engineer
Environment : IICS, Informatica PowerCenter 10.5.x, Oracle 19c, SQL Server, HP-UX, Automic Scheduler
Project Description:
The purpose of this project is to migrate the Oracle Financial Force and ERP systems data to Salesforce CERTINIA. This project was critical for consolidating various business processes and enhancing data integrity, and included comprehensive analysis of the existing data warehouse structures. From CERTINIA, data pipelines are created for financial business applications such as CIS (Customer Information Services) to provide reports on Project Value Add (PVA), Time Transactions, Accounts Receivable and Backlog Versions amounts for MMC (Marsh & McLennan Companies).
Roles & Responsibilities
Participated in all phases including Client Interaction, Requirements Gathering, Design, Coding, Testing, Release, Support and Documentation.
Designed, developed, and implemented ETL processes using IICS data integration.
Identified ETL specifications based on business requirements and created ETL mapping documents and high-level documentation for Product Owners.
Designed & developed mapping tasks and taskflows for large volumes of data.
Extensively used parameters (Input and IN/OUT parameters), expression macros and source partitioning.
Extensively used Cloud transformations: Aggregator, Expression, Filter, Joiner, Lookup (Connected and Unconnected), Router, Sequence Generator, Sorter and Update Strategy transformations.
Developed parameterized Cloud integration mapping templates (DB and table object parameterization) for Stage, Dimension (SCD Type-2, CDC and incremental loads) and Fact load processes.
Developed CDC load process for moving data from Source to SQL Datawarehouse using Informatica Cloud CDC for Oracle platform.
Developed complex Informatica Cloud Taskflows (parallel) with multiple mapping tasks and Taskflows.
Performed loads into Snowflake instance using Snowflake Connector in IICS to support data analytics for Treasury team.
Created scripts to create on-demand Cloud mapping tasks using the Informatica REST API.
Created scripts used to start and stop Cloud tasks through Informatica Cloud REST API calls (see the sketch after this list).
Extensively used performance tuning techniques while loading data into target systems using IICS.
Loading Data to the Interface tables from multiple data sources such as Text files and Excel Spreadsheets as part of ETL jobs.
Automate data processes using scripting languages and ETL tools to reduce manual effort and increase efficiency.
Troubleshoot data issues and provide timely resolutions to ensure data availability and integrity.
Document ETL processes, data models, and data dictionaries to ensure data lineage and facilitate knowledge sharing.
Create trouble tickets for data that could not be parsed.
Unit Testing and System Integration testing.
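A minimal sketch of how such a start/stop wrapper can look, assuming the Informatica Cloud (IICS) v2 REST login and job endpoints; the host, task ID and credentials are placeholders, and the endpoint paths and field names should be verified against the IICS REST API documentation for the org before use:

  # Placeholder region host; credentials are illustrative only
  LOGIN_URL="https://dm-us.informaticacloud.com/ma/api/v2/user/login"
  CREDS='{"@type":"login","username":"svc_iics_user","password":"********"}'

  # Log in once and capture the session id and server URL used on subsequent calls
  RESP=$(curl -s -H "Content-Type: application/json" -d "$CREDS" "$LOGIN_URL")
  SESSION_ID=$(echo "$RESP" | python -c 'import sys,json; print(json.load(sys.stdin)["icSessionId"])')
  SERVER_URL=$(echo "$RESP" | python -c 'import sys,json; print(json.load(sys.stdin)["serverUrl"])')

  ACTION=$1                       # "start" or "stop"
  TASK_ID=$2                      # mapping task id (placeholder)
  BODY="{\"@type\":\"job\",\"taskId\":\"$TASK_ID\",\"taskType\":\"MTT\"}"

  if [ "$ACTION" = "start" ]; then
    curl -s -X POST -H "Content-Type: application/json" -H "icSessionId: $SESSION_ID" \
         -d "$BODY" "$SERVER_URL/api/v2/job"
  else
    curl -s -X POST -H "Content-Type: application/json" -H "icSessionId: $SESSION_ID" \
         -d "$BODY" "$SERVER_URL/api/v2/job/stop"
  fi

A scheduler entry can then call this wrapper with a task ID to start or stop a given Cloud task on demand.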
Client : M&T Bank USA
Duration : Mar’2022 to April’2023
Project Name : PUB
Role : Sr. Informatica Developer
Environment : Informatica DEI/Big Data Edition 10.4.x, CDH 6.3, Hive, Impala, Oracle, SQL Server, HP-UX, Automic Scheduler
Project Description:
The project involves the development of data ingestion pipelines for business applications such as Marketing and Treasury: operationalized Big Data ETL pipelines that consume raw banking data, apply complex business rules, and produce US government regulatory reporting datasets for M&T Bank.
Roles & Responsibilities
Implemented Informatica BDM mappings & workflows for Data ingestion of various data sources into Hadoop Data Lake.
Implemented a robust process through Informatica BDE that helps the business team adapt dynamically to changing business needs.
Designed & developed BDM mappings for large volumes of data.
Created dynamic workflows/mappings that can handle various business models and rules for the ingestion of various data sources.
Designed and implemented dynamic mappings/workflows for the ingestions (file/DB ingestions).
Automated Informatica BDM mappings/applications using Automic scheduling.
Good understanding of the Informatica BDE architecture and its various components and their usage.
Implemented Sqoop functionality to export data from the Data Lake to relational sources (see the sketch after this list).
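As an illustration of the export step, a minimal Sqoop export sketch of the kind used to push curated Data Lake files to a relational target; the JDBC URL, schema/table names and HDFS path are hypothetical placeholders:

  sqoop export \
    --connect "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB" \
    --username etl_user \
    --password-file /user/etl_user/.ora_pwd \
    --table MKT_SCHEMA.TREASURY_FEED \
    --export-dir /data/lake/curated/treasury_feed \
    --input-fields-terminated-by '\001' \
    --num-mappers 4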
Client : USAA USA
Duration : Jan’2021 to Feb’2022
Project Name : P&C Financial LRADS (Loss Reserving Analytics Data Store)
Role : Sr. Informatica Developer
Environment : Informatica PowerCenter 10.x, Snowflake DB, Netezza, SQL Server, HP-UX, Control-M
Project Description:
USAA is a financial services group offering banking, investing and insurance to people and families who serve, or have served, in the United States Armed Forces. As part of Data Modernization, data is being migrated from legacy systems to Snowflake in the cloud for multiple areas such as Claims, Financial, Policy and Pricing applications. The LRADS process supports the Loss Reserving business team. The purpose of LRADS is to serve as a source of premium and loss data used for the estimation of reserves, trend analytics and catastrophe analytics. The LRADS process extracts financial premium and loss data from the P&C Data Warehouse (CWH/SDS) and loads it into the LRADS data mart.
Roles & Responsibilities
Implemented Informatica mappings & workflows for Data ingestion of various data sources into Snowflake.
Ensured client satisfaction by delivering defect-free deliverables on time in an Agile model.
Involved in requirement analysis (process) and worked with the BSA (Business System Analyst) and PO for further clarifications.
Responsible for E2E testing for multiple applications.
Responsible for validating data between flat files, Snowflake, Netezza and SQL Server (see the sketch after this list).
Recreated source-to-target (S2T) documents for existing Informatica mappings.
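A rough sketch of the kind of row-count reconciliation used for such validation, assuming the nzsql, snowsql and sqlcmd clients are available; host names, credentials, table names and client options are placeholders and may differ by client version:

  TABLE="CLAIMS_FACT"                                   # placeholder table name

  NZ_CNT=$(nzsql -host nzhost -d LRADS -u etl_user -pw '********' -A -t \
           -c "SELECT COUNT(*) FROM $TABLE")
  SF_CNT=$(snowsql -a myorg-myacct -u ETL_USER -q "SELECT COUNT(*) FROM LRADS.PUBLIC.$TABLE" \
           -o header=false -o timing=false -o friendly=false -o output_format=csv)
  MS_CNT=$(sqlcmd -S sqlhost -d LRADS -U etl_user -P '********' -h -1 -W \
           -Q "SET NOCOUNT ON; SELECT COUNT(*) FROM dbo.$TABLE")

  echo "Netezza=$NZ_CNT Snowflake=$SF_CNT SQLServer=$MS_CNT"
  if [ "$NZ_CNT" -eq "$SF_CNT" ] && [ "$SF_CNT" -eq "$MS_CNT" ]; then
    echo "counts match for $TABLE"
  else
    echo "MISMATCH for $TABLE" >&2
  fi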
Client : T-Mobile USA
Duration : Jan’2018 to Dec’2020
Project Name : Informatica BDE 10.x
Role : Sr. Informatica Developer
Environment : Informatica Big Data Edition 10.2.x, AWS Hadoop Distribution: Hortonworks 2.6, Teradata SQL Assistant 15/14, Teradata Utilities (BTEQ, MLoad, FLoad, FastExport), HP-UX
Project Description:
The project involves the development of data ingestion pipelines for business applications to process Finance, Digital and Marketing data. The objective of the project is to build real-time and batch data-ingestion pipelines that process incoming data from external vendors by refining the data, applying validations and transformations, and finally storing the data in the data warehouse, which is further used for reporting and analytics. Changing business needs also drive changes in existing projects and/or new launches, and migrating existing projects to current technologies improves efficiency, accuracy, throughput and cost effectiveness.
Roles & Responsibilities
Developed, implemented, and took responsibility for setting up the Informatica BDM tool from inception to production, using Apache Spark as the execution engine for running data ingestion into the Data Lake.
Worked closely with business users, the Informatica product support group and Hortonworks teams for onboarding Informatica BDE.
Implemented Informatica BDM mappings & workflows for data ingestion of various data sources into the Hadoop Data Lake.
Implemented a robust process through Informatica BDE that helps the business team adapt dynamically to changing business needs.
Involved in setting up the Hadoop configuration (Spark, Hadoop and Hive connections) for Informatica BDM and testing the various execution engines of Informatica BDE.
Designed & developed BDM mappings in the various execution engine modes (Hive, Blaze and Spark) for large volumes of INSERT/UPDATE operations.
Set standards for adoption of the ETL tool.
Created dynamic workflows/mappings that can handle various business models and rules for the ingestion of various data sources.
Designed and implemented dynamic mapping/workflow for the File ingestion and DB ingestion.
Automated Informatica workflows using Control-M scheduling.
Good understanding of the Informatica BDE architecture and its various components and their usage.
Implemented Sqoop functionality for DB ingestion and for exporting from the Data Lake to relational sources.
Designed and demonstrated creation of the schema parameter file and automated generation of parameter files for various workflows (see the sketch after this list).
Used IDQ mapping variables/outputs and captured them into workflow variables.
Involved in the POC phase of the project to benchmark against various ETL tools.
Implemented workflows and mappings on AWS cloud, utilizing S3 buckets for reading and writing data to and from the data lake.
Extensively worked on code migration from one environment to another using Informatica CLI commands.
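A hypothetical sketch of automated parameter-file generation of this kind; the table-list file, parameter names and paths are illustrative, and the [Global] section shown follows the PowerCenter-style parameter-file layout, which may differ from the actual BDM artifacts used:

  TABLE_LIST=/app/ingest/conf/table_list.txt        # one "schema|table" per line (placeholder)
  PARAM_DIR=/app/ingest/params
  RUN_DATE=$(date +%Y%m%d)

  while IFS='|' read -r SRC_SCHEMA SRC_TABLE; do
    PARAM_FILE="$PARAM_DIR/wf_ingest_${SRC_TABLE}.param"
    {
      echo "[Global]"
      echo "\$\$SRC_SCHEMA=$SRC_SCHEMA"
      echo "\$\$SRC_TABLE=$SRC_TABLE"
      echo "\$\$TGT_HDFS_DIR=/data/lake/raw/${SRC_SCHEMA}/${SRC_TABLE}/${RUN_DATE}"
      echo "\$\$LOAD_DATE=$RUN_DATE"
    } > "$PARAM_FILE"
    echo "generated $PARAM_FILE"
  done < "$TABLE_LIST"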
Client : T-Mobile USA
Duration : Jan’2018 to Dec’2020
Project Name : IDW (Integrated Data warehouse)
Role : Sr. Software Developer
Environment : AWS Hadoop Distribution: Hortonworks 2.6, Teradata SQL Assistant 15/14, Teradata Utilities (BTEQ, MLoad, FLoad, FastExport), HP-UX
Project Description:
Data is ingested into Hadoop through a framework called DMF (Data Movement Framework) onto an edge node. Once the data is available on the edge node, Hadoop jobs are triggered to move the data from the edge node into Hadoop, and Hive queries are then executed to create Hive tables and load the data into them. Transformations are applied to the ingested data to produce the desired reports based on business needs.
Roles & Responsibilities
Extensively worked with Partitions, Bucketing tables in Hive and designed both Managed and External tables in Hive to optimize performance.
Created and maintained Sqoop jobs with full-refresh and incremental loads to populate Hive external tables (see the sketch after this list).
Experience in writing Python scripts and Unix bash scripts.
Worked on Pig to perform data transformations, event joins, filtering and some pre-aggregations before storing the data onto HDFS.
Worked on the creation of Oozie workflows for daily ingestion jobs.
Developed simple to complex MapReduce jobs using Hive, Sqoop and Pig.
Configured Kerberos for user authentication to the domain.
Designed and developed Pig Latin scripts and Pig command-line transformations for data joins.
Created Hive tables on the imported data for validation and debugging.
Experience in using Sqoop to migrate data to and from HDFS and MySQL, Oracle and Teradata.
Designed and created Oozie workflows to schedule and manage Hadoop, Hive, Pig and Sqoop jobs for daily loads.
Designed and deployed a big data analytics data services platform for data ingestion into the Hadoop Data Lake.
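For illustration, a minimal sketch of an incremental Sqoop job feeding a Hive external table; the JDBC URL, table, columns and HDFS paths are hypothetical, and the appropriate JDBC driver/connector is assumed to be on the classpath:

  # Saved Sqoop job keeps the incremental watermark (--last-value) between runs
  sqoop job --create daily_orders_import -- import \
    --connect "jdbc:teradata://tdhost.example.com/DATABASE=SALES" \
    --username etl_user --password-file /user/etl_user/.td_pwd \
    --table ORDERS \
    --target-dir /data/lake/raw/sales/orders \
    --incremental append --check-column ORDER_ID --last-value 0 \
    --fields-terminated-by '\001' --num-mappers 8

  sqoop job --exec daily_orders_import

  # External Hive table defined once on top of the imported files
  hive -e "
  CREATE EXTERNAL TABLE IF NOT EXISTS raw.orders (
    order_id BIGINT, cust_id BIGINT, order_dt STRING, amount DECIMAL(18,2))
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
  LOCATION '/data/lake/raw/sales/orders';"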
Client : Verizon Data Services INDIA
Duration : June’2016 – Dec’2017
Project Name : VISION5 - Single Biller
Role : Consultant-System Development
Environment : Informatica 9.6.1, Teradata SQL Assistant 15/14, Teradata Utilities (BTEQ, MLoad, FLoad, FastExport), HP-UX
Project Description: A new instance of the VISION platform, VISION5, will be created to handle all VzT CMB billing. Under the new business process, the FiOS customers in scope for this phase will be billed by VISION5 and the revenue will be booked in vSAP via the FDW/RACE/vSAP path, in place of the current process in which these FiOS customers are billed by RIBS/CBSS and the revenue is booked in PeopleSoft today.
The objective of the Single Biller project is to move FiOS customer accounts from the RIBS billing systems into a new instance of the VzW Vision platform referred to as VISION5. aEDW and its sub-applications will make the changes necessary for the new VISION5 instance while removing the complexity involved in interfacing with five different legacy billing systems.
Challenges:
Eliminating five different billing systems and incorporating their functionality into the single V5 system adopted from Wireless; scope reduction, elimination of waste and technological consolidation are a few of the challenges.
Architecting, designing and generating the Vision ID process for new and existing customers, and the One Bill mechanism, in parallel with the Bundle Re-Architecture while continuing BAU.
Single Biller, unique customer identification (Vision ID) conversion and sync-up of the existing warehouse (6000+ tables, 200+ TB of data, 40000+ reports) is a very large data cleansing and exploration challenge.
Data Replication, Configuration, Architectural Setup using GoldenGate for Sales & IBM-CDC for Billing Data.
Meeting the ROI, KPI, and PXQ metrics by managing the Project Scope, Estimation, Costing, Release management, Configuration Management, Change Management with Best Project Management practices.
Responsibilities:
Played a key role in project initiation and was responsible for defining phases by service (FiOS, Copper) and the phases for the 6M-customer Vision conversion.
Managing Design & Architecture, Development, ITO testing & Deployment Teams of aEDW Interfaces, QUANTUS, Vision Conversion Modules.
Participated in the strategic planning of the MDR migration, project planning, and hardware & software requirements procurement for the Dev & Prod environments. Played a key role in an 85K-hour project and 3.9M in savings by retaining ownership with Verizon IT, owning the E2E solution rather than leaving it to the Teradata partnership.
Responsible for the ETL conversion tool & unit testing at the mapping level to convert the existing mappings to be compatible to run in the Teradata environment.
Participated in all phases including Client Interaction, Requirements Gathering, Design, Coding, Testing, Release, Support and Documentation.
Interacted with Management to identify key dimensions and Measures for business performance.
Involved in defining the mapping rules and identifying required data sources and fields.
Extensively used RDBMS and Oracle concepts.
ETL data loads using Teradata load utilities such as BTEQ, FastLoad & MultiLoad (see the sketch after this list).
Dimensional modelling using STAR schemas (Facts and Dimensions).
Develop ETL Jobs using various transformations to handle the Business Logic.
Preparation of Technical Specification document and Migration Documents.
Develop Informatica Mappings, Sessions and Workflows.
Involved in Performance Tuning.
Created Informatica mappings with PL/SQL procedures/functions to build business rules to load data.
Loading Data to the Interface tables from multiple data sources such as Text files and Excel Spreadsheets as part of ETL jobs.
Used various Informatica Transformations in building the ETL jobs to get the data loaded into Data warehouse.
Involved in data analysis for developing dashboards and other end-user solutions. Worked with Business Analysts and acted as a liaison between technical departments and business departments.
Analysed data flow requirements and developed a scalable architecture for staging and loading data and translated business rules and functionality requirements into ETL procedures.
Participated in development and defect resolution on existing ETL processes.
Unit Testing and System Integration testing.
Troubleshooting data and error conditions using the Informatica Debugger.
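A minimal sketch of the kind of BTEQ load step wrapped in a shell script for the scheduler; the TDPID, credentials and table names are placeholders, not the actual project objects:

bteq <<EOF
.LOGON tdprod/etl_user,********;
.SET ERROROUT STDOUT;

DELETE FROM STG_DB.BILLING_STG;

INSERT INTO STG_DB.BILLING_STG (acct_id, bill_cycle, bill_amt)
SELECT acct_id, bill_cycle, bill_amt
FROM   SRC_DB.BILLING_EXTRACT
WHERE  load_dt = CURRENT_DATE;

.IF ERRORCODE <> 0 THEN .QUIT 8;
.LOGOFF;
.QUIT 0;
EOF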
Client : Verizon Data Services INDIA
Duration : June’2016 – Dec’2017
Project Name : Customer Journey Analytics
Role : Consultant-System Development
Environment : HDP 2.4.1, Hive, Sqoop, Pig, Oozie, Apache Ambari, Spark, Scala, MySQL, Oracle, Teradata 14, SQL Server, RHEL 6
Project Description: CJA is one of the key projects built on the Hadoop platform. In this project we pull all of the customer touch-point data from different systems and implement a model that provides better insights into the customer journey, so that we can deliver a better customer experience and reduce the churn rate.
Smart cart provides an omni-channel experience to the customer. Whenever a customer interacts with any engagement channel and cannot complete the order, then even if days later he goes to some other channel, we provide the ability to complete the order from the point where the previous order was dropped. We store the customer order information as a smart cart save request for customers who completed their address validation and credit check and then reached the order page but could not place an order. Once a smart cart order becomes a sale, the record is deleted from the cart, and inactive carts are also removed after 30 days.
To provide a better customer experience and increase RCM, we started the abandoned cart project. Here we track all existing FiOS customers who come to one of the online ordering channels, Verizon.com, visit the add/upgrade services pages and then leave the website. We send these customer details to the Marketing team, which analyses the customer data from multiple aspects and runs campaigns in multiple ways, such as email communication, ads and D2D campaigns, and we also provide offers for those customers based on customer–package segmentation.
Roles and Responsibilities
Pulled RDBMS (Teradata, Oracle, SQL Server) data into HDFS using Sqoop, and pulled file feeds using shell scripts.
Developed shell scripts to insert process details into MySQL audit tables once Sqoop completes.
Created Hive tables on top of the HDFS data files.
Created domain tables partitioned on proc_date and src_system and loaded data into them.
Created the key tables, node table and fact table, and loaded data using Hive scripts.
Automated the entire process using Oozie workflow and coordinator jobs (see the sketch after this list).
Used mail action nodes to send the job status details.
Pulled smart cart save request events from the Kafka broker and loaded them into HDFS using Apache NiFi in real time.
Moved the smart cart events from the NiFi target location to the stage location in batch mode and created a Hive table on top of those files.
Applied XML parsing logic and stored the final result in one more Hive table, which is the source for the Reporting team.
Automated the complete process using the Oozie workflow scheduler.
Pulled data from Teradata to the Hadoop environment using Sqoop and created a Hive table.
Applied parsing logic on the XML table data field and extracted the required information using Hive XML functions.
Applied ETL logic & filters to get only abandoned cart data from the complete online customer data set.
Exported the final feed to Teradata using Sqoop, which is the golden source for the marketing team's campaigns.
Automated the entire job and scheduled it daily using an Oozie workflow.
Used mail action nodes to send the job status details to the appropriate teams, and also sent each action node's statistics to a MySQL table for auditing purposes.
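A condensed sketch of the daily driver script that submits the Oozie coordinator and records the run in a MySQL audit table; the Oozie URL, properties file, database, credentials and table names are placeholders:

OOZIE_URL="http://oozie-host.example.com:11000/oozie"

# Submit the coordinator and capture the Oozie job id from the "job: <id>" output line
JOB_ID=$(oozie job -oozie "$OOZIE_URL" -config /app/cja/conf/coordinator.properties -run \
         | awk -F': ' '{print $2}')

# Record the submission in the MySQL audit table used for job tracking
mysql -h auditdb-host -u audit_user -p'********' cja_audit <<EOF
INSERT INTO job_audit (job_name, oozie_job_id, status, start_ts)
VALUES ('cja_daily_ingest', '$JOB_ID', 'SUBMITTED', NOW());
EOF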
Client : Verizon Data Services INDIA
Duration : July’2014 – May’2016
Project Name : OrBiT MDR – AEDW Migration
Role : Senior Analyst-System Development
Environment : Informatica 9.6.1, Teradata SQL Assistant 14, Teradata Utilities (BTEQ, MLoad, FLoad, FastExport), HP-UX
Description: The “Increase revenues by reducing costs” initiative drove the OrBiT program, realized through the consolidation of applications, systems & data centres. As part of this, MDR (Metrics Data Repository), the operational and strategic metrics data warehouse for Verizon Telecom Services, is being migrated to aEDW, the active Enterprise Data Warehouse. This consolidation effort will eliminate the overlap of some existing order and billing data and will also enrich the aEDW data mart with new data it does not currently possess. The strategic benefits are: #) share customer-level business intelligence/analytics by having both performance (MDR) and marketing/campaign (aEDW) metrics under one platform, #) enable cross-domain analysis, and #) enhance speed to market by creating one common source for performance metrics, marketing and sales tracking & compensation.
Challenges:
Identifying the Project scope and Phases in spite of Requirements gaps, Technological Gaps and Architectural gaps between the existing and proposed migrating environment with proper Analysis.
Project Estimation, Costing & Budgeting, Identify ROI and Project planning, identifying Hardware & Software requirements involving all the stakeholders.
ETL Code Conversion to be compatible to work on Teradata Database & Plan 60+ TB Data Migration.
Setting up the environment to utilize the VDSI Resources to reduce the project costs and identify the optimal balance of onshore & offshore team sizes.
Project Execution and meet the project timelines with proper Risk management & other challenges like Monitoring controlling, Measuring the Project Progress with proper metrics, Teams Training on technologies.
Responsibilities:
Participated in the strategic planning of the MDR migration, project planning, and hardware & software requirements procurement for the Dev & Prod environments. Played a key role in an 85K-hour project and 3.9M in savings by retaining ownership with Verizon IT, owning the E2E solution rather than leaving it to the Teradata partnership.
Responsible for the ETL conversion tool & unit testing at the mapping level to convert the existing 9000+ mappings to be compatible to run in the Teradata environment.
Participated in all phases including Client Interaction, Requirements Gathering, Design, Coding, Testing, Release, Support and Documentation.
Interacted with Management to identify key dimensions and Measures for business performance.
Involved in defining the mapping rules and identifying required data sources and fields.
Extensively used RDBMS and Oracle concepts.
ETL data loads using Teradata load utilities such as BTEQ, FastLoad & MultiLoad.
Dimensional modeling using STAR schemas (Facts and Dimensions).
Develop ETL Jobs using various transformations to handle the Business Logic.
Preparation of Technical Specification document and Migration Documents.
Develop Informatica Mappings, Sessions and Workflows (see the workflow-launch sketch after this list).
Involved in Performance Tuning.
Created Informatica mappings with PL/SQL procedures/functions to build business rules to load data.
Loading Data to the Interface tables from multiple data sources such as Text files and Excel Spreadsheets as part of ETL jobs.
Used various Informatica Transformations in building the ETL jobs to get the data loaded into Data warehouse.
Involved in data analysis for developing dashboards and other end-user solutions. Worked with Business Analysts and acted as a liaison between technical departments and business departments.
Analyzed data flow requirements and developed a scalable architecture for staging and loading data and translated business rules and functionality requirements into ETL procedures.
Participated in development and defect resolution on existing ETL processes.
Unit Testing and System Integration testing.
Troubleshooting data and error conditions using the Informatica Debugger.
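For illustration, a sketch of how a scheduler wrapper typically launches such a workflow via pmcmd; the domain, integration service, folder, workflow and parameter-file names are placeholders:

pmcmd startworkflow \
  -sv INT_SVC_AEDW -d Domain_AEDW \
  -u etl_user -p '********' \
  -f MDR_MIGRATION \
  -paramfile /app/mdr/params/wf_mdr_daily_load.param \
  -wait wf_mdr_daily_load

RC=$?
if [ $RC -ne 0 ]; then
  echo "workflow failed with return code $RC" >&2
  exit $RC
fi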
A few roles & responsibilities for the below MDR enhancement projects (July 2013 to June 2014, India):
#) Consumer Affinity Programs
#) 500 GB DVR Expansions
#) Mass Market Weekly Access Line Walk
#) Verizon Websites - Downgrades
#) Sales Channel in Billing/Ribs
#) Network Walled Garden
#) California Cramming Rules - Compliance
#) FiOS TV New STB Strategy - vBundles
#) TPM Sub-Regional Reporting Restructure (CBST0139 Re-sync process)
#) Live TV Content Expansion (MPEG4)
#) FiOS Temporary Service Suspension Details to <SVC> Billing Data Marts
#) Reports for FDV New Sales vs. Migrates
Acted as a good team member & part of the change control board.
Responsible for creating the MASKING environment for the ETL DEV team to avoid project closure by the clearance team.
MDR part: involved right from JAD sessions, high-level and low-level documents, and through the full project lifecycle (design, ETL & DB development, testing, documentation, implementation by CAs, and post-implementation support until the transition).
Involved in designing the Database and functional specification of the project.
Involved in mapping, session and workflow design using Informatica 8.6; database tables and objects in Oracle 11g & TOAD; Unix shell scripts; Appworx and Tidal jobs; unit test plans; code reviews; test cases; and release-related documents and activities.
Involved in team management, performance evaluation besides assisting in selection and recruitment process, holding knowledge sharing session for the teams & Coordinating with offshore team and onsite client coordinator.
Client : Verizon Data Services INDIA
Duration : Jan’2012 – June’2013
Project Name : PROJECT NORTH – MDR – SPINCO & RETAINCO
Role : Senior Analyst
Environment : Informatica 8.x, Oracle 10g, HP-UX
Description: Verizon sold the West part of its business to Frontier, which includes the FIN, HSI, FTV and FDV businesses in 14 states & a few wire centers of CA. Project North includes a kind of demerger of the existing COMPLETE VERIZON S/W, H/W, Data, Resources and