
Big Data Test Architect

Location:
Chicago, IL
Salary:
$110
Posted:
January 23, 2020


Resume:

Upendra Kumar

adbetw@r.postjobfree.com

469-***-****

Summary:

11+ years of experience working across various domains, serving UK, European, and US customers by delivering IT services and meeting their business-related expectations.

Experience creating and reviewing Business Requirement Documents (BRDs), Functional Requirement Documents (FRDs), System Requirements Specifications (SRS), and ETL mapping documents.

Rich experience in data analysis, profiling, and pruning, finding data insights, and presenting them in an understandable format to business customers and data scientists.

Experience working in Agile methodologies (sprint planning, review, and retrospective meetings; user story creation; story point estimation; backlog planning; daily Scrum stand-up meetings; creating backlog, velocity, and burndown chart reports).

Experience working with Big Data Hadoop technologies (Hadoop cloud architecture – HDFS, MapReduce, HiveQL, Pig, Sqoop, Spark, and HBase ecosystem tools), including ETL data warehouse concepts.

Hands-on experience with complex SQL queries (DDL, DML, all types of table joins, analytical functions, aggregate functions, etc.) and PL/SQL programming (procedures, functions, triggers, cursors, exception handling); a minimal illustrative sketch follows this summary.

Experience working with Unix commands and shell scripting for creating and executing shell jobs.

Experience working with backend databases such as Oracle, SQL Server, Teradata, Cassandra, and Hive.

Experience in database server (SQL Server 2000 and 2014) configuration assessment and memory/storage requirements for data migration projects.

Experience estimating project efforts in person-days (PDs) for complex and large projects across all project stages (SG0, SG1, SG2).

Experience working in the Insurance, Healthcare, BFS (core banking), Retail, and Life Sciences domains.

Rich experience working with JIRA, GitHub, Jenkins, HP ALM, and SQUIDS tools.

Experience managing all project test phases (requirement gathering, project estimation, test plan and test strategy creation, test design, test environment setup, test data management, test execution, driving daily defect triage calls with other stakeholders, test reporting, UAT, and pre-production and post-production support and testing).

Experience working with the iBrowse automation tool for verifying application (web and desktop) compatibility on multiple browsers.

Received recognition for developing an ETL automation tool using PL/SQL and a project estimation template using Visual Basic for Applications (VBA).

Experience managing project teams and working together with the team to achieve cohesive project goals.

Highly dedicated, self-motivated, hardworking, and a believer in cohesive team effort to achieve the overall project objective.

Excellent communication and presentation skills with strong problem-solving ability.
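The SQL validation work referenced above usually reduces to reconciling a source table against its target after each load. The sketch below is a minimal, hypothetical illustration (the table names, column names, and use of sqlite3 are assumptions made for self-containment, not taken from any specific project); in practice the same DB-API pattern runs against Oracle, SQL Server, or Teradata drivers.

```python
# Minimal ETL reconciliation sketch: compare row counts and a column total
# between a source (staging) table and a target (warehouse) table.
# Table and column names are illustrative; sqlite3 keeps the example runnable.
import sqlite3

def reconcile(conn, source_table, target_table, amount_col):
    cur = conn.cursor()
    results = {}
    for label, table in (("source", source_table), ("target", target_table)):
        cur.execute(f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {table}")
        results[label] = cur.fetchone()   # (row_count, amount_total)
    return results["source"] == results["target"], results

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE stg_claims (claim_id INTEGER, paid_amount REAL);
        CREATE TABLE dw_claims  (claim_id INTEGER, paid_amount REAL);
        INSERT INTO stg_claims VALUES (1, 100.0), (2, 250.5);
        INSERT INTO dw_claims  VALUES (1, 100.0), (2, 250.5);
    """)
    ok, details = reconcile(conn, "stg_claims", "dw_claims", "paid_amount")
    print("PASS" if ok else "FAIL", details)
```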

Education:

Master's in Information Technology, IIT Kanpur (India), 2006

Technical Skill Set

Programming Languages: Core Java (1.8), Spring Boot 2.0, C, Python, and VBA for Excel.

Life Sciences Applications: eTrack and eLNB

Database: Oracle; SQL Server 2000, 2008, and 2014; Teradata; Cassandra; MongoDB; Microsoft Cosmos DB.

Operating System: Windows 7/2000/XP and UNIX (HP-UX)

Reporting Tool: IBM Cognos Framework, Tableau

Data Integration Tools: Informatica 9.1.0, Talend 6.4.1, and IBM InfoSphere DataStage 8.5

Testing Tools: OneLeo automation framework (Cucumber API), HP ALM, JIRA, SQUIDS.

Web Services: Microsoft Azure (Logic Apps) services, SAG technology (file management and MFTP transfer events)

Office Automation Tools: Outlook, Word, Excel, PowerPoint

Modeling and Data Integration Tools: MongoDB, Derby Database, NoSQL DB, Big Data Hadoop framework (HDFS, MapReduce, HiveQL, Pig, Spark, Sqoop, HBase), AnyLogic 8 University 8.1.0

Certifications:

ISTQB Certification (TCS internal)

Business Domain certification in Governance Systems from TCS Business Domain Academy

Certification in Data Structures and Algorithms from the Department of Computer Science, Indian Institute of Technology Kanpur, India

Certification in insurance from IRDA

Foundation Certification on Big Data and Hadoop Systems

ITIL Foundation certification (TCS internal)

Professional Experience:

Aug 2019 – Present

Senior Test Architect

Client: Blue Cross Blue Shield Association (BCBSA)

Project: National Data Warehouse Tokenisation (NDW Tokenisation)

Project Overview:

This is an ETL data warehouse project that stores and processes plan submission files from the various plans that are licensees of BCBSA. Processing starts with loading the files into a staging layer called Data Cleansing and Verification (DCV); the data is finally loaded into the target data warehouse, the ATOMIC schema. This data is consumed by downstream systems for their reporting and business analytics needs. We tokenize the PII/PHI fields for all members in the database and validate the data until it reaches the ATOMIC schema. As the test team, we are responsible for validating that the data flowing to downstream systems is correct.
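The kind of check this validation boils down to can be sketched as below. This is a hedged illustration only: the database, table, and column names (dcv.member_submission, atomic.member, member_ssn) are hypothetical and not the actual NDW schema, and it assumes a Spark 2.4.x installation with a Hive metastore configured.

```python
# Hypothetical PySpark checks: members are not dropped between the DCV staging
# layer and the ATOMIC schema, and the PII column carries no untokenized SSNs.
# All database/table/column names below are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("ndw-tokenization-checks")
         .enableHiveSupport()
         .getOrCreate())

dcv = spark.table("dcv.member_submission")    # staging layer (pre-tokenization)
atomic = spark.table("atomic.member")         # target warehouse layer

# 1) Row-count parity between staging and target.
assert dcv.count() == atomic.count(), "member counts differ between DCV and ATOMIC"

# 2) No raw SSN-shaped values should survive tokenization.
raw_ssn = atomic.filter(F.col("member_ssn").rlike(r"^\d{3}-\d{2}-\d{4}$")).count()
assert raw_ssn == 0, f"{raw_ssn} rows still carry an untokenized SSN pattern"

print("Tokenization checks passed")
```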

Responsibilities:

Actively involved in understanding the critical business requirements for the insurance domain and providing valuable input and suggestions to the business after consulting with other stakeholders.

Analyse the requirements (FRD, BRD, NFR, etc.) and provide ballpark and final project test estimations for project manager review.

Experience creating and reviewing requirements documents – Business, Functional, System, and ETL mapping documents – along with developing the business rules.

Rich experience working on and designing data warehouses using DW concepts (fact and dimension tables; multi-dimensional schemas such as Star, Snowflake, and Fact Constellation; SCD types 1, 2, and 3) and extensive experience with BI reporting tools such as the IBM Cognos framework.

Prepare the Master Test Plan (MTP) and Detailed Test Plan and share them with the project manager for review.

Create the Test Strategy for testing end-to-end (e2e) data flow and data accuracy.

Support the test manager's strategic goal for the team of enabling test processes, tools, and technology.

Arrange cross-functional training sessions within the team to bring team members up to speed for their maximum utilization and productivity.

Enable tools and technology for their maximum usage in test automation.

Proficient in designing efficient ETL workflows to capture and process the ETL business functionality.

Experience gathering and processing raw data at large scale by writing scripts, SQL queries, and web-scraping techniques.

Experience reviewing the test cases and traceability matrix created by the team to ensure that all requirements are covered by the test cases.

Involved in discussions with the BAs and business users to clarify gaps in requirements, ambiguities, and scope changes, if any.

Reviewed the test execution results produced by the team before giving sign-off for UAT testing to business users.

Drive daily defect triage calls (during test execution) with other stakeholders (testing, development, DBA, and other corresponding teams).

Create the Test Outcome Report (TOR) for the e2e ETL data warehouse testing performed.

Expertise in using Unix (HP-UX) commands and shell scripting techniques for data processing.

Actively involved in data cleansing and pruning in order to remove redundancy and duplication in the data.

Actively involved in data profiling and data analysis using SQL, as well as data scraping and maintaining data quality.

Worked and managed in highly stressful situations and delivered within the defined timelines.

Prudently lead and manage work with minimal supervision, accomplishing tasks on my own with the support of a wonderful team.

Good at preparing presentations and presenting meaningful insights from the data.

Tools & Technology: IBM InfoSphere DataStage 11.5, DB2 database, Unix, IBM Tivoli Workload Scheduler (TWS), Big Data Hadoop HDFS, Hive, Sqoop, and Spark 2.4.4 framework

Global Logic, Inc May 2018 – July 2019

Senior Test Architect - Automation

Client: Walgreens Boots Alliance (WBA)

Project: RxR Pharmacy Renewal 2.0

Project Overview:

Walgreens Inc. is scaling the existing pharmacy Rx processing system (IC+) into a completely new system called Stock+, which is being built on the Hadoop platform in a Kafka streaming environment. This is part of a renewal program in which Walgreens is designing and building a data warehouse to consolidate and store data from multiple heterogeneous data sources (databases for source applications – IC+, flat files, DB backup files, etc.). The consolidated data is used by business users, system analysts, and the Enterprise Data Analytics (EDA) team to analyse the data and provide business insights and predictions. This data warehouse contains historical data from eRx processing, Supply Chain Management (SCM), and Stock+ (the upgraded version of IC+, Walgreens' Rx prescription management, inventory, and processing system).

Responsibilities:

Actively involved in understanding the critical business requirements for the insurance domain and providing valuable input and suggestions to the business after consulting with other stakeholders.

Analyse the requirements (FRD, BRD, NFR, etc.) and provide ballpark and final project test estimations for project manager review.

Experience creating and reviewing requirements documents – Business, Functional, System, and ETL mapping documents – along with developing the business rules.

Rich experience working on and designing data warehouses using DW concepts (fact and dimension tables; multi-dimensional schemas such as Star, Snowflake, and Fact Constellation; SCD types 1, 2, and 3) and extensive experience with BI reporting tools such as the IBM Cognos framework.

Prepare the Master Test Plan (MTP) and Detailed Test Plan and share them with the project manager for review.

Create the Test Strategy for testing e2e data flow and data accuracy.

Support the test manager's strategic goal for the team of enabling test processes, tools, and technology.

Arrange cross-functional training sessions within the team to bring team members up to speed for their maximum utilization and productivity.

Enable tools and technology for their maximum usage in test automation.

Proficient in designing efficient ETL workflows to capture and process the ETL business functionality.

Experience gathering and processing raw data at large scale by writing scripts, SQL queries, and web-scraping techniques.

Experience reviewing the test cases and traceability matrix created by the team to ensure that all requirements are covered by the test cases.

Involved in discussions with the BAs and business users to clarify gaps in requirements, ambiguities, and scope changes, if any.

Used the Kafka environment as a source for ETL jobs to consume data (Kafka events and topics) and publish it into the EDW after the necessary business transformations in the staging layer (services – Minions, Talend jobs, etc.); a sketch of this consumption step follows this project's tools list.

Used the OneLeo automation framework (Cucumber API) to automate the ETL flow and validate the data flow.

Used HiveQL to process data on the Hive server using the Hive Thrift client UI.

Worked on MongoDB to store and manage the metadata information for business objects.

Reviewed the test execution results produced by the team before giving sign-off for UAT testing to business users.

Drive daily defect triage calls (during test execution) with other stakeholders (testing, development, DBA, and other corresponding teams).

Create the Test Outcome Report (TOR) for the e2e ETL data warehouse testing performed.

Expertise in using Unix (HP-UX) commands and shell scripting techniques for data processing.

Actively involved in data cleansing and pruning in order to remove redundancy and duplication in the data.

Actively involved in data profiling and data analysis using SQL, Python, and Big Data Hadoop technologies, as well as data scraping and maintaining data quality.

Worked and managed in highly stressful situations and delivered within the defined timelines.

Prudently lead and manage work with minimal supervision, accomplishing tasks on my own with the support of a wonderful team.

Good at preparing presentations and presenting meaningful insights from the data.

Supported business and management with clear and insightful analysis of data, including mining activities (data auditing, aggregation, validation, and reconciliation), advanced modeling techniques (agent-based modeling using the AnyLogic modeling and simulation tool), and testing, creating, and explaining results in clear and concise reports.

Tools & Technology: MongoDB, Derby database, NoSQL DB, .NET (.NET Framework), C#, Oracle, Teradata, Cassandra, Microsoft Cosmos DB, Big Data Hadoop framework (HDFS, MapReduce), HiveQL, Spark 2.4.4, Pig, Sqoop, Kafka environment, ETL Informatica, Talend 6.4.1, BI (IBM Cognos framework) tool, OneLeo Cucumber/Selenium automation framework, Talend ETL automation tool.
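The Kafka consumption step mentioned above can be sketched as below. This is a hedged illustration only: the topic name, broker address, and message fields are assumptions, and it uses the kafka-python client rather than whichever client the project actually used.

```python
# Hypothetical sketch: consume ETL source events from a Kafka topic and collect
# their keys so they can later be reconciled against the EDW staging table.
# Topic, brokers, and field names are illustrative (kafka-python client).
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "rx-inventory-events",                      # assumed topic name
    bootstrap_servers=["broker1:9092"],
    auto_offset_reset="earliest",
    consumer_timeout_ms=10_000,                 # stop after 10s with no new messages
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

event_keys = set()
for message in consumer:
    event_keys.add(message.value["prescription_id"])

print(f"Consumed {len(event_keys)} distinct prescription events")
# In the real flow, these keys would be compared against the staging table
# (e.g., via a Hive/Spark query) to confirm every published event was loaded.
```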

Nationwide Building Society, Swindon-UK April 2016 - May 2018

Senior Test Architect – Big Data Hadoop Systems

Project: Branch Office Systems Software (BOSS) Upgrade

Project Overview:

Branch Office Systems Software (BOSS) is a retail banking application used by retail users at branches, branch head offices, and other retail agencies across the United Kingdom. The BOSS upgrade is a project to upgrade the existing BOSS system from its older versions: the Windows 2003 application server to a 2012 server, COBOL to C++/C#, the SQL 2000 database server to SQL Server 2014, and other hardware component upgrades.

Responsibilities:

Actively involved in understanding the critical business requirements for the retail banking domain and providing valuable input and suggestions to the business after consulting with other stakeholders.

Involved in acquiring massive amounts of data, analysing and summarising the necessary findings, and translating them into an understandable document.

Supported the Test Manager with test process enablement and improvement; also worked closely with the TM to achieve testing milestones and deliveries.

Helped team members address any issues/doubts and maintained proper communication with no conflicts.

Actively involved in data profiling and data analysis using SQL, Python, and Big Data Hadoop technologies, as well as data scraping and maintaining data quality.

Involved in creating and reviewing functional ETL mapping documents for source and target, and responsible for developing the transformation logic between source and target as per the business requirements.

Worked and managed in highly stressful situations and achieved the milestones within the defined timelines.

Actively involved in data cleansing and pruning in order to remove redundancy and duplication in the data.

Prudently led and managed work with minimal supervision and accomplished tasks on my own.

Performed ETL automation using the Cucumber/Selenium OneLeo framework to validate source and target system data.

Good at preparing presentations and presenting meaningful insights from the data.

Supported business and management with clear and insightful analysis of data, including mining activities (data auditing, aggregation, validation, and reconciliation), advanced modeling techniques (agent-based modeling using the AnyLogic modeling and simulation tool), and testing, creating, and explaining results in clear and concise reports.

Rich experience working on and designing data warehouses using DW concepts (fact and dimension tables; multi-dimensional schemas such as Star, Snowflake, and Fact Constellation; SCD types 1, 2, and 3) and extensive experience with BI reporting tools such as the IBM Cognos framework.

Proficient in designing efficient ETL workflows to capture and execute the ETL business functionality.

Experience gathering and processing raw data at large scale by writing scripts, SQL queries, and web-scraping techniques.

Used Big Data cloud as a deployment model and SaaS as a service model for analysing data, processing it faster, and making the integrated environment easy to maintain.

Sound practical experience working on real-time projects using Unix (HP-UX) shell scripting to analyse, process, and find insights in the data.

Environment: .NET (.NET Framework), C#, Cucumber/Selenium OneLeo automation framework, SQL Server 2000, 2008, and 2014, Cassandra, Teradata, Big Data Hadoop framework (HDFS, MapReduce), HiveQL, Pig, Sqoop, MongoDB, HBase, Kafka environment, ETL DataStage data integration and BI (IBM Cognos framework), Talend 6.4.1 data integration tool.

Consumer Value Store - Retail (USA) June 2015 - March 2016

System Analyst

Project: Script Sync Welcome SMS ETL

Project Overview:

Script Sync Welcome SMS is an ETL as well as a functional project about managing overall customer drug prescriptions at Consumer Value Stores (CVS) across the United States. It deals with enrolling customers for all of their drug prescriptions under a single pickup date (the Script Sync date) at CVS stores, generating extract feed files containing customer enrollment details with the sync pickup date, and sending SMS text notifications to the customer's registered mobile phone to indicate that the prescriptions are ready at the CVS store.

Responsibilities:

Actively involved in understanding the critical business requirements in the consumer retail domain and providing valuable input and suggestions to the business after consulting with other stakeholders.

Involved in acquiring massive amounts of data, analyzing and summarizing the necessary findings, and translating them into an understandable document.

Actively involved in data profiling and data analysis using SQL, Python, and Big Data Hadoop technologies, as well as data scraping and maintaining data quality.

Involved in creating and reviewing requirement documents – Business, System, Functional, and ETL mapping documents for source and target – and responsible for developing the transformation logic between source and target as per the business requirements.

Managed and worked in highly stressful situations and delivered within the defined timelines.

Used Big Data distributed cloud as a deployment model and SaaS as a service model for analyzing data, processing it faster, and making the integrated environment easy to maintain.

Used Unix (HP-UX) shell scripting for processing and finding the hidden insights in the data.

Actively involved in data cleansing and pruning in order to remove redundancy and duplication in the data.

Prudently led and managed work with minimal supervision and accomplished tasks on my own.

Good at preparing presentations and presenting meaningful insights from the data.

Supported business and management with clear and insightful analysis of data, including mining activities (data auditing, aggregation, validation, and reconciliation), advanced modeling techniques (agent-based modeling using the AnyLogic modeling and simulation tool), and testing, creating, and explaining results in clear and concise reports.

Rich experience working on and designing data warehouses using DW concepts (fact and dimension tables; multi-dimensional schemas such as Star, Snowflake, and Fact Constellation; SCD types 1, 2, and 3) and extensive experience with BI reporting tools such as the IBM Cognos framework.

Proficient in designing efficient ETL workflows to capture and execute the ETL business functionality.

Experience gathering and processing raw data at large scale by writing scripts, complex SQL queries, and web-scraping techniques.

Performed ETL automation using the OneLeo ETL framework.

Environment: .NET (.NET Framework), C#, Cucumber/Selenium OneLeo automation framework, Oracle PL/SQL, Teradata, Big Data Hadoop framework (HDFS, MapReduce), HiveQL, Pig, Sqoop, Spark 2.4.4, HBase, Kafka, ETL Informatica and BI (IBM Cognos framework), Talend 6.4.1 data integration tool.

GSK, United Kingdom Sept 2014 - May 2015

System Analyst

Project: Clinical Prism (BIODW)

Project Overview:

Clinical Prism is an ETL-based project dealing with clinical trial study data flowing from multiple client operational systems into a consolidated database (data warehouse), the EDW, after the required transformations are applied. Customers such as BI users and data scientists use this data store for business intelligence, data mining, and reporting purposes. Data is displayed in reports as per the business requirements through materialized views and the Cognos framework. The overall objective of testing is to verify the end-to-end flow of data based on the business logic and to verify the report format and the correctness of its intended data on the Cognos framework. The Big Data Hadoop framework was used to store and process bulk amounts of data using HiveQL, Sqoop, and the HDFS file system before the transformed data was stored in the target data warehouse.
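A typical check behind this kind of testing is confirming that what was loaded into the Hadoop/Hive staging layer matches the operational source. The sketch below is a hedged illustration only: the JDBC URL, credentials, schema, and table names are assumptions, and it presumes a Spark installation with Hive support and an Oracle JDBC driver on the classpath.

```python
# Hypothetical sketch: compare per-study row counts between an Oracle source
# (read over JDBC) and the Hive staging table populated by the ingestion jobs.
# All connection details and table names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("clinical-prism-source-vs-hive")
         .enableHiveSupport()
         .getOrCreate())

# Source counts per study, read from Oracle over JDBC (assumed credentials).
source = (spark.read.format("jdbc")
          .option("url", "jdbc:oracle:thin:@dbhost:1521/CLINICAL")
          .option("dbtable", "trials.subject_visits")
          .option("user", "etl_test")
          .option("password", "change_me")
          .load()
          .groupBy("study_id").agg(F.count("*").alias("src_rows")))

# Target counts per study from the Hive staging table.
target = (spark.table("stg.subject_visits")
          .groupBy("study_id").agg(F.count("*").alias("tgt_rows")))

# Any row shown here indicates a study whose counts do not match.
diff = (source.join(target, "study_id", "full_outer")
        .filter(F.coalesce(F.col("src_rows"), F.lit(0)) !=
                F.coalesce(F.col("tgt_rows"), F.lit(0))))
diff.show()
```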

Responsibilities:

Actively involved in understanding the critical business requirements in the Life Sciences domain and providing valuable input and suggestions to the business after consulting with other stakeholders.

Involved in acquiring massive amounts of data, analyzing and summarizing the necessary findings, and translating them into an understandable document.

Actively involved in data profiling and data analysis using SQL, Python, and Big Data Hadoop technologies, as well as data scraping and maintaining data quality.

Involved in creating and reviewing requirement documents – Business, System, Functional, and ETL mapping documents for source and target – and responsible for developing the transformation logic between source and target as per the business requirements.

Managed and worked in highly stressful situations and delivered within the defined timelines.

Actively involved in data cleansing and pruning in order to remove redundancy and duplication in the data.

Prudently led and managed work with minimal supervision and accomplished tasks on my own.

Good at preparing presentations and presenting meaningful insights from the data.

Supported business and management with clear and insightful analysis of data, including mining activities (data auditing, aggregation, validation, and reconciliation), advanced modeling techniques (agent-based modeling using the AnyLogic modeling and simulation tool), and testing, creating, and explaining results in clear and concise reports.

Rich experience working on and designing data warehouses using DW concepts (fact and dimension tables; multi-dimensional schemas such as Star, Snowflake, and Fact Constellation; SCD types 1, 2, and 3) and extensive experience with BI reporting tools such as the IBM Cognos framework.

Proficient in designing efficient ETL workflows to capture and execute the ETL business functionality.

Performed ETL automation using the OneLeo automation framework.

Experience gathering and processing raw data at large scale by writing scripts, SQL queries, and web-scraping techniques.

Used Big Data multicloud as a deployment model and Platform as a Service (PaaS) as a service model for analyzing data, processing it faster, and making the integrated environment easy to maintain.

Rich experience with Unix (HP-UX) basic commands and shell scripting for processing and finding the hidden insights in the data.

Environment: .NET (.NET Framework), C#, Cucumber/Selenium OneLeo automation framework, Big Data Hadoop framework, HiveQL, Pig, Sqoop, Spark, HDFS, MapReduce, Kafka, Oracle PL/SQL, Teradata, ETL Informatica, Talend 6.4.1 data integration tool, and BI (IBM Cognos framework).

Humana Health Care Inc (USA) Mar 2013 - Aug 2014

System Analyst

Project: EDW - IFM System

Project Overview:

IFM is part of the Corporate Systems Data-mart (CSD). This system was used to generate claims lag reporting. It enables the financial analysts to make accrual entries and post them to the corporate general ledger. The data model created as part of this system is also used for MER reporting and other financial managerial reporting downstream.

CSD is used for analysis and decision making by Humana's Finance department. The IFM team integrates the data into CSD through Informatica from various data sources, including the EDW. Testers test those ETLs from a requirement and data perspective. The Big Data Hadoop framework was used to store and process bulk amounts of healthcare data using HiveQL, Sqoop, and the HDFS file system before the transformed data was stored in the data warehouse.

Responsibilities:

Actively involved in and responsible for sprint planning, review, and retrospective meetings

Responsible for creating the user stories for the finalized sprint

Responsible for estimating the story points for the user stories

Responsible for discussing business requirements and any discrepancies in the business rules with Subject Matter Experts (SMEs).

Responsible for creating, reviewing, and finalizing the requirement documents – Business, System, Functional, and ETL data mapping.

Responsible for attending the daily Scrum stand-up meeting for status updates and encouraging the team to attend it and provide their daily task updates.

Responsible for attending the backlog meeting to discuss the backlog left from the previous sprint and accommodate it into the next sprint.

Responsible for creating and managing the reports – Sprint Backlog, Velocity, Burndown chart, and Product Backlog.

Experience working in the Healthcare domain (Medicare and Medicaid plans; Rx, dental, and vision claims processing and payment for claims, etc.).

Successfully managed a team of 4 members in terms of project deliveries and professional and personal conflicts.

Responsible for appraising the team members for their yearly H1 and H2 performance appraisals.

Environment: Big Data Hadoop Framework (HDFS, MapReduce), HiveQL, Sqoop, HBase and Spark, Oracle PL/SQL (TOAD), ETL Informatica tool, Unix (HP-UX)

Orange County, California Sept 2011 - Feb 2013

Test Lead / Data Modeler – .NET Database Batch Testing

Project: Property Tax Management System (USA)

Project Overview:

TCS has offered implementation of the Property Tax Management System as a solution to determine the value of all taxable property in Orange County. It includes functional modules such as Assessment Appeal, Tax Bill Creation, Penalty Application, Claim for Refund, Accounting & Balancing, Refund Processing, Use of Funds, Payments Processing, Billing and Notices, Tax Compliance & Operations, Power to Sell Bankruptcy, Lien Management & Enforcement, Fee Based Data Services, CORTAC, and Mello Roos / 1915 Bond Assessment.

Responsibilities:

Analyzed the testing effort required for the project

Estimated the time, resources, and budget required to perform the testing and presented the estimate to management

Defined the strategies and developed the test plan for the tasks, dependencies and participants required to mitigate the risks to system quality and obtain stakeholder support for this plan

Managed the Hardware and software requirement for the Test Setup

Monitored and enforced all the processes for testing as per the standards defined by the organization

Reviewed the Test Case documents; oversaw new requirements and changes in the project

Attended the regular client call and discussed the weekly status with the client

Involved in preparation of the Test Plans, Test Scenarios, Test Cases for module, integration and system testing

Prepared Test Data for the test cases and Test Environment to execute the test cases

Reviewed and executed Test Cases prepared by other team members

Managed Defect Tracking and provided mandatory information of defects to developers

Initiated walkthroughs for every module under the supervision of BAs in order to find any missing functionality as well as to confirm coverage of functionality during testing

Maintained Summary Reports and Lessons Learnt documents from the experience of the previous project

Prepared Suggestion Documents to improve the quality of the application

Conducted Review Meets within the Team

Environment: Team Foundation Server (TFS – defect tracking tool), .NET-based batch testing using Oracle SQL and PL/SQL, Big Data Hadoop testing, SQL Server 2008

State Bank of India, India Apr 2009 - Aug 2011

Test Engineer

Project: SBI CBS Development Support

Project Overview:

It is a support-cum-development project owned by the State Bank of India. It was the biggest core banking implementation, with more than 9,000 branches across the Asian continent. TCS is customizing the product developed by Financial Network Systems (FNS Australia) as per SBI's requirements. It was developed using COBOL as the programming language and an Oracle database for storing and managing transaction entries. The application is divided into numerous modules, each owning its core banking functionality; it consists of modules such as Customer Information Facility (CIF), Loans, and Deposits.

Responsibilities:

Involved in Requirement Analysis, Test Design, Test Execution, defect reporting/testing, and giving support to UAT users for their UAT testing

Analyzed the CRs raised by the client, did impact analysis of the changes implemented, tested thoroughly, and promoted the changes to UAT

Involved in Regression test suite creation and execution for ensuring the impact of changes before promoting the changes into UAT/Production

Conducted cross-training KT sessions among the team members to enhance business as well as technical knowledge of the applications

Acquired good practical knowledge of the functionality of NPA, write-off, and QABC for loan and CC/OD accounts

Environment: Oracle 10g (TOAD), COBOL and UNIX (HP-UX)

Reserve Bank of India, India Jan 2008 - Mar 2009


