Resume


Data Manager

Location:
Ottawa, Ontario, Canada
Posted:
October 03, 2016


Resume:

VINCENT FIORILLI

NETEZZA DATABASE ADMINISTRATOR

SENIOR DB2 ARCHITECT

PERFORMANCE EXPERT

CURRICULUM VITAE

Personal Profile:

excellent at multitasking and prioritizing workload

strong interpersonal skills

excellent troubleshooter

able to support large development teams

client-focused and committed

attentive to detail and precision

holds Secret Clearance

holds TN Visa

Technical Profile:

Over 30 years as a DBA, the last 20 of which have been with NETEZZA and DB2 LUW. Has performed work in all phases of a project, including:

performance tuning at the application and database level

proof of concept with NETEZZA

migrating from DB2 to NETEZZA

installing and configuring DB2 LUW (Unix and Windows) and DB2 (mainframe)

data replication

data modeling

database design

database implementation

database maintenance

experience with data warehousing and VLDB (Very Large Databases)

E-mail: acwv8o@r.postjobfree.com

Tel: 514-***-****

PERSONAL INFORMATION

NAME: Vincent Fiorilli

LANGUAGES

SPOKEN: English, French

WRITTEN: English, French

EDUCATION

UNIVERSITY

Bachelor of Science - Computer Science and Mathematics

McGill University, 1977

COLLEGE

Diploma of Collegial Studies - Physical Sciences

McGill University, 1974

OTHER

Data Warehousing

DB2 LUW EEE / ESE

NETEZZA SQL

UNIX

ORACLE 8 Storage Structures

EXPERIENCE IN INFORMATION TECHNOLOGY over 30 years

EXPERIENCE AS A DBA over 25 years

SUMMARY OF TECHNICAL SKILLS

HARDWARE: IBM P-SERIES

SUN FIRE

STORAGE AREA NETWORK (SAN)

IBM 3090/30XX/43XX

OPERATING SYSTEMS: AIX 5.3 (KORN SHELL)

SUN SOLARIS 8, 9 (KORN SHELL)

LINUX REDHAT

OS/VS/MVS/XA, OS 390, Z/OS (PARALLEL SYSPLEX)

LANGUAGES: SQL

ISPF DIALOG MANAGER

COBOL

DATABASE:

NETEZZA – PUREDATA FOR ANALYTICS

DB2 LUW (VERSION 6, 7, 8, 9, 10.1, 10.5)

DB2 LUW PURESCALE / PUREDATA FOR TRANSACTIONS

DB2 LUW ESE WITH DATABASE PARTITIONING (MPP)

DB2 Z/OS (VERSION 7 WITH DATA SHARING)

ORACLE 8 (ACADEMIC)

DATA MODELLING TOOLS: Infosphere Data Architect

POWER DESIGNER

ERWIN

BACHMAN FOR DB2

SYSTEM ARCHITECT

DATA MODELING TECHNIQUES: THIRD NORMAL FORM (Entity Relationship),

STAR and SNOWFLAKE schemas (DIMENSIONAL MODELING)

HISTORY OF WORK

COMPANY | YEAR OF MANDATE | DURATION | FUNCTION

IBM - Canadian Border Services Agency | 2015 - 2016 | in progress | Database Architect NETEZZA / DB2

La Capitale Insurance | 2015 | 1 month | NETEZZA SME / Performance Expert

Deloitte Consulting - State of Connecticut | 2015 | 5 months | DB2 LUW Technical Lead / Advisor / SME / DBA

Government of Canada | 2012 - 2014 | 36 months | NETEZZA / DB2 MPP Data Warehouse DBA

University of Ottawa | 2011 | 2 months | DB2 Migration Expert

Aetna Insurance | 2010 - 2011 | 9 months | Senior DB2 MPP Data Warehouse DBA

Blue Cross Blue Shield (Wellmark) | 2009 - 2010 | 6 months | Senior DB2 MPP Architect / Performance Expert

TATA Communications (Teleglobe) | 2007 - 2009 | 24 months | Senior DB2 MPP Database Architect / MPP Performance Expert

IBM Canada | 2006 - 2007 | 8 months | Senior DBA DB2 LUW

Canadian National Railway | 2006 | 4 months | Senior DBA DB2 LUW

Royal Canadian Mounted Police - RCMP | 2006 | 5 months | Senior DBA DB2 LUW / DB2 ZOS

Tech Data | 2005 - 2006 | 4 months | Senior DBA DB2 LUW

Royal Canadian Mounted Police - RCMP | 2004 - 2005 | 12 months | Senior DBA DB2 LUW / DB2 ZOS

IBM - Morgan Stanley | 2005 | 2 months | DB Architect DB2 LUW

The GAP | 2004 | 1 month | DBA DB2 ZOS / DB2 LUW, Data Propagator expert

CIGNA | 2001 - 2004 | 36 months | DBA DB2

MERCK - MEDCO | 2000 | 3 months | DBA DB2

SAPIENT CORPORATION | 1999 - 2000 | 6 months | DBA DB2 LUW

AXA CANADA TECH | 1998 - 1999 | 12 months | DBA DB2

BELL HELICOPTER TEXTRON CANADA | 1997 - 1998 | 12 months | DBA DB2

HYDRO-QUEBEC | 1996 - 1997 | 6 months | DBA DB2

BELL SYGMA | 1995 - 1997 | 22 months | DBA DB2

FINANCIAL COLLECTION AGENCY | 1996 | 4 months | Data Architect

HYDRO-QUEBEC | 1992 - 1994 | 20 months | DBA DB2

COMPANY: IBM - CBSA (Ottawa, Ontario)

YEAR OF MANDATE: 2015 - 2016

DURATION OF MANDATE: 17 months (in progress)

FUNCTION: Consultant "Database Architect Netezza / DB2"

As a Database Architect working on the E-Manifest (USA/Canada Border Crossings) project, provided technical services as follows:

Create Netezza databases

Design and implementation of Netezza tables

Create Distribution keys which ensure data is evenly distributed across Data Slices

Design of table columns ensuring that Netezza Zone Maps are used, thus allowing data to be skipped during query processing

Design of Netezza Clustered Based Tables and Materialized views

Establish Referential Integrity to improve query performance

Manage NZ objects using nzadmin

Use nzadmin and nzhw to monitor / manage NZ SPUs/Disks

Using nzload, load NZ tables with massive amounts of data

Validate performance of newly designed / modified tables

Grooming of tables and generating statistics

Analysis of execution plans and tuning of SQL / NZSQL and/or redesign tables

Ensuring that workload on SPUs is balanced

Ensuring/maximizing co-location of tables in SQL table joins

Minimizing broadcast joins thus reducing network traffic and communication between SPUs

Code, test and troubleshoot procedures using NZSQL and LINUX scripting

Backup and restore of Netezza databases using nzbackup (Full, Cumulative, Differential, Schema Only) and nzrestore (entire database and table level)

Using nz_migrate migrate database table(s) from one NPS server to another

Configure ODBC/JDBC drivers

Advocate and implement “Best Practices” for statistics gathering, reorgs and backups

Advise, mentor and lead other DBAs in the group on Netezza technology

Work with Data Modelers and Enterprise Architect

Work and support ELT Developers

Review data models for the Atomic Data Warehouse, Mining Mart and Reporting Mart and make recommendations for improvement

Using Aginity Workbench make DDL changes

Using Aginity Workbench Explain SQL and make changes to the Netezza databases to improve performance

Conceive, design and develop a system which reconciles data replication from the source to target database (DB2 MPP to Netezza)

Develop scripts, stored procedures (using sql and nzplsql), federated objects, triggers which collect and store metrics and issue alerts when there are discrepancies between source (DB2 MPP) and target (Netezza) databases

Plan and install DB2 PureScale

Make recommendations for server architecture (CPUs, memory, disk, network) of Members and Caching Facility

Work with Linux administrators on configuration of CPU, disks, file systems, memory

Create cluster file systems (GPFS) from raw disks for database and logs

Supported / mentored / advised / led a CBSA team of 6 DBAs on DB2 PureScale

Advise management on technical architecture and direction

Perform high availability / failover tests

Performance testing, monitoring and tuning

Create and configure MDM (Master Data Management) database

Plan, install, test CDC (Infosphere Data Replication) which is used to propagate database changes from DB2 Pure Scale to the CBSA Data Warehouse (DB2 MPP and Netezza)

Develop Linux scripts which are used to monitor, stop and start CDC on source Member VMs

Configure TSAMP (Tivoli System Automation for Multiplatforms) to use the above scripts for automatic Member failover

Planning, installation and configuration of IBM Data Server Manager (aka. Optim Performance Manager) on PureScale

Create PureScale MDM databases

Configure PureScale to do backups and log archiving using TSM

Resolve complex database and database server issues

Perform database restores and recoveries on PureScale using TSM

Write and implement Linux database backup scripts for PureScale

Write and implement Linux database monitoring/alerting scripts for PureScale

Migrate databases from DB2 non-PureScale to PureScale

Resolve performance issues using IBM Data Server Manager, db2top and other tools

Created DB2 PureScale Installation Guide
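The routine Netezza work above (distribution keys, grooming, statistics, TSM backups) can be sketched as follows. This is an illustration only: the database and table names (EMANIFEST, TRIP_FACT) are hypothetical, and RUN=echo makes the script print the nz* commands instead of executing them, since those utilities exist only on an NPS host.

```shell
#!/bin/sh
# Dry-run sketch of routine Netezza administration.
# EMANIFEST / TRIP_FACT are hypothetical names.
RUN=echo
DB=EMANIFEST

# Hash-distribute the fact table on a high-cardinality key so rows
# spread evenly across data slices
DDL="CREATE TABLE TRIP_FACT (trip_id BIGINT, crossing_dt DATE, qty INT)
DISTRIBUTE ON (trip_id);"

# Reclaim logically deleted rows, then refresh optimizer statistics
GROOM="GROOM TABLE TRIP_FACT RECORDS ALL;"
STATS="GENERATE STATISTICS ON TRIP_FACT;"

$RUN nzsql -d "$DB" -c "$DDL"
$RUN nzsql -d "$DB" -c "$GROOM"
$RUN nzsql -d "$DB" -c "$STATS"

# Full backup through the TSM connector, as in the environment above
BACKUP="nzbackup -db $DB -connector tsm"
$RUN $BACKUP
```

On a real appliance, RUN would simply be left empty so the commands execute.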

ENVIRONMENT:

Red Hat Linux, Netezza (IBM PureData)

Netezza 6+1 S-Blades (96 CPUs, 96 SPUs, 96 FPGAs, 240 Data Slices, 64 Terabytes)

Netezza Platform Server (NPS) 7.2 LINUX

Netezza Performance Portal

NzAdmin

Aginity Workbench for PureData System for Analytics

TSM (used for NZ backups and restores)

db2top, db2mon

IBM Data Studio

IBM Data Server Manager (aka Optim Performance Manager)

Change Data Capture (aka InfoSphere Data Replication)

Master Data Management (MDM)

DB2 MPP 9.7

DB2 PureScale 10.5

COMPANY: La Capitale Insurance (Quebec, Quebec)

YEAR OF MANDATE: 2015

DURATION OF MANDATE: 1 month

FUNCTION: Consultant "NETEZZA SME / Performance Expert"

As a NETEZZA SME / Performance Expert provided technical services as follows:

perform a health check analysis of the entire Netezza environment

verify if Netezza best practices are being followed

work with architects and review the logical data model

review the physical data model and make recommendations for improvement (denormalization, proper distribution of data across data slices, co-location and other techniques)

implementation of Cluster Based Tables and Materialized Views

analyze Explain plans for problem SQL

analysis and recommendations on zone maps

improve performance of queries (some queries went from execution time of several hours to seconds)

review the entire ETL process and make recommendations for improvement

identify in-house DBA skills that were lacking or needed improvement
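The Explain-plan review above can be sketched like this; the query and table names (CLAIMS, POLICY_DIM) are hypothetical, and RUN=echo prints the command rather than running it against a real NPS host.

```shell
#!/bin/sh
# Dry-run sketch of reviewing a Netezza access plan.
RUN=echo
DB=DWPROD

# EXPLAIN VERBOSE shows whether the join is co-located on the
# distribution key or forces a broadcast/redistribution between SPUs
EXPLAIN_SQL="EXPLAIN VERBOSE SELECT p.region, SUM(c.amount)
FROM CLAIMS c JOIN POLICY_DIM p ON c.policy_id = p.policy_id
GROUP BY p.region;"

$RUN nzsql -d "$DB" -c "$EXPLAIN_SQL"
```

A co-located join on the shared distribution key is what turns hours-long queries into seconds, as noted above.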

ENVIRONMENT:

Red Hat Linux, IBM PureData, IBM Netezza

Netezza 6+1 S-Blades

Netezza Platform Server (NPS) 7.0.4, LINUX

Netezza Performance Portal

NzAdmin

Aginity Workbench for PureData System for Analytics

Informatica

COMPANY: Deloitte Consulting - State of Connecticut (Lake Mary, Florida)

YEAR OF MANDATE: 2015

DURATION OF MANDATE: 5 months

FUNCTION: Consultant "DB2 LUW Technical Lead / Advisor / SME / DBA"

As a DB2 LUW Technical Lead / Advisor / SME / DBA provided DB2 technical services for Deloitte on various Health Care systems being developed for the State of Connecticut.

Lead and mentor the DBA team

Provide DB2 support for a team of over 50 developers

Work with managers and architects on defining requirements and advising on best DB2 solutions / practices

Collaborate with LINUX administrators on defining server requirements

Use Erwin to review and make changes to the data model

Plan and execute database schema changes

configure HADR

Upgrade DB2 10.1 to 10.5 FP 5 with PureScale

Configure/optimize database manager, database and DB2 registry

Configuration of Buffer Pools

Create and configure instances (including DB2 Pure Scale instances)

Design and implement tables and indexes

Design and implement foreign keys

Design and implement Columnar Tables using DB2’s BLU acceleration

Monitoring of instances and databases for performance, locking and other issues

Analysis of access paths and tuning of SQL

Create scripts for backups (Full, Incremental and Delta), alerting and reports

Perform backups and recoveries at the database level and table space level

Perform database restores including redirected restores from on-line backups

Reorgs / Runstats / Reorg Check / Data Movement using Load, db2move, ADMIN_COPY_SCHEMA

Review Stored Procedures and make / implement recommendations to improve performance

Set up Work Load Manager to control locking issues and run-away queries

Advocate and implement best practices for statistics gathering, reorgs and backups

Give training/courses to application developers on SQL performance tuning

Enforce best practices for database security

Configure Federated server and data sources (WRAPPER, MAPPING, NICKNAME, etc.)

Investigated and resolved DB2 server/database issues
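The HADR setup listed above can be sketched with the standard configuration parameters. Host names, ports, instance and database names are hypothetical placeholders, and the db2 commands are printed rather than executed (no DB2 instance is assumed here).

```shell
#!/bin/sh
# Dry-run sketch of a basic HADR configuration.
DB=HCPROD

CFG="UPDATE DB CFG FOR $DB USING \
HADR_LOCAL_HOST db2a.example.org HADR_LOCAL_SVC 55001 \
HADR_REMOTE_HOST db2b.example.org HADR_REMOTE_SVC 55001 \
HADR_REMOTE_INST db2inst1 \
HADR_SYNCMODE NEARSYNC HADR_TIMEOUT 120"

START_STANDBY="START HADR ON DB $DB AS STANDBY"
START_PRIMARY="START HADR ON DB $DB AS PRIMARY"

echo "db2 \"$CFG\""            # run on both servers with hosts swapped
echo "db2 \"$START_STANDBY\""  # start the standby first...
echo "db2 \"$START_PRIMARY\""  # ...then the primary
```

Starting the standby before the primary is the usual order, since the primary will wait for (or fail without) a peer within HADR_TIMEOUT.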

ENVIRONMENT:

DB2 10.1 and 10.5 (Pure Scale)

HADR

Db2top, Db2mon

IBM Data Studio

Erwin

IBM InfoSphere Optim Performance Manager

LINUX

COMPANY: Government of Canada (Ottawa, Ontario)

YEAR OF MANDATE: 2012 - 2014

DURATION OF MANDATE: 36 months

FUNCTION: Consultant "NETEZZA / DB2 MPP Data Warehouse DBA"

As a NETEZZA DBA / DB2 LUW DBA provided technical services as follows:

Migrate to MPP (Massively Parallel Processing) DPF BCU environment

Make recommendations for server architecture (CPUs, memory, disk, network)

Create a plan for the migration of a single partition database to a 16+1 partition database

Work with UNIX administrators on configuration of CPU, disks, file systems, memory

Determine which tables to replicate and which to hash

Review data model and determine Hash Keys

Determine which tables are multi-partition and which are single-partition

Determine which single-partition tables to Replicate (based on queries with table joins)

Determine Distribution Keys (Partition Key) for partitioned tables

NETEZZA Proof of Concept

Work with IBM on a proof of concept to establish/validate performance improvements of Netezza versus DB2 MPP

Establish and document functional differences between Netezza and DB2 MPP

Load tables using nzload and compare times with DB2 load

Run resource intensive queries on Netezza and compare times with DB2

Explaining of queries and review access plans

Migrate to NETEZZA

Convert DB2 DDL to Netezza DDL

Creation of hundreds of tables and other Netezza objects

Design Netezza Distribution keys for DB2 single partition tables

Develop and test data migration scripts which pipe DB2 data and load this data on Netezza using nzload

Use the above scripts to migrate the entire DB2 data warehouse to Netezza

Daily Activities

Use nzadmin and nzhw to monitor / manage NZ SPUs/Disks

Design and implementation of Netezza tables ensuring proper data distribution and co-location of data for queries

Design of table columns ensuring that Netezza Zone Maps are used

Design of Netezza Clustered Based Tables and Materialized views

Establish Referential Integrity to improve query performance

Manage NZ objects using nzadmin

Using nzload load NZ tables with massive amounts of data and validate performance of newly designed / modified tables

Grooming of tables and generating statistics

Analysis of access paths and tuning of SQL / NZSQL and/or redesign tables

Code, test and troubleshoot procedures using NZSQL and LINUX scripting

Backup and restore of Netezza databases using nzbackup (Full, Cumulative, Differential, Schema Only) and nzrestore (entire database and table level)

Using nz_migrate migrate database table(s) from one NPS server to another

Configure ODBC/JDBC drivers

Advocate and implement “Best Practices” for statistics gathering, reorgs and backups

Advise, mentor and lead other DBAs in the group on Netezza technology

Work with Data Modelers and Enterprise Architect

Work with ETL Developers on changing strategy from ETL to ELT

Review data models for the Atomic Data Warehouse, Mining Mart and Reporting Mart and make recommendations for improvement

Using Aginity Workbench create Netezza databases, make DDL changes

Using Aginity Workbench Explain SQL and make changes to the Netezza databases to improve performance

Encryption of data

Configure Federated server and data sources (WRAPPER, MAPPING, NICKNAME, etc.)

Upgrade DB2 9.5 to 9.7

Configure/optimize database manager, database and DB2 registry

Configuration of Buffer Pools

Create and configure instances

Create and configure databases

Design of tables and indexes

Design and implement MQTs (Materialized Query Table)

Design and implement MDC (Multi-Dimension Clustering) tables

Monitoring production databases for performance and other issues

Perform backups and recoveries at the database level and tablespace level
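The pipe-based DB2-to-Netezza migration described above can be sketched as follows. The database and table names are hypothetical, and the commands are printed (not executed) so the sketch runs anywhere; on the real servers, the DB2 export writes delimited rows into a named pipe while nzload consumes it concurrently.

```shell
#!/bin/sh
# Dry-run sketch of migrating one DB2 table to Netezza through a pipe.
PIPE=/tmp/mig.pipe

EXPORT_CMD="db2 \"EXPORT TO $PIPE OF DEL MODIFIED BY COLDEL| SELECT * FROM DW.SALES_FACT\""
LOAD_CMD="nzload -db DWPROD -t SALES_FACT -df $PIPE -delim '|'"

echo "mkfifo $PIPE"
echo "$EXPORT_CMD &"   # DB2 writes pipe-delimited rows in the background
echo "$LOAD_CMD"       # nzload reads the pipe as data arrives
```

Because nothing lands on disk between export and load, this approach scales to the multi-terabyte volumes mentioned above.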

ENVIRONMENT:

Red Hat Linux, IBM PureData, IBM Netezza

Netezza 6+1 S-Blades (96 CPUs, 96 SPUs, 96 FPGAs, 240 Data Slices, 64 Terabytes)

Netezza Platform Server (NPS) 7.0.4, LINUX

Netezza Performance Portal

NzAdmin

Aginity Workbench for PureData System for Analytics

TSM (used for NZ backups and restores)

SVN

Db2top, Db2mon, db2pd

IBM Data Studio

IBM Design Studio

IBM InfoSphere Data Architect

IBM InfoSphere Optim Performance Manager

Datastage

Change Data Capture

SPARC Enterprise M9000, 32 CPU

COMPANY: University of Ottawa (Ottawa, Ontario)

YEAR OF MANDATE: 2011

DURATION OF MANDATE: 2 months

FUNCTION: Consultant "DB2 Migration Expert"

Migration from DB2 Mainframe to DB2 AIX

As the DB2 SME technical lead for this position the following were performed:

make recommendations and guide the migration of DB2 ZOS to DB2 AIX

review current environment and recommend server architecture on AIX

make recommendations for partitioning/parallelism

capacity planning at OS level (disk space, CPUs)

design of file systems and maximizing of striping

instance and database creation

database manager configuration

database configuration

tablespace and buffer configuration

mentor team members and oversee activities

support team members for complex problem resolution

preparation of Federated objects

preparation of DB2 Audits

DB2 monitoring and resolving performance issues

oversee activities pertaining to ongoing replication of data from mainframe to AIX

preparation of backup and restore scripts
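The federated-object preparation above can be sketched with the standard DRDA wrapper DDL. Server, schema, and credential names are hypothetical; in practice the file would be fed to db2 -tvf against the AIX database.

```shell
#!/bin/sh
# Sketch of federated objects (wrapper, server, mapping, nickname)
# for reaching DB2 z/OS tables from DB2 LUW. All names are placeholders.
cat > /tmp/fed_objects.sql <<'SQL'
CREATE WRAPPER DRDA;
CREATE SERVER ZOS_SRV TYPE DB2/ZOS VERSION '9.1' WRAPPER DRDA
  AUTHORIZATION "feduser" PASSWORD "secret" OPTIONS (DBNAME 'ZOSDB');
CREATE USER MAPPING FOR aixuser SERVER ZOS_SRV
  OPTIONS (REMOTE_AUTHID 'zosuser', REMOTE_PASSWORD 'secret');
CREATE NICKNAME STG.CUSTOMER FOR ZOS_SRV.PRODSCH.CUSTOMER;
SQL
echo "db2 -tvf /tmp/fed_objects.sql"   # printed, not executed, in this sketch
```

The nickname lets migration and replication-verification queries join mainframe and AIX tables in one statement.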

ENVIRONMENT: MVS, Z/OS, DB2 V9.1,

DB2 Connect, Distributed Data Facility (DDF),

AIX 5.3, DB2 V9.7, db2mon, db2top, nmon

COMPANY: Aetna Insurance (Hartford, Connecticut)

YEAR OF MANDATE: 2010 - 2011

DURATION OF MANDATE: 9 months

FUNCTION: Senior DB2 MPP Data Warehouse Database Administrator

As a Senior Data Warehouse DBA, the following activities were performed for this insurance company in the Data Warehousing arena:

Review the logical models for various projects

Review physical data models and make detailed recommendations for optimization

Review the DB2 setup at the OS level (file systems, striping, etc.) and identify problems and make recommendations for improvement

Make best practices recommendations for use of DB2 V9 features (table/range partitioning, row compression, self tuning memory, MQTs, MDCs, Statistical Views, etc.)

Database performance tuning through database physical design, configurations and implementations of MQTs, MDCs, database and table partitioning, indexing and related settings

SQL analysis and tuning of complex queries (includes rewriting the SQL and/or tuning the database)

Resolve problems / issues

Performance monitoring of the servers and DB2

Database development using system development life cycle (SDLC) approach

Use of DB2 Advisors

Provide DB2 LUW related technical expertise to the development team

Advise Data Architects on issues/questions

Review the ETL and make recommendations for improvement

Design and create Tables, Table Spaces, Indexes, etc.

Migration of DDL from Dev to Prod

Review the DB2 instance setup, make recommendations for improvement and implement

Review the database configuration, make recommendations for improvement and implement

Create/modify DB maintenance scripts

Create/modify DB Alert and Health scripts
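The physical-design features recommended above (hash distribution across DPF partitions, range partitioning, MDC) can be sketched in one table definition. The table and column names are hypothetical.

```shell
#!/bin/sh
# Sketch of a DPF fact table combining the DB2 V9 features above:
# hash distribution, monthly range partitions, and MDC clustering.
cat > /tmp/claim_fact.sql <<'SQL'
CREATE TABLE DW.CLAIM_FACT (
  claim_id    BIGINT        NOT NULL,
  claim_month DATE          NOT NULL,
  region_cd   CHAR(3)       NOT NULL,
  amount      DECIMAL(12,2)
)
DISTRIBUTE BY HASH (claim_id)          -- spread rows across DB partitions
PARTITION BY RANGE (claim_month)       -- one data partition per month
  (STARTING '2010-01-01' ENDING '2010-12-31' EVERY 1 MONTH)
ORGANIZE BY DIMENSIONS (region_cd);    -- MDC clustering on region
SQL
echo "db2 -tvf /tmp/claim_fact.sql"    # printed, not executed, in this sketch
```

Choosing a high-cardinality distribution key and a low-cardinality MDC dimension is the usual trade-off: the first balances the partitions, the second keeps MDC cells densely packed.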

ENVIRONMENT: The production environment consisted of the main data warehouse database (see below) and 3 Data Mart databases.

Production Data Warehouse Database:

DPF - 21 AIX P595 servers (114 CPUs, 1 Terabyte memory, > 50 Terabyte disk)

AIX 5.3,

DB2 LUW V9.5 / V9.7 (with DPF - 114 DB partitions),

MDC, MQT, Range Partitioning, Statistical Views,

Data Stage, Business Objects,

db2mon, db2top, nmon, Workload Manager,

IBM Data Studio, Infosphere Data Architect

COMPANY: Blue Cross Blue Shield - Wellmark (Des Moines, Iowa)

YEAR OF MANDATE: 2009 - 2010

DURATION OF MANDATE: 6 months

FUNCTION: Senior DB2 MPP Architect – DB2 Performance Expert

Provide DB2 subject matter expertise and technical leadership to the corporate DW architects, DBAs, developers, business analysts and systems analysts on the development of the Integrated Data Store.

As a DB2 Architect / DB2 Performance Expert, the following activities were performed for this insurance company in the Data Warehousing arena:

Review the conceptual and logical models

Review the physical data model and make detailed recommendations for its optimization

Review the DB2 setup at the OS level (file systems, BCU, striping, etc.) and identify problems and make recommendations for improvement

Make best practices recommendations for use of DB2 V9 features (table/range partitioning, row compression, self tuning memory, MQTs, MDCs, Statistical Views, etc.)

Database performance tuning through database physical design, configurations and implementations of MQTs, MDCs, database and table partitioning, indexing and related settings

SQL analysis and tuning of complex queries which the in-house DBAs were unable to tune (includes rewriting the SQL and/or tuning the database)

Resolve problems / issues which the in-house DBAs were unable to

Performance monitoring of the server and DB2

Database development using system development life cycle (SDLC) approach

Use of DB2 Advisors

Recommend and configure Workload Manager (previously Query Patroller and DB2 Governor)

Provide DB2 LUW related technical expertise to the development team

Mentoring and leading of the corporate DBAs

Advise Data Architects on issues/questions

Review the ETL and make recommendations for improvement

Review the DB2 instance setup, make recommendations for improvement and implement

Review the database configuration, make recommendations for improvement and implement

DB maintenance scripts – determine what is needed and recommend

Alert and Health scripts – determine what is needed and recommend
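The Workload Manager configuration recommended above can be sketched as a service class for ad hoc work plus a threshold that stops runaway queries. The object names, application pattern, and time limit are hypothetical.

```shell
#!/bin/sh
# Sketch of WLM controls: route ad hoc Business Objects queries into
# their own service class and kill anything running past 30 minutes.
cat > /tmp/wlm.sql <<'SQL'
CREATE SERVICE CLASS adhoc_sc;
CREATE WORKLOAD adhoc_wl APPLNAME('bo*') SERVICE CLASS adhoc_sc;
CREATE THRESHOLD stop_runaways
  FOR SERVICE CLASS adhoc_sc ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN ACTIVITYTOTALTIME > 30 MINUTES STOP EXECUTION;
SQL
echo "db2 -tvf /tmp/wlm.sql"   # printed, not executed, in this sketch
```

This replaces the Query Patroller / DB2 Governor role mentioned above with the native WLM objects introduced in DB2 9.5.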

ENVIRONMENT: Production:

DPF - 2 servers (40 CPUs, 80 GB memory - 10 DB partitions)

M4000 - Coordinator/Catalog server (8 CPUs, 16 GB memory - 2 DB partitions)

M5000 - Data Server (32 CPUs, 64 GB memory - 8 DB partitions)

DB2 V9.5 (with DPF),

MDC, MQT, Range Partitioning, Statistical Views,

Power Designer,

Data Stage, Business Objects,

Solaris 10, Netbackup,

db2mon, db2top, Workload Manager

IBM Data Studio

COMPANY: TATA Communications (Teleglobe) (Montreal, Quebec)

YEAR OF MANDATE: 2007 - 2009

DURATION OF MANDATE: 24 months

FUNCTION: Senior DB2 MPP Database Architect – MPP Performance Expert

Work under the supervision of the Business Relationship Manager in the IT group for Corporate and Data Warehousing (DW) projects and interact with the Corporate DW Architect, Business Analyst and the DW production DBAs.

The following activities were performed for this large international telecom company:

Review the conceptual model (snowflake schema)

Review the physical data model (snowflake schema) of this 10-terabyte database (largest fact table contained over 3 billion rows) already deployed in production and make detailed recommendations for its optimization

Challenge IBM regarding its previous efforts with performance and set-up of DB2

Produce a high-level architecture report for management outlining observations, recommendations and best practices

Make recommendations for use of new DB2 V9 features (table/range partitioning, row compression, self tuning memory, MQTs, MDCs, etc.)

Database performance tuning through database physical design, configurations and implementations of MQTs, MDCs, database and table partitioning, indexing and related settings

Review the DBM and DB setup, the ETL (Informatica) processes and make/implement recommendations to improve performance and identify problem areas

SQL analysis and tuning (ad hoc queries were generated by Business Objects)

Performance monitoring (CPU, disk I/O waits, locking, etc.)

Installation of DB2 V9 on new servers and migration of DB2 V8 database to V9

Installation of Fix Packs

Work with OS sysadmin and make recommendations for DB2 file systems and BCU set-up

Creation and configuration of instance and database

Perform Backups and Restores

Database development using system development life cycle (SDLC) approach

Use of DB2 Advisors

Use of Query Patroller and DB2 Governor

Review the production environment and make recommendations to automate the maintenance and monitoring of DB2 and the warehouse

Provide DB2 LUW related technical expertise to the development team

Mentoring of Production DBAs

CUBES OLAP :

The purpose of this project was to design and develop a combined fact table (snowflake schema) from the above warehouse, to be queried by hundreds of users through an OLAP tool (NovaView). The queries are ad hoc and random. Using MQTs, MDCs and other DB2 features, query response time was improved by a factor of 100 to 1000 compared to normal indexing: many queries which took 1 hour to run would now run in 3 seconds or less. The result was users querying the fact tables through an OLAP tool with OLTP response times.

Designed the ETL for the combined fact table including the refreshing of the MQTs.
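The MQT approach behind that speed-up can be sketched as a deferred-refresh summary table that the optimizer rewrites matching ad hoc queries against. Table and column names here are hypothetical.

```shell
#!/bin/sh
# Sketch of a deferred MQT over a combined fact table, refreshed by
# the ETL; the optimizer routes matching GROUP BY queries to it.
cat > /tmp/mqt.sql <<'SQL'
CREATE TABLE DW.REV_BY_REGION_MONTH AS (
  SELECT region_cd, YEAR(call_dt) AS yr, MONTH(call_dt) AS mth,
         SUM(revenue) AS revenue, COUNT(*) AS calls
  FROM DW.CDR_FACT
  GROUP BY region_cd, YEAR(call_dt), MONTH(call_dt)
) DATA INITIALLY DEFERRED REFRESH DEFERRED;

-- repopulate after each ETL load so rewrites stay current
REFRESH TABLE DW.REV_BY_REGION_MONTH;
SQL
echo "db2 -tvf /tmp/mqt.sql"   # printed, not executed, in this sketch
```

An OLAP query that would scan billions of fact rows instead reads a few thousand pre-aggregated ones, which is where the 100x-1000x gains come from.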

ENVIRONMENT: Production:

BCU MPP - 5 servers (20 CPUs - 17 DB partitions)

Eserver 326m, AMD64 DS4300/DS4800

1 server used as the Admin/Catalog node and the other 4 servers as database partition nodes.

DB2 LUW V8 (with DPF) & V9 (with DPF)

DB2 Query Patroller, DB2 Governor

DB2 Data Warehouse Edition (DB2 DWE) Version 9.1.1

Linux (Redhat), Netbackup

NMON, db2mon

COMPANY: IBM Canada (Montreal, Quebec)

YEAR OF MANDATE: 2006 - 2007

DURATION OF MANDATE: 8 months

FUNCTION: Consultant "Senior Database Administrator"

As senior DBA, worked mainly on the Enterprise Data Warehouse and Decision Systems Data Marts of the National Bank of Canada.

This 5-terabyte data warehouse / data mart will contain 5 years of operational data and consists of 325 tables; the largest table is 1.8 billion rows (500 GB).

The following activities were performed:

plan, install and configure DB2 Data Warehouse Edition

plan and install DB2 ESE

plan, install, configure and administer DB2 Query Patroller

plan and install DB2 Fix Packs

convert non-partitioned instances to partitioned (Dev and Pre-Prod) and create a partitioned instance in Prod

configure / tune instances for best performance (Sort Heap, FCM Buffers, etc.)

created partitioned databases in Dev, Pre-Prod and Prod

configure / tune databases for best performance (TSM, Logs, Buffer pools, IO Servers, etc.)

work with the UNIX administrator to design file systems which will maximize striping of Table Spaces

make use of the DB2 Configuration Advisor and the DB2 Design Advisor

create SMS and DMS table spaces (Regular, System Temp and User Temp)

make recommendations for partitioning of tables, MDC (Multi-Dimensional Clustering) tables and Replicated MQTs (Materialized Query Table)

perform SQL reviews, run Explains and make recommendations

performance monitoring and tuning using VMSTAT, IOSTAT, TOPAS, snapshot monitor and the event monitor

perform online and offline database and table space backups (Full, Incremental and Delta) and write scripts for backups

restore multi-partitioned databases and table spaces and Roll Forward

perform re-directed restores of partitioned databases from one server to another

create/maintain UNIX scripts which do monitoring, alerts, etc.

research, recommend and implement DB2 features which reduce maintenance costs and DB2 outage and improve performance

provided technical direction and support for the development team

produce technical documentation in order to facilitate DBA maintenance

problem determination and resolution

work with the DBA team and architect to develop best practices for instances, databases, backups, etc.

mentoring of junior DBAs

provide on-call production support
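The redirected restore from one server to another mentioned above follows a fixed three-step pattern. The database name, backup timestamp, and container path below are hypothetical, and the db2 commands are printed rather than executed.

```shell
#!/bin/sh
# Dry-run sketch of a redirected restore onto a different server.
DB=DWPROD
TS=20070115120000

R1="RESTORE DB $DB TAKEN AT $TS INTO $DB REDIRECT"
R2="SET TABLESPACE CONTAINERS FOR 2 USING (PATH '/db2/dev/ts02')"
R3="RESTORE DB $DB CONTINUE"

echo "db2 \"$R1\""   # pauses the restore so containers can be redefined
echo "db2 \"$R2\""   # map each table space to the target server's paths
echo "db2 \"$R3\""   # resume and complete the restore
```

On a partitioned database, the same sequence is repeated per database partition (via db2_all or per-partition scripts), which is what makes these restores worth scripting.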

ENVIRONMENT: Production:

Data Warehouse: 1 P570 with 8 CPUs (64-bit) and 46.5 GB of memory,

Data Marts: 1 P570 with 4 CPUs (64-bit) and 16 GB of memory

AIX 5.2, DB2 8.2 (FixPak 14) 64 bit with Database Partitioning Feature (DPF),

DB2 Data Warehouse Edition 9.1.1, DB2 Query Patroller

2 instances / 2 databases

Support on other Environments

Also provided DB2 production support on other various AIX, LINUX and Windows platforms which included both partitioned and non-partitioned databases.

Number of servers supported: 37

Number of instances supported: 40

Number of databases supported: 71

COMPANY: Canadian National Railway (Montreal, Quebec)

YEAR OF MANDATE: 2006

DURATION OF MANDATE: 4 months

FUNCTION: Consultant "Senior Database Administrator"

As a DBA, worked on several projects including the Car Accounting Historical Data Warehouse.

This 3-terabyte warehouse will contain 7 years of operational data when fully populated.

The following activities were performed:

design and implement partitioning of tables using DPF

design and implement MDC (Multi-Dimensional Clustering) tables

design and implement replicated MQTs (Materialized Query Table)

created fact tables and dimension tables

creation and use of User Temp Tables

analysis and implementation and promotion of DDL across environments

implementation of User Defined Functions

implementation and tuning of stored procedures

perform SQL reviews, run Explains, make and implement recommendations

performance monitoring and tuning using NMON, Quest Central and the event monitor

exporting/importing/loading of tables

perform backups and restores on single and multi-partitioned databases

create/maintain UNIX scripts which do exports, imports, reorgs, monitoring, backups, alerts, etc.

design a strategy which will automatically roll off partitions of range-partitioned tables and implement the strategy using a stored procedure

problem determination and resolution
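The roll-off strategy above can be sketched with the ALTER TABLE ... DETACH / ADD syntax of DB2 table partitioning (introduced in DB2 9.1, shown here purely for illustration). Table and partition names are hypothetical, and the commands are printed rather than executed.

```shell
#!/bin/sh
# Dry-run sketch of rolling the oldest quarter out of a
# range-partitioned history table and adding the next one.
OLDEST=PART_1999Q1

ROLLOFF="ALTER TABLE DW.CAR_HIST DETACH PARTITION $OLDEST INTO DW.CAR_HIST_ARCHIVE"
ADD="ALTER TABLE DW.CAR_HIST ADD PARTITION PART_2006Q3 STARTING '2006-07-01' ENDING '2006-09-30'"

echo "db2 \"$ROLLOFF\""   # oldest quarter becomes a standalone archive table
echo "db2 \"$ADD\""       # make room for the incoming quarter
```

Wrapping these statements in a stored procedure driven by the current date is what automates the roll-off described above.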

ENVIRONMENT: Production: 2 P570s with 8 CPUs (64-bit) and 32 GB of memory each

AIX 5.3, NMON, HADR, Data Propagator, Quest Central for DB2

DB2 8.2 (FixPak 11) 64 bit with Database Partitioning Feature (DPF)

Development - 3 instances, 3 databases

Unit and acceptance testing - 10 instances, 12 databases

Production - 6 instances, 6 databases
