
Teradata TFL: Teradata, Hive, Oracle, Unix

Location:
Sunnyvale, CA
Salary:
120-140
Posted:
March 23, 2018


Resume:

Kanwaldeep Singh Multani

812-***-****

*************@*****.***

Santa Clara, CA

SUMMARY:

Extensive knowledge of Data Warehouse and Data Mart applications.

Experience in evolving strategies and developing architecture for building a data warehouse.

Expertise in data warehousing and data migration.

Extensive knowledge of all phases of the application development life cycle: planning, designing, coding, testing, and implementing mainframe applications.

Extensive use of Teradata utilities: FastLoad, MultiLoad, TPump, TPT.

Extensively used various features of Teradata, Vertica, Hadoop/Hive, and PL/SQL.

Experienced in interacting with end users and analyzing business needs and requirements.

Strong ETL/ELT design and development skills.

Expertise in Metadata models.

Strong leadership, interpersonal and oral/written communication skills.

COMPUTER SKILLS:

OS: UNIX, MAC OS

Languages: PL/SQL, SQL

Databases: TERADATA, Vertica, Hive, ORACLE

Scripting: Unix Shell scripting

Scheduling: Autosys, Oozie

CERTIFICATIONS:

Teradata 12 Certified Professional

IBM Certified Database Associate (DB2 9 Fundamentals)

EXPERIENCE:

GBI (Global Business Intelligence): iTunes Financials

Client: Apple Inc., CA 01/2012 – Till Date

Role: Teradata Developer

The Apple GBI (Global Business Intelligence) reporting enhancement project builds the reporting structure for Apple ITMS (iTunes Store) Sales, Marketing, and Finance teams.

GBI iTunes Financials provides financial vendor reporting on iTunes purchases. The project ingests data from source systems into the Apple enterprise data warehouse environment, applying multiple transformations and summarizations.

Environment: Mac OS, Teradata, Vertica, Hadoop-Hive.

Responsibilities:

-Understanding business requirements and checking their feasibility

-Studying the existing application and performing impact analysis for new changes to be incorporated

-Reviewing functional specifications with the functional team and performing gap reviews

-Creating technical specifications and preparing implementation strategy and execution plan.

-Writing Teradata stored procedures and shell scripts for data loading (see the sketch after this list)

-Writing Hive queries for data aggregation

-Extracting data from Teradata and Vertica for data analysis

-Managing infrastructure tasks as both developer and PM

-Working on proofs of concept (Spark) with new technologies

-Leading an offshore team of 5 members
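
A minimal sketch of the kind of shell wrapper used for the stored-procedure loads above, assuming a BTEQ client on the path. The env file, database, and procedure name (EDW_ETL.LOAD_SALES_DAILY) are hypothetical illustrations, not objects from this resume.

#!/bin/ksh
# Hypothetical load wrapper: call a Teradata stored procedure via BTEQ.
. /apps/gbi/etc/profile.env   # assumed to export TD_HOST, TD_USER, TD_PWD

bteq <<EOF
.LOGON ${TD_HOST}/${TD_USER},${TD_PWD}
CALL EDW_ETL.LOAD_SALES_DAILY(DATE - 1);   -- load yesterday's data
.IF ERRORCODE <> 0 THEN .QUIT 8
.LOGOFF
.QUIT 0
EOF

rc=$?
[ $rc -ne 0 ] && { echo "Load failed, rc=$rc"; exit $rc; }
echo "Load completed"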

Company Profile / Project Experience:

Teradata Business Projects

Description

This involves creating and enhancing existing business functionality to cater to new business requirements in Teradata.

Responsibilities

Models: Subscription-based models, simple transactional models

Aggregates: Daily, Monthly, Quarterly

Use of stored procedures and shell scripting for data loading

Metadata-driven loading and reporting (see the sketch after this list)

Creating processes with scheduled/manual runs for ad hoc processing

Use of validations to ensure data correctness

Use of Autosys to schedule the processes

Archiving and purging older data and reports

Data replication from primary to secondary

Data quality checks between primary and secondary
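
A hedged sketch of the metadata-driven loading mentioned above: a control table lists active load jobs and a driver script iterates over them. The control table ETL_CTRL.LOAD_JOBS, its columns, and load_one_table.sh are illustrative assumptions.

#!/bin/ksh
# Hypothetical metadata-driven driver: read the job list from a control
# table, then run one loader per entry.
. /apps/gbi/etc/profile.env

bteq <<EOF | grep '^JOB|' > /tmp/jobs.lst
.LOGON ${TD_HOST}/${TD_USER},${TD_PWD}
SELECT 'JOB|' || TRIM(job_name) || '|' || TRIM(target_table)
FROM   ETL_CTRL.LOAD_JOBS
WHERE  is_active = 'Y';
.LOGOFF
EOF

while IFS='|' read tag job tgt; do
    /apps/gbi/bin/load_one_table.sh "$job" "$tgt" || exit 8
done < /tmp/jobs.lst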

Hive Business Projects

Description

This involves creating and enhancing existing business functionality to cater to new business requirements in Hadoop-Hive.

Responsibilities

Aggregates: Daily, Monthly

Use of HQL for data aggregation

Use of map-side / sort-merge bucket joins for query performance (see the sketch after this list)

Scheduling jobs through Oozie

Use of parallelism while running Hive queries

Cluster to Cluster migration

Ad hoc data loading and history rebuilding

Restartability through Oozie
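
An illustrative daily aggregate showing the join and parallelism settings listed above. All database and table names are hypothetical, and sort-merge bucket joins additionally require both tables to be bucketed and sorted on the join key.

#!/bin/ksh
# Hypothetical daily Hive aggregate with map-side / SMB join settings.
RUN_DT=${1:-$(date +%Y-%m-%d)}

hive -hiveconf run_dt="${RUN_DT}" <<'EOF'
SET hive.auto.convert.join=true;                  -- map-side join for small tables
SET hive.optimize.bucketmapjoin=true;             -- bucket map join
SET hive.optimize.bucketmapjoin.sortedmerge=true; -- sort-merge bucket join
SET hive.exec.parallel=true;                      -- run independent stages in parallel

INSERT OVERWRITE TABLE gbi.sales_daily_agg PARTITION (ds = '${hiveconf:run_dt}')
SELECT s.store_id, SUM(s.amount) AS total_amount
FROM   gbi.sales s
JOIN   gbi.stores st ON s.store_id = st.store_id
WHERE  s.ds = '${hiveconf:run_dt}'
GROUP BY s.store_id;
EOF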

Data Analytics (Vertica)

Description

This involved creating aggregates in Vertica and enhancing existing and ad hoc queries for data analytics.

Responsibilities

Aggregate building in Vertica

Use of partition swapping from intermediate tables to main tables (see the sketch after this list)

Creating projections in Vertica to allow faster data retrieval

Data replication from one Vertica environment to another

Debugging and solving issues
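
A hedged Vertica sketch of the two techniques above: a projection ordered for the common access path, and a partition swap from an intermediate table into the main table. Object names are illustrative assumptions; SWAP_PARTITIONS_BETWEEN_TABLES is a built-in Vertica function.

#!/bin/ksh
# Hypothetical Vertica aggregate publish: projection + partition swap.
RUN_DT=${1:-$(date +%Y-%m-%d)}

vsql -h "${VERTICA_HOST}" -U "${VERTICA_USER}" <<EOF
-- projection ordered by the usual filter columns for faster retrieval
CREATE PROJECTION gbi.sales_agg_by_day
    AS SELECT ds, store_id, total_amount
       FROM gbi.sales_agg
       ORDER BY ds, store_id;

-- move the finished day's partition from the intermediate table
SELECT SWAP_PARTITIONS_BETWEEN_TABLES(
           'gbi.sales_agg_stage', '$RUN_DT', '$RUN_DT', 'gbi.sales_agg');
EOF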

Regression Automation

Description

This project involved automating the regression testing performed as part of many business and infrastructure projects. The objective is to reduce manual effort, thereby reducing the level of effort for any project.

Responsibilities

Metadata-driven steps to be followed for automation

Scripting the flow using the metadata

Validating the data within the same flow

Ensuring restartability of the steps after any failure

Notification module to alert the user when a step fails

Checksum comparison of pre- and post-change reporting scenarios (see the sketch after this list)

Validating the reports against the DB
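
A sketch of the checksum step in the regression flow: reports captured before and after a change are compared file by file. The directory layout and report extension are hypothetical.

#!/bin/ksh
# Hypothetical pre/post report comparison using cksum.
PRE_DIR=/apps/gbi/regression/pre
POST_DIR=/apps/gbi/regression/post

for pre in ${PRE_DIR}/*.rpt; do
    name=$(basename "$pre")
    pre_sum=$(cksum "$pre" | awk '{print $1}')
    post_sum=$(cksum "${POST_DIR}/${name}" 2>/dev/null | awk '{print $1}')
    if [ "$pre_sum" != "$post_sum" ]; then
        echo "MISMATCH: $name"     # the notification module would alert here
    fi
done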

TD Expansion/Failover

Description

This project involved expansion of Teradata databases, requiring failover and catch-up activities. In GBI all applications work in Primary and DR mode. During TD expansion activities, application teams need to switch to the other DB, making it their primary. Once the DB is upgraded, the team needs to perform a catch-up activity to ensure all the data is loaded in the upgraded DB.

Responsibilities

Data comparison between Primary and DR to ensure data accuracy before expansion

Analyzing the processes that would require catch-up after the DB upgrade

Scripted failover activity from Primary to Secondary

Monitoring the processes running from the new DB

Loading the lagged data into the upgraded DB once expansion is completed

Data comparison between the two DBs (see the sketch after this list)

Failover activity from Secondary to Primary
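
A hedged sketch of the primary-vs-DR comparison: the same aggregate query is exported from both systems and the outputs are diffed. Host variables and the GBI.SALES table are illustrative assumptions.

#!/bin/ksh
# Hypothetical row-count comparison between primary and DR Teradata systems.
for host in ${PRIMARY_TD} ${DR_TD}; do
    bteq <<EOF
.LOGON ${host}/${TD_USER},${TD_PWD}
.EXPORT REPORT FILE=/tmp/cnt_${host}.out
SELECT ds, COUNT(*) AS row_cnt FROM GBI.SALES GROUP BY ds ORDER BY ds;
.EXPORT RESET
.LOGOFF
EOF
done

if diff /tmp/cnt_${PRIMARY_TD}.out /tmp/cnt_${DR_TD}.out > /dev/null; then
    echo "Primary and DR in sync"
else
    echo "Lag detected - catch-up required"
fi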

Best Practices:

D2P: Following the development-to-production process

Restartability: Ensuring restartability in all processes to avoid re-execution of completed steps

Reusability: Generic function files are created that can be referenced in all scripts; a single profile file (env variables) serves all scripts (see the sketch after this list)

Deployment: Code is migrated from dev to test and test to prod using a tool, so that the same version of tested code is deployed in production

Security: Using encrypted DB credentials

Business Validations: A validation switch kept in the DB (ON/OFF) governs whether a validation should run or can be skipped

Peer Reviews: Peer code reviews to ensure all coding standards are followed

Data Quality Checks: Validation between primary and secondary DBs to ensure data correctness
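
An illustration of the reusability practice above: every script sources one profile file for environment variables and one generic function file. The paths and helper bodies are hypothetical.

#!/bin/ksh
# Hypothetical ETL script skeleton built on shared profile and functions.
. /apps/gbi/etc/profile.env      # single profile: TD_HOST, TD_USER, ETL_HOME, ...
. /apps/gbi/etc/functions.sh     # shared helpers, e.g.:
#   log()     { echo "$(date '+%Y-%m-%d %H:%M:%S') $*"; }
#   run_sql() { bteq < "$1" || { log "SQL failed: $1"; exit 8; } }

log "Starting daily sales load"
run_sql "${ETL_HOME}/sql/load_sales.sql"
log "Load complete"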

Production Implementation strategy:

-Pre-Implementation Activities: Code comparison, approvals

-Implementation: Checklist including all impacted objects and the status of their deployment in each system

-Post-Implementation: Validations to ensure all required changes are done

-Monitoring: Monitoring of processes after changes

-Production Data Validation: Functional validation of data after the production run

Declaration:

I, Kanwaldeep Singh Multani, do hereby declare that the information given above is true and correct to the best of my knowledge.


