
Data SQL Server

Location:
Hyderabad, Telangana, India
Salary:
$55
Posted:
April 14, 2017


ABDUL

aczs4s@r.postjobfree.com

512-***-****

Summary

More than 8 years of IT experience as an ETL DataStage Developer / Data Warehouse Analyst.

Over 6 years of experience in data modeling, data warehouse design, development, and testing across the ETL and data migration life cycle using IBM WebSphere DataStage 11.x/9.x/8.x/7.x.

Experience in data modeling, star schema/snowflake schema modeling, fact and dimension tables, and physical and logical data modeling.

Used both pipeline parallelism and partition parallelism to improve performance.

Identified and tracked slowly changing dimensions, worked with heterogeneous sources, and determined the hierarchies in dimensions.

Experience in implementing DB2, DB2/UDB, Oracle 10.2/9i/8i/8.0, SQL Server, PL/SQL, and SQL*Plus database applications.

Experience in requirement analysis, design, development, testing, and implementation of data warehouse and database business systems.

Used ETL tools to create jobs that extract data from different databases, flat files, and mainframe files.

Extensively implemented error-handling concepts; strong testing, debugging, and performance-tuning skills for targets, sources, and transformation logic; used ClearCase and version control extensively to promote jobs.

Experience in all testing phases: unit testing, integration testing, regression testing, and performance testing.

Strong skills in analysis, database design, coding in SQL and PL/SQL, developing and writing UNIX shell scripts, and code/script testing. Coded stored procedures, packages, and triggers using PL/SQL (a brief sketch follows this summary).

Coordinate with team members and administer all onsite and offshore work packages.

Schedule and plan all DataStage tasks. Review all functional business applications and specifications.

Design block diagrams and logic flowcharts and prepare computer software designs.

Maintain and perform tests across all phases of the Software Development Life Cycle to ensure the required product quality.

Good communication, interpersonal and presentation skills.

Excellent work ethics and quick learner.

Willing to learn and adapt to new technologies and third-party products.

Seeking a challenging position in an environment that will allow me to utilize my professional and interpersonal skills, while promoting personal growth.
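
As a brief illustration of the UNIX shell and PL/SQL skills listed above, the sketch below shows one way a stored procedure can be invoked from a shell script; the schema, package, and procedure names are hypothetical, not drawn from any specific engagement.

    #!/bin/sh
    # Invoke a (hypothetical) PL/SQL load procedure from a UNIX shell script.
    # WHENEVER SQLERROR makes sqlplus exit non-zero if the block fails.
    sqlplus -s "$DB_USER/$DB_PASS@$DB_SID" <<'EOF'
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    BEGIN
       load_pkg.load_customer_dim;   -- hypothetical package.procedure
    END;
    /
    EOF
    if [ $? -ne 0 ]; then
        echo "load_customer_dim failed" >&2
        exit 1
    fi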

Technical Skills:

Data Warehouse: IBM DataStage 9.1/8.1/7.5/7.1 (EE/Parallel Extender), DataStage Server Edition, QualityStage, IBM Information Analyzer, MetaStage 6.0

Languages: SQL, T-SQL, PL/SQL, SQL*Plus, C, C++, VB, JDBC, XML

Databases: Teradata V2R12/V2R13, Oracle 11g/10g/9i/8i/8.0/7.0, SQL Server 2012/2008 R2/2008/2005, UDB DB2 9.1/9.7, MS SQL Server 6.5/7.0, MS Access 7.0/97/2000, Excel

Dimensional Modeling: Data Modeling, Star Schema Modeling, Snowflake Modeling, Facts and Dimensions, Physical and Logical Data Modeling, Erwin 3.5.2/3.x

Operating Systems: Windows 2000/NT/XP/Vista/98/95, Windows Server 2008/2003, UNIX

Others: TOAD, Netezza, Active Batch 9.0, MS Office

Academic Qualifications

Bachelor of Technology in Electrical and Electronics Engineering.

Professional Experience

Whataburger LLC, San Antonio, TX June '15 – Present

ETL DataStage Developer

Description: Concentric receives most of the invoices for Whataburger. They scan in the invoices, give the user the ability to code each invoice or match it to PO receipts, and send the results to JD Edwards. Concentric produces an AP Immediate Export file, which is mapped to the layout that the EDI tables require.

Responsibilities:

Participate in all phases, including requirement analysis, design, coding, testing, production support, and documentation.

Monitor the daily batch process and fix any issues that arise during the process.

Create mapping documents from business requirements and design the ETL specification documents.

Design and develop DataStage jobs for cleansing incoming data into common data specifications, and use QualityStage to match data by applying business rules covering data loads and data cleansing.

Design and develop DataStage jobs to load dimension and fact tables for different data marts based on the granularity of data extracted from different sources.

Deploy DataStage jobs and fine-tune them by implementing best practices, environment variables, and parameters, and by following naming standards.

Perform performance tuning of the jobs by interpreting their performance statistics.

Analyze the existing informational sources and methods to identify problem areas and make recommendations for improvement. This required a detailed understanding of the data sources and researching possible solutions.


Work on data integration and migration from SQL Server to IBM Netezza (see the migration sketch after this list).

Convert complex job designs into separate job segments executed through the job sequencer for better performance and easier maintenance.

Enhance the reusability of jobs by creating and deploying shared containers and multiple instances of the jobs.
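
As a rough sketch of how one table's move from SQL Server to Netezza can be scripted (all server, database, and table names here are hypothetical; bcp and nzload are the standard bulk utilities of the two products, assumed to be available on the ETL host):

    #!/bin/sh
    # Export one table from SQL Server as pipe-delimited character data ...
    bcp StageDB.dbo.invoice out /data/mig/invoice.dat -c -t "|" -S sqlhost -T

    # ... then bulk-load the same file into the matching Netezza table.
    nzload -host nzhost -db STAGEDB -t INVOICE \
        -df /data/mig/invoice.dat -delim "|"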

Environment: WebSphere DataStage 11.5, SQL Server 2012, Netezza, Active Batch 9.0

Citi Bank, San Antonio, TX Oct '14 – May '15

ETL Data Stage Developer

Description: FFD (Federated Financial Data store) is an account repository that provides a single enterprise view of the account data for Citi Bank customers. FFD houses Consumer credit card accounts, Legacy BAC Small Business credit card accounts, HELOC/ELOC card accounts, Automated Investment accounts, Mutual Fund Columbia accounts and Trust accounts.

Responsibilities:

Analyzed the existing informational sources and methods, understood customer expectations, and identified the problem areas.

Documented designs and solutions that meet business requirements with technical vision and best practices.

Coordinated with the onshore team and performed peer reviews of the deliverables.

Implemented an effective approach to code reuse that runs the same job in multiple instances at the same time for different business scenarios, by passing a parameter value for the file name along with the invocation ID (see the sketch after this list).

Involved in discussions with SMEs to validate the reports generated by the Investigate stage and prepared the ACL (check list) document from suggestions made by the Business.

Created CRs (Requests for Change) and obtained approval through a scheduled meeting to bring the changes into the production environment.

Transitioned the applications to new team members, covering both technical and application knowledge of the system.

Interacted with the end-user community to understand the business requirements and identify data sources.

Analyzed the existing informational sources and methods to identify problem areas and make recommendations for improvement. This required a detailed understanding of the data sources and researching possible solutions.

Worked on FastTrack for mapping source fields to target fields, Business Glossary for writing all business terms, technical terms, and term linkage, and Metadata Workbench to check the lineage of data flow from source to target.

Used QualityStage to ensure consistency and to remove data anomalies and spelling errors from the source information before it was delivered for further processing.

Extensively used the DataStage Director to monitor job logs and resolve issues.

Analyzed performance and monitored workload for capacity planning.

Maintained and performed tests across all phases of the Software Development Life Cycle to ensure the required product quality.
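
The multi-instance approach mentioned above can be sketched with the DataStage dsjob command line as follows; the project, job, parameter, and file names are hypothetical.

    #!/bin/sh
    # Source the DataStage environment, then start two instances of the
    # same job; each gets its own invocation ID and file-name parameter.
    . /opt/IBM/InformationServer/Server/DSEngine/dsenv

    dsjob -run -param SourceFile=/data/in/accounts_a.dat FFDProj LoadAccounts.INV_A &
    dsjob -run -param SourceFile=/data/in/accounts_b.dat FFDProj LoadAccounts.INV_B &
    wait    # both instances run concurrently; wait for them to finish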

Environment: WebSphere DataStage 8.5/9.1, QualityStage, DB2 UDB, Oracle 10g, IBM InfoSphere Metadata Workbench

Barclays Bank, Delaware, DE Aug '13 – Oct '14

Role: Sr. Data Stage Developer

Description: TSYS sends files to Barclays five days a week, and those files are loaded into the staging location. The data is cleansed in every layer: from the staging location the files are loaded to Source; from Source the data is loaded to DMA; and from the DMA layer the data is further cleansed and loaded to the FMT layer. The tables in these two layers (DMA and FMT) are used as the source tables for Finance, and files are generated out of the marts being populated.

Responsibilities:

Created mapping documents from business requirements and designed the ETL specification documents.

Based on these documents, marts are populated from the source tables; these consist of the Reg Mart, AFF Mart, Retail Deposit, and TMS/NTM marts.

Except for Retail Deposits, all other marts contain both Business and Consumer cards, based on the platform code (CPC) provided in the application.

The CCAR and IHC files that are created are sent to the Federal Government and to the UK.

Participated in all phases, including requirement analysis, design, coding, testing, production support, and documentation.

Designed and developed DataStage jobs for cleansing incoming data into common data specifications, and used QualityStage to match data by applying business rules covering data loads and data cleansing.

Deployed DataStage jobs and fine-tuned them by implementing best practices, environment variables, and parameters, and by following naming standards.

Performed performance tuning of the jobs by interpreting their performance statistics.

Analyzed the existing informational sources and methods to identify problem areas and make recommendations for improvement; this required a detailed understanding of the data sources and research into possible solutions.

Communicated with the Business on a regular basis to understand the requirements clearly.

Converted complex job designs into separate job segments and executed them through the job sequencer for better performance and easier maintenance.

Enhanced the reusability of the jobs by creating and deploying shared containers and multiple instances of the jobs.

Environment: WebSphere DataStage 8.5, Oracle 10g, QualityStage

Prudential Financial, Inc., Newark, NJ Jan '12 – Jun '13

Data Stage Developer

Description: Prudential Financial, Inc. is a Fortune Global 500 company that provides insurance, investment management, and other financial products and services to both retail and institutional customers throughout the United States and in over 30 other countries.

The project objective was to collect, organize and store data from different operational data sources to provide a single source of integrated and historical data for the purpose of reporting, analysis and decision support to improve the client services.

Responsibilities:

Interacted with the end-user community to understand the business requirements and identify data sources.

Analyzed the existing informational sources and methods to identify problem areas and make recommendations for improvement. This required a detailed understanding of the data sources and researching possible solutions.

Implemented dimensional model (logical and physical) in the existing architecture using Erwin.

Helped prepare the source-to-target mapping document and wrote configuration files for performance in the production environment.

Designed and developed ETL processes using DataStage Designer to load data from Oracle, MS SQL, flat files (fixed width), and XML files into the staging database, and from staging into the target data warehouse database.

Used DataStage stages, namely Hashed File, Sequential File, Transformer, Aggregator, Sort, Data Set, Join, Lookup, Change Capture, Funnel, Peek, and Row Generator, in accomplishing the ETL coding.

Developed job sequences with proper job dependencies, job control stages and triggers.

Used QualityStage to ensure consistency and to remove data anomalies and spelling errors from the source information before it was delivered for further processing.

Documented ETL test plans, test cases, test scripts, and validations based on design specifications for unit, system, and functional testing; prepared test data for testing, error handling, and analysis.

Developed UNIX shell scripts to automate the data load processes to the target (see the wrapper sketch after this list).

Performed performance tuning of jobs, stages, sources, and targets.

Created PL/SQL procedures and functions to implement business rules for loading data.
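
A minimal sketch of such a load-automation wrapper around the DataStage command line (the project, job, log path, and notification address are hypothetical):

    #!/bin/sh
    # Run the nightly load sequence and wait for it; with -jobstatus the
    # dsjob exit code reflects the job status (1 = OK, 2 = OK with warnings).
    PROJECT=DWProj
    JOB=SeqNightlyLoad
    LOG=/var/log/etl/${JOB}_`date +%Y%m%d`.log

    dsjob -run -jobstatus $PROJECT $JOB > $LOG 2>&1
    STATUS=$?

    if [ $STATUS -ne 1 ] && [ $STATUS -ne 2 ]; then
        echo "$JOB failed with status $STATUS, see $LOG" | \
            mailx -s "DW nightly load failed" dw-support@example.com
        exit 1
    fi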

Environment: WebSphere DataStage 7.x/8.0 (Parallel Extender), Teradata, DB2 UDB 9.0, Oracle 10g, MS Office Suite

LIC, India Dec '09 – Oct '11

Data Stage Developer

Description: LIC is an Indian state-owned insurance group and investment company, and the largest insurance company in India. Infacs is an accounting package for the insurance sector, used internally at the Head Office. It consolidates data from different branch offices and divisional offices and presents it in the form of reports; it is also used for internal accounting purposes.

Responsibilities:

Interacted with the end-user community to understand the business requirements and identify data sources.

Analyzed the existing informational sources and methods to identify problem areas and make recommendations for improvement. This required a detailed understanding of the data sources and researching possible solutions.

Implemented dimensional model (logical and physical) in the existing architecture using Erwin.

Studied the PL/SQL code developed to relate the source and target mappings.

Helped in preparing the mapping document for source to target.

Worked with DataStage Manager to import metadata from the repository, create new job categories, and create new data elements.

Designed and developed ETL processes using DataStage Designer to load data from Oracle, MS SQL, flat files (fixed width), and XML files into the staging database, and from staging into the target data warehouse database.

Used DataStage stages, namely Hashed File, Sequential File, Transformer, Aggregator, Sort, Data Set, Join, Lookup, Change Capture, Funnel, Peek, and Row Generator, in accomplishing the ETL coding.

Developed job sequencers with proper job dependencies, job control stages, and triggers.

Used QualityStage to ensure consistency and to remove data anomalies and spelling errors from the source information before it was delivered for further processing.

Extensively used the DataStage Director to monitor job logs and resolve issues.

Involved in performance tuning and optimization of DataStage mappings using features like pipeline and partition parallelism and data/index caching to manage very large volumes of data.

Documented ETL test plans, test cases, test scripts, and validations based on design specifications for unit, system, and functional testing; prepared test data for testing, error handling, and analysis.

Used the Autosys job scheduler to automate the regular monthly run of the DW cycle in both production and UAT environments (see the sketch after this list).

Participated in weekly status meetings.
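
The day-to-day Autosys interaction for this schedule can be sketched as follows (the job name is hypothetical): autorep reports a job's status, and sendevent forces a rerun after a failure is fixed.

    # Check the status and recent runs of the monthly DW box job
    autorep -J DW_MONTHLY_BOX -d

    # Force-start the box (e.g., in UAT) after fixing a failed run
    sendevent -E FORCE_STARTJOB -J DW_MONTHLY_BOX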

Environment: IBM Information Server 8.5 (Designer, Director, and Administrator), QualityStage, TestDirector

SEED Health Care Solutions, India Jan '08 – Nov '09

SQL Developer

Description: CareDynamix is a revolutionary integrated Hospital Management System providing comprehensive features for hospital management and information. CareDynamix gives users complete control over, and confidentiality of, patient information such as medical reports, prescription history, and interdepartmental visits.

CareDynamix allows users to collect, store, and extract patients' movement, diagnosis and prescription history, interdepartmental visits, outpatient appointments, admissions, operating room schedules, pre-operation checklists, etc.

Responsibilities:

Created new procedures, functions, triggers, packages, and SQL*Loader scripts, and modified existing code, tables, views, and SQL*Loader scripts (see the SQL*Loader sketch after this list).

Incorporated business rules and constraints and created new tables for all modules.

Worked on claims processing reports on investigation and payment.

Generated claims reports weekly, monthly, and yearly.

Worked on reports giving a parallel comparison of claims across different fiscal years.

Fine-tuned long-running claims reports.

Developed expense reports providing claims data drawn from different tables.

Tuned SQL queries to improve response time.

Developed a customer data warehouse to understand customer behavior toward various claims-related programs and to identify critical customers, in support of improved decision-making across the organization.

Monitored performance and met changing performance requirements by applying database tuning and performance optimization techniques.

Worked under tight schedules for successful development and implementation.

Troubleshot system issues, monitored scheduled jobs, and set up maintenance plans to proactively monitor the performance of SQL Server databases.

Performed daily tasks including backup and restore by using SQL Server 2005 tools like SQL Server Management Studio, SQL Server Profiler, SQL Server Agent, and Database Engine Tuning Advisor.

Converted stored procedures to DataStage server jobs.
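
A minimal sketch of the SQL*Loader pattern behind those scripts (table, column, and file names are hypothetical): a control file describes the flat-file layout, and sqlldr performs the load.

    #!/bin/sh
    # Control file: append comma-delimited claim records into the claims table.
    cat > claims.ctl <<'EOF'
    LOAD DATA
    INFILE 'claims.dat'
    APPEND INTO TABLE claims
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (claim_id, member_id, claim_date DATE 'YYYY-MM-DD', amount)
    EOF

    # Run the load; rejected rows land in the .bad file for analysis.
    sqlldr userid=$DB_USER/$DB_PASS control=claims.ctl log=claims.log bad=claims.bad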

Environment: MS SQL Server 2000, MS Access, DataStage 7.x, MS Office Suite


