Post Job Free


Data Architect Sql Developer

Location:
Round Rock, TX
Posted:
April 04, 2023


Resume:

UDAYA PARSA

adwcl2@r.postjobfree.com

248-***-****

SUMMARY:

●13+ years of experience in the IT industry, especially in client/server business systems and Decision Support Systems (DSS) analysis, design, development, testing, and implementation.

●Around 12 years of experience in loading and maintaining Data Warehouses and Data Marts using DataStage ETL processes.

●Strong knowledge of Extraction, Transformation, and Loading (ETL) processes using Ascential DataStage, UNIX shell scripting, and SQL*Loader.

●Expertise in working with various operational sources such as DB2, Oracle, Teradata, Sybase, and flat files, loading them into a staging area.

●Extensively worked with the Parallel Extender in the Orchestrate environment to split bulk data into subsets and dynamically distribute them across all available processors for better job performance.

●Experience in evolving strategies and developing architecture for building a data warehouse using the data modeling tool Erwin.

●Designed and developed data models such as Star and Snowflake schemas.

●Excellent database migration experience in Oracle 7.x/8i/9i and SQL.

●Hands-on experience in testing and implementing triggers, procedures, and functions at the database level using PL/SQL.

●Proficiency in data warehousing techniques: data cleansing, Slowly Changing Dimensions, surrogate key assignment, and change data capture.

●Extensive experience in loading high-volume data and in performance tuning.

●Worked in a team with an onsite-offshore model and interacted and coordinated with onsite and offshore team members.

●Played a significant role in various phases of the project life cycle, such as requirements definition, functional and technical design, testing, production support, and implementation.

●Worked with different databases (DB2, Oracle, SQL Server, Sybase) and database utilities (AutoLoader, Export, Import, SQL*Loader, SyncSort), as well as SQL, procedures, and functions for back-end processes on UNIX machines.

●Knowledge of data warehouse architecture: Star schema, Snowflake schema, fact and dimension tables, and physical and logical data modeling.

●Excellent team member with problem-solving and troubleshooting capabilities; a quick learner, highly motivated, result-oriented, and an enthusiastic team player.

Education: Master of Science in Biomedical Engineering

Certification: Certification in DataStage v8.0.1

TECHNICAL SKILLS

ETL Tools : IBM InfoSphere/Ascential DataStage 7.5/8.0.1/8.5/8.7/11.0.1/11.3/11.7

Languages and Tools : SQL, PL/SQL, TOAD

Databases : Oracle 7.x/8.0/8i/9i/11g, SQL Server 6.5/7.0, Teradata

Data Modeling Tools : Erwin, Oracle Designer

Operating Systems : Windows NT/2000, UNIX, AIX, Solaris

PROFESSIONAL EXPERIENCE

IBM Watson Health, Remote Jan 2020 – Present

Sr. ETL Engineer

Flexible Analytics tiger team focused on designing a Data Vault, defined as a detail-oriented, historically tracked, and uniquely linked set of normalized tables that supports one or more functional areas of the business. It is a hybrid approach combining the best of breed between 3NF and star schemas. The design is flexible, scalable, consistent, and adaptable to the needs of the enterprise. The project runs in phases, delivering a couple of modules every two months for teams across the globe to utilize.
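
A minimal sketch of the hub-and-satellite pattern behind the Data Vault described above. The table and column names here are hypothetical (not the project's actual model), and SQLite via Python stands in for the warehouse database:

```python
import hashlib
import sqlite3

# Data Vault sketch: a hub holds only the business key; a satellite holds
# descriptive attributes, with history tracked by load timestamp.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hub_member (
    member_hk  TEXT PRIMARY KEY,   -- hash key derived from the business key
    member_id  TEXT NOT NULL,      -- business key from the source system
    load_dts   TEXT NOT NULL,
    record_src TEXT NOT NULL
);
CREATE TABLE sat_member_detail (
    member_hk  TEXT NOT NULL REFERENCES hub_member(member_hk),
    load_dts   TEXT NOT NULL,
    name       TEXT,
    plan_code  TEXT,
    PRIMARY KEY (member_hk, load_dts)  -- one row per change, never updated
);
""")

def hash_key(business_key: str) -> str:
    """Deterministic hash key, a common Data Vault convention."""
    return hashlib.md5(business_key.encode()).hexdigest()

hk = hash_key("M1001")
conn.execute("INSERT INTO hub_member VALUES (?, ?, ?, ?)",
             (hk, "M1001", "2020-01-15", "CLAIMS"))
# Two satellite rows against the same hub key: historical tracking.
conn.execute("INSERT INTO sat_member_detail VALUES (?, ?, ?, ?)",
             (hk, "2020-01-15", "A. Smith", "GOLD"))
conn.execute("INSERT INTO sat_member_detail VALUES (?, ?, ?, ?)",
             (hk, "2020-02-15", "A. Smith", "SILVER"))
history_rows = conn.execute(
    "SELECT COUNT(*) FROM sat_member_detail").fetchone()[0]
```

New functional areas plug in as additional hubs, links, and satellites, which is what makes the approach adaptable without reworking existing tables.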

●Worked with data modeler on mapping documents. Data analysis to address questions in mapping from source files to the data vault.

●JIRA for project plans

●Design and develop ETL processes

●Peer reviews to fine tune and performance tune while following standards

●System testing before submitting to the QA team

●Code migrations using ISM and SVN check-in, maintaining code, regular code backups

●Worked with users on analytical testing and looped back with the data architect to change table structures based on feedback

●DS knowledge transfer to peers

●Working with other teams to set up environments so they can use the same DS code

●ServiceNow for submitting and tracking tickets

●MOVEit automation tool for external file sharing

●Production support for the daily loads, various data extracts

●Cross-train operations team on production failure and resolution

●Monthly release planning, execution and post load support

Environment: InfoSphere Information Server DataStage 11.7, PL/SQL, Oracle 11g, SQL Developer, UNIX, WinSCP, MOVEit

Kaiser Permanente, OR Oct '19 – Jan '20

Data Integration Lead

The mission of the project is to support production ETL 24/7: monitoring jobs using online monitoring tools, working with other operational support teams, debugging errors, troubleshooting issues, and creating and updating incidents in ServiceNow, as well as working with INFA BDM and DevOps tools. The role also required a willingness to work in development and Informatica experience.

●Production Support

●User Support

●Data analysis for new enhancements

●ServiceNow for tickets, addressing and closing tickets depending on criticality

●Daily hand over sessions with offshore team and leading them to resolve critical issues

●Proposing and presenting plans to the client on developing new monitoring processes

●Working with development teams to come up with resolution

●Code deployment and UAT approvals

●Secondary support for Informatica ETL

Environment: InfoSphere Information Server DataStage 11.3, Informatica, PL/SQL Developer, Oracle 11g, SQL, Toad, UNIX, WinSCP, Catalyst, SQL Server, Cognos FM 10.2.2

Oregon Health & Science University (OHSU), OR Jan‘14 – Oct ‘19

Data Integration Specialist

The mission of the Information Technology Group (ITG) is to develop, implement, and maintain technology-based services and solutions enabling OHSU to effectively manage information to accomplish its missions. This position is responsible for supporting ICD-10 reporting and other assigned ICD-10-related project efforts. The primary focus is BI solution design, development, testing, and implementation, including Data Warehouse (DW) modeling using ER/Studio and ETL development using IBM DataStage. Worked with various IBM InfoSphere tools (DataStage, Information Analyzer, Metadata Workbench, and Business Glossary) as well as Oracle PL/SQL and MS SQL Server to process Epic/Clarity and other clinical data to meet analytic requirements.

●Analysis and design of existing ICD-9 code for ICD-10 impacts

●Analysis of the impact of ICD-10-impacted objects on non-ICD-10-impacted objects

●Development work to update tables, views, packages, and DS jobs to make them ICD-10 compliant, with periodic code backups as required.

●Monitoring and handling data/code-related issues in nightly CTST runs. If a cycle crashes due to a clash with the daily backup/maintenance window, the issue is reported to the concerned team for further resolution.

●Worked for 2-3 weeks on the Transplant project as a replacement for one of the resources. Supported QA, performance tuning of existing jobs, unit testing, and fixing bugs opened through QA. Worked with the Data Architect on requirements and on designing logic (Patient, Organ, Readmit) for ETL development. Documented the logic for the team to review and uploaded the docs on bridge for future reference.

●Peer review of ICD10 code modified by fellow developers. Coordinating with the team for resolving the discrepancies.

●Used Catalyst to migrate PL/SQL code between environments

●Extensively working on PL/SQL as part of development and QA

●Applied DS patches to resolve issues such as ViewData failure: when dragging table definitions onto the SQL builder canvas, the stage columns did not show the related table definition relationships (exposed by changing the grid properties in the output columns tab), which caused Impact Analysis not to find the related tables.

●Analysis, development, testing, and production support for research projects starting fiscal year 2015. The source system for most of these projects is SQL Server, and the target remains Oracle. Handled new datatypes such as CLOB, using SCD and CDC stages depending on the requirement.

●Worked on Cognos FM to add/update existing packages and publish them for users. Added new query subjects and relationships. Worked on IBM Cognos FM 10.2.2 for 6-8 months, with increasing activity ongoing.

●Worked extensively in JIRA to plan and implement bi-weekly sprints: status tracking, work estimation, and prioritization with story points.

●Worked on Service Manager (SM) extensively prior to JIRA and still continuing to use SM for some of the projects that have not transitioned to JIRA yet.

●High-level understanding of Tableau, connecting to sources such as Oracle and Hortonworks Hadoop Hive to extract and analyze data. Created reports and published packages for user accessibility. Worked with Tableau for over a month.

Environment: InfoSphere Information Server DataStage 8.7/11.0.1/11.3, PL/SQL Developer v8.0.4.1514, Oracle 11g, SQL, Toad, UNIX, WinSCP, Catalyst, SQL Server, Cognos FM 10.2.2, Tableau 10.3

HAP, Southfield, MI Jun '11 – Jan '14

Sr Data Integration Programmer

Health Alliance Plan Healthcare is one of the largest insurance companies in the United States, providing medical insurance plans to individuals, families, and companies. The purpose of the project was to create a centralized Data Warehouse by integrating its Policy and Claims databases to provide better support to the organization.

●Developed data transfer strategy from various legacy data sources.

●Worked on stored procedures to run pre-session and post-session commands

●Extensively worked with DataStage Active and Passive Stages for Data Staging and Data Transformation

●Used the Complex Flat File (CFF) stage to read data from mainframe sources, handling EBCDIC-to-ASCII translations.

●Extensively worked with the Lookup, Merge, Join, Aggregator, Sort, Remove Duplicates, and Transformer stages.

●Developed mappings to load data into slowly changing dimensions.

●Used Surrogate Key generator, aggregate, expression, lookup, update strategy and rank transformation.

●Developed joiner transformation for extracting data from multiple sources.

●Design and Developed pre-session, post-session routines and batch execution routines.

●Extensively involved in creating database procedures, functions and triggers.

●Involved in Performance Tuning of source, target, mappings and sessions.

●Designed and Developed data validation, load processes, test cases, and error control routines using PL/SQL, SQL.

●Used the AutoSys scheduling tool to schedule jobs.

●Extensively used Hashed files to improve the performance of Referential Lookups.

●Worked on Facets conversion data and created data analysis documents to develop ETL jobs.

●Designed jobs to convert the existing data in EDW to FACETS

●Entirely responsible for putting together the business requirements, ETL development, and documentation of Pharmacy Claims, Admissions, HEDIS data, and Member Eligibility data.

●Developed ETL jobs to Insert, Update and Delete records sourcing Audit Tables.

●Developed ETL jobs for new facets tables to capture deltas every week.

●Designed jobs for type 2 history maintenance.

●Extensively worked on Harvest to check in, check out code for deployment in QA and PRODUCTION.

●Designed ETL generic jobs to extract data from fixed width files and load into EDW.

●Developed server jobs to read data from SQL Server, transform zoned datatypes, and load into sequential files.

●Designed sequencers to run jobs upon receiving a certain number of files, send emails when the conditions are not satisfied, archive the files after processing, and create audit records with source and target counts.

●Presented ETL code PowerPoints to help the client-side team understand the code developed.

●Responsible for preparing test cases, system testing and documentation.

●Created architectural flow diagrams explaining high level ETL design to the business and management.

●Worked closely with the client and SMEs to gain an overall business understanding of various modules.

Environment: InfoSphere Information Server DataStage 8.1/8.5, Oracle 11g, SQL, Toad, UNIX, UNIX shell scripts, AutoSys
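
The Type 2 history maintenance mentioned above follows a standard pattern: close the current dimension row and insert a new one when a tracked attribute changes. A minimal sketch with a hypothetical member dimension, using SQLite via Python in place of Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE member_dim (
    member_sk  INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
    member_id  TEXT NOT NULL,                      -- natural key
    plan_code  TEXT,                               -- tracked attribute
    eff_from   TEXT NOT NULL,
    eff_to     TEXT,                               -- NULL while current
    is_current INTEGER NOT NULL DEFAULT 1
)""")

def scd2_upsert(conn, member_id, plan_code, as_of):
    """Close the current row if the tracked attribute changed, then insert."""
    cur = conn.execute(
        "SELECT member_sk, plan_code FROM member_dim "
        "WHERE member_id = ? AND is_current = 1", (member_id,)).fetchone()
    if cur and cur[1] == plan_code:
        return  # no change: nothing to do
    if cur:
        conn.execute("UPDATE member_dim SET eff_to = ?, is_current = 0 "
                     "WHERE member_sk = ?", (as_of, cur[0]))
    conn.execute("INSERT INTO member_dim (member_id, plan_code, eff_from) "
                 "VALUES (?, ?, ?)", (member_id, plan_code, as_of))

scd2_upsert(conn, "M1", "GOLD", "2012-01-01")
scd2_upsert(conn, "M1", "GOLD", "2012-02-01")    # unchanged: ignored
scd2_upsert(conn, "M1", "SILVER", "2012-03-01")  # changed: history row kept
```

In the DataStage jobs themselves, the same close-and-insert logic was expressed through stages rather than code; this sketch only illustrates the row-versioning rule.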

Walgreens, Deerfield IL Jul’10 – Jun’11

Senior Datastage Developer

The EOB is a statement that a Medicare Part D member (a member enrolled in a Medicare plan) receives during months in which the member has activity. On June 11, 2010, the Centers for Medicare and Medicaid Services (CMS) issued the final guidance for the 2011 Model Explanation of Benefits (EOB). The new model requires significant revisions to the current EOB format and layout. Content changes are also required, but will not be as extensive as the formatting changes. The CMS objectives for the 2011 EOB guidance include: clearer designation of the member's current benefit phase; inclusion of other plan coverage information; and an improved, member-friendly format. This project is aimed at meeting the CMS requirements and introducing the EOB in alternate formats and languages.

●Availability of member materials in an alternate language, if that language is the primary language of more than 10% of a plan’s service area.

●Disclosure within member materials that the document is available in alternate formats or languages.

●Shell scripting to generate feedback and error summary reports, point extracts to the E1 database, FTP extract files, generate file layouts for daily priority reprints, generate audit reports, schedule EOB monthly jobs, generate files for reprint and reprocessing, and update database tables.

●Implementation of the lock-down structure (LDS) for EOB; LDS was introduced by Walgreens in 2010.

●Verification of priority, regular, and reprocessing reprint generation, file transfer to the requested location, and support throughout weekdays.

●Active participation in client-level business meetings; responsible for decision making in DataStage job design, development, testing, and deployment.

●Designed logic to determine the number of days over which to interrogate formulary changes; member claims during the current calendar year are evaluated to determine whether the member is impacted by a formulary change.

●EOB processing and reprocessing were modified to exclude claims with a date of service that falls in a prior reporting calendar year; these claims, termed cross-over claims, are filtered out using ETL jobs.

●Documented each and every job in EOB, as the project previously lacked detailed design documents even for the existing jobs.

Environment: IBM InfoSphere/WebSphere DataStage 7.5, Oracle 10g, AIX 5.3, PuTTY, ESP Scheduler, Windows 2000/XP, Toad for Oracle
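
The cross-over claim exclusion above reduces to a simple date comparison against the reporting calendar year. A sketch with made-up claim records (in the project, the equivalent filter ran inside the ETL jobs):

```python
from datetime import date

def is_crossover(date_of_service: date, reporting_year: int) -> bool:
    """A cross-over claim has a date of service in a prior reporting year."""
    return date_of_service.year < reporting_year

# Hypothetical claims: C1 falls in the prior year and must be excluded.
claims = [
    {"claim_id": "C1", "dos": date(2010, 12, 28)},
    {"claim_id": "C2", "dos": date(2011, 1, 3)},
]
eob_claims = [c for c in claims if not is_crossover(c["dos"], 2011)]
```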

Walgreens, Deerfield IL Sep’09 – Jun’10

Senior Data Engineer

Claim Solutions includes an entire module called the billing and payment system. The new Billing and Payment Re-engineering project was planned to take over this module. The goal of this project is to provide the complete data structure of the existing system for designing the new one.

●Gathering requirements for the new billing and payment system development to support the overall goal of improved SLA performance within the Claim Solutions organization by reducing server load from Billing & Payment transactions.

●Design and develop queries to extract historical data

●Design/develop/quality-test DataStage code to perform the validations necessary to meet business requirements

●Support for UAT, Integration testing, Performance testing

●Working on the batch architecture to transform a DataStage job into a multi-instance one, parameterize and run the processes on a daily basis, and create/submit runsheets for ESP scheduling and automation

●Shell scripting to trigger multi-instance DataStage jobs in parallel and archive the files after processing

●Responsible for adhoc requests from different teams and data extraction by modifying the parameters

●Batch architecture to define DataStage job parameters in database tables and execute DataStage jobs on UNIX by calling the process IDs assigned in the database

Environment: IBM InfoSphere/WebSphere DataStage Enterprise Edition 8.0.1, Oracle 10g, AIX 5.3, Erwin, PuTTY, ESP Scheduler, Windows 2000/XP, Toad for Oracle
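
The parameter-table-driven batch architecture can be sketched as follows. The table layout, project name (`PROJ`), job name, and parameters are hypothetical, and SQLite via Python stands in for Oracle; `dsjob` is the standard DataStage command-line interface:

```python
import sqlite3

# Hypothetical parameter tables: one row per process, plus its parameters.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE batch_process (process_id INTEGER PRIMARY KEY,
                            job_name TEXT, invocation_id TEXT);
CREATE TABLE batch_param (process_id INTEGER, name TEXT, value TEXT);
""")
conn.execute("INSERT INTO batch_process VALUES (101, 'ExtractClaims', 'daily')")
conn.executemany("INSERT INTO batch_param VALUES (?, ?, ?)",
                 [(101, "RUN_DATE", "2010-03-01"), (101, "REGION", "IL")])

def build_dsjob_cmd(conn, process_id):
    """Assemble a dsjob run command from parameters stored in the database."""
    job, inv = conn.execute(
        "SELECT job_name, invocation_id FROM batch_process "
        "WHERE process_id = ?", (process_id,)).fetchone()
    params = conn.execute(
        "SELECT name, value FROM batch_param "
        "WHERE process_id = ? ORDER BY name", (process_id,)).fetchall()
    args = " ".join(f"-param {n}={v}" for n, v in params)
    return f"dsjob -run {args} PROJ {job}.{inv}"

cmd = build_dsjob_cmd(conn, 101)
```

In production, a shell wrapper would execute the assembled command, so a scheduled run only needs the process ID; everything else lives in the tables.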

Walgreens, Deerfield IL Oct’08 – Aug’09

Datastage Developer

Current-state claims processing for the discounted drug pricing program is a manually intensive and error-prone process. This project aims at developing an automated future-state solution for reliable, high-volume processing of claims: extracting, transforming, and loading mainframe EDI data using the Complex Flat File stage. The EDI files carried data regarding advance shipment notices, purchase orders, and posted receipts of shipments.

Responsibilities:

●Analyzed the current-state database and participated in database modeling discussions to design a relational database that caters to the future-state application's needs.

●Worked out mappings from current-state data in the data warehouse to the future-state application's relational database

●Low level design of transformation logic for claims processing

●Analyzed metadata for EDI transactional data used in B2B communication and designed the ETL interface for processing EDI data.

●Used CVS as the version control system

●Worked in a team with an onsite-offshore model and interacted and coordinated with onsite and offshore team members

●Worked on UNIX scripts to schedule DataStage jobs

●Tuned SQL queries to optimize performance.

●Worked with the MQ stage, using a DataStage job to watch a queue and convert messages into relational data.

●Used the Complex Flat File stage to handle complex logic from mainframe data sources; the stage extracts data from flat files containing complex data structures such as arrays and groups. Handled various data formats, including EBCDIC/binary, variable-length records, and unusual characters.

●Effectively used DataStage stages to load large volumes into data warehouse, data mart, and staging environments.

●Developed a batch log process that inserts and then updates a record every time a job runs, with details such as job start time, job end time, job status, and elapsed time.

●Provided production support for the 2-month warranty period.

●Implemented best practices to test ETL code.

●Loaded several partitioned and non-partitioned tables.

●Created common jobs to get table statistics, drop partitions before load, and create partitions after load.

Environment: IBM WebSphere DataStage and QualityStage 8.0.1, Oracle 10g, AIX 5.3, Erwin, PuTTY, ESP Scheduler, Windows 2000/XP, Toad for Oracle
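
The batch log pattern above is two steps: insert a RUNNING row when a job starts, then update it with end time, status, and elapsed time on completion. A sketch with a hypothetical schema, using SQLite via Python in place of Oracle:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE batch_log (
    run_id    INTEGER PRIMARY KEY AUTOINCREMENT,
    job_name  TEXT NOT NULL,
    start_ts  REAL NOT NULL,
    end_ts    REAL,
    status    TEXT NOT NULL DEFAULT 'RUNNING',
    elapsed_s REAL)""")

def log_start(conn, job_name):
    """Insert the RUNNING row at job start; return its run id."""
    cur = conn.execute(
        "INSERT INTO batch_log (job_name, start_ts) VALUES (?, ?)",
        (job_name, time.time()))
    return cur.lastrowid

def log_end(conn, run_id, status):
    """Close the row with end time, final status, and elapsed seconds."""
    end = time.time()
    conn.execute("""UPDATE batch_log
                    SET end_ts = ?, status = ?, elapsed_s = ? - start_ts
                    WHERE run_id = ?""", (end, status, end, run_id))

run_id = log_start(conn, "LoadClaimsFact")
# ... the ETL job body would run here ...
log_end(conn, run_id, "FINISHED")
```

A row left in RUNNING status also doubles as a cheap way to spot jobs that crashed without closing their log entry.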


