
Data Entry / ETL Developer

Location:
Haslet, TX
Posted:
April 04, 2023


MADHURI BATHULA

E-mail ID: adwb2j@r.postjobfree.com Contact No. 978-***-****

OBJECTIVE

Seeking a challenging position in an organization where I can utilize my skills in providing solutions for data warehousing (DWH) using ETL methodologies.

PROFESSIONAL SUMMARY

13 years of IT experience in all phases of Data warehouse life cycle involving Analysis, Design, Development, Coding, Testing and Production Support of Business Intelligence Projects.

Strong knowledge of data warehouse concepts: fact tables, dimension tables, star and snowflake schema methodologies, and data modeling.

Extensive experience in data warehousing projects covering collection of client requirements, data modeling, development and testing using the ETL tools SSIS, SSRS and Informatica 10.x/9.x/8.x/7.x/6.x (Designer, Workflow Manager, Workflow Monitor, Repository Manager).

Extensive knowledge of BI technologies such as Informatica PowerCenter 10.x/9.x/8.x/7.x/6.x, Teradata V2R5/V2R6/12/13/14/15, SQL Server 2012/2015 and Oracle 12c/11g/10g/9i/8i.

Experience in writing SQL queries and optimizing the queries in SQL Server 2010/2012.

Performed data analysis and data profiling using SQL on different source systems, including SQL Server 2012.

Expertise in working with SQL platforms and strong skills in PL/SQL.

Good working knowledge of the AWS environment and its services: DynamoDB, S3, AWS Glue, CloudWatch, Amazon Redshift, AWS Auto Scaling, Elasticsearch and Lambda functions.

Expertise in Enterprise Data Warehouse, Data Integration, Data Marts and Data Migration projects.

Expertise in end-to-end data integration projects using the ETL tools Informatica and SSIS with RDBMS platforms Oracle, Teradata, Redshift, Pervasive DB, MariaDB and SQL Server.

Familiarity with entity-relationship/multidimensional modeling (star schema, snowflake schema).

Experience in design and development of RDBMS/OLTP and dimensional models using the data modeling tools ERWIN and MS Visio.

Domain knowledge in PBM, Healthcare, Insurance, Retail, Telecom, Wireless, Manufacturing and Finance.

Extensive experience working with end business users, product owners, BAs, BSAs, SMEs, application DBAs, admin teams and production support, as well as senior management.

Experience in integration of data sources such as Teradata, Oracle, ODI, mainframes, Salesforce, and flat files (delimited, fixed width, CSV, Excel, NCPDP, XML, ZIP, etc.).

Development and testing of Data Warehouse/Business Intelligence (DW/BI) ETL applications, performing unit, performance, integration and E2E testing of the developed code and generating code quality reports.

Developed ETL strategies using an appropriate mix of database-based and ETL-based load strategies.

Experienced in data governance covering data profiling, lineage, completeness, cleansing, standards, accuracy, quality and validation.

Experience with the ABC data model for data control, audit and security items.

Involved in the full development and testing lifecycle (SDLC/STLC), from requirements gathering through development and support, using the ETL tools SSIS and Informatica PowerCenter.

Developed complex mappings using a variety of transformations such as Unconnected/Connected Lookup, Router, Filter, Expression, Aggregator, Joiner, Union, Update Strategy, SQL, Transaction Control, XML and many more.

Proficient in performance tuning of Informatica components as well as on the database end.

Worked on Slowly Changing Dimensions (SCD) Types 1, 2 and 3 to keep track of historical data.
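
For illustration only, a minimal sketch of the expire-and-insert pattern behind an SCD Type 2 load, driven from Python over a generic DB-API connection; the dim_customer/stg_customer tables and their columns are made-up placeholders, not any client's actual schema.

    # expire the current version of rows whose tracked attribute changed
    EXPIRE_CHANGED_ROWS = """
        UPDATE dim_customer
           SET current_flag = 'N',
               end_date     = CURRENT_DATE
         WHERE current_flag = 'Y'
           AND EXISTS (SELECT 1
                         FROM stg_customer s
                        WHERE s.customer_id = dim_customer.customer_id
                          AND s.address    <> dim_customer.address)
    """

    # insert a new current version for new customers or changed attributes
    INSERT_NEW_VERSIONS = """
        INSERT INTO dim_customer (customer_id, address, start_date, end_date, current_flag)
        SELECT s.customer_id, s.address, CURRENT_DATE, DATE '9999-12-31', 'Y'
          FROM stg_customer s
          LEFT JOIN dim_customer d
            ON d.customer_id = s.customer_id
           AND d.current_flag = 'Y'
         WHERE d.customer_id IS NULL
            OR d.address <> s.address
    """

    def apply_scd2(conn):
        """Expire changed current rows, then insert the new versions, in one transaction."""
        cur = conn.cursor()
        try:
            cur.execute(EXPIRE_CHANGED_ROWS)
            cur.execute(INSERT_NEW_VERSIONS)
            conn.commit()
        finally:
            cur.close()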

Skilled in writing Unix shell scripts and working with WinSCP, FTP protocols and Windows-based file transfer environments.

Worked with Waterfall and Agile Scrum methodologies (with JIRA) across all projects.

Experienced in creating data load jobs using the scheduling tools Control-M, Tivoli and Redwood Cronacle.

Created high-level and low-level design documents and provided solutions for the requirements.

Involved in code reviews, architectural reviews and peer reviews of projects, and maintained documentation as per client standards.

Excellent communication and interpersonal skills, ability to learn quickly, good analytical reasoning and high adaptability to new technologies and tools.

Experience coordinating and leading onsite-offshore development environments.

Strong teamwork spirit and relationship management skills.

EDUCATION & CERTIFICATIONS

BTech in Electronics and Communication Engineering – India

Certified Green Belt in Six Sigma

Completed Teradata Certification (V2R5)

TECHNICAL EXPERTISE

Programming Languages

C, C++, Core Java, PHP, HTML, XML, SQL, PL/SQL, Unix Shell scripting

Web Technologies

HTML, XML, CSS, DHTML, VBScript, JSON, OOP, JavaScript.

Databases

MS SQL Server, MySQL, Oracle, MariaDB, Teradata, Pervasive, AWS DynamoDB

Database Development Tools

TOAD, PL/SQL Developer, Teradata SQL Assistant, Nexus, SQL Developer, SQL*Loader, Oracle SQL*Plus, SQL Navigator, MySQL

Data Modeling Tools

MS Visio, ERWIN, ER Studio

ETL/Data Warehousing Tools

Informatica, SSIS, Amazon RedShift, Teradata Load utilities

Test automation tools

Postman, API tools, Docker, NeoLoad, Serenity BDD

Scheduling Tools

Control-M, Tivoli, Redwood Cronacle, Perfaware, TWS

Operating Systems

Windows Family, Unix.

Methodologies

Agile/Scrum, Waterfall

Environment

Magic Studio, Oracle PL/SQL, MySQL Workbench, Daytech, SQL Server 2016, Excel 2013 (v15.0), Windows 2016/2012 R2 Servers, UNIX, HTML 5, JIRA 6.4, MS Office Tools 2010, Office 365, AWS, NeoLoad.

Bug Tracking tools

JIRA, Bug Tracker, ALM; scripting: TCL, JavaScript

Build tools

Atlassian, Jenkins, GitLab, GitHub, SourceTree, SVN, Tortoise CVS, Dell SharePoint, Ant, Maven

Networking

TCP/IP, LAN/WAN, FTP, HTTP/HTTPS, Ethernet.

Packages

MS Office (MS Access, MS Excel, MS PowerPoint, MS Word), Office 365.

Defect Management Tools

JIRA, Remedyforce, HP Service Center, Project Issues.

PROFESSIONAL EXPERIENCE

Client: Neiman Marcus Group – Irving, TX Jul ‘20 – Present

Data Engineer / ETL Developer

Project: EDA-Enterprise Data Analytics – The main purpose of the project is to migrate the current data platforms (AA, CXP, EDW) to the cloud (AWS) to support a cloud-first strategy and deliver business value with increased flexibility and agility. I am mainly involved in the EDW area.

EDW (Enterprise Data Warehouse) is a centralized corporate data warehouse for customer source information, detailed sales history, and promotion programs. The EDW takes inputs from multiple systems and creates a common view of the data (standardization). After standardization of the incoming data, the EDW performs matching of customers to sales transactions. The EDW provides many extracts that support the processing requirements of other systems. The standardized product information is used by product reporting and CRM/Marketing areas.

Responsibilities:

Responsible for Agile application development, planning meetings, daily stand-up meetings, Sprint Retrospectives, Sprint Planning, story estimates, PI Planning, backlog grooming and customer-reported backlog items (bugs).

Responsible for data analysis, data validations, data quality, data comparisons and impact analysis, root cause analysis for deviations.

Writing complex T-SQL queries using joins, subqueries and correlated subqueries; expertise in SQL queries for cross-verification of data. Reviewed SQL for missing joins and constraints, data format issues, mismatched aliases and casting errors.

Developed queries against the AWS DynamoDB, AWS Redshift and SQL Server databases to find the correct data for testing and to validate the backend process of the developed code.
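
As a hedged illustration of this kind of backend check (the table name, key and region below are placeholders, not the actual project objects), a DynamoDB lookup via boto3:

    import boto3
    from boto3.dynamodb.conditions import Key

    def fetch_claims_for_member(member_id):
        # query a hypothetical 'claims' table by its partition key
        table = boto3.resource("dynamodb", region_name="us-east-1").Table("claims")
        response = table.query(KeyConditionExpression=Key("member_id").eq(member_id))
        return response.get("Items", [])

    if __name__ == "__main__":
        for item in fetch_claims_for_member("M12345"):
            print(item)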

Creating and managing schema objects such as tables, views, indexes, stored procedures, and triggers & maintaining Referential Integrity.

Work on day-to-day operational issues submitted by customers through tools like JIRA, UARS (User Access Request System) and Incidents.

Creation of database objects such as tables, views, materialized views and procedures using AWS cloud tools like Snowflake, AWS SQS queues, Lambda functions, AWS Glue, Airflow, etc.

Used principles of normalization to improve performance. Involved in ETL code using PL/SQL to meet requirements for extraction, transformation, cleansing and loading of data from source to target data structures.

Architected/developed Informatica batch/real-time (CDC) processes feeding Informatica, serving as a single access point for customer data between applications.

Responsible for resolving recurring incidents permanently, performing break fixes.

Analyze and interpret all complex data on all target systems

Responsible for root cause analysis of the production issues

Complete E2E and regression testing of Data Warehouse/Business Intelligence (DW/BI) applications for the data flowing through the EDA-Snowflake environment.

Working with different feed files in .csv, .json, .gz and .md5 formats and with backend Python scripts (.py) to load the files into preprocessing, stage and target tables in Snowflake.
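
A minimal sketch, assuming the snowflake-connector-python driver and made-up stage, table and file names, of how one such .csv.gz feed file could be pushed to an internal stage and copied into a stage table:

    import snowflake.connector

    # placeholder connection parameters, not actual project credentials
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",
        warehouse="ETL_WH", database="EDW", schema="STG",
    )
    cur = conn.cursor()
    try:
        # upload the local feed file to a named internal stage
        cur.execute("PUT file:///data/feeds/sales_20230101.csv.gz @SALES_STAGE AUTO_COMPRESS=FALSE")
        # copy the staged file into the preprocessing/stage table
        cur.execute("""
            COPY INTO STG.SALES_RAW
            FROM @SALES_STAGE/sales_20230101.csv.gz
            FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1)
        """)
    finally:
        cur.close()
        conn.close()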

Working on ETL data loads for various Sales, Customer and Promotions files to make sure all programmable and calculated fields are coded as per the business logic in the requirements.

Working on audit metadata tables to define DAG schedules, dependencies and ingestion entries, with file notations matching the .json file definitions in Bitbucket.
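
For illustration, a small Airflow DAG sketch of the schedule/dependency wiring that such metadata entries drive; the DAG id, schedule and task callables here are hypothetical:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def load_to_target(**context):
        # placeholder for the actual stage-to-target load logic
        print("loading batch for", context["ds"])

    with DAG(
        dag_id="edw_sales_daily_load",      # hypothetical DAG id
        start_date=datetime(2021, 1, 1),
        schedule_interval="0 6 * * *",      # daily at 06:00
        catchup=False,
    ) as dag:
        preprocess = PythonOperator(task_id="preprocess_files",
                                    python_callable=lambda: print("preprocessing feed files"))
        load = PythonOperator(task_id="load_target", python_callable=load_to_target)
        preprocess >> load                   # preprocess must finish before the load runs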

Creation of Confluence page documentation for the developed and tested work.

Document software defects using a bug tracking system and report defects to software developers. Document test procedures to ensure repeatability and compliance with standards.

Environment – AWS EC2, S3, GCP, RDS, MySQL, Snowflake Cloud, AWS Lambda, Python, Airflow DAGs, JIRA, Bitbucket, Confluence, SharePoint, AWS Glue jobs, JSON files, Business Objects

Client: Change Healthcare – Fort Worth, TX Mar ‘19 – May ‘20

ETL Developer

Project 1: UPBS – Unified Pharmacy Benefits Solutions – UPBS is a unified solution platform developed to fulfill the needs of individual areas such as eRX, MedRx, Medicaid, Medicare, Medispan and SelectRx. The individual processing of claims and the various forms of prescriptions are brought into a single platform so that data from all the above areas can flow into one solution across the PBM business for CHC. It is organized into different PODs, where each POD delivers its own set of deliverables, and all those pieces are connected in a UI-UX bridge: the UI through which unified PBS administrative users set up and configure commercial clients, payors and submitters.

It also maintains a reporting environment on AWS Redshift on top of the UI, where end business users can view their reports. All of the above PODs use the AWS cloud environment for processing the data.

Responsibilities:

Responsible for Agile application development, planning meetings, daily stand-up meetings, Sprint Retrospectives, Sprint Planning, story estimates, PI Planning, backlog grooming and customer-reported backlog items (bugs).

Work with Product Owners and Business Analysts to define acceptance criteria.

Working in all phases of the Software Testing Life Cycle (STLC), with considerable knowledge of the Software Development Life Cycle (SDLC), including requirements gathering, analysis/design, documentation, development and testing.

Developed queries against the AWS DynamoDB, AWS Redshift and SQL Server databases to find the correct data for testing and to validate the backend process of the developed code.

Extracted claims, members, providers, users, EOB, Programs, drug, pharmacy data in the form of NCPDP files and uploaded the claims data.

Worked on EDI files; familiar with X12 835/837 claim transactions between provider and payor and the EDI 999 acknowledgement.

Worked extensively on Medicaid / Medicare data to fulfill their needs.

Extensive knowledge of PBM business processes: members, prior authorizations, accumulations, copay, co-insurance, deductibles, sponsor/patient pay, out-of-pocket and family coverage, coupons, discounts, etc.

Responsible for data analysis, data validations, data quality, data comparisons and impact analysis, root cause analysis for deviations.

Writing complex T-SQL queries using joins, subqueries and correlated subqueries; expertise in SQL queries for cross-verification of data. Reviewed SQL for missing joins and constraints, data format issues, mismatched aliases and casting errors.

Worked with metadata-driven SSIS packages to pull data from different sources and load it into the data mart.

Experience in creating complex SSIS packages using proper control and data flow elements with error handling.

Validating full load, incremental loads, calculated fields and their logic used to load data among various reports.

Writing regression queries against updated stored procedures and validating that the data flow between source and target databases is as expected.
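
As a sketch of that kind of regression check (server, database, procedure and table names below are placeholders, not the actual project objects), running the updated stored procedure and comparing row counts between the legacy and new databases:

    import pyodbc

    LEGACY = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=legacy-db;DATABASE=PBM;Trusted_Connection=yes"
    TARGET = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=new-db;DATABASE=PBM;Trusted_Connection=yes"
    COUNT_SQL = "SELECT COUNT(*) FROM dbo.claims WHERE load_date = CAST(GETDATE() AS DATE)"

    def row_count(conn_str, query):
        with pyodbc.connect(conn_str) as conn:
            return conn.cursor().execute(query).fetchone()[0]

    def regression_check():
        # run the updated stored procedure on the new side (procedure name is hypothetical)
        with pyodbc.connect(TARGET) as conn:
            conn.cursor().execute("EXEC dbo.usp_load_claims_daily")
            conn.commit()
        legacy, new = row_count(LEGACY, COUNT_SQL), row_count(TARGET, COUNT_SQL)
        assert legacy == new, f"row count mismatch: legacy={legacy} new={new}"

    if __name__ == "__main__":
        regression_check()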

Working on JIRA stories to perform data setup in AWS DynamoDB, along with file uploads to S3 locations and related Lambda function testing.

Working on ETL data loads for pharmacy, chain and claim billing information across various claim statuses, making sure all programmable and calculated fields are coded as per the business logic in the requirements.

Complete E2E testing of Data Warehouse/Business Intelligence (DW/BI) applications (ETL).

Working primarily on API services to communicate with the claim processor and pricing orchestrator using the Postman REST API tool with different collections.
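
Purely as an illustration of that request/response flow (the endpoint URL, token and payload fields are hypothetical), the same call a Postman collection would make, issued from Python:

    import requests

    BASE_URL = "https://pricing.example.com/api/v1"   # placeholder endpoint

    def price_claim(claim):
        # POST the claim to the pricing orchestrator and return its priced response
        response = requests.post(
            f"{BASE_URL}/claims/price",
            json=claim,
            headers={"Authorization": "Bearer <token>"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        sample = {"claim_id": "C100", "ndc": "00002-8215-01", "quantity": 30, "days_supply": 30}
        print(price_claim(sample))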

Making sure the UI-UX environment feeds back to AWS DynamoDB and vice versa, and that the wireframe designs are aligned with the developed UI framework.

Automation of project setup and deployment through Jenkins and GIT for continuous Integration and continuous Deployment.

Document software defects using a bug tracking system and report defects to software developers. Document test procedures to ensure repeatability and compliance with standards.

Creation of Confluence pages for the developed and tested work. Involved in Bi-weekly technical demos.

Environment - SSMS, SQL Server, T-SQL, EDI, AWS Redshift, SQL Workbench, AWS DynamoDB, Postman, JIRA, Jenkins/Git, Swagger UI, Confluence, SharePoint, AWS Glue jobs, AWS Lambda

Project 2: SelectRx-PBM – SelectRx-PBM is the proprietary adjudication system for the commercial business. It was designed to provide a view into our adjudication system and its corresponding data warehouse. SelectRx-PBM mainly focuses on member eligibilities, promotions and coupons that are applicable to consumers. It also maintains payer and provider solutions. I am involved in various analysis and testing activities of the SelectRx modernization projects. Under the PBM Modernization project we are migrating data from MariaDB to SQL Server, building an enterprise DWH in the SSIS environment and generating organizational reports using SSRS.

Responsibilities:

Responsible for Agile application development, planning meetings, daily stand-up meetings, Sprint Retrospectives, Sprint Planning, story estimates and customer-reported backlog items (bugs).

Work with Product Owners and Business Analysts to define acceptance criteria.

Working in all phases of the Software Testing Life Cycle (STLC), with considerable knowledge of the Software Development Life Cycle (SDLC), including requirements gathering, analysis/design, documentation, development and testing.

Write complex SQL queries using joins, subqueries and correlated subqueries to retrieve and validate data between the legacy database and the new database.

Developed SQL queries against the MariaDB and Pervasive databases to find the correct data for testing and to test the backend process of the application.

Created SSIS packages to populate data from various data sources.

Creating and managing schema objects such as tables, views, indexes, stored procedures, and triggers & maintaining Referential Integrity.

Used DDL and DML for writing triggers, stored procedures to check the data entry and payment verification.

Responsible for T-SQL scripts, analyzing the transformation logic used in stored-procedure ETLs to automate the SelectRx ETL data testing.

Extracted claims, members, providers, plan, drug, pharmacy data in the form of NCPDP files and uploaded the claims data.

Worked on EDI files, familiar with ANSI ASC X12 837/835 Claim transactions and EDI 999 Acknowledgement.

Worked on 270/271 Healthcare Eligibility, Coverage and Benefit Inquiry and Response files.

Worked on claim data exchange between Surescripts and CHC for the PBM business in the format of EDI files.

Responsible for performing negative testing of the database using different scenarios to find and raise defects.

Validating full load, incremental loads, calculated fields and their logic used to load data among various reports.

Writing regression queries against updated stored procedures and validating that the data flow between source and target databases is as expected.

Performing cross verification of data and parallel testing in MariaDB and PervasiveDB.

Perform functional testing to identify various critical points in the application and automate them using Selenium WebDriver for the reporting portal.

Wrote and executed complex Selenium test scripts for automation testing of the web application using Selenium IDE to validate the reporting portal's styling and look.
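
A minimal Selenium WebDriver sketch of the kind of portal check described above; the URL and element locators are placeholders rather than the actual reporting portal:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://reports.example.com/login")            # hypothetical portal URL
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("***")
        driver.find_element(By.ID, "login-btn").click()
        # verify the report header renders with the expected styling class
        header = driver.find_element(By.CSS_SELECTOR, "h1.report-title")
        assert header.is_displayed(), "report title not rendered"
    finally:
        driver.quit()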

Conducted semi-complex technical analyses to design and implement test software (unit tests/automation) for software applications or software test tools that achieve the desired functionality. Troubleshoot application issues in production.

Supporting the development team in moving to a CI/CD (Continuous Integration and Continuous Delivery) process.

Running complex unit test scripts handed over by developers and writing JavaScript for different test scenarios using specific test data.

Applying programming skills to stand up virtual databases for validating data between source and target, using Docker as the automation platform.

Identifies the root cause of issues through simulations in dev or test environments and documents it for future reference.

Automation of project setup and deployment through Jenkins and GIT for continuous Integration and continuous Deployment.

Recognize and classify the plan for test scope, the test strategy high-level document and test data within the context of each sprint.

Performance and integration testing of the developed code, generating code quality reports using NeoLoad.

Report, prioritize, track, and remediate any bugs found in the testing process.

Coordinating with DBAs, Release Management and external teams for technical needs and to ensure the timely release of the project.

Environment - SSMS, SSRS, T-SQL, EDI files, MariaDB, Pervasive DB, PuTTY, JIRA, Jenkins, Git, Swagger UI, Selenium, NeoLoad

Client: Enterprise Holdings Inc. – St. Louis, MO May ‘18 – Mar ‘19

Data Engineer / Team Lead

Project: GDPR – General Data Protection Regulation – The project relates to protection of the PII data of customers who do not want their data maintained in our database after their rental transactions with Enterprise. The project focuses on data protection for customers who request to wipe out their data from our database, and reports back to the user on completion of their request.

Responsibilities:

Designed the data flow for the project and reviewed it with data architects. Responsible for setup of the dev environment and the required database objects.

Responsible for analysis of the specifications provided by the clients; develop, maintain and implement database table designs in both production and non-production environments based on business needs.

Responsible for working with tertiary groups such as Security, Architecture, IT Operations and QA to ensure quality design, delivery and adherence to corporate standards.

Work on day-to-day operational issues submitted by customers through tools like JIRA, UARS (User Access Request System) and Incidents.

Creation of database objects such as tables, views, materialized views, procedures and packages using Oracle tools like TOAD, PL/SQL Developer and SQL*Plus.

Used principles of normalization to improve performance. Involved in ETL code using PL/SQL to meet requirements for extraction, transformation, cleansing and loading of data from source to target data structures.

Partitioned the fact tables and materialized views to enhance the performance.

Worked on SQL*Loader to load data from flat files obtained from various facilities every day.

Created scripts to create new tables, views, queries for new enhancement in the application using TOAD.

Created indexes on the tables for faster retrieval of the data to enhance database performance.

Involved in data loading using PL/SQL and SQL*Loader calling UNIX scripts to download and manipulate files.
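
As a sketch of such a load step (written here as a Python wrapper rather than the original UNIX shell script; the connect string, control file and paths are placeholders), invoking SQL*Loader for each incoming flat file:

    import subprocess
    from pathlib import Path

    def run_sqlldr(data_file: Path):
        cmd = [
            "sqlldr",
            "etl_user/***@ORCLDB",                        # placeholder Oracle connect string
            f"control={Path('/etl/ctl/customer.ctl')}",   # placeholder control file
            f"data={data_file}",
            f"log={data_file.with_suffix('.log')}",
            f"bad={data_file.with_suffix('.bad')}",
            "errors=50",
        ]
        completed = subprocess.run(cmd)
        # SQL*Loader exits with 0 on success and 2 when some rows were rejected
        if completed.returncode not in (0, 2):
            raise RuntimeError(f"sqlldr failed for {data_file} (rc={completed.returncode})")

    if __name__ == "__main__":
        for feed in sorted(Path("/etl/incoming").glob("customer_*.dat")):
            run_sqlldr(feed)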

Testing the migration of LC data from Sum Total (i.e. Oracle) to DWH (Oracle) databases.

Generate Reports and Redaction requests on weekly basis for Customer data and various other business requirements.

Responsible for data analysis, data validations, data quality, data comparisons and impact analysis, root cause analysis for deviations.

Automated project setup and deployment through Jenkins and TFS Git for Continuous Integration and Continuous Deployment.

Writing complex SQL queries using joins, subqueries and correlated subqueries; expertise in SQL queries for cross-verification of data. Reviewed SQL for missing joins and constraints, data format issues, mismatched aliases and casting errors.

Participating in Peer reviews, design review, code reviews, mentor other developers to ensure the quality of code and design.

Responsible for documenting the artifacts, process improvements, tuning techniques, purge activities, Production issues.

Responsible for purge activities on weekly and monthly basis to save space on Teradata and responsible for performance improvement of the system.

Work on automating code to run hands-free without manual intervention; also monitoring daily loads and working on production issues to keep the production environment stable.

Research current technology trends to provide input on how they may influence overall enterprise architecture.

Environment - Informatica 10.2, ERWIN, Teradata 14.0, Teradata SQL Assistant, SQL Developer, Unix Shell scripting, JIRA, Jenkins, Git

Client: Charter Communications – St. Louis, MO Sep ‘17 – Apr ‘18

Sr. ETL Developer / Team Lead

Project: P-270 (Project 270) – This is a migration project related to merging TWC and BHC data with Charter data after the acquisition. It mainly focuses on merging all accounts, marketing, sales, billing and other data of TWC and BHC with the existing legacy Charter information. On top of the three organizations' merged data, it builds a single point of view and a common DWH.

Responsibilities

Coordinate with Business Analysts and Data Stewardship team to understand business requirements.

Designed data flow for the project and participated in Data model reviews.

Responsible for setting up the Dev environment from scratch using the ETL tool Informatica.

As a Team Lead, interacted with Business Analysts to understand the business requirements. Involved in analyzing requirements to refine transformations.

Responsible for Analysis of the specifications provided by the clients.

Involved in a dimensional logical model with 10 facts and 30 dimensions covering 500 attributes.

Worked with DBA to create the physical model and tables.

Involved in data validation of data loaded into WH tables using AWK, SED commands through shell scripts.

Preparation of HLD, project plan based on the business functional spec.

Review of ETL Detailed design documents for Mapping and DDL specification document for creation of tables, defining keys/constraints on tables and data types.

Created logical and physical data models using Erwin including the use of naming standards (.nsm) files and Global Domain Dictionaries.

Analysis of Data model to check table constraints and columns in Data mart

Extracting data from sources such as Oracle, mainframe DB2 and flat files using Informatica PowerCenter Designer and PowerExchange, transforming it using the business logic and loading it to the target warehouse.

Created Unix scripts using pmcmd for Informatica execution automation and scheduling.
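
For illustration (the actual automation was Unix shell scripting; service, domain, folder and workflow names here are placeholders), the pmcmd call wrapped in a small Python script:

    import subprocess
    import sys

    def start_workflow(folder, workflow):
        cmd = [
            "pmcmd", "startworkflow",
            "-sv", "INT_SERVICE",      # integration service (placeholder)
            "-d", "DOMAIN_DEV",        # domain (placeholder)
            "-u", "infa_user", "-p", "***",
            "-f", folder,
            "-wait",                   # block until the workflow completes
            workflow,
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout)
        if result.returncode != 0:
            sys.exit(f"workflow {workflow} failed: {result.stderr}")

    if __name__ == "__main__":
        start_workflow("P270_EDW", "wf_load_accounts_daily")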

Developed numerous reports and utilities using Awk, Sed.

Designing mappings as per the business requirements using Transformations such as Source Qualifier, Aggregator, Expression, Lookup, Filter, Sequence generator, Router, Union, Update strategy etc.

Coordination of system/Integration/UAT testing with other teams involved in project and review of test strategy

Created proper PIs (primary indexes), taking into consideration both the planned access of data and even distribution of data across all the available AMPs.

Wrote complex SQL queries using joins, subqueries and correlated subqueries; expertise in SQL queries for cross-verification of data.

Developed the Teradata Macros, Stored Procedures to load data into Incremental/Staging tables and then move data from staging into Base tables.
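
A hedged sketch, using the teradatasql driver and made-up table names, of the insert-new-rows half of that staging-to-base move (the real work was done in Teradata macros and stored procedures):

    import teradatasql

    STAGE_TO_BASE = """
        INSERT INTO edw.account_base (account_id, status, load_ts)
        SELECT s.account_id, s.status, CURRENT_TIMESTAMP
          FROM edw.account_stage s
         WHERE NOT EXISTS (SELECT 1
                             FROM edw.account_base b
                            WHERE b.account_id = s.account_id)
    """

    conn = teradatasql.connect(host="tdprod", user="etl_user", password="***")  # placeholder credentials
    try:
        cur = conn.cursor()
        cur.execute(STAGE_TO_BASE)   # move only rows not already present in the base table
    finally:
        conn.close()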

Reviewed SQL for missing joins and constraints, data format issues, mismatched aliases and casting errors.

Mainly focused on data comparison and anomalies between the legacy and BI environments.

Responsible for development of Teradata MLOAD, FLOAD and BTEQ scripts to load the data into the EDW.

Responsible for metadata analysis, data lineage, consistency, completeness, validation and accuracy.

Responsible for maintenance of Metadata repository as per data governance standards.

Incorporated Reusable objects for data validation to ensure data quality.

Helping the team fix technical issues, if any, and tuning database queries for better performance.

Keeping track of the CI list, maintaining versions and change requests.

Environment - Informatica 10.1, ERWIN, Oracle, Teradata 14.0, Teradata SQL Assistant, Jira, Jenkins, MicroStrategy, Unix Shell scripting, Control-M, Perfaware

Client: CVS Health – Richardson, TX Nov ‘16 – Aug ‘17

DWH Specialist

Project: HMI/SPEDM (Specialty Enterprise Data Mart) – This project is related to the PBM business. It mainly focuses on members, claims, prescriptions, orders, enrollments, referrals, pharmacy and payer data. Health management data includes specialty data, pharmacy claims data, medical claims and disease management. The pharmacy claim has information about the drug usage of the patient, whereas the medical claims have the diagnosis code, the tests carried out, or any medical procedures done for a patient.

The objective is to integrate the HMI data, which is on an Oracle system, onto the EDW2 (Teradata) system. Data integration is implemented using the ETL tools Informatica and SSIS. This enables reporting so business users can see metrics on a dashboard.

Responsibilities

Involved in Project planning, DEV Effort estimation.

Designed data flow for the project and participated in Data model reviews.

Created ETL Spec documents for ETL mappings development.

Analyzing and Profiling data from various data sources and legacy systems into TERADATA warehouse.

Architected/developed Informatica batch/real-time (CDC) processes feeding Informatica, serving as a single access point for customer data between applications.

Responsible for resolving recurring incidents permanently, performing break fixes.

Analyze and interpret all complex data on all target systems

Responsible for root cause analysis of the production issues

Creating and modifying MultiLoad scripts for Informatica using UNIX and loading data into the EDW.

Extracted claims, member, provider and plan data in the format of X12 837/835 files and uploaded the claims data to state portals in a secured way.

Tracking the end-to-end release cycle for break fixes: coding, testing, migration, deployment and support.

Data Cleansing / Standardization activities are performed on source data.

Worked on claim data exchange between Aetna and CVS Health for the PBM business in the form of EDI files.

Experience with provider, member and claims data processing using EDI and NCPDP files through secured channels.

Responsible for monthly extracts of provider, member and claims data, uploading them to state portals and FTPing the files in the specified formats using different tools such as sends+, Kleopatra and dist.

Responsible for encryption / decryption of NCPDP, EDI files before/after data upload to state portals.

Used Metadata manager for validating, promoting, importing and exporting repositories from development environment to testing environment.

Involved with Data Stewardship Team for designing, documenting and configuring Informatica Data Quality environment for management of data.

Extensively used SQL*Loader for loading data into tables.

Responsible for testing Teradata Objects in Preprod and BKP environments as per project needs.

Responsible for development of Teradata MLOAD, FLOAD and BTEQ scripts to load the data into the EDW.

Incorporated Reusable objects for data validation to ensure data quality.

Incorporated ABC data model for controlling data flow in EDW.

Responsible for the reconciliation and fallout process.

Automated Data warehouse and DataMart refreshes using Maestro.

Improved performance of the sessions by creating partitions.

Extensively worked with the Debugger for handling the data errors.

Defect Analysis and Defect Prevention Activities.

Worked on performance tuning in order to improve load timings.

Worked on UNIX scripts to move files across different environments.

Responsible for ON Demand loads as per business user request.

Responsible for loading data up to the package layer to help the reporting team.

Monitoring various loads such as daily, weekly, half-yearly and yearly loads using an incremental loading strategy.

Responsible for migrating the code between environments and Involved in UAT Testing.

Coordinate with all support teams like PST, Migration, QA, DBAs and Informatica Admins.


