Data Manager

Location:
Las Vegas, NV
Salary:
120000
Posted:
January 02, 2020

SWAPNA

Mobile: 702-***-**** Email: ada60y@r.postjobfree.com

PROFESSIONAL SUMMARY

Over 13 years of IT experience across the Software Development Life Cycle (SDLC).

Around 4 years of cloud experience on Microsoft Azure, using Azure Data Factory, Azure Data Lake Analytics, Azure Databricks, Git, Azure DevOps, and Azure SQL Data Warehouse.

Around 9 years of experience designing and developing ETL processes for data transformation using Informatica Power Center and SQL Server Integration Services (SSIS).

Experience creating reports, snapshots, and drill-down and drill-through reports using SQL Server Reporting Services (SSRS) and Power BI.

Over 3 years of experience in IBM Mainframes.

Business knowledge in domains such as Retail, Transportation, Energy & Utilities, Pharmaceutical, Health Insurance, and Sales.

Hands-on experience working with flat files, XML, COBOL files, and databases including Oracle, SQL Server, DB2, and Teradata.

Exposure to Hadoop and Big Data.

Extensive experience scheduling and supporting Informatica Power Center workflows.

Extensive experience with error handling and defect resolution in Informatica.

Performed unit testing to validate that data is mapped correctly, providing a qualitative check that data flows end to end and lands correctly in the target tables.

Strong analytical and conceptual skills in database design and development using Oracle.

Expertise in Database Programming (SQL, PL/SQL) using Oracle.

Core competencies in Python, Spark, U-SQL, HQL, COBOL, JCL, VSAM, SQL, DB2.

Extensive knowledge of IBM tools & utilities including File-Aid, IDCAMS, QMF, SCLM, Xpediter, Endevor, TSO, Control-M, SYNCSORT, SPUFI, Panvalet and File Master.

Extensively worked in Requirement Analysis, Design, Development, Testing and Implementation.

Worked extensively in the onsite-offshore delivery model, coordinating with clients/users and understanding client requirements.

Experience analyzing existing mainframe legacy systems and understanding their business functionality.

Expertise in analyzing functional and technical specifications based on the requirements.

An innovative and effective team player with good initiative.

Excellent Communication skills, organizational skills, analytical skills and strong interpersonal skills.

ACADEMICS:

Bachelor of Engineering in Computer Science, JNTU, India.

TECHNICAL SUMMARY:

Domain Expertise : Transportation, Energy & Utilities, Pharmacy, Health Insurance, Sales

Operating System : OS/390, MVS/ESA, WINDOWS 98/NT/XP/10, UNIX

Cloud Infrastructure : Azure Cloud, Azure Data Factory, Azure Data Lake Analytics, Azure Databricks, GIT, Azure DevOps, Azure SQL Data Warehouse

Languages : Python, Spark, HQL, U-SQL, SQL, PL/SQL, COBOL, JCL, XML, HTML

ETL Tools : Informatica Cloud, SSIS, Informatica Power center 6.x/7.x/8.x/9.x/10.x, Informatica Data Explorer

Reporting Tools : SSRS, Power BI, MicroStrategy

Databases : Oracle, SQL Server, DB2, Teradata, MS-Access

Scheduler : Control-D, Maestro

File System : VSAM

Librarian Products : ENDEVOR, PANVALET, SCLM

Software Utilities : Maximo, TOAD, Toad Data Modeler, SQL*Loader, Serena, ISPF, FILE-AID, XPEDITER, File Master, QMF, SPUFI, Control-M, SUPERC, FTP, SYNCSORT, TSO

PROFESSIONAL EXPERIENCE:

Client: Retail Business Services April 2019 – Present

Project: Perpetual Inventory

Role: Azure Lead

Responsibilities

Applied data analytics and engineering experience across multiple Azure services, such as Azure SQL, Azure SQL Data Warehouse, Azure Data Factory, and Azure Storage Accounts, for source stream extraction, cleansing, consumption, and publishing across multiple user bases.

Created Azure Data Factory pipelines to load flat-file and ORC data into Azure SQL.

Cloud-based report generation, development, and implementation using SCOPE constructs and Power BI; expert in U-SQL constructs for interacting with multiple source streams within Azure Data Lake.

Involved in data analysis; performed data quality checks and prepared data quality assessment reports.

Designed source-to-target mapping sheets for data loads and transformations.

Developed pipelines to transform data using activities such as U-SQL scripts on Azure Data Lake Analytics.

Transformed data using the Hadoop Streaming activity in Azure Data Factory.

Developed pipelines to load data from on-premises systems to the Azure cloud database.

Loaded JSON input files into Azure SQL Data Warehouse.

Developed pipelines in Azure Data Factory using Copy, Notebook, Hive, and U-SQL activities to load data.

Developed pipelines in Azure Data Factory that call Databricks notebooks to transform data for reporting and analytics (a sketch follows this list).

Developed Power BI reports on top of views in Azure SQL.

Scheduled pipelines in Azure Data Factory.
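For illustration, a minimal PySpark sketch of the kind of Databricks transformation notebook these pipelines call; the storage paths, table names, and credentials are hypothetical placeholders, not the project's actual objects.

# Minimal PySpark sketch of a transformation notebook that an ADF pipeline
# might call; all paths, table names, and credentials are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pi_inventory_transform").getOrCreate()

# Read raw ORC files landed in the data lake (hypothetical path).
raw = spark.read.orc("abfss://raw@examplelake.dfs.core.windows.net/inventory/")

# Basic cleansing and derivation before publishing.
curated = (
    raw.dropDuplicates(["store_id", "item_id", "snapshot_date"])
       .withColumn("load_ts", F.current_timestamp())
)

# Write to an Azure SQL staging table over JDBC (hypothetical connection).
(curated.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://example.database.windows.net:1433;database=exampledb")
    .option("dbtable", "stg.PerpetualInventory")
    .option("user", "etl_user")
    .option("password", "<secret>")
    .mode("append")
    .save())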

Environment: Azure Data Factory, Azure SQL Server, Azure Data Lake Analytics, Azure Databricks, Python, Spark, HQL, U-SQL, SQL, HDInsight, Azure SQL Data Warehouse, Git, Azure DevOps, Power BI, JSON.

Client: PA DMV June 2016 – March 2019

Project: Traffic operation Analytics (TOA)

Role: Azure Data Engineer

Responsibilities

Applied data analytics and engineering experience across multiple Azure services, such as Azure SQL, Azure SQL Data Warehouse, Azure Data Factory, and Azure Storage Accounts, for source stream extraction, cleansing, consumption, and publishing across multiple user bases.

Created Azure Data Factory pipelines to load flat-file data into Azure SQL.

Cloud-based report generation, development, and implementation using SCOPE constructs and Power BI; expert in U-SQL constructs for interacting with multiple source streams within Azure Data Lake.

Involved in data analysis; performed data quality checks and prepared data quality assessment reports.

Designed source-to-target mapping sheets for data loads and transformations.

Developed pipelines to transform data using activities such as U-SQL scripts on Azure Data Lake Analytics.

Transformed data using the Hadoop Streaming activity in Azure Data Factory.

Developed Informatica mappings to load data from on-premises systems to the Azure cloud database.

Loaded JSON input files into Azure SQL Data Warehouse.

Developed pipelines in Azure Data Factory using the Copy activity to load data.

Designed and developed stored procedures in SQL Server.

Developed pipelines in Azure Data Factory that call stored procedures to transform data for reporting and analytics (a sketch follows this list).

Developed Power BI reports on top of views in Azure SQL.

Scheduled pipelines in Azure Data Factory.

Performed daily health checks and monitored batch jobs.
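For illustration, a minimal Python sketch (using pyodbc) of invoking the kind of SQL Server stored procedure these pipelines call through the ADF Stored Procedure activity; the server, database, procedure, and parameter names are hypothetical.

# Minimal sketch of invoking a SQL Server stored procedure; connection details
# and object names are hypothetical, not the project's actual objects.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example.database.windows.net;DATABASE=toa;"
    "UID=etl_user;PWD=<secret>",
    autocommit=True,
)
cursor = conn.cursor()

# Transform staged traffic records into the reporting table for a given date.
cursor.execute("EXEC dbo.usp_LoadTrafficSummary @RunDate = ?", "2018-06-01")
conn.close()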

Environment: Azure Data Factory, Azure SQL Server, Azure Data Lake Analytics, Azure Databricks, Python, Spark, HQL, HDInsight, Azure SQL Data Warehouse, Git, Azure DevOps, Power BI, JSON, Informatica Power Center.

Client: NV Energy, Las Vegas, NV Sep 2015 – May 2016

Project: Operational Analytics

Role: Azure Data Lead

Responsibilities

Worked on the Regulatory Compliance IT team in a Data Architect role that involved data profiling, data modeling, and ETL.

Responsible for big data initiatives and engagements, including analysis, brainstorming, and architecture; worked with big data (including big data on the cloud), Master Data Management, and Data Governance.

Transformed data by running a Python activity in Azure Databricks.

Created Azure Data Factory pipelines to load flat-file and ORC data into Azure SQL.

Cloud-based report generation, development, and implementation using SCOPE constructs and Power BI; expert in U-SQL constructs for interacting with multiple source streams within Azure Data Lake.

Designed and developed SSIS packages.

Developed the long-term data warehouse roadmap and architecture, and designed and built the data warehouse framework per the roadmap.

Created Hive tables, loaded and analyzed data using Hive queries, and developed Hive queries to process data and generate results.

Developed pipelines in Azure Data Factory that call notebooks to transform data for reporting and analytics.

Designed and developed a data lake on Hadoop for processing raw and curated data via Hive.

Utilized Apache Spark with Python to develop and execute data processing jobs.

Used ETL/ELT processes with Azure SQL Data Warehouse, keeping data in Blob Storage with virtually no limit on data volume.

Performed data modeling; designed, implemented, and deployed high-performance custom applications at scale on Hadoop/Spark; and implemented data integrity and data quality checks in Hadoop using Hive scripts (a sketch follows this list).

Designed and developed T-SQL stored procedures to extract, aggregate, transform, and insert data, and to query dimension and fact tables in the data warehouse.

Coordinated with the client and business analysts to understand requirements and develop reports.
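For illustration, a minimal PySpark sketch of the kind of data quality check run against Hive tables; the database, table, and column names are hypothetical.

# Minimal PySpark data-quality sketch over a Hive table; names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("ops_analytics_dq")
         .enableHiveSupport()
         .getOrCreate())

meter_reads = spark.table("ops.meter_reads")

# Row count and null-key count as simple integrity checks.
dq = meter_reads.agg(
    F.count("*").alias("row_count"),
    F.sum(F.when(F.col("meter_id").isNull(), 1).otherwise(0)).alias("null_meter_ids"),
)
dq.show()

# Duplicate-key check on the natural key.
dupes = (meter_reads.groupBy("meter_id", "read_date")
         .count()
         .filter("count > 1"))
print("duplicate keys:", dupes.count())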

Environment: Azure Data Factory, Azure SQL Server, Azure Data Lake Analytics, Azure Databricks, Python, Spark, HQL, U-SQL, SQL, HDInsight, Azure SQL Data Warehouse, Git, Azure DevOps, Power BI, JSON.

Client: NV Energy, Las Vegas, NV June 2012 – Oct 2013

Project: Enterprise Work and Asset Management (EWAM) - Application Support

Role: Informatica Lead

Responsibilities

Initial analysis, routing and response to service calls related to BI, Informatica and MicroStrategy.

Incident and defect resolution.

Technical support for data fixes as required by the business for Business Intelligence.

Business consulting i.e., business support for queries on Business Intelligence.

System administration, i.e., support for completing activities that include integrating Informatica and Business Intelligence with Maximo.

Coordination with the EWAM core team and business units as required for requirements, solution, testing and training.

Monitoring and resolution of batch processing errors in Informatica.

Documentation as required by the agreed Application Support process.

Leading a team of developers working from offshore.

Design and modeling of enterprise data warehouse.

Developed simple and complex ETL mappings for loading Slowly Changing Dimensions (SCDs) and aggregated fact tables.

Created database tables, views, indexes, synonyms, triggers, functions, procedures, cursors and packages.

Created test cases for Unit testing and System testing.

Environment: Informatica Power Center 8.6.1/9.1, Informatica Data Explorer, Oracle 10g, TOAD, SQL, PL/SQL, SQL*Loader, Toad Data Modeler, Maximo, Serena, MicroStrategy, Windows 7.

Client: NV Energy, Las Vegas, NV April 2011 – May 2012

Project: Enterprise Work and Asset Management (EWAM)

Role: Informatica Lead

Responsibilities

Leading a team of developers working from offshore.

Thorough Analysis of existing source systems and target systems.

Profiling the data using Informatica Data Explorer and applying cleansing rules based on the profiling report.

Created the data migration strategy and approach for migrating capital assets and their operations and maintenance data from the source system to the staging area, and from the staging area to the new asset management system, IBM Maximo.

Designed the data migration solution and created the necessary technical specification documents.

Developing and reviewing Informatica ETL mappings, sessions and workflows based on the technical specification document.

Creation of MEA interface components for loading data into MAXIMO.

Design and modeling of enterprise data warehouse.

Design and development of end-to-end ETL process.

Developed simple and complex ETL mappings for loading Slowly Changing Dimensions (SCDs) and aggregated fact tables.

Developed incremental load logic in Informatica to extract data from source to staging and from staging to the target data warehouse (an illustration follows this list).

Created schedules for daily workflow runs to populate the data warehouse.

Improved the Performance of ETL sessions for migration of data without affecting business continuity.

Created database tables, views, indexes, synonyms, triggers, functions, procedures, cursors and packages.

Created test cases for Unit testing and System testing.
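For illustration, a minimal Python sketch of the watermark-style incremental extraction described above; the actual logic was built in Informatica mappings and sessions, and the control table, source table, and connection details here are hypothetical.

# Illustrative watermark-based incremental extraction (source -> staging);
# the real implementation lives in PowerCenter mappings, names are hypothetical.
import cx_Oracle

conn = cx_Oracle.connect("etl_user", "<secret>", "examplehost/ORCL")
cur = conn.cursor()

# 1. Read the last successful extraction watermark from a control table.
cur.execute("SELECT last_run_ts FROM etl_control WHERE mapping_name = :m",
            m="m_asset_load")
(last_run_ts,) = cur.fetchone()

# 2. Pull only rows changed since the watermark.
cur.execute("""
    SELECT asset_id, asset_desc, status, last_update_ts
      FROM src_assets
     WHERE last_update_ts > :wm
""", wm=last_run_ts)
changed_rows = cur.fetchall()
print("rows to stage:", len(changed_rows))
conn.close()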

Environment: Informatica Power Center 8.6.1/9.1, Informatica Data Explorer, Oracle 10g, TOAD, SQL, PL/SQL, SQL*Loader, Toad Data Modeler, Maximo, Windows XP.

Client: CVS Caremark, Dallas, TX June 2010 – Sep 2010

Project: Operations EDW

Role: Informatica Developer

Responsibilities

Extensively used the Informatica client tools Designer, Workflow Manager, Workflow Monitor, and Repository Manager in Informatica Power Center 8.6.1.

Developed and implemented ETL processes using the Informatica client tools Source Analyzer, Warehouse Designer, Mapping Designer, Mapplet Designer, and Transformation Developer.

Performed data manipulations using various Informatica transformations, including Joiner, Rank, Router, Expression, Lookup, Aggregator, Filter, Update Strategy, and Sequence Generator.

Used Informatica Workflow Manager and the Informatica server to read data from sources, write to target databases, and manage and monitor sessions and tasks; implemented ETL for multiple databases.

Used UNIX shell scripts for scheduling Informatica sessions (see the sketch after this list).

Used the Maestro scheduling tool for monitoring jobs.

Used Workflow Manager to create, validate, test, and run sequential and concurrent batches and sessions, scheduling them to run at specified times with the required frequency.

Debugged invalid mappings using breakpoints and tested Informatica sessions, workflows, and target data.
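A minimal Python sketch of how such a scheduled workflow start could be wrapped (the project itself used UNIX shell scripts with Maestro); the service, domain, folder, and workflow names are hypothetical, and exact pmcmd options vary by PowerCenter version.

# Illustrative wrapper that starts an Informatica workflow via pmcmd;
# all names and credentials are hypothetical placeholders.
import subprocess

cmd = [
    "pmcmd", "startworkflow",
    "-sv", "IS_EDW",            # integration service (hypothetical)
    "-d", "Domain_EDW",         # domain (hypothetical)
    "-u", "etl_user", "-p", "<secret>",
    "-f", "OPS_EDW",            # repository folder (hypothetical)
    "-wait",                    # block until the workflow finishes
    "wf_load_claims_daily",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.returncode, result.stdout)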

Environment: Informatica Power Center 8.6.1, Oracle, TOAD, SQL, PL/SQL, SQL*Loader, Maestro, DB2, UNIX, Windows XP.

Client: Excellus BCBS, Rochester, NY May 2008 – Mar 2009

Project: Late Enrollment Payment (LEP)

Role: System Analyst

Responsibilities

Involved in maintenance, enhancement, and testing of the backend business processes of Medicare subsystems: Provider, Claims, and Third Party Liability (TPL) Management.

Extensively used the Informatica client tools Designer, Workflow Manager, Workflow Monitor, and Repository Manager in Informatica Power Center 7.1/8.x.

Development and Modification of mappings, sessions and workflows as per design document.

Developed and Implemented ETL processes using Informatica client tools - Source Analyzer, Warehouse designer, Mapping designer, Transformation Developer.

Used Informatica Workflow Manager and the Informatica server to read data from sources, write to target databases, and manage and monitor sessions and tasks; implemented ETL for multiple databases.

Extensively utilized the Debugger utility to test the mappings.

Created sessions and workflows to run with the logic embedded in the mappings using Workflow Manager.

Coordination with clients for issues resolutions and client confirmations.

Reviewed code and facilitated code walkthroughs of offshore deliverables.

Performed code reviews and handled change requests.

Delivered code per quality and coding standards and prepared unit test cases.

Environment: Informatica Power Center 7.1/8.x, Oracle 9i, TOAD, PL/SQL, SQL, UNIX, SQL*Loader, Windows 2000, MS Office.

Client: Humana, Louisville, KY July 2007 – Apr 2008

Project: Claims Adjudication System (CAS)

Role: Informatica Developer

Responsibilities:

Used the ETL tool Power Center 7.1 to extract data from flat files and Oracle sources collected from upstream databases and load it into the target Oracle database.

Performed code reviews and handled change requests.

Designed and created complex SCD Type II mappings using transformations such as Expression, Joiner, Aggregator, Lookup, and Update Strategy.

Used Informatica Workflow Manager and the Informatica server to read data from sources, write to the target Oracle database, and manage and monitor sessions and tasks; implemented ETL for Oracle.

Responsible for monitoring all the sessions that are running, scheduled, completed and failed.

Studied Session Log files to correct errors in mappings and sessions.

Created sessions and workflows to run with the logic embedded in the mapping using workflow manager.

Environment: Informatica Power Center7.1, Oracle 9i, PL/SQL, SQL, Windows NT, MS Office.

Client: Blue Cross Blue Shield, Little Rock, AR May 2006 – June 2007

Project: Health Advantage Members

Role: Programmer Analyst

Responsibilities:

Coded new programs and modified existing programs per specification requirements in VS COBOL II using DB2, VSAM files, and flat files.

Worked on tasks related to complex claim-modules and Provider sub-system.

Worked on the Provider module, correcting certain provider-related validations.

Coded one-time programs to fix bugs related to provider and NPI fields.

Developed and enhanced batch modules in COBOL/DB2 in the Endevor environment.

Debugged batch modules using the Xpediter and File-AID tools, and performed unit testing through batch and system testing.

Created the necessary libraries, datasets, GDGs, control cards, JCLs, procedures, copybooks, and VSAM files for the project.

Executed FTP jobs to transfer the files between external agencies interacting with the system.

Used IBM transfer utilities to move tested elements into production.

Environment: MVS Z/OS, COBOL, JCL, DB2, Oracle, VSAM, File-AID, Endevor, Xpediter, TSO, SPUFI, Microsoft word, Microsoft Excel.

Client: Philip Morris, India Apr 2004 – May 2006

Project: Application Maintenance Support

Role: Programmer Analyst

Responsibilities:

Involved in analysis, design, and development of new programs and enhancement of existing system.

Responsible for the development, unit testing, and debugging of each program.

Involved in user support activities using the Vantive application, the problem-reporting system.

Coded new programs and modified existing programs per specification requirements.

Participated in code reviews and test reviews.

Adhered to CMM-level norms and processes for documentation.

Extensively used XPEDITER for debugging programs.

Environment: IBM MF, COBOL, JCL, VSAM, File-AID, Endevor, Xpediter, SPUFI, ISPF, Microsoft word, Microsoft Excel.

Client: Tata Iron & Steel Co, Ltd (TISCO), India Oct 2003 – Apr 2004

Project: Human Resource Information Area (HRIA).

Role: Developer

Responsibilities:

Coded programs and modules per the specifications using COBOL, VSAM files, and flat files.

Performed reviews in each phase using Xpediter.

Unit tested and reviewed the programs.

Prepared JCLs and conducted unit tests.

Maintaining the data for audit purposes.

Environment: COBOL, VSAM, JCL, TSO/ISPF, SQL, File-AID, Xpediter, SPUFI.


