

Location:
Riverview, FL
Posted:
March 26, 2021


Resume:

Venkata Bonam

Phone: 678-***-****

Email: ***********.*****@*****.***

Professional Summary

Around 6+ years of professional experience in requirements gathering, analysis, design, development, and testing of Data Warehousing and Business Intelligence applications, Big Data solutions, and data integration systems, tackling challenging architectural and scalability problems in finance.

Experience handling multiple projects; proactive and adaptable. Self-motivated, organized team player with strong problem-solving and analytical skills and total commitment to organizational goals.

Excellent analytical, problem-solving, communication, and interpersonal skills with the ability to interact with individuals at all levels.

Experienced with Data Warehousing applications and directly responsible for building robust, fault-tolerant ETL processes to load data from multiple sources into an enterprise Data Warehouse using multi-dimensional models.

Experienced in designing and developing ETL processes to read and write data to cloud platforms such as Salesforce.com, Azure, and AWS.

Excellent understanding of Relational Database Systems, Normalization, logical and physical data modeling for OLTP and dimensional modeling for OLAP.

Experience developing applications for loading/streaming data into NoSQL databases (HBase and Cassandra) and into HDFS.

In-depth knowledge and hands-on experience in Performance Tuning, Query Optimization and SQL Profiler for debugging SQL performance issues.

Strong SQL development skills, including writing dynamic queries, sub-queries, and complex joins, and creating complex stored procedures, functions, triggers, user-defined functions, views, and cursors.

Experienced in using Source Code Version Control Systems like Team Foundation Server (TFS), Informatica Version Control and GIT.

Extensive experience with Informatica ETL tools such as Power Center 10.2.0/10.1.1/9.x/8.x, Power Exchange, Data Validation Option (DVO), Designer, Workflow Manager, Workflow Monitor, Repository Manager, and Repository Server Administration Console.

Experience in job scheduling using IBM Tivoli, Tidal, and Autosys.

Experience creating simple and complex UNIX shell scripts to perform various operations such as pre- and post-load file operations, automation of ETL jobs based on control tables, parsing of error log files, etc.

Good exposure to Mainframe Systems and experience in handling COBOL files.

Involved in various stages of testing (Unit, System, Integration, and User Acceptance Test).

Strong experience in documenting processes, analyzing business requirements, working with requirements traceability matrices, re-engineering business processes, and design.

Well experienced working in Agile and Waterfall development methodologies.

Strong understanding of fundamentals and hands-on experience with Kafka and Apache Hadoop components such as HDFS, YARN, Pig, Hive, Spark, Spark SQL, Spark Streaming, MapReduce, and Sqoop.

Involved in building POCs using Python and Spark SQL for faster migration and processing of large data sets, and explored optimizations using SparkContext, Spark SQL, pair RDDs, and Spark on YARN.

Education:

Master of Science in Engineering, Wright State University, Dayton, Ohio

Certifications

Cloudera Certified Spark and Hadoop Developer (License ID: 100-024-126)

Technical Skills:

ETL Tools : Informatica Intelligent Cloud Services (IICS), Informatica Power center 10.2/9.x, Power Exchange, Informatica Data Quality (IDQ), SSIS 2013/2016

RDBMS : Oracle, SQL Server, Teradata, DB2.

NOSQL Database : Apache HBase, Cassandra

Languages : Python, Java, SQL, PL/SQL, T-SQL, Shell and Batch scripting.

DB Tools and Utilities : TOAD, SQL Server Management Studio, SQL Developer, PostgreSQL

Web Technologies : XML, REST, SOAP, WSDL, JSON

Data Visualization tools : SSRS, Tableau, PowerBI

Scheduling Tools : Autosys, Control-M, IBM-Tivoli.

Other Tools & Technologies : Putty, WinSCP, Jenkins, GitHub, Bitbucket.

Methodologies : Agile, Waterfall

Collaboration Tools : SharePoint, Wiki, Confluence, Team Foundation Server (TFS), JIRA

Cloud Technologies : Salesforce.com, Azure, AWS

Big Data Technologies : Cloudera Distribution, HDFS, Spark Core, Spark SQL, Spark Streaming, Apache Hive, Pig, PostgreSQL

Professional Experience:

Wells Fargo N.A, Charlotte, NC Duration: 03/2020 – Present

Role: Sr. Data Engineer/ETL

Perform deep-dive analysis of different data source systems to identify key information and key data elements to build fault-tolerant, low-latency integration solutions.

Develop sustainable, data-driven solutions with current and next-gen data technologies to meet the needs of the organization and business customers.

Involved in developing Spark code using Python and Spark SQL for faster testing and processing of data, and explored optimizations using SparkContext, Spark SQL, pair RDDs, and Spark on YARN.
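
As a minimal, hedged sketch of this kind of PySpark and Spark SQL processing (the file path, schema, and column names below are hypothetical, not taken from the project):

    # Minimal PySpark sketch: load a delimited extract, register it as a view,
    # and aggregate it with Spark SQL. Paths and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("account_rollup_poc").getOrCreate()

    accounts = (spark.read
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("/data/landing/accounts.csv"))   # hypothetical landing path

    accounts.createOrReplaceTempView("accounts")

    # Spark SQL aggregation over the temporary view
    daily_balances = spark.sql("""
        SELECT account_id,
               balance_date,
               SUM(balance_amt) AS total_balance
        FROM accounts
        GROUP BY account_id, balance_date
    """)

    daily_balances.write.mode("overwrite").parquet("/data/curated/daily_balances")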

As a member of the Data Engineering team, worked on the design and build of integration solutions to integrate various heterogeneous sources with Salesforce.com in the cloud.

Work on data enhancement and maintenance activities, including mining, profiling, tuning, and modifying existing solutions, and re-engineered existing ETL for better performance.

Anticipate and mitigate risks, escalate issues appropriately while keeping all necessary parties informed, and formulate and implement workarounds, often in a critical time frame.

Design and develop various automated processes and schedule them to execute in a timely manner to meet downstream SLAs.

Responsible for maintaining quality reference data in Oracle by performing operations such as cleansing and transformation and ensuring integrity in a relational environment.

Developed the Cassandra data model by considering the Reporting needs and the source RDBMS data model.
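
Purely to illustrate query-first Cassandra modeling, a hedged sketch using the DataStax Python driver; the keyspace, table, and columns below are hypothetical, not the actual reporting model:

    # Hedged sketch of a query-first Cassandra table using the DataStax Python
    # driver (cassandra-driver). All names are hypothetical placeholders.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])   # hypothetical contact point
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS reporting
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)

    # Partition by account and cluster by transaction date so the dominant
    # reporting query (latest transactions per account) is a single-partition read.
    session.execute("""
        CREATE TABLE IF NOT EXISTS reporting.account_transactions (
            account_id text,
            txn_date   date,
            txn_id     timeuuid,
            amount     decimal,
            PRIMARY KEY ((account_id), txn_date, txn_id)
        ) WITH CLUSTERING ORDER BY (txn_date DESC, txn_id DESC)
    """)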

Environment: Informatica Power Center 10.2.0 HF2, Azure SQL database, MS SQL Server 2017/2013 Management Studio, Cloudera Distribution, Spark 3.0.1, Python, PostgreSQL 9.x, Salesforce.com, Cassandra, Hive, SQL Server, Oracle 12c,11g, DB2, XML Files, JSON, Autosys, Shell Scripting, UNIX, Windows.

CKE Restaurants Holdings Inc., Nashville, TN Duration: 08/2019 – 03/2020

Role: Data Engineer/ETL

Designed and developed Cash Reconciliation process for accounting department at CKE to reduce manual intervention.

Worked with various delivery vendors to find out key data elements and integrated them with CKE system.

Built Multi-Dimension model data warehouse to reconcile cash and sales of corporate CKE stores all over the world.

Built Data pipelines by creating Cloud ETL mappings in Informatica Intelligent Cloud Service and published them to be used in API calls.

Introduced and implemented design standards for ETL processes to enhance performance and audit capability.

Developed cloud ETLs for data ingestion into the Azure cloud database.

Worked on a POC to integrate an Amazon S3 bucket with Informatica Intelligent Cloud Services (IICS) for processing multiple files in parallel.
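
The POC itself was built in IICS; purely as a rough analogue of the same idea outside IICS, the hedged Python sketch below lists files under an S3 prefix with boto3 and processes them in parallel (bucket name, prefix, and per-file processing are hypothetical):

    # Hedged sketch (not the IICS implementation): list objects under an S3
    # prefix and process them in parallel. Bucket, prefix, and the per-file
    # processing are hypothetical placeholders.
    from concurrent.futures import ThreadPoolExecutor
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-landing-bucket"   # hypothetical bucket name

    def process_object(key: str) -> str:
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        # placeholder transformation: count records in the file
        return f"{key}: {len(body.splitlines())} rows"

    response = s3.list_objects_v2(Bucket=BUCKET, Prefix="incoming/")
    keys = [obj["Key"] for obj in response.get("Contents", [])]

    with ThreadPoolExecutor(max_workers=8) as pool:
        for result in pool.map(process_object, keys):
            print(result)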

Wrote multiple queries and views on the fly to populate business reports on request.

Introduced GitHub as a version control plugin for IICS.

Wrote various PowerShell scripts to handle pre- and post-data-load tasks.

Involved in performance optimization of IICS jobs and designed efficient queries to retrieve data from the Azure cloud database.

Environment: Informatica Intelligent Cloud Services (IICS), Azure SQL Database, Azure Platform, Amazon EC2/S3, MS SQL Server 2017 Management Studio, PowerBI, SQL Server, Oracle 12c, XML Files, PowerShell Scripting, GitHub, Windows.

Wells Fargo N.A, Charlotte, NC Duration: 05/2018 – 08/2019

Role: Sr. Integration Developer

Collaborated with various business teams to understand requirements, and wrote various complex queries to extract and analyze critical data elements.

Based on the business requirement, performed in-depth analysis of source data structures and designed source to destination mappings and transformation rules.

Performed data Quality analysis on the source system data, designed, and developed reusable mappings known as Mapplets using Informatica Designer to conduct data quality checks before loading the data into respective data marts.

Used parallel processing Concepts to process high volume data sets like financial accounts and transactions data.

Developed tables, Views, Materialized views, Stored Procedures, Functions, Indexes (Clustered and Non-Clustered index) and Triggers using SQL Server Management Studio.

Worked on applying fixes and new enhancements to the existing SSIS packages that extract data from various source systems and load it into landing tables.

Collaborated in dramatically increasing OLTP transactions on systems handling several billion transactions weekly through the implementation of advanced database techniques such as join optimization, table structure and fragmentation analysis, bulk inserts, analysis of indexes and locks, creation of custom timers to detect bottlenecks.

Worked extensively on SQL scripts using local and global temp tables, variables and table variables, and Common Table Expressions (CTEs) as per requirements and convenience.

Worked on troubleshooting existing SQL statements and Scripts to provide efficient performance solutions as a part of Performance tuning activity.

Worked on designing and building multiple integrations at the same time, and met deadlines for deliverables and projects impacting numerous systems.

Optimized data warehouse performance by dealing with any data conflicts that arose and keeping data definitions up to date.

Based on downstream application requirements, developed JIL scripts to create Autosys jobs and boxes to schedule and run ETL jobs based on dependencies.

Migrated over one terabyte of Wells Fargo finance data (Oracle and SQL Server DB) to Salesforce Financial Services Cloud (FSC).

Involved in designing and developing tables in HBase and stored aggregated data from Hive Tables to HBase.

Developed shell scripts to automate various application processes and scheduled those scripts to execute in a timely manner to meet downstream SLAs.

Contributed to and collaborated with the team to fulfill project goals by participating in team activities based on Agile methodologies.

Using Salesforce Data Loader, performed manual data operations like inserts and updates to Salesforce Objects on Test sandbox.

Assisted QA team in Creating Testing Requirements Document (TRDs) and Test Case scenarios to test functional changes to the system/application as per the project requirements.

Environment: Informatica Power Center 10.2.0, IICS, SQL Server Integration Services (SSIS), MS SQL Server 2017/2013 Management Studio, Salesforce.com, Hive, HBase, SQL Server, Oracle 12c,11g, DB2, XML Files, JSON, Autosys, Shell Scripting, UNIX, Windows.

Waddell & Reed Financial Inc., Overland Park, Kansas Duration: 04/2016 – 05/2018

Role: ETL/ELT Developer

Involved in all the phases of SDLC from requirement gathering to production implementation.

Using pre- and post-execution scripts, implemented audit checks before and after data loads, which reduced manual work.
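
A hedged sketch of the kind of post-load audit check described here, assuming a SQL Server source reachable through pyodbc; the DSNs and table names are hypothetical:

    # Hedged sketch of a post-load audit: compare staging and target row counts
    # and fail the job if they diverge. DSNs and table names are hypothetical.
    import sys
    import pyodbc

    def row_count(conn_str: str, table: str) -> int:
        conn = pyodbc.connect(conn_str)
        try:
            return conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        finally:
            conn.close()

    STG_CONN = "DSN=staging_db"     # hypothetical ODBC DSNs
    DWH_CONN = "DSN=warehouse_db"

    stg_rows = row_count(STG_CONN, "stg.daily_positions")
    dwh_rows = row_count(DWH_CONN, "dwh.fact_daily_positions")

    if stg_rows != dwh_rows:
        print(f"AUDIT FAILED: staging={stg_rows} target={dwh_rows}")
        sys.exit(1)
    print(f"Audit passed: {stg_rows} rows reconciled")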

Worked on data enhancement and maintenance activities, including tuning and modifying stored procedures for code enhancements, and re-engineered existing ETL for better performance.

Based on the requirements, used Sqoop to import data from SQL Server to HDFS in various load types such as incremental, column-based, and full refresh.

Created managed, external, partitioned, and bucketed tables in Hive based on the different file types in HDFS.
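
To illustrate these Hive table layouts, a hedged sketch issued through Spark SQL with Hive support; database, table, column names, and locations are hypothetical (a bucketed layout would add CLUSTERED BY ... INTO n BUCKETS in the HiveQL, omitted here):

    # Hedged sketch of managed vs. external/partitioned Hive tables, issued
    # through Spark SQL with Hive support. All names and paths are hypothetical.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive_ddl_sketch")
             .enableHiveSupport()
             .getOrCreate())

    spark.sql("CREATE DATABASE IF NOT EXISTS edw")

    # Managed table: Hive owns both the metadata and the data files.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS edw.positions_managed (
            account_id   STRING,
            position_amt DECIMAL(18,2)
        ) STORED AS ORC
    """)

    # External, partitioned table: dropping it leaves the HDFS files in place,
    # and each load_date value lands in its own partition directory.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS edw.positions_ext (
            account_id   STRING,
            position_amt DECIMAL(18,2)
        )
        PARTITIONED BY (load_date STRING)
        STORED AS PARQUET
        LOCATION '/data/edw/positions_ext'
    """)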

Wrote complex SQL queries including inline queries and subqueries for faster data retrieval from multiple tables.

Created complex ETL mappings that involved extracting data from various heterogeneous data sources into staging area.

Built ETL solutions to extract and transform data from legacy mainframe systems.

Worked with EBCDIC (Extended Binary Coded Decimal Interchange Code) and ASCII data formats.

Led migration of repository objects, ETL components, services, and scripts from the test environment to the production environment.

Used Pig to do transformations, event joins, filtering, cleansing and some pre-aggregations before storing the data into HDFS.

Using the Control-M scheduler, created file transfer jobs and scheduled them to transfer output files generated by Informatica workflows to the respective destination servers.

Worked with production issues like bottlenecks, data validation, report errors and provided appropriate solutions.

Environment: Informatica Power Center 10.1.1/9.6.1, Informatica Power Exchange, Mainframe systems, DB2, SQL Server 2012, SQL Server Management Studio 2012, SSRS, Sqoop, Hive, Pig, HDFS, Cloudera Distribution, SQL Server, Oracle 12c, Python, T-SQL, VSAM files, XML, Control-M, Windows.

American Automobile Association (AAA), Dearborn, MI Duration: 08/2015 – 03/2016

Role: Production Support Analyst

Identified manual processing deficiencies, and instituted automation techniques which reduced manual labor and increased workflow efficiency.

Analyzed existing job schedules and their dependencies and implemented required modifications to reduce batch processing time.

Analyze production issues to determine root cause and provide fix recommendations to the development team.

Consult with users, management, and technical personnel to clarify business issues, identify problems and suggest changes/solutions.

Worked on Change Management tools to track all incidents and changes and to plan production changes as per the change management policies.

Anticipate and mitigate risks, escalate issues appropriately while keeping all necessary parties informed, and formulate and implement workarounds, often in a critical time frame.

Made use of ServiceNow to create Incident tickets to move ETLs and Tidal jobs from the TEST environment to the LOAD environment, and Change tickets from the LOAD environment to the PROD environment.

Developed new ETLs and re-engineered existing ETLs for better performance and more efficient data pipelines.

Validate Informatica changes for every release using Informatica Data Validation Option (DVO).

Worked with Informatica Data Quality (IDQ) for cleansing and applying business rules.

Attended daily Scrum meetings as a support team member and assisted other team members with performance- or dependency-related impediments.

Environment: Informatica Power Center 9.5/9.1, Informatica Power Exchange, Informatica Data Quality (IDQ), Mainframe Systems, SQL Server 2012 & 2008, PL/SQL, Oracle, DB2, Flat Files, JSON, Windows, MS Outlook, Autosys Scheduler, JIRA, ServiceNow, Visio, Word, Excel and PowerPoint.

Sonic Corporation Inc., Oklahoma City, OK Duration: 06/2014 – 06/2015

Role: ETL Developer

Worked with business analysts to understand and gather requirements and create ETL design documents.

Based on the requirements, developed ETL mappings to extract raw material cost information from the DB2 source system and load it into the SQL Server staging area.

Extensively made use of Informatica Power Center tools - Source Analyzer, Warehouse Designer, Mapping Designer, Workflow Manager and Workflow Monitor.

As a part of the star schema data model, loaded point-of-sale data into the EDW using SCD (Type 1/Type 2/Type 3) methods.
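
The SCD loads themselves were implemented in Informatica Power Center; purely as an illustration of Type 2 semantics, a hedged PySpark sketch with hypothetical dimension and column names:

    # Hedged PySpark sketch of SCD Type 2 semantics only; the actual loads were
    # built in Informatica Power Center. Table and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

    # Existing dimension: one current row per store, history kept via eff_from/eff_to.
    dim = spark.createDataFrame(
        [(1, "Store 1", "Tulsa", "2014-01-01", "9999-12-31", True)],
        ["store_id", "store_name", "city", "eff_from", "eff_to", "is_current"])

    # Incoming extract: store 1 changed city, store 2 is brand new.
    src = spark.createDataFrame(
        [(1, "Store 1", "Oklahoma City"), (2, "Store 2", "Dallas")],
        ["store_id", "store_name", "city"])

    load_date = F.lit("2015-01-01")
    current = dim.filter("is_current")
    history = dim.filter(~F.col("is_current"))

    src2 = (src.withColumnRenamed("store_name", "src_store_name")
               .withColumnRenamed("city", "src_city"))

    # Current rows whose tracked attribute changed.
    changed = (current.join(src2, "store_id")
                      .filter(F.col("city") != F.col("src_city")))

    # Type 2: close out the old version of each changed row...
    expired = (changed.select("store_id", "store_name", "city", "eff_from")
                      .withColumn("eff_to", load_date)
                      .withColumn("is_current", F.lit(False)))

    # ...and open a new version, plus rows for brand-new business keys.
    new_versions = changed.select("store_id",
                                  F.col("src_store_name").alias("store_name"),
                                  F.col("src_city").alias("city"))
    brand_new = src.join(current, "store_id", "left_anti")

    new_rows = (new_versions.unionByName(brand_new)
                .withColumn("eff_from", load_date)
                .withColumn("eff_to", F.lit("9999-12-31"))
                .withColumn("is_current", F.lit(True)))

    unchanged = current.join(changed.select("store_id"), "store_id", "left_anti")
    result = history.unionByName(unchanged).unionByName(expired).unionByName(new_rows)
    result.orderBy("store_id", "eff_from").show()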

Created stored procedures and functions in the SQL Server DB to extract Supply Chain data by joining various detail tables.

Tuned the existing mappings for Optimum Performance.

Made use of pmcmd commands to start, stop, or abort workflows and tasks from the command line.

Wrote various shell scripts to move files from one location to another and clean up older files in the archive folder based on the populated date.

Led the integration of GitHub with Informatica for version control.

Involved in the development of new reports/Dashboards using Tableau and relevant data sources like Excel, SQL server etc.

Participated in daily Scrum meetings to update work progress as a part of Agile Project Methodology.

Monitored Autosys jobs on a regular basis and, in case of any failures, provided a detailed report on the failure and a solution to resolve it.

Environment: Informatica Power Center 9.1/9.x/8.x, SQL Server, SQL Server Management Studio, SQL, Shell Scripting, Developer, UNIX, Tableau, Bitbucket, GitHub, Agile, Autosys, Windows 7.


