
Data Developer



Chipo Tony Tsikada

Tel: 469-***-****

Email: *********@*****.***

**** **** ****

Grand Prairie, TX, 75052

Summary

* ***** ** ********** ** OBIEE 11.1.1.5 and OBIEE 10.1.3.3.2, which included building and configuring the repository and creating and managing Intelligence Dashboards, Answers, Delivers, and iBots (Agents).

Proficient in Oracle Data Integrator (ODI), Informatica PowerCenter/Cloud, and Dell Boomi design, development, testing, and deployment.

Trained end users to design report layouts directly in a web browser, which dramatically reduced the time and cost needed to develop and maintain reports in OBIEE 11g/10g.

Extensive experience with the OBIEE Administration Tool covering all three layers, namely the Physical Layer, BMM Layer, and Presentation Layer.

Extensive experience in OBIEE/Siebel Analytics Presentation Services, including Answers, Interactive Dashboards, Delivers/Agents (iBots), Cache Management, Scheduler Configuration, and BI Publisher.

Expertise in BI Publisher, Oracle BI/Siebel security setup (groups, data access/query privileges), Metadata Objects (Subject Areas, Tables, and Columns), and Web Catalog Objects (Dashboards, Pages, Folders, and Reports).

Well versed in debugging, optimizing, and performance tuning of OBI dashboards/reports to improve performance at the database level.

Excellent problem-solving skills with a strong technical background and good interpersonal skills.

Quick learner and excellent team player with the ability to meet deadlines and work under pressure.

Follow standard SDLC processes, from requirements gathering through functional design, technical design, system testing, integration testing, deployment, and support, as they relate to data warehouse projects and enterprise data integration efforts.

6 years working with OBIA 11g.

5+ years of professional experience in IT, including 4 years of work experience in Hadoop Big Data technology.

Hands-on experience with major components of the Hadoop ecosystem, including MapReduce, HDFS, Hive, Pig, HBase, Flume, and Impala.

Good understanding of NoSQL databases and hands-on experience writing applications against NoSQL stores such as HBase.

Experienced in moving data from relational databases such as MySQL, Oracle, and SQL Server into Hadoop using Sqoop.

Experience developing Pig Latin and HiveQL scripts for data analysis and ETL purposes.
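
A minimal HiveQL ETL sketch in the spirit of that bullet; the table and column names (logs_raw, logs_clean, user_id, event_time) are hypothetical, not from an actual project:

    -- Clean and aggregate a hypothetical raw web-log table in one CTAS step.
    CREATE TABLE logs_clean AS
    SELECT lower(trim(user_id)) AS user_id,     -- normalize the key
           to_date(event_time)  AS event_date,  -- string timestamp to date
           COUNT(*)             AS page_views
    FROM   logs_raw
    WHERE  user_id IS NOT NULL                  -- drop incomplete rows
    GROUP  BY lower(trim(user_id)), to_date(event_time);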

Experienced in moving unstructured data to HDFS using Flume.

Efficient in building Hive, Pig, and MapReduce scripts.

Hands-on experience with IDE tools such as Eclipse and Visual Studio.

Data Profiling and discovery

TECHNICAL SKILLS

Reporting tools: Siebel 7.8.4, OBIEE 11.1.1.5/6, OBIEE 10.1.3.x.

Languages: SQL, PL/SQL, HTML

Big Data ecosystem: Hadoop – HDFS, MapReduce, HBase, Pig, Hive, Flume, Impala

Fidelity Investments Sept 2016 - Current

Grapevine, TX

OBIEE Developer/Data Analyst

Data discovery using the Endeca Information Discovery tool.

Collect data and metadata using Oracle Business Intelligence as a native data source for Endeca, which analyzes it to render statistics such as maximum and minimum values, attribute lengths, patterns, data types, and data integrity.

Experience with the Integrator Knowledge Module for Oracle Data Integrator to enable the Enterprise Information Module to load data into the Endeca Server.

Data Profiling: assessing the quality of data through metrics to discover rules based on the data.

Experience in cross-column analysis and inter-table analysis.
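
A minimal plain-SQL profiling sketch of the metrics and inter-table checks described in the bullets above; the customers and orders tables and their columns are hypothetical:

    -- Per-column quality metrics for a hypothetical customers table.
    SELECT COUNT(*)                                       AS row_count,
           COUNT(DISTINCT customer_id)                    AS distinct_ids,  -- uniqueness
           SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END) AS null_emails,   -- completeness
           MIN(LENGTH(email))                             AS min_email_len, -- length profile
           MAX(LENGTH(email))                             AS max_email_len
    FROM   customers;

    -- Inter-table analysis: order rows whose customer_id has no parent row.
    SELECT o.customer_id
    FROM   orders o
    LEFT JOIN customers c ON o.customer_id = c.customer_id
    WHERE  c.customer_id IS NULL;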

Profiling on Impala with the Talend platform.

Identify, compare, and resolve data quality problems.

Evaluate large datasets for quality and accuracy.

Determine business impact level for data quality issues.

Work with Programmers to correct data quality errors.

Determine root cause for data quality errors and make recommendations for long-term solutions.

Research and determine the scope and complexity of an issue to identify the steps needed to fix it.

Develop process improvements to enhance overall data quality.

Develop and execute data cleanup measures.

Maintain a record of original data and corrected data.

Ensure adherence to data quality standards.

Resolve all internal data exceptions in a timely and accurate manner.

Identify areas of improvement to achieve data quality.

Analyze, query and manipulate data according to defined business rules and procedures.

Pacific Electric & Gas Company April 2013 - Sept 2016

San Francisco, CA

OBIEE Administrator/Developer

Developed the metadata repository (RPD) and configured the Physical, Business Model and Mapping, and Presentation Layers per the data requirements using the OBIEE Admin tool.

Performed modeling activities, which included building star schemas per the business requirements and configuring the logical layer, logical tables, dimensions, and columns; established the logical joins/keys.

Used Aggregate Tables and specified Aggregate Levels for each source.
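
A minimal sketch of the kind of aggregate table such levels map to; the fact table, grain, and names are hypothetical, not the actual project schema:

    -- Pre-aggregated fact at the month/region grain. OBIEE's aggregate
    -- navigation can then route queries at or above this grain to this
    -- table instead of scanning the detail-level fact.
    CREATE TABLE sales_agg_month_region AS
    SELECT region_id,
           month_id,
           SUM(sales_amount) AS total_sales,
           COUNT(*)          AS txn_count
    FROM   sales_fact
    GROUP  BY region_id, month_id;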

Designed Intelligence Dashboards and created Global Filters.

Customized the logical tables and columns, defined the Analytics dimensions, aggregations and levels.

Created dimensional hierarchies, level-based measures, and aggregate navigation in the BMM layer.

Configured charts, views, and filters in the user interface; configured global filters and prompts in Presentation Services.

Added new dimensions and columns to the Subject Areas to fulfill the additional requirements of the business analysts.

Used customized dashboard prompts and local prompts in the UI.

Formatted results, filtered requests, and presented results with pivot, chart, and other views in Presentation Services.

Set up data-level and object-level security.

Added users and roles.

Performed cache management.

Environment: Oracle Business Intelligence 11.1.1.5, Oracle 10g, Oracle SQL Developer, Toad, Windows 2007, Linux

Darden Restaurants May 2012 - April 2013

Orlando, FL

OBIEE Administrator/ Developer

Created/Modified Metadata repository (.rpd) by using BI Admin tool: Physical Layer, Business Model and Mapping, and Presentation Layers.

Created dimension hierarchies and assigned levels for each dimension.

Designed and configured dashboards in Analytics for different groups of users to view figures across regions and products, with the facility to drill down to the individual customer relationship manager.

Developed role-based dashboards for managers and business heads to review forecasts vs. revenues, and Contact Relationship Management (CRM) dashboards to analyze financial service center statistics for different campaigns; customized pre-built dashboards.

Created pivot tables, narrative views, and dashboard prompts to build intelligent dynamic dashboards.

Extensively used Delivers and Agents.

Performance Tuning of Dashboards/Reports and made required changes to repository (Aggregate Navigation).

Developed custom reports/ad-hoc queries and assigned them to application-specific dashboards.

Environment: Oracle Business Intelligence Suite (OBIEE 11.1.1.3), Oracle 10g, Toad 9.0, Linux, SQL Server 2005, MS Visio 2007 Professional

Client: PNC Detroit, MI July 2008 to May 2012

Role: Big Data Engineer

PNC is one of the largest banks in the United States; it was founded in 1845 and is headquartered in Pittsburgh.

Project Scope: Analyze and develop reports on customers discontinuing banking services, based on banking trends.

Develop a solution report for the company to improve banking services.

RESPONSIBILITIES:

• Installed multiple cluster nodes on the Cloudera platform with the help of the admin.

• Ingested data from various file systems into HDFS using Unix command-line utilities.

• Worked with Pig and the NoSQL database HBase for analyzing big data on the Hadoop cluster.

• Implemented NoSQL databases such as HBase and managed the other tools and processes observed running on YARN.

• Wrote and implemented Apache Pig scripts to load data from and store data into Hive.

• Wrote Hive UDFs to extract data from staging tables and analyzed the web log data using HiveQL.
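
A minimal sketch of registering and calling a custom Hive UDF from HiveQL; the jar path, class name, and table are hypothetical:

    -- Register a custom UDF packaged in a jar (hypothetical path and class).
    ADD JAR /tmp/udfs/log-parsers.jar;
    CREATE TEMPORARY FUNCTION parse_ua AS 'com.example.hive.ParseUserAgent';

    -- Use it to pull a browser family out of raw web-log rows.
    SELECT parse_ua(user_agent) AS browser,
           COUNT(*)             AS hits
    FROM   weblog_staging
    GROUP  BY parse_ua(user_agent);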

• Involved in creating Hive tables, loading data, and writing Hive queries, which run MapReduce jobs in the backend; partitioning and bucketing were applied when required.
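
A minimal sketch of the partitioning and bucketing described above; the table, columns, and bucket count are hypothetical:

    -- Hypothetical daily-partitioned transaction table, bucketed by account.
    CREATE TABLE txn (
      account_id BIGINT,
      amount     DOUBLE,
      channel    STRING
    )
    PARTITIONED BY (txn_date STRING)
    CLUSTERED BY (account_id) INTO 32 BUCKETS;

    -- Older Hive versions need this so inserts honor the bucket spec.
    SET hive.enforce.bucketing=true;

    -- Load a single partition from a staging table.
    INSERT OVERWRITE TABLE txn PARTITION (txn_date = '2012-01-15')
    SELECT account_id, amount, channel
    FROM   txn_staging
    WHERE  event_date = '2012-01-15';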

• Developed ad-hoc Hive queries that filtered data to increase the efficiency of process execution, using constructs such as joins and GROUP BY (see the sketch after the next bullet).

• Improved HiveQL run times by applying compression techniques to the underlying MapReduce jobs.
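
A minimal sketch combining the ad-hoc join/GROUP BY pattern and the MapReduce compression settings from the two bullets above; the tables are hypothetical, and Snappy is just one common codec choice:

    -- Compress intermediate map output and final job output.
    SET hive.exec.compress.intermediate=true;
    SET hive.exec.compress.output=true;
    SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;

    -- Ad-hoc join + GROUP BY running over the compressed pipeline.
    SELECT c.region,
           COUNT(*) AS closed_accounts
    FROM   accounts a
    JOIN   customers c ON a.customer_id = c.customer_id
    WHERE  a.status = 'CLOSED'
    GROUP  BY c.region;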

• Built big data solutions using HBase, handling millions of records for different data trends and exporting them to Hive.
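
One common way to expose HBase data to Hive, as that bullet describes, is an external table over the HBase storage handler; the table name, column family, and mappings here are hypothetical:

    -- Hive external table backed by a hypothetical HBase table 'txn_trends'.
    CREATE EXTERNAL TABLE txn_trends_hive (
      rowkey STRING,
      trend  STRING,
      amount DOUBLE
    )
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES (
      'hbase.columns.mapping' = ':key,d:trend,d:amount'
    )
    TBLPROPERTIES ('hbase.table.name' = 'txn_trends');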

Environment: Java 8, Java 6, Eclipse, Hadoop, Hive, HBase, MapReduce, Pig, Impala

Education: Bachelor's in Finance, University of Texas at Arlington; graduated 2006


