Jeevan Reddy
Sr. ETL Informatica IDMC/ADF/SAP BODS Lead
Phone: 713-***-**** Email: *************@*****.***
Summary:
14+ years of experience in the IT industry with a strong background in software development, including 13+ years of experience developing and testing Business Intelligence solutions in data warehousing and decision support systems using Informatica Cloud IICS/IDMC, Data Governance, Master Data Management, Informatica Power Exchange, Informatica Data Quality/CDQ, Informatica CDGC, and Informatica Power Center.
Experience in developing business-critical Informatica assets using Informatica Intelligent Cloud Services (IICS/IDMC), including Cloud Application Integration (CAI) and Cloud Data Integration (CDI), as well as EDC and Axon Data Governance.
Experience in Cloud Integrations using Informatica Cloud Services, Informatica Cloud Real Time (CAI), NetSuite and Salesforce.com via SOAP API and REST API.
Experience working with Informatica CDI and CAI modules, including designing and implementing data integration solutions.
Experience in various domains including Healthcare, Finance, Telecom, Insurance, Agriculture & Forestry, and Banking.
Strong in data warehousing methodologies, including the Ralph Kimball and Bill Inmon approaches with Star/Snowflake schemas.
Architected and developed Informatica batch and real-time (CDC) processes to feed Informatica MDM (Master Data Management), serving as a single point of access for customer data across applications.
Took part in creating the Data Architecture Guidelines and Data Governance Framework in the Insurance and Financial domains.
Ensured a holistic approach to Enterprise Data Governance was employed in projects addressing the people and process components of Data Quality, Data Stewardship, Data Privacy & Protection, Data Policy & Standards, and Metadata Management.
Designed a real-time analytics and ingestion platform using Spring XD and Flume. Hands-on experience with multiple NoSQL databases, including MongoDB and HBase.
Extensive experience with Data Governance and the Informatica Data Quality 10.2 (IDQ/IDE) toolkit: analysis, data cleansing, data matching, data conversion, exception handling, and the reporting and monitoring capabilities of IDQ 10.x.
Expertise in integrations between Salesforce.com and backend interfaces (legacy systems such as Siebel and SAP) using Web Services and webMethods, with Informatica as the integration layer.
Worked on various Salesforce.com standard objects like Accounts, Contacts, Leads, Opportunities, Dashboards and Reports.
Worked with Informatica Power Exchange and Informatica Cloud to integrate Salesforce and load data from Salesforce into an Oracle database.
Ability to write complex SOQL queries across multiple objects within the SFDC database. Created and deployed several workflows and reports using the Salesforce.com platform.
Experienced working with Informatica Big Data Analytics with Hadoop (Hortonworks).
Experienced in implementing Big Data technologies - Hadoop ecosystem/HDFS/MapReduce framework, HBase, Sqoop, Pig, Oozie, and the Hive data warehousing tool.
Expert in understanding data and designing/implementing enterprise platforms such as Hadoop data lakes and large-scale data warehouses.
Azure Data Factory (ADF) – Data orchestration, pipeline development, and integration with Azure services.
Expertise in Snowflake Data Warehouse for designing scalable, cloud-native data solutions, implementing ELT processes, and optimizing performance for analytical workloads.
Developed datasets, Salesforce Wave reports, dashboards, and approvals to continuously monitor data quality and integrity. Expertise in reporting, customizing dashboards, and scheduling dashboard refreshes.
Good understanding of Tableau architecture, design, development, and end-user experience.
Extensive experience working with Tableau Desktop and Tableau Server, using Tableau functionality to create requests, filters, charts, and interactive dashboards with page and dashboard prompts.
Developed Tableau data visualizations using cross tabs, heat maps, box and whisker charts, scatter plots, geographic maps, pie charts, bar charts, and density charts.
Experienced in the use of agile approaches, including sprint planning, daily stand-up meetings, reviews, retrospectives, release planning, demos, Extreme Programming, Test-Driven Development and Scrum.
Worked as an on-call production specialist with primary and secondary duties.
Analyzed and resolved the incidents raised by the application users on priority (low, medium and high) through Enterprise support tickets.
Technical Skills:
ETL: Informatica Power Center 10.1/9.6/9.1/8.x/7.x, Informatica Cloud IICS/IDMC, Informatica IDQ/CDQ/CAI, Microsoft ADF, SAP BODS
BI Tools: Salesforce Wave Analytics, Business Objects, Tableau 9.x, Power BI
Big Data Ecosystems: Hadoop, MapReduce, HDFS, Hive, Pig, Sqoop, Oozie, Spring XD
Operating Systems: Windows 95/98/2000/2003 Server/NT Server Workstation 4.0, UNIX
Programming: Java, R, Pig, Hive, C, SQL, PL/SQL, HTML, XML, DHTML
Other Tools: Eclipse, SQL*Plus, TOAD 8.0, MS Visio, Aginity, CA Workstation, ESP
Scripting Languages: SQL, PL/SQL, UNIX Shell Scripting
Methodologies: Agile, E-R Modeling, Star Schema, Snowflake Schema
Data Modeling Tool: Erwin 3.5/4.1
Databases: Oracle 11g/9i/8i/7.3, MS SQL Server 2008, DB2, Netezza, Sybase, Teradata, Azure Data Lake, Snowflake
Education: Master’s degree in computer science, University of Houston, Texas.
Professional Experience:
Johnson Controls Inc. AUG 2021 – Current
Sr. Informatica IICS/IDMC/CDQ/CAI/CDGC/ADF/SAP BODS Lead
Responsibilities:
Designed data migration from Oracle Fusion, SAP ECC, and Oracle ERP to SAP S/4HANA using Informatica IICS/IDMC, CDQ, CAI, BODS, and ADF.
Design and implement data storage solutions using Azure services such as Azure SQL Database and Azure Data Lake Storage (ADLS).
Designed and implemented data migration pipelines using Azure Data Factory (ADF) for moving data from Oracle Fusion, SAP ECC, and legacy ERP systems to Azure Data Lake and Snowflake.
Integrated ADF pipelines with Snowflake for automated ingestion and transformation workflows (a representative SQL sketch of this ingestion pattern follows this responsibilities list).
Developed data ingestion and transformation pipelines and maintained them using Azure Data Factory (ADF) and Azure Data Lake.
Working as an interim administrator on Informatica IICS/IDMC, enabling users, creating user groups, and enabling the appropriate services and connectors.
Extensively worked on IICS/IDMC services: Cloud Data Integration (CDI), Cloud Application Integration (CAI), Cloud Data Quality (CDQ), Enterprise Data Catalog (EDC), Data Governance (DG), Data Synchronization (DS), Data Replication (DR), and Mass Ingestion (MI).
Manage and perform data cleansing, de-duplication, and harmonization of data received from, and potentially used by, multiple systems in IICS/IDMC.
Developed various CAI processes to load data from sources such as SQL Server, JSON, and flat files into cloud applications using REST APIs.
Participate in the development and implementation of enterprise metadata standards, guidelines, and processes to ensure quality metadata and support for ongoing Data Governance.
Involved in onboarding of technical and business metadata into the Informatica Enterprise Data Catalog EDC and Axon DG environments, ensuring the population of data lineage and linkage between the technical and business metadata.
Built the end-to-end process in Informatica CDQ for data cleansing, standardization, address cleansing, and de-duplication.
Created Informatica Mass Ingestion tasks to bring data from the legacy ERP into Azure Data Lake.
Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
Wrote new scripts and analyzed existing scripts for Azure Databricks data processing.
Designed and developed data integration and application mappings, transformations, sessions, workflows, ETL batch jobs, and shell scripts to load data from source systems to the HANA staging database.
Built reusable transformations and mapplets wherever common logic was needed.
Performed performance tuning at both the mapping and database levels to increase data throughput.
Designed data migration from SAP ECC and Oracle ERP to S/4HANA using IICS/IDMC, SAP SLT, and Data Services.
Collaborated with Functional SMEs to generate requirements and data ETL procedures.
Played a technical liaison role with functional and business users to generate data mapping and data validation technical documents.
Led the effort to optimize the loading process of large Financials and Supply Chain (SAP AR, AP, GL, MM, and Recurring) data sets, including both transparent and clustered tables, from legacy ECC and Oracle to S/4HANA.
Designed and implemented Customer Master Data mapping and reference data implementations in SLT using ABAP include programs.
Implemented performance filters and source-side filtering for migrating large data sets.
Built the process to validate and monitor pre-load and post-load validation reports in Power BI.
Designed a range of custom Power BI dashboards to help visualize complex data sets, enabling trend identification and process improvement for stakeholders.
Implemented advanced DAX queries in Power BI Desktop, providing detailed insights and enabling 20% better performance.
Improved ETL methodologies using SQL to import data into Power BI, resulting in a significant boost in data reliability.
Mentored business stakeholders on Informatica CDQ for data cleansing.
Led offshore resources, guiding implementation and conducting code reviews.
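For illustration only, a minimal sketch of the Snowflake ingestion and de-duplication pattern referenced above; the stage, table, and column names (adls_stage, CUSTOMER_STG, CUSTOMER_DIM, LOAD_TS) are hypothetical placeholders, not objects from the Johnson Controls project.

    -- Load files landed in ADLS (e.g., by an ADF copy activity) from an external stage into a staging table
    COPY INTO CUSTOMER_STG
    FROM @adls_stage/customer/
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);

    -- De-duplicate on the business key and upsert the latest record into the target
    MERGE INTO CUSTOMER_DIM d
    USING (
        SELECT CUSTOMER_ID, CUSTOMER_NAME, CUSTOMER_ADDRESS
        FROM   CUSTOMER_STG
        QUALIFY ROW_NUMBER() OVER (PARTITION BY CUSTOMER_ID ORDER BY LOAD_TS DESC) = 1
    ) s
    ON d.CUSTOMER_ID = s.CUSTOMER_ID
    WHEN MATCHED THEN UPDATE SET d.CUSTOMER_NAME = s.CUSTOMER_NAME,
                                 d.CUSTOMER_ADDRESS = s.CUSTOMER_ADDRESS
    WHEN NOT MATCHED THEN INSERT (CUSTOMER_ID, CUSTOMER_NAME, CUSTOMER_ADDRESS)
                          VALUES (s.CUSTOMER_ID, s.CUSTOMER_NAME, s.CUSTOMER_ADDRESS);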
Environment: Informatica IICS/IDMC, CDQ, SAP Data Services (BODS), ADF, Snowflake, UNIX, Oracle Fusion, Oracle 12c, Flat files, XML, Shell Scripting, Microsoft Azure Data Lake.
QVC- Qurate Retail Group JULY 2019 – AUG 2021
Sr. Informatica Power Center/IICS/IDMC/SAP BODS Lead
Responsibilities:
Designed data migration from PeopleSoft and Mainframes to S/4HANA using Informatica 10.2/IICS/IDMC.
Designed and developed mappings, transformations, sessions, workflows, ETL batch jobs, and shell scripts to load data from source systems to the HANA staging database using IICS/IDMC, EDC, Axon, and CDI.
Built reusable transformations and mapplets in IICS/IDMC wherever common logic was needed.
Performed performance tuning in IICS/IDMC at both the mapping and database levels to increase data throughput.
Designed data migration from SAP ECC to S/4HANA using SAP SLT and IICS/IDMC.
Collaborated with Functional SMEs to generate requirements and data ETL procedures.
Played a technical liaison role with functional and business users to generate data mapping and data validation technical documents.
Led the effort to optimize the loading process of large Financials (SAP AR, AP, GL, and Recurring) data sets, including both transparent and clustered tables, from legacy ECC to S/4HANA.
Designed and implemented Customer Master Data mapping and reference data implementations in SLT using ABAP include programs.
Implemented performance filters, source side filtering for migrating large data sets.
Designed and configured SLT Replication (Real time and Batch) from SAP ECC sources using HANA Data provisioning and BODS.
Expertise in CMC configuration and administration of SAP Data services repositories.
Environment: Informatica Power Center 10.2/IICS/IDMC, SAP Data Services (BODS), Data Modelling (Erwin), UNIX, PeopleSoft, Oracle 12c, Flat files, XML, Shell Scripting, Putty, WinSCP and Toad.
Honeywell International Inc. (NTT Data) MAR 2017 – JULY 2019
Sr. Informatica Power Center/Cloud Lead
Description: Honeywell Commercial Excellence Analytics wanted to build a common platform for all Honeywell sales teams to drive the right dialog with Sales around the right, measurable activities: generating the target number of opportunities, managing and improving forecast accuracy, tracking sales capacity to deliver, and continuously improving seller productivity. This was done by creating a Sales Pipeline Dashboard using Salesforce Wave Analytics to track HON Sales Opportunities data in the Ops Center org. Salesforce Wave is a business intelligence (BI) platform from Salesforce.com.
Honeywell has multiple Salesforce instances as well as a Siebel system, and previously there was no way to see the pipeline for each business across Honeywell. Businesses such as HBT, PMT, AERO, and SPS each have multiple Salesforce instances and struggled with reporting, so data from all Salesforce instances was brought into one place where Executives and SBG leaders (SBG Presidents, SBG Sales, and CE leaders) can review it monthly to understand win rate, AOP targets, and the corresponding pipeline coverage. This helps the business understand the health of the pipeline and take corrective action as necessary.
SPINCO: Worked as a Data Migration Lead to migrate all Homes-related data from multiple Honeywell orgs into Resideo.
SPS GDM MVP2: Worked as a Data Migration Lead to migrate legacy data (ACS, RAE, and SAP) into the GDM org.
Responsibilities:
In-depth practical knowledge of all modules and features of Salesforce, both Sales and Service Cloud, with good exposure to other areas of project execution such as customer-facing work, requirement gathering and analysis, consulting, solution design, documentation, implementation, development, and support of business solutions.
Involved in extracting, transforming, and loading data for the Opportunities, Accounts, Users, Leads, Contacts, Record Type, Opportunity History, Tasks, and Events interaction tables from various source systems to Salesforce.com, with reverse data feeds from Salesforce for Honeywell CRM.
Worked with Informatica Cloud IICS/IDMC/Power Center to create source/target connections and to monitor and synchronize data in SFDC.
Automated Validation and De-duplication of Salesforce data using Informatica IICS/IDMC Cloud Customer 360 CC360.
Implemented effortless Consolidation and Integration of hierarchical data from multiple systems to provide a Single View of Customer using Informatica IICS/IDMC CC360.
Worked in Data Cleansing and mapping data from source salesforce.com to target Oracle using IICS/IDMC.
Data Migration (from salesforce.com to Oracle) using Informatica Cloud IICS/IDMC, EDC, AXON, and Power Center.
Wrote SOQL queries to fetch data via Workbench and Explorer. Designed and developed ETL and Data Quality mappings to load and transform data from Oracle and SQL sources to the data warehouse using Power Center and Cloud.
Performed data profiling and analysis of various objects in SalesForce.com (SFDC) using IDQ and MS Access database tables for an in-depth understanding of the source entities, attributes, relationships, domains, source data quality, hidden and potential data issues, etc.
Implemented Change Data Capture (CDC) on source data from Salesforce.com using IICS/IDMC (a simplified SQL sketch of the watermark-style incremental-extract pattern follows this responsibilities list).
Extracted various data from SalesForce.com using Informatica 10.1 with the Salesforce Adapter.
Created dashboards in Salesforce Wave Analytics and used multi-dimensional analysis (Slice & Dice and Drill functions) to organize the data along a combination of dimensions and drill-down hierarchies, giving end users the ability to view the data from heterogeneous viewpoints.
Customized the Salesforce Wave dashboards to track usage for productivity and performance of business centers and their sales teams.
Scheduled the Informatica jobs using Autosys.
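For illustration only, a minimal sketch of a watermark-based incremental (CDC-style) extract of the kind described above, assuming changed rows are staged in a relational copy of the Salesforce object; the OPPTY_STG and ETL_WATERMARK names and columns are hypothetical placeholders, not objects from the Honeywell project (the Informatica connectors themselves handle CDC internally).

    -- Pull only rows modified since the last successful extract
    SELECT o.*
    FROM   OPPTY_STG o
    WHERE  o.SYSTEMMODSTAMP > (SELECT LAST_EXTRACT_TS
                               FROM   ETL_WATERMARK
                               WHERE  OBJECT_NAME = 'Opportunity');

    -- Advance the watermark once the load completes successfully
    UPDATE ETL_WATERMARK
    SET    LAST_EXTRACT_TS = CURRENT_TIMESTAMP
    WHERE  OBJECT_NAME = 'Opportunity';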
Environment: Informatica Power Center 10.1, Informatica Cloud/IICS/IDMC, MDM, Informatica REV, Autosys, Data Modelling (Erwin), UNIX, Siebel, Oracle 12c, Salesforce.com, Salesforce Wave Analytics, Flat files, XML, Shell Scripting, Putty, WinSCP and Toad.
Northwestern Mutual, Milwaukee (Life Insurance and Financial Planning) MAY 2016 – FEB 2017
Sr. Informatica Power Exchange/IDQ Developer
Description: As Northwestern Mutual moved towards Integrated Advisor, there was a need to incorporate planning into rewards & recognition, but the required data was not present in the awards platform. This project focused on moving the planning PPA (Personal Planning Analysis) and BPA (Business Planning Analysis) data to the BIIP platform so that the data could be accessed easily by the awards platform. For achievement of the award, the Rewards & Recognition team required the number of PPAs and BPAs with 2+ modules delivered by an FR (Financial Representative) for the awards timeframe, accumulated through the calendar month end being reported on.
Responsibilities:
Involved in designing and developing the architecture for all data warehousing components, e.g., tool integration strategy; source system data ETL strategy; data staging, movement, and aggregation; information and analytics delivery; and data quality strategy.
Designed and developed ETL and Data Quality mappings to load and transform data from sources such as DB2, Oracle and Sybase to Data warehouse using Power Center and IDQ/IDE.
Extensively used Informatica Data Quality (IDQ) transformations such as Match, Consolidation, Exception, Parser, Standardizer, and Address Validator.
Developed the IDQ match-and-merge and match-and-consolidation strategies based on customer requirements and the data.
Built several reusable components in IDQ/IDE using parsers, standardizers, and reference tables.
Performed data profiling and analysis of various objects in SalesForce.com (SFDC) using IDQ and MS Access database tables for an in-depth understanding of the source entities, attributes, relationships, domains, source data quality, hidden and potential data issues, etc.
Worked with the Business Analysts on IDQ - Data Profiling, Data Validation, Standardization and Data Cleansing for the Oracle 12c data migration to rebuild and enhance the business rules.
Also, worked with Business Analysts to modify/enhance rules for physical/mailing addresses using IDQ -Address Validator.
Developed both one-time and real-time mappings using Power Center 9.6 and Power Exchange.
Registered data maps for real-time CDC (Changed Data Capture) data in Power Exchange and worked on extraction map row tests in the Power Exchange Navigator.
Implemented Change Data Capture (CDC) on Source data from Salesforce.com.
Extracted various data from SalesForce.com using Informatica 9.6 with Sales Force Adapter.
Worked on updating Salesforce using external IDs and created various objects in Salesforce.
Created new and modified existing hierarchies in the universes to meet users' drill-analysis reporting needs, and was involved in performance tuning and optimization of Business Objects universes by creating aggregate tables.
Performed integrity testing of the Universes (Universe Structure checking, Object parsing, joins parsing, Conditions parsing, Cardinalities checking, Loops checking and Contexts checking) after any modifications to them in terms of structure, classes, and objects.
Used multi-dimensional analysis (Slice & Dice and Drill Functions) to organize the data along a combination of Dimensions and Drill-down Hierarchies giving the ability to the end-users to view the data from heterogeneous viewpoints.
Environment: Informatica Power Center 9.6, Informatica Power Exchange 9.6, Informatica IDQ/IDE 9.6, Autosys, Hadoop Ecosystem, Data Modelling (Erwin), UNIX, Windows 2007 Professional Client, Sybase, Oracle 10g, DB2, SAP Business Objects, Flat files, XML and COBOL, Shell Scripting, Putty, WinSCP and Toad.
John Deere World Headquarters, Illinois (Agriculture and Forestry) SEP 2013 – MAY 2016
Sr. Informatica Developer/IDQ Developer
Description: This project will be a joint effort between the newly created Machine Knowledge Center (MKC) and PV&V personnel from the enterprise. The intent of this project is to develop a process in which PV&V can mine and utilize customer information from the database created by the JDLink product that is sold on John Deere equipment. Leveraging this data will help PV&V to enhance the product knowledge of how our machines are used by our customers. This data can be used to enhance the reliability, durability, and performance of both current and future product offerings. Part of the process development of this project will be to identify what resources will be needed to develop this service for the PV&V community to obtain this valuable customer data. This project will also identify potential cost to acquire this information.
Hadoop Projects:
John Deere Customer Product (JDCP) and Load Profile data collected from customers and the source team are loaded into the Hadoop ecosystem. Data cleansing and business transformations are then performed on this data in the Hadoop ecosystem using MapReduce jobs. The final data is provisioned to downstream systems for reporting and dashboarding purposes.
Responsibilities:
Successfully designed and architected the Integrated Data Warehouse in John Deere on Big Data platform.
Designed, developed, implemented, and maintained the Informatica Data Quality (IDQ)/MDM application for the matching and merging process.
Utilized Informatica IDQ/IDE 9.1 to complete initial data profiling and to match and remove duplicate data.
Installed and configured content-based data dictionaries for the data cleansing, parsing, and standardization processes to address completeness, conformity, and consistency issues identified in the profiling phase using IDQ/IDE.
Configured the Analyst tool (IDE) and helped data stewards and business owners profile the source data, create scorecards, apply built-in DQ rules, and validate the results.
Experienced working with Informatica Big Data to read/write HDFS files, Hive tables, and HBase.
Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
Imported and exported data into HDFS and Hive using Sqoop (a simplified HiveQL sketch of this staging pattern follows this responsibilities list).
Experienced in running Hadoop streaming jobs to process terabytes of XML-format data.
Mastered the ability to design and deploy rich Graphic visualizations using Tableau.
Working on generating various dashboards in Tableau Server using different data sources such as Netezza and DB2, and created report schedules, data connections, projects, and groups.
Expert-level capability in table calculations and in applying complex, compound calculations to large and complex data sets.
Worked closely with business power users to create reports/dashboards using Tableau Desktop.
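For illustration only, a minimal HiveQL-style sketch of the staging-and-aggregation pattern referenced above; the table and column names (machine_usage_stg, machine_usage_monthly, machine_id, engine_hours) are hypothetical placeholders, not objects from the John Deere project.

    -- Hypothetical external table over usage files landed in HDFS (e.g., via Sqoop)
    CREATE EXTERNAL TABLE IF NOT EXISTS machine_usage_stg (
      machine_id   STRING,
      usage_date   STRING,
      engine_hours DOUBLE,
      fuel_used    DOUBLE
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/data/jdcp/machine_usage_stg';

    -- Simple cleansing and monthly aggregation (assumes machine_usage_monthly already exists)
    INSERT OVERWRITE TABLE machine_usage_monthly
    SELECT machine_id,
           substr(usage_date, 1, 7) AS usage_month,
           SUM(engine_hours)        AS total_engine_hours,
           SUM(fuel_used)           AS total_fuel_used
    FROM   machine_usage_stg
    WHERE  machine_id IS NOT NULL
    GROUP BY machine_id, substr(usage_date, 1, 7);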
Environment: Informatica Power Center 9.6, Hadoop Ecosystem, Informatica DVO, IDQ, MDM, Power Exchange, Data Modelling (Erwin), Netezza, PL/SQL, DB2, Sybase, Tableau V8, Shell Scripting, Putty, WinSCP and Toad, Aginity tool.
EARTHLINK CORPORATE HEAD QUARTERS, ATLANTA (Networks and Communications) OCT 2012 - SEP 2013
Sr. Informatica Developer/IDQ Developer
Responsibilities:
Served as an ETL Developer/Data Quality Analyst in the deployment of FPR (Financial Product Reporting) and PDS (Persistent Data Staging). Primary responsibilities were performing data quality checks and data integration using Informatica Data Quality, Power Center, and UNIX shell scripting, and architecting and developing a custom ETL framework that consisted of over 144 processes using Oracle's native language (PL/SQL) to load data into the Oracle DWH.
Environment: Informatica Power Center 9.1, Informatica IDQ/IDE, Power Exchange, Web Services, IDQ, UNIX, Windows 2000 Professional Client, Oracle 8i/9i Enterprise Edition, PL/SQL, Teradata, SAP BO XI R2/6.5, VSAM files, Flat files, XML and COBOL, Shell Scripting, Putty, WinSCP and Toad.
J.P Morgan Chase, NEW YORK (Banking and Financial) NOV 2011 - SEP 2012
Sr. Informatica Developer
Responsibilities:
Served as a Senior ETL Developer/SQL Developer in the enhancement of an existing custom ETL framework that collected, cleansed, and integrated the company's performance data (i.e., cash flows and liquidity positions) from various operational source systems. The enhancements were part of an overall initiative to improve an internal, custom-built Liquidity Risk Management System (LRSMS) that supported and provided JP Morgan Corporate Treasury executives, senior managers, and business analysts with analytical reporting capability on the company's liquidity position, sources and uses of cash, forecasting of cash flows and stress test modeling, funding counterparties, and developing funding plans as needed. Major contributions and accomplishments included: designing and developing an ETL component that dynamically constructed, in real time, the SQL to load over 100 source feeds into the Liquidity Position fact table using a metadata strategy; designing and developing an ETL process that mapped custom product and/or service hierarchical relationships to the company's general ledger products for reporting purposes; developing Sybase objects such as tables, views, indexes, triggers, procedures, and functions to support the ETL metadata rule component; and providing support, guidance, and training in the deployment of the solution in environments such as integration, QA, and production.
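For illustration only, a minimal sketch of the metadata-driven SQL-generation idea described above, using generic ANSI/Oracle-style string concatenation (the actual system used Sybase, where + would be used instead of ||); the ETL_FEED_METADATA table and its columns are hypothetical placeholders, not objects from the JP Morgan project.

    -- Generate one load statement per active feed from a metadata table
    SELECT 'INSERT INTO LIQUIDITY_POSITION_FACT (' || TARGET_COLUMNS || ') ' ||
           'SELECT ' || SOURCE_COLUMNS || ' FROM ' || SOURCE_TABLE AS LOAD_SQL
    FROM   ETL_FEED_METADATA
    WHERE  FEED_STATUS = 'ACTIVE';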
Environment: Informatica Power Center 9.1, Web Services, SQL Server 2008, Oracle 11i/10g, Teradata, PL/SQL, Power Exchange 9.1, Sybase, SAP Business Objects XI 3.x, TOAD, Windows XP, UNIX, Maestro, Erwin 4.2, Control-M.
GROUP HEALTH CO-OPERATIVE HEAD QUARTERS, WA (Healthcare) MAR 2011 - NOV 2011
Sr. Informatica Developer
Responsibilities:
Served as an ETL Developer/Data Analyst in the deployment of Claims, Hospital Events, In-Patient Pharmacy, Hospital Billing, Professional Billing, and other subject areas to the data warehouse, to make patient claims data easier to access; to provide analytics, trending and comparisons, and reporting and exporting capabilities that support the business needs; and, above all, to increase data utilization and improve customer service and productivity. Primary responsibilities involved Teradata utilities (SQL, BTEQ, FastLoad, MultiLoad, FastExport, TPump, Visual Explain, QueryMan), Teradata parallel support, and UNIX shell scripting, as well as architecting and developing a custom ETL framework that consisted of over 144 processes using Oracle's native language (PL/SQL) and loading the results into Teradata.
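For illustration only, a minimal sketch of the kind of staged upsert such a framework might run against Teradata after FastLoad/MultiLoad staging; the CLAIMS_STG and CLAIMS_FACT names and columns are hypothetical placeholders, not objects from the Group Health project.

    -- Upsert staged claim rows into the warehouse target
    MERGE INTO CLAIMS_FACT tgt
    USING CLAIMS_STG src
    ON (tgt.CLAIM_ID = src.CLAIM_ID)
    WHEN MATCHED THEN
      UPDATE SET CLAIM_STATUS = src.CLAIM_STATUS,
                 PAID_AMOUNT  = src.PAID_AMOUNT
    WHEN NOT MATCHED THEN
      INSERT (CLAIM_ID, MEMBER_ID, CLAIM_STATUS, PAID_AMOUNT)
      VALUES (src.CLAIM_ID, src.MEMBER_ID, src.CLAIM_STATUS, src.PAID_AMOUNT);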
Environment: Informatica Power Center 8.6, Power Exchange 8.6, HIPAA (835, 837 - Institutional/Professional - Inbound/Outbound), Web Services, Business Objects 6.x, Oracle 11g/10g, PL/SQL, Flat files, XML, COBOL, Teradata.
ADECCO GROUP OF NORTH AMERICA, FL (IT Staffing) AUG 2010 - MAR 2011
Informatica Developer
Responsibilities:
Served as one of the ETL Informatica Developers, gathering requirements from end users and involved in the analysis of source systems and business requirements and the identification of business rules. Designed and implemented Informatica mappings to migrate data from various legacy applications/acquisition offices to a centralized application. Responsible for the analysis, design, and implementation of various data marts for the back office (Financial, Payroll, Benefits, and HR modules) using data modeling techniques and Informatica 7.6. Tuned mappings and sessions for better performance on the data loads.
Environment: Informatica Power Center (Designer 7.6, Repository Manager 7.6, Workflow Manager 7.6), Power Exchange, Business Objects XI/6.x, Oracle 11g/10g, PL/SQL, SQL*Plus, SQL Server 2008/2005, Flat files, XML, TOAD, UNIX, Erwin 4.0 and Shell Scripting.