Vertica, Netezza, Oracle, ETL Informatica, Unix Shell Scripting

Location: Dayton, NJ

Yugandhar Reddy Hosamane

Email: ac7s43@r.postjobfree.com | Cell: 848-***-****

Professional Summary:

12+ years of Information Technology experience in design and development, able to deliver data management vision, goals, priorities, design principles, and operating policies in support of business goals.

Solid experience in Oracle, Vertica, Netezza, Informatica, and Unix shell scripting, with work spanning OLTP, data warehouse, and web technologies.

Experience in the management and implementation of database models, data flow diagrams, database schemas, DB scripts, structures, and data standards to support a robust data management infrastructure.

Solid experience in multiple full life cycle development projects, including gathering business requirements, analyzing systems, designing data strategies and requirements, and developing and implementing best practices and methodologies in data analysis and design.

Hands-on experience tuning SQL queries in data warehouse systems.

Solid experience in Oracle development using SQL and PL/SQL; worked extensively in writing complex SQL queries, Procedures, Functions, Packages, Views, Triggers, Cursors, Record Types, Collections, Dynamic SQL, Bulk Binding, etc.

Expert level in the Oracle-supplied data loading tools: SQL*Loader, External Tables, and the UTL_FILE package for reading/writing flat files and transforming/loading data.
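
A minimal sketch of the kind of SQL*Loader load described above, assuming a hypothetical comma-delimited trades.csv feed and a TRADES_STG staging table (all names are illustrative):

    # build a control file describing the flat-file layout
    cat > trades.ctl <<'CTL'
    LOAD DATA
    INFILE 'trades.csv'
    APPEND INTO TABLE trades_stg
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (trade_id, cusip, trade_dt DATE 'YYYY-MM-DD', quantity, price)
    CTL

    # run the load; rejected rows land in trades.bad
    sqlldr userid="$DB_USER/$DB_PASS@$DB_TNS" control=trades.ctl \
           log=trades.log bad=trades.bad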

Solid experience in performance tuning of SQL by reviewing explain plans, trace files, and AWR reports, and optimizing queries by creating indexes, applying optimizer hints, etc.
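
For illustration, a hedged sketch of that explain-plan workflow; the TRADES table and TRADES_DT_IDX index are hypothetical names, not from a real system:

    sqlplus -s "$DB_USER/$DB_PASS@$DB_TNS" <<'SQL'
    -- capture the optimizer plan for a candidate query
    EXPLAIN PLAN FOR
    SELECT /*+ INDEX(t trades_dt_idx) */ *
    FROM   trades t
    WHERE  trade_dt = DATE '2018-01-02';

    -- render the plan just captured
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    SQL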

Solid experience tuning PL/SQL code by profiling with DBMS_PROFILER, reviewing the reports, and fixing issues using bind variables, collections, BULK COLLECT, global temporary tables, etc.; solid debugging techniques for data issues during development and production support.
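
A minimal sketch of the BULK COLLECT batching pattern mentioned above, run through sqlplus; TRADES_STG and TRADES are hypothetical tables:

    sqlplus -s "$DB_USER/$DB_PASS@$DB_TNS" <<'SQL'
    DECLARE
      CURSOR c IS SELECT * FROM trades_stg;
      TYPE t_rows IS TABLE OF trades_stg%ROWTYPE;
      l_rows t_rows;
    BEGIN
      OPEN c;
      LOOP
        -- fetch in batches instead of row by row
        FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
        EXIT WHEN l_rows.COUNT = 0;
        -- one context switch per batch instead of per row
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO trades VALUES l_rows(i);
      END LOOP;
      CLOSE c;
      COMMIT;
    END;
    /
    SQL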

5+ years of development experience with the Vertica database and a deep understanding of its architecture and core components: projections, segmentation, K-safety, bulk data loads using the COPY command, the vsql utility, etc.
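
A hedged sketch of a Vertica bulk load through vsql; the host, credentials, file paths, and SALES.TRADES table are placeholders:

    vsql -h "$VERTICA_HOST" -U "$VERTICA_USER" -w "$VERTICA_PASS" -c \
      "COPY sales.trades FROM LOCAL '/data/in/trades.csv'
       DELIMITER ',' NULL '' DIRECT
       EXCEPTIONS '/data/log/trades.exc'
       REJECTED DATA '/data/log/trades.rej';"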

Experience migrating Oracle and Netezza databases to Vertica and benchmarking the performance metrics.

Expert level in developing ETL processes with Informatica using its core components (Repository Manager, Designer, Workflow Manager, Workflow Monitor), as well as production support: monitoring and troubleshooting batch jobs and their corresponding workflows.

Worked heavily with UNIX shell scripting; created many generic utility scripts to load data files, execute Oracle/Vertica SQL scripts and Informatica workflows, and transform files using AWK/SED.
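
A small sketch of such a generic loader, assuming a pipe-delimited feed and a hypothetical STAGE.FEED target table:

    #!/bin/ksh
    FEED=$1
    # strip Windows carriage returns and blank lines
    sed -e 's/\r$//' -e '/^$/d' "$FEED" > "$FEED.clean"
    # keep columns 1, 3 and 2, in that order
    awk -F'|' 'BEGIN { OFS = "|" } { print $1, $3, $2 }' "$FEED.clean" > "$FEED.load"
    # bulk-load the transformed file into Vertica
    vsql -h "$VERTICA_HOST" -U "$VERTICA_USER" -w "$VERTICA_PASS" \
      -c "COPY stage.feed FROM LOCAL '$PWD/$FEED.load' DELIMITER '|' DIRECT;"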

Strong experience with the Autosys tool: creating command/box jobs, scheduling and monitoring batch processes, and troubleshooting issues.
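
A hedged sketch of the JIL for a box job holding one command job, submitted via the jil utility; job names, machine, and paths are invented for illustration:

    cat > nightly_load.jil <<'JIL'
    /* container box, starts weeknights at 20:00 */
    insert_job: BOX_TRADES_NIGHTLY   job_type: b
    date_conditions: 1
    days_of_week: mo,tu,we,th,fr
    start_times: "20:00"

    /* command job inside the box */
    insert_job: CMD_LOAD_TRADES   job_type: c
    box_name: BOX_TRADES_NIGHTLY
    command: /apps/scripts/load_trades.ksh
    machine: etl-host-01
    std_out_file: /apps/logs/load_trades.out
    std_err_file: /apps/logs/load_trades.err
    JIL

    jil < nightly_load.jil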

Experience working in an Agile environment, participating in daily stand-up calls and updating stories/tasks in the Rally tool.

Solid functional knowledge and excellent exposure to Equities, Options, Fixed Income and Market Data.

Worked on version control tools such as Perforce, Bitbucket, and Microsoft TFS (Team Foundation Server).

Hands-on experience across all stages of Software Development Life Cycle (SDLC) including business requirement analysis, data mapping, build, unit testing, systems integration, user acceptance testing support and Production Support.

Provided guidance and work leadership to less-experienced staff members.

Excellent teamwork skills and good oral and written communication and presentation skills.

Technical Skills:

Databases : Oracle 12c/11g/10g/9i/8i, Vertica v7/v8, Netezza Mustang.

Languages : PL/SQL, SQL, UNIX Shell Scripting, HTML, PERL, XML

Big Data : Netezza Twinfin, Vertica, Hadoop, Hive, Impala

Programming : SQL, PL/SQL, Shell Scripting

ETL Tools : Informatica

Modeling Tools : Visio

Version Control : Perforce, Git, Bitbucket, Microsoft TFS

Job Scheduler tools : Autosys

Tools/Utilities : Toad, SuperPuTTY, SQL Developer, Autosys, vi editor, SQuirreL, MS Office

Bug Trackers : HP Quality Center, TFS, JIRA, GIT

Operating Systems : Linux, Windows

Functional Areas : Banking & Finance

Methodology : Waterfall and Agile

PROFESSIONAL EXPERIENCE

Client : Bank of America Merrill Lynch, New York City Mar 2018 – Present

Project : Automation of Manual Trade Upload Process

The objective of this project is to automate the processing of Sales Trades data. The Sales Operations team drops CSV files of Sales Trades data into a shared folder; an automated process picks up these files, runs validations on each attribute of the data (Products, Clients, Sales Person, CUSIP, etc.) against the Reference Data, and captures audit information.

Responsibilities:

Developed Informatica Mappings, Sessions and Workflows to process raw files

Created Oracle packages to transform the data and apply validations

Developed Unix shell scripts and vsql files to load data into Vertica using Unix pipes
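
A minimal sketch of that pipe-based load, assuming a gzipped CSV feed and a hypothetical STAGE.TRADES table:

    # stream the feed straight into Vertica without a temporary file
    gunzip -c trades.csv.gz |
      awk 'NR > 1' |    # drop the header row
      vsql -h "$VHOST" -U "$VUSER" -w "$VPASS" \
        -c "COPY stage.trades FROM STDIN DELIMITER ',' NULL '' DIRECT;"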

Troubleshot performance issues via the query analyzer and improved database performance using partitions, indexes, and query tuning

Worked with business partners and various stakeholders to understand the requirements, file formats, and validations on data attributes

Led the implementation from design through delivery, managing resources, development effort, key deliverables, and issues.

Liaised between business and the technical team to ensure business requirements were met; optimized the automated feed for faster data ingestion.

Wrote shell scripts to map Trades to the correct Reference Data identifiers.

Responsible for end to end deliverables including QA and user acceptance testing

Maintained the code in Bitbucket.

Environment: Oracle 11g, Vertica v8, Unix Shell Scripting, Autosys, Informatica

Client : Bank of America Merrill Lynch, New York City Mar 2017 – Feb 2018

Project : Data Lineage of Trade

The objective of this application is to capture the trade event life cycle, from the time a trade is sent from the source system to how it is reported in Sales reporting tools, so that users have transparency. A user interface was designed so that users can easily access trade data and interpret the events that affect a trade.

Tracking events on trades is a tedious process, with a strong reliance on Support team and Dev team members to pull and analyze the data and provide the information back to the requestor; a request can sometimes take several days to complete.

The system tracks and captures all trade life cycle events and transformations, holding the data for 2 years past the trade date or for 2 years after the trade is no longer live. The user interface is built so that a user can easily search and navigate the history of a trade.

Responsibilities:

Prepared the Design document to demonstrate the feed flow from Oracle & Vertica to HDFS.

Created PL/SQL Packages, Procedures, Functions, Object Types, Collections, and Dynamic SQL to capture the trade events and populate them into Data Lineage tables.

Created new Vertica tables using projections segmented across the nodes; wrote vsql scripts to populate the trade event data into these tables.
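
A hedged sketch of the projection DDL described above; the LINEAGE.TRADE_EVENTS table and its columns are illustrative only:

    vsql -h "$VHOST" -U "$VUSER" -w "$VPASS" <<'SQL'
    CREATE TABLE lineage.trade_events (
        trade_id   INT         NOT NULL,
        event_ts   TIMESTAMP,
        event_type VARCHAR(32)
    );

    -- sort by the common access path, spread rows across all nodes
    CREATE PROJECTION lineage.trade_events_p AS
      SELECT trade_id, event_ts, event_type
      FROM   lineage.trade_events
      ORDER BY trade_id, event_ts
      SEGMENTED BY HASH(trade_id) ALL NODES KSAFE 1;

    SELECT REFRESH('lineage.trade_events');
    SQL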

Reviewed code changes of other team members & provided valuable feedback.

Implemented dynamic partitioning in Hive on the external tables.
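
A short sketch of that dynamic-partitioning step, assuming a hypothetical external staging table partitioned by trade date:

    hive -e "
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;
    -- the partition column must come last in the SELECT list
    INSERT OVERWRITE TABLE trade_events PARTITION (trade_dt)
    SELECT trade_id, event_type, event_ts, trade_dt
    FROM   trade_events_ext;
    "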

Used Spark SQL to de-duplicate trade events.
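
A minimal sketch of the de-duplication, keeping only the latest event per key; table and column names are placeholders:

    spark-sql -e "
    SELECT trade_id, event_type, event_ts
    FROM (
      SELECT *,
             ROW_NUMBER() OVER (PARTITION BY trade_id, event_type
                                ORDER BY event_ts DESC) AS rn
      FROM trade_events
    ) t
    WHERE rn = 1;
    "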

Used Impala through Hue for querying.

Scheduled new jobs to execute the Oracle, Vertica, Unix, & HDFS scripts for capturing the trade events during the nightly batch cycle.

Performance-tuned the SQL scripts so that the batch SLA was not affected by the numerous Autosys jobs created for Data Lineage.

Delivered the project with zero defects, well within the estimated timelines; maintained the code in Perforce.

Environment: Hadoop, HDFS, Impala, Talend, Spark, Oracle 11g, Vertica, Shell Scripting, Autosys

Client : Bank of America Merrill Lynch, New York City Mar 2016 – Oct 2017

Project : Master Client Data and Client Brief

The objective of this project is to replace the current data source for client information with the Enterprise Reference Data system, which contains data about parties, accounts, and other primitive reference data. This migration mitigates the operational risk of relying on an ageing client platform for Client Reference Data, which in turn simplifies the client derivation process to use source client identifiers for trade data processing.

Responsibilities:

Coordinated with the Reference Data team to understand the new Data formats and changes to source identifiers

Designed the underlying data structure and data model to minimize the impact on existing processes

Designed the loading technique to populate data from Oracle into Vertica

Wrote shell scripts to map Trades to the correct Clients based on Trade Client identifiers

Worked with 40+ trading systems and 10+ downstream consumers to ingest and process the Trade data.

Responsible for end to end deliverables including QA and user acceptance testing

Generated reconciliation reports (recons) to assess the impact of the new Client data changes

Rewrote Trade Processing logic to accommodate systems not in line with the new Client Identifiers.

Generated PDF reports giving end users a 360-degree view of each Client across different Books, Products, Industries, and Regions.

These PDF reports helped end users see how a client was performing in the Sales world and decide what actions could make the client perform better. The reports were used by 100+ end users.

Environment: Oracle 11g, Vertica V8, Unix Shell Scripting, Autosys, Informatica

Client : Bank of America Merrill Lynch, New York City Nov 2014 – Feb 2016

Project : Trade Split processing

Based on agreements between managers and marketers, trades had to be split into multiple trades among different marketers, with the revenue split accordingly. The split can be based on different attributes of the trade.

The objective of this project is to create an automated process to split trades into multiple trades based on the business rules set by marketers, managers, etc.

Responsibilities:

Involved in business requirements gathering and analysis, understanding the client's requirements, and system analysis.

Designed the data structure and data model based on the requirements and finalized them.

Designed the new Vertica tables to store the split rules set by business

Created automated shell scripts to apply these split rules to the trades before loading the data into the final tables.

Performance-tuned the SQL queries to load the data faster with less memory.

Built automated test checks into the scripts to verify whether each process run completed successfully.

The process and tables were designed so that rules could be changed at any time without modifying any scripts.

The application had 33+ different types of split rules, with each type containing around 10K individual rules.

Environment: Vertica, Shell Scripts, Autosys

Client : Bank of America Merrill Lynch, New York City Nov 2013 – Oct 2014

Project : Migration from Netezza Mustang to Vertica

The objective of this project is to migrate the data from the ageing Netezza Mustang appliance to the new Vertica data warehouse.

Responsibilities:

Designed and developed the Vertica Database structure from scratch.

Created automated scripts to load the daily trade data into the tables.

Used UNIX named pipes to load data from Oracle into Vertica.
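
A hedged sketch of that named-pipe handoff, with a hypothetical TRADES source table and STAGE.TRADES target:

    PIPE=/tmp/ora2vert.$$
    mkfifo "$PIPE"

    # writer: spool Oracle rows as pipe-delimited text into the FIFO
    sqlplus -s "$ORA_CONN" <<'SQL' > "$PIPE" &
    SET PAGESIZE 0 FEEDBACK OFF HEADING OFF TRIMOUT ON LINESIZE 32767
    SELECT trade_id || '|' || cusip || '|' || quantity FROM trades;
    EXIT
    SQL

    # reader: Vertica consumes the stream as it is written
    cat "$PIPE" | vsql -h "$VHOST" -U "$VUSER" -w "$VPASS" \
      -c "COPY stage.trades FROM STDIN DELIMITER '|' DIRECT;"

    wait
    rm -f "$PIPE"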

Created aggregate tables so that retrieval of data for the UI was easy.

Demonstrated the performance gains of Vertica over Netezza via a POC comparing the various data warehouses in scope.

Liaised with various Downstream, Development teams and QA to facilitate smooth transition/migration from Netezza to Vertica.

Worked on tuning the existing scripts using proper distribution keys, better explain plans, rewritten and optimized joins, etc.

Delivered the project with zero defects, well within the estimated timelines.

Received the Best Team Player and Developer award from senior management.

Environment: Oracle 11g, Netezza, Vertica, Shell Scripts, Autosys, Informatica

Client : Bank of America Merrill Lynch, New York City Nov 2012 – Oct 2013

Project : Big Data POC

Given the exponentially increasing data volume for reporting, there was a need to replace the current Netezza Mustang version with a big data product. To decide which product to choose for the group's analytics, a Proof of Concept (POC) was carried out: maintaining the same data volumes and following each vendor's best practices for data loading and manipulation across the competing databases, then comparing them on multiple parameters (data loading speed, logic-building capabilities, hardware and licensing costs, etc.) to arrive at a fair decision as to which product best suits the project's needs.

Responsibilities:

Understanding the architecture of the 3 databases, viz. Netezza Twinfin (IBM), Vertica (HP), and Sybase IQ (SAP).

Understanding and coding the best practices provided by the 3 vendors.

Coordinating with multiple groups and vendors to setup the POC databases.

Conversion of current processing scripts into the corresponding code for the 3 databases.

Interacting with professionals for each DB to achieve the best approach for implementation.

Performance tuning of the code after deploying it to the databases to extract the best possible numbers out of these databases.

Parallelizing the data loads to distribute the load on available nodes of the column store architecture to get the best loading results from Vertica.

SQL refactoring and logic rebuilding to achieve the current state of trades in the most optimized manner.

Creating Autosys jobs and writing UNIX shell scripts to interact with these databases in order to simulate the production like setup.

Documenting all the actions and steps performed during multiple stages of POC for each of the databases.

Generating comparison charts representing the performance of the 3 databases on multiple parameters, helping the company decide on the best-suited product.

Presenting the comparison numbers to the management depicting which database performs better than others in different scenarios.

Environment: Oracle 10g/11g RAC, Netezza Twinfin, Vertica, SQL, PL/SQL, UNIX Named Pipes, Informatica, Autosys

Client : Bank of America, Charlotte Nov 2010 – Oct 2012

Project : Compass Web and Desktop Application

The tool provides dash-board capabilities along with multi-dimensional analysis for commissions and production credits; trade search capabilities and ability to analyze global client performance.

It gives a hawk-eye view of the revenues and commissions generated across various lines of business in the GCIB domain, with respect to all the trade dimensions (marketer, client, product, etc.), providing rollups and aggregates at different levels.

Responsibilities:

Involved in business requirements gathering and analysis, understanding the client's requirements, and system analysis.

Designed and developed the Netezza Database structure from scratch.

Created automated scripts to load the daily trade data into the tables.

Created aggregate tables so that retrieval of data for the UI was easy.

Performance tuning of queries for effective retrieval of data on UI.

Implemented Netezza queries for aggregating data to be displayed on the application.

Wrote SQL queries with analytical functions, aggregate functions, groups and rollups.

Environment: Netezza, UNIX, Autosys, Informatica

Client : Bank of America, Bangalore Jul 2006 – Oct 2010

Project : Sales Associate Hierarchy Maintenance tool and Operations maintenance application

A tool to manage report entitlements and access allocation for the Global Markets sales hierarchy, and an instrumental tool in the easy and fast on-boarding of sales associates during the legacy Bank of America Securities LLC and Merrill Lynch merger. The single hierarchy for Global Market Sales Reporting was implemented, and all the other hierarchies were sunset. The DB design was replicated to build associate on-boarding tools for other Bank of America Securities LLC groups, such as Research and Traders.

Also created an application that lets the operations team of Global Markets and Research Technology create, modify, and delete trades and trade attributes (Client, Marketer, Trader, Book, and Broker) based on requests from the business teams, so that reports are generated properly and on time.

Responsibilities:

Wrote hierarchical queries to generate the sales hierarchy.
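
A minimal sketch of such a hierarchical query; the SALES_ASSOCIATES table and its columns are illustrative:

    sqlplus -s "$ORA_CONN" <<'SQL'
    -- walk the hierarchy top-down from associates with no manager
    SELECT LPAD(' ', 2 * (LEVEL - 1)) || associate_name AS hierarchy
    FROM   sales_associates
    START WITH manager_id IS NULL
    CONNECT BY PRIOR associate_id = manager_id
    ORDER SIBLINGS BY associate_name;
    SQL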

Used UTL_FILE to generate PL/SQL reports.

Developed Oracle procedures to publish a dynamic profile XML giving the access details of a sales associate.

Used the XMLDOM package to generate the profile XML.

Developed XML queries to generate the XML files that communicate with the UI, using the XMLDOM package.

Developed Oracle Functions, Procedures, Packages, Cursors, and PL/SQL scripts implementing the business requirements; wrote SQL queries to insert the metadata.

Environment: Oracle 9i, SQL, PL/SQL, Oracle XML packages, SQL* Loader, UNIX, Autosys

EDUCATION

Bachelor's degree from JNTU University, Hyderabad, India

Achievements/Extracurricular Activities:

Won several client appreciations and certificates while working with the Bank of America client at Infosys Limited.

Won Spot Awards for the best work done at Infosys Limited

Received appreciation from Infosys senior management for successful project releases and for helping other teams.


