
Database Architect

Location:
Fremont, CA
Salary:
170,000
Posted:
March 30, 2020


Resume:

SUKHBIR S. JOLLY

Cell: (***) *******

Email: adcjt4@r.postjobfree.com

SUMMARY

Possess over 15 years of experience in application analysis, design, development, testing, and implementation; systems integration; data migrations, conversions, and interfaces; performance tuning; data warehouse development; and some Oracle database administration, using Oracle RDBMS and Oracle Exadata on UNIX and Linux platforms.

I have worked in companies associated with the following industries: Banking and Finance, Mobile Software, Telecommunications, Comprehensive Wealth Management, Databases, Software Development, Internet Web Site Personalization, and Risk Management and Credit Analysis.

CORE EXPERTISE

● Performance tuning of Oracle backend database processes and applications.

● Re-architecture and rewrite of Oracle backend database processes for performance and scalability.

● Designing, developing and implementing large data migrations.

● Data Warehouse design and development.

● Designing, developing and implementing automated database interfaces between non-Oracle/legacy applications and Oracle, or between Oracle databases.

● End to end backend database development.

● Designing and developing data extracts required by other databases/applications.

PROGRAMMING LANGUAGES

PL/SQL, SQL, Korn Shell, sed and awk, Perl, HTML, C, JavaScript, XML

RDBMS

Oracle 7.x, Oracle 8i, Oracle 9i, Oracle 10g, Oracle 11g, Oracle 11g Exadata 8 Node RAC, Oracle 12c

SOFTWARE TOOLS/UTILITIES

● SQL*Loader, SQL*Plus, Oracle Import/Export, Oracle Enterprise Manager

● Toad, ER-Win (data modeling)

● Benthic software tools for Oracle, ClearCase

BIG DATA

Hadoop Architecture, HDFS, Hive, Sqoop

(Possess theoretical knowledge only, but can come up to speed)

SOFTWARE PACKAGES

● Oracle Financials 10.6, Oracle Order Entry and Shipping Modules of Oracle Manufacturing

● Minx Manufacturing Application, ASK Manman Manufacturing Application

● Telecommunications Record Information Billing System (TRIS+)

● DataFaction High Net Worth Client Accounting System

OPERATING SYSTEMS / HARDWARE PLATFORMS

● HP-UX - HP C7000 Blades (used for Oracle Exadata)

● Sun Solaris - HP V Class Machines.

● UNIX 4.2 BSD and System V - Sun Enterprise Series

● Red Hat Linux - MC68030 based minicomputer running UNIX

● Windows NT/2000 Server Enterprise Edition - Dell, dual Pentium CPU System

Oracle 11g, 8 Node Exadata RAC Hardware Configuration

32 HP C7000 blades. Each blade has 32 cores and 288GB of RAM, totaling 1,024 cores and 9.2TB of RAM. They are configured with 2 Violin Flash Storage Arrays (40TB total).

Apple Inc, Sunnyvale, CA, Database Consultant, 05/2017 - 01/2018

Performance improvements and scalability of Oracle data imports, Oracle batch processes, and online queries used by the Sales and Trends application for content providers selling content through Apple's iTunes framework. Oracle database size was 200 terabytes. This application reports sales and trends for content providers' content sales on a daily, weekly, monthly, quarterly, and yearly basis, by region, city, country, state/province, songwriter, song publisher, singer, movie actor, movie director, movie script writer, book publisher, and so on, or a combination of these. Examples of content include music, books, audio, and movies.

Project Responsibilities

Performed the following for performance improvements of 40 data imports, 20 Oracle batch processes, and 200 online queries:

● Analyzed and thoroughly documented the performance issues, and presented the documented issues to the DBAs, database developers and QA to get their approval to proceed (a representative diagnostic sketch appears after this list).

● Implemented the performance changes with the database developers.

● Documented unit and integrated tests for performance improvements.

● Worked with QA and Oracle developers to run the test cases, to make sure that functionality was maintained after the performance changes had been implemented.

● Worked with QA, Oracle developers and DBAs to demonstrate the performance improvements.
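A minimal sketch of the kind of diagnostic step used during this analysis; the sql_id and session identifiers below are hypothetical placeholders, not values from the actual system.

    -- Pull the actual execution plan and runtime statistics for a slow statement.
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('7h35uxf5uhmm1', NULL, 'ALLSTATS LAST'));

    -- Enable extended SQL trace for a suspect session so the raw waits
    -- can be reviewed later with tkprof.
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567, waits => TRUE, binds => TRUE);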

Technology Used:

PL/SQL, SQL, SQL*Loader, SQL*Plus, Korn shell, Oracle 12c on Linux, Oracle Enterprise Manager, AWR, Toad, SQL Trace; automated via cron jobs

Verizon Wireless, Walnut Creek, CA, Database Consultant, 11/2014 – 05/2017

Applications for performance monitoring of Verizon Wireless nationwide 4G/LTE networks using key performance indicators (KPIs). These applications are ALPT (Alcatel-Lucent Performance Tool), ELPT (Ericsson Performance Tool), and CLPT (Core Performance Tool). About 1 billion records are loaded each day. Total production database sizes: ALPT 10 terabytes, ELPT 15 terabytes, CLPT 5 terabytes.

Project Responsibilities

● Rewrote and automated data archival processes, thereby reducing storage requirements, which also helped data load and report performance.

● Documented Method of Procedure (MOP) for deployments in Production and DR (Disaster Recovery) database environments.

● Designed black box and white box test cases for new database features and worked with and helped QA in detailed testing.

● Worked with subject matter experts and technical leads to design, develop, test and deploy the complete CLPT (Core Performance Tool) application, used for monitoring the performance KPIs for Voice over LTE.

● Made data types in some query join conditions consistent, thereby avoiding full table scans (FTS) on queries.

● Worked with subject matter experts and technical leads to design, develop, test and deploy new features for the three applications, ALPT, ELPT and CLPT. These new features included many new KPIs that were required to be monitored by radio frequency engineers, business users and subject matter experts.

● Removed redundant portions of PL/SQL code, after discussions with Subject matter experts and technical leads, to improve batch load performance.

● Worked with DBAs and operations to analyze, debug and fix any production performance issues either by providing a patch fix or by more detailed fixes like data model changes and data migration, providing performance fixes in the database packages and so on.

● Re-designed and wrote detailed error handling, error logging and debug logging processes in the stored procedures.

● Improved partition merge process performance by 10 times, using bulk concurrent inserts from the partitions being merged into temp tables, followed by exchange partition operations (a sketch of the exchange partition pattern appears after this list).

● Improved SQL query cost, buffer gets, physical reads and CPU cost by adding missing join conditions on indexed columns; the missing conditions were causing merge join Cartesian operations on very large tables, which are very expensive.

● Modified existing indexes by adding columns referenced in the WHERE clause that were missing from the index, to avoid index skip scans, which are costly, especially on very large tables.

● Partitioned large tables that were not partitioned and were causing performance/scalability issues during reporting. This also involved writing data migration code to migrate data from non-partitioned to partitioned tables.

● Added parallel query and append hints to speed data loads and perform hourly and daily summarization.

● Used Oracle's partition exchange feature to speed up large data loads into transactional tables.

● Designed and developed automated data purge procedures to keep a rolling one month of 15-minute transactional data, a rolling three months of hourly transactional data, and a rolling one year of daily transactional data.

● Used the hybrid columnar data compression feature of Exadata to reduce space usage drastically.

● Worked with DBAs and operations to set up weekly procedures for collecting table and index statistics (up to a 20% sample) for the cost-based optimizer, to improve report query performance.

● Replaced deletes in the PL/SQL code that were causing data fragmentation and slowing down the batch processes.

● Improved load and select performance in the stored procedures by adding missing filters in the sql queries.

● Understood all the database packages/procedures in detail and made sure all the queries were tuned; found functional issues, discussed them with subject matter experts, got their approval, and fixed them.

● Rebuilt transactional tables and indexes to remove data fragmentation.

● Modified sql loads to use direct path loads.

● Improved performance by loading data into transactional tables concurrently through multiple oracle sessions.

● Pinned the key network element table, the KPI summary tables, and all their index blocks into the Oracle Exadata flash cache, and denormalized the network element table, improving report performance by 20 times.

● Pinned the busy hour tables into the Oracle Exadata flash cache, speeding up report run times by a factor of 20.

● Used the hybrid columnar compression feature to drastically reduce storage requirements.

● Helped other database teams tune their Oracle database batch processes.

● Mentored the UI Java developers from other database teams in sql and pl/sql tuning.

● Provided extracts from the ALPT/ELPT/CLPT databases to the big data and other teams for their processing.
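A minimal sketch of the partition exchange pattern referred to above; the table, column, and partition names are hypothetical, not the actual ALPT/ELPT/CLPT schema.

    -- Work table with the same shape as the partitioned KPI table, no rows.
    CREATE TABLE kpi_15min_stage AS
    SELECT * FROM kpi_15min WHERE 1 = 0;

    -- Bulk, direct-path load of one interval's merged rows into the work table.
    INSERT /*+ APPEND */ INTO kpi_15min_stage
    SELECT cell_id, interval_start, SUM(drop_count), SUM(attempt_count)
    FROM   kpi_15min_raw
    WHERE  interval_start = TO_DATE('2016-05-01 00:15', 'YYYY-MM-DD HH24:MI')
    GROUP  BY cell_id, interval_start;
    COMMIT;

    -- Swap the loaded segment into the partitioned table: a dictionary-level
    -- operation, far cheaper than row-by-row inserts or deletes.
    ALTER TABLE kpi_15min
      EXCHANGE PARTITION p_20160501_0015
      WITH TABLE kpi_15min_stage
      WITHOUT VALIDATION;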

Technology Used:

PL/SQL, SQL, SQL*Loader, SQL*Plus, Korn shell, sed/awk, Oracle 11g Exadata 8 node RAC on UNIX, Oracle Enterprise Manager, AWR, Toad, Oracle Import/Export utilities, SQL Trace; automated via cron jobs

State Compensation Insurance Fund, Pleasanton, CA, Database Consultant, 06/2013 – 10/2014

Project Responsibilities

● Performance tuning of Oracle Claims, Quote, Legacy, Billing and Fraud data warehouse ETL processes (data warehouse size: 15 – 20 terabytes). Improved performance by 10 - 20 times and also made the data warehouse systems scalable with data growth.

● Designing, developing, testing, implementing, and load and performance testing new ETL processes for data warehouses.

● Working with the Big Data team to design, develop and test very large data extracts and interfaces from OLTP source databases for Hadoop based Big Data ETL processing.

● Designing, developing, unit/integration testing, and load and performance testing data refresh processes from Oracle data warehouses and OLTP databases required for SOLR searches. The main idea behind this was denormalization, which makes the SOLR merge and re-index of its file structures very fast. This process made the SOLR data merge and re-index 20 – 30 times faster, as the refreshes were incremental. OLTP database size was 7 terabytes.

● Designing, developing and implementing data archival strategies for the Oracle data warehouses and OLTP databases, thereby making the ETL processes much faster and more scalable, and making the OLTP batch processes faster, more efficient and more scalable.

● Reviewing SQL and PL/SQL code written by developers from the Data Warehouse and OLTP teams and documenting issues with their code from a performance perspective: exception handling, debug and error logging, deadlock and locking issues, context switching issues that can cause huge bottlenecks, code commenting, indentation and readability, using correct query hints for parallel processing, bypassing redo logs, adding missing indexes, adding missing query filters that make queries much more efficient or rewriting SQL queries more efficiently, creating data warehouse tables/indexes in larger block size tablespaces (32K or 64K) for faster read and write performance, pinning tables in the Oracle SGA to improve dashboard query performance, and so on (a sketch of the parallel direct-path load pattern appears after this list).

● Working with developers to debug and fix performance and database issues in production, QA and UAT databases.

● Working with Oracle DBAs to change Oracle server parameters for performance improvements, monitor and providing quick and long term fixes for database performance issues caused by ETL processes.

● Working with the Release Management team and developers to plan, document, implement and validate database releases for data warehouses and OLTP databases on Linux, Sun Solaris and HP-UX operating systems, for production releases.

● Documenting Release notes, applying and validating database code Releases on all Development, QA and UAT databases.

● Working with load and performance testing teams to make sure that new code changes for the Data Warehouse ETL did not impact the SLA, and that the new OLTP database code did not overload the OLTP database or impact new and existing query performance.

● Performing POCs for Data Warehouse and OLTP teams to provide high performance and scalable database solutions.
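A minimal sketch of the parallel, direct-path load pattern mentioned in the code review bullet above; the table names are hypothetical.

    -- Allow parallel DML in this session, minimize redo on the target,
    -- then load with a direct-path, parallel insert.
    ALTER SESSION ENABLE PARALLEL DML;
    ALTER TABLE claims_fact NOLOGGING;

    INSERT /*+ APPEND PARALLEL(claims_fact, 8) */ INTO claims_fact
    SELECT /*+ PARALLEL(s, 8) */ *
    FROM   claims_stage s;
    COMMIT;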

Innopath Software, Sunnyvale, CA, Lead Developer and Data Warehouse Architect, 03/2011 – 05/2013

● Completely re-architected, re-designed and re-wrote the Data Warehouse and ETL processes, thereby making them more scalable and reliable and improving process performance by 15 - 20 times. Took complete ownership of the entire project from design to deployment. This project was developed for Mobile Update customers with very large data sets, such as Verizon, Tata, US Cellular, Intel, Qualcomm, Fujitsu and Orange. This project was implemented as a SaaS hosted service model for some customers.

● Architected, designed, developed and implemented Data warehouse and ETL solution for Active Care customers, with data containing intelligent solutions for Mobile Carriers, OEMs and internal operations. This was implemented as a SaaS hosted service model. Took complete ownership of the project from design to deployment.

● Architected, designed, developed and fully automated data extracts for very large customers like Verizon and Tata to provide list of mobile phones with outdated firmware. These extracts helped the customers in monitoring and improving the FOTA (Firmware Over The Air Update), success rate, and thereby reducing customer service calls for mobile device issues related to outdated firmware. Took complete ownership of the project from design to deployment.

● Architected, designed, developed, tested and implemented the end-to-end mobile analytics data warehouse/ETL solution, required by OEMs and network service providers, to provide detailed analysis for the following business cases: cell phone battery heating up quickly, cell phone battery draining quickly, dropped calls, phone reboots, and application crashes on the cell phone.

o These business cases help the OEMs use the detailed analysis provided by the Mobile Analytics UI and reports to have intelligent conversations with network providers regarding why the cell phone battery heats up or drains quickly, and possible reasons for dropped calls, phone reboots and application crashes.

o These business cases help the Network Service Providers to use the detailed analysis provided by the Mobile Analytics UI and Reports to have intelligent conversations with customers regarding why the cell phone battery heats up, cell phone battery drains quickly, possible reasons for dropped calls, phone reboots and application crashes.

o This was built to reduce the cell phone returns that are causing the OEMs and Network Service Providers to lose billions of dollars in revenue and also to reduce damage to cell phone brand names, thereby reducing loss in sales revenue.

o Data warehouse size was about 7 terabytes, with a daily data volume of 5 – 10 million.

o This project was implemented as a SaaS hosted service model.

● Re-architected, re-designed, developed and fully automated the Java based batch processes used for Mass Campaigns, Device Import, Subscription Import, thereby making them scalable, reliable and improved their performance by 20 – 30 times. Took complete ownership of the project from design to deployment.

● Provided database performance solutions in production (emergency situations) for very large volume database processes that perform the following:

o Deleting orphaned devices and subscriptions, and data fixes for devices.

● Provided data fixes for random issues created in production by inefficient SQL scripts that performed incomplete data updates or had to be killed due to performance issues, leaving the data inconsistent.

● Acted as mentor for Java developers and database developers trying to implement database scripts.

Technology Used:

PL/SQL, SQL, SQL*Loader, SQL*Plus, Korn shell, Oracle 10g and Oracle 11g on Linux and UNIX, Oracle Enterprise Manager, AWR, Toad, sed and awk, Oracle Import/Export utilities, SQL Trace; automated via cron jobs.

Performed the following for each of the above projects:

● Documented data model changes. Performed complete coding, unit and integrated testing.

● Documented detailed unit and integrated test cases, detailed installation and upgrade instructions for operations.

● Worked with QA and wrote test plans for them, to expedite their testing processes and improve efficiency of their test plans.

● Worked with QA for load and performance tests. Documented business use cases.

● Wrote automated test scripts for black box testing. Worked with DBAs for Oracle server tuning.

● Proved to Business and QA that the functionality was maintained.

● Agile software development methods were used for all of the above projects.

Dun & Bradstreet, San Mateo, CA, Database Consultant, 01/2008 – 03/2011

Performance Tuning Of Batch Processes

Summary

Performance tuned the batch PL/SQL processes to run 10 – 12 times faster, thereby reducing the batch window SLA and preventing a large number of customer support issues and backlogs. This also prevented the company from losing some very large customers with large data volumes and made the existing customers very satisfied in terms of response times. There were around 100 batch processes that had performance issues that were not meeting customer SLA and were not scalable.

Some of the techniques used to improve performance were:

● Optimal re-partitioning of tables and indexes. Partitioning the un-partitioned tables and indexes, thereby reducing I/O and CPU cost drastically.

● Re-visiting and optimizing the batch business processes, thereby removing redundant code and making them faster. In some cases, re-writing a few batch process packages from scratch in an optimal fashion.

● Creating subsets of very large tables using parallel CREATE TABLE AS SELECT, and using these in queries wherever possible instead of hitting the very large tables each time; this was done once a day, before the batch process window (a sketch of this appears after this list). Performing partition-specific updates, inserts and selects in parallel.

● Designing and developing data archiving strategies, thereby reducing data volumes, hence making the batch processes faster. Using INSERT-ALL wherever possible.

● Reorganizing large to very large tables and indexes that have very large delete operations on them.

● Pinning database packages in the SGA. Re-ordering indexes, making them very selective, thereby improving the I/O and CPU cost of the queries. Making data types in some query join conditions the same, thereby avoiding full table scans. Improving SQL query cost, buffer gets, physical reads and CPU cost by adding missing join conditions on indexed columns that were causing merge join Cartesian operations on very large tables, which are very expensive.

● Converting vertical tables into horizontal tables, thereby reducing the row counts to be processed drastically.

● Modifying existing indexes by adding columns referenced in the WHERE clause that were missing from the index, to avoid index skip scans, which are costly especially on very large tables. Re-writing inefficient batch process queries.

● Pinning small tables and indexes in the SGA. Using SQL*Loader direct path loads instead of conventional path loads.
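A minimal sketch of the parallel CREATE TABLE AS SELECT subsetting technique mentioned above; the table and column names are hypothetical.

    -- Build a much smaller working copy of a very large table once per batch
    -- window, in parallel and without redo, and point the batch queries at it.
    CREATE TABLE active_accounts_subset
      PARALLEL 8 NOLOGGING
    AS
    SELECT account_id, duns_number, status, last_update_dt
    FROM   accounts
    WHERE  status = 'ACTIVE'
    AND    last_update_dt >= ADD_MONTHS(SYSDATE, -3);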

Performed the following for each batch process:

● Coding, unit, integrated and functional testing. Documented performance changes in detail.

● Documented performance tuning test plan. Documented detailed functional test plans.

● Made sure that the business functionality was not affected by the performance changes.

● Documented database structure changes in detail. Helped QA in their white and black box testing.

● Documented production deployment procedures. Documented performance improvement results.

● Tested functional changes on a development environment. Tested the performance changes in a production like environment, with the same infrastructure like production.

● Performance tuned 400 UI based queries to run 8 - 10 times faster. This prevented the company from losing some very large customers and made the existing customers very satisfied in terms of response times. It also helped in generating several prospects from old client referrals.

Performed the following:

● Provided trace output of the old and new queries.

● Documented the performance changes.

● Provided the new optimized queries to the developers to integrate into the Java code.

● Worked with Oracle DBAs and Java developers to test these queries in environments with production volumes.

● Worked with QA to certify the tuned queries functionally.

Technology Used:

PL/SQL, SQL, SQL*Loader, SQL*Plus, Korn shell, Oracle 10g on Linux and UNIX, Oracle Enterprise Manager, AWR, Toad, sed and awk, Oracle Import/Export utilities, SQL Trace; automated via cron jobs.

Merced Systems Inc, Redwood City, CA, Senior Integration Engineer, 06/2006 – 12/2007

Performance Tuning Projects

Project Scope

Improved the performance of ETL database batch processes that were already in production by 10 – 15 times, thereby improving customer retention, satisfaction, confidence and future customer wins. This was done for 10 existing customers who were very frustrated with the performance issues. These performance improvements were also applied to the ETL processes of new customers, shielding them from the same performance issues. Provided complete details on the performance tuning case studies for these 10 customers to the Merced development and support teams, so they could handle this for other existing and new customers.

Project Responsibilities

● Understood the ETL processes in detail and documented performance issues.

● Suggested and discussed performance improvement strategies with the customers before implementation.

● Worked very closely with customer IT and DBA team. Designed test cases for performance testing.

● Documented and presented performance improvement results to the customers. Tested functional changes on a development environment.

● Tested the performance changes in a production like environment, with the same infrastructure like production.

● Made sure that the business functionality was not affected by the performance changes. Made code and database level changes to get improved results.

● Led the project from the Merced side and obtained customer sign-off on performance improvements and functional accuracy.

● Performed detailed data mapping between source and MPS datamart. Worked with customers to get details of data sources, server settings, db connection settings. Developed full user acceptance testing document for the ETL interfaces. Architected complete error handling procedures. Worked with the customer DBA for I/O allocation of data and other DB performance related items. Documented detailed interface flow diagrams.

● Designed and developed error handling reports and daily load reports.

● All the above interfaces were run via cron jobs. Performed complete development, unit, module and integrated testing. Made sure that all these interfaces were performance tuned and scalable.

Technology Used:

PL/SQL, SQL, SQL*Loader, SQL*Plus, Korn shell, Oracle 8i and 9i on Linux and UNIX, Oracle Enterprise Manager, AWR, Toad, sed and awk, Oracle Import/Export utilities, SQL Trace; automated via cron jobs.

Harris-myCFO Inc, Senior Engineer, Redwood City, CA, 06/2000 – 05/2006

ETL Interface between Client Accounting System (DataFaction) and Oracle Based Expense Manager

Project Scope

This interface extracts vendor, bank account, invoice, invoice payment, credit card detail, GL account distribution, cash receipt, cash disbursement, cash adjustment and bank account balance data for all client entities from the client accounting system into multiple data files; loads the data into interface tables; performs polling on data loads; performs data transformations and validations; loads into Expense Manager tables; performs complex business, control and amount total verifications; generates data verification reports; performs detailed error handling at each stage of the interface process; automatically informs key business and technical staff of interface completion status via e-mail and pagers; and generates user-friendly error reports to fix source system, user or business related problems (a sketch of the flat file load step appears below).
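A minimal sketch of the kind of SQL*Loader step used to load an extracted flat file into an interface table; the file, table and column names are hypothetical.

    -- invoice_load.ctl: load one DataFaction extract file into an interface table
    -- (invoked from the Korn shell wrapper as: sqlldr control=invoice_load.ctl ...)
    OPTIONS (DIRECT=TRUE, ERRORS=1000)
    LOAD DATA
    INFILE 'invoice_extract.dat'
    BADFILE 'invoice_extract.bad'
    APPEND
    INTO TABLE intf_invoice
    FIELDS TERMINATED BY '|' TRAILING NULLCOLS
    ( client_entity_id,
      invoice_number,
      vendor_id,
      invoice_date     DATE "YYYYMMDD",
      invoice_amount )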

ETL Invoice Payment Approval Interface between Oracle Based Expense Manager and Client Accounting System (DataFaction)

Project Scope

This interface extracts client entity invoice payment approval data from the Expense Manager Oracle DB; performs polling on the data; performs data transformations and validations; loads the approval data into the client approval interface table; performs data verifications; generates verification reports; performs detailed error handling at each stage of the interface process; automatically informs key business and technical staff of interface completion status via e-mail and pagers; and generates user-friendly error reports to fix source system, user or business related problems.

ETL Interface between Client Accounting System (DataFaction) and Centralized Client Reporting Application

Project Scope:

This interface extracts, transforms, validates, loads and summarizes the data from DataFaction (the client accounting application) into the Client Reporting application for generating reports such as: summarized open and paid invoices, summarized credit card expenses, summarized payments by vendor, summarized payments by quarter, summarized expense approvals, vendor summary, credit card detail, summarized cash receipts, summarized cash disbursements, summarized cash adjustments, summarized profit and loss statement, summarized income statement, summarized balance sheet, summarized statement of net worth, summarized cash transactions, cash balances per bank account, categorized cash flow, count of voided checks, and summary of credit cards and credit card payments by vendor, etc.

Project Responsibilities

● Designed and architected the Expense Manager data model, after detailed discussions with business analysts, users and managers, performed detailed data mappings between Client Accounting system (DataFaction) business entity attributes and Expense Manager data model attributes.

● Documented the Interface Software Architecture Design document, which contained: detailed data mappings between client accounting business entities and interface tables, directory and file structures for data extracts from the client accounting system for each business entity, detailed data transformations, data validations, data verifications, data mappings between interface tables and Expense Manager tables, project performance requirements, physical and logical data models, all database object definitions, high level and detailed flow diagrams of the extract and interface processes, extract and interface systems architecture diagrams, and interface success and failure notifications and resolution procedures.

● Designed, developed, tested and deployed the interface process; documented and performed detailed unit, module and integrated test plans; documented system test plans for QA staff and helped them in testing; documented interface operations and support procedures, and interface error handling and recovery procedures; planned and performed detailed dry runs during all stages of the project; documented the software architecture design for vendor development of the file extract process from DataFaction; worked with the UNIX systems administrator and Oracle DBAs to perform operating system striping across disk controllers for maximum data throughput and uniform I/O distribution; laid out the Oracle datafiles and tablespaces across disks and controllers to obtain maximum throughput and uniform I/O; laid out the interface tables and Expense Manager DB tables and indexes across the tablespaces to obtain maximum throughput; pinned the PL/SQL packages into the shared pool for performance improvements (a sketch of this step appears below); used SQL and PL/SQL tuning knowledge to improve performance; performed performance benchmark dry runs in production mirror environments before releasing to production; and achieved a total interface runtime of 30 minutes.
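A minimal sketch of the package pinning step referenced above; the schema and package names are hypothetical, and DBMS_SHARED_POOL may need to be installed by the DBA (dbmspool.sql) if it is not already present.

    -- Pin the interface's main package in the shared pool so it is not aged
    -- out between nightly runs ('P' = package/procedure/function).
    BEGIN
      DBMS_SHARED_POOL.KEEP('EXPMGR.PKG_DATAFACTION_INTERFACE', 'P');
    END;
    /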

Database Batch Interface To Determine Business Issues In A Database Application

● This interface dynamically executes a set of SQL queries, which are inputs to the interface, and then spools the results into flat files that are emailed to technical and business staff, who analyze the results to determine the business issues associated with the data in the database application (a sketch of this pattern appears after this list).

● Each query corresponds to a business issue that needs to be monitored.

● The business issue queries are stored in interface tables. Queries can be added, deleted or edited in the interface table, depending on the business issues to be determined.

● All the flat files generated for that night's run (containing the spooled business issue output) are stored in a directory for each day.

● This interface helped keep the application data clean and also helped fix major bugs in the application that were causing these business issues in different software releases.
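A minimal sketch of the dynamic query-and-spool pattern described above; the interface table, directory object and column names are hypothetical, and each stored query is assumed to return a single pre-formatted VARCHAR2 column.

    DECLARE
      l_out  UTL_FILE.FILE_TYPE;
      l_cur  SYS_REFCURSOR;
      l_line VARCHAR2(4000);
    BEGIN
      -- One flat file per business issue query, per nightly run.
      FOR q IN (SELECT issue_name, query_text FROM business_issue_queries) LOOP
        l_out := UTL_FILE.FOPEN('ISSUE_REPORT_DIR',
                                q.issue_name || '_' ||
                                TO_CHAR(SYSDATE, 'YYYYMMDD') || '.txt', 'w');
        OPEN l_cur FOR q.query_text;          -- dynamically execute the stored query
        LOOP
          FETCH l_cur INTO l_line;
          EXIT WHEN l_cur%NOTFOUND;
          UTL_FILE.PUT_LINE(l_out, l_line);   -- spool each row to the flat file
        END LOOP;
        CLOSE l_cur;
        UTL_FILE.FCLOSE(l_out);
      END LOOP;
    END;
    /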

Project Responsibilities

Designed, developed and implemented the interface. Performed detailed unit and integrated test plans. Documented detailed test plans and install instructions. Obtained internal user sign-off on the functionality.

Technology Used:

PL/SQL, SQL, SQL*Loader, SQL*Plus, Korn shell, Oracle 8i and 9i on Linux and UNIX, Oracle Enterprise Manager, AWR, Toad, sed and awk, Oracle Import/Export


