
Data Manager

Location:
Rossville, GA, 30741
Posted:
March 29, 2018


Anil N VillamKanDaThil Phone: 423-***-****

Email : ac4yz6@r.postjobfree.com

SUMMARY:

● 16+ years of a dynamic, full-cycle software career spanning system analysis, design, development, and implementation of relational database and data warehousing systems.

● Proven track record in planning, building, and managing successful large-scale Data Warehouse and decision support systems. Comfortable with both technical and functional applications of RDBMS, Data Mapping, Data management, Data transportation and Data Staging.

● Strong experience working with very large databases (VLDB) of 3-5 terabytes, handling datasets of over 500 million rows. Good knowledge of row-based and columnar database systems.

● Very strong experience in the ETL process: planning, coding, tuning, testing, and implementation of ETL jobs using DataStage 9.1, Informatica 9.1, SSIS, Oracle stored procedures, and SQL Server stored procedures.

● Strong experience and understanding of job scheduling. Worked with various scheduling tools: CA Workflow Monitor, the DataStage and Informatica schedulers, Control-M, a proprietary scheduling tool (ProcMon), crontab, and Windows Scheduler.

● Very strong in writing complex SQL queries using various analytical functions.

● Experience with OS scripting: Unix shell and Windows batch jobs supporting scheduling and automation, including scripts to move/copy files between servers using SCP and SSH.

● Experience with Python scripting: writing scripts to process files and spreadsheets and automating email reads using IMAP.

● Used Teradata SQL Assistant, PMON, and data load/export utilities such as BTEQ, FastLoad, MultiLoad, FastExport, and Teradata Parallel Transporter, with exposure to TPump, on Unix/Windows, running batch processes for Teradata.

● Experience in dealing with various unstructured data types: XML, JSON, text files, PDFs, EDI, etc. Developed jobs to parse data in XML tags using the XML Input and XML Output stages. Developed Python and Perl scripts to process the unstructured data.

● Developed exception handling mappings for data quality, data profiling, data cleansing, and data validation. Developed slowly changing dimension mappings of Type 1, Type 2, and Type 3 (version, flag, and timestamp).

● Broad understanding of dimensional data modeling: star schema, snowflake schema, and fact and dimension table design. Experience with modeling tools such as Erwin and Visio.

● Sound Knowledge of Data Warehouse/Data Mart, Data Modeling Techniques. Very good understanding of Dimensional Modeling.

● Involved in providing Level 0 and Level 1 production support for warehouse feeds coming from 40 source systems (daily/monthly/quarterly).
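The IMAP-based email automation mentioned above can be sketched with Python's standard library. This is a minimal sketch, not production code: the host, credentials, folder, and attachment suffixes are placeholder assumptions.

```python
import email
import imaplib
from email.message import Message

def extract_attachments(msg: Message, suffixes=(".xlsx", ".csv")):
    """Return (filename, bytes) pairs for spreadsheet attachments."""
    found = []
    for part in msg.walk():
        name = part.get_filename()
        if name and name.lower().endswith(suffixes):
            found.append((name, part.get_payload(decode=True)))
    return found

def poll_mailbox(host, user, password, folder="INBOX"):
    """Connect over IMAP and yield attachments from unread mail (sketch)."""
    with imaplib.IMAP4_SSL(host) as conn:
        conn.login(user, password)
        conn.select(folder)
        _, data = conn.search(None, "UNSEEN")
        for num in data[0].split():
            _, fetched = conn.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(fetched[0][1])
            yield from extract_attachments(msg)
```

The parsing step is separate from the network step so the attachment handling can be tested without a mail server.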

TECHNICAL SKILLS:

ETL Tools: DataStage 9.1/8.5 (Parallel Extender, ProfileStage, QualityStage), Informatica 9.1, SSRS

Databases: Teradata V2R5/V2R6, Oracle 11g/10g/9i/8i, DB2 9.x/7.x, SQL Server 2000/2005/2008/2012, Access DB

Programming: Perl, SQL, PL/SQL, Shell Scripting, XML, Python

Operating Systems: Unix, Linux, Windows NT/XP/2000/2003 Server, Windows 7

Scheduling: Control-M, AutoSys, crontab, CA Workflow Automation

Versioning & Migration: ClearCase, Git, VSS, Microsoft Team Foundation

Connectivity: Citrix, PuTTY, Cygwin X, WinSCP, Exceed

EXPERIENCE:

Software Application Design Specialist

Blue Cross Blue Shield of Tennessee

Chattanooga, TN

http://www.bcbst.com/

Jan 2014 to Present

EDW/CBDW/CDW(Teradata) Warehouse

These are the core data warehouses at BCBST; most source systems from the various business areas feed into them. EDW resides on z/OS DB2, CBDW resides on UDB, and CDW resides on Teradata. These warehouses feed most areas of the business, including claims from Facets, HR data from Workday, financials from Oracle Financials, HEDIS data, and pharmacy data coming in from ESI, among others.

● Gather requirements; design with proper impact analysis; test, document, implement, and develop software and data mart models; and perform programming to meet project and new business requirements.

● Create Datastage jobs using different stages like Transformer, Aggregator, Sort, Join, Merge, Lookup, Data Set, Funnel, Remove Duplicates, Copy, Modify, Filter, Change Data Capture, Change Apply, Sample, Surrogate Key, Column Generator, Row Generator, Etc.

● Processing data from various sources: databases, XML, flat files, Excel spreadsheets, JSON, etc.

● Loading Data into the Enterprise Data Warehouse(CDW) using Teradata Utilities such as BTEQ, Fastload, Multiload and Fast Export in Unix environments. Used Volatile and Temporary tables as needed.

● Tuning SQL statements using EXPLAIN: analyzing data distribution among AMPs and index usage, collecting statistics, defining indexes, revising correlated subqueries, using hash functions, etc.

● Use SQL to query the databases, pushing as much of the heavy processing as possible into complex SQL queries for better performance.

● Data profiling, data analysis, and data validation using analytical functions within complex SQL queries. Identifying data issues and anomalies and planning corrective actions.

● Automation using Unix scripting and batch scheduling analysis. Submitting Job breakdown to the job scheduling team.

● Creating roles and profiles as needed. Granting privileges to roles and adding users to roles based on requirements.

● On-call and production support: analyzing and resolving high-priority production issues. Responsible for production support of more than 1,000 jobs running in production.

● Peer to Peer Code Review.
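The data-profiling work above boils down to a few aggregate checks (row counts, null rates, distinct counts, duplicate keys). A small illustrative sketch against an in-memory SQLite table; the claim table and its columns are hypothetical stand-ins for the actual warehouse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claim (claim_id INTEGER, member_id INTEGER, amount REAL);
    INSERT INTO claim VALUES (1, 10, 25.0), (2, 10, NULL),
                             (3, 11, 40.0), (3, 11, 40.0);  -- duplicated key
""")

def profile(conn, table, column):
    """Row count, null count, and distinct non-null count for one column."""
    row = conn.execute(f"""
        SELECT COUNT(*),
               SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END),
               COUNT(DISTINCT {column})
        FROM {table}""").fetchone()
    return {"rows": row[0], "nulls": row[1], "distinct": row[2]}

stats = profile(conn, "claim", "amount")

# Duplicate-key detection, another routine profiling check:
dups = conn.execute("""
    SELECT claim_id, COUNT(*) FROM claim
    GROUP BY claim_id HAVING COUNT(*) > 1""").fetchall()
```

The same aggregates translate directly to Teradata or SQL Server; only the sample data and connection differ.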

Environment: IBM InfoSphere Information Server 9.1 Suite [ETL DataStage, Information Analyzer, IS Manager], Oracle 11g, SQL Server 2005/2008/2012, flat files, XML files, shell scripts, CA Workload Automation, Windows 10, AIX, Teradata 15.10, Sybase

Pre-Prep Conversion for DataStage Upgrade from 9.5 to 11.5

This project updates all deprecated stages to the latest stages in preparation for the 11.5 migration planned for later this year.

● Leading a group of 4 developers in the conversion effort.

● Used Connector Migration Tool to convert old deprecated stages to appropriate 11.5 acceptable stages.

● Converting and productionizing around 600 parallel jobs to the 11.5-acceptable version.

● Converting the server jobs to parallel jobs.

● Documenting and Testing the jobs to validate the conversion.

● Used MD5Sum to compare the file outputs between the old version and new version.
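The MD5 comparison above is easily scripted. A minimal Python sketch; the file paths would come from the old and new job output directories:

```python
import hashlib

def md5sum(path, chunk=1 << 20):
    """Stream a file through MD5 so large job outputs never load into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def outputs_match(old_path, new_path):
    """True when the pre- and post-conversion job outputs are byte-identical."""
    return md5sum(old_path) == md5sum(new_path)
```

Hashing both sides and comparing digests gives the same verdict as the `md5sum` command-line utility while fitting into a larger validation script.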

● Deploying the validated jobs to production (ver 9.1).

Environment: IBM InfoSphere Information Server 9.1 Suite [ETL DataStage, Information Analyzer, IS Manager], Oracle 11g, SQL Server 2005/2008/2012, flat files, XML files, shell scripts, CA Workload Automation, Windows 10, AIX, Teradata 15.10, Sybase

TPS and Metadata

This project automates data governance and the collection of metadata for various data warehouse systems. Business analysts and developers submit a TPS document to the modelers to apply any database changes. Once approved, the modelers acknowledge the TPS document to a designated email box. These emails are processed on a nightly schedule to fetch the metadata of the affected tables.

● Gather requirements; design, document, and implement the data mart that represents data governance at BlueCross.

● Python scripting: writing scripts to process files and spreadsheets and automating email reads using IMAP.

● Create Datastage jobs using different stages like Transformer, Aggregator, Sort, Join, Merge, Lookup, Data Set, Funnel, Remove Duplicates, Copy, Modify, Filter, Change Data Capture, Change Apply, Sample, Surrogate Key, Column Generator, Row Generator, Etc.

● Unix scripting for process automation and basic file manipulations

● Scheduling and maintenance

Environment: IBM InfoSphere Information Server 9.1 Suite [ETL DataStage, Information Analyzer, IS Manager], SQL Server 2005/2008/2012, flat files, spreadsheets, Python 3.5.1, shell scripts, CA Workload Automation, Windows 10, AIX, Red Hat Linux 7.4

BHI - The nation's largest health information data warehouse, bringing together the claims experience of 54 million BCBS members nationwide, HIPAA-compliant to protect member privacy. All Home Plans submit their member data, products, accounts, medical claims, and Rx claims monthly to the Association. These monthly extract files must reflect standard NDW formats and be sent by the last day of the month, containing membership information as of the first day of that month, in a consistent format conforming to standard specifications. BHI has four stages of data certification to assure accurate data, including independent actuarial review.

● Support the transmission of member delta data monthly to BHI.

● Data profiling and verification that the quality and delta percentage of change are within expected norms, using complex SQL queries.

● Analyze unexpected changes or higher-than-expected variance in the change data; work with the respective functional group on any data anomalies.

● Provide Level 0 and Level 1 production support for warehouse feeds coming from 40 source systems.
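The delta-percentage check above can be expressed as a small helper that compares each month's extract row count against the prior month. A sketch; the 5% band is an assumed threshold for illustration, not BHI's actual norm:

```python
def delta_pct(prev_count: int, curr_count: int) -> float:
    """Percentage change of the monthly extract row count vs. the prior month."""
    if prev_count == 0:
        raise ValueError("no baseline month to compare against")
    return (curr_count - prev_count) * 100.0 / prev_count

def within_norms(prev_count: int, curr_count: int, max_abs_pct: float = 5.0) -> bool:
    """Flag feeds whose month-over-month change exceeds the expected band."""
    return abs(delta_pct(prev_count, curr_count)) <= max_abs_pct
```

In practice the counts would come from SQL aggregates per feed, and any out-of-band feed would be routed to the functional group for review.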

Environment: IBM InfoSphere Information Server 9.1 Suite [ETL DataStage, Information Analyzer, IS Manager], SQL Server 2005/2008/2012, flat files, spreadsheets, shell scripts, CA Workload Automation, Windows 10, AIX, Red Hat Linux 7.4

BIPM Dashboard Reporting

An IS data mart that integrates various operational databases and presents a reporting dashboard to the IS executives.

● Gather requirements; design, document, and implement the data mart from various sources (Keylight, HPSM, HR, PEGA, ESP, PPM).

● Reverse engineering CASM and data mapping to HPSM to minimize impact on the front-end dashboard reporting.

● Writing complex SQL queries and DataStage jobs to load the data mart.

● Testing and Job scheduling.

Environment: IBM InfoSphere Information Server 9.1 Suite [ETL DataStage, Information Analyzer, IS Manager], SQL Server 2005/2008/2012, flat files, spreadsheets, shell scripts, CA Workload Automation, Windows 10, AIX, Red Hat Linux 7.4

QCR - Quality Care Rewards is a project that helped BlueCross achieve a 4-star rating (trending toward 5 stars) with NCQA. QCR is a program that pays providers for closing member health measure gaps as reported by HEDIS. QCR gets data from HEDIS, SDR, and the Provider Portlet. QCR performs member-provider attribution and reports member gaps to the attributed providers. The provider closes the health gaps for the member and submits an attestation. QCR computes the payments to providers for their services, computes the health measures for the member population, and helps senior management analyze how the member population is benefiting.

● Requirement analysis, design and Implementation.

● Writing Teradata BTEQ jobs to load the QCR data mart.

● Identifying potential bottlenecks with queries and rewriting queries.

● Performed skewed redistributions, join ordering, optimizer statistics collection, and physical design considerations (PI/USI/NUSI/JI, etc.). In-depth knowledge of Teradata Explain and Visual Explain to analyze and improve query performance. Recreated user driver tables with the right primary index, scheduled collection of statistics, and secondary or various join indexes.

● Writing Teradata stored procedures/macros to support the front end.

Environment: IBM InfoSphere Information Server 9.1 Suite [ETL DataStage, Information Analyzer, IS Manager], Oracle 11g, SQL Server 2005/2008/2012, flat files, XML files, shell scripts, CA Workload Automation, Windows 10, AIX, Teradata 15.10, Sybase

Sr ETL Developer/Lead

Tennessee Valley Authority

https://www.tva.gov/

Mar 2009 to Jan 2014

Fuel Data Warehouse

This project was part of TVA's effort to retire its mainframe systems and redesign the fuel data warehouse. The project involved multiple phases, the first being re-hosting the fuel data warehouse off the mainframes. The first phase kept the existing reporting structures (built on PowerBuilder), with the loading programs changed from legacy systems to PL/SQL programs and Informatica mappings. The second phase redesigned the fuel data warehouse from scratch into a dimensional model database, with Cognos as the supporting OLAP tool.

● Gathering requirements from various business groups through a series of meetings and analysis of various sources of information across the fuels group, including data held in Excel, Access, and other workstation files.

● System representation through flow diagrams drawn in Visio.

● Defining logical layer and Mapping, transforming data points to logical layer. Identifying key reporting measures for the fuel group.

● Designing the fuels data mart, including design decisions for partition exchange/partition swapping and data compression. Integrating the fuel data mart with the enterprise-wide data warehouse (IECSP) using materialized views. Analyzing performance bottlenecks and defining indexes as needed.

● PL/SQL programming using various Oracle-supplied packages, PL/SQL objects, bulk collects, pipelined functions, DB links, materialized views, etc.

● Developed Informatica mappings for slowly changing dimensions. Used mappings, mapplets, reusable transformations, and user-defined functions. Developed complex workflows using Session, Event Wait, Event Raise, Assignment, Email, Timer, and Command tasks, and scheduled the workflows to run based on file-watcher and email events.

● Extract/Load data into XML Files, Excel/CSV Files and Flat Files.

● Interfacing Oracle stored procedures within Informatica. Used the pmcmd command to execute Informatica workflows from shell scripts.

● Job scheduling on Control-M.
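The slowly changing dimension mappings above follow the standard Type 2 pattern: when a tracked attribute changes, the current row is expired and a new version is inserted with effective dates and a current flag. A plain-Python sketch of that logic; the column names (cust_id, address, etc.) are illustrative assumptions, not the actual mart schema:

```python
from datetime import date

def apply_scd2(dimension, incoming, key="cust_id", tracked=("address",), today=None):
    """Type 2 SCD: expire the current row and insert a new version on change."""
    today = today or date.today()
    for new in incoming:
        current = [r for r in dimension
                   if r[key] == new[key] and r["is_current"]]
        if current and all(current[0][c] == new[c] for c in tracked):
            continue  # no change in tracked attributes: nothing to do
        if current:  # expire the old version
            current[0]["is_current"] = False
            current[0]["end_date"] = today
        dimension.append({**new, "start_date": today,
                          "end_date": None, "is_current": True})
    return dimension
```

In Informatica the same logic is spread across a lookup, an expression comparing tracked attributes, and update-strategy transformations; the sketch just makes the row lifecycle explicit.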

Environment: Oracle 10g, PL/SQL, Oracle utilities, Toad, PL/SQL Editor, .NET (C#), Perl, Erwin, Oracle Financials 11i, Crystal Reports 10, Solaris, Unix scripting, Informatica 8.6/9.1

Vendor Data Automation

This project helped TVA automate loading fuel samples directly from email attachments. A Perl program connected to the IMAP email server and read email periodically. It was integrated with Informatica, which processed and loaded the data into staging tables and generated import files for the Fuelworx application to consume.

● Business Requirement Gathering and Analysis of various external vendor data.

● Designing the Logical and physical data Model.

● Building ETL using Informatica.

● Job scheduling on Control-M.

Environment: Oracle 10g, PL/SQL, Oracle utilities, Toad, PL/SQL Editor, .NET (C#), Perl, Erwin, Oracle Financials 11i, Crystal Reports 10, Solaris, Unix scripting, Informatica 8.6/9.1

Fuel Information Tracking

This project uses GIS technology to display the exact location of TVA coal shipments (rail and barge) on a map, based on EDI data received from the transportation vendors. It helps TVA track and plan the demurrage associated with shipments. The system alerts the plants to a potential demurrage situation and receives an acknowledgment from the plant when a demurrage situation occurs, allowing senior management to better understand the situation and make decisions. This project was recognized as the best innovative IT project at TVA for the year 2012.

● Coordinating consignment feeds with the various railroads (NS, PAL, BSNL) providing services to TVA in delivering coal to the coal plants.

● Supporting the ETL process, enabling it to connect to an IMAP mail server, download email attachments, and load the data directly into staging tables.

● Data mapping the EDI feeds' SPIC codes to the ESRI GIS software.

● Writing Informatica mappings to process the feed and push it to the GIS software, which live-streams the current location of the rail cars.

Environment: Oracle 10g, PL/SQL, Oracle utilities, Toad, PL/SQL Editor, .NET (C#), Perl, Erwin, Oracle Financials 11i, Crystal Reports 10, Solaris, Unix scripting, Informatica 8.6/9.1

Carbon Footprint Accounting

This project helps TVA account for its carbon emissions at a corporate level. TVA has various generation plants; to meet demand it sometimes needs to buy power from other providers, and it operates various machines and vehicles. These entities emit greenhouse gases. The goal of this project is to account for all possible emissions and report them to the respective agencies.

● Gathering business Requirements from various business areas at TVA.

● Building the project design document.

● Designing the logical and Physical data model

● Loading data from various sources, including spreadsheets, HTML, and PDFs.

● Used SSAS, SSRS, and Excel to build various reports and dashboards per customer needs.

● Unix and Perl scripting: supporting the ETL process, enabling it to connect to an IMAP mail server, download email attachments, and load the data directly into staging tables.

● Code quality and implementation: involved in generating and documenting test/QA plans for unit and product integration. Managing deliveries, managing system changes, and controlling code versions.

● Used Control-M Scheduler to schedule and integrate some of the processes to run at the organizational level.

Environment: Oracle 10g, PL/SQL, Oracle utilities, Toad, PL/SQL Editor, .NET (C#), Perl, Erwin, Oracle Financials 11i, Crystal Reports 10, Solaris, Unix scripting, Informatica 8.6/9.1

ETL and Oracle Developer

Constellation Energy Group

https://www.constellation.com/

Mar/2008 to Mar/2009

PNL is a batch processing system that supports CEG in posting its profit and loss from various real-time risk systems to its financial database (Oracle Financials). PNL maps the accounting rules for every trade performed in the risk systems. Trade information flows into the system through various methods, such as XML, text files, and database links. The trade headers (start point) are connected using SOA software built on Oracle Fusion and are loaded in real time; all risk systems are connected to PNL using TIBCO software. The batch starts at a scheduled time based on the details loaded by TIBCO (in the trade header) and on the trade details (as provided via various DB links, text files, or XMLs). CEG uses proprietary scheduling software called ProcMon for scheduling. The various real-time risk systems that PNL fetches data from include SECDB, Right Angle, and Settlements.

● Data Modeling and Application Design: Extensive involvement in Data Modeling (Forward and reverse engineering) using Erwin.

● Planning and implementing table partitions, including design decisions for partition exchange/partition swapping. Analyzing performance bottlenecks and defining various indexes.

● Data migration: migrating stored procedures from Oracle 10g to Oracle 11g; performance testing and application integration.

● Data analysis: worked on data mapping, design, planning and coding, performance tuning, and data validation.

● PL/SQL programming using various Oracle-supplied packages, PL/SQL objects, bulk collects, pipelined functions, DB links, materialized views, etc. Extensively used Oracle analytical functions. Wrote some aggregate functions using Oracle Data Cartridges. Worked with indexes, constraints, and the ANALYZE command on fact tables to enhance query performance. Used TKPROF and the V$SQL_PLAN view to identify query bottlenecks and resource usage; performed query optimization using EXPLAIN PLAN. Performed basic DBA activities: monitoring batch execution and performance.

● Building ETL Mapping using Informatica.

● Process automation: scheduling processes using the proprietary scheduling tool ProcMon.

● Other activities: writing Oracle stored procedures and attaching them to .NET applications. Performed some .NET coding to support the reporting needs of business users. Integrated some .NET applications with reports developed in Crystal Reports 10.

Database Architect

Siemens Energy Services (UK)

May 2005 to Mar 2008

This project automates the half-hourly electricity consumption and aggregation process. The two main user roles in this application are data collector and data aggregator. The data collector collects half-hourly consumption data from major consumers such as industries and commercial establishments through automated equipment (MV90 dialing). All consumption received from MV90 is verified against a set of rules; half-hourly consumption that fails validation is estimated using the data in the system. The system operates on an enormous amount of data. Some of the estimation and validation rules are based on building profile data. The data is converted to the respective flows and forwarded to various market participants. The data aggregator analyzes the half-hourly consumption data collected by the data collector and estimates the data if necessary. Bills are produced from the aggregated data. Consumption details are received as flat files with a predefined format; the structure and content of these files are validated and loaded into the database.

In summary, the system receives half-hourly consumption from MV90, validates it, and loads it into the database. For half-hourly consumption that has not been received, the system estimates the consumption based on the specified BSC rules. The system produces various outflows, which are sent to various market participants. This system deals with a large amount of data.

● Primarily responsible for designing, coding, testing, implementation, and support of the project. Participated in logical and physical data modeling using Select Model.

● Wrote scripts to create tables/temporary tables, indexes, and views. Developed stored procedures, triggers, packages, and materialized views using PL/SQL and Oracle-supplied functions to implement business logic. Extensive use of analytical functions in SQL queries for data analysis.

● Involved in code review of complex SQL and stored procedures. Involved in generating and documenting test plans for unit and product integration. Involved in reviewing and comparing the production server with the UAT server and writing scripts to bring the servers in sync.

● Data exchange between MV90 and Oracle Database.

● Extensively involved with cursors and cursor manipulation, ref cursors, and cursor expressions to generate sources for XML file generation.

● Configuration management using VSS. Responsible for all code checkouts and modifications. Release code pinning.

● Database administration tasks on the Oracle development database, including backup and recovery.

● Used Toad and PL/SQL Editor extensively to analyze data; develop backend packages, triggers, and views; and tune stored procedures and complex SQL statements implementing business logic, using various Oracle-supplied packages, PL/SQL objects, bulk collect, pipelined functions, etc. Extensively used Oracle analytical functions. Wrote some aggregate functions using Oracle Data Cartridges. Involved in performance tuning of PL/SQL objects using trace, TKPROF, V$SQL_PLAN, DB Profiler, and EXPLAIN PLAN. Performed database comparison using Oracle dictionary tables and DB links. Involved in planning and implementing Oracle table partitions. Wrote scripts to load various text and XML files into the Oracle database using SQL*Loader, and used other Oracle utilities such as Import and Export. Transformed and validated loaded data using SQL and PL/SQL procedures. Worked with a huge data volume (database size above 1.5 terabytes).

● Interacted with Business users. Effectively managed user expectation during the implementation.

● Involved in Data Migration from SQL Server (Current system) to Oracle (Atlas).

● Data analysis to generate Data Mapping and Documentation.

● Coding using Oracle external tables, pipelined functions, etc.

● Data cleansing.

● Validating the migrated data using Complex SQL and PL/SQL Stored Procedures.

● Used Informatica to build mapping transformations.

● Involved in CR preparation with the client manager, wherein many high-priority bugs were handled promptly.

● Installation of and data analysis in Pervasive.SQL 2000 to validate data against the Atlas database and for other business needs.

● Writing the high-level and low-level documents related to the project.

● Extensively involved with process management, which included automation and initiation and was done using Unix shell scripts. Designed and worked with the metadata tables used in the automation phase; wrote PL/SQL and Unix shell scripts to populate the metadata tables.

● Onsite-offshore coordination; single point of support at the client office. Involved in application deployment and in debugging and fixing implementation issues related to the Java application.

Software Engineer

Aris Global Pvt Ltd

Dec 2004 to May 2005

ARISg (Adverse Reaction Information System) is a product of Aris Global Software Pvt Ltd. ARISg has matured into the most complete solution for global pharmacovigilance, helping its customers meet regulatory challenges and enhance operational efficiency. ARISg maintains adverse reactions and offers continuous worldwide compliance across the FDA, EMEA, and MHLW, and is flexible enough to handle differing interpretations of regulatory guidelines. ARISg supports all major reports, including MedWatch, CIOMS I and II, Periodic, PSUR, BfArM, MCA Clinical and Spontaneous Reports, VAERS-1, French CERFA, MCA, etc.

● Data analysis and metadata extraction from the source database.

● Preparing the High-level and low-level design documents.

● Documenting the Data mapping between Source database and Target (Arisg) database.

● Planning, Developing and Implementation of migration using the data mapping document.

● Used TOAD extensively to analyze data, develop backend packages, triggers and views, Tuning stored procedures and Complex SQL statements to implement business logic.

● Implemented the mapping between the tables using PL/SQL code, which loaded the data from various staging tables into the base tables.

● Design, quality, SQL and PL/SQL reviews, making specific recommendations to optimize performance and reliability of applications.

● Leading and mentoring a team of 4 plus members and was responsible for Delivery.

● Installation and Configuration of oracle database and performing DBA activities as required for migration team.

● Creating migration logs using the UTL_FILE package.

● Verification and testing of migrated data using complex Data Verification queries.

● Involved in query optimization and performance tuning.

Software Design Engineer

Finch Software India Pvt Ltd.

Apr 2004 to Nov 2004

The MultiFonds Fund Accounting system has been developed to support the complex requirements of fund accounting in a global marketplace. MultiFonds Fund Accounting is completely multi-currency, multi-language, multi-compliance, and real-time. The system is very fast and supports very large volumes. MultiFonds Fund Accounting covers everything linked to fund administration and fund accounting. This includes, but is not limited to: processing of many kinds of funds; purchase, sale, and capital actions of all instruments within the fund; valuation and back-valuation of the NAV and fund units; multi-class funds; pooling; tax management; legal reporting (multi-country); and accounting. A full checklist of the functionality can be seen under MultiFonds Fund Accounting.

As part of a 10-member team, I was responsible for understanding new requirements and enhancements; designing, planning, and estimation; analyzing the impact of the change or new design on the product; and preparing the low-level technical document and coding.

● Impact assessment of enhancements using SQL statements and PL/SQL procedures.

● Developed UI using Oracle forms 6i.

● Preparing the Low-level technical document, coding and Unit Testing.

● Involved in query optimization and performance tuning using SQL trace and EXPLAIN PLAN.

● Performing Code Review of SQL statements and stored procedures.

● Involved in product migration: migrating the old text-based Oracle Forms to a Windows-based application.

Programmer

Sujaynet Technologies Pvt Ltd

Jan 2001 to Apr 2004

● Primarily responsible for business analysis, requirements gathering, design, coding, testing, and implementation of the project.

● Documenting the Requirement Specs along with the Business users

● Documenting the High level and low level documents

● Database modeling and database design.

● DBA activities.

● Used SQL*Plus, UltraEdit (text editor), Forms 6i, and Reports 6i extensively to develop backend packages, triggers, views, tuned stored procedures, complex SQL statements, user interfaces, and reports implementing business logic.

● Used T-SQL, Visual Basic 6, and Crystal Reports extensively to develop backend packages, triggers, views, tuned stored procedures, complex SQL statements, user interfaces, and reports implementing business logic.

● Performing the Code review

● Product Integration, testing and Implementation.

● Maintenance and support

EDUCATION: B.Sc., PGDCA


