Over 6 years of experience in the IT industry as a developer
To work in an energetic team of a leading organization, where I can contribute my skills and knowledge to the growth of both the organization and myself.
A software developer with 6 years of experience in software design and development
Broad experience in the full software development life cycle, including requirement analysis, design, development, implementation and maintenance of projects
Extensive exposure to developing graphs using Ab Initio and Unix, creating and scheduling jobs using Control-M and the Tivoli scheduler, and using the BRE tool to generate business rules.
Ability to adapt to a variety of platforms and willingness to learn new technologies to become a better and more reliable software developer.
Excellent analytical, organizational and interpersonal skills; self-motivated and productive as a team member.
Strong client-facing skills, with daily interaction with clients.
Good presentation and interpersonal skills; a reliable team player.
Fast learner with the ability to deliver on time.
AREAS OF EXPERTISE
Operating System : Windows, Unix
ETL Tool : Ab Initio
Business rule generation tool : BRE
DB tools : IBM Data Studio, SQL Developer
Schedulers : Control-M, Tivoli
Database : Oracle, MySQL
Version control tools : SVN and Vader
Bachelor of Engineering (CSE), Thiagarajar College of Engineering, Madurai (2013) - CGPA 8.26
Diploma in Computer Science and Engineering, Salem Polytechnic College, Salem (2010) - CGPA 9.59
SSLC, Govt Higher Secondary School, Salem (2007) – 84.6%
Working as a System Engineer at Tata Consultancy Services, Bangalore, since July 2013.
Project Name : Financial Data Warehousing (FDW)
Client : JPMC (US)
Duration : Jan ’19 – present
Role : System Engineer
Technologies : Ab Initio
Tools used : Control-M, Vader, Jira, BRE tool
Database : Oracle (SQL Developer)
FDW covers both support and development: L3 enhancements, production fixes, and the creation of new target files that are loaded into an Oracle database as per the given requirements. L3 enhancements involve changes to business logic, while production fixes address existing production issues and database degradation. The major types of projects delivered are listed below.
OGL to SAP conversion (forward mapping)
Understanding change requests (both L3 enhancements and production fixes) raised through Jira with clients and the Business Analyst.
Analyzing the impacted interfaces and identifying which graphs need changes. Once the objects to be changed are found, dependency analysis is performed to identify any other impacted interfaces and any downstream jobs consuming the files via Control-M.
Building the changes for the received requirement, migrating them to the UAT environment for testing, loading the files into the target table, and sharing the run results with clients for validation.
Using BRE to create business-rule XFRs for new interface development, with the corresponding jobs created in Control-M. Testing is done both through Control-M jobs and by running Unix scripts from the back end, followed by obtaining UAT sign-off from clients.
Carrying out a dry run a week before the production go-live to ensure the reliability of the code. Once sign-off is received, production documents and config files are prepared for the production migration, and ITSM is updated for the production release.
On release day, after the production migration, preparing post-validation results to confirm a successful migration and monitoring the jobs through their first run.
Maintaining a defect tracker to record defects raised during production go-live, building production knowledge and preventing recurrence of the same issues in prod.
Project Name : Data Factory
Client : Key Bank (US)
Duration : Oct ’13 – Aug ’14 and Nov ’16 – May ’18
Role : Developer
Technologies : Ab Initio
Tools used : Tivoli scheduler, SVN, PPMC
This project re-platforms data from Unix to a Hadoop layer to store large volumes of data with ease of retrieval. Data is sourced from different sources such as Unix, SFTP and SQL tables, and passes through the basic validations given in the requirement document; harmonization and business rules are applied where specified. Once the data is loaded into the Hadoop layer, external tables are created to view the data at the Hadoop path. WebEME was also used.
For consumption: understanding the business requirements in the kickoff call with the onsite team, analyzing the source data as per the requirement, and reporting any discrepancies found in the data.
Developing ETL functionality for the requirements provided in the requirement document.
For sourcing: understanding the requirements, creating psets on top of the generic sourcing graph, and editing the parameter values as per the requirement.
Attending status calls and team meetings on a daily basis. Preparing UTRs for the developed graphs. Once peer review is complete, migrating the code to the higher environments (SIT, QV and PROD) on time.
Creating parm jobs in the corresponding libraries to run the graphs in higher regions, sharing the details with the testing team, and following up with them to obtain sign-off on time.
Once sign-off is received from SIT and QV, preparing the production documents (CR, package creation and scheduling sheet) and providing support during production go-live.
Maintaining a defect tracker to record defects raised during production go-live, building production knowledge and preventing further issues in prod.
Project Name : Merger and Acquisition
Client : Branch Banking and Trust (US)
Duration : Nov ’14 – Nov ’16
Role : Developer
Technologies : Mainframe, Easytrieve, JCL and COBOL
Merger and acquisition is the process of converting non-BB&T customers to BB&T customers for various strategic business reasons; the merging bank is the other bank and the acquirer is BB&T. This process was carried out for different applications such as Deposits and Loans, and for different mergers such as City Bank, Bank of Kentucky, Susquehanna and National Penn. The merging bank's data goes through BB&T's predefined standards and is converted into proper BB&T data.
Analyzing the source file data and providing the required field-level reports to the LOB for source-to-target mapping.
Developing code and mapping fields according to the requirement document provided by the BA.
Digging deeper, through calls with the BA, whenever results are not as expected.
Preparing field-level and code-level reviews once development is done.
Preparing balancing reports to ensure that source and target monetary fields match.
Analyzing the root cause of issues logged by clients, fixing them quickly, and performing regression testing to ensure other target files are not impacted.
Responsible for the account-locking process. This process ensures that new account numbers (provided by BB&T) that have already been assigned are not disturbed in forthcoming file cuts. It is carried out prior to the last two file cuts.
This is verified by preparing test scripts for the account-locking process.
Providing support to fix any production issues and supplying any reports needed by the LOB after the live conversion.