
Aruna Surakka

7 Years of Experience in BI (Talend, SQL, Greenplum)

Louisville, Kentucky, 40202

Contact No. : +1-971-***-****

E-Mail : ad12q9@r.postjobfree.com

OBJECTIVE:

Seeking an opportunity to be part of a reputed organization where I can apply my skills in a challenging and innovative environment that helps me achieve both my personal and organizational goals.

Summary:

7 years of professional IT experience in Data Warehousing, Relational Databases and System Integration, with proficiency in gathering and analyzing user requirements and translating them into business solutions.

7 years of experience in development and enhancement projects using Talend Open Studio and Talend Integration Suite.

Excellent experience working on Talend ETL, using features such as context variables and components including tMSSqlInput, tOracleOutput, tMap, tFileCopy, tFileCompare, tFileExist, tIterateToFlow, tFlowToIterate, tHashOutput, tHashInput, tRunJob, tJava, tNormalize, etc.

Involved in TAC (Talend Administration Center) activities such as job scheduling, server monitoring, the Job Conductor, task creation and job deployment.

Extensively executed SQL queries on Oracle and SQL Server tables to verify successful data loads and to validate data.

Experience in integrating various data sources, from databases such as Oracle and SQL Server to formats such as flat files, CSV files and XML files.

Performed various activities like import and export of Talend Jobs and Projects.

Created context variables to store metadata information of all the sources and targets.

Involved in data analysis for source and target systems; good understanding of data warehousing concepts including staging tables, dimensions, facts, and star and snowflake schemas.

Performed data analysis to understand and document data profiling rules, data cleansing rules and data transformation rules for data migration projects.

Extensive experience in implementing data warehousing methodologies including Star schema and snowflake schemas.

Extensive knowledge of Change Data Capture (CDC) and SCD Type-1, Type-2 and Type-3 implementations (a minimal Type-2 SQL sketch follows this summary). Acted as an interface between the Database Administration, ETL development, testing and reporting teams to eliminate roadblocks and keep information flowing smoothly.

Created ETL test data for all ETL mapping rules to test the functionality of the ETL components.

Good knowledge and experience in Metadata and Star schema/Snowflake schema.

Reviewed database business and functional requirements and reviewed mapping transformations.

Experience in Talend Administration Center (TAC) for scheduling and deployment.

Analyzed user requirements and created and reviewed the ETL specifications. Identified the bottlenecks in the sources, targets, mappings and sessions and resolved the problems.

Extensively used various Performance tuning Techniques to improve the session performance.

Debugged and validated the mappings and participated in code reviews.

Excellent communication and interpersonal skills, contributing fully to team goals.

Interacted directly with customers and delivered projects without defects.

Undergoing Data Engineering training, with basic knowledge of Azure Data Factory, Python and Databricks.
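
As an illustration of the SCD Type-2 implementations mentioned above, the following is a minimal sketch in plain SQL. The table and column names (dim_customer, stg_customer, customer_id, customer_address, effective_date, end_date, current_flag) are hypothetical stand-ins for the actual project schemas, not the original project code.

-- Hypothetical SCD Type-2 sketch: expire the current dimension row when a
-- tracked attribute changes in staging, then insert a new current row.
UPDATE dim_customer d
SET    end_date = CURRENT_DATE,
       current_flag = 'N'
WHERE  d.current_flag = 'Y'
AND    EXISTS (SELECT 1
               FROM   stg_customer s
               WHERE  s.customer_id = d.customer_id
               AND    s.customer_address <> d.customer_address);

-- Both new customers and customers whose current row was just expired
-- receive a fresh row flagged 'Y'.
INSERT INTO dim_customer (customer_id, customer_address, effective_date, end_date, current_flag)
SELECT s.customer_id, s.customer_address, CURRENT_DATE, NULL, 'Y'
FROM   stg_customer s
WHERE  NOT EXISTS (SELECT 1
                   FROM   dim_customer d
                   WHERE  d.customer_id = s.customer_id
                   AND    d.current_flag = 'Y');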

Technical Skills:

Programming Languages : SQL, C, UNIX Shell Scripting

Operating Systems : Windows, UNIX, Linux

Servers : Windows 2003-2008, Unix

Software Skills : Informatica, Talend and SQL

DB Tools : DBeaver

Databases : Oracle, SQL Server, Greenplum

Project Acquired Skills : BI (Talend, Greenplum)

Professional Summary:

Projects Undertaken:

Power Data Ingestion

Client : GE Aerospace, GE Aviation, USA

Platform : Talend, Greenplum, Oracle

Role : Team member

Duration : Jul 2021 to Jul 2023

Team Size : 10

This project moves tables from different data sources into the data lake. Around 2,000 tables, spread across different source systems, are refreshed daily and loaded into the data lake (Greenplum) using Talend and HVR. After development was complete we provided 24x7 support for the project, having migrated a large volume of data from heterogeneous data sources into Greenplum. The current phase is an extension in which the same source data is picked up by the jobs but is additionally loaded into an S3 bucket. The whole system is driven by metadata tables (objects, columns, schedule_xref, schema_contexts and target_instance_xref). To enable S3 ingestion, one extra row is inserted into these metadata tables with the target instance set to 225 (the S3 bucket) and the active indicator set to 'Y'. Our team's scope is to ingest data up to the S3 bucket; from there, a dedicated shoreline team ingests the data into Redshift.
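
As a rough illustration of the metadata-driven setup described above, the sketch below shows how one table could be enabled for S3 ingestion. Only the table name target_instance_xref, the target instance value 225 and the 'Y' indicator come from the description; the column names (object_name, target_instance, active_ind) and the exact structure are assumptions and may differ from the real metadata model.

-- Hypothetical sketch: activate the S3 target (instance 225) for one object.
INSERT INTO target_instance_xref (object_name, target_instance, active_ind)
VALUES ('SOURCE_SCHEMA.SOURCE_TABLE', 225, 'Y');

-- Verify the object is active for the S3 target before the next scheduled run.
SELECT object_name, target_instance, active_ind
FROM   target_instance_xref
WHERE  target_instance = 225
AND    active_ind = 'Y';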

Responsibilities

Implemented the Standard Data Ingestion (SDI) process, in which source data is ingested into various databases such as Oracle, Greenplum, Postgres and Redshift.

Worked on TAC to create artifact repositories and labels.

Promoted metadata changes in the Oracle database.

Designed and implemented the ETL process using Talend Enterprise Edition to load data from source to target databases.

Worked on the migration from Talend 7.2 to 7.3.

Worked on the Glue catalog in the AWS Management Console.

Made changes to existing jobs according to change requests.

Extensively used database connections, file components, tMap, etc.

Deployed and published the jobs using TAC.

Involved in testing the ETL process to ensure it meets the business requirements.

This is an enhancement to the existing data lake project.

Enhanced the jobs to create a new target that loads data into the S3 bucket.

Updated and enabled the metadata tables for the insertion of data into S3.

Imported the newly created SDI jobs (extraction and ingestion) into TAC in the QA environment.

Checked the logs to confirm the job succeeded and the data was loaded into GovCloud.

Uploaded the job logs to the Box folder so the Ops team could review them and promote the job to the Prod environment.

Monitored the first three runs after the job was moved to Prod.

Closed the user story in Rally after the successful runs.

Created jobs to copy files from the S3 bucket to the on-premises server and rename them.

Published the jobs to Nexus and pushed the code to Git.

Provided daily status updates to the client in scrum calls.

Held routine calls with the client to understand the functional requirements and implement them in Talend.

Analyzed the source tables and created the join conditions to bring all the reports into one sheet.

Created and deployed the jobs in Talend and scheduled them in TAC.

Guided the reporting team to present the reports per the user requirements.

Performed unit testing and supported the users during UAT.

Project

Client : Catalent Inc, USA

Platform : Talend, SQL

Role : Team Member.

Duration : Jan 2020 to Jun 2021

Team Size : 4

Project Description:

Catalent, Inc., located in Somerset, US, is a global provider of drug delivery technology and development solutions for drugs, biologics and consumer health products. The purpose of this project is to create a report-specific DataMart used to generate reports and dashboards on the Forecast Excellence data. This project replaces the existing manual gathering and analysis of forecast data.

Responsibilities

Involved in business requirement gathering and analysis and in understanding the functional specifications.

Created metadata for Oracle, MySQL and flat files for implementing ETL jobs in Talend.

Worked with different sources such as Oracle, flat files and MySQL.

Extensively used vector wise load to load data from flat files into staging tables.

Extensively joined different sources using tMap components.

Used different methods/functions in tMap to implement transformation logic.

Used input and output filters in tMap to filter the data based on the requirements.

Created context variables to store metadata information for all the sources and targets.

Used the debugger to find the bottlenecks in the jobs.

Deployed and published the jobs using TAC.

Involved in testing the ETL process to ensure it meets the business requirements.

Project

Client : SunOpta, USA

Platform : Talend, SQL

Role : Developer

Duration : Feb 2018 to Dec 2019

Team Size : 5

Project Description:

SunOpta is a leading company in the sourcing, processing and production of organic, natural and fruit-based food and beverage products. It mainly focuses on organic, non-genetically modified and specialty foods and offers a wide array of plant-based, fruit and snack options.

Responsibilities

Analyzed user requirements and created and reviewed the ETL specifications.

Identified the bottlenecks in the sources, targets, mappings and sessions and resolved the problems.

Worked with different sources such as Oracle, flat files and MySQL.

Extensively used vector wise load to load data from flat files into staging tables.

Extensively used different methods/functions in tMap to implement transformation logic.

Created context variables to store metadata information for all the sources and targets.

Extensively used various performance tuning techniques to improve the session performance.

Prepared test cases based on the ETL specification document, use cases and low-level design document.

Project

Client : MTN, South Africa

Platform : Talend, Greenplum, SQL

Role : Team Member.

Duration : Sep 2016 to Jan 2018

Team Size : 4

Project Description:

HR3 integration involves extracting all relevant employee data from the source system (HR3 file), processing the data using business logic and loading it into the Technician Equipment object (target system).

FTP Server: ftp.corporate.ge.com (drop box server)

The connection between the source (HR3) server and the target (Informatica) server is established through FTP, by placing the Informatica keys on our source server.

This server contains all the source (HR3) files in its root directory.

The file is pulled from the FTP server to the local (integration) server using the shell script HR3_SMAX.sh.

Responsibilities

Involved in business requirement gathering and analysis and in understanding the functional specifications.

Created metadata for Oracle, MySQL and flat files for implementing ETL jobs in Talend.

Worked with different sources such as Oracle, flat files and MySQL.

Extensively used vector wise load to load data from flat files into staging tables.

Extensively joined different sources using tMap components.

Used different methods/functions in tMap to implement transformation logic.

Used input and output filters in tMap to filter the data based on the requirements.

Created context variables to store metadata information for all the sources and targets.

Used the debugger to find the bottlenecks in the jobs.

Deployed and published the jobs using TAC.

Involved in testing the ETL process to ensure it meets the business requirements.

Attributes:

Self-starting, adaptable and quick to grasp new concepts; a team player with excellent interaction skills.

Analytical, innovative and achievement-oriented professional.

Excellent time management skills, with the ability to work under pressure and meet deadlines.

Self-motivated, with the ability to work in an intense environment.

Good at motivating the team and always encouraging teammates to do better.

Declaration:

I hereby declare that the facts stated above are true and correct to the best of my knowledge and belief.

Date:

Place: (Aruna Surakka)


