
SQL Developer Data

Location: San Antonio, TX
Salary: 55000
Posted: January 25, 2023


Resume:

RATHA SHANMUGAM

San Antonio, Texas • aduxpe@r.postjobfree.com • 210-***-****

https://www.linkedin.com/in/ratha-shanmugam-234086188

EDUCATION

Alagappa University - Karaikudi, TN, India. Aug 2019 - May 2022
Master of Computer Applications (MCA)

Bharathiar University - Coimbatore, TN, India. Aug 2012 - May 2015
Bachelor’s in Mathematics

SKILLS

● Big data technologies: Hadoop, MapReduce, Kinesis, Airflow, Snowflake, Oracle, RDBMS, ETL and data visualization, Kafka.

● Database/Data Warehouse: Oracle, SQL Server, Snowflake.

● Programming languages: Python, C/C++, and Java

● Scripting Languages: Linux shell scripting

● CI/CD Tools: DBT, GitHub and Jira

● Cloud platforms: GCP, AWS, and Microsoft Azure.

CERTIFICATION AND COURSES

• Oracle Database SQL Certified Associate (1Z0-071)

• DBT Fundamentals

• Academy Accreditation - Databricks Lakehouse Fundamentals

• Excel Essential Training (Office 365/Microsoft 365)

EXPERIENCE

SHIPT Inc. Feb 2022 - Nov 2022

Jr. Data Engineer

• Shipt is a leading American delivery service company owned by Target Corporation and headquartered in Birmingham, Alabama. Shipt delivers groceries, home products, electronics, and more.

• SHIPT maintains an Enterprise Data Warehouse platform for data governance and consolidation of large volumes of both structured and unstructured data from across the SHIPT enterprise and external sources. The data platform team operates a centralized hub with a federated mart (hub-and-spoke) architecture that maintains organization-wide data, which the data science and ML teams use for analytics and forecasting.

PROJECTS

Shopper Management System Feb 2022 - Nov 2022

The project’s main objective is to ingest Shopper, Store, and Medallia data (reviews and comments) into AWS/GCP environments. Front-end applications (mobile app/online ordering) continuously send messages to Kafka, and Kafka connectors stream the shopper data into the cloud environment. The data engineering team creates and manages the pipelines that stream this data into the enterprise warehouse (normalized/aggregated). An ETL/ELT process loads the data through staging, stream, normalized, and target environments in Snowflake, and the process is automated with the Airflow scheduler (illustrative sketches of this flow follow the responsibilities below).

ROLE AND RESPONSIBILITIES

• Gathered requirements and developed a deep understanding of the existing business model and customer needs.

• Created data pipelines to ingest data into the cloud environment and scheduled them with Airflow; prepared unit, integration, and functional test cases (see the Airflow sketch after this list).

• Designed, built, tested, deployed, and maintained highly scalable data pipelines, and tuned existing pipelines.

• Recognized and adopted best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation.

• Collaborated with business leaders, data scientists, and product managers to understand data needs.

• Worked with large JSON-format data files in Snowflake.

• Good understanding of storing data in S3 buckets and performing reads, transformations, and actions on S3 data using Snowflake and SnowSQL (see the Snowflake/S3 sketch after this list).

• Performed advanced ad hoc data analysis for report amendments using SQL scripts.

• Streamed data in real time using message-based topics between producers and consumers and enabled Kafka in our data lakes (see the Kafka sketch below).

Environment: AWS EC2, S3, Kinesis, Airflow, shell scripting, GitHub, Kafka, Snowflake, SQL.
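The responsibilities above mention scheduling the staging-to-normalized Snowflake load with Airflow. Below is a minimal sketch of such a DAG; the DAG id, table and stage names, schedule, and connection id are illustrative assumptions, not the actual SHIPT pipeline.

```python
# Illustrative Airflow DAG: load raw JSON from an S3 stage into a Snowflake staging
# table (assumed to have a single VARIANT column "raw"), then normalize into a target
# table. All object names and the schedule are assumptions for this sketch.
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="shopper_ingest",          # hypothetical DAG name
    start_date=datetime(2022, 2, 1),
    schedule_interval="@hourly",      # assumed cadence
    catchup=False,
) as dag:
    load_staging = SnowflakeOperator(
        task_id="load_staging",
        snowflake_conn_id="snowflake_default",
        sql="""
            COPY INTO staging.shopper_events
            FROM @s3_shopper_stage
            FILE_FORMAT = (TYPE = JSON);
        """,
    )

    normalize = SnowflakeOperator(
        task_id="normalize",
        snowflake_conn_id="snowflake_default",
        sql="""
            INSERT INTO norm.shopper_events (shopper_id, store_id, event_ts)
            SELECT raw:shopper_id::STRING,
                   raw:store_id::STRING,
                   raw:event_ts::TIMESTAMP_NTZ
            FROM staging.shopper_events;
        """,
    )

    load_staging >> normalize  # staging load runs before normalization
```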
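Along the lines of the S3/SnowSQL bullet above, here is a sketch of querying JSON files staged in S3 through Snowflake from Python; the account, credentials, stage URL, and field names are placeholders.

```python
# Illustrative use of the Snowflake Python connector to read JSON data staged in S3.
# Connection parameters, stage, bucket, and JSON fields are placeholders for this sketch.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder account identifier
    user="my_user",
    password="...",            # in practice, use a secrets manager
    warehouse="ANALYTICS_WH",
    database="SHOPPER_DB",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # External stage pointing at the S3 bucket that receives shopper files (assumed).
    cur.execute("""
        CREATE STAGE IF NOT EXISTS s3_shopper_stage
        URL = 's3://example-shopper-bucket/events/'
        FILE_FORMAT = (TYPE = JSON);
    """)
    # Ad hoc read directly from the staged files, flattening a nested reviews array.
    cur.execute("""
        SELECT $1:shopper_id::STRING AS shopper_id,
               f.value:rating::NUMBER AS rating
        FROM @s3_shopper_stage,
             LATERAL FLATTEN(input => $1:reviews) f
        LIMIT 10;
    """)
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```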
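The Kafka bullet above describes producer/consumer streaming over message-based topics; a minimal sketch using the kafka-python client follows, with a hypothetical topic name and broker address.

```python
# Illustrative Kafka producer/consumer pair using the kafka-python client.
# Broker address, topic name, and the sample event are placeholders for this sketch.
import json

from kafka import KafkaConsumer, KafkaProducer

TOPIC = "shopper-events"        # hypothetical topic
BROKERS = ["localhost:9092"]    # placeholder broker list

# Producer side: a front-end application publishing shopper events as JSON.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"shopper_id": "123", "store_id": "42", "event": "order_placed"})
producer.flush()

# Consumer side: the pipeline reading events before landing them in the data lake.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    consumer_timeout_ms=5000,   # stop iterating if no new messages arrive
)
for message in consumer:
    print(message.value)
```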


