
Data Engineer - SQL Server

Location:
Hyderabad, Telangana, India
Salary:
55000
Posted:
November 20, 2023


Resume:

Sowmya

Data Engineer

Email: ad1bir@r.postjobfree.com

Mobile: +1-778-***-****

SUMMARY

Over 3 years of experience as a Data Engineer, with expertise in Data Analytics, Data Architecture, Design, Development, Implementation, Testing, and Deployment of Software Applications.

Experience in Infrastructure Development and Operations involving AWS Cloud Services: EC2, EBS, VPC, RDS, SES, ELB, and Auto Scaling.

Extensive experience as a SQL Server Developer in utilizing triggers, cursors, functions, and stored procedures.

Expertise in Database development for OLTP (Batch Processing, Online Processing), OLAP, ETL, Data warehousing, Data mining, DBMS and Data Modeling.

Experience in creating fact and dimension models in MS SQL Server and Snowflake databases utilizing the cloud-based Matillion ETL tool.

Experience in developing dashboards and visualizations to help business users analyze data and to provide data insights to upper management, with a focus on Microsoft tools such as SQL Server Reporting Services (SSRS) and Power BI.

Experienced with JSON-based RESTful web services and XML-based SOAP web services; worked on various applications using Python IDEs and editors such as Sublime Text and PyCharm.

Experience in building data pipelines using Azure Data Factory and Azure Databricks, loading data into Azure Data Lake, Azure SQL Database, and Azure SQL Data Warehouse, and controlling and granting database access.

Highly experienced in interpreting ETL data mapping documents, transformation logic, reusable logic, etc.

TECHNICAL SKILLS

Operating Systems: Windows, Linux, Ubuntu

IDEs & Version Control: Eclipse, Microsoft Visual Studio, IntelliJ, Git

Languages: Python, Java

Relational Databases: SQL, PSQL, SQL Server, MySQL

Non-Relational Databases: Cassandra, MongoDB, DynamoDB, HBase

Containers: Docker, Kubernetes

Project Management: Agile, Waterfall

Software: Microsoft Office, Adobe Suite

Data Visualization Tools: Microsoft Power BI, Tableau, SSAS, SSRS, JMP Pro

PROFESSIONAL EXPERIENCE

ScalePad, Vancouver, BC | AWS Data Engineer | Apr 2023 – Present

Responsibilities:

Installed and configured a multi-node cluster on the cloud using Amazon Web Services (AWS) EC2.

Handled AWS management tools such as CloudWatch and CloudTrail.

Stored log files in AWS S3 and enabled versioning on S3 buckets where highly sensitive information is stored.
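
A minimal sketch of how bucket versioning can be enabled with boto3 (the bucket name is a placeholder, not taken from the resume):

    import boto3

    s3 = boto3.client("s3")

    # Turn on versioning so every overwrite or delete keeps a recoverable version.
    s3.put_bucket_versioning(
        Bucket="sensitive-log-archive",  # hypothetical bucket name
        VersioningConfiguration={"Status": "Enabled"},
    )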

Integrated AWS DynamoDB with AWS Lambda to store item values and back up DynamoDB streams.
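
A hedged sketch of a Lambda handler that copies DynamoDB stream records to S3 as a backup (bucket name and key layout are assumptions for illustration):

    import json
    import boto3

    s3 = boto3.client("s3")
    BACKUP_BUCKET = "dynamodb-stream-backup"  # hypothetical bucket

    def lambda_handler(event, context):
        # Each stream record carries the new image of the changed item.
        records = event.get("Records", [])
        for record in records:
            new_image = record["dynamodb"].get("NewImage", {})
            key = f"backups/{record['eventID']}.json"
            s3.put_object(Bucket=BACKUP_BUCKET, Key=key, Body=json.dumps(new_image))
        return {"processed": len(records)}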

Automated regular AWS tasks, such as snapshot creation, using Python scripts.
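
A minimal sketch of such snapshot automation with boto3 (the region and the tag filter are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")  # region is an assumption

    # Snapshot every EBS volume tagged for daily backup.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["daily"]}]
    )
    for volume in volumes["Volumes"]:
        ec2.create_snapshot(
            VolumeId=volume["VolumeId"],
            Description=f"Automated daily snapshot of {volume['VolumeId']}",
        )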

Designed data warehouses on platforms such as AWS Redshift, Azure SQL Data Warehouse, and other high-performance platforms.

Implemented machine learning algorithms in Python to predict the quantity a user might want to order for a specific item, enabling automatic suggestions, using Kinesis Data Firehose and an S3 data lake.

Used AWS EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.

Used the Spark SQL Scala and Python interfaces, which automatically convert RDDs of case classes to schema RDDs (DataFrames).
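
In PySpark, the counterpart of the Scala case-class conversion is schema inference from Row objects; a small sketch (the sample data is made up):

    from pyspark.sql import SparkSession, Row

    spark = SparkSession.builder.appName("schema-inference").getOrCreate()

    # Spark SQL infers the schema from the Row fields, the Python analogue
    # of converting an RDD of case classes in Scala.
    orders = spark.sparkContext.parallelize([
        Row(order_id=1, item="widget", qty=3),
        Row(order_id=2, item="gadget", qty=5),
    ])
    df = spark.createDataFrame(orders)
    df.createOrReplaceTempView("orders")
    spark.sql("SELECT item, SUM(qty) AS total_qty FROM orders GROUP BY item").show()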

Installed and configured Apache Airflow for AWS S3 buckets and created DAGs to run Airflow workflows.
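
A minimal sketch of such a DAG using the Amazon provider's S3KeySensor (the DAG id, bucket, key, and connection id are assumptions):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

    def process_file(**_):
        print("processing the newly arrived S3 object")

    with DAG(
        dag_id="s3_ingest",               # hypothetical DAG id
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        wait_for_file = S3KeySensor(
            task_id="wait_for_file",
            bucket_name="landing-bucket",    # hypothetical bucket
            bucket_key="incoming/data.csv",  # hypothetical key
            aws_conn_id="aws_default",
        )
        process = PythonOperator(task_id="process_file", python_callable=process_file)
        wait_for_file >> process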

Prepared scripts to automate the ingestion process as needed using PySpark and Scala, pulling from various sources such as APIs, AWS S3, Teradata, and Redshift.

Worked on AWS Data Pipeline to configure data loads from S3 into Redshift.

Using AWS Redshift, extracted, transformed, and loaded data from various heterogeneous data sources and destinations.

Worked with DirectQuery in Power BI to compare legacy data with current data, and generated reports and dashboards.

Deployed Lambda functions and other dependencies into AWS to automate EMR spin-up for data lake jobs.

Celigo, Hyderabad, India | Data Engineer | Nov 2020 – Feb 2023

Responsibilities:

Created numerous pipelines in Azure using Azure Data Factory v2 to get data from disparate source systems, using activities such as Move & Transform, Copy, Filter, ForEach, and Databricks.

Maintained and provided support for optimal pipelines, data flows, and complex data transformations and manipulations using ADF and PySpark with Databricks.

Created various dashboards in reporting tools such as SAS, Tableau, Power BI, BO, and QlikView, using various filters and sets while dealing with huge volumes of data.

Performed ongoing monitoring, automation, and refinement of data engineering solutions.

Developed ETL pipelines in and out of the data warehouse using a combination of Python and Snowflake's SnowSQL, writing SQL queries against Snowflake.
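
A hedged sketch of the Python side of such a pipeline using the Snowflake Python connector (account, warehouse, stage, and table names are placeholders; credentials would come from a secrets store in practice):

    import snowflake.connector

    conn = snowflake.connector.connect(
        account="xy12345",    # hypothetical account identifier
        user="ETL_USER",
        password="***",       # placeholder; use a secrets manager in practice
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    cur = conn.cursor()
    try:
        # Load staged files into a target table, then validate the row count.
        cur.execute("COPY INTO STAGING.ORDERS FROM @ORDERS_STAGE FILE_FORMAT = (TYPE = CSV)")
        cur.execute("SELECT COUNT(*) FROM STAGING.ORDERS")
        print(cur.fetchone())
    finally:
        cur.close()
        conn.close()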

Scheduled and automated business processes and workflows using Azure Logic Apps.

Designed and developed a new solution to process near-real-time (NRT) data using Azure Stream Analytics, Azure Event Hubs, and Service Bus queues.

Analyzed data from multiple sources and created reports with interactive dashboards using Power BI.

Automated jobs in ADF using different triggers, such as event, schedule, and tumbling window triggers.

Performed data flow transformation using the data flow activity.

Implemented Azure-hosted and self-hosted integration runtimes in ADF.


