
Azure Data Sql Server

Location:
Commerce, TX
Salary:
115000
Posted:
June 24, 2024

Contact this candidate

Resume:

Poojitha Chennuru

Email: ******************@*****.***

LinkedIn: https://www.linkedin.com/in/poojitha-chennuru-9b1b68299/
Phone: +1-430-***-****

Profile Info:

Focused professional with around 7 years of IT experience, including 5 years of technical proficiency in Data Engineering, involving business requirements analysis, application design, development, testing, and documentation across various domains

Experience in Azure Cloud, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure Analysis Services, Azure Cosmos DB (NoSQL), big data technologies (Hadoop and Apache Spark), and Databricks

Expert in designing and developing complex mappings to extract data from diverse sources including flat files, RDBMS tables, legacy system files, applications, and SQL Server

Skilled at analyzing information system needs, evaluating end-user requirements, custom designing solutions & troubleshooting for complex information systems management

Technical and functional experience in data warehouse implementations and ETL methodology using the SSIS and Informatica PowerCenter ETL tools

Strong experience in data cleansing, data profiling, and data analysis. Involved in building data warehouses using SQL Server, Netezza, and Oracle 10g/9i/8i in domains such as Finance and Retail

Excellent knowledge of integrating Azure Data Factory V2/V1 with a variety of data sources and processing the data using pipelines, pipeline parameters, activities, activity parameters, and manual, window-based, and event-based job scheduling

Experience in creating database objects such as Tables, Constraints, Indexes, Views, Indexed Views, Stored Procedures, UDFs and Triggers on Microsoft SQL Server.

Strong experience in writing and tuning complex SQL queries including joins, correlated subqueries, and scalar subqueries

Expertise in using cloud-based managed services for data warehousing/analytics in Microsoft Azure (Azure Data Lake Analytics, Azure Data Lake Storage, Azure Data Factory, Azure Table Storage, SQL, Stream Analytics, HDInsight, Databricks, etc.)

Hands-on experience in developing pipelines, activities, data sets, linked services for Azure Data Factory

Involved in data migration projects from Teradata and Oracle to SQL Server; created automated migration scripts using Unix shell scripting and Oracle/Teradata SQL.

Strong knowledge in Database administration and Data Warehousing concepts

Experience in the design and implementation of cloud architecture on Microsoft Azure.

Well versed in creating pipelines in Azure Cloud (ADF v2) using different activities such as Move & Transform, Copy, Filter, ForEach, and Databricks.

Hands-on experience in developing Logic App workflows for event-based data movement: performing file operations on Data Lake, Blob Storage, and SFTP/FTP servers, and getting/manipulating data in Azure SQL Server.

EDUCATION:

Bachelor of Technology from Siddharth Institute of Engineering and Technology, India

Master’s from Texas A&M University Commerce, USA

Technical Skills:

Azure: Azure Data Factory v2, Azure Data Lake Storage Gen2, Azure Databricks, Blob Storage, Azure SQL DB, Azure Synapse, Storage Explorer
ETL/Data Warehouse: SSIS, Informatica, Teradata SQL, Teradata Tools & Utilities, SQL Server

Languages: Python, SQL, Java, Scala, Linux shell scripting, Azure PowerShell
Databases: Azure Synapse Analytics, SQL Server, Teradata, Oracle
Operating Systems: Windows, UNIX, Linux

Tools: Power BI, MS Excel, SharePoint

Professional Experience

TechPort IT Solutions LLC, USA Feb 2024 – Present

Role: Data Engineer

Responsibilities:

Created linked services, datasets, and self-hosted integration runtimes for on-prem servers, and maintained three environments: Dev, UAT, and Prod.

Involved in business requirement gathering, business analysis, design and development, testing, and implementation of business rules.

Actively working with the architecture and infrastructure platform teams to implement cloud solutions to handle data marts/pods.

Involved in a migration project moving from Informatica to Azure cloud technologies.

Implementing Azure Databricks notebook scripts to convert Informatica transformation logic.

Implementing Azure Data Factory (ADF) ELT pipelines to copy data from source systems to Azure Data Lake.

Implemented validation scripts to compare various source datasets as part of the migration.
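
A migration validation of this kind can be sketched as a simple keyed comparison in Python. This is illustrative only; the column and key names are hypothetical, not the actual project schema:

```python
def compare_datasets(source_rows, target_rows, key):
    """Compare two datasets (lists of dicts) on a key column.

    Returns keys missing from the target and keys whose non-key
    values differ, mimicking a post-migration validation check.
    """
    target_by_key = {row[key]: row for row in target_rows}
    missing, mismatched = [], []
    for row in source_rows:
        match = target_by_key.get(row[key])
        if match is None:
            missing.append(row[key])  # row never arrived in the target
        else:
            src_vals = {k: v for k, v in row.items() if k != key}
            tgt_vals = {k: v for k, v in match.items() if k != key}
            if src_vals != tgt_vals:
                mismatched.append(row[key])  # row arrived but values drifted
    return missing, mismatched


# Hypothetical sample data: "id" and "amt" are made-up column names.
source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}, {"id": 3, "amt": 30}]
target = [{"id": 1, "amt": 10}, {"id": 2, "amt": 25}]
missing, mismatched = compare_datasets(source, target, key="id")
# missing -> [3], mismatched -> [2]
```

In practice such checks typically start with row counts and aggregates before falling back to row-level comparison for flagged tables.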

Working with complex SQL queries and stored procedure implementations in Azure SQL Data Warehouse.

Developing PySpark scripts in Azure Databricks to perform data cleansing, data curation, and transformations for end-user consumption.
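
In Databricks this kind of cleansing runs as PySpark DataFrame transformations; the equivalent row-level logic can be sketched in plain Python (the `customer_id` column name is a made-up example, not the actual schema):

```python
def cleanse(rows):
    """Illustrative cleansing pass: trim string values, drop rows
    missing the required key, and de-duplicate on that key (keep first)."""
    seen, out = set(), []
    for row in rows:
        # Trim whitespace on every string field.
        row = {k: (v.strip() if isinstance(v, str) else v)
               for k, v in row.items()}
        cid = row.get("customer_id")
        if not cid or cid in seen:
            continue  # drop null keys and duplicates
        seen.add(cid)
        out.append(row)
    return out


raw = [
    {"customer_id": " c1 ", "name": " Ann "},
    {"customer_id": None, "name": "Bob"},   # missing key: dropped
    {"customer_id": "c1", "name": "Ann"},   # duplicate after trimming: dropped
]
clean = cleanse(raw)
# clean -> [{"customer_id": "c1", "name": "Ann"}]
```

With PySpark the same steps map onto `trim`, `dropna`/`filter`, and `dropDuplicates` on the DataFrame rather than explicit loops.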

Providing production support to ensure data availability, monitor schedules, and handle ad-hoc job refreshes.

Implemented Azure Logic Apps workflows to copy files from Mailbox, SharePoint, Azure Blob Storage to ADLS Gen2.

Implemented CI/CD pipelines for production deployment of ADF pipelines and Databricks scripts.

Created a linked service to land data from the Caesars SFTP location into Azure Data Lake.

For moving or getting data from different sources, used Azure Logic Apps, Azure Data Factory, and PowerShell.

Environment:

Azure Data Factory, Azure Databricks, ADLS Gen2, Python, PySpark, MS-Azure, Azure SQL Database, BLOB Storage, SQL server, Informatica.

Texas A&M University Commerce, USA Oct 2022 - Dec 2023

Job Title: Graduate Assistant

Responsibilities:

Designed database tables, views, indexes and developed various financial reports on sales and tax using SQL queries and stored procedures.

Developed ETL packages that extract data from various data sources, transform and load to destination used for reporting purposes.

Translated the mapping and functional design requirements into a technical design document containing the lower-level design details of the ETL process.

Hands-on experience with various transformations such as Lookup, Conditional Split, Multicast, Derived Column, Merge, Foreach Loop, For Loop, Union All, Sort, Data Profiling, and Bulk Insert.

Tested the designs to ensure the system runs smoothly.

Practical experience creating dimensional and relational data models using concepts such as star-schema modeling, snowflake-schema modeling, and fact and dimension tables.

Created different reports using Power BI desktop and the online service with dataset refresh.

Performed additional related tasks as assigned.

Fujitsu Consulting India Pvt Ltd, India Mar 2018 - Jul 2021

Job Title: Azure Data Engineer

Responsibilities:

Creating pipelines, data flows and data transformations and manipulations using Azure Data Factory (ADF) and PySpark with Databricks.

Developed Python scripts for ETL load jobs using pandas functions.

Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.

Created tables and views in Teradata according to requirements.

Created appropriate Primary Indexes, taking into consideration both planned data access and even data distribution across all available AMPs.

Performed data loads from multiple data sources into the Teradata RDBMS using BTEQ, MultiLoad, and FastLoad.

Used various transformations such as Source Qualifier, Aggregator, Lookup, Filter, Sequence Generator, Router, Update Strategy, Expression, Sorter, Normalizer, Stored Procedure, and Union.

Used Informatica PowerExchange to handle change data capture (CDC) data from the source and load it into the data mart following the slowly changing dimensions (SCD) Type II process.
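
The SCD Type II step above can be sketched as expire-and-append logic in Python. This is a minimal illustration of the technique, not the actual ETL code; all field names (`attrs`, `start_date`, `end_date`, `is_current`) are hypothetical:

```python
def scd2_upsert(dimension, changes, key, today):
    """Apply SCD Type II: when a key's attributes change, expire the
    current dimension row and append a new current-version row."""
    by_key = {r[key]: r for r in dimension if r["is_current"]}
    for change in changes:
        current = by_key.get(change[key])
        if current is not None:
            if current["attrs"] == change["attrs"]:
                continue                   # no change: keep existing version
            current["is_current"] = False  # expire the old version
            current["end_date"] = today
        dimension.append({key: change[key],
                          "attrs": dict(change["attrs"]),
                          "start_date": today,
                          "end_date": None,
                          "is_current": True})
    return dimension


# Hypothetical customer dimension with one current row.
dim = [{"cust": "C1", "attrs": {"city": "Dallas"},
        "start_date": "2020-01-01", "end_date": None, "is_current": True}]
scd2_upsert(dim, [{"cust": "C1", "attrs": {"city": "Austin"}}],
            key="cust", today="2021-06-01")
# dim now holds two rows: the Dallas row expired on 2021-06-01,
# and a new current Austin row starting 2021-06-01.
```

Preserving the expired row is what distinguishes Type II from Type I, which would simply overwrite the attribute in place.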

Designed, created and tuned physical database objects (tables, views, indexes, PPI, UPI, NUPI, and USI) to support normalized and dimensional models.

Created a cleanup process for removing all the Intermediate temp files that were used prior to the loading process.

Involved in business meetings to gather requirements, and in business analysis, design, review, development, and testing.

Performed tuning and optimization of complex SQL queries using Teradata Explain.

Responsible for running COLLECT STATISTICS on FACT tables.

Wrote numerous BTEQ scripts to run complex queries on the Teradata database.

Created temporal tables and columnar tables utilizing the advanced features of Teradata V14.0.

Provided architecture/development for initial load programs to migrate production databases from Oracle data marts to Teradata data warehouse, as well as ETL framework to supply continuous engineering and manufacturing updates to the data warehouse (Oracle, Teradata, MQ Series, ODBC, HTTP, and HTML).

Scheduled the ADF pipelines using triggers and monitored their performance and execution in monitoring logs and dashboards.

Loaded the transformed data from Databricks into the Azure Synapse database using a JDBC connection.

Deployed some of the SSIS packages to the cloud from on-premises via “lift and shift” after minimal configuration changes.

Leveraged Azure Cloud and Azure Pipelines along with Source Control for Continuous Integration Builds and Continuous Deployment (CI/CD) to Targets.

Performed the ongoing delivery, migrating client mini-data warehouses or functional data-marts from Oracle environment to Teradata.

Environment:

Azure Data Factory (ADF v2), Spark (Python/Scala), Hive, Kafka, Spark Streaming, MS Azure, Azure SQL Database, Azure Function Apps, Azure Data Lake, Blob Storage, SQL Server, UNIX shell scripting, Azure PowerShell, Databricks, ADLS Gen2, Azure Cosmos DB, Azure Event Hubs, Teradata, Informatica, DataStage, Teradata SQL Assistant, BTEQ

Tiedot Solutions, India Jun 2016 – Feb 2018

Job Title: SQL SERVER DBA

Responsibilities:

Installed and configured SQL Server 2014/2016 and Windows Server 2016; created and updated databases, tables, stored procedures, triggers, functions, and views.

Configured, administered, and troubleshot multiple databases in production and staging, in clustered and standalone environments.

Involved in installing failover clustering on multiple nodes, implemented database mirroring and log shipping.

Monitored and troubleshot database and server performance at a system-wide scale, effectively identifying network, database, memory, CPU, disk space, and I/O related bottlenecks.

Experience in installation, upgrade, and configuration of MS SQL Server and databases in clustered (Active/Passive and Active/Active) and non-clustered environments.

Hands-on experience with SSIS: built ETL processes from different data sources such as Oracle, DB2i, and DB2-LUW to SQL Server/DB2-LUW.

Monitored database system details, including stored procedures and execution times, and implemented efficiency improvements.

Provided daily support, troubleshooting, monitoring, optimization and tuning of server and SQL server environments across entire system.

Provided resolutions to assigned tickets per their severity, meeting deadlines within SLA.

Environment:

Service-Level Agreements (SLA), Data Management, Data Analysis, Database Systems, Online Transaction Processing (OLTP)


