
Azure Data Sql Server

Location:
Wichita, KS
Salary:
90000
Posted:
August 28, 2024

Resume:

Sai Krishna N

*************@*****.***

316-***-****

Data Engineer

PROFESSIONAL SUMMARY:

4+ years of experience designing and developing databases, data warehouses, and Business Intelligence solutions using Microsoft technologies such as Power BI, Excel, SSIS, and SQL Server.

Developed SSIS packages and custom Azure Data Factory pipeline activities for on-cloud ETL processing.

Hands-on experience creating pipelines in Azure Data Factory V2 using activities such as Move & Transform, Copy, Filter, ForEach, Get Metadata, Lookup, and Databricks.

Excellent knowledge of integrating Azure Data Factory V1/V2 with a variety of data sources and processing the data using pipelines, pipeline parameters, activities, activity parameters, and manual, window-based, and event-based job scheduling.

Experienced in creating pipelines, linked services, blob storage containers, mapping data flows, activities (data movement, data transformation, and control activities), datasets, triggers, parameters, and variables.

Designed and maintained dimensional data models with star schemas containing multiple fact tables and slowly changing dimension (SCD) tables.

Used SSIS to create ETL packages to validate, extract, transform, and load data into data warehouses and data marts.

Worked on Data Warehouse design, implementation, and support (SQL Server, Azure SQL DB, Azure SQL Data warehouse, Teradata).

Extensive knowledge and hands-on experience implementing cloud data lakes such as Azure Data Lake Storage Gen1 and Gen2.

Providing Azure technical expertise including strategic design and architectural mentorship, assessments, POCs, etc., in support of the overall sales lifecycle or consulting engagement process.

Extracted and transformed SQL Server source data into Azure Data Lake using Azure Data Factory.

Good experience with Power Query, Power Pivot, and Power View for creating effective reports and dashboards in Power BI.

Demonstrated proficiency in Python, Spark, and SQL, with experience using data science libraries for machine learning.

Strong development skills with Azure Data Lake, Azure Data Factory, SQL Data Warehouse, Azure Blob Storage, and Azure Storage Explorer.

Strong experience in creating dashboards using Power BI visualizations.

Experience working with custom visualizations such as Chiclet Slicer and Pulse Chart.

Expertise in working with the Azure cloud platform and AWS services, including AWS Glue and EMR for data extraction and transformation.

Proficient in creating content packs and roles to implement Row-Level Security (RLS) for reports.

Experience enabling various types of filters for reports and pinning visuals to dashboards.

Implemented Istio Service Mesh to enhance microservices communication, improving observability, security, and reliability within distributed systems.

Hands-on experience importing data from Excel into Power BI Desktop using the Get Data option.

Experience using DAX expressions, including logical, aggregate, time intelligence, and date functions, to create business calculations based on requirements.

Experience creating various SSIS packages, troubleshooting errors, and creating alerts using Send Mail tasks.

Experienced in SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), Data Transformation Services (DTS) and SQL Server Analysis Services (SSAS).

Expertise in generating reports using SQL Server Reporting Services, Crystal Reports, MS Excel spreadsheets, and Power Pivot.

Proficient in leveraging Kubernetes to orchestrate and manage containerized applications, ensuring scalability, reliability, and efficiency in data processing pipelines.

Skilled in designing and deploying Kubernetes clusters to support data-intensive applications, optimizing resource utilization and enhancing overall system performance.

Experience designing reports using SQL Server Reporting Services, including drill-down, parameterized, linked, and sub-reports, dynamic matrix reports, filters, and charts in SSRS.

Experience creating visual interactions using Edit Interactions so that selected visuals affect data as required.

Good skills in troubleshooting the On-Premises Data Gateway to refresh dashboards and reports.

Experienced in leveraging Data Warehouse technologies to enable data-driven decision-making, delivering insights that drive business growth and operational efficiency.

Hands-on experience with dashboard design in Power BI Desktop, including report creation using visualizations in Power BI.

Extensively used DAX Functions like SUM, AVERAGE, MIN, MAX, CALCULATE and FILTER in almost all reports.

Education:

Master's in Computer Science, Wichita State University, KS

Technical Skills:

Reporting Tools : Power BI, Power View, Power BI Desktop, Power Pivot, SQL, Tableau, SSRS.

Databases : SQL Server, Azure SQL Database, T-SQL, MySQL, Oracle, Redshift, Snowflake

Programming Languages : SQL, T-SQL, DAX, Python, PL/SQL, Spark

Integration Tools : SSIS

Cloud Products (Azure) : Azure Data Factory (ADF) V2, Azure SQL, Azure Data Lake.

Operating Systems : Windows, LINUX, Unix

Messaging tools : Kafka, EMQ, RabbitMQ

AWS : Lambda, DynamoDB, S3, EC2, Redshift, VPC, Step functions, Glue, SNS, CloudWatch

Big Data Technologies : Hadoop, HDFS, Hive, Pig, Spark, Machine Learning, Pandas, NumPy, Informatica, Snowflake, SnowSQL, Databricks, Kafka, Cloudera

Certifications:

• Certificate in Computational Data Science, Wichita State University.

• AZ-900: Microsoft Azure Fundamentals. Credential ID: 5425CE01DAAA2F6

• Python for Data Science and AI, IBM. Credential ID: H27DKREYYTC4

• Machine Learning, Stanford online. Credential ID: PM59UPRU8PCJ

• Python for Data Analysis: Solve Real-World Challenges.

• DP-203: Microsoft Azure Data Engineer Associate. Credential ID: D0F870721E430888

• Microsoft Certified: Azure AI Fundamentals (AI-900). Credential ID: 5D2C34B5C502ADDB

PROFESSIONAL EXPERIENCE:

Client: Virgin Pulse, Providence, RI May 2022 – Present

Role: Data Engineer - SSIS, Azure SQL, Azure Data Lake, ADF, Power BI

Responsibilities:

Explored load performance across ADLS and ADF pipelines alongside native SSIS ETL packages, and assessed the migration of objects from on-premises systems to Azure Data Lake.

Applied strong development skills with Azure Data Lake, Azure Data Factory, SQL Data Warehouse, Azure Blob Storage, and Azure Storage Explorer.

Created SSIS packages to perform filtering operations and to import data on a daily basis from legacy data sources into SQL Server 2016.

Created new stored procedures to handle complex business logic, and modified existing stored procedures, functions, views, and tables to support new project enhancements and resolve existing defects.

Worked on ingesting real-time data using Kafka.
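
For illustration, a minimal PySpark Structured Streaming sketch of this kind of Kafka ingestion; the broker address, topic name, and data lake paths below are hypothetical placeholders, not details from the project:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    # The Kafka source requires the spark-sql-kafka package on the classpath.
    spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

    # Read a stream from Kafka; bootstrap servers and topic are placeholders.
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "events")
           .option("startingOffsets", "latest")
           .load())

    # Kafka delivers key/value as binary; cast the payload to string for parsing downstream.
    events = raw.select(col("value").cast("string").alias("payload"))

    # Land the raw payloads in the data lake as Parquet; paths are illustrative only.
    query = (events.writeStream
             .format("parquet")
             .option("path", "/mnt/datalake/raw/events")
             .option("checkpointLocation", "/mnt/datalake/checkpoints/events")
             .start())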

Spearheaded the migration of legacy ETL processes to Python-based workflows, reducing system complexity and increasing maintainability.

Integrated Hadoop with Apache Hive for structured data querying and analysis, improving overall data accessibility.

Created complex SQL scripts to transform legacy data in the staging area using SSIS packages.

Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data from sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse.
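
As a rough sketch of how such a pipeline can be defined programmatically, the snippet below uses the azure-mgmt-datafactory Python SDK with placeholder resource names; the exact model constructors vary by SDK version, and the datasets and linked services are assumed to already exist in the factory:

    from azure.identity import ClientSecretCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        PipelineResource, CopyActivity, DatasetReference, BlobSource, AzureSqlSink)

    # All IDs and names below are placeholders.
    credential = ClientSecretCredential(
        tenant_id="<tenant-id>", client_id="<app-id>", client_secret="<secret>")
    adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

    # Copy activity from a Blob Storage dataset to an Azure SQL dataset; both datasets
    # (and their linked services) are assumed to be defined in the factory already.
    # Note: some SDK versions also require type="DatasetReference" on the references.
    copy_activity = CopyActivity(
        name="CopyBlobToAzureSql",
        inputs=[DatasetReference(reference_name="BlobInputDataset")],
        outputs=[DatasetReference(reference_name="AzureSqlOutputDataset")],
        source=BlobSource(),
        sink=AzureSqlSink())

    pipeline = PipelineResource(activities=[copy_activity])
    adf_client.pipelines.create_or_update(
        "<resource-group>", "<factory-name>", "CopyBlobToAzureSqlPipeline", pipeline)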

Used SSIS to create ETL packages to validate, extract, transform, and load data into the data warehouse.

Extracted data from different kinds of source systems into Power Query, then cleansed the data by applying business rules and loaded it into Power Pivot.

Collaborated with data scientists to deploy machine learning models in production, integrating Python scripts seamlessly into data processing workflows.

Conducted in-depth data profiling and analysis, uncovering valuable insights to support strategic decision-making.

Worked closely with business stakeholders to define data requirements and specifications for various projects.

Designed and implemented data visualization dashboards using tools like Tableau, enhancing business intelligence reporting.

Created and provisioned Databricks clusters, notebooks, and jobs, and configured autoscaling.
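
A minimal sketch of programmatic cluster provisioning with autoscaling via the Databricks Clusters REST API (api/2.0/clusters/create); the workspace URL, token, node type, and runtime version are placeholders and depend on the workspace:

    import requests

    # Workspace URL and token are placeholders; in practice these come from a secret store.
    WORKSPACE = "https://<workspace>.azuredatabricks.net"
    TOKEN = "<personal-access-token>"

    # Cluster definition with autoscaling between 2 and 8 workers.
    payload = {
        "cluster_name": "etl-autoscale",
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "autoscale": {"min_workers": 2, "max_workers": 8},
        "autotermination_minutes": 30,
    }

    resp = requests.post(
        f"{WORKSPACE}/api/2.0/clusters/create",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
        timeout=60,
    )
    resp.raise_for_status()
    print("Created cluster:", resp.json().get("cluster_id"))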

Used Azure Maps to calculate distance, latitude, and longitude.

Created reusable pipelines in Data Factory to extract, transform and load data into Azure SQL DB and SQL Data warehouse.

Designed a Power BI Data model with multiple Facts and Dimension Tables based on the requirement.

Designed and Implemented analytics solutions using Power BI Dashboard and Reports.

Collaborated with data architects to design and implement scalable Snowflake schemas, optimizing data modeling for analytical purposes.

Created effective reports, dashboards and published into Power BI service.

Led a team of data engineers in optimizing and scaling ETL processes for large datasets, enhancing overall system efficiency.

Collaborated with cross-functional teams to gather and analyze business requirements, translating them into scalable data solutions.

Utilized cloud platforms (AWS/GCP/Azure) to develop and deploy scalable, cost-effective data architectures.

Implemented robust data quality checks and monitoring procedures, ensuring data integrity and reliability.
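
A minimal PySpark sketch of the kind of data quality checks described here; the table name, key columns, and thresholds are hypothetical examples, not the project's actual rules:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq-checks").getOrCreate()

    # Table and column names are illustrative only.
    df = spark.table("staging.member_activity")
    required_cols = ["member_id", "activity_date"]

    # Check 1: the load should not be empty.
    row_count = df.count()
    assert row_count > 0, "Data quality failure: staging table is empty"

    # Check 2: required columns must not contain nulls.
    null_counts = df.select(
        [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in required_cols]
    ).first().asDict()
    bad = {c: n for c, n in null_counts.items() if n and n > 0}
    assert not bad, f"Data quality failure: null values found in {bad}"

    # Check 3: the business key must be unique.
    dupes = (df.groupBy("member_id", "activity_date")
               .count()
               .filter(F.col("count") > 1)
               .count())
    assert dupes == 0, f"Data quality failure: {dupes} duplicate keys"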

Implemented Hadoop-based data storage solutions, ensuring scalable and fault-tolerant data storage.

Environment: Azure Data Factory (ADF v2), Databricks, Python, PySpark, ADLS Gen2, Azure SQL Database, Power BI, Azure Function Apps, Azure Synapse Analytics, Blob Storage, SQL Server, Windows Remote Desktop, UNIX shell scripting, Azure PowerShell, Azure Cosmos DB, Azure Event Hub, Azure Machine Learning, Oracle, Spark SQL, AWS Step Functions, AWS CloudWatch, SNS/SQS, Oracle PL/SQL.

Client: JPMC (REMOTE) May 2019 – May 2021

Role: Data Engineer

Responsibilities:

•Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data from various sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse.

•Designed SSIS packages to transfer data from flat files to SQL Server using Business Intelligence Development Studio.

•Gathered all data sources required to build the business solution.

•Interacted with BAs and business users to understand business rules, and wrote SQL based on requirements.

•Involved in SQL tuning; designed and implemented database applications using MS SQL Server 2012 and Business Intelligence (BI) tools to support transactional and business decision-making.

•Evaluated database performance and performed maintenance duties such as tuning, backup and restoration.

•Migrated SQL Server 2008 databases to SQL Server 2014 and upgraded SSIS packages.

•Used Performance Monitor and SQL Profiler to monitor memory, processor usage, and SQL queries.

•Worked collaboratively with team members, shared project knowledge, and provided relevant solutions to customers on time.

•Implemented Azure Data Factory (ADF) extensively to ingest data from different source systems, both relational and unstructured, to meet business functional requirements.

•Designed and developed batch processing and real-time processing solutions using ADF, Databricks clusters, and Stream Analytics.

•Implemented Snowflake's features such as Snowpipe for real-time data loading, enhancing data freshness for analytics.
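
A minimal sketch of defining a Snowpipe for continuous loading, executed here through the snowflake-connector-python library; the connection parameters, stage, pipe, and table names are placeholders, and the external stage is assumed to already be configured against cloud storage:

    import snowflake.connector

    # Connection parameters are placeholders; in practice they come from a secret store.
    conn = snowflake.connector.connect(
        account="<account>", user="<user>", password="<password>",
        warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
    )
    cur = conn.cursor()

    # A pipe continuously COPYs new files from the stage into the target table;
    # AUTO_INGEST=TRUE relies on cloud-storage event notifications.
    cur.execute("""
        CREATE PIPE IF NOT EXISTS raw_events_pipe AUTO_INGEST = TRUE AS
        COPY INTO raw_events
        FROM @raw_events_stage
        FILE_FORMAT = (TYPE = 'JSON')
    """)
    cur.close()
    conn.close()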

•Wrote simple DAX calculations as per requirements to maintain business measures.

•Improved performance by reducing the compute time required to process streaming data through cluster runtime optimization.

•Developed Power BI reports using Power Query fed from SQL Server and other data sources.

•Ensured data accuracy, integrity, and reliability in both the back end and Power BI reports.

•Created Linked services to connect the external resources to ADF.

•Collaborated with BI teams to create interactive dashboards using Python-based visualization libraries like Matplotlib and Seaborn.
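
As a small illustrative sketch of that visualization work, a Matplotlib/Seaborn chart; the DataFrame here is dummy data, whereas in practice the frame would come from a SQL Server or warehouse query:

    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns

    # Illustrative data only.
    df = pd.DataFrame({
        "month": ["Jan", "Feb", "Mar", "Apr"],
        "transactions": [1250, 1410, 1330, 1580],
    })

    sns.set_theme(style="whitegrid")
    fig, ax = plt.subplots(figsize=(6, 4))
    sns.barplot(data=df, x="month", y="transactions", color="#1f77b4", ax=ax)
    ax.set_title("Monthly Transaction Volume")
    ax.set_ylabel("Transactions")
    fig.tight_layout()
    fig.savefig("monthly_transactions.png")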

•Created numerous pipelines in Azure using Azure Data Factory V2 to retrieve data from disparate source systems using activities such as Move & Transform, Copy, Filter, ForEach, and Databricks.

•Created reports utilizing SSRS, Excel services, Power BI and deployed them on SharePoint Server as per business requirements.

•Implemented row-level security on data and applied an understanding of application security layer models in Power BI.

•Designed ETL and built data pipelines that clean, transform, and aggregate data from disparate sources; connected, imported, shaped, and transformed data for business intelligence (BI).

•Developed Visualization reports and dashboards using Power BI.

•Used SSIS to create data mappings that load data from source to destination.

•Created automated scripts to run batch jobs and reports using SSIS.

•Collaborated with data scientists to deploy machine learning models on Hadoop clusters, supporting advanced analytics initiatives.

•Created SSIS packages to pull data from SQL Server and export it to Excel spreadsheets, and vice versa.

•Ensured developed solutions were formally documented and signed off by the business.

•Worked with team members to resolve technical issues, including troubleshooting, project risk and issue identification, and management.

•Deployed data engineering applications as Kubernetes pods, leveraging features like replica sets and deployments for high availability and fault tolerance.
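
A minimal sketch of that deployment pattern using the official kubernetes Python client; the namespace, labels, and container image are hypothetical, and the actual manifests would normally be authored as YAML:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig; inside a cluster this would be
    # config.load_incluster_config() instead. Names and image are placeholders.
    config.load_kube_config()
    apps_api = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="ingest-worker"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # the replica set keeps three pods running for fault tolerance
            selector=client.V1LabelSelector(match_labels={"app": "ingest-worker"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "ingest-worker"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="ingest-worker",
                        image="myregistry.example.com/ingest-worker:1.0")
                ]))))

    # Create the deployment in a project-specific namespace (hypothetical name).
    apps_api.create_namespaced_deployment(namespace="data-eng", body=deployment)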

•Utilized Kubernetes namespaces to logically isolate and manage resources for different projects or environments.

Environment: Azure Cloud, Azure Data Factory (ADF v2), Azure Function Apps, Azure Data Lake, Blob Storage, SQL Server, Windows Remote Desktop, Azure PowerShell, Databricks, Python, Azure SQL Server, Azure Data Warehouse.


