
SQL Server Data Analysis

Location:
HITEC City, Telangana, 500081, India
Salary:
85
Posted:
April 08, 2024


Resume:

Ranjith Reddy Chintala

Phone: 469-***-****

Email: ad4u6x@r.postjobfree.com

CHICAGO, ILLINOIS

PROFESSIONAL SUMMARY:

* ***** ** ****** ** experience in Data Analysis, ETL Development, Data Modelling, and Project Management, with additional experience in Big Data and related Hadoop technologies.

Strong experience in Business Analysis, Python, Data Analysis, Data Profiling, Data Migration, Data Conversion, Data Quality, and Data Integration.

Strong experience in Data Modelling, with expertise in creating Star and Snowflake schemas, FACT and Dimension tables, and Physical and Logical Data Models.

Built SQL scripts, indexes, and complex T-SQL queries for data analysis and extraction, providing database code to support business applications.

Hands-on experience in creating tables and applying normalization techniques to database tables using T-SQL and SQL Server.
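A minimal T-SQL sketch of the kind of table design described above (all object and column names are hypothetical): a customer/order pair normalized into two tables, with a nonclustered index added to support extraction queries.

```sql
-- Hypothetical names throughout; shown only to illustrate the pattern.
CREATE TABLE dbo.Customer (
    CustomerID   INT           IDENTITY(1,1) PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    Email        NVARCHAR(255) NULL
);

CREATE TABLE dbo.SalesOrder (
    OrderID     INT           IDENTITY(1,1) PRIMARY KEY,
    CustomerID  INT           NOT NULL
        CONSTRAINT FK_SalesOrder_Customer REFERENCES dbo.Customer (CustomerID),
    OrderDate   DATE          NOT NULL,
    OrderAmount DECIMAL(12,2) NOT NULL
);

-- Nonclustered index to speed up per-customer extraction queries.
CREATE NONCLUSTERED INDEX IX_SalesOrder_CustomerID
    ON dbo.SalesOrder (CustomerID)
    INCLUDE (OrderDate, OrderAmount);
```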

Excellent knowledge of developing and deploying SSIS packages from different sources such as Microsoft SQL Server, flat files, CSV, XML, Oracle, and Excel.

Experience in creating and designing SSIS packages using various Data Flow transformations such as Fuzzy Grouping, Fuzzy Lookup, Aggregate, Merge Join, Conditional Split, Data Conversion, and Derived Column.

Hands-on experience in creating SSIS packages to load data using various SSIS tasks such as Execute SQL Task, Bulk Insert Task, Data Flow Task, File System Task, ActiveX Script Task, and XML Task.

Experience in developing custom reports, parameterized reports, and different types of tabular, matrix, ad hoc, and distributed reports in multiple formats using SQL Server Reporting Services (SSRS).

Integrated Azure Cognitive Services' Text Analytics API to perform sentiment analysis on customer reviews and feedback, enriching the dataset with valuable insights for business stakeholders.

Hands-on experience in designing and implementing scalable and secure data architectures using Azure services like Azure Cosmos DB, Azure Stream Analytics, and Azure Event Hubs.

Designed and developed ETL pipelines in Azure cloud to process customer data from APIs and load it into Azure SQLDB.

Proficient in using Azure Key Vault as a central repository for maintaining secrets and referencing them in Azure Data Factory and Databricks notebooks.

Worked on developing user-defined functions (UDFs), stored procedures, triggers, and views.

Knowledge of various error-handling techniques, CTEs, and MERGE statements in T-SQL.
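A hedged T-SQL sketch combining those techniques (schema and table names are hypothetical): a CTE de-duplicates staged rows, MERGE applies the upsert, and TRY/CATCH handles errors inside a transaction.

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    WITH LatestCustomer AS (
        SELECT  CustomerID, CustomerName, Email,
                ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY LoadDate DESC) AS rn
        FROM    stg.Customer             -- hypothetical staging table
    )
    MERGE dbo.CustomerDim AS tgt          -- hypothetical target table
    USING (SELECT CustomerID, CustomerName, Email FROM LatestCustomer WHERE rn = 1) AS src
        ON tgt.CustomerID = src.CustomerID
    WHEN MATCHED THEN
        UPDATE SET tgt.CustomerName = src.CustomerName,
                   tgt.Email        = src.Email
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, CustomerName, Email)
        VALUES (src.CustomerID, src.CustomerName, src.Email);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;   -- re-raise the original error to the caller
END CATCH;
```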

Knowledge of extracting data from a Progress database into SQL Server using drivers.

Worked extensively on ETL processes using MS SQL Server Integration Services (SSIS), with versions 2012, 2014, and 2016.

Familiar with Agile methodologies, JIRA boards, and DFPT tickets.

Ability to collaborate with peers in both business and technical areas to deliver optimal business process solutions in line with corporate priorities.

Strong experience in interacting with stakeholders/customers, gathering requirements through interviews, workshops, and existing system documentation or procedures, defining business processes, identifying and analyzing risks using appropriate templates and analysis tools.

Experience in various phases of the Software Development Life Cycle (analysis, requirements gathering, and design), with expertise in documenting requirement specifications, functional specifications, test plans, source-to-target mappings, and SQL joins.

Experience in conducting Joint Application Development (JAD) sessions for requirements gathering, analysis, and design, and Rapid Application Development (RAD) sessions to converge early on a design that is acceptable to the customer and feasible for the developers, and to limit a project's exposure to the forces of change.

Proficient in performance analysis, monitoring, and SQL query tuning using EXPLAIN PLAN, Collect Statistics, and hints.

Hands on experience using applications like Microsoft SQL Server, SSIS, SSRS, Tableau.

Experience in coding SQL/HQL using Procedures, Triggers and Packages.

Good understanding of Relational Database Design, Data Warehouse/OLAP concepts and methodologies.

Implemented Optimization techniques for better performance on the ETL side and on the database side.

Excellent communication, interpersonal, and analytical skills, with a strong ability to perform in a team as well as individually.

TECHNICAL SKILLS

Analytical Tools & Languages: Snowflake, Snow SQL, AWS S3, AWS Glue, Azure, DBT, Informatica, T-SQL, PL/SQL, Advanced Excel

Databases: SQL Server, MySQL, Toad, Postman, UFT

Software: Oracle SQL Developer, MS Project, Jira, Eclipse, SecureFX, SecureCRT, Git Desktop

Reporting Tools: SQL Server Reporting Services (SSRS), Tableau

Additional Skills: Database modelling, Dimensional modelling, Star and Snowflake schemas, Big Data, Visualization, Agile, Tableau, SDLC, Requirement gathering and Project Management, Impact Analysis, GAP Analysis

Testing Tools: TOSCA, SOAP UI

PROFESSIONAL EXPERIENCE

Client: Portland General Electric (Remote) May 2022 to Present

Snowflake Data Engineer

Responsibilities:

● Bulk loaded data from the external stage (AWS S3) and the internal stage into Snowflake using the COPY command (see the Snow SQL sketch after this list).

● Loaded data into Snowflake tables from the internal stage using Snow SQL.

● Developed Python scripts to establish a connection to Snowflake and manage the migration process.

● Used COPY, LIST, PUT and GET commands for validating the internal stage files.

● Used import and export between the internal stage (Snowflake) and the external stage (AWS S3).

● Wrote complex Snow SQL scripts in the Snowflake cloud data warehouse for business analysis and reporting.

● Used the FLATTEN table function to produce a lateral view of VARIANT, OBJECT, and ARRAY columns.

● Implemented Snowpipe for continuous data ingestion from S3 into Snowflake by configuring AWS S3 event notifications with SQS.

● Developed Snowflake stored procedures for branching and looping logic.

● Created clone objects using zero-copy cloning.

● Performed data validations using the information schema.

● Executed queries to fetch and transform data from the source database using the Snowflake Connector for Python (see the Python sketch after this list).

● Performed data quality issue analysis using Snow SQL by building analytical warehouses on Snowflake.

● Experience with AWS cloud services: EC2, S3, EMR, RDS, Athena, and Glue.

● Implemented optimization strategies that resulted in a 20% reduction in compute costs and a 30% increase in storage efficiency within Snowflake environment.

● Revamped administrative workflows, reducing the time required for routine tasks by 40%, leading to enhanced operational efficiency and resource allocation.

● In-depth knowledge of defining Snowflake database, schema, and table structures, as well as sizing virtual warehouses for various types of workloads.

● Designed and implemented Python scripts to extract data from various sources such as databases, APIs, and flat files, ensuring seamless data ingestion into data processing pipelines.

● Utilized Python libraries such as Pandas, Snowpark and requests to interface with data sources, retrieve data efficiently, and handle data formatting and conversion tasks.

● Executed seamless recovery procedures leveraging Time Travel and Cloning features, resulting in a 99.9% uptime and minimal data loss during critical incidents.

● Led the implementation of dbt (data build tool) for Snowflake, creating and maintaining data models that transformed raw data into meaningful, business-ready insights.

● Optimized dbt models and Snowflake queries for improved performance, reducing query execution times and enhancing overall system efficiency.

● Utilized dbt's version control features to manage and track changes to data models over time.

● Implemented AWS Step Functions to automate and orchestrate Amazon SageMaker-related tasks such as publishing data to S3.

● Created Lambda functions with boto3 to deregister unused AMIs in all application regions, reducing costs on EC2 resources.

● Demonstrated proficiency in Git and GitHub by effectively branching, merging, and resolving code conflicts in complex software projects.

● Maintained a clean and well-organized codebase on GitHub, leveraging Git best practices to manage project history and version control effectively.

● Created Python scripts to ingest raw data from various sources, including databases, APIs, and file systems, ensuring data integrity and reliability throughout the extraction process.

● Implemented data processing logic in Python to clean, transform, and validate incoming data, preparing it for downstream analysis and visualization.

● Implemented external functions in Snowflake that invoke AWS Lambda functions as a remote service for weekly uploads.

● Leveraged Amazon API Gateway to establish the integration between Snowflake and AWS.

● Monitored and tracked issues within the team using JIRA, working on the application in an Agile methodology with Scrum meetings.

● Developed ETL pipelines in SSIS to migrate data from legacy systems to data warehouse and implemented Physical data models including partitioning and indexing to optimize data processing and querying.

● Demonstrated expertise in performance tuning and optimization of SSIS packages, employing best practices to enhance overall data integration performance and minimize processing times for large datasets.

● Designed and implemented CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or AWS CodePipeline to automate the deployment and testing of data engineering workflows and infrastructure-as-code (IaC) configurations.

● Integrated version control systems (e.g., Git) with CI/CD pipelines to enable automated builds, tests, and deployments for data engineering projects, ensuring consistency and reliability across environments.

● Currently working in Agile/Scrum with stand-up meetings, sprints, and backlog grooming sessions; optimized SSIS packages, performed reverse engineering, and prepared documentation.

● Designed and implemented event-driven architectures using Amazon SNS to publish and subscribe to notifications across distributed systems, enabling real-time communication and event-driven workflows.

● Integrated Amazon SNS with various AWS services and external endpoints to deliver notifications via email, SMS, HTTP/S, and other protocols, facilitating seamless communication between microservices and external systems.

● Scheduled and orchestrated regular dbt runs to update and refresh data models in Snowflake, automating the data transformation processes and maintaining up-to-date analytics

● Ensured that models conform to established best practices, including normalization rules, and accommodate change in a cost-effective and timely manner.
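A hedged Snow SQL sketch of the loading pattern described in this list (database, stage, pipe, and column names are hypothetical, and the storage integration is assumed to exist): an external S3 stage, a one-off bulk COPY, a Snowpipe with auto-ingest for continuous loads, and a FLATTEN query over the resulting VARIANT column.

```sql
-- Hypothetical names throughout; storage integration and S3 notification setup omitted.
CREATE OR REPLACE FILE FORMAT analytics.public.ff_json TYPE = JSON;

CREATE OR REPLACE STAGE analytics.public.stg_customer_s3
    URL = 's3://example-bucket/customer/'
    STORAGE_INTEGRATION = s3_int
    FILE_FORMAT = analytics.public.ff_json;

CREATE OR REPLACE TABLE analytics.public.customer_raw (payload VARIANT);

-- One-off bulk load from the external stage.
COPY INTO analytics.public.customer_raw
FROM @analytics.public.stg_customer_s3
ON_ERROR = 'CONTINUE';

-- Continuous ingestion: Snowpipe with auto-ingest driven by S3 event notifications (SQS).
CREATE OR REPLACE PIPE analytics.public.pipe_customer
    AUTO_INGEST = TRUE
AS
COPY INTO analytics.public.customer_raw
FROM @analytics.public.stg_customer_s3;

-- Lateral FLATTEN over the VARIANT payload for analysis and reporting.
SELECT  r.payload:customer_id::STRING AS customer_id,
        f.value:sku::STRING           AS sku,
        f.value:qty::NUMBER           AS qty
FROM    analytics.public.customer_raw r,
        LATERAL FLATTEN(INPUT => r.payload:line_items) f;
```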
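A hedged Python sketch of querying Snowflake with the Snowflake Connector for Python, as referenced in this list (account, credentials, and the query itself are placeholders; fetch_pandas_all assumes the connector's pandas extras are installed).

```python
import pandas as pd
import snowflake.connector


def fetch_customer_orders() -> pd.DataFrame:
    """Connect to Snowflake, run a query, and return the result as a DataFrame."""
    conn = snowflake.connector.connect(
        account="xy12345.us-east-1",   # placeholder account locator
        user="ETL_USER",
        password="********",
        warehouse="ANALYTICS_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute("SELECT customer_id, order_date, order_amount FROM customer_orders")
        return cur.fetch_pandas_all()
    finally:
        conn.close()


if __name__ == "__main__":
    print(fetch_customer_orders().head())
```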

Environment: Snowflake, Snow SQL, TSQL, Tableau, Microsoft Excel, SSIS, SSRS, SSAS, MS Project, PL/SQL, JIRA, Postman, Soap UI, DBT, AWS S3, GitHub, AWS API Gateway

Client: Concentrics, Hyderabad, India Feb 2018 to Jan 2022

Azure Data Engineer/ETL Developer

Responsibilities:

Engaged in the end-to-end process, encompassing requirements gathering, business analysis, design, development, testing, and implementation of business rules.

Comprehended business use cases, integrated business processes, and authored business and technical requirements documents, logic diagrams, process flow charts, and other application-related documents.

Utilized Pandas in Python for data cleansing and validation of source data.

Designed and implemented an ETL pipeline in the Azure cloud, responsible for extracting customer data from APIs and processing it into Azure SQLDB.
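A hedged Python sketch of the API-to-Azure-SQL pattern described above, including the pandas cleansing step (the API endpoint, connection string, credentials, and table are hypothetical placeholders).

```python
import pandas as pd
import pyodbc
import requests


def load_customers() -> None:
    # Extract: pull customer records from a REST API (placeholder URL, assumed to return a JSON array).
    resp = requests.get("https://api.example.com/v1/customers", timeout=30)
    resp.raise_for_status()
    df = pd.DataFrame(resp.json())

    # Cleanse and validate with pandas before loading.
    df = df.dropna(subset=["customer_id"]).drop_duplicates(subset=["customer_id"])

    # Load into Azure SQL Database over ODBC.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=example-server.database.windows.net;DATABASE=SalesDB;"
        "UID=etl_user;PWD=********"
    )
    cursor = conn.cursor()
    cursor.fast_executemany = True
    cursor.executemany(
        "INSERT INTO dbo.Customer (CustomerID, CustomerName, Email) VALUES (?, ?, ?)",
        list(df[["customer_id", "customer_name", "email"]].itertuples(index=False, name=None)),
    )
    conn.commit()
    conn.close()


if __name__ == "__main__":
    load_customers()
```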

Orchestrated all data pipelines using Azure Data Factory and constructed a customized alert platform for monitoring.

Created Databricks job workflows for extracting data from SQL Server and transferring files to SFTP using PySpark and Python.

Leveraged Azure Key Vault as a centralized repository for managing secrets, referencing these secrets in Azure Data Factory and Databricks notebooks.
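A hedged PySpark sketch of the Databricks pattern described above (secret scope, JDBC host, table, and paths are hypothetical): the SQL Server password is read from an Azure Key Vault-backed secret scope via dbutils.secrets, the table is pulled over JDBC, and the extract is staged before the SFTP transfer step.

```python
# Runs inside a Databricks notebook or job, where `spark` and `dbutils` are provided.
# All names below (scope, keys, host, table, paths) are hypothetical placeholders.

jdbc_password = dbutils.secrets.get(scope="kv-backed-scope", key="sqlserver-password")

jdbc_url = "jdbc:sqlserver://example-sql.database.windows.net:1433;database=SalesDB"

orders_df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.CustomerOrders")
    .option("user", "etl_user")
    .option("password", jdbc_password)
    .load()
)

# Light cleanup before handing the extract to the SFTP upload step.
clean_df = orders_df.dropDuplicates(["OrderID"]).filter("OrderAmount > 0")

clean_df.write.mode("overwrite").parquet("dbfs:/mnt/staging/customer_orders/")
```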

Collaborated with cross-functional teams to develop a chatbot using Azure Bot Service and Azure Cognitive Services, providing users with a seamless conversational interface for data queries and insights.

Established Teradata ELT frameworks to ingest data from various sources using Teradata Legacy load utilities.

Designed a common SFTP download/upload framework utilizing Azure Data Factory and Databricks.

Maintained and supported the Teradata architectural environment for EDW applications.

Engaged in logical modeling, physical database design, data sourcing, data transformation, data loading, SQL, and performance tuning.

Provided project development estimations to business stakeholders and delivered projects as agreed upon with business.

Designed appropriate Teradata Primary Indexes (PI) considering planned data access and even data distribution across all available AMPS.

Considered business requirements and factors when creating suitable Teradata NUSI for efficient data access.

Designed process-oriented UNIX scripts and ETL processes for loading data into the data warehouse.

Created Informatica mappings to load data from various sources into the Data Warehouse, applying transformations like Source Qualifier, Expression, Lookup, Aggregator, Update Strategy, and Joiner.

Worked on Informatica's advanced concepts and implemented Pushdown Optimization technology and pipeline partitioning.

Conducted bulk data loads from multiple data sources (Oracle 8i, legacy systems) to Teradata RDBMS using BTEQ, MultiLoad, and FastLoad.

Applied various transformations like Source Qualifier, Aggregators, Lookups, Filters, Sequence Generators, Routers, Update Strategy, Expression, Sorter, Normalizer, Stored Procedure, Union, etc.

Utilized Informatica Power Exchange to handle change data capture (CDC) data from the source and load it into Data Mart, following slowly changing dimensions (SCD) type II processes.

Employed Power Center Workflow Manager to create workflows and sessions and utilized various tasks like command, event wait, event raise, and email.

Designed, created, and optimized physical database objects, including tables, views, indexes, PPI, UPI, NUPI, and USI, to support normalized and dimensional models.

Implemented a cleanup process for removing intermediate temp files used prior to the loading process.

Used volatile tables and derived queries to simplify complex queries.

Managed performance monitoring, resource and priority management, space management, user management, index management, access control, and executed disaster recovery procedures.

Automated Teradata ELT and administrative activities using Python and shell scripts.

Performed application-level database administration (DBA) activities, such as creating tables, indexes, and monitoring and tuning Teradata BTEQ scripts using Teradata Visual Explain utility.

Engaged in performance tuning, monitoring, UNIX shell scripting, and both physical and logical database design.

Developed UNIX scripts to automate various tasks involved in the loading process.

Worked with Tableau for reporting needs; created and supported numerous dashboard reports, pie charts, and heat map charts built on the Teradata database.

Environment: Azure Cloud, Azure Data Factory, Databricks, Azure SQLDB, Teradata RDBMS, Tableau, Pandas, Log Analytics, Azure Key Vault, Informatica, Python, Pyspark, SQL, Shell Scripts, etc.

Client: IMedX Medical Transcription (Hyderabad, India) Dec 2015 to Dec 2017

ETL Developer

Responsibilities:

● Involved in designing and developing SQL Server objects such as tables, views, indexes (clustered and non-clustered), stored procedures, and functions in Transact-SQL.

● Developed ETL pipelines in SSIS to integrate source data into the staging layer, transforming it and loading it into the data warehouse.

● Developed stored procedures in SQL Server to standardize DML transactions such as insert, update, and delete against the database (see the T-SQL sketch at the end of this list).

● Resolved system issues logged in the QA issues log by conducting UAT during the software development life cycle in an Agile framework.

● Worked with Project Management in the creation of project estimates.

● Analyzed the data, identifying data sources and data mappings for HCFG.

● Wrote complex SQL, PL/SQL, procedures, functions, and packages to validate data and support the testing process.

● Worked in generating and documenting Metadata while designing OLTP and OLAP systems environment.

● Worked in data management performing data analysis, gap analysis, and data mapping.

● Worked extensively in documenting the Source to Target Mapping documents with data transformation logic.

● Interacted with SMEs to analyze data extracts from legacy systems (mainframes and COBOL files) and determine each element's source, format, and integrity within the system.

● Created SSIS packages to load data from flat files, Excel, and Access into SQL Server using connection managers.

● Produced various types of reports using SQL Server Reporting Services (SSRS).

● Transformed requirements into data structures that can be used to efficiently store, manipulate, and retrieve information.

● Collaborated with data modelers and ETL developers in creating the Data Functional Design documents.

● Ensured that models conform to established best practices, including normalization rules, and accommodate change in a cost-effective and timely manner.

● Enforced standards to ensure that data elements and attributes are properly named.

● Worked with the business and the ETL developers in the analysis and resolution of data-related problem tickets.

● Supported development teams creating applications against supported databases.

● Provided 24x7 problem management support to the development team.

● Documented various data quality mapping documents and audit and security compliance adherence.

● Performed small enhancements such as SOR element additions and data cleansing/data quality fixes.

● Created various Data Mapping Repository documents as part of Metadata services EMR.

● Provided inputs to the development team in performing extraction, transformation, and load for data marts and data warehouses.

● Provided support in developing and maintaining ETL processes that extract data from multiple SORs residing on various technology platforms and transport the data to delivery points such as data marts or data warehouses.
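A hedged T-SQL sketch of the standardized DML pattern referenced earlier in this list (procedure, table, and parameter names are hypothetical): a single stored procedure that routes insert, update, and delete operations through one interface.

```sql
CREATE PROCEDURE dbo.usp_ManageCustomer
    @Action       VARCHAR(10),            -- 'INSERT', 'UPDATE', or 'DELETE'
    @CustomerID   INT,
    @CustomerName NVARCHAR(100) = NULL,
    @Email        NVARCHAR(255) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    IF @Action = 'INSERT'
        INSERT INTO dbo.Customer (CustomerID, CustomerName, Email)
        VALUES (@CustomerID, @CustomerName, @Email);
    ELSE IF @Action = 'UPDATE'
        UPDATE dbo.Customer
        SET    CustomerName = COALESCE(@CustomerName, CustomerName),
               Email        = COALESCE(@Email, Email)
        WHERE  CustomerID   = @CustomerID;
    ELSE IF @Action = 'DELETE'
        DELETE FROM dbo.Customer
        WHERE  CustomerID = @CustomerID;
    ELSE
        THROW 50001, 'Unknown @Action value.', 1;
END;
```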


Environment: MS Excel, MS Access, UNIX, Windows XP, SQL, PL/SQL


