Data Analyst – Power BI

Location:
Denton, TX
Posted:
February 27, 2025

Resume:

Data Analyst

Prathyusha

Phone: +1-940-***-****

Email: ***********@*****.***

PROFESSIONAL SUMMARY:

Around 8 years of experience in Data Analysis, Data Profiling, Data Cleansing, Data Blending, Data Preparation, and Data Visualization, leveraging leading BI tools such as Power BI, Tableau, and Advanced Excel to deliver actionable insights.

Proven ability to analyze and process large, complex datasets, consolidate information, and provide meaningful visualizations and drill-down capabilities to enable data-driven decision-making.

Expertise in writing advanced SQL queries, including stored procedures, joins, subqueries, triggers, and creating database objects like tables, views, functions, and indexes for efficient data management and reporting.
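
For illustration only, a minimal, self-contained sketch of the kinds of SQL constructs listed above, run here against an in-memory SQLite database; all table and column names are hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.executescript("""
    CREATE TABLE borrowers (borrower_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE loans (loan_id INTEGER PRIMARY KEY, borrower_id INTEGER, amount REAL);
    CREATE INDEX idx_loans_borrower ON loans (borrower_id);   -- index to speed up joins
    CREATE VIEW v_borrower_totals AS                          -- reusable reporting view
        SELECT b.name, SUM(l.amount) AS total_amount
        FROM loans l JOIN borrowers b ON b.borrower_id = l.borrower_id
        GROUP BY b.name;
    """)
    cur.executemany("INSERT INTO borrowers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])
    cur.executemany("INSERT INTO loans VALUES (?, ?, ?)",
                    [(10, 1, 5000.0), (11, 1, 2500.0), (12, 2, 800.0)])

    # Subquery: borrowers whose total borrowing exceeds the average loan amount
    rows = cur.execute("""
        SELECT name, total_amount FROM v_borrower_totals
        WHERE total_amount > (SELECT AVG(amount) FROM loans)
    """).fetchall()
    print(rows)  # [('Acme', 7500.0)]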

Strong understanding of ETL processes, with hands-on experience in data transformation, aggregation, and integration using Power BI, Tableau, and programming languages like Python and R.

Proficient in leveraging Python libraries like Pandas, NumPy, and Scikit-learn for data manipulation, preprocessing, and statistical modeling to ensure data readiness and accurate insights.
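
A hedged sketch (synthetic data, hypothetical column names) of the kind of Pandas/NumPy/scikit-learn preparation and modeling workflow referred to above.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    df = pd.DataFrame({"income": [52000.0, 61000.0, np.nan, 47500.0],
                       "loan_amount": [10000.0, 15000.0, 12000.0, 9000.0]})
    df["income"] = df["income"].fillna(df["income"].median())  # impute the missing value

    X = StandardScaler().fit_transform(df[["income"]])         # standardize the feature
    model = LinearRegression().fit(X, df["loan_amount"])
    print(model.coef_, model.intercept_)                       # fitted slope and intercept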

Skilled in creating advanced DAX queries and leveraging Power BI’s robust capabilities, including building interactive dashboards, KPIs, and custom reports to support various business domains.

Demonstrated expertise in Tableau for creating interactive visualizations, dashboards, and storyboards, enabling real-time decision-making across Sales, Marketing, Finance, and Operations functions.

Comprehensive knowledge of data warehousing principles, including Fact and Dimensional Tables, Star Schema, and Snowflake Schema, with experience in designing and implementing robust data models for analytical workflows.

Proficient in working with cloud platforms such as AWS, Azure, and Snowflake, implementing scalable solutions for data storage, processing, and analytics.

Hands-on experience with Big Data tools such as Apache Spark (PySpark, Spark SQL) and Hadoop ecosystem components for distributed data processing and analysis.
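
A minimal, illustrative PySpark/Spark SQL aggregation (hypothetical columns; assumes a local Spark installation), sketching the style of distributed processing named above.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("sketch").getOrCreate()
    df = spark.createDataFrame(
        [("retail", 120.0), ("retail", 80.0), ("wholesale", 300.0)],
        ["segment", "sales"])

    # DataFrame API aggregation
    df.groupBy("segment").agg(F.sum("sales").alias("total_sales")).show()

    # The same data queried through Spark SQL
    df.createOrReplaceTempView("sales")
    spark.sql("SELECT segment, AVG(sales) AS avg_sales FROM sales GROUP BY segment").show()
    spark.stop()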

Expertise in data validation, data quality assurance, and ensuring the integrity of business-critical datasets through rigorous validation and cleansing activities.

Experienced in conducting Exploratory Data Analysis (EDA) and performing statistical analyses, including hypothesis testing, regression modeling, and clustering, to derive actionable insights.
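
For illustration, a small synthetic-data sketch of two of the techniques listed above: a two-sample t-test and k-means clustering.

    import numpy as np
    from scipy import stats
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    group_a = rng.normal(100, 10, 200)      # e.g., a metric under variant A
    group_b = rng.normal(104, 10, 200)      # e.g., the same metric under variant B
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    points = rng.normal(size=(300, 2))
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points)
    print(np.bincount(labels))              # observations per cluster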

Developed and optimized data visualizations using advanced tools like Power View, Power Map, and Looker, driving user engagement and growth.

Adept at translating complex business requirements into functional and technical specifications, ensuring alignment with stakeholder expectations and project goals.

Proficient in utilizing Power Query for data transformation and automation tasks within Excel and Power BI.

TECHNICAL SKILLS:

Data Modeling Tools: ERwin, PowerDesigner, Power BI, Tableau, Snowflake

Project Management: MS Project, JIRA, Confluence

Methodologies: SDLC, Agile, Scrum, JAD Sessions, Ralph Kimball Methodology

Databases: MS Access, SQL Server, Teradata, PostgreSQL, MySQL, Oracle, NoSQL (MongoDB)

ETL Tools: Informatica PowerCenter, SSIS, Talend

Reporting Tools: BO, OBIEE, Power BI, Tableau, SSRS, Crystal Reports

Languages: PL/SQL, SQL, Python (Pandas, NumPy, Matplotlib), R (Tidyverse)

OS: Windows, Linux

Big Data & Cloud: Azure Data Factory, Databricks, Snowflake (SnowSQL, Snowpipe), AWS (S3, Redshift)

Others: MS Office (Advanced Excel), TOAD, Git/GitHub

PROFESSIONAL EXPERIENCE:

LendingTree - Charlotte, North Carolina Oct 2023 – Present

Senior Data Analyst

Responsibilities:

Designed and enhanced key Loans and Securities lending applications, improving overall system performance and facilitating the onboarding of new clients onto the lending platform.

Analyzed business requirements and documented functional workflows for data and information movement between source and destination systems, ensuring accuracy and efficiency.

Acted as a Project Manager for Tyler Technologies, overseeing a $6.8 million, 3-year statewide eCOURT application implementation for Oregon counties.

Attended cross-functional meetings with stakeholders, including Tyler Technologies and Microsoft, to collaborate on feature development and implementation.

Conducted reviews of database objects (e.g., tables, views) to assess performance issues, proposed improvements, and implemented optimization strategies.

Developed and updated complex SQL stored procedures, triggers, functions, and other database objects to validate, load, and insert data into appropriate tables.

Wrote and maintained ETL pipelines and job scripts to process, cleanse, and transform data, supporting transactional and reporting needs.

Identified a data quality issue and developed a corrective procedure that improved the quality of critical tables by preventing duplicate data from entering the Data Warehouse.
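
A hedged sketch of one common way such a procedure can work (hypothetical schema, SQLite syntax): delete all but one row per natural key, then add a unique index so duplicates cannot re-enter.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY AUTOINCREMENT,
                               source_id TEXT, name TEXT);
    INSERT INTO dim_customer (source_id, name)
        VALUES ('C1', 'Acme'), ('C1', 'Acme'), ('C2', 'Globex');

    -- Keep the lowest surrogate key per natural key; delete the rest
    DELETE FROM dim_customer
    WHERE customer_key NOT IN
        (SELECT MIN(customer_key) FROM dim_customer GROUP BY source_id);

    -- Prevent future duplicates at the database level
    CREATE UNIQUE INDEX ux_dim_customer_source ON dim_customer (source_id);
    """)
    print(conn.execute("SELECT * FROM dim_customer").fetchall())  # one row per source_id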

Designed and implemented SQL-based tools, stored procedures, and functions to report daily data volume and aggregation status.

Followed ELT methods to transform and load data into Google BigQuery, using Qlik for visualization.

Performed SQL performance tuning for scripts and stored procedures, optimizing query execution times and addressing bottlenecks caused by increasing data volume.

Troubleshot performance issues such as data contention and deadlocks, implementing solutions to resolve bottlenecks and improve system stability.

Conducted data profiling and data quality reviews, identifying column-level issues, cleansing data, and publishing results on SharePoint for broader visibility.

Collaborated with stakeholders to define data quality rules, identified gaps in source systems, and implemented error-checking mechanisms.

Created detailed Source-to-Target Mapping (STM) documentation and ensured traceability of data lineage for regulatory compliance.

Designed and developed SSRS reports, Tableau dashboards, and Power BI visualizations, presenting insights to stakeholders on key metrics for lenders, borrowers, and loans.

Developed automation scripts using Perl, UNIX/Linux shell scripting, and Windows batch scripting to streamline workflows, including automated file processing.

Environment: eCOURT, SQL Server, SSRS, Tableau, Power BI, Python, UNIX/Linux, jQuery, Snowflake, AWS, Azure, VOS, KYC, CDD, Agile (JIRA, Confluence)

Abbott Laboratories - Hyderabad, India Sept 2020 – June 2023

Data Analyst

Responsibilities:

Developed and supported a key initiative for the conversion, integration, and migration of data from diverse legacy systems to a centralized database, enabling streamlined business intelligence and decision-making.

Interacted with business users to understand business process flows, gather data migration requirements, and evaluate data volume, major data objects, and the level of effort required for migration.

Designed comprehensive data mapping documents and defined business rules for data cleansing, transformation, and standardization to ensure high-quality and actionable insights.

Conducted detailed CRUD analysis on multiple data systems to identify high-impact database objects, assess data flow, and propose improvements to optimize data integrity and data governance practices.

Performed rigorous Data Profiling and Data Quality Assessments to evaluate the accuracy, conformance to standards, and relevance of legacy data for migration efforts.

Developed custom SQL and PL/SQL scripts, including stored procedures, functions, and triggers, to standardize, process, and validate data consistency across multiple databases.

Partnered with data architects and stakeholders to analyze data quality results, define long-term data governance strategies, and implement business rules for maintaining high data standards.

Led efforts to analyze data for source/target mappings and created T-SQL scripts for data processing and automation.

Connected Google BigQuery to Qlik and created visualizations.

Monitored and evaluated KPI performance against targets, identified trends, and provided insights to stakeholders for informed decision-making using Python's analytical capabilities.
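
A hedged illustration (hypothetical figures) of a simple KPI-versus-target comparison in Pandas.

    import pandas as pd

    kpis = pd.DataFrame({"kpi": ["approval_rate", "conversion_rate"],
                         "actual": [0.82, 0.11],
                         "target": [0.85, 0.10]})
    kpis["variance"] = kpis["actual"] - kpis["target"]    # gap to target
    kpis["on_target"] = kpis["actual"] >= kpis["target"]  # higher is better here
    print(kpis)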

Facilitated training sessions and workshops to educate teams on the importance of KPIs, data interpretation, and utilization for performance improvement, including Python basics for data analysis.

Designed and implemented frameworks to extract, transform, and load (ETL) data from various legacy sources into staging areas, followed by parsing, cleansing, and massaging the data to meet organizational requirements.

Built robust data structures and staging areas to facilitate in-depth data analysis and interlinked segment data for deeper insights and accurate reporting.

Automated the creation and extraction of data from multiple data sources, using scripts to parse files, harmonize data, and ensure efficient loading into staging tables and target databases.

Utilized Python for creating scalable ETL pipelines, automating data extraction, and conducting advanced data analysis to uncover actionable insights.
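
A minimal sketch, assuming a CSV source with an "id" column and a SQLite staging table (all names hypothetical), of the extract-transform-load pattern described above.

    import sqlite3
    import pandas as pd

    def load_csv_to_staging(csv_path: str, conn: sqlite3.Connection) -> int:
        df = pd.read_csv(csv_path)                              # extract
        df.columns = [c.strip().lower() for c in df.columns]    # standardize headers
        df = df.drop_duplicates().dropna(subset=["id"])         # cleanse
        df.to_sql("staging_records", conn, if_exists="append",
                  index=False)                                  # load
        return len(df)                                          # rows loaded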

Developed and fine-tuned matching rules and transformation scripts to harmonize data, cleanse inaccuracies, and eliminate duplicates across disparate systems.

Analyzed and compared data consistency across sectors, using advanced Data Visualization tools like Power BI and Tableau to identify trends and discrepancies.

Collaborated in cross-functional Agile teams using tools like JIRA and Confluence to manage project progress and ensure timely delivery of data migration initiatives.

Environment: PL/SQL, SQL, Snowflake, Azure Data Lake, Power BI, Tableau, Python (Pandas, NumPy), SSIS, Talend, JIRA, Confluence, CRUD Analysis, SNCF, IBM Systems

Edvensoft Solutions India Pvt. Ltd, India May 2017 – Aug 2020

Data Analyst

Responsibilities:

Analyzed business requirements to compose functional and implementable technical data solutions, ensuring alignment with organizational goals.

Identified integration impacts, defined data flows, and established robust data stewardship processes.

Created and maintained data dictionaries, data mapping documents, metadata, and ER diagrams for ETL and application support.

Developed and implemented DFD (Data Flow Diagrams), DDL/DML scripts, and data integrity mechanisms to support evolving business needs.

Led Joint Application Development (JAD) sessions as the primary data modeler to enhance existing databases and design new databases.

Utilized the Python programming language for data extraction, transformation, analysis, and visualization.

Employed Python libraries such as Pandas, NumPy, Matplotlib, and Seaborn for efficient data manipulation, calculations, statistical analysis, and data visualization.
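
For illustration (synthetic data), a small Seaborn/Matplotlib chart of the kind such a workflow produces, rendered off-screen to a PNG.

    import matplotlib
    matplotlib.use("Agg")                     # render without a display
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    import seaborn as sns

    df = pd.DataFrame({
        "month": [f"2019-{m:02d}" for m in range(1, 13)],
        "orders": np.random.default_rng(1).integers(80, 120, 12)})
    sns.lineplot(data=df, x="month", y="orders")
    plt.xticks(rotation=45)
    plt.tight_layout()
    plt.savefig("orders_trend.png")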

Collaborated with cross-functional teams, including executives, department heads, and IT professionals, fostering a culture of collaboration and knowledge sharing.

Operated in a data-driven environment with a focus on continuous improvement and innovation.

Secured access to relevant data sources, systems, and resources required for successful project implementation, and integrated Python into the existing data infrastructure.

Emphasized effective communication, collaboration, and teamwork to ensure project success and stakeholder engagement.

Evaluated and optimized data models for business requirements, enhancing scalability and data integrity.

Authored and executed complex SQL scripts to implement database changes, including table updates, index creation, and stored procedure development.

Conducted comprehensive data profiling and data cleansing to improve data quality across staging, development, UAT, and production environments.

Reverse-engineered and consolidated data models to standardize across environments, utilizing tools like PowerDesigner and ERwin.

Defined ETL mapping specifications and designed workflows to extract, transform, and load (ETL) data into data warehouses and data marts.

Integrated legacy system data into data marts, improving data blending and access for reporting.

Designed optimized logical and physical schemas for data warehousing, adhering to dimensional modeling best practices.

Environment: ETL, Power BI, Tableau, Azure Data Factory, Databricks, Snowflake, SQL Server, Oracle, PostgreSQL, PL/SQL, Informatica, DataStage, JIRA, UAT, WFHB DB, Hadoop, PySpark, PowerDesigner, ERwin


