
Data Analyst Power BI

Location: Overland Park, KS
Posted: September 10, 2025

Resume:

SNEHITH REDDY

DATA ANALYST

+1-816-***-**** ****************@*****.***

SUMMARY

Passionate Data Analyst & Engineer with over four years of hands-on experience in data analytics, engineering, and visualization, turning complex datasets into meaningful insights that help businesses make smarter, data-driven decisions.

Proficient in data visualization using Tableau, Power BI, and AWS QuickSight, creating interactive dashboards that improved decision-making efficiency by 30%.

Created Tableau dashboards for reporting and data visualization that guided business decision-making for multiple stakeholders.

Strong SQL skills with experience working on MySQL, SQL Server, PostgreSQL, and AWS RDS, ensuring data integrity, scalability, and security across cloud and on-premises environments.

Cloud expertise in AWS & Azure, leveraging Azure Synapse Analytics, AWS RDS Multi-AZ, and Azure Data Lake Storage to optimize data processing and storage, reducing query execution time by 35%.

Well-versed in Agile, Waterfall, and SDLC methodologies, ensuring efficient project management and delivery of analytical solutions. Experienced with Shell Scripting in operating systems such as Linux and version-control tools such as Git.

Hands-on experience with ETL pipelines using Apache Spark, Databricks, Azure Data Factory, and AWS Glue, enhancing data processing speeds by 50% and automating workflows.

Expert in data wrangling, warehousing, and predictive analytics, leveraging machine learning techniques for better insights, leading to a 30% increase in business intelligence capabilities.

Skilled in effectively communicating insights through presentations, utilizing problem-solving skills to drive data-driven decision-making processes.

EDUCATION

Western Illinois University, IL

Master’s in ASDA and Computer Science, Jan 2022 – May 2023

TECHNICAL SKILLS

Programming Languages:

Python, R, SQL, Shell (Linux/Unix), SAS

Packages:

NumPy, Pandas, Matplotlib, SciPy, Seaborn, ggplot2

Visualization Tools:

Tableau, Power BI, Microsoft Excel, D3.js

Database:

MySQL, SQL Server

Cloud Technologies:

AWS (EC2, S3, IAM, Lambda, Glue, Kinesis), Azure (Data Factory, Synapse Analytics, Azure Functions, Databricks, Storage Services, Key Vault, Active Directory, Azure SQL), Snowflake.

Methodologies:

SDLC, Agile, Waterfall

Other Skills:

Data Cleaning, Data Wrangling, Data Warehousing, Data Governance, Data Mining, Critical Thinking, Communication Skills, Presentation Skills, Problem-Solving

IDEs:

Visual Studio Code, PyCharm

Operating System:

Windows, Linux

PROFESSIONAL EXPERIENCE

Blue Cross Blue Shield, USA

Data Analyst/Engineer, June 2023 – Present

Designed and maintained sophisticated data pipelines leveraging Azure technologies, Python, SQL, and Excel, ensuring seamless ETL (Extract, Transform, Load) operations for processing large-scale healthcare data, leading to a 30% improvement in data processing efficiency.

Partnered with cross-functional teams, including actuarial, clinical, and marketing, to translate raw data into meaningful insights that drove strategic decision-making and enhanced patient care outcomes by 20%.

Developed dynamic, interactive dashboards, reports, and data visualizations using Power BI and Azure Synapse Analytics, enabling stakeholders to access real-time performance metrics and fostering a data-driven culture within the organization.

Engineered comprehensive data models and schemas for relational and non-relational databases such as Azure SQL Database, Cosmos DB, and Azure Synapse Analytics, streamlining data storage for FHIR and EHR integration.

Conducted extensive data transformations using Databricks' Spark DataFrame API, improving the quality, accessibility, and usability of healthcare data for Big Data analytics.

Developed optimized stored procedures, views, indexes, and scripts using Python, PySpark, and Spark SQL, transforming raw healthcare data into curated datasets, improving data accuracy by 25%.
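
For illustration, a minimal PySpark/Spark SQL sketch of the kind of curation step described in the two bullets above; the paths, table, and column names are hypothetical, not taken from the actual work:

```python
# Hypothetical PySpark curation step; all paths and column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate_claims").getOrCreate()

raw = spark.read.parquet("/landing/claims/")  # raw healthcare extract

curated = (
    raw.dropDuplicates(["claim_id"])                  # de-duplicate on the business key
       .filter(F.col("claim_amount") > 0)             # drop obviously invalid rows
       .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
       .withColumn("load_ts", F.current_timestamp())  # audit column
)

# Register a Spark SQL view so the curated set is queryable directly
curated.createOrReplaceTempView("curated_claims")
monthly = spark.sql("""
    SELECT date_trunc('month', service_date) AS svc_month,
           COUNT(*)          AS claims,
           SUM(claim_amount) AS total_amount
    FROM curated_claims
    GROUP BY 1
""")

curated.write.mode("overwrite").parquet("/curated/claims/")
```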

Implemented automated ETL pipelines with Apache Airflow to facilitate continuous updates to patient records, reducing manual processing efforts by 40% and ensuring real-time accessibility of critical healthcare data.
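
A stripped-down Airflow DAG in the spirit of that bullet might look like the following; the DAG id, schedule, and task body are illustrative placeholders:

```python
# Hypothetical Airflow DAG for the patient-record refresh described above;
# the DAG id, schedule, and task body are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def refresh_patient_records():
    # Placeholder for the real extract/transform/load logic.
    print("pulling new patient records and upserting into the warehouse")


with DAG(
    dag_id="patient_records_refresh",
    start_date=datetime(2023, 6, 1),
    schedule_interval="@hourly",  # frequent refresh for near-real-time access
    catchup=False,
) as dag:
    PythonOperator(
        task_id="refresh_patient_records",
        python_callable=refresh_patient_records,
    )
```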

Applied machine learning models (random forest, logistic regression) to predict patient outcomes, estimate healthcare costs, and identify high-risk populations, contributing to a 15% increase in preventative care initiatives.
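
As a sketch only, the model comparison could be set up with scikit-learn along these lines, here on synthetic data rather than the real clinical features:

```python
# Illustrative comparison of the two model families named above,
# trained on synthetic data in place of the real clinical features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

for model in (
    RandomForestClassifier(n_estimators=200, random_state=42),
    LogisticRegression(max_iter=1000),
):
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{type(model).__name__}: AUC = {auc:.3f}")
```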

Demonstrated a strong command of data warehousing principles, including star schema, snowflake schema, and dimension modeling, ensuring structured and efficient data storage and retrieval.
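
A toy example of the star-schema idea, using sqlite3 so it runs anywhere; the table and column names are invented:

```python
# Toy star schema in sqlite3: one fact table keyed to two dimension tables.
# Table and column names are invented for the illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_patient (patient_key INTEGER PRIMARY KEY, name TEXT, dob TEXT);
    CREATE TABLE dim_date    (date_key    INTEGER PRIMARY KEY, cal_date TEXT, month TEXT);
    CREATE TABLE fact_claims (
        claim_id    INTEGER PRIMARY KEY,
        patient_key INTEGER REFERENCES dim_patient(patient_key),
        date_key    INTEGER REFERENCES dim_date(date_key),
        amount      REAL
    );
""")
```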

Adopted Agile methodologies to enhance collaboration, streamline project delivery, and maintain adaptability to evolving healthcare requirements.

McKesson, USA

Data Analyst, Jan 2023 – May 2023

Gathered business requirements and collaborated with senior management to design optimized data models and analytical reports, enabling strategic planning with improved accuracy.

Leveraged SQL queries and advanced Excel techniques (including Power Query, VLOOKUP, and PowerPivot) for comprehensive data preparation, analysis, and visualization, enhancing operational efficiency by 35%.

Worked extensively with MySQL, MS SQL Server, and Hadoop, processing structured and semi-structured healthcare data from sources such as CSV, Excel, and relational databases.

Utilized data analysis and visualization tools such as DAX, VBA (macros), VLOOKUP, XLOOKUP, Power View, Power Map, Heat Map, pivot tables/charts, and PowerPivot to monitor and enhance healthcare service delivery and patient outcomes at McKesson.

Employed Python libraries (NumPy, Pandas, SciPy, and scikit-learn) within Jupyter Notebook to perform data cleaning, statistical modeling, and predictive analytics.
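
A representative (hypothetical) cleaning pass with Pandas might look like this; the CSV path and column names are placeholders:

```python
# Hypothetical cleaning pass; file and column names are placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("claims_export.csv")

df = df.drop_duplicates()
df["claim_amount"] = pd.to_numeric(df["claim_amount"], errors="coerce")
df["claim_amount"] = df["claim_amount"].fillna(df["claim_amount"].median())
df["service_date"] = pd.to_datetime(df["service_date"], errors="coerce")
df = df.dropna(subset=["patient_id", "service_date"])

print(df.describe(include=np.number))  # quick statistical profile
```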

Integrated real-time data streaming from Kafka into Hadoop ecosystems, enabling seamless ingestion and processing of patient records.
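
A simplified consumer for such an ingestion path, using the kafka-python client; the broker, topic, and group id are placeholders, and a local JSON-lines file stands in for the Hadoop sink to keep the sketch self-contained:

```python
# Simplified ingestion consumer; broker, topic, and group id are placeholders,
# and a local JSON-lines file stands in for the real HDFS sink.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "patient-records",
    bootstrap_servers="localhost:9092",
    group_id="hadoop-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

with open("patient_records.jsonl", "a", encoding="utf-8") as sink:
    for message in consumer:
        sink.write(json.dumps(message.value) + "\n")  # one record per line
```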

Extracted data from various databases, including DB2, SQL Server, Oracle, and Netezza.

Automated critical ETL processes with PySpark and Spark SQL, reducing manual reporting efforts by more than 300 hours annually and streamlining healthcare operations.

Developed tailored BI dashboards for 1,000+ patients, allowing healthcare providers to track medical trends, resulting in a 20% improvement in patient outcome predictions.

Spearheaded the automation of reporting functions with Power BI, streamlining the creation of reports, dashboards, and KPI scorecards while integrating data from MySQL and data warehouse sources, significantly enhancing reporting efficiency.

Designed and developed dynamic, data-driven Power BI dashboards aligned with business requirements and strategic goals, incorporating visualization techniques such as Pie Charts, Bar Charts, Tree Maps, Circle Views, Line Charts, and Scatter Plots to give stakeholders clear, intuitive, real-time insights.

Designed and implemented a robust data governance framework, ensuring compliance with HIPAA regulations, and increasing data quality and reliability by 25%.

Genpact, India

Jr. Data Analyst, Aug 2019 – July 2021

Developed custom Python scripts and advanced SQL queries to conduct complex data analysis, improving decision-making accuracy across various business operations.

Leveraged Python along with libraries such as Pandas and NumPy to perform advanced data analysis, cleansing, transformation, and manipulation, enabling the extraction of actionable insights that drive strategic business decisions.

Developed detailed and interactive data visualizations using Matplotlib and Seaborn, effectively translating complex analytical findings into clear, visually compelling reports that cater to both technical and non-technical stakeholders.
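
For example, a small Seaborn/Matplotlib report figure could be produced along these lines, with synthetic data standing in for the real analytical output:

```python
# Synthetic reporting figure with Seaborn/Matplotlib; data is generated here.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "region": rng.choice(["North", "South", "East", "West"], size=400),
    "revenue": rng.gamma(shape=2.0, scale=1000.0, size=400),
})

sns.barplot(data=df, x="region", y="revenue")  # bar height = mean revenue
plt.title("Average revenue by region")
plt.tight_layout()
plt.savefig("revenue_by_region.png", dpi=150)
```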

Executed complex SQL-based transformations, including pivot/unpivot operations, cross-tabulations, and recursive queries, to streamline data analysis and convert more than 10 TB of raw data into structured, business-ready insights.
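
A toy sqlite3 demonstration of two of those SQL patterns, a pivot via conditional aggregation and a recursive CTE; the sales table is invented:

```python
# Toy demo of a conditional-aggregation pivot and a recursive CTE in sqlite3;
# the sales table is invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, quarter TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('North', 'Q1', 100), ('North', 'Q2', 150),
        ('South', 'Q1',  80), ('South', 'Q2', 120);
""")

# Pivot: one row per region, one column per quarter
for row in con.execute("""
    SELECT region,
           SUM(CASE WHEN quarter = 'Q1' THEN amount END) AS q1,
           SUM(CASE WHEN quarter = 'Q2' THEN amount END) AS q2
    FROM sales
    GROUP BY region
"""):
    print(row)

# Recursive CTE: generate the quarter labels Q1..Q4
for row in con.execute("""
    WITH RECURSIVE q(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM q WHERE n < 4
    )
    SELECT 'Q' || n FROM q
"""):
    print(row)
```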

Designed and optimized ETL workflows, implementing parallel processing, data partitioning, and in-memory computing to enhance data pipeline performance, reducing processing time by 60% and improving overall efficiency by 40%.

Utilized Cosmos DB APIs (SQL API, MongoDB API, and Graph API) to enable seamless data access, retrieval, and manipulation, ensuring optimal performance across multiple application use cases.
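
A hedged sketch of a SQL API query through the azure-cosmos SDK; the endpoint, key, database, container, and query are placeholders:

```python
# Sketch of a SQL API query via the azure-cosmos SDK;
# endpoint, key, database, container, and query are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("analytics").get_container_client("events")

items = container.query_items(
    query="SELECT c.id, c.eventType FROM c WHERE c.eventType = @t",
    parameters=[{"name": "@t", "value": "page_view"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item)
```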

Engineered and fine-tuned scalable ETL pipelines, ensuring the timely availability of high-quality data while optimizing processing workflows to enhance data integrity, accessibility, and performance.

Conducted performance optimization on Oracle databases, leveraging Oracle Enterprise Manager to identify and resolve bottlenecks, reducing query execution time by 40%.

Built over 50 interactive Tableau dashboards, enabling 30+ stakeholders to track and analyze key performance indicators (KPIs) in real-time, leading to improved operational efficiencies.

Implemented AWS Glue for data integration and ETL processes, streamlining data pipelines and reducing data processing times by 40%, leading to more timely insights and decision-making.
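
The skeleton of a Glue PySpark job of that sort might look as follows; the catalog database, table, and S3 path are illustrative:

```python
# Skeleton Glue PySpark job; catalog database, table, and S3 path are invented.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table, drop rows missing the key, write curated Parquet to S3
frame = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)
cleaned = DynamicFrame.fromDF(
    frame.toDF().dropna(subset=["order_id"]), glue_context, "cleaned"
)
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```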

Automated file transfers and validation processes, ensuring the timely availability of high-quality data for seamless analysis.

Implemented Apache Kafka for real-time data streaming, improving data accessibility and facilitating seamless information flow between business units.
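
A minimal kafka-python producer in the same vein; the broker, topic, and payload are placeholders:

```python
# Minimal kafka-python producer; broker, topic, and payload are placeholders.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("business-events", {"unit": "finance", "metric": "daily_close", "value": 1.0})
producer.flush()  # block until the broker acknowledges the event
```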


