Data Analyst – Power BI

Location: Dayton, OH 45410
Salary: 100k
Posted: April 29, 2025


Resume:

NAVYA SREE

DATA ANALYST

+*(***)******* ************@*****.*** LinkedIn

PROFESSIONAL SUMMARY:

Results-driven Data Analyst with 4+ years of experience in data collection, cleaning, and analysis to support strategic business decisions. Proven expertise in building scalable dashboards, reports, and ETL workflows using SQL and Python. Skilled in leveraging data visualization tools such as Power BI and Tableau to deliver clear, actionable insights. Strong background in statistical analysis, business intelligence, and automation to improve reporting efficiency. Holds a master’s degree in Business Analytics and has a track record of driving data-informed improvements across organizations.

TECHNICAL SKILLS:

Data Processing & Analysis: SQL, Python (Pandas, NumPy), Apache Spark, PySpark

Data Visualization & BI Tools: Power BI, Tableau, Microsoft Excel (Dashboards, Pivot Tables, Scorecards)

GCP: BigQuery, Cloud Storage (GCS), Cloud SQL, Data Studio

AWS: S3, Redshift, RDS, Athena

Databases: MySQL, PostgreSQL, Google BigQuery, Amazon Redshift

ETL & Data Pipelines: SQL-based ETL, Apache Airflow, AWS Glue, GCP Dataflow

Data Management & Modeling: Data Cleaning, Transformation, Quantitative Analysis, Trend Identification, Data Modeling

Version Control & Collaboration: Git, GitHub, Jira

Data Governance & Reporting: Data Quality Checks, Stakeholder Reporting, Insight Generation, Cross-Functional Collaboration

Statistical Analysis & Reporting: Hypothesis Testing, A/B Testing, Descriptive & Inferential Statistics

PROFESSIONAL EXPERIENCE:

JP MORGAN CHASE, PENNSYLVANIA MAY 2023 – PRESENT

DATA ANALYST

RESPONSIBILITIES:

Collected and prepared data from AWS RDS and S3 for financial analysis, ensuring consistency and accuracy across sources.

Cleaned and transformed raw datasets using Python and Pandas to create structured data models for reporting.

Built interactive Power BI dashboards to visualize KPIs such as monthly revenue trends and customer churn.

Used SQL to perform complex joins and aggregations on transactional data stored in Amazon Redshift for executive reports.

Created automated scripts in Python to clean and enrich marketing datasets before loading them into S3.

Identified data quality issues using SQL checks and built alert mechanisms for missing or duplicate entries.

Developed scheduled ETL pipelines using AWS Glue to pull structured data from RDS and push it into Redshift.

Analyzed sales funnel performance by joining user activity logs from S3 with customer data from DynamoDB.

Created SQL queries in Athena to explore clickstream data stored in Parquet format for web traffic insights.

Designed a cost-effective reporting solution by partitioning data in S3 and optimizing queries with Athena.

Collaborated with finance and product teams to translate business questions into data queries and visualizations.

Presented monthly performance reports to senior stakeholders, supported by data visualizations and statistical summaries.

Used Python to automate weekly data pulls and reshape data into reporting-ready formats for Power BI.

Leveraged AWS Lambda to trigger real-time alerts when anomalies were detected in transaction data streams.

Helped reduce dashboard load times by optimizing data models and minimizing real-time refresh complexity.

Contributed to improving data workflows by documenting SQL templates and reusable queries for the analytics team.

Used Amazon QuickSight for ad-hoc visualizations when collaborating with non-technical stakeholders.

Participated in data governance discussions to define naming conventions and ensure consistent metric definitions.

Integrated datasets from third-party APIs and loaded the processed results into S3 for downstream analysis.

Supported A/B testing initiatives by analyzing experimental data and presenting performance comparisons to the product team.

Created reusable SQL scripts to automate monthly revenue and retention reports using data stored in Amazon Redshift.

Conducted root cause analysis on revenue dips by joining transactional data with customer feedback datasets.

Built Python-based validation scripts to check row counts and data types post-ETL from S3 to Redshift (a simplified sketch follows this list).

Collaborated with data engineers to fine-tune Glue jobs for transforming JSON logs into tabular formats for analysis.

Provided weekly insights on customer acquisition cost (CAC) trends using Power BI and Redshift queries.

Performed cohort analysis on user behavior data using SQL window functions to drive product feature updates.

Analyzed marketing campaign performance by integrating UTM-tagged web traffic from S3 and campaign metadata from RDS.

Led knowledge-sharing sessions to train team members on using Athena for querying semi-structured data in S3.

Developed Python scripts to detect outliers in financial transactions and log anomalies in AWS CloudWatch.

Created a centralized metadata tracking system in Excel to document source tables, refresh schedules, and key metrics used in dashboards.
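The post-ETL validation bullet above is easiest to picture with a small example. Below is a minimal, hypothetical sketch of a row-count and dtype check; the file names, column names, and expected types are placeholders rather than the production schema, and local CSV extracts stand in for the actual S3 and Redshift sources.

```python
import pandas as pd

# Hypothetical expected schema for the loaded table; names are illustrative only.
EXPECTED_DTYPES = {"transaction_id": "int64", "amount": "float64", "posted_at": "object"}

def validate(source_csv: str, target_csv: str) -> list[str]:
    """Compare row counts and column dtypes between a source and a target extract."""
    issues = []
    source = pd.read_csv(source_csv)
    target = pd.read_csv(target_csv)
    # Row-count parity between what left the source and what landed in the target.
    if len(source) != len(target):
        issues.append(f"row count mismatch: source={len(source)}, target={len(target)}")
    # Column presence and dtype checks on the loaded side.
    for column, expected in EXPECTED_DTYPES.items():
        actual = str(target[column].dtype) if column in target.columns else "missing"
        if actual != expected:
            issues.append(f"{column}: expected {expected}, got {actual}")
    return issues

if __name__ == "__main__":
    # In practice the extracts would be pulled from S3 and unloaded from Redshift;
    # local CSV paths keep this sketch self-contained.
    for problem in validate("source_extract.csv", "redshift_unload.csv"):
        print("VALIDATION ISSUE:", problem)
```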

ADROIT SOFTWARE SOLUTIONS, CHENNAI, INDIA JAN 2020 – SEP 2022

DATA ANALYST

RESPONSIBILITIES:

Pulled marketing and transactional data from Google Cloud Storage (GCS) and cleaned it using Python for dashboard reporting.

Wrote SQL queries in BigQuery to aggregate customer behavior data and identify churn risk indicators.

Built interactive Tableau dashboards integrated with BigQuery to visualize sales conversion rates by region.

Automated data transformation jobs using Cloud Composer (Apache Airflow) for daily report generation.

Designed partitioned and clustered BigQuery tables to improve performance and reduce query costs for monthly reporting.

Analyzed campaign performance by joining CRM and ad campaign datasets stored in BigQuery.

Used GCP Dataflow to preprocess large CSV files from GCS into clean Parquet format for analysis.

Collaborated with product teams to define and monitor key performance indicators (KPIs) using BigQuery queries and Data Studio.

Performed exploratory data analysis on user interaction logs from Cloud Logging and visualized usage trends in Data Studio.

Created Python scripts to load survey results from Google Sheets into GCS, and scheduled transformations via Cloud Functions.

Built a real-time reporting dashboard in Data Studio by connecting streaming data from Pub/Sub and pushing it into BigQuery.

Used Cloud SQL for storing intermediate analysis outputs and joining structured data for consolidated reports.

Conducted variance analysis between forecasted and actual financial metrics using Excel and BigQuery extracts.

Cleaned and validated JSON-based event data from mobile apps using Python and uploaded to GCS for batch processing.

Set up scheduled queries in BigQuery to refresh dashboard datasets every morning for leadership reviews.

Collaborated with data engineers to optimize GCS-to-BigQuery ETL pipelines, ensuring schema compatibility and data consistency.

Built cohort analysis queries using advanced window functions in BigQuery to support user retention insights (a simplified sketch follows this list).

Delivered insights to marketing stakeholders with interactive reports in Looker Studio (Data Studio).

Documented data dictionary, table schemas, and reporting metrics to support self-service analytics across departments.

Participated in Agile standups and sprint planning meetings, contributing to continuous improvement of data workflows.
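As referenced above, the cohort work relied on window functions in BigQuery. The following is a minimal sketch of such a query run through the BigQuery Python client; the dataset, table, and column names are hypothetical placeholders, not the actual schema used on the job.

```python
from google.cloud import bigquery

# Cohort retention sketch: assign each user to the month of their first event via a
# window function, then count distinct active users per cohort per month offset.
COHORT_SQL = """
WITH firsts AS (
  SELECT
    user_id,
    event_date,
    DATE_TRUNC(MIN(event_date) OVER (PARTITION BY user_id), MONTH) AS cohort_month
  FROM `analytics.user_events`  -- hypothetical table
)
SELECT
  cohort_month,
  DATE_DIFF(DATE_TRUNC(event_date, MONTH), cohort_month, MONTH) AS months_since_signup,
  COUNT(DISTINCT user_id) AS active_users
FROM firsts
GROUP BY cohort_month, months_since_signup
ORDER BY cohort_month, months_since_signup
"""

def run_cohort_report() -> None:
    client = bigquery.Client()  # uses default GCP credentials
    for row in client.query(COHORT_SQL).result():
        print(row.cohort_month, row.months_since_signup, row.active_users)

if __name__ == "__main__":
    run_cohort_report()
```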

Education

University of Dayton, OH, US Sep 2022 – June 2024

Master of Science in Business Analytics

Osmania University College for Women, India Jun 2017 – Jul 2020

Bachelor of Science in Mathematics, Statistics, Economics

3+ years of experience as a Data Analyst

2+ years of experience with reporting and visualization tools such as Looker, Tableau, Domo, etc.

Advanced SQL skills including multiple-table joins, unions, sub-queries, CTE, aggregations, temporary tables, and analytical functions

Technical expertise across the data lifecycle, including data mining, modeling, transformation, cleansing, and validation

Ability to take vague requests and transform them into concise deliverables

Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy

Excellent communication skills, both verbal and written

BS in Mathematics, Economics, Computer Science, Information Management, Statistics, or equivalent experience

Preferred

Intermediate to advanced knowledge of Python

Expertise with Looker

Experience with Data Build Tool (dbt)

Experience with embedded analytics solutions

Advanced knowledge of statistics and experience using statistical packages for analyzing datasets (Python, NumPy, Pandas, R, Excel, etc.)

Bachelor’s degree in Data Science, Statistics, Mathematics, Computer Science, or a related field.

3-7 years of experience in a data analysis or related role.

Proficiency in data analysis tools such as SQL, Python, R, or Excel.

Experience with data visualization tools like Tableau, Power BI, or Looker.

Strong problem-solving skills and attention to detail.

Excellent communication skills with the ability to present complex data to non-technical audiences.

Preferred

Master’s degree is a plus.

Knowledge of statistical methods and predictive modeling is a plus.

Familiarity with big data technologies (e.g., Hadoop, Spark) is advantageous but not required.


