
Data Analyst – Power BI

Location:
Cleveland, OH, 44115
Salary:
90000
Posted:
September 10, 2025

Contact this candidate

Resume:

AINDLA RANJITH REDDY

+1-216-***-**** | ***************@*****.*** | LinkedIn | GitHub | Microsoft Certified: Power BI Data Analyst Associate

PROFESSIONAL SUMMARY:

Results-driven Data Analyst with over 3 years of experience delivering data-driven solutions in financial and marketing domains. Proven expertise in building scalable ETL pipelines, creating automated dashboards, and developing advanced analytics using SQL, Python, Power BI, Tableau, and Looker Studio. Hands-on experience working with large datasets on cloud platforms including AWS (Redshift, S3, Athena, Glue) and GCP (BigQuery, Cloud Storage, Dataflow). Adept at data modeling, statistical analysis, A/B testing, and performance reporting. Strong communicator with a track record of collaborating across cross-functional teams to translate complex data into actionable business insights. Holds a Master’s degree in Information Systems and a Power BI Data Analyst certification.

TECHNICAL SKILLS:

Data Processing & Analysis: SQL, Python (Pandas, NumPy), Apache Spark, PySpark

Data Visualization & BI Tools: Power BI, Tableau, Microsoft Excel (Dashboards, Pivot Tables, Scorecards)

GCP: BigQuery, Cloud Storage (GCS), Cloud SQL, Data Studio

AWS: S3, Redshift, RDS, Athena

Databases: MySQL, PostgreSQL, Google BigQuery, Amazon Redshift

ETL & Data Pipelines: SQL-based ETL, Apache Airflow, AWS Glue, GCP Dataflow

Data Preparation & Modeling: Data Cleaning, Transformation, Quantitative Analysis, Trend Identification, Data Modeling

Version Control & Collaboration: Git, GitHub, Jira

Data Governance & Reporting: Data Quality Checks, Stakeholder Reporting, Insight Generation, Cross-Functional Collaboration

Statistical Analysis & Reporting: Hypothesis Testing, A/B Testing, Descriptive & Inferential Statistics

PROFESSIONAL EXPERIENCE:

DEUTSCHE BANK, NEW YORK, NY JAN 2024 – PRESENT

DATA ANALYST

RESPONSIBILITIES:

Collected and prepared data from AWS RDS and S3 for financial analysis, ensuring consistency and accuracy across sources.

Cleaned and transformed raw datasets using Python and Pandas to create structured data models for reporting.

Built interactive Power BI dashboards to visualize KPIs such as monthly revenue trends and customer churn.

Used SQL to perform complex joins and aggregations on transactional data stored in Amazon Redshift for executive reports.

Created automated scripts in Python to clean and enrich marketing datasets before loading them into S3.

Identified data quality issues using SQL checks and built alert mechanisms for missing or duplicate entries.

Developed scheduled ETL pipelines using AWS Glue to pull structured data from RDS and push it into Redshift.

Analyzed sales funnel performance by joining user activity logs from S3 with customer data from DynamoDB.

Created SQL queries in Athena to explore clickstream data stored in Parquet format for web traffic insights.

Designed a cost-effective reporting solution by partitioning data in S3 and optimizing queries with Athena.

Collaborated with finance and product teams to translate business questions into data queries and visualizations.

Presented monthly performance reports to senior stakeholders, supported by data visualizations and statistical summaries.

Used Python to automate weekly data pulls and reshape data into reporting-ready formats for Power BI.

Leveraged AWS Lambda to trigger real-time alerts when anomalies were detected in transaction data streams.

Helped reduce dashboard load times by optimizing data models and minimizing real-time refresh complexity.

Contributed to improving data workflows by documenting SQL templates and reusable queries for the analytics team.

Used Amazon QuickSight for ad-hoc visualizations when collaborating with non-technical stakeholders.

Participated in data governance discussions to define naming conventions and ensure consistent metric definitions.

Integrated datasets from third-party APIs and loaded the processed results into S3 for downstream analysis.

Supported A/B testing initiatives by analyzing experimental data and presenting performance comparisons to the product team.

Created reusable SQL scripts to automate monthly revenue and retention reports using data stored in Amazon Redshift.

Conducted root cause analysis on revenue dips by joining transactional data with customer feedback datasets.

Built Python-based validation scripts to check row counts and data types post-ETL from S3 to Redshift.

Collaborated with data engineers to fine-tune Glue jobs for transforming JSON logs into tabular formats for analysis.

Provided weekly insights on customer acquisition cost (CAC) trends using Power BI and Redshift queries.

Performed cohort analysis on user behavior data using SQL window functions to drive product feature updates.

Analyzed marketing campaign performance by integrating UTM-tagged web traffic from S3 and campaign metadata from RDS.
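As an illustrative sketch of the cohort-analysis work noted above: the resume describes SQL window functions over Redshift data, but the same technique is shown here in pandas with a small hypothetical dataset so the example is self-contained. Column names (user_id, signup_month, activity_month) are invented for illustration.

```python
# Cohort retention sketch in pandas (hypothetical data, not production code).
import pandas as pd

activity = pd.DataFrame({
    "user_id":        [1, 1, 2, 2, 3, 3, 3],
    "signup_month":   ["2024-01", "2024-01", "2024-01", "2024-01",
                       "2024-02", "2024-02", "2024-02"],
    "activity_month": ["2024-01", "2024-02", "2024-01", "2024-03",
                       "2024-02", "2024-03", "2024-04"],
})

def month_index(row):
    # Whole months elapsed between signup and the activity month.
    y0, m0 = map(int, row["signup_month"].split("-"))
    y1, m1 = map(int, row["activity_month"].split("-"))
    return (y1 - y0) * 12 + (m1 - m0)

activity["cohort_age"] = activity.apply(month_index, axis=1)

# Retention matrix: distinct active users per signup cohort per month offset.
retention = (activity
             .groupby(["signup_month", "cohort_age"])["user_id"]
             .nunique()
             .unstack(fill_value=0))
print(retention)
```

In SQL the same month-offset column would typically come from a DATEDIFF against the cohort's first activity computed with MIN(...) OVER (PARTITION BY user_id), with a GROUP BY replacing the unstack.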

ADROIT SOFTWARE SOLUTIONS, CHENNAI, INDIA JAN 2022 – JULY 2023

DATA ANALYST

RESPONSIBILITIES:

Pulled marketing and transactional data from Google Cloud Storage (GCS) and cleaned it using Python for dashboard reporting.

Wrote SQL queries in BigQuery to aggregate customer behavior data and identify churn risk indicators.

Built interactive Tableau dashboards integrated with BigQuery to visualize sales conversion rates by region.

Automated data transformation jobs using Cloud Composer (Apache Airflow) for daily report generation.

Designed partitioned and clustered BigQuery tables to improve performance and reduce query costs for monthly reporting.

Analyzed campaign performance by joining CRM and ad campaign datasets stored in BigQuery.

Used GCP Dataflow to preprocess large CSV files from GCS into clean Parquet format for analysis.

Collaborated with product teams to define and monitor key performance indicators (KPIs) using BigQuery queries and Data Studio.

Performed exploratory data analysis on user interaction logs from Cloud Logging and visualized usage trends in Data Studio.

Created Python scripts to load survey results from Google Sheets into GCS, and scheduled transformations via Cloud Functions.

Built a real-time reporting dashboard in Data Studio by connecting streaming data from Pub/Sub and pushing it into BigQuery.

Used Cloud SQL for storing intermediate analysis outputs and joining structured data for consolidated reports.

Conducted variance analysis between forecasted and actual financial metrics using Excel and BigQuery extracts.

Cleaned and validated JSON-based event data from mobile apps using Python and uploaded to GCS for batch processing.

Set up scheduled queries in BigQuery to refresh dashboard datasets every morning for leadership reviews.

Collaborated with data engineers to optimize GCS-to-BigQuery ETL pipelines, ensuring schema compatibility and data consistency.

Built cohort analysis queries using advanced window functions in BigQuery to support user retention insights.

Delivered insights to marketing stakeholders with interactive reports in Looker Studio (Data Studio).

Documented data dictionary, table schemas, and reporting metrics to support self-service analytics across departments.

Participated in Agile standups and sprint planning meetings, contributing to continuous improvement of data workflows.
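As a minimal sketch of the JSON event validation described in the bullets above (cleaning mobile-app event data before batch processing): field names (event_type, user_id, ts) and the sample records are hypothetical, and the real pipeline wrote results to GCS rather than printing them.

```python
# Validate newline-delimited JSON events, dropping malformed or incomplete rows.
import json

REQUIRED = {"event_type", "user_id", "ts"}  # hypothetical required fields

def clean_events(raw_lines):
    """Return (valid_events, rejected_count) for a batch of NDJSON lines."""
    valid, rejected = [], 0
    for line in raw_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            rejected += 1          # unparseable line
            continue
        if not REQUIRED.issubset(event):
            rejected += 1          # parseable but missing required fields
            continue
        valid.append(event)
    return valid, rejected

raw = [
    '{"event_type": "click", "user_id": 7, "ts": "2023-05-01T10:00:00Z"}',
    '{"event_type": "view", "user_id": 8}',  # missing ts -> rejected
    'not json at all',                       # malformed -> rejected
]
events, bad = clean_events(raw)
print(len(events), bad)  # 1 valid event, 2 rejected
```

Row-count and type checks of this kind are also what the post-ETL validation scripts mentioned earlier would run, comparing source and destination totals after each load.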

Education

Cleveland State University, Cleveland, OH Aug 2023 – May 2025 Master of Science in Information Systems

Brilliant Institute of Engineering and Technology, India Jun 2018 – Jul 2022 Bachelor of Technology in Computer Science and Engineering


