
Data Analyst – Power BI

Location:
United States
Posted:
September 19, 2025


Resume:

Sai Kalyan

Email: *****************@*****.***

Mobile: +1-937-***-****

Data Analyst

PROFESSIONAL SUMMARY:

Data Analyst with 4+ years of experience across the financial and healthcare domains, combining analytical thinking, attention to detail, and problem-solving with strong communication and presentation skills for technical and non-technical audiences.

Skilled at connecting the dots across applications and business functions to build an end-to-end (E2E) view, communicating effectively in both technical and business terms, with strong PL/SQL skills for complex queries and stored procedures.

Hands-on experience with Oracle Exadata and Oracle 10g and later, using query tools to support data analysis, and proficient with the Microsoft Office suite for reporting and documentation.

Experienced in Agile/Scrum teams, supporting work prioritization and resource assignment; a strong team player able to influence and guide teams, working well with minimal supervision.

Good at identifying priorities and managing multiple projects simultaneously, with experience in effort and financial estimation; willing to ask questions and seek assistance as needed.

Proficient in writing and tuning complex PL/SQL queries, stored procedures, and views, optimizing performance for both operational and ad-hoc analytical use cases.

Collaborated with cross-functional teams to define metric definitions and deliver actionable insights, participating in Agile delivery cycles, sprint planning, and retrospectives to align solutions with business priorities.

Used Git, GitHub, and Azure DevOps for code versioning and CI/CD pipeline deployments, supporting collaborative development in regulated environments, and ensuring code quality through peer-reviewed merge requests.

Created extensive pipeline documentation including source-to-target mappings and data dictionaries, ensuring knowledge transfer and audit-readiness, and supporting data governance initiatives across projects.

Highly motivated, detail-oriented data analyst passionate about building clean, modular, and scalable solutions that enable data-driven decisions and deliver measurable business impact.

Experienced in data transformation using PySpark, applying cleansing rules and validation logic to prepare analysis-ready datasets.
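To illustrate the kind of cleansing and validation logic described above, here is a minimal sketch in plain Python; the column names and rules are hypothetical, and a production job would express the same logic with PySpark DataFrame operations.

```python
# Minimal sketch of cleansing/validation rules that prepare an
# analysis-ready dataset. Column names and rules are hypothetical;
# a production version would use PySpark instead of plain Python.
import re

def cleanse(records):
    """Deduplicate by record id, strip whitespace, and drop rows
    that fail basic validation (missing id or malformed amount)."""
    seen = set()
    clean = []
    for row in records:
        rid = (row.get("id") or "").strip()
        amount = (row.get("amount") or "").strip()
        if not rid or rid in seen:
            continue  # drop duplicates and rows without an id
        if not re.fullmatch(r"-?\d+(\.\d+)?", amount):
            continue  # drop rows with a non-numeric amount
        seen.add(rid)
        clean.append({"id": rid, "amount": float(amount)})
    return clean
```

The same shape of rule set (trim, deduplicate, type-check, cast) carries over directly to `filter`/`dropDuplicates`/`withColumn` calls in PySpark.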

Designed and developed dynamic dashboards using Power BI and Tableau to deliver business insights across key KPIs, care management trends, and regulatory metrics.

Integrated external APIs and third-party data feeds using Python and RESTful services, ingesting economic indicators and census data into core analysis pipelines.
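A feed-ingestion step of this sort might look like the sketch below; the endpoint, field names, and feed schema are hypothetical, and a real pipeline would follow the provider's documented REST API.

```python
# Sketch of ingesting a third-party JSON feed into pipeline-ready rows.
# The field names and schema here are hypothetical.
import json
from urllib.request import urlopen  # used only when fetching live data

def parse_indicators(payload):
    """Flatten a JSON feed of economic indicators into
    (name, date, value) tuples, skipping gaps in the series."""
    doc = json.loads(payload)
    rows = []
    for item in doc.get("indicators", []):
        for point in item.get("observations", []):
            if point.get("value") is None:
                continue  # skip observations with no reported value
            rows.append((item["name"], point["date"], float(point["value"])))
    return rows

def fetch_indicators(url):
    """Fetch and parse a live feed (not exercised here)."""
    with urlopen(url) as resp:
        return parse_indicators(resp.read().decode("utf-8"))
```

Separating parsing from fetching keeps the transformation unit-testable without network access.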

Performed root cause analysis and pipeline debugging using Azure Monitor and CloudWatch logs, significantly improving uptime and reliability across mission-critical jobs.

Supported UAT and QA teams in defining test cases and expected outputs for ETL workflows and dashboard validation, improving release quality and stakeholder confidence.

Implemented data governance audits, tracking data exposure and generating compliance reports to ensure data security and regulatory compliance.

Leveraged cloud platforms for data transformation tasks, optimizing cost and scalability for asynchronous and micro-batch workflows.

Built ETL pipelines to extract, cleanse, and load data from relational, flat-file, and API-based sources into scalable cloud warehouses.

Communicates effectively across the organization, tailoring messages to technical and business audiences with detailed technical explanations and executive summaries.

Works well in a team environment with minimal supervision; a strong team player able to influence and guide teams toward success.

TECHNICAL SKILLS:

Databases - MySQL, PostgreSQL, SQL Server, Oracle, MongoDB, Snowflake, Redshift, Oracle Exadata

Data Analysis Tools - Excel, Google Sheets, Jupyter Notebook, Tableau, Power BI, Looker, QlikView

Programming Languages - SQL, Python, R, DAX, VBA, JavaScript (basic), PL/SQL

Data Wrangling & ETL - Pandas, NumPy, Alteryx, Power Query, Apache Airflow, Talend, SSIS

Cloud Services - AWS (S3, Redshift, QuickSight), Azure (Data Lake, Synapse, Power BI), GCP (BigQuery, Data Studio)

Data Visualization - Tableau, Power BI, Matplotlib, Seaborn, Plotly, ggplot2

Methodologies - Agile/Scrum, Waterfall, SDLC, CRISP-DM

Others - Microsoft Office suite

PROFESSIONAL EXPERIENCE:

Truist Financial Mar 2023 – Present

Data Analyst

Responsibilities:

Developed dynamic ETL pipelines using Azure Data Factory to process transactional banking data, enabling seamless reporting for the treasury and compliance divisions.

Created parameter-driven ADF workflows using Lookup, Filter, and ForEach activities to manage daily ingestion of CSV, JSON, and Excel files from Blob Storage into Synapse Analytics reporting tables.

Designed Power BI dashboards visualizing loan disbursement trends, transaction volumes, and fraud pattern anomalies, improving executive oversight of financial operations and enabling faster response times.

Collaborated with finance analysts and auditors to define transformation logic for core financial entities and automated pipeline triggers to ensure data availability for quarterly audits and risk assessments.

Developed Spark-based Databricks notebooks to cleanse, deduplicate, and join transaction records across multiple sources, improving reporting accuracy and consistency across financial dashboards.

Implemented Azure Key Vault secrets management to securely pass credentials to ADF, preventing exposure of sensitive configuration data and ensuring compliance with internal security standards.

Created stored procedures and views in Azure SQL to handle post-load validations and data quality rules, ensuring completeness, referential integrity, and audit-readiness of reconciled banking records.

Optimized slow-performing T-SQL procedures by adding indexes, refactoring joins, and rewriting aggregations, reducing reporting lag and improving SLA adherence for nightly data refresh jobs.
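As a small illustration of the indexing side of this work, the snippet below uses SQLite (via Python's standard library) as a lightweight stand-in for Azure SQL; the table and column names are hypothetical.

```python
# Illustration of a covering index supporting a reporting aggregation.
# SQLite stands in for Azure SQL / T-SQL here; table and column names
# are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE txn (id INTEGER PRIMARY KEY, branch TEXT, amount REAL);
    INSERT INTO txn (branch, amount) VALUES
        ('north', 100.0), ('north', 50.0), ('south', 75.0);
    -- Covering index so the per-branch aggregation can be served
    -- from the index instead of scanning the base table.
    CREATE INDEX ix_txn_branch_amount ON txn (branch, amount);
""")

totals = dict(conn.execute(
    "SELECT branch, SUM(amount) FROM txn GROUP BY branch"
).fetchall())
# totals -> {'north': 150.0, 'south': 75.0}
```

On a real warehouse table the same idea applies at scale: an index keyed on the grouping column (and covering the aggregated column) avoids a full table scan for nightly rollups.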

Partnered with business stakeholders to prioritize high-impact datasets for automation, rapidly building scalable pipelines that replaced legacy SSIS workflows and improved maintainability of the reporting stack.

Used Git-integrated Azure DevOps for version control, pipeline deployment, and defect tracking, aligning with Agile sprint planning and enabling faster resolution of development and QA feedback.

UnitedHealth Group Feb 2020 – Apr 2022

Data Analyst

Responsibilities:

Built scalable ETL pipelines using AWS Glue and PySpark to ingest, clean, and standardize healthcare claims from multiple provider systems into Redshift for centralized analytics and downstream consumption.

Developed custom Spark transformations to normalize diagnosis codes, join patient encounter records, and enrich clinical dimensions, improving reporting efficiency and model readiness across care programs.

Automated ingestion of new patient records using S3 event triggers and Lambda functions, initiating downstream Glue workflows and ensuring real-time updates to claims processing pipelines.
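An S3-triggered handler of this kind might be shaped like the sketch below. The event structure follows the standard S3 notification format; the Glue job name is hypothetical, and the actual `boto3` submission is left commented so the handler runs without AWS credentials.

```python
# Skeleton of an S3-triggered Lambda that would kick off a downstream
# Glue workflow. The Glue job name is hypothetical; the boto3 call is
# commented out so this sketch runs without an AWS account.
GLUE_JOB_NAME = "patient-records-standardize"  # hypothetical job name

def lambda_handler(event, context=None):
    """Extract bucket/key pairs from an S3 event and return the Glue
    job arguments that would be submitted for each new object."""
    submissions = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        args = {"--input_path": f"s3://{bucket}/{key}"}
        # import boto3
        # boto3.client("glue").start_job_run(
        #     JobName=GLUE_JOB_NAME, Arguments=args)
        submissions.append(args)
    return {"job": GLUE_JOB_NAME, "submitted": submissions}
```

Keeping the event parsing separate from the AWS call makes the trigger logic testable with a sample event dictionary.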

Collaborated with compliance teams to implement HIPAA-compliant data masking policies in Glue jobs and Redshift views, safeguarding sensitive PII fields in analytical and reporting environments.

Created Redshift-backed Power BI dashboards visualizing patient utilization trends, chronic condition management, and care team performance across provider networks and regions.

Tuned Redshift queries by applying distribution keys, sort keys, and WLM queue adjustments, improving report responsiveness and lowering compute usage during peak clinical analysis hours.

Configured validation logic using PyDeequ in AWS Glue to monitor data quality metrics, and implemented SNS notifications for failed jobs, schema mismatches, and record count anomalies.
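PyDeequ itself runs against a Spark session; the plain-Python sketch below illustrates the same kinds of checks (completeness, minimum row count) with hypothetical thresholds and column names, and is not the PyDeequ API.

```python
# Plain-Python illustration of the kinds of data quality checks a
# PyDeequ job would run (completeness, minimum row count). Thresholds
# and column names are hypothetical; this is not the PyDeequ API.
def run_quality_checks(rows, required_columns, min_rows=1):
    """Return a list of failed-check messages; empty means all passed."""
    failures = []
    if len(rows) < min_rows:
        failures.append(f"record count {len(rows)} below minimum {min_rows}")
    for col in required_columns:
        missing = sum(1 for r in rows if r.get(col) in (None, ""))
        if missing:
            failures.append(f"column '{col}' has {missing} missing values")
    return failures
```

In a Glue job, a non-empty failure list would be what drives the SNS notification for the run.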

Supported care management teams by generating patient segmentation datasets, enabling personalized outreach based on predictive readmission risk scores and historical visit behavior patterns.

Maintained documentation for Glue job flows, schema evolution, and transformation rules, ensuring team knowledge continuity and simplifying stakeholder impact analysis during model changes.

Participated in sprint planning and stand-ups, coordinated backlog grooming with clinical SMEs, and contributed to cross-team knowledge-sharing sessions to drive data literacy and adoption.

EDUCATION:

Master's in Business Analytics - University of Dayton
