Krishna Sai
Email: **************@*****.***
Mobile: +1-214-***-****
Data Analyst
PROFESSIONAL SUMMARY:
Data Analyst with 5+ years of experience delivering data-driven insights within agile teams, using SQL, Python, and BI tooling to solve business problems with accuracy and rigor.
Proficient in writing and tuning complex PL/SQL queries and stored procedures on Oracle Exadata, connecting data across applications to provide an end-to-end (E2E) view for analysis.
Skilled with the Microsoft Office suite, creating presentations that communicate complex information clearly to both technical and non-technical audiences across the organization.
Developed and maintained data solutions while managing multiple concurrent projects, working in Agile/Scrum teams on prioritization and resource planning.
Built advanced SQL queries using joins, CTEs, window functions, and CASE logic to support ETL workflows, reporting, and KPI tracking across OLAP and OLTP systems.
Automated recurring data reporting and cleanup processes in Python, reducing manual workloads and improving data quality.
Migrated legacy reports into modern, source-connected solutions, improving refresh cycles and dashboard usability for stakeholders.
Integrated hybrid cloud environments to unify enterprise datasets and streamline reporting pipelines across business units.
Created parameterized pipelines with dynamic datasets and lookup logic, improving modularity and enabling rapid onboarding of new data sources with minimal rework.
Implemented security and governance measures aligned with data protection policies to ensure secure access and compliance with internal IT controls.
Delivered curated, validated datasets and reusable dataflows to analysts and end users, reducing dependency on engineering and increasing analytics velocity across departments.
Built ML pipelines for model training, evaluation, and prediction on cleaned datasets, enabling intelligent automation for operations use cases.
Enabled monitoring and logging for production ETL pipelines to ensure SLA adherence and support root cause diagnostics.
Created business glossaries, data dictionaries, and metadata-driven documentation to support data literacy and reduce onboarding time.
Participated in Agile sprints, backlog grooming, and sprint reviews to align development efforts with business priorities and ensure timely delivery of analytics features and performance improvements.
TECHNICAL SKILLS:
Programming Languages - Python (Pandas, NumPy), SQL, R, PL/SQL
Database Technologies - Azure SQL, PostgreSQL, MySQL, BigQuery, Oracle Exadata, Oracle 10g
Cloud Platforms - Microsoft Azure, Google Cloud Platform (GCP), IBM Cloud
Tools & Frameworks - Power BI, Tableau, Apache Spark, Databricks, Git
ETL & Data Engineering - Azure Data Factory, Dataflow (GCP), Apache Beam, Cloud Composer
Data Visualization - Power BI, Google Data Studio, Looker
Operating Systems - Windows, Linux
Others - JIRA, Agile, SDLC, GitHub, Excel (Pivot Tables, VLOOKUP), Microsoft Office Suite
PROFESSIONAL EXPERIENCE:
CVS Health June 2024 – Present
Data Analyst
Responsibilities:
Designed end-to-end ETL pipelines in Azure Data Factory and Databricks, automating data ingestion and transformation for real-time analytics and performance reporting, with carefully validated data mapping and transformation logic.
Built 30+ interactive Power BI dashboards using advanced DAX, bookmarks, slicers, drill-throughs, and row-level security, enabling self-service analytics and executive decision-making across pharmacy operations, inventory, and claims.
Integrated Azure Data Lake and GCP BigQuery datasets using Spark notebooks to build unified data models, enhancing reporting consistency and supporting compliance with federal and internal data regulations.
Developed optimized SQL views and stored procedures with joins, CTEs, and conditional logic to transform and standardize legacy and cloud-based data sources for enterprise-wide reporting pipelines.
Collaborated with business stakeholders to define KPIs and calculated metrics in Power BI dashboards, aligning data outputs with strategic objectives and increasing adoption of self-service reporting tools.
Orchestrated dynamic, parameterized ADF pipelines with error handling, alerts, and scheduling logic to support scalable, resilient data flows with minimal manual intervention across departments.
Conducted root cause analysis on data issues using SQL profiling scripts and Python diagnostics, identifying anomalies and improving the accuracy of mission-critical executive dashboards.
Delivered validated datasets through Power BI dataflows and curated views, reducing dependencies on engineering teams and accelerating time-to-insight for business users and analytics teams.
Modeled semantic layers using star schemas and role-playing dimensions to support performant Power BI dashboards for operational, regulatory, and inventory reporting needs.
Implemented secure access controls in the Power BI service through workspace roles, security groups, and dataset-level permissions to comply with enterprise governance and healthcare data protection standards.
PNC Bank June 2023 – June 2024
Data Analyst Intern
Responsibilities:
Created Power BI dashboards using DAX, slicers, and KPI indicators to visualize customer metrics, automate loan reporting, and enable dynamic exploration of financial performance trends across key business segments.
Developed SQL queries and Python scripts to ingest, clean, and validate financial records, converting legacy Excel-based processes into reliable PostgreSQL data pipelines with automated transformation and quality checks.
Supported credit risk analytics by designing dashboards that visualized default rates, exposure limits, and delinquency trends, helping internal teams monitor and respond to risk factors in real time.
Built ETL workflows using IBM DataStage and SQL to aggregate and standardize credit system data, reducing manual effort and enabling unified reporting across mortgage and consumer lending portfolios.
Conducted exploratory data analysis (EDA) using Python (Pandas, NumPy) and SQL, identifying inconsistencies and generating insights that improved model accuracy and dashboard reliability for executive consumption.
Converted SAS-based logic into SQL queries and Power BI visuals, eliminating external licensing costs and enabling a more scalable, maintainable reporting infrastructure for internal analytics users.
Maintained GitHub repositories for SQL scripts, transformation logic, and documentation, supporting change control, collaboration, and consistent delivery of reporting features during agile development cycles.
Designed PostgreSQL views and dynamic queries to support Power BI dashboards tracking financial health, segmentation by credit score bands, product holdings, and account-level behaviors across customer cohorts.
Validated ETL outputs against reference data sources and legacy reports to ensure reporting accuracy, conducting reconciliations and preparing audit logs to meet compliance and internal audit requirements.
Participated in Agile/Scrum ceremonies including sprint planning, standups, and retrospectives, contributing to feature prioritization and delivery of dashboard improvements based on evolving stakeholder feedback.
Infosys June 2019 – Dec 2022
Data Analyst
Responsibilities:
Created Power BI dashboards with KPIs, filters, and time-based drilldowns to visualize delivery status, product availability, and operational efficiency across client accounts in the logistics, retail, and supply chain sectors.
Cleaned and transformed raw data using Python (Pandas) and SQL Server to generate structured, analysis-ready datasets that improved report accuracy and reduced manual data preparation across multiple teams.
Built stored procedures and reusable SQL functions to calculate metrics, generate reporting layers, and support self-service dashboards used for tracking sales, inventory, and fulfillment cycle efficiency.
Designed and deployed automated ETL pipelines using Azure Data Factory with lookup, conditional split, and aggregate logic to mirror Excel workflows in scalable, cloud-native formats.
Migrated Tableau dashboards and static Excel charts into Power BI reports, enabling dynamic slicing, drill-downs, and real-time refresh through tabular models and DirectQuery integration.
Implemented PySpark pipelines on Azure Databricks to parse and process unstructured text files, storing transformed outputs in Azure Data Lake for downstream reporting and audit use.
Worked with data engineers to validate ingestion logic and maintain schema consistency across rapidly evolving source systems, ensuring stability and accuracy of downstream reporting artifacts.
Maintained GitHub repositories for SQL scripts, pipeline logic, and dashboard documentation, enabling version control, team collaboration, and standardized development workflows across environments.
Led user training on Power BI features including bookmarks, slicers, and tooltip pages, improving self-service capabilities and reducing ad hoc report requests from business stakeholders.
Authored comprehensive documentation for KPIs, ETL workflows, refresh cycles, and dashboard configurations, streamlining onboarding and reducing time spent on support and clarification requests.
EDUCATION:
Master of Science in Advanced Data Analytics - University of North Texas
Bachelor of Technology in Mechanical Engineering - Prasad V. Potluri Siddhartha Institute of Technology