Uday Kiran Panuganti
Cleveland, OH *******************@*****.*** +1-216-***-**** LinkedIn
SUMMARY
Data & Business Intelligence Professional with 4+ years of experience building scalable data pipelines, conducting exploratory data analysis, and enabling real-time analytics. Proficient in Python, SQL, Hive, Hadoop, and Spark, with expertise in ETL, data transformation, and streaming workflows. Skilled in data governance, quality assurance, and documentation practices. Experienced in monitoring and root-cause analysis with Splunk, and in collaborating with cross-functional teams to translate business goals into data-driven solutions that improve performance and decision-making.
TECHNICAL SKILLS
Splunk - Monitoring, log analysis, root-cause investigation
Data Governance & Quality - Documentation, validation, compliance
Streaming Workflows - Kafka, Spark Streaming, Airflow
Power BI - DAX, Power Query, Data Modeling, Report Building
Tableau - Dashboards, Calculated Fields, Tableau Prep
Looker - LookML, Data Exploration, Looker Studio
QlikView/Qlik Sense - Data Load Scripting, Set Analysis
Google Data Studio - Report Creation, Data Blending
Excel (Advanced) - Pivot Tables, Power Query, VBA, Macros
Database & Querying - SQL (MySQL, PostgreSQL, SQL Server, Oracle, Snowflake, BigQuery, Redshift)
NoSQL Databases - MongoDB, Cassandra, DynamoDB
Programming & Scripting - Python, Pandas, NumPy, Matplotlib, Seaborn, SciPy, Scikit-learn, R, JavaScript
Cloud Platforms - AWS (S3, Redshift, Athena), Azure (Synapse, Data Factory), GCP (BigQuery, Dataflow)
Big Data Processing - Apache Spark (PySpark), Hadoop, Databricks
ETL & Data Integration Tools - Informatica, Talend, Alteryx, SSIS, dbt (Data Build Tool), Apache Airflow, RESTful APIs, Web Scraping, Kafka
Statistical & Machine Learning Skills - Statistical Analysis (Hypothesis Testing, A/B Testing, ANOVA), Predictive Modeling (Regression, Classification, Clustering)
Version Control - Git, GitHub, Bitbucket
Project Management - JIRA, Confluence, Agile/Scrum methodologies
CI/CD for Data Pipelines - GitHub Actions, Jenkins
WORK EXPERIENCE
Senior Data Analyst
Bread Financial - Columbus, OH, Mar 2025 - Present
Designed and developed interactive dashboards and reports in Power BI, Tableau, and Looker, delivering actionable insights that improved executive decision-making speed by 30%.
Built and optimized Snowflake data models and BigQuery SQL queries, streamlining analytics pipelines and ensuring compliance with governance standards.
Automated KPI alerts and performance tracking in Power BI, eliminating 20+ hours of manual reporting per month.
Developed and deployed machine learning models (Keras + Spark, Scikit-learn) for fraud detection, anomaly identification, and credit risk scoring, improving regulatory reporting and risk controls.
Processed large-scale datasets using AWS, Hadoop, Hive, and Spark, ensuring scalability and accuracy in analytics.
Collaborated in Agile ceremonies (sprint planning, standups, retrospectives, demos) to ensure timely delivery of analytics solutions.
Conducted ETL development and data validation (SQL, Excel, JSON, REST APIs) to integrate and cleanse data from diverse sources.
Environment: AWS, Excel, Hadoop, Hive, Looker, Oracle, Power BI, Python, R, Snowflake, Spark, SQL
Data Analyst
DSW Designer Shoe Warehouse - Columbus, OH, Aug 2024 - Feb 2025
Developed predictive and prescriptive models using Python and R, enabling better demand forecasting and inventory management.
Built interactive dashboards in Tableau and Power BI with data from Azure SQL, BigQuery, and SQL Server, improving reporting accuracy and decision-making across departments.
Migrated and optimized reporting workflows to Azure Cloud, reducing data refresh times by 35%.
Automated ETL validation and reporting processes, improving data reliability and cutting manual checks by 25%.
Designed and implemented a chatbot using Google Dialogflow, reducing IT support workload by 35%.
Partnered with IT teams to optimize data pipelines for BI dashboards, ensuring smooth integration and scalability.
Conducted data analysis on daily trades and positions data from Investment Data Warehouse (IDW) and Charles River Database (CRD), supporting financial reporting accuracy.
Environment: Azure, SQL Server, MySQL, BigQuery, Power BI, Tableau, Python, R, Spark, Hadoop, SSRS, Excel
Data Analyst
GeBBS Healthcare Solutions - Hyderabad, India, Nov 2021 - Jul 2023
Designed and optimized SQL queries, ETL scripts, and data pipelines, improving reporting efficiency and accuracy for healthcare analytics and RCM processes.
Implemented Spark and Python ML modules for predictive analytics and risk adjustment, improving model accuracy and patient outcome predictions.
Automated data validation and reporting processes with Python and Excel, reducing manual effort by 20% while ensuring compliance with healthcare data standards.
Extracted and integrated data from SQL Server, Azure SQL DB, Salesforce, and Snowflake, ensuring consistency across enterprise reporting systems.
Migrated reports from Spotfire to Tableau, Power BI, and Superset, enhancing data visualization capabilities.
Built and published secure data sources in Tableau Server for organization-wide consumption.
Collaborated in Agile teams, contributing to BRD/LLD/HLD documentation and sprint-based deliverables.
Developed and deployed customer-facing chatbots using Google Dialogflow Enterprise, reducing support workload.
Environment: Azure, SQL Server, Snowflake, BigQuery, Redshift, Hadoop, Hive, Spark, Python, R, Power BI, Tableau, Excel
Programmer Analyst
Barclays - Hyderabad, India, Apr 2020 - Oct 2021
Designed and implemented ETL pipelines with Informatica and Alteryx, ensuring adherence to data quality and governance standards within a regulated financial environment.
Analyzed large-scale user log data with Spark SQL and engineered features for machine learning models, improving classification and prediction accuracy.
Built predictive models in R, Python, and Spark MLlib for credit risk, fraud detection, and regulatory reporting, supporting compliance with banking risk management frameworks.
Developed and optimized OLAP cubes in Snowflake and SSAS, enhancing reporting performance and reducing query times by 40%.
Conducted feature engineering, trend analysis, and forecasting, delivering insights that supported product management and consumer awareness initiatives.
Implemented row-level security in Power BI and integrated with Power BI Service for secure dashboard access.
Delivered predictive analytics solutions with BigQuery ML and Looker, visualizing KPIs and enabling more accurate forecasting.
Deployed machine learning models on AWS, improving scalability and production performance of analytics solutions.
Environment: AWS, Snowflake, SQL Server, Informatica, Alteryx, Spark, Hadoop, Kafka, Python, R, Power BI, Looker
EDUCATION
Cleveland State University - Cleveland, OH
Master of Science in Information Systems