Srikanth Boinapelly
Data Analyst
Ohio, USA ********.***********@*****.*** +1-330-***-**** LinkedIn

Summary
Results-driven Data Analyst with 5 years of experience in data analytics, AI-driven insights, and cloud data engineering. Skilled in developing scalable pipelines using dbt, PySpark, and data lakehouse platforms (Snowflake, BigQuery, AWS Redshift) to support enterprise analytics. Strong background in predictive analytics, anomaly detection, and forecasting to improve fraud prevention, risk management, and operational efficiency. Adept at creating interactive dashboards, automated insights, and data storytelling solutions with Tableau, Power BI, and Looker. Experienced in leveraging Generative AI (LLMs) and Explainable AI (XAI) to automate reporting, improve transparency, and accelerate data-driven decision-making across the healthcare and financial sectors.

Technical Skills
• Programming Languages: Python, SQL, R, dbt (Data Build Tool)
• Data Analysis: Pandas, NumPy, Statistical Analysis, A/B Testing, Feature Engineering
• Data Visualization: Tableau, Power BI, Matplotlib, Seaborn, Looker, Data Storytelling, Automated Insights
• Machine Learning: Scikit-learn, XGBoost, Logistic Regression, Random Forest, K-Means, TensorFlow, PyTorch
• AI: Explainable AI, Generative AI for Data, Predictive Analytics, Anomaly Detection, Data-Driven Decision Making
• ETL & Big Data: AWS Glue, PySpark, Apache Airflow
• Cloud Platforms: AWS (S3, Lambda, Redshift, EC2), Snowflake, Google Cloud, Azure Data Factory, Databricks
• Databases: MySQL, PostgreSQL, BigQuery, Redshift, Oracle, MongoDB
• Development & Tools: Git, Jira, Excel (Power Pivot, VBA), Jupyter Notebook, Google Colab

Education
Master of Computer Science
Kent State University, Kent, Ohio Aug 2022 - Dec 2023

Bachelor of Computer Science
Aurora's Technological and Research Institute, Hyderabad, India Jul 2015 - May 2019

Experience
Bristol Myers Squibb, USA Feb 2024 - Present
Data Analyst
• Developed and optimized complex SQL queries to extract, manipulate, and analyze large datasets from relational databases, improving query efficiency by 30% through advanced indexing and query optimization techniques.
• Leveraged advanced Excel functionalities (Pivot Tables, VLOOKUP, Power Query) to clean, transform, and visualize datasets, reducing manual reporting time by 20% and increasing team productivity.
• Designed interactive Tableau dashboards and reports with custom visuals, cutting report generation time by 40% and providing real-time insights for key stakeholders.
• Utilized Python (Pandas, NumPy) for data preprocessing and manipulation, creating automated scripts that reduced data cleaning time by 25% and improved the accuracy of monthly reports.
• Applied statistical analysis in R, building regression models and time-series forecasts that delivered a 10% improvement in demand forecasting accuracy and optimized inventory management.
• Streamlined ETL processes with Talend for extracting, transforming, and loading data from multiple sources, reducing data processing times by 35% and enhancing data quality.
• Maintained data models using dimensional modeling techniques (Star Schema), improving query performance by 15% and enabling more efficient data warehouse management.
• Collaborated with cross-functional teams to identify key metrics and KPIs, developing real-time dashboards that contributed to a 10% increase in decision-making speed for senior management.
• Integrated dbt into data pipelines to streamline transformation workflows, improving model maintainability and aligning with the company’s transition to a data lakehouse architecture.
• Delivered automated insights and data storytelling through Tableau and Looker, enabling senior leadership to track KPIs in real time and make faster, data-driven business decisions.
• Built forecasting models and risk analytics dashboards powered by LLMs (Generative AI), enhancing explainability (XAI) and supporting compliance teams with transparent decision-making processes.

Hexaware Technologies, India Jun 2019 - Aug 2022
Data Analyst
• Preprocessed and aggregated large-scale transaction data using SQL (window functions, CTEs, complex joins) to create fraud detection datasets, reducing data preparation time by 40% and storing optimized datasets in AWS S3 for scalable access.
• Performed data analysis using Python (Pandas, NumPy) to identify transaction patterns, detect anomalies, and refine financial data quality. Improved data validation efficiency by 30%, enhancing the accuracy of fraud detection metrics and reporting.
• Collaborated with the data science team to integrate ML-based fraud detection models into financial risk analysis, ensuring seamless data flow and validation. Enhanced fraud detection accuracy by 25% by optimizing dataset structures for model performance.
• Designed real-time Power BI dashboards, providing stakeholders with fraud insights through interactive visualizations, drill-through analysis, and filters, increasing reporting efficiency by 50% and improving fraud response time.
• Automated ETL workflows by integrating AWS Glue for seamless data extraction, transformation, and loading, reducing manual intervention by 70% while documenting processes and training teams on fraud analysis techniques and Power BI usage.
• Standardized and optimized financial datasets to improve fraud detection efficiency, ensuring compliance with regulatory standards. Enhanced reporting accuracy and data-driven decision-making in financial risk assessment processes.
• Applied predictive analytics and anomaly detection to financial transaction data, proactively identifying irregularities that reduced fraud risk exposure by 20%.
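Work Sample

An illustrative sketch of the anomaly-detection approach described above: a minimal per-account z-score pass over transaction amounts in Python (Pandas), of the kind used as a first filter before ML-based fraud models. Column names ("account_id", "amount") and the threshold are hypothetical, not taken from any employer's system.

```python
import pandas as pd


def flag_anomalies(df: pd.DataFrame, z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag transactions whose amount deviates strongly from the
    per-account mean, using a simple z-score rule.

    Accounts with too few transactions to estimate a spread
    (std is NaN) are never flagged.
    """
    grouped = df.groupby("account_id")["amount"]
    mean = grouped.transform("mean")   # per-account mean, aligned to rows
    std = grouped.transform("std")     # per-account sample std, aligned to rows
    z = (df["amount"] - mean) / std    # NaN where std is NaN or undefined
    # NaN comparisons evaluate to False, so unflaggable rows stay False.
    return df.assign(zscore=z, is_anomaly=z.abs() > z_threshold)
```

The per-account grouping matters: a $500 charge may be routine on one account and a clear outlier on another, so a single global threshold would miss both cases.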