Data Analyst Business Intelligence

Location:
Seattle, WA, 98109
Salary:
$70,000-$90,000/year
Posted:
September 10, 2025

Resume:

Sakshi Singhal

Data Analyst / Business Intelligence Analyst

******.*@*********.*** 716-***-**** Seattle WA LinkedIn

Summary

Results-driven Data Analyst & Business Intelligence Engineer with 5+ years of experience designing scalable ETL pipelines, building cloud-based data warehouses, and delivering data-driven solutions across e-commerce, finance, and healthcare. Expert in Python, SQL, AWS, Snowflake, Tableau, Power BI, and machine learning, with a focus on transforming complex, high-volume datasets into actionable business insights. Proven track record of improving query performance by up to 83%, reducing reporting time by 50%, and improving model prediction accuracy by 15–20%. Adept at data modeling, governance, and dashboard automation, with deep expertise in fraud detection, risk modeling, and predictive analytics. Skilled collaborator who partners with cross-functional teams to define KPIs, improve operational efficiency, and enable strategic, data-driven decision-making.

Skills

● Programming & Scripting: Python (Pandas, NumPy, SciPy, Scikit-learn, Seaborn, Matplotlib, Keras, XGBoost), R, SQL (T-SQL, PL/pgSQL), DAX, VBA

● Machine Learning & Statistics: Predictive Modeling, Classification & Regression (Random Forest, Logistic Regression, Decision Trees), Clustering (K-means), Feature Engineering, Time-Series Forecasting, Statistical Hypothesis Testing, Risk Modeling, Anomaly Detection, Fraud Analytics

● Business Intelligence & Visualization: Tableau, Power BI, Looker, AWS QuickSight, SSRS, Looker Studio — advanced dashboard design, interactive reporting, KPI tracking, and executive analytics

● ETL, Data Integration & Warehousing: AWS Glue, Informatica PowerCenter, SSIS, Apache Airflow, AWS Redshift, Snowflake, PostgreSQL, MySQL, SQL Server, MongoDB — ETL pipeline design, orchestration, and optimization

● Cloud Platforms & Big Data: AWS (S3, EC2, Redshift, Lambda, SageMaker, IAM, Glacier), Azure Synapse, Google BigQuery, Data Lake Architecture, Data Governance, Metadata Management

● Data Engineering & Performance Optimization: Query Optimization (reducing execution time by 50–80%), Stored Procedures, Functions, Indexing Strategies, Partitioning, Caching, Data Modeling (Star/Snowflake Schemas), API Data Ingestion, Real-Time Data Streaming

● Data Quality & Governance: Data Cleansing, Validation Checks, Data Mapping, Lineage Tracking, Standardization, Compliance (HIPAA, GDPR), Master Data Management

● Version Control & DevOps: Git, GitHub, CI/CD for Data Workflows, Automated Testing of Data Pipelines

Experience

Amazon Robotics, MA Mar 2024 – Present

Business Intelligence Analyst II

● Designed, implemented, and deployed machine learning models (Random Forest, Logistic Regression, Decision Trees) using Python, Scikit-learn, and AWS SageMaker to identify risk patterns and anomalies, reducing potential buyer abuse incidents and improving operational efficiency.

● Built and optimized ETL pipelines using AWS Glue, Redshift, and SQL, processing 10TB+ of multi-source data daily from transactional, behavioral, and operational systems into a centralized warehouse for real-time analytics.

● Developed predictive risk algorithms to detect high-risk transactions, achieving 90% accuracy in early fraud detection and enabling proactive intervention.

● Partnered with cross-functional teams (engineering, compliance, and product) to define data models, KPIs, and statistical metrics to monitor buyer behavior and identify abuse trends.

● Conducted feature engineering and statistical modeling to support large-scale risk prevention initiatives, leveraging both structured and unstructured data sources.

● Created automated data querying infrastructure in SQL and Python, reducing manual data extraction time by 50% and supporting both offline and online risk analysis use cases.

● Designed interactive dashboards in Power BI and Tableau to visualize fraud trends, compliance metrics, and operational KPIs, enabling leadership to take data-driven actions.

● Implemented data quality governance and validation checks, improving consistency and accuracy across transactional datasets by 40%.

● Collaborated with investigators and data scientists to translate analytical findings into operational workflows, reducing case resolution time by 20%.

● Researched and implemented novel ML and statistical techniques to enhance fraud detection systems, staying aligned with Amazon’s state-of-the-art buyer risk prevention strategies.

Capital One Financial, NY Sep 2023 – Feb 2024

Data Analyst

● Partnered with cross-functional business units to translate strategic goals into data-driven insights, delivering interactive dashboards in Power BI, Looker, and AWS QuickSight that improved decision-making speed by 20%.

● Built well-managed data solutions using AWS Redshift, Snowflake, and SQL to enable self-service analytics for stakeholders across finance and healthcare domains.

● Developed and deployed Python- and SQL-based data pipelines to integrate structured (SQL) and unstructured data sources, increasing data processing efficiency by 20%.

● Applied data quality management principles, including metadata, lineage, and governance, resulting in a 40% improvement in data consistency and reliability.

● Designed predictive models using Python (Scikit-learn, Pandas, NumPy) to analyze customer behavior, achieving 70% accuracy in churn prediction and enabling targeted retention strategies.

● Conducted complex SQL query optimization and performance tuning on Snowflake, reducing execution time by 50% and enhancing user experience.

● Collaborated with technology teams to manage data access governance and implement security controls in AWS (IAM, encryption, role-based access), ensuring compliance with internal and regulatory standards.

● Leveraged Agile methodologies to manage analytics deliverables, ensuring on-time, high-quality delivery across multiple concurrent projects.

Tata Consultancy Services, India Jun 2019 – Aug 2022

Jr. Data Analyst

● Designed, developed, and deployed 20+ interactive dashboards in Looker, Power BI, and Tableau, enabling cross-functional teams to track KPIs in real time and improving decision-making speed by 30%.

● Architected and implemented ETL pipelines using Python, SQL, and Informatica PowerCenter to extract, transform, and load large datasets from MySQL, PostgreSQL, and third-party APIs into a centralized data warehouse, improving data ingestion efficiency by 20%.

● Built and deployed machine learning models using AWS SageMaker (classification, regression, clustering) to forecast sales, improve demand planning, and optimize inventory, achieving 15% higher prediction accuracy compared to traditional models.

● Automated data cleansing, transformation, and validation routines using Python (Pandas, NumPy) and SQL stored procedures, reducing manual intervention by 25% and ensuring data accuracy for analytics and reporting.

● Developed custom PostgreSQL functions and stored procedures for high-volume transactional datasets, reducing query execution time by 40% and saving 2+ hours per week in manual processing.

● Collaborated with marketing teams to design and execute A/B testing experiments, measuring campaign impact on ROI and providing recommendations that improved conversion rates by 12%.

● Applied data governance best practices, creating data dictionaries, naming conventions, and metadata documentation to enhance data consistency and streamline onboarding for analysts and engineers.

● Conducted Exploratory Data Analysis (EDA) and statistical modeling using Python, R, and visualization tools, identifying trends, anomalies, and correlations in customer behavior, which informed strategic product decisions.

● Led integration of AWS cloud services (S3 for storage, EC2 for compute, IAM for access control) to enhance data security, ensure compliance, and support scalability for analytics workloads.

● Worked with stakeholders to define KPIs, build requirements for analytical solutions, and deliver insights that directly contributed to process automation, cost savings, and operational efficiency improvements.

Education

Master of Science in Computer Science, GPA: 3.53/4.0
University at Buffalo, The State University of New York, Buffalo, NY

Bachelor of Engineering in Computer Science and Engineering, GPA: 3.8/4.0
Walchand Institute of Technology, Solapur, India


