Vinisha Shukla
Data Engineer
Missouri, USA | ***************@*****.*** | +1-816-***-**** | LinkedIn

SUMMARY
Data Engineer with around 5 years of experience designing and supporting batch data pipelines across banking, payments, risk, and healthcare domains. Hands-on experience with Python, advanced SQL, Snowflake, AWS, and Airflow to build scalable ingestion, transformation, and reporting datasets. Strong background in data modeling, data quality validation, and regulatory reporting support for high-volume enterprise systems. Proven ability to work closely with risk, compliance, analytics, and engineering teams to deliver reliable, audit-ready data. Experienced in improving data reliability, performance, and availability across complex, multi-source environments.
SKILLS
Programming & Data Processing: Python (Pandas, NumPy), SQL (Advanced Queries, Optimization), Bash
Cloud & Data Platforms: Amazon Web Services (AWS), Snowflake, Cloud Data Warehouses, Data Lakes, Warehouse Cost & Performance Optimization
Data Engineering & Orchestration: Batch Data Pipelines, ETL / ELT Workflows, Data Ingestion & Processing, Incremental Data Loads, Apache Airflow, Enterprise Schedulers (Cron, BI Scheduling Tools), Git-Based CI/CD for Data Pipelines, Daily Batch Processing at Scale
Data Modeling & Transformation: Dimensional Modeling, Curated & Analytics-Ready Data Layers, SQL-Based Transformations, Python Data Processing, Modular SQL Models
Data Quality, Governance & Compliance: Data Validation Rules, Reconciliation Checks, Exception Handling, Audit Controls, Data Lineage
Domains & Analytics: Banking & Financial Data, Payments, Risk & Regulatory Reporting, Healthcare Analytics, KPI Reporting, Exploratory Data Analysis, Data Scale Metrics (Daily Batches, Table Volumes, Data Size Tracking)

EXPERIENCE
Data Engineer | Citigroup, MO, USA | May 2025 – Present
• Built end-to-end data ingestion workflows using Python, SQL, and Snowflake on AWS to process multi-million-record daily payments and transaction datasets, enabling timely availability for regulatory and internal risk reporting.
• Mapped complex source-to-target data relationships across payments, reference, and ledger systems, reducing reconciliation gaps by ~30% and improving accuracy of downstream financial metrics.
• Designed consumption-ready data marts using curated and dimensional modeling techniques, supporting scalable access to liquidity, transaction, and exposure data for global banking teams.
• Optimized batch execution cycles through incremental loading strategies and warehouse performance tuning, improving end-of-day processing reliability during peak transaction windows.
• Enforced data governance controls by implementing quality thresholds, exception tracking, and audit checks, strengthening trust in datasets used for regulatory submissions and executive dashboards.
• Partnered with compliance, risk, and upstream engineering teams to review data flows and logic, accelerating issue resolution and supporting audit-ready traceability across enterprise pipelines.

Data Engineer | Credit Suisse, MO, USA | June 2024 – April 2025
• Implemented batch data pipelines using Python, advanced SQL, and AWS-based cloud data warehouses (Snowflake) to ingest multi-million-record trading, risk, and reference datasets, enabling reliable regulatory and risk analytics.
• Structured curated data layers by applying financial business rules, schema alignment, and dimensional modeling, ensuring accurate aggregation of exposure, P&L, and counterparty risk metrics across asset classes.
• Built scalable data transformations using optimized SQL and Python processing logic to support daily and intraday risk reporting, improving data availability for front-office and risk stakeholders.
• Evaluated data quality by implementing validation rules, reconciliation checks, and exception handling, reducing regulatory reporting breaks and increasing confidence in audited financial datasets.
• Refined pipeline performance through partitioning strategies, incremental loads, and query tuning, improving batch stability and supporting higher data volumes during peak market volatility.
• Enabled audit and compliance readiness by documenting end-to-end data flows, transformation logic, and lineage, accelerating issue resolution and supporting regulatory traceability requirements.

Analyst | Deloitte USI, Hyderabad, India | July 2021 – Dec 2022
• Engineered reusable SQL transformation layers using advanced SQL (CTEs, window functions) to process large-scale financial, telecom, and operational datasets hosted on cloud-based data platforms, supporting analytics for 5+ Fortune 500 clients.
• Orchestrated scheduled data refresh workflows using Python scripts, SQL jobs, and enterprise schedulers (cron / BI scheduling), reducing manual reporting effort by ~25% and ensuring timely dashboard delivery.
• Transformed raw multi-source datasets by applying normalization, joins, and business logic, preparing analytics-ready tables aligned with cloud data warehouse consumption patterns.
• Integrated enterprise data from U.S. and offshore systems through batch-oriented ETL pipelines, handling high-volume operational datasets and improving cross-region reporting consistency.
• Monitored dataset accuracy using reconciliation checks, validation rules, and variance analysis, strengthening trust in data pipelines supporting forecasting and planning workflows.
• Enabled team efficiency by mentoring junior analysts on SQL optimization, data modeling fundamentals, and dashboard performance tuning within large-scale reporting environments.

Analytics Engineer | CitiusTech, India | June 2020 – May 2021
• Supported analysis of healthcare claims, EHR, and clinical datasets using SQL and Python, building a strong foundation in data exploration and data quality assessment for analytics and AI use cases.
• Assisted in developing analytics-ready data models using Snowflake, dbt, and dimensional modeling, gaining hands-on experience with data warehousing concepts used in modern data engineering teams.
• Contributed to building automated ELT pipelines using Python, Airflow, and cloud storage, reducing data refresh time by ~20% and introducing production-style data pipeline practices.
• Performed data checks by applying healthcare business rules and validation logic, strengthening understanding of data governance, compliance, and reliability in healthcare analytics systems.
• Worked alongside data scientists and domain experts to translate AI feature requirements into usable datasets, bridging analytics needs with upstream data engineering processes.
• Helped optimize SQL transformations and queries through tuning and testing, developing performance optimization skills essential for scalable data platforms.
Software Engineer Intern – Data & Reporting | ECIL, Hyderabad, India | May 2019 – Sep 2019
• Assisted in developing SQL-based reporting datasets from system logs and operational data to support performance monitoring and help identify recurring system bottlenecks.
• Implemented automated reporting workflows using Python, SQL, and Excel, reducing manual reporting effort and contributing to a 30% improvement in reporting efficiency.
• Executed data validation and system testing activities to ensure reporting accuracy, completeness, and reliable data flow across integrated systems.
• Performed exploratory data analysis on system performance metrics and presented insights to engineers, supporting early risk identification and system performance improvements.

EDUCATION
Master's in Computer Science, University of Missouri – Kansas City, USA | May 2024
Bachelor's in Computer Science and Engineering, KL University, Andhra Pradesh, India | Jun 2021

CERTIFICATIONS
• AWS Certified Solutions Architect – Associate
• Microsoft Certified: Azure Fundamentals
• Machine Learning on Google Cloud
• Machine Learning (Stanford University)
• Microsoft Certified: Azure AI Engineer Associate