
Senior Data Engineer – Cloud Data Platforms Expert

Location:
Grand Prairie, TX, 75050
Posted:
January 13, 2026


Resume:

Samikshya Parajuli

********@*****.*** 469-***-**** Dallas, TX LinkedIn

Professional Summary

Innovative and detail-oriented Senior Data Engineer with over 8 years of experience architecting, developing, and optimizing robust data platforms in high-impact domains including healthcare, financial services, pharma, and retail. Specialized in building secure, scalable, and cost-optimized data pipelines using modern cloud technologies, with deep proficiency in Azure, AWS, and GCP. Expert in implementing streaming and batch data architectures, lakehouse models, and multi-layered data transformations across global organizations. Known for combining strong engineering practices with data governance, quality frameworks, and automated CI/CD pipelines to deliver production-grade data solutions that support advanced analytics, machine learning, and regulatory compliance.

Proven track record leading multi-cloud integration, cross-functional collaboration, and the standardization of data engineering workflows using Terraform, PySpark, and DevOps pipelines, enabling agility, scalability, and long-term maintainability.

Technical Skills

• Cloud Data Platforms & Services (Azure): ADF, Data Lake Gen2, Databricks, Synapse Pipelines, Event Grid, Azure Key Vault, Azure Monitor, Cosmos DB, Azure Wiki, Azure DevOps

• AWS: S3, Glue (PySpark), Redshift, Lambda, Kinesis, Athena, CloudFormation, ECS Fargate, Step Functions, IAM, SageMaker Feature Store

• GCP: BigQuery, Cloud Functions, Pub/Sub, Cloud Storage, Scheduled Exports

• ETL & Data Pipeline Tools: Azure Data Factory (ADF), AWS Glue, Synapse Pipelines, Apache Airflow (MWAA), Azure Event Hubs, AWS AppFlow, Azure Logic Apps, SSIS, Informatica PowerCenter

• Data Processing & Engineering: PySpark, Python (Pandas, NumPy), SQL (T-SQL, Redshift SQL), Spark SQL, Delta Lake, Parquet/ORC, Databricks Notebooks, JSON/XML/CSV

• Data Architecture & Modeling: Lakehouse, Medallion Architecture (Bronze/Silver/Gold), Star & Snowflake Schemas, ODS, Data Vault (basic), Streaming-first Design

• Workflow Orchestration & Scheduling: Step Functions, MWAA (Managed Airflow), Cron Jobs, Azure Data Factory Triggers, Synapse Schedulers

• Infrastructure as Code & DevOps: Terraform, CloudFormation, ARM Templates, Azure Bicep, Azure DevOps (YAML Pipelines), AWS CodeBuild, GitHub Actions, Git, Bitbucket

• Data Governance & Quality: Great Expectations, Azure Purview, Glue Data Catalog, Data Classification, Row-Level Security, Data Masking, PII/PHI Handling

• Security & Identity Management: Azure Managed Identities, AWS IAM, Key Vault, Secret Rotation, RBAC, SSO, Private Endpoints, OAuth2

• Monitoring & Cost Optimization: Azure Monitor, AWS CloudWatch, Azure Cost Management, AWS Cost Explorer, Alerts & Logging Dashboards

• Analytics & Reporting Tools: Power BI, Athena, Looker, SSRS, Redshift Spectrum, Synapse Views, Databricks SQL

• Documentation & Collaboration: Confluence, Azure Wiki, SharePoint, JIRA, Visio, Slack, Teams, Data Dictionaries, Architecture Diagrams
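
To make the Data Governance & Quality skills above concrete, here is a minimal, hypothetical Python sketch of the "Data Quality as Code" style of rule-based validation described later in this resume. The column names, rules, and sample data are illustrative assumptions, not an excerpt from any production pipeline.

```python
import json
import pandas as pd

# Hypothetical rule set; real pipelines would load rules from config or use a
# framework such as Great Expectations.
RULES = {
    "transaction_id_not_null": lambda df: df["transaction_id"].notna().all(),
    "amount_non_negative":     lambda df: (df["amount"] >= 0).all(),
    "no_duplicate_keys":       lambda df: not df["transaction_id"].duplicated().any(),
}

def run_checks(df: pd.DataFrame) -> list:
    """Evaluate each rule and return a result row per check for audit logging."""
    return [{"check": name, "passed": bool(rule(df))} for name, rule in RULES.items()]

if __name__ == "__main__":
    sample = pd.DataFrame(
        {"transaction_id": ["t1", "t2", "t3"], "amount": [120.5, 0.0, 43.1]}
    )
    print(json.dumps(run_checks(sample), indent=2))
    # A real job would write these results to a control table and fail the
    # pipeline whenever any check returns passed == False.
```

In practice such checks are typically versioned alongside pipeline code and their results logged to an audit table so that failed checks block promotion of data into curated zones.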

Professional Experience

Capital One Financial Corporation – McLean, VA

June 2022 – Present Senior Data Engineer

• Architected a secure and scalable Azure-based data lakehouse platform using Azure Data Lake Gen2, enabling ingestion and processing of financial transactions, customer behavior, and credit risk data.

• Developed end-to-end data pipelines in Azure Data Factory (ADF) for ingesting data from APIs, SQL Server, and third-party vendors, using parameterized datasets and dynamic linked services.

• Built large-scale batch and streaming data workflows using Azure Synapse Pipelines and Event Grid, supporting regulatory reporting and real-time fraud monitoring systems.

• Implemented Delta Lake architecture in Azure Databricks for efficient, ACID-compliant data storage and scalable transformations of large credit card and lending datasets.

• Created modular PySpark-based ETL frameworks in Databricks, enabling reusable transformation logic for high-volume financial and transactional data.

• Integrated Azure Key Vault, Private Endpoints, and Managed Identities for secure access control across ADF, Databricks, and Synapse pipelines.

• Collaborated with security and compliance teams to enforce GDPR, SOX, and PCI-DSS controls using Azure Purview for data classification and lineage tracking.

• Designed and implemented Data Quality as Code using Python-based validation checks within Databricks notebooks, logging results into Synapse for auditability.

• Built cost-optimized data zones (raw, curated, and analytics) with lifecycle policies and logging using Azure Monitor and Cost Management, reducing cloud spend by 28%.

• Automated CI/CD processes using Azure DevOps, deploying ADF pipelines, Databricks notebooks, and ARM templates via multi-stage YAML pipelines.

• Leveraged Azure Cosmos DB to serve semi-structured data to real-time applications with scalable throughput and JSON-based data modeling.

• Integrated Power BI with Synapse views and curated lakehouse tables to deliver near-real-time dashboards for credit portfolio performance.

• Designed and orchestrated multi-cloud data ingestion by consuming AWS S3-hosted JSON files into ADF using REST APIs and token-based authentication.

• Supported ingestion of GCP BigQuery tables through scheduled exports into Azure Blob Storage, transforming them via Data Factory into unified reporting layers.

• Created custom ADF pipeline templates and Databricks job configurations to standardize development across 6+ agile teams in multiple business domains.

• Conducted root cause analysis and performance tuning of long-running Spark jobs using Azure Databricks Spark UI, improving pipeline runtime.

• Contributed to internal cloud enablement workshops and documentation, coaching junior data engineers on Azure DevOps, ADF orchestration, and data governance practices.

• Maintained and published documentation including data dictionaries, process flows, and system architecture diagrams in Azure Wiki and Confluence for cross-team alignment.

Technologies used: Azure Data Factory, Azure Data Lake Gen2, Azure Synapse, Azure Databricks (PySpark), Delta Lake, Power BI, Azure DevOps, Azure Key Vault, Cosmos DB, Azure Monitor, Azure Purview, Event Grid, ARM Templates, AWS S3, GCP BigQuery, Confluence, Azure Wiki
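
The Databricks and Delta Lake bullets above summarize the work at a high level. As a non-authoritative illustration, the following PySpark sketch shows one common bronze-to-silver pattern of that general shape: reading raw JSON, standardizing types, and performing an ACID upsert into a Delta table. The storage paths, column names, and merge key are assumptions made for this example.

```python
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/transactions/"        # hypothetical
SILVER_PATH = "abfss://silver@examplelake.dfs.core.windows.net/transactions/"  # hypothetical

# Bronze -> Silver: parse raw JSON, standardize types, and deduplicate on the key.
bronze = (
    spark.read.json(RAW_PATH)
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("ingest_date", F.current_date())
    .dropDuplicates(["transaction_id"])
)

if DeltaTable.isDeltaTable(spark, SILVER_PATH):
    # Incremental upsert keyed on transaction_id (ACID merge provided by Delta Lake).
    (DeltaTable.forPath(spark, SILVER_PATH).alias("t")
        .merge(bronze.alias("s"), "t.transaction_id = s.transaction_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    # First load: create the silver Delta table.
    bronze.write.format("delta").mode("overwrite").save(SILVER_PATH)
```

Promoting the silver output into gold aggregate tables would follow the same medallion layering referenced in the bullets above.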

Signify Health – Dallas, TX

June 2019 – May 2022 Cloud Data Engineer

• Designed and implemented scalable AWS-based data lake architecture on Amazon S3, supporting ingestion of clinical, claims, and provider data from multiple healthcare systems.

• Built and maintained ETL pipelines using AWS Glue (PySpark) to transform large healthcare datasets, enabling analytics for value-based care and risk adjustment models.

• Developed serverless data ingestion workflows using AWS Lambda, integrating REST APIs and batch file sources into centralized AWS storage layers.

• Engineered streaming data pipelines using Amazon Kinesis Data Streams and Firehose to process near real-time provider visit and patient assessment data.

• Created optimized Amazon Redshift data warehouses, including schema design, sort keys, distribution styles, and workload management for analytics and reporting teams.

• Implemented Athena-based querying on curated S3 datasets using partitioning and columnar formats, enabling cost-effective ad hoc analytics.

• Orchestrated complex ETL workflows using AWS Step Functions, coordinating Glue jobs, Lambda functions, and validation steps with retry and failure handling logic.

• Applied data quality checks and validation rules using custom PySpark frameworks, ensuring accuracy and consistency of healthcare metrics across pipelines.

• Enforced secure access controls using AWS IAM roles, policies, and S3 bucket policies, ensuring HIPAA-compliant handling of PHI and sensitive datasets.

• Automated infrastructure provisioning for data services using AWS CloudFormation, ensuring repeatable and auditable deployments across environments.

• Integrated CI/CD pipelines using AWS CodePipeline and CodeBuild to version-control ETL scripts, Spark jobs, and infrastructure templates.

• Designed data partitioning and retention strategies using S3 lifecycle policies, reducing storage costs while meeting regulatory data retention requirements.

• Supported analytics and reporting teams by publishing curated datasets to Redshift and Athena, enabling downstream BI tools and dashboards.

• Collaborated with data science teams to prepare feature-ready datasets for predictive models, enabling downstream usage in AWS SageMaker environments.

• Integrated Azure SQL Database sources into AWS ingestion pipelines using secure connectors and scheduled batch processes for cross-cloud data harmonization.

• Participated in limited Azure Data Factory orchestration for vendor data ingestion before transitioning workloads fully into AWS-native pipelines.

• Conducted proof-of-concept integrations with GCP BigQuery exports, validating schema compatibility and ingestion patterns for future multi-cloud analytics use cases.

• Authored comprehensive technical documentation, data flow diagrams, and operational runbooks in Confluence, supporting onboarding and long-term maintainability.
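
The Glue pipelines in this role are described only in prose; as a hedged illustration, the skeleton below shows the general shape of an AWS Glue (PySpark) job that reads a cataloged table, applies a simple transformation, and writes partitioned Parquet to S3. The database, table, bucket, and column names are hypothetical.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical catalog database/table and output bucket.
claims = glue_context.create_dynamic_frame.from_catalog(
    database="healthcare_raw", table_name="claims"
).toDF()

curated = (
    claims.filter(F.col("claim_status").isNotNull())
          .withColumn("service_year", F.year(F.col("service_date")))
)

# Write curated data as partitioned Parquet for Athena / Redshift Spectrum.
(curated.write.mode("overwrite")
        .partitionBy("service_year")
        .parquet("s3://example-curated-bucket/claims/"))

job.commit()
```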

Technologies used:

AWS S3, AWS Glue, PySpark, Lambda, Kinesis, Redshift, Athena, Step Functions, IAM, CloudFormation, CodePipeline, CodeBuild, SageMaker, Azure SQL Database, Azure Data Factory, GCP BigQuery, REST APIs, Parquet, Confluence
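
One of the serverless ingestion patterns mentioned above, an AWS Lambda function pulling REST API data into S3, can be sketched as follows. The endpoint, bucket, and key layout are illustrative assumptions only.

```python
import datetime
import json
import urllib.request

import boto3

s3 = boto3.client("s3")

# Hypothetical source endpoint and landing bucket.
SOURCE_URL = "https://api.example.com/v1/provider-visits"
LANDING_BUCKET = "example-healthcare-landing"

def handler(event, context):
    """Lambda entry point: pull a JSON payload from a REST API and land it in S3."""
    with urllib.request.urlopen(SOURCE_URL, timeout=30) as resp:
        payload = json.loads(resp.read())

    # Date-partitioned key so downstream Glue/Athena jobs can prune by ingest date.
    key = f"provider_visits/ingest_date={datetime.date.today():%Y-%m-%d}/visits.json"
    s3.put_object(Bucket=LANDING_BUCKET, Key=key, Body=json.dumps(payload).encode("utf-8"))

    return {"statusCode": 200, "records": len(payload)}
```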

Technova Global – Texas

September 2017 – April 2019 ETL Developer

• Designed and developed end-to-end ETL pipelines using SSIS (SQL Server Integration Services) to integrate operational data from finance, CRM, and ERP systems into centralized reporting databases.

• Built and optimized T-SQL stored procedures, functions, and complex joins to support large-scale data transformations, aggregations, and historical trend analysis.

• Maintained and enhanced SQL Server 2008/2012 databases, performing indexing, partitioning, and query optimization to improve batch processing performance.

• Developed staging, ODS, and data mart layers to support downstream reporting and analytics while ensuring data consistency and traceability.

• Automated nightly and weekly batch data loads using SQL Server Agent Jobs, Windows Task Scheduler, and batch scripts, ensuring timely data availability.

• Implemented data validation and reconciliation checks to identify duplicates, missing records, and schema mismatches across source and target systems.

• Designed flat-file ingestion frameworks to process CSV, fixed-width, XML, and Excel files received from external vendors and internal departments.

• Created operational and analytical reports using SSRS (SQL Server Reporting Services) to support business intelligence and executive decision-making.

• Supported legacy data integration using Informatica PowerCenter, assisting with workflow troubleshooting, mappings, and production support.

• Performed data cleansing, standardization, and normalization using T-SQL and SSIS transformations to improve overall data quality.

• Collaborated with business analysts to translate reporting requirements into technical ETL specifications and relational data models.

• Developed reusable SSIS packages and templates to standardize ingestion patterns and reduce development time for new data sources.

• Tuned long-running ETL processes using SQL Profiler and Database Engine Tuning Advisor, significantly reducing batch execution windows.

• Maintained data dictionaries, source-to-target mappings, and lineage documentation to support audits and knowledge transfer.

• Implemented error handling, logging, and restartability in ETL workflows to improve reliability and recoverability of batch jobs.

• Supported integration between SQL Server databases and legacy applications built on VB6 and MS Access, ensuring backward compatibility.

• Used TFS (Team Foundation Server) for version control of ETL packages, SQL scripts, and documentation, following SDLC best practices.

• Participated in data migration initiatives during system upgrades, validating historical data loads and ensuring minimal disruption to reporting systems.
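
The reconciliation checks in this role were implemented in T-SQL and SSIS; purely as an illustrative translation, the sketch below shows the same kind of row-count and duplicate-key checks driven from Python via pyodbc. The connection string, table names, and queries are hypothetical.

```python
import pyodbc

# Hypothetical connection string and table names; the original checks lived in
# T-SQL/SSIS, so this Python version is a translation for illustration only.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=example-sql;DATABASE=ReportingDB;Trusted_Connection=yes;")

CHECKS = [
    ("row_count_match",
     "SELECT (SELECT COUNT(*) FROM stg.Orders) - (SELECT COUNT(*) FROM dm.FactOrders)"),
    ("duplicate_keys_in_target",
     "SELECT COUNT(*) FROM (SELECT OrderID FROM dm.FactOrders "
     "GROUP BY OrderID HAVING COUNT(*) > 1) d"),
]

def run_reconciliation():
    """Run each reconciliation query; a non-zero result indicates a mismatch."""
    results = []
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        for name, sql in CHECKS:
            cursor.execute(sql)
            results.append((name, cursor.fetchone()[0]))
    return results

if __name__ == "__main__":
    for name, value in run_reconciliation():
        status = "OK" if value == 0 else f"FAILED ({value})"
        print(f"{name}: {status}")
```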

Technologies used: SQL Server 2008/2012, SSIS, SSRS, T-SQL, Informatica PowerCenter, SQL Server Agent, Windows Task Scheduler, CSV/XML/Excel/Flat Files, SQL Profiler, Database Engine Tuning Advisor, VB6, MS Access, TFS, Batch Scripting, Data Dictionaries

Education

Bachelor's in Computer Information Systems, The University of Texas at Arlington, Texas


