
SAEE THOMPSON

Lead Data Engineer | Cloud & Big Data Platform Architect | Streaming & Real-Time Solutions

************@*****.*** | 308-***-**** | Grand Island, NE 68801

PROFILE

Data Engineering & Architecture Leader with 12+ years of experience building enterprise-scale, cloud-native data platforms on AWS, GCP, and Azure. Specialized in real-time and batch processing, Lakehouse architectures, and large-scale data modernization. Expert in Spark, Kafka, Flink, Airflow, and dbt, with deep hands-on experience in ETL/ELT design, data governance, and ML-integrated analytics using Databricks, Snowflake, Redshift, and BigQuery. Strong in data modeling, lineage, observability, and performance optimization, driving scalable, automated DataOps/MLOps solutions and cross-cloud transformations.

SKILLS

Programming & Scripting: Python (PySpark, Pandas, AsyncIO), SQL (Advanced Analytics & Window Functions), Scala, Java, Shell/Bash.

Data Engineering & Processing: Spark, Kafka, Flink, Hadoop, Delta Lake, Iceberg, Hudi; Batch & Streaming Pipelines, ETL/ELT, CDC, Real-Time Processing.

Data Orchestration & Workflow: Airflow, dbt, Glue, Data Factory, Dataflow, Databricks Workflows, Prefect, Dagster.

Cloud Platforms & Infrastructure: AWS (S3, Glue, Redshift, EMR, Lambda, Kinesis, Athena), GCP (BigQuery, Dataflow, Pub/Sub, Dataproc, Composer), Azure (Synapse, Data Factory, ADLS, Databricks, Event Hubs); Terraform, Docker, Kubernetes, Jenkins, CI/CD.

Data Warehousing & Storage: Snowflake, Redshift, BigQuery, Synapse; PostgreSQL, MySQL, Oracle, SQL Server; MongoDB, Cassandra, DynamoDB; Data Lakes & Lakehouse (S3, ADLS, GCS, Delta).

Data Modeling & Architecture: Dimensional Modeling, Data Vault, Star/Snowflake Schema; Lakehouse, Data Mesh, Data Fabric, Real-Time Architecture; OLTP/OLAP Optimization, Metadata & Lineage Management.

Data Governance & Quality: Collibra, Atlas, Great Expectations, Monte Carlo; IAM, RBAC, Encryption, PII Masking, GDPR/CCPA Compliance; Cataloging, Lineage, Observability Frameworks.

Analytics & Visualization: Tableau, Power BI, Looker, Superset, QuickSight, Databricks SQL; Transformations, KPI Dashboards, Self-Service Analytics.

MLOps & Advanced Analytics: Databricks ML, MLflow, SageMaker, Kubeflow, Scikit-learn; Feature Stores, Model Monitoring, Deployment Automation; LLM & RAG Pipeline Integration.

Monitoring & Observability: Prometheus, Grafana, ELK, CloudWatch, OpenTelemetry; Data Observability, Alerting, Reliability Engineering.

Collaboration & Methodology: Git, GitHub, Bitbucket, Jira, Confluence; Agile/Scrum, CI/CD Governance, Cross-Functional Collaboration; Technical Leadership, Mentorship, Data Strategy.

PROFESSIONAL EXPERIENCE

Lead Data Engineer

SecureSky | 08/2020 – Present

•Managing and mentoring a team of 15 data engineers, driving technical excellence, delivery quality, and consistent adoption of engineering best practices.

•Driving organization-wide adoption of Data Mesh and Lakehouse architectures, enabling decentralized domain ownership, scalable data governance, and high-quality data products.

•Overseeing the implementation of MLOps pipelines using Databricks ML, MLflow, and SageMaker for automated, reliable model training, evaluation, and deployment.

•Defining enterprise-grade data observability, lineage, and cataloging frameworks leveraging OpenTelemetry, Apache Atlas, and Monte Carlo to ensure trust, auditability, and governance across all data assets.

•Architecting resilient, event-driven ETL and CDC frameworks that power real-time personalization, recommendation engines, and streaming analytics use cases.

•Partnering with enterprise architecture, product, and engineering leadership to shape long-term data strategy, modernization roadmaps, and innovation initiatives.

Senior Data Engineer

Snowflake | 07/2019 – 07/2020

•Architected real-time data ingestion and processing pipelines using Spark Structured Streaming, Kafka, and Flink to support event-driven systems.

•Led migration of enterprise data warehouse from legacy infrastructure to Snowflake and BigQuery hybrid cloud environment.

•Built and managed CI/CD pipelines for data workflows using Jenkins, Terraform, and Git-based deployment automation.

•Established observability standards with Prometheus, Grafana, and ELK for proactive monitoring of data systems and workflows.

•Implemented data lakehouse architecture with Delta Lake and Iceberg to unify batch and streaming analytics workloads.

•Mentored junior engineers on data modeling principles, performance tuning, and data reliability best practices.

Data Engineer

Enigma Technologies, Inc. | 08/2015 – 06/2019

•Designed and deployed scalable batch and streaming data pipelines using Apache Spark and Kafka for large-scale data integration.

•Automated ETL orchestration with Apache Airflow and dbt to streamline data transformations and lineage tracking.

•Integrated AWS Glue and Lambda functions to enable serverless data processing and transformation workflows.

•Contributed to data quality and governance frameworks by implementing Great Expectations and metadata cataloging with Collibra.

•Collaborated with analytics teams to deliver data models and datasets for Power BI and Tableau dashboards supporting business insights.

Junior Data Engineer / Database Developer

DataGravity | 06/2013 – 07/2015

•Developed and optimized SQL-based ETL workflows for data ingestion and transformation across relational systems including PostgreSQL and Oracle.

•Implemented data validation and cleansing scripts using Python and Pandas to enhance data reliability and reduce manual intervention.

•Supported the migration of on-prem datasets to an AWS S3-based data lake, integrating with Redshift for analytical consumption.

•Assisted in schema design and indexing strategies to improve database query performance and support scalable analytics.

EDUCATION

Bachelor of Computer Science

Concordia University


