
Senior Software/Data Engineering Leader

Location: Austin, TX
Posted: February 01, 2026


Nhan Nguyen

Austin, TX

Phone: 512-***-**** Email: **********@*****.***

US Citizen

LinkedIn: https://www.linkedin.com/in/nhan-nguyen-0b9ab5245/

GitHub: https://github.com/person1953-ux/PRODUCT

SUMMARY:

Senior software developer and data engineering lead with over 20 years of experience building and supporting production data pipelines. Strong background in Python, SQL, CDC, cloud and enterprise data platforms, and large-scale ETL design. Experienced in manufacturing analytics and other business-critical systems. A hands-on technical leader who remains closely involved in system design, development, deployment, and ongoing production support.

SKILLS:

Programming & Query Languages (15–20+ Years)

Python – Data Engineering (15+ years)

ETL/ELT pipelines, batch and streaming data processing, data ingestion, transformation, and validation, automation scripting, pandas, PySpark, Apache Spark, distributed systems, Flask, REST APIs, web scraping, workflow orchestration, software design patterns, clean architecture, scalable systems, production data platforms, cloud-ready systems.

SQL / PL/SQL (20+ years): Complex analytical queries, stored procedures, performance tuning

Java / JavaScript / .NET (10+ years): Enterprise data and integration applications

C / C++ / Visual Basic / HTML / CSS (15+ years): Legacy and supporting systems

Advanced SQL skills with 15+ years of deep experience with Oracle, MySQL, and PostgreSQL

Cloud & Data Platforms (3+ Years)

AWS (3+ years): S3, Glue (PySpark), Lambda, Athena, Redshift, DynamoDB (NoSQL), EC2, ECS, IAM, SQS, CloudWatch, machine learning services; Kubernetes, Docker, Airflow

Domain Expertise (15+ Years)

Manufacturing Execution Systems (MES): FAB operations, production data platforms

Semiconductor Manufacturing: SiMAX operations, lot movement, yield, shipment systems

Supported mission-critical, high-availability manufacturing environments

Databases & Data Warehousing (15–20+ Years)

Oracle (20+ years): Exadata, schema design, partitioning, performance tuning, migrations (9i to 19c)

PostgreSQL / MySQL / SQL Server / SQLite (10–15+ years): Transactional and analytical workloads

Data Modeling (15+ years): Normalization, dimensional modeling, schema architecture

Automation, Monitoring & Reliability Engineering (15–20+ Years)

Linux & Scripting (20+ years): Bash, KornShell, PowerShell, UNIX cron

Monitoring & Observability (15+ years): Splunk, logging, alerting, SLA-based support

Designed production monitoring and alerting frameworks for mission-critical ETL systems

BI, Analytics & Reporting (8–10+ Years)

Tableau / Chart.js / Spotfire / ChartDirector (8+ years): executive dashboards; real-time equipment monitoring dashboards.

Manufacturing Analytics (15+ years): Yield, lot movement, EOH, Factory Performance reporting, Shipment reporting.

Development Tools & Delivery (8–10+ Years)

Version Control & Dev Tools (8+ years): Git, GitHub, PyCharm, VS Code, Jupyter Notebook

Project Delivery (10+ years): Jira, Confluence, Agile/Scrum methodologies

Produced system documentation, runbooks, recovery procedures, and performance analysis reports

Leadership & Communication

Led architecture, design, and delivery of large-scale data and analytics platforms; mentored engineers and set coding and design standards.

Communicate complex technical topics clearly to engineers, stakeholders, and senior leadership.

Data & Systems Expertise

Strong database and SQL performance tuning skills for enterprise reporting systems.

Built and managed data transformations, data models, orchestration, and metadata processes.

15+ years of programming experience in Python, Java, C#, and .NET.

Proven success working with cross-functional teams in fast-paced environments.

Technical Foundations

Solid understanding of networking fundamentals (HTTP, DNS, TCP/IP)

Strong collaborator with experience working across engineering, manufacturing, and operations teams.

WORK EXPERIENCE

Samsung Austin Semiconductor: 12100 Samsung Blvd, Austin, TX 78754

Senior ETL Engineer / Data Engineering Lead May 2015 – May 2025

Architected and maintained 1,000+ ETL pipelines populating 1,000+ tables across 100+ Oracle schemas.

Integrated heterogeneous data sources, including structured databases (Oracle, MySQL, MS SQL Server, PostgreSQL, DB2, MS Access) and semi-structured data (CSV, XML, JSON, RDF).

Built ETL and CDC pipelines using TIBCO BusinessWorks, Informatica PowerCenter, Python, Unix scripts, TIBCO RV, and Oracle GoldenGate.

Ensured ACID compliance, data normalization, validation, and cleansing; supported snapshot and near real-time processing with sub-second latency.

Designed end-to-end monitoring and alerting for ETL jobs, application servers, and databases, including performance tuning, slow query analysis, and access control enforcement.

Led annual database standardization and ongoing performance optimization initiatives; each standardization cycle reduced production CPU utilization by 30%+ and improved shipment efficiency by 10%+.

Provided 24/7 production support, collaborating with help desk and cross-functional teams to resolve data, performance, and security issues.

Enabled factory-level reports, dashboards, and advanced analytics using JavaScript, Python, Power BI, Tableau, TIBCO Spotfire, and ChartDirector, supporting data-driven decision-making across engineering and operations.

Real-Time Data Replication (Oracle GoldenGate)

Led enterprise CDC initiative to offload read traffic from production databases.

Designed Extract/Replicat processes, checkpointing, bulk loads, and incremental replication.

Architecture aligned with AWS DMS Full Load + CDC patterns.

Manufacturing Certification & Eligibility System

Built automated ETL pipelines integrating data from 12 source systems.

Eliminated manual certification checks and prevented unqualified tool usage.

Saved hundreds of engineering hours and ~$10K/day in operational costs.

Samsung Austin Semiconductor: 12100 Samsung Blvd, Austin, TX 78754

Senior System Engineer – Data Platforms

May 2011 – May 2015

Served as technical lead for MES data platforms supporting large-scale ETL and reporting systems.

Managed 400+ automated ETL jobs and 100+ scheduled data pipelines across distributed environments.

Directed complex TIBCO-based transformation engines, ensuring data quality and consistency.

Designed data architecture and databases for new FAB manufacturing initiatives.

Established QA, validation, and performance testing standards for production data pipelines.

Austin Energy

Senior Program Analyst

Sep 2009 – Apr 2011

Designed and developed a citywide Speaker Bureau scheduling and reporting application.

Samsung Austin Semiconductor: 12100 Samsung Blvd, Austin, TX 78754

Senior Software Developer – Data & Analytics Systems

Jan 1998 – Aug 2009

Built and scaled hourly ETL pipelines aggregating production data from 7+ heterogeneous systems.

Developed mission-critical MES reporting and analytics systems using Java, J2EE, Unix scripts, and SQL/PL-SQL.

Created the EOH (Ending On Hand) FAB report used by executives and operations leaders (SQL/PL-SQL).

Texas Department of Health: 1100 W 49th St, Austin, TX 78756

Programmer

Jun 1995 – Dec 1997

Developed healthcare systems supporting Medicare/Medicaid scheduling and claims processing.

HANDS-ON PROJECTS:

Jun 2025 – Present

Data Engineer / ETL Engineer / Python Engineer / Cloud Engineer:

Python/AWS Glue project - Designed and implemented end-to-end AWS data engineering pipelines using Python, AWS Glue (PySpark), Lambda, Athena, and S3.
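
A minimal sketch of such a Glue (PySpark) job is below; the job parameter names (source_path, target_path) and S3 locations are illustrative placeholders, not the actual production configuration. It reads raw CSV from a landing zone, applies light cleansing, and writes curated Parquet that Athena can query.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Job parameters passed in from the Glue job definition (names are illustrative).
args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "target_path"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV files from the S3 landing zone.
raw = spark.read.option("header", "true").csv(args["source_path"])

# Light cleansing: drop fully empty rows and normalize column names.
cleaned = raw.dropna(how="all")
cleaned = cleaned.toDF(*[c.strip().lower().replace(" ", "_") for c in cleaned.columns])

# Write curated Parquet back to S3, queryable through Athena.
cleaned.write.mode("overwrite").parquet(args["target_path"])

job.commit()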

AWS Glue ETL scheduling - Built event-driven ingestion architectures that trigger Glue ETL jobs from S3 ObjectCreated events via a Lambda function.
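
A minimal Lambda handler sketch for this trigger pattern, assuming a hypothetical Glue job name supplied through the GLUE_JOB_NAME environment variable:

import json
import os

import boto3

glue = boto3.client("glue")

# Glue job name comes from a Lambda environment variable (name is illustrative).
GLUE_JOB_NAME = os.environ.get("GLUE_JOB_NAME", "ingest-raw-to-curated")

def lambda_handler(event, context):
    """Triggered by S3 ObjectCreated events; starts one Glue job run per new object."""
    runs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        response = glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        runs.append(response["JobRunId"])
    return {"statusCode": 200, "body": json.dumps({"job_runs": runs})}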

Python project - Built a REST API client that retrieves data from a URL and inserts it into a MySQL database; the project follows a dbt (data build tool) approach: SQL-based transformations, modularization and dependencies, testing, and run execution.
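
A condensed sketch of the extract-and-load step; the API URL, table name, and credentials are illustrative placeholders, and dbt-style SQL models would handle the downstream transformations:

import requests
import mysql.connector

# Endpoint and connection details are illustrative placeholders.
API_URL = "https://api.example.com/v1/orders"

def fetch_records(url):
    """Pull JSON records from the REST endpoint."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()

def load_raw(records):
    """Insert raw records into a staging table; downstream SQL models handle transforms."""
    conn = mysql.connector.connect(
        host="localhost", user="etl_user", password="***", database="analytics"
    )
    try:
        cursor = conn.cursor()
        cursor.executemany(
            "INSERT INTO raw_orders (order_id, amount, created_at) VALUES (%s, %s, %s)",
            [(r["order_id"], r["amount"], r["created_at"]) for r in records],
        )
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    load_raw(fetch_records(API_URL))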

Python / AWS services / REST API / multi-database ingestion - Developed ingestion pipelines for CSV, JSON, XML, and REST APIs, loading curated datasets into Oracle, MySQL, PostgreSQL, Snowflake, and Redshift.
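
A simplified sketch of the multi-format ingestion pattern using pandas and SQLAlchemy; the connection string, file paths, and table names are placeholders, and each target database would use its own SQLAlchemy dialect and driver:

import pandas as pd
from sqlalchemy import create_engine

# Connection string and file paths are illustrative placeholders.
engine = create_engine("postgresql+psycopg2://etl_user:***@localhost:5432/warehouse")

def ingest_csv(path, table):
    df = pd.read_csv(path)
    df.to_sql(table, engine, if_exists="append", index=False)

def ingest_json(path, table):
    df = pd.read_json(path, lines=True)   # newline-delimited JSON
    df.to_sql(table, engine, if_exists="append", index=False)

def ingest_xml(path, table):
    df = pd.read_xml(path)                # requires pandas >= 1.3
    df.to_sql(table, engine, if_exists="append", index=False)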

Python / AWS services / pandas project - Implemented data transformation and optimization workflows using pandas, AWS Wrangler, and Parquet formats.
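
A short sketch of the pandas / AWS Wrangler / Parquet workflow, assuming hypothetical bucket paths, column names, and a Glue catalog database and table:

import awswrangler as wr
import pandas as pd

# Bucket paths and catalog names are illustrative placeholders.
RAW_PATH = "s3://example-bucket/raw/sensors/"
CURATED_PATH = "s3://example-bucket/curated/sensors/"

# Read raw CSV objects from S3 into a pandas DataFrame.
df = wr.s3.read_csv(path=RAW_PATH)

# Typical pandas transformation step: type coercion and de-duplication.
df["event_time"] = pd.to_datetime(df["event_time"])
df = df.drop_duplicates(subset=["sensor_id", "event_time"])

# Write partitioned Parquet and register the table in the Glue Data Catalog.
wr.s3.to_parquet(
    df=df,
    path=CURATED_PATH,
    dataset=True,
    partition_cols=["sensor_id"],
    database="curated",
    table="sensor_events",
    mode="overwrite_partitions",
)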

Python/Kubernetes project - Built and deployed containerized ETL pipelines using Docker and Kubernetes CronJobs with secure configuration management.
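
A minimal sketch of the container entrypoint, assuming database credentials are injected as environment variables from Kubernetes Secrets; the variable and table names are illustrative:

import logging
import os
import sys

import psycopg2

logging.basicConfig(level=logging.INFO, stream=sys.stdout)
log = logging.getLogger("etl-cronjob")

def get_required_env(name):
    """Read configuration injected by the Kubernetes CronJob (Secrets/ConfigMap as env vars)."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

def main():
    conn = psycopg2.connect(
        host=get_required_env("DB_HOST"),
        dbname=get_required_env("DB_NAME"),
        user=get_required_env("DB_USER"),
        password=get_required_env("DB_PASSWORD"),  # sourced from a Kubernetes Secret, never hard-coded
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM staging.events")
            log.info("staging.events row count: %s", cur.fetchone()[0])
        return 0
    finally:
        conn.close()

if __name__ == "__main__":
    sys.exit(main())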

Python project - Built analytics-ready datasets and reporting pipelines integrating Snowflake, Oracle, Databricks, and Tableau.
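
A simplified sketch of one such pipeline step, pulling a curated dataset from Snowflake into pandas and writing a BI-ready extract; the account, credentials, table, and query are illustrative placeholders:

import pandas as pd
import snowflake.connector

# Account, credentials, and query are illustrative placeholders.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="CURATED",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT lot_id, step, yield_pct, completed_at "
        "FROM fab_yield "
        "WHERE completed_at >= DATEADD(day, -30, CURRENT_DATE)"
    )
    columns = [col[0] for col in cur.description]
    df = pd.DataFrame(cur.fetchall(), columns=columns)
finally:
    conn.close()

# Write an analytics-ready extract that Tableau (or another BI tool) can consume.
df.to_csv("fab_yield_last_30_days.csv", index=False)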

Python project - Built a modular Flask web application featuring a REST API, server-side rendered UI, and an interactive Chart.js dashboard. Implemented clean routing, data filtering, and reusable templates. Designed for extensibility with a clear project structure and production-ready patterns.
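
A trimmed-down sketch of the Flask application structure, with illustrative routes and in-memory sample data standing in for the real data layer; the Chart.js dashboard would fetch JSON from the API endpoint:

from flask import Flask, jsonify, render_template, request

app = Flask(__name__)

# Illustrative in-memory data; a real deployment would query a database instead.
PRODUCTION = [
    {"line": "A", "date": "2025-06-01", "units": 120},
    {"line": "B", "date": "2025-06-01", "units": 95},
    {"line": "A", "date": "2025-06-02", "units": 133},
]

@app.route("/api/production")
def production_api():
    """REST endpoint with an optional ?line= filter; the Chart.js dashboard fetches this JSON."""
    line = request.args.get("line")
    rows = [r for r in PRODUCTION if line is None or r["line"] == line]
    return jsonify(rows)

@app.route("/")
def dashboard():
    """Server-side rendered page; the template loads Chart.js and calls /api/production."""
    return render_template("dashboard.html")

if __name__ == "__main__":
    app.run(debug=True)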

Python/PowerBI - Built a Python-based data ingestion pipeline to load manufacturing CSV data into PostgreSQL for real-time analytics. Designed and optimized PostgreSQL schemas and indexes to support high-frequency updates and low-latency BI queries. Developed a live Tableau dashboard using direct database connections to monitor production throughput and operational status.
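
A minimal sketch of the CSV-to-PostgreSQL load with an index tuned for BI queries; the table definition, file path, and connection details are illustrative placeholders:

import psycopg2

# Table definition and index are illustrative; real schemas depend on the source CSV.
DDL = """
CREATE TABLE IF NOT EXISTS factory_metrics (
    tool_id     TEXT        NOT NULL,
    metric_time TIMESTAMPTZ NOT NULL,
    throughput  NUMERIC,
    status      TEXT
);
-- Index chosen for low-latency BI queries filtering on tool and time range.
CREATE INDEX IF NOT EXISTS idx_factory_metrics_tool_time
    ON factory_metrics (tool_id, metric_time);
"""

def load_csv(csv_path):
    conn = psycopg2.connect("dbname=manufacturing user=etl_user password=*** host=localhost")
    try:
        with conn, conn.cursor() as cur:
            cur.execute(DDL)
            with open(csv_path) as f:
                # COPY is much faster than row-by-row INSERTs for bulk CSV loads.
                cur.copy_expert(
                    "COPY factory_metrics (tool_id, metric_time, throughput, status) "
                    "FROM STDIN WITH (FORMAT csv, HEADER true)",
                    f,
                )
    finally:
        conn.close()

if __name__ == "__main__":
    load_csv("production_metrics.csv")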

EDUCATION

McNeese State University 08/1990 – 05/1995

B.S. in Computer Science & Statistics, magna cum laude, GPA = 3.93

CERTIFICATIONS

Informatica PowerCenter (Level 1 & 2)

Oracle SQL & Performance Tuning

Advanced SQL

MaxGauge Database Monitoring & Performance

IBM – Python for Data Engineering

Microsoft – Python Programming Fundamentals

AWS Certified Cloud Practitioner

AWS Developing Machine Learning Solutions

Databricks Fundamentals


