Senior Data Engineering Lead
100% On-Site in Las Colinas / Irving, TX 75038
Direct-Hire Role
Residency Requirements: Candidates must be authorized to work in the US. No sponsorship available.
About The Position:
Seeking an experienced Senior Data Engineering Lead to join an innovative data team. This is a permanent position that demands deep expertise in cloud-based data infrastructure, large-scale data processing, and engineering leadership. You will play a pivotal role in building resilient, scalable, and high-performance data systems that drive critical business insights and operational efficiency. Collaboration with diverse technical and non-technical teams will be a core part of this role, ensuring that data remains accurate, available, and secure throughout the organization.
Key Responsibilities:
• Architect and implement high-throughput data pipelines and transformation processes (ETL/ELT), ensuring they are reliable and adaptable.
• Work with modern data technologies including but not limited to Kafka, Apache Spark, Flink, HDFS, HBase, and columnar storage formats like Parquet.
• Build and maintain robust ingestion systems and real-time streaming solutions using distributed data processing tools.
• Design and oversee cloud-based data platforms using providers such as AWS, Azure, or GCP to support data lake and warehouse strategies.
• Leverage Databricks for development and orchestration of data workflows and analytics tasks.
• Develop and maintain GraphQL endpoints to expose structured data to various internal platforms and tools.
• Manage and model graph-based data using databases like Neo4j to support complex relationship-driven insights.
• Optimize query performance and manage large-scale vector datasets using VectorDB or similar technologies.
• Act as a strategic partner to stakeholders across analytics, data science, and product teams to meet evolving data needs.
• Implement strong data governance and security controls, ensuring compliance and best practices.
• Handle diverse data formats such as Avro, Parquet, and others to support seamless data transformation and storage.
• Provide mentorship to junior engineering staff and encourage ongoing professional growth within the team.
Qualifications & Experience:
• Degree in Computer Science, Software Engineering, or related technical discipline (graduate-level education is a plus).
• 7+ years of hands-on experience in data engineering or related fields, particularly with data modeling and large-scale ETL development.
• Proficient in Python, Scala, or Java for data-driven application development.
• Extensive practical knowledge of big data ecosystems including tools like dbt, SQLMesh, Kafka, Spark, and Flink.
• Familiar with modern graph and vector databases (e.g., Neo4j, VectorDB), as well as API development using GraphQL.
• Demonstrated experience working with major cloud platforms like GCP, Azure, or AWS.
• Strong command of SQL and experience with both relational and non-relational databases.
• Exposure to BI and visualization platforms such as Looker, Power BI, or Tableau.
• Ability to communicate technical concepts clearly and effectively to a wide audience.
• Experience designing and managing systems in hybrid or multi-cloud environments.
• Knowledge of data quality frameworks and governance practices.
• Prior experience leading a team or mentoring other engineers in a technical capacity.
If you possess these qualifications and are excited about the prospect of contributing to a dynamic team, we encourage you to apply.