Overview
About PHOENIX
PHOENIX Retail, LLC is a retail platform operating the Express and Bonobos brands worldwide. Express is a multichannel apparel brand with a design philosophy rooted in modern, confident and effortless style, whether dressing for work, everyday or special occasions. Bonobos is a menswear brand known for pioneering exceptional fit and a personalized, innovative retail model. Customers can experience our brands in over 400 Express retail and Express Factory Outlet stores, 50 Bonobos Guideshops, and online.
About Express
Express is a multichannel apparel brand dedicated to creating confidence and inspiring self-expression. Since its launch in 1980, the brand has embraced a design philosophy rooted in modern, confident and effortless style. Whether dressing for work, everyday or special occasions, Express ensures you look and feel your best, wherever life takes you.
The Company operates over 400 retail and outlet stores in the United States and Puerto Rico, the express.com online store and the Express mobile app.
Location Name
Columbus Corporate Headquarters
Responsibilities
The Technical Consultant is responsible for designing, building, and maintaining scalable data infrastructure and systems that support analytics, machine learning, and business intelligence initiatives. This role combines technical expertise with leadership, guiding a team of data engineers while collaborating cross-functionally to ensure data is reliable, accessible, and aligned with organizational goals.
In this role, you will:
Lead the design and evolution of enterprise-scale data architecture across GCP, BigQuery, Snowflake, PostgreSQL, and Databricks environments
Build, optimize, and maintain scalable ELT pipelines using Python, dbt, Hevo, Spark, Informatica, and Fivetran
Define and implement data modeling standards to support analytics, reporting, and data science use cases
Drive data quality, governance, and lineage initiatives to ensure trusted and compliant data assets
Partner with cross-functional teams—including analytics, data science, product, and business stakeholders—to deliver impactful data solutions
Establish best practices for DataOps, including CI/CD, testing, monitoring, and deployment of data pipelines
Optimize performance and cost efficiency across large-scale data platforms and cloud infrastructure
Lead incident response and root cause analysis for data pipeline and platform issues
Translate business requirements into scalable, maintainable, and secure data engineering solutions
KEY RESPONSIBILITIES
Lead technical design and implementation of data engineering solutions, ensuring best practices and high-quality deliverables.
Mentor and guide junior engineers, conducting code reviews and technical sessions to foster team growth.
Perform detailed analysis of raw data sources by applying business context, and collaborate with cross-functional teams to transform raw data into data products.
Create scalable and trusted data pipelines that generate curated data assets in centralized data lake/data warehouse ecosystems.
Monitor and troubleshoot data pipeline performance, identifying and resolving bottlenecks and issues.
Create and maintain effective documentation for projects and practices, ensuring transparency and effective team communication.
Provide technical leadership and mentorship, driving continuous improvement in building reusable and scalable solutions.
Design, build, and enhance semantic layer content, including shared dbt models and reusable components that support scalable analytics.
REQUIRED EXPERIENCE & QUALIFICATIONS
Education: Bachelor's Degree or Advanced/Master's Degree in Computer Science, Software Engineering, Economics, Statistics, Applied Math, or other quantitative disciplines.
7-10 years of experience working with Python, SQL, PySpark, and bash scripts. Proficient in software development lifecycle and software engineering practices.
5+ years of experience developing and maintaining robust data pipelines for both structured and unstructured data for advanced analytical and reporting use cases.
3+ years of experience working with Cloud Data Warehousing (Redshift, Snowflake, Databricks SQL, BigQuery or equivalent) platforms and distributed frameworks like Spark.
Hands-on experience with CI/CD tools (e.g., Jenkins or equivalent), version control (GitHub, Bitbucket), and orchestration tools (Airflow, Prefect, or equivalent).
Strong knowledge of data modeling, ETL/ELT design, and data warehousing methodologies.
CRITICAL SKILLS & ATTRIBUTES
Tools:
Strong experience with programming languages, including SQL and Python
Hands-on experience with cloud computing, preferably Google Cloud Platform
Advanced expertise with Microsoft Office products.
Traits:
Enterprise architecture and systems thinking
Leadership, mentoring, and team scaling
Cross-functional collaboration and stakeholder alignment
Strong problem-solving and decision-making skills
Focus on data reliability, performance, and governance
Closing
If you would like to know more about the California Consumer Privacy Act click here.
An equal opportunity employer, PHOENIX does not discriminate in recruiting, hiring or any other terms and conditions of employment on the basis of any federal, state, or locally protected characteristic. PHOENIX only hires individuals authorized for employment in the United States. PHOENIX is committed to providing reasonable accommodation to individuals with disabilities. If you need an accommodation to search and apply for a job position due to a disability, please call and say 'Associate Relations' or send an e-mail to and let us know the nature of your request and your contact information.
Notification to Agencies: Please note that PHOENIX does not accept unsolicited resumes or calls from third-party recruiters or employment agencies. In the absence of a signed Master Service Agreement and approval from HR to submit resumes for a specific requisition, PHOENIX will not consider or approve payment to any third-parties for hires made.