Job Details
Optional Work from Home
Description
Walker Overview
Walker is a full-service Experience Management (XM) firm. We believe everyone deserves an amazing experience. This is our purpose, and we fulfill it by providing the world’s leading brands with the services, guidance, and best practices needed to maximize the value of their XM programs.
Walker is located in Indianapolis, IN. We are open to on-site, hybrid, or remote work locations to meet the varying needs of our team members. Remote options are available from any continental U.S. state in which we have operations. Walker is intentional and mindful about creating a workforce of diverse people who are compensated fairly and are free to be their authentic selves. We know doing this will further enhance the experience our associates and customers have with our company.
Summary
You will help design and implement data engineering solutions leveraging Databricks and Apache Spark, primarily in AWS or Azure environments. You’ll support data pipelines and ETL/ELT processes, and collaborate closely with both internal teams and clients to deliver impactful, scalable solutions.
Responsibilities
• Develop, optimize, and maintain data pipelines in Databricks using PySpark/Spark SQL.
• Work with cloud data platforms (AWS or Azure) to ingest and transform data using Delta Lake (a minimal sketch of this kind of pipeline follows this list).
• Engineer and support projects on the Databricks platform for internal and external stakeholders, including larger-scope engagements built entirely on Databricks.
• Collaborate with business and technical stakeholders to translate requirements into actionable solutions, ensuring strong communication throughout.
• Implement monitoring, logging, and performance tuning for pipelines.
• Ensure data quality, integrity, and security using best practices.
• Move quickly to develop Databricks-based products and solutions that Walker can offer in the marketplace.
• Document workflows, orchestrate them with tools such as Airflow or Jenkins, and maintain version control with Git.
• Consistently deliver robust, end-to-end Databricks pipelines supporting business reporting and analytics.
• Proactively optimize workflows and maintain cost-effective cluster configurations.
• Collaborate smoothly with internal teams and external stakeholders, contributing effectively to agile sprint cycles.
• Demonstrate continuous learning through available upskilling or external training.
• Other duties as assigned.
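For illustration only (not Walker’s actual code): a minimal PySpark/Delta Lake sketch of the kind of pipeline work described in the first two responsibilities above. The storage path and table name are hypothetical placeholders.

from pyspark.sql import SparkSession, functions as F

# On Databricks, a SparkSession named `spark` is provided automatically;
# builder.getOrCreate() keeps the sketch self-contained elsewhere.
spark = SparkSession.builder.getOrCreate()

# Ingest raw records from cloud storage (S3 shown; an ADLS path works the same way)
raw = spark.read.json("s3://example-bucket/raw/survey_responses/")

# Light transformation: parse the timestamp and drop rows missing a key field
cleaned = (
    raw.withColumn("responded_at", F.to_timestamp("responded_at"))
       .filter(F.col("respondent_id").isNotNull())
)

# Land the result in a Delta Lake table for downstream reporting and analytics
cleaned.write.format("delta").mode("append").saveAsTable("analytics.survey_responses")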
Education & Experience
• 1+ years of experience working in Data Engineering or Python-based application development on Databricks or Apache Spark.
• Bachelor’s degree in Computer Science, Data Science, Information Technology, or a related field; or equivalent professional experience.
• Hands-on experience with at least one major cloud platform (AWS or Azure).
• Experience working on projects dedicated to Databricks, ideally in a client-facing or consulting environment.
Knowledge, Skills & Abilities
• Proficiency in Python and SQL.
• Familiarity with Unity Catalog, Databricks’ unified governance layer.
• Strong understanding of ETL/ELT processes and Delta Lake.
• Excellent communication skills for working directly with internal and external stakeholders.
• Ability to proactively translate requirements into effective solutions.
• Self-starter mindset with a passion for emerging data technologies.
• Strong problem-solving and analytical skills.
• Client-facing skills preferred; ability to build trust and communicate value clearly.
• This role requires travel to client sites, expected approximately once per month.
Preferred Qualifications
• Databricks Certified Data Engineer Associate or similar certifications.
• Experience with PySpark or Scala.
• Exposure to Delta Live Tables (DLT).
• Experience with data workflow orchestration tools such as Airflow or Jenkins (a minimal sketch follows this list).
• Prior consulting or client-facing project experience.
• Experience with machine learning concepts.
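As context for the orchestration tools mentioned above, here is a minimal Airflow sketch (assuming the apache-airflow-providers-databricks package is installed; the connection ID and job ID are hypothetical placeholders) that triggers an existing Databricks job on a daily schedule:

from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

# Minimal daily DAG that triggers a pre-existing Databricks job.
with DAG(
    dag_id="daily_databricks_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    DatabricksRunNowOperator(
        task_id="run_databricks_job",
        databricks_conn_id="databricks_default",  # hypothetical connection ID
        job_id=123,  # hypothetical Databricks job ID
    )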
Walker is an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.