Lead Cloud Data Engineer

Company:
Bey
Location:
San Francisco, California, United States
Salary:
$150-195K
Posted:
March 18, 2019

Description:

This is a direct-client opening for a Lead Cloud Data Engineer located in San Francisco, CA. This is a full-time position with a salary in the $150-195K range, and relocation is offered to non-local candidates.

Our client is looking to hire an experienced Lead (Big Data) Cloud Engineer to play a key role in designing and developing batch and real-time data acquisition pipelines and an analytics platform using the latest open-source big data technologies. In this role, the candidate will report to the Data Engineering Director in our west coast (San Francisco) office. Data as a Service (DaaS) is an enterprise platform team that organizes our data and makes it available to applications and business units. It collects, correlates, enriches, and manages our core data to produce a single view of truth for use at speed and at scale. We are passionate technologists who thrive on simple and elegant architecture and agility.

We own the architecture and development of comprehensive data management, bringing real-time operational use, near-real-time visibility use, and offline deep analytical use together in a cohesive and holistic way. Innovation and engineering prowess are at our core, keeping us ahead of our users' demands and the industry's changing landscape. Come get challenged in a fast, agile environment with other A players.

Key Accountabilities:

Design and develop scalable data pipelines for optimal ingestion, transformation, storage, and computation using the latest big data technologies.

Work with engineering, product, and business teams to understand requirements and evaluate new features and architecture to help drive decisions.

Design, develop, and maintain automated ETL processes for efficient batch record matching across multiple large-scale datasets.

Translate complex business requirements into a scalable, efficient, and highly available data platform. Take strong ownership of the full backend stack, from design through deployment and beyond.

Work within a continuous integration (CI/CD) framework, building regression-testable code for the data platform using GitHub, Jenkins, Cloud Build, Spinnaker, and related applications.

Actively participate in code review and test solutions to ensure they meet best-practice specifications. Prototype ideas quickly using cutting-edge, new-generation technologies.

Build collaborative partnerships with architects, technical leads, and key individuals within other cross-functional groups. Work with business teams and engineers to define instrumentation and data requirements.

Lead projects independently and mentor other engineers on the team.

Skills Summary:

Bachelor’s or Master’s degree (or equivalent) in computer science or a related field, with 6+ years of directly related work experience.

Good understanding of data structures and algorithms.

Programming experience in Python, Java, or Scala.

Extensive experience with the Elastic Stack, Kafka, and Spark.

Experience with big data technologies such as Hadoop, MapReduce, Hive, Spark/Storm, and BigQuery.

Experience with time series and NoSQL databases is a plus.

Experience working with containers and Kubernetes is a plus.

Experience designing, implementing, and scaling big data solutions on a public cloud.

Experience with Unix/Linux and shell scripting.