The DE candidate relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and, above all else, is passionate about data and analytics. He/she is an expert in data modeling, ETL design, and business intelligence tools, and partners closely with the business to identify strategic opportunities where improvements in data infrastructure create outsized business impact. He/she is a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), and enjoys working in a fast-paced team. The ideal candidate possesses exceptional technical expertise in large-scale data pipelines and BI systems, with hands-on knowledge of SQL, distributed/MPP data storage, Hadoop, and AWS services such as EMR, S3, and Redshift.
· Design, implement, and support a platform providing access to large datasets
· Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift
· Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark
· Build and deliver high-quality datasets to support the reporting needs of business analysts and customers
· Interface with other technology teams worldwide to extract, transform, and load data from a wide variety of data sources using SQL and other AWS technologies
· Interface with business customers, gathering requirements and delivering complete reporting solutions
· Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
· Monitor and scale the team’s infrastructure
· Participate in strategic & tactical planning discussions, including annual budget processes
Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline
4+ years of industry experience in software development, data engineering, business intelligence, data science, or related field with a track record of manipulating, processing, and extracting value from large datasets
Experience using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.)
Experience using business intelligence reporting tools (e.g. Tableau, Business Objects, Cognos, Amazon QuickSight)
Knowledge of data management fundamentals and data storage principles
Knowledge of distributed systems as they pertain to data storage and computing
Proficiency in at least one modern programming language such as Java, Python, R, Ruby, or Scala
Ability to communicate in English
Experience defining system architectures and exploring technical feasibility trade-offs
Strong sense of ownership, bias for action, urgency, and drive
Experience in Agile/SCRUM enterprise-scale software development
Excellence in technical communication with peers and non-technical cohorts
Experience working with AWS big data technologies (EMR, Redshift, S3)
Experience providing technical leadership and mentoring other engineers on data engineering best practices