Full-time in San Francisco, CA
Pay: varies by client; call us for details
Position ID: 1931
An excellent position with an American machinery and engine manufacturer
* Senior Staff Engineer - AWS and Python *
Please apply ONLY if you have a relevant degree, experience building cloud-based systems using AWS, and Python coding skills
Visa sponsorship IS available for this position
We can ONLY consider your application if you have:
1: 10+ years of software engineering experience
2: 5+ years of experience building and supporting mission critical, global, production-scale, cloud-based systems using AWS
3: Demonstrated expertise in architecting and building large scale processing pipelines
4: Experience designing and implementing geospatial data structures, analysis systems, and visualizations
5: Strong coding skills in Python
6: Experience with C++, Java, and/or Scala
7: Experience with infrastructure-as-code, automation, monitoring, and a DevOps mindset
8: A solid foundation in computer science, particularly with algorithm development
9: Strong analytical skills with high attention to detail and accuracy
10: Strong communication skills
11: An ability to inform and influence others
12: Bachelor's degree in an Engineering, Computer Science, or Technology-related discipline
We are looking for an outstanding Senior Staff Engineer for the Intelligent Solutions Group to build a world-class data platform for advanced analytics and artificial intelligence. This team will design, develop, and support our Precision Agriculture analytics platform, enabling the ingestion, modeling, and efficient retrieval of data at petabyte scale. Our team builds production data products that use machine learning algorithms to enable intelligent, automated equipment and to improve farmer decision-making. Join a team that is passionate about making a difference by applying cutting-edge technology to some of the world's biggest problems.
DESIRED (not required) SKILLS:
:: Expertise with geospatial, IoT, or high-frequency unstructured data sets
:: Ability to function in a fast-paced, collaborative team environment distributed across time zones and locations
:: Experience with Spark or Hadoop
:: Experience with GDAL or similar geospatial libraries
:: Experience with machine learning techniques or tools
:: A strong track record of contributions to the open source community
:: Advanced degree in engineering or computer science preferred
Duties and Responsibilities
== Write clean, well-tested code to enable the ingestion, storage, retrieval, and transformation of large-scale geospatial data for analysis, research, and model development.
== Design and build high-performance data pipelines that are efficient and reliable, with robust monitoring.
== Design and build next-generation data structures and APIs that enable secure, performant, cost-effective access to data for research and model development.
== Identify, adopt, and advocate for engineering best practices that drive high performance and set high expectations for other engineers.
== Collaborate and communicate closely with data scientists and engineers to identify and build needed infrastructure, tools, and libraries to support machine learning algorithms.
== Coordinate with other teams to ensure appropriate security and data governance controls are maintained for all use cases.
== Provide technical guidance, code reviews, and mentoring to other members of the team.
Please send your resume as a Microsoft Word attachment to
Amarx Search, Inc. amarx.com