Build, test, validate, and deploy DevOps pipelines to deliver applications and services at a high rate. Write code and develop predictive/heuristic models. Identify and automate manual processes, maintain Continuous Integration and Continuous Delivery (CI/CD) pipelines, and provide design, analysis, and performance-model improvements. Write unit tests to ensure consistent abstraction/interface behavior, and write integration tests to ensure systems capture business requirements and are properly integrated with downstream business customers. Conduct code reviews and quality analysis, lead knowledge transfer sessions, and provide technical support. Transfer machine learning models to analytics platforms. Build and maintain AWS infrastructure. Build tools to automate the conversion of on-premises models for hosting in the cloud. Build, maintain, and orchestrate Apache Spark-based models. Perform model validation to ensure models produce the expected output, and support consumption of model output. Design and maintain developer libraries/modules for structured analytic models. Develop, monitor, and provide production support for streaming data infrastructure and pipelines. Build, monitor, and audit tools and compute infrastructure. Develop infrastructure to support and enable resource and identity management within Databricks. Hybrid work schedule.
This position requires a bachelor’s degree or foreign equivalent in Computer Science, Information Technology, Engineering, or a related field, plus three years of experience in software engineering. Experience must include Apache Spark, AWS, database management, Linux, stream processing, Infrastructure as Code (IaC), SQL, Python, Scala, Bash, data engineering, Databricks, CI/CD, containerization, DevOps, and version control.
Please copy and paste your resume into the body of an email (do not send attachments, as we cannot open them) and send it to candidates@placementservicesusa.com with reference #0251372 in the subject line.
Thank you.