Client: BNY
Title: Big Data Developer (Spark/PySpark/Python)
Location: Pittsburgh, PA (1-2 days on-site required)
Duration: 6+ months (expected to be extended)
Interview Process: Video interview followed by a mandatory on-site interview. Relocation candidates are acceptable, but they must be able to travel to Pittsburgh, PA for the on-site interview (no expenses will be reimbursed).
Day-to-Day Responsibilities
The Lead Developer provides application software development services or technical support in situations of moderate complexity.
May also be responsible for requirements gathering and BRD/SRD preparation.
Has thorough knowledge of the Software Development Life Cycle. Conducts reviews of the test plan and test data.
Writes new programs of moderate complexity and scope, working with basic application system designs and specifications, utilizing BNY Mellon's standard development methodology, procedures and techniques.
Designs and codes programs, and creates test transactions and runs tests to find errors and revise programs.
Prepares the final, detailed versions of system modification requirements and ensures turnovers are completed correctly and on time.
Interfaces with architects to design, code, test and implement application programs.
Conducts analysis of organizational needs and goals for the development and implementation of application systems.
Proposes innovative, creative technology solutions.
Contributes to the achievement of related teams' objectives.
Candidate must be proficient in Spark and the Python language.
Requirements
Bachelor's degree in computer science, engineering, or a related discipline, and a minimum of 7-10+ years of advanced software development/big data development experience.
Minimum of 5+ years of experience in Python development.
Minimum of 5+ years of experience with Apache Spark and its components (Spark SQL, Streaming, MLlib, GraphX) using PySpark.
2-3+ years of Hadoop/Big Data experience (does not have to be recent).
2+ years of experience with Kafka and/or Impala.
Experience with GitLab for CI/CD pipelines/deployment.
Understanding of Microservice Architecture.
Demonstrated ability to write efficient, complex queries against large data sets.
Knowledge of data warehousing principles and data modeling concepts.
Proficient understanding of distributed computing principles.
Required Skills: Minimum of 7-10+ years of advanced software development/big data development experience. Minimum of 5+ years of experience in Python development. Minimum of 5+ years of experience with Apache Spark and its components (Spark SQL, Streaming, MLlib, GraphX) using PySpark. 2-3+ years of Hadoop/Big Data experience (does not have to be recent). 2+ years of experience with Kafka and/or Impala. Experience with GitLab for CI/CD pipelines/deployment. Understanding of Microservice Architecture.
Background Check: Yes
Drug Screen: Yes
Notes:
Selling points for candidate:
Project Verification Info: "The information provided below is for Apex Systems AV use only and is not to be distributed publicly, or to any third party. Any distribution of the below information will result in corrective action from Apex Systems Vendor Management. MSA: Blanket Approval Received Client Letter: Will Not Provide"
Candidate must be your W2 Employee: Yes
Exclusive to Apex: No
Face to face interview required: Yes
Candidate must be local: No
Candidate must be authorized to work without sponsorship: No
Interview times set: No
Type of project: 0008892 Pontoon @ Bank Of New York Mellon Corp.
Master Job Title:
Branch Code: