... Experience or knowledge of big data tools, e.g., Hadoop, Spark, Kafka. Experience or knowledge of software engineering tools/practices, e.g., GitHub, VSCode, CI/CD. Hands-on experience in designing and maintaining data schema life cycles. - Sep 22
... NoSQL DBs, Elasticsearch, Kafka, Docker, Spark, Storm, and Hadoop is preferred. Technologies such as Kotlin, Node, Netty, Nginx, Apache, JMS, Tomcat, Jersey, Hazelcast, and Redis. - Aug 28
... Spark/Hadoop administrators with Kubernetes will be a plus for this role. Key Responsibilities: Kubernetes and EKS Management: Design, deploy, and manage Kubernetes clusters on AWS EKS. Implement best practices for Kubernetes architecture, including ... - Aug 26
... well in a team environment. Desired Skills: Denodo experience a plus. Big Data experience a plus (Hadoop, MongoDB, Exadata). Compensation: W2 hourly rate of $55 to $75 per hour. Rates vary depending on your years and type of experience. ... - Sep 18
... Data experience (Hadoop, MongoDB, Exadata) Familiarity with data governance principles and practices Experience in the energy or utility sector Education: Bachelor's degree in Computer Science, Information Systems, Statistics, or a related field. ... - Sep 22
... Collaborate with data engineering and data science teams to integrate Java applications with Big Data technologies such as Hadoop, Spark, or similar platforms. Lead the implementation of Java-based microservices and APIs, leveraging Python for ... - Sep 07
... · End-to-end implementation/migration of a Data Warehouse and Big Data (Hadoop) project from on-prem to GCP (using Google BigQuery, Dataflow, Dataproc, etc.) is preferred. · Working knowledge of microservices using Docker and ... - Sep 13
... • Experience in Cloudera Hadoop/Big Data. • Leading a Product Development team to implement Big Data tools and technologies. PNC will not provide sponsorship for employment visas or participate in STEM OPT for this position. Job Description: Manages ... - Sep 20