Our client seeks a TigerGraph Engineer/Developer for a long-term project in Charlotte, NC. Below is the detailed requirement.
Title: TigerGraph Engineer/Developer
Location: Charlotte, NC
Duration: 12 Months
Job Description:
Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or a related IT discipline, or equivalent experience, with a minimum of 5+ years of experience
Graph database experience: TigerGraph, Neo4j, GraphQL, or GSQL
The role:
The TigerGraph Developer will work with a Cyber Crime Defense team to design, implement, and maintain graph database solutions using TigerGraph's platform to support cyber fraud detection and mitigation.
Responsibilities:
The candidate will be responsible for creating and optimizing data models, writing complex GSQL queries, and ensuring the performance and scalability of our graph database solutions.
The candidate will play an integral role in developing applications that deliver actionable insights from complex, interconnected data.
The candidate should be able to connect the dots across different data feeds in RDBMS/Hadoop and fetch that data into the graph using SQL queries.
Graph Data Modeling: Design and build efficient and scalable graph data models based on business requirements.
Query Development: Write and optimize complex queries using GSQL, TigerGraph’s query language, to retrieve and analyze data from large-scale graph databases.
Integration: Integrate TigerGraph with other systems, applications, and data pipelines (e.g., ETL processes, APIs).
Optimization: Monitor the performance of graph queries and work on optimization strategies for large-scale deployments.
Collaboration: Work closely with cross-functional teams (e.g., data engineers, data scientists and cybercrime analysts/investigators) to ensure successful deployment and integration of graph-based solutions.
Maintenance & Troubleshooting: Maintain and troubleshoot existing graph databases to ensure smooth operation and resolve any issues that arise.
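To illustrate the integration responsibility above, a minimal Python sketch of calling a TigerGraph installed query over its REST++ interface is shown below. It assumes the conventional REST++ endpoint shape (`/query/<graph>/<query>`) and token authentication; the host, graph name, and query name are hypothetical placeholders for this environment.

```python
import json
import urllib.parse
import urllib.request

def installed_query_url(host, graph, query):
    """Build the REST++ URL for an installed GSQL query.

    Assumes the conventional endpoint shape /query/<graph>/<query>;
    verify against your TigerGraph version's REST++ reference.
    """
    return f"{host}/query/{graph}/{query}"

def run_installed_query(host, graph, query, token, **params):
    """Run an installed query over REST++, authenticating with a bearer token."""
    url = installed_query_url(host, graph, query)
    if params:
        url += "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode())

# Hypothetical usage -- host, graph, and query names are placeholders:
# result = run_installed_query("http://tg-host:9000", "FraudGraph",
#                              "flag_suspicious_accounts", token="...")
```

In practice the same pattern feeds a data pipeline step: an ETL job or API layer calls the installed query and forwards the JSON result downstream.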
Requirements:
Able to implement popular graph algorithms such as community detection, similarity, and centrality on existing data.
Well versed in REST APIs using Python.
Familiar with Kafka for data transfer and ingestion
Experience with job schedulers such as Autosys
MySQL
HDFS/Hadoop platform
Familiarity with the Expero graph visualization tool is a plus
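For illustration, the centrality and similarity algorithms named above can be sketched in plain Python on a toy graph; in production these would typically run in GSQL or via TigerGraph's built-in graph algorithm library. The edge data here is hypothetical.

```python
from collections import defaultdict

# Toy undirected graph of account interactions (hypothetical data).
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def degree_centrality(adj):
    """Normalized degree centrality: degree / (n - 1)."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def jaccard_similarity(adj, u, v):
    """Neighborhood overlap between two vertices: |N(u) & N(v)| / |N(u) | N(v)|."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0
```

In a fraud-detection setting, high-centrality vertices can surface hub accounts, and high pairwise similarity can flag accounts sharing an unusual number of counterparties.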