
Data Systems Engineer – ELK/Kafka/Linux

Company:
MARKS IT Solutions
Location:
Alpharetta, GA
Posted:
February 18, 2026

Description:

This is a hands-on, client-facing engineering role within the Real-Time Operations Intelligence (RTOI) team, supporting large-scale streaming data platforms. The position requires close collaboration with cross-functional teams and support for hundreds of internal users, so strong communication skills and a team-oriented mindset are essential.

Below is a brief overview of the position:

Job Title: Data Systems Engineer – ELK/Kafka/Linux

Location: Alpharetta, GA or Menlo Park, CA (Onsite)

Experience Level: 7–15 years

Required Qualifications:

7+ years of overall IT/application development experience

5+ years of hands-on coding experience in at least one language: Python (preferred), Ruby, Shell, Java, C/C++, or Go

2+ years of data engineering experience with Kafka

Strong working knowledge of the Linux platform (application deployment, debugging, performance tuning, and CPU/memory troubleshooting)

Experience building and supporting real-time ETL/streaming pipelines using Kafka and the ELK stack (Elasticsearch/Logstash/Kibana); a minimal sketch of this kind of pipeline follows this list

Experience running, deploying, and supporting applications in large-scale Linux cluster environments

Strong understanding of distributed systems architecture, scalability, and Kafka ecosystem design trade-offs

SQL and database experience

Ability to handle full software development lifecycle (requirements, design, implementation, testing, deployment, and support)

Strong debugging and production support experience (ensuring jobs are up and running for hundreds of users)

Excellent communication skills, team-oriented mindset, and strong curiosity/learning ability
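
To ground the pipeline requirement above, here is a minimal sketch of the kind of Kafka-to-Elasticsearch streaming job this role supports. It assumes the open-source kafka-python and elasticsearch Python client libraries; the topic, index, consumer group, and broker addresses are hypothetical placeholders. It illustrates the skill set, not the employer's actual codebase.

    # Minimal Kafka -> Elasticsearch streaming sketch (illustrative only).
    # Assumes kafka-python and the elasticsearch client; topic "app-logs",
    # index "rtoi-logs", group "rtoi-indexer", and the addresses below are
    # all hypothetical placeholders.
    import json

    from kafka import KafkaConsumer                    # pip install kafka-python
    from elasticsearch import Elasticsearch, helpers   # pip install elasticsearch

    consumer = KafkaConsumer(
        "app-logs",                                # placeholder topic
        bootstrap_servers=["localhost:9092"],      # placeholder broker
        group_id="rtoi-indexer",                   # placeholder consumer group
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        enable_auto_commit=False,                  # commit offsets only after indexing
    )
    es = Elasticsearch("http://localhost:9200")    # placeholder cluster URL

    batch = []
    for message in consumer:
        batch.append({"_index": "rtoi-logs", "_source": message.value})
        if len(batch) >= 500:                      # bounded batches keep latency low
            helpers.bulk(es, batch)                # bulk-index the batch
            consumer.commit()                      # checkpoint after a successful write
            batch.clear()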

Preferred Qualifications:

Experience with the Snowflake database

Spark data processing experience

Hadoop ecosystem exposure

Observability and data analysis background

AWS or other cloud technologies

Elastic (ELK) certification, e.g. Elastic Certified Engineer

Financial industry experience (nice to have)

Bachelor’s degree (strong plus, not required)

Key Responsibilities:

Design and develop large-scale streaming ETL pipelines using Kafka and ELK

Deploy and manage applications in Linux-based production environments

Ensure scalability and reliability within Kafka cluster environments

Support hundreds of internal stakeholders and dashboards with real-time data needs

Troubleshoot and debug production issues across distributed systems (see the lag-check sketch after this list)

Work across on-prem and cloud-based platforms

Contribute to development, deployment, and ongoing operational support
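
As a flavor of the production-support side of the role, the sketch below measures consumer-group lag, a common first check that a streaming job is keeping up for its users. It again assumes kafka-python and reuses the hypothetical names from the sketch above.

    # Consumer-lag check sketch (illustrative only): compares the group's
    # committed offsets with the latest broker offsets per partition.
    from kafka import KafkaConsumer, TopicPartition

    consumer = KafkaConsumer(
        bootstrap_servers=["localhost:9092"],      # placeholder broker
        group_id="rtoi-indexer",                   # placeholder consumer group
        enable_auto_commit=False,
    )
    partitions = [
        TopicPartition("app-logs", p)              # placeholder topic (assumed to exist)
        for p in consumer.partitions_for_topic("app-logs")
    ]
    end_offsets = consumer.end_offsets(partitions)     # newest offset per partition

    total_lag = 0
    for tp in partitions:
        committed = consumer.committed(tp) or 0        # group's last committed offset
        total_lag += end_offsets[tp] - committed

    print(f"total consumer lag: {total_lag} messages")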

This role requires a well-rounded data engineer who thrives in large-scale distributed Linux environments and brings strong core application development skills, not just scripting ability.
