Position – Senior Data Engineer / DataOps Engineer
Experience: 10+ yrs
Location: Remote
Job Description:
The Role:
We are looking for a highly skilled and experienced Senior Data Engineer to strengthen the data team. If you are passionate about data, data quality, and data platform design, we want to hear from you!
Responsibilities:
Monitor and troubleshoot database issues related to reliability, scalability, and latency for Aisera's Agentic AI platform, which supports high-performance data pipelines and backend applications.
Contribute to data design and development best practices to meet database operations requirements.
Develop and maintain data quality standards and best practices for Aisera's data infrastructure.
Design and develop robust data architecture, supporting high-performance data pipelines and backend applications.
Collaborate with cross-functional teams (data engineers, data scientists, software engineers, and product managers) to ensure data quality, consistency, and accessibility for AI-powered solutions.
Optimize existing data models and query performance to ensure Aisera's AI models operate on high-fidelity data, enabling accurate and insightful results.
Work closely with business stakeholders and software developers to understand their data needs and translate them into data architecture specifications.
Review data designs produced by engineers, ensure they meet the defined standards, and provide feedback.
Help debug data and query performance issues and recommend optimizations based on data models.
Implement data expiration according to retention policies.
Recommend data migration strategies as needed.
Required Qualifications:
10+ years of experience working with databases: design, development, database monitoring, and performance tuning.
Strong proficiency in SQL, including complex query writing and performance optimization techniques.
Experience with relational databases (e.g., MariaDB, MySQL, PostgreSQL) and NoSQL databases.
Experience with columnar engines such as Snowflake, Redshift, and ClickHouse; columnar storage formats such as Parquet; and data warehousing.
Expertise in designing and building high-performance data pipelines using tools like Kafka.
Proficiency in programming languages such as Java and Python.
Experience working with cloud platforms (AWS, Azure, GCP) and cloud-native technologies such as Amazon RDS and Amazon Aurora.
Excellent communication and collaboration skills.
Ability to work independently and as part of a team.
Experience handling data pipelines and analytics over terabytes of data.
Preferred Qualifications:
Experience with data warehousing and data lake technologies.
Experience with database replication and high availability.
Knowledge of data governance and data quality best practices.
Experience with data visualization tools (e.g., Tableau, Power BI).
Experience working with query engines such as Athena, Presto, or other open-source engines.