Data Engineer (4168-1) Columbus, OH
Databricks, ETL pipelines, Docker for containerization, REST APIs in Python
Experience level: Mid-senior
Experience required: 10 Years
Education level: Bachelor's degree
Job function: Information Technology
Industry: Information Technology and Services
Total positions: 1
Relocation assistance: No
Visa sponsorship eligibility: No
Job Summary
Strong experience in Databricks
Expertise in implementing batch and real-time data processing solutions using Azure Data Lake Storage, Azure Data Factory, and Databricks
Experience in building ETL pipelines for ingesting, transforming, and loading data from multiple sources into cloud data warehouses
Proficient in Docker for containerization and in REST APIs in Python for system integration, applying containerization to improve deployment efficiency and scalability
Experience in data extraction, acquisition, transformation, manipulation, performance tuning, and data analysis
Skilled in using Python libraries to build efficient data processing workflows and streamline ETL operations across large datasets and distributed systems
Expertise in automating data quality checks (reducing data errors by 40%) and ensuring reliable reporting and analytics with data marts
Proficient in data orchestration and automation using tools such as Apache Airflow, together with Python and PySpark, for end-to-end ETL workflows
Experience in deployment activities