Senior Data Engineer

Location:
Remicourt, Aisne, 02100, France
Salary:
55000
Posted:
October 19, 2025

Resume:

BARTŁOMIEJ KRÓL
SENIOR DATA ENGINEER (AWS)

******************@*****.*** | +48-54-264-**** | Mazowieckie, Poland | https://www.linkedin.com/in/bartłomiej-król-550082389

SUMMARY

Senior Data Engineer with over 10 years of hands-on experience designing and building scalable data architectures, automating ETL pipelines, and optimizing data infrastructure for large organizations across multiple industries. Strong foundation in cloud services, particularly AWS, for delivering efficient, secure, and reliable data solutions. Proven ability to implement real-time analytics systems, improve operational performance, and lead cross-functional teams to deliver results on time and within budget. Skilled in building end-to-end data pipelines, integrating machine learning models, and optimizing data models for complex use cases in technology, finance, e-commerce, healthcare, and manufacturing. Proficient in big data technologies, containerization, and orchestration tools. Adept at mentoring junior engineers and collaborating with stakeholders to build data systems that contribute directly to business growth, and an advocate for best practices in code quality, testing, and efficient workflows on data-centric projects.

WORK EXPERIENCE

Senior Data Engineer, Denologix | 10/2023 – 09/2025 | Toronto, Canada (Remote)

- Led the design and implementation of a high-performance data pipeline architecture using AWS Glue, Lambda, and Kinesis to process real-time transaction and customer data. Streamlining the pipeline reduced processing time by 40%, enabling faster, data-driven decision-making across business operations.
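
For illustration, a minimal sketch of the kind of Lambda consumer such a Kinesis pipeline might use; the record fields (transaction_id, amount) and the absence of a downstream write are simplifying assumptions, not the production implementation.

    import base64
    import json

    def handler(event, context):
        """Decode Kinesis records delivered to Lambda and count the usable ones."""
        processed = 0
        for record in event.get("Records", []):
            # Kinesis payloads arrive base64-encoded inside the event envelope
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            # Hypothetical fields; the real transaction schema differs
            transaction_id = payload.get("transaction_id")
            amount = payload.get("amount")
            if transaction_id is None or amount is None:
                continue  # skip malformed records rather than failing the whole batch
            processed += 1
        return {"processed": processed}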

- Collaborated with product managers, engineers, and business stakeholders to align data solutions with the company's strategic goals. This collaborative effort resulted in delivering accurate and timely insights, directly contributing to critical business initiatives.

- Leveraged AWS Redshift and PostgreSQL to design optimized data models that enhanced query performance and improved reporting speed by up to 35%, allowing the company to surface insights and make decisions faster.
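
A rough sketch of the distribution and sort-key design this kind of Redshift modeling involves, run from Python; the table, columns, schema, and connection details are hypothetical placeholders.

    import psycopg2  # Redshift accepts PostgreSQL drivers such as psycopg2

    # Illustrative DDL: co-locate joins on customer_id and sort by event date
    # so range-restricted reports scan fewer blocks. All names are hypothetical.
    DDL = """
    CREATE TABLE IF NOT EXISTS analytics.fact_transactions (
        transaction_id BIGINT,
        customer_id    BIGINT,
        event_date     DATE,
        amount         DECIMAL(12, 2)
    )
    DISTSTYLE KEY
    DISTKEY (customer_id)
    SORTKEY (event_date);
    """

    conn = psycopg2.connect(host="redshift-host", port=5439, dbname="analytics",
                            user="etl_user", password="***")  # placeholder credentials
    with conn, conn.cursor() as cur:
        cur.execute(DDL)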

- Developed automated data validation systems and error-handling mechanisms, ensuring high-quality data processing. This improved pipeline consistency, reducing errors by 20% and improving the overall quality of data delivered to stakeholders.

- Managed and mentored a growing team of junior engineers, teaching best practices in cloud-native technologies and data pipeline automation. This effort resulted in a 30% improvement in team productivity, allowing the team to complete tasks faster and with fewer issues.

- Directed the successful migration of legacy data infrastructure to modern cloud-based solutions, allowing the organization to scale efficiently and reduce operating costs by 25%.

- Advocated for the adoption of CI/CD practices within the team, reducing manual deployment processes and speeding up deployment times by 25%. This ensured rapid and reliable delivery of data updates.

- Designed and implemented customized data solutions that helped streamline operational performance analytics, reducing reporting timelines by 35%. This contributed directly to operational efficiency, enabling faster responses to critical business metrics.

- Ensured high availability and robustness of data systems by conducting extensive testing and profiling of the data pipelines, achieving 99.9% uptime for key systems.

Senior Data Engineer, IT Labs | 10/2020 – 09/2023 | London, England (Remote)

- Designed and implemented highly available, real-time data pipelines using AWS Redshift and PostgreSQL to process transactional data, ensuring accurate reporting for business intelligence and analytics in near real-time.

- Built interactive and high-performance reporting dashboards in AWS QuickSight and Tableau, enabling decision-makers to access actionable insights quickly and reducing reporting time by 50%.

- Collaborated with data scientists to integrate predictive analytics models for fraud detection, enabling proactive identification of financial risks and minimizing the impact of fraudulent activities.

- Conducted performance optimization of SQL queries, indexing strategies, and partitioning, resulting in a 30% increase in reporting speed and improving the overall efficiency of data processing workflows.
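
A minimal sketch of the partitioning and indexing pattern this refers to, expressed as PostgreSQL DDL applied from a Python migration script; every identifier and the connection string are illustrative placeholders.

    import psycopg2

    # Illustrative optimization: range-partition a large transactions table by month
    # and add a composite index matching the most common report filter.
    STATEMENTS = [
        """
        CREATE TABLE IF NOT EXISTS transactions (
            id          BIGINT,
            account_id  BIGINT NOT NULL,
            created_at  TIMESTAMPTZ NOT NULL,
            amount      NUMERIC(12, 2)
        ) PARTITION BY RANGE (created_at);
        """,
        """
        CREATE TABLE IF NOT EXISTS transactions_2023_01
            PARTITION OF transactions
            FOR VALUES FROM ('2023-01-01') TO ('2023-02-01');
        """,
        # Composite index so reports filtering on account and date range avoid full scans
        "CREATE INDEX IF NOT EXISTS idx_tx_account_date ON transactions (account_id, created_at);",
    ]

    conn = psycopg2.connect("dbname=reporting user=etl_user password=***")  # placeholder DSN
    with conn, conn.cursor() as cur:
        for stmt in STATEMENTS:
            cur.execute(stmt)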

- Designed and deployed real-time dashboards that provided key business insights, reducing the time to actionable insights by 50% and empowering stakeholders to make data-driven decisions promptly.

- Implemented data quality automation tools, reducing manual intervention by 35%, improving pipeline reliability, and ensuring that business-critical reports were generated without delay.

- Worked with cross-functional teams to ensure the integration of diverse data sources, including transactional and operational data, which resulted in the creation of a unified platform that provided holistic business intelligence.

- Introduced data security measures and ensured compliance with GDPR, PCI DSS, and other regulatory standards, securing sensitive financial data and guaranteeing its integrity during storage and transmission.

Data Engineer, MITRIX Technology | 09/2017 – 09/2020 | Warszawa, Poland (Remote)

- Developed and optimized ETL pipelines to ingest and process over 100GB of data daily, improving processing efficiency by 30% and ensuring that data was ready for business intelligence analysis within minutes of collection.

- Created a recommendation engine using machine learning algorithms to personalize product suggestions based on user activity, resulting in a 12% increase in product conversion rates and improving the overall user experience.
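
As a simplified sketch of the item-based collaborative filtering approach such an engine can use; the interaction matrix below is synthetic, and the production model and features were more involved.

    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    # Synthetic user x item interaction matrix (e.g. views or purchases).
    # Rows are users, columns are products; real data comes from activity logs.
    interactions = np.array([
        [1, 0, 3, 0, 2],
        [0, 2, 0, 1, 0],
        [4, 0, 1, 0, 0],
        [0, 1, 0, 5, 1],
    ])

    # Item-item similarity derived from co-occurrence of user activity
    item_similarity = cosine_similarity(interactions.T)

    def recommend(user_idx, top_n=3):
        """Score unseen items by similarity to items the user already interacted with."""
        user_vector = interactions[user_idx]
        scores = item_similarity @ user_vector
        scores[user_vector > 0] = -np.inf  # do not re-recommend items already seen
        return np.argsort(scores)[::-1][:top_n]

    print(recommend(user_idx=0))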

- Designed customer segmentation models that enabled more effective and targeted marketing campaigns, improving customer retention by 15% and increasing marketing ROI.

- Utilized Apache Spark for distributed data processing, which improved the scalability of the reporting tools by 25%, reducing the time taken to generate insights and speeding up data-driven decisions.
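
A minimal PySpark sketch of the kind of distributed aggregation this involved; the input path, output path, and column names are placeholders.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-sales-rollup").getOrCreate()

    # Hypothetical event data; in practice this was partitioned source data in object storage
    events = spark.read.parquet("s3://example-bucket/events/")  # placeholder path

    # Distributed aggregation: daily revenue and order counts per product category
    daily_rollup = (
        events
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("event_date", "category")
        .agg(
            F.sum("amount").alias("revenue"),
            F.countDistinct("order_id").alias("orders"),
        )
    )

    daily_rollup.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/reports/daily_rollup/"  # placeholder output path
    )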

- Built predictive models for inventory management, helping the business optimize stock levels and reduce stockouts by 20%, improving the availability of products and customer satisfaction.

- Integrated data from multiple sources such as transactional logs, customer interactions, and inventory management systems into a unified platform, providing a comprehensive view of customer engagement and enabling better business decisions.

- Developed and deployed a customer-facing dashboard for real-time insights into sales performance, product trends, and customer behavior, directly helping marketing and sales teams to improve strategies.

- Automated data quality checks, reducing anomalies in the processed data by 30% and ensuring that data used for decision-making was accurate and consistent.
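
A small sketch of the style of automated quality check described above, using pandas; the thresholds, column names, and sample batch are illustrative only.

    import pandas as pd

    def run_quality_checks(df: pd.DataFrame) -> list[str]:
        """Return a list of human-readable failures; an empty list means the batch passes."""
        failures = []
        # Completeness: key columns should not exceed a small null budget
        for col in ("order_id", "customer_id", "amount"):  # hypothetical columns
            null_rate = df[col].isna().mean()
            if null_rate > 0.01:
                failures.append(f"{col}: null rate {null_rate:.2%} exceeds 1% budget")
        # Uniqueness: the primary key must not contain duplicates
        if df["order_id"].duplicated().any():
            failures.append("order_id: duplicate keys found")
        # Validity: monetary amounts must be non-negative
        if (df["amount"] < 0).any():
            failures.append("amount: negative values found")
        return failures

    # Example: fail the pipeline step if any check breaks
    batch = pd.DataFrame({"order_id": [1, 2, 2],
                          "customer_id": [10, 11, None],
                          "amount": [5.0, -1.0, 3.0]})
    problems = run_quality_checks(batch)
    if problems:
        raise ValueError("Data quality checks failed: " + "; ".join(problems))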

Software Engineer, TechMagic | 10/2014 – 08/2017 | Kraków, Poland

- Built and maintained data pipelines for Electronic Health Record (EHR) data, ensuring compliance with HIPAA regulations and secure data access for clinicians. This allowed healthcare professionals to make real-time, data-driven decisions based on patient information.

- Implemented machine learning models to predict patient outcomes, optimizing care pathways and improving outcomes by 20%, ultimately leading to better patient management.
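
A heavily simplified sketch of the modeling approach; the features and data below are synthetic stand-ins, and the real EHR features, validation, and compliance controls were far more extensive.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for de-identified clinical features (e.g. age, lab value, prior visits)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)
    probs = model.predict_proba(X_test)[:, 1]
    print(f"Held-out AUC: {roc_auc_score(y_test, probs):.3f}")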

- Integrated multiple healthcare data sources, including lab results, treatment history, and patient demographics, into a unified data warehouse, enabling real-time analysis of patient data.

- Collaborated with medical researchers and clinicians to develop data tools that cut the time for generating insights from weeks to days, helping the medical team act faster on important data.

- Integrated real-time data from medical devices into the data pipeline, enabling continuous monitoring of patient vitals and improving critical care response times.

- Supported clinical trials by ensuring the integrity of the data being collected, optimizing resource allocation, and improving patient recruitment efficiency, which led to faster trial completion.

- Developed custom reporting tools that allowed hospitals to generate timely, actionable reports on patient care, reducing readmission rates and improving hospital resource management.

SKILLS

Data Engineering & Cloud: Apache Spark, AWS Glue, AWS Lambda, Athena, Kinesis, DynamoDB, S3, Redshift, Airflow, Databricks, Kafka, SQL, PostgreSQL, MongoDB, NoSQL, ETL Pipelines, Data Warehousing

Big Data Technologies: Hadoop, Hive, Apache Flink, HBase, Apache Storm, Apache Kafka, Apache Cassandra, Apache NiFi, Presto, Elasticsearch

Programming Languages: Python (Pandas, NumPy, Scikit-learn, TensorFlow), SQL, Java, Scala, Shell Scripting, Go

Machine Learning & Data Science: Predictive Modeling, LSTM, GANs, TensorFlow, PyTorch, Scikit-learn, Feature Engineering, Deep Learning, NLP

Containerization & Orchestration: Docker, Kubernetes, Terraform, Helm, OpenShift

CI/CD & DevOps: Jenkins, Git, Bitbucket, GitHub, Apache Airflow, CircleCI, Travis CI, Ansible, Maven, Docker Swarm

EDUCATION

Bachelor of Science in Computer Science, Wrocław University of Science and Technology | 10/2011 – 09/2014


