Dayton, OH
... •Improved risk assessment accuracy by 30% by developing and optimizing a scalable data pipeline and robust data architecture with Hadoop, AWS RDS, and NoSQL in an Agile environment. •Enhanced operational efficiency and deployment reliability by ...
- Jul 11
Cleveland, OH
... Primary skills are Cobol (Mainframe) and Informatica; Hadoop. Responsible for ensuring code has a high degree of quality and is implemented with zero defects. Also, will act as support backup for handling production calls related to applications in ...
- Jun 23
Avon Lake, OH
... Athena, SNS, SQS), Azure (Synapse Analytics, ADLS Gen2, Data Factory, Functions), GCP (DataProc), Snowflake, Databricks, Hadoop, Hive, YARN ETL & Orchestration: Apache Airflow, AWS Glue workflows, Azure Data Factory, AWS Step Functions Data ...
- Jun 18
Fairborn, OH, 45324
... ●Working knowledge of Apache Hadoop, Kafka, Spark, and LogStorage. ●Very strong knowledge and good experience with the monitoring tools like Amazon Cloud Watch, Splunk, and Datadog to monitor metrics like Load Balancer Logs and Network Logs. ●To ...
- Jun 10
Cleveland, OH
... Hands on experience across Hadoop Ecosystem that includes extensive experience in Big Data technologies like HDFS, Map Reduce, YARN, Apache Cassandra, HBase, Hive, Oozie, Impala, Pig, Zookeeper and Flume, Kafka, Sqoop, Spark. Expertise in leveraging ...
- Jun 03
Cincinnati, OH, 45202
Sravanthi Guduru 972-***-**** # *.*************@*****.*** ï LinkedIn Location: Dallas, TX Summary • Data Engineer with 4+ years of experience in Hadoop, Spark, Kafka, Hive, and Snowflake, delivering scalable, cloud-integrated big data solutions ...
- Jun 02
Kent, OH
... governance using Python, SQL, and cloud platforms (Azure, AWS). Experience in large scale Hadoop projects using Spark, Scala and Python that involves design, develop, and implementation of data models for enterprise level applications and systems. ...
- Jun 02
Cincinnati, OH
... JavaScript, HTML, CSS, Groovy Script, Spring Boot, Hibernate, React, Node.js Data Engineering & Analysis: Apache Spark (PySpark), Apache Kafka, Hadoop (HDFS, MapReduce), ETL, Databricks, Data Pipelines, Data Cleansing, Data Transformation, Airflow. ...
- Jun 02
Columbus, OH
... ● Created data pipelines to extract, transform, and load up to 80TB of data for use in new and in-development products using Scala, Spark, Kafka, Hadoop, AWS S3, NoSQL, and XSLT. ● Designed and developed applications to capture end-user activity ...
- May 31
Dayton, OH
... Data Engineering & Big Data: Apache Spark, Hadoop, Kafka, Airflow, ETL/ELT, Data Pipelines, Apache Beam, Databricks, Snowflake, dbt 5. Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda, Redshift), Microsoft Azure (Machine Learning Studio, Data ...
- May 30