Resumes 1 - 10 of 110
Columbus, OH
... CI/CD: Jenkins, Docker, Maven, Ant, Git, Ansible, Bitbucket, AWS CodeCommit Build Tools & Deployment: Maven, Ant, Jenkins, Spark, HDFS, YARN, CI/CD pipelines Version Control: GitHub, Bitbucket, AWS CodeCommit Web Services: SOAP, REST, CXF, JAX-WS, ...
- Jun 04
Columbus, OH
... ● Created data pipelines to extract, transform, and load up to 80TB of data for use in new and in-development products using Scala, Spark, Kafka, Hadoop, AWS S3, NoSQL, and XSLT. ● Designed and developed applications to capture end-user activity ...
- May 31
Columbus, OH
... Experience in installation, configuration, management, support, and monitoring of Hadoop clusters using various distributions such as Apache Spark, MapR, and the AWS service console. Good knowledge of ECS, VPC, Auto Scaling, Security Groups, AWS CLI, Cloud ...
- May 21
Columbus, OH
... AWS (Glue, Redshift, SageMaker, Lambda, Athena, S3, EMR, Kinesis, Firehose, IAM) ● Big Data & Workflow Automation: Apache Airflow, Spark, Hadoop, Kafka, ETL/ELT Pipelines ● Data Engineering: Data Extraction, Data Validation, Dimensional Modeling, Data ...
- May 16
Columbus, OH
... AWS Glue, and SSIS, and managing scalable big data environments with Apache Spark, Kafka, Azure Synapse, and Snowflake. ... • Experience managing and scheduling workflows with Apache Airflow on Hadoop clusters. • Strong DevOps skills, including CI ...
- May 09
Columbus, OH
... Trello Databases: PostgreSQL, MS SQL Server, MySQL, Oracle, Teradata, Snowflake Big Data Technologies: Hadoop, HDFS, MapReduce, Hive, Pig, Spark, Google BigQuery, AWS Redshift Cloud Platforms & Services: AWS (S3, Redshift, Glue, Lambda, EC2, IAM), ...
- May 09
Powell, OH
... ● Thoroughly examined and tested the data warehouse and intermediate databases using HiveQL for Hadoop. ● Tested the Hadoop Distributed File System (HDFS) to maintain large data sets across multiple nodes. ● Designed and defined ETL requirements, ...
- May 01
Columbus, OH
... in/challa-chandra-sekhar Professional Summary: IT Professional with 10 years of experience, including 5 years as a Data Engineer specializing in Python, Apache Spark (PySpark), and Snowflake, and 5 years in Java, microservices, and cloud migration. ...
- Apr 30
Columbus, OH
... Hands-on experience working with Hadoop ecosystem components like HDFS, MapReduce programming, Hive, Pig, Sqoop, HBase, Impala, Kafka, and Spark. Experience using Hadoop distributions like Cloudera 5.3 and Amazon AWS. Adept at driving end-to ...
- Apr 23
Dublin, OH, 43017
... • Troubleshot memory and network issues in real-time Spark streaming jobs on AWS EMR clusters, ensuring efficient data processing and system stability • Designed and optimized supervised and unsupervised learning algorithms to solve specific ...
- Apr 18