Snowflake Data Engineer

Company:
Intellyk Inc.
Location:
Roswell, GA
Posted:
January 08, 2021

Description:

Role: Snowflake Data Engineer

Location: Remote

Long-term contract

Job Description

• At least 8 years of IT experience and 4 years or more of work experience in data management disciplines including data integration, modeling, optimization, and data quality.

• Strong experience with advanced analytics tools for object-oriented/object-function scripting, using languages such as R, Python, Java, C++, and Scala.

• Strong ability to design, build, and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management.

• Strong experience with popular database programming languages, including SQL, for relational databases such as SAP HANA, plus familiarity with non-relational databases and storage such as Cosmos DB, HDInsight, and Blob Storage; relevant certifications are a plus.

• Strong experience working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies. These should include ETL/ELT, data replication/CDC, message-oriented data movement, and API design and access, as well as upcoming data ingestion and integration technologies such as stream data integration, CEP, and data virtualization.

• Strong experience working with and optimizing existing ETL processes, data integration, and data preparation flows, and helping to move them into production.

• Strong experience with streaming and message-queuing technologies such as Azure Service Bus and Kafka.

• Basic experience working with popular data discovery, analytics, and BI software tools such as Tableau and Power BI for semantic-layer-based data discovery.

• Strong experience working with data science teams to refine and optimize data science and machine learning models and algorithms.

• Demonstrated success in working with large, heterogeneous datasets to extract business value using popular data preparation tools.

• Demonstrated ability to work across multiple deployment environments, including cloud, on-premises, and hybrid, multiple operating systems, and containerization techniques such as Docker and Kubernetes.

• Data modeling with enterprise data warehouse and data mart architectures, Azure Data Lake Storage Gen2, and Blob Storage.

• Data engineering experience with Snowflake and Databricks.

• Hands-on experience with SQL, Python, NoSQL, JSON, XML, SSL, RESTful APIs, and data formats such as Parquet, ORC, and Avro.

• Hands-on emphasis with a proven track record of building and evaluating data pipelines and delivering systems to production.

• Exposure to big data analytics (data and technologies) and in-memory data processing using Spark.

• Working experience with various databases such as SAP HANA, Cassandra, and MongoDB.

• Strong understanding of DevOps and of on-premises and cloud deployments.

Roles and responsibilities:

• Build data pipelines.

• Drive automation through effective metadata management.

• Learning and applying modern data preparation, integration and AI-enabled metadata management tools and techniques.

• Tracking data consumption patterns.

• Performing intelligent sampling and caching.

• Monitoring schema changes.

• Recommending, or sometimes even automating, existing and future integration flows.

• Collaborate across departments.

• Train counterparts in these data pipelining and preparation techniques, making it easier for them to integrate and consume the data they need for their own use cases.

• Participate in ensuring compliance and governance during data use.

Regards,

Rahul Garg

15 Corporate Pl S, Suite #450

Piscataway Township, NJ 08854

Direct: 732-967-2323

Desk Number: 732-624-6445 Ext:223
