Role Title
Lead Data Engineer - Snowflake
Reporting To:
Location
India – Ahmedabad or Hyderabad
Who we are
Anblicks is a Data and AI company – we bring value to your data.
Anblicks is a Data and AI company specializing in data modernization and transformation that helps organizations across industries make decisions better, faster, and at scale, enabling enterprises with data-driven decision making. Since 2004, Anblicks has been supporting customers across the globe on their digital transformation journeys. Anblicks is headquartered in Addison, Texas, and employs more than 550 technology professionals, data analysts, and data science experts in the USA, India, and Australia.
Anblicks is committed to bringing value to various industries using CloudOps, Data Analytics, and Modern Apps. Global customers benefit from the Anblicks Ignite Enterprise Data Platform and Accelerators.
Why Join Anblicks
Anblicks is a company advancing in leaps and bounds, and it places tremendous emphasis on values. A sound fundamental ideology is crucial to steering an organisation toward success, and through a stable value system Anblicks is enabling an unprecedented transformation – not just a digital transformation, but something as expansive as its people and a best-in-class global culture.
Key Facts:
More than 550 technology professionals
More than 200 customers served
More than 900 projects completed
Trusted by happy clients, including Fortune 500 companies
Books authored by employees
Offices in India, USA & Australia
Role Purpose
We are seeking a Lead Data Engineer to join our team of data experts. The ideal candidate will have a passion for designing and implementing end-to-end data solutions, from data ingestion and processing to analytics and reporting, along with a strong background in designing, developing, and maintaining data pipelines and ETL processes using data warehousing technologies such as Snowflake, dbt, and Matillion. As a Lead Data Engineer, you will work closely with the Data Architect and other stakeholders to understand their data needs and provide solutions to meet those needs. You will also be responsible for leading a team of data engineers and ensuring that all data engineering projects are completed on time and within budget.
Role Responsibilities
Work closely with the Data Architect and other stakeholders to understand their data needs and provide solutions to meet those needs
Design and implement end-to-end data solutions using technologies such as Snowflake, Apache Spark, and Hadoop
Lead a team of data engineers and ensure that all data engineering projects are completed on time and within budget
Implement query optimization and core security competencies, including encryption
Solve performance and scalability issues in the system
Handle data management with distributed data processing algorithms
Take ownership of deliverables right from start to finish
Build, monitor, and optimize ETL and ELT processes along with data models
Migrate solutions from on-premises setups to cloud-based platforms
Understand and implement the latest delivery approaches based on data architecture
Maintain project documentation and tracking based on user requirements
Perform data integration with third-party tools, including the architecting, designing, coding, and testing phases
Manage documentation of data models, architecture, and maintenance processes
Continually review and audit data models for enhancement
Maintain the ideal data pipeline based on ETL tools
Coordinate with BI experts and analysts on customized data models and integration
Handle code updates, new code development, and reverse engineering
Perform performance tuning, user acceptance training, and application support
Maintain confidentiality of data
Prepare risk assessment, management, and mitigation plans
Engage regularly with teams for status reporting and routine activities
Carry out migration activities from one database to another or from on-premises to cloud
Skills & Experience
Bachelor's or master's degree in Computer Science, Information Systems, or a related field
years of experience in data engineering and data architecture
Experience working with AWS S3 / Azure ADLS storage accounts and Snowflake.
Strong experience in data engineering fundamentals (SQL, RDBMS, data models, data structures, orchestration, DevOps, etc.)
Knowledge of SQL language and cloud-based technologies
Strong experience building data pipelines with Spark and Python/Scala
Strong experience building ELT pipelines (batch and streaming) in Snowflake cloud warehouse
Good working knowledge of leveraging DBT (SQL and Python models) to perform transformations in Snowflake
Able to write structured and efficient queries on large data sets using statistical aggregate and analytical functions, and to build reporting data marts.
Experience working with Snowflake concepts like Snowpipe, Streams, Tasks, Cloning, Time Travel, Data Sharing, Data Replication, etc.
Handling large and complex datasets such as JSON, ORC, Parquet, and CSV files from various sources like AWS S3 and Azure Data Lake Gen2.
Understanding customer requirements; performing analysis, design, development, and implementation; gathering and defining business requirements; and enhancing business processes.
Knowledge of Snowflake tools like Snowsight and SnowSQL, and of Partner Connect integrations.
Performance tuning and setting up resource monitors
Snowflake modeling – roles, databases, and schemas
SQL performance measuring, query tuning, and database tuning
ETL tools with cloud-driven skills
SQL-based databases like Oracle, SQL Server, Teradata, etc.
Snowflake warehousing, architecture, processing, administration
Data ingestion into Snowflake
Enterprise-level technical exposure to Snowflake applications
Experience with data modelling is a plus
Strong problem-solving and analytical skills
Ability to work independently and as part of a team
Experience working in an Agile environment
Interest in building relationships with clients and in practice development activities
Excellent written and oral communication skills; ability to communicate effectively with technical and non-technical staff
Must be open to travel
Full time