
Solution Architect

Location:
Fort Mill, SC
Posted:
December 21, 2020


AMIT TANEJA

Email : *******.******@*****.***

Contact : 704-***-****

https://www.linkedin.com/in/amit-taneja-32028828/

Profile

•12+ years of Total IT Experience.

•Experience in data architecture, data lakes, business intelligence, Big Data (Hadoop), data warehousing, backend distributed design & development, ETL data integration, and cloud migration.

•Experience in building and migrating applications to AWS and the Snowflake cloud platform.

•Around 4 years of Big Data/Hadoop experience.

•8+ years of experience with ETL (DataStage, Informatica), ELT, automation techniques, Netezza, Oracle, Teradata, Postgres, MySQL, SQL Server, etc.

•8+ years in the Retail, Healthcare, Financial, and Banking domains.

•3+ years of experience in Apache Spark, Scala, Java/J2EE, Hive, Impala, Oozie, Kafka, and MongoDB.

•3+ years of experience in Amazon Web Services: EC2, Amazon Elastic MapReduce (EMR), Amazon S3, Lambda, Redshift, RDS.

•Experience with AWS core services: CloudFormation, EC2, ECS/Docker, ELB, CodePipeline, CodeDeploy, CodeBuild, CodeCommit/Git, RDS, S3, CloudWatch, Lambda, IAM, Jenkins.

•AWS Certified Solutions Architect - Associate.

•1+ year of experience in Microsoft Azure.

Summary

•Led a fully operational 15-member team at Capital One and executed the role of Technical Architect/Lead for all operational and delivery issues.

•Provided direction and leadership for a 30+ person staff at Premier Inc focused on data operations and implementation delivery.

•Supported the Technical Program Manager, Research Scientist, and a growing virtual team in analyzing usage data to derive new insights and fuel customer success.

•Built a high-quality BI and Data Warehousing team and designed the team to scale.

•Created ETL processes to take data from operational systems such as Adobe Target, Adobe Connect, and Salesforce, and built a unified dimensional/star-schema data model for analytics and reporting.

•Led a team of data engineers responsible for developing data engineering solutions (data model design and development, ETL development, as well as reporting and analytical solutions).

•Experience in architecture and technology leadership across batch and streaming data processing platforms in Big Data and cloud data technologies.

•Experience in Big Data: Hadoop HDFS, Apache Spark, Hive, Pig, Sqoop, Flume, Cloudera, and Hue.

•Experience in design, development, and implementation of Big Data applications.

•Expert in writing SQL in Hive/Impala environments.

•Good experience with MongoDB, HBase, and Kudu.

•Experience in creating audit control systems for ETL processes in Big Data and data warehouse applications.

•Strong skills in ETL (DataStage/Informatica) architecture, design, and development, and in performance tuning of data warehouse/Big Data workloads.

•Proficient in data analysis, data validation, data lineage, data cleansing, data verification, and identifying data mismatches.

•Experience in version control, DataStage version upgrades, and code migration using GitHub.

TECHNICAL SKILLS

Cloud Platform

AWS, Azure

Big Data / Hadoop:

Cloudera 4.x/5.x Manager/Hue, Apache Hadoop, HortonWork HDP 2.3, Hive, Pig, Sqoop, Kafka 0.9.0.x/0.10.2.0, Ambari, Nifi

ETL Tools:

Informatica 6.2/7/9.5, DataStage 7.5/8.7/11.3, Bteq, Fast load, Mload, TPT.

Languages:

SQL, PL/SQL, Python, Java, Shell Script

BI Analytics / Visualization

Tableau, Microstrategy, Impala, Beeline, Hive, Microsoft Azure, Elastic Search, Real time analytics using Kibana 5.4 (Timelion), Zoomdata

Database/File System:

MongoDB, Teradata 14/V2R6/V2R5, Oracle 10g/9i, Netezza, SQL Server 2000/2005/2008, DB2, Hadoop HDFS

Operating Systems:

Linux, IBM AIX, Ubuntu

IDE Tools

IntelliJ, Eclipse, NetBeans

Design Tools

ERwin 9.5.2/7.3/4.1, MS Visio 2007, Power Designer 15.2/16.5, IBM InfoSphere Data Architect 8.x/9.x

Agile Tools

Rally, JIRA

EDUCATION:

Master of Computer Applications, Vellore Institute of Technology, India

Bachelor in Software Systems, Indraprastha University, India

Bachelor of Commerce, Delhi University, India

Professional Experience

Client: Bank of America Oct 2019 to Present

Location: Charlotte, NC

Role: Technical Architect

Worked on defining the strategy for data ingestion into the S3 data lake and used Hive to define the consumption of the data.

Designed and developed a framework for sourcing data directly from source systems onto the AWS cloud.

Migrating the bank's BI data center (Netezza) to a cloud platform (Snowflake) using AWS as a data lake.

Working with the enterprise architecture team to design solutions for the new cloud platform.

Responsible for cloud application architecture design and deployment

One-time historical data migration framework design

Developing a load utility to automate incremental data loading (ability to load any load-ready dataset into a given Snowflake table).

Developed a CDC utility for ETL (ability to identify inserted and updated records and load type-2 dimensions); an illustrative sketch appears at the end of this section.

Developed a CDC utility for ETL (ability to load snapshot, fact, and transaction tables).

Developed a validation, auditing, and notification utility (ability to track end-to-end data quality and produce reports).

Developed a utility to generate target table DDL.

Developed a utility to generate staging and integration table DDL.

Developed a reusable utility to offload data from Snowflake target tables (ability to offload data from a target Snowflake table for use in other ETL processes and to comply with data sharing agreements).

Developed a Netezza-to-Snowflake data validation utility.

Developed a re-integration utility.

Building ETL pipelines for continuous data loads into the Snowflake cloud data warehouse.

Developing scripts in Python and SnowSQL to migrate data into the cloud.

Working on existing data integration projects to load data into Netezza as well as into Hive as a foundation layer.

Developing ETL jobs to integrate data from multiple sources and load it into the Netezza warehouse for BI and reporting analytics.

Working on data sharing and data security.

Designing ETL architecture and data models to support the ongoing on-premises Netezza data center.

Defined the CI/CD pipeline for infrastructure deployment using tools like Jenkins and Terraform.

Participated in design reviews, enterprise architecture reviews, and defect prevention discussions.

Mentored team members and interns on emerging tools and technologies, and ensured the team was comfortable and confident with the organization's technical roadmap.

Lead and grow a strong software engineering team of developers and testers, focusing on hiring, mentoring, performance management, and career planning.

Responsible for design/development activities which may include leading, participating, or supporting concept, planning, design and execution stages.

Design, implement and maintain all AWS infrastructure and services within a managed service environment.
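
Illustrative sketch (not code from this engagement): a minimal Python example of the SCD type-2 CDC merge pattern described above, assuming the snowflake-connector-python package. The table, column, and connection names (DIM_CUSTOMER, STG_CUSTOMER, ETL_WH, etc.) are hypothetical placeholders.

# Minimal sketch of an SCD type-2 style CDC load into Snowflake.
# All object names and credentials below are hypothetical placeholders.
import snowflake.connector

# Close out current rows whose attributes changed and insert rows for new keys.
# (A second INSERT pass, not shown, would add the new version of changed keys.)
MERGE_SQL = """
MERGE INTO EDW.CORE.DIM_CUSTOMER tgt
USING EDW.STG.STG_CUSTOMER src
  ON tgt.CUSTOMER_ID = src.CUSTOMER_ID AND tgt.CURRENT_FLAG = 'Y'
WHEN MATCHED AND tgt.ROW_HASH <> src.ROW_HASH THEN
  UPDATE SET CURRENT_FLAG = 'N', EFF_END_DT = CURRENT_DATE()
WHEN NOT MATCHED THEN
  INSERT (CUSTOMER_ID, ROW_HASH, EFF_START_DT, EFF_END_DT, CURRENT_FLAG)
  VALUES (src.CUSTOMER_ID, src.ROW_HASH, CURRENT_DATE(), NULL, 'Y')
"""

def run_scd2_merge():
    # Credentials would normally come from a vault or config file, not literals.
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",
        warehouse="ETL_WH", database="EDW", schema="CORE",
    )
    cur = conn.cursor()
    try:
        cur.execute(MERGE_SQL)
        print(cur.fetchone())  # rows inserted / rows updated counts
    finally:
        cur.close()
        conn.close()

if __name__ == "__main__":
    run_scd2_merge()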

Client: Capital One April 2018 to Oct 2019

Location: Richmond, Virginia

Role: Lead Engineer/Architect

Responsibilities

Responsible for cloud application architecture and deployment in the cloud environment.

Migrated the data center from Teradata to the Snowflake cloud data warehouse.

Built ETL pipelines for continuous data loads into the Snowflake cloud data warehouse; an illustrative sketch appears at the end of this section.

Developed scripts in Python and SnowSQL to migrate data into the cloud.

Designed ETL jobs to support the existing on-premises architecture.

Worked on the Snowflake cloud computing platform.

Worked on existing data integration projects to load data into Teradata as well as into Hive as a foundation layer.

Developed ETL jobs to integrate data from multiple sources and load it into the target warehouse for BI and reporting analytics.

Worked on data sharing and data security.

Designed ETL architecture and data models to support the ongoing on-premises Teradata data center.

Worked on real-time data streaming.

Involved in gathering data requirements from business stakeholders.

Managed offshore and onshore teams.

Responsible for end-to-end data delivery.

Worked on cloud platform POCs to evaluate cloud products for the organization in terms of cost and compute.
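
Illustrative sketch (not code from this engagement): a Python driver for the kind of continuous Snowflake load referenced above, issuing a COPY INTO from an external stage via snowflake-connector-python. The stage, table, and file-format names (TD_MIGRATION_STAGE, CARD_TXN, CSV_FMT) are hypothetical placeholders.

# Minimal sketch of a staged bulk load into Snowflake; all names are placeholders.
import snowflake.connector

COPY_SQL = """
COPY INTO EDW.CORE.CARD_TXN
FROM @EDW.CORE.TD_MIGRATION_STAGE/card_txn/
FILE_FORMAT = (FORMAT_NAME = 'EDW.CORE.CSV_FMT')
ON_ERROR = 'ABORT_STATEMENT'
"""

def load_batch():
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",
        warehouse="LOAD_WH", database="EDW", schema="CORE",
    )
    cur = conn.cursor()
    try:
        cur.execute(COPY_SQL)   # COPY skips staged files it has already loaded
        for row in cur:         # one result row per staged file
            print(row)
    finally:
        cur.close()
        conn.close()

if __name__ == "__main__":
    load_batch()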

Client: Premier Inc Apr 2014 to Feb 2018

Location: Charlotte, NC

Role: ETL/Big Data Lead

Responsibilities:

Ingested Netezza data into Hadoop and vice versa using Sqoop scripts.

Migrated existing DataStage jobs into equivalent Spark-Scala jobs with reusable artifacts; an illustrative sketch appears at the end of this section.

Developed new, complex Spark-Scala applications to transform source input files into target output files.

Worked closely with non-technical management to establish a product roadmap.

Acquired resources for organizational activities, provided technical management of suppliers, and led process improvements.

Managed, developed, and motivated employees and mid-level managers.

Provided recommendations and advice on system development, improvements, optimization, and support efforts, including proactive recommendations.

Provided assistance to teams in release scoring: adherence to timelines, unit tests, and scope.

Mentored junior software developers, providing them with knowledge of best practices and techniques.

Created an inclusive and fun work environment where every employee engaged effectively.
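
Illustrative sketch only: the DataStage-to-Spark migration above was done in Spark-Scala; the PySpark example below merely shows the general shape of such a job (extract, lookup/join, load). Paths and column names (claims, provider_dim, provider_id) are hypothetical placeholders.

# Hypothetical extract-lookup-load flow mirroring a typical DataStage job.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("datastage_to_spark_example").getOrCreate()

# Source extract (equivalent of a DataStage sequential-file stage)
claims = spark.read.option("header", "true").csv("hdfs:///landing/claims/")

# Lookup/join (equivalent of a DataStage Lookup stage): enrich claims with provider attributes
providers = spark.read.parquet("hdfs:///warehouse/provider_dim/")
enriched = (claims
            .join(providers, on="provider_id", how="left")
            .withColumn("load_dt", F.current_date()))

# Target load: write the enriched dataset to the warehouse zone
enriched.write.mode("overwrite").parquet("hdfs:///warehouse/claims_enriched/")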

Client: Bank of America May 2013 to Apr 2014

Location: Charlotte, NC

Role: ETL Datastage Lead

Responsibilities:

• Led onsite-offshore teams, reviewing DataStage designs and providing inputs on them.

• Defined projects, environment variables, user groups, and privileges, and set up different environments (Dev/QA/Prod). Created master controlling sequencer jobs using the DataStage Job Sequencer.

• Extensively used the CDC (Change Data Capture) stage to implement slowly changing dimension and fact tables.

Client: NavCanada Dec 2010 to Apr 2013

Location: Ottawa, Canada

Role: ETL Datastage Lead

Responsibilities:

• Designed the complete 3NF and end-state ETL architecture and developed around 150 jobs, routines, and sequences on the DataStage 8.1 platform for the end-to-end extraction, transformation, and load process, meeting aggressive deadlines.

• Extensively used DataStage components: Dataset, Sort, Transformer, Modify, Lookup, Join, Merge, Oracle Enterprise stage, and Change Data Capture, and implemented slowly changing dimensions.

Client: Walgreens July 2008 to Nov 2010

Location: Chennai, India

Role: ETL Developer

Responsibilities:

DataStage job design, development, and debugging to populate the data warehouse used for ad-hoc reporting.

Preparation of interface design and mapping.

Unit and System Integration testing.
