
Data Engineer Azure

Location:
Fort Worth, TX
Posted:
April 22, 2025

Resume:

SHEELA DUSA

SENIOR DATA ENGINEER

************@*****.***

945-***-****

PROFESSIONAL SUMMARY

• Over 10 years of IT experience as a Data Engineer, specializing in designing and implementing cloud ETL solutions using Azure Data Factory, Azure Data Lake Storage Gen2, Azure Databricks, Azure Synapse Analytics, and data warehousing.

• Extensive experience with Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure Cosmos DB, alongside integration with Snowflake for advanced warehousing solutions.

• Skilled in Snowflake modeling (roles, databases, schemas), performance tuning, setting up resource monitors, and managing data ingestion workflows. Proficient in Snowflake architecture, query optimization, and administration for efficient data storage and retrieval.

• Experience in supporting Linux-based platforms, ensuring high availability, security, and performance optimization.

• Expertise in Java, Python, and Scala for developing high-performance applications and data pipelines.

• Experience in high-performance Java development with strong knowledge of core Java, OOP, and OOAD.

• Designed and implemented Java-based backend systems to support key business functionalities, ensuring integration with other enterprise systems.

• Extensive experience in SQL-based database systems, including Oracle, SQL Server, and Teradata, with advanced skills in query tuning, database optimization, and creating ETL pipelines for robust data integration and transformation.

• Proficient in designing and building ETL pipelines using tools like Azure Data Factory, Talend, and Informatica, with hands-on experience migrating data to cloud-based platforms such as Azure and AWS Snowflake.

• Specialized in Star and Snowflake Schema Modeling, designing fact and dimension tables, and optimizing multi-dimensional models for advanced analytics.

• Designed and maintained Dimensional Data Models with Slowly Changing Dimensions (SCD – Type 1, 2, 3) for efficient data modeling and reporting.

• Ensured error handling, logging, and monitoring of Matillion jobs for real-time alerting and troubleshooting.

• Proficient in implementing slowly changing dimensions (SCD) within star schema to track historical changes in dimension attributes.
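
A minimal PySpark/Delta Lake sketch of the SCD Type 2 pattern described in the bullets above, assuming a Delta dimension table dim_customer and a staging table stg_customer; all table and column names are illustrative, not from a specific project.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Staging rows produced upstream by the ETL pipeline (illustrative table).
    updates = spark.table("stg_customer").withColumn("effective_date", F.current_date())
    updates.createOrReplaceTempView("updates")

    # Step 1: close out the current version of any row whose tracked attribute changed.
    spark.sql("""
        MERGE INTO dim_customer AS d
        USING updates AS u
          ON d.customer_id = u.customer_id AND d.is_current = true
        WHEN MATCHED AND d.address <> u.address THEN
          UPDATE SET d.is_current = false, d.end_date = u.effective_date
    """)

    # Step 2: append a new current row for changed keys and for brand-new keys.
    # (Assumes the staging columns line up with the dimension's schema.)
    current_dim = spark.table("dim_customer").filter("is_current = true")
    new_rows = updates.join(current_dim, "customer_id", "left_anti")
    (new_rows.withColumn("is_current", F.lit(True))
             .withColumn("end_date", F.lit(None).cast("date"))
             .write.format("delta").mode("append").saveAsTable("dim_customer"))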

• Strong understanding of star schema design for structuring data warehouses to optimize query performance and reporting.

• Implemented Databricks Delta Lake for data lakehouse architecture, enabling ACID transactions and time-travel capabilities.
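
A short illustration of the Delta Lake ACID write and time-travel behavior mentioned above; the ADLS path and the version/timestamp values are placeholders.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    path = "abfss://lake@storageacct.dfs.core.windows.net/silver/orders"  # hypothetical path

    # Appends to a Delta table are atomic; concurrent readers always see a consistent snapshot.
    batch = spark.read.option("header", True).csv("/landing/orders")
    batch.write.format("delta").mode("append").save(path)

    # Time travel: query the table as it existed at an earlier version or timestamp.
    v3 = spark.read.format("delta").option("versionAsOf", 3).load(path)
    jan = spark.read.format("delta").option("timestampAsOf", "2024-01-01").load(path)

    # The commit history is what makes version lookups and rollbacks possible.
    DeltaTable.forPath(spark, path).history().select("version", "timestamp", "operation").show()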

• Experience in automating Azure AD security tasks using PowerShell, Azure CLI, and other scripting tools to streamline administrative workflows.

• Migrated large-scale databases from on-premises systems to cloud-based warehouses such as Azure SQL Database and Google Cloud Spanner; knowledgeable in data transport protocols (SFTP, XML, JDXDML).

• Expert in managing Azure Key Vault for secure storage of secrets, keys, and certificates; ensured secure integration with Azure services, managed access policies, and maintained encryption keys to safeguard sensitive data.

• Automated infrastructure and application deployments in Azure using ARM templates and Azure DevOps CI/CD pipelines, streamlining build, test, and deployment processes for efficient and reliable delivery of cloud resources and applications.

• Expertise in managing and administering MySQL, PostgreSQL, and MongoDB databases, including installation, configuration, and maintenance.

• Leveraged shell scripting for efficient data processing, system monitoring, and task automation on Azure VMs and Azure Storage. Integrated scripts with Azure services (Azure Data Factory, Azure Databricks, Azure Kubernetes Service) to orchestrate data pipelines, manage Azure Data Lake storage, and automate workflows, enhancing operational efficiency.

• Collaborated with cross-functional teams to migrate legacy ETL pipelines (Informatica, Talend) to Matillion.

• Developed and maintained Unix shell scripts for automation, log monitoring, and system maintenance.

• Working experience in developing large-scale data pipelines using Spark and Hive, with hands-on expertise in Kafka and Flume to load log data from multiple sources directly into HDFS; strong command of distributed computing and parallel processing techniques for Big Data, developing pipelines in Spark using Scala and PySpark.
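
A minimal Structured Streaming sketch of the Kafka-to-HDFS log ingestion described above, written in PySpark; broker addresses, topic name, and paths are placeholders.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("log-ingest").getOrCreate()

    # Consume raw log events from Kafka (placeholder brokers and topic).
    logs = (spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
            .option("subscribe", "app-logs")
            .option("startingOffsets", "latest")
            .load()
            .select(F.col("value").cast("string").alias("raw"), "timestamp"))

    # Land micro-batches on HDFS as Parquet, partitioned by ingest date.
    query = (logs.withColumn("dt", F.to_date("timestamp"))
             .writeStream.format("parquet")
             .option("path", "hdfs:///data/raw/app_logs")
             .option("checkpointLocation", "hdfs:///checkpoints/app_logs")
             .partitionBy("dt")
             .trigger(processingTime="1 minute")
             .start())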

• Expert in designing and creating Hive external tables on a shared metastore with static and dynamic partitioning, bucketing, and indexing; well-versed in using Spark to improve the performance of existing Hadoop algorithms via Spark Context, Spark SQL, DataFrames, and pair RDDs, with hands-on experience in Spark performance tuning.
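
An illustrative snippet (Spark SQL run through PySpark) for the partitioned, bucketed Hive external table pattern described above; the database, table, columns, and HDFS location are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # External table on the shared metastore, partitioned by date and bucketed by user_id.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS analytics.click_events (
            user_id  BIGINT,
            page     STRING,
            duration DOUBLE
        )
        PARTITIONED BY (event_date STRING)
        CLUSTERED BY (user_id) INTO 32 BUCKETS
        STORED AS ORC
        LOCATION 'hdfs:///warehouse/external/click_events'
    """)

    # Register partitions that upstream jobs have written under the LOCATION, then
    # query with a partition filter so only the matching directories are scanned.
    spark.sql("MSCK REPAIR TABLE analytics.click_events")
    daily = spark.sql("""
        SELECT page, COUNT(*) AS views
        FROM analytics.click_events
        WHERE event_date = '2023-01-15'
        GROUP BY page
    """)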

• Experienced in database architecture for OLAP and OLTP applications, specializing in database design, data migration, and data warehousing concepts with an emphasis on ETL, building data pipelines, and performing large-scale data transformations; well-versed in designing and optimizing stored procedures for efficient data processing and manipulation in SQL databases.

• Well-versed in Star Schema and Snowflake Schema modeling, including design and implementation of fact and dimension tables, physical and logical data modeling, and data analysis; experienced in both dimensional and relational data modeling.

• Hands-on expertise in Power BI and Cognos for data visualization and dashboard creation.

• Proficient in using Power BI for creating interactive dashboards and data visualizations, utilized Power Query to shape and clean data, enabling robust data models built with DAX for advanced analysis.

• Specialized in defining user stories and managing agile boards in JIRA, actively participating in sprint demos and retrospectives; skilled in administering Git and GitHub Enterprise for robust version control and cross-team collaboration; well-versed in setting up CI/CD pipelines with Jenkins, provisioning infrastructure with Terraform, and deploying containerized applications with Docker and Kubernetes.

• Proficient in designing and implementing scalable, cloud-native data pipelines using technologies like Apache Kafka, Dataflow, BigQuery, and Vertex AI.

TECHNICAL SKILLS

EDUCATION

• Bachelor's in Computer Science and Engineering, Jun 2009 – May 2013

WORK EXPERIENCE

Client: Citi Bank, Dallas, TX. Jan 2023 – Present

Role: Sr. Data Engineer

Project Title: Real-Time Fraud Detection System

Project Description: Developed a real-time fraud detection system for Citi utilizing Azure Databricks, Azure Synapse Analytics, and Kafka to process and analyze streaming data from multiple sources. Integrated machine learning models for anomaly detection using MLflow and Python. Leveraged Spark and PySpark for large-scale data processing and utilized NoSQL databases (MongoDB and Neo4j) for efficient storage and querying. Deployed the solution on Kubernetes with CI/CD pipelines powered by Azure DevOps and Terraform, ensuring high availability, scalability, and seamless model updates.

Responsibilities:

• Implemented end-to-end data pipelines for a fraud detection system, utilizing Azure Databricks, Azure Data Factory (ADF), and Logic Apps.

• Utilized ETL processes to gather information from diverse sources and feed it into a centralized fraud detection platform.

• Designed and implemented end-to-end ETL pipelines integrating data from disparate sources into Snowflake and Azure Synapse Analytics, leveraging Azure Data Factory (ADF) and Python for scalability and efficiency.

• Developed complex data transformation scripts using Python and PySpark in Azure Databricks, processing large-scale datasets with optimized performance.

• Designed and optimized ETL pipelines in Azure Data Factory (ADF) and Matillion, doubling data throughput and seamlessly integrating Snowflake for scalable data warehousing, minimizing operational downtime.

• Migrated data to Snowflake on AWS and Azure, optimizing Snowflake architecture, including role-based access, database, schema design, and performance tuning.

• Hands-on experience in building and maintaining data pipelines using Kafka, Apache Spark, or Apache Flink. Skilled in event-driven architectures and developing real-time, low-latency applications.

• Built real-time data pipelines using Azure Event Hubs and Databricks Structured Streaming, enabling near real-time analytics.

• Developed automated data ingestion pipelines using Snowpipe to enable real-time data loading into Snowflake from cloud storage services like Azure Blob Storage and Amazon S3.
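
A hedged sketch of a Snowpipe auto-ingest setup like the one referenced above, issuing the DDL through the Snowflake Python connector; the account, stage, pipe, and table names are illustrative, and on Azure a notification integration (omitted here) is also required for auto-ingest.

    import snowflake.connector

    # Connection parameters are placeholders.
    conn = snowflake.connector.connect(
        account="xy12345", user="etl_user", password="***",
        warehouse="LOAD_WH", database="RAW", schema="SALES",
    )
    cur = conn.cursor()

    # External stage over Azure Blob Storage (container URL and SAS token are illustrative).
    cur.execute("""
        CREATE STAGE IF NOT EXISTS sales_stage
        URL = 'azure://myaccount.blob.core.windows.net/landing/sales/'
        CREDENTIALS = (AZURE_SAS_TOKEN = '***')
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)

    # Pipe with AUTO_INGEST so new files are loaded as soon as they land in the stage.
    cur.execute("""
        CREATE PIPE IF NOT EXISTS sales_pipe AUTO_INGEST = TRUE AS
        COPY INTO RAW.SALES.ORDERS
        FROM @sales_stage
    """)
    conn.close()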

• Developed data models using star schema and snowflake schema to support efficient querying and reporting.

• Integrated Matillion with various data sources, including Snowflake, Azure Synapse for seamless data ingestion.

• Expertise in SQL Server, MySQL, PostgreSQL, including performance tuning, query optimization, indexing strategies, and high availability configurations.

• Hands-on experience with Azure DevOps, Jenkins, GitHub Actions, and GitLab for continuous integration and delivery.

• Developed complex data transformation scripts using Python, Pandas, and NumPy to process large-scale datasets efficiently.

• Automated ML workflows through CI/CD pipelines, ensuring seamless model training, testing, validation, and deployment.

• Expertise in deploying and managing applications on Kubernetes, OpenShift, or ECS.

• Implemented performance tuning strategies to optimize Matillion jobs, reducing execution time and resource utilization.

• Proficient in Infrastructure as Code (IaC) using Terraform, Azure CLI, and Ansible for cloud automation and resource management.

• Strong experience in data warehousing, including dimensional modeling, star schema design, ETL processes, and data governance with Azure Purview.

• Designed and developed SSIS Frameworks to automate ETL processes, ensuring scalability and reusability.

• Expert in SQL Server Clustering, T-SQL, SSRS, Power BI, and advanced DAX for building interactive dashboards.

• Experience with Infogix Suite for data quality, governance, compliance, and reconciliation automation.

• Hands-on experience with Apache Airflow, Kafka, Prefect, and Flume for workflow orchestration and real-time data streaming.

• Developed real-time data pipelines using Kafka and Azure Event Hubs to process and analyze streaming data from multiple sources, enabling near-instantaneous data processing and decision-making.

• Integrated AI/ML models into data pipelines, utilizing Azure Databricks and MLflow for real-time anomaly detection and predictive analytics, enhancing business insights from live data streams.
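
A minimal sketch of how an anomaly-detection model of the kind described above might be tracked with MLflow on Databricks; the feature path, model choice (Isolation Forest), and parameters are assumptions for illustration.

    import mlflow
    import mlflow.sklearn
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    # Hypothetical feature set prepared upstream by the PySpark pipeline.
    features = pd.read_parquet("/dbfs/mnt/gold/txn_features.parquet")

    with mlflow.start_run(run_name="fraud-isolation-forest"):
        model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
        model.fit(features)

        # Record parameters, a simple score summary, and the model artifact for deployment.
        mlflow.log_param("n_estimators", 200)
        mlflow.log_param("contamination", 0.01)
        mlflow.log_metric("mean_anomaly_score", float(-model.score_samples(features).mean()))
        mlflow.sklearn.log_model(model, artifact_path="model")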

• Implemented end-to-end real-time streaming architectures leveraging Apache Kafka and Azure Event Hubs, enabling low-latency, high-throughput data ingestion and real-time insights for various business use cases.

• Migrated ODI mappings to Azure Data Factory (ADF) workflows, implementing incremental loads and Change Data Capture (CDC) strategies for efficient data integration.
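
A sketch of the incremental-load pattern behind the work described above, expressed here as a PySpark job driven by a high-watermark column; the control table, source connection, and column names are illustrative.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Last successfully loaded watermark, kept in a small Delta control table.
    last_wm = (spark.table("etl_control.watermarks")
               .filter("table_name = 'orders'")
               .agg(F.max("watermark_value")).collect()[0][0])

    # Pull only rows changed since the previous run from the source system.
    incremental = (spark.read.format("jdbc")
                   .option("url", "jdbc:sqlserver://src-host:1433;databaseName=sales")
                   .option("dbtable", f"(SELECT * FROM dbo.orders WHERE modified_at > '{last_wm}') q")
                   .option("user", "etl").option("password", "***")
                   .load())

    # Upsert into the Delta target, then advance the watermark for the next run.
    incremental.createOrReplaceTempView("incr")
    spark.sql("""
        MERGE INTO lake.orders AS t USING incr AS s
          ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)
    new_wm = incremental.agg(F.max("modified_at")).collect()[0][0]
    spark.sql(f"UPDATE etl_control.watermarks SET watermark_value = '{new_wm}' "
              "WHERE table_name = 'orders'")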

• Skilled in leveraging the Spring framework for building scalable, secure, and maintainable enterprise applications.

• Developed large-scale data pipelines using Spark, Hive, Scala, and PySpark, integrating HDFS for distributed data processing.

• Demonstrated experience in Databricks workload optimization, including cluster sizing, caching, job tuning, and Delta Lake performance enhancements.

• Designed Data Fabric architectures to integrate data from on-premises and cloud environments, with Snowflake as the central data warehouse for real-time access and analytics.

• Developed conceptual, logical, and physical data models using tools like Erwin, DBT, and Snowflake.

• Optimized cloud-based data warehouses in Snowflake, leveraging multi-cluster architecture for scalable, high-performance data storage and querying.

• Deep understanding of the modern data stack, including DataOps, MLOps, cloud-native storage and compute, and how they integrate within the Databricks ecosystem.

• Experienced in PyBase, Cassandra, CouchDB, NoSQL databases, and graph databases like Neo4j for large-scale data storage and retrieval.

• Proficient in MongoDB/NoSQL DB for handling unstructured and semi-structured data in distributed systems.

• Hands-on expertise in Kubernetes, OpenShift, and containerized deployments for scalable data workflows.

• Experienced in designing and implementing data models and ETL processes using DatoRama for streamlined data integration.

• Experience with Unity Catalog in Databricks for data governance and access control.

• Worked with MLflow and other MLOps tools to orchestrate model training, versioning, and deployment workflows on Databricks.

• Automated deployments using Azure DevOps CI/CD pipelines, ARM templates, and YAML scripting for cloud infrastructure management.

• Integrated MLflow, Spark MLlib, and Azure Databricks for machine learning lifecycle management.

• Built Tableau and Power BI dashboards, leveraging Azure Monitor Logs, Application Insights, and Log Analytics for insights.

• Strong expertise in Kusto (Azure Data Explorer) for real-time data analytics and interactive visualizations.

• Skilled in JavaScript, CSS, SQL, JavaScript libraries, and Power Query for custom BI reporting and front-end enhancements.

• Designed ETL workflows using SSIS, SSRS, SSAS, and implemented slowly changing dimensions (SCD) for reporting.

Environment: Azure, Snowflake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Kafka, Python, PySpark, SQL, Spark, MongoDB, NoSQL, Neo4j, Cassandra, Elasticsearch, HDFS, Kubernetes, Terraform, Azure DevOps, CI/CD, Git, MLflow, Power BI, Tableau, SOC 2, HIPAA, GDPR, Azure AD, RBAC, GraphQL, Cypher, Kusto (Azure Data Explorer).

Client: Cummins Inc., Columbus, IN. Jan 2021 – Dec 2022

Role: Sr. Data Engineer

Project Title: Manufacturing Quality Control and Process Optimization

Project Description: Developed a quality control and process optimization platform by integrating data from production lines, quality systems, and ERP using Azure Data Factory (ADF). Utilized Azure Databricks and Snowflake for data transformation and analysis to enhance product quality and manufacturing efficiency. Implemented Delta Lake for optimized data storage and versioning. Created Power BI dashboards for real-time monitoring of defect rates, production yield, and efficiency, driving continuous improvement and cost reduction.

Responsibilities:

• Created Pipelines in Azure Data Factory (ADF) to optimize extracting, transforming, and loading data from various sources, including Oracle SQL databases.

• Developed JSON Scripts for deploying Pipelines in ADF to process data efficiently using SQL Activities, focusing on health insurance claim optimization.

• Designed and implemented relational (3NF) and dimensional (Star/Snowflake schema) data models for analytics and reporting.

• Utilized Spark applications with Scala and Spark-SQL in Azure Databricks for data extraction, transformation, and aggregation, revealing valuable insights into usage patterns.

• Utilized SnowSQL and Snowpipe to automate data ingestion, ensuring real-time data loading from cloud storage platforms like Azure Blob Storage and Amazon S3.

• Implemented scalable metadata handling, streaming, and batch unification using Delta Lake, supporting time traveling for data versioning, rollbacks, and reproducible machine learning experiments.

• Created complex SSIS packages for data migration, cleansing, and aggregation to support business reporting and analytics.

• Conducted data ingestion into various Azure Services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed data in Azure Databricks and Snowflake.

• Developed data warehouse solutions for OLAP systems to support multidimensional data analysis and reporting.

• Integrated Azure Synapse and Snowflake for providing a streamlined experience for ingesting, exploring, preparing, managing, and serving data for immediate BI and machine learning needs.

• Designed and implemented Snowpipe to efficiently handle continuous data ingestion with minimal latency, improving data freshness for real-time analytics.

• Migrated legacy systems to Snowflake, implementing a hybrid cloud strategy with Data Fabric for improved data integration and analytics.

• Developed complex DBT models for data aggregation, ensuring data accuracy, consistency, and improved query performance in Snowflake.

• Developed ETL pipelines using Azure Data Factory and Snowflake for seamless data ingestion, transformation, and loading across cloud and on-prem sources.

• Worked with MLflow and other MLOps tools to orchestrate model training, versioning, and deployment workflows on Databricks.

• Leveraged SnowSQL for data migration, schema creation, and database cloning in Snowflake, ensuring efficient data movement and version control.

• Created custom data transformations in Azure Databricks using PySpark and Scala, improving the efficiency of batch and real-time data processing pipelines.

• Built data lake architectures with Azure Data Lake Storage (ADLS) and Delta Lake, optimizing data partitioning, indexing, and caching to accelerate query performance.
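
A short sketch of the partitioning and file-compaction pattern described above for Delta Lake on ADLS Gen2; the container paths and column names are placeholders, and the OPTIMIZE/ZORDER step assumes Databricks Delta.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    bronze = "abfss://lake@plantdata.dfs.core.windows.net/bronze/quality_raw"            # placeholder
    silver = "abfss://lake@plantdata.dfs.core.windows.net/silver/quality_measurements"   # placeholder

    df = (spark.read.format("delta").load(bronze)
          .withColumn("measure_date", F.to_date("measured_at")))

    # Partition by date so downstream queries prune files instead of scanning the whole lake.
    (df.write.format("delta")
       .mode("overwrite")
       .partitionBy("measure_date")
       .save(silver))

    # Compact small files and co-locate a frequently filtered column (Databricks Delta).
    spark.sql(f"OPTIMIZE delta.`{silver}` ZORDER BY (production_line_id)")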

• Integrated Databricks Delta Live Tables (DLT) to enhance automated ETL orchestration, ensuring high availability, fault tolerance, and data quality enforcement.
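
A minimal Delta Live Tables sketch of the automated ETL with data-quality enforcement mentioned above, intended to run inside a DLT pipeline (where spark is provided by the runtime); the landing path, dataset names, and expectation rule are assumptions.

    import dlt
    from pyspark.sql import functions as F

    # Bronze: continuously ingest raw JSON files with Auto Loader (path is a placeholder).
    @dlt.table(comment="Raw quality measurements landed from the production line")
    def quality_bronze():
        return (spark.readStream.format("cloudFiles")
                .option("cloudFiles.format", "json")
                .load("/mnt/landing/quality/"))

    # Silver: enforce a quality rule; violating rows are dropped and counted in pipeline metrics.
    @dlt.table(comment="Cleaned measurements ready for reporting")
    @dlt.expect_or_drop("valid_measurement", "measurement_value IS NOT NULL AND measurement_value >= 0")
    def quality_silver():
        return (dlt.read_stream("quality_bronze")
                .withColumn("measure_date", F.to_date("event_time")))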

• Designed and implemented data governance policies using Azure Purview, ensuring data cataloging, lineage tracking, and compliance adherence across data pipelines.

• Leveraged technologies like Azure Data Factory, Databricks, Spark, Kafka, Sqoop, and Flume for building scalable data pipelines. Implemented data streaming and batch data processing.

• Followed organization-defined naming conventions for structuring the flat file, Talend Jobs, and daily batches in the context of optimizing data analysis.

• Led efforts in data integration, cleansing, and structuring, supporting downstream applications in BI, reporting, and AI/ML model development.

• Implemented data quality checks and data reconciliation using Infogix Suite, ensuring data integrity and governance.

• Hands-on expertise in deploying and managing containerized applications using Kubernetes, ensuring scalability, reliability, and efficient resource management.

• Executed Hive scripts through Hive, Impala, Oozie, and Spark SQL, ensuring comprehensive data processing.

• Utilized Azure Data Factory, SQL API, and MongoDB API for integrating data from MongoDB, MS SQL, and cloud services (Blob, Azure SQL DB, Cosmos DB, Snowflake).

• Developed PySpark applications in Python on a distributed environment to load a vast number of CSV files with different schemas into Hive ORC tables, optimizing data processing.
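
A condensed PySpark sketch of the CSV-to-ORC loading pattern described above; the landing path, column list, and target table are illustrative, and the schema-alignment step stands in for whatever reconciliation the real job performed.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("csv-to-orc").enableHiveSupport().getOrCreate()

    # Read a batch of CSV drops whose schemas vary slightly, then align to a common column set.
    expected = ["claim_id", "member_id", "claim_amount", "claim_date"]
    raw = (spark.read.option("header", True).option("inferSchema", True)
           .csv("/landing/claims/*.csv"))
    aligned = raw.select(*[F.col(c) if c in raw.columns
                           else F.lit(None).cast("string").alias(c)
                           for c in expected])

    # Append into a partitioned ORC table registered in the Hive metastore.
    (aligned.write.mode("append").format("orc")
            .partitionBy("claim_date")
            .saveAsTable("claims_db.claims_orc"))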

• Developed complex snowflake schema databases for extremely large data sets stored on multiple fact and dimension tables and star schemas with facts central table containing metrics and dimensions with descriptive attributes of metrics.

• Utilized Power BI reports exporting to PDF/Excel, configuring schedules and setting data alerts for real time visibility into KPIs.

Environment: Azure, Azure Data Factory (ADF), Azure Databricks, PySpark, Scala, Snowflake, Azure Data Lake Storage (ADLS), Delta Lake, Power BI, Azure Purview, Azure DevOps, Apache Kafka, Azure Active Directory (AAD), Role-Based Access Control (RBAC).

Client: State of Nebraska, Lincoln, NE. Mar 2019 – Dec 2020

Role: Data Engineer

Project Title: Retail Data Integration and Analytics Solution

Project Description: At the State of Nebraska in the retail domain, I designed and implemented an end-to-end ETL solution using Azure Data Factory to optimize data integration from various retail sources. I leveraged Azure Databricks and PySpark for efficient data transformations, improving processing performance by 60%. Integrated Azure Synapse Analytics and Delta Lake for secure data management and optimized storage. Developed real-time data ingestion pipelines using Azure Data Factory and Azure Event Hubs, reducing latency. Created Power BI dashboards to visualize key retail metrics, enabling faster decision-making and insights.

Responsibilities:

• Designed and implemented end-to-end ETL pipelines using Azure Data Factory, incorporating Copy Activities, Mapping Data Flows, and Control Flow Activities, improving data transfer efficiency.

• Optimized data transformation workflows with Azure Databricks, leveraging PySpark DataFrames, RDDs, and Databricks Runtime to enhance data processing performance by 60%.

• Developed complex SQL scripts in Azure Synapse Analytics, utilizing columnstore indexes and PolyBase for external data source integration, boosting query performance.

• Leveraged the Massively Parallel Processing (MPP) architecture in Azure Synapse to handle large datasets and optimize critical reporting dashboards.

• Implemented Delta Lake architecture on Azure Data Lake Storage Gen2 (ADLS Gen2), utilizing Hierarchical Namespace and Role-Based Access Control (RBAC) for secure, efficient data management.

• Set up Delta Live Tables for continuous data processing and employed Databricks SQL for interactive querying.

• Integrated Azure Function Apps within Azure Data Factory pipelines for automated data processing, increasing system flexibility and responsiveness.

• Developed real-time data ingestion solutions by integrating Azure Data Factory with Azure Event Hubs, improving synchronization accuracy and reducing latency.

• Implemented data integration and transformations within Azure SQL Server, ensuring high availability and performance for enterprise-level applications.

• Created shared workspaces in Fabric for collaborative development environments.

• Designed and developed end-to-end solutions for data ingestion, processing, and visualization using Fabric’s capabilities.

• Hands-on experience in unified development on Fabric (SaaS) using Data Factory, Data Science, Data Warehousing, Data Engineering, Data Activator, Power BI, Real-Time Analytics, and OneLake.

• Optimized complex T-SQL queries in Azure SQL Server for efficient data manipulation and reporting, enhancing data retrieval performance.

• Integrated Azure Key Vault to securely manage secrets, keys, and certificates within Azure Data Factory pipelines, significantly enhancing security and ensuring compliance with industry standards.

• Implemented advanced scheduling and automation using Scheduled Trigger, Tumbling Window Trigger, and Event-Based Trigger in Azure Data Factory to ensure timely ETL execution and data synchronization.

• Designed and developed scalable data models in Snowflake, utilizing its MPP architecture for high-performance data storage and querying.

• Designed and implemented data governance processes with a strong focus on Unity Catalog for access control, metadata management, and lineage tracking.

• Experienced in data modeling, building fact/dimension schemas, and implementing robust data quality checks to ensure high-trust datasets.

• Developed and maintained real-time data streaming pipelines using Apache Kafka, enabling efficient data flow for real-time analytics.

• Integrated Kafka with Azure Event Hubs for seamless data streaming between cloud and on-premises systems, enhancing data synchronization and reducing latency.

• Optimized data pipelines and Spark jobs in Azure Databricks using advanced techniques such as Spark configuration tuning, data caching, and partitioning, achieving superior performance.

• Automated data quality checks with PySpark in Azure Databricks, significantly reducing data errors and enhancing trustworthiness.
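
A small sketch of the kind of automated PySpark data-quality checks described above; the table name, rules, and 0.1% tolerance are assumptions.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.table("retail.sales_silver")  # illustrative table

    total = df.count()
    checks = {
        "null_order_id":   df.filter(F.col("order_id").isNull()).count(),
        "negative_amount": df.filter(F.col("amount") < 0).count(),
        "duplicate_keys":  total - df.dropDuplicates(["order_id"]).count(),
    }

    # Fail the job (and therefore the pipeline run) if any rule exceeds the tolerance.
    for rule, bad in checks.items():
        if total and bad / total > 0.001:
            raise ValueError(f"Data quality check failed: {rule} affected {bad} of {total} rows")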

• Leveraged PySpark for large-scale data processing tasks within Databricks, boosting performance and reducing execution times.

• Developed and optimized SQL queries for ETL processes across platforms, ensuring accuracy and efficiency in data operations.

• Managed and optimized data storage and processing using file formats like PARQUET, JSON, and CSV for compatibility and efficiency across systems.

• Developed and optimized SnowSQL queries for efficient ETL operations in Snowflake.

• Designed and implemented Snowflake Schema for optimized data warehousing, enabling efficient data storage and retrieval.

• Extensive experience in developing interactive dashboards and visualizations using Tableau, enabling data-driven decision-making.

• Developed Star Schema data models for OLAP systems, ensuring efficient query performance and simplified data navigation.

• Created interactive Power BI dashboards to visualize key business metrics, improving decision-making and enabling real-time insights.

• Integrated Power BI with Azure Synapse Analytics for real-time data visualization, reducing time-to-insight by 40%.

• Worked in an Agile environment, collaborating with cross-functional teams to deliver high-quality data solutions on time and within budget.

Environment: Azure Event Hubs, Vertex AI, BigQuery ML, Infogix, Azure CLI, VPC Interconnect, Azure Synapse, Snowflake, Azure Data Factory, Azure Data Lake Storage, Power BI, Azure Functions, Azure DevOps, Python, Azure Databricks, PySpark, Azure Machine Learning, Apache Airflow, Terraform.

Client: Change Healthcare, Nashville, TN. Nov 2017 – Feb 2019

Role: Hadoop Developer

Responsibilities:

• Engineered and implemented ETL jobs using Spark-Scala to facilitate data migration from Oracle to new MySQL tables, ensuring data integrity and improved performance.

• Architected and developed a Spark Streaming application, enabling real-time sales analytics for timely business insights.

• Extensively utilized Spark-Scala (RDDs, DataFrames, Spark SQL) and Spark-Cassandra-Connector APIs to accomplish various tasks, including data migration and business report generation.

• Expertise in Informatica PowerCenter and IICS (Informatica Intelligent Cloud Services) for developing and managing scalable ETL pipelines, ensuring seamless data integration, transformation, and loading across diverse platforms.

• Implemented and managed Big Data Management (BDM) solutions, optimizing large-scale data ingestion, transformation, and analysis processes.

• Experienced in using Informatica Data Quality (IDQ) to enforce data governance, cleansing, and validation, ensuring high-quality, accurate, and consistent data across systems.

• Designed and implemented ETL workflows leveraging Python and PySpark to extract data from external systems, apply business rule transformations, and load processed data into Hadoop.
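
A condensed sketch of the extract-transform-load flow described above: read from an external system, apply a business-rule transformation, and land the result in Hadoop; the JDBC connection, columns, and rule are illustrative.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("claims-etl").enableHiveSupport().getOrCreate()

    # Extract from the external system over JDBC (connection details are placeholders).
    claims = (spark.read.format("jdbc")
              .option("url", "jdbc:oracle:thin:@//src-host:1521/ORCL")
              .option("dbtable", "CLAIMS.DAILY_CLAIMS")
              .option("user", "etl").option("password", "***")
              .load())

    # Business-rule transformations: flag high-value claims and normalize status codes.
    transformed = (claims
                   .withColumn("high_value", F.col("CLAIM_AMOUNT") > 10000)
                   .withColumn("status", F.upper(F.trim("STATUS"))))

    # Load into HDFS-backed warehouse storage for downstream reporting.
    transformed.write.mode("append").format("parquet").saveAsTable("dw.daily_claims")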

• Orchestrated data extraction from diverse sources into HDFS using Sqoop, streamlining the data ingestion process.

• Managed end-to-end data processing, including data import from various sources, transformations using Hive and MapReduce, and efficient loading into HDFS.

• Developed and implemented Infrastructure as Code (IaC) practices using Terraform to automate the deployment of complex Azure environments, reducing provisioning time and minimizing human error.

• Implemented security controls in Terraform scripts to harden Azure environments, including role-based access control (RBAC) and network security groups (NSGs).

• Architected and deployed real-time data processing pipelines utilizing Apache Spark and Apache Kafka to support instantaneous data input and analytics.

• Implemented automated deployment processes using YAML scripts for large-scale builds and releases, enhancing DevOps practices.

• Demonstrated proficiency in a wide range of big data technologies, including Apache Hive, Apache Pig, HBase, Apache Spark, Zookeeper, Flume, Kafka, and Sqoop.

• Optimized MapReduce job performance through the strategic implementation of combiners, partitioning techniques, and distributed cache utilization.

• Maintained version control of source code using Git and GitHub repositories, ensuring collaborative and traceable development practices.

Client: Dhruv soft Services Private Limited, Hyderabad, India. July 2013 – Sept 2016

Role: ETL Data Engineer

Responsibilities:

• Designed and developed various SSIS packages (ETL) to extract and transform data and involved in Scheduling SSIS Packages.

• Worked extensively on system analysis, design, development, testing, and implementation of projects (Complete SDLC)

• Performed SSIS development and support, developing ETL solutions for integrating data from multiple sources such as flat files (delimited, fixed width), Excel, SQL Server, Raw File, and Teradata into the OLTP database.

• Identify and resolve problems encountered during both the development and release of the SSIS code.

• Debugging and troubleshooting the technical issues while implementing the applications.

• Created Complex ETL Packages using SSIS to extract data from staging tables to partitioned tables with incremental load.

• Created packages in SSIS with error handling and worked with...


